Sobol Sensitivity Analysis

For my own understanding


Basics

Let’s say that we want to analyze how the output $Y$ of the function below varies with respect to the input parameters $X_1, \ldots, X_k$:

$$Y = f(X_1, X_2, \ldots, X_k)$$

How do we separate out the contribution of each of the parameters?
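To make this concrete, a standard test case from the sensitivity-analysis literature (my choice of example, not something the derivation below depends on) is the Ishigami function:

$$f(X_1, X_2, X_3) = \sin X_1 + a \sin^2 X_2 + b\, X_3^4 \sin X_1, \qquad X_i \sim U(-\pi, \pi)$$

With the usual constants $a = 7$ and $b = 0.1$, the parameter $X_3$ influences the output only through its interaction with $X_1$, which is exactly the kind of structure the methods below are designed to detect.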

The One-at-a-Time Approach

The One-at-a-Time (OAT) approach is the traditional sensitivity analysis approach, where one input variable is moved while keeping the others at their baseline values. The sensitivity of the model output to a change in the input is commonly represented as

$$S_i = \left. \frac{\partial Y}{\partial X_i} \right|_{\mathbf{x}^0}$$

where $\mathbf{x}^0$ represents a baseline point (a minimal code sketch of this recipe follows the list below). However, the OAT approach has several limitations:

  1. OAT does not fully explore the input space, and the unexplored share of that space grows rapidly with the number of inputs
  2. OAT does not consider parameter interactions
  3. OAT does not handle nonlinearity
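To make the OAT recipe above concrete, here is a minimal sketch in Rust; the model, baseline, and step size are hypothetical placeholders:

```rust
// One-at-a-time (OAT) sensitivity: nudge each input away from a baseline
// and record the change in output, holding all other inputs fixed.
fn f(x: &[f64]) -> f64 {
    // toy nonlinear model with an interaction term
    x[0] + x[1].powi(2) + x[0] * x[2]
}

fn main() {
    let baseline = [0.5, 0.5, 0.5];
    let dx = 0.01;
    for i in 0..baseline.len() {
        let mut x = baseline;
        x[i] += dx;
        // forward finite-difference approximation of dY/dX_i at the baseline
        let s_i = (f(&x) - f(&baseline)) / dx;
        println!("OAT sensitivity of X_{}: {:.3}", i + 1, s_i);
    }
}
```

OAT measures each derivative at a single baseline; it cannot reveal that the sensitivity of $X_1$ here itself depends on $X_3$, which is limitation 2 in action.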

There are, however, modifications to the OAT approach that extend it to be global, such as the Morris elementary effects method, but they still face limitations that have been discussed in the literature.

Variance Based Approach (Sobol)

Sobol’s Sensitivity Analysis is a form of variance-based sensitivity analysis that ranks parameters based on the variance they contribute to the output. Here we are referring to variance as the expected value of the squared deviation from the mean. For a random variable $X$, it’s commonly written as:

$$V(X) = E\left[ (X - E[X])^2 \right]$$

where $E[X]$ is the expected value of $X$. Expanding the square results in the following equivalent expression

$$V(X) = E[X^2] - E[X]^2$$

which will be critical to understanding Sobol’s method.
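A quick numeric check of this identity, in Rust since that is where this post ends up (the sample values are arbitrary):

```rust
// Check that E[(X - E[X])^2] equals E[X^2] - E[X]^2 on a small sample.
fn main() {
    let xs = [1.0_f64, 2.0, 4.0, 7.0, 11.0];
    let n = xs.len() as f64;
    let mean = xs.iter().sum::<f64>() / n;
    let v_def = xs.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n;
    let v_alt = xs.iter().map(|x| x * x).sum::<f64>() / n - mean * mean;
    // both print 13.2
    println!("definition: {v_def}, expansion: {v_alt}");
}
```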

To understand Sobol’s method, we have to first start with functional decomposition or ANOVA decomposition, which states that the function can be represented as:

$$f(X) = f_0 + \sum_{s=1}^{k} \sum_{i_1 < \cdots < i_s} f_{i_1 \ldots i_s}(X_{i_1}, \ldots, X_{i_s})$$

Written more explicitly, it is:

$$f(X) = f_0 + \sum_{i} f_i(X_i) + \sum_{i} \sum_{j>i} f_{ij}(X_i, X_j) + \cdots + f_{12 \ldots k}(X_1, X_2, \ldots, X_k)$$

The number of terms in total is $2^k$, including $f_0$. It is at this point that we make a couple of assumptions:

  1. Each term is square integrable
  2. The functions are orthogonal, or satisfy the unicity condition:
    1. $\int_0^1 f_{i_1 \ldots i_s}(X_{i_1}, \ldots, X_{i_s})\, dX_{i_w} = 0$ for $1 \le w \le s$
    2. We can describe this as the ANOVA-decomposition terms having zero mean

With the assumptions in mind, the functions are represented as:

$$f_0 = E(Y)$$

$$f_i(X_i) = E_{X_{\sim i}}(Y \mid X_i) - f_0$$

$$f_{ij}(X_i, X_j) = E_{X_{\sim ij}}(Y \mid X_i, X_j) - f_i(X_i) - f_j(X_j) - f_0$$

and so on for the higher-order terms.

**Note:** I removed $p(x)$, the probability density function of $X$, from the equations below. This is allowed in this context because we are assuming independent inputs sampled uniformly on the unit interval, so the density is constant.
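As a quick worked example of these formulas (my own, with independent inputs uniform on $[0,1]^2$), take $Y = X_1 + X_2 + X_1 X_2$. Computing the conditional expectations gives:

$$f_0 = \frac{5}{4}, \qquad f_1(X_1) = \frac{3}{2}X_1 - \frac{3}{4}, \qquad f_2(X_2) = \frac{3}{2}X_2 - \frac{3}{4}, \qquad f_{12}(X_1, X_2) = \left(X_1 - \frac{1}{2}\right)\left(X_2 - \frac{1}{2}\right)$$

Each non-constant term integrates to zero over any of its own variables, as the assumptions above require.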

Variance can similarly be decomposed as:

$$V(Y) = \sum_{i} V_i + \sum_{i} \sum_{j>i} V_{ij} + \cdots + V_{12 \ldots k}$$

where $V(Y)$ represents the total output variance.
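Continuing the worked example, the decomposition can be checked by hand:

$$V_1 = V_2 = \frac{9}{4} \cdot \frac{1}{12} = \frac{3}{16}, \qquad V_{12} = \frac{1}{12} \cdot \frac{1}{12} = \frac{1}{144}$$

$$V(Y) = \frac{3}{16} + \frac{3}{16} + \frac{1}{144} = \frac{55}{144}$$

which matches the variance of $Y = X_1 + X_2 + X_1 X_2$ computed directly.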

First-Order Effects

$V_i$ represents the output variance that is attributed to parameter $X_i$ alone, and $V_{ij}$ is the variance caused by the interactions of parameters $X_i$ and $X_j$:

$$V_i = V_{X_i}\left(E_{X_{\sim i}}(Y \mid X_i)\right)$$

$$V_{ij} = V_{X_i X_j}\left(E_{X_{\sim ij}}(Y \mid X_i, X_j)\right) - V_i - V_j$$

Re-arranging the above equations, we also have:

$$V(Y) = V_{X_i}\left(E_{X_{\sim i}}(Y \mid X_i)\right) + E_{X_i}\left(V_{X_{\sim i}}(Y \mid X_i)\right)$$

which shows that the total variance in $Y$ is equivalent to the variance in the expected value of $Y$ given $X_i$ plus the expected variance in $Y$ given $X_i$ (the law of total variance). Using the definition of variance as $V(X) = E[X^2] - E[X]^2$, the equation for $V_{X_i}\left(E_{X_{\sim i}}(Y \mid X_i)\right)$ above can be re-written in integral form as:

$$V_{X_i}\left(E_{X_{\sim i}}(Y \mid X_i)\right) = \int E_{X_{\sim i}}(Y \mid X_i)^2\, dX_i - \left( \int E_{X_{\sim i}}(Y \mid X_i)\, dX_i \right)^2$$

Re-writing the equation for $E_{X_{\sim i}}(Y \mid X_i)$ at one point $x_i^*$, we have the following:

$$E_{X_{\sim i}}(Y \mid X_i = x_i^*) = \int f(X_{\sim i}, x_i^*)\, dX_{\sim i}$$

For a global sensitivity analysis, we don’t want the variance at one point; instead we want to attribute the variance caused by $X_i$ over all values of $x_i^*$. So, we add an additional integral over $X_i$. This turns the above into an equation for the expected value, over all $X_i$, of the squared conditional mean:

$$E_{X_i}\left[ E_{X_{\sim i}}(Y \mid X_i)^2 \right] = \int \left( \int f(X_{\sim i}, x_i)\, dX_{\sim i} \right)^2 dx_i$$

Using the equality from above, we see that this quantity is the same as the first term in the integral form of $V_{X_i}\left(E_{X_{\sim i}}(Y \mid X_i)\right)$. Subtracting the squared mean $E(Y)^2$ from the above, we are left with

$$V_{X_i}\left(E_{X_{\sim i}}(Y \mid X_i)\right) = \int \left( \int f(X)\, dX_{\sim i} \right)^2 dX_i - E(Y)^2$$

The left side of this equation is equivalent to $V_i$ through the equation that we explored above. So, we have simplified the equation to only the square of the expected outcome of the model, which is quite easy to compute, and a double integral. Using the shortened notation $f_0 = E(Y)$, the equation becomes

$$V_i = \int \left( \int f(X)\, dX_{\sim i} \right)^2 dX_i - f_0^2$$

Remember that $f_0^2$ is the same as $\left( \int f(X)\, dX \right)^2$. We re-introduce the notation $X_{\sim i}$, which means all columns of $X$ except for the $i$-th. Squaring the inner integral is equivalent to multiplying two copies of it that share the same $X_i$ but integrate over independent copies of the remaining inputs. With this notation, the first part of the above equation becomes:

$$\int \int f(X_{\sim i}, X_i)\, f(X'_{\sim i}, X_i)\, dX'_{\sim i}\, dX$$

The interpretation is that there are two loops: one to compute the outer integral over $X$, and a second to compute the inner integral over $X'_{\sim i}$. It’s here that a second matrix is introduced. This matrix is an independent sample of the input space and replaces the inner for loop that would need to occur for the double integral (Sobol has a proof of the validity of this method).

If you look closely at the integrand, you will also notice that $X_i$ is the same in both evaluations of $f$. This is formalized using three matrices: $A$, $B$, and $A_B^{(i)}$. The construction of the matrices is a simple process. Two independent samples of the input, $A$ and $B$, are made with dimensions $(N, k)$. Then, a third matrix, $A_B^{(i)}$, is composed by taking $A$ and replacing the $i$-th column with the corresponding column of $B$. The result, stacking over $i$, is a 3-dimensional matrix of shape $(k, N, k)$.
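A sketch of this construction in Rust; a plain pseudo-random generator stands in for the quasi-random sequences typically used in practice, and all names are mine:

```rust
// Build the Sobol sample matrices: A and B are independent N-by-k samples
// of the unit hypercube, and AB[i] is A with its i-th column taken from B.
struct Lcg(u64);
impl Lcg {
    fn next_f64(&mut self) -> f64 {
        self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

fn sample_matrix(n: usize, k: usize, rng: &mut Lcg) -> Vec<Vec<f64>> {
    (0..n).map(|_| (0..k).map(|_| rng.next_f64()).collect()).collect()
}

fn sobol_matrices(n: usize, k: usize) -> (Vec<Vec<f64>>, Vec<Vec<f64>>, Vec<Vec<Vec<f64>>>) {
    let mut rng = Lcg(42);
    let a = sample_matrix(n, k, &mut rng);
    let b = sample_matrix(n, k, &mut rng);
    let ab: Vec<Vec<Vec<f64>>> = (0..k)
        .map(|i| {
            let mut m = a.clone();
            for (row, b_row) in m.iter_mut().zip(&b) {
                row[i] = b_row[i]; // replace column i of A with column i of B
            }
            m
        })
        .collect();
    (a, b, ab)
}

fn main() {
    let (a, _b, ab) = sobol_matrices(1000, 3);
    println!("A is {}x{}, with {} A_B^(i) matrices", a.len(), a[0].len(), ab.len());
}
```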

From this representation, it is easy to derive the number of simulations needed for the sensitivity analysis. We simply “stack” the matrices to arrive at $N(k+2)$ total model evaluations. Using the sample matrices, the above equation is greatly simplified in the Monte Carlo framework. Only a single loop over the $N$ samples remains, with the equation written as:

$$V_{X_i}\left(E_{X_{\sim i}}(Y \mid X_i)\right) \approx \frac{1}{N} \sum_{j=1}^{N} f(B)_j \left( f\left(A_B^{(i)}\right)_j - f(A)_j \right)$$
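Given precomputed model outputs over the three matrices, that estimator is a single pass; a minimal sketch, with names of my own choosing:

```rust
/// First-order Sobol index S_i from model outputs over the sample matrices:
/// f_a = f(A), f_b = f(B), f_ab = f(A_B^(i)), each of length N.
fn first_order(f_a: &[f64], f_b: &[f64], f_ab: &[f64], var_y: f64) -> f64 {
    let n = f_a.len() as f64;
    // V_i ≈ (1/N) Σ_j f(B)_j ( f(A_B^(i))_j − f(A)_j )
    let v_i = f_b
        .iter()
        .zip(f_ab)
        .zip(f_a)
        .map(|((fb, fab), fa)| fb * (fab - fa))
        .sum::<f64>()
        / n;
    v_i / var_y // S_i = V_i / V(Y)
}
```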

Higher-Order Effects

So far we have been focused on $V_i$ (really the normalized index $S_i = V_i / V(Y)$), which is also referred to as the *first-order variance*. It is the variance that $X_i$ alone causes in the output. Yet, the power of variance-based or global sensitivity methods is that they can tell us about the interactions between parameters. Unless the model of interest is purely additive, there will be interaction effects, and it’s likely that we care about them. These higher-order effects are usually summarized by two metrics in the literature: one investigating the total variance that $X_i$ causes, and another investigating the variance due to two parameters, $X_i$ and $X_j$.

The total variance is a rather simple modification to the first-order equations. Basically, instead of computing the variance induced by a parameter $X_i$, we compute the first-order variance induced by everything except $X_i$ (that is, by $X_{\sim i}$) and subtract it from the total. It’s written as

$$S_{T_i} = \frac{E_{X_{\sim i}}\left(V_{X_i}(Y \mid X_{\sim i})\right)}{V(Y)} = 1 - \frac{V_{X_{\sim i}}\left(E_{X_i}(Y \mid X_{\sim i})\right)}{V(Y)}$$

The integral form is

$$E_{X_{\sim i}}\left(V_{X_i}(Y \mid X_{\sim i})\right) = V(Y) - \int \left( \int f(X)\, dX_i \right)^2 dX_{\sim i} + f_0^2$$

and the Monte Carlo version is

$$E_{X_{\sim i}}\left(V_{X_i}(Y \mid X_{\sim i})\right) \approx \frac{1}{2N} \sum_{j=1}^{N} \left( f(A)_j - f\left(A_B^{(i)}\right)_j \right)^2$$

As compared to the first-order estimator, this one uses $f(A)$ and $f(A_B^{(i)})$, whose underlying matrices have all columns in common except for the $i$-th. Remaining are the second-order sensitivity indices:

$$S_{ij} = \frac{V_{ij}}{V(Y)}, \qquad V_{ij} = V_{X_i X_j}\left(E_{X_{\sim ij}}(Y \mid X_i, X_j)\right) - V_i - V_j$$

Re-Implementing in Rust
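What follows is a minimal end-to-end sketch of the method as derived above, not a production implementation: sample $A$ and $B$, form each $A_B^{(i)}$ on the fly, and estimate $S_i$ (the first-order estimator from above) and $S_{T_i}$ (the Jansen-style total-effect estimator) for the Ishigami function.

```rust
use std::f64::consts::PI;

// Small pseudo-random generator so the sketch has no dependencies; real
// implementations typically use quasi-random (Sobol) sequences instead.
struct Lcg(u64);
impl Lcg {
    fn next_f64(&mut self) -> f64 {
        self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

// Ishigami test function; unit-hypercube samples are mapped to [-pi, pi].
fn model(u: &[f64]) -> f64 {
    let x: Vec<f64> = u.iter().map(|v| -PI + 2.0 * PI * v).collect();
    let (a, b) = (7.0, 0.1);
    x[0].sin() + a * x[1].sin().powi(2) + b * x[2].powi(4) * x[0].sin()
}

fn main() {
    let (n, k) = (100_000, 3);
    let mut rng = Lcg(7);
    let mut sample = |rows: usize| -> Vec<Vec<f64>> {
        (0..rows).map(|_| (0..k).map(|_| rng.next_f64()).collect()).collect()
    };
    let a = sample(n);
    let b = sample(n);
    let f_a: Vec<f64> = a.iter().map(|r| model(r)).collect();
    let f_b: Vec<f64> = b.iter().map(|r| model(r)).collect();

    // total output variance V(Y), pooled over the A and B runs
    let all: Vec<f64> = f_a.iter().chain(&f_b).copied().collect();
    let mean = all.iter().sum::<f64>() / all.len() as f64;
    let var_y = all.iter().map(|y| (y - mean).powi(2)).sum::<f64>() / all.len() as f64;

    for i in 0..k {
        // A_B^(i): A with its i-th column replaced by B's i-th column
        let f_ab: Vec<f64> = a
            .iter()
            .zip(&b)
            .map(|(ar, br)| {
                let mut row = ar.clone();
                row[i] = br[i];
                model(&row)
            })
            .collect();

        // first-order: V_i ≈ (1/N) Σ f(B)_j (f(A_B^(i))_j − f(A)_j)
        let v_i = f_b.iter().zip(&f_ab).zip(&f_a)
            .map(|((fb, fab), fa)| fb * (fab - fa))
            .sum::<f64>() / n as f64;
        // total effect: E(V(Y|X_~i)) ≈ (1/2N) Σ (f(A)_j − f(A_B^(i))_j)^2
        let ev = f_a.iter().zip(&f_ab)
            .map(|(fa, fab)| (fa - fab).powi(2))
            .sum::<f64>() / (2.0 * n as f64);

        println!("X_{}: S = {:.3}, S_T = {:.3}", i + 1, v_i / var_y, ev / var_y);
    }
}
```

The total of $N(k+2) = 500{,}000$ model runs matches the count derived earlier, and $X_3$ should come out with a first-order index near zero but a clearly nonzero total index, since it acts only through its interaction with $X_1$.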

References

  1. Saltelli, A., et al. (2010). Variance based sensitivity analysis of model output. Design and estimator for the total sensitivity index. Computer Physics Communications. https://doi.org/10.1016/j.cpc.2009.09.018