Sensitivity Analysis#

Pystra can compute the sensitivity of the FORM reliability index \(\beta\) with respect to the distribution parameters (mean and standard deviation) of each random variable, and with respect to the correlation coefficients.

Two methods are available, selected via the numerical argument of SensitivityAnalysis.run():

Method                  Call                   Cost                  Correlation sens.
Finite difference (FD)  run(numerical=True)    \(2n+1\) FORM runs    No
Closed-form (CF)        run(numerical=False)   1 FORM run            Yes

The closed-form approach follows Bourinet (2017), differentiating the Nataf transformation chain analytically. The key equation (Eq. 17) decomposes the sensitivity into two terms:

\[\frac{\partial\beta}{\partial\theta_k} = \underbrace{\boldsymbol{\alpha}^T \mathbf{L}_0^{-1} \frac{\partial\mathbf{z}}{\partial\theta_k}}_{\text{first term}} + \underbrace{\boldsymbol{\alpha}^T \frac{\partial\mathbf{L}_0^{-1}}{\partial\theta_k} \mathbf{z}}_{\text{second term}}\]

where \(\boldsymbol{\alpha}\) is the FORM direction cosine vector, \(\mathbf{L}_0\) is the Cholesky factor of the modified (Nataf) correlation matrix, and \(\mathbf{z}\) is the correlated standard-normal design point.

The two terms have distinct physical meanings:

  • First term — captures how the marginal CDF transformation shifts at the design point when a distribution parameter changes. For example, increasing the mean of a variable moves where that variable’s CDF maps into standard-normal space, pulling the design point closer to or further from the origin. This term is present even for independent variables.

  • Second term — captures how the Nataf correlation structure changes. The modified correlation matrix \(\mathbf{R}_0\) depends (through double integration) on the marginal distributions, so perturbing a marginal parameter alters \(\mathbf{L}_0\) and hence the isoprobabilistic transformation. For uncorrelated normal variables \(\mathbf{R}_0 = \mathbf{I}\) regardless of the marginals, so this term vanishes identically. For correlated non-normal variables it can be the dominant contribution — and it is also the term that makes the finite-difference method numerically unstable, as demonstrated in the examples below.
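To make the first term concrete: for two independent normal variables with \(g = R - S\), \(\mathbf{L}_0 = \mathbf{I}\) and the exact result \(\beta = (\mu_R - \mu_S)/\sqrt{\sigma_R^2 + \sigma_S^2}\) gives \(\partial\beta/\partial\mu_R = 1/\sqrt{\sigma_R^2 + \sigma_S^2}\). The following minimal sketch (not pystra internals; all names here are illustrative) evaluates the first term of Eq. 17 directly and recovers that value:

```python
import numpy as np

# Independent normals: R ~ N(mu_R, sig_R), S ~ N(mu_S, sig_S), g = R - S.
mu_R, sig_R, mu_S, sig_S = 5.0, 1.0, 1.0, 1.0
denom = np.hypot(sig_R, sig_S)

# FORM alpha for a linear LSF: alpha = -grad_u g / |grad_u g|,
# with grad_u g = (sig_R, -sig_S) in standard-normal space.
alpha = np.array([-sig_R, sig_S]) / denom

# First term of Eq. 17: L0 = I here, and z_i = (x_i* - mu_i)/sig_i,
# so dz_R/dmu_R = -1/sig_R at the (fixed) design point.
dz_dmuR = np.array([-1.0 / sig_R, 0.0])
dbeta_dmuR = alpha @ dz_dmuR

print(dbeta_dmuR)  # 1/sqrt(sig_R^2 + sig_S^2) = 0.7071...
```

Because \(\mathbf{L}_0 = \mathbf{I}\) does not depend on the marginals here, the second term contributes nothing, and the first term alone reproduces the analytical derivative.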

[1]:
import numpy as np
import pandas as pd
import pystra as pr

Correlated Lognormals (Bourinet 2017, §4.2, Example 2)#

This example has two lognormally distributed random variables, \(R = X_1\) and \(S = X_2\), with linear correlation \(\rho = 0.5\). Both have a coefficient of variation of 1:

Variable  Distribution  Mean \(\mu\)  Std. dev. \(\sigma\)  CoV
\(R\)     Lognormal     5             5                     1
\(S\)     Lognormal     1             1                     1

The limit state function is \(g(R, S) = R - S\).

Because both variables are lognormal, the failure event \(R - S \le 0\) is equivalent to \(\ln R - \ln S \le 0\), which is linear in standard-normal space, so this problem has an exact analytical solution: \(\beta = 2.1218\) (Bourinet 2017, Eq. 30).
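The exact value can be reproduced without pystra: \(R - S \ge 0\) iff \(\ln R - \ln S \ge 0\), and \(\ln R - \ln S\) is normal once the correlation of the underlying normals, \(\rho_{\ln} = \ln(1 + \rho\,\delta_R\delta_S)/(\zeta_R\zeta_S)\), is known. A quick standalone check:

```python
import numpy as np

mu_R, sig_R, mu_S, sig_S, rho = 5.0, 5.0, 1.0, 1.0, 0.5
d_R, d_S = sig_R / mu_R, sig_S / mu_S          # CoVs, both 1

# Lognormal parameters: zeta^2 = ln(1 + delta^2), lam = ln(mu) - zeta^2/2
zeta_R = np.sqrt(np.log(1 + d_R**2))
zeta_S = np.sqrt(np.log(1 + d_S**2))
lam_R = np.log(mu_R) - 0.5 * zeta_R**2
lam_S = np.log(mu_S) - 0.5 * zeta_S**2

# Correlation of the underlying normals (exact for lognormal marginals)
rho_ln = np.log(1 + rho * d_R * d_S) / (zeta_R * zeta_S)

# g = R - S >= 0  iff  ln R - ln S >= 0, which is a normal variate
beta = (lam_R - lam_S) / np.sqrt(zeta_R**2 + zeta_S**2
                                 - 2 * rho_ln * zeta_R * zeta_S)
print(round(beta, 4))  # 2.1218
```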

Problem setup#

[2]:
def lsf(R, S):
    return R - S
[3]:
limit_state = pr.LimitState(lsf)

model = pr.StochasticModel()
model.addVariable(pr.Lognormal("R", 5, 5))
model.addVariable(pr.Lognormal("S", 1, 1))
model.setCorrelation(pr.CorrelationMatrix([[1.0, 0.5], [0.5, 1.0]]))
[4]:
options = pr.AnalysisOptions()
options.setPrintOutput(False)

Baseline FORM result#

Run FORM first to see the baseline reliability index and design point.

[5]:
form = pr.Form(
    stochastic_model=model,
    limit_state=limit_state,
    analysis_options=options,
)
form.run()
form.showDetailedOutput()

==========================================================
FORM
==========================================================
Pf                       1.6927788976e-02
BetaHL                   2.1217876054
Model Evaluations        66
----------------------------------------------------------
Variable            U_star             X_star        alpha
R                -0.966683           1.580985    -0.455543
S                 1.888784           1.580984    +0.890214
==========================================================

Closed-form sensitivities#

The closed-form method (numerical=False) post-processes the converged FORM design point. It requires only a single FORM run and also returns correlation sensitivities.

[6]:
sa = pr.SensitivityAnalysis(
    stochastic_model=model,
    limit_state=limit_state,
    analysis_options=options,
)
cf = sa.run(numerical=False)

Finite-difference sensitivities#

The finite-difference method (numerical=True, default) perturbs each parameter by a small fraction of the standard deviation and re-runs FORM. It does not compute correlation sensitivities.
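The scheme can be illustrated on a case where \(\beta\) is available in closed form (independent normals, \(g = R - S\)). Below is a sketch of a forward difference with step \(\delta\sigma\); the exact step convention is an assumption for illustration, not necessarily pystra's implementation:

```python
import numpy as np

def beta_exact(mu_R, sig_R, mu_S, sig_S):
    """Closed-form beta for g = R - S with independent normals."""
    return (mu_R - mu_S) / np.hypot(sig_R, sig_S)

# Forward difference in mu_R with step delta * sig_R: one extra "FORM run"
# per perturbed parameter, hence 2n + 1 runs in total for n variables.
delta = 0.01
mu_R, sig_R, mu_S, sig_S = 5.0, 1.0, 1.0, 1.0
b0 = beta_exact(mu_R, sig_R, mu_S, sig_S)
b1 = beta_exact(mu_R + delta * sig_R, sig_R, mu_S, sig_S)
dbeta_dmuR = (b1 - b0) / (delta * sig_R)

print(dbeta_dmuR)  # ~ 1/sqrt(2) = 0.7071...
```

Here \(\beta\) is linear in \(\mu_R\), so the difference quotient is exact for any step; the instability discussed below arises only when the perturbation also shifts the Nataf transformation.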

[7]:
fd = sa.run(numerical=True)

Comparison: closed-form vs. finite difference#

The analytical reference values are from Table 4 of Bourinet (2017).

[8]:
ref = {
    "R": {"mean": 0.5184, "std": -0.2548},
    "S": {"mean": -1.3629, "std": 0.0445},
}

rows = []
for var in ["R", "S"]:
    for p in ["mean", "std"]:
        sym = "\u03bc" if p == "mean" else "\u03c3"
        rows.append({
            "Parameter": f"\u2202\u03b2/\u2202{sym}_{var}",
            "Reference": ref[var][p],
            "CF": cf["marginal"][var][p],
            "FD": fd[var][p],
        })

df = pd.DataFrame(rows).set_index("Parameter")
df.style.format("{:+.4f}")
[8]:
  Reference CF FD
Parameter      
∂β/∂μ_R +0.5184 +0.5184 +0.5156
∂β/∂σ_R -0.2548 -0.2548 -0.2546
∂β/∂μ_S -1.3629 -1.3630 -1.3623
∂β/∂σ_S +0.0445 +0.0447 +0.0417
[9]:
pd.DataFrame(
    [{"Parameter": "\u2202\u03b2/\u2202\u03c1",
      "Reference": 2.4585,
      "CF": cf["correlation"][1, 0]}]
).set_index("Parameter").style.format("{:+.4f}")
[9]:
  Reference CF
Parameter    
∂β/∂ρ +2.4585 +2.4586

Practical note: FD instability for correlated non-normal variables#

The finite-difference method can be numerically unstable for correlated non-normal variables, especially when a sensitivity is small in magnitude. In this example, \(\partial\beta/\partial\sigma_S \approx 0.045\) is the smallest sensitivity, and the FD estimate is the least reliable.

The root cause is that perturbing a marginal parameter changes the Nataf modified correlation matrix \(\mathbf{R}_0\), so the entire isoprobabilistic transformation shifts. FORM then re-converges to a slightly different design point, introducing noise.
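For two lognormals the Nataf modified correlation has a closed form, \(\rho_0 = \ln(1 + \rho\,\delta_1\delta_2)/(\zeta_1\zeta_2)\) with \(\zeta_i^2 = \ln(1+\delta_i^2)\), which makes the effect easy to see (a standalone check, not pystra code):

```python
import numpy as np

def rho0_lognormal(rho, d1, d2):
    """Nataf modified correlation for two lognormal marginals (closed form)."""
    z1 = np.sqrt(np.log(1.0 + d1**2))
    z2 = np.sqrt(np.log(1.0 + d2**2))
    return np.log(1.0 + rho * d1 * d2) / (z1 * z2)

# Perturbing sigma_S (here via its CoV d2) moves R0, hence L0 and the
# whole isoprobabilistic transformation -- the shift FD must difference through.
for d2 in (1.0, 1.01):
    print(f"d2 = {d2}: rho0 = {rho0_lognormal(0.5, 1.0, d2):.6f}")
```

Even a 1% change in the CoV shifts \(\rho_0\), so every FD perturbation re-solves FORM under a slightly different transformation.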

To illustrate, we run the FD method with several step sizes:

[10]:
rows = []
for delta in [0.1, 0.01, 0.001, 0.0001]:
    fd_trial = sa.run(numerical=True, delta=delta)
    rows.append({"Method": f"FD (\u03b4={delta})",
                 "\u2202\u03b2/\u2202\u03c3_S": fd_trial["S"]["std"]})
rows.append({"Method": "CF",
             "\u2202\u03b2/\u2202\u03c3_S": cf["marginal"]["S"]["std"]})
rows.append({"Method": "Reference",
             "\u2202\u03b2/\u2202\u03c3_S": 0.0445})

pd.DataFrame(rows).set_index("Method").style.format("{:+.4f}")
[10]:
  ∂β/∂σ_S
Method  
FD (δ=0.1) +0.0342
FD (δ=0.01) +0.0417
FD (δ=0.001) +0.1433
FD (δ=0.0001) -0.1061
CF +0.0447
Reference +0.0445

The FD results oscillate as the step size changes, while the closed-form result matches the analytical reference to four significant figures. This demonstrates the advantage of the closed-form approach for problems with correlated non-normal variables.

For uncorrelated normal variables, both methods agree closely because \(\mathbf{R}_0 = \mathbf{I}\) regardless of marginal parameters (the second term of Eq. 17 vanishes identically).


CalRel Example (Bourinet 2017, §4.1, Example 1)#

This example, taken from the CalRel software manual (Liu et al., 1989), has three correlated random variables and a nonlinear limit state function:

Variable   Distribution  Mean \(\mu\)  Std. dev. \(\sigma\)
\(X_1\)    Lognormal     500           100
\(X_2\)    Lognormal     2000          400
\(X_3\)    Uniform       5             0.5

The correlation matrix is:

\[\begin{split}\mathbf{R} = \begin{pmatrix} 1 & 0.3 & 0.2 \\ 0.3 & 1 & 0.2 \\ 0.2 & 0.2 & 1 \end{pmatrix}\end{split}\]

The limit state function is:

\[g(x_1, x_2, x_3) = 1 - \frac{x_2}{1000 \, x_3} - \left(\frac{x_1}{200 \, x_3}\right)^2\]

FORM gives \(\beta = 1.7728\) and \(p_f^{\text{FORM}} = 3.81 \times 10^{-2}\).

[11]:
def lsf_calrel(X1, X2, X3):
    return 1 - X2 / (1000 * X3) - (X1 / (200 * X3)) ** 2


ls1 = pr.LimitState(lsf_calrel)

model1 = pr.StochasticModel()
model1.addVariable(pr.Lognormal("X1", 500, 100))
model1.addVariable(pr.Lognormal("X2", 2000, 400))
model1.addVariable(pr.Uniform("X3", 5, 0.5))

R1 = np.array([
    [1.0, 0.3, 0.2],
    [0.3, 1.0, 0.2],
    [0.2, 0.2, 1.0],
])
model1.setCorrelation(pr.CorrelationMatrix(R1))
[12]:
opts1 = pr.AnalysisOptions()
opts1.setPrintOutput(False)

form1 = pr.Form(
    stochastic_model=model1,
    limit_state=ls1,
    analysis_options=opts1,
)
form1.run()
form1.showDetailedOutput()

==========================================================
FORM
==========================================================
Pf                       3.8133894361e-02
BetaHL                   1.7727641950
Model Evaluations        154
----------------------------------------------------------
Variable            U_star             X_star        alpha
X1                1.281626         631.952172    +0.723158
X2                0.481529        2320.035440    +0.271741
X3               -1.126170           4.525984    -0.634980
==========================================================

Closed-form sensitivities#

[13]:
sa1 = pr.SensitivityAnalysis(
    stochastic_model=model1,
    limit_state=ls1,
    analysis_options=opts1,
)
cf1 = sa1.run(numerical=False)
[14]:
# Reference values from Bourinet (2017), Table 2
ref1 = {
    "X1": {"mean": -0.0059, "std": -0.0079},
    "X2": {"mean": -0.0009, "std": -0.0006},
    "X3": {"mean":  1.2602, "std": -1.1942},
}

rows = []
for var in ["X1", "X2", "X3"]:
    for p in ["mean", "std"]:
        sym = "\u03bc" if p == "mean" else "\u03c3"
        rows.append({
            "Parameter": f"\u2202\u03b2/\u2202{sym}_{var}",
            "Reference": ref1[var][p],
            "CF": cf1["marginal"][var][p],
        })

pd.DataFrame(rows).set_index("Parameter").style.format("{:+.4f}")
[14]:
  Reference CF
Parameter    
∂β/∂μ_X1 -0.0059 -0.0059
∂β/∂σ_X1 -0.0079 -0.0079
∂β/∂μ_X2 -0.0009 -0.0009
∂β/∂σ_X2 -0.0006 -0.0006
∂β/∂μ_X3 +1.2602 +1.2603
∂β/∂σ_X3 -1.1942 -1.1948
[15]:
# Reference values from Bourinet (2017), Table 3
ref_corr = {
    (1, 0): -0.5151,
    (2, 0):  0.8916,
    (2, 1):  0.4688,
}

names1 = ["X1", "X2", "X3"]
rows = []
for (i, j), r in ref_corr.items():
    rows.append({
        "Parameter": f"\u2202\u03b2/\u2202\u03c1({names1[i]},{names1[j]})",
        "Reference": r,
        "CF": cf1["correlation"][i, j],
    })

pd.DataFrame(rows).set_index("Parameter").style.format("{:+.4f}")
[15]:
  Reference CF
Parameter    
∂β/∂ρ(X2,X1) -0.5151 -0.5150
∂β/∂ρ(X3,X1) +0.8916 +0.8914
∂β/∂ρ(X3,X2) +0.4688 +0.4687

This example includes a Uniform distribution (\(X_3\)), for which dF_dtheta is evaluated numerically via the base-class finite-difference fallback. The closed-form method handles this seamlessly — no analytical CDF derivative is needed for the Uniform distribution.
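A fallback of this kind can be sketched as a central difference of the CDF with respect to the parameter; the mean/std parameterisation and step size below are assumptions for illustration, not pystra's exact code:

```python
import numpy as np

def uniform_cdf(x, mean, std):
    """CDF of a Uniform parameterised by mean and std: a, b = mean -+ sqrt(3)*std."""
    a, b = mean - np.sqrt(3) * std, mean + np.sqrt(3) * std
    return np.clip((x - a) / (b - a), 0.0, 1.0)

def dF_dtheta_numerical(cdf, x, theta, h=1e-6):
    """Central-difference derivative of the CDF w.r.t. one parameter."""
    return (cdf(x, theta + h) - cdf(x, theta - h)) / (2 * h)

# dF/d(mean) for Uniform(mean=5, std=0.5) at x = 4.526 (near the design point)
x = 4.526
num = dF_dtheta_numerical(lambda x_, m: uniform_cdf(x_, m, 0.5), x, 5.0)
exact = -1.0 / (2 * np.sqrt(3) * 0.5)  # dF/d(mean) = -1/(b - a) inside (a, b)
print(num, exact)  # both -0.5774 (= -1/sqrt(3))
```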


Summary#

                      Finite difference                                Closed-form
FORM runs             \(2n + 1\)                                       1
Marginal sens.        Yes                                              Yes
Correlation sens.     No                                               Yes
Accuracy              Depends on step size \(\delta\)                  Exact (up to quadrature)
Stability             Can be noisy for correlated non-normal vars.     Stable
Distribution support  Any                                              Any (numerical dF_dtheta fallback)

Recommendation: Use the closed-form method (numerical=False) when accuracy and correlation sensitivities are needed. The FD method remains useful as an independent verification tool.