Direct Differentiation Method
By default, the FORM algorithm uses a Forward Finite Difference (FFD) scheme to determine the gradient of the limit state function. However, in some cases the derivative of the limit state function with respect to each of its parameters is available, either as a closed-form expression or as an output from another algorithm (e.g. the sensitivity commands in OpenSeesPy). In such cases a computational saving can be made, which can be significant, especially for large, complex finite element analyses.
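For context, the FFD scheme approximates each component of the gradient as \(\frac{\partial g}{\partial x_i} \approx \frac{g(\mathbf{x} + h\,\mathbf{e}_i) - g(\mathbf{x})}{h}\), which costs one extra evaluation of the limit state function per random variable at each iteration; the DDM avoids these extra evaluations by using the supplied analytic gradient directly.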
This example demonstrates the use of Pystra’s DDM algorithm for a simple closed-form limit state function, and the speed-up possible even for this simple case.
[1]:
import pystra as ra
import numpy as np
import timeit
Define the limit state function and its gradient, returning both the evaluation of the limit state function and the gradient vector. Also allow the limit state function to be called in a vectorized manner: in the following function the \(X_i,\ i \in \{1,\ldots,6\}\) may be passed with dimension \(n \times 1\), so that the returned `G` is size \(n \times 1\) and `grad_G` is size \(6 \times n\):
[2]:
def lsf(r, X1, X2, X3, X4, X5, X6):
    """
    Calrel example from FERUM
    """
    # Limit state function evaluation
    G = (
        r
        - X2 / (1000 * X3)
        - (X1 / (200 * X3)) ** 2
        - X5 / (1000 * X6)
        - (X4 / (200 * X6)) ** 2
    )
    # Analytic gradient; rows correspond to X1, ..., X6
    grad_G = np.array(
        [
            -X1 / (20000 * X3**2),
            -1 / (1000 * X3),
            (20 * X2 * X3 + X1**2) / (20000 * X3**3),
            -X4 / (20000 * X6**2),
            -1 / (1000 * X6),
            (20 * X5 * X6 + X4**2) / (20000 * X6**3),
        ]
    )
    return G, grad_G
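As a quick cross-check (not part of the original example; a minimal sketch assuming only NumPy and the lsf defined above), the analytic gradient can be compared against a forward finite difference approximation, which also confirms the vectorized shapes:

import numpy as np

rng = np.random.default_rng(0)
n = 4
X = rng.uniform(1.0, 5.0, size=(6, n))  # each row holds n samples of one variable
G, grad_G = lsf(1.7, *X)
print(G.shape, grad_G.shape)  # (4,) and (6, 4)

# Compare each analytic gradient row to a forward finite difference
h = 1e-6
for i in range(6):
    X_h = X.copy()
    X_h[i] += h
    G_h, _ = lsf(1.7, *X_h)
    assert np.allclose((G_h - G) / h, grad_G[i], atol=1e-4)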
Set up a generic function that establishes and runs the model according to the differentiation type passed in `diff_mode`:
[3]:
def run(diff_mode):
    limit_state = ra.LimitState(lsf)

    # Set some options (optional)
    options = ra.AnalysisOptions()
    options.setPrintOutput(False)
    options.setDiffMode(diff_mode)

    stochastic_model = ra.StochasticModel()

    # Define random variables
    stochastic_model.addVariable(ra.Lognormal("X1", 500, 100))
    stochastic_model.addVariable(ra.Lognormal("X2", 2000, 400))
    stochastic_model.addVariable(ra.Uniform("X3", 5, 0.5))
    stochastic_model.addVariable(ra.Lognormal("X4", 450, 90))
    stochastic_model.addVariable(ra.Lognormal("X5", 1800, 360))
    stochastic_model.addVariable(ra.Uniform("X6", 4.5, 0.45))

    # Define constants
    stochastic_model.addVariable(ra.Constant("r", 1.7))

    stochastic_model.setCorrelation(
        ra.CorrelationMatrix(
            [
                [1.0, 0.3, 0.2, 0, 0, 0],
                [0.3, 1.0, 0.2, 0, 0, 0],
                [0.2, 0.2, 1.0, 0, 0, 0],
                [0, 0, 0, 1.0, 0.3, 0.2],
                [0, 0, 0, 0.3, 1.0, 0.2],
                [0, 0, 0, 0.2, 0.2, 1.0],
            ]
        )
    )

    # Set up FORM analysis
    form = ra.Form(
        analysis_options=options,
        stochastic_model=stochastic_model,
        limit_state=limit_state,
    )
    # Run it
    form.run()
    return form
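Although not part of the timing comparison below, it is worth confirming that both differentiation modes converge to the same design point; a minimal sketch, using Pystra’s getBeta() accessor on the returned FORM object:

form_ffd = run("ffd")
form_ddm = run("ddm")
# The reliability indices from the two modes should agree
print(form_ffd.getBeta(), form_ddm.getBeta())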
Set up a wrapper for the default FFD method, accumulating the number of limit state function calls:
[4]:
ffd_count = 0


def run_ffd():
    global ffd_count
    form = run("ffd")
    ffd_count += form.getNoFunctionCalls()
And similarly for the DDM, which makes use of the `grad_G` returned from `lsf`:
[5]:
ddm_count = 0


def run_ddm():
    global ddm_count
    form = run("ddm")
    ddm_count += form.getNoFunctionCalls()
Finally, run both 100 times and compare the execution speed:
[6]:
number = 100
time_ffd = timeit.timeit(stmt=run_ffd, number=number)
time_ddm = timeit.timeit(stmt=run_ddm, number=number)
print("Total time taken (s):")
print(f"FFD: {time_ffd}; DDM: {time_ddm}")
print("Number of function evaluations:")
print(f"FFD: {ffd_count}; DDM: {ddm_count}")
print("Average time per call (s):")
print(f"FFD: {time_ffd/number}; DDM: {time_ddm/number}")
print(f"DDM speed-up: {time_ffd/time_ddm:.2f}")
Total time taken (s):
FFD: 3.8152463799342513; DDM: 3.757767920847982
Number of function evaluations:
FFD: 8500; DDM: 4300
Average time per call (s):
FFD: 0.03815246379934251; DDM: 0.03757767920847982
DDM speed-up: 1.02
For the present problem the speed-up is not very significant, but for more complex models it can be, as suggested by the difference in the number of function calls: the DDM required roughly half as many limit state evaluations (4300 vs 8500), so for a model whose runtime is dominated by the limit state evaluation the speed-up would approach 2×.