Incorrect behaviour with Line Sampling and Random Slicing
Closed this issue · 3 comments
using UncertaintyQuantification
# Input
x = RandomVariable.(Normal(), [:x1, :x2])
# Model
y = Model(df -> df.x2 .+ 0.01 * df.x1 .^3.0 .+ sin.(df.x1), :y)
# Limit-state function
g(df) = 5 .- df.y
# Simulation
numberlines = 200
LS = LineSampling(numberlines, collect(0.5:0.5:10))
# Inputs
mean = Interval(-0.2, 0.2, :μ)
std = Parameter(1, :σ)
x_pbox = ProbabilityBox{Normal}.(Ref([mean, std]), [:x1, :x2])
# ERROR: MethodError: no method matching iterate(::UncertaintyQuantification.SlicingModel)
probability_of_failure([y], g, x_pbox, RandomSlicing(LS))
I have also observed a strange behaviour: when you perform a LineSampling simulation first and then pass the same LS to RandomSlicing, it seems to pass Intervals to the model during evaluation instead of optimising g. I think LS is mutated by the previous simulation, which leads to this behaviour:
using UncertaintyQuantification
# Input
x = RandomVariable.(Normal(), [:x1, :x2])
# Model
y = Model(df -> df.x2 .+ 0.01 * df.x1 .^3.0 .+ sin.(df.x1), :y)
# Limit-state function
g(df) = 5 .- df.y
# Simulation
numberlines = 200
LS = LineSampling(numberlines, collect(0.5:0.5:10))
# Inputs
x = RandomVariable.(Normal(), [:x1, :x2])
mean = Interval(-0.2, 0.2, :μ)
std = Parameter(1, :σ)
x_pbox = ProbabilityBox{Normal}.(Ref([mean, std]), [:x1, :x2])
# Works
probability_of_failure([y], g, x, LS)
# ERROR: MethodError: no method matching ^(::Interval, ::Float64)
probability_of_failure([y], g, x_pbox, RandomSlicing(LS))
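To check the suspected mutation, one could inspect the simulation object before and after the precise run, reusing the definitions from the snippet above. Note that the direction field name is an assumption about LineSampling's internals:
LS = LineSampling(numberlines, collect(0.5:0.5:10))
@show LS.direction  # assumed field; expected to be empty before the precise run
probability_of_failure([y], g, x, LS)
@show LS.direction  # if this is now populated, the same object carries state into RandomSlicing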
We should add tests for RandomSlicing and DoubleLoop to check the compatibility of these methods with all of the precise simulation types. Not to check the accuracy of these methods, just their compatibility, i.e. quick simulations with a low number of samples.
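A minimal sketch of what such a compatibility test could look like, reusing the reproducer above; the chosen simulations, sample sizes and the bare `!== nothing` checks are placeholders rather than the actual test suite:
using Test, UncertaintyQuantification
x_pbox = ProbabilityBox{Normal}.(Ref([Interval(-0.2, 0.2, :μ), Parameter(1, :σ)]), [:x1, :x2])
y = Model(df -> df.x2 .+ 0.01 * df.x1 .^ 3.0 .+ sin.(df.x1), :y)
g(df) = 5 .- df.y
@testset "imprecise probability_of_failure compatibility" begin
    # Quick simulations with a low number of samples; we only assert that the calls run.
    for sim in (MonteCarlo(100), LineSampling(10, collect(0.5:0.5:10)))
        @test probability_of_failure([y], g, x_pbox, DoubleLoop(sim)) !== nothing
        @test probability_of_failure([y], g, x_pbox, RandomSlicing(sim)) !== nothing
    end
end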
The problem with LineSampling occurs when we don't pass an important direction. There should be an easy fix.
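As a possible workaround until the fix lands, passing an explicit important direction might avoid the failing code path. The optional third argument and its NamedTuple form are assumptions about the LineSampling constructor, and the direction below is only a rough guess pointing towards the failure region:
# Assumed constructor signature: LineSampling(lines, points, direction::NamedTuple)
LS = LineSampling(200, collect(0.5:0.5:10), (x1 = 0.0, x2 = 1.0))
probability_of_failure([y], g, x_pbox, RandomSlicing(LS))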
It's a bit more involved. Not sure how the intervals end up in the Model. That should be impossible.
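One way to narrow this down could be an instrumented model that reports what it actually receives during the RandomSlicing run. This wrapper is purely illustrative and reuses g, x_pbox and LS from the snippets above:
# Hypothetical instrumented model: prints the element type reaching the evaluation.
y_debug = Model(df -> begin
    @show eltype(df.x1)  # Float64 is expected; Interval here would confirm the leak
    df.x2 .+ 0.01 * df.x1 .^ 3.0 .+ sin.(df.x1)
end, :y)
probability_of_failure([y_debug], g, x_pbox, RandomSlicing(LS))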