Ai2 "LoadError: Numerically Inconsistent"
castrong opened this issue · 11 comments
using NeuralVerification
network = NeuralVerification.read_nnet("./examples/networks/issue_net.nnet")
input_set = NeuralVerification.HPolytope([NeuralVerification.HalfSpace([1.0, 0.0, 0.0, 0.0], 0.3845579517032085), NeuralVerification.HalfSpace([0.0, 1.0, 0.0, 0.0], 0.8331193586918577), NeuralVerification.HalfSpace([0.0, 0.0, 1.0, 0.0], 0.6607352829885846), NeuralVerification.HalfSpace([0.0, 0.0, 0.0, 1.0], 0.270232549769154), NeuralVerification.HalfSpace([-1.0, 0.0, 0.0, 0.0], 0.022984248429274823), NeuralVerification.HalfSpace([0.0, -1.0, 0.0, 0.0], 1.061033037362016), NeuralVerification.HalfSpace([0.0, 0.0, -1.0, 0.0], 0.734914583454296), NeuralVerification.HalfSpace([0.0, 0.0, 0.0, -1.0], 0.7494557834041424)])
output_set = NeuralVerification.HPolytope([NeuralVerification.HalfSpace([1.0, 0.0, 0.0, 0.0], 0.47919249361761995), NeuralVerification.HalfSpace([0.0, 1.0, 0.0, 0.0], 0.21600299802246092), NeuralVerification.HalfSpace([0.0, 0.0, 1.0, 0.0], 0.13753126667274596), NeuralVerification.HalfSpace([0.0, 0.0, 0.0, 1.0], 0.536256411404834), NeuralVerification.HalfSpace([-1.0, 0.0, 0.0, 0.0], 0.05623084997936845), NeuralVerification.HalfSpace([0.0, -1.0, 0.0, 0.0], -0.02937453504189702), NeuralVerification.HalfSpace([0.0, 0.0, -1.0, 0.0], 0.7988290732045045), NeuralVerification.HalfSpace([0.0, 0.0, 0.0, -1.0], 1.1397183250614469)])
problem = NeuralVerification.Problem(network, input_set, output_set)
# Solve on Ai2
println(NeuralVerification.solve(Ai2h(), problem))
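Before solving, a quick point-wise sanity check can rule out an obviously mis-specified problem. A minimal sketch, assuming each layer exposes weights/bias fields, with ReLU activations on the hidden layer only, and using a hand-picked point near the center of input_set:
relu(x) = max.(x, 0.0)

function forward(network, x)
    y = x
    for (i, layer) in enumerate(network.layers)
        y = layer.weights * y + layer.bias
        i < length(network.layers) && (y = relu(y))  # ReLU on hidden layers only
    end
    return y
end

x0 = [0.18, -0.11, -0.04, -0.24]  # roughly the center of input_set
y0 = forward(network, x0)
println(y0, "  in output set? ", y0 ∈ output_set)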
With network:
//Neural Network File Format by Kyle Julian, Stanford 2016.
2, 4, 4, 8,
4, 8, 4,
This line extraneous
-6.55e4,-6.55e4,-6.55e4,-6.55e4,
6.55e4,6.55e4,6.55e4,6.55e4,
0.0,0.0,0.0,0.0,0.0,
1.0,1.0,1.0,1.0,1.0,
-0.23004744778871178, 0.9210992236909084, -0.030635894568796118, -0.34332947427825244,
0.8978626764268087, 0.8309944551520903, -0.014461773871067951, -0.6884545083395874,
-0.9571306885211475, -0.4651249798060011, 0.8927464935383509, 0.23779053079463175,
0.2796571501428917, -0.4114885625241218, -0.05410680573253712, 0.6629014622254301,
-0.8632374852957523, 0.5957105726470227, 0.5747178822778407, 0.1353980115540212,
0.43882314695613855, -0.002452754445515737, 0.6443817883537535, -0.23630991424169778,
0.03346675961844259, 0.7115606398658421, 0.4798026539014697, -0.47832685658716123,
-0.5636833238093555, 0.04214094639926591, 0.7385695072981133, 0.6656665947649203,
0.6162281231994049,
-0.2842576636476495,
0.17361456246025053,
-0.8145690340743399,
0.9609853991020589,
0.7041393195522954,
0.38275285236140943,
-0.7922435452981706,
0.5566377731659151, -0.22097068945875353, 0.8681055231723533, 0.6648609539585149, 0.8963076897599609, -0.9407432187803808, 0.5519973844436779, 0.9539313268084384,
0.672011854126858, 0.7873646431266152, -0.48281530198422296, 0.056392080910015796, -0.7497530830241459, 0.22864777607158127, 0.05450773274033782, -0.7405932810079814,
-0.9107076528782465, 0.33876887402212397, 0.18209738189045765, 0.17489511201142527, -0.4812871934319878, -0.8244163696913085, 0.425612874065187, -0.889371405676664,
-0.2995076129559373, -0.44135435764626374, 0.15240142012015, 0.17114545464001596, 0.7469225136257442, -0.571483866728244, -0.8498590494961284, -0.4970584819809667,
0.9493071313547716,
-0.4000560612181365,
-0.9141989352816746,
0.2665395852249475,
Results in the following output and stacktrace:
In dd_blockelimination
glp_simplex: unable to recover undefined or non-optimal solution
(the two lines above are repeated 19 times in total)
In dd_blockelimination
ERROR: LoadError: Numerically inconsistent
Stacktrace:
[1] myerror(::Int32) at /Users/castrong/.julia/dev/CDDLib/src/error.jl:23
[2] dd_matrix2poly at /Users/castrong/.julia/dev/CDDLib/src/polyhedra.jl:53 [inlined]
[3] CDDLib.CDDPolyhedra{Float64,Float64}(::CDDLib.CDDInequalityMatrix{Float64,Float64}) at /Users/castrong/.julia/dev/CDDLib/src/polyhedra.jl:68
[4] CDDLib.CDDPolyhedra(::CDDLib.CDDInequalityMatrix{Float64,Float64}) at /Users/castrong/.julia/dev/CDDLib/src/polyhedra.jl:83
[5] getpoly(::CDDLib.Polyhedron{Float64}, ::Bool) at /Users/castrong/.julia/dev/CDDLib/src/polyhedron.jl:60
[6] getpoly at /Users/castrong/.julia/dev/CDDLib/src/polyhedron.jl:56 [inlined]
[7] getext(::CDDLib.Polyhedron{Float64}) at /Users/castrong/.julia/dev/CDDLib/src/polyhedron.jl:51
[8] vrep at /Users/castrong/.julia/dev/CDDLib/src/polyhedron.jl:156 [inlined]
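Frames [2] to [8] are the H-representation to V-representation conversion inside CDDLib, reached through Polyhedra's vrep. A minimal sketch of that code path on a toy polytope, using Polyhedra.jl with the CDDLib backend (the actual polytope produced by Ai2 is not shown in the trace):
using Polyhedra, CDDLib
A = [1.0 0.0; 0.0 1.0; -1.0 -1.0]
b = [1.0, 1.0, 0.0]
P = polyhedron(hrep(A, b), CDDLib.Library())  # H-representation backed by CDDLib
collect(points(vrep(P)))                      # vertex enumeration runs dd_matrix2poly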
Do you have more of the stacktrace? The first frame is already from CDDLib, which is beyond our pay grade!
I tried, but I'm now not able to reproduce it with the same setup. I wonder if something merged into master fixed it, but it took restarting my environment to see that change? I am using Revise, so I wouldn't have expected that, and I had tried after my most recent merge.
That may be because you were using the algorithm that is now called Ai2h in master.
Thanks for the suggestion! I tried running with Ai2h() instead of Ai2() in master and it just runs indefinitely (instead of leading to an error). As a result, I'm still not sure what changed to make the old error disappear.
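In the meantime, the other variants on master can be tried on the same problem. A minimal sketch, assuming Ai2z() and Box() are accepted by solve in the same way as Ai2h() (these names come up in the discussion below):
using NeuralVerification
println(NeuralVerification.solve(Ai2z(), problem))  # zonotope-based propagation
println(NeuralVerification.solve(Box(), problem))   # box/interval propagation
# NeuralVerification.solve(Ai2h(), problem)         # HPolytope-based; this is the call that hangs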
The errors are perhaps related to how the inputs are used in the algorithm? Note that both the Ai2h and the Ai2z approaches shown below work fine on this example:
using LazySets

input_set = HPolytope([HalfSpace([1.0, 0.0, 0.0, 0.0], 0.3845579517032085),
HalfSpace([0.0, 1.0, 0.0, 0.0], 0.8331193586918577),
HalfSpace([0.0, 0.0, 1.0, 0.0], 0.6607352829885846),
HalfSpace([0.0, 0.0, 0.0, 1.0], 0.270232549769154),
HalfSpace([-1.0, 0.0, 0.0, 0.0], 0.022984248429274823),
HalfSpace([0.0, -1.0, 0.0, 0.0], 1.061033037362016),
HalfSpace([0.0, 0.0, -1.0, 0.0], 0.734914583454296),
HalfSpace([0.0, 0.0, 0.0, -1.0], 0.7494557834041424)]);
# "Ai2h"
@time begin
    # lazy affine map of the input set, overapproximated by a box
    H1 = overapproximate(network.layers[1].weights * input_set + network.layers[1].bias, Hyperrectangle)
    H1 = rectify(H1)  # ReLU activation
    H2 = overapproximate(network.layers[2].weights * H1 + network.layers[2].bias, Hyperrectangle);
end
0.006664 seconds (1.06 k allocations: 78.672 KiB)
Hyperrectangle{Float64,Array{Float64,1},Array{Float64,1}}([2.3950844668146862, -0.13751415227951902, -2.1096209129694556, -0.6299855269624741], [3.1193665559469745, 2.295359657778874, 2.454380545304072, 2.4271559068135793])
# "Ai2z"
@time begin
    input_set = overapproximate(input_set, Hyperrectangle)  # exact, since the input set is a box
    input_set = convert(Zonotope, input_set)                # also exact
    Z1 = affine_map(network.layers[1].weights, input_set, network.layers[1].bias)
    Z1 = overapproximate(Rectification(Z1), Zonotope)       # ReLU activation, overapproximated with a zonotope
    Z2 = affine_map(network.layers[2].weights, Z1, network.layers[2].bias)
end
0.009373 seconds (536 allocations: 42.047 KiB)
Zonotope{Float64,Array{Float64,1},Array{Float64,2}}([1.7115830883645933, -0.23495075807030685, -2.11436002496305, -0.3868240216039214], [-0.32634300914634634 0.7369189071413871 … 0.18472489913317133 0.15586653347355545; 0.2154030641984947 0.5327045705827566 … -0.1545206676442892 0.015391252910196256; 0.038867901587761676 -0.5260435688205888 … -0.09919108055920038 0.1201795609033839; -0.20050663784682085 -0.44243336453491733 … 0.1539373002888789 -0.23997320951003187])
using Plots

plot(project(H2, [1, 2]), lab="H2")
plot!(project(Z2, [1, 2]), lab="Z2")
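The same zonotope propagation generalizes to any number of layers. A minimal sketch, assuming weights/bias fields per layer, ReLU on the hidden layers, and an identity output layer as above:
using LazySets

function propagate_zonotope(network, X0)
    Z = convert(Zonotope, overapproximate(X0, Hyperrectangle))  # box-overapproximate, then view as a zonotope
    for (i, layer) in enumerate(network.layers)
        Z = affine_map(layer.weights, Z, layer.bias)
        if i < length(network.layers)
            Z = overapproximate(Rectification(Z), Zonotope)  # ReLU on hidden layers only
        end
    end
    return Z
end

Z2_loop = propagate_zonotope(network, input_set)  # should coincide with Z2 above for this 2-layer network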
@mforets what you call Ai2h there is called Box, Ai2h is the old implementation of Ai2 in NeuralVerification!
Ok, thanks... I thought the h was for Hyperrectangle :) Now using the forward_network API: (i) both Box and Ai2z work, (ii) Ai2z coincides with my plot above for the Z2 set, and (iii) Box doesn't return a box but a zonotope, which is better than H2 but worse than Z2, as expected I guess.
out_Box = forward_network(Box(), network, input_set)
plot!(project(out_Box, [1, 2]), lab="Box")
out_Ai2z = forward_network(Ai2z(), network, input_set)
plot!(project(out_Ai2z, [1, 2]), lab="Ai2z")
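A follow-up check one could add here (a sketch, assuming the output_set from the original report is still in scope) is whether these overapproximations already certify the output constraint, using the LazySets inclusion operator:
println(out_Ai2z ⊆ output_set)  # zonotope ⊆ H-polytope inclusion check
println(out_Box ⊆ output_set)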
I also saw the call with Ai2h hang forever. When I eventually interrupted it, I remember the stacktrace went through removehredundancy in Polyhedra, originally through convex_hull. For low-dimensional sets like these, this is worth looking into further. I can try digging into it again.
I remember the stacktrace went through removehredundancy in Polyhedra, originally through convex_hull
Where/why is convex_hull used? The "Box" approach works for high dimensions as well, as shown in @SebastianGuadalupe's blog in dimension ~1000.
It is used in the old implementation, Ai2h.
@mforets the h in Ai2h is for Ai2{HPolytope} (the old "tight" implementation). Maybe this name is too misleading. Any ideas on what would be a clearer alias? Ai2H? Ai2poly?
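For context, a sketch of how such aliases could look if Ai2 is parametric in the overapproximation set type (illustrative only; the Solver supertype and solve methods are omitted, and the exact definition is an assumption rather than the package's code):
using LazySets: Hyperrectangle, Zonotope, HPolytope

struct Ai2{T<:Union{Hyperrectangle, Zonotope, HPolytope}} end

const Ai2h = Ai2{HPolytope}       # old "tight" H-polytope implementation
const Ai2z = Ai2{Zonotope}        # zonotope overapproximation
const Box  = Ai2{Hyperrectangle}  # box/interval propagation
# a clearer alias would just be another const, e.g.
# const Ai2H = Ai2{HPolytope}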