sisl/NeuralVerification.jl

Inconsistent Results between MIPVerify and NSVerify

castrong opened this issue · 1 comment

On the same problem I get different results from MIPVerify and NSVerify, both of which should be complete. MIPVerify is also inconsistent with ILP, Planet, and Reluplex on some problems, so I suspect that MIPVerify is the issue.

# Setup problem
network = NeuralVerification.read_nnet("./test/test_sets/random/small/networks/rand_[2,5,2]_3.nnet")
input_set = NeuralVerification.Hyperrectangle([-0.4700880055019103, -0.11446576023423427], [0.795682840722167, 0.12083185296873333])
output_set = NeuralVerification.PolytopeComplement(HPolytope([HalfSpace([1.0, 0.0], 1.152247437385072), HalfSpace([0.0, 1.0], 0.824396693498243), HalfSpace([-1.0, 0.0], 0.27530728609343824), HalfSpace([0.0, -1.0], -0.05687606354374264)]))
problem = NeuralVerification.Problem(network, input_set, output_set)

# MIPVerify and NSVerify Problem
println(NeuralVerification.solve(NeuralVerification.MIPVerify(), problem))
println(NeuralVerification.solve(NeuralVerification.NSVerify(), problem))

Leads to an output of:
AdversarialResult(:violated, 0.753269953515554)
CounterExampleResult(:holds, Float64[])
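As a quick standalone sanity check (plain Julia, not part of the package), any candidate counterexample can be tested for membership in the input hyperrectangle directly. Using the adversarial input reported in the analysis below, the second coordinate deviates from the center by about 0.7533, well beyond the radius 0.1208:

```julia
# Check whether a point lies inside a hyperrectangle given by center/radius.
in_hyperrectangle(x, center, radius) = all(abs.(x .- center) .<= radius)

# Input set from the problem setup above.
center = [-0.4700880055019103, -0.11446576023423427]
radius = [0.795682840722167, 0.12083185296873333]

# Adversarial input reported by MIPVerify (see the analysis below).
x_adv = [0.283182, -0.867736]

println(in_hyperrectangle(x_adv, center, radius))  # false
```

So the "counterexample" MIPVerify optimizes over is not even a member of the input set.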

Network:

//Neural Network File Format by Kyle Julian, Stanford 2016.
2, 2, 2, 5,
2, 5, 2,
This line extraneous
-6.55e4,-6.55e4,
6.55e4,6.55e4,
0.0,0.0,0.0,
1.0,1.0,1.0,
-0.7365114899510892, -0.41154368275566355,
-0.8591329390595495, 0.9370977185855476,
-0.11047670316300051, -0.2654135126870667,
0.227516004509706, 0.021382971813836527,
-0.07722192536896744, -0.6012968553830342,
-0.5395494005121688,
0.9689132574915664,
0.0008503521368683487,
0.9240786368047944,
0.6142922344389037,
0.9115126809261311, 0.4138440851467826, -0.33913105667038046, 0.16700640073917672, -0.235120928568453,
0.6708292278443739, 0.7576636017826068, 0.8258791757476476, -0.17354379680520315, 0.05393765869881095,
0.9457155975714278,
0.8338760871488544,

The problem is that, although we encode the network as linear constraints using bounds computed from the input set, we never explicitly enforce the input constraint itself. If the optimizer's solution leaves the input set, those bounds no longer hold and the encoding can be wrong. In the example above, we get the following results:

max disturbance: 0.753269953515554
adversarial input: [0.283182, -0.867736]
Layer 1:
  neurons (from optimization):       [0.0, -0.0875308, 0.199874, 0.969952, 1.11419]
  neurons (from direct computation): [0.0, 0.0, 0.199874, 0.969952, 1.11419]
  bounds (lower, upper): [0.0, 0.468687, 0.0, 0.631064, 0.585321], [0.489541, 2.06234, 0.20314, 0.998293, 0.853521]
Layer 2:
  neurons (from optimization):       [0.741727, 0.824397]
  neurons (from direct computation): [0.777951, 0.890716]
  bounds (lower, upper): [0.975499, 1.04731], [2.27453, 2.82913]
Output: 
adversarial output (from optimization): [0.741727, 0.824397]
adversarial output (from direct computation): [0.777951, 0.890716]

The neuron values from the optimization clearly violate the bounds used to encode the constraints: the second neuron in Layer 1 takes the value -0.0875308, below its lower bound of 0.468687. To resolve the problem, we need to add the input constraint to the optimization model.
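A minimal sketch of the fix, assuming the encoding exposes the input-layer JuMP variables (the function name and signature are illustrative, not the actual NeuralVerification.jl internals):

```julia
using JuMP, LazySets

# Illustrative helper (not the real package API): explicitly bound the
# input-layer variables by the input hyperrectangle, so the optimizer
# cannot return a solution outside the region for which the layer
# bounds were computed.
function add_input_constraint!(model::Model, z_in::Vector{VariableRef},
                               input_set::Hyperrectangle)
    lo = low(input_set)    # center .- radius
    hi = high(input_set)   # center .+ radius
    @constraint(model, z_in .>= lo)
    @constraint(model, z_in .<= hi)
    return model
end
```

With the input variables constrained to the hyperrectangle, any optimum MIPVerify returns is a genuine point of the input set, and the layer bounds used in the big-M encoding remain valid.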