tynguyen/unsupervisedDeepHomographyRAL2018

InvalidArgumentError (see above for traceback): Input matrix is not invertible.

chenyuyuyu opened this issue · 1 comment

Hi. When I train the unsupervised model on the synthetic dataset with `python homography_CNN_synthetic.py --mode test --lr 1e-4 --loss_type l1_loss`, it runs fine at first, but about 7 hours in it fails with `InvalidArgumentError: Input matrix is not invertible.`
I don't know how to debug this. Do you have any ideas?

my environment :
cuda 8.0.61
python 2.7
tensorflow-gpu 1.2.1 (or higher)
opencv 4.1.1
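The error is raised by the `tf.matrix_solve` op (`MatrixSolve:0` in the log below), which solves the batched 8×8 DLT system `A_mat · H_8el = b`; if any batch produces a singular `A_mat`, the whole run aborts. As a minimal NumPy-only sketch (not code from this repository), one common workaround is to add a small Tikhonov term to `A_mat` before solving:

```python
import numpy as np

def solve_h(A, b, eps=0.0):
    # Solve the 8x8 DLT system A h = b for the homography parameters.
    # A small Tikhonov term eps*I keeps the system solvable when a
    # training batch produces a (near-)singular A.
    return np.linalg.solve(A + eps * np.eye(A.shape[-1]), b)

A = np.zeros((8, 8))            # worst case: a fully singular A
b = np.ones((8, 1))
try:
    solve_h(A, b)               # plain solve raises LinAlgError
except np.linalg.LinAlgError:
    pass
h = solve_h(A, b, eps=1e-6)     # regularized solve succeeds
```

In the TF 1.x graph the equivalent change would be adding `eps * tf.eye(8)` to `A_mat` (or switching to a least-squares solve); whether that is acceptable for this model is an assumption, not something confirmed in this thread.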
<==================== Loading data ===================>

===> There are totally 500 test files
===> Train: There are totally 10000 training files
args lr: 0.0001 9e-05
===> Decay steps: 58117.5893057
--Shape of A_mat: [64, 8, 8]
--shape of b: [64, 8, 1]
--shape of H_8el Tensor("MatrixSolve:0", shape=(64, 8, 1), dtype=float32, device=/device:GPU:0)
('--Inter- scale_h:', True)
/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py:93: UserWarning: Converting sparse IndexedSlices to a dense Tensor of unknown shape. This may consume a large amount of memory.
"Converting sparse IndexedSlices to a dense Tensor of unknown shape. "
====> Use loss type: l1_loss
--Shape of A_mat: [64, 8, 8]
--shape of b: [64, 8, 1]
--shape of H_8el Tensor("MatrixSolve_1:0", shape=(64, 8, 1), dtype=float32, device=/device:GPU:1)
('--Inter- scale_h:', True)
====> Use loss type: l1_loss
2019-11-05 11:33:53.441071: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2019-11-05 11:33:53.441128: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2019-11-05 11:33:53.441136: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2019-11-05 11:33:53.441142: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2019-11-05 11:33:53.441147: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2019-11-05 11:33:53.777061: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 0 with properties:
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.721
pciBusID 0000:03:00.0
Total memory: 10.91GiB
Free memory: 10.72GiB
2019-11-05 11:33:53.999777: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x55cbe7c999a0 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2019-11-05 11:33:54.000875: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 1 with properties:
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.721
pciBusID 0000:04:00.0
Total memory: 10.91GiB
Free memory: 10.75GiB
2019-11-05 11:33:54.179809: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x55cbe7c9dec0 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2019-11-05 11:33:54.180909: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 2 with properties:
name: Tesla K40c
major: 3 minor: 5 memoryClockRate (GHz) 0.745
pciBusID 0000:81:00.0
Total memory: 11.17GiB
Free memory: 11.09GiB
2019-11-05 11:33:54.377269: W tensorflow/stream_executor/cuda/cuda_driver.cc:523] A non-primary context 0x55cbe7ca2430 exists before initializing the StreamExecutor. We haven't verified StreamExecutor works with that.
2019-11-05 11:33:54.378280: I tensorflow/core/common_runtime/gpu/gpu_device.cc:940] Found device 3 with properties:
name: GeForce GTX 1080 Ti
major: 6 minor: 1 memoryClockRate (GHz) 1.721
pciBusID 0000:82:00.0
Total memory: 10.91GiB
Free memory: 10.75GiB
2019-11-05 11:33:54.379452: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 0 and 2
2019-11-05 11:33:54.379477: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 0 and 3
2019-11-05 11:33:54.379501: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 1 and 2
2019-11-05 11:33:54.379527: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 1 and 3
2019-11-05 11:33:54.379537: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 2 and 0
2019-11-05 11:33:54.379544: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 2 and 1
2019-11-05 11:33:54.379552: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 2 and 3
2019-11-05 11:33:54.379565: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 3 and 0
2019-11-05 11:33:54.379578: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 3 and 1
2019-11-05 11:33:54.379588: I tensorflow/core/common_runtime/gpu/gpu_device.cc:832] Peer access not supported between device ordinals 3 and 2
2019-11-05 11:33:54.379631: I tensorflow/core/common_runtime/gpu/gpu_device.cc:961] DMA: 0 1 2 3
2019-11-05 11:33:54.379641: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 0: Y Y N N
2019-11-05 11:33:54.379648: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 1: Y Y N N
2019-11-05 11:33:54.379655: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 2: N N Y N
2019-11-05 11:33:54.379662: I tensorflow/core/common_runtime/gpu/gpu_device.cc:971] 3: N N N Y
2019-11-05 11:33:54.379692: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:03:00.0)
2019-11-05 11:33:54.379704: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:1) -> (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:04:00.0)
2019-11-05 11:33:54.379712: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:2) -> (device: 2, name: Tesla K40c, pci bus id: 0000:81:00.0)
2019-11-05 11:33:54.379720: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:3) -> (device: 3, name: GeForce GTX 1080 Ti, pci bus id: 0000:82:00.0)
===> Start step: 0
[>................................................................] Step: 14s924ms | Tot: 1ms | Train: 1, h_loss 26.239, l1_loss 0.616609, l1_smooth_loss 0.33
2019-11-05 11:35:10.127705: I tensorflow/core/common_runtime/gpu/pool_allocator.cc:247] PoolAllocator: After 53954 get requests, put_count=53950 evicted_count=1000 eviction_rate=0.0185357 and unsatisfied allocation rate=0.0204619
2019-11-05 11:35:10.127768: I tensorflow/core/common_runtime/gpu/pool_allocator.cc:259] Raising pool_size_limit_ from 100 to 110
[>................................................................] Step: 1m26s | Tot: 1m26s | Train: 1, h_loss 26.192, l1_loss 0.611157, l1_smooth_loss 0.332
... (progress log trimmed; losses decrease steadily for several hours) ...
[=======>.........................................................] Step: 1m30s | Tot: 4h29m | Train: 1, h_loss 18.027, l1_loss 0.379878, l1_smooth_loss 0.175
Step: 1m29s | Tot: 4h30m | Train: 1, h_loss 18.012, l1_loss 0.379647, l1_smooth_loss 0.175 [=======>.........................................................] Step: 1m27s | Tot: 4h32m | Train: 1, h_loss 17.998, l1_loss 0.379425, l1_smooth_loss 0.175 [=======>.........................................................] Step: 1m27s | Tot: 4h33m | Train: 1, h_loss 17.984, l1_loss 0.379205, l1_smooth_loss 0.174 [========>........................................................] Step: 1m26s | Tot: 4h35m | Train: 1, h_loss 17.970, l1_loss 0.378988, l1_smooth_loss 0.174 [========>........................................................] Step: 1m27s | Tot: 4h36m | Train: 1, h_loss 17.957, l1_loss 0.378771, l1_smooth_loss 0.174 [========>........................................................] Step: 1m27s | Tot: 4h38m | Train: 1, h_loss 17.943, l1_loss 0.378558, l1_smooth_loss 0.174 [========>........................................................] Step: 1m27s | Tot: 4h39m | Train: 1, h_loss 17.929, l1_loss 0.378355, l1_smooth_loss 0.174 [========>........................................................] Step: 1m27s | Tot: 4h40m | Train: 1, h_loss 17.915, l1_loss 0.378141, l1_smooth_loss 0.174 [========>........................................................] Step: 1m26s | Tot: 4h42m | Train: 1, h_loss 17.901, l1_loss 0.377922, l1_smooth_loss 0.174 [========>........................................................] Step: 1m29s | Tot: 4h43m | Train: 1, h_loss 17.888, l1_loss 0.377727, l1_smooth_loss 0.173 [========>........................................................] Step: 1m28s | Tot: 4h45m | Train: 1, h_loss 17.875, l1_loss 0.377539, l1_smooth_loss 0.173 [========>........................................................] Step: 1m28s | Tot: 4h46m | Train: 1, h_loss 17.861, l1_loss 0.377336, l1_smooth_loss 0.173 [========>........................................................] 
Step: 1m27s | Tot: 4h48m | Train: 1, h_loss 17.848, l1_loss 0.377132, l1_smooth_loss 0.173 [========>........................................................] Step: 1m27s | Tot: 4h49m | Train: 1, h_loss 17.835, l1_loss 0.376942, l1_smooth_loss 0.173 [========>........................................................] Step: 1m28s | Tot: 4h51m | Train: 1, h_loss 17.822, l1_loss 0.376748, l1_smooth_loss 0.173 [========>........................................................] Step: 1m27s | Tot: 4h52m | Train: 1, h_loss 17.808, l1_loss 0.376552, l1_smooth_loss 0.173 [========>........................................................] Step: 1m27s | Tot: 4h54m | Train: 1, h_loss 17.796, l1_loss 0.376356, l1_smooth_loss 0.173 [========>........................................................] Step: 1m27s | Tot: 4h55m | Train: 1, h_loss 17.783, l1_loss 0.376159, l1_smooth_loss 0.173 [========>........................................................] Step: 1m27s | Tot: 4h57m | Train: 1, h_loss 17.770, l1_loss 0.375960, l1_smooth_loss 0.172 [========>........................................................] Step: 1m37s | Tot: 4h58m | Train: 1, h_loss 17.757, l1_loss 0.375771, l1_smooth_loss 0.172 [========>........................................................] Step: 1m33s | Tot: 5h18s | Train: 1, h_loss 17.744, l1_loss 0.375581, l1_smooth_loss 0.172 [========>........................................................] Step: 1m32s | Tot: 5h1m | Train: 1, h_loss 17.732, l1_loss 0.375392, l1_smooth_loss 0.1725 [========>........................................................] Step: 1m31s | Tot: 5h3m | Train: 1, h_loss 17.720, l1_loss 0.375207, l1_smooth_loss 0.1724 [========>........................................................] Step: 1m29s | Tot: 5h4m | Train: 1, h_loss 17.707, l1_loss 0.375017, l1_smooth_loss 0.1723 [========>........................................................] 
Step: 1m28s | Tot: 5h6m | Train: 1, h_loss 17.694, l1_loss 0.374831, l1_smooth_loss 0.1721 [========>........................................................] Step: 1m29s | Tot: 5h7m | Train: 1, h_loss 17.682, l1_loss 0.374640, l1_smooth_loss 0.1720 [=========>.......................................................] Step: 1m28s | Tot: 5h9m | Train: 1, h_loss 17.670, l1_loss 0.374454, l1_smooth_loss 0.1719 [=========>.......................................................] Step: 1m28s | Tot: 5h10m | Train: 1, h_loss 17.657, l1_loss 0.374258, l1_smooth_loss 0.171 [=========>.......................................................] Step: 1m28s | Tot: 5h12m | Train: 1, h_loss 17.645, l1_loss 0.374077, l1_smooth_loss 0.171 [=========>.......................................................] Step: 1m36s | Tot: 5h13m | Train: 1, h_loss 17.633, l1_loss 0.373894, l1_smooth_loss 0.171 [=========>.......................................................] Step: 1m32s | Tot: 5h15m | Train: 1, h_loss 17.621, l1_loss 0.373720, l1_smooth_loss 0.171 [=========>.......................................................] Step: 1m30s | Tot: 5h16m | Train: 1, h_loss 17.609, l1_loss 0.373545, l1_smooth_loss 0.171 [=========>.......................................................] Step: 1m28s | Tot: 5h18m | Train: 1, h_loss 17.597, l1_loss 0.373368, l1_smooth_loss 0.171 [=========>.......................................................] Step: 1m28s | Tot: 5h19m | Train: 1, h_loss 17.585, l1_loss 0.373184, l1_smooth_loss 0.171 [=========>.......................................................] Step: 1m28s | Tot: 5h21m | Train: 1, h_loss 17.573, l1_loss 0.373002, l1_smooth_loss 0.171 [=========>.......................................................] Step: 1m28s | Tot: 5h22m | Train: 1, h_loss 17.562, l1_loss 0.372830, l1_smooth_loss 0.170 [=========>.......................................................] 
Step: 1m27s | Tot: 5h24m | Train: 1, h_loss 17.550, l1_loss 0.372653, l1_smooth_loss 0.170 [=========>.......................................................] Step: 1m28s | Tot: 5h25m | Train: 1, h_loss 17.538, l1_loss 0.372482, l1_smooth_loss 0.170 [=========>.......................................................] Step: 1m28s | Tot: 5h27m | Train: 1, h_loss 17.526, l1_loss 0.372299, l1_smooth_loss 0.170 [=========>.......................................................] Step: 1m36s | Tot: 5h28m | Train: 1, h_loss 17.515, l1_loss 0.372128, l1_smooth_loss 0.170 [=========>.......................................................] Step: 1m32s | Tot: 5h30m | Train: 1, h_loss 17.504, l1_loss 0.371962, l1_smooth_loss 0.170 [=========>.......................................................] Step: 1m31s | Tot: 5h31m | Train: 1, h_loss 17.492, l1_loss 0.371791, l1_smooth_loss 0.170 [=========>.......................................................] Step: 1m29s | Tot: 5h33m | Train: 1, h_loss 17.481, l1_loss 0.371626, l1_smooth_loss 0.170 [=========>.......................................................] Step: 1m29s | Tot: 5h34m | Train: 1, h_loss 17.469, l1_loss 0.371464, l1_smooth_loss 0.170 [=========>.......................................................] Step: 1m29s | Tot: 5h36m | Train: 1, h_loss 17.458, l1_loss 0.371293, l1_smooth_loss 0.169 [=========>.......................................................] Step: 1m29s | Tot: 5h37m | Train: 1, h_loss 17.447, l1_loss 0.371116, l1_smooth_loss 0.169 [=========>.......................................................] Step: 1m28s | Tot: 5h39m | Train: 1, h_loss 17.436, l1_loss 0.370954, l1_smooth_loss 0.169 [=========>.......................................................] Step: 1m28s | Tot: 5h40m | Train: 1, h_loss 17.425, l1_loss 0.370783, l1_smooth_loss 0.169 [=========>.......................................................] 
Step: 1m28s | Tot: 5h42m | Train: 1, h_loss 17.414, l1_loss 0.370616, l1_smooth_loss 0.169 [==========>......................................................] Step: 1m38s | Tot: 5h43m | Train: 1, h_loss 17.403, l1_loss 0.370454, l1_smooth_loss 0.169 [==========>......................................................] Step: 1m32s | Tot: 5h45m | Train: 1, h_loss 17.392, l1_loss 0.370305, l1_smooth_loss 0.169 [==========>......................................................] Step: 1m35s | Tot: 5h47m | Train: 1, h_loss 17.381, l1_loss 0.370136, l1_smooth_loss 0.169 [==========>......................................................] Step: 1m30s | Tot: 5h48m | Train: 1, h_loss 17.370, l1_loss 0.369974, l1_smooth_loss 0.169 [==========>......................................................] Step: 1m29s | Tot: 5h50m | Train: 1, h_loss 17.359, l1_loss 0.369803, l1_smooth_loss 0.169 [==========>......................................................] Step: 1m28s | Tot: 5h51m | Train: 1, h_loss 17.348, l1_loss 0.369638, l1_smooth_loss 0.168 [==========>......................................................] Step: 1m29s | Tot: 5h53m | Train: 1, h_loss 17.338, l1_loss 0.369479, l1_smooth_loss 0.168 [==========>......................................................] Step: 1m29s | Tot: 5h54m | Train: 1, h_loss 17.327, l1_loss 0.369318, l1_smooth_loss 0.168 [==========>......................................................] Step: 1m28s | Tot: 5h56m | Train: 1, h_loss 17.317, l1_loss 0.369161, l1_smooth_loss 0.168 [==========>......................................................] Step: 1m29s | Tot: 5h57m | Train: 1, h_loss 17.306, l1_loss 0.369008, l1_smooth_loss 0.168 [==========>......................................................] Step: 1m36s | Tot: 5h59m | Train: 1, h_loss 17.295, l1_loss 0.368849, l1_smooth_loss 0.168 [==========>......................................................] 
Step: 1m34s | Tot: 6h39s | Train: 1, h_loss 17.285, l1_loss 0.368691, l1_smooth_loss 0.168 [==========>......................................................] Step: 1m33s | Tot: 6h2m | Train: 1, h_loss 17.275, l1_loss 0.368535, l1_smooth_loss 0.1682 [==========>......................................................] Step: 1m32s | Tot: 6h3m | Train: 1, h_loss 17.264, l1_loss 0.368386, l1_smooth_loss 0.1681 [==========>......................................................] Step: 1m30s | Tot: 6h5m | Train: 1, h_loss 17.254, l1_loss 0.368228, l1_smooth_loss 0.1680 [==========>......................................................] Step: 1m28s | Tot: 6h6m | Train: 1, h_loss 17.243, l1_loss 0.368077, l1_smooth_loss 0.1679 [==========>......................................................] Step: 1m28s | Tot: 6h8m | Train: 1, h_loss 17.233, l1_loss 0.367915, l1_smooth_loss 0.1678 [==========>......................................................] Step: 1m29s | Tot: 6h9m | Train: 1, h_loss 17.224, l1_loss 0.367772, l1_smooth_loss 0.1678 [==========>......................................................] Step: 1m28s | Tot: 6h11m | Train: 1, h_loss 17.214, l1_loss 0.367617, l1_smooth_loss 0.167 [==========>......................................................] Step: 1m28s | Tot: 6h12m | Train: 1, h_loss 17.203, l1_loss 0.367470, l1_smooth_loss 0.167 [==========>......................................................] Step: 1m34s | Tot: 6h14m | Train: 1, h_loss 17.194, l1_loss 0.367321, l1_smooth_loss 0.167 [==========>......................................................] Step: 1m32s | Tot: 6h15m | Train: 1, h_loss 17.183, l1_loss 0.367172, l1_smooth_loss 0.167 [==========>......................................................] Step: 1m30s | Tot: 6h17m | Train: 1, h_loss 17.174, l1_loss 0.367031, l1_smooth_loss 0.167 [===========>.....................................................] 
Step: 1m33s | Tot: 6h18m | Train: 1, h_loss 17.164, l1_loss 0.366884, l1_smooth_loss 0.167 [===========>.....................................................] Step: 1m34s | Tot: 6h20m | Train: 1, h_loss 17.154, l1_loss 0.366727, l1_smooth_loss 0.167 [===========>.....................................................] Step: 1m33s | Tot: 6h21m | Train: 1, h_loss 17.144, l1_loss 0.366583, l1_smooth_loss 0.167 [===========>.....................................................] Step: 1m33s | Tot: 6h23m | Train: 1, h_loss 17.134, l1_loss 0.366440, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m36s | Tot: 6h25m | Train: 1, h_loss 17.124, l1_loss 0.366297, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m36s | Tot: 6h26m | Train: 1, h_loss 17.115, l1_loss 0.366161, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m36s | Tot: 6h28m | Train: 1, h_loss 17.105, l1_loss 0.366021, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m31s | Tot: 6h29m | Train: 1, h_loss 17.095, l1_loss 0.365882, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m32s | Tot: 6h31m | Train: 1, h_loss 17.085, l1_loss 0.365740, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m30s | Tot: 6h32m | Train: 1, h_loss 17.076, l1_loss 0.365607, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m32s | Tot: 6h34m | Train: 1, h_loss 17.066, l1_loss 0.365467, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m31s | Tot: 6h35m | Train: 1, h_loss 17.057, l1_loss 0.365331, l1_smooth_loss 0.166 [===========>.....................................................] 
Step: 1m28s | Tot: 6h37m | Train: 1, h_loss 17.047, l1_loss 0.365201, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m28s | Tot: 6h38m | Train: 1, h_loss 17.038, l1_loss 0.365061, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m27s | Tot: 6h40m | Train: 1, h_loss 17.028, l1_loss 0.364926, l1_smooth_loss 0.166 [===========>.....................................................] Step: 1m27s | Tot: 6h41m | Train: 1, h_loss 17.019, l1_loss 0.364787, l1_smooth_loss 0.165 [===========>.....................................................] Step: 1m28s | Tot: 6h43m | Train: 1, h_loss 17.010, l1_loss 0.364656, l1_smooth_loss 0.165 [===========>.....................................................] Step: 1m32s | Tot: 6h44m | Train: 1, h_loss 17.001, l1_loss 0.364521, l1_smooth_loss 0.165 [===========>.....................................................] Step: 1m31s | Tot: 6h46m | Train: 1, h_loss 16.991, l1_loss 0.364384, l1_smooth_loss 0.165 [===========>.....................................................] Step: 1m31s | Tot: 6h47m | Train: 1, h_loss 16.982, l1_loss 0.364251, l1_smooth_loss 0.165 [===========>.....................................................] Step: 1m29s | Tot: 6h49m | Train: 1, h_loss 16.973, l1_loss 0.364125, l1_smooth_loss 0.165 [===========>.....................................................] Step: 1m34s | Tot: 6h50m | Train: 1, h_loss 16.964, l1_loss 0.363998, l1_smooth_loss 0.165 [===========>.....................................................] Step: 1m34s | Tot: 6h52m | Train: 1, h_loss 16.955, l1_loss 0.363857, l1_smooth_loss 0.165 [============>....................................................] Step: 1m33s | Tot: 6h54m | Train: 1, h_loss 16.946, l1_loss 0.363725, l1_smooth_loss 0.165 [============>....................................................] 
Step: 1m33s | Tot: 6h55m | Train: 1, h_loss 16.937, l1_loss 0.363590, l1_smooth_loss 0.165 [============>....................................................] Step: 1m30s | Tot: 6h57m | Train: 1, h_loss 16.928, l1_loss 0.363456, l1_smooth_loss 0.165 [============>....................................................] Step: 1m29s | Tot: 6h58m | Train: 1, h_loss 16.919, l1_loss 0.363323, l1_smooth_loss 0.165 [============>....................................................] Step: 1m31s | Tot: 7h11s | Train: 1, h_loss 16.910, l1_loss 0.363194, l1_smooth_loss 0.164 [============>....................................................] Step: 1m31s | Tot: 7h1m | Train: 1, h_loss 16.902, l1_loss 0.363065, l1_smooth_loss 0.1648 [============>....................................................] Step: 1m28s | Tot: 7h3m | Train: 1, h_loss 16.893, l1_loss 0.362929, l1_smooth_loss 0.1648 [============>....................................................] Step: 1m28s | Tot: 7h4m | Train: 1, h_loss 16.885, l1_loss 0.362801, l1_smooth_loss 0.1647 [============>....................................................] Step: 1m28s | Tot: 7h6m | Train: 1, h_loss 16.876, l1_loss 0.362674, l1_smooth_loss 0.1646 [============>....................................................] Step: 1m27s | Tot: 7h7m | Train: 1, h_loss 16.867, l1_loss 0.362546, l1_smooth_loss 0.1645 [============>....................................................] Step: 1m27s | Tot: 7h9m | Train: 1, h_loss 16.859, l1_loss 0.362419, l1_smooth_loss 0.1644 [============>....................................................] Step: 1m28s | Tot: 7h10m | Train: 1, h_loss 16.850, l1_loss 0.362291, l1_smooth_loss 0.164 [============>....................................................] Step: 1m28s | Tot: 7h12m | Train: 1, h_loss 16.842, l1_loss 0.362177, l1_smooth_loss 0.164Traceback (most recent call last):
File "homography_CNN_synthetic.py", line 595, in <module>
train()
File "homography_CNN_synthetic.py", line 337, in train
_, h_loss_value, l1_loss_value, l1_smooth_loss_value, lr_value = sess.run([apply_grad_opt, total_h_loss, total_l1_loss, total_l1_smooth_loss, learning_rate])
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 789, in run
run_metadata_ptr)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 997, in _run
feed_dict_string, options, run_metadata)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1132, in _do_run
target_list, options, run_metadata)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1152, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: Input matrix is not invertible.
[[Node: gradients/MatrixSolve_grad/MatrixSolve = MatrixSolve[T=DT_FLOAT, adjoint=true, _device="/job:localhost/replica:0/task:0/cpu:0"](transpose_1/_551, gradients/concat_grad/tuple/control_dependency/_593)]]

Caused by op u'gradients/MatrixSolve_grad/MatrixSolve', defined at:
File "homography_CNN_synthetic.py", line 595, in <module>
train()
File "homography_CNN_synthetic.py", line 265, in train
grads = opt_step.compute_gradients(l1_loss)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/training/optimizer.py", line 386, in compute_gradients
colocate_gradients_with_ops=colocate_gradients_with_ops)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py", line 540, in gradients
grad_scope, op, func_call, lambda: grad_fn(op, *out_grads))
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py", line 346, in _MaybeCompile
return grad_fn() # Exit early
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/ops/gradients_impl.py", line 540, in <lambda>
grad_scope, op, func_call, lambda: grad_fn(op, *out_grads))
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/ops/linalg_grad.py", line 69, in _MatrixSolveGrad
grad_b = linalg_ops.matrix_solve(a, grad, adjoint=not adjoint_a)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/ops/gen_linalg_ops.py", line 336, in matrix_solve
adjoint=adjoint, name=name)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
op_def=op_def)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2506, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1269, in __init__
self._traceback = _extract_stack()

...which was originally created as op u'MatrixSolve', defined at:
File "homography_CNN_synthetic.py", line 595, in <module>
train()
File "homography_CNN_synthetic.py", line 233, in train
pts1_splits[i], gt_splits[i], patch_indices_splits[i], reuse_variables=reuse_variables, model_index=i)
File "/home/chenxy/unsupervisedDeepHomographyRAL2018-master/code/homography_model.py", line 82, in __init__
self.solve_DLT()
File "/home/chenxy/unsupervisedDeepHomographyRAL2018-master/code/homography_model.py", line 242, in solve_DLT
H_8el = tf.matrix_solve(A_mat , b_mat) # BATCH_SIZE x 8.
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/ops/gen_linalg_ops.py", line 336, in matrix_solve
adjoint=adjoint, name=name)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 767, in apply_op
op_def=op_def)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 2506, in create_op
original_op=self._default_original_op, op_def=op_def)
File "/home/chenxy/anaconda3/envs/last27/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1269, in __init__
self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): Input matrix is not invertible.
[[Node: gradients/MatrixSolve_grad/MatrixSolve = MatrixSolve[T=DT_FLOAT, adjoint=true, _device="/job:localhost/replica:0/task:0/cpu:0"](transpose_1/_551, gradients/concat_grad/tuple/control_dependency/_593)]]
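For context (not an official fix from the repo author): the traceback points at `tf.matrix_solve(A_mat, b_mat)` in `solve_DLT`. The 8x8 DLT system becomes (near-)singular when the network happens to predict a degenerate corner configuration (e.g. collinear or coincident corners), and then both the solve and its gradient fail. A common workaround is to replace the exact solve with a damped (Tikhonov-regularized) solve, i.e. solve `(AᵀA + εI)x = Aᵀb`; in TF 1.x this can also be done with `tf.matrix_solve_ls(..., l2_regularizer=eps, fast=True)`. A minimal NumPy sketch of the idea (function name `solve_dlt_damped` is mine, not from the repo):

```python
import numpy as np

def solve_dlt_damped(A, b, eps=1e-6):
    """Solve A x = b via (A^T A + eps*I) x = A^T b.

    The damping term eps*I keeps the system well-posed even when A is
    rank-deficient, which is what triggers "Input matrix is not invertible".
    """
    n = A.shape[-1]
    AtA = A.T @ A + eps * np.eye(n)
    return np.linalg.solve(AtA, A.T @ b)

# A rank-deficient 8x8 system (two identical rows), analogous to the DLT
# matrix degenerating for one sample in the batch.
A = np.eye(8)
A[7] = A[6]            # make A singular
b = np.ones((8, 1))

x = solve_dlt_damped(A, b)
print(np.isfinite(x).all())  # → True (the damped solve stays finite)
```

`np.linalg.solve(A, b)` on this `A` would raise `LinAlgError`, just as `tf.matrix_solve` aborts the training run; the damped version instead returns the minimum-norm least-squares solution, so one bad sample no longer kills the whole batch. Restarting from the latest checkpoint, lowering the learning rate, or clipping the predicted corner offsets are complementary mitigations.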