human_tracking.py:111 ValueError: all the input arrays must have same number of dimensions

Steps to reproduce

git clone git@github.com:videoflow/videoflow-contrib.git
cd videoflow-contrib/solutions/human_tracking
docker build -f Dockerfile . -t human_track
docker run --rm -ti human_track

Output:

Downloading data from https://github.com/videoflow/videoflow-contrib/releases/download/example_videos/people_walking.mp4
29442048/29434959 [==============================] - 3s 0us/step
Downloading data from https://github.com/videoflow/videoflow-contrib/releases/download/detectron2/R50_FPN_3x.pkl
Downloading data from https://github.com/videoflow/videoflow-contrib/releases/download/models/humanencoder_mars_128.pb
2021-01-22 23:39:51,883 - videoflow.core - INFO - Allocated processes for 14 tasks
WARNING: Logging before flag parsing goes to stderr.
I0122 23:39:51.883627 139949160965952 flow.py:94] Allocated processes for 14 tasks
2021-01-22 23:39:51,884 - videoflow.core - INFO - Started running flow.
I0122 23:39:51.884556 139949160965952 flow.py:95] Started running flow.
11247616/11244842 [==============================] - 2s 0us/step 
  5210112/237002693 [..............................] - ETA: 1:03
WARNING: Logging before flag parsing goes to stderr.
W0122 23:39:54.168034 139949160965952 deprecation_wrapper.py:119] From /home/appuser/.local/lib/python3.6/site-packages/videoflow_contrib/humanencoder/encoder.py:61: The name tf.GraphDef is deprecated. Please use tf.compat.v1.GraphDef instead.

W0122 23:39:54.168594 139949160965952 deprecation_wrapper.py:119] From /home/appuser/.local/lib/python3.6/site-packages/videoflow_contrib/humanencoder/encoder.py:62: The name tf.gfile.GFile is deprecated. Please use tf.io.gfile.GFile instead.

W0122 23:39:54.245696 139949160965952 deprecation_wrapper.py:119] From /home/appuser/.local/lib/python3.6/site-packages/videoflow_contrib/humanencoder/encoder.py:67: The name tf.Session is deprecated. Please use tf.compat.v1.Session instead.

2021-01-22 23:39:54.246129: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
  5701632/237002693 [..............................] - ETA: 1:02
2021-01-22 23:39:54.271066: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 3999980000 Hz
2021-01-22 23:39:54.271419: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x5454780 executing computations on platform Host. Devices:
2021-01-22 23:39:54.271443: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): <undefined>, <undefined>
237002752/237002693 [==============================] - 24s 0us/step
2021-01-22 23:40:22.958344: W tensorflow/compiler/jit/mark_for_compilation_pass.cc:1412] (One-time warning): Not using XLA:CPU for cluster because envvar TF_XLA_FLAGS=--tf_xla_cpu_global_jit was not set.  If you want XLA:CPU, either set that envvar, or use experimental_jit_scope to enable XLA:CPU.  To confirm that XLA is active, pass --vmodule=xla_compilation_cache=1 (as a proper command-line flag, not via TF_XLA_FLAGS) or set the envvar XLA_FLAGS=--xla_hlo_profile.
Process Process-9:
Traceback (most recent call last):
  File "/usr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/home/appuser/.local/lib/python3.6/site-packages/videoflow/engines/task_functions.py", line 15, in task_executor_fn
    task.run()
  File "/home/appuser/.local/lib/python3.6/site-packages/videoflow/core/task.py", line 89, in run
    self._run()
  File "/home/appuser/.local/lib/python3.6/site-packages/videoflow/core/task.py", line 171, in _run
    output = self._processor.process(*inputs)
  File "/home/appuser/human_tracking.py", line 111, in process
    axis = 1
ValueError: all the input arrays must have same number of dimensions
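For context, this ValueError is the message NumPy raises when np.concatenate receives arrays whose number of dimensions differ, and the "axis = 1" shown in the traceback suggests line 111 of human_tracking.py is a multi-line np.concatenate(..., axis=1) call. A minimal sketch that reproduces the same error; the shapes below are hypothetical (for example an empty 1-D array showing up where a 2-D (0, k) array was expected, which is typical when a frame has zero detections):

import numpy as np

# 2-D per-detection data: 4 detections with 3 values each.
boxes = np.random.rand(4, 3)

# An "empty" companion array that comes back 1-D: np.array([]) has shape (0,).
features = np.array([])

# Joining a 2-D array with a 1-D array along axis=1 raises
# "ValueError: all the input arrays must have same number of dimensions".
np.concatenate([boxes, features], axis=1)

# A common fix is to give empty inputs an explicit 2-D shape, e.g.
# np.empty((boxes.shape[0], 0)), so both operands have the same ndim
# and the axis=1 concatenation succeeds.

That is only a guess at the kind of shape mismatch involved, not a confirmed diagnosis of what the processor actually receives at that line.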