vame.egocentric_alignment: array of sample points is empty
suzannevdveldt opened this issue · 4 comments
I get the following error in vame.egocentric_alignment. The error persists even when setting the pose confidence value to 0.7, and the code only runs when I set the pose confidence value to 0.1. Is there a better solution for this? I am afraid my motifs will not be very reliable if I include such low-confidence data points.
Any advice would be greatly appreciated, the complete error is copied below:
Aligning data SERT862_2021_09_28_SA_01, Pose confidence value: 0.20
ValueError                                Traceback (most recent call last)
&lt;ipython-input&gt; in &lt;module&gt;()
      3 # pose_ref_index: list of reference coordinate indices for alignment
      4 # Example: 0: snout, 1: forehand_left, 2: forehand_right, 3: hindleft, 4: hindright, 5: tail
----> 5 vame.egocentric_alignment(config, pose_ref_index=[0,1,3,4,5,6,9,10])

4 frames
&lt;__array_function__ internals&gt; in interp(*args, **kwargs)

/usr/local/lib/python3.7/dist-packages/numpy/lib/function_base.py in interp(x, xp, fp, left, right, period)
   1437         fp = np.concatenate((fp[-1:], fp, fp[0:1]))
   1438
-> 1439     return interp_func(x, xp, fp, left, right)
   1440
   1441

ValueError: array of sample points is empty
Hi Suzanne,
In the egocentric alignment script we interpolate all values below a given confidence threshold.
The error means that when you filter with a confidence threshold of 0.7, there are no points left above the threshold to interpolate from, because the overall confidence in your data is lower than that.
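To make the failure mode concrete, here is a minimal sketch of confidence-based interpolation (an assumption based on the error message, not the exact VAME code) that reproduces the same numpy error when no frame passes the threshold:

import numpy as np

# Hypothetical helper, not VAME's implementation: frames below the
# confidence threshold are filled in by interpolating from the frames
# that pass it.
def interpolate_low_confidence(x, likelihood, threshold):
    x = x.copy()
    good = likelihood >= threshold      # frames we trust
    bad = ~good
    # np.interp's second argument (xp) is the set of trusted frame indices.
    # If no frame passes the threshold, xp is empty and numpy raises
    # "ValueError: array of sample points is empty" -- the error above.
    x[bad] = np.interp(np.flatnonzero(bad), np.flatnonzero(good), x[good])
    return x

likelihood = np.array([0.3, 0.4, 0.2, 0.35])    # all below 0.7
x = np.array([1.0, 2.0, 3.0, 4.0])
interpolate_low_confidence(x, likelihood, 0.7)  # raises the ValueError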
As you say, you shouldn't work with a confidence that low, so the right first step here would be to improve your dataset on the DLC side. Train your model with more labels or refine the network following steps J and K of the DLC user guide: https://deeplabcut.github.io/DeepLabCut/docs/standardDeepLabCut_UserGuide.html#
Then check whether you can re-run the script with a high confidence again. Compare to our example csv: there, the general confidence for all body parts is typically very high (>0.99) and only drops if the animal is e.g. rearing. The alignment script is designed with similar data in mind and would need manual adaptation otherwise (e.g. removing the interpolation part).
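If you want to run the same check on your own data, here is a quick way to inspect the likelihood distribution of a DLC output csv (the file name is just a placeholder; DLC csvs have a three-row scorer/bodyparts/coords header):

import pandas as pd

# Load a DLC output csv (placeholder file name) with its three-row header.
df = pd.read_csv("SERT862_2021_09_28_SA_01.csv", header=[0, 1, 2], index_col=0)
likelihoods = df.xs("likelihood", axis=1, level="coords")
print(likelihoods.describe())        # per-bodypart confidence summary
print((likelihoods < 0.7).mean())    # fraction of frames below 0.7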
Hope this helps,
Best,
Pavol
Hi Pavol,
Thank you for your quick reply and your helpful insights. The problem does appear to be on the DLC output side; due to my task, the mouse is obscured in some parts of the maze. The example csv runs fine in my pipeline.
I'll see if I can adapt the alignment script by removing the interpolation part; let me know if you have any suggestions or pointers on how to do that.
Thanks a lot again!
Hi Suzanne,
For a start, set the confidence to a low value (0.1 or even 0.0); then nothing will be interpolated.
Note that if the mouse is not visible in some parts of the maze, the corresponding DLC time series data is essentially meaningless for VAME. I would recommend either cutting it out of the input data or setting the values to some static numerical value; then these frames will be assigned to a "not detected" motif cluster.
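A sketch of what I mean (the variable names are made up; coords is an (n_frames, n_features) pose array and occluded a boolean mask of the hidden frames):

import numpy as np

coords = np.random.rand(100, 12)     # placeholder pose time series
occluded = np.zeros(100, dtype=bool)
occluded[40:60] = True               # e.g. mouse hidden behind a wall here
coords[occluded] = 0.0               # static placeholder value, so these
                                     # frames cluster into their own motif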
Best,
Pavol
Hi Suzanne,
I just found the time to look at your issue myself. One part of the problem arises from the way you call the egocentric alignment function: vame.egocentric_alignment(config, pose_ref_index=[0,1,3,4,5,6,9,10])
The argument pose_ref_index is supposed to contain exactly two anchor points, no more, to align the animal egocentrically.
For this, see the example from the demo.py code:
# Align your behavior videos egocentric and create training dataset:
# pose_ref_index: list of reference coordinate indices for alignment
# Example: 0: snout, 1: forehand_left, 2: forehand_right, 3: hindleft, 4: hindright, 5: tail
vame.egocentric_alignment(config, pose_ref_index=[0,5])
Here, we define the snout and the tail base as anchor points and rotate the animal around this axis.
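If it helps to see the geometry, here is a rough sketch of the idea (not VAME's actual implementation): translate so the first anchor sits at the origin, then rotate so the snout-to-tail axis lies along the x-axis.

import numpy as np

# Rough sketch of two-anchor egocentric alignment (hypothetical helper):
# shift anchor ref[0] to the origin, then rotate the ref[0]-to-ref[1]
# axis onto the x-axis.
def align_frame(points, ref=(0, 5)):
    a, b = points[ref[0]], points[ref[1]]
    centered = points - a
    angle = np.arctan2(*(b - a)[::-1])   # orientation of the body axis
    c, s = np.cos(-angle), np.sin(-angle)
    R = np.array([[c, -s], [s, c]])      # rotation by -angle
    return centered @ R.T

frame = np.random.rand(6, 2)             # 6 landmarks, (x, y) each
aligned = align_frame(frame)             # snout at origin, tail along +x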
I hope this helps you going forward on your VAME journey.
Cheers,
Kevin