umyelab/LabGym

Analyzing behavior on already segmented videos


Dear Ye lab,

I was wondering what the best course of action is for using LabGym behavior quantification with spotlight modules on preprocessed datasets. In particular, I have produced videos containing several flies (<10) that last several hours, and I wish to try out LabGym's rich ability to detect interactions. For every video, I have the x, y coordinates of every animal's centroid and of several body parts (pose), as well as its identity (manually verified). So I think I don't need to run the detectors and can instead skip directly to the behavior part, but I couldn't find documentation regarding this.
I imagine other users may be in similar situations, where the animals in their raw videos have been segmented with some other tool, so I would really appreciate it if you could teach us how it should be done! :)

Hi,

LabGym does not process keypoint data to identify behaviors, so in this case you still need to use its Detector function first. But I guess you already have the centers/body keypoints and identities digitally marked for each fly in the videos, so that each fly carries a different 'marker', right? If so, you probably only need to annotate a small number of frames to train a Detector that can easily distinguish individual flies without identity switching during tracking. To do so, you can annotate each fly as a different 'class' in Roboflow, like 'fly1', 'fly2', ... Once you have a trained Detector, you can use it to generate behavior examples and train a Categorizer to recognize different behaviors.
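
For example, if you export the annotations from Roboflow in a COCO-style format, the per-identity class list could look roughly like this (just a sketch to show the idea; the category names follow the 'fly1', 'fly2' scheme above and all coordinate values are made up):

```python
# Rough sketch of per-identity classes in a COCO-style export (values are hypothetical).
coco_categories = [
    {"id": 1, "name": "fly1", "supercategory": "fly"},
    {"id": 2, "name": "fly2", "supercategory": "fly"},
    {"id": 3, "name": "fly3", "supercategory": "fly"},
]

# Each annotated frame then contains one instance per fly, labeled with the
# category id that matches that fly's identity.
example_annotation = {
    "image_id": 0,
    "category_id": 1,  # this instance is 'fly1'
    "bbox": [120.0, 80.0, 35.0, 35.0],  # hypothetical [x, y, width, height]
    "segmentation": [[120, 80, 155, 80, 155, 115, 120, 115]],  # hypothetical outline polygon
}
```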

Hi @yujiahu415 ! Thanks for the quick answer :) Yes, I have a digital identity marker for each fly, and I can easily rebuild the contours because I know where each fly is and the background is simple. So I would rather not train any Detector at all and instead jump straight to the Categorizer.
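For example, I was thinking of recovering each fly's outline with something like this (just a sketch; I'm assuming bright flies on a dark background, and the threshold value is a placeholder):

```python
import cv2

def contour_for_fly(frame_gray, centroid, threshold=60):
    """Return the contour that contains a fly's known (x, y) centroid."""
    _, mask = cv2.threshold(frame_gray, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        # pointPolygonTest >= 0 means the centroid lies inside or on the contour
        if cv2.pointPolygonTest(contour, tuple(map(float, centroid)), False) >= 0:
            return contour
    return None
```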
What's the format expected by the Categorizer, in terms of directory structure and file formats? I think I might be able to recreate it, as if I had completed the LabGym Detector part.

Hi,

LabGym is fundamentally different from tools that use keypoint tracking data, so it was not designed to take that kind of input. Instead, it represents behavioral information with two pieces of data: the animation, which is a set of raw pixels of the behaving subject over time, with or without background; and the pattern image, which imprints the motion patterns of the whole-body and body-part contours, with changing colors indicating the temporal sequence. The spotlight mechanism is implemented during the step that generates these two pieces of behavior data, and behavior categorization by the Categorizer is also performed on them. So the input to the Categorizer is a pair consisting of an animation and a pattern image. The Detector's role in generating animations and pattern images, as well as in tracking individuals, is also different from simply inputting keypoint tracking data or even body contours. If you want to know more about these mechanisms, you may take a look at the LabGym 1.x and 2.x papers, or even the code.
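
To give you a rough idea of the pattern image concept (this is only an illustrative sketch, not LabGym's actual implementation), it is conceptually like overlaying the contour from every frame onto one canvas, with the drawing color advancing over time:

```python
import cv2
import numpy as np

def sketch_pattern_image(contours_over_time, frame_shape):
    """Illustrative only: overlay one animal's per-frame contours on a single
    canvas, with the color encoding the temporal order of the frames."""
    height, width = frame_shape
    canvas = np.zeros((height, width, 3), dtype=np.uint8)
    n = len(contours_over_time)
    for t, contour in enumerate(contours_over_time):
        # Map the frame index to 0-255 and pick a color from a colormap so
        # that early and late frames get different colors.
        value = np.uint8([[int(255 * t / max(n - 1, 1))]])
        color = cv2.applyColorMap(value, cv2.COLORMAP_JET)[0, 0].tolist()
        cv2.drawContours(canvas, [contour], -1, color, 1)
    return canvas
```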

But it might not be easy to re-format or re-create what the Categorizer needs in your case, and this process is probably harder than simply training a Detector.

Anyway, all these are just my suggestions.