mviereck/microscopy-tools

Discussion about movement detection

thekryz opened this issue · 1 comment

Hello mviereck, and thank you for the friendly discussion over at enblend/enfuse.
Honestly, I think you have helped me out a lot already with the idea of comparing the min/max conversions of my time-series to get an overview of movement. Since you suggested continuing the discussion over here though, I'm starting a thread - maybe you have further ideas.

To get a bit deeper into what I'm doing: I'm working with MEA (Motion Energy Analysis) in human interaction. To extract motion energy with MEA from videos of people interacting, one needs to define regions of interest (ROIs). Since those regions can be split up into different body sections (e.g. upper body, lower body, head) and have to be defined for the whole duration of a video, I need to get an overview of where these body parts move throughout the interactions and then define the ROIs as precisely as possible.

As you already know, my first attempt used Enfuse, where my preferred solution is the option --hard-mask. Your suggested approach with ImageMagick's min/max comparison also works very well and shows movement better than the Enfuse solution.
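In essence, that comparison takes a per-pixel minimum and a per-pixel maximum across all frames and diffs the two results; a minimal sketch (with IMAGELIST standing for the list of frame files):

convert IMAGELIST -evaluate-sequence min min.tif
convert IMAGELIST -evaluate-sequence max max.tif
convert min.tif max.tif -compose difference -composite minmaxdiff.tif

Bright areas in minmaxdiff.tif mark pixels that changed at some point during the sequence, i.e. movement.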

That might actually be enough to solve the problem for me. I'll test this with a few videos and see where it gets me. Thanks again for your help, and if you have any further suggestions, I'll be glad to test them.

Glad that I could help. :-)

With stackfuser and imfuse in this repository I am researching image comparison with a different goal: rather than detecting movement, they try to avoid and hide it. However, the same techniques can be used to detect movement events.

Some ideas that might improve your results:

  • Converting the source frames to grayscale with -grayscale RMS might improve results and/or simplify further analysis (see the combined sketch after this list).
  • The diff images will contain some useless noise. ImageMagick provides several noise reduction options; e.g., try -kuwahara: https://imagemagick.org/script/command-line-options.php#kuwahara
  • To detect movement, the background should be as static as possible. I assume you already use a fixed camera, e.g. on a tripod. If there is still some inaccuracy that shows up as background movement, you can stabilize the video with https://github.com/georgmartius/vid.stab or with align_image_stack (stackfuser can be used as a frontend for both); see the stabilization sketch below.
  • Comparing each frame with a median image could allow you to generate an image sequence or video that shows only movement. Something like:
convert IMAGELIST -evaluate-sequence median median.tif

convert frame1.tif median.tif -compose difference -composite mediandiff_frame1.tif
[...]
convert frameN.tif median.tif -compose difference -composite mediandiff_frameN.tif
  • You might be able to assess the amount of movement in each frame with:
convert mediandiff_frame1.tif -format "%[fx:standard_deviation]\n" info:
[...]
convert mediandiff_frameN.tif -format "%[fx:standard_deviation]\n" info:

The standard_deviation value is printed for each image; it should be low for little movement and high for heavy movement. That needs testing, though, and can likely be improved. Maybe mean gives better results than standard_deviation when using grayscale images. The first sketch below combines these steps into a loop over all frames.
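Putting the grayscale, noise reduction, and median diff ideas together, a shell loop can automate this over all frames. A minimal sketch, assuming the frames match the glob frame*.tif and taking a -kuwahara radius of 2 as a starting value to tune:

# Build a grayscale median image from all frames:
convert frame*.tif -grayscale RMS -evaluate-sequence median median.tif

for f in frame*.tif; do
    # Difference of each grayscale frame against the median, then denoise:
    convert "$f" -grayscale RMS median.tif -compose difference -composite \
        -kuwahara 2 "mediandiff_$f"
    # Print a rough per-frame movement score:
    convert "mediandiff_$f" -format "$f %[fx:standard_deviation]\n" info:
done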
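For the stabilization idea above: with an ffmpeg build that includes libvidstab, the usual two-pass vid.stab workflow looks roughly like this (a sketch; input.mp4 and stabilized.mp4 are placeholder names):

# Pass 1: analyze camera movement; writes transforms.trf by default:
ffmpeg -i input.mp4 -vf vidstabdetect -f null -
# Pass 2: apply the smoothed transforms:
ffmpeg -i input.mp4 -vf vidstabtransform=input=transforms.trf stabilized.mp4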


Since those regions can be split up into different body sections (e.g. upper body, lower body, head) and have to be defined for the whole duration of a video, I need to get an overview of where these body parts move throughout the interactions and then define the ROIs as precisely as possible.

That's a difficult task. ImageMagick provides an option -connected-components that might help to detect different areas: https://imagemagick.org/script/connected-components.php
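A sketch of how that could look, assuming a thresholded movement image as input (movement_mask.tif is a placeholder name): the area-threshold define drops tiny specks, and verbose=true prints the bounding box and centroid of each detected region, which could serve as ROI candidates:

convert movement_mask.tif -threshold 20% \
    -define connected-components:verbose=true \
    -define connected-components:area-threshold=100 \
    -connected-components 8 regions.tif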
For this task you might get some help in the ImageMagick Forum: https://www.imagemagick.org/discourse-server/index.php
If you open tickets at ImageMagick, I'd be interested to read them, too.