Feature Request: Post-Study Trial Replay Mode
Closed this issue · 4 comments
Hi @jackbrookes ,
I have a feature request; I didn't find any information on this in the documentation, but I may have missed it.
It would be great to have a post-study replay mode in which the researcher could replay a selected trial in the Unity editor, based on the recorded data, to perform a detailed inspection of the participant's behavior during that trial from both a first-person and a third-person (bird's-eye/overview) perspective. This would also allow the researcher to check the acquired data for validity and, if needed, to run additional processing algorithms (e.g., image processing/segmentation of where the participant looked). Such processing can be computationally heavy and therefore shouldn't run during the experiment itself, where it could impact the framerate and, consequently, the quality of the experiment.
This could also be useful for cued-recall interviews, as the researcher could show the first-person perspective to the participant and ask questions about it. It would provide more possibilities than simply video-recording the VR screen during the study, which could also impact the framerate and would be limited to studies running on a computer. Video recordings from either perspective could then be created by extending this replay mode with Unity's Recorder feature.
An example of this replay mode feature can be seen in the EVE Framework - https://github.com/cog-ethz/EVE
Another feature that may be interesting, and could go along with this, is drawing paths (for instance, using the Line Renderer) for the continuously tracked objects. This would be helpful for generating images of the scene from an overview perspective, useful both for understanding participant behavior visually and for producing a figure that shows and explains what occurred in a given trial.
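As a rough illustration of what such an overview path figure could look like, a tracked object's recorded (x, z) positions could be rendered offline as a top-down image. Here is a minimal Python sketch (function name, scaling, and styling are all my own choices, not part of UXF) that turns a list of positions into an SVG polyline:

```python
# Sketch: render a top-down (bird's-eye) path of a tracked object as an SVG
# polyline. All parameters here are illustrative assumptions, not UXF features.

def path_to_svg(points, width=400, height=400, margin=20):
    """Map (x, z) world positions to an SVG polyline for an overview figure."""
    xs = [p[0] for p in points]
    zs = [p[1] for p in points]
    min_x, max_x = min(xs), max(xs)
    min_z, max_z = min(zs), max(zs)
    span_x = (max_x - min_x) or 1.0  # avoid division by zero for static objects
    span_z = (max_z - min_z) or 1.0

    def to_px(x, z):
        px = margin + (x - min_x) / span_x * (width - 2 * margin)
        # flip z so "forward" in the scene points up in the image
        py = height - margin - (z - min_z) / span_z * (height - 2 * margin)
        return f"{px:.1f},{py:.1f}"

    pts = " ".join(to_px(x, z) for x, z in points)
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" width="{width}" height="{height}">'
        f'<polyline points="{pts}" fill="none" stroke="black" stroke-width="2"/>'
        "</svg>"
    )
```

The resulting string can simply be written to a .svg file and embedded in a figure; inside Unity, the equivalent would be feeding the same positions to a LineRenderer.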
Thank you.
Ivan
I thought about this, but it is far too complex to be included in UXF. There are many things that happen outside of UXF that are not recorded, such as physics interactions. A "Replay" system is even a challenge for AAA games, and it would be even harder within UXF where UXF does not know what extra stuff you are adding to the scene. EVE is a lot more limited.
It could possibly be done just for the movement of tracked objects (head and hands in VR), but it would not be worth my time for something I don't think is that valuable. It would be a case of reading in CSV files of positions and rotations and updating the corresponding objects frame by frame. That sounds easy, but I can imagine lots of issues along the way in making it 100% compatible with UXF. If this is super important for your workflow, I'd suggest building something yourself to read in the _movement.csv files and update transforms in a scene.
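As a rough sketch of that suggestion: assuming the _movement.csv file has a header like time,pos_x,pos_y,pos_z,rot_x,rot_y,rot_z (an assumption; check it against your own output files, as column names can differ between UXF versions), the offline parsing and frame lookup could look like the Python below. In an actual Unity replay you would port the same logic to C# and apply the sampled values to a Transform each frame.

```python
import csv
import io
from bisect import bisect_left

def load_movement(csv_text):
    """Parse a UXF-style _movement.csv into (time, position, rotation) samples.
    Column names here are assumptions; verify them against your own files."""
    reader = csv.DictReader(io.StringIO(csv_text))
    frames = []
    for row in reader:
        frames.append((
            float(row["time"]),
            tuple(float(row[k]) for k in ("pos_x", "pos_y", "pos_z")),
            tuple(float(row[k]) for k in ("rot_x", "rot_y", "rot_z")),
        ))
    frames.sort(key=lambda f: f[0])
    return frames

def sample_at(frames, t):
    """Return the most recent recorded frame at playback time t (step-hold,
    no interpolation); clamps before the first and after the last sample."""
    times = [f[0] for f in frames]
    i = bisect_left(times, t)
    if i == 0:
        return frames[0]
    if i >= len(frames) or times[i] > t:
        i -= 1
    return frames[i]
```

Step-hold sampling is the simplest choice; linear interpolation between neighboring samples (and slerp for rotations) would give smoother playback at the cost of a little more code.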
On the point of image processing etc., your best bet would be to capture the screen with FFMPEG, started and stopped on trial start/stop. That is what I do for my experiments, but you'd have to set it up for your own needs.
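Jack's exact setup isn't shown; as a minimal sketch of the idea, assuming FFMPEG is on the PATH and a Windows machine (using its gdigrab screen-capture input; on Linux you would use x11grab instead), launching a recording at trial start could look like this:

```python
import subprocess

def ffmpeg_capture_cmd(output_path, framerate=30):
    """Build an FFMPEG command that records the Windows desktop via gdigrab.
    Flags here are a plain screen-capture setup, not UXF's own configuration."""
    return [
        "ffmpeg", "-y",
        "-f", "gdigrab",          # Windows screen-capture input device
        "-framerate", str(framerate),
        "-i", "desktop",
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",    # widest player compatibility
        output_path,
    ]

def start_trial_capture(output_path):
    """Launch FFMPEG in the background at trial start. At trial end, send 'q'
    on stdin (or call .terminate()) so FFMPEG finalises the file cleanly."""
    return subprocess.Popen(ffmpeg_capture_cmd(output_path),
                            stdin=subprocess.PIPE)
```

In a UXF study this would be hooked to the trial begin/end events, writing one video file per trial so clips line up with the trial data.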
Hi @jackbrookes,
That makes sense. The physics aspect would be difficult unless everything dynamic in the scene were tracked and physics were disabled during replay, but that can get complex quickly.
I will take a look at the _movement.csv files and think about what can be done.
Regarding image processing, out of curiosity: do you capture and process frames during the trial, or do you use FFMPEG to record the screen to a file (e.g., mp4) and then process it offline? If you record to a file, why do you prefer using FFMPEG directly instead of something like the Unity Recorder?
I record videos with FFMPEG; I have never done image processing. I just use the videos to manually inspect what participants did.
The Unity Recorder package you linked only works in the Editor.
That's a good point; the Unity Recorder being limited to working only in the editor is an issue.