This paper explores a model of collective behavior in animal groups based purely on vision: animals use their vision to perceive the positions and orientations of their neighbors, and then use this information to adjust their own movement.
The model has no explicit spatial representation or collision handling, meaning that it does not explicitly track the positions of the individuals in the group. Instead, it assumes that each individual is aware of the positions and orientations of its neighbors within a certain range.
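To make the interaction concrete, here is a minimal sketch of a vision-based heading update: each agent turns partway toward the average bearing of the neighbors it can perceive within a fixed range. The names and parameter values (`Agent`, `PERCEPTION_RANGE`, `TURN_RATE`) are illustrative assumptions, not taken from the paper.

```python
import math

PERCEPTION_RANGE = 5.0   # how far an agent can "see" (assumed value)
TURN_RATE = 0.1          # fraction of the bearing error corrected per step

class Agent:
    def __init__(self, x, y, heading):
        self.x, self.y, self.heading = x, y, heading

    def visible_neighbors(self, others):
        """Neighbors within perception range (self excluded)."""
        return [o for o in others
                if o is not self
                and math.hypot(o.x - self.x, o.y - self.y) <= PERCEPTION_RANGE]

    def steer(self, others):
        """Turn partway toward the mean bearing of visible neighbors."""
        seen = self.visible_neighbors(others)
        if not seen:
            return
        # Sum the offset vectors pointing at each visible neighbor.
        vx = sum(o.x - self.x for o in seen)
        vy = sum(o.y - self.y for o in seen)
        target = math.atan2(vy, vx)
        # Wrap the heading error into (-pi, pi] before applying the turn.
        error = (target - self.heading + math.pi) % (2 * math.pi) - math.pi
        self.heading += TURN_RATE * error
```

A full model would add speed regulation and repulsion terms; this only shows the perceive-then-turn loop.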
- Review of concepts and existing models
- Understand the vision-based interaction approach and how it differs from existing models
- General overview of the problem and the idea of how it will be approached
- Summarize the various collective behaviors produced by the model
- Polish the previous report based on the comments received
- Details about the methods and the proposed methodology for verification
- Choose a programming language/environment like Python or JavaScript
- Decide on 2D vs 3D implementation
- Select metrics to analyze like neighbor distance, collisions
- Implement the model based on selected criteria
- Tweak the implementation
- Analyze metrics for different parameters
- Compare emergent behaviors to paper
- Write the final report, detailing the full process and results
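The metrics step above can be sketched as follows: mean nearest-neighbor distance and a simple collision count (pairs closer than an assumed body radius). The function names and the `COLLISION_RADIUS` threshold are placeholders, not values fixed by the project.

```python
import math

COLLISION_RADIUS = 0.5  # assumed body size; two agents closer than this "collide"

def mean_nearest_neighbor_distance(positions):
    """Average, over agents, of the distance to the closest other agent."""
    total = 0.0
    for i, (xi, yi) in enumerate(positions):
        total += min(math.hypot(xj - xi, yj - yi)
                     for j, (xj, yj) in enumerate(positions) if j != i)
    return total / len(positions)

def collision_count(positions):
    """Number of agent pairs closer than the collision radius."""
    n = len(positions)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if math.hypot(positions[j][0] - positions[i][0],
                             positions[j][1] - positions[i][1]) < COLLISION_RADIUS)
```

Tracking both over time for different parameter settings gives the comparison data the analysis step calls for.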
- Recreate the model
- Implement the raycast vision
- Add a predator simulation
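The raycast vision goal above can be sketched as a binary visual field: sample a fixed number of viewing directions around an agent and mark a direction "occupied" if another agent's body covers it. `N_RAYS`, `BODY_RADIUS`, and `VIEW_RANGE` are assumed parameters, and this omits occlusion between neighbors.

```python
import math

N_RAYS = 36          # angular resolution of the visual field
BODY_RADIUS = 0.5    # assumed radius of each agent
VIEW_RANGE = 10.0    # maximum ray length

def visual_field(agent_pos, others):
    """Binary visual field: field[k] is True if ray k hits some other agent."""
    ax, ay = agent_pos
    field = [False] * N_RAYS
    for ox, oy in others:
        dx, dy = ox - ax, oy - ay
        dist = math.hypot(dx, dy)
        if dist == 0 or dist > VIEW_RANGE:
            continue
        bearing = math.atan2(dy, dx)
        # Angular half-width the neighbor's body subtends at this distance.
        half_width = math.asin(min(1.0, BODY_RADIUS / dist))
        for k in range(N_RAYS):
            angle = -math.pi + (2 * math.pi) * k / N_RAYS
            # Smallest signed angle between this ray and the neighbor's bearing.
            diff = (angle - bearing + math.pi) % (2 * math.pi) - math.pi
            if abs(diff) <= half_width:
                field[k] = True
    return field
```

A predator can reuse the same field: the prey agent reacts to the rays occupied by the predator instead of (or in addition to) those occupied by flockmates.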
We were able to implement all of the goals listed above.
To run the model without the predator, run the following in a terminal:

```shell
python3 simulation.py
```

To run the model with the predator:

```shell
python3 simulation_with_predator.py
```