Deep learning project for NMA 2022: Modeling how the brain deals with noisy input. Based on NMA's Vision with Lost Glasses project.
How does the visual system in the brain solve noisy object classification?
Imagine you have lost your spectacles and the world around you is completely blurred. As you stumble around, you see a small animal walking towards you. Can you figure out what it is? Probably yes, right? In this situation, and in foggy or night-time conditions, visual input is of poor quality: images are blurred and have low contrast, yet our brains manage to recognize the objects in them. Is it possible to model this process? Does previous experience help?
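One way to study this computationally is to degrade clean images the way lost glasses or fog would, then ask whether a model can still classify them. Below is a minimal sketch of such an image-degradation step using NumPy and SciPy; the function name `degrade` and all parameter values are illustrative assumptions, not part of the project's actual pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(image, blur_sigma=3.0, contrast=0.4, noise_std=0.05, seed=0):
    """Simulate poor-quality visual input: blur, low contrast, noise.

    `image` is a float array in [0, 1]. The parameter defaults here are
    illustrative choices, not values taken from the project.
    """
    rng = np.random.default_rng(seed)
    blurred = gaussian_filter(image, sigma=blur_sigma)      # "lost glasses" blur
    low_contrast = 0.5 + contrast * (blurred - 0.5)         # squeeze toward mid-gray
    noisy = low_contrast + rng.normal(0.0, noise_std, image.shape)  # fog/sensor noise
    return np.clip(noisy, 0.0, 1.0)

# Example: degrade a sharp checkerboard "stimulus"
sharp = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)
blurry = degrade(sharp)
print(sharp.std(), blurry.std())  # degraded image has much lower contrast
```

A classifier trained on clean images can then be evaluated on the output of `degrade` to measure how robust it is, and retrained on degraded images to test whether "previous experience" with noise helps.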