
Vision Transformer Approach to Crash Detection

License: Apache License 2.0

ViT-CD

Source code and paper for the IEEE paper presented at the Global Conference on Information Technologies and Communications 2023, hosted by Reva University in Bangalore, India.

Abstract

Traditional methods for accident detection often struggle with the complexities of dynamic scenes and varying lighting conditions, leading to sub-optimal detection performance. Vision Transformers’ self-attention mechanisms can instead be used to capture spatial relationships and contextual information within video frames, offering an innovative solution for real-time vehicle collision detection in CCTV footage. The model proposed in this paper leverages the attention mechanisms within transformers to analyze and interpret visual data, providing an efficient method for identifying potential collisions. The findings demonstrate the efficiency and accuracy of the proposed approach in improving accident detection, offer high-level insights into future research directions, highlight the potential impact of ViT-based systems on enhancing road safety, and underscore the need for their continued exploration and development.
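As a rough illustration of the approach described in the abstract (not the authors' released code), the sketch below fine-tunes an ImageNet-pretrained ViT-B/16 from torchvision as a binary crash / no-crash classifier on individual CCTV frames. The choice of backbone, the two-class head, and all names and hyperparameters are assumptions made for this example only; the paper's actual architecture and training setup may differ.

```python
# Minimal sketch, assuming a torchvision ViT-B/16 backbone and a binary
# crash / no-crash head. All names and values here are illustrative, not
# the paper's released implementation.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Load an ImageNet-pretrained ViT-B/16 and replace its classification head
# with a 2-way head (0 = normal traffic, 1 = collision).
weights = models.ViT_B_16_Weights.DEFAULT
model = models.vit_b_16(weights=weights)
model.heads.head = nn.Linear(model.heads.head.in_features, 2)
model.eval()

# Frames extracted from CCTV footage are resized to the ViT's 224x224 input
# and normalized with the backbone's ImageNet statistics.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def score_frame(frame_pil):
    """Return an estimated collision probability for a single PIL frame.

    Self-attention over the 14x14 patch grid lets the model relate vehicles
    across the scene before the linear head scores the frame.
    """
    x = preprocess(frame_pil).unsqueeze(0)             # (1, 3, 224, 224)
    logits = model(x)                                  # (1, 2)
    return torch.softmax(logits, dim=1)[0, 1].item()   # P(collision)
```

In practice such a classifier would be fine-tuned on labeled accident footage and applied frame by frame (or to short frame windows) of the CCTV stream, with a threshold on the collision score used to raise alerts.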