Griffith University Researchers Design AI Video Surveillance System To Detect Social Distancing Breaches

Researchers at Griffith University have developed an AI video surveillance system to monitor social distancing breaches at an airport without compromising privacy. The team eliminated the traditional need to store sensitive data on a central system by confining image processing to a local network of cameras.

According to Professor Dian Tjondronegoro from Griffith Business School, data privacy is currently one of the most serious concerns with this technology, because the system has to constantly monitor people's activities to be effective.

The case study was completed at Gold Coast Airport, which, before the COVID-19 outbreak, handled 6.5 million passengers annually, with nearly 17,000 passengers on-site daily. The airport has hundreds of cameras covering 290,000 square meters, several shops, and more than 40 check-in points.

The research team tested several cutting-edge algorithms, lightweight enough for local computation, across nine cameras. The test was carried out in three related case studies: automatic crowd counting, automatic people detection, and social distancing breach detection, to find the best balance of performance, reliability, and accuracy.
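The article does not publish the team's detector or thresholds, so the crowd-counting case study can only be sketched hypothetically. Assuming a person detector has already produced one bounding box per person for a frame, counting reduces to tallying boxes and comparing against a zone capacity; the `CROWD_LIMIT` value below is an illustrative assumption, not a figure from the study.

```python
# Hypothetical sketch of the crowd-counting step. Detections are
# represented as pre-computed bounding boxes (x, y, w, h), standing in
# for the output of whatever person detector ran on the local cameras.

CROWD_LIMIT = 50  # assumed per-zone capacity; not from the study


def count_people(detections):
    """Count person detections in one frame (one box per person)."""
    return len(detections)


def is_overcrowded(detections, limit=CROWD_LIMIT):
    """Flag a zone whose head count exceeds the configured limit."""
    return count_people(detections) > limit
```

Because only the count (not the imagery) needs to leave the camera, a design like this is compatible with the privacy approach the article describes: frames stay on the local network and only aggregate numbers are reported.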

Their aim was to develop a system capable of real-time analysis that could detect social distancing breaches and notify airport staff.

Three cameras were utilized for automatic social distance breach detection, covering the waiting area, check-in area, and food court. Two people were tasked with comparing live video feeds against the AI analysis results to check whether people marked as red were in breach.
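The red marking described above suggests a standard pairwise-distance check. The study's actual method is not detailed in the article, so the sketch below is an assumption: people are represented as ground-plane (x, y) positions in metres (e.g. after projecting detections onto the floor), and the 1.5 m threshold is an illustrative social-distancing rule, not a figure from the study.

```python
import math
from itertools import combinations

MIN_DISTANCE_M = 1.5  # assumed distancing threshold, not from the study


def find_breaches(positions, min_dist=MIN_DISTANCE_M):
    """Return index pairs of people standing closer than min_dist metres."""
    breaches = []
    for (i, a), (j, b) in combinations(enumerate(positions), 2):
        if math.dist(a, b) < min_dist:
            breaches.append((i, j))
    return breaches


def mark_colours(positions, min_dist=MIN_DISTANCE_M):
    """Label each person 'red' (in breach) or 'green', as in a live overlay."""
    in_breach = {i for pair in find_breaches(positions, min_dist) for i in pair}
    return ["red" if i in in_breach else "green" for i in range(len(positions))]
```

For example, two people standing 1 m apart would both be marked red while a third person 5 m away stays green, which matches the checking task the human reviewers performed against the live feeds.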

Researchers discovered that camera angles significantly affect the AI's ability to detect and track people's movements in a public area, and they therefore recommend angling cameras between 45 and 60 degrees.
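The article does not explain why 45 to 60 degrees works best, but simple geometry offers one reading: steeper tilts shorten the floor strip a camera sees while reducing person-on-person occlusion. The back-of-envelope sketch below is not from the study; the mounting height and field of view are illustrative assumptions.

```python
import math

# Back-of-envelope sketch (not from the study): a camera at height h,
# tilted t degrees below horizontal with vertical field of view fov,
# sees a floor strip running from h/tan(t + fov/2) to h/tan(t - fov/2)
# in front of the mast. If the upper edge of the view reaches the
# horizon (t <= fov/2), the far limit is unbounded.


def ground_strip(height_m, tilt_deg, fov_deg=40.0):
    """Return (near, far) floor distances covered by the camera, in metres."""
    near = height_m / math.tan(math.radians(tilt_deg + fov_deg / 2))
    far_angle = tilt_deg - fov_deg / 2
    far = math.inf if far_angle <= 0 else height_m / math.tan(math.radians(far_angle))
    return near, far
```

With these assumed numbers, a camera at 6 m tilted 45 degrees covers roughly 3 m to 13 m of floor, whereas a shallow 15-degree tilt has an unbounded far edge and views people nearly side-on, making occlusion and tracking harder.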

According to Professor Tjondronegoro, their AI-enabled system design is flexible enough to allow humans to double-check results, thus reducing data bias and improving transparency in how the system functions.

The system can be scaled up in the future by adding new cameras and adjusted for other purposes, such as preventing overcrowding or security breaches in public places. The study demonstrates that responsible AI design can inform future developments of this technology.



Consultant Intern: Kriti Maloo is currently pursuing her B.Tech from Indian Institute of Technology (IIT) Bhubaneswar. She is interested in Data Analytics and its applications in various domains. She is a Bibliophile and loves to explore new advancements in the field of technology.
