Self-driving cars powered by machine learning algorithms require vast amounts of driving data to function safely. But if they could learn to drive the way babies do, by watching and mimicking the drivers around them, they would need far less compiled driving data. Eshed Ohn-Bar, a Boston University researcher, is pushing for a new way for self-driving cars to learn safe driving techniques: by watching other drivers on the road and predicting their responses.
In an effort to make safer autonomous driving practical, Boston University researchers recently presented their work at the 2021 Conference on Computer Vision and Pattern Recognition. Their plan for training autonomous vehicles grew out of working in a competitive field where data sharing is often discouraged by major corporations looking to protect themselves from competition.
The researchers’ proposed machine learning algorithm works by estimating the viewpoints and blind spots of other nearby cars to create a bird’s-eye-view map of the surroundings. This map helps self-driving cars detect obstacles such as other vehicles or pedestrians, negotiate turns without crashing, and understand how other drivers behave on the road.
In short, the self-driving cars of the future could learn by watching the vehicles around them. These “learning by watching” machines translate what surrounding cars see into their own frame of reference, or perspective. Human drivers without sensors and another company’s auto-piloted vehicles alike can thus become teachers, showing the algorithm how to navigate its surroundings safely and cautiously.
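To make the “translate into their own frame” idea concrete, here is a minimal geometric sketch. In the paper this translation is learned by the model, not hand-coded, and all function and parameter names below (`to_ego_frame`, `mark_on_bev_grid`, the pose format, the grid size) are illustrative assumptions: the sketch only shows how a point observed by another vehicle could be re-expressed in the ego vehicle’s frame and rasterized onto a simple bird’s-eye-view grid.

```python
import math

def to_ego_frame(obs_xy, other_pose, ego_pose):
    """Re-express a point seen by another vehicle in the ego vehicle's frame.

    obs_xy: (x, y) of an obstacle in the other vehicle's local frame.
    other_pose / ego_pose: (x, y, heading) of each vehicle in a shared
    world frame. Hypothetical interface, for illustration only.
    """
    ox, oy, oth = other_pose
    # other's local frame -> world frame (rotate by heading, then translate)
    wx = ox + obs_xy[0] * math.cos(oth) - obs_xy[1] * math.sin(oth)
    wy = oy + obs_xy[0] * math.sin(oth) + obs_xy[1] * math.cos(oth)
    # world frame -> ego's local frame (translate, then rotate by -heading)
    ex, ey, eth = ego_pose
    dx, dy = wx - ex, wy - ey
    return (dx * math.cos(eth) + dy * math.sin(eth),
            -dx * math.sin(eth) + dy * math.cos(eth))

def mark_on_bev_grid(grid, point_xy, cell_size=0.5, origin=(32, 32)):
    """Rasterize an ego-frame point onto a bird's-eye-view occupancy grid."""
    col = origin[0] + int(round(point_xy[0] / cell_size))
    row = origin[1] - int(round(point_xy[1] / cell_size))
    if 0 <= row < len(grid) and 0 <= col < len(grid[0]):
        grid[row][col] = 1
    return grid

# An obstacle 2 m ahead of a car that is itself 10 m ahead of the ego car
# ends up 12 m ahead in the ego frame.
p = to_ego_frame((2.0, 0.0), (10.0, 0.0, 0.0), (0.0, 0.0, 0.0))
bev = mark_on_bev_grid([[0] * 64 for _ in range(64)], p)
```

Aggregating many such re-projected observations from surrounding cars is one way to picture how the bird’s-eye-view map described above could be assembled.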
The researchers tested autonomous vehicles running their “watch and learn” algorithm in two virtual towns: one with straightforward turns and obstacles similar to the training environment, and another with unexpected twists such as five-way intersections. In both settings, after learning from just one hour of driving data, the vehicles arrived safely at their destinations 92 percent of the time, without any accidents.
Looking ahead, the research team believes its technique for teaching autonomous vehicles could carry over to other technologies as well. In the future, delivery robots or even drones might all learn by watching the AI systems around them.
Asif Razzaq is an AI Journalist and Cofounder of Marktechpost, LLC. He is a visionary, entrepreneur and engineer who aspires to use the power of Artificial Intelligence for good.
Asif's latest venture is the development of an Artificial Intelligence Media Platform (Marktechpost) that will revolutionize how people can find relevant news related to Artificial Intelligence, Data Science and Machine Learning.
Asif was featured by Onalytica in its ‘Who’s Who in AI? (Influential Voices & Brands)’ as one of the ‘Influential Journalists in AI’ (https://onalytica.com/wp-content/uploads/2021/09/Whos-Who-In-AI.pdf). His interview was also featured by Onalytica (https://onalytica.com/blog/posts/interview-with-asif-razzaq/).