Drones and robots have become increasingly popular in recent years, with advances in technology making them more accessible and capable than ever. Options now range from aerial drones used for photography and surveillance to ground-based robots used for manufacturing and logistics, and these machines are transforming industries and the way we live and work.
Beyond being fun toys to play with, they are a critical component of many tasks. One area where they are particularly promising is autonomous navigation: with the ability to explore and map unknown environments, these machines can support a wide range of applications, from search and rescue operations to precision agriculture and beyond.
However, building effective autonomous navigation agents is a major challenge, particularly when it comes to exploration. Before we can rely on such agents, they must be able to operate in unfamiliar environments, exploring their surroundings and building accurate maps without human intervention or supervision.
Considerable research has gone into training exploration policies that maximize coverage, find specific goals or objects, and support active learning. Modular learning methods have proven particularly effective for embodied tasks: they learn exploration policies that build semantic maps of the environment, which can then be used for planning and for downstream tasks such as object-goal or image-goal navigation.
In parallel, a significant body of work has explored learning implicit map representations based on Neural Radiance Fields (NeRF), which offer a compact, continuous representation of the appearance and semantics of a 3D scene. Most approaches to building implicit representations, however, rely on human-collected data. But imagine if we could build them without a human in the loop: we could send out autonomous drones and robots and map an entire place in 3D. It would be amazing, right?
Well, meet AutoNeRF. It trains embodied agents to explore unseen environments efficiently and autonomously collect the data needed to generate NeRFs. At its core is a modular exploration policy, trained with Reinforcement Learning (RL), that can traverse an unseen 3D scene and gather observations for training a NeRF model, all without human supervision.
AutoNeRF enables autonomous drones and robots to collect the data required for training neural implicit representations of a scene. The NeRF serves as a continuous, compact representation of the scene's density, RGB appearance, and semantics. With AutoNeRF, the robot or drone is initialized in an unknown environment and tasked with collecting data in a single episode within a fixed time budget. The observations gathered during that episode are used to train the NeRF model, which is then evaluated on various downstream tasks in robotics, including mapping, new-view rendering, planning, and pose refinement.
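To make the idea of a continuous, compact scene representation concrete, the sketch below shows a toy coordinate network: a tiny MLP that maps an encoded 3D position to density, RGB color, and per-class semantic logits. This is a minimal illustration in NumPy, not the paper's actual model; the layer sizes, encoding frequencies, and class count are all assumptions.

```python
import numpy as np

def positional_encoding(x, n_freqs=4):
    """Map 3D coords to sin/cos features at multiple frequencies,
    as is standard for NeRF-style coordinate networks."""
    feats = [x]
    for i in range(n_freqs):
        feats.append(np.sin((2.0 ** i) * np.pi * x))
        feats.append(np.cos((2.0 ** i) * np.pi * x))
    return np.concatenate(feats, axis=-1)

class ImplicitSceneField:
    """Toy coordinate MLP: (x, y, z) -> (density, RGB, semantic logits).
    Weights are random here; a real NeRF fits them to posed images."""

    def __init__(self, n_classes=10, hidden=64, n_freqs=4, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = 3 + 3 * 2 * n_freqs  # raw coords + sin/cos features
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, 1 + 3 + n_classes))
        self.n_freqs = n_freqs

    def query(self, xyz):
        h = np.tanh(positional_encoding(xyz, self.n_freqs) @ self.w1)
        out = h @ self.w2
        density = np.log1p(np.exp(out[..., :1]))    # softplus -> non-negative
        rgb = 1.0 / (1.0 + np.exp(-out[..., 1:4]))  # sigmoid -> [0, 1]
        sem_logits = out[..., 4:]                   # per-class scores
        return density, rgb, sem_logits

field = ImplicitSceneField()
density, rgb, sem = field.query(np.zeros((5, 3)))
print(density.shape, rgb.shape, sem.shape)  # (5, 1) (5, 3) (5, 10)
```

The key property this illustrates is that the whole scene lives in the network weights: any 3D point can be queried at arbitrary resolution, which is what makes the representation both continuous and compact.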
AutoNeRF has two primary phases: exploration policy training and NeRF training. In the first phase, an exploration policy is trained with intrinsic rewards across a set of training environments; this policy lets the robot or drone navigate a scene while collecting observations. In the second phase, the trained policy is used to collect data in unseen test scenes, with one trajectory per scene used to train the NeRF model. Finally, the trained NeRF is evaluated on various downstream tasks to test its effectiveness in Embodied AI applications.
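The data-collection step of the second phase can be sketched as a simple budgeted rollout loop. Everything below is a hypothetical stand-in for the paper's actual components: `DummyEnv`, `DummyPolicy`, and the `Observation` fields are illustrative assumptions, not AutoNeRF's real interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    rgb: list    # camera image (placeholder)
    depth: list  # depth map (placeholder)
    pose: tuple  # agent pose (x, y, heading)

@dataclass
class TrajectoryBuffer:
    frames: list = field(default_factory=list)

    def add(self, obs):
        self.frames.append(obs)

def collect_episode(env, policy, budget=500):
    """One episode with a fixed step budget: the trained exploration
    policy picks actions, and every observation is stored so the
    trajectory can later be used to fit a NeRF."""
    buffer = TrajectoryBuffer()
    obs = env.reset()
    for _ in range(budget):
        buffer.add(obs)
        action = policy.act(obs)
        obs = env.step(action)
    return buffer

# Minimal stubs so the sketch runs end to end.
class DummyEnv:
    def __init__(self):
        self.t = 0
    def reset(self):
        self.t = 0
        return Observation(rgb=[], depth=[], pose=(0.0, 0.0, 0.0))
    def step(self, action):
        self.t += 1
        return Observation(rgb=[], depth=[], pose=(float(self.t), 0.0, 0.0))

class DummyPolicy:
    def act(self, obs):
        return "move_forward"

buffer = collect_episode(DummyEnv(), DummyPolicy(), budget=100)
print(len(buffer.frames))  # 100
```

The design point worth noting is the single-episode, fixed-budget constraint: the policy gets one pass through the unseen scene, so its intrinsic rewards must push it toward coverage rather than revisiting familiar areas.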
One of the key advantages of AutoNeRF is its ability to generate high-quality implicit map representations using data collected by autonomous agents. This has important implications for a variety of applications, including virtual reality, robotics, and autonomous driving.
Check out the Paper and Project.
Ekrem Çetinkaya received his B.Sc. in 2018, and M.Sc. in 2019 from Ozyegin University, Istanbul, Türkiye. He wrote his M.Sc. thesis about image denoising using deep convolutional networks. He received his Ph.D. degree in 2023 from the University of Klagenfurt, Austria, with his dissertation titled "Video Coding Enhancements for HTTP Adaptive Streaming Using Machine Learning." His research interests include deep learning, computer vision, video encoding, and multimedia networking.