What’s it all about?
Self-driving cars have made huge progress in scanning and navigating their environment. But cars start from a very solid baseline, as they operate in mapped environments.
Even then, they perform really impressively under controlled conditions, but they sometimes do crazy things in real-world scenarios. There have been instances of accidents caused by autonomous vehicles, injuring and killing humans, even though they were trained to avoid such mishaps.
The future of navigation
Future robots need to be able to operate in environments that are unmapped and poorly understood. Take for example rescue missions, agricultural and forest maintenance, space exploration, or even just household bots in private homes, cleaning bots in public spaces, and so on.
What kind of improvement do we need in order to have real breakthroughs in this technology?
For bots to take the next step in navigating their environment, they need to go beyond purely geometric maps and build a more semantic understanding of their surroundings. This means interacting more actively with their environment, discovering new objects and learning what they mean, and understanding the implications for how they should behave in different scenarios.
Instead of driving along predefined lines on a human-made roadmap, they need to understand their environment independently and, in effect, make their own roadmaps. This requires much higher levels of autonomy, including complex self-monitoring, self-reconfiguration, and self-repair.
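The difference between a purely geometric map and a semantic one can be sketched as a data structure. This is a minimal, hypothetical illustration; the class names, labels, and traversability rule are all assumptions, not a real robotics API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a geometric map only knows "occupied or free",
# while a semantic map attaches a meaning to each cell, and that meaning
# changes how the robot should behave.

@dataclass
class SemanticCell:
    occupied: bool
    label: str = "unknown"      # e.g. "door", "wall", "chair"
    traversable: bool = True    # behavioral implication of the label

@dataclass
class SemanticMap:
    cells: dict = field(default_factory=dict)  # (x, y) -> SemanticCell

    def observe(self, pos, occupied, label):
        # Assumed rule for illustration: a closed door is occupied
        # geometry, yet still traversable, because it can be opened.
        traversable = (not occupied) or label == "door"
        self.cells[pos] = SemanticCell(occupied, label, traversable)

    def can_traverse(self, pos):
        cell = self.cells.get(pos)
        return cell.traversable if cell else True  # unknown cells: optimistic

# Geometry alone would treat a closed door and a wall identically;
# the semantic label lets a planner tell them apart.
m = SemanticMap()
m.observe((2, 3), occupied=True, label="door")
m.observe((2, 4), occupied=True, label="wall")
```

In this sketch, a planner consulting only occupancy would route around both cells, while `can_traverse` allows the door cell, which is the kind of label-driven behavior the text describes.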
Even if a robot is trained to understand its environment, the slightest change requires it to re-learn and adapt. Mostly, this will have small consequences, like a delay in carrying out an assigned task. But it could also have bigger impacts, like causing accidents.
Although machine learning and computer vision technologies are currently being leveraged to overcome the mapping challenge, these technologies aren’t foolproof and still function best in controlled environments.
And of course the cool thing about robots is that as the system evolves through machine learning, all the individual bots also get smarter thanks to their system upgrades. But until we reach an inflection point in the field of machine learning, robots will always need a certain degree of human assistance. Think of Tesla cars, which have a high degree of autonomy but still need a human driver standing by. Real-life scenarios are highly unpredictable. No matter how well trained the robot is or how good its adaptability to new environments, there will always arise a situation the robot is not prepared for.
Wrapping it up
- Autonomous driving is relatively mature because it starts from a geographic map
- But there are still many challenges with navigation technologies for unmapped environments
- Technology needs to evolve toward a more semantic understanding of the surroundings
- As long as we don’t overcome these challenges, robots will still need human support to navigate their surroundings
A question for you
Would you hand over control of your car to a robot/autonomous software?