
DeepRoute.ai CEO: SDVs and AGI herald ‘the era of robots’


DeepRoute.ai’s CEO believes software-defined vehicles are the key to artificial general intelligence in the physical world. By Megan Lampinen

Artificial intelligence (AI) is shaping the development and functionality of software-defined vehicles (SDVs). The promise is great but the journey has only just begun. Many industry players are working towards the vision of driverless cars that can navigate 24/7 in any and all conditions. For Chinese self-driving company DeepRoute.ai, the AI capabilities needed to realise that paradigm open up tremendous potential in applications far beyond the roadways.

The starting point

Led predominantly by China, connected and intelligent interactions in vehicle cockpits are rapidly becoming the norm. Most Chinese consumers purchasing even a mid-range car expect it to offer a host of smart driving functions. “Today, cars are controlled from screens and preferably by voice interaction,” says Maxwell Zhou, Chief Executive of DeepRoute.ai. “Automakers are starting to integrate ChatGPT to improve the in-car digital assistants and make interactions even smarter.” Volkswagen Group and Stellantis are two big-name players leading the integration of these early generative AI (GenAI) systems across their vast line-ups.

Cerence’s Chat Pro leverages a multitude of sources, including ChatGPT

GenAI is also facilitating real-time decision-making in automated driving systems, which are gradually taking on more of the driving tasks, particularly in urban areas. “This is a top priority in new car purchases,” Zhou points out. Data from the China Passenger Car Association shows that in 2023, 55.3% of new energy vehicles came with integrated SAE Level 2 and L2+ functionality.

But as the industry moves towards greater levels of automation, another form of AI is gaining traction: artificial general intelligence (AGI). While GenAI refers to algorithms that generate new content—videos, code, images, etc.—AGI acts more like a human in terms of common sense, understanding and learning. It can then apply that ‘general’ knowledge to all sorts of tasks. “The most important difference with AGI is the generalisations,” Zhou tells Automotive World.

Today’s self-driving systems are highly tailored and trained for specific use cases and regions, usually relying on a high-definition (HD) map that needs constant updating. But with AGI, an autonomous vehicle (AV) that can drive in London can also drive in San Francisco or Beijing with its generalised learnings applied from one city to another. “Waymo can only drive in a few places like Phoenix or San Francisco,” says Zhou. “If it goes somewhere else, it won’t work. The power of the new AI technologies is totally different and gives us the potential to drive everywhere.”

It also means there’s no need for an HD map, which has been one of the unique selling points of DeepRoute.ai’s self-driving system. “You would need to hire thousands of people just to maintain these maps, and you would need to cover everywhere—Europe, the Americas, China. It’s simply not going to be possible. AGI is the way to Level 5 autonomous driving, as well as to robots,” Zhou proclaims.

Data is the key

Zhou, who has led autonomous driving projects at Baidu, Texas Instruments, and DJI, suggests cars are the starting point for a wider evolution within all of robotics. Specifically, they represent the first kind of robots that will exist in the tens of millions of units. Hedges and Company estimates that there are about 1.5 billion vehicles on the world’s roads today. Over time these vehicles will inevitably be retired and replaced by highly automated or fully autonomous vehicles. These vehicles will produce enormous amounts of data about the physical world, which can be harnessed to further train and iterate on the AI algorithms. “You need to collect more data and train your models,” he says. “There’s a lot of work to be done, but data is the key.”

A fleet of DeepRoute.ai’s robotaxis

The learnings can feed into a foundation model that could be readily transferred to other robotics scenarios. And in the opinion of many industry players, cars are indeed becoming robots. In his 2024 GTC keynote, Nvidia Chief Executive Jensen Huang asserted, “Everything that moves will be robotic—there is no question about that. And one of the largest industries will be automotive.”

As Zhou explains, “The foundation AI model that we train is based on the data we collect from cars. It could benefit all robots. In the past, robots were built for a single purpose, and that purpose would need to be defined. But we’re moving towards this new approach in which there is no need to input a specific definition for the robot task. If these models work for autonomous driving, they should work for other robots.”

Understanding the physical world

One of the most important aspects of training these AI models is the need to understand the physical world. “There needs to be common sense,” says Zhou. “The AI needs to understand distance, humans, how vehicles work—for instance, that they don’t drive on top of a fence. We believe the common sense in these neural networks will eventually be transferable for other tasks, and the best place to start is with autonomous cars.”


In 2021, DeepRoute.ai launched a production-ready autonomous driving solution that does not rely on HD maps. That same year it also launched a robotaxi service, concentrated in the central business districts of Shenzhen. It is currently working with a Chinese automaker on the mass production of smart driving vehicles, with at least three mass-market models expected to debut later in 2024. As these and other similar systems appear in vehicles, they can feed into the foundation model, which can be migrated to other forms of robotics thanks to the move towards what Zhou calls ‘AI 2.0’. As he emphasises: “We really see the power of the new AI; it’s not like traditional AI. Up until last year we were trying to enter the data and train the models, but we realised we simply couldn’t solve the problem that way. Using this new architecture, we solved it. This new technology should be able to migrate to all robots. The era of robots is coming.”

And the timeline? Zhou suggests that within the next five years the world could see “a lot of general robots” across various applications. As for the “era of robots”, that could be another ten years, but he emphasises that “it will definitely happen.”


