When Will Cars Be Truly Self-Driving?
The following are key points of view from experts in the auto industry:
- Many auto industry executives and experts predicted years ago that we would be riding in self-driving cars by now. Yet the technology remains far from able to replace human drivers.
- Experts believe that achieving Level 5 (L5) autonomous driving requires artificial intelligence on a par with human intelligence, a goal that may take more than ten years or even decades to reach.
- Experts believe current self-driving technology is too expensive for personal use. Still, it could be deployed in public transportation, carrying multiple passengers and operating around the clock to help cover the cost.
- Artificial intelligence is a key prerequisite for autonomous driving functions, but it cannot cover all “edge cases” because it cannot reason and lacks intuition. AI also cannot learn from small amounts of data as humans can.
What Is the Current Status of Autonomous Driving?
According to Tencent Technology News, the predictions made by some auto industry executives and technical experts a few years ago suggested we should already be in the era of self-driving cars, free to watch TV, play games, or even nap during the ride. However, after billions of dollars in research and development spending, self-driving technology has still not advanced to the point where it can replace human drivers.
Articles for Further Reading:
- Mercedes-Benz Executives Will Not Approve Autonomous Driving Until It Reaches 99.999% Safety
- What Does the Acceleration of 5G Commercialization Mean for Autonomous Driving and Connected Cars?
- What Impact Will the Full Implementation of 5G Technology Have on the Automotive Industry?
As a result, many car companies and tech startups have scaled back their goals or pushed back their timelines. In October, Ford Motor Co. and Volkswagen AG shut down Argo AI, their self-driving joint venture.
On a quarterly earnings call, Ford CEO Jim Farley told analysts: “We’re developing self-driving cars. There is still a long way to go.”
Meanwhile, regulators are getting involved. In December, they investigated several rear-end crashes involving General Motors Co.’s Cruise self-driving taxis. Tesla’s driver-assistance technology has also been involved in multiple traffic accidents.
The Society of Automotive Engineers (SAE International) defines six levels of driving automation, from Level 0 (no automation) to Level 5 (full automation). The levels relevant here are listed below; a rough mapping of these levels is sketched in code after the list.
- L1-class cars use either lane-centering technology or adaptive cruise control, which keeps the vehicle at a set distance from the vehicle in front.
- L2 cars combine both of these functions. But at either level, the driver must still remain fully in control of the vehicle.
- Level 3 and Level 4 cars can drive autonomously under limited conditions, such as on certain road types and in certain weather conditions.
- Level 5 vehicles can drive autonomously on any road under any weather conditions.
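To make the taxonomy concrete, here is a minimal, illustrative Python sketch of how these levels might be represented in software. It is a simplification of the list above, not the official SAE J3016 wording, and the helper function is hypothetical.

```python
# Illustrative sketch of the SAE driving-automation levels described above.
# The comments simplify the definitions; see SAE J3016 for the real wording.
from enum import IntEnum

class SAELevel(IntEnum):
    L1 = 1  # one assist feature: lane centering OR adaptive cruise control
    L2 = 2  # both features combined; driver must stay fully engaged
    L3 = 3  # autonomous under limited conditions; driver may need to take over
    L4 = 4  # autonomous under limited conditions (road type, weather, area)
    L5 = 5  # autonomous on any road, in any weather

def driver_must_supervise(level: SAELevel) -> bool:
    """At L1-L2 the human is still driving; from L3 up the system drives
    itself within its design limits."""
    return level <= SAELevel.L2

print(driver_must_supervise(SAELevel.L2))  # True
print(driver_must_supervise(SAELevel.L4))  # False
```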
The Wall Street Journal invited three experts to discuss the future of self-driving cars: Alexander Bahn, a professor of electrical engineering and computer science at the University of California; Raj Rajkumar, a professor of electrical and computer engineering at Carnegie Mellon University; and Jefferson Wang, senior managing director of the mobility practice at consulting firm Accenture.
The following are Q&A excerpts from the conversation:
Obstacle 1: Human-Level Artificial Intelligence
Question: Can a car achieve L5 autonomous driving? At this level, the car might not even have a steering wheel and pedals. If so, when?
Answer 1: This requires human-level AI, and there is no generally accepted theory of how to achieve it. Until human-level artificial intelligence exists, the ability to move autonomously will remain limited.
Answer 2: Predicting fully autonomous driving by a particular date often leads to unrealistic expectations. In reality, the development of autonomous driving is essentially incremental. It is not even clear that full automation anywhere, anytime is the endgame; the market will tell us the answer.
Answer 3 (Rajkumar): In fact, it may never be possible, or at least not within the next decade or even several decades. It goes beyond the technological means that exist or will be available in the foreseeable future. However, more limited but very useful solutions will be deployed soon.
Question 2: Now, let’s discuss the technologies required for self-driving cars below Level 5. First, what needs to be improved in the systems that replace the human eye, such as radar, cameras, GPS, and lidar? Lidar is a laser-based system that builds a 3D view of the vehicle’s surroundings, including the road, buildings, pedestrians, and other cars.
Answer 1: Generally speaking, our current technology level is already very advanced, and the key challenge is cost. High levels of autonomous driving require a high degree of redundancy. Combining camera, radar, and lidar systems with high-definition maps and heavy computing requirements will make L4 vehicles too expensive for individuals. So we expect these vehicles to be used as public transport, carrying multiple passengers simultaneously and ideally running 24/7 to help cover the cost.
Answer 2: There are two different issues here. First, some companies want to rely only on cameras, for financial reasons. Since smartphones have made cameras ubiquitous, small, and cheap, the resulting system would be very affordable. Unfortunately, today’s camera-only vision technology will not match the cognitive capability of the human eye plus human neural processing for a long time to come.
Second, lidar is a critical component. Although its cost is falling, it is still high. Radar, ultrasonic sensors, and GPS are already at acceptable cost.
Question 3: What about artificial intelligence? Critics say that, unlike human drivers, AI cannot reason, lacks intuition, and relies too much on the data sets it was trained on to cope with unfamiliar situations.
Answer 1: AI is a key prerequisite for self-driving functionality, but it cannot cover all “edge cases” (unusual events, like a dog running in front of the car or a lane change at a construction site) because it cannot reason and also lacks intuition. The neural networks behind artificial intelligence perform what is called “model-blind curve fitting” in an attempt to minimize error.
For this, they need millions of examples. AI also cannot learn from small amounts of data as humans can.
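To illustrate what “model-blind curve fitting” means, here is a minimal sketch using an ordinary polynomial fit as a stand-in for a neural network (which fits far more parameters to far more examples). The model only reduces error on the data it is given and extrapolates poorly outside it, which is essentially the edge-case problem discussed next.

```python
# A minimal sketch of "curve fitting to minimize error": the model has no
# notion of reasoning or intuition; it only adjusts parameters to reduce the
# average error over the examples it sees. (Polynomial fit used purely for
# illustration; real perception networks need millions of labeled examples.)
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=1000)                    # training inputs
y = np.sin(3 * x) + 0.1 * rng.normal(size=x.size)    # noisy targets

# Pure statistical pattern matching: fit a degree-5 polynomial to the data.
w = np.polyfit(x, y, deg=5)
predict = np.poly1d(w)

mse = np.mean((predict(x) - y) ** 2)
print(f"training error (MSE): {mse:.4f}")

# Far outside the training range (an "edge case"), the same model fails badly.
print("prediction at x=3 (outside training data):", predict(3.0))
print("true value at x=3:", np.sin(9.0))
```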
Answer 2: Because of recent major advances in AI, the self-driving problem has come to be treated as an AI problem. But that framing misses the point. Humans have built vast fleets of passenger aircraft, rail networks, nuclear power plants, and spacecraft, all automated to varying degrees. Rather than relying on artificial intelligence, they rely on ingenious engineering grounded in science.
Today’s artificial intelligence is not at the level needed to build self-driving cars, and artificial intelligence is only one of many tools in the self-driving car toolbox.
Obstacle 2: Dealing With “Edge Cases”
Question: How can AI systems be improved to handle “edge cases”? These are situations the AI program has not seen before, and they can lead to dangerous driving.
Answer 1: When a camera sees such a scene, its corresponding neural network may not detect it correctly, especially if the obstacle was not included in its training data set.
When radar or lidar looks at the same scene, they may see that an obstacle is there without knowing what it is. But the vehicle is still aware that something is in its path, whether a child, a dog, a cow, a cat, a kangaroo, or even a pedestrian in a funny costume, and it can slow down or stop.
Sensor redundancy is key; relying on artificial intelligence alone to handle the myriad of known and unknown scenarios and obstacles will not work.
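As a rough illustration of that redundancy argument, the sketch below combines detections from independent sensors and brakes when any one of them reports something in the vehicle’s path, even if the camera’s classifier has never seen the object. The class, function names, and thresholds are hypothetical, not any automaker’s or vendor’s actual logic.

```python
# Minimal sketch of sensor redundancy: an independent radar or lidar return in
# the driving path is enough to trigger braking, even when the camera's
# classifier fails on an unfamiliar obstacle. Thresholds are placeholders.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Detection:
    sensor: str                  # "camera", "radar", or "lidar"
    distance_m: float            # range to the object in the driving path
    label: Optional[str] = None  # classification, if the sensor provides one

BRAKE_DISTANCE_M = 30.0

def plan_action(detections: list[Detection]) -> str:
    """Return a conservative action from whatever the sensors report."""
    in_path = [d for d in detections if d.distance_m < BRAKE_DISTANCE_M]
    if not in_path:
        return "maintain speed"
    # Any sensor seeing *something* close counts; we do not require the
    # camera to have recognized what it is.
    if min(d.distance_m for d in in_path) < 10.0:
        return "stop"
    return "slow down"

# Example: the camera reports nothing for a costumed pedestrian, but lidar
# still returns an unclassified obstacle 8 m ahead.
print(plan_action([Detection("lidar", 8.0)]))  # -> "stop"
```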
Answer 2: What if we had enough experience and data to train a neural network to recognize cats and dogs, but not enough to recognize cattle and sheep? Would we then let cars drive in rural France?
The AI community is working hard to address these issues. One approach, known as transfer learning, focuses on learning from a specific set of scenarios and applying that knowledge to previously unseen environments. So we may eventually be able to train a neural network that recognizes cattle and sheep as well as cats and dogs.
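The following is a minimal sketch of that transfer-learning idea: take a network pretrained on a large, generic image dataset and fine-tune only a small classification head on a handful of new classes. It assumes PyTorch with a recent torchvision; the class count, data, and single training step are placeholders rather than a real perception pipeline.

```python
# Sketch of transfer learning: reuse features pretrained on common classes
# (cats, dogs, ...) and train only a small final layer on a few new examples
# (e.g. cattle, sheep). Class names and data here are hypothetical.
import torch
import torch.nn as nn
from torchvision import models

NUM_NEW_CLASSES = 2  # e.g. "cattle", "sheep"

# Start from features learned on a large, generic image dataset.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False          # keep the pretrained features fixed

# Replace only the classification head; this is the part trained on the
# small amount of new data.
model.fc = nn.Linear(model.fc.in_features, NUM_NEW_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a fake mini-batch (real use would loop
# over a DataLoader of labeled cattle/sheep images).
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 1, 0, 1])
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"fine-tuning loss: {loss.item():.3f}")
```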
Obstacle 3: Detecting Human Intentions and Interpreting Human Expressions
Question: Apparently, AI cannot do other things that human drivers can, such as read the body language or facial expressions of pedestrians about to step into the road, or interpret the intentions of drivers approaching an intersection. Can technology do this?
Answer 1: Work is underway on detecting human intent, reading human expressions, and so on. However, these techniques are not foolproof. Because they can reach incorrect conclusions, AI tools will likely take a conservative and cautious approach for the foreseeable future.
Algorithms and standard machine-vision tools can detect the movement of cats, dogs, sheep, and cows. But what is their intention? Modeling and predicting humans is, in many ways, an even more complicated problem.
The field of hybrid autonomy (that is, human-machine interaction) is still in its infancy, so the ability of fully autonomous vehicles to interact with unknown humans remains a problem that machine learning has not fully solved.
Answer 2: I agree that self-driving cars will get better at recognizing and processing more complex patterns. Today, neural networks are surprisingly good at matching complex patterns in data sets.
But they do this only through correlation. Correlations can be spurious, so we shouldn’t expect self-driving cars to mimic the human brain.
Question: Can self-driving car technology handle weather conditions like fog and snow? Fog and heavy snow can obscure road signs and affect a car’s cameras and sensors.
Answer 1: There will be advances in sensing and actuation technologies that collect more data, learn more about weather conditions, and make those conditions manageable for the vehicle.
This can be seen in the progress made in aviation. Historically, aerospace engineering has driven us to build ever-stronger aircraft, and over time aircraft have become able to fly through increasingly dangerous turbulence. However, aircraft operations have a “safety set” defined by sensing capability and conditions, and staying within this range is critical to maintaining flight safety.
Answer 2: Sensor companies and automakers are always looking for ways to keep sensors clean: mounting them in housings inside and outside the car, adding heating elements to remove ice and snow, using windshield wipers, and so on. The more important question is whether the sensors can detect objects and road features under different weather and lighting conditions, which is why early robo-taxis are deployed in areas where it rarely rains or snows.
This situation again shows that sensing requires more redundancy. The camera lens may be dirty. However, radar can still see through harsh weather conditions, and lidar works well even in rain and snow.
Answer 3: I agree that sensing technology will make a lot of progress. However, it will continue to face limitations in harsh conditions. In the context of smart cities, we must balance improving vehicle technology with upgrading infrastructure, such as using sensors to supplement traffic signals, road signs, and so on.
Obstacle 4: Infrastructure Factors
Question: How important is the vehicle’s communication with infrastructure such as traffic lights and other vehicles for self-driving cars?
Answer 1: As long as safety-critical event detection exists only on the vehicle, the vehicle will be subject to a higher degree of scrutiny and certification. If some of these capabilities move into the infrastructure, that integration may lead to faster deployment of automated driving.
Answer 2: Vehicle-to-everything communication technology, known as CV2X, will enable vehicles to communicate with other properly equipped vehicles, traffic signals, road signs, pedestrians, cloud infrastructure, and more.
The return on this investment is that, without any artificial intelligence, the vehicle can know the exact state of a traffic light hundreds of meters away, 5 to 10 times the range of computer vision. It can also receive information about road closures, collisions, and traffic jams ahead.
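As a rough sketch of how a vehicle might consume such an infrastructure message, the example below models a traffic-signal broadcast and a simple decision based on it. The message fields and function are hypothetical and simplified; real deployments use standardized V2X message formats.

```python
# Minimal sketch of the infrastructure idea described above: a traffic signal
# broadcasts its state and timing, and the vehicle uses the message directly
# instead of inferring the light's color from camera pixels. Field names are
# illustrative placeholders, not an actual V2X standard.
from dataclasses import dataclass

@dataclass
class SignalPhaseMessage:
    intersection_id: str
    phase: str             # "red", "yellow", or "green"
    seconds_remaining: float
    distance_m: float      # distance from the vehicle to the stop line

def advise(msg: SignalPhaseMessage, speed_mps: float) -> str:
    """Decide whether the vehicle can clear the light before it changes."""
    time_to_stop_line = msg.distance_m / max(speed_mps, 0.1)
    if msg.phase == "green" and time_to_stop_line < msg.seconds_remaining:
        return "proceed"
    return "prepare to stop"

# Example: a light 300 m ahead (well beyond reliable camera range) reports
# green with 12 s remaining; at 15 m/s the vehicle cannot make it in time.
print(advise(SignalPhaseMessage("int-42", "green", 12.0, 300.0), 15.0))
# -> "prepare to stop"
```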
Answer 3: CV2X has huge potential. However, automakers and regions still use different systems, which hinders interoperability, so establishing standards is a key success factor. In addition, CV2X requires investment in infrastructure.
Question: Are the safety expectations for self-driving cars too high, given that human-driven cars get into accidents every day and no one is calling for a ban on cars?
Answer 1: We’re used to the idea of humans causing car accidents, but since autonomous vehicles are so new, even a single crash or fatality gets a lot of media and public attention.
About 30 people have died in accidents involving self-driving cars, but that number pales in comparison to the roughly 40,000 people who die on highways each year in the U.S. alone. This expectation that computers should be perfect places the burden of making self-driving technology safer on developers and researchers.
Answer 2: Building and maintaining trust requires strict regulation, even more stringent than traditional car regulation. Of course, regulation should not stifle innovation, but I don’t think that is the main challenge right now.