Self-driving, or autonomous driving, systems are among the most promising applications of near-term Artificial Intelligence research and development. This path-breaking work draws on an immense amount of data, collected while driving, that is labelled and contextually rich.
The levels of driving autonomy have been defined quite clearly by the NHTSA in the USA:
Stages of Autonomy in Driving
- Level 0 – No Automation
  The driver is completely in charge.
- Level 1 – Function Specific Automation
  Single control functions like speed selection, braking or lane following are automated.
- Level 2 – Combined Function Automation
  More than one control function is automated. The driver is expected to be available to take over control of the vehicle at all times and at short notice.
- Level 3 – Limited Self-driving Automation
  The vehicle remains in control most of the time. The driver is expected to be available to occasionally take over the controls, with comfortable transition times.
- Level 4/5 – Full Self-driving Automation
  The vehicle stays in control all the time. The driver is not required to be available to take over the controls at any time.
From the perspective of complex perception and control, the technology that solves autonomous driving could plausibly be extended to other important tasks such as action and motion recognition in videos. An economically attractive approach to autonomous driving, and one that still pushes the AI forward, is a vision-based system that relies on the same sense humans use when driving. Vision-based control systems and reinforcement learning became practical and mainstream mainly thanks to technological developments such as deep and recurrent neural networks, combined with unbounded access to interactions with the world or with games. These interactions make it possible to revisit previous states under new policies and to simulate possible events for further training of deep neural network-based controllers.
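The ability to revisit states and replay them under new policies is easiest to see in code. Below is a minimal, hypothetical sketch (class and function names are invented for illustration, not taken from any real tool) of how a simulator with save and restore of state supports this kind of closed-loop training; a real setup would replace the random policy with a deep neural network controller and an actual learning update.

```python
# Minimal sketch of simulator-driven policy training (all names are hypothetical).
import random

class ToyDrivingSimulator:
    """Toy stand-in for a driving simulator that can save and restore its state."""

    def reset(self):
        self.state = {"speed": 0.0, "lane_offset": 0.0}
        return dict(self.state)

    def save_state(self):
        return dict(self.state)

    def restore_state(self, saved):
        self.state = dict(saved)
        return dict(self.state)

    def step(self, action):
        # Extremely simplified dynamics: action = (throttle, steer).
        throttle, steer = action
        self.state["speed"] += 0.1 * throttle
        self.state["lane_offset"] += 0.05 * steer
        reward = -abs(self.state["lane_offset"])       # reward staying centred in the lane
        done = abs(self.state["lane_offset"]) > 1.0    # episode ends if the car leaves the lane
        return dict(self.state), reward, done

def random_policy(state):
    # Placeholder for a deep neural network controller.
    return (random.uniform(0.0, 1.0), random.uniform(-1.0, 1.0))

sim = ToyDrivingSimulator()
sim.reset()
checkpoint = sim.save_state()              # a situation worth revisiting later

for episode in range(3):
    state = sim.restore_state(checkpoint)  # revisit the same state under a (new) policy
    for _ in range(50):
        state, reward, done = sim.step(random_policy(state))
        if done:
            break
    # In practice, the transitions collected here would be used to update the policy.
```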
Methods of Testing Autonomous Functions
- Virtual Tests/Simulations – These tests encompass the analysis of a large number of scenarios, environments, system configurations and driver characteristics.
- Proving Ground Tests – For testing the performance of driving robots and self-driving cars during critical manoeuvres.
- Field Tests – For the investigation of real driving situations and calibrating system specifications.
In the case of control problems where unbounded interaction between a learning agent and the open world is not practical, there are two possible ways out: hand-code a simulator, or learn to simulate. Hand coding requires considerable expertise in defining the rules of physics and in programming the randomness of the world into the code. Simulation is crucial for efficiently verifying the huge number of functional requirements.
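As a concrete illustration of the hand-coding option, the sketch below (hypothetical names and parameter values) shows the two ingredients mentioned above: physics rules written by hand, and the world's randomness programmed explicitly into the code.

```python
# Minimal sketch of what "hand coding a simulator" means in practice (all names
# and parameter values are hypothetical): the physics rule is written by hand as
# constant-acceleration kinematics, and the randomness of the world is programmed
# in explicitly as occasional hard braking plus small speed noise.
import random

def simulate_lead_vehicle(position_m, speed_mps, dt_s=0.1, steps=200):
    """Propagate a lead vehicle forward with hand-coded kinematics and randomness."""
    trajectory = []
    for _ in range(steps):
        if random.random() < 0.02:
            acceleration = -6.0                    # rare hard-braking event
        else:
            acceleration = random.gauss(0.0, 0.3)  # everyday speed fluctuation
        speed_mps = max(0.0, speed_mps + acceleration * dt_s)
        position_m += speed_mps * dt_s
        trajectory.append((position_m, speed_mps))
    return trajectory

trajectory = simulate_lead_vehicle(position_m=0.0, speed_mps=20.0)
```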
Latest Developments in the Testing of Autonomous Vehicles
Methods
- Greater use of simulation for verifying control algorithms and traffic rule compliance
- Structured search for functional deficits, instead of waiting for them to arise during test driving
- Continuously assessing and adapting to external conditions and rules
- Reliably judging if the limits of vehicle autonomy are close
- Announcing the end of the autonomous mode early enough that the driver can take over (Level 3 Autonomy)
- Bringing the vehicle to a safe stop (in Level 3 Autonomy) if the driver fails to take over, as sketched below
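The last two points describe the Level 3 handover problem. A minimal, hypothetical sketch of that decision logic might look like the following (the function, the state names and the 10-second warning horizon are all assumptions made for illustration, not standardised values):

```python
# Hypothetical sketch of Level 3 handover logic: announce the end of autonomous
# mode early, and fall back to a safe stop if the driver does not take over in
# time. The warning horizon below is an assumed value, not a standardised one.

TAKEOVER_WARNING_S = 10.0

def handover_step(time_to_system_limit_s, driver_has_taken_over):
    """Return the action the autonomy stack should take in this control cycle."""
    if driver_has_taken_over:
        return "HAND_CONTROL_TO_DRIVER"
    if time_to_system_limit_s > TAKEOVER_WARNING_S:
        return "CONTINUE_AUTONOMOUS"
    if time_to_system_limit_s > 0.0:
        return "ISSUE_TAKEOVER_REQUEST"        # announce the end of autonomous mode
    return "EXECUTE_MINIMAL_RISK_MANEUVER"     # bring the vehicle to a safe stop

# Example: 4 s before the system limit and the driver has not yet responded.
print(handover_step(time_to_system_limit_s=4.0, driver_has_taken_over=False))
```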
Major Reasons for Accidents during Autonomous Driving
- Failure of components or hardware issues
- Deficiency in sensing road, traffic and environmental conditions
- Control algorithm deficiency in complex and difficult situations
- Accidents caused by driver behaviour
- Faulty interaction between the driver and vehicle
Frequent simulation of such situations is critical to the mitigation of these causes of accidents.
Challenging Traffic Situations Recreated in Simulations
- When following: Preceding car drives into a traffic jam without braking
- When a car cuts in: The cut-in vehicle brakes hard, leaving no space for evasion (a parameterised version of this scenario is sketched after this list)
- When a car cuts out: Preceding car cuts out just before an obstacle, leaving no room for evasion
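To make the cut-in case concrete, here is a hypothetical sketch of how such a situation could be parameterised and swept in simulation. The field names, the numeric values and the simple time-to-collision criterion are illustrative assumptions, not any specific platform's format.

```python
# Hypothetical parameterisation of the cut-in scenario above, as it might be fed
# to a scenario-based simulation run. Field names, values and the criticality
# metric (a simple time-to-collision check) are illustrative only.
from dataclasses import dataclass

@dataclass
class CutInScenario:
    ego_speed_mps: float        # speed of the vehicle under test
    cut_in_gap_m: float         # gap into which the other vehicle cuts
    cut_in_speed_mps: float     # speed of the cutting-in vehicle
    brake_decel_mps2: float     # how hard the cut-in vehicle brakes afterwards

    def time_to_collision_s(self) -> float:
        """Rough metric: time to close the gap at the initial closing speed."""
        closing_speed = self.ego_speed_mps - self.cut_in_speed_mps
        return self.cut_in_gap_m / closing_speed if closing_speed > 0 else float("inf")

# Sweep the gap parameter instead of waiting for the situation to arise on the road.
scenarios = [
    CutInScenario(ego_speed_mps=25.0, cut_in_gap_m=gap,
                  cut_in_speed_mps=15.0, brake_decel_mps2=-6.0)
    for gap in (5.0, 10.0, 20.0, 40.0)
]
critical = [s for s in scenarios if s.time_to_collision_s() < 2.0]
```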
A number of virtual models need to be developed in order to simulate these situations.
Virtual Models Required for Driving Simulation Platforms:
- Sensor Models: Simulation of the activity of vehicle sensors such as the camera and other sensors
- Vehicle Models: Vehicle dynamics for simulating lateral and longitudinal vehicle motion
- Road Model: A road network model providing virtual road and infrastructure information
- Traffic Model: Traffic simulation for vehicles, pedestrians and bicycles, with driver behaviour models and vehicle models
These models need to be original and generate realistic as well as unique situations to train autonomous driving systems.
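As an illustration of how these four models might fit together, here is a hypothetical sketch of a single simulation step. The class and method names are invented for this article and do not refer to any real simulation platform.

```python
# Hypothetical composition of the four virtual models into one simulation step
# (class and method names are illustrative, not from any specific platform).

class SensorModel:
    def render(self, road, traffic, ego_state):
        """Produce synthetic sensor output (e.g. a camera frame) for the ego vehicle."""
        return {"camera_frame": None, "detected_objects": traffic}

class VehicleModel:
    def step(self, ego_state, controls, dt_s):
        """Propagate simplified longitudinal and lateral vehicle dynamics."""
        return {
            "speed": ego_state["speed"] + controls["throttle"] * dt_s,
            "heading": ego_state["heading"] + controls["steer"] * dt_s,
        }

class RoadModel:
    def query(self, position_m):
        """Return virtual road and infrastructure information around a position."""
        return {"lane_width_m": 3.5, "speed_limit_mps": 27.0}

class TrafficModel:
    def step(self, road, dt_s):
        """Advance surrounding vehicles, pedestrians and bicycles using behaviour models."""
        return [{"type": "car", "gap_m": 30.0}]

def simulation_step(sensor, vehicle, road, traffic, ego_state, controls, dt_s=0.05):
    road_info = road.query(ego_state.get("position_m", 0.0))
    surrounding = traffic.step(road_info, dt_s)
    observation = sensor.render(road_info, surrounding, ego_state)  # what the vehicle "sees"
    next_ego_state = vehicle.step(ego_state, controls, dt_s)        # how the vehicle moves
    return observation, next_ego_state

observation, ego_state = simulation_step(
    SensorModel(), VehicleModel(), RoadModel(), TrafficModel(),
    ego_state={"speed": 20.0, "heading": 0.0},
    controls={"throttle": 0.1, "steer": 0.0},
)
```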
Conclusion
Before autonomous vehicles can drive you anywhere, they have to prove that they will not drive you into trouble. A huge number of test-driving kilometres will be needed to attain this level of reliability and safety. Covering them on real roads alone is neither practical nor economically sensible, which is why simulations are needed to train these autonomous driving systems.
About drivebuddyAI
drivebuddyAI is a fleet management system powered by AI technology, designed to provide visibility and actionable insights that improve operations, from driving safety to driver selection and driver coaching. drivebuddyAI's technology includes a driver drowsiness detection system, driver fatigue detection, and a collision warning system for increased driving safety. drivebuddyAI's autonomous technology has also been tested on the simulators discussed in this article, in addition to proving-ground and on-road field tests. Visit drivebuddyai.co for further information!