What is the technology behind an autonomous vehicle?

Juan Carlos Aguilera

When talking about autonomous vehicles, there are several different approaches depending on the technologies each manufacturer chooses to use, but some technologies are shared by almost all of them.

 

The autonomy tasks can be divided into three main areas: Control of the Vehicle, Perception of the Environment, and Motion Planning. Inside each of these areas there are specific technologies and procedures that are helping the industry build robust solutions.

 

Control of the Vehicle

Control of the vehicle is probably the most elemental task of all. If you can’t control the vehicle, it does not matter how well you detect the things that surround it or how well the motion is planned, because the vehicle will not be able to execute that plan, no matter how precise it is.

Vehicle control is achieved by two main parts: the mathematical model and the control algorithm. The mathematical model describes the vehicle dynamics and allows the computer to predict the behavior of the vehicle under multiple circumstances.

Modern vehicles are controlled, or at least monitored, by computers that evaluate and verify that the steering and braking systems are working correctly, given how critical they are for safety. Those same computers usually provide the interfaces needed to command the vehicle’s lateral and longitudinal motion; in other words, they allow the autonomy system to modify the vehicle’s lateral and longitudinal displacement.
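To make that idea a bit more concrete, here is a minimal sketch, in Python, of what such an interface could look like conceptually. Everything here (the VehicleCommand structure, the field names, the limits) is invented purely for illustration; real interfaces are manufacturer-specific CAN messages or APIs.

```python
from dataclasses import dataclass

# Hypothetical drive-by-wire command. Real interfaces vary by manufacturer,
# but the idea is the same: a small set of lateral and longitudinal requests
# that the onboard controllers are allowed to accept.
@dataclass
class VehicleCommand:
    steering_angle_rad: float  # lateral request
    throttle: float            # longitudinal request, 0.0 .. 1.0
    brake: float               # longitudinal request, 0.0 .. 1.0

def clamp(value: float, low: float, high: float) -> float:
    return max(low, min(high, value))

def sanitize(cmd: VehicleCommand, max_steer_rad: float = 0.6) -> VehicleCommand:
    """The interface layer typically enforces safety limits before actuation."""
    return VehicleCommand(
        steering_angle_rad=clamp(cmd.steering_angle_rad, -max_steer_rad, max_steer_rad),
        throttle=clamp(cmd.throttle, 0.0, 1.0),
        brake=clamp(cmd.brake, 0.0, 1.0),
    )

print(sanitize(VehicleCommand(steering_angle_rad=1.2, throttle=0.3, brake=0.0)))
```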


Figure 1: Driver Information Flow


Figure 2: Autonomy Computer Information Flow

In a very general way, these interfaces should allow the computer to start controlling how the vehicle moves, but on their own they only provide teleoperated functionality. For the vehicle to move by instruction in a way that resembles an autonomous vehicle, the model of the system and the control algorithm are required.

The mathematical model and the controller are unique for each type of vehicle, with its own specific characteristics such as wheel size, width, length, height, weight, and so on. With all that information, a mathematical (dynamic or kinematic) model is developed that provides the ability to understand and predict the vehicle’s behavior in a given scenario.

For example: a vehicle will not behave the same when it brakes at 100 km/h as when it brakes at 10 km/h; it will not behave the same if you press the brakes 10% or 100%; and its behavior will also change if the road is wet and slippery or dry and hot. A good mathematical model gives us the opportunity to understand all these constraints.
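To give a feel for what one of these models looks like, below is a minimal sketch of a kinematic bicycle model, one of the simplest vehicle models. It only captures geometry (wheelbase and steering), not tire slip or road conditions, and the wheelbase and time-step values are placeholders, so treat it as an illustration rather than something a real vehicle would rely on.

```python
import math

def kinematic_bicycle_step(x, y, heading, speed, steering_angle,
                           wheelbase=2.7, dt=0.05):
    """Predict the vehicle's next pose after dt seconds.

    A kinematic model only captures geometry, not dynamics such as tire slip
    or load transfer, so it is only a rough predictor at higher speeds or on
    slippery roads.
    """
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    heading += (speed / wheelbase) * math.tan(steering_angle) * dt
    return x, y, heading

# Example: predict 1 second of motion at 10 m/s with a small steering angle.
x, y, heading = 0.0, 0.0, 0.0
for _ in range(20):  # 20 steps of 0.05 s
    x, y, heading = kinematic_bicycle_step(x, y, heading,
                                           speed=10.0, steering_angle=0.05)
print(round(x, 2), round(y, 2), round(heading, 3))
```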

The controller is a program that takes the model, the desired action, and the current state of the vehicle, calculates how the vehicle will behave, and looks for the correct values that need to be sent to the actuators in order to achieve the desired action.

This is a huge area that has been under investigation for a long time, commonly called “Control Theory”. There are some well-known control algorithms that are used regularly in autonomous vehicles; some of the most common are the Proportional, Integral, and Derivative controller (PID), the Stanley controller, and the Model Predictive Controller (MPC). Each algorithm has its benefits and its constraints, so you need to evaluate your specific use case and select the one that fits your application best.


Figure 3: Autonomy Computer with control loop Information Flow
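To give a taste of what a controller looks like in code, here is a minimal sketch of a PID speed controller. The gains and the toy vehicle response are made up for illustration and would need tuning against the real vehicle model; Stanley and MPC follow the same idea of turning an error into actuator commands, just with more sophisticated math.

```python
class PIDController:
    """Minimal PID controller: turns an error (desired - actual) into a command."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.previous_error = 0.0

    def step(self, desired, actual, dt):
        error = desired - actual
        self.integral += error * dt
        derivative = (error - self.previous_error) / dt
        self.previous_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: regulate speed toward 20 m/s. Gains and vehicle response are toys.
speed_controller = PIDController(kp=0.5, ki=0.05, kd=0.1)
speed = 15.0
for _ in range(10):
    throttle = speed_controller.step(desired=20.0, actual=speed, dt=0.1)
    speed += throttle * 0.5  # toy vehicle response, not a real model
print(round(speed, 2))
```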

Environment Perception

 

Perception of the environment covers all the activities that capture information about the world around the vehicle at a given point in time and then translate that information into a useful set of data that will be used to make decisions and to instruct the control algorithm how to control the vehicle safely.

 

What is required to get the environment information?

 

Sensors, sensors, and many more sensors.

 

A sensor is a device that responds to a physical stimulus (such as heat, light, sound, pressure, magnetism, or a particular motion) and transmits a resulting impulse (as for measurement or operating a control). [Merriam-Webster]

Here I list the (arguably) most relevant sensors that an autonomous vehicle needs.

  • Inertial Measurement Unit ( IMU )

  • Camera

  • RADAR

  • Ultrasonic 

  • LIDAR

  • Global Navigation Satellite System (GNSS)

Each sensor is its own world, and we are going to discuss all of them in a different post, but for now let's just talk about the information each one provides and how it helps with the perception of the environment.

The IMU provides the vehicle's acceleration, angular velocity, and orientation. That is critical information for any decision: it gives a better sense of the real state of the vehicle, it allows the vehicle to understand whether the motion commands being sent are actually working, and it also provides essential information to the vehicle controller.
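As a tiny illustration of how IMU data gets used, the sketch below integrates invented yaw-rate readings over time to estimate how the heading has changed. Real systems filter this and fuse it with other sensors, because raw integration drifts quickly.

```python
# Integrating the IMU yaw rate (rad/s) over time estimates how the vehicle's
# heading has changed. In practice this drifts and is fused with GNSS and wheel
# odometry; the readings below are invented for illustration.
yaw_rates = [0.00, 0.02, 0.05, 0.05, 0.03, 0.01]  # rad/s, one sample per 0.1 s
dt = 0.1

heading_change = 0.0
for rate in yaw_rates:
    heading_change += rate * dt

print(f"Estimated heading change: {heading_change:.3f} rad")
```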

Cameras, RADARs, and LIDARs provide information about the world. Cameras can provide essential information for object detection and classification, while RADARs and LIDARs also provide information about the distance between the vehicle and the objects nearby.

Some of the information typically retrieved from the cameras is the type of an object and its distance, but their main responsibility is to provide visual information about street signals such as stop signs, traffic lights, and crossings: information that does not have any volume but is represented graphically.

What's the main difference between RADAR and LIDAR?

Both sensors help provide information about the distance between an object and the vehicle. The RADAR will show whether there is an object or not and its distance, but it will not provide any information about the characteristics of the object, while the LIDAR can provide a high-resolution point cloud with much more information, such as the width, height, distance, and volume of the objects around the vehicle.
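To see why a point cloud says more than a single range reading, here is a toy sketch that takes a handful of invented LIDAR points, already segmented as one object, and estimates its distance, width, and height; a real pipeline would first have to cluster and segment the full cloud.

```python
# Toy LIDAR cluster: (x, y, z) points in meters, already segmented as one object.
# These points are invented for illustration.
object_points = [
    (20.1, -0.9, 0.2), (20.3, 0.8, 0.3), (20.2, -0.7, 1.4),
    (20.4, 0.9, 1.5), (20.2, 0.0, 0.8),
]

xs = [p[0] for p in object_points]
ys = [p[1] for p in object_points]
zs = [p[2] for p in object_points]

distance = min(xs)         # closest point along the forward axis
width = max(ys) - min(ys)  # lateral extent
height = max(zs) - min(zs) # vertical extent

print(f"distance ~ {distance:.1f} m, width ~ {width:.1f} m, height ~ {height:.1f} m")
# A RADAR return for the same object would typically give distance (and relative
# speed) but not this shape information.
```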

In order to make decisions, the perception system has to gather the information from all the sensors and use it together, because the sensors complement each other: to get an accurate object classification, graphical information as well as the physical properties of the object are required. To put all this information together there is a technique called sensor fusion, which takes the input of the multiple sensors and combines them into a coherent and understandable picture of the world.

Imagine that there is a car in front of the vehicle. That car may be detected by the RADAR, LIDAR, and cameras, but each one might report slightly different information at slightly different times, and you can reach a different conclusion from each of them. Maybe the RADAR determines that something is in front of the vehicle at 20 m but cannot determine what it is. The LIDAR detects the same thing at 21 m and can also provide its volume, but it could be a wall or a car; you don't know exactly. Finally, the cameras recognize that it is a car. If all that information is put together, the system can classify the object and decide based on which piece of information is most reliable for each sensor: use the classification from the camera images and the volume from the LIDAR, but take the distance reported by the RADAR, and with that combined information make a better decision.
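A very simplified way to picture that fusion step in code is shown below: each sensor contributes the attribute it is trusted most for. Real sensor fusion relies on probabilistic filters (Kalman filters, for example) and careful time alignment; the detections here are invented.

```python
# Invented detections of the same object from three sensors; real fusion would
# also have to associate detections and align their timestamps.
radar_detection  = {"distance_m": 20.0, "relative_speed_mps": -1.2}
lidar_detection  = {"distance_m": 21.0, "width_m": 1.8, "height_m": 1.5}
camera_detection = {"object_class": "car", "confidence": 0.93}

def fuse(radar, lidar, camera):
    """Combine each sensor's strongest attribute into one object description."""
    return {
        "object_class": camera["object_class"],    # classification from the camera
        "distance_m": radar["distance_m"],         # range from the RADAR
        "relative_speed_mps": radar["relative_speed_mps"],
        "width_m": lidar["width_m"],               # shape from the LIDAR
        "height_m": lidar["height_m"],
    }

print(fuse(radar_detection, lidar_detection, camera_detection))
```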

Motion Planning

For Motion Planning, one relevant sensor that must be introduced is the Global Navigation Satellite System (GNSS), which provides the system with the necessary information about the localization of the vehicle in the world.

The main objective of Motion Planning is to move the vehicle from point A to point B in a way that is safe for the vehicle and for any element that interacts with it. It might sound like an easy task; it's only moving from A to B, right? In reality it is a super complex task: there are countless elements and variables that can change during the mission of the car, and the vehicle must consider all the possible alternatives to make the best decision at each step of the way.

 

Motion Planning can be subdivided into three subsections: the Mission Planner, the Behavioral Planner, and the Local Planner.

 

The main objective of the mission planner is to define the mission to navigate through the roads toward a specific destination. You can think of the mission planner as the map: it will provide the roads the vehicle is going to take and plan accordingly, depending on several different factors like traffic.


Figure 11: Tesla GPS Map.

Source: Unsplash

Author: Brecht Denil
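Under the hood, a mission planner is essentially a search over a road graph. The sketch below runs Dijkstra's shortest-path algorithm on a tiny invented road network where the edge weights stand in for travel time; a real planner would weight the edges with live traffic, road class, and much more.

```python
import heapq

# Tiny invented road network: intersection -> {neighbor: travel time in minutes}.
road_graph = {
    "A": {"B": 4, "C": 2},
    "B": {"D": 5},
    "C": {"B": 1, "D": 8},
    "D": {},
}

def plan_route(graph, start, goal):
    """Dijkstra's shortest path: returns (cost, list of intersections to drive)."""
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, travel_time in graph[node].items():
            if neighbor not in visited:
                heapq.heappush(queue, (cost + travel_time, neighbor, path + [neighbor]))
    return float("inf"), []

print(plan_route(road_graph, "A", "D"))  # -> (8, ['A', 'C', 'B', 'D'])
```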

The Behavioural Planner’s main objective is to understand the current conditions under which the vehicle is moving and make the necessary considerations for a safe action. During behavioural planning, the vehicle is going to consider information like:

  • What are the Rules of the Road? 

  • What type of road is it driving on?

  • What are the weather conditions right now?

  • How much traffic is around it?

  • Are there any cyclists or pedestrians nearby?

 

It should make decisions like how much time to remain stopped at a traffic light or a stop sign, and when it can move again safely.

 

Actions like tracking the vehicle's own speed, tracking the front vehicle's speed, decelerating to a stop, remaining stopped for a period of time, or merging into a lane are the type of considerations the behavioral planner is in charge of.
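One common way to implement this kind of behavioral logic is a finite state machine: the vehicle is always in one maneuver state and events move it between states. The sketch below is a deliberately tiny, invented version of that idea with just three states; real behavioral planners handle far more cases.

```python
# A deliberately tiny behavioral state machine. States and transition rules are
# invented for illustration only.
FOLLOW_LANE = "follow_lane"
DECELERATE_TO_STOP = "decelerate_to_stop"
STAY_STOPPED = "stay_stopped"

def next_behavior(state, red_light_ahead, stopped, light_is_green):
    if state == FOLLOW_LANE and red_light_ahead:
        return DECELERATE_TO_STOP
    if state == DECELERATE_TO_STOP and stopped:
        return STAY_STOPPED
    if state == STAY_STOPPED and light_is_green:
        return FOLLOW_LANE
    return state

state = FOLLOW_LANE
state = next_behavior(state, red_light_ahead=True, stopped=False, light_is_green=False)
state = next_behavior(state, red_light_ahead=True, stopped=True, light_is_green=False)
state = next_behavior(state, red_light_ahead=False, stopped=True, light_is_green=True)
print(state)  # back to 'follow_lane'
```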


Figure 12: Traffic Lights.

Source: Unsplash

Finally, the Local Planner

The local planner is responsible for determining and making the right decisions about the acceleration, deceleration, or steering that the vehicle can safely apply to execute the action previously provided by the behavioral planner.

 

To put this into an example, imagine that you are driving from your office to your house. The mission planner is going to trace the main route, considering all the different options that exist. The behavioral planner is going to understand whether you are in a parking lot or on a street and whether there is a speed limit or not; if you are about to exit the parking lot and take the first street, it is going to keep you stopped until it is safe to merge onto the road. Once the behavioral planner determines it is safe to merge, the local planner is going to determine at what speed to enter, whether there is enough time to do it or not, and how much it needs to turn to the left or to the right. Once all these decisions have been made, the local planner communicates with the vehicle control, which sends the right commands to the vehicle for it to start moving.
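To tie that example together, here is a toy local-planner decision for that merge: given the gap to the nearest oncoming car, it decides whether to go, picks an entry speed, and computes a steering angle with a simple pure-pursuit-style formula. Every number and threshold here is invented purely for illustration.

```python
import math

def plan_merge(gap_to_oncoming_m, oncoming_speed_mps, lane_speed_limit_mps,
               lateral_offset_m, lookahead_m=8.0, wheelbase_m=2.7):
    """Toy local-planner step for merging onto a road.

    Decides whether the gap is big enough, what speed to enter with, and how
    much to steer toward the lane center (pure-pursuit-style geometry).
    All thresholds are invented for illustration.
    """
    time_gap_s = gap_to_oncoming_m / max(oncoming_speed_mps, 0.1)
    if time_gap_s < 4.0:  # not enough time: stay stopped
        return {"go": False, "target_speed_mps": 0.0, "steering_rad": 0.0}

    target_speed = min(lane_speed_limit_mps, 5.0)  # enter gently, then speed up
    alpha = math.atan2(lateral_offset_m, lookahead_m)
    steering = math.atan2(2.0 * wheelbase_m * math.sin(alpha), lookahead_m)
    return {"go": True, "target_speed_mps": target_speed,
            "steering_rad": round(steering, 3)}

print(plan_merge(gap_to_oncoming_m=60.0, oncoming_speed_mps=10.0,
                 lane_speed_limit_mps=13.9, lateral_offset_m=3.0))
```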


Figure 13: Traffic Crossing.

Source: Unsplash

Author: bantersnaps

I hope this provides a better general understanding of the technologies behind an autonomous vehicle. In upcoming articles we will approach each of these technologies and concepts in a more detailed way. Until then, please feel free to look for additional information; there are plenty of resources on the internet.

Please feel free to leave any comments with questions or suggestions in the section below.