Written by TKS Toronto student, Zara Syed.


Self-driving cars make our lives so much easier. Your car drives so you don’t have to. You can commute to work while reading the news or eating the breakfast you so often skip. You’ll be at your destination with minimal effort. All you need to do is tell your car where you want to go, and… it takes you.

So easy and so awesome. 

But how?

First, the car locates itself and its destination.

The car locates itself and its destination on a highly accurate electronic map called an HD (high-definition) map. The car uses three methods to understand location:

  1. Absolute location — the car finds the location using satellite signals
  2. Relative location — the car calculates the location by adding the distance and direction travelled to the last determined location
  3. Hybrid location — the car uses a combination of absolute and relative location (see the code sketch below)
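
To make the hybrid idea concrete, here's a minimal Python sketch (with made-up numbers) of dead reckoning corrected by an occasional satellite fix. The simple weighted average is a stand-in for the probabilistic filters (like the Kalman filter) that real cars use; everything here is illustrative, not a production localizer.

```python
import math

def dead_reckon(x, y, heading_deg, distance):
    """Relative location: advance the last known position
    by the distance and direction travelled."""
    heading = math.radians(heading_deg)
    return x + distance * math.cos(heading), y + distance * math.sin(heading)

def hybrid_fix(reckoned, gps, gps_weight=0.7):
    """Hybrid location: blend the dead-reckoned estimate with a
    satellite fix. Real cars use probabilistic filters (e.g. a
    Kalman filter); a weighted average keeps the idea visible."""
    (rx, ry), (gx, gy) = reckoned, gps
    return (gps_weight * gx + (1 - gps_weight) * rx,
            gps_weight * gy + (1 - gps_weight) * ry)

# Drive 10 m due east from (0, 0), then correct with a (noisy) GPS fix.
estimate = dead_reckon(0.0, 0.0, heading_deg=0.0, distance=10.0)
print(hybrid_fix(estimate, gps=(10.4, 0.3)))  # ≈ (10.28, 0.21)
```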

Once the car knows where it is in the world, it matches the real-world location to a location on its electronic map. We call this map matching.
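
In its simplest form, map matching means snapping the estimated position to the closest point on a known road. Here's a minimal sketch, assuming the road is a straight two-point segment (real HD maps use far richer geometry and probabilistic matching):

```python
def snap_to_road(p, a, b):
    """Map matching, minimally: project point p onto road segment a-b
    and return the closest on-road point. Real map matching also uses
    heading, road topology, and probabilities, not just distance."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    # Parameter t of the projection, clamped so we stay on the segment.
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return ax + t * dx, ay + t * dy

# A GPS fix a little off a straight east-west road (hypothetical coordinates).
print(snap_to_road(p=(5.0, 0.8), a=(0.0, 0.0), b=(20.0, 0.0)))  # (5.0, 0.0)
```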

Second, the car plans its route.

The car plans its route using the electronic map. Information on the map is stored in three layers (sketched in code after the list):

  1. The active layer (roads and structures)
  2. The dynamic layer (real-time surroundings input from sensors)
  3. The analysis layer (where decision making happens)
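
One way to picture this is as three fields on a map object. The class and field names below are invented for illustration; real HD-map formats are much more involved.

```python
from dataclasses import dataclass, field

@dataclass
class ElectronicMap:
    """Illustrative three-layer electronic map (names are hypothetical)."""
    active: dict = field(default_factory=dict)    # roads and structures (static)
    dynamic: dict = field(default_factory=dict)   # real-time input from sensors
    analysis: dict = field(default_factory=dict)  # decisions derived from the above

em = ElectronicMap()
em.active["road_segments"] = [("A", "B"), ("B", "C")]
em.dynamic["traffic"] = {("A", "B"): "heavy"}
em.analysis["chosen_route"] = ["A", "B", "C"]
```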


Example of an HD map
The car reads all possible routes from the active layer and real-time information, like current traffic, from the dynamic layer. Using this information and path-planning algorithms, the car picks the best route.
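
To make that concrete, here's a minimal sketch of one classic path-planning algorithm, Dijkstra's shortest path, run over a tiny made-up road graph. Edge weights stand in for travel time, so heavy traffic reported in the dynamic layer simply raises a weight; real planners are far more sophisticated.

```python
import heapq

def best_route(graph, start, goal):
    """Dijkstra's algorithm: always expand the cheapest known route
    first, until the goal is reached. Weights = estimated travel time."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph.get(node, []):
            heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return None  # no route exists

# Hypothetical road graph: travel times in minutes; B-D is congested.
roads = {
    "A": [("B", 2), ("C", 5)],
    "B": [("D", 9)],  # heavy traffic reported on this stretch
    "C": [("D", 3)],
    "D": [],
}
print(best_route(roads, "A", "D"))  # (8, ['A', 'C', 'D'])
```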

Finally, the car is on its way.

It uses sensors (like lasers and radar) to sense its surroundings (other cars, lane markings, pedestrians, and so on) and stores that information in the dynamic layer.

Cars can use laser perception to sense how far away objects are. The car sends out a laser beam that hits objects and bounces back to a receiver on the car. The car uses the reflection time and the strength of the returned signal to determine distance.

Bats use echolocation the same way; they make noise at an ultrasonic frequency, which humans can’t hear. The sound then bounces off of objects back to the bat’s ears as an echo that the bat uses to form an image of its surroundings.

Cars can also use radar perception to sense how far objects are. The car sends out millimetre waves that hit objects and bounce back to the car. Using how long the wave takes to come back, the car can sense the proximity of objects in its surroundings.
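
Laser and radar ranging both come down to the same time-of-flight arithmetic: the signal travels at (roughly) the speed of light, out and back, so distance = speed × time ÷ 2. A quick sketch, with an illustrative echo time:

```python
SPEED_OF_LIGHT = 299_792_458  # metres per second

def distance_from_echo(round_trip_seconds):
    """Time-of-flight ranging (laser or radar): the signal covers the
    distance twice (out and back), so divide the round trip by two."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A reflection arriving 200 nanoseconds after the pulse left the car:
print(distance_from_echo(200e-9))  # ≈ 29.98 metres
```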


Example of environmental perception using proximity

Cars use visual perception to understand traffic signals, see where lanes are, and identify objects. Just like humans can see roads, cars, pedestrians and more when driving, cars can use computer vision to do the same.
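
For a taste of what visual perception can look like in code, here's a hedged sketch of classic lane-line detection with OpenCV: find edges, then find long straight lines among them. The filename road.png is hypothetical, and production systems typically rely on trained neural networks rather than this hand-tuned pipeline.

```python
import math
import cv2

# Load a dashcam frame (hypothetical filename) and simplify it.
frame = cv2.imread("road.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

# Find edges, then straight line segments among those edges —
# painted lane markings show up as long, strong lines.
edges = cv2.Canny(blurred, 50, 150)
lines = cv2.HoughLinesP(edges, 1, math.pi / 180, 50,
                        minLineLength=100, maxLineGap=10)

# Draw each detected segment back onto the frame in green.
for line in (lines if lines is not None else []):
    x1, y1, x2, y2 = line[0]
    cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)

cv2.imwrite("road_lanes.png", frame)
```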

Example of computer vision

Recap:

  • The car locates itself and its destination on the HD map
  • It uses algorithms to plan a route
  • Then it drives safely along the planned route, using sensor input to watch for obstacles

Self-driving cars can do all the things humans do when driving! But better. And differently. Let the car do the driving.