Autonomous Vehicles: Reality or another Science Fiction Fantasy?
Recently, with companies like Tesla introducing self-driving cars and features like Autopilot, many questions have arisen about the scope of self-driving cars and when they will become a reality. To get a better understanding, let’s cover some of the fundamentals that run behind the scenes.
What is Machine Learning?
Machine learning is a branch of Artificial Intelligence that focuses on providing systems with data from which they can teach themselves. A classic example is image classification: given a set of labeled images, a model learns to assign the correct label to new images it has never seen.
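As a minimal sketch of image classification, here is a toy nearest-neighbour classifier on tiny synthetic grayscale patches. The patches, labels, and the 1-nearest-neighbour rule are all illustrative choices, not how a production vision system works:

```python
import numpy as np

# Toy "images": 3x3 grayscale patches. Bright-center patches get one label,
# bright-left-edge patches another. (Data and labels are made up for the sketch.)
train_images = np.array([
    [[0, 0, 0], [0, 9, 0], [0, 0, 0]],   # bright center
    [[0, 0, 0], [0, 8, 0], [0, 0, 0]],   # bright center
    [[9, 0, 0], [9, 0, 0], [9, 0, 0]],   # bright left edge
    [[8, 0, 0], [8, 0, 0], [8, 0, 0]],   # bright left edge
], dtype=float)
train_labels = ["center", "center", "edge", "edge"]

def classify(image):
    """1-nearest-neighbour: return the label of the closest training image."""
    flat = image.reshape(-1)
    dists = [np.linalg.norm(flat - t.reshape(-1)) for t in train_images]
    return train_labels[int(np.argmin(dists))]

query = np.array([[0, 0, 0], [0, 7, 0], [0, 0, 0]], dtype=float)
print(classify(query))  # the bright-center query matches the "center" class
```

The "learning" here is just memorising labeled examples, but it captures the core idea: the system was never given an explicit rule for what "center" means, only data.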
Unfortunately, the machine learning models that run self-driving cars are much more complex than this, since their input is more than a simple 2-D image: it’s a 360-degree view of the car’s surroundings. And those surroundings can vary endlessly: different types of cars, trucks, signs, faded lanes, construction, people, and other obstacles the car has to detect.
As the number of inputs increases, we deal with more and more data, and eventually we run into issues such as learning biases, or having multiple ways to represent the same data. A useful analogy is fitting polynomials of varying degrees: higher-degree polynomials can model more complicated data sets, which is what real-world data looks like, but they also risk fitting the noise instead of the underlying pattern.
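The trade-off can be seen with a few lines of NumPy: fitting noisy samples with polynomials of increasing degree always lowers the training error, even past the point where the model is memorising noise (the data and degrees below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 10)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.1, size=x.size)  # noisy samples

for degree in (1, 3, 7):
    coeffs = np.polyfit(x, y, degree)                 # least-squares fit
    residual = np.sum((np.polyval(coeffs, x) - y) ** 2)
    print(f"degree {degree}: training residual {residual:.4f}")
# The residual shrinks as the degree grows, but a degree-7 curve through
# 10 points is chasing the noise -- low training error, poor generalisation.
```

This is exactly the kind of bias problem that grows with the number of inputs: more capacity to represent the data also means more ways to represent it wrongly.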
So how do we make predictions about what to output?
In one word: algorithms. There are many types of algorithms used to estimate outputs. The first I will cover are Pattern Recognition, or Classification, Algorithms. Once the ADAS cameras on the car receive images of the car’s surroundings, the system needs to classify those images and identify the objects in them. These algorithms help by detecting object edges and fitting line segments, curves, and arcs to them. Those arcs and line segments are then used as the primary features to classify an object. The process is made simpler by converting the 3-D view into 2-D data through homography.
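The homography step can be sketched in a few lines: a 3×3 matrix maps camera-image pixels to a flat top-down road plane in homogeneous coordinates. In a real ADAS pipeline the matrix is calibrated from known point correspondences; the values here are made up for the sketch:

```python
import numpy as np

# Illustrative (not calibrated) homography from image pixels to a
# top-down "bird's eye" road plane.
H = np.array([
    [1.0, 0.2,   -30.0],
    [0.0, 1.5,   -40.0],
    [0.0, 0.001,   1.0],
])

def to_road_plane(u, v):
    """Apply the homography to pixel (u, v) in homogeneous coordinates."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w   # divide out the projective scale factor

print(to_road_plane(100.0, 200.0))
```

Once lane edges and object outlines are projected onto this flat plane, distances and curve fits become ordinary 2-D geometry instead of perspective reasoning.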
But what happens when an Image isn’t properly classified?
This is where clustering algorithms come into play. They take images that are often low resolution and use centroid-based and hierarchical models to sort them into clusters with the highest commonality.
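A minimal version of the centroid-based idea is k-means, sketched below in plain NumPy on 2-D points standing in for feature vectors of similar-looking images (the data and parameters are illustrative):

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Minimal centroid-based clustering (k-means) on row-vector points."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        dists = np.linalg.norm(points[:, None] - centroids[None, :], axis=2)
        labels = np.argmin(dists, axis=1)
        # move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

# Two obvious blobs standing in for two groups of similar images.
points = np.array([[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
                   [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]])
labels, _ = kmeans(points, k=2)
print(labels)  # points in the same blob end up sharing a cluster label
```

Grouping ambiguous detections this way lets the system say "these frames look like the same unknown object" even before it can name that object.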
So how do these Cars make Decisions?
This is done via Decision Matrix Algorithms, which systematically identify and rate relationships between sets of information. The most commonly used algorithms here are gradient boosting machines (GBM) and AdaBoost. Once the analysis is done, the car can decide whether to turn left or right, to stop or accelerate, and so on. The answers to these questions all fall out once object and situation classification are complete.
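The core of a decision matrix can be sketched as a weighted scoring table: rows are candidate manoeuvres, columns are criteria scored by the upstream classifiers, and the weighted sums rank the options. Every number, action name, and criterion below is hypothetical, chosen only to show the mechanics:

```python
import numpy as np

# Hypothetical decision matrix: rows are candidate manoeuvres, columns are
# criteria scored in [0, 1] by upstream classifiers (all values illustrative).
actions = ["brake", "steer_left", "steer_right"]
criteria = ["lane_clear", "obstacle_dist", "rule_compliance"]
scores = np.array([
    [0.9, 0.2, 1.0],   # brake
    [0.3, 0.8, 0.6],   # steer_left
    [0.1, 0.7, 0.6],   # steer_right
])
weights = np.array([0.3, 0.5, 0.2])  # relative importance of each criterion

totals = scores @ weights            # weighted score per manoeuvre
best = actions[int(np.argmax(totals))]
print(best)
```

In a real system the scores would come from boosted ensembles rather than hand-typed constants, but the final step is the same: rate every option against every criterion and pick the best-scoring action.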
So what does all this mean for the chances we will have self-driving cars? Well, some already exist and can navigate simple road maps within a small radius. The algorithms and modeling techniques above work well for known routes and surroundings, which is why Tesla Autopilot functions very well on highways: the environment is mostly just lanes, exits, and other cars. The issue arises with unexpected objects or obstacles, which is why Autopilot isn’t fully autonomous and still requires the driver’s attention.

In the upcoming years, this form of assisted autopilot will only improve, but until models can handle unexpected obstacles, faded lanes, and other irregularities, fully autonomous vehicles are still many years away. Looking at this from a non-technical perspective, there are also many policy-making issues that come into play: traffic laws, responsibility for accidents, and so on. It seems we are on the edge of a revolution in the AI industry, but we still have years to go before society is completely ready to embrace it.