Raj Nair, the development chief of Ford Motor's self-driving car program, recently told the New York Times that, in their current state, autonomous cars aren't ready to take on complex driving situations.
The comments came in the aftermath of the first fatal crash involving a car operating on autopilot.
If Ford, one of the largest automakers in the world, has decided that autonomous cars aren't yet safe enough for the road, how safe can anyone feel in, or next to, a car driving on autopilot?
On Oct. 14, 2015, entrepreneur Elon Musk's car company Tesla sent out a software update to its all-electric sedan, the Model S. The step was a leap toward autonomous-car availability, even with the car's price of at least $66,000. However, it raised questions about whether the Model S and similar cars are currently safe enough to be driving themselves. Can a machine make the complex, split-second decisions that a human can? The answer, currently, is no.
Two-thirds of Minnehaha juniors and seniors polled said they would not feel safe in a self-driving car. This distrust of machines is currently well founded: self-driving cars can't make complex decisions.
For example, autonomous cars have trouble turning left across oncoming traffic and driving in snow or heavy rain. These cars also can't read hand signals, so they couldn't follow a traffic officer's directions.
Self-driving cars will only become safer if the technology improves dramatically. The autopilot feature on the Tesla Model S only recently came out, and a fatality has already occurred, according to the National Transportation Safety Board (NTSB).
Tesla has said that several factors caused the car not to brake before smashing into a semi truck crossing its path at a 90-degree angle. One of these factors was simply the fact that the truck was white.
If the white coloring of a truck can keep the autopilot feature from functioning properly, autonomous cars have a long way to go before making the world safer.
There are also ethical issues with these vehicles.
Imagine a self-driving car heading down a road when a child runs out into the street, chasing a ball.
Does the machine try to stop the car, possibly hitting the child, or swerve out of the way, into oncoming traffic? Either way, if someone were injured or killed, is the machine to blame? Do you really want to put your life or the life of a child in the hands of an algorithm?
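To see why this question is so unsettling, consider a purely hypothetical sketch (not any real vehicle's software) of what such emergency logic might look like. The function, thresholds, and scenario are all invented for illustration; the point is that the ethical dilemma above gets reduced, in advance, to a hard-coded rule written by an engineer.

```python
# Hypothetical illustration only: a toy version of an autonomous car's
# emergency decision logic. The names and rules here are assumptions,
# not the code of any real system.

def emergency_decision(stopping_distance_m, distance_to_child_m,
                       oncoming_traffic):
    """Return 'brake' or 'swerve' when a child suddenly appears ahead."""
    if stopping_distance_m <= distance_to_child_m:
        return "brake"   # the car can stop in time; no dilemma
    if not oncoming_traffic:
        return "swerve"  # the opposite lane is empty; steer around
    # Neither option is safe. Whatever is returned here, a programmer
    # decided the outcome long before the child ran into the street.
    return "brake"

# A car needing 30 m to stop, a child 20 m away, oncoming traffic present:
print(emergency_decision(30.0, 20.0, oncoming_traffic=True))  # → brake
```

However the final branch is written, someone had to choose its default ahead of time; that choice is exactly the question posed above.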
An overlooked problem is the huge privacy issue raised by self-driving cars.
You will have to program the car with your destination, which means someone could gain access to information about where you live, work and go to school, and possibly find out where you are at any given moment.
This is not some futuristic science-fiction possibility.
Two hackers, Charlie Miller and Chris Valasek, demonstrated this for Wired magazine last year, hacking into the entertainment system of a Jeep Grand Cherokee to adjust its climate controls and radio. Their test also shed light on an even more threatening prospect: a hacker taking control of your car.
Many people already have trouble giving up control to a car that drives itself, but it is far more terrifying to consider that your car could be taken over and shut down, or worse, driven by remote control.
Miller and Valasek also showed that the hacking can be done anonymously over the Internet, from the comfort of a hacker's own home, which could be many miles away.
The possibilities for this new type of crime are myriad. In the future, car manufacturers will have to make sure that their cars are secure, but currently there is no such guarantee.
Self-driving cars may be the cars of the future.
One day, they may save time, money, and more importantly, lives.
But that day is far away, and a long transitional period will be necessary before these cars can become accepted and commonplace. Autonomous cars are not currently safe.