Transdev, the company that operates a majority of Melbourne’s buses, is preparing to expand its fleet over the next two years. Additions include self-driving shuttles.
It’s easy to see why companies like Transdev are embracing driverless technology. Although current self-driving systems are classed as driver assists, they are only a small technological leap from truly driverless vehicles. Recently, Transdev showed off an EasyMile EZ10, a fully autonomous bus, in Canberra and Darwin.
There are many potential benefits to be reaped from self-driving cars. As autonomous vehicles improve, we could see algorithms that help reduce traffic congestion, and once autonomous vehicles become the predominant form of transportation, speed limits could be raised to take advantage of computers’ faster reaction times. These changes would save passengers time and frustration, but easily the biggest boon autonomous vehicles could grant is increased safety. Computers don’t take risks; they can be equipped with better sensors than human drivers have, and they can calculate the best possible way to crash should a crash become inevitable. In an interview with The Age, Transdev’s chief strategy officer, Christian Schreyer, claimed that “90% of all accidents are caused by human error”.
However, this statistic, so often cited as the main reason we need self-driving vehicles, is not entirely accurate. It overstates how much damage can be attributed to human error, discounts mechanical error, and is drawn from unreliable post-crash data. Eliminating human error is not a silver bullet for eliminating vehicular accidents.
In March of this year in Arizona, a major testing ground for Uber’s self-driving vehicles, a woman was struck and killed by a self-driving SUV in what Arizona’s governor, Doug Ducey, called an ‘unquestionable failure’. In the wake of the fatal incident, Mr Ducey, who had previously approved autonomous vehicle testing by executive order, suspended Uber’s self-driving tests in Arizona.
Also in March, a man died in a crash involving his Tesla Model X SUV’s Autopilot. Data retrieved from the wreckage showed that the driver had repeatedly removed his hands from the wheel, something the car’s manual discourages, since Autopilot is a driver-assist tool, not a substitute for a driver. His hands had been off the steering wheel for six seconds when the crash occurred, with initial reports suggesting that Autopilot failed at a point where the lane split, a situation it is ill-equipped to handle.
Another concern that often goes overlooked with self-driving cars is the potential for a vehicle to be hacked remotely with malicious and criminal intent. Cars designed to communicate with the vehicles around them are at risk of remote attack: a hacker could alter what the autopilot registers as the road, or as a safe distance from surrounding cars. Such an incident has the potential for untold damage and would put the lives of not only the target but also those around them at risk.
So while self-driving cars will likely become the norm in the coming decades, it’s important to assess the risks and understand what can be done to mitigate any potential damage.