Boffins Manage To Fool Driverless Car Software
Scientists at four of the USA’s most respected universities, led by the University of Washington, published a paper last week with sobering implications for the future of driverless cars.
Whilst players in the motor industry race for market dominance in the future goldmine of autonomous vehicles, these researchers have turned from ‘getting it right’ to consider ‘what could go wrong’.
They took the image-recognition software some vehicles use to read road signs and explored how easily its artificial intelligence could be fooled. The resulting report suggests it may not be difficult at all.
In one stark example, they placed four small black-and-white stickers, of carefully calculated size and position, onto an American-style ‘STOP’ sign. To the human eye it looked like nothing more than the vandalism that is all too commonplace; with nearly all of the sign’s surface unobscured, no driver would mistake it.
However, the driverless car software frequently misread it as a ‘Speed Limit 45’ sign.
The consequences of this happening on the road hardly bear thinking about. The boffins achieved the feat by analysing the recognition software, working out how it identified signs, and then how to fool it. In other words, they used an approach well worn by hackers of traditional computers.
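For the technically curious, the idea can be sketched in a few lines of Python. This is emphatically not the researchers’ own method (their attack targeted a deep neural network and confined its changes to printable, sticker-shaped patches); it is a toy illustration of the underlying ‘white-box’ trick, known as the fast gradient sign method, using an invented classifier and made-up numbers throughout.

```python
import numpy as np

# Toy stand-in for a sign classifier: a logistic model over an
# eight-pixel "image". A score above 0.5 means "speed limit";
# below means "stop". The weights are hand-picked for this demo.
w = np.array([1.0, -2.0, 0.5, 1.5, -1.0, -0.5, 2.0, -1.5])

def p_speed_limit(x):
    """The model's probability that the input shows a speed-limit sign."""
    return 1.0 / (1.0 + np.exp(-(w @ x)))

# An input the model confidently reads as "stop".
x = np.array([0.2, 0.9, 0.1, 0.0, 0.8, 0.7, 0.05, 0.9])
print(f"clean sign:     P(speed limit) = {p_speed_limit(x):.3f}")   # ~0.02

# The attack: having inspected the model, we know the gradient of its
# score with respect to each pixel (here simply w). Nudge every pixel a
# small, fixed step in whichever direction raises the wrong class's
# score: the fast gradient sign method (Goodfellow et al., 2015).
epsilon = 0.5
x_adv = np.clip(x + epsilon * np.sign(w), 0.0, 1.0)
print(f"perturbed sign: P(speed limit) = {p_speed_limit(x_adv):.3f}")  # ~0.74
```

A real-world attack is far more constrained than this sketch, since the perturbation must survive printing, weathering, distance and viewing angle, which is what makes the sticker result above so striking.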
In the early days, before they learned how to use their skills to extort money, hackers made systems crash simply for fun. The prospect of a new generation loitering at roadsides, trying to make autonomous cars crash for fun, is deeply alarming.
The boffins hope that their work is taken seriously by the manufacturers. We can but agree.