26 Apr 2017
4:00 pm - 5:30 pm
470 Stephens Hall
Dr. Jack Stilgoe
Senior Lecturer, Department of Science and Technology Studies, University College London
In the middle of the afternoon on May 7th, 2016, near Williston, Florida, Joshua Brown joined the list of more than a million people who die every year on the world’s roads. However, his death was different. He was his car’s only occupant but, as far as we know, he was not driving. His car was in ‘Autopilot’ mode, a feature advertised as relieving “drivers of the most tedious and potentially dangerous aspects of road travel.” The technology in his Tesla Model S that was designed to keep him safe failed to differentiate between a white truck that was crossing his carriageway and the bright white sky behind it. Brown’s Tesla hit the trailer at 74mph (in a 65mph zone), after which it left the road and hit a post. Had the car veered left instead of right, crossing onto the opposite carriageway, the world’s first fatal self-driving car crash could have caused a higher death toll and far greater controversy.
Such incidents reveal the uncertainties and imperfections of complex new technologies. But they also reveal large gaps in governance. Self-driving cars promise to be one of the most disruptive technologies of the early 21st century. Proponents imagine them as a solution to problems as varied as road safety, sustainability, congestion and accessibility. Realizing the potential economic and social benefits of self-driving cars while mitigating their hazards will mean improving upon the default mode of governance, which tends to recognize the effects of technology only in hindsight.
If technology is a form of social experiment, with limited predictability and control, then we should pay particular attention to questions of who learns what from innovation as we seek to make its progress more responsible. With driverless cars, this question is sharpened by the nature of the technologies themselves. Driverless cars depend on machine learning and, as advertised by Tesla, 'fleet learning', with the technology improving in the wild, learning from humans while seeking to surpass their performance as drivers. Much of the governance discussion surrounding driverless cars has drawn upon thought experiments in practical ethics. A social learning approach changes the questions we might ask as we seek to govern this emerging technology: Who should control software and data? How should driverless cars be labelled? What other issues lie beyond the question of the lives they save and the lives they take?
Additional sponsorship comes from: CSTMS