
Posted on Jun 18, 2015

Will your self-driving car be programmed to kill you if it means saving more strangers?

Read the full article on Science News.

The computer brains inside autonomous vehicles will be fast enough to make life-or-death decisions. But should they? A bioethicist weighs in on a thorny problem of the dawning robot age.

Imagine you are in charge of the switch on a trolley track. The express is due any minute, but as you glance down the line you see a school bus, filled with children, stalled at the level crossing. No problem; that’s why you have this switch. But on the alternate track there’s more trouble: Your child, who has come to work with you, has fallen down on the rails and can’t get up. That switch can save your child or a busful of others, but not both. What do you do?

This ethical puzzler is commonly known as the Trolley Problem. It’s a standard topic in philosophy and ethics classes, because your answer says a lot about how you view the world. But in a very 21st-century take, several writers have adapted the scenario to a modern obsession: autonomous vehicles. Google’s self-driving cars have already driven 1.7 million miles on American roads, and have never been the cause of an accident during that time, the company says. Volvo says it will have a self-driving model on Swedish highways by 2017. Elon Musk says the technology is so close that he can have current-model Teslas ready to take the wheel on “major roads” by this summer.
