Let’s set the scene and give a little context to our question.
Imagine a scenario where you are travelling in an autonomous car at high speed (nothing illegal, just fast). This could be on a motorway or a back-country road; the point is that you are motoring along. As you make headway on your journey, a child accidentally walks into the path of your car before falling over. They are left motionless, paralysed by fear and the bright headlights of your driverless car.
The car is then required to make an instantaneous decision, a judgement it will base on objectivity and logic. The car recognises that the child is younger than you and therefore, theoretically, has a longer lifespan ahead of them. Considering this, the car swerves drastically to avoid hitting the child, throwing you and the vehicle off the side of the road and to your inevitable demise.
Would you buy a car that made this decision? A vehicle that objectively weighed the value of your life against another's? This is a dilemma that autonomous car manufacturers around the world are having to deal with. Who should be saved in an unavoidable collision, and is it the passenger's or the car's right to make that decision? The question harks back to an ethical brainteaser called the Trolley Problem, a conundrum that has been debated around the dinner table for generations.
In this problem, a train is travelling down a track towards a group of five workers. You cannot warn them, but what you can do is redirect the train onto a second track. However, upon redirection, the train would hurtle towards a single worker, who would meet an equally gruesome end. So, what do you do? Do you let the train plough into the five workers, or do you save the majority by sacrificing the one?
You may have been able to answer the above based on the interests of the greater good (thanks, Hot Fuzz), but would your opinion change if you had to physically push the single person in front of the train to stop it hitting the five? Both scenarios have the same outcome, but would being the person directly responsible for causing the death of an innocent individual stop you from acting in the same way?
It’s a complex moral dilemma that everyone involved with autonomous transport technologies will eventually have to agree upon. Can you put a worth on a life, and if so, are there quantifiable differences between those worths?
With all these big, rather theoretical questions being thrown around, it is important not to stray from our initial query. Knowing that your futuristic autonomous car may kill you, depending on your perceived worth relative to others, would you still be happy to travel in one? Are you happy to trust your personal safety to a robot?
We’d love to hear any comments you have on this. Click here to drop us a tweet, or here for our other linked platforms!