The ethical dilemma of ‘algorithmic morality’

The ‘trolley problem’ is an iconic philosophical thought experiment that has shaped our understanding of what is right and wrong ever since Philippa Foot introduced it in 1967 [Source]. For those of you not familiar with it, try this simple game to familiarise yourself with the basics of the dilemma.


So what is your stance on this?

Would you do nothing and let the trolley kill the five people on the main track, or would you pull the lever, diverting the trolley onto the side track and killing one person?

Is sacrificing one life to save the lives of many others the best possible outcome? When presented with this choice, a true utilitarian would opt for saving as many lives as possible.

The trolley problem and ethics of driverless cars

The trolley problem was first introduced in the 1960s, but we are suddenly in a world in which autonomous machines, including self-driving cars, have to be programmed to deal with trolley-problem-like emergencies in which lives hang in the balance. Autonomous vehicles promise to dramatically reduce the number of traffic accidents; nevertheless, some accidents will be inevitable, because some situations will require choosing the lesser of two evils. Undoubtedly, driverless cars can throw up a whole host of ethical issues.

Every time a car heads out onto the road, drivers are forced to make moral and ethical decisions that impact not only their safety, but also the safety of others. All of these decisions have both a practical and a moral component, which is why the issue of allowing driverless cars—which use a combination of sensors and pre-programmed logic to assess and react to various situations—to share the road with other vehicles, pedestrians, and cyclists has created considerable consternation among technologists and ethicists.

“As we are about to endow millions of vehicles with autonomy, taking algorithmic morality seriously has never been more urgent,” write Bonnefon, J., Shariff, A., and Rahwan, I. in their 2015 study.

For those who are interested in how we can program machines to be ethical and design algorithm-based systems that are ethical, a great starting point for understanding how we as humans approach complex ethical dilemmas is a book by David Edmonds. ‘Would You Kill the Fat Man?’ tells the story of why and how philosophers have struggled with the trolley problem as an ethical dilemma. The author provides an entertaining and informative tour through the history of moral philosophy, with examples from real historical events.



The author starts with a comprehensive historical tour of the trolley problem, referencing the philosophers who developed and influenced this ethical dilemma over the years. The book then offers an analysis of the trolley problem and an investigation of the many factors that can influence our interpretations of its variations. It includes ten versions of the problem, so there are numerous moral valuations to explore. Through this narrative, the author manages to show how neuroscience, psychology, and behavioural economics are playing an increasing role in ethics.

This book is great for those who want a ‘gentle introduction’ to ethics. Even though it is a scholarly book, it is very accessible—no philosophy degree is needed to understand it.

The paperback version of the book is available here, and the audio version is available here.

The illustration used for the featured image of this post is by Riko Enomoto
