The Trolley Problem has become a part of popular culture: it is featured in the NBC show The Good Place, it has inspired many comical variations in an article from The New Yorker, and there are countless memes on the topic. In this piece, I will address a couple of modern approaches to the Trolley Problem (if you want to know more about its origins, please see another piece I wrote — The Trolley Problem: The Origins). If you don’t know what the Trolley Problem is, watch this video:
The episode about the Trolley Problem in The Good Place is also quite interesting (the image at the beginning of this article is from that episode). The Good Place is, as far as I know, the first TV show about ethics ever made (please correct me in the comments if I am wrong). The premise, if you have never seen it, is that people die and, if they were good, they go to the Good Place. Several characters in the show think they are there by mistake, including Chidi Anagonye (played by William Jackson Harper), a professor of moral philosophy who gives ethics lessons to the group of characters who feel they are there by mistake and want to become better people. There is also a demon called Michael (played by Ted Danson) who does not quite grasp the concept of “being good”. Here is the opening of The Trolley Problem episode:
Later in the same episode, Michael says that the Trolley Problem “is so theoretical”, and he places Chidi in the actual situation:
In a real-world scenario, our decisions are not made the same way as when we are reasoning in a purely hypothetical manner. Some have said that, because of this fact, thought experiments like the Trolley Problem have little use in the real world. However, in this piece, I will point out one very real use these thought experiments now have. But let’s start with the idea that the way people react when faced with actual situations is different from how they react to hypothetical situations that are, so to speak, detached.
Example 1 — If the Trolley continues on its tracks it will kill five people, but if you veer off, you only kill one person:
Example 2 — If you throw a Fat Man off the bridge you kill one and save the five:
What Greene and Cohen discovered by testing these and other moral dilemmas is that they engage very different parts of the brain: “when the people were pondering a hands-off dilemma, like switching the trolley onto the spur with the single worker, the brain reacted differently: only the area involved in rational calculation stood out. Other studies have shown that neurological patients who have blunted emotions because of damage to the frontal lobes become utilitarians: they think it makes perfect sense to throw the fat man off the bridge.” See “Brain imaging study sheds light on moral decision-making.”
In example 1, the calculating part of the brain is activated; in example 2, the emotional part of the brain is activated. Initially, when Greene first created his experiments, he thought that there was a radical separation between reason and emotion in moral responses. If that were so, the differences in responses to the two situations would not need to be accounted for in an ethical theory. These experiments would indicate that moral intuitions are not good guidance for ethical theories, since they can vary depending on biological factors (see Joshua Greene, “Solving the Trolley Problem”). He thought we should be using reason, not emotions, for our decisions.
More recently, Greene has developed other experiments and concluded that both emotion and rationality are needed for morality. However, he still thinks that we should be aware of and cautious about our emotional responses, since: “Emotional responses, which are influenced by humans’ biological makeup and social experiences, are like the presets: fast and efficient, but also mindless and inflexible. Rationality is like manual mode: adaptable to all kinds of unique scenarios, but time-consuming and cumbersome.” (Peter Saalfield, “The Biology of Right and Wrong”).
Another recent use of the Trolley Problem is applying it to self-driving cars. When a self-driving car faces a situation in which someone will inevitably get hurt, what moral principles should it follow? Should it prioritize keeping the driver safe? Should it prioritize causing the least possible damage?
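To make the contrast between these principles concrete, here is a deliberately simplified sketch of how two of them could be encoded as decision rules. Everything in it — the function names, the crash options, and the numeric “harm scores” — is invented for illustration; no real autonomous-driving system works this way:

```python
# A toy illustration of encoding moral principles as decision rules.
# All names and numbers are hypothetical; real systems are far more complex.

def pick_action_utilitarian(options):
    """Choose the option that minimizes total expected harm."""
    return min(options, key=lambda o: o["harm_to_driver"] + o["harm_to_others"])

def pick_action_driver_first(options):
    """Choose the option that minimizes harm to the driver,
    breaking ties by harm to others."""
    return min(options, key=lambda o: (o["harm_to_driver"], o["harm_to_others"]))

# Two hypothetical outcomes of an unavoidable crash:
options = [
    {"name": "swerve into wall", "harm_to_driver": 5, "harm_to_others": 0},
    {"name": "stay on course",   "harm_to_driver": 0, "harm_to_others": 8},
]

print(pick_action_utilitarian(options)["name"])   # -> swerve into wall
print(pick_action_driver_first(options)["name"])  # -> stay on course
```

Notice that the two rules pick opposite actions on the very same inputs: the whole moral disagreement ends up compressed into one line of code, which is precisely why the choice of principle has to be made ahead of time.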
MIT has a program called Moral Machine:
You can participate in this research by going to http://moralmachine.mit.edu/ and clicking on Start Judging.
The issues raised by self-driving cars arise because, while in real-world scenarios we are forced to decide in the moment and are therefore not held fully responsible, decisions about how a self-driving car will behave can be thought through ahead of time. If we think back to Greene and Cohen’s examples, we are now faced with using only the rational part of the brain, since the emotional part can easily be set aside. But does that make our task easier?
Please watch this video that illustrates the issue at hand:
To end this reflection, let’s go back to The Good Place. Besides examples 1 and 2 above, there is another variation, proposed by the philosopher Judith Jarvis Thomson (also mentioned in The Trolley Problem: The Origins). In this example, Thomson added a third track leading to ourselves, while our hand is on the lever:
What should we do in this situation? Turn the trolley toward ourselves, turn it toward the other person, or let it continue straight on? What should we do in a self-driving car? Sacrifice ourselves and hit a wall, or hit someone else? The demon Michael in The Good Place, after learning ethics from Chidi, thinks he has the answer:
What do you think?