Originally published on Medium on March 14, 2018.
I sum up my feelings about utilitarianism this way: utilitarianism is good ethics but bad morals.
First off, I need to clarify that the distinction between morality and ethics is drawn vaguely and inconsistently. This is one of those cases where I’m artificially pushing two terms apart to make them easier to think about. For the most part, I use the terms interchangeably. The distinction I will focus on here is that “ethics is about society but morality is about you”[1].
If consequences could actually be predicted and values assigned, the idea of maximizing aggregate utility would make pretty good sense. When looking at things at a societal level, it often makes sense to analyze the consequences of a system in utilitarian terms. In this sense, utilitarianism is a good ethical theory. Under the surface, most ethical systems are either utilitarian or oracular. That is, they either aim to increase some good or derive from some oracular source (usually the divine or evolution, both of which are problematic oracles, although for very different reasons). I do not mean to claim that everything is Utilitarianism-with-a-capital-U. Rather, when we push hard on the “why?” of an ethical system, we can generally say that the reason is to enhance well-being. Pinning down what that means is a much harder problem. So for the moment, let’s leave aside exactly what utility is.
Since predicting effects and assigning values is not possible in practice — complex systems are inherently chaotic and unpredictable — it is not feasible to apply utilitarian thinking to individual moral choices. Acts have second-, third-, and Nth-order consequences that we cannot predict. Even worse, humans have consistent biases that make us bad at predicting consequences, and our assignment of utility tends to follow our biases about our friends and foes. In other words, we think we are better than we are, that others are worse than they are, and that acts that hurt those we like are less damaging than acts that hurt those we dislike. I tend to be rather hard on variants of act utilitarianism for this reason. Maybe if we were perfectly fair oracles, we could make correct decisions based on utility. Sadly, we are not. This imperfection is also why I tend to categorically reject ends-justify-the-means thinking, which has the same predictability failings as act utilitarianism while also ignoring the side effects that achieving a particular end may have.
We tend to do better with rule-based systems where the rules are those that cause the right things to happen most of the time. We can justify these rules using utilitarian reasoning (e.g., rule utilitarianism variants). We can justify them on the basis of what makes a person good (e.g., virtue ethics). We can make the rules a matter of personal duty (à la Kant). There are as many ways of thinking about these rules as there are systems of ethics. The advantage of these systems over act utilitarianism is that they simplify decision making: just follow the rules.
That seems to leave little room for utility in day-to-day decision making. Maybe the deep thinkers who try to understand what rules should be will spend time thinking about trade-offs in human flourishing. For the rest of us, we will just keep on doing as we are told.
Like that? I didn’t think so.
The problem with this view is that we do not want to have our personal moral rules handed to us by distant ethical thinkers. We may take some system, whether it be secular or religious, as a starting point, but when that system does not make sense to us, we should question it and tweak it to be meaningful for us. This is where utility becomes important again. If we are going to be creators of our own moral systems, and if we are going to work with others to try to influence the ethical systems of the society we live in, we need to be able to get down to the root of why one rule is better than another.
We also need to understand how to think in terms of utility because rule systems are never complete. They will never cover every situation. “Don’t lie” is a good rule in groups that need a general level of trust to function effectively. It is not useful when a lie can save your life. We need to understand that rules come with (usually implicit) contexts that define when they apply. In situations where that context does not apply, we will essentially be relying on act utilitarian style analysis to make a decision in the moment. We can then use that new experience to modify the old rules.
And that, at length, is why I think that utilitarianism is a valuable tool to have in our ethical toolkit even though it does not provide a useful guide to individual moral choices.
[1] Christopher Panza and Adam Potthast, Ethics For Dummies (Wiley Publishing, Inc., 2010), 10.