Population ethics

21 Mar 2023

Population ethics is a subfield of philosophy that attempts to sensibly compare two populations that may differ in size and material circumstances, and to decide which state of the world is better. That is, if we could make exactly one of these states a reality, which should we choose?

The way in which one answers these questions can have profound consequences. For example, if life is sacrosanct, then is there a moral obligation to have children? Is not having children, when you could, ethically tantamount to murder? Or, given the various risks that humanity faces, e.g. climate change, would it in fact be deeply immoral to bring more children into the world?

It also turns out that if you approach this topic in a naive way, you can get very confused very fast. People can end up accepting a series of reasonable-sounding logical steps that lead to conclusions that are, to say the least, extremely dubious.

Exhibit A: The repugnant conclusion

Let’s start in a simple-minded way. We’re given two states of the world, and we want to be able to decide which is better. We like numbers, so why don’t we try to assign a number to how ‘good’ each state is, where higher numbers mean better.

Presumably different people have different ideas about how good each state is, and will each assign their own number. We might imagine that the numbers represent some common unit of goodness - let’s call it utility. For example, just as we all agree to measure distance in metres so that we can compare readily without worrying about unit conversions, perhaps we can in principle do the same with utility.

So far, so simple.

Ok, since we have a common unit of utility, maybe the natural way to aggregate is to add everyone’s utility together. Let’s say that in a given state of the world, person \( i \) has utility \( u_i \). Then the total utility is

\[U = \sum_{i=1}^{N} u_i. \label{utility} \tag{1}\]

Great, now we can begin to think about how we might use this utility function to make decisions about which states are better than others. This is where the fun starts.

Let’s imagine we have states of the world \( A \) and \( B \). State \( A \) has 100 people who have wonderful lives. State \( B \) has 100 people who have lives that are just a tiny bit better than if they had never come into existence at all.

We don’t need to do any moral calculus to see that \( A \) is clearly better than \( B \). Ok, but now let’s add more people whose lives are just marginally worth living to state \( B \). If there were \( 10,000 \) such people, would \( B \) be better than \( A \)? How about \( 1,000,000 \)?

Most people’s intuition will tell them there is no number of additional people in state \( B \) that will make it better than \( A \). But that clashes with our equation \(\ref{utility}\). If we multiply any small positive quantity by a sufficiently large number, we can get a result as big as we want. That is to say, there is some number of people living lives that are just barely worth living whose existence is preferable to a smaller number of people living wonderful lives.
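To make the arithmetic concrete, here is a small sketch of the tipping point. The specific numbers - 1000 units for a wonderful life, 1 unit for a life barely worth living - are my own illustrative choices, not anything canonical:

```python
def total_utility(utilities):
    """Total utility U = sum of per-person utilities u_i (equation 1)."""
    return sum(utilities)

# Illustrative (made-up) utility values.
WONDERFUL, BARELY = 1000, 1

state_a = [WONDERFUL] * 100          # 100 wonderful lives: U_A = 100_000

# For any positive per-person utility, some population size tips the
# scales: any n > U_A / BARELY will do.
n = total_utility(state_a) // BARELY + 1
state_b = [BARELY] * n               # 100_001 barely-worth-living lives

print(total_utility(state_b) > total_utility(state_a))   # True
```

The particular numbers don’t matter: so long as the marginal lives carry any positive utility at all, equation \(\ref{utility}\) guarantees such an \( n \) exists.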

This is called the repugnant conclusion, and a lot of people don’t like it.

Ok, so what did we do wrong?

Maybe we shouldn’t be adding together everyone’s utility. Does it really make sense to do that?

The issue with abandoning additive utilities is that you end up in situations where adding another perfectly happy person to the world isn’t any better than not adding them. Or where adding a completely miserable person to the world isn’t any worse. That seems like a non-starter.

How about this: maybe some states are not comparable to others. There is just no sensible way of doing it.

The issue with this is that it isn’t so much a solution to our problem as it is giving up. Remember, we want to be able to decide which course of action is best. There is no way of doing that if some possibilities are simply not comparable to others.

Ok, then what’s left? Accept the repugnant conclusion? But now you can be led down some very strange roads. For example, you could justify inflicting extraordinary, undeserved suffering on people now in order to bring about a future that has trillions of people living lives that are marginally worth living. Maybe you’re willing to bite that bullet. I’m not.

Hidden option C: We are idiots who don’t know how to do maths

Or, maybe the real place we went wrong was when we allowed philosophers to do maths. Ok, I jest. But more seriously, why should we use numbers to represent how good a state of the world is? Who says goodness should have anything to do with numbers?

Indeed, every decent economics graduate learns about the canonical example of an ordering that cannot be represented using real numbers: lexicographic preferences. In short, imagine I like both tea and spending time with my partner. However, there is no amount of tea in the world (or China) that would convince me to spend any less time with her. Those preferences simply cannot be represented by a real-valued utility function. It turns out there just aren’t enough real numbers to fathom my depths.

Ok, so why can’t we have lexicographic preferences, or something similar, when it comes to population ethics? That is: no number of lives that are barely worth living would be morally preferable to 100 people having wonderful lives. Seems easy enough. We don’t have to come to any of the utterly bizarre conclusions that philosophers think are forced upon us. We just can’t use numbers to represent our preferences - we’re not such simple creatures.

It really is that easy. Nonetheless, you would be truly shocked by the sheer volume of commentary this problem has generated in academic philosophy circles. And, if you’re anything like me, you’ll be absolutely appalled by the moral positions that various esteemed thinkers take seriously, and have adopted, as a result of this thinking.

Never do philosophy, kids.