Effective altruism, longtermism and Pascal's mugging
04 Jan 2023
What is effective altruism?
It is a social movement that broadly aims to do the ‘most good’ in the world with limited resources. Its adherents advocate making career choices and charitable donations that maximise altruistic impact. Their activities include writing research papers on ethics, analysing the effectiveness of charitable organisations, giving career advice, and promoting pledges to donate a proportion of one’s income to charity. The individuals most often credited with starting the movement are Will MacAskill and Toby Ord, both at the University of Oxford, and the Australian moral philosopher Peter Singer, at Princeton University.
What is longtermism?
Longtermism has become a signature idea within the effective altruism community. It broadly argues for greater emphasis to be placed on the long-term future in ethical decisions. The basic gist is that future humans/sentient beings who have not yet come into existence should be given the same ‘moral weight’ as those alive now. This tends to be coupled with the idea that the number of beings alive right now could be completely dwarfed by the number of future humans, especially if humanity survives for millions/billions of years into the future and colonises other planets.
Sounds dandy. Ok, and what is the last thing you mentioned?
Pascal’s mugging.
Let’s say you’re walking along the street one day and you’re accosted by a mugger. However, he is an extremely incompetent mugger who has forgotten his weapon. So he instead makes the following proposal: give me all the money in your wallet, and tomorrow I will give you back twice that amount. You, being sensible, immediately dismiss his offer - what are the chances that he would actually follow through? The mugger, sensing he is losing you, continues to sweeten the deal. He argues that if there is even the tiniest non-zero probability that he will honour it, then there is some payoff large enough that you should accept. Eventually he wins you over: a bajillion dollars on the morrow if you agree, but otherwise he will kick a gazillion puppies right in the face.
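For the record, the mugger’s pitch is just naive expected value. Here’s a minimal sketch (the wallet amount and probabilities are made up for illustration):

```python
# The mugger's pitch as a naive expected-value calculation.
# All numbers are invented for illustration.

wallet = 100.0  # dollars you currently hold

def expected_gain(promised_payoff: float, prob_honest: float) -> float:
    """Expected gain from handing over the wallet: you receive the
    payoff with probability prob_honest, and lose the wallet either way."""
    return prob_honest * promised_payoff - wallet

# However sceptical you get, the mugger simply promises more:
for prob_honest in (1e-6, 1e-12, 1e-18):
    promised = 10 * wallet / prob_honest
    print(prob_honest, promised, expected_gain(promised, prob_honest))
```

As long as you grant any non-zero probability of him paying up, he can always name a payoff that makes handing over your wallet the ‘rational’ choice.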
Ok, you have a problem, don’t you, Steven? Why don’t you tell me about it.
Yeah I do have a problem. Well-intuited, me.
The notion that we are not giving sufficient ‘moral weight’ to future individuals compared to individuals alive now comes from a little mish-mash of undergraduate economics and philosophy. In economics 101, you learn that the default setting for economic models is to discount future utility. That means we prefer a bowl of ice cream now to a bowl of ice cream tomorrow, or a year from now. This is expressed mathematically as follows:
\begin{align} U = \sum_{t=1}^{T} \beta^t u(x_t).\end{align}
Here, \( x_t \) is some amount of a good received at time \( t \) (e.g. a bowl of ice cream), \( u \) is a utility function which measures the benefit that you get from said good at the moment of consumption, and \( \beta \) is a number between 0 and 1 that we call the utility discount factor. The utility discount factor captures how much we prefer benefits now compared to benefits at a later date. For example, let’s say units of time are measured in days and \( \beta = 0.5 \). That would mean that a bowl of ice cream tomorrow is ‘worth’ half as much as a bowl of ice cream today.
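To make the formula concrete, here’s a minimal sketch in Python (the utility function and the consumption stream are invented for illustration):

```python
# Discounted utility: U = sum over t of beta**t * u(x_t).

def u(x: float) -> float:
    """Toy utility of x bowls of ice cream (diminishing returns)."""
    return x ** 0.5

beta = 0.5                # daily utility discount factor
stream = [1.0, 1.0, 1.0]  # one bowl per day: today, tomorrow, the day after

U = sum(beta ** t * u(x_t) for t, x_t in enumerate(stream, start=1))
print(U)  # 0.5 + 0.25 + 0.125: each day's bowl counts half the previous one
```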
Now, the fine upstanding philosopher sees this and is frankly outraged. So outraged, he almost smashes his monocle. What does this presume to say - that the you of tomorrow has only half the moral worth of today’s you? What egregious claptrap is this!
I jest, but only a little. That’s basically the longtermists’ position: they think that \( \beta \) should be equal to 1, for reasons.
Of course, what they completely ignore is that the utility discount factor is really just a parsimonious way of capturing the fact that everything is a lottery, without actually having to model everything as a lottery, because frankly that’s a pain in the arse. The choice is not between a bowl of ice cream now or tomorrow. It is a choice between a bowl of ice cream now, and an almost-certain bowl of ice cream tomorrow plus a small probability that you meet your tragic and untimely demise in an unforeseeable bovine trebuchet incident before then, thus depriving you of said bowl of ice cream.
Ok I jest again, maybe a little more this time.
The point is there is a chance you may not be around to get that bowl of ice cream tomorrow, or you might lose your taste buds, or whatever else. Of course there’s also a chance that a magical fairy will wave her wand and make it so that tomorrow you get twice the enjoyment from that bowl of ice cream that you otherwise would have. But basically, there’s a lottery over events that could happen, and the end result of that lottery is that humans tend to prefer sure things today rather than near-sure things tomorrow, or not-so-sure things a year from now. As a quick fix to model this, we set \( \beta \) a little less than 1.
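To spell that quick fix out: suppose you survive each day, taste buds intact, with some probability \( p \). Then the chance that the day-\( t \) bowl actually reaches you is \( p^t \), so the expected utility of the whole stream is exactly the discounted-utility formula with \( \beta = p \). A quick simulation to check (all numbers made up):

```python
import random

p = 0.99      # made-up daily probability of surviving, taste buds intact
T = 30        # horizon in days
u_bowl = 1.0  # utility of one bowl of ice cream

def lifetime_utility() -> float:
    """Total utility from one run of the survival lottery."""
    total, t = 0.0, 0
    while t < T and random.random() < p:  # make it to the next bowl?
        total += u_bowl
        t += 1
    return total

n = 200_000
monte_carlo = sum(lifetime_utility() for _ in range(n)) / n
closed_form = sum(p ** t * u_bowl for t in range(1, T + 1))
print(monte_carlo, closed_form)  # agree up to simulation noise
```

Same number, two stories: one is a lottery over bovine trebuchet incidents, the other is a single parameter \( \beta \).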
None of this is difficult to figure out. Many people know it, and have pointed it out. That has not stopped the longtermists from building a utopian (read: probably dystopian) moral calculus on top of it, though.
So yeah, let’s say we take the longtermist idea seriously - there will be no utility discounting, no siree. On top of that, let’s presume that there is a non-zero chance that there will be an arbitrarily large number of human beings with wonderful lives in the future. Does this remind you of a certain unarmed robbery of a certain historical Frenchman yet?
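If you want to see the family resemblance in raw numbers, here’s the same back-of-the-envelope arithmetic with \( \beta = 1 \) (every number below is invented for illustration):

```python
# Pascal's mugging, longtermist edition. Every number is invented.

p_future = 1e-20      # sliver of probability on a vast colonised future
future_people = 1e50  # beings in that future, each with a wonderful life
present_people = 8e9  # rough head-count of people alive today

# With beta = 1, expected head-counts are compared directly:
print(p_future * future_people)  # 1e30 expected future beings
print(present_people)            # 8e9 actual present beings

# The speculative branch outweighs the present by twenty orders of
# magnitude, and if you quibble with the numbers, just bid future_people
# higher. With beta < 1, a payoff t periods away is weighted by beta**t,
# so pushing the promise ever further into the future stops helping.
```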
Armed with this ideology, you could come to some rather dubious conclusions, to say the least. For example, you could argue that instead of using our resources to provide food for millions of people on the brink of famine today, we should instead give that money to our well-to-do academics and intellectual superiors at Oxbridge and the Ivy League so they can do urgently needed research on where best to expend further resources.
Yeah, right.
In fact, you could use this ideology to justify inflicting any amount of suffering on people who are alive today, in order to serve the interests of hypothetical future people. After all, what’s worse: the suffering of a few million people, or the suffering of ONE ZILLION SQUILLION future people, who, let’s face it, are for all ethical intents and purposes really alive today?
This thinking is the product of ordinary academic minds furiously scurrying to justify their existence. They need some new, counterintuitive take that they can write papers about, use to position themselves as the thought-leaders of a burgeoning social movement and get tenured positions at universities.
It’s easy not to take this too seriously. Surely they are on the fringe.
Then you realise that the effective altruism movement may have control over several billion dollars.
Then you find out that a prominent effective altruist, by the name of Sam Bankman-Fried, was (allegedly!) running a cryptocurrency Ponzi scheme, which basically amounted to stealing people’s life savings and donating them to his preferred political party.
Very effective. Much altruism.
Really, what this is is some kids with a god complex and a utopian ideology that, like so many before it, would almost certainly lead to misery on the grandest scale imaginable if ever fully realised. The only thing they need to make it a reality is a little more power.