Disagreements with effective altruism and longtermism
Coming back to edit and publish this post, I had to check the original file creation date: October 26, for what it’s worth. On that date effective altruism (EA) was a movement in the ascendant, with a charismatic leader fresh off a well-funded (hmm) book tour. Today it is the Do-Gooder Movement that Shielded Sam Bankman-Fried from Scrutiny, while the EA-adjacent are rapidly distancing themselves.
All I’m saying is that I’d look a lot smarter if I’d posted this before FTX, and its founder Sam Bankman-Fried, came crashing down to earth in the days around November 8. But anyway this wasn’t going to be a disagreement with the participants or practice of EA, so much as the ideas.
Thanksgiving, the official kickoff to the American charitable giving season, is nearly here, so now is as good a time as any to clarify my own philosophy of giving.
I should be an effective altruist #
I should, probably, identify with the effective altruism movement. I fit the type (economist, works in tech, likes data, worked in development…). It was in the air, in some primordial form, when I was in grad school in Oxford. I recall listening, sitting on a floor in Balliol College in 2010, to Toby Ord make his argument for giving ten percent of one’s income to charity.
I found that persuasive, even if I didn’t act on it then (grad student stipend etc). In fact I’ve never given away that much of my income, but Ord’s arguments for altruism have persuaded me to give much more than I otherwise would have. The “effective” in EA also spoke to me: by far the largest part of my giving has gone to EA-approved charities, principally GiveDirectly and AMF.
But I’m not (or am I?) #
But I do not consider myself a capital-E Effective capital-A Altruist.
My basic disagreement with effective altruism1 is that while I might be a utilitarian, were I to find myself in some benevolent god-dictator position, I don’t believe I personally, in my actually-existing life, have an equal obligation to all people.2 I believe in circles of obligation extending outward and diminishing as they do: family, then friends, then people in my nearby community (e.g. city), then people elsewhere in the world.3
I don’t have some axiomatic philosophical basis for believing this. It just feels natural to me. I think many people believe something like this, albeit everyone weighs those tiers a bit differently. I recognize that this belief system can lead to (or justify) tribalism, sectarianism, discrimination, and so on, and I don’t really have an answer to that.
The practical consequence of this disagreement is actually relatively minor: it means I probably give a little less than I would in total, and that typically only around two-thirds of my giving goes to global causes, while one-third I reserve for things closer to home.
I also think most people should move in an EA direction by weighting their tiers of obligation more evenly. Meanwhile I doubt even the most ardent EA actually donates so much as to reduce their standard of living to that of a poor person in a poor country.4 So it’s obviously a spectrum and if you want to tell me I’m on the EA end of it, or EA-adjacent, I won’t deny it.
And I’m especially not a longtermist #
I have much greater disagreements with longtermism, and if I did consider myself an effective altruist, I would find myself in the branch of that movement that feels increasingly alienated by this turn.
I really don’t believe in “total utilitarianism.” I don’t think a world with 50 people enjoying 1 util each is, all else equal, better than one with only 20 people enjoying 1 util each. In fact I find that a deeply weird idea.
I would not assign specific utility to those not yet born. Though I do, of course, assign value to the future beyond those currently living, I do so in a way that is vaguer and less mechanical—but more… normal?—than longtermism seems to demand.
I don’t believe in giving equal weight to future people, even with some geometric discount rate, let alone without. They’re far too hypothetical for me to be comfortable with that.
I think it’s incredibly hard—in general—to assess the long-term impact of any action taken today.5 History is very nonlinear, and the road to hell really is paved with good intentions. Probably some of the roads to heaven are paved with bad intentions. That doesn’t mean one shouldn’t hold and follow good intentions, but a hoped-for good outcome at the end of a long road is a fairly poor reason to take it, if the road itself is not obviously good.
I recognize that the EA community probably has answers to all of these points, spread around in forums, blog posts, blog comments, podcasts, works of allegorical fiction and so on: they’re thoughtful people. I’ve read a bit, but far from comprehensively. That’s fine! My mind is always open to change, if you want to try changing it you can find me on Twitter (while it lasts).
But what even is EA? It’s a diffuse enough movement that there’s not complete agreement on that. On one hand Matt Yglesias, from outside-but-adjacent, says “EA is applied consequentialism” (and it sure looks that way to me), while Will MacAskill, very much from the inside, says “there is a strong community norm against ‘ends justify the means’ reasoning” and re-emphasized that just recently in the context of the FTX collapse. ↩
I disagree with Peter Singer’s conclusion from his drowning child thought experiment. There seems, to me, to be a morally relevant difference between a child drowning in front of you and one drowning, or suffering some equivalent fate, far away. Perhaps I’m a monster? ↩
It’s hard to imagine governments working in any other way. Who wants a national government that doesn’t privilege its own citizens? Of course you can be a utilitarian citizen whose government is not utilitarian, but then you’ll tend to want to do things like minimize your taxes and relocate to islands in the Caribbean. (And here is my chance to point out that I was sceptical about the SBF-EA linkup before it was cool.) ↩
They may justify this as follows: If I give so much of my income away that I am homeless, then I will not long maintain employment at Google/Facebook/Alameda Research, and therefore I maximize my lifetime giving by instead retaining enough for a comfortable rich-country software-engineer lifestyle. This might be true! Of course it is also the reasoning a billionaire entrepreneur might give for minimizing philanthropy, since they can (surely!) give even more money down the line if they keep building their business. ↩
I would make an obvious exception for climate change, but this is a system that—as complex as it is—is incredibly simple compared with social systems, and is very well-studied. Also, the relevant time frames are much shorter than those sometimes invoked by longtermists. ↩