In Defence of Effective Altruism
In 2017, persuaded by the arguments of the Effective Altruism (EA) movement, I took the Giving What We Can Pledge to donate 10% of my income to the most effective causes for the rest of my life. I believe this is the right thing to do and have written several times about why I think this is one of the most important movements of our age.
Recently, however, I have noticed more and more pushback against and criticism of EA. This was probably inevitable as the movement gained popularity, but the level of criticism, and the misrepresentation of the EA community, has been disappointing. Most recently, I was quite shocked to read a critique from Kathleen Stock on the UnHerd website, and even more amazed by the comments on the article and on social media. The piece completely misrepresented the movement and seemed to rest mainly on ad hominem attacks and on conflating EA with longtermism.
As a proud member and advocate of EA, I feel compelled to mount a defence. The article, ironically written by a Professor of Philosophy, repeatedly mocks the ‘philosopher geeks’ and ‘skinny, specky, brainy philosophers’ behind EA. None of this has anything to do with the ideas underpinning EA, and as someone who has studied philosophy, I care about the nature of the ideas and their applicability to the real world. What is the point of philosophy if it isn't trying to make the world a better place? Philosophy gave birth to science, mathematics and politics, among other disciplines, and remains our best tool for dealing with the ethical conundrums the modern world poses. It is a credit to philosophy that brilliant and important ideas, such as those behind EA, are further developed and advanced by it.
And for what it is worth, the ideas at the root of EA weren't cooked up recently by academics at Oxford – the foundational thinking goes back further, to the likes of philosopher Peter Singer and economist Yew-Kwang Ng. This isn't to discredit the huge impact that Will MacAskill and the founders of EA have made, but they continue in a tradition of thought already laid out. As for the Giving What We Can Pledge, many of the world's major religions have long promoted altruistic giving, treating the donation of a portion of one's income as an important moral value. The practice is thousands of years old, so why not focus on how that giving can be directed for maximum impact?
Stock’s article also takes issue with longtermism, which is only a part of EA. It takes specific aim at the views of crypto billionaire Sam Bankman-Fried and conflates them with the EA movement as a whole. It references ‘EA’s earlier focus on targeting malaria’, implying that this, as well as tackling poverty today, is no longer a major concern. It expands on this by saying, ‘only unreliable and partial emotion could lead you to care more about the lives of real people over those who are yet to exist.’
However, most donations to EA go to the Global Health and Development Fund; the most recent payout from that fund was $4,767,923 to the Against Malaria Foundation. Targeting malaria, and more widely caring about the lives of people alive today, remains at EA's core, and that donation was made to give it the best possible chance of making the greatest difference in the world. It means that real people today, who might not typically benefit from how we usually do charity, will live more days on this Earth. That is the real impact of the giving, and it is a consequence of the ideas behind EA. A quick glance at where other payouts go shows a commitment to ensuring that people don't die of easily preventable causes and that we improve the lives of those most in need. It is hard to see why this aim is so problematic. Would the world be better off if we all gave to the Donkey Sanctuary and the Captain Tom Foundation? And of the four funds, the Long-Term Future Fund ranks only third by payout amount, behind the Animal Welfare Fund as well.
Longtermism is a part of EA, but not all of it. The movement is still mostly concerned with helping people today by the most effective means possible. EA's motivation is simple: to use reason and evidence to do the most good. As mentioned, this really shouldn't be a controversial aim. And longtermism is an essential ethical consideration when taking that aim seriously. Even before the Covid-19 outbreak, part of the Long-Term Future Fund went to pandemic preparedness. Imagine a world that, five years ago, had put significantly more funding into pandemic preparedness: millions fewer would have died. These are real people who might still be alive had we funded and prepared appropriately. I again struggle to see why this is contentious.
And were not the actions of so many people before us longtermist? And don't we thank them for it? Our history is full of people who wanted to make the future better: from those who died in two world wars fighting dangerous ideas, to those who donate their bodies to scientific research, to Emily Davison, who threw herself to her death to raise the issue of women's suffrage.
The criticism of ‘emotive thought experiments’ as a way of expressing the ideas behind longtermism is bizarre. Stories and human emotions are huge drivers of change. Yes, EA is underpinned by logic, but it improves the lives of real people and of those yet to be born, and that is a beautiful thing. Expressing that idea through stories and emotion is not a flaw but a strength, and indeed a necessity, of the movement. Emotion should play no part in the decision-making, but it has every part to play in expressing the good the movement can do.
I felt compelled to write a response to Stock's article because the EA movement saves real people's lives right now and cares about the generations to come. That really matters. The EA community also actively encourages feedback and constantly poses questions and counterarguments to its own aims. It is an intellectually honest movement. But this is a two-way street: criticism of it should be intellectually honest too if it is to be taken seriously.