This is brilliant. You are talking about the 🐘 in the 🧠. Just brilliant
What I understood is "EA is bullshit, but it's good bullshit, so it's ok". I think the problem with this and, in fact, the whole premise of this blog is that the meaning of "bullshit" is negative. "Everything is Bullshit" -> Everything is bad? Not necessarily, as we are starting to delve into the different kinds of good and bad bullshit. I think I sort of get it, but for many readers it might be hard to get past the negativity of "bullshit" to fully grasp the concept of good bullshit.
Yea, that’s a good summary, and good point. I think it’s been hard for people to wrap their heads around this one for that reason.
I'm skeptical. In general, bullshit is bad because it doesn't tie to facts: bullshitters don't care whether something is actually true. I'm not quite there with EA. I hold that it's only sort of bullshit. Still, it could turn into something.
However, it seems to be structured as "believe this and it will become true," which is too squishy for me. If it reveals itself to be a belief system rather than a movement grounded in an actual biological characteristic, I'm certainly not down to join.
I get it, virtually all human activity involves at least some status motivation and self-deception. It's tainted with some bullshit.
Does Pinsof think that all human activity is 100% bullshit? It sure seems like there's a spectrum to me. If you think that EA is bullshit, take a look at non-EA altruism.
If you want to make the world better, you should probably engage with substantive arguments about how to do that effectively. The EA community is a repository of those arguments, and it's quite low-bullshit.
People use the slightest whiff of bullshit in any domain as an excuse to not engage with it. Don't do that. Hold your nose and donate some damn money to a high-impact charity. Brag about it to your friends afterward or not, I don't care.
Yes, EA is “pretty bullshitty” (like pretty much everything), but other forms of do-gooding are even worse (especially in politics, as I mention in the piece). As I said, I’m a fan of EA. I agree high-impact charities like GiveDirectly are good. But I am skeptical of the longtermist turn and especially the obsession with the AI apocalypse. I’ll write about that soon.
OK. However, I'd advise being careful and doing a lot of research before writing on AI and existential risk.
I wasn't super convinced a few years ago (it sounds sci-fi, doesn't it?), but the more I looked into the arguments, the more I felt like there was something big happening here.
I have seen many smart people dismiss AI risk as speculative, mostly based on vibes and shallow arguments like "I'm sure we'll somehow manage to adapt." Which could happen, but since the situation is high-stakes and unprecedented, I think the takeaway is that we shouldn't be super confident in either "everything is 100% okay" or "100% doom." We should err towards a precautionary principle and be careful, which is what AI safety is about.
For instance, see https://aisafety.info/
Thanks, yea, definitely a lot of lazy anti-AI-doomerism out there. And I agree the people into AI doomerism are very smart and have a lot of good arguments. But I still think they’re wrong. I’m looking for people to give feedback on my anti-AI-doomer piece before I post it, to make sure I didn’t make any dumb mistakes or attack any straw men. You interested?
I'm probably not the best person to do this since I don't work in AI safety (I'm more of a generalist) but I think I can have a look and see if anything seems really off. I wrote a summary article on the topic in French so I could help.
Cool. These might be useful (note the differences between survey results and donations!):
https://forum.effectivealtruism.org/posts/sK5TDD8sCBsga5XYg/ea-survey-cause-prioritization
https://effectivealtruismdata.herokuapp.com/#donations-sankey
We can all do as we please. If other people approve, great. If other people don't approve of what we do, oh well.
A common theme of this blog is cutting through the bullshit of the stories we tell and examining the underlying psychology and social dynamics behind our thoughts and behavior. For this reason the blog has done a tremendous amount of good in helping me and others see the world more accurately. You have occasionally mentioned a desire to add some positivity to the blog. I have a suggestion here. Venture, if only temporarily, into the world of self-help. Why do I think you would be good at it? First, you could direct your advice at what people actually want, while setting aside the socially desirable things we claim to want. Second, you could use your understanding of human motivation to help us achieve our goals and practice virtues along the way. Third, you could sift through common advice and see why it really exists: is it useful for the recipient, or is it given because of what it does for the advice-giver? Or the audience? Also, (of course) is it true?
I'm looking forward to the next post either way, but this would be a fascinating direction to take things.
Thanks, Ross. It’s a good idea. Let me mull on it.
The ideals of empiricism and democratic/free inquiry can allow us some escape from a quagmire of status jockeying and pretending. Pretending as hard as you can still doesn't produce empirical evidence for your position. And while all of us may attend to status and fall prey to signaling games, the Earth is a big place. There is no universal set of status wells, nor should there be. That is, I don't necessarily give the slightest crap about the people you want to impress. The Un regime's propaganda hasn't persuaded me they have a magical god-king, no matter how high the local stakes of fervent belief. None of us has an identical set of interests and motivations. A common consequence of that is that *my* motivation is to expose your bullshit, and perhaps vice versa. This state of inherent plurality of communities, nations, and institutions leaves your bullshit ever vulnerable to being skewered.
This is maybe why EA has the deficits you sometimes write about (and I would agree with some of those). It isn't a plurality, it's a homogeny. At least, it isn't today.
I do wonder whether I do good things just to virtue signal, or to make myself feel good "enough" even if I didn't actually do the most good possible. I do wonder if other people wonder the same things about me, and that unsettles me. I hate the idea of people cynically interpreting everything I do, say, wear, etc. I also hate that I can come up with such interpretations for anything I do (consider that everything I wear can be said to signal something).
To an extent, effective altruism has given me some plausible deniability. Maybe it is also just another form of virtue signaling -- look, I *really* am good! Maybe it's the best way I can feel good about doing good, given what I now know about how our brains are poorly adapted to properly do good. Either way, the core principles of EA remain excellent ideals for those who want to make the world better, to the best of their abilities. You may call it bullshit, but I do think that for some people improving the world is a big part of what gives them fulfilment in their lives, it's not all pretense.
Nothing I disagree with here. Unfortunately we don't have a good word for this sort of pretense that simultaneously feels sincere and fulfilling and is also objectively good, nor a way to differentiate this good, sincere, fulfilling pretense from the bad, insincere, or unfulfilling kind. We're very bad at talking about our various pretendings, differentiating them, and evaluating them by how good or bad they are. Hopefully this essay will lead the way toward thinking more clearly about it.
Fantastic work.
One thing, this paragraph probably boils down to entirely tribal behaviour and seems to be what we really need to get a better understanding of: "Second, I’ve argued that our desire to make the world a better place is bullshit. Instead of selflessly trying to help everyone on the planet, we’re mainly trying to win virtue points (while showing we don’t care about virtue points), signal support for our political allies, morally one-up our rivals, fit in, dominate people, make others feel indebted to us, and prevent our status games from collapsing. We cannot admit we want these ugly things, so we say we want to make the world a better place instead. Because it sounds better."
Yea you could think of tribal behavior as an umbrella term for a lot of this. Though I'd think of it as tribalism plus status-seeking, because you can occasionally get status for helping people outside your tribe.
Trouble is, tribes are so complicated nowadays. We used to belong to just one tribe, maybe a few tribes within the main tribe too, but essentially just one overall. Nowadays, we belong to so many - nationality, city, class, race, beliefs and so on; it's hard to work out where one ends and another begins. So what is inside or outside a tribe is difficult to determine.
My favorite blog on the interweb, and now on a subject squarely in my wheelhouse. So, with self-awareness that I may be bullshittingly projecting my own status, I will start by saying that I have some level of expertise on this subject, as I've been designing and funding large-scale social impact interventions for many years. I say that only to add weight to my emphatic underscore of David's point: that the EA movement is fabulously, incredibly, gloriously bullshitty. First of all, it was invented by an academic, Peter Singer, who knows jack shit about how nonprofits actually work. The virtue signaling he's instigated is indeed nauseating. That said, I'll make the same point as I've made here before: there is variance in any population, and that means positive as well as negative outliers. In other words, there is a small, very wonderful set of altruists who are keenly, quietly focused on solving world problems effectively; they use data, analysis, and clear thinking to measurably improve the world, and they do so free of bullshit. You have to go 3, maybe 4 standard deviations above the mean, but you will find folks like Chuck Feeney, the co-founder of Duty Free shops, who gave away $8 billion in secret, traveled in economy class, and didn't own a house or a car. There never was a speck of a turd on that dude. More broadly, I'd put an asterisk on "Everything is Bullshit" and have the note read "except in rare, wonderful instances, examples that we should all aspire to so as to make life meaningful."
Thanks, Donald. I agree with everything, but would merely add the cynical footnote that the especially high esteem you and others hold for Chuck Feeney was no doubt part of the unconscious calculus going on inside his head and motivating his (I agree, commendable) behavior. I'm very happy there is a social incentive for people like Chuck Feeney to exist, and I echo your praise of him for that reason.
Touché - LOL
PPS: do you read Gurwinder? As I told him, I'd pay cash money to see you two have a conversation. He's a super sharp dude like you, and it would be amazing fun. Go after his Stoicism as BS and watch the sparks fly.
Yea I haven’t read much Gurwinder but I’ve generally enjoyed the stuff of his I’ve read. There seems to be a lot of overlap between his audience and mine. I’m sure I’d have fun chatting with him. Feel free to bug him for me.
I've already done so. And not surprising about the overlap. You are both unusually gifted thinkers and writers.
The problem with EA is its practitioners have mostly never been poor, hungry, or starving. They have, however, read a lot of science fiction and may be Thermians: https://tempo.substack.com/p/thermians
This is probably the best overview of EA that I have read: https://quillette.com/2023/12/21/how-effective-altruism-lost-its-way/
I do like the idea of funding things that actually have an impact. But quantification has limits.
I think EA could be a viable method to help out others. I'm not against altruism. I just wouldn't automatically say that altruism is a superior desire compared to pursuing one's own interest, i.e. selfishness. At the end of the day, it's all about profit. Some people profit by helping themselves directly. Others profit by feeling positive about helping someone else.
It's an impossible calculus, but EA tends to focus on assumptions rather than problems. Where's the attention to improving the tools that NGOs use, for instance? Where's the attention to preventing political malfeasance? Where's the support for a parallel society?
I haven't done a deep dive into "Effective Altruism" in a philosophical sense. My first encounter with a vocal proponent fit the mold of virtue-signaling vanity via perception manipulation.
That said, with my basic understanding of the concept: what would you say of people who practice invisible altruism, like Keanu Reeves, Dolly Parton, Robin Williams, LeVar Burton, Mark Hamill, etc.?
This seems like the argument that true altruism is impossible due to the dopamine reward of perceived self-worth from helping someone, which I find to be a logical critique. It's similar to happiness being neurochemical dependency, to determinism (on some level) shaping behaviors, or to the benefit to an organism of overcoming adversity, as argued here.
I wish you would have steelmanned some of the qualities of true altruism akin to what is seen in those I listed, just as I wish some ardent, proselytizing Effective Altruists would accept that the biopsychosocial pressures are worth critiquing.
(Effective) Altruism is awesome and real, but so is skepticism of those seemingly preaching to the world how righteous their mirror selfies look.
For me whether or not altruism gives us pleasure (and is therefore selfish) is a red herring. I'm not interested in selfishness in the sense of giving us pleasure, because I think happiness is bullshit: https://www.everythingisbullshit.blog/p/happiness-is-bullshit. For me the more interesting and important question is whether altruism is selfish in the sense of benefitting us via status or reciprocity or alliances or something. I think all altruism is selfish in this latter sense (regardless of happiness). For more on this idea, see here: https://www.everythingisbullshit.blog/p/darwin-the-cynic
Interesting. The evolutionary fitness argument is a good one. I'll admit I hadn't thought of the positive social aspect of anonymous donating in the way you have; I'll have to ponder that one. There's a paper on meerkats' altruistic predator alerting being predicted by proximity and genetic similarity that has some great graphs, for visual learners, on the social aspects you discuss.
I entirely agree with you that Positive Psychology is great, although in its infancy. I think you may like Second-Wave Positive Psychology by Dr. Paul Wong. It may still have some flaws based on my understanding, but I would liken it to a fusion of PP, Taoism, & "Darwinian Cynicism".
I don't have the same perspective, but I did appreciate the mention in the Darwin piece of the anomalous expressions of altruism. That is where my belief in True Altruism lies. Other than genetic aberration, as hypothesized, the only other logical explanation is neurochemical reinforcement, which is why I mentioned the dopamine argument.
Thank you for the thought-provoking reads.
"Even if the Rule was changed to Do unto others as they want to be done to, we can't know how anybody but ourselves wants to be done to. What the Rule means, and how we apply it honestly, is this: Do unto others as you truly feel like doing unto others. Meet a masochist with this rule and you do not have to flog him with his whip, simply because that is what he would want you to do unto him." Richard Bach
All we can ever do is give people what we think they need. We "externalize" this (nice concept!) by saying that everyone needs food or shelter or healthcare or a good job. But the people in EA are doing whatever everyone else does. Giving people what the giver thinks they need.
Someone wanting food to feed their children is a feeling. Someone saying they want food to feed their children is a behavior. Not only are they different things, they exist in completely different domains. You have boiled this down to "everything is bullshit" which is brilliant!
This post is Bullshit.
You simply hand-wave about motivations and intentions, make baseless statements like “suffering is good,” and construct straw men about effective altruism and the people involved in it.
You should read Peter Singer’s “The life you can save”. If the only motivations you can see for people exhibiting humanity is “domination, status games, and moral superiority”, maybe that says more about you than anyone else. Why can care, solidarity, and kindness not be motivations?
Maybe some people actually give a shit.
I can see why it would seem like hand-waving if you haven’t read my previous posts, where I argue for the theses that, e.g., suffering is good for us and we don’t want to make the world better. This post relies on a lot of building blocks I set up previously, so you may want to check them out. Also, I’m not criticizing EA; I’m a fan, and I said so in the post.
There are literally hyperlinks there to give a basis to those "baseless" statements.
Have you read his previous pieces and what he linked to just after “suffering is good”?