61 Comments
The Water Line:

I feel like there's a kind of equivocation about what we "really" value.

For example, when someone is learning to play an instrument, it's possible to describe this as satisfying the evolutionary goal of signaling competence, cultural awareness, etc. to potential allies and mates. BUT it's also possible to describe this by saying the person *values creating music*. The two descriptions aren't incompatible. Valuing the creation of music is the means by which the person is signaling their competence, etc.

It's like arguing people don't "really" want to have sex. They only "really" want to pass on their genes to another generation. Well no... People *do* want to have sex, and this satisfies their genes' goals of replicating into another generation.

David Pinsof:

When I say people "really" don't want to make the world a better place, I mean they don't "really" want it as an end in itself: insofar as they ever do want it, they only want it *as a means* to achieving some deeper, less flattering goal. This matters, because if we want to shape people's behavior, we have to understand what that less flattering goal is. To take your music example, if the person only values music *as a means* to signaling their competence, then we can increase musical production by increasing the social rewards for displaying musical competence. If we want to lower the amount of music in the world, we can stop giving status to people for displaying their musical competence. The distinction is crucial, because if people "really" wanted music as an end in itself, then the status they gained from it would have no effect on their behavior: they'd keep on making music regardless of how much status they got from it. I'm saying it's more insightful, useful, and predictively powerful to focus on the deeper goal, because that is the lever we must pull if we want to shape the behavior, and that is the key to understanding and predicting when and where the behavior will be performed. We must choose insight over idealism.

The Water Line:

I see, I think this might just be a semantic disagreement then. It seems you're saying that what "really" counts as people's values are the psychological adaptations that evolved to benefit self, family, and allies. And valuing music is just a means towards satisfying these values. A person doesn't *really* value music, since they might stop given different social rewards.

I would instead say that our values change when the social rewards change. The person who produces a lot of music *really does* value music. And when they receive different social rewards and they stop playing music, then they've stopped valuing music.

I would also add that I think valuing music can also become an end in itself because evolution hasn't made us infinitely malleable. For example, here's one crude way an adaptation could work:

Step 1. For the first 10 years of life, observe the activities the high status folks are doing.

Step 2. Set as a terminal value whatever was the most frequent activity.

This would serve the adaptive function of acquiring status, but it would do so by making, say, playing music an end in itself. And so later on in life, even when they no longer receive any social rewards, the person might continue to value playing music.
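To make that concrete, here is a toy sketch in Python of the kind of imprinting mechanism I have in mind (purely illustrative; the activities, the 10-year window, and the function name are invented for the example):

```python
from collections import Counter

def imprint_terminal_value(observed_activities):
    """Toy sketch of the proposed mechanism (illustrative only):
    adopt as a terminal value whichever activity was most frequently
    observed among high-status individuals during childhood."""
    if not observed_activities:
        return None
    activity, _count = Counter(observed_activities).most_common(1)[0]
    return activity

# Hypothetical observations of high-status adults over the first ~10 years:
childhood_observations = ["music", "hunting", "music", "storytelling", "music"]
print(imprint_terminal_value(childhood_observations))  # -> "music", now valued as an end in itself
```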

David Pinsof:

Thanks, I see what you’re saying, and I appreciate your thoughtful comments, but I don’t find your proposal for a “new terminal desire creation mechanism” (as I understand it) very plausible. To evaluate the evolutionary plausibility of this mechanism, we have to compare it to alternative mechanisms--i.e., what other mechanisms would have already been around at the time. Those would have likely been the same old Darwinian goals we share with other animals (sex, status, etc), plus reinforcement learning to figure out whatever achieves those goals in our specific environment. That alternative would seem to work just fine. If x got us status, we’d learn to do x. If x stopped giving us status, we’d learn to stop doing x. What you’re proposing is the evolution of a new system, that creates new terminal desires, based on paltry evidence from individual lifetimes (compared to the millions of years of evidence that created the existing terminal desires) for no reason. Such a system would be *less optimal* than the alternative, despite requiring the evolution of a brand-new system. I just don’t buy it. We have no need for this hypothesis. Besides, the hypothesis would predict that assimilation into new cultures and subcultures would be very rare or impossible, which we already know is false--we change our “values” at the drop of a hat as we move from subculture to subculture (I did this a lot in my youth). All this hypothesis is doing, it seems to me, is making us feel better. It’s not actually explaining anything we couldn’t already explain. So I would get rid of it. The rest is, as you say, semantics, and you can feel free to use whichever words you prefer.

The Water Line:

I agree that my proposal would posit alternative mechanisms besides reinforcement learning. But I think that's necessary to explain many sorts of behaviors/preferences that we have. Some examples:

1. People's sexual preferences seem to start around puberty and are relatively insensitive to change later in life.

2. People sometimes have a bad experience with certain foods early in life, which creates a long-lasting aversion.

3. People's music preferences seem to crystallize around the time when they're teenagers.

4. People who've undergone some violent experience will sometimes vividly relive that experience for many years or decades afterwards.

5. People can develop hobbies as children that they fixate on for their entire lives, sometimes despite ridicule (e.g. model trains, math, etc.).

Is this all just reinforcement learning?

David Pinsof:

Thanks, interesting points. Sexual desires are evolved terminal desires and so not a counterexample in my view. Food appetites and aversions are also terminal desires and represent a risk-averse evolutionary strategy (fool me once, shame on me; fool me twice, genetic oblivion). Music I’m not so sure--that might be a byproduct of the fact that our social environments also crystallize in youth, but we don’t yet have a good evolutionary theory of music (it’s still a mystery imo), so I don’t think this is a strong piece of evidence one way or the other, without knowing more about wtf music is for. Post-traumatic reliving makes sense for extremely and unexpectedly low-value experiences that absolutely cannot happen again (another kind of evolutionary risk aversion--just normal reinforcement learning applied to an extreme case). I’m skeptical of lifelong hobbies that aren’t socially rewarded. That might happen for autistic people with social deficits, but I have a hard time imagining neurotypical people who fixate on trains their whole life despite getting ridiculed for it. So I still don’t buy it, but I appreciate the friendly and thoughtful pushback.

Mike Hind:

This piece reinforces an intuition that rational insight is more 'beautiful' in some way than sentiment. But evolution seems to have selected for low self-knowledge, given how hard it is to develop - and how few of us ever try.

Ross Andrews:

I just watched the David Pinsof episode of the Modern Wisdom podcast. He's much younger and better looking than I had imagined. I had always pictured him being about 50 years old... normally it takes many years of crushed hopes and dreams to become this cynical.

David Pinsof:

Haha. Thanks for the compliment. I’m actually not that cynical by temperament. I’m pretty gentle and trusting. I like to think—and this might be bullshit—that the reason I’m cynical is not because it suits my personality but because it is the correct, most insightful way of seeing things. It actually took me a while to accept Darwinian cynicism. I resisted it for a long time. It was the inescapability of its logic and the power of its insight that eventually won me over.

Plasma Bloggin':

There's one fatal flaw in this argument, and while the post gestures at it, it fails to actually address it. That flaw is that the "goals" of evolution aren't the same as our own goals. Evolution directly selects for only one thing: Increasing the frequency of our genes in future generations. But no one desires that as an end in itself. The claim that we must desire only selfish, nepotistic, or groupish ends because those are the only things evolution selects for commits the error of conflating the goals that we actually follow (that is, the goals we pursue for their own sake) with the "goal" that evolution had in creating those goals to begin with (with "goal" in scare-quotes because evolution is an unconscious process without actual goals). But this is obviously invalid: There are at least two ways evolution could produce agents with goals that differ from the reasons evolution produced the goals to begin with.

The first is mesa optimization: Evolution is optimizing for passing on genes, but the agents it creates don't need to directly optimize for that or even know about it. They can optimize for something else that happens to correlate with passing their genes on. This is where basically all human desires come from: We desire the pleasure of eating food, and the taste of certain kinds of foods, for their own sake because we evolved to have these desires. The reason we evolved this way is because eating was necessary for our survival, and because certain foods contain nutrients that were scarce in the ancestral environment, so people that had a strong desire for the tastes of those foods tended to survive more. But just because that's the reason the desires evolved doesn't mean that we somehow have that as our true terminal desire rather than just enjoying the food. We would continue to enjoy eating food even if we didn't have to do it to survive, and we would dislike the feeling of hunger even if it were impossible to starve to death. We still like the tastes of those foods that were important in the ancestral environment even though they now make us less healthy, and thus less likely to pass on our genes, and we know this. So clearly, our desire to eat food and our enjoyment of certain types more than others are not just instrumental goals that mask our true terminal goal of passing our genes on - they are terminal goals that differ from the terminal goal of natural selection, despite being given to us by natural selection.

A similar thing can be said about sexual pleasure. Evolution gave us a desire for sexual pleasure because it made us more likely to reproduce and thus pass our genes on. But the desire for sexual pleasure itself is a terminal desire, and in many cases people want it despite actively wanting to avoid reproduction - that's why contraception exists.

So how does this apply to altruism? Even if evolution is actually maximizing for group or kin altruism, it might be that just producing a universal altruistic impulse is a simple way to do this. After all, you'll spend the most time hanging around your own groups or your own kin anyway, so they'll get the largest benefits from your altruism. Thus, evolution can promote genuine altruism simply as a means of making you more likely to help your family and groups pass on the altruistic genes. Genuine altruism might also make your group more likely to attempt to cooperate with other groups rather than fighting them (as long as you're still willing to fight back against non-altruistic groups), which could help groups that are more altruistic outcompete those that aren't.

Altruism might also be useful for reasons of social status, as you suggested in the post, except that there's no reason that evolution would have to make altruism a fake goal that the status-seeking agent doesn't really care about rather than just making it a terminal goal. This would mean that people do indeed pursue altruism for its own sake, even if the reasons that they evolved to be this way were because it increased their status. In fact, there are obvious reasons why we would evolve to be altruistic as a terminal goal rather than a merely instrumental one, even if the only reason we evolved this way was because having that goal increased our status. First of all, it's a lot easier to convince people that you're genuinely following altruistic goals if you really are doing so than if you're secretly being selfish or favoring those close to you while pretending to be altruistic. Secondly, altruism is just a simpler goal to follow and is therefore probably easier. It takes less cognitive effort, and therefore less energy, to just do what you think is best for others than to figure out how best to convince others that you're doing what's best for them while secretly pursuing some other goal.

The second way evolution can produce agents that optimize for different goals than evolution does is as a by-product of traits that are themselves useful. As long as the agent gets more benefits from having those traits than the by-product costs them, they will not be selected against. And there are plausible ways altruism could be a by-product of adaptive traits even if it itself is not adaptive. For example, you agree that kin altruism and group altruism are adaptive. But impartial altruism is just a generalization of these. It may have arisen from our capacity for abstract thought, combined with kin and group altruism. Abstract thought led us to universalize the moral principles that we already applied to our ingroups, thus leading to genuine altruism. Not only is this plausible, but it's also in line with the actual history of moral development - it explains the expanding moral circle, including its expansion to non-human animals, which your explanation can't do.

Both of these are ways that evolution could "indirectly" select for altruism. In the first case, it selects for it because having altruistic impulses is indirectly useful for achieving the things that evolution selects for directly, and in the second case, evolution doesn't directly select for altruism at all, but altruism arises as a by-product. However, neither of these explanations requires that *humans* only pursue altruistic goals indirectly. Both of them imply that humans would have terminal goals that are altruistic. So when you say, "Yes, it is possible to accidentally improve the world, as a kind of mistake, when we were actually just trying to help ourselves, our families, or our allies," you're not addressing the real argument at all! When we're saying evolution selected for altruism "by accident," that doesn't mean that humans evolved to accidentally perform altruistic actions - it means that humans evolved to perform altruistic actions on purpose despite altruism not being directly selected for.

And there's very strong reason to think that this actually happened, given that humans in fact behave altruistically all the time. Sure, you can come up with a just-so story to explain away acts of altruism as really being for other motives, but that's way more complicated and ad hoc than just assuming that humans have the goals they appear to have and believe themselves to have. There would need to be a very strong argument to show otherwise, but the only argument that exists is based on a misunderstanding of how evolution works and a conflation of the ultimate reason we evolved to have some goal with the intentionally pursued terminal goal itself. The "buried signaling" explanation for private donations to charity is extremely implausible, for example. There's absolutely no way that the increase in status you would get if someone were to somehow find out that you made a private donation, discounted by the tiny probability of that happening, is enough to motivate the donation, and it's even less likely that people who make private donations are actually doing it for that specific terminal goal, rather than out of the goodness of their hearts, or because it makes them feel good. You might argue that the latter, "It makes them feel good," is still a selfish motive and thus comports with your theory. Maybe so, but there's not much difference between, "I act exactly like an altruist because it makes me feel good, even though what I truly care about is just feeling good and not other people," and, "I act as an altruist because I care about other people." If doing altruistic things makes you feel good, isn't that just another way of saying that you value altruism for its own sake?

David Pinsof:

Yea of course we don’t value fitness—we value proxies of fitness (sex, status, food, etc). But the point is that “being a proxy of fitness” is a very specific thing. Only certain things can plausibly be a proxy for fitness. Indiscriminate altruism just cannot be a plausible proxy for fitness. Selective altruism (ie only help when it boosts fitness) will always be a better proxy for fitness than indiscriminate altruism. Always. By definition. I just don’t find your just-so story for the evolution of indiscriminate altruism plausible at all. Also, have you taken a look at our species lately? I do not see evidence that indiscriminate altruism is a part of human nature, and tons of evidence that selfishness, status-seeking, nepotism, tribalism are parts of human nature. So not only is your account evolutionarily implausible; it is also empirically implausible. It simply does not match any realistic description of our species.

Plasma Bloggin':

I don't know what is supposed to be so implausible about the stories of how evolution could produce true altruism - they are all pretty straightforward methods by which altruism could be adaptive, and one of them is the reason you endorse, just without the extra posit that people are only pretending to act altruistically.

And it doesn't matter that selective altruism is a better proxy for fitness than indiscriminate altruism (if this is even true - indiscriminate altruism allows for cooperation between groups). A selective desire for sexual pleasure, such that one only desires sex that allows them to pass on their genes, is also a better proxy for fitness than sexual pleasure that doesn't depend on whether the sex is reproductive, and yet the latter is what evolution produced. Evolution is not a perfect optimizer, and even if it was, it might select for a goal that's a worse proxy for fitness rather than a goal that's a better proxy simply because the costs of pursuing that goal (e.g., the energy the brain must expend on it) are lower.

"I do not see evidence that indiscriminate altruism is a part of human nature, and tons of evidence that selfishness, status-seeking, nepotism, tribalism are parts of human nature."

First of all, the claim that some people act for altruistic reasons some of the time doesn't imply that selfishness, status-seeking, nepotism, and tribalism aren't parts of human nature. So observing those things is irrelevant. Evolution can produce those things and altruism simultaneously, and none of the explanations I gave for how altruism could evolve require that those other traits wouldn't also evolve.

Secondly, of course if you just focus on the negative things and then come up with a story to explain away any apparent instances of altruism, you're going to come away from it thinking that humans are not altruistic. The fact that you can attempt to explain away those instances doesn't prove that they don't exist. And I don't even agree that your theory can explain away all instances of altruism. I think the purported explanation of anonymous donation fails, I don't think it can possibly explain why someone would sacrifice their life for a stranger, and it also doesn't explain the altruism some people have towards animals - you might get increased status for some forms of care towards animals, but the theory can't explain why this is if we don't have a generalized altruistic impulse, since caring about animals would give us no greater reason to assume that someone is trustworthy to other humans, and it also can't explain why people sometimes act altruistically towards animals in ways that lower their status.

And even if you can explain away all instances of altruism, I don't think it's possible to do that without making the theory unfalsifiable. If even things like anonymous charity-donations or sacrificing your life to help someone unrelated to you can be explained by virtue signaling or some sort of group altruism, then what couldn't? If this is the case, then the fact that you don't observe any altruism that you see as genuine provides no evidence against the view that such behavior does exist, since you would be able to explain it away even if it did.

David Pinsof:

Evolution selects among alternatives. We already have alternatives to indiscriminate altruism that exist in our species: tribal, status-seeking, reciprocal, and/or nepotistic altruism. You cannot say evolution was too “constrained” or “imperfect” to select these alternatives over indiscriminate altruism, because we know these alternatives exist and were available to be selected. If x is a better version of y, then x will evolve to displace y. No, y cannot evolve “alongside of” x if it is a worse version of x. You also seem confused about this “true altruism” business. I’m not sure what you mean by that term and it seems like it generates nothing but confusion and emotional reasoning. I would avoid it. I’m just talking about what motivations are more or less evolutionarily plausible. A motivation for indiscriminate altruism is not very evolutionarily plausible, especially when you compare it to the alternatives we know exist. Show me a large number of humans (>10%) who care about strangers as much as their children, or rivals as much as allies, or strangers as much as friends, or pets as much as parasites, or who are insensitive to what is socially rewarded or morally praised in their peer group, or whose altruistic behavior is unaffected by the presence of observers, or by the moral standards of those observers, or by the social status of those observers, or by the opportunity to look morally superior to others, or whose altruistic behavior is insensitive to how grateful and indebted the recipient feels, or by the recipient’s likelihood (or observers’ likelihood) of benefitting them in any way, and you will falsify the theory. But it’s really not a theory so much as a metatheory—a way of devising theories that are more likely to be correct because they are more evolutionarily plausible. What would falsify your view? If all the status-seeking and conformity and virtue signaling and tribalism and nepotism of our species cannot cast doubt on it, and if the basic logic of natural selection cannot cast doubt on it, then I’m not sure what could. It seems you are the one who is unwilling to change your mind. This will be my last comment. Thanks for engaging.

Plasma Bloggin':

When I say, "true altruism," I mean the very thing we're debating over! Specifically, actions that have the motive of making the world a better place/helping people in general. I'm not sure why you act like this is an unclear concept and you don't know what it means when you wrote an entire Substack post arguing that this kind of motive is evolutionarily impossible.

It's very much possible that altruistic motives can exist alongside non-altruistic ones. The "evolution selects among alternatives" argument only works when the only alternatives on the table are pure altruism or purely acting from selfish, nepotistic, and groupish motives. If having altruism in addition to all those motives outcompetes them, it doesn't matter that having those motives alone would outcompete altruistic motives alone. And this argument doesn't even attempt to address the possibility that altruism is a by-product of other evolved psychological factors that are themselves adaptive. This means that even if you're right about altruism being directly selected against, you're wrong about the claim that it's evolutionarily impossible. As long as the psychological factors that altruism is a byproduct of are selected for more strongly than altruism is selected against, altruism will evolve. This isn't just a hypothetical - there are features of human psychology that actually do seem to produce altruism. You agree that limited forms of altruism - that is, altruism towards our family and ingroup - are already selected for. Well, combine these with a capacity for abstract thought that leads us to develop moral principles and apply them universally, and you get people who genuinely want to help others. Alternatively, combine the fact that we often have social reasons to claim that we're altruistic with cognitive dissonance that encourages us to make our moral beliefs fit with our moral claims and our actions fit with our moral beliefs. I just don't buy the claim that it's evolutionarily impossible for altruism to develop when there are many obvious ways that our own evolved psychology could produce altruistic motives that your arguments say nothing against. The arguments you make all assume that different psychological features are totally disconnected, so that altruism could just be selected away without affecting anything else, and that evolution would directly select only for the goals that most perfectly mimic the selection process itself.

Your comments about falsifiability completely miss the contours of the debate. Your claim was that people have no altruistic motives whatsoever, and that it's evolutionarily impossible for them to have any. That means a single instance of someone behaving altruistically falsifies your view. Maybe you want more than that to make sure we're not just mistaken about the single instance - sure, that's fine. But you can't ask for the completely unreasonable standard of finding 10% of humanity that act from purely altruistic motives with no selfishness, nepotism, or ingroup favoritism to be seen. No one is claiming that that's the case! Everyone agrees that people often act from selfish, nepotistic, and groupish motives. The difference is that you claim that everyone always acts from these motives, and any appearance to the contrary is an illusion, whereas I think that people sometimes act from other motives, including a generalized desire to make the world better that isn't aimed solely towards someone's self, family, or ingroup.

And your attempt to turn it around and say that I'm refusing to admit when my view is falsified is completely toothless. My claim is that some people have altruistic goals among their other goals, and they sometimes act on them. That's it. I never claimed that anyone is a perfect utilitarian that acts only on altruistic goals with no favoritism towards themselves, their family, or their ingroup. Indeed, I don't think people like that exist. But I think people sometimes act with the motive of helping people who aren't part of their family or ingroup because they believe it's good in itself to help those people. That's an extremely modest claim, whereas the claim you make, that people never, ever, have goals like this, and that it would be impossible for anyone to ever have goals like this, is extremely strong. If there's a different standard of evidence, it's because you're making an extremely strong claim that would require strong evidence and that can be refuted by a single counterexample, whereas I'm making a very weak claim that only requires a single example of altruistic behavior to prove it.

And none of the pieces of evidence that you say should falsify my claim even address my claim. I agree that people often, even usually, act from non-altruistic motives. So just throwing out a bunch of examples of people doing that doesn't falsify my belief. What would falsify my belief is if someone did a comprehensive study of all the behaviors we think are altruistic and proved that none of them involve any altruism at all, or if we found in experiments that, say, people always act 100% selfishly (rather than just *less* altruistically) when they don’t think they're being watched.

And of course, your "evolutionary logic" doesn't falsify my view because it's bad logic. I just spent a huge wall of text explaining why it's bad logic. The fact that I disagree with your logic doesn't prove that I'm unwilling to change my mind. That claim betrays a hugely overconfident view of your position - you have such certainty that you think the very fact that I disagree with it is evidence that I'm irrationally refusing to admit that my view has been falsified. No, I just think your position is poorly-argued, so I don't accept the argument.

David Pinsof:

Whoa, if you’re willing to concede that at least 90% of human behavior is not “truly” altruistic, then I’m not sure we really disagree about anything. There’s not much of a difference between 90 and 100%. This makes us allies as far as I’m concerned. Clearly a theory (or metatheory) that explains at least 90% of human behavior is a very very very good theory! Way beyond the explanatory power of anything else in social science. If you’re complaining that I can merely explain 90% of the human condition then I’ll take your complaint as a compliment.

Plasma Bloggin':

To be clear, what I conceded is that at least 90% of humans are not pure altruists, not that at least 90% of human behavior is not altruistic, although I'm open to the latter possibility as well. In fact, I will go further and say that 100% of humans are not pure altruists. I agree that the evolutionary considerations you mention can explain a lot about human behavior, including why we are so often selfish and why we favor our family and ingroups. The only thing I disagreed with was when you said that humans *never* have altruistic motives and that we should explain any apparent cases of altruism by hidden selfish, nepotistic, or groupish motives. If that's not actually what you intended to claim, then we are not in disagreement.

Clara:

It's quite psychologically liberating to see things this way. I used to think this was a fascinating but gloomy argument, but I'm changing my mind. Thanks for a great piece!

David Pinsof:

Thanks, Clara. If you don’t mind, could you elaborate on how it has been psychologically liberating for you? I’m always looking for ways to frame these ideas that are less alienating and I’m wondering if you found a more helpful framing I could steal from you. :)

Clara:

Sorry – that was a bit cryptic from me! I’m certain I’m not saying anything you haven’t already thought of, but I suppose I just meant, first of all, that it opens up a very accessible, clear way of evaluating human behaviour for people who aren’t aware of your field. This could be fun in itself as it suggests there is a whole realm of the unconscious that they may not have considered much before. Secondly, I’d imagine a lot of people feel sceptical of others’ (and their own) motivations and professed ideals much of the time, but lack a framework to justify such feelings. They could end up feeling guilty or lonely about having these thoughts. Your ideas (and those of others in your area) alleviate a lot of that self-blame as it turns out that these intuitions were probably right all along! I think we often kind of suspect of a lot of what you’re saying, but in a vague, unformed way. This helps flesh out all those ideas intellectually and makes them feel less unacceptable. Lastly, as you’ve observed, so much idealism ends up leading to disastrous consequences, so this critique could actually encourage some much needed scepticism. It’s OK and good to be a bit suspicious of ourselves and others – very freeing!

David Pinsof:

Thank you for this! Very helpful and glad you enjoyed it.

Piotr Pachota:

Great post as always.

I would like some clarification about the "gene of nepotism" and such. Are you referring to literal genes and evolution, or is it memes rather than genes, as in: nepotistic cultures and societies outperform non-nepotistic cultures and societies?

I think this distinction needs to be made, as in hedonistic societies like the Roman Empire before it adopted Christianity, or WEIRD countries today, status is decoupled from reproduction opportunities and number of offspring, hence the evolution of status-related genes stops (Rob Henderson called optimizing for status in such an environment an 'evolutionary misfiring'). However, regardless of reproduction, high-status people in such societies still have the power to promote memes and drive the cultural norms.

David Pinsof:

Thanks for the kind words. I’m talking about literal genes and evolution. If you click on the link from that paragraph it will take you to a review of the evolutionary biology theory (kin selection) I’m referring to.

Barbarous EP:

Great article. I would also add, as per Trivers, motives of selfish genes below the selfish individual - but that is selfish nitpicking for status on an otherwise 10/10 article.

Ken:

Self-interest can be complicated. Suppose I think I can solve a problem that makes my life easier somehow. In the process of solving this problem, maybe I realize that it has greater value to me if I give the answer away. What is the evolutionary selection there? It may free up my time and reduce the risk for my offspring. Will the ability to solve the problem and the ability to see the value of spreading the idea propagate? It seems unlikely. I don't know if this example is strong enough to make my case, but think of the skills to make fire from friction. Or, do you think the firemakers kept things to themselves for aeons?

David Pinsof:

Sharing valuable info can be selected for if it benefits the individual and their kin. This could be the case if a) info sharing is reciprocated by others, b) the info sharer gets status from sharing, or c) the info is selectively shared with kin or allies. I think all of the above apply to the evolution (and psychology) of info sharing. So it still sits comfortably in the unholy trinity.

Ken:

Okay, thanks!

If I understand your points correctly then, there is a solid case for the sharing of resources, which would likely be selected for. Even vampire bats do this. The entire group benefits, which increases the chances of individual and offspring survival. Further, there is a probability that the process to decide could evolve depending on the family-group/larger-group sharing payoff, vs secrecy. This would also extend to group/group dynamics.

As for the foolishness of ideation, the economists probably have a head start on that.

So, the complicated nature of the analysis doesn't really preclude selection for the propensity of the fairly simple behavior.

As an aside, I find it easier to accept the evolutionary selection for a propensity to behave a certain way, rather than for the behavior itself. Behavior is largely learned. Groups and groups of groups all have language, but they are not all the same. Differing languages do generally solve the same problems. But to extend the thought, languages that lack symbols and concepts for certain things can belong to cultures that do not have those things, so the corresponding behavior also cannot develop. And, what was once adaptive may also change. The Ik in central Africa gave up generalized reciprocity after their circumstances changed with the arrival of European colonists. This may have changed again over the last 4-5 decades, but the studies I read back then were depressing.

David Pinsof:

Yes, agreed on all this. I'd just add a few caveats. The group benefitting is a side effect; it's not the reason why the sharing behavior evolved--the sharing evolved to benefit the sharer and their kin. Also, the sharing should not be indiscriminate: it should be focused toward kin, or toward those who are most able and willing to reciprocate (i.e., trusted group members). And it should be restricted to high-variance resources, where group members are equally vulnerable to shortfalls and mutually benefit from reducing the risk. It shouldn't apply to resources that are a product of individual effort or skill. And yes, agreed that evolution does not select for behaviors directly; it selects for psychological mechanisms that regulate behavior. That is the key difference between evolutionary psychology and the behavior-focused "sociobiology" that preceded it.

Ken:

Also agreed. Though, I'm interested in the caveats related to effort and to skill. Propensity for effort is heritable and trainable, and perhaps your point is that sharing with "low effort" members is undesirable? Likewise, the propensity for skills is heritable, and skills are trainable. But since we require a group for survival, and since epigenetic changes can directly affect the next generation and the following one, it would seem that overall group fitness is still a benefit if undesirable epigenetic propagation can be reduced even in "less fit" members, assuming they contribute at a nominal level. After all, many of those "other" members/offspring are very likely to be involved genetically with the sharer's offspring.

David Pinsof:

The idea re effort: if I have more than you because I tried harder than you, or because I’m more skilled than you, then it does not benefit me to share with you, because I am unlikely to be in a situation where the tables will be turned and you will have more than me. That’s why it benefits me to share with you when the shortfall is due to luck--“there but for misfortune”--because luck affects us both equally and offers a greater opportunity for reciprocity. This explains why people feel little sympathy for lazy, stupid, or incompetent people, but feel great sympathy for unlucky people (especially when they are equally vulnerable to being unlucky in the same way). It explains a lot of our political rhetoric surrounding welfare. Some people even try to frame low effort or low talent as misfortune--it’s genetic, and that’s a kind of luck--to win sympathy for themselves or for their political allies, and maybe that tactic works on some people. Not sure. But the tactic itself is rooted in our evolved psychology. And yes, when our group’s fitness is linked with our own fitness, we will evolve to help our groups, by definition. But that just means we will evolve to monitor the world for situations when our group’s fate is bound up with our own (eg in war), and restrict our group altruism to those cases.
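To put rough numbers on that asymmetry, here is a toy expected-value sketch (the payoff figures and probabilities are invented purely for illustration):

```python
def expected_gain_from_sharing(p_i_need_help, p_you_need_help, benefit=10, cost=4):
    """Toy model of reciprocal sharing (illustrative numbers only):
    I pay `cost` whenever you have a shortfall and I share; I receive
    `benefit` whenever I have a shortfall and you reciprocate."""
    return p_i_need_help * benefit - p_you_need_help * cost

# Luck-driven shortfalls: we're equally likely to be the one in need.
print(expected_gain_from_sharing(0.3, 0.3))   # 3.0 - 1.2 = 1.8 > 0: sharing pays off

# Skill- or effort-driven shortfalls: the tables rarely turn for the skilled sharer.
print(expected_gain_from_sharing(0.02, 0.3))  # 0.2 - 1.2 = -1.0 < 0: sharing doesn't pay
```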

Ken:

Nice, a way to separate propensity from economics and cost benefit rational choices. And, I see: unlucky genetics is a little easier to connect to medical issues, where there is a little more stomach for sharing the risk. Low effort/low skill can be more easily viewed as choice "cheating" even though there is probably a genetic lottery component to it.

Lucy:

Outstanding. Thank you.

Ross Andrews:

Yet another brilliant David Pinsof Substack post. The Starbucks analogy is a good one, and I think it’s a healthy way to look at people in general. No one has a problem with Starbucks trying to make money for two reasons: first, providing products and services to people is a good thing. Second, of course they’re trying to make money… it’s a business! When looking at those around us, we should respect the fact that their motives - sex, status, in-group and family altruism - are normal and inevitable human motives just like earning profits for a business.

Andrew Smith:

Good point about indiscriminate altruism. We have been wired to protect those around us by way of evolution, but not everyone on the planet.

Keith Ngwa:

Once again, you've written another hypocritically moralistic article while claiming to believe in Darwinism. If morality and values are bullshit then so is the very idea of moral progress, because there wouldn't exist any objective moral standard to progress towards at all.

"Just think of the most villainous groups in history—the most zealous Nazis, Maoists, inquisitors, and holy warriors. None of these groups saw themselves as apes vying for dominance."

A lot of those guys really did openly see themselves exactly as that. Most people who join violent ideological groups are not wide-eyed Idealists at all; some of them aren't even believers in their ideologies. Most are actually very blunt about their self-interests and don't even bother making a justification.

"None of them reflected on their ugly, unconscious motives."

You are assuming that self-interest and selfishness are universally seen as evil when they aren't.

"What should terrify us most is not the lone cynic, but the mob, the movement, the higher purpose—the feeling of being part of “something larger than ourselves.” "

If morality is bullshit then there's nothing objectively wrong about being a delusional mass murdering Idealist at all.

The notion that all Human and other Animal behavior & thought is just subconsciously driven for reproduction is a pop cultural caricature of Evolution that Darwin himself rejected. Humans and other animals have all sorts of heritable psychological & behavioral traits that have either zero or negative impact on their chances to reproduce, yet Natural Selection doesn't do away with them. Also, the idea that Evolution would produce organisms that desire and psychologically need things (like "meaning", "beauty", etc) that don't actually exist goes against how Natural Selection & Biology in general actually works. It's one of several reasons why Darwin himself rejected Nihilistic interpretations of his theory and wasn't an Atheist despite popular belief.

David Pinsof:

I never argued for moral nihilism (the view that nothing is objectively right or wrong), and I’m not a moral nihilist. It is possible for morality to be mostly bullshitty, mostly harmful, and real (or capable of being true or false), which is what I believe. Separately, what is your evidence for groups of violent/villainous people being explicitly selfish or non-moralistic? I have asked far and wide for such evidence among social scientists and found nothing (several mentioned the mongols but I looked into it and they are positively dripping with moralism). So where are you getting this from?

Keith Ngwa:

The Mongols, Ancient Greeks, Romans, Turks, Arabs, Huns, Fulani, Aztecs, Hebrews, Imperial Japanese, Tang Chinese and even most of the Crusades were basically their leaders outright saying "we are better than you and we're gonna take all your shit and kill you because we can", with any ethical or religious justifications being secondary or after the fact. There was nothing moralistic about the Mongol Conquest because they never even enforced any aspects of their culture onto their hosts and more often assimilated into their hosts themselves.

The modern decline of violence is merely because the incentives for war have decreased and nobody benefits from WW3. But this is temporary and this current era of peace will end like all of the others in the past.

What basis do you have for calling anything Good or Evil at all? This is the primary problem with all Secular morality and all Western morality since the Enlightenment, it has zero basis in objective reality and always degenerates into solipsistic relativism because it can't ever get beyond arbitrary feelings & consequences. There's not a single thing or act in all of Human history that has universally been regarded as Evil by all people. For instance, the notion that violence/killing is inherently bad & always wrong is absent in the vast majority of cultures & religions worldwide (it isn't even in the Bible). For many cultures, morality is openly situational and tribal with standards of humane treatment only applying to the in-group, and there have been many philosophers and even sects of every major religion that outright rejected the very idea of objective morality.

David Pinsof:

Can you provide a citation and/or quote supporting this idea that all those ancient groups said “we’re gonna take all your shit and kill you because we can”? I agree they all thought they were superior to the groups they were conquering (who were dumb evil savages), but that is itself a kind of moralistic justification. I’m looking for an example of a group who doesn’t think they’re superior at all, and just wants to kill and plunder for its own sake, or for selfish gain. Any citations for that claim?

Paula Ghete:

I enjoyed reading this article and I agree with most of it, but I have some thoughts and questions. I’d be curious to hear what you think – if you’re interested in reading and replying.

I think I’m quite cynical myself, yet reading this article made me realize that maybe I’m a bit less cynical than I had thought. I still believe that some people may truly want to make the world a better place – even if part of their motivation is still selfish or self-serving. After all, if I succeed in making the world a better place for everyone, *I* will also get to live in a better world. Plus, I’d experience more meaning, feel better about myself in the process, etc.

But isn’t there room for an honest desire to make the world better (mostly) for the sake of it? Just like we do small acts of kindness for strangers we’ll never see again. If evolution would only select for selfishness and small moral circles (that include our family or in-group), why do we still feel good when we help others indiscriminately – including strangers or people who cannot reciprocate? Why wasn’t this selected against so that we’d only feel good when we help those who are family or part of our group?

Since you mentioned knowledge and beauty, why do you think we seek and take pleasure in them? And why do we experience something that feels like pure joy when we see a beautiful painting or finally understand something about human nature, for example? Does it all come down to increasing or signaling status?

David Pinsof:

Thanks for your thoughtful comments, Paula. I’m afraid the logic of Darwinism should cast doubt on any genuine motivation to make the world a better place, even if doing so benefits the self in the process. The reason is that fitness is relative to other members of the breeding population. If I benevolently help everyone have an extra kid, including myself, then my benevolent genes will have no advantage over alternative genes. They will not take over the population. It is only when a gene benefits its carriers relative to noncarriers that it increases in frequency. That’s why our motivations are mostly in comparison to other people--I wrote about this in “There’s a problem with our desires”. As for why we find it rewarding to help strangers, I think the answer is that we are motivated to convince others that we are good people (according to how our culture defines “good”), and we are also motivated to convince ourselves that we are good people, so that we can more effectively convince others. Just as a defense lawyer rehearses their case and gathers evidence for their defendant’s innocence prior to the trial, we rehearse our case and gather evidence for our own goodness prior to the “trials” we face in life if anyone ever questions our character. That’s why we find it rewarding to help strangers--especially when they are particularly sympathetic according to our culture, when we can help them in a particularly valiant way, and when the specific way we’re helping them is considered praiseworthy in our culture. At the end of the day, yes, I think it is a kind of status-seeking and a kind of signaling. Which doesn’t mean it’s bad. It just means it’s the best Darwinian explanation for it anyone has yet come up with. As for the beauty of art, there may be some self-interested benefits to it as a kind of play or practice or fine-tuning of one’s cognitive and perceptual equipment. You can read more about that hypothesis here: https://dl.icdst.org/pdfs/files/308beb8f79706ad5590518a03d7a1866.pdf
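To make the relative-fitness point concrete, here is a minimal one-generation gene-frequency calculation (standard replicator arithmetic; the specific numbers are made up for illustration):

```python
def next_gene_frequency(p, w_carriers, w_noncarriers):
    """One generation of selection: p' = p * w_carriers / mean fitness."""
    mean_fitness = p * w_carriers + (1 - p) * w_noncarriers
    return p * w_carriers / mean_fitness

p = 0.10  # current frequency of a hypothetical "benevolence" gene

# Indiscriminate benevolence: carriers and noncarriers get the same fitness boost,
# so relative fitness is unchanged and the gene goes nowhere.
print(next_gene_frequency(p, w_carriers=1.2, w_noncarriers=1.2))  # 0.10

# Selective helping: carriers (and their kin) benefit relative to noncarriers,
# so the gene increases in frequency.
print(next_gene_frequency(p, w_carriers=1.2, w_noncarriers=1.0))  # ~0.118
```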

Paula Ghete:

Thank you for your reply! Your article and your comment haven't answered all my questions, but they made me reflect. I'll try to read some of your other posts to see what else I can learn or reflect on.

Just bullshitting:

"The distinction is crucial, because if people "really" wanted music as an end in itself, then the status they gained from it would have no effect on their behavior: they'd keep on making music regardless of how much status they got from it."

>So the test of whether people 'really' want X is: would they do X in the absence of acquiring status by doing X?

> People donate to charity anonymously

>Pinsof: BUT they MUST be seeking status secretly even if they donate anonymously! It's "buried" signaling, you see!

It appears your "theory" is unfalsifiable.

In short, David, it seems you are ... bullshitting.

David Pinsof:

No, if people didn't get extra status from donating anonymously, it would falsify the theory.

Just bullshitting:

Why would that falsify the theory? Do people get extra status for playing music "anonymously", where people can't hear them?

If people didn't get extra status from donating anonymously, you would simply say that our stone-age brain doesn't "get" anonymity since "anonymity" did not exist when our minds evolved. I know how it works.

David Pinsof:

It seems your theory that the theory is unfalsifiable is, itself, unfalsifiable.

Just bullshitting:

Nope, you're just wrong.

If it's all about self-interested status-seeking, why didn't natural selection make us more cynical: why do people "look upon anonymous donors with awe and reverence", as you say? Why don't they automatically assume some strategic ruse? Wouldn't that be more beneficial since it is true?

The very fact that people do make these kinds of distinctions shows that there are, in fact, meaningful differences between people: some people are more or less selfish, and we evolved to register those differences.

Will Peterson:

"Any altruism that wasn't laser-focused toward our families or our allies [...] would be mercilessly selected against by evolution."

What about a flexible and adaptable concept of family, ally, or tribe? We see it all the time. Fans of a particular professional sports team consider other fans of the same team from 2,000 miles away similar to close family or friends in psychological studies. People who share an amorphous identity like "queer" or "Jew" consider others of the same identity to be close allies, and are willing to be altruistic.

If our biological sense of family-hood is that flexible, maybe it's pretty easy to trick it to include all of humanity?

David Pinsof:

No, sadly, it’s not easy to trick people into including all of humanity in their group, any more than it’s easy to trick a sports fan to root for a rival team, a Jew to treat Muslims as family members (or vice versa), or “queer” people to lovingly embrace Christian fundamentalists (or vice versa). Ingroups have outgroups, and the evolutionary function of an ingroup is to enhance the fitness of insiders relative to outsiders—not to benefit the entire world. To form an ingroup with the entire world is to defeat the whole purpose of forming an ingroup, from a Darwinian perspective.

Nicholas Moore:

I think there's a subtle mistake in the rationale here, which is to equate the effect (Darwinian self-interest) with the cause (that, therefore, the behaviour itself must have been performed cynically).

Just because the outcome of a certain behaviour is to favour the interest of the gene, you cannot generalise that the motivation behind the behaviour intended this outcome; indeed, as you have pointed out before, humans have very low self-awareness.

When it comes to the evolution of behaviour, the only thing upon which selection acts is the outcome of the behaviour – it does not care about the mechanisms which lead to that outcome. In the case of human behaviour, those mechanisms involve neural processes – thoughts, emotions, etc – which depend upon the development of the brain, including memories, personality, preferences etc. It is not necessary for the motivation of the human to be cynical in order for the outcome – the level at which evolution acts – to have the effect of promoting the interest of the set of genes responsible for the behaviour.

It is entirely plausible – from the perspective of natural selection – for people to altruistically care about other people enough to give charitably and anonymously because they simply care about the cause to which they are giving. As an example, many people donate to Cancer Research because they lost a loved one to cancer, and they don't want other people to experience that same pain – the behaviour is caused as a side-effect of empathy, not cynicism (empathy itself having likely evolved under the selective pressures of our early social environment). Of course, the outcome of this behaviour may well provide social benefits to the individual responsible, as you highlight, or may even be viewed through the lens of "curing cancer would be in the best interests of the individual's family and gene line", but this is making the mistake of interpreting the mechanisms of a behaviour – the individual's motivations – through the lens of the outcomes of that behaviour.

I would say, therefore, that to generalise a cynical interpretation onto the motivations of all humans is to oversimplify the reality, which is that a great many different textures of minds, and a great many different motivations – some noble, some selfish – add up to the complex outcomes which have created human civilisation. Your mind and motivations may be cynical, but that does not mean that everyone else's must be as well.

David Pinsof:

Thanks, Nicholas. I probably should have included a paragraph getting into this nuance about conscious awareness, but alas, the piece was already getting long. My take is: I don’t think it really matters whether the motive is conscious or not. Worldviews help us predict and explain people’s behavior. That’s their job. If I’m trying to predict and explain behavior, cynicism is going to help me do that, regardless of whether the cynical motives are conscious or not. It doesn’t matter why people *think* they’re donating to charity. What matters is why they are *actually* donating to charity—the specific causal factors that are influencing their decision. Those factors are inevitably going to be reputational, status-driven, nepotistic, reciprocal, and/ or groupish in nature. If I’m trying to design institutions with incentives that promote charitable giving, I need to look at people’s *actual* motives, not the motives they say they have. If I’m trying to predict what Starbucks is going to do next, I need to look at what’s going to make Starbucks the most money—not at what “enriches the human spirit.” Maybe everyone at Starbucks sincerely believes they’re trying to enrich the human spirit. Fine, doesn’t matter. That’s not what’s going to help me predict Starbucks’ financial decisions. Motives are the things that causally influence and explain our behavior across a range of situations, regardless of whether we’re aware of them or not. Motives (and psychological systems) are the things that natural selection selects—not behaviors. When you understand the process that created our motives, there is simply no other option than to say those motives are selfish, nepotistic, or groupish. If you can think of another way for a motive to evolve that doesn’t involve replicating the genes that built it, I’d love to hear it. But until then, I’m going with Darwinian cynicism as the best way to explain and predict people’s behavior.

Keith Ngwa:

"Motives are the things that causally influence and explain our behavior across a range of situations, regardless of whether we’re aware of them or not. Motives (and psychological systems) are the things that natural selection selects—not behaviors."

This is just flat out false on all levels. Natural Selection actually DOES select for behaviors, more so than it does for motives. There's an entire field of science called Behavioral Genetics that demonstrates this clearly.

Neuroscience has also demonstrated numerous times that beliefs, values and motives have little to no direct impact on the behavior of humans and other animals; they only influence the way people think and feel about their behaviors because their effects are largely post-hoc. Your brain makes decisions to do a behavior before you become consciously aware of it.

Motives and values are largely post-hoc rationalizations of instinctual desires and self-interests, which could be a wide range of things genetically. And reproduction is NOT the universal motivation of all human behavior or even that of other animals (no scientist in Evolutionary Biology thinks it is, not even Darwin himself), as a large percentage of people and wild animals choose to never have offspring. There is no single universal motivation behind all behaviors because motivations themselves don't cause behaviors, and motivations & desires change over time.
