69 Comments
Shane Littrell, PhD:

Moral particularism > utilitarianism (and consequentialism)

David Pinsof:

Not a bad summary.

Shane Littrell, PhD:

It's why I often take issue with some of the research from the "trolley-ologists" out there. The "utilitarian vs. deontological" either/or framing in that area of research often doesn't reflect how people engage in moral reasoning in the real world. I could be wrong, but it seems to me that most people are moral particularists, at least to a limited degree.

Networkneuromod:

I think this was just called practical wisdom at one point. I like to watch the "numbers people" go through a long and winding numerical process, through decades of new -isms, just to arrive at the understanding that contemplative thinkers had it more right all along, without all the circular calculations.

Everything-Optimizer:

I agree with you. But I'll also be a pedantic mathematician and go further to say utilitarianism fails well beyond these considerations.

The appeal of utilitarianism also lies in its quantification of preferences for modeling purposes: treating economic actors as making decisions by maximizing utility and, correspondingly, impartially adjudicating between different incentive structures in microeconomic and social choice theory for the purpose of designing institutions.

And it falls flat here, too.

1) As you note, people don't actually have closed-form utility functions; a better microeconomic theory of market decisions is cumulative prospect theory, in which people maximize a weighted sum of gains and losses measured from some reference point (see the sketch after this list).

2) Within the field of welfare economics, the formal study of social well-being, utilitarianism suffers from various gotchas, and it's not taken seriously in formal economic theory. Among the various competitors, I prefer the framework of the Nobel laureate Amartya Sen: maximizing capability vectors, that is, the variety of options open to the people in a polity - consume goods and services, start a business, engage in community involvement, etc.
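
To make point 1 concrete, here is a minimal sketch of the prospect-theory value function, using the parameter estimates Tversky and Kahneman published in 1992. The coin-flip lottery is just an illustrative example, and full cumulative prospect theory additionally reweights probabilities with a rank-dependent weighting function, which this sketch omits.

```python
# Value function from cumulative prospect theory (Tversky & Kahneman, 1992).
# Outcomes are valued relative to a reference point; gains and losses are
# curved differently, and losses loom larger than gains (loss aversion).

ALPHA = 0.88   # diminishing sensitivity for gains (TK 1992 estimate)
BETA = 0.88    # diminishing sensitivity for losses (TK 1992 estimate)
LAMBDA = 2.25  # loss-aversion coefficient (TK 1992 estimate)

def value(outcome: float, reference: float = 0.0) -> float:
    """Subjective value of an outcome relative to a reference point."""
    gain = outcome - reference
    if gain >= 0:
        return gain ** ALPHA
    return -LAMBDA * ((-gain) ** BETA)

# A fair coin flip between +$100 and -$100 has expected value zero, but
# its prospect value is negative, which is why most people decline it:
flip_value = 0.5 * value(100) + 0.5 * value(-100)
print(round(flip_value, 1))  # ≈ -36.0
```

The negative value of a fair bet is the whole point: a theory of decisions built on reference points and loss aversion predicts behavior that a simple utility-maximizing account cannot.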

Sha:

I'm basically a moral nihilist. For me, taking evolutionary psychology to its logical conclusion seems to mean that all values (morality, beauty, goodness, etc.) are bullshit that our brains evolved to believe in order to maximize our fitness. "Facts" exist, but values will never be measurable in any empirical, scientific way (they don't exist in any relevant sense).

David Pinsof:

Yea I used to hold this view, but I think there are facts about the objective criteria that trigger human emotions absent any biases, and that’s usually what we’re talking about when we talk about morality, beauty, etc. Things can therefore be objectively beautiful, moral, etc. (at least to humans). I find this view more plausible than the view that the entirety of our moral and aesthetic discourse is false. I have found no convincing evolutionary explanation for why we would have or need so much false discourse. And plenty of good evolutionary explanations for why such discourse is rooted in features of human nature that we all seem to be referring to with words like “ethical” or “beautiful.”

Sha:

Thanks for your answer! Yeah, I understand this view, but to me it seems to be using "objective" in a sense that is too weak, arbitrarily anthropocentric, and dismissive of disagreement about deeply held values between people.

David Pinsof:

Yea I admit it doesn't have as much bite as other versions of "objective" morals or aesthetics, but sometimes we cannot get what we want from reality. And yea, it's definitely anthropocentric, but I don't think that's so arbitrary given that we are humans and greatly benefit from knowing what is objectively compelling to other humans.

Networkneuromod:

I implore you to engage more with cognitive science, neuroscience, and philosophy, and less with linear, easily digestible forms of data in an attempt to achieve "fit" and thereby sufficient feelings of data confirmation. There are cognitive layers and, for that matter, cognitive distortions that implicate what "feeling strongly" can entail. There are corresponding presuppositions, appealing to different levels, that contribute to the resulting feelings. It is not a matter of simple plausible deniability or subjective flattening that determines directional, objective necessity, nor are there reductive models that capture this proportionally.

Enrico:

I agree with more or less everything, except where you are against the decision in the trolley problem to throw the fat man under the train.

After all, "our moral intuitions are partly utilitarian." I care in the SAME way (more or less: not at all) about both the fat man and the five others who would be saved. So here utilitarianism is probably the only way I can deal with the problem (under its conditions: most reasonable people would REFUSE the premise that those are THE ONLY TWO options and try something else instead - try to save EVERYBODY).

On the contrary, it would be wrong to throw my mom (child, spouse, friend, whatever) under the train, since I care for the one but not (or less so) for the others. I would actually throw five people from the bridge to save my mom (child, spouse, etc.). That, by the way, reflects the "correct" evolutionary perspective: save *MY* genes.

(Finally, the old joke: the real utilitarian solution to the trolley problem is to go back in time, kill whoever devised the idea, and spare a lot of people a lot of trouble.)

David Pinsof:

I actually didn't say I'm against the decision to throw the fat man under the trolley. I just listed it as an example of the types of arguments people make against utilitarianism. I'm actually not sure if it's "right" or not. I think it's objectively shameful and disgusting. I think anyone who is capable of physically murdering someone in that context, in that way, should be ashamed of--and disgusted with--themselves. A capacity to physically murder someone that quickly and brutally is objectively undesirable in a social partner--I would be afraid to be around that person. And the person who committed the murder ought to feel guilty about the fat man they murdered and compassionate toward his family and friends. If I were in that context, I would be too filled with feelings of expected shame, guilt, compassion, and self-disgust to go through with it, and I think that's a good thing. I think that makes me a more desirable social partner. Then again, I also get that I'd feel compassion toward the five people who'd die if I didn't act, and guilt for not saving them. I think those emotions would also be on the scale, and I'm not sure what the "correct" weighting of them all would be. All I know is: 1) a good person should feel all those conflicting emotions in that context, 2) I wouldn't push the man if I were in that context, and 3) I would be disgusted with, and afraid of, anyone who did push the man in that context (and I think I would be correct to feel that way).

Matt in Tokyo:

Sounds like emotivism. I think ultimately there are two distinct ways people talk about morality and we really ought to be distinguishing between them.

I think it’s completely correct to point out that most of the time when people say X is wrong, they’re mostly just expressing their personal emotional aversion to that thing (often playing a subconscious status game or similar).

But the other type is when you say (to take one of your examples) torturing animals is wrong - something I hope most people believe would be wrong regardless of how humans happened to evolve or how we happen to feel about it. If someone edited my brain to think torturing animals was morally good, I would just be wrong.

All else being equal, the universe is a better place if it has less suffering. People who disagree with that, I think, tend to smuggle in extra changes (ignoring the "all else being equal" part) or else just lack imagination. If lessons could be learnt without suffering, surely that is preferable. And if more suffering is somehow preferable, that invites the absurd conclusion that we should suffer maximally.

David Pinsof:

Unfortunately your “all else equal” claim also leads to absurd conclusions: that we should eliminate all suffering, pump people full of opiates without their consent, plug them into the matrix, painlessly kill everyone in their sleep, etc. But I agree torturing animals is wrong. And I agree it’s not just an expression of my emotions. It’s a statement of fact about what any normal unbiased human will experience at the sight (or knowledge of) an animal being tortured—and what they will think of the torturer. Moral claims are factual claims about the prototypical triggers of emotions and whether those triggers were present. And yes, I don’t think we can learn lessons (and fulfill all the other helpful functions of suffering) without suffering. The correct way to understand suffering is that it is a mechanism with a function. And no that doesn’t mean we should maximally suffer, any more than we should maximally feel any emotion with a useful function.

Charles Egan:

This is how I read Hume's view--a moral judgement is a belief that a property will evoke certain sentiments in impartial observers.

David Pinsof:

Haven't read Hume in a while, but if your reading is correct, then consider me a Humean.

Networkneuromod:

You are at least a human, but even a Humean might be skeptical of that.

Matt in Tokyo:

Pumping people full of opiates etc doesn’t satisfy the condition of all else being equal though - it completely changes what society looks like, people’s lives and the connections they have with others (things that people value). Criticisms of utilitarianism often smuggle in these other consequences that haven’t been taken into account.

An all-else-being-equal scenario is, for example, animals in factory farms: we think they suffer greatly, but due to the problem of other minds, we don't know for sure. I think that, all else being equal, a universe where they suffer greatly is worse than a universe where they don't, even if the two seem identical to an outside observer (that is to say, even if the animals behave identically: they still cry out, still convince us they're in pain, and so on).

David Pinsof:

That's fair but then your "all else equal" claim applies to basically nothing in reality. What change can you realistically make in the world that affects nothing else? Things are causally interrelated in very complicated ways. If you're only talking about parts of the causal nexus that have no relationship to anything else, then you are basically talking about 0% of reality.

Networkneuromod:

"Personal emotional aversion" is not plainly equal when you ask for form argument, in light of necessity and weight of factors. That is to say, one personal emotional aversion can be more backed and also more implicated than another persons. You even showed an example to this effect with your "often playing a subconscious status game or similar". Any honest discourse about a serious topic-which is rare in a postmodern 'ironic detachment' landscape should be seeking, for example, what is a vice to a power game and what is meta-cognitively considerate of downstream outcomes unrelated to ego prioritization.

Your "less suffering" suggestion is very common materialist position that continually has to revise itself not only on any idea of "progressive" linearity away from it in society but also between comparative factors of suffering and the temporal implication of each that do not fit neatly into a model unless the human condition is understood; the same human condition that would recursively defeat the feasibility of the model itself, as the model discounts convictions on foresight.

Jamie Freestone:

Great piece as usual. Happiness is bullshit, check. Utilitarianism is based on bullshit folk psychology (at best), check. What emotions are, check.

My only quibble here is that surely there can be some "real" utilitarians out there, people who try their best at doing ethics/morality according to their principles (however misguided & partially successful). And surely some of them could be doing it in a non-status seeking way. Like their actions might even damage their status, make them feel bad, etc. but for some purely intellectualised reason they still do them.

I'm a hardcore Darwinian, but to me that means that although anything non-adaptive gets culled in the medium or long term, in the very short term (a single generation) anything can happen: no rules, random mutations, lesions on the brain, mind viruses, personal experiments, etc. So it's possible a person's behaviour is totally eccentric & not amenable to an evo psych framing. Some of the more monk-like utilitarians in the EA/rationalist community are, I think, at least in some of their decisions, not status-seeking (even in the paradoxical way you've written about).

David Pinsof:

Yea maybe, but if what you're talking about is really happening, it can't be very common. I'm not so much interested in weirdos with random mutations or brain lesions as I am in making broad, useful generalizations about humans in general. I'm interested in explaining as much variance in human behavior as possible, not explaining the behavior of John Smith in particular, even if John Smith is the exception to my cynical thesis. We have to content ourselves with explaining most of what's going on in human behavior. We'll never be able to explain every last idiosyncrasy of every last individual. Or if we can, we'll need to nail down the basics first. Let's try to figure out what's going on at the most basic level, and then we can argue about John Smith if you want.

Jamie Freestone:

Totally fair. Though as a weirdo myself I feel profoundly unseen.

Schneeaffe:

>anger is “designed” by natural selection to detect unfair treatment

What do you think of anger at inanimate objects, or at one's own mistakes?

>Yes, we’re extremely biased about morality

>It’s just a strategy for winning The Opinion Game and creating a new social norm where pretending to be utilitarian wins you status points.

To what extent do you think "the current meta" really changes what is a correct emotional trigger? It seems like you have to allow that it does when evaluating shame, for example - and this potentially brings a lot of more typically "moral" considerations back into play.

>Torturing animals is bad

This doesn't seem like an intuition a predatory animal would evolve.

David Pinsof:

Good points and good questions. Re anger at inanimate objects, I’m not sure, but it might be implicit anger at the person who made the object for doing a crappy job making it. Re anger at one’s own mistakes, I’m not sure, but it might be a kind of anger at one’s past self for treating one’s present self unfairly (for failing to take into account the interests of one’s future self at the time of the decision). Re the point about “the current meta.” Sorry I’m not familiar with that expression—you’ll have to explain it to me and repeat the question. Re torturing animals being bad. You’re right that it’s a bit odd for a predatory animal to have such feelings about torturing animals. But there’s a difference between eating animals (which I don’t think is morally wrong) and torturing them. I suspect the reason we have a negative response to animal torture is that we correctly view the sort of person who would torture animals as a sociopath or sadist, and we wisely want to avoid and condemn that sort of person. We also admire the person who feels so much compassion that it overflows onto animals, and we want to associate with that person. So we should objectively feel proud about caring about animal welfare (because it triggers admiration in others), and objectively icked out by people who torture animals or don’t give a shit about animals being tortured.

Schneeaffe:

Do you think getting angry at past selves is "correct"? You might also find my old theory interesting:

https://old.reddit.com/r/slatestarcodex/comments/e2loqp/why_we_get_angry_at_inanimate_objects/

>Sorry I’m not familiar with that expression

https://en.wikipedia.org/wiki/Metagame#Competitive_gaming, or maybe something like "the signalling equilibrium".

>I suspect the reason we have a negative response to animal torture is that we correctly view the sort of person who would torture animals as a sociopath or sadist

This actually is a great example of the metagame issue. We view people who do it as dangerous; therefore you would only do it if you didn't care about being seen as dangerous, which means you probably are dangerous. It's self-sustaining, and I don't think humans consistently land in this equilibrium. Many predators also "play with" their prey, in ways that sure seem torturous, and quite analogous to e.g. bullfighting. Many cultures had some form of torturing animals as entertainment. We have *somehow* gotten into a different equilibrium, and a more detailed look at that process might have more to say about "moral reasoning."

William of Hammock:

Utilitarianism fails to account for the effects of adopting "maximization" or "optimization" in principle when, in practice, we overfit every narrative to a pretty tiny working memory space. It then ignores the process by which we distill and bundle information to fit that limitation (abstraction and maths being... one of them?).

I demand that your next post be "Bayesianism is Bullshit," because it simply "fails better" in all the same ways. It is a very very very good way to calibrate our working memories, and a shitty shitty shitty way of remembering how distillation, bundling, unpacking and "enrichment" (projection) occurs in its immediate surrounds.

David Pinsof:

I wouldn't say bayesianism is entirely bullshit--I think it can be intellectually useful in a variety of ways--but like anything it can be abused and taken to absurd extremes.

William of Hammock:

Ah, but how much have you thought about it in particular?

Bayesianism is in a weird place where "truth value" is arguably abandoned for "predictive value." The latter is pretty unabashedly utility-centric, even while there is something pragmatic about conflating the two. But this conflicts directly with evolutionary psychology, at least as I understand it. Truth value and utility (be it genetic, phenotypic, social, etc.) can be orthogonally related. Bayesianism in principle says to maximize entropy (orthogonality in this case), but in practice, similar priors and base rates are presumed sufficiently similar, which is an entropy-minimizing operation. Bayesianism is hoisted by its own pragmatic utility, but its commitments are pretty conspicuously barren. I might even say it approaches "optimized bullshitting."
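
To make the point about priors concrete, here is a minimal sketch, assuming a toy Beta-Binomial setup with made-up numbers: the updating rule itself is mechanical, but observers who start from different priors draw different conclusions from identical data.

```python
# Two observers update on the same coin-flip data via Bayes' rule.
# With a Beta(a, b) prior on the coin's bias and observed heads/tails,
# the posterior is Beta(a + heads, b + tails); its mean is the estimate.

def posterior_mean(a: float, b: float, heads: int, tails: int) -> float:
    """Posterior mean of the coin's bias under a Beta(a, b) prior."""
    return (a + heads) / (a + b + heads + tails)

heads, tails = 7, 3  # shared, made-up data

optimist = posterior_mean(8, 2, heads, tails)  # prior mean 0.8
skeptic = posterior_mean(2, 8, heads, tails)   # prior mean 0.2
print(optimist, skeptic)  # 0.75 vs 0.45: same data, different conclusions
```

The machinery is impeccable in both cases; the "vibes" live entirely in the choice of prior.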

William of Hammock:

Forgot the most important part: the overlap between priors and vibes, even if the method is pretty great at minimizing vibes, given initial vibes.

Matthias E:

I am hanging between (modern) forms of Stoicism and Epicureanism, and hope to become clearer about it through your essays. Which would be less bullshit? Your texts contain many value judgments: where do they come from? What is the core of our values? Affections (pleasure/pain)? Reason? What about mental perturbations and ataraxia/eudaimonia? I think these ancient ideas can navigate us out of nihilism.

David Pinsof:

My value judgments, like anyone else’s value judgments, ultimately come from emotions. I think the only way to understand those emotions is with good evolutionary psychology, and the only way to understand the things in the world that trigger those emotions is by using the scientific method, broadly construed.

Matthias E:

Thanks, yes, that was my guess too. That's why I'm currently leaning more towards Epicureanism. All the arguments, examples, and thought experiments that aim to condemn (ethical, qualitative) hedonism try to show bad results. But it is precisely this "bad" that lies in the feelings. There is a bad feeling in murdering a fat man for other people, eating babies, killing someone to donate their organs, or enslaving many other people (who would rightly fight back, and we ourselves would not want to be among them). It all lies in feeling feelings about things and ideas - at the core, our deepest affections - but yes, with reason and understanding we can hold images and ideas before our mind, along with the consequences of our actions, and again value them with our feelings. :)

Epicureanism, types of hedonism, and feelings have often been underrated in Western thought and philosophy. The Letter to Menoeceus is the greatest practical philosophical text, in my current view :)

Pedro Villanueva:

Interesting. This seems like the "meaning of life" question - the dilemma of what one should try to maximize. Personally, and I'm aware this comes off as rather selfish, I have never really been bothered by questions like this. My philosophy is to just "do whatever the hell I feel like doing." If I want to work on a project, I do; if I want to learn about x, I do. Maybe it's because I live in a country where being free to pursue whatever one desires is NOT a given; one has to fight for it.

Perhaps I should think more about how to improve other people's lives... within whatever optimizing policy I come up with.

Chris DeMuth Jr:

Act utilitarianism can drive one crazy -- without commitments, you can get way too swayed by momentary rationalizations. Rule utilitarianism can be useful if imperfect for developing principles.

David Pinsof:

Yea rule utilitarianism is definitely better, but unfortunately also vulnerable to rationalization. Politics is largely about people rationalizing their side’s policy preferences as being optimally rule utilitarian.

David Pinsof:

Which isn’t to say that my view isn’t vulnerable to rationalizations either. I just don’t think a morality invulnerable to rationalizations exists.

John A. Johnson:

As a life-long student of the evolutionary basis of morality, I deeply appreciated this essay. Ordinary utilitarianism as a philosophy of morality clearly does not represent how morality actually functions in everyday life. In David's words, "utilitarians suck at psychology." Even if one could rationally calculate whether act A or act B would create the greatest happiness for the greatest number of people (we cannot; nobody can accurately predict how their behavior will impact the world because that is beyond anybody's cognitive powers), that is not how ordinary people decide what is morally good or bad and whether to behave in ways that they believe are morally good (or bad; yes, sometimes we choose to do things that we believe are immoral). Evolved moral sentiments or emotions are obviously what motivate our decisions, and these emotions evolved because they maximized gene transmission.

One small shortcoming of this essay is the way it employs examples of behavior that David *feels* are good or bad (how could it be otherwise if emotions define our perceptions of moral goodness?) but are presented as obvious "truths." "It’s obviously bad to hurt people. It’s good to help them if you can." My own view is that so-called "moral truths" represent (conscious or unconscious) efforts to make one's moral pronouncements more efficacious in influencing others' behavior by appearing to be impersonally objective. In reality, nothing is uniformly only beneficial or only harmful for everyone in any situation at any period of time. In reality, acts have a variety of beneficial and harmful consequences that vary across people and time. But when moral pronouncements are presented as "truths" they can seem more compelling. Compare "I feel bad when you hurt people" with "It is objectively wrong to hurt people." For more details, see https://www.psychologytoday.com/us/blog/cui-bono/202307/a-psychological-theory-of-moral-truth

Which brings me to another self-serving reference to my own research on morality: specifically, an essay on what I call (tongue-in-cheek) Real Utilitarianism, in which I define a new kind of utilitarianism based on causal efficacy, describing "goodness" as the power to bring about particular results - not the greatest happiness for the greatest number, but any specific harm or benefit. Rather than spell out the details here, I will point to the URL for this position: https://sites.psu.edu/drj5j/real-utilitarianism/ . This essay is one small part of my online postings about my research on the psychology of morality. An overview of that research can be found at https://sites.psu.edu/drj5j/virtues-web-site/ .

J. Goard:

Your reasoning here feels like it's in an ethereal plane parallel to my own, noting all of the same implications and drawing opposite conclusions. My reasoning goes:

Mathematical intuitions produce really bad math. The number of possible shuffles of a standard deck of cards, greater than the number of atoms in the galaxy? That's crazy!
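
A quick sanity check on that claim (the Milky Way atom count below is a rough, commonly cited estimate):

```python
import math

# Number of distinct orderings of a standard 52-card deck.
shuffles = math.factorial(52)
print(f"{shuffles:.2e}")  # ≈ 8.07e+67

# Rough, commonly cited estimates put the Milky Way at ~10^68 atoms,
# so the two quantities are within an order of magnitude of each other.
```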

Physical intuitions produce really bad physics. The water emerging from the curved tube is surely going to keep moving "the same way" -- curving! Well, until its curvingness slows down...

Yes, intuitions in some form are always part of the clay with which the iterative process of social moral reasoning is seeded (apologies for the mixed metaphor). And the same is true for math, physics, logic, biology, psychology, etc. I think I get to ethical consequentialism in about the same way as I get to the math and physics that are far superior to raw intuition.

Philip:

Suppose a man has the opportunity to forcibly impregnate a woman and force her to bear the child (both against her will), and suppose he rejects the opportunity. Did he act wrongly or rightly, according to the "new morality" described in this essay?

David Pinsof:

Rightly. We did not evolve to maximize fitness. We evolved the sorts of motivations that would have promoted fitness in ancestral environments, including motivations to cultivate a good reputation, choose long-term mates based on criteria like generosity and fidelity, form monogamous pairbonds, cooperatively rear offspring, avoid becoming a social pariah, avoid making enemies, avoid getting killed or exiled, follow social norms, deter cheaters and liars and bullies, etc. The suite of emotions we evolved would trigger feelings of revulsion in any normal, non-sociopathic, unbiased human contemplating the scenario you've described.

Philip:

How can anything be morally wrong on your view then?

David Pinsof:

My answer is in the section "the keys to morality."

Philip:

On your view, "morality is the stuff that objectively triggers our moral emotions". Returning to my example, suppose the man is overcome by his evolved lust, and he impregnates the woman. Then he acted rightly. Or suppose he is overcome by his evolved desire to cultivate a good reputation, choose long-term mates etc. etc. and he doesn't impregnate her. Then he acted rightly.

Every possible act corresponds to some "moral emotion", which could be the product of one's genes, which are all the product of evolution. So far the theory doesn't give us any criteria to distinguish right from wrong.

David Pinsof:

Lust is not a moral emotion. Compassion, guilt, anger, empathy, outrage, social disgust, shame, and pride are moral emotions. Moral discourse revolves around them. Moral discourse does not revolve around lust. When people wonder what the ethically “right” thing to do is, they don’t consider what makes them horny. Typically when we talk about morality, we’re talking about things related to reputations and trust and cooperation and fairness and mutual aid. This doesn’t mean some people aren’t objectively sexy—objectively triggering lust in any normal, single member of the opposite sex—but what is objectively sexy (or cute or delicious or scary or fun) is different from what is objectively moral. Or what we ought to be ashamed of. Or what we ought to feel guilty about. Or what we ought to be mad about. Make sense?

Philip:

Of course moral discourse revolves around lust, as it has for as long as there has been moral discourse. It's part of the 10th commandment. It's one of the seven deadly sins.

People are constantly talking about how one ought to be ashamed or guilty about lusting after someone they ought not to, and how one ought to be mad at people who lust wrongly.

There are many moral debates centered around lust. Is lusting after children wrong in and of itself, or is only acting on the lust wrong? The same questions apply to lust when the object and/or subject are in a monogamous relationship. What (if any) is the line between healthy desire and objectification? To what extent should (if at all) society impose modesty norms on women to moderate male lust, or is the responsibility entirely on men? Etc etc.

In general, if evolution is not our guide, how can we distinguish between moral and non-moral emotions on your theory?

Weaver:

This post is bullshit, in the technical sense as defined by Frankfurt.

>Bullshit is speech intended to persuade without regard for truth. The liar cares about the truth and attempts to hide it; the bullshitter doesn't care whether what they say is true or false.

Frankfurt's motivation:

>Respect for the truth and a concern for the truth are among the foundations for civilization. I was for a long time disturbed by the lack of respect for the truth that I observed ... bullshit is one of the deformities of these values.

It appeals to evopsych's just-so stories to justify its subjectivity, forgets that evolution is a stochastic process and not intelligent design, fails comically at theory of mind, and comes from a self-confessed bullshitter practicing bullshitting.

This essay is negative utils but does seem to work as a status play given the comments.

Swami:

I think humans did evolve a sense of how to optimize expected outcomes for their groups, and how to build norms that reinforce these outcomes. Seems like utilitarianism takes this in-group conscientiousness and tries to apply it to the eternal scope of humanity or, even further, to all sentient life.

David Pinsof:

Also unevolvable given that everyone in one's "group" is not equally genetically related, reproductively valuable, productive, generous, or interdependent.

Swami:

Strongly disagree. We can select and be selected based upon our reputation. This solves the dilemma of cooperation, as more cooperative types are both preferentially selected and gain power as selectors.

This is well documented in both the anthropological literature and in modern game theory.

David Pinsof:

Wanting a good reputation is very different from wanting to maximize utility impartially among one's group members. I agree we care deeply about our reputations. I disagree we care deeply about maximizing utility impartially.

Swami:

One way to get a good reputation is to be seen as a good follower of norms and to be seen as caring for the outcome of the group (which can overlap with the utility of members).

David Pinsof:

Yes, but wanting to follow group norms and wanting to be seen as caring about the group (insofar as that is a norm) are different from wanting to maximize utility impartially.

Swami:

Who would you want to choose on your team — the guy that empathizes and understands your point of view and interests, or the guy that is oblivious or uncaring about them?
