According to Google, utilitarianism is “the doctrine that an action is right insofar as it promotes happiness, and that the greatest happiness of the greatest number should be the guiding principle of conduct.”
I agree with you. But I'll also be a pedantic mathematician and go further to say utilitarianism fails well beyond these considerations.
The appeal of utilitarianism also lies in its quantification of preferences for modeling purposes - treating economic actors as making decisions by maximizing utility and, correspondingly, allowing impartial adjudication of different incentive structures in microeconomic and social choice theory for the purpose of designing institutions.
And it falls flat here, too.
1) As you note, people don't actually have closed-form utility functions; a better microeconomic theory of market decisions is cumulative prospect theory, in which people maximize a weighted sum of gains and losses relative to some reference point (see the sketch just after this list).
2) Within welfare economics - the formal study of social well-being - utilitarianism suffers from various well-known gotchas, and it's not taken that seriously in formal economic theory. Among the various competitors, I prefer (the Nobel Prize winner) Amartya Sen's framework of maximizing capability vectors, that is, the variety of options the people in a polity actually have - consume goods and services, start a business, engage in community involvement, etc.
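To make 1) concrete, here is a minimal sketch of the value-function half of cumulative prospect theory. The parameter values are the commonly cited Tversky and Kahneman (1992) estimates, used purely as illustrative assumptions, and the probability-weighting half of the theory is omitted.

```python
# Value function from cumulative prospect theory (Tversky & Kahneman, 1992).
# Outcomes are valued as gains or losses relative to a reference point,
# not as levels of a closed-form utility function over total wealth.

def cpt_value(outcome: float, reference: float = 0.0,
              alpha: float = 0.88, beta: float = 0.88,
              lam: float = 2.25) -> float:
    """Subjective value of an outcome relative to a reference point.

    Gains are valued concavely, losses convexly, and losses are weighted
    more heavily than gains by the loss-aversion factor `lam`.
    """
    x = outcome - reference  # code everything as a gain or a loss
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# Losing $100 looms larger than winning $100:
print(round(cpt_value(100), 1))   # 57.5
print(round(cpt_value(-100), 1))  # -129.5
```

The reference-point dependence and the gain/loss asymmetry are exactly what a single closed-form utility function over outcomes fails to capture.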
Moral particularism > utilitarianism (and consequentialism)

Not a bad summary.

It's why I often take issue with some of the research from the "trolley-ologists" out there. The "utilitarianism vs consequentialism" type of either/or thinking in that area of research often doesn't reflect how people engage in moral reasoning in the real world. I could be wrong, but it seems to me that most people are moral particularists - at least to a limited degree.
I think this was just called practical wisdom at one point. I like to watch the "numbers people" go through a long and winding numerical process through decades of new -isms, just to come to the understanding that contemplative thinkers had it more right all along, without all the circular calculations.
I'm basically a moral nihilist. For me, taking evolutionary psychology at its logical conclusion seems to mean that all values (morality, beauty, good etc.) are bullshit that our brains evolved to believe in in order to maximize our fitness. "Facts" exist but values will never be able to be measured in an empirical, scientific way (they don't exist in a relevant way).
Yea I used to hold this view, but I think there are facts about the objective criteria that trigger human emotions absent any biases, and that’s usually what we’re talking about when we talk about morality, beauty, etc. Things can therefore be objectively beautiful, moral, etc. (at least to humans). I find this view more plausible than the view that the entirety of our moral and aesthetic discourse is false. I have found no convincing evolutionary explanation for why we would have or need so much false discourse. And plenty of good evolutionary explanations for why such discourse is rooted in features of human nature that we all seem to be referring to with words like “ethical” or “beautiful.”
Thanks for your answer! Yeah I understand this view, but to me it seems to be using "objective" in a sense that is too weak, arbitrarily anthropocentric, and dismissive of disagreement about deeply held values between people.
Yea I admit it doesn't have as much teeth as other versions of "objective" morals or aesthetics, but sometimes we cannot get what we want from reality. And yea, it's definitely anthropocentric, but I don't think that's so arbitrary given that we are humans and greatly benefit from knowing what is objectively compelling to other humans.
I implore you to engage with cognitive science, neuroscience, and philosophy more and engage less with linear, easily digestible forms of data to try to achieve "fit" and thereby sufficient feelings of data confirmation. There are cognitive layers and for that matter, cognitive distortions, that implicate what "feeling strongly" can entail. There are corresponding presuppositions that appeal to different levels that contribute to resulting feelings. It is not a matter of simple plausible deniability or subjective flattening that determine directional, objective necessity, nor are there reductive models that capture this proportionally.
Hi David!

Utilitarianism is morality for nerds. For the best world example you could ever wish for, read this piece on Sam Bankman-Fried. https://www.lrb.co.uk/the-paper/v45/n21/john-lanchester/he-said-they-said
About morality. I agree it’s about what triggers emotions, but I’d push the point even further. There are no “moral” emotions as such. There are just “emotions”, plus the socio-cognitive capacity to anticipate their triggering in social matters.
So when you propose an 80-20 split and I say “That’s not fair”, what I’m really saying is “I anticipate that if that were to happen it would trigger emotions that operate to recalibrate status, respect, etc in light of how people treat me”.** Increasingly I think that is all there is, i.e. there is no specifically moral cognition as such. Or to put it as a slogan: morality is all spandrel.
** There is also the matter that *just by proposing* an 80-20 split you may have revealed that the value of our relationship is lower to you than I realised, which may trigger emotions etc, but that possibly is independent of my point here.
Yea the Bankman-Fried stuff is the perfect example—thanks. I think we have basically the same view re morality. But wouldn’t you say that, like, curiosity, lust, confusion, mirth, and lassitude (the feeling of being sick) are not “moral” emotions? But that outrage, guilt, compassion, etc. are “moral” emotions? Isn’t there a relevant distinction there? I’m not sure I can precisely articulate what it is, but it seems like moral discourse revolves around some emotions more than others.
At the moment, my thinking is:
- In many (all?) species, certain experiences are inherently attractive or repulsive (for want of better words).
- In humans at least, some of those experiences are inherently and deeply social. That is to say: humans are so dependent on each other that phenomena such as reputation, standing, respect, etc are super basic to our psychology. So when those things appear to be getting up- or down-regulated, that is very attractive/repulsive (again, for want of better words). Note: exactly what up- and down-regulates social standing can vary historically and culturally.
- As linguistic animals we give words to these experiences of attraction and repulsion. These are terms like outrage, guilt, compassion, good, bad, etc. We can, furthermore, anticipate what will trigger these experiences. Together with words for the experiences, this anticipation allows for discussion and talk of "morality".
- It's true that some of the experiences that are attractive/repulsive are not inherently social. Lassitude is an especially clear example. However, the difference between these and what we call moral is nothing fundamentally moral as such. The only difference is the (very) high psychological importance of sociality.
You know these literatures better than I do. I don't know if anybody else has developed this idea. But when I do glance at these literatures, I keep seeing a tacit assumption that there is such a thing as "moral psychology", "evolved mechanisms of morality", etc. Yet at the moment I don't see any scientific/analytical need for the term "moral". There's just emotions, some of which are social. The only thing that elevates them to "morality" is the deep and strong importance of social standing in human societies.
Hmm, interesting (and fwiw, I think attractive and repulsive are good words—easy to cash out in functional terms). I know there’s this paper that tries to rigorously separate moral emotions from social emotions based on contractualist logic: https://hal.science/hal-03900029
There’s also Oliver Curry’s work arguing that cooperation is what distinguishes the moral from the nonmoral, but I’m a bit of a skeptic there.
I think there’s something to the morality = cooperation / contractualist cognition approach, but I’m also sympathetic to the more deflationary view you’re proposing. Haven’t seen that view defended anywhere.
Yes, both the Fitouchi/Baumard approach and the Curry approach are the sort of thing I have in mind when I talk about glancing at the literature.
And yes, in general I think we should be deflationary with our model of the mind. Occam's Razor etc etc. It also allows us to see clearer differences with our primate relatives: I think our deep sociality is the (small) difference that makes a (big) difference.
One day I will write this up, I suppose.
The second objection seems ill-founded. Utilitarians hold that suffering is intrinsically bad. That is compatible with it being instrumentally useful in some cases, as is indeed obvious. In general, it seems that any plausible moral theory should say that it is good or obligatory to reduce suffering in many cases. This is, e.g., an obvious reason that torture is usually wrong, or that childhood vaccination is good.
Have you read the arguments on utilitarianism.net?
I would go further and say that suffering is useful in *most* cases. And what I mean by "useful" is not necessarily "what promotes happiness and minimizes suffering." I mean "whatever helps us achieve our evolved goals, regardless of whether those goals increase or decrease happiness or suffering." It seems weird to strive to minimize or eliminate something that helps us achieve our goals in most cases--or at least, to declare that of the utmost importance. Minimizing suffering is different from minimizing bad things that people want to avoid (injury, death, illness, etc), and I think we ought to minimize those bad things. But I don't think such minimizing and maximizing should exhaust the *entirety* of our ethical decision making.
1. I'm not sure I agree with your empirical claim that it is useful in most cases, if by that you mean 'so useful that it is good on net'. Human history over the last 200 years has involved a lot of suffering reduction and this seems kind of obviously good at least in part for that reason.
2. Most importantly, your response doesn't engage with my main point, which is that on utilitarianism, suffering can be instrumentally useful, and so your argument can't be a critique of utilitarianism.
3. Structuring one's moral theory around the requirement of compatibility with evolved goals seems like a bad idea. My evolved goals are presumably to have as many children as possible, but I am not doing that. Are you? Sexual assault is plausibly an evolved goal for niches in the human population but is nevertheless bad.
1) Suffering must be useful in most cases or it would not have evolved by natural selection. I think you're confusing suffering with the bad things that cause suffering. Human history has reduced both, but I think it's the reduction of the bad things themselves that is good.
2) My problem with utilitarianism is that "instrumentally useful" is defined solely in terms of promoting happiness and reducing suffering, but I don't think those are the actual ends we are pursuing. So the definition of "instrumentally useful" is wrong. I'm defining "instrumentally useful" in terms of achieving the evolved goals we actually have, regardless of their effects on happiness or suffering. And I think it's weird to strive to reduce or eliminate something that is instrumentally valuable in that sense. Or at the very least, I don't think it's at all obvious that doing so is intrinsically good for the universe or whatever.
3) I disagree that we have an evolved goal to have as many children as possible. You are confusing an explicit desire for fitness with a desire for the things that correlated with fitness in ancestral environments. I doubt whether sexual assault per se is an evolved goal, but even if it were, sexual autonomy is also an evolved goal, and a much stronger one, and that is what makes sexual assault bad.
Re your second point, I think you are changing the subject.
- Your initial criticism was that utilitarianism is wrong because suffering is sometimes instrumentally useful.
- I claimed that utilitarianism is compatible with suffering sometimes being instrumentally useful.
- You are now saying that the criticism is actually that utilitarianism has the wrong account of what is *intrinsically* bad. This is a different point.
No, there’s no subject change, let me clarify. I’m claiming it’s ethically bizarre for hedonic utilitarians to want to minimize or eliminate something that’s greatly instrumentally valuable—that serves vital functions and helps us achieve our goals. And then I clarified what I meant by “instrumentally valuable.” Which is very different from what hedonic utilitarians mean by “instrumentally valuable.” My notion of instrumental value makes it weird for hedonic utilitarians to want to minimize suffering. Because doing so would destroy our ability to achieve our actual goals, which do not actually include pursuing happiness and avoiding suffering.
I think this is just you repeating, in a different format, your conflation of two distinct claims: (1) suffering is instrumentally useful, which is inconsistent with utilitarianism, and (2) utilitarianism has the wrong account of what is intrinsically valuable.
I argued (1) is false, but rather than conceding the point you started arguing for (2), which is irrelevant to (1).
Great piece as usual. Happiness is bullshit, check. Utilitarianism is based on bullshit folk psychology (at best), check. What emotions are, check.
My only quibble here is that surely there can be some "real" utilitarians out there, people who try their best at doing ethics/morality according to their principles (however misguided & partially successful). And surely some of them could be doing it in a non-status seeking way. Like their actions might even damage their status, make them feel bad, etc. but for some purely intellectualised reason they still do them.
I'm a hardcore Darwinian, but to me that means that although anything non-adaptive gets culled in the medium or long term, in the very short term (a single generation) anything can happen: no rules, random mutations, lesions on the brain, mind viruses, personal experiments, etc., so it's possible a person's behaviour is totally eccentric & not amenable to an evo psych framing. Some of the more monk-like utilitarians in the EA/rationalist community are, I think, at least in some of their decisions, not status seeking (even in the paradoxical way you've written about).
Yea maybe, but if what you're talking about is really happening, it can't be very common. I'm not so much interested in weirdos with random mutations or brain lesions as I am in making broad, useful generalizations about humans in general. I'm interested in explaining as much variance in human behavior as possible, not explaining the behavior of John Smith in particular, even if John Smith is the exception to my cynical thesis. We have to content ourselves with explaining most of what's going on in human behavior. We'll never be able to explain every last idiosyncrasy of every last individual. Or if we can, we'll need to nail down the basics first. Let's try to figure out what's going on at the most basic level, and then we can argue about John Smith if you want.
Totally fair. Though as a weirdo myself I feel profoundly unseen.
I agree with more or less everything, except where you are against the decision in the trolley problem of throwing the fat man under the train.
After all "our moral intuitions are partly utilitarian". I care in the SAME way (more or less: nothing at all) both about the fat man and the other five which would be saved. So here utilitarianism is probably the only way I can deal with the problem (under its conditions: most reasonable people would REFUSE the conditions of those being THE ONLY TWO options and try something else instead - try to save EVERYBODY).
On the contrary, it would be wrong to throw my mom (child, spouse, friend, whatever) under the train, since I care for one but not (less so) for the others. I would actually throw five people from the bridge to save my mom (child, spouse, etc). That by the way reflects the "correct" evolutionary perspective: save *MY* genes.
(finally, the old joke: the real utilitarian solution to the trolley problem is go back in time, kill the one who devised the idea, and spare a lot of people a lot of troubles)
I actually didn't say I'm against the decision to throw the fat man under the trolley. I just listed it as an example of the types of arguments people make against utilitarianism. I'm actually not sure if it's "right" or not. I think it's objectively shameful and disgusting. I think anyone who is capable of physically murdering someone in that context, in that way, should be ashamed of--and disgusted with--themselves. A capacity to physically murder someone that quickly and brutally is objectively undesirable in a social partner--I would be afraid to be around that person. And the person who committed the murder ought to feel guilty about the fat man they murdered and compassionate toward his family and friends. If I were in that context, I would be too filled with feelings of expected shame, guilt, compassion, and self-disgust to go through with it, and I think that's a good thing. I think that makes me a more desirable social partner. Then again, I also get that I'd feel compassion toward the five people who'd die if I didn't act, and guilt for not saving them. I think those emotions would also be on the scale, and I'm not sure what the "correct" weighting of them all would be. All I know is: 1) a good person should feel all those conflicting emotions in that context, 2) I wouldn't push the man if I were in that context, and 3) I would be disgusted with, and afraid of, anyone who did push the man in that context (and I think I would be correct to feel that way).
Sounds like emotivism. I think ultimately there are two distinct ways people talk about morality and we really ought to be distinguishing between them.
I think it’s completely correct to point out that most of the time when people say X is wrong, they’re mostly just expressing their personal emotional aversion to that thing (often playing a subconscious status game or similar).
But the other type is when you say (to take one of your examples) torturing animals is wrong - something I hope most people believe would be wrong regardless of how humans happened to evolve or how we happen to feel about it. If someone edited my brain to think torturing animals was morally good, I would just be wrong.
All else being equal, the universe is a better place if it has less suffering. People who disagree with that I think tend to smuggle in extra changes (ignoring the “all else being equal” part) or else just lack imagination. If lessons could be learnt without suffering, surely that is preferable. If more suffering is somehow preferable, then it tempts the absurd conclusion we should suffer maximally.
Unfortunately your “all else equal” claim also leads to absurd conclusions: that we should eliminate all suffering, pump people full of opiates without their consent, plug them into the matrix, painlessly kill everyone in their sleep, etc. But I agree torturing animals is wrong. And I agree it’s not just an expression of my emotions. It’s a statement of fact about what any normal unbiased human will experience at the sight (or knowledge of) an animal being tortured—and what they will think of the torturer. Moral claims are factual claims about the prototypical triggers of emotions and whether those triggers were present. And yes, I don’t think we can learn lessons (and fulfill all the other helpful functions of suffering) without suffering. The correct way to understand suffering is that it is a mechanism with a function. And no that doesn’t mean we should maximally suffer, any more than we should maximally feel any emotion with a useful function.
This is how I read Hume's view--a moral judgement is a belief that a property will evoke certain sentiments in impartial observers.

Haven't read Hume in a while, but if your reading is correct then consider me a Humean.

You are at least a human, but even a Humean might be skeptical on that.
Pumping people full of opiates etc doesn’t satisfy the condition of all else being equal though - it completely changes what society looks like, people’s lives and the connections they have with others (things that people value). Criticisms of utilitarianism often smuggle in these other consequences that haven’t been taken into account.
An all else being equal scenario is for example animals in factory farms: We think they suffer greatly, but due to the problem of other minds, we don’t know for sure. I think all else being equal, a universe where they suffer greatly is worse than a universe where they don’t, even if they seem identical to an outside observer (that is to say even if the animals behave identically, they still cry out and convince us they’re in pain, and so on).
That's fair but then your "all else equal" claim applies to basically nothing in reality. What change can you realistically make in the world that affects nothing else? Things are causally interrelated in very complicated ways. If you're only talking about parts of the causal nexus that have no relationship to anything else, then you are basically talking about 0% of reality.
The point is to isolate what it is we care about. In this case, it helps us understand that we care about suffering and want it minimised, provided it can be done without deleterious 2nd/3rd order effects.
Utilitarianism always asks us to consider 2nd/3rd order effects and include them in any calculation.
“So a world without suffering would be a world where we aimlessly destroy our bodies and relationships, where we carelessly dig ourselves deeper and deeper into all of our problems, and where we stupidly repeat our mistakes over and over again.”
This is a justification of suffering by appealing to the merits of its 2nd/3rd order effects.
In other words, it sounds like you’re arguing against utilitarianism with a utilitarian argument.
Maybe you don’t really think utilitarianism is bullshit?
"Personal emotional aversion" is not plainly equal when you ask for form argument, in light of necessity and weight of factors. That is to say, one personal emotional aversion can be more backed and also more implicated than another persons. You even showed an example to this effect with your "often playing a subconscious status game or similar". Any honest discourse about a serious topic-which is rare in a postmodern 'ironic detachment' landscape should be seeking, for example, what is a vice to a power game and what is meta-cognitively considerate of downstream outcomes unrelated to ego prioritization.
Your "less suffering" suggestion is very common materialist position that continually has to revise itself not only on any idea of "progressive" linearity away from it in society but also between comparative factors of suffering and the temporal implication of each that do not fit neatly into a model unless the human condition is understood; the same human condition that would recursively defeat the feasibility of the model itself, as the model discounts convictions on foresight.
Act utilitarianism can drive one crazy -- without commitments, you can get way too swayed by momentary rationalizations. Rule utilitarianism can be useful if imperfect for developing principles.
Yea rule utilitarianism is definitely better, but unfortunately also vulnerable to rationalization. Politics is largely about people rationalizing their side’s policy preferences as being optimally rule utilitarian.
Which isn’t to say that my view isn’t vulnerable to rationalizations either. I just don’t think a morality invulnerable to rationalizations exists.
My view of morality: the attempt to examine how variable moral positions can be used to justify different possible choices is a viable, but imperfect, path towards avoiding the most wrong approach.
"The purpose of life is not to be happy. It is to be useful, to be honorable, to be compassionate, to have it make some difference that you have lived and lived well."
--Ralph Waldo Emerson
This is true not just individually, but also collectively.
Great piece! A thought that came to mind: if our moral emotions developed in a world that differs vastly from the one we live in today, does that mean they aren't as relevant anymore, that they aren't achieving the same results they did hundreds of thousands of years ago?
Great question, and a very tricky one. I think we have to accept human nature as it is, even if it is mismatched with the modern world. Ancestrally, social humiliation and ostracism meant death. Today, social humiliation and ostracism do not mean death and in most cases are not a big deal. Just find some new friends or get a new job. So does that mean we shouldn’t feel compassion for ostracized, humiliated people—or feel less of it? Or that we should perpetrate the shaming and humiliation—or do more of it? I think the answer is no, we have to accept human nature for what it is. I think the same logic applies to other cases of mismatch.
We can't do much about the feelings, but what I'm questioning is whether we should organize our moral rules around those feelings or not. Have we chosen our moral rules based on those feelings because it was convenient or because those feelings still have some value for what we are trying to achieve with morality overall (taking the position here that morality is not divine in any sense but a human creation)?
I think we have no choice but to organize moral rules around feelings. There is no other way to organize them. The only way to challenge a moral feeling is by pitting a different moral feeling against it. You can say a feeling is mismatched or primitive or whatever, but at the end of the day it’s just another feeling that’s telling you the mismatch or primitiveness is bad. And that feeling has an evolutionary explanation too. It may be just as primitive. There’s no way out. We have to accept that morality is just a fundamentally emotional thing. It doesn’t mean logic and science can’t be involved, but they will only matter insofar as they interact with our feelings.
But they were just feelings, not moral feelings, before we began identifying them with morals. Why couldn't we stop identifying our feelings with morals? Not saying it would be easy but why couldn't it be possible?
Maybe that’s possible, and maybe that would be good. But if so, it’s ultimately another feeling that’s telling you that.

How so?

I like how you're basically flipping the evolutionary debunking argument on its head.
>anger is “designed” by natural selection to detect unfair treatment
What do you think of anger at inanimate objects, or at one's own mistake?
>Yes, we’re extremely biased about morality
>It’s just a strategy for winning The Opinion Game and creating a new social norm where pretending to be utilitarian wins you status points.
To what extent do you think "the current meta" really changes what is a correct emotional trigger? It seems like you have to for evaluating shame for example - and this potentially brings a lot of more typically "moral" considerations back into play.
>Torturing animals is bad
This doesn't seem like an intuition a predatory animal would evolve.
Good points and good questions. Re anger at inanimate objects, I’m not sure, but it might be implicit anger at the person who made the object for doing a crappy job making it. Re anger at one’s own mistakes, I’m not sure, but it might be a kind of anger at one’s past self for treating one’s present self unfairly (for failing to take into account the interests of one’s future self at the time of the decision). Re the point about “the current meta.” Sorry I’m not familiar with that expression—you’ll have to explain it to me and repeat the question. Re torturing animals being bad. You’re right that it’s a bit odd for a predatory animal to have such feelings about torturing animals. But there’s a difference between eating animals (which I don’t think is morally wrong) and torturing them. I suspect the reason we have a negative response to animal torture is that we correctly view the sort of person who would torture animals as a sociopath or sadist, and we wisely want to avoid and condemn that sort of person. We also admire the person who feels so much compassion that it overflows onto animals, and we want to associate with that person. So we should objectively feel proud about caring about animal welfare (because it triggers admiration in others), and objectively icked out by people who torture animals or don’t give a shit about animals being tortured.
>I suspect the reason we have a negative response to animal torture is that we correctly view the sort of person who would torture animals as a sociopath or sadist
This actually is a great example of the metagame issue. We view people who do it as dangerous, therefore you would only do it if you didn't care about being seen as dangerous, which means you probably are dangerous. It's self-sustaining, and I don't think humans consistently land in this equilibrium. Many predators also "play with" their prey, in ways that sure seem torturous, and quite analogous to e.g. bullfighting. Many cultures had some form of torturing animals as entertainment. We have *somehow* gotten into a different equilibrium, and a more detailed look at the process might have more to say about "moral reasoning".
Utilitarianism fails to account for the effects of adopting "maximization" or "optimization" in principle, when in practice, we overfit every narrative to a pretty tiny working memory space. It then ignores the process by which we distill and bundle information to fit that limitation (abstraction and maths being... one of them?).
I demand that your next post be "Bayesianism is Bullshit," because it simply "fails better" in all the same ways. It is a very very very good way to calibrate our working memories, and a shitty shitty shitty way of remembering how distillation, bundling, unpacking and "enrichment" (projection) occurs in its immediate surrounds.
I wouldn't say Bayesianism is entirely bullshit--I think it can be intellectually useful in a variety of ways--but like anything it can be abused and taken to absurd extremes.
Ah, but how much have you thought about it in particular?
Bayesianism is in a weird place where "truth value" is arguably abandoned for "predictive value." The latter is pretty unabashedly utility-centric, even while there is something pragmatic about conflating the two. But this conflicts directly with evolutionary psychology, at least as I understand it. Truth value and utility (be it genetic, phenotypic, social, etc) can be orthogonally related. Bayesianism in principle says to maximize entropy (orthogonality in this case), but in practice, similar priors and base rates are presumed sufficiently similar, which is a minimizing entropy operation. Bayesianism is hoisted by its own pragmatic utility, but its commitments are pretty conspicuously barren. I might even say it approaches "optimized bullshitting."
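A minimal, made-up illustration of the entropy point: a uniform prior over hypotheses is the maximum-entropy starting place, and the Bayes update then concentrates belief, i.e. reduces entropy. All hypotheses and numbers below are invented for the sketch.

```python
import math

def entropy(dist):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def update(prior, heads, flips):
    """Posterior over coin biases after observing `heads` in `flips`.

    The binomial coefficient cancels in the normalization, so plain
    Bernoulli likelihoods suffice.
    """
    likelihood = {h: h ** heads * (1 - h) ** (flips - heads) for h in prior}
    evidence = sum(prior[h] * likelihood[h] for h in prior)
    return {h: prior[h] * likelihood[h] / evidence for h in prior}

# Three hypotheses about a coin's bias, with a uniform (max-entropy) prior.
prior = {0.3: 1 / 3, 0.5: 1 / 3, 0.7: 1 / 3}
posterior = update(prior, heads=8, flips=10)

print(round(entropy(prior), 3))      # 1.585 bits: maximal uncertainty
print(round(entropy(posterior), 3))  # 0.674 bits: belief has concentrated
```

In principle you start from maximum entropy; in practice, as the comment above puts it, presuming everyone's priors and base rates are "sufficiently similar" smuggles the entropy back down before the update even starts.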
I am hanging between (modern) forms of Stoicism and Epicureanism, and hope to become clearer about it through your essays. Which would be less bullshit? In your texts there are many value judgments - where do they come from? What is the core of our values? Affections (pleasure/pain)? Reason? What about mental perturbations and ataraxia/eudaimonia? I think these ancient ideas can navigate us out of nihilism.
My value judgments, like anyone else’s value judgments, ultimately come from emotions. I think the only way to understand those emotions is with good evolutionary psychology, and the only way to understand the things in the world that trigger those emotions is by using the scientific method, broadly construed.
Thanks, yes that was my guess too. That's why I'm currently leaning more towards Epicureanism. All the arguments and examples and thought experiments that want to indict (ethical, qualitative) hedonism try to show bad results. But it is precisely this "bad" that lies in the feelings. There is a bad feeling about murdering the fat man for other people, eating babies, killing someone to donate their organs, or enslaving many other people (who would rightly fight back, and we ourselves would not want to be among them). It all lies in feeling feelings about things/ideas - at the core, our deepest affections - but yes, with reason and understanding we can hold images/ideas and the consequences of our actions and outcomes before our minds, and again value them with our feelings. :)
Epicureanism, types of hedonism, and feelings were often underrated in Western thought/philosophy. The Letter to Menoeceus is the greatest practical philosophical text in my current view :)
Interesting. This seems like the "meaning of life" question: the dilemma of what one should try to maximize. Personally, and I'm aware this comes off as rather selfish, I have never really been bothered by questions like this. My philosophy is to just "do whatever the hell I feel like doing". If I want to work on a project, I do; if I want to learn about x, I do. Maybe it's because I live in a country where being free to pursue whatever one desires is NOT a given; one has to fight for it.
Perhaps I should think about how to improve other people's lives more... within whatever optimizing policy I come up with.
I agree with you. But I'll also be a pedantic mathematician and go further to say utilitarianism fails well beyond these considerations.
The appeal of utilitarianism is also in its quantification of preferences for modeling purposes - considering economic actors as making decisions by maximizing utility, and, correspondingly, impartial adjudication of different incentive structures in the context of microeconomic and social choice theory for the purpose of designing institutions.
And it falls flat here, too.
1) As you note people don't actually have closed form utility functions, and a better microeconomic theory of market decisions is maximizing cumulative prospects, which is the sum of gains from some reference point.
2) Within the field of Welfare Economics, the study of technical formalism on social well being, utilitarianism suffers from various gotchas, and it's not taken seriously in formal Economic Theory. Among various competitors, I prefer (the Nobel Prize winner) Amartya Sen's framework of maximizing capability vectors, that is the variety of possible options the people in a polity have - consume goods and services, start a business, engage in community involvement, etc.
Moral particularism > utilitarianism (and consequentialism)
Not a bad summary.
It's why I often take issue with some of the research from the "trolley-ologists" out there. The "utilitarianism vs consequentialism" type of either/or thinking in that area of research often doesn't reflect how people engage in moral reasoning in the real world. I could be wrong, but it seems to me that most people are moral particularists - at least to a limited degree.
I think this was just called practical wisdom at one point. I like to watch the "numbers people" go through a long and winding numerical process through decades of new -isms, just to come to the understanding that contemplative thinkers had it more right all along without all the circular calculations
I'm basically a moral nihilist. For me, taking evolutionary psychology at its logical conclusion seems to mean that all values (morality, beauty, good etc.) are bullshit that our brains evolved to believe in in order to maximize our fitness. "Facts" exist but values will never be able to be measured in an empirical, scientific way (they don't exist in a relevant way).
Yea I used to hold this view, but I think there are facts about the objective criteria that trigger human emotions absent any biases, and that’s usually what we’re talking about when we talk about morality, beauty, etc. Things can therefore be objectively beautiful, moral, etc. (at least to humans). I find this view more plausible than the view that the entirety of our moral and aesthetic discourse is false. I have found no convincing evolutionary explanation for why we would have or need so much false discourse. And plenty of good evolutionary explanations for why such discourse is rooted in features of human nature that we all seem to be referring to with words like “ethical” or “beautiful.”
Thanks for your answer! Yeah I understand this view, but to me this is seems to be using "objective" in a sense that is too weak, arbitrarily anthropocentric and dismissive of disagreement about deeply held values between people.
Yea I admit it doesn't have as much teeth as other versions of "objective" morals or aesthetics, but sometimes we cannot get what we want from reality. And yea, it's definitely anthropocentric, but I don't think that's so arbitrary given that we are humans and greatly benefit from knowing what is objectively compelling to other humans.
I implore you to engage with cognitive science, neuroscience, and philosophy more and engage less with linear, easily digestible forms of data to try to achieve "fit" and thereby sufficient feelings of data confirmation. There are cognitive layers and for that matter, cognitive distortions, that implicate what "feeling strongly" can entail. There are corresponding presuppositions that appeal to different levels that contribute to resulting feelings. It is not a matter of simple plausible deniability or subjective flattening that determine directional, objective necessity, nor are there reductive models that capture this proportionally.
Hi David!
Utilitarianism is morality for nerds. For the best world example you could ever wish for, read this piece on Sam Bankman-Fried. https://www.lrb.co.uk/the-paper/v45/n21/john-lanchester/he-said-they-said
About morality. I agree it’s about what triggers emotions, but I’d push the point even further. There are no “moral” emotions as such. There are just “emotions”, plus the socio-cognitive capacity to anticipate their triggering in social matters.
So when you propose a 80-20 split and I say “That’s not fair”, what I’m really saying “I anticipate that if that were to happen it would trigger emotions that operate to recalibrate status, respect, etc in light of how people treat me”.** Increasingly I think that is all there is i.e. there is no specifically moral cognition as such. Or to put it as a slogan: morality is all spandrel.
** There is also the matter that *just by proposing* an 80-20 split you may have revealed that the value of our relationship is lower to you than I realised, which may trigger emotions etc, but that possibly is independent of my point here.
Yea the bankman-fried stuff is the perfect example—thanks. I think we have basically the same view re morality. But wouldn’t you say that, like, curiosity, lust, confusion, mirth, and lassitude (the feeling of being sick) are not “moral” emotions? But that outrage, guilt, compassion, etc. are “moral” emotions? Isn’t there a relevant distinction there? I’m not sure I can precisely articulate what it is, but it seems like moral discourse revolves around some emotions more than others.
At the moment, my thinking is:
- In many (all?) species, certain experiences are inherently attractive or repulsive (for want of better words).
- In humans at least, some of those experiences are inherently and deeply social. That is to say: humans are so dependent on each other that phenomena such as reputation, standing, respect, etc are super basic to our psychology. So when those things appear to be getting up- or down-regulated, that is very attractive/repulsive (again, for want of better words). Note: exactly what up- and down-regulates social standing can vary historically and culturally.
- As linguistic animals we give words to these experiences of attraction and repulsion. These are terms like outrage, guilt, compassion, good, bad, etc. We can, furthermore, anticipate what will trigger these experiences. Together with words for the experiences, this anticipation allows for discussion and talk of "morality".
- It's true that some of the experiences that are attractive/repulsive are not inherently social. Lassitude is an especially clear example. However, the difference between these and what we call moral is nothing fundamentally moral as such. The only difference is the (very) high psychological importance of sociality.
You know these literatures better than I do. I don't know if anybody else has developed this idea. But when I do glance at these literatures, I keep seeing a tacit assumption that there is such a thing as "moral psychology", "evolved mechanisms of morality", etc. Yet at the moment I don't see any scientific/analytical need for the term "moral". There's just emotions, some of which are social. The only thing that elevates them to "morality" is the deep and strong importance of social standing in human societies.
Hmm, interesting (and fwiw, I think attractive and repulsive are good words—easy to cash out in functional terms). I know there’s this paper that tries to rigorously separate moral emotions from social emotions based on contractualist logic: https://hal.science/hal-03900029
There’s also Oliver Curry’s work arguing that cooperation is what distinguishes the moral from the nonmoral, but I’m a bit of a skeptic there.
I think there’s something to the morality = cooperation / contractualist cognition approach, but I’m also sympathetic to the more deflationary view you’re proposing. Haven’t seen that view defended anywhere.
Yes, both the Fitouchi/Baumard approach and the Curry approach are the sort of thing I have in mind when I talk about glancing at the literature.
And yes, in general I think we should be deflationary with our model of the mind. Occam's Razor etc etc. It also allows us to see clearer differences with our primate relatives: I think our deep sociality is the (small) difference that makes a (big) difference.
One day I will write this up, I suppose.
The second objection seems ill founded. Utilitarians hold that suffering is intrinsically bad. That is compatible with it being instrumentally useful in some cases, as is indeed obvious. In general, it seems that any plausible moral theory should say that it is good or obligatory to reduce suffering in many cases. This is eg an obvious reason that torture is usually wrong, or childhood vaccination is good
Have you read the arguments on utilitarianism.net?
I would go further and say that suffering is useful in *most* cases. And what I mean by "useful" is not necessarily "what promotes happiness and minimizes suffering." I mean "whatever helps us achieve our evolved goals, regardless of whether those goals increase or decrease happiness or suffering." It seems weird to strive to minimize or eliminate something that helps us achieve our goals in most cases--or at least, to declare that of the utmost importance. Minimizing suffering is different from minimizing bad things that people want to avoid (injury, death, illness, etc), and I think we ought to minimize those bad things. But I don't think such minimizing and maximizing should exhaust the *entirety* of our ethical decision making.
1. I'm not sure I agree with your empirical claim that it is useful in most cases, if by that you mean 'so useful that it is good on net'. Human history over the last 200 years has involved a lot of suffering reduction and this seems kind of obviously good at least in part for that reason.
2. Most importantly Your response doesn't engage with my main point which is that on utilitarianism, suffering can be instrumentally useful, and so your argument can't be a critique of utilitarianism.
3. Structuring ones moral theory around the requirement of compatibility with evolved goals seems like a bad idea. My evolved goals are presumably to have as many children as possible, but I am not doing that. Are you? Sexual assault is plausibly an evolved goal for niches in the human population but is nevertheless bad
1) Suffering must be useful in most cases or it would not have evolved by natural selection. I think you're confusing suffering with the bad things that cause suffering. Human history has reduced both, but I think it's the reduction of the bad things themselves that is good.
2) My problem with utilitarianism is that "instrumentally useful" is defined solely in terms of promoting happiness and reducing suffering, but I don't think those are the actual ends we are pursuing. So the definition of "instrumentally useful" is wrong. I'm defining "instrumentally useful" in terms of achieving the evolved goals we actually have, regardless of their effects on happiness or suffering. And I think it's weird to strive to reduce or eliminate something that is instrumentally valuable in that sense. Or at the very least, I don't think it's at all obvious that doing so intrinsically good for the universe or whatever.
3) I disagree that we have an evolved goal to have as many children as possible. You are confusing an explicit desire for fitness with a desire for the things that correlated with fitness in ancestral environments. I doubt whether sexual assault per se is an evolved goal, but even if it were, sexual autonomy is also an evolved goal, and a much stronger one, and that is what makes sexual assault bad.
Re your second point, I think you are changing the subject.
- Your initial criticism was that utilitarianism is wrong because suffering is sometimes instrumentally useful.
- I claimed that utilitarianism is compatible with suffering sometimes being instrumentally useful
- You are now saying that the criticism is actually that utilitarianism has the wrong account of what is *intrinsically* bad. This is a different point.
No, there’s no subject change, let me clarify. I’m claiming it’s ethically bizarre for hedonic utilitarians to want to minimize or eliminate something that’s greatly instrumentally valuable—that serves vital functions and helps us achieve our goals. And then I clarified what I meant by “instrumentally valuable.” Which is very different from what hedonic utilitarians mean by “instrumentally valuable.” My notion of instrumental value makes it weird for hedonic utilitarians to want to minimize suffering. Because doing so would destroy our ability to achieve our actual goals, which do not actually include pursuing happiness and avoiding suffering.
I think this is just you repeating in a different format, your conflation of two distinct claims: (1) suffering is instrumentally useful which is inconsistent with utilitarianism and (2) utilitarianism has the wrong account of what is intrinsically valuable.
I argued (1) is false, but rather than conceding the point you started arguing for (2), which is irrelevant to (1).
Great piece as usual. Happiness is bullshit, check. Utilitarianism is based on bullshit folk psychology (at best), check. What emotions are, check.
My only quibble here is that surely there can be some "real" utilitarians out there, people who try their best at doing ethics/morality according to their principles (however misguided & partially successful). And surely some of them could be doing it in a non-status seeking way. Like their actions might even damage their status, make them feel bad, etc. but for some purely intellectualised reason they still do them.
I'm a hardcore Darwinian, but to me that means that although anything non-adaptive gets culled in the medium or long term, in the very short term (a single generation) anything can happen: no rules, random mutations, lesions on the brain, mind viruses, personal experiments, etc., so it's possible a person's behaviour is totally eccentric & not amenable to an evo pysch framing. Some of the more monk-like utilitarians in the EA/rationalist community are, I think, at least in some of their decisions, not status seeking (even in the paradoxical way you're written about).
Yea maybe, but if what you're talking about is really happening, it can't be very common. I'm not so much interested in weirdos with random mutations or brain lesions as I am in making broad, useful generalizations about humans in general. I'm interested in explaining as much variance in human behavior as possible, not explaining the behavior of John Smith in particular, even if John Smith is the exception to my cynical thesis. We have to content ourselves with explaining most of what's going on in human behavior. We'll never be able to explain every last idiosyncracy of every last individual. Or if we can, we'll need to nail down the basics first. Let's try to figure out what's going on at the most basic level, and then we can argue about John Smith if you want.
Totally fair. Though as a weirdo myself I feel profoundly unseen.
I agree with more or less everything, except where you are against the decision in the trolley problem of throwing the fat man under the train.
After all "our moral intuitions are partly utilitarian". I care in the SAME way (more or less: nothing at all) both about the fat man and the other five which would be saved. So here utilitarianism is probably the only way I can deal with the problem (under its conditions: most reasonable people would REFUSE the conditions of those being THE ONLY TWO options and try something else instead - try to save EVERYBODY).
On the contrary, it would be wrong to throw my mom (child, spouse, friend, whatever) under the train, since I care for one but not (less so) for the others. I would actually throw five people from the bridge to save my mom (child, spouse, etc). That by the way reflects the "correct" evolutionary perspective: save *MY* genes.
(finally, the old joke: the real utilitarian solution to the trolley problem is go back in time, kill the one who devised the idea, and spare a lot of people a lot of troubles)
I actually didn't say I'm against the decision to throw the fat man under the trolley. I just listed it as an example of the types of arguments people make against utilitarianism. I'm actually not sure if it's "right" or not. I think it's objectively shameful and disgusting. I think anyone who is capable of physically murdering someone in that context, in that way, should be ashamed of--and disgusted with--themselves. A capacity to physically murder someone that quickly and brutally is objectively undesirable in a social partner--I would be afraid to be around that person. And the person who committed the murder ought to feel guilty about the fat man they murdered and compassionate toward his family and friends. If I were in that context, I would be too filled with feelings of expected shame, guilt, compassion, and self-disgust to go through with it, and I think that's a good thing. I think that makes me a more desirable social partner. Then again, I also get that I'd feel compassion toward the five people who'd die if I didn't act, and guilt for not saving them. I think those emotions would also be on the scale, and I'm not sure what the "correct" weighting of them all would be. All I know is: 1) a good person should feel all those conflicting emotions in that context, 2) I wouldn't push the man if I were in that context, and 3) I would be disgusted with, and afraid of, anyone who did push the man in that context (and I think I would be correct to feel that way).
Sounds like emotivism. I think ultimately there are two distinct ways people talk about morality and we really ought to be distinguishing between them.
I think it’s completely correct to point out that most of the time when people say X is wrong, they’re mostly just expressing their personal emotional aversion to that thing (often playing a subconscious status game or similar).
But the other type is when you say (to take one of your examples) torturing animals is wrong - something I hope most people believe would be wrong regardless of how humans happened to evolve or how we happen to feel about it. If someone edited my brain to think torturing animals was morally good, I would just be wrong.
All else being equal, the universe is a better place if it has less suffering. People who disagree with that I think tend to smuggle in extra changes (ignoring the “all else being equal” part) or else just lack imagination. If lessons could be learnt without suffering, surely that is preferable. If more suffering is somehow preferable, then it tempts the absurd conclusion we should suffer maximally.
Unfortunately your “all else equal” claim also leads to absurd conclusions: that we should eliminate all suffering, pump people full of opiates without their consent, plug them into the matrix, painlessly kill everyone in their sleep, etc. But I agree torturing animals is wrong. And I agree it’s not just an expression of my emotions. It’s a statement of fact about what any normal unbiased human will experience at the sight (or knowledge of) an animal being tortured—and what they will think of the torturer. Moral claims are factual claims about the prototypical triggers of emotions and whether those triggers were present. And yes, I don’t think we can learn lessons (and fulfill all the other helpful functions of suffering) without suffering. The correct way to understand suffering is that it is a mechanism with a function. And no that doesn’t mean we should maximally suffer, any more than we should maximally feel any emotion with a useful function.
This is how I read Hume's view--a moral judgement is a belief that a property will evoke certain sentiments in impartial observers.
Haven't read Hume in a while, but if your reading is correct than consider me a Humean.
You are at least a human but even a Humean might be skeptical on that.
Pumping people full of opiates etc doesn’t satisfy the condition of all else being equal though - it completely changes what society looks like, people’s lives and the connections they have with others (things that people value). Criticisms of utilitarianism often smuggle in these other consequences that haven’t been taken into account.
An all else being equal scenario is for example animals in factory farms: We think they suffer greatly, but due to the problem of other minds, we don’t know for sure. I think all else being equal, a universe where they suffer greatly is worse than a universe where they don’t, even if they seem identical to an outside observer (that is to say even if the animals behave identically, they still cry out and convince us they’re in pain, and so on).
That's fair but then your "all else equal" claim applies to basically nothing in reality. What change can you realistically make in the world that affects nothing else? Things are causally interrelated in very complicated ways. If you're only talking about parts of the causal nexus that have no relationship to anything else, then you are basically talking about 0% of reality.
The point is to isolate what it is we care about. In this case, it helps us understand that we care about suffering and want it minimised, provided it can be done without deleterious 2nd/3rd order effects.
Utilitarianism always asks us to consider 2nd/3rd order effects and include them in any calculation.
“So a world without suffering would be a world where we aimlessly destroy our bodies and relationships, where we carelessly dig ourselves deeper and deeper into all of our problems, and where we stupidly repeat our mistakes over and over again.”
This is a justification of suffering by appealing to the merits of its 2nd/3rd order effects.
In other words, it sounds like you’re arguing against utilitarianism with a utilitarian argument.
Maybe you don’t really think utilitarianism is bullshit?
"Personal emotional aversion" is not plainly equal when you ask for form argument, in light of necessity and weight of factors. That is to say, one personal emotional aversion can be more backed and also more implicated than another persons. You even showed an example to this effect with your "often playing a subconscious status game or similar". Any honest discourse about a serious topic-which is rare in a postmodern 'ironic detachment' landscape should be seeking, for example, what is a vice to a power game and what is meta-cognitively considerate of downstream outcomes unrelated to ego prioritization.
Your "less suffering" suggestion is very common materialist position that continually has to revise itself not only on any idea of "progressive" linearity away from it in society but also between comparative factors of suffering and the temporal implication of each that do not fit neatly into a model unless the human condition is understood; the same human condition that would recursively defeat the feasibility of the model itself, as the model discounts convictions on foresight.
Act utilitarianism can drive one crazy -- without commitments, you can get way too swayed by momentary rationalizations. Rule utilitarianism can be useful if imperfect for developing principles.
Yea rule utilitarianism is definitely better, but unfortunately also vulnerable to rationalization. Politics is largely about people rationalizing their side’s policy preferences as being optimally rule utilitarian.
Which isn’t to say that my view isn’t vulnerable to rationalizations either. I just don’t think a morality invulnerable to rationalizations exists.
My view of morality is the attempt to examine how variable moral positions can be used to justify different possible choices is a viable , but imperfect, path towards avoiding the most wrong approach
"The purpose of life is not to be happy. It is to be useful, to be honorable, to be compassionate, to have it make some difference that you have lived and lived well."
--Ralph Waldo Emerson
This is true not just individually, but also collectively.
Great piece! A thought that came to mind: if our moral emotions developed in a world that differs vastly from the one we live in today, does that mean they aren't as relevant anymore, that they aren't achieving the same results they did hundreds of thousands of years ago?
Great question, and a very tricky one. I think we have to accept human nature as it is, even if it is mismatched with the modern world. Ancestrally, social humiliation and ostracism meant death. Today, social humiliation and ostracism do not mean death and in most cases are not a big deal. Just find some new friends or get a new job. So does that mean we shouldn’t feel compassion for ostracized, humiliated people—or feel less of it? Or that we should perpetrate the shaming and humiliation—or do more of it? I think the answer is no, we have to accept human nature for what it is. I think the same logic applies to other cases of mismatch.
We can't do much about the feelings, but what I'm questioning is whether we should organize our moral rules around those feelings or not. Have we chosen our moral rules based on those feelings because it was convenient or because those feelings still have some value for what we are trying to achieve with morality overall (taking the position here that morality is not divine in any sense but a human creation)?
I think we have no choice but to organize moral rules around feelings. There is no other way to organize them. The only way to challenge a moral feeling is by pitting a different moral feeling against it. You can say a feeling is mismatched or primitive or whatever, but at the end of the day it’s just another feeling that’s telling you the mismatch or primitiveness is bad. And that feeling has an evolutionary explanation too. It may be just as primitive. There’s no way out. We have to accept that morality is just a fundamentally emotional thing. It doesn’t mean logic and science can’t be involved, but they will only matter insofar as they interact with our feelings.
But they were just feelings, not moral feelings, before we began identifying them with morals. Why couldn't we stop identifying our feelings with morals? I'm not saying it would be easy, but why couldn't it be possible?
Maybe that’s possible, and maybe that would be good. But if so, it’s ultimately another feeling that’s telling you that.
How so?
I like how you're basically flipping the evolutionary debunking argument on its head.
>anger is “designed” by natural selection to detect unfair treatment
What do you think of anger at inanimate objects, or at one's own mistakes?
>Yes, we’re extremely biased about morality
>It’s just a strategy for winning The Opinion Game and creating a new social norm where pretending to be utilitarian wins you status points.
To what extent do you think "the current meta" really changes what is a correct emotional trigger? It seems like you have to for evaluating shame for example - and this potentially brings a lot of more typically "moral" considerations back into play.
>Torturing animals is bad
This doesn't seem like an intuition a predatory animal would evolve.
Good points and good questions. Re anger at inanimate objects, I’m not sure, but it might be implicit anger at the person who made the object for doing a crappy job making it. Re anger at one’s own mistakes, I’m not sure, but it might be a kind of anger at one’s past self for treating one’s present self unfairly (for failing to take into account the interests of one’s future self at the time of the decision). Re the point about “the current meta.” Sorry I’m not familiar with that expression—you’ll have to explain it to me and repeat the question. Re torturing animals being bad. You’re right that it’s a bit odd for a predatory animal to have such feelings about torturing animals. But there’s a difference between eating animals (which I don’t think is morally wrong) and torturing them. I suspect the reason we have a negative response to animal torture is that we correctly view the sort of person who would torture animals as a sociopath or sadist, and we wisely want to avoid and condemn that sort of person. We also admire the person who feels so much compassion that it overflows onto animals, and we want to associate with that person. So we should objectively feel proud about caring about animal welfare (because it triggers admiration in others), and objectively icked out by people who torture animals or don’t give a shit about animals being tortured.
Do you think getting angry at past selves is "correct"? You might also find my old theory interesting:
https://old.reddit.com/r/slatestarcodex/comments/e2loqp/why_we_get_angry_at_inanimate_objects/
>Sorry I’m not familiar with that expression
https://en.wikipedia.org/wiki/Metagame#Competitive_gaming, or maybe something like "the signalling equilibrium".
>I suspect the reason we have a negative response to animal torture is that we correctly view the sort of person who would torture animals as a sociopath or sadist
This actually is a great example of the metagame issue. We view people who do it as dangerous; therefore you would only do it if you didn't care about being seen as dangerous, which means you probably are dangerous. It's self-sustaining, and I don't think humans consistently land in this equilibrium. Many predators also "play with" their prey, in ways that sure seem torturous, and quite analogous to e.g. bullfighting. Many cultures had some form of torturing animals as entertainment. We have *somehow* gotten into a different equilibrium, and a more detailed look at the process might have more to say about "moral reasoning".
Utilitarianism fails to account for the effects of adopting "maximization" or "optimization" in principle, when in practice, we overfit every narrative to a pretty tiny working memory space. It then ignores the process by which we distill and bundle information to fit that limitation (abstraction and maths being... one of them?).
I demand that your next post be "Bayesianism is Bullshit," because it simply "fails better" in all the same ways. It is a very very very good way to calibrate our working memories, and a shitty shitty shitty way of remembering how distillation, bundling, unpacking, and "enrichment" (projection) occur in its immediate surrounds.
I wouldn't say bayesianism is entirely bullshit--I think it can be intellectually useful in a variety of ways--but like anything it can be abused and taken to absurd extremes.
Ah, but how much have you thought about it in particular?
Bayesianism is in a weird place where "truth value" is arguably abandoned for "predictive value." The latter is pretty unabashedly utility-centric, even while there is something pragmatic about conflating the two. But this conflicts directly with evolutionary psychology, at least as I understand it. Truth value and utility (be it genetic, phenotypic, social, etc.) can be orthogonally related. Bayesianism in principle says to maximize entropy (orthogonality in this case), but in practice, similar priors and base rates are presumed sufficiently similar, which is an entropy-minimizing operation. Bayesianism is hoisted by its own pragmatic utility, but its commitments are pretty conspicuously barren. I might even say it approaches "optimized bullshitting."
Forgot the most important part: the overlap between priors and vibes, even if the method is pretty great at minimizing vibes, given initial vibes.
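To make the prior-dependence concrete, here's a minimal, purely illustrative sketch in Python (toy hypotheses and made-up numbers, not anyone's actual model): the same evidence updates a maximum-entropy (uniform) prior and an assumed "shared" concentrated prior, and the posterior entropy shows how much of the work the prior, not the data, is doing.

```python
import math

def normalize(weights):
    # Scale a list of non-negative weights so they sum to 1.
    total = sum(weights)
    return [w / total for w in weights]

def entropy(probs):
    # Shannon entropy in bits; higher means less committed.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def bayes_update(prior, likelihood):
    # Bayes' rule: posterior is proportional to prior times likelihood.
    return normalize([p * l for p, l in zip(prior, likelihood)])

likelihood = [0.6, 0.3, 0.1]            # P(evidence | hypothesis), hypothetical
max_entropy_prior = [1/3, 1/3, 1/3]     # the "maximize entropy" starting point
shared_prior = [0.8, 0.15, 0.05]        # "everyone's priors are similar" assumption

for label, prior in [("max-entropy", max_entropy_prior), ("shared", shared_prior)]:
    posterior = bayes_update(prior, likelihood)
    print(label, [round(p, 3) for p in posterior],
          "entropy:", round(entropy(posterior), 3))
```

Running it, the uniform prior leaves the posterior at roughly 1.30 bits of entropy, while the concentrated "shared" prior collapses it to roughly 0.50 bits on the exact same evidence: the entropy-minimizing move in miniature.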
I am torn between (modern) forms of Stoicism and Epicureanism, and I hope to become clearer about it with your essays. Which would be less bullshit? Your texts contain many value judgments; where do they come from, and what is the core of our values? Affections (pleasure/pain)? Reason? What about mental perturbations and ataraxia/eudaimonia? I think these ancient ideas can navigate us out of nihilism.
My value judgments, like anyone else’s value judgments, ultimately come from emotions. I think the only way to understand those emotions is with good evolutionary psychology, and the only way to understand the things in the world that trigger those emotions is by using the scientific method, broadly construed.
Thanks, yes, that was my guess too. That's why I'm currently leaning more toward Epicureanism. All the arguments, examples, and thought experiments that aim to indict (ethical, qualitative) hedonism try to show bad results. But it is precisely this "bad" that lies in the feelings. There is a bad feeling in murdering a fat man to save other people, in eating babies, in killing someone to donate their organs, or in enslaving many other people (who would rightly fight back, and we ourselves would not want to be among them). It all lies in feelings about things and ideas, and at core in our deepest affections; but yes, with reason and understanding we can hold images and ideas before our mind, together with the consequences of our actions and their outcomes, and again value them with our feelings. :)
Epicureanism, types of hedonism, and feelings have often been underrated in Western thought and philosophy. The Letter to Menoeceus is the greatest practical philosophical text, in my current view :)
Interesting. This seems like the "meaning of life" question: the dilemma of what one should try to maximize. Personally, and I'm aware this comes off as rather selfish, I have never really been bothered by questions like this. My philosophy is to just "do whatever the hell I feel like doing." If I want to work on a project, I do; if I want to learn about x, I do. Maybe it's because I live in a country where being free to pursue whatever one desires is NOT a given; one has to fight for it.
Perhaps I should think about how to improve other people's lives more... within whatever optimizing policy I come up with.