Dan

You can equally explain people's behavior through the lens of happiness vs. through evolution.

Why do we read so much bad news? So we can understand and avoid threats that are detrimental to our wellbeing (also, a lot of people avoid the news because it makes them anxious).

Why are we bored by Positive Psychology? Because most people are bored by any kind of science and would rather watch TV or hang out with friends or play games (ya know, things they enjoy).

Why do we work too much? So we can buy things that we want, and yes, to improve our status (which makes us feel good).

Why do we simmer in anger and shitpost on Twitter? Because there's also content we enjoy on there, like jokes and memes, and also because part of us enjoys arguing with people (even if part of us hates it).

Why do we beat ourselves up and stay friends with assholes? We beat ourselves up in an attempt to recognize and improve on our flaws to increase our future wellbeing. If someone stays friends with an asshole, there's probably some other benefit to the relationship, such as being part of a broader social circle that they want to stay in, or they want to avoid the discomfort of severing the relationship.

Why do we have kids, even though they make us miserable? Most people believe kids will make them happy, and having kids does provide people fulfillment and companionship later in life, even if there are significant sacrifices involved.

If genetic fitness explained everything, why would people commit suicide, wear condoms, have gay sex, decide not to have kids, become drug addicts, sit around all day and watch TV, etc.? I know you can come up with evolutionary explanations for all of these things, but you can also explain them all through the lens of seeking pleasure and avoiding pain, and I think that explanation makes a lot more sense when the behavior doesn't seem to improve the chances your genes will be passed on.

I think your definition of happiness is severely limited. Positive prediction error doesn't explain all positive emotion. If I think the meal I'm about to eat is going to be yummy, and it is as yummy as I expected, I'm still enjoying the meal! Even if it's not as yummy as I thought it would be, part of me will be disappointed, but I will still enjoy the meal if it's sufficiently yummy.

Evolution explains why things make us happy, but it's the happiness we enjoy and want more of, not the genetic fitness. If you offered people two buttons to press, one of which would make them happy for the rest of their life, and the other of which would maximize the amount of their genetic material passed on to future generations, which do you think people would press? Another good analogy would be any situation where an incentive structure is set by an outside force with goals other than yours. If someone works for Amazon, they are carrying out tasks designed to improve Amazon's profits, but the worker wants a job and money. In this analogy, Amazon's profits are evolution's pursuit of propagating genes, and happiness is the money and/or job satisfaction the worker gets. Which do you think people really care about?

David Pinsof

Thanks, this is a good critique. But I ultimately think your view of happiness is circular and unfalsifiable. Circular in that you’re defining happiness as “the thing we want” and then saying we want happiness, which means “we want what we want.” And unfalsifiable in that your view can explain anything. We want to make ourselves anxious by reading bad news and beating ourselves up? No problem! Just say, without evidence, that doing that stuff ultimately makes us happy. Also, your view doesn’t actually say what happiness *is* in a neurocomputational sense. Mine does. I give a specific functional account of what happiness does in the nervous system. And if none of my puzzles make you question your view, what possibly could? How could you ever falsify it? Also, I don’t predict people want gene replication; I predict people will want the things that facilitated gene replication ancestrally (e.g., status, power, mating, resources). If you make people choose between these things and happiness, yes, they will choose these things: everything they do every day suggests they are choosing these things over happiness. If you want more arguments for my view, I’d recommend checking out my Twitter thread on the topic: https://x.com/davidpinsof/status/1707438571312054530?s=46&t=Kw5ECukWBaZcFpg22OCOmg

Dan

My original post was a little Devil's Advocate-y, so to be clear, I do not think that happiness fundamentally explains all human behavior. But I don't think that evolution does either. Evolution is an imperfect process; genes only have to be "good enough" to be passed on. Organisms have plenty of genetic mutations and useless traits that don't improve fitness. We can and do take plenty of actions that are evolutionarily disadvantageous, such as the ones I listed in my first comment. Evolution has generally pushed us in the direction of better odds of survival and reproduction, but it's not a rational, omniscient force, so some things we do are just random/weird/unexplainable and do not improve our fitness.

I acknowledge that happiness does not describe all of our behavior either, but this is for similar reasons as evolution. You acknowledge that happiness has an evolutionary function, although I don't know why you restrict it purely to prediction error and deny that it can simply be a reward for obtaining things that are evolutionarily advantageous (do you really think people don't experience happiness/joy/pleasure/etc. when they eat delicious food or have sex, even if it's as good or slightly less good than they were expecting?). We generally evolved to like things that improve fitness and dislike things that decrease fitness, but evolution isn't all-powerful and people aren't perfectly rational or capable, so we are both imperfect pleasure-seekers and imperfect fitness-improvers.

So my argument about happiness is more prescriptive than descriptive. People do not always do what makes them happy, but I think they would be more rational to do so, since happiness in my view is a state of mind that is inherently desirable. "Inherently" is important here; people desire things other than happiness and desire things that don't make them happy, so happiness is not simply "the thing people want." But happiness is "the good feeling"; it is what people experience in their brains as self-evidently good. In your Twitter thread, you described this definition as vapid and circular, but I'm not sure that makes it inaccurate. Some things are just hard to define in non-obvious ways: "good" is another one, or "existence" or "consciousness" or "truth."

David Pinsof

If happiness is “inherently desirable” then why don’t we always desire it? And isn’t saying “we (ought to) want what is inherently desirable” circular? It’s the same as saying, “We (ought to) want what we inherently want.” It’s also unfalsifiable. If any goal-directed behavior can simply be redefined as happiness-seeking behavior, then how do we falsify the idea that we want to be happy?

Yes I deny that happiness has a “reward” function because such a “reward” is in fact functionless, as I argue in the thread. Motivation doesn’t need any “reward” to push us toward things. It can just go ahead and push us toward things. So the “reward” part explains nothing about how the system works, why such a “reward” is needed, and what the reward is actually doing that motivation isn’t already doing.

Yes, when there is zero prediction error we will not feel happiness. But prediction error is very rarely zero, unless we’ve experienced a stimulus hundreds of times and can predict it perfectly. So yes, an awesome meal or awesome song will not make us happy the 600th time we experience it. It will feel neutral. Prediction error constantly declines with repeated exposure to a stimulus, along with our enjoyment of the stimulus. It may never go to zero, but it does go down, and there’s plenty of data to back this up. The frequency and intensity of happy experiences go down with age, which makes no sense if we’re pursuing happiness (why would we get worse at pursuing the thing we want instead of getting better at it?), but makes perfect sense under the view that happiness is about prediction error. Score another empirical point for my theory; I would say take one point away from your theory, but your theory is not even testable. It’s not even wrong.
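To make that concrete, here is a toy delta-rule sketch of habituation (the numbers and learning rate are made up for illustration; this is not my actual model of the nervous system, just the basic logic of a declining prediction error):

```python
# Toy illustration: prediction error -- and with it "happiness" -- shrinks
# with repeated exposure to the same stimulus, because our prediction of the
# outcome catches up to the outcome itself (a simple delta-rule update).

def repeated_exposures(outcome: float, n: int, alpha: float = 0.5):
    """Return the prediction error ("surprise") felt on each of n exposures."""
    prediction = 0.0  # naive prior: we expect nothing special
    errors = []
    for _ in range(n):
        error = outcome - prediction   # surprise = actual minus expected
        errors.append(error)
        prediction += alpha * error    # expectation moves toward the outcome
    return errors

errors = repeated_exposures(outcome=10.0, n=6)
# Each exposure is less surprising than the last: 10.0, 5.0, 2.5, 1.25, ...
```

The error never quite hits zero, but it halves every time: the 600th bite of the awesome meal is predicted almost perfectly, so it feels neutral.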

Sure, some concepts are fuzzy and hard to pin down. But when we have a precise, testable, evolutionarily plausible version of a concept, compared to a fuzzy, circular, unfalsifiable, evolutionarily nonsensical version that's based on nothing but intuition, we should choose the former over the latter. It's really no contest.

Dan

I think any robust and comprehensive definition of “what we want” will inevitably be circular and unfalsifiable. Evolution explains a lot, but not everything, for the reasons I argued previously. So we could say "we want certain things, and the things we want have been guided by evolution in the direction of that which improves genetic fitness, but they are partially random/unexplained due to genetic drift and evolution’s imprecision.” That feels pretty comprehensive and accurate, but its usefulness is obviously limited.

I feel similarly to what Tove K said in another comment: your distinction between happiness as positive reinforcement and happiness as a reward is a matter of semantics. I get why it feels weird to view positive reinforcement as something we want, rather than saying we want the things that are being positively reinforced. But this again feels like semantics; when I say “I want to be happy,” I’m saying something roughly the same as “I want things that make me happy.” I’m assuming you would respond that happiness is trivial in this conception because it’s being defined as what is wanted. But it’s not trivial, because of the difference you’ve outlined between motivation and happiness. We want certain things in the current moment, and then when we get them, the pleasure or pain we feel tells us whether what we originally wanted was “correct.” This is why people say that happiness is what we “really want”—that’s what it means in this context. It’s not circular because it marks an important distinction from “what we want”: what we want in the current moment is a prediction of an outcome, and what we “really want” is what we judge, post-hoc, that it was correct to have wanted.

This ties in to your prediction error theory of happiness, which I now basically agree with, but with some caveats. First, I disagree with what precisely is being predicted in the guessing game of life. In your footnote, you said that it was which of various outcomes has the highest expected value in terms of fitness proxies. I think this is what evolution has guided us toward predicting, but I refer back to my argument that evolution is not a perfect process that creates fitness-maximizing machines. So it would be more accurate, but unfortunately more vague, to say we are predicting which outcome has the highest value to us. We then pursue the outcome we are most motivated to pursue (through evolved and learned priors). If the outcome produces happiness, our motivational system updates to more strongly favor it in the future. If it produces suffering, we update to avoid it in the future.
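To sketch what I mean (a purely illustrative toy; the options, payoffs, and update rule here are my own inventions, not a claim about how the brain actually does it):

```python
# Toy sketch of the loop described above: pursue the option we currently
# value most, then nudge that valuation by the signed prediction error --
# happiness (positive) strengthens the motivation, suffering weakens it.

def pursue_and_update(values: dict, realized: dict, alpha: float = 0.3) -> str:
    """Pick the most-valued option, then update its value from experience."""
    choice = max(values, key=values.get)       # motivation: follow the best prediction
    error = realized[choice] - values[choice]  # happiness (+) or suffering (-)
    values[choice] += alpha * error            # favor or avoid it next time
    return choice

values = {"work late": 5.0, "see friends": 4.0}    # current motivations (made up)
realized = {"work late": 1.0, "see friends": 8.0}  # actual payoffs (made up)
pursue_and_update(values, realized)  # chooses "work late", then devalues it
# Once "work late" disappoints enough, "see friends" overtakes it.
```

The point of the sketch is just that happiness and suffering are the update signal, which is why it's natural to call happiness what we "really" wanted.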

In your guessing game analogy, you pointed out that the goal is to guess the thing, not to hear more “getting warmers.” The problem with this analogy is that the guessing game isn’t the only game we’re playing; we’re also playing the game of “get more value.” When we guess correctly, the guessing game is over, but the value game is not. The neutral reaction we have to a correct guess signals that we need to start a new guessing game to find more value. So yeah, it’s weird to think of it as “we want more incorrect guesses,” but we do: we want more value, and we need to play more guessing games (and get more incorrect guesses, which tell us we should keep playing) to get it. So maybe it’s a bit misleading to say “we want to be happy,” but it’s shorthand for “we want more value, and happiness is our measure of whether we are getting it.”

OK, I think I’m done, but thank you—you’ve given me a lot to think about and sharpened my views on this topic. I think our differences are largely in framing but I hope I’ve made a good case for my perspective.
