Incentives Are Everything
“How does the brain work? Five words or less.”
The studio audience laughed. The crowd seemed to know it was an impossible question. But rather than laughing it off, Pinker did the impossible. He actually answered it:
“Nerve cells fire in patterns.”
I love this sentence. It’s poetry. It somehow ties it all together—cellular biology, information theory, cybernetics. The devil is in the details, of course (what kind of patterns?), but the ratio of insight to words is enormous.
This post is about another five-word sentence that packs a punch similar to Pinker’s. I’ve fantasized about delivering it in a televised interview. In my fantasy, Colbert confronts me with an impossible question:
“How does society work? Five words or less.”
The crowd laughs. I smile and respond with Pinkerian wit:
“Behavior is determined by incentives.”
The neurons of society
People usually think of incentives as dollar bills. Occasionally, people think of them as something bad, like penalties or years in prison. Once in a blue moon, people talk about “social incentives,” like attention, esteem, or social approval.
But I think it’s useful to stretch the concept as far as it will go: incentives are things in the world that human primates evolved to want.
For example, humans are incentivized by food, gossip, rest, safety, status, sex, territory, homeostasis, intergroup domination under the guise of a romantic ideology, and the appearance of having nicer incentives (called “values”) than the ones we actually have. The particular arrangement of incentives across time, space, and causality is called an incentive structure.
Let’s take an example: the virtue game. We want people to think we’re virtuous—that’s a big incentive for us. But there’s a convoluted structure surrounding this incentive. We can only look virtuous by making it look like we weren’t trying to look virtuous. Which means that, to win the virtue game, we have to do all sorts of social maneuvering: we have to send “buried signals” of anonymous or inconspicuous altruism (to get credit for not caring about getting credit), show “modesty” or “humility” (to look morally superior to non-humble people), “bravely” defy social norms (to win the approval of our peers), and display our adherence to ethical principles independent of our self-interest (which is, ultimately, in our self-interest). It’s all very confusing.
And it gets even more confusing when you think about how incentive structures evolve over time. Random events, like pandemics or economic bubbles, create new political incentives to design new legal incentives. Market competition gives rise to increasingly desirable bullshit at increasingly affordable prices, flooding the market with bigger and shinier things to incentivize us. New technologies get invented, like nuclear weapons or contraceptive pills, and new incentives are born, like mutually assured destruction or childless intercourse. Throw in the dynamics of our weird and ever-changing status games and it’s enough to make your head spin. But at the end of the day, it’s just a bunch of things in the world that humans want, and a bunch of ways we try to get them.
What about free will? It’s the ability to respond to our incentives. Being a good person? It’s having an incentive to behave in ways we call “good”—that is, wanting to win the virtue game. Personality? It’s when we’re incentivized to fill different social niches, or when genetic variation randomly tweaks our sensitivity to different incentives.
Let’s give this worldview a name. Let’s call it incentive determinism.
Incentive determinism is obvious. It’s just a bunch of tautologies: we are who we are, we want what we want, and we do what we’re caused to do. And yet, barely anybody thinks this way. It’s a cold, alien way of thinking.
Instead, we prefer to think in stories. We see the world as revolving around a colorful cast of characters—often representing warring tribes—whom we either like or dislike. There’s a path to utopia, and we can get there if the likable heroes use their “free will” powers to rise above their incentives (e.g., obstacles, temptations) and save the day. There’s also a path to dystopia, and we’ll get there if the unlikable villains use their “free will” powers to defy their incentives (e.g., laws, moral norms) and ruin everything. The heroes must stop the villains, or the world will go to shit.
The facts rarely fit our simplistic stories, but that doesn’t stop us from desperately trying to make them fit. For example, what happens if the likable heroes ruined everything—they told a big lie or killed a bunch of people? It didn’t happen. Or if it did happen, it was an accident, and it didn’t reflect who they really were. Or it wasn’t that bad—stop blowing it out of proportion. And besides, the only reason they lied or cheated or killed was because of that other tribe over there—you know, the one we don’t like.
Okay, but what happens if the unlikable villains saved the day—they had a good idea or helped a bunch of people? It didn’t happen. Or if it did happen, they were just virtue signaling, and it didn’t reflect who they really were. Or it wasn’t anything special—stop worshipping them—they’re not deities. And besides, the people who really deserve the credit belong to that other tribe over there—you know, the one we like.
What causes people to become likable or unlikable in the first place, if not their incentives? Their free will powers. What determines how people’s free will powers are used? How likable they are. So free will determines likability, and likability determines free will? Don’t think too hard about it. What matters is: some people suck, and other people are awesome. We don’t need to ask why.
Let’s give this rival worldview a name. Let’s call it likability determinism.
Like it or not
You see this way of thinking nearly everywhere you look.
You see it in fiction. Stories don’t unfold according to the incentive structure of the fictional world, but according to the likability of the characters. That’s why the “good guys” always win: the moral fiber of the characters causally influences how the story turns out, independent of—and often in spite of—the fictional incentive structure. Heroes prevail against all odds. Hope always shines through. Even if the incentives are totally perverse, and Gotham City is corrupt to the core, Batman will rise up from out of nowhere and save the day, because he’s smart and cool and brave and good.
You see it in politics. Policies aren’t bad because the system has bad incentives, but because we don’t have good enough people in power. If only we could put our people—the more likable people—in charge, then everything would be amazing. The problem with our political opponents isn’t that they’re in zero-sum competition with us for power, status, and resources, or that they’re socially rewarded for having different beliefs than us; it’s that they’re bad people. And they believe all the wrong things. Why do they believe all the wrong things? It’s not because they lack any incentive to believe the right things, but because they’re stupid, gullible, and crazy. And they’re angry. Why are they so angry? It’s not because social media companies profit from their anger, but because they’re angry people who walk around all day going, “Grrrrrrr!”
You see it in science. New technologies get invented not because there are social and economic incentives for inventing new technologies, but because there are cool, special geniuses who rise up from out of nowhere and use their huge, magical brains to change the world.
You see it in art. Aesthetic preferences don’t evolve according to the arbitrariness of collapsing and re-emerging status games, but according to the arrival of bold artistic visionaries who revolutionize their craft.
You see it in history. The 20th century wasn’t the story of capitalism and technological advancement incentivizing certain forms of intergroup cooperation; it was the story of evil people oppressing virtuous victims, and the virtuous victims courageously rising up against their evil oppressors. It’s not as if humans evolved to dominate other groups whenever they can get away with it, and then over the last century or two, they stopped being able to get away with it as much. It’s that some people are just bad—we don’t like them—and there were more bad people in the past, for some reason.
It also works in reverse: if we see something in the world we don’t like, we infer that it must have been caused by unlikable people. Patriarchy is caused by sexists who don’t care about women. Cancel culture is caused by woke sadists who love ruining people’s lives. Economic inequality is caused by greedy rich people hogging all the stuff. Poverty is caused by lazy and irresponsible freeloaders. War is caused by evil people who don’t recognize that war is bad. Homelessness is caused by heartless people who don’t care about the plight of others. We don’t need to think about the economic, social, and legal incentive structures that lead to cancel culture, homelessness, poverty, war, or wealth inequality. Booooring! We just need to point the finger at the baddies, and our work is done.
Likability determinism also helps us achieve our social goals—it’s a kind of bullshit. Any time something good or bad happens in the world, we use it as an opportunity to praise our allies or diss our rivals. By praising or dissing individuals and groups, we get to show off whose side we’re on and where our loyalties lie. And if we all agree on who the baddies are, then that brings us closer together—it makes us feel like we belong—which is a big incentive for us humans.
Words, words, words
How do we figure out who the baddies are? Usually, we pay attention to whether they’re saying the right things. Are they saying things that make us nod and applaud? Or are they saying things that make us cringe and facepalm? We infer people’s character traits by the words they use and in what order they use them. When everyone uses this shortcut to identify the baddies, the result is what Robin Hanson calls “righttalkism,” the view that all it takes to improve the world is to change how people talk. The logic is straightforward:
Bad things are caused by bad people.
Good things are caused by good people.
Bad people are bad because they talk the wrong way.
Good people are good because they talk the right way.
Therefore, if everybody talks the right way and nobody talks the wrong way, then everything will be good.
This is essentially what modern discourse is all about. We’re trying to get people to talk the right way and prevent them from talking the wrong way. We spend very little time discussing which incentive structures are the best, and tons of time talking about which sets of words are the right words and which sets of people are the right people. It’s why we write words at all: we think we’re the right people with the right words, and if we just say those words loudly enough, we’ll improve the world.
But this is not how cultural evolution works. If you’re incentivized to say something, chances are someone else has already said it or will say it very soon. Your ideology and subculture are shared by thousands (or even millions) of other people—you’re not special. And if people were already incentivized to nod their heads at you, then you haven’t really changed anything; you’ve just gotten people to nod their heads at you.
But even if you do have something to say that is truly new and ahead of your time, how do you know people will listen to you?
You don’t. People will only listen to you if they have an incentive to listen to you. You have to be high-status, or on the right side of a culture war, or else “interesting” (i.e., full of attention-grabbing bullshit). And if you’re truly “ahead of your time,” then people probably won’t listen to you—they’ll just ignore you, dismiss you, or burn you at the stake.
But even if people do have an incentive to listen to you, how do you know they’ll respond in the way you intended, instead of twisting your words into something you didn’t say, like when Darwinism got twisted into a justification for Nazism and eugenics?
You don’t. People will only respond in the way you intended if they have an incentive to do that. They have to want the information you’re providing, whether it’s a warning that something’s on fire, or a rationalization for their political coalition’s incoherent policy platform. Your words must be relevant to their agendas.
So to change the world with words, you need three ingredients: 1) something new and important to say that no one else was going to say, 2) an incentive for people to actually listen to you and not burn you at the stake, and 3) an incentive for people to actually respond in the way you intended and not twist your words into an apologia for Hitler or something.
Sadly, you can only control the first one—what you say. You cannot control the second two—i.e., whether people listen and how they respond. So people who say things for a living, like intellectuals, pretend that what they say is all that matters—history is all about ideas—because it makes them seem more important than they really are. The feeling of “being important” is another big incentive for us humans, and it explains a lot of what goes on in intellectual life.
Beyond good and evil
People don’t like incentive determinism because it’s disorienting. It tells us that our single greatest obsession, likability, is a distraction. It’s bullshit—a destroyer of insight, an enemy of understanding. It plays no special role in explaining how the world works. All it does is give us the illusion of understanding—the feeling of superiority to our apparently unlikable rivals (another big incentive for us)—while leaving us in ignorance of the causal structure of the world.
And that causal structure is everything. If there are powerful incentives to help others, then even “evil” people will help others. If there are powerful incentives to hurt others, then even “good” people will hurt others. In fact, if being a “good” person just means being strongly incentivized by others’ moral approval (i.e., really wanting to win the virtue game), then good people are more likely to hurt people when it wins them moral approval. The “good” members of ISIS are the ones who bravely volunteer to commit suicide bombings, and the “bad” members of ISIS are the ones who cravenly fail to make the ultimate sacrifice. People are only as good as their incentives.
But maybe there’s a way out. Maybe figuring out how this all works—understanding the machinery of society—can, itself, change our incentive structures. Insight is a thing in the world that human primates evolved to want, at least sometimes.
If there’s any hope to be found in this cold, alien view of the world, it’s this: the more we all become aware of our incentive structures, the more incentivized we will be to choose them wisely.
Thanks for reading my bullshit.