We talk a lot about human nature. It’s one of our favorite explanations for people’s behavior.
Why do right-wingers hate immigrants? Because it’s human nature to “fear the other.” What explains cancel culture? Our natural human instinct to rally the tribe and punish heretics. How could anyone believe the earth is flat? Confirmation bias is a part of human nature.
But when we say these things, we neglect to mention that we are humans. That’s a pretty important data point. If we’re trying to explain some weird thing we don’t do, and our explanation is “it’s human nature,” then we need to say something about why human nature doesn’t apply to us. Are we mutants? Are we aliens?
We usually don’t think this way. Instead, we appeal to human nature to imply that we’re superior to some less-evolved group of humans. Belief in the dark side of humanity, or at least the rest of humanity, is a kind of brag. The idea that we’re higher beings—that we’ve transcended our atavistic urges and become civilized—is intoxicating.
But if it were true, if we really were overcoming something primal within us, it would be hard. It would be hard in the same way that it's hard to resist gorging on tortilla chips or compulsively checking our phones. It would be so hard that we’d regularly slip up, and we’d forgive others for their slip-ups.
I don't know about you, but it's not hard for me to, say, support immigration. There's nothing inside me I have to resist, no urges I need to suppress. I’m not transcending human nature when I express my political views, and I shouldn’t give myself credit for doing so.
Here’s a useful test. When you’re trying to explain something weird about a group of humans, and you want to blame human nature, ask yourself: has it been hard for you to resist that nature within yourself?
If the answer is “no,” then maybe you’re moved by the same instincts as the people who disagree with you, but in ways you cannot see. Maybe that “fear of the other” is inside you as well, but merely directed at different others. Maybe you’re just as vulnerable to confirmation bias, and you’re just biased to confirm different beliefs.
Sure, you have "good reasons" for why you believe the things you do, and why you dislike the people you do. But so do the humans who disagree with you—at least from their perspective. "Good reasons" for prejudice and dogma are never in short supply. If anything is a part of human nature, it’s the tendency to rationalize our prejudice and dogma, to make it seem like it’s not prejudice or dogma at all, but basic decency and common sense. To transcend that part of ourselves is, and ought to be, hard.
If it's not hard, you're not doing it right.
A few years ago, amid peak progressive panic, some social media friends of mine explained to me that the "deep red" folks were not good people because of the way they stereotype, dehumanize, and exhibit intolerance in general. Fair enough, legit criticism: those are bad things. Except that some of those same friends went further. When I asked questions about their attitudes and beliefs, they did not hesitate to explain to me that that group of people were not merely misguided or wrong on the facts, but that their core motivations were abjectly malicious and selfish. They lacked humanity, lacked empathy. Some even suggested that the whole self-identified group of them did not necessarily deserve basic rights like speech or voting, and that pre-emptive use of force and violence against them was justified because they were, by virtue of their group membership alone, incorrigibly harmful.
My friends sensed my questions must be leading somewhere, but they never grasped the irony sitting right in front of them, coming out of their own mouths.
Well, human brains aren't all the same. There's an _interesting_ idea about high-functioning autism which argues that, in a way, human nature is partially altered or lost. Not the next step in evolution, given that what's lost is the ability to compete socially. https://opentheory.net/2023/05/autism-as-a-disorder-of-dimensionality/
> There’s a concept of ‘canalization’ in biology and psychology, which loosely means how strongly established a setting or default phenotype is. We can expect “standard-dimensional nervous systems” to be relatively strongly canalized, inheriting the same evolution-optimized “standard human psycho-social-emotional-cognitive package”. I.e., standard human nervous systems are like ASICs: hard-coded and highly optimized for doing a specific set of things.
> Once we increase the parameter size, we get something closer to an FPGA, and more patterns can run on this hardware. But more degrees of freedom can be behaviorally and psychologically detrimental since (1) autists need to do their own optimization rather than depending on a prebuilt package, (2) the density of good solutions for crucial circuits may go down as dimensionality goes up, and (3) the patterns autists end up running will be notably different than patterns that others are running (even other neurodivergents), and this can manifest in missed cues and the need to run or emulate normal human patterns ‘without hardware acceleration.’
> To phrase this in terms of LLM alignment (from an upcoming work): Having a higher neuron count, similar to a higher parameter count, unlocks both novel capabilities and novel alignment challenges. Autism jacks the parameter count by ~67% and shifts the basis enough to break some of the pretraining evolution did, but relies on the same basic “postproduction” algorithms to align the model.
> I.e. the canalization we inherit from our genes and environments is optimized for networks operating within specific ranges of parameters. Jam too many neurons into a network, and you shift the network’s basis enough that the laborious pre-training done by evolution becomes irrelevant; you’re left with a more generic high-density network that you have to prune into circuits yourself, and it’s not going to be hugely useful until you do that pruning. And you might end up with weird results, strange sensory wirings, etc because pruning a unique network is a unique task with sometimes rather loose feedback; see also work by Safron et al on network flexibility.