Discussion about this post

Ed Clint

A few years ago, amid peak progressive panic, I had social media friends explain to me that the "deep red" folks were not good people because of the way they stereotype, dehumanize, and exhibit intolerance in general. Fair enough, legit criticism; those are bad things. Except that some of those same friends went further. I asked questions about my friends' attitudes and beliefs. They did not hesitate to explain to me that that group of people was not merely misguided or incorrect on the facts, but that their core motivations were abjectly malicious and selfish. They lacked humanity, lacked empathy. Some even suggested the whole self-identified group of them did not necessarily deserve basic rights like speech or voting, and that pre-emptive use of force and violence against them was justified because they were, by virtue of their group membership alone, incorrigibly harmful.

My friends sensed my questions must be leading somewhere, but they never grasped the irony that was right in front of them, articulated with their own mouths.

Sinity

Well, human brains aren't all the same. There's an _interesting_ idea about high-functioning autism which argues that, in a way, human nature is partially altered/lost. Not the next step in evolution, given that what's lost is effectiveness at competing socially. https://opentheory.net/2023/05/autism-as-a-disorder-of-dimensionality/

> There’s a concept of ‘canalization’ in biology and psychology, which loosely means how strongly established a setting or default phenotype is. We can expect “standard-dimensional nervous systems” to be relatively strongly canalized, inheriting the same evolution-optimized “standard human psycho-social-emotional-cognitive package”. I.e., standard human nervous systems are like ASICs: hard-coded and highly optimized for doing a specific set of things.

> Once we increase the parameter size, we get something closer to an FPGA, and more patterns can run on this hardware. But more degrees of freedom can be behaviorally and psychologically detrimental since (1) autists need to do their own optimization rather than depending on a prebuilt package, (2) the density of good solutions for crucial circuits may go down as dimensionality goes up, and (3) the patterns autists end up running will be notably different than patterns that others are running (even other neurodivergents), and this can manifest in missed cues and the need to run or emulate normal human patterns ‘without hardware acceleration.’

> To phrase this in terms of LLM alignment (from an upcoming work): Having a higher neuron count, similar to a higher parameter count, unlocks both novel capabilities and novel alignment challenges. Autism jacks the parameter count by ~67% and shifts the basis enough to break some of the pretraining evolution did, but relies on the same basic “postproduction” algorithms to align the model.

> I.e. the canalization we inherit from our genes and environments is optimized for networks operating within specific ranges of parameters. Jam too many neurons into a network, and you shift the network’s basis enough that the laborious pre-training done by evolution becomes irrelevant; you’re left with a more generic high-density network that you have to prune into circuits yourself, and it’s not going to be hugely useful until you do that pruning. And you might end up with weird results, strange sensory wirings, etc because pruning a unique network is a unique task with sometimes rather loose feedback; see also work by Safron et al on network flexibility.
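
To make the quoted analogy concrete, here is a minimal toy sketch (my own illustration, not from the linked post): a small linear "circuit" stands in for an evolution-pretrained network, widening it without retraining breaks the learned behavior, and pruning the extra dimensions restores it. All sizes and names here are illustrative, and the ~67% figure is borrowed from the quote purely for flavor.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" circuit: a weight vector tuned by "evolution" for one task.
d = 12                                   # standard parameter count ("ASIC"-like)
x = rng.normal(size=d)                   # a fixed input
w_pretrained = rng.normal(size=d)
target = w_pretrained @ x                # the behavior the circuit was optimized for

# Widen the network ~67% without retraining: extra, unpruned parameters
# shift the basis, and the old weights no longer produce the old behavior.
extra = int(round(0.67 * d))             # the quote's ~67% figure, illustrative only
x_wide = np.concatenate([x, rng.normal(size=extra)])
w_wide = np.concatenate([w_pretrained, rng.normal(size=extra)])

print("pretrained output :", target)
print("widened, unpruned :", w_wide @ x_wide)    # drifts away from the target

# "Do your own optimization": pruning the new dimensions back toward the
# old circuit recovers the original behavior.
w_pruned = np.concatenate([w_pretrained, np.zeros(extra)])
print("widened, pruned   :", w_pruned @ x_wide)  # matches the target again
```

The toy only restates the quoted claim: extra degrees of freedom aren't free, and until they're pruned into circuits they actively interfere with the inherited, pre-optimized ones.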

8 more comments...
