"The default assumption of every intellectual should be that the human mind is about as well-designed as the hawk’s eye, the bat’s sonar, or the cheetah’s sprint. Unfortunately, intellectuals do not make this assumption. Instead, they assume our species is broken, and they’ve been put on this earth to fix us."
But surely things have changed? We're very well adapted to our original environment, but what about the modern world? Overconfidence may have been useful in a paleolithic community, where the risks of overestimating oneself are low, but it's certainly not a useful trait for a stock broker or procrastinating university student.
Regarding politics, I agree that it's a high-stakes competition some of the time, but certainly not all of the time. It seems like there are, & have been, a bunch of win-win policies that were needlessly contentious.
"voters have basically no incentive to be unbiased, and strong incentive to parrot their tribe’s propaganda."
This also doesn't seem right. It seems to me that the opinionated uncle at thanksgiving dinner isn't exactly the most prosocial guy.
Re voters having no incentive to be unbiased, I'd recommend checking out the linked-to book, The Myth of the Rational Voter, for the argument that voters have no incentive to have rational political opinions. Yes, our uncle is opinionated, but his opinions are often bullshit, because he has no incentive for accurate political opinions. Re environments changing, I agree this can happen, and some of our maladies might be caused by mismatch to a novel environment, but in general I think mismatch explanations are a bit overrated and that humans are pretty clever and adaptable to any unique environment you throw at them. That is, after all, why we have such huge brains.
Yea I agree. The post is more about changing the world through education and debiasing and consciousness raising and political activism than it is about changing the world by redesigning our incentive structures. I agree that we can make things better via changing incentives--check out my post "incentives are everything."
Awesome read as always! The prose in some of the sections reminds me of Thomas Sowell's writing: dry, direct, and a little sarcastic wording that makes the points come across as obviously true. That said, I'm not sure I fully agree on the second-to-last point. There *might* be something we can do. If technology and science continue moving forward, it might be possible for drugs (or other technology that is impossible to foresee) to "change" what human nature even is. I have literally zero clue what that might be, and my timelines are conservative. I imagine this won't be even remotely possible until 2100, but I don't think it's a given that no intervention could ever possibly exist.
The way that I look at this issue is to consider to consider two possible meanings of "rationality" and also to to consider the roles of rationality and passions in directing human behavior.
I think that rationality is most often seen as a process of obtaining a true, accurate representation of the world. From this rationality-as-accurate-representation-of-reality view, misunderstanding is forming an incorrect representation of reality (e.g., stereotypes as inaccurate descriptions of various out-groups).
But, leaving aside the issue of how accurate stereotypes are, even if we did possess perfectly accurate knowledge of different groups, that knowledge alone does not tell us how to behave. A purely cognitive, information-processing description of people cannot tell us how they will use that information to choose a behavioral path. The goals we pursue are determined by our emotions, our motives, our passions. Without a push from emotional determinants, we would just sit there, contemplating our knowledge. Our rationality can only help us calculate specific behavioral paths for reaching the goals set by our passions. Therefore, making people more "rational" in the sense of possessing more accurate knowledge cannot solve recognized problems such as ethnic fighting, murder, and wars.
But note that rationality-as-accurate-representation-of-reality is not the only way to define rationality. In fact, there are problems in trying to think about rationality this way, the primary problem being that our mental maps or representations of reality can never be perfect representations of reality. The map is never the territory. Rather, our mental maps are constructions that serve the practical function of prediction *well enough* to allow creatures to achieve the goals of life *well enough* to reproduce.
A moment's thought reveals that an insect's mental world and our mental world are very different, but does it make sense to say that an insect's rationality is worse than ours? A pragmatic rather than correspondence conceptualization of rationality (a mental map that is good enough to allow an organism to reproduce before it dies) is more consistent with Kahneman's disinterest in correcting even his own biases because they are "rational" in a pragmatic, functional sense of helping him achieve his goals.
Another brilliant essay by the best writer in the world (seriously).
I have one point I would like to dispute - that the stakes are high in politics. Of course they’re high on a national level, but they’re not high for most people individually.
First, most people’s day to day lives will not change very much based on what happens in a presidential election.
Second, most people have no control over the outcome. Even if it’s important in some sense if it’s outside your control then being heavily invested in the outcome is irrational.
Of course we can talk about other motives for people’s political behavior but I disagree with the claim that the stakes are high for most people.
Thanks, Ross. It's a great point, and I agree with the second part about voters having little control over the outcome. The book I link to later on, when talking about how voters have no incentive to be unbiased, makes essentially the same argument: because every individual voter has essentially no impact on the outcome, they have no incentive to hold accurate political beliefs. But I disagree with the first part, that most people's day to day lives will not change based on what happens in politics. Certainly people with unwanted pregnancies will have their lives changed dramatically as a result of abortion policy. Certainly businesses who are more heavily regulated will have their livelihood and their way of making a living changed by regulatory policy. We may not have a huge stake in *all* political outcomes, but we do have a clear stake in *some*, and it is those political outcomes that ultimately shape which side we support in politics. So if I have a stake in abortion policy, I might join whichever party agrees with me on abortion, and then adopt all that side's other issue opinions to show loyalty to them. So I think the stakes are genuinely high in politics for at least some issues, those issues shape which political coalition we join, and we come up with bullshit narratives (the sanctity of life, a woman's right to choose) to justify our self-interested stance on those issues. But we care more about showing loyalty to our political coalition and generating propaganda to support our coalition than we do in having true political beliefs. Or at least, that's how I'm thinking about these issues as of now (I have a long post in the pipeline called Democracy Is Bullshit where I get into more detail).
I’m excited about the Democracy is Bullshit post! Hopefully it will be here soon.
I still disagree with the claim that most people’s lives are heavily influenced by presidential election results. You gave 2 examples - abortion and business regulations. Only a tiny percentage of people are going to be getting (or trying to get) abortions in the next few years, and it’s only relevant if abortion becomes illegal in the state you live in and you can’t go to another state to get one. As for business regulations, most things that affect businesses are outside of actions taken by the federal government. Do you think the success of most businesses depends on whether a democrat or republican is president? That’s obviously a tiny factor and affected business owners are a small percentage of people.
Yea I like that piece. I actually think both the mistake view and the conflict view are correct. I think politics is a zero-sum competition over power, but in our zeal to win that competition, we believe a lot of mistaken things about reality. It would be better if we didn't believe those mistaken things, but trying to correct them is largely futile, and it would be better to change the underlying incentives of our political system so that mistaken beliefs are disincentivized.
I mean, you could alternatively expect the fight of intellectuals to be about making people's actions closer to their stated motives, with the "misunderstanding" framing just a tactic to provide an easy "out" for the rest to do that. (You don't even need to ascribe good motives to that: there is the obvious benefit of being able to formulate your motives in an ultimately more self-serving way by intellectuals.)
Ah! This is a perfect counterpoint to what I am writing— similar conclusions, but from an opposing angle.
Consider what Dan republished yesterday, that (according to Jeffrey Friedman) there are two types of naive realism to hedge against: 1st person & 3rd person. I would reframe the former as "System 1 neglect" (the kind of naive realism that neglects pre-processing), and I would reframe the latter as "System 2 neglect."
What you have described here is what most game theoretic framings do by neglecting the influence of System 2. Even if System 2 were just an arbitrary, post hoc narrative of fictitious "intentions," paving over a landscape of motives and incentives, it is still non-trivial given that overfitting to the narrative has real effects on individual and group behavior. This is to say that, even in the most conservative framing, we should not equate arbitrariness with triviality or error. Your reframing of heuristics and biases are excellent examples of insights available, absent such conflations.
You have argued well for the necessity of SOME amount of cynicism. But you have NOT argued for a particular magnitude or for proportionality constraints. Any conflation of those questions is a classic case of heuristical substitution of an easier question (some) for a tougher question (how much). Similarly, trashing on "ideology" and "virtue signaling" is substituting a weak man where a steel man should go. In fact, you have not even established that virtue signaling is caused by ideology. If my choices are to signal because of audience capture, or because I fear my silence would be treated as a betrayal by my ingroup, then "ideological purity" is a FORM of signal that would survive, regardless of CONTENT (my god has the ass of a fish) or confidence in what is being signalled. The same status game mechanics you propose would suggest performative cynicism risks runaway selection, mired in survivorship bias (the signal that survives), pluralistic ignorance (the majority is wrong about the majority) and spirals of silence (the more correct signal is selectively dampened).
These are bubbles of System 2 neglect, and bursting those bubbles requires updating on "intention" that is not reducible to (and can even be contrary to) motives and incentives. However, those most likely to navigate by intent are also those most susceptible to System 1 neglect. In other words, social lenses are hedged bets that incur tradeoffs. Exactly what you would expect of a species that specializes in navigating noisy spaces by projecting arbitrary, yet shared constraints as step 1 of collective parsing. This again reinforces your insights, but it does not rely on a cynical lens to highlight the same features.
"The default assumption of every intellectual should be that the human mind is about as well-designed as the hawk’s eye, the bat’s sonar, or the cheetah’s sprint. Unfortunately, intellectuals do not make this assumption. Instead, they assume our species is broken, and they’ve been put on this earth to fix us."
But surely things have changed? We're very well adapted to our original environment, but what about the modern world? Overconfidence may have been useful in a paleolithic community, where the risks of overestimating oneself are low, but it's certainly not a useful trait for a stockbroker or procrastinating university student.
Regarding politics, I agree that it's a high-stakes competition some of the time, but certainly not all of the time. It seems like there are, and have been, a bunch of win-win policies that were needlessly contentious.
"voters have basically no incentive to be unbiased, and strong incentive to parrot their tribe’s propaganda."
This also doesn't seem right. It seems to me that the opinionated uncle at Thanksgiving dinner isn't exactly the most prosocial guy.
Re voters having no incentive to be unbiased, I'd recommend checking out the linked-to book, The Myth of the Rational Voter, for the argument that voters have no incentive to have rational political opinions. Yes, our uncle is opinionated, but his opinions are often bullshit, because he has no incentive for accurate political opinions. Re environments changing, I agree this can happen, and some of our maladies might be caused by mismatch to a novel environment, but in general I think mismatch explanations are a bit overrated and that humans are pretty clever and adaptable to any unique environment you throw at them. That is, after all, why we have such huge brains.
Thank you for your hilarious insights into the human mind.
Your observations are refreshing in a world where everyone tries to be more understanding, holier, and more forgiving than the other.
On the one hand, I largely agree with the diagnosis. The academic belief that a little education will fix everything is deluded.
On the other, only an academic would say: “I cannot change the world by educating people therefore I am powerless and nothing can change”.
We act in the world. We move and we bring people with us through all the arts at our disposal. And if they come with us then we are leaders.
I am a deeply cynical man. But I am not without hope. Despair is boring: https://tempo.substack.com/p/cyber-realism
Yea I agree. The post is more about changing the world through education and debiasing and consciousness raising and political activism than it is about changing the world by redesigning our incentive structures. I agree that we can make things better via changing incentives--check out my post "incentives are everything."
Awesome read as always! The prose in some of the sections reminds me of Thomas Sowell's writing: dry, direct, and a little sarcastic, with wording that makes the points come across as obviously true. That said, I'm not sure I fully agree on the second-to-last point. There *might* be something we can do. If technology and science continue moving forward, it might be possible for drugs (or other technology that is impossible to foresee) to "change" what human nature even is. I have literally zero clue what that might be, and my timelines are conservative. I imagine this won't be even remotely possible until 2100, but I don't think it's a given that no intervention could ever possibly exist.
Point taken.
The way that I look at this issue is to consider two possible meanings of "rationality" and also the roles of rationality and passions in directing human behavior.
I think that rationality is most often seen as a process of obtaining a true, accurate representation of the world. From this rationality-as-accurate-representation-of-reality view, misunderstanding is forming an incorrect representation of reality (e.g., stereotypes as inaccurate descriptions of various out-groups).
But, leaving aside the issue of how accurate stereotypes are, even if we did possess perfectly accurate knowledge of different groups, that knowledge alone does not tell us how to behave. A purely cognitive, information-processing description of people cannot tell us how they will use that information to choose a behavioral path. The goals we pursue are determined by our emotions, our motives, our passions. Without a push from emotional determinants, we would just sit there, contemplating our knowledge. Our rationality can only help us calculate specific behavioral paths for reaching the goals set by our passions. Therefore, making people more "rational" in the sense of possessing more accurate knowledge cannot solve recognized problems such as ethnic fighting, murder, and wars.
But note that rationality-as-accurate-representation-of-reality is not the only way to define rationality. In fact, there are problems in trying to think about rationality this way, the primary problem being that our mental maps or representations of reality can never be perfect representations of reality. The map is never the territory. Rather, our mental maps are constructions that serve the practical function of prediction *well enough* to allow creatures to achieve the goals of life *well enough* to reproduce.
A moment's thought reveals that an insect's mental world and our mental world are very different, but does it make sense to say that an insect's rationality is worse than ours? A pragmatic rather than correspondence conceptualization of rationality (a mental map that is good enough to allow an organism to reproduce before it dies) is more consistent with Kahneman's lack of interest in correcting even his own biases, because they are "rational" in a pragmatic, functional sense of helping him achieve his goals.
Okay, it's an inconvenient truth. But what's the evolutionary explanation for our widespread need to weave a web of comforting lies around it?
Thanks
I give an answer to this question in my academic preprint “the evolution of social paradoxes” (soon-to-be-published in American Psychologist).
Another brilliant essay by the best writer in the world (seriously).
I have one point I would like to dispute - that the stakes are high in politics. Of course they’re high on a national level, but they’re not high for most people individually.
First, most people’s day to day lives will not change very much based on what happens in a presidential election.
Second, most people have no control over the outcome. Even if it’s important in some sense, if it’s outside your control then being heavily invested in the outcome is irrational.
Of course we can talk about other motives for people’s political behavior but I disagree with the claim that the stakes are high for most people.
Thanks, Ross. It's a great point, and I agree with the second part about voters having little control over the outcome. The book I link to later on, when talking about how voters have no incentive to be unbiased, makes essentially the same argument: because every individual voter has essentially no impact on the outcome, they have no incentive to hold accurate political beliefs. But I disagree with the first part, that most people's day-to-day lives will not change based on what happens in politics. Certainly people with unwanted pregnancies will have their lives changed dramatically as a result of abortion policy. Certainly businesses that are more heavily regulated will have their way of making a living changed by regulatory policy. We may not have a huge stake in *all* political outcomes, but we do have a clear stake in *some*, and it is those political outcomes that ultimately shape which side we support in politics. So if I have a stake in abortion policy, I might join whichever party agrees with me on abortion, and then adopt all that side's other issue opinions to show loyalty to them. So I think the stakes are genuinely high in politics for at least some issues, those issues shape which political coalition we join, and we come up with bullshit narratives (the sanctity of life, a woman's right to choose) to justify our self-interested stance on those issues. But we care more about showing loyalty to our political coalition and generating propaganda to support our coalition than we do about having true political beliefs. Or at least, that's how I'm thinking about these issues as of now (I have a long post in the pipeline called Democracy Is Bullshit where I get into more detail).
I’m excited about the Democracy is Bullshit post! Hopefully it will be here soon.
I still disagree with the claim that most people’s lives are heavily influenced by presidential election results. You gave two examples: abortion and business regulations. Only a tiny percentage of people are going to be getting (or trying to get) abortions in the next few years, and it’s only relevant if abortion becomes illegal in the state you live in and you can’t go to another state to get one. As for business regulations, most things that affect businesses are outside of actions taken by the federal government. Do you think the success of most businesses depends on whether a Democrat or a Republican is president? That’s obviously a tiny factor, and affected business owners are a small percentage of people.
Love your work. Merry Christmas. Forwarding this banger to the fam.
You've probably already read this, but this piece reminded me of https://slatestarcodex.com/2018/01/24/conflict-vs-mistake/
Yea I like that piece. I actually think both the mistake view and the conflict view are correct. I think politics is a zero-sum competition over power, but in our zeal to win that competition, we believe a lot of mistaken things about reality. It would be better if we didn't believe those mistaken things, but trying to correct them is largely futile, and it would be better to change the underlying incentives of our political system so that mistaken beliefs are disincentivized.
I thought of this too. The piece seems to specifically address mistake theorists.
Love it
I recently came up with a different maxim: unaccountability is at the root of all evil
I haven't read your piece yet - just wanted to share
I agree. Pretty similar to “incentives are everything,” which was one of my posts.
Reminds me of Nassim Taleb's "skin in the game" idea.
I mean, you could alternatively expect the fight of intellectuals to be about making people's actions closer to their stated motives, with the "misunderstanding" framing just a tactic to provide an easy "out" for the rest to do that. (You don't even need to ascribe good motives to that: there is the obvious benefit of being able to formulate your motives in an ultimately more self-serving way by intellectuals.)
Good point.
Ah! This is a perfect counterpoint to what I am writing— similar conclusions, but from an opposing angle.
Consider what Dan republished yesterday, that (according to Jeffrey Friedman) there are two types of naive realism to hedge against: 1st person & 3rd person. I would reframe the former as "System 1 neglect" (the kind of naive realism that neglects pre-processing), and I would reframe the latter as "System 2 neglect."
What you have described here is what most game-theoretic framings do by neglecting the influence of System 2. Even if System 2 were just an arbitrary, post hoc narrative of fictitious "intentions," paving over a landscape of motives and incentives, it is still non-trivial, given that overfitting to the narrative has real effects on individual and group behavior. This is to say that, even in the most conservative framing, we should not equate arbitrariness with triviality or error. Your reframing of heuristics and biases is an excellent example of the insights available absent such conflations.
You have argued well for the necessity of SOME amount of cynicism. But you have NOT argued for a particular magnitude or for proportionality constraints. Any conflation of those questions is a classic case of heuristic substitution of an easier question (some) for a tougher question (how much). Similarly, trashing on "ideology" and "virtue signaling" is substituting a weak man where a steel man should go. In fact, you have not even established that virtue signaling is caused by ideology. If my choices are to signal because of audience capture, or because I fear my silence would be treated as a betrayal by my ingroup, then "ideological purity" is a FORM of signal that would survive, regardless of CONTENT (my god has the ass of a fish) or confidence in what is being signaled. The same status-game mechanics you propose would suggest performative cynicism risks runaway selection, mired in survivorship bias (the signal that survives), pluralistic ignorance (the majority is wrong about the majority), and spirals of silence (the more correct signal is selectively dampened).
These are bubbles of System 2 neglect, and bursting those bubbles requires updating on "intention" that is not reducible to (and can even be contrary to) motives and incentives. However, those most likely to navigate by intent are also those most susceptible to System 1 neglect. In other words, social lenses are hedged bets that incur tradeoffs. Exactly what you would expect of a species that specializes in navigating noisy spaces by projecting arbitrary, yet shared constraints as step 1 of collective parsing. This again reinforces your insights, but it does not rely on a cynical lens to highlight the same features.
I've only read the intro so far but the last one got me 🤣