Discussion about this post

Julius:

Thanks for writing this piece. I'm glad you turned your lens towards AI Doomerism. You make a lot of good points, and I agree with lots of what you said, but I think you overstate the importance of these assumptions. My main disagreement is that I don’t think these assumptions are all required to be concerned about AI Doom. Here’s an assumption-by-assumption response.

1. Intelligence is one thing.

I'm very concerned about AI Doom and I do not believe that “intelligence is one thing.” In fact, when we talk about “intelligence,” I, like you, believe “we’re not pointing to a singular, spooky substance, but gesturing at a wide variety of complex, heterogeneous things.” It’s, as you say, a “folk concept.” But this is true of many concepts, and that illustrates a limitation in our understanding—not that there isn’t something real and important here.

Imagine that intelligence isn’t a single thing but is made up of two components: Intelligence 1 and Intelligence 2. Further, imagine AI is improving only at Intelligence 1 and not at Intelligence 2. We don’t know enough about intelligence to clearly define the boundary between 1 and 2, but I can tell you that every time OpenAI releases a bigger model, it sure seems better at designing CBRN weapons. This pattern of improvement in potentially dangerous capabilities is concerning regardless of whether we can precisely define or measure "intelligence."

You say that “the word ‘intelligence’ is a semantic catastrophe” and I agree. But that’s true of many words. If you don’t like the word “intelligence”, fine. But I would argue you’re holding that term to a standard that very few, if any, concepts can meet.

The point is, you still have to explain what we’re seeing. You still have to explain scaling laws. If you don’t want to say the models are more intelligent, fine, but something is definitely happening. It’s that something I’m concerned about (and I think it’s reasonable to call it “increasing intelligence”).
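For concreteness, this is the kind of regularity I mean. A minimal sketch in Python, using roughly the power-law form and fitted constants reported by Hoffmann et al. (2022); the numbers are illustrative, not my own fit:

# Chinchilla-style scaling law: training loss falls off as a power law in
# parameter count (N) and training tokens (D). The constants are approximately
# the ones fitted by Hoffmann et al. (2022); treat them as illustrative.

def predicted_loss(n_params: float, n_tokens: float) -> float:
    e, a, b, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return e + a / n_params**alpha + b / n_tokens**beta

# Scaling up both parameters and data steadily lowers the predicted loss.
for n, d in [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]:
    print(f"N={n:.0e}, D={d:.0e} -> predicted loss ~ {predicted_loss(n, d):.2f}")

The point is just that the relationship is smooth and predictable, whatever word we attach to it.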

Time and again, when the GPT models have scaled (GPT -> GPT-2 -> GPT-3 -> GPT-4), they have been more "intelligent" in the way people generally use that term. Would you argue that they haven’t? Intelligence isn't one thing and it's messy and yes, yes, yes, to all your other points, but this is still happening. If you don’t want to call this increasing intelligence, what would you call it?

To show you that I’m talking about something real, I will make the following prediction: If GPT-4 were scaled up by a factor of 10 in every way (assuming sufficient additional training data, as that’s a separate issue), and I got to spend adequate time conversing with both, I would perceive the resulting model (“GPT-5”) to be more intelligent than GPT-4. In addition, although IQ is an imperfect measure of the imperfect concept of intelligence, I predict that it would score higher on an IQ test.

Would you take the opposite side of this bet? My guess is “no”, but I’m curious what your explanation for declining would be. If it’s something like, “because models become better at conversing, at what people think of as intelligence, at what IQ tests measure, and at designing dangerous capabilities, but that’s not intelligence,” fine, but then we’re arguing about the definition of a word and not about AI Doom.

2. It’s in the brain.

In humans, it’s mostly in the brain, but there are some aspects of what some people call “intelligence” that occur outside the brain. The gut processes information, so some might argue it exhibits a degree of intelligence. This doesn’t seem relevant to the AI risk arguments though.

3. It’s on a single continuum.

Again, I agree that the word ‘intelligence’ is a ‘semantic catastrophe,’ and it’s more complex than a single continuum. Not everything we associate with intelligence is on a single continuum. But, again, I’m willing to bet money that the 10X version of GPT-4 would be better at most tasks people associate with intelligence.

4. It can help you achieve any goal.

You’re making it seem like AI Doomers believe intelligence is equivalent to omnipotence. It’s not. Even if it’s hard to define, we all agree that it doesn't directly regulate body temperature. It can, however, in the right contexts, allow a species to create clothes that regulate body temperature, antibiotics that speed up recovery, spears that keep predators away, and so on. It's an incredibly powerful thing, but it has limitations.

As for why it hasn't evolved over and over, it's expensive. In humans, the brain is about 2% of our body mass and consumes about 20% of our energy. On top of that, it requires longer gestation periods or childhoods. Something that costs that much better pay off in a big way. It did with humans, but I don't see how it would for lots of other niches. I imagine that the more an organism can manipulate its environment—say, by having hands to move things around or legs to move itself around—the more useful intelligence would be. It would not benefit a tree very much. Do you really think a really smart crab would gain enough of an increase in genetic fitness to make the cost worth it?

In the right contexts, though, it’s incredibly powerful. Our intelligence allowed cumulative culture, which is why we’re the dominant species on Earth. It’s why the Earth’s mammalian biomass is dominated by humans and the things we domesticated for our consumption. Humans decide which other animals go extinct. It’s why humans can sit around tables and say things like, "California condors are critically endangered. We like them so let's make an effort to bring them back. The Tecopa pupfish is critically endangered, but those new bathhouses are bringing in lots of tourism money, so bye-bye pupfish."

5. It has barely any limits or constraints.

You bring up good points about constraints. I agree that “real life is complex, heterogeneous, non-localizable, and constrained.” Intelligence has constraints. It’s not going to build a Dyson Sphere overnight. The world has friction.

It’s worth thinking carefully about how significant these constraints will be. They certainly matter—the world of atoms moves more slowly than the world of bits.

But we shouldn’t be too confident assuming the limitations of a superintelligent system. I doubt people would have predicted Satoshi Nakamoto could become a billionaire only through digital means. Certainly, a superintelligent AI could do the same. Where in this chain does the AI fail? Could it not become a billionaire? From that position, would it not be able to amass even more power?

I think there’s a lot more that could be said here, but I don’t know how much this is a crux for you.

Greg G:

I don't follow why concern about AI risk means one is committing to all of those propositions. What if capabilities are just a black box, and you see capabilities increasing rapidly? It doesn't seem to matter whether those capabilities are one thing or another. And what if we posit a malevolent or adversarial human user, or an adversarial situation like war? Nuclear weapons don't require intelligence or volition to be a meaningful threat. Perhaps AI will be truly intelligent or sentient, but it seems like rapidly advancing capabilities are the real crux.

