3 Comments
David Pinsof:

This begs the question: it assumes there is a 5% or higher risk of an AI apocalypse, but that is precisely the claim that carries the burden of proof.

Mo Diddly:

That’s not how statistics works. The only way we could prove the actual statistics would be to run the experiment a bunch of times and see how often the world ends.

Absent that, all we can do is query as many experts on the subject as possible and see what the mean p(doom) estimate is. It turns out to be about 10%. You might think all of these experts are full of it, and that’s a perfectly valid opinion, but given the stakes it’s hard for me to buy the case that we shouldn’t be extremely cautious.

David Pinsof:

I didn’t mean literal mathematical proof, but rather an argument. I think the experts behind that 10% figure are mistaken, and I doubt they have the requisite expertise for assessing the question (given that it requires expertise in many fields beyond computer science). I also think we are biased by our folk intuitions, as I argued in the post, so I don’t find that piece of evidence very persuasive. It is very common for a minority of experts to be wrong, especially about a complicated, controversial topic.
