This is not going to be a popular post here, but I wanted to articulate precisely why I have a very low pDoom (2-20%) compared to most people on LessWrong.
Every argument I am aware of for a high pDoom falls into one of two categories: bad or weak.
Bad arguments make a long list of claims, most of which have no evidence and some of which are obviously wrong. The near-canonical example is A List of Lethalities: there is no attempt to organize the list into a single logical argument, and it is built on many assumptions (analogies to human evolution, an assumed fast takeoff, AI opaqueness) that are in conflict with reality.
Weak arguments go like this: "AGI will be powerful. Powerful systems can do unpredictable things. Therefore AGI could doom us all." Examples include every argument on this list.
So the line of reasoning I follow goes something like this:
- I start with a very low prior on AGI doom (for the purposes of this discussion, assume I defer to consensus).
- I then completely ignore the bad arguments.
- Finally, I grant the weak arguments 1 bit of evidence collectively (I don't consider them independent; most are just rephrasings of the example argument above).
So even if I assume no one betting on Manifold has ever heard of the argument "AGI might be bad actually", that additional bit of evidence only moves me from 13% to roughly 23%.
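Spelled out in odds form (a quick sketch of the arithmetic; the 13% prior is the Manifold figure above, and the single doubling is the 1 bit granted to the weak arguments):

```latex
\begin{align*}
\text{prior odds}     &= \frac{0.13}{1 - 0.13} \approx 0.149 \\
\text{posterior odds} &= 2 \times 0.149 \approx 0.299
  && \text{(1 bit of evidence doubles the odds)} \\
P(\text{doom})        &= \frac{0.299}{1 + 0.299} \approx 0.23
\end{align*}
```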
In the comments: if you wish to convince me, please propose arguments that are neither bad nor weak. Please do not argue that I am using the wrong base rate, or that the examples I have already given are actually neither bad nor weak.
EDIT:
There seems to be a lot of confusion about this, so I thought I should clarify what I mean by a "strong good argument".
Suppose you have a strongly held opinion, and that opinion disagrees with the expert consensus (in this case, the Manifold market, or expert surveys showing that most AI experts predict a low probability of AGI killing us all). If you want to convince me to share your belief, you should have a strong good argument for why I should change mine.
A strong good argument has the following properties:
- It is logically simple (it can be stated in a sentence or two).
- This is important because the longer your argument, the more details have to be true, and the more likely it is that you have made a mistake. Outside the realm of pure mathematics, it is rare for an argument that chains together multiple "therefore"s not to get swamped by the compounding chance that at least one step is wrong (see the sketch after this list).
- Each of the claims in the argument is either self-evidently true, or backed by evidence.
- An example of a claim that is self-evidently true: if AGI exists, it will be made out of atoms.
- An example of a claim that is not self-evidently true: if AGI exists, it will not share any human values.
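To make the point about chained "therefore"s concrete, here is the back-of-the-envelope arithmetic (the five steps and the 90% per-step confidence are purely illustrative numbers, not taken from any particular argument):

```latex
\begin{align*}
P(\text{every step holds}) &= \prod_{i=1}^{n} P(\text{step}_i)
  && \text{(treating the steps as independent)} \\
\text{e.g.}\quad 0.9^{5} &\approx 0.59
  && \text{(five steps, each 90\% likely)}
\end{align*}
```

Even when every individual step looks plausible, the chain as a whole decays quickly, which is why I weight logically simple arguments so much more heavily.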
To give an example completely unrelated to AGI: the expert consensus is that nuclear power is more expensive to build and maintain than solar power.
However, I believe this consensus is wrong, because the cost of nuclear power is artificially inflated by regulation mandating that nuclear be "as safe as possible", which guarantees that nuclear power can never be cheaper than other forms of power (which face no similar mandate).
Notice that even if you disagree with my conclusion, we can now have a discussion about evidence. You might ask, for example: "What fraction of nuclear power's cost is driven by regulation?" "Are there any countries that have built nuclear power for less than the prevailing cost in the USA?" "What is an acceptable level of safety for nuclear power plants?"
I should also probably clarify why I consider "long lists" bad arguments (and ignore them completely).
If you have one argument, it's easy for me to examine it on its merits and decide whether it's valid, backed by evidence, etc.
If you have 100 arguments, the easiest thing for me to do is to ignore them completely and come up with 100 arguments for the opposite point. Humans are incredibly prone to cherry-picking and to noticing only the arguments that support their point of view. I have absolutely no reason to believe that you, the reader, have somehow avoided all of this and done a proper average over all possible arguments. The correct way to do such an average is to survey a large number of experts or use a prediction market, not whatever method you have settled upon.
Get a dozen AI risk skeptics together, and I suspect you'll get majority support from the group for each and every point that the AI risk case depends on. You, in particular, seem to be extremely aligned with the "doom" arguments.
The "guy-on-the-street" skeptic thinks that AGI is science fiction, and it's silly to worry about it. Judging by your other answers, it seems like you disagree, and fully believe that AGI is coming. Go deep into the weeds, and you'll find Sutton and Page and the radical e/accs who believe that AI will wipe out humanity, and that's a good thing, and that wanting to preserve humanity and human control is just another form of racism. A little further out, plenty of AI engineers believe that AGI would normally wipe out humanity, but they're going to solve the alignment problem in time so no need to worry. Some contrarians like to argue that intelligence has nothing to do with power, and that superintelligence will permanently live under humanity's thumb because we have better access to physical force. And then, some optimists believe that AI will inevitably be benevolent, so no need to worry.
If I'm understanding your comments correctly, your position is something like "ASI can and will take over the world, but we'll be fine", a position so unusual that I didn't even think to include it in my lengthy taxonomy of "everything turns out okay" arguments. I am unable to make even a basic guess as to how you arrived at that position (though I would be interested in learning).
Please notice that your position is extremely non-intuitive to basically everyone. If you start with the expert consensus regarding the basis of your own position in particular, you don't get an 87% chance that you're right; you get a look of incredulity and an arbitrarily small number. If you instead want to examine the broader case for AI risk, most of the "good arguments" are going to look more like "no really, AI keeps getting smarter, look at this graph" and things like Yudkowsky's "The Power of Intelligence", both of which (if I understand correctly) you already think are obviously correct.
If you want to find good arguments for "humanity is good, actually", don't ask AI risk people, ask random "normal" people.
My apologies if I've completely misunderstood your position.
(PS: Extinction markets do not work, since they can't pay out after extinction.)