One of the biggest problems with evaluating the plausibility of SI's arguments is that they involve a large number of premises (as any complex argument will), and these premises are often either not written down or scattered across disparate locations, making it very hard to piece the full argument together. SI is aware of this, and one of their major aims is to state their argument clearly. I'm hoping to help with this aim.
My specific plan is as follows: I want to map out the broad structure of SI's arguments in "standard form" - that is, as a list of premises that support a conclusion. I then want to write this up into a more readable summary and discussion of SI's views.
The first step to achieving this is making sure that I understand what SI is arguing. Obviously, SI is arguing for a number of different things, but I take their principal argument to be the following:
P1. Superintelligent AI (SAI) is highly likely to be developed in the near future (say, within the next 100 years, and probably sooner).
P2. Without explicit Friendly AI (FAI) research, SAI is likely to pose a global catastrophic risk to humanity.
P3. FAI research has a reasonable chance of ensuring that SAI does not pose a global catastrophic risk to humanity.
Therefore
C1. FAI research has a high expected value for humanity.
P4. We currently fund FAI research at a level below that supported by its expected value.
Therefore
C2. Humanity should expend more effort on FAI research.
Note that P1 in this argument can be weakened to say only that SAI is a non-trivial possibility, but, in response, stronger versions of P2 and P3 are required if the conclusion is still to be viable (that is, if SAI is less likely, it needs to be more dangerous, or FAI research needs to be more effective, in order for FAI research to have the same expected value). However, if P2 and P3 already seem strong to you, then the argument can be made more forceful by weakening P1. One further note, however: doing so might also make the move from C1 and P4 to C2 more open to criticism - that is, some people think we shouldn't base decisions on expected value calculations when we are talking about low-probability/high-stakes events.
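To make the trade-off concrete, here is a minimal sketch in Python that treats the expected value of FAI research as a simple product of the relevant probabilities and the value at stake. The numbers and the function name are purely illustrative placeholders of my own, not SI's estimates; the point is only that a weaker P1 must be offset by stronger P2 and P3 to yield a comparable expected value.

```python
# A toy expected-value sketch (illustrative placeholders, not SI's estimates).

def ev_of_fai_research(p_sai, p_catastrophe_without_fai, p_fai_helps, value_at_stake):
    """Treat the expected value of FAI research as the product of:
    a P1-style probability that SAI is developed,
    a P2-style probability that, without FAI research, it is catastrophic,
    a P3-style probability that FAI research averts that outcome,
    and the value that would otherwise be lost (arbitrary units)."""
    return p_sai * p_catastrophe_without_fai * p_fai_helps * value_at_stake

# Strong P1 with moderate P2 and P3:
print(ev_of_fai_research(0.8, 0.5, 0.3, 1_000_000))  # roughly 120,000 units

# Weakened P1 (SAI merely a non-trivial possibility) needs stronger P2 and P3
# to deliver a comparable expected value:
print(ev_of_fai_research(0.1, 0.9, 0.9, 1_000_000))  # roughly 81,000 units
```

Of course, the full argument also depends on the cost of the research (P4) and on how one handles very small probabilities, which is exactly the worry about the move from C1 and P4 to C2.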
So I'm asking for a few things from anyone willing to comment:
1.) A sense of whether this is a useful project (I'm very busy and would like to know whether this is a suitable use of my scarce spare time). I will take upvotes/downvotes as votes for or against the idea (so feel free to downvote me if you think the idea isn't worth pursuing, even if you wouldn't normally downvote this post).
2.) A sense of whether I have the broad structure of SI's basic argument right.
In terms of my commitment to this project: as I said before, I'm very busy, so I can't promise to finish it. However, I will commit to notifying Less Wrong if I give up on it and to engaging in handover discussions with anyone who wants to take the project over.
That's basically it. What's missing here is probabilities. I don't need FAI research to have a high enough probability of helping to count as "reasonable" in order to believe it is still the best action. Similarly, I don't need to believe that AGI will be developed in the next hundred or even few hundred years for it to be urgent. Basically, the expected value is dominated by the negative utility of doing nothing (the loss of virtually all utility forever) and by my belief that UFAI is the default outcome (high probability). I do, however, believe that AGI could be developed soon; that simply adds to the urgency.
Cool, glad I understood. Yes, the argument could be made more specific with probabilities. At this stage I'm deliberately being vague because that allows for more flexibility - i.e., there are multiple ways to assign probabilities and values to the premises such that they support the conclusion, and I don't want to specify just one of them at the expense of the others.
If I get to the end of the project, I plan to consider the argument in detail, at which point I will start to give more specific (though certainly not precise) probabilities for the different premises.
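To make that flexibility concrete, here is a quick toy sketch: several quite different (and entirely hypothetical) assignments of probabilities and value can each make the expected value of FAI research exceed an assumed funding cost, so the argument doesn't hinge on any single assignment.

```python
# Toy illustration: several different (hypothetical) assignments can all
# support C1, relative to an assumed, arbitrary funding cost.

assignments = [
    # (P(SAI), P(catastrophe without FAI), P(FAI research helps), value at stake)
    (0.8, 0.5, 0.3, 1e6),
    (0.3, 0.8, 0.5, 1e6),
    (0.05, 0.9, 0.9, 1e7),
]
assumed_cost = 10_000  # hypothetical cost of funding FAI research, arbitrary units

for p_sai, p_cat, p_helps, value in assignments:
    ev = p_sai * p_cat * p_helps * value
    print(f"EV = {ev:,.0f}; exceeds assumed cost: {ev > assumed_cost}")
```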