One of the biggest problems with evaluating the plausibility of SI's arguments is that they involve a large number of premises (as any complex argument will), and these premises are often either not written down or are scattered across disparate locations, making it very hard to piece the overall argument together. SI is aware of this, and one of their major aims is to state their argument very clearly. I'm hoping to help with this aim.
My specific plan is as follows: I want to map out the broad structure of SI's arguments in "standard form" - that is, as a list of premises that support a conclusion. I then want to write this up into a more readable summary and discussion of SI's views.
The first step to achieving this is making sure that I understand what SI is arguing. Obviously, SI is arguing for a number of different things, but I take their principal argument to be the following:
P1. Superintelligent AI (SAI) is highly likely to be developed in the near future (say, within the next 100 years, and probably sooner)
P2. Without explicit FAI research, superintelligent AI is likely to pose a global catastrophic risk for humanity.
P3. FAI research has a reasonable chance of making it so that superintelligent AI will not pose a global catastrophic risk for humanity.
Therefore
C1. FAI research has a high expected value for humanity.
P4. We currently fund FAI research at a level below that supported by its expected value.
Therefore
C2. Humanity should expend more effort on FAI research.
Note that P1 in this argument can be weakened to simply say that SAI is a non-trivial possibility but, in response, stronger versions of P2 and P3 are required if the conclusion is still to be viable (that is, if SAI is less likely, it needs to be more dangerous, or FAI research needs to be more effective, in order for FAI research to have the same expected value). However, if P2 and P3 already seem strong to you, then the argument can be made more forceful by weakening P1. One further note, however: doing so might also make the move from C1 and P4 to C2 more open to criticism - that is, some people think that we shouldn't make decisions based on expected value calculations when we are talking about low probability/high value events.
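To make that trade-off concrete, here is a minimal sketch in Python with entirely made-up numbers. The multiplicative model and all the probabilities below are my own illustrative assumptions, not SI's figures; the point is only that the same expected value can be reached either with a strong P1 and moderate P2/P3, or with a weak P1 and much stronger P2/P3.

```python
# Toy expected-value model with illustrative, made-up numbers.
# EV(FAI research) ~ P(SAI) * P(catastrophe | SAI, no FAI research)
#                    * P(FAI research averts the catastrophe) * value at stake

def expected_value(p_sai, p_catastrophe, p_research_helps, value_at_stake=1.0):
    """Expected value of FAI research under a simple multiplicative model."""
    return p_sai * p_catastrophe * p_research_helps * value_at_stake

# Strong P1 (SAI very likely), moderate P2 and P3.
ev_strong_p1 = expected_value(p_sai=0.8, p_catastrophe=0.5, p_research_helps=0.25)

# Weak P1 (SAI unlikely), but stronger P2 and P3 compensate.
ev_weak_p1 = expected_value(p_sai=0.1, p_catastrophe=1.0, p_research_helps=1.0)

print(ev_strong_p1, ev_weak_p1)  # both 0.1: same expected value, different premises
```

The point is purely structural: lowering one factor while raising the others can leave the product, and hence the expected value, unchanged.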
So I'm asking for a few things from anyone willing to comment:
1.) A sense of whether this is a useful project (I'm very busy and would like to know whether this is a suitable use of my scarce spare time) - I will take upvotes/downvotes as representing votes for or against the idea (so feel free to downvote me if you think this idea isn't worth pursuing even if you wouldn't normally downvote this post).
2.) A sense of whether I have the broad structure of SI's basic argument right.
In terms of my commitment to this project: as I said before, I'm very busy, so I don't promise to finish this project. However, I will commit to notifying Less Wrong if I give up on it and to engaging in handover discussions with anyone who wants to take the project over.
I'm not sure I follow -
P3 says that FAI research has a reasonable chance of success. Presumably you believe that at least a weak version of P3 must be true, because otherwise there's no expected value to researching FAI (unless you just enjoy reading the research?)
Something similar can be said in terms of P1. As I note below the main argument, you can weaken P1, but you surely need at least a weakened version of it.
Is that what you mean - that you only need very weak versions of P1 and P3 for C2 to follow? So if AI is even slightly possible and FAI research has even a small chance of success, then C2 follows anyway.
If so, that's fine, but then you put a lot of weight on P2, as well as on two unstated premises: (P2a) global catastrophic risks are very bad, and (P4a) we should decide on funding based on expected value calculations, even for low probability/high value cases.
Further, note that the more you lower the expected value of FAI research, the harder it becomes to support P4. There are lots of things we would like to fund - other global catastrophic risk research, literature, nice food, understanding the universe, etc. - and FAI research needs to have a high enough expected value that we should spend our time on it rather than on these other things. As such, the expected value doesn't just need to be high enough that in an ideal world we would want to do FAI research, but high enough that we ought to do the research in this world.
If that's what you're saying, that's fine but by putting so much weight on fewer premises, you risk failing to convince other people to also accept the importance of FAI research. If that's not what you mean then I'd love to get a better sense of what you're saying.
That's basically it. What's missing here is probabilities. I don't need FAI research to have a high enough probability of helping to be considered "reasonable" in order to believe that it is still the best action. Similarly, I don't need to believe that AGI will be developed in the next hundred, or even few hundred, years for it to be urgent. Basically, the expected value is dominated by the negative utility if we do nothing (loss of virtually all utility forever) and by my belief that UFAI is the default occurrence (high probability). I do, however, believe that AGI could be developed soon; it simply adds to the urgency.
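To put some toy numbers on that (these are placeholders I'm making up for illustration, not actual estimates): when the downside is something like the loss of virtually all future value, even a small probability that research helps can dominate the comparison with more ordinary uses of the same resources.

```python
# Illustrative only: all values below are invented placeholders.
P_UFAI_BY_DEFAULT = 0.9     # assumed high probability that UFAI is the default outcome
P_RESEARCH_HELPS = 0.01     # assumed small probability that FAI research succeeds
LOSS_IF_UFAI = 1e15         # stand-in for "loss of virtually all utility forever"
ALTERNATIVE_BENEFIT = 1e9   # stand-in for funding a more ordinary good cause

ev_fai_research = P_UFAI_BY_DEFAULT * P_RESEARCH_HELPS * LOSS_IF_UFAI
print(ev_fai_research > ALTERNATIVE_BENEFIT)  # True under these assumptions (9e12 vs 1e9)
```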