This decision, as it's being made, doesn't take many people out of the pool of those who would make progress toward UFAI.
Getting decision theory "right enough" could be important for building a viable UFAI (or at least certain types of it, e.g., non-neuromorphic). There's reason to think, for example, that AIXI would fail due to an incorrect decision theory (though the people trying to make AIXI practical do not seem to realize this yet). Given that we seem to constitute a large portion of all the people trying to get decision theory right for AI purposes, the effect of our decisions may be larger than you think.
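For readers unfamiliar with the formal model, here is a compact statement of the AIXI action-selection rule, following Hutter's standard definition (the notation is textbook material, not taken from the thread or the workshop report): the agent at time $t$ picks the action maximizing expected reward up to horizon $m$, under a Solomonoff-style mixture over environment programs $q$ run on a universal Turing machine $U$, weighted by program length $\ell(q)$:

$$
a_t \;:=\; \arg\max_{a_t} \sum_{o_t r_t} \;\cdots\; \max_{a_m} \sum_{o_m r_m} \big(r_t + \cdots + r_m\big) \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

One common reading of the decision-theoretic worry (offered here as interpretation, not as the commenter's own argument): the actions $a_t \ldots a_m$ are fed into the environment programs $q$ from outside, so the agent never models itself as part of its environment, and decisions whose consequences run through the agent's own hardware, or through correlated copies of the agent, get evaluated incorrectly.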
Alternatively, don't talk about the results openly, but work anyway.
Yes, but of course that reduces the positive effects of working on decision theory, so you might decide that you should do something else instead. For example, I think that thinking about strategy and metaphilosophy might be a better use of my time. (Also, I suspect that keeping secrets is very hard, so even this alternative of working in secret may be a net negative.)
What changes with WBE?
Did you read the part of the workshop report that talked about this?
Yes, and I agree, but it's not what I was referring to. The essential part of the claim (as I accept it) is that, given WBE, there exist scenarios where FAI can be developed much more reliably than in any feasible pre-WBE scenario. At the very least, a dominant position in WBE theoretically allows one to spend thousands of subjective years working on the problem, while in pre-WBE mode we have at most 150, and more likely about 50-80, years.
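To make the arithmetic behind "thousands of subjective years" explicit (the speedup factor below is purely illustrative; the thread specifies no numbers): if emulations run at a speedup $s$ over biological brains, then

$$
T_{\text{subjective}} \;=\; s \cdot T_{\text{wall-clock}}, \qquad \text{e.g.}\;\; s = 100,\; T_{\text{wall-clock}} = 30\ \text{yr} \;\Rightarrow\; T_{\text{subjective}} = 3000\ \text{yr}.
$$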
What I was talking about is probability of suc...
Here is a short new publication from the Singularity Institute, on the 2-day workshop that followed Singularity Summit 2011.
Note the new publication design. We are currently porting our earlier publications to this template, too.