John_Maxwell_IV comments on Help me make SI's arguments clear - Less Wrong

14 Post author: crazy88 20 June 2012 10:54PM


Comment author: roll 21 June 2012 04:22:03AM *  1 point [-]

From what I gathered, SI's relevance rests on an enormous conjunction of implicit assumptions, plus a very narrow approach to the solution, both of which were decided upon a significant time in the past. A truly microscopic probability of relevance is therefore easily attained; I estimate at most 10^-20, due to the multiple narrow guesses made into a huge space of possibilities.

Comment author: John_Maxwell_IV 21 June 2012 07:45:03AM *  3 points [-]

Hm, most of the immediate strategies SI is considering going forward strike me as fairly general:

http://lesswrong.com/r/discussion/lw/cs6/how_to_purchase_ai_risk_reduction/

They're also putting up these strategies for public scrutiny, suggesting they're open to changing their plans.

If you're referring to sponsoring an internal FAI team, Luke wrote:

I don't take it to be obvious that an SI-hosted FAI team is the correct path toward the endgame of humanity "winning." That is a matter for much strategic research and debate.

BTW, I wish to reinforce you for the behavior of sharing a dissenting view (rationale: view sharing should be agnostic to dissent/assent profile, but sharing a dissenting view intuitively risks negative social consequences, an effect that would be nice to neutralize), so I voted you up.

Comment author: roll 21 June 2012 08:06:44AM 0 points [-]

Well, those are Luke's aspirations; I was referring to the work done so far. The whole enterprise has the feel of an over-optimistic startup with ill-defined, extremely ambitious goals; such ventures have essentially no success rate, even for much, much simpler goals.