red75 comments on Simplified Humanism, Positive Futurism & How to Prevent the Universe From Being Turned Into Paper Clips - Less Wrong

Post author: Kevin 22 July 2010 10:03AM




Comment author: red75 23 July 2010 05:29:35PM 3 points

P(extinction-event) ≈ P(realized-other-extinction-threat) + P(hand-coded-CEV/FAI-goes-terribly-wrong) + P(AGI-goes-FOOM)

P(AGI-goes-FOOM) ≈ 1 - ∏_j [ N_j + (1 - N_j)·S_j ]

where N_j = P(development-team-j-will-not-create-AGI-before-FAI-is-developed) and S_j = P(development-team-j-can-stop-AGI-before-FOOM).
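For concreteness, here is a minimal sketch of this calculation in Python. The per-team numbers N_j and S_j below are hypothetical, chosen only to show how the product behaves, not estimates from this thread:

    def p_foom(teams):
        """P(AGI-goes-FOOM) = 1 - prod_j [N_j + (1 - N_j) * S_j].

        teams: list of (n, s) pairs, where n = P(team j will not create
        AGI before FAI is developed) and s = P(team j can stop its AGI
        before FOOM).
        """
        p_no_foom = 1.0
        for n, s in teams:
            # Team j contributes no FOOM if it never builds AGI first (n),
            # or builds one but manages to stop it ((1 - n) * s).
            p_no_foom *= n + (1.0 - n) * s
        return 1.0 - p_no_foom

    # Hypothetical numbers for three development teams:
    print(p_foom([(0.95, 0.10), (0.90, 0.20), (0.99, 0.05)]))  # ~0.13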

So the strategy is to convince every development team that, no matter what precautions they take, S_j ≈ 0. Developing recommendations for AGI containment would instead suggest that S_j can be made sufficiently high, thereby lowering N_j (teams become more willing to create AGI before FAI is developed). Given overconfidence bias, it is plausible that the latter effect will increase P(AGI-goes-FOOM).
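A toy illustration of that effect, again with made-up numbers: suppose containment advice leaves a team's true ability to stop a FOOM unchanged but makes the team confident enough to proceed.

    def p_foom_single(n, s):
        # Single-team case of the formula above: 1 - [n + (1 - n) * s]
        return 1.0 - (n + (1.0 - n) * s)

    # Team holds back (high N), true S stays low:
    print(p_foom_single(0.95, 0.10))  # 0.045
    # Containment advice makes the team overconfident: N drops,
    # but the true S is still 0.10:
    print(p_foom_single(0.50, 0.10))  # 0.45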

I withdraw my suggestion.

Comment author: PhilGoetz 28 July 2010 08:40:27PM 1 point

No: expected value is what matters. If many "successful" FAI scenarios could result in negative value, then zero value (universal extinction) would be preferable.

We should put some thought into whether a negative-value universe is plausible, and what it would look like.