
Wei_Dai comments on Stupid Questions Open Thread Round 4 - Less Wrong Discussion

Post author: lukeprog 27 August 2012 12:04AM




Comment author: lukeprog 03 September 2012 11:51:59PM 1 point

AGIs that cause worse-than-extinction outcomes are clustered around FAIs in design space.

Yes, that's the part I'd like to see developed more. Maybe SI or FHI will get around to it eventually, but in the meantime I wouldn't mind somebody like Wei Dai taking a crack at it.

Comment author: Wei_Dai 04 September 2012 08:27:58PM 1 point

I'm not sure what more can be said about "AGIs that cause worse-than-extinction outcomes are clustered around FAIs in design space". It's obvious, isn't it?

I guess I could write about some FAI approaches being more likely to cause worse-than-extinction outcomes than others. For example, FAIs that are closely related to uploading or try to automatically extract values from humans seem riskier in this regard than FAIs where the values are coded directly and manually. But this also seems obvious and I'm not sure what I can usefully say beyond a couple of sentences.

Comment author: TheOtherDave 04 September 2012 08:56:29PM 2 points

FWIW, it is not obvious to me that superhuman environment-optimizers (e.g. AGIs) that obtain their target values from humans through an automatic process (e.g. uploading or value extraction) are more likely to cause worse-than-extinction outcomes than those whose values are coded in manually.