
Will_Newsome comments on Q&A with Shane Legg on risks from AI

Post author: XiXiDu, 17 June 2011 08:58AM


Comment author: Will_Newsome, 25 June 2011 06:56:33AM

It's the party line at LW maybe, but not SingInst. 21st-century Earth is a huge attractor for simulations of all kinds. I'm rather interested in coarse simulations of us run by agents very far away in the wave function or in algorithm-space. (Timelessness does weird things, e.g. controlling non-conscious models of yourself that were computed in the "past".) Also, what it means to "control" analogous algorithms is pretty confusing.

Comment author: timtyler, 25 June 2011 08:12:41AM

It's the party line at LW maybe, but not SingInst.

If so, they keep pretty quiet about it! I expect it would be "more convenient" for them if those superintelligences whose ultimate values did not mention humans would just destroy the world. If many of them would instead be inclined to keep some humans knocking around, that dilutes the "save the world" funding pitch.

Comment author: Will_Newsome, 25 June 2011 08:40:50AM

I think it's epistemically dangerous to guess at the motivations of "them" when there are so few people involved and their views are so diverse. There are only a handful of Research Fellows and it's not like they have blogs where they talk about these things. SingInst is still really small and really diverse.

Comment author: timtyler, 25 June 2011 11:42:27AM

There are only a handful of Research Fellows and it's not like they have blogs where they talk about these things.

Right - so, to be specific, we have things like this:

Any Future not shaped by a goal system with detailed reliable inheritance from human morals and metamorals, will contain almost nothing of worth.

I think I have to agree with the Europan Zugs in disagreeing with that.