selylindi comments on A few thoughts on a Friendly AGI (safe vs friendly, other minds problem, ETs and more) - Less Wrong Discussion

Post author: the-citizen 19 October 2014 07:59AM

Comment author: selylindi 19 October 2014 09:02:32PM

Basically it's a challenge for people to briefly describe an FAI goal-set, and for others to respond by telling them how that will all go horribly wrong. ... We should encourage a slightly more serious version of this.

Thanks for the link. I reposted the idea currently on my mind there, hoping to get some criticism.

But more importantly, what features would you be looking for in a more serious version of that game?

Comment author: the-citizen 20 October 2014 04:17:28AM

I think I'd like the comments to be broadly organised and developed as the common themes and main arguments emerge. Apart from that, a little more detail. I don't think it has to go into implementation specifics much, because that's a separate issue and requires a more highly developed set of math/CS skills. But I think this kind of discussion lets us make use of a broader set of smart brains.