adamzerner comments on Open thread, September 8-14, 2014 - Less Wrong Discussion

Post author: polymathwannabe 08 September 2014 12:31PM

Comments (295)

Comment author: adamzerner 13 September 2014 07:50:19PM 2 points

What do we want out of AI? Is it happiness? If so, then why not just research wireheading directly and avoid the risks of an unfriendly AI altogether?

Comment author: hairyfigment 13 September 2014 08:52:46PM 1 point

We don't know what we want from AI, beyond obvious goals like survival. Mostly I think in terms of a perfect tutor that would bring us up to its own level of intelligence before turning itself off. But quite possibly we don't want that at all. I recall a commenter here who seemed to want a long-term ruler AI.

Comment author: Leonhart 15 September 2014 08:00:11PM 0 points

I am generally in favour of a long-term ruler AI, though I don't think I'm the one you heard it from before. As you say, though, this is an area where we should have unusually low confidence that we know what we want.

Comment author: Mac 14 September 2014 12:00:47AM 0 points

The promise of AI is irresistibly seductive because an FAI would make everything easier, including wireheading and survival.