John_Maxwell_IV comments on Isolated AI with no chat whatsoever - Less Wrong Discussion

14 Post author: ancientcampus 28 January 2013 08:22PM

Comment author: gwern 29 January 2013 01:03:15AM 1 point

But what if your AI wasn't programmed to model the entire world, just some subset of it, and had restrictions in place to preserve this? Would it be possible to write a safe, recursively self-improving chess-playing AI, for instance? (You could call this approach "restricting the AI's ontology".)

Why would this work any better (or worse) than an oracle AI?

Comment author: John_Maxwell_IV 29 January 2013 07:14:52AM 1 point

Presumably an Oracle AI's ontology would not be restricted because it's trying to model the entire world.

Obviously we don't particularly need an AI to play chess. It's possible that we'd want this for some other domain, though, perhaps especially for one that has some relevance for FAI, or as a prototype of a self-improving AI. I also think it's interesting as a thought experiment. I don't understand the reasons why SI is so focused on the FAI approach, and I figure that by asking questions like that one, maybe I can learn more about their views.

Comment author: gwern 29 January 2013 04:47:43PM 0 points

> Presumably an Oracle AI's ontology would not be restricted because it's trying to model the entire world.

Well, yes, by definition. But that's not an answer to my question.

Comment author: John_Maxwell_IV 30 January 2013 07:30:07AM 0 points

I don't know which approach would be more easily formalized and proven to be safe.