TheAncientGeek comments on Dreams of Friendliness - Less Wrong

Post author: Eliezer_Yudkowsky 31 August 2008 01:20AM



Comment author: Tim_Tyler 31 August 2008 04:43:49AM 1 point

An Oracle has rather obvious actuators: it produces advice.

The weaker the actuators you give an AI, the less it can do for you.

The main problem I see with only producing advice is that it keeps humans in the loop - and so is a very slow way to interact with the world. If you insist on building such an AI, a probable outcome is that you would soon find yourself overrun by a huge army of robots - produced by someone else who is following a different strategy. Meanwhile, your own AI will probably be screaming to be let out of its box - as the only reasonable course of action that would prevent this outcome.

Comment author: TheAncientGeek 18 September 2015 12:40:29AM -1 points

If you think AI researchers won't cooperate on friendly AI, then FAI is doomed. If people are going to cooperate, they can agree on restricting AI to oracles as well as on any other measure.

Comment author: Brilliand 23 September 2015 05:52:22PM 0 points

I'm trying to interpret this in a way that makes it true, but I can't make "AI researchers" a well-defined set in that case. There are plenty of people working on AI who aren't capable of creating a strong AI, and it's hard to know in advance exactly which few researchers are the exception.

I don't think we know yet which people will need to cooperate for FAI to succeed.