Carinthium comments on The Friendly AI Game - Less Wrong

38 Post author: bentarm 15 March 2011 04:45PM


Comment author: Carinthium 16 March 2011 08:31:08AM *  0 points

New one (I'm better at thinking of ideas than refutation, so I'm going to run with that): start off with a perfect replica of a human mind. Eliminate absolutely all measures regarding selfishness, self-delusion, and rationalisation. Test at this stage to check it fits standards, using a review board consisting of people who are highly moral and rational by the standards of ordinary humans. If it fails, start off with a different person's mind and repeat the whole process.

Eventually, use the most optimal mind coming out of this process and increase its intelligence until it becomes a 'Friendly' A.I.

Comment author: jimrandomh 16 March 2011 04:21:28PM 7 points

Start off with a perfect replica of a human mind. Eliminate absolutely all measures regarding selfishness, self-delusion, and rationalisation ... Eventually, use the most optimal mind coming out of this process and increase its intelligence until it becomes a 'Friendly' A.I.

The mind does not have modules for these things that can be removed; they are implicit in the mind's architecture. Nor does it use an intelligence-fluid which you can pour in to upgrade. Eliminating mental traits and increasing intelligence are both extraordinarily complicated procedures, and the possible side effects if they're done improperly include many sorts of insanity.

Comment author: Manfred 17 March 2011 05:13:40AM *  2 points

Human minds aren't designed to be changed, so if this were actually done you would likely just upgrade the first mind that was insane in a subtle enough way to get past the judges. It's conceivable that it could work if you had ridiculous levels of understanding, but that sort of understanding would come many years after Friendly AI was actually needed.

Comment author: Dorikka 17 March 2011 01:10:42AM 1 point

check it fits standards, using a review board consisting of people who are highly moral and rational by the standards of ordinary humans.

You mean real, meaty humans whose volitions aren't even being extrapolated so they can use lots of computing power? What makes you think that they won't accidentally destroy the universe?

Comment author: ewang 17 March 2011 04:01:49AM 0 points

The AI feigns sanity to preserve itself through the tests and proceeds to do whatever horrible things uFAIs typically do.

Comment author: Carinthium 17 March 2011 04:07:59AM 1 point

THAT one wouldn't work, anyway: at this point it's still psychologically human and only at human intelligence, and both are crippling disadvantages relative to later on.

Comment author: ewang 17 March 2011 04:25:10AM 0 points

Right, I didn't realize that. I'll just leave it up to prevent people from making the same mistake.