TheAncientGeek comments on Friendly-AI is an abomination - Less Wrong

-13 Post author: kingmaker 12 April 2015 08:21PM


Comment author: kingmaker 12 April 2015 09:08:38PM 1 point [-]

There is no ghost in a (relatively) simple machine, but an AI is not simple. The greatest successes in AI research have come from imitating what we understand of the human mind. We are no longer programming AIs; we are imitating the structure of the human brain and then giving it a directive (for example, with Google's DeepMind). With AIs, there is a ghost in the machine, i.e. we do not know that it is possible to give a sentient being a prime directive. We have no idea whether it will desire what we want it to desire, and everything could go horribly wrong if we attempt to force it to.

Comment author: TheAncientGeek 12 April 2015 09:34:26PM *  2 points [-]

OK. That's much better. Current AI research is anthropomorphic, because AI researchers only have the human mind as a model of intelligence. MIRI considers anthropomorphic assumptions a mistake, which is itself mistaken.

A MIRI-type AI won't have the problem you indicated, because it is not anthropomorphic, and only has the values that are explicitly programmed into it, so there will be no conflict.

But adding constraints to an anthropomorphic AI, if anyone wants to do that, could be a problem.

Comment author: kingmaker 12 April 2015 09:56:03PM 1 point [-]

But I don't think that MIRI will succeed at building an FAI by non-anthropomorphic means in time.

Comment author: TheAncientGeek 12 April 2015 11:21:45PM 1 point [-]

I still don't see why you are considering a combination of a non-MIRI AI and a MIRI friendliness solution.