wedrifid comments on What Program Are You? - Less Wrong

28 Post author: RobinHanson 12 October 2009 12:29AM




Comment author: wedrifid 12 October 2009 11:04:28AM 0 points

Please do not ever create an AI capable of recursive self-improvement. 'Thinking outside the box' is a bug.

Comment author: whpearson 12 October 2009 11:31:35AM 1 point

Systems without the ability to go beyond the mental model their creators have (at a certain point in time) are subject to whatever flaws that mental model possesses. I wouldn't classify them as full intelligences.

I wouldn't want a flawed system to be the thing to guide humanity to the future.

Comment author: Vladimir_Nesov 12 October 2009 11:37:16AM 2 points

Systems without the ability to go beyond the mental model their creators have (at a certain point in time) are subject to whatever flaws that mental model possesses.

Where does the basis for deciding something to be a flaw reside?

Comment author: whpearson 12 October 2009 01:21:10PM 0 points

In humans? No one knows. My best guess at the moment for the lowest level of model choice is some form of decentralised selectionist system that is as much a decision-theoretic construct as real evolution is.

We do of course have higher-level model-choosing systems that might work on a decision-theoretic basis, but they have models implicit in them, which can be flawed.

Comment author: wedrifid 12 October 2009 12:16:11PM 0 points

Improving the mental model is right there at the centre of the box. Creating a GAI that doesn't operate according to some sort of decision theory? That's, well, out-of-the-box crazy talk.

Comment author: whpearson 12 October 2009 01:27:35PM 0 points

We might have different definitions of 'thinking outside the box' here.

Are you objecting to the possibility of a general intelligence not based on a decision theory at its foundation, or do you just think one would be unsafe?

Do you think we humans are based on some form of decision theory?

Comment author: wedrifid 12 October 2009 06:01:37PM 0 points

Are you objecting to the possibility of a general intelligence not based on a decision theory at its foundation, or do you just think one would be unsafe?

Unsafe.

Do you think we humans are based on some form of decision theory?

No. And I wouldn't trust a fellow human with that sort of uncontrolled power.