TheAncientGeek comments on No Universally Compelling Arguments in Math or Science - Less Wrong

30 Post author: ChrisHallquist 05 November 2013 03:32AM




Comment author: TheAncientGeek 05 November 2013 07:45:03PM *  -1 points [-]

"Rational" is not "persuadable" where values are involved.

Rational is persuadable, because people who refuse to accept good arguments merely because those arguments don't suit them are not considered particularly rational. That is of course an appeal to how the word is generally used, not to the LW idiolect.

You could perhaps build an AI that has the stubborn behaviour you describe (although value stability remains unsolved), but so what? There are all sorts of dangerous things you can build: the significant claim is what a non-malevolent real-world research project would come up with. In the world outside LW, general intelligence means general intelligence, not compulsively following fixed goals; rationality includes persuadability; and "values" doesn't mean "unupdateable values".

Comment author: nshepperd 05 November 2013 07:56:32PM *  -1 points [-]

General intelligence means being able to operate autonomously in the real world, in non-"preprogrammed" situations. "Fixed goals" have nothing to do with it.

You said this:

A successful AGI would be an intelligent AGI would be a rational AI would be a persuadable AI.

The only criterion for success is instrumental rationality, which does not imply persuadability. You are equivocating on "rational". Either "rational" means "effective", or it means "like a human". You can't have both.

Also, the fact that you are (anthropomorphically) describing realistic AIs as "stubborn" and "compulsive" suggests to me that you would be better served to stop armchair theorizing and actually pick up an AI textbook. This is a serious suggestion.

Comment author: TheAncientGeek 05 November 2013 08:02:04PM -1 points [-]

I am not equivocating. By "successful" I don't mean (or exclude) good-at-things, I mean it is actually artificial, general and intelligent.

"Strong AI is hypothetical artificial intelligence that matches or exceeds human intelligence — the intelligence of a machine that could successfully perform any intellectual task that a human being can.[1] It is a primary goal of artificial intelligence research and an important topic for science fiction writers and futurists. Strong AI is also referred to as "artificial general intelligence"[2] or as the ability to perform "general intelligent action."[3]"

To be good-at-things an agent has to be at least instrumentally rational, but that is in no way a ceiling.

Either "rational" means "effective", or it means "like a human". You can't have both.

Since there are effective humans, I can.

Comment author: nshepperd 05 November 2013 08:39:50PM -1 points [-]

Either "rational" means "effective", or it means "like a human". You can't have both.

Since there are effective humans, I can.

Right, in exactly the same way that because there are square quadrilaterals I can prove that if something is a quadrilateral its area is exactly L^2 where L is the length of any of its sides.

Comment author: TheAncientGeek 05 November 2013 08:55:18PM 0 points [-]

I can't define rational as "effective and human-like"?

Comment author: nshepperd 05 November 2013 09:35:44PM *  2 points [-]

You can, if you want to claim that the only likely result of AGI research is a humanlike AI. At which point I would point at actual AI research which doesn't work like that at all.

Comment author: TheAncientGeek 05 November 2013 10:05:29PM -1 points [-]

Its failures are idiots, not evil geniuses.