timtyler comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

Post author: ciphergoth, 30 October 2010 09:31AM (32 points)




Comment author: timtyler 30 October 2010 06:45:06PM, 2 points

The idea that a self-improving AGI has to be bug-free from the moment it is first run seems like part of the "syndrome" to me. Can the machine fix its own bugs? What about a "controlled ascent"? etc.

Comment author: Kingreaper 31 October 2010 11:20:47AM, 6 points

If it has a bug in its utility function, it won't want to fix it.

If it has a bug in its bug-detection-and-fixing techniques, you can guess what happens.

So, no, you can't rely on the AGI to fix itself, unless you're certain that the bugs are localised in regions that the fixing process will actually reach.

Comment author: timtyler 31 October 2010 12:08:21PM, -1 points

So: bug-free is not needed - and a controlled ascent is possible.

The unreferenced "hubris verging on sheer insanity" assumption seems like a straw man - nobody made that assumption in the first place.

Comment author: pjeby 30 October 2010 08:10:34PM (edited), 11 points

> Can the machine fix its own bugs?

How do you plan to fix the bugs in its bug-fixing ability, before the bug-fixing ability is applied to fixing bugs in the "don't kill everyone" routine? ;-)

More to the point, how do you know that you and the machine have the same definition of "bug"? That seems to me like the fundamental danger of self-improving AGI: if you don't agree with it on what counts as a "bug", then you're screwed.

(Relevant SF example: a short story in which the AI ship -- also the story's narrator -- explains how she corrected her creator's all-too-human error: he said their goal was to reach the stars, and yet for some reason, he set their course to land on a planet. Silly human!)

> What about a "controlled ascent"?

How would that be the default case, if you're explicitly taking precautions?

Comment author: timtyler 30 October 2010 09:17:26PM, 0 points

It seems as though you don't have any references for the supposed "hubris verging on sheer insanity". Maybe people didn't think that in the first place.

Computers regularly detect and fix bugs today - e.g. check out the automated quick-fix suggestions in Eclipse.

I never claimed that "controlled ascent" is "the default case". In fact, I am here criticising "the default case" as weasel wording.

Comment author: Jordan 01 November 2010 09:15:03PM (edited), 0 points

> > What about a "controlled ascent"?
>
> How would that be the default case, if you're explicitly taking precautions?

Controlled ascent isn't the default case, but it is certainly the alternative that provably friendly AI should be weighed against.