Eliezer_Yudkowsky comments on Help: Writing Marvin Minsky - Less Wrong

Post author: XiXiDu 10 June 2011 12:45PM

Comment author: Eliezer_Yudkowsky 10 June 2011 07:44:21PM 11 points

I've met Minsky. He's, well, old. I tried asking him what he thought of Bayesianism. He said he regarded it as a failed approach. I didn't try pushing any further.

Comment author: jmmcd 10 June 2011 10:06:44PM 6 points

Yes -- I've seen him talk a couple of times, and everyone still loves to hear him, but he's no longer influential.

I also recently saw Rodney Brooks giving the standard "rapture of the nerds" answer to a singularity question. Brooks is influential, I think, so maybe a good target.

To help with XiXiDu's task, we should put together a list of useful targets.

Comment author: XiXiDu 11 June 2011 09:59:55AM 6 points

To help with XiXiDu's task, we should put together a list of useful targets.

That would be great. I don't know of many AGI researchers. I am not going to ask Hugo de Garis; we know what Ben Goertzel thinks; and there is already an interview with Peter Voss that I will have to watch first.

More on Ben Goertzel:

He recently wrote 'Why an Intelligence Explosion is Probable', but with the caveat (see the comments):

Look -- what will prevent the first human-level AGIs from self-modifying in a way that will massively increase their intelligence is a very simple thing: they won't be smart enough to do that!

Every actual AGI researcher I know can see that. The only people I know who think that an early-stage, toddler-level AGI has a meaningful chance of somehow self-modifying its way up to massive superhuman intelligence -- are people associated with SIAI.

But I have never heard any remotely convincing arguments in favor of this odd, outlier view of the easiness of hard takeoff!!!

Comment author: jmmcd 11 June 2011 04:54:02PM 1 point

Jürgen Schmidhuber is one possibility.

Comment author: XiXiDu 11 June 2011 05:56:59PM 1 point

Jürgen Schmidhuber is one possibility.

Thanks, emailed him.

Comment author: XiXiDu 11 June 2011 10:18:12AM 1 point

...there is already an interview with Peter Voss that I will have to watch first.

I watched it; see 9:00 in the first video for his answer on Friendly AI. He seems to agree with Ben Goertzel?

ETA

More here.

Comment author: timtyler 11 June 2011 09:09:49AM 0 points

We have Brooks's answers to many of these questions here - at 17:20.

Essentially, I think Brooks is wrong: robots are highly likely to take over. He only addresses the "standard scenario" of a Hollywood-style hostile robot takeover.

One big possibility he fails to address is a cooperative machine takeover, with the humans and the machines on the same side.

I agree with Brooks that consumer pressure will mostly create "good" robots in the short term. Consumer-related forces will drive the extraction of human preferences into machine-readable formats, much as we are seeing privacy-related preferences being addressed by companies today. Brooks doesn't really look into later scenarios where the forces applied by human consumers are relatively puny, though. There's eventually going to be a bit of a difference between a good company and a company that is pretending to be good for PR reasons.

I agree with Brooks that a major accident is relatively unlikely. Brooks gives a feeble reason for thinking that, though - comparing an accident with a "lone guy" building a 747. That is indeed unlikely - but it is surely only one of the possible accident scenarios.

Brooks is a robot guy. Those folk are not going to build intelligent machines first; they are typically too wedded to systems with slow build-test cycles. So Brooks may be muddled about all this, but that doesn't seem too important: it isn't really his area.

Comment author: XiXiDu 11 June 2011 09:51:20AM 4 points

I tried asking him what he thought of Bayesianism.

Maybe this is a stupid question, but what was the context of your question and his answer? Maybe he meant that Bayesianism isn't the answer to the problem of artificial general intelligence. I honestly can't tell if that is stupid, but does it necessarily mean that he considers Bayesianism a failed approach in every other context, or that it can't be part of a much larger solution?

Anyway, he is just one person. I would only have to change the template slightly to use it to email others as well.

If I get a response, I might send him another email asking about Bayesianism.

Comment author: Eliezer_Yudkowsky 11 June 2011 06:46:45PM 10 points

I am not sure that you properly appreciate what happens to people when they get old. There was once a Marvin Minsky who helped write the first paper ever on "artificial intelligence". I look forward to meeting him after he comes out of cryonic suspension, but he isn't around to talk to right now.

Comment author: XiXiDu 11 June 2011 06:58:09PM 3 points

I am not sure that you properly appreciate what happens to people when they get old.

I do; I apparently failed to make the correct inference from your remark about him being old. I didn't think he was that "old". Well, if he answers and I notice something "peculiar", I'll refrain from publishing his response.

ETA

I'll err on the side of caution in case his response is not reflective of him.