Eliezer_Yudkowsky comments on Help: Writing Marvin Minsky - Less Wrong
I've met Minsky. He's, well, old. I tried asking him what he thought of Bayesianism. He said he regarded it as a failed approach. I didn't try pushing any further.
Yes -- I've seen him talk a couple of times, and everyone still loves to hear him, but he's no longer influential.
I also recently saw Rodney Brooks giving the standard "rapture of the nerds" answer to a singularity question. Brooks is influential, I think, so maybe a good target.
To help with XiXiDu's task, we should put together a list of useful targets.
That would be great. I don't know of many AGI researchers. I am not going to ask Hugo de Garis, we know what Ben Goertzel thinks, and there is already an interview with Peter Voss that I will have to watch first.
More on Ben Goertzel:
He recently wrote 'Why an Intelligence Explosion is Probable', but with the caveat (see the comments):
Jürgen Schmidhuber is one possibility.
Thanks, emailed him.
I watched it; see 9:00 in the first video for his answer on friendly AI. He seems to agree with Ben Goertzel?
ETA
More here.
We have Brooks's answer to many of these questions here - at 17:20.
Essentially, I think Brooks is wrong: robots are highly likely to take over. He only addresses the "standard scenario" of a Hollywood-style hostile robot takeover.
One big possibility he fails to address is a cooperative machine takeover, with the humans and the machines on the same side.
I agree with Brooks that consumer pressure will mostly create "good" robots in the short term. Consumer-related forces will drive the extraction of human preferences into machine-readable formats, much as we are seeing privacy-related preferences being addressed by companies today. Brooks doesn't really look into later future scenarios where forces applied by human consumers are relatively puny, though. There's eventually going to be a bit of a difference between a good company, and a company that is pretending to be good for PR reasons.
I agree with Brooks that a major accident is relatively unlikely. Brooks gives a feeble reason for thinking that, though - comparing an accident to a "lone guy" building a 747. That is indeed unlikely - but it is surely only one of the possible accident scenarios.
Brooks is a robot guy. Those folk are not going to build intelligent machines first. They are typically too wedded to systems with slow build-test cycles. So Brooks may be in a muddle about all this, but that doesn't seem too important: it isn't really his area.
Maybe this is a stupid question, but what was the context of your question and his answer? Maybe he meant that Bayesianism isn't the answer to the problem of artificial general intelligence. I honestly can't tell if that is stupid, but does it necessarily follow that he thought Bayesianism a failed approach in every other context, or that it can't be part of a much larger solution?
Anyway, he is just one person. I would only have to change the template slightly to email others as well.
If I get a response, I might send him another email asking about Bayesianism.
I am not sure that you properly appreciate what happens to people when they get old. There was once a Marvin Minsky who helped write the first paper ever on "artificial intelligence". I look forward to meeting him after he comes out of cryonic suspension, but he isn't around to talk to right now.
I do; I apparently failed to make the correct inference from your remark about him being old. I didn't think he was that "old". Well, if he answers and I notice something "peculiar", I'll refrain from publishing his response.
ETA
I'll err on the side of caution in case his response is not reflective of him.