Roko comments on Open Thread June 2010, Part 3 - Less Wrong

6 Post author: Kevin 14 June 2010 06:14AM


Comments (606)


Comment author: Yoreth 14 June 2010 08:10:24AM 5 points [-]

A prima facie case against the likelihood of a major-impact intelligence-explosion singularity:

First, the majoritarian argument. If the coming singularity is such a monumental, civilization-filtering event, why is there virtually no mention of it in the mainstream? If it is so imminent, so important, and furthermore so sensitive to initial conditions that a small group of computer programmers can bring it about, why are there not massive governmental efforts to create seed AI? If nothing else, you might think that someone could exaggerate the threat of the singularity and use it to scare people into giving them government funds. But we don't even see that happening.

Second, a theoretical issue with self-improving AI: can a mind understand itself? If you watch a simple linear Rube Goldberg machine in action, then you can more or less understand the connection between the low- and the high-level behavior. You see all the components, and your mind contains a representation of those components and of how they interact. You see your hand, and understand how it is made of fingers. But anything more complex than an adder circuit quickly becomes impossible to understand in the same way. Sure, you might in principle be able to isolate a small component and figure out how it works, but your mind simply doesn’t have the capacity to understand the whole thing. Moreover, in order to improve the machine, you need to store a lot of information outside your own mind (in blueprints, simulations, etc.) and rely on others who understand how the other parts work.

You can probably see where this is going. A mind cannot contain a complete representation of itself: the representation would require at least as much information as the mind it specifies, leaving no capacity for anything else. Therefore, while the AI can understand in principle that it is made up of transistors etc., its self-representation necessarily has some blank areas. I posit that the AI cannot purposefully improve itself, because doing so would require it to understand, in a deep, level-spanning way, how it itself works. Of course, it could just add complexity and hope that it works, but that's just evolution, not intelligence explosion.

So: do you know any counterarguments or articles that address either of these points?

Comment deleted 14 June 2010 01:49:25PM *  [-]
Comment author: CarlShulman 14 June 2010 02:41:36PM 6 points [-]

10% is a low bar: it would require a dubiously high level of confidence to rule out AI over a 90-year time frame (longer than the time since Turing, von Neumann, and the like got going, with a massively expanding tech industry, improved neuroimaging and neuroscience, superabundant hardware, and perhaps biological intelligence enhancement for researchers). I would estimate the average of the group you mention as over 1/3 by 2100. Chalmers says AI is more likely than not by 2100; I think Robin and Nick are near half, and I am less certain about the others (who have said that it is important to address AI or AI risks, but have not given unambiguous estimates).

Here's Ben Goertzel's survey. I think that Dan Dennett's median estimate is over a century, although at the 10% level by 2100 I suspect he would agree. Dawkins has made statements that suggest similar estimates, although perhaps with somewhat shorter timelines. Likewise for Doug Hofstadter, who claimed at the Stanford Singularity Summit to have raised his estimate of the time to human-level AI from the 21st century to the mid-to-late millennium, although he weirdly claimed to have done so for non-truth-seeking reasons.

Comment author: timtyler 15 June 2010 09:01:05PM *  2 points [-]

Dan Dennett and Douglas Hofstadter don't think machine intelligence is coming anytime soon. Those folk actually know something about machine intelligence, too!

Comment author: JoshuaZ 14 June 2010 02:07:37PM 3 points [-]

None of those people are AI theorists, so it isn't clear that their opinions should get that much weight, given that it is outside their area of expertise (incidentally, I'd be curious what citation you have for the Hawking claim). From the computer scientists I've talked to, the impression I get is that they see AI as such a failure that most of them aren't bothering to do much research in it, except for narrow-purpose machine learning or expert systems. There's also an issue of sampling bias: the people who think a technology is going to work are generally louder about it than the people who think it won't. For example, a lot of physicists are very skeptical of tokamak fusion reactors being practical anytime in the next 50 years, but the people who talk about them a lot are the people who think they will be practical.

Note also that nothing in Yoreth's post relied on the claim that there won't be moderately smart AI, so it doesn't go against what he's said to point out that some experts think there will be very smart AI (although certainly some people on that list, such as Chalmers and Hanson, do believe that some intelligence-explosion-like event will occur). Indeed, Yoreth's second argument applies to roughly any level of intelligence. So overall, I don't think the point about those individuals does much to address the argument.

Comment deleted 14 June 2010 03:01:10PM *  [-]
Comment author: JoshuaZ 14 June 2010 03:07:49PM 10 points [-]

That's a very good point. The AI theorist presumably knows more about avenues that have not done very well (neural nets, other forms of machine learning, expert systems) but isn't likely to have much more general knowledge. That does mean, however, that the AI researcher has a better understanding of how many different approaches to AI have failed miserably. But that's just a comparison to your example of the physics grad student who can code. Most of the people you mentioned in your reply to Yoreth clearly have knowledge bases closer to the AI prof's than to the physics grad student's. Hanson certainly has looked a lot at various failed attempts at AI. I think I'll withdraw this argument: you are correct that these individuals on the whole are likely to have about as much relevant expertise as the AI professor.

Comment author: SilasBarta 14 June 2010 09:19:45PM 3 points [-]

What does an average AI prof know that a physics graduate who can code doesn't know? I'm struggling to name even one thing. If you set the two of them to code AI for some competition like controlling a robot, I doubt that there would be much advantage to the AI guy.

So people with no experience programming robots but who know the equations governing them would just be able to, on the spot, come up with comparable code to AI profs? What do they teach in AI courses, if not the kind of thing that would make you better at this?

Comment author: Vladimir_Nesov 14 June 2010 03:07:50PM *  3 points [-]

What does an average AI prof know that a physics graduate who can code doesn't know?

Machine learning, more math/probability theory/belief networks background?

Comment deleted 14 June 2010 03:15:02PM [-]
Comment author: Vladimir_Nesov 14 June 2010 03:33:51PM *  2 points [-]

There is a ton of knowledge about probabilistic processes defined by networks in various ways, numerical methods for inference in them, clustering, etc. All the fundamental material in this range has applications to physics, and some of it was known in physics before being reinvented in machine learning, so in principle a really good physics grad could know that stuff, but it's more than the standard curriculum requires. On the other hand, it's much more directly relevant to probabilistic methods in machine learning. Of course both should have a good background in statistics and Bayesian probability theory, but probabilistic analysis of nontrivial processes in particular adds unique intuitions that a physics grad won't necessarily possess.
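For concreteness, here is a minimal sketch of the kind of machinery meant by "inference in probabilistic networks": exact inference by enumeration in a toy rain/sprinkler/wet-grass belief network. The structure and all the numbers are made up purely for illustration.

```python
from itertools import product

# Toy Bayesian network: Rain -> WetGrass <- Sprinkler.
# All probabilities are invented for illustration only.
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet = {  # P(wet | rain, sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.05,
}

def joint(rain, sprinkler, wet):
    """Joint probability of one full assignment of the three variables."""
    p = P_rain[rain] * P_sprinkler[sprinkler]
    p_w = P_wet[(rain, sprinkler)]
    return p * (p_w if wet else 1 - p_w)

def posterior_rain_given_wet():
    """P(rain | wet) by enumerating the hidden variable (sprinkler)."""
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
    return num / den

print(round(posterior_rain_given_wet(), 3))  # prints 0.645
```

Enumeration is exponential in the number of hidden variables; the "numerical methods" alluded to above (variable elimination, belief propagation, MCMC) exist precisely to do better than this brute-force sum.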

Comment author: timtyler 15 June 2010 09:05:43PM *  2 points [-]

Re: "What does an average AI prof know that a physics graduate who can code doesn't know? I'm struggling to name even one thing. If you set the two of them to code AI for some competition like controlling a robot, I doubt that there would be much advantage to the AI guy."

A very odd opinion. We have 60 years of study of the field, and have learned quite a bit, judging by things like the state of translation and speech recognition.

Comment author: Daniel_Burfoot 14 June 2010 08:46:16PM 2 points [-]

I disagree with this, basically because AI is a pre-paradigm science.

I am gratified to find that someone else shares this opinion.

What does an average AI prof know that a physics graduate who can code doesn't know?

A better way to phrase the question might be: what can an average AI prof. do that a physics graduate who can code, can't?

Comment deleted 14 June 2010 10:47:12PM [-]
Comment author: CarlShulman 15 June 2010 12:46:08PM 4 points [-]

I think that the closest we have seen is the ML revolution, but when you look at it, it is not new science, it is just statistics correctly applied.

Statistics vs machine learning: FIGHT!

Comment author: SilasBarta 15 June 2010 12:20:17AM 0 points [-]

Could you clarify exactly what Hutter has done that has advanced the frontier? I used to be very nearly a "Hutter enthusiast", but I eventually concluded that his entire work is:

"Here's a few general algorithms that are really good, but take way too long to be of any use whatsoever."

Am I missing something? Is there something of his I should read that will open my eyes to the ease of mechanizing intelligence?

Comment author: timtyler 15 June 2010 09:10:54PM 1 point [-]

This seems like a fairly reasonable description of the work's impact:

"Another theme that I picked up was how central Hutter’s AIXI and my work on the universal intelligence measure has become: Marcus and I were being cited in presentations so often that by the last day many of the speakers were simply using our first names. As usual there were plenty of people who disagree with our approach, however it was clear that our work has become a major landmark in the area."

Comment author: SilasBarta 16 June 2010 04:36:54PM 0 points [-]

But why does it get those numerous citations? What real-world, non-academic consequences have resulted from this massive usage of Hutter's intelligence definition, which would distinguish it from a mere mass frenzy?

Comment author: timtyler 16 June 2010 05:00:00PM *  0 points [-]

No time for a long explanation from me, but "universal intelligence" seems important partly because it shows how simple an intelligent agent can be, if you abstract away most of its complexity into a data-compression system. It is just a neat way to break down the problem.
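A minimal sketch of the underlying idea, restricting hypotheses to repeating bit patterns rather than arbitrary programs (a drastic toy simplification of Solomonoff induction, written here only for illustration):

```python
from itertools import product

def predict_next(bits, max_len=6):
    """Toy Solomonoff-style predictor: each hypothesis is a repeating
    bit pattern, weighted by 2**-length (shorter = simpler = heavier).
    Only hypotheses consistent with the observed data get a vote."""
    weight = {0: 0.0, 1: 0.0}
    for n in range(1, max_len + 1):
        for pattern in product("01", repeat=n):
            # Does this pattern, repeated, reproduce the observed bits?
            expansion = (pattern * (len(bits) // n + 1))[:len(bits)]
            if "".join(expansion) == bits:
                nxt = int(pattern[len(bits) % n])
                weight[nxt] += 2.0 ** -n
    return max(weight, key=weight.get)

print(predict_next("010101"))  # prints 0: "01" is the shortest consistent pattern
```

The compression connection is that the shortest pattern reproducing the data dominates the weighted vote, which is exactly the "abstract the complexity into a compressor" move, with the caveat that the real construction quantifies over all programs and is incomputable.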

Comment deleted 15 June 2010 08:33:33AM [-]
Comment author: timtyler 15 June 2010 09:07:39PM 0 points [-]

Surely everyone has been doing that from the beginning.

Comment author: whpearson 14 June 2010 03:08:14PM 1 point [-]

The AI prof is more likely to know more things that don't work and the difficulty of finding things that do. Which is useful knowledge when predicting the speed of AI development, no?

Comment deleted 14 June 2010 03:15:39PM [-]
Comment author: whpearson 14 June 2010 03:22:49PM 0 points [-]

Trying to model the world as crisp logical statements, à la blocks world, for example.

Comment deleted 14 June 2010 04:13:51PM [-]
Comment author: whpearson 14 June 2010 04:25:51PM 0 points [-]

Yup... which things were you asking for? Examples of things that do work? You don't actually need to find them to know that they are hard to find!

Comment author: MatthewW 14 June 2010 07:10:44PM 2 points [-]

I think Hofstadter could fairly be described as an AI theorist.

Comment author: Emile 17 June 2010 02:14:59PM 2 points [-]

So could Robin Hanson.