private_messaging comments on Strong intuitions. Weak arguments. What to do? - Less Wrong

17 Post author: Wei_Dai 10 May 2012 07:27PM




Comment author: Wei_Dai 11 May 2012 05:13:12PM 2 points [-]

I'm curious whether you think Ben's beliefs about AI "benevolence" are likely to be more accurate than SIAI's, and if so why. Can you make a similar graph for Ben Goertzel (or just give a verbal explanation if that's more convenient)?

Comment author: private_messaging 11 May 2012 07:30:33PM *  0 points [-]

Well, first off, Ben seems to be a lot more accurate than SIAI when it comes to meta, i.e. acknowledging that intuitions act as puppetmaster.

The graph for Ben would probably include more nodes progressing from the actual design he has in mind - learning AI - and from computational complexity theory (for example, I'm pretty sure Ben understands all those points about prediction vs. the butterfly effect, the exponential tasks that improve at most 2x even when the increase in power is as mankind is to one amoeba, etc.; it really is very elementary stuff). So would a graph of the people competent in that field. Ben is building a human-like-enough AI. SIAI is reinventing religion, as far as I can see: there's no attempt to work out what limitations an AI might have. Any technical counterargument is rationalized away; any pro argument, no matter how weak, how privileged it is as a hypothesis, or how vague, is taken as something which has to be conclusively disproved. The vague stuff has to be defined by whoever wants to disprove it. Same as for any religion, really.
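[Editor's note: a minimal sketch of the complexity point above, with illustrative numbers of my own choosing. For a task whose cost grows as 2^n, multiplying available compute by an astronomical factor only adds log2(factor) to the feasible problem size - so a trillion-fold power increase roughly doubles the reachable n, matching the "at most 2x" claim.]

```python
def feasible_n(ops_per_second, seconds):
    """Largest problem size n such that a 2**n-cost task fits in the compute budget."""
    budget = ops_per_second * seconds
    n = 0
    while 2 ** (n + 1) <= budget:
        n += 1
    return n

# Illustrative budgets: one machine for an hour vs. a trillion times more power.
base = feasible_n(10 ** 9, 3600)
boosted = feasible_n(10 ** 9 * 10 ** 12, 3600)

print(base, boosted)  # 41 81 -- a 10**12-fold power increase, under 2x the problem size
```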

Comment author: Wei_Dai 11 May 2012 11:08:34PM 3 points [-]

Well, first off, Ben seems to be a lot more accurate than SIAI when it comes to meta, i.e. acknowledging that intuitions act as puppetmaster.

Yes, this did cause me to take him more seriously than before.

The graph for Ben would probably include more nodes progressing from the actual design he has in mind - learning AI

That doesn't seem to help much in practice though. See this article where Ben describes his experiences running an AGI company with more than 100 employees during the dot-com era. At the end, he thought he was close to success, if not for the dot-com bubble bursting. (I assume you agree that it's unrealistic to think he could have been close to building a human-level AGI in 2001, given that we still seem pretty far from such an invention in 2012.)

and from computational complexity theory

I'm almost certain that Eliezer and other researchers at SIAI know computational complexity theory, but disagree with your application of it. The rest of your comment seems to be a rant against SIAI rather than a comparison of the sources of SIAI's beliefs with Ben's, so I'm not sure how it helps to answer the question I asked.

Based on what you've written, I don't see a reason to think Ben's intuitions are much better than SI's. Assuming, for the sake of argument, that Ben's intuitions are somewhat, but not much, better, what do you think Ben, SI, and bystanders should each do at this point? For example should Ben keep trying to build OpenCog?

Comment author: private_messaging 12 May 2012 06:32:52AM *  1 point [-]

Yes, this did cause me to take him more seriously than before.

Note also that the meta is all that the people behind SIAI have somewhat notable experience with (rationality studies). It is a very bad sign that they get beaten on meta by someone whom I had previously evaluated as a dramatically overoptimistic (in terms of AI's abilities) AI developer.

That doesn't seem to help much in practice though. See this article where Ben describes his experiences running an AGI company with more than 100 employees during the dot-com era. At the end, he thought he was close to success, if not for the dot-com bubble bursting. (I assume you agree that it's unrealistic to think he could have been close to building a human-level AGI in 2001, given that we still seem pretty far from such an invention in 2012.)

That is evidence that Ben's understanding is still not enough, and only further evidence that SIAI's is dramatically not enough.

I'm almost certain that Eliezer and other researchers at SIAI know computational complexity theory

'Almost certain' is an interesting thing here. See, with every single other AI researcher who has made something usable (bio-assay analysis, in Ben's case), you can be far more certain. There are a lot of people in the world to pick from, and there will be a few for whom your 'almost certain' will fail. If you are discussing one person, and the choice of that person is not independent of the failure of 'almost certain' (it is not independent if you pick by the person's opinions), then you may easily overestimate.

Based on what you've written, I don't see a reason to think Ben's intuitions are much better than SI's.

I think they are much further towards being better, in the sense that everyone in SI probably can't get there without spending a decade or two studying, but still ultimately way short of being any good. In any case, keep in mind that Ben's intuitions are about Ben's project, coming from working on it; there's good reason to think that if his intuitions are substantially bad, he won't make any AI. SI's intuitions are about what? Hand-waving about unbounded idealized models ('utility maximizer' taken way too literally - I guess, once again, because if you don't understand algorithmic complexity, you don't understand how little relation there can be between an idealized model and practice). Misunderstanding of how Solomonoff induction works (or of what it even is). And so on.

Comment author: Eugine_Nier 12 May 2012 03:33:05AM 1 point [-]

I'm almost certain that Eliezer and other researchers at SIAI know computational complexity theory

I'm sure they know it. It's just that, since they don't do much actual coding, it's not all that available to them.