gwern comments on Strong intuitions. Weak arguments. What to do? - Less Wrong

17 points · Post author: Wei_Dai · 10 May 2012 07:27PM

Comments (45)

Comment author: gwern 10 May 2012 09:15:48PM 3 points

Here's one suggestion: focus on the causes of the intuition. If the intuition is based on something we would accept as rational evidence if it were suitably cleaned up and put into rigorous form, then we should regard that as an additional argument for the claim in question. If the intuition is based on subject matter we would disregard in other circumstances, or on flawed reasons, then we can regard it as evidence against that claim.

This is a little abstract, so I'll give a double example:

  1. Recently there's been a lot of research into the origins of religious belief, focusing on intuitive versus analytical styles of thinking. To the extent that explicit analytical thought is superior at truth-gathering, we should take this as evidence for atheism and against theism.
  2. This area of research has also focused on when religious belief develops. There's evidence that the core of religious belief forms in childhood, because children ascribe agency to all sorts of observations, while seeing a lack of agency is a more difficult, learned, adult way of thinking (and, as things like the gambler's fallacy show, often not learned even then). To the extent that we trust adult thinking over childhood thinking, we will again regard this as evidence against theism and for atheism.

So, what is the origin of intuitions about things like AI and the future performance of machines...? (I'll just note that I've seen a little evidence that young children are also vitalists.)

Comment author: private_messaging 11 May 2012 09:37:46AM 1 point

> Here's one suggestion: focus on the causes of the intuition.
>
> So, what is the origin of intuitions about things like AI and the future performance of machines...? (I'll just note that I've seen a little evidence that young children are also vitalists.)

I've posted about that (as Dmytry): the belief propagation graph, which shows which paths can't be the cause of the intuitions because their propagation delay is too long. That was one of the things which convinced me that trying to explain anything to LW is a waste of time, and that critique without explanation is more effective. Explanatory critique gets rationalized away, while critique of the form "you suck" makes people think (a little) about what caused that impression and examine themselves somewhat, in a way they don't when given an actual, detailed explanation.

Comment author: Wei_Dai 11 May 2012 05:13:12PM 2 points

I'm curious whether you think Ben's beliefs about AI "benevolence" are likely to be more accurate than SIAI's, and if so, why. Can you make a similar graph for Ben Goertzel (or just give a verbal explanation if that's more convenient)?

Comment author: private_messaging 11 May 2012 07:30:33PM 0 points

Well, first off, Ben seems to be a lot more accurate than SIAI when it comes to meta, i.e. acknowledging that the intuitions act as puppetmaster.

The graph for Ben would probably include more progression from nodes representing the actual design he has in mind (a learning AI) and from computational complexity theory. For example, I'm pretty sure Ben understands all those points about prediction vs. the butterfly effect, and about exponentially hard tasks improving at most 2x even when computing power grows by the ratio of all mankind to one amoeba; it really is very elementary stuff. So would a graph of the people competent in that field. Ben is building a human-like-enough AI. SIAI, as far as I can see, is reinventing religion: there are no attempts to see what limitations an AI could have. Any technical counter-argument is rationalized away, while any pro argument, no matter how weak, how privileged as a hypothesis, or how vague, is taken as something which has to be conclusively disproved, and the vague stuff has to be defined by whoever wants to disprove it. Same as for any religion, really.
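The complexity point above can be made concrete with a toy sketch (illustrative numbers of my own, not anyone's actual estimate): for a task that costs 2^n steps, even a trillion-fold increase in computing power raises the largest solvable n by only about 40.

```python
import math

def max_solvable_n(compute_budget: float) -> int:
    """Largest n such that a task costing 2**n steps fits in the budget."""
    return int(math.log2(compute_budget))

base = 1e15    # baseline compute budget, arbitrary units
boost = 1e12   # a trillion-fold hardware improvement

# For an exponentially hard task, the trillion-fold speedup moves the
# frontier from n = 49 to n = 89 -- an additive gain of only 40.
print(max_solvable_n(base), max_solvable_n(base * boost))
```

The gain is additive (log2 of the speedup), which is why, for exponentially hard tasks, the frontier barely moves no matter how much hardware improves.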

Comment author: Wei_Dai 11 May 2012 11:08:34PM 3 points

> Well, first off, Ben seems to be a lot more accurate than SIAI when it comes to meta, i.e. acknowledging that the intuitions act as puppetmaster.

Yes, this did cause me to take him more seriously than before.

> The graph for Ben would probably include more progression from nodes from the actual design that he has in mind - learning AI

That doesn't seem to help much in practice though. See this article where Ben describes his experiences running an AGI company with more than 100 employees during the dot-com era. At the end, he thought he was close to success, if not for the dot-com bubble bursting. (I assume you agree that it's unrealistic to think he could have been close to building a human-level AGI in 2001, given that we still seem pretty far from such an invention in 2012.)

> and from computational complexity theory

I'm almost certain that Eliezer and other researchers at SIAI know computational complexity theory, but disagree with your application of it. The rest of your comment seems to be a rant against SIAI rather than a comparison of the sources of SIAI's beliefs with the sources of Ben's, so I'm not sure how it helps to answer the question I asked.

Based on what you've written, I don't see a reason to think Ben's intuitions are much better than SI's. Assuming, for the sake of argument, that Ben's intuitions are somewhat, but not much, better, what do you think Ben, SI, and bystanders should each do at this point? For example should Ben keep trying to build OpenCog?

Comment author: private_messaging 12 May 2012 06:32:52AM 1 point

> Yes, this did cause me to take him more seriously than before.

Note also that the meta level is the one area where the people behind SIAI have somewhat notable experience (rationality studies). It is a very bad sign that they get beaten on meta by someone whom I had previously evaluated as a dramatically overoptimistic (in terms of the AI's abilities) AI developer.

> That doesn't seem to help much in practice though. See this article where Ben describes his experiences running an AGI company with more than 100 employees during the dot-com era. At the end, he thought he was close to success, if not for the dot-com bubble bursting. (I assume you agree that it's unrealistic to think he could have been close to building a human-level AGI in 2001, given that we still seem pretty far from such an invention in 2012.)

That's evidence that Ben's understanding is still not enough, and all the more evidence that SIAI's is dramatically not enough.

> I'm almost certain that Eliezer and other researchers at SIAI know computational complexity theory

"Almost certain" is an interesting thing here. With every other AI researcher who has made something usable (e.g. Ben's bio-assay analysis), you can be far more certain. There are a lot of people in the world to pick from, and there will be a few for whom your "almost certain" fails. If you are discussing one person, and the choice of that person is not independent of the failure of "almost certain" (it is not independent if you pick by the person's opinion), then you may easily overestimate.

> Based on what you've written, I don't see a reason to think Ben's intuitions are much better than SI's.

I think they are much further toward being good, in the sense that everyone at SI probably couldn't get there without spending a decade or two studying, but they are still ultimately way short of being any good. In any case, keep in mind that Ben's intuitions are about Ben's own project and come from working on it; there's good reason to think that if his intuitions are substantially bad, he won't make any AI. What are SI's intuitions about? Handwaving about unbounded idealized models ("utility maximizer" taken far too literally, I guess once again because if you don't understand algorithmic complexity, you don't understand how little relation there can be between an idealized model and practice). Misunderstanding of how Solomonoff induction works (or what it even is). And so on.

Comment author: Eugine_Nier 12 May 2012 03:33:05AM 1 point

> I'm almost certain that Eliezer and other researchers at SIAI know computational complexity theory

I'm sure they know it. It's just that, since they don't do much actual coding, it's not all that available to them.

Comment author: Kaj_Sotala 11 May 2012 09:25:00PM 1 point

Hanson saying the same:

> For example, if there were such a thing as a gene for optimism versus pessimism, you might believe that you had an equal chance of inheriting your mother’s optimism gene or your father’s pessimism gene. You might further believe that your sister had the same chances as you, but via an independent draw, and following Mendel’s rules of inheritance. You might even believe that humankind would have evolved to be more pessimistic, had they evolved in harsher environments. Beliefs of this sort seem central to scientific discussions about the origin of human beliefs, such as occur in evolutionary psychology. [...]
>
> Consider, for example, two astronomers who disagree about whether the universe is open (and infinite) or closed (and finite). Assume that they are both aware of the same relevant cosmological data, and that they try to be Bayesians, and therefore want to attribute their difference of opinion to differing priors about the size of the universe.
>
> This paper shows that neither astronomer can believe that, regardless of the size of the universe, nature was equally likely to have switched their priors. Each astronomer must instead believe that his prior would only have favored a smaller universe in situations where a smaller universe was actually more likely. Furthermore, he must believe that the other astronomer’s prior would not track the actual size of the universe in this way; other priors can only track universe size indirectly, by tracking his prior. Thus each person must believe that prior origination processes make his prior more correlated with reality than others’ priors.
>
> As a result, these astronomers cannot believe that their differing priors arose due to the expression of differing genes inherited from their parents in the usual way. After all, the usual rules of genetic inheritance treat the two astronomers symmetrically, and do not produce individual genetic variations that are correlated with the size of the universe.
>
> This paper thereby shows that agents who agree enough about the origins of their priors must have the same prior.
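Hanson's symmetry point can be illustrated with a toy simulation (my own sketch, not from the paper): if a prior-determining "gene" is drawn independently of the universe's actual size, the resulting prior matches reality only at chance level, so a Bayesian who accepts this origin story cannot also believe his own prior is especially likely to track the truth.

```python
import random

random.seed(0)
trials = 100_000
matches = 0
for _ in range(trials):
    universe_open = random.random() < 0.5       # nature fixes the truth
    prior_favors_open = random.random() < 0.5   # gene drawn independently
    matches += (universe_open == prior_favors_open)

# Independence means the inherited prior agrees with reality only about
# half the time -- no better than a sibling's opposite gene would.
print(matches / trials)
```

The agreement rate hovers around 0.5, which is exactly why an astronomer who believes his prior is correlated with the universe's size must reject the symmetric genetic origin story.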