I thought Ben Goertzel made an interesting point at the end of his dialog with Luke Muehlhauser, about how the strengths of both sides' arguments do not match up with the strengths of their intuitions:

One thing I'm repeatedly struck by in discussions on these matters with you and other SIAI folks, is the way the strings of reason are pulled by the puppet-master of intuition. With so many of these topics on which we disagree -- for example: the Scary Idea, the importance of optimization for intelligence, the existence of strongly convergent goals for intelligences -- you and the other core SIAI folks share a certain set of intuitions, which seem quite strongly held. Then you formulate rational arguments in favor of these intuitions -- but the conclusions that result from these rational arguments are very weak. For instance, the Scary Idea intuition corresponds to a rational argument that "superhuman AGI might plausibly kill everyone." The intuition about strongly convergent goals for intelligences, corresponds to a rational argument about goals that are convergent for a "wide range" of intelligences. Etc.

On my side, I have a strong intuition that OpenCog can be made into a human-level general intelligence, and that if this intelligence is raised properly it will turn out benevolent and help us launch a positive Singularity. However, I can't fully rationally substantiate this intuition either -- all I can really fully rationally argue for is something weaker like "It seems plausible that a fully implemented OpenCog system might display human-level or greater intelligence on feasible computational resources, and might turn out benevolent if raised properly." In my case just like yours, reason is far weaker than intuition.

What do we do about this disagreement and other similar situations, both as bystanders (who may not have strong intuitions of their own) and as participants (who do)?

I guess what bystanders typically do (although not necessarily consciously) is evaluate how reliable each party's intuitions are likely to be, and then use that to form a probabilistic mixture of the two sides' positions. The information that goes into such evaluations could include things like what cognitive processes likely came up with the intuitions, how many people hold each intuition, and how accurate each individual's past intuitions were.

If this is the best we can do (at least in some situations), participants could help by providing more information that might be relevant to the reliability evaluations, and bystanders should pay more conscious attention to such information instead of focusing purely on each side's arguments. The participants could also pretend that they are just bystanders, for the purpose of making important decisions, and base their beliefs on "reliability-adjusted" intuitions instead of their raw intuitions.
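
To make the "probabilistic mixture" idea concrete, here is a minimal sketch of one way a bystander might compute it (simple linear opinion pooling; the function, the reliability weights, and the probability estimates below are all hypothetical and purely illustrative, not something the post commits to):

```python
# Sketch: combine two parties' probability estimates for a claim, weighting
# each estimate by how reliable the bystander judges that party's intuitions
# to be. All numbers are made up for illustration.

def mixture(positions):
    """positions: list of (probability_estimate, reliability_weight) pairs."""
    total_weight = sum(w for _, w in positions)
    return sum(p * w for p, w in positions) / total_weight

# Party A's intuition says the claim is very likely; party B's says it's unlikely.
# The bystander judges A's past intuitions to have been less reliable than B's.
combined = mixture([(0.9, 0.3), (0.2, 0.7)])
print(f"bystander's combined estimate: {combined:.2f}")  # -> 0.41
```

Other pooling rules (e.g. weighting log-odds instead of probabilities) would serve equally well here; the point is only that the reliability judgments enter as explicit weights.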

Questions: Is this a good idea? Any other ideas about what to do when strong intuitions meet weak arguments?

Related Post: Kaj Sotala's Intuitive differences: when to agree to disagree, which is about a similar problem, but mainly from the participant's perspective instead of the bystander's.


(Tangent: In some cases it's possible to find a third party who understands a participant's intuitions and is willing to explain them to participants with opposing intuitions in language that everyone, including bystanders, can understand. E.g. I think that if I had been involved as a translator in the Muehlhauser--Goertzel debate, then at least some of Ben's intuitions would have been made clearer to Luke. Because Luke didn't quite understand Ben's position, some LW bystanders also got a mistaken impression of what Ben's position was; e.g. a highly upvoted comment suggested that Ben thought that arbitrary AGIs would end up human-friendly, which is not Ben's position. A prerequisite for figuring out what to do in the case of disagreeing intuitions is to figure out what the participants' intuitions actually are. I don't think people are able to do this reliably. (I also think it's a weakness of Eliezer's style of rationality in particular, and so LessWrong contributors especially might want to be wary of fighting straw men. I don't mean to attack LessWrong or anyone with this comment, just to share my impression about one of LessWrong's (IMO more glaring and important) weaknesses.))

This smells true to me, but I don't have any examples at hand. Do you?

This would be especially useful here, as specific examples would make it easier to think about strategies to avoid this, and maybe see if we're doing something systematically badly.

Rain

One I recall is when Eliezer was talking with the 48 Laws of Power guy on Bloggingheads.tv. He kept using the same analogy over and over, when the other guy clearly didn't get it. Eliezer has said he's specifically bad at coming up with good analogies on the spot.

Personally, I enjoy finding the proper path to bridge inferential gaps, and am quite good at it in person. I've thought a 'translation service' like the one Will recommends would be highly beneficial in such debates, but only with very skilled translators.

Shmi

I thought it was generally agreed that, as a participant, the rational thing to do is to examine one's intuition and logic, make sure one's System 1 and System 2 thinking match up, and then reexamine for possible biases. Hence it seems highly irrational to bring up something like

I have a strong intuition that OpenCog can be made into a human-level general intelligence, and that if this intelligence is raised properly it will turn out benevolent and help us launch a positive Singularity. However, I can't fully rationally substantiate this intuition

as a supporting argument.

Except in a lot of cases your intuition is better than your conscious thinking. See Daniel Kahneman's Thinking, Fast and Slow.

couscous thinking

That's gotta be the best Cupertino I've seen in a while.

Thanks, fixed.

Shmi

My point was that you would be irrational to seriously expect your opponent to share your intuition, not that your intuition is wrong.

I feel like this is just a case of people not being good enough at communicating. Productively communicating, that is, even when you start out disagreeing - like when you start arguing with your father about which day to take the vacation on, but end up solving the problem of what vacation you want to take instead.

There are lots of hard parts about communication. Understanding your own point of view, changing your mind when you're wrong, and understanding the other person's point of view are all hard.

So when Ben says "it's our intuitions' fault," I think it means "it was too hard for us to go further - we need better skills X, Y and Z."

I think SI's greatest strength is communication. Its greatest weakness is lack of relevant technical competence. They are taken far more seriously than anyone with average communication skills and the same degree of technical expertise (or lack thereof) would be.

I would agree that this is true. But there are lots of different communication skills, and humans are really bad at some of them, so "greatest strength" still leaves a lot of room for error. When I look at Ben and Luke's dialogue, I see places where they speak past each other, or walk into walls. And of course what Ben said was basically "we had some particular problems with communicating" - my claim is just that those problems are something we should aim to overcome.

But I'm just an amateur. If we have any psychology grad students on here, maybe we should shanghai them into figuring out how to start a communication dojo.

I've seen that sort of 'talking past each other' happen very often when one side doesn't know the topic well enough for dialogue (but has a strong opinion anyway). I just don't think it is useful to view it as purely a 'communication' problem. Perhaps the communication is good enough, and the ideas being communicated are bad (faulty). That's what you should expect from someone whose only notable accomplishments are in communicating, and who's failing with multiple other people, including university professors.

Errors in communication are there, believe me. Maybe their first mistake was choosing too big a topic (everything we disagree about :P), because it seems like they felt pressure to "touch on" a bunch of points, rather than saying "hold on, let's slow down and make sure we're talking about the same thing."

And if the other person is wrong and not a good communicator, there are still some things you can do to help the dialogue, though this is hard and I'm bad at it - changing yourself is easy by comparison. For example, if it turns out that you're talking about two different things (e.g. AI as it is likely to be built vs. AI "in general"), you can be the one to move over and talk about the thing the other person wants to talk about.

Well, I estimate negative utility for giving ideas about AI 'in general' to people who don't understand the magnitude of the distinction between AIs 'in general' (largely AIs that could not be embedded within a universe that has finite computational power) and the AIs that matter in practice.

Here's one suggestion: focus on the causes of the intuition. If the intuition is based on something we would accept as rational evidence if it were suitably cleaned up and put into rigorous form, then we should regard it as an additional argument for the claim in question. If the intuition is based on subject matter we would disregard in other circumstances, or on flawed reasons, then we can regard that as evidence against the claim.

This is a little abstract, so I'll give a double example:

  1. Recently there's been a lot of research into the origins of religious belief, focusing on intuitive versus analytical styles of thinking. To the extent that explicit analytical thought is superior at truth-gathering, we should take this as evidence for atheism and against theism.
  2. This area of research has also focused on when religious belief develops, and there's evidence that the core of religious belief is formed in childhood, because children ascribe agency to all sorts of observations, while lack of agency is more of a difficult, learned, adult way of thinking (and, as things like the gambler's fallacy show, is often not learned even then). To the extent that we trust adult thinking over childhood thinking, we will again regard this as evidence against theism and for atheism.

So, what is the origin of intuitions about things like AI and the future performance of machines...? (I'll just note that I've seen a little evidence that young children are also vitalists.)

Hanson saying the same:

For example, if there were such a thing as a gene for optimism versus pessimism, you might believe that you had an equal chance of inheriting your mother’s optimism gene or your father’s pessimism gene. You might further believe that your sister had the same chances as you, but via an independent draw, and following Mendel’s rules of inheritance. You might even believe that humankind would have evolved to be more pessimistic, had they evolved in harsher environments. Beliefs of this sort seem central to scientific discussions about the origin of human beliefs, such as occur in evolutionary psychology. [...]

Consider, for example, two astronomers who disagree about whether the universe is open (and infinite) or closed (and finite). Assume that they are both aware of the same relevant cosmological data, and that they try to be Bayesians, and therefore want to attribute their difference of opinion to differing priors about the size of the universe.

This paper shows that neither astronomer can believe that, regardless of the size of the universe, nature was equally likely to have switched their priors. Each astronomer must instead believe that his prior would only have favored a smaller universe in situations where a smaller universe was actually more likely. Furthermore, he must believe that the other astronomer’s prior would not track the actual size of the universe in this way; other priors can only track universe size indirectly, by tracking his prior. Thus each person must believe that prior origination processes make his prior more correlated with reality than others’ priors.

As a result, these astronomers cannot believe that their differing priors arose due to the expression of differing genes inherited from their parents in the usual way. After all, the usual rules of genetic inheritance treat the two astronomers symmetrically, and do not produce individual genetic variations that are correlated with the size of the universe.

This paper thereby shows that agents who agree enough about the origins of their priors must have the same prior.

Here's one suggestion: focus on the causes of the intuition.

So, what is the origin of intuitions about things like AI and the future performance of machines...? (I'll just note that I've seen a little evidence that young children are also vitalists.)

I've posted about that (as Dmytry): the belief propagation graph, which shows what paths can't be the cause of intuitions due to too-long propagation delay. That was one of the things which convinced me that trying to explain anything to LW is a waste of time, and that critique without explanation is more effective: explanatory critique gets rationalized away, while critique of the form "you suck" makes people think (a little) about what caused the impression in question and examine themselves somewhat, in a way they don't if they are given an actual, detailed explanation.

I'm curious whether you think Ben's beliefs about AI "benevolence" are likely to be more accurate than SIAI's, and if so why. Can you make a similar graph for Ben Goertzel (or just give a verbal explanation if that's more convenient)?

Well, first off, Ben seems to be a lot more accurate than SIAI when it comes to meta, i.e. acknowledging that the intuitions act as puppetmaster.

The graph for Ben would probably include more progression from nodes representing the actual design he has in mind - a learning AI - and from computational complexity theory (for example, I'm pretty sure Ben understands all those points about prediction vs. the butterfly effect, the exponential tasks that improve at most 2x even when computing power is to mankind as mankind is to one amoeba, etc.; it really is very elementary stuff). So would a graph for people competent in that field. Ben is building a human-like-enough AI. SIAI is reinventing religion as far as I can see; there are no attempts to try and see what limitations AI can have. Any technical counterargument is rationalized away; any pro argument, no matter how weak and how privileged it is as a hypothesis, or how vague, is taken as something which has to be conclusively disproved. The vague stuff has to be defined by whoever wants to disprove it. Same as for any religion, really.

Well, first off, Ben seems to be a lot more accurate than SIAI when it comes to meta, i.e. acknowledging that the intuitions act as puppetmaster.

Yes, this did cause me to take him more seriously than before.

The graph for Ben would probably include more progression from nodes representing the actual design he has in mind - a learning AI

That doesn't seem to help much in practice though. See this article where Ben describes his experiences running an AGI company with more than 100 employees during the dot-com era. At the end, he thought he was close to success, if not for the dot-com bubble bursting. (I assume you agree that it's unrealistic to think he could have been close to building a human-level AGI in 2001, given that we still seem pretty far from such an invention in 2012.)

and from computational complexity theory

I'm almost certain that Eliezer and other researchers at SIAI know computational complexity theory, but disagree with your application of it. The rest of your comment seems to be a rant against SIAI instead of a comparison of the sources of SIAI's beliefs with Ben's, so I'm not sure how it helps to answer the question I asked.

Based on what you've written, I don't see a reason to think Ben's intuitions are much better than SI's. Assuming, for the sake of argument, that Ben's intuitions are somewhat, but not much, better, what do you think Ben, SI, and bystanders should each do at this point? For example should Ben keep trying to build OpenCog?

Yes, this did cause me to take him more seriously than before.

Note also that the meta is all that the people behind SIAI have somewhat notable experience with (rationality studies). It is a very bad sign that they get beaten on meta by someone whom I had previously evaluated as a dramatically overoptimistic (in terms of AI's abilities) AI developer.

That doesn't seem to help much in practice though. See this article where Ben describes his experiences running an AGI company with more than 100 employees during the dot-com era. At the end, he thought he was close to success, if not for the dot-com bubble bursting. (I assume you agree that it's unrealistic to think he could have been close to building a human-level AGI in 2001, given that we still seem pretty far from such an invention in 2012.)

That's evidence that Ben's understanding is still not enough, and only evidence for SIAI's being dramatically not enough.

I'm almost certain that Eliezer and other researchers at SIAI know computational complexity theory

'Almost certain' is an interesting thing here. With every single other AI researcher who has made something usable (e.g. Ben's bio-assay analysis), you can be way more certain. There are a lot of people in the world to pick from, and there will be a few for whom your 'almost certain' will fail. If you are discussing one person, and the choice of that person is not independent of the failure of 'almost certain' (it is not independent if you pick by the person's opinion), then you may easily overestimate.

Based on what you've written, I don't see a reason to think Ben's intuitions are much better than SI's.

I think they are much further towards being better, in the sense that people in SI probably can't get there without spending a decade or two studying, but still ultimately way short of being any good. In any case, keep in mind that Ben's intuitions are about Ben's project and come from working on it; there's good reason to think that if his intuitions are substantially bad, he won't make any AI. SI's intuitions are about what? Handwaving about unbounded idealized models ('utility maximizer' taken way too literally, I guess once again because if you don't understand algorithmic complexity, you don't understand how little relation there can be between an idealized model and practice). Misunderstanding of how Solomonoff induction works (or what it even is). And so on.

I'm almost certain that Eliezer and other researchers at SIAI know computational complexity theory

I'm sure they know it. It's just that, since they don't do much actual coding, it's not all that available to them.

In many fields, intuitions are just not very reliable. For example, in math, many of the results in both topology and set theory are highly counter-intuitive. If one is reaching a conclusion primarily based on intuitions, that should be a cause for concern.

On the other hand, working on topology for a while gives one the meta-intuition that one should check reasonable-sounding statements on the long line, the topologist's sine curve, the Cantor set, etc.

Or better, one's idea of what constitutes a "reasonable-sounding statement" in the first place changes, to better accommodate what is actually true.

(Checking those examples is good; but even better would be not to need to, due to having an appropriate feeling for how abstract a topological space is.)

Completely agreed. Part of this might look like a shift in definitions/vocabulary over time. Coming to topology from analysis, sequences felt like a natural way to interrogate limiting behavior. After a while, though, it sort of became clear that thinking sequentially requires putting first-countability assumptions everywhere. Introducing nets did away with the need for this assumption and better captured what convergence ought to mean in general topological spaces.

Sure. But we don't have much in the way of actual AI to check our intuitions against in the same way.

Do you believe ZFC (or even PA) to be consistent? Can you give a reason for this belief that doesn't rely on your intuition?

Heck, can you justify the axioms used in those systems without appeal to your intuition?

This is a valid point. Sometimes we rely on intuition. So can one reasonably distinguish this case from the case of ZFC or PA? I think the answer is yes.

First, we do have some other (albeit weak) evidence for the consistency of PA and ZFC. In the case of PA we have what looks like a physical model that seems pretty similar. That's only a weak argument because the full induction axiom schema is much stronger than one can represent in any finite chunk of PA in a reasonable fashion. We also have spent a large amount of time on both PA and ZFC making theorems, and we haven't seen a contradiction. This is after we've had a lot of experience with systems like naive set theory, where we have what seems to be a good idea of how to find contradictions in systems. This is somewhat akin to having a functional AGI and seeing what it does in at least one case for a short period of time. Of course, this argument is also weak, since Gödelian issues imply that there should be axiomatic systems that are fairly simple and yet have contradictions that only appear when one looks at extremely long chains of inferences compared to the complexity of the systems.

Second, in the case of PA (and to a slightly lesser extent ZFC), different people who have thought about the question have arrived at the same intuition. There are of course a few notable exceptions, like Edward Nelson, but those exceptions are limited, and in many cases, like Nelson's, there seem to be other, extra-mathematical motives for them to reach their conclusions. This is in contrast to the situation in question, where a much smaller number of people have thought about the issues and they haven't reached the same intuition.

A third issue is that we have consistency proofs of PA that use somewhat weak systems. Gentzen's theorem is the prime example. The forms of induction required are extremely weak compared to the full induction schema as long as one is allowed a very tiny bit of ordinal arithmetic. I don't know what the relevant comparison would be in the AGI context, but this seems like a type of evidence we don't have in that context.
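
For reference, a compact statement of Gentzen's result (my paraphrase of the standard formulation):

$$\mathrm{PRA} + \mathrm{TI}(\varepsilon_0) \;\vdash\; \mathrm{Con}(\mathrm{PA})$$

i.e., primitive recursive arithmetic plus quantifier-free transfinite induction up to the ordinal ε₀ proves the consistency of PA, even though PRA by itself is far weaker than PA's full induction schema.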

Thinking about this some more, I don't think our intuitions are particularly unreliable; it's simply more memorable when they fail.

If one is reaching a conclusion primarily based on intuitions, that should be a cause for concern.

I wonder if I might be missing your point, since this post is basically asking what one should do after one is already concerned. Are you saying that the first step is to become concerned (and perhaps one or both parties in my example aren't yet)?

Yes; frankly, I'm not sure that either party in question is demonstrating enough concern about the reliability of their intuitions.

Many of the results are counter-intuitive, but most are not, especially for someone trained in that area. In fact, intuition is required to make progress on math.

But that intuition is in many cases essentially many years of experience with similar contexts, all put together and operating in the back of one's head. In this case the set of experience to inform/create intuition is pretty small. So when there are strongly contradicting intuitions, it isn't at all clear which one makes more sense to pay attention to.

Good question. The sequences focus on thinking correctly more than arguing successfully, and I think most people who stick around here develop these intuitions through a process of learning to think more like Eliezer does.

The first possible cause I see for why strong intuitions are not convertible into convincing arguments is long inferential distances--the volume of words is simply too great to fit into a reasonably-sized exchange. But the Hanson-Yudkowsky Foom Debate was unreasonably long, and as I understand it, both parties left with their strong intuitions fairly intact.

The post-mortem from the Foom debate seemed to center around emotional attachment to ideas, and their intertwining with identity. This looks like the most useful level for bystander-based examination. I'd be interested to know how well, say, priming yourself for disinterested detachment and re-examining both arguments works for a Foom-sized debate as opposed to one of ordinary length.

evaluate how reliable each party's intuitions are likely to be, and then use that to form a probabilistic mixture of the two sides' positions.

I'd break this down into

  1. Outside view of each party, if other intuitions are available for evaluation.

  2. Outside view of each intuition, although the debaters probably already did this for each other.

  3. A probabilistic graph for each party, involving the intuitions and the arguments. Through what paths did the intuitions generate the arguments? If there was any causality going the other direction, how did that work?

What other methods for evaluating inexpressibly strong intuitions are there?

It seems plausible that a fully implemented OpenCog system might display human-level or greater intelligence on feasible computational resources, and might turn out benevolent if raised properly.

Is there a disagreement about this? Perhaps not as great as it seems.

The idea of superhuman software is generally accepted on LW. Whether OpenCog is the right platform is a technical detail, which we can skip at the moment.

Might this software turn out benevolent, if raised properly? Let's become more specific about that part. If "might" only means "there is a nonzero probability of this outcome", LW agrees.

So we should rather ask how high the probability is that a "properly raised" OpenCog system will turn out "benevolent" -- depending on the definitions of "benevolent" and "properly raised". That is the part which makes the difference.

asr

I don't think informal arguments can convince people on topics where they have made up their minds. You need either a proof or empirical evidence.

  • Show us a self-improving something. Show us that it either does or doesn't self-improve in surprising and alarming ways. Even if it's self-improving only in very narrow limited ways, that would be illuminating.

  • Explain how various arguments would apply to real existing AI-ish systems, like self-driving cars, machine translation, Watson, or a web search engine.

  • Give a proof that some things can or can't be done. There is a rich literature on uncomputable and intractable problems. We do know how to prove properties of computer programs; I am surprised at how little this gets mentioned on LW. (A sketch of the classic impossibility argument is below.)
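
For instance, here is the shape of the classic halting-problem diagonalization written as a Python sketch (not from the original comment; the decider `halts` is hypothetical, which is the whole point of the argument):

```python
# Sketch of the diagonalization behind the halting problem: if a general
# halting decider existed, we could build a program that contradicts it on
# itself, so no such decider can exist.

def halts(program, argument):
    """Hypothetical decider: returns True iff program(argument) halts. Cannot actually exist."""
    raise NotImplementedError

def diagonal(program):
    if halts(program, program):   # if the decider says "halts on its own source"...
        while True:               # ...then loop forever,
            pass
    return None                   # ...otherwise halt immediately.

# diagonal(diagonal) halts exactly when halts(diagonal, diagonal) says it
# doesn't -- a contradiction either way, which is the impossibility proof.
```

The same style of argument (reductions, diagonalization, resource bounds) is what the rich literature on uncomputable and intractable problems consists of.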

I've posted on that also. For example, predictions are fighting against the butterfly effect, and at best double in time when you square the computing power (and that's given unlimited knowledge of the initial state!). It's pretty well demonstrable on the weather, for instance, but of course rationalizers can always argue that it 'wasn't demonstrated' for some more complex cases. There are tasks at which something that is to mankind as mankind is to one amoeba will at best only double the ability compared to mankind (or much less than double). LW is full of intuitions where you say that it is to us, in terms of computing power, as we are to one amoeba, and then it is intuited that it can actually do things as much better than we can as we can versus the amoeba. Which just ain't so.
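
To illustrate the prediction-horizon point with a toy example (a minimal sketch, not from the original comment: the chaotic logistic map stands in for weather-like systems, and initial-state precision stands in for the resources spent on measurement and simulation):

```python
# Sketch: in a chaotic system, each extra digit of initial-state precision buys
# only a roughly constant number of extra prediction steps, so the usable
# horizon grows logarithmically with precision rather than linearly with
# raw computing power.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)   # chaotic for r = 4

def prediction_horizon(initial_error, threshold=0.1, max_steps=500):
    """Steps until two trajectories that start initial_error apart diverge past threshold."""
    x, y = 0.2, 0.2 + initial_error
    for t in range(max_steps):
        if abs(x - y) > threshold:
            return t
        x, y = logistic(x), logistic(y)
    return max_steps

for digits in (2, 4, 8, 14):   # 14 digits is near the limit of double precision here
    err = 10.0 ** (-digits)
    print(f"{digits:2d} digits of precision -> horizon ~ {prediction_horizon(err)} steps")
```

Doubling the number of digits (i.e. squaring the precision) only roughly doubles the horizon, which is the "double in time when you square the computing power" behavior described above, up to constants.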

I think it is probably rare that a person's intuitions on something, in the absence of clear evidence, are very reliable. I was trying to think of ways to resolve these dilemmas given that, and came up with two ideas.

The first is to try and think of some test, preferably one that is easy and simple given your current capacities. If a test isn't possible, you could try comparing what you THINK would happen at the end of the test, just to make sure you weren't having a disagreement on words or holding a belief within a belief or something along those lines. That latter part could get sketchy, but if you found yourself defending your view against falsification, that'd be an indicator something is wrong.

The second would be to accept that neither of you is likely to be right in the absence of such a test. That isn't enough, however: you should be more concerned about how close each of you is to the right answer. Something like the following might be good for working that out:

  • Are there any areas where both your intuitions predict the same thing? If so, what OTHER solutions would hold that?
  • Is there another idea that could subsume both intuition spaces? It wouldn't be exact, and exactness IS a virtue, but it could help with de-anchoring and searching the probability space.
  • Are your ideas conditional? ("I believe A will happen if B, and you believe C will happen if D") If so, is there a more general idea that could explain each under its own conditions?

Again, you'd be looking for exactness here, and I think that finding a test is far preferable to simply comparing your intuitions, all things being equal.

In the case of AGI, simple tests could come from extant AIs, humans, or animals. These tests wouldn't be perfect; one could ALWAYS object that we aren't talking about AGI with these tests. But they could at least serve to direct our intuitions: how often do "properly raised" people/animals become "benevolent"? How often does "tool AI" currently engage in "non-benevolent" behavior? How successful are attempts to influence and/or encode values across species or systems? Obviously some disambiguation is necessary, but it seems like empirical tests guiding one's intuitions create the best case scenario for forming accurate beliefs short of actually having the answer in front of you.

[anonymous]

If you can't convert your strong intuitions into a proof, then one explanation might be that you have an overactive System 1 (*), as Daniel Kahneman would say. If so, the correct response is to stop paying attention to your intuitions.

(*) http://www.amazon.com/Thinking-Fast-Slow-Daniel-Kahneman/dp/0374275637

[This comment is no longer endorsed by its author]
Thomas

Some people, at least, have intuition that is better than their rational thinking.

Even some irrational guesses are later proved correct.

An unavoidable fact, as long as rational thinking is not perfect, which it may never be.

Some people, at least, have intuition that is better than their rational thinking.

I think that by "rational thinking" you mean "deliberative attempt at rational thinking". I consider it a good human level rationalist strategy to train your intuition and learn when and what extent you can rely on it.

Perhaps a better way to say it is that some people are better at intuition than at reflection or computation.