I thought Ben Goertzel made an interesting point at the end of his dialog with Luke Muehlhauser, about how the strengths of both sides' arguments do not match up with the strengths of their intuitions:
One thing I'm repeatedly struck by in discussions on these matters with you and other SIAI folks, is the way the strings of reason are pulled by the puppet-master of intuition. With so many of these topics on which we disagree -- for example: the Scary Idea, the importance of optimization for intelligence, the existence of strongly convergent goals for intelligences -- you and the other core SIAI folks share a certain set of intuitions, which seem quite strongly held. Then you formulate rational arguments in favor of these intuitions -- but the conclusions that result from these rational arguments are very weak. For instance, the Scary Idea intuition corresponds to a rational argument that "superhuman AGI might plausibly kill everyone." The intuition about strongly convergent goals for intelligences, corresponds to a rational argument about goals that are convergent for a "wide range" of intelligences. Etc.
On my side, I have a strong intuition that OpenCog can be made into a human-level general intelligence, and that if this intelligence is raised properly it will turn out benevolent and help us launch a positive Singularity. However, I can't fully rationally substantiate this intuition either -- all I can really fully rationally argue for is something weaker like "It seems plausible that a fully implemented OpenCog system might display human-level or greater intelligence on feasible computational resources, and might turn out benevolent if raised properly." In my case just like yours, reason is far weaker than intuition.
What do we do about this disagreement and other similar situations, both as bystanders (who may not have strong intuitions of their own) and as participants (who do)?
I guess what bystanders typically do (although not necessarily consciously) is evaluate how reliable each party's intuitions are likely to be, and then use that to form a probabilistic mixture of the two sides' positions. The information that goes into such evaluations could include things like what cognitive processes likely came up with the intuitions, how many people hold each intuition, and how accurate each individual's past intuitions were.
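As a minimal sketch of what such a reliability-weighted mixture might look like: the specific probabilities and reliability weights below are hypothetical numbers chosen for illustration, not anything from the discussion itself.

```python
# Hypothetical sketch: a bystander combines two parties' probability
# estimates, weighted by how reliable each party's intuitions seem.

def mixture(positions, reliabilities):
    """Return a reliability-weighted average of probability estimates."""
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]  # normalize weights to sum to 1
    return sum(w * p for w, p in zip(weights, positions))

# Example: party A assigns 0.9 to some claim, party B assigns 0.2,
# and the bystander judges A's intuitions twice as reliable as B's.
p = mixture([0.9, 0.2], [2.0, 1.0])
print(round(p, 3))
```

With weights 2/3 and 1/3, the bystander ends up closer to A's estimate than B's, but not all the way there, which matches the informal idea of discounting each side's raw intuition by its estimated reliability.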
If this is the best we can do (at least in some situations), participants could help by providing more information that might be relevant to the reliability evaluations, and bystanders should pay more conscious attention to such information instead of focusing purely on each side's arguments. The participants could also pretend that they are just bystanders, for the purpose of making important decisions, and base their beliefs on "reliability-adjusted" intuitions instead of their raw intuitions.
Questions: Is this a good idea? Any other ideas about what to do when strong intuitions meet weak arguments?
Related Post: Kaj Sotala's Intuitive differences: when to agree to disagree, which is about a similar problem, but mainly from the participant's perspective instead of the bystander's.
Good question. The sequences focus on thinking correctly more than arguing successfully, and I think most people who stick around here develop these intuitions through a process of learning to think more like Eliezer does.
The first possible cause I see for why strong intuitions are not convertible into convincing arguments is long inferential distances--the volume of words is simply too great to fit into a reasonably-sized exchange. But the Hanson-Yudkowsky Foom Debate was unreasonably long, and as I understand it, both parties left with their strong intuitions fairly intact.
The post-mortem from the Foom debate seemed to center on emotional attachment to ideas, and their intertwining with identity. This looks like the most useful level for bystander-based examination. I'd be interested to know how well, say, priming yourself for disinterested detachment and re-examining both arguments works for a Foom-sized debate as opposed to one of ordinary length.
I'd break this down into:

- Outside view of each party, if other intuitions are available for evaluation.
- Outside view of each intuition, although the debaters probably already did this for each other.
- A probabilistic graph for each party, involving the intuitions and the arguments. Through what paths did the intuitions generate the arguments? If there was any causality going the other direction, how did that work?
What other methods for evaluating inexpressibly strong intuitions are there?