I thought Ben Goertzel made an interesting point at the end of his dialog with Luke Muehlhauser, about how the strengths of both sides' arguments do not match up with the strengths of their intuitions:
One thing I'm repeatedly struck by in discussions on these matters with you and other SIAI folks, is the way the strings of reason are pulled by the puppet-master of intuition. With so many of these topics on which we disagree -- for example: the Scary Idea, the importance of optimization for intelligence, the existence of strongly convergent goals for intelligences -- you and the other core SIAI folks share a certain set of intuitions, which seem quite strongly held. Then you formulate rational arguments in favor of these intuitions -- but the conclusions that result from these rational arguments are very weak. For instance, the Scary Idea intuition corresponds to a rational argument that "superhuman AGI might plausibly kill everyone." The intuition about strongly convergent goals for intelligences, corresponds to a rational argument about goals that are convergent for a "wide range" of intelligences. Etc.
On my side, I have a strong intuition that OpenCog can be made into a human-level general intelligence, and that if this intelligence is raised properly it will turn out benevolent and help us launch a positive Singularity. However, I can't fully rationally substantiate this intuition either -- all I can really fully rationally argue for is something weaker like "It seems plausible that a fully implemented OpenCog system might display human-level or greater intelligence on feasible computational resources, and might turn out benevolent if raised properly." In my case just like yours, reason is far weaker than intuition.
What do we do about this disagreement and other similar situations, both as bystanders (who may not have strong intuitions of their own) and as participants (who do)?
I guess what bystanders typically do (although not necessarily consciously) is evaluate how reliable each party's intuitions are likely to be, and then use that to form a probabilistic mixture of the two sides' positions. The information that goes into such evaluations could include things like what cognitive processes likely produced the intuitions, how many people hold each intuition, and how accurate each individual's past intuitions were.
If this is the best we can do (at least in some situations), participants could help by providing more information that might be relevant to the reliability evaluations, and bystanders should pay more conscious attention to such information instead of focusing purely on each side's arguments. The participants could also pretend that they are just bystanders, for the purpose of making important decisions, and base their beliefs on "reliability-adjusted" intuitions instead of their raw intuitions.
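To make the idea of a "probabilistic mixture" a bit more concrete, here is a toy sketch of how a bystander might combine two sides' positions. The numbers, the party names, and the simple linear pooling rule are all my own assumptions for illustration, not anything proposed in the dialog; there are of course other aggregation rules one could use.

```python
# Toy sketch: a bystander forms a reliability-weighted mixture of two parties'
# probability estimates for some claim. All inputs are hypothetical.

def reliability_weighted_mixture(estimates, reliabilities):
    """Average each party's probability estimate, weighted by how reliable
    the bystander judges that party's intuitions to be."""
    total = sum(reliabilities.values())
    return sum(estimates[party] * reliabilities[party] / total
               for party in estimates)

# Each side's intuitive probability that the claim is true,
# and the bystander's assessment of each side's reliability
# (based on track record, how the intuitions were formed,
# how many people share them, etc.).
estimates = {"side_A": 0.9, "side_B": 0.2}
reliabilities = {"side_A": 0.4, "side_B": 0.6}

print(reliability_weighted_mixture(estimates, reliabilities))  # 0.48
```

A participant applying the "pretend you are a bystander" move would plug their own raw intuition in as just one of the inputs, discounted by however reliable they think their intuitions have been in the past.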
Questions: Is this a good idea? Any other ideas about what to do when strong intuitions meet weak arguments?
Related Post: Kaj Sotala's Intuitive differences: when to agree to disagree, which is about a similar problem, but mainly from the participant's perspective instead of the bystander's.
I count only 1 out of 11 SIAI researchers as not having a degree. (Paul Christiano's bio hasn't been updated yet, but he told me he just graduated from MIT.) Click these links if you want to check for yourself.
I no longer have much hope of changing your views, but I do want to encourage you to make some positive contributions (like your belief propagation graph idea) despite holding views that I consider to be wrong. (I can't resist pointing out some of the more blatant errors, though, like the above.)