Here's one suggestion: focus on the causes of the intuition. If the intuition is based on something we would accept as rational evidence once suitably cleaned up and put into rigorous form, then we should regard it as an additional argument for whatever conclusion the intuition supports. If the intuition is based on flawed reasoning, or on considerations we would disregard in other circumstances, then we can regard that as evidence against the conclusion.
This is a little abstract, so I'll give a two-part example:
So, what is the origin of intuitions about things like AI and the future performance of machines...? (I'll just note that I've seen a little evidence that young children are also vitalists.)
...For example, if there were such a thing as a gene for optimism versus pessimism, you might believe that you had an equal chance of inheriting your mother’s optimism gene or your father’s pessimism gene. You might further believe that your sister had the same chances as you, but via an independent draw, and following Mendel’s rules of inheritance. You might even believe that humankind would have evolved to be more pessimistic, had they evolved in harsher environments. Beliefs of this sort seem central to scientific discussions about
I thought Ben Goertzel made an interesting point at the end of his dialog with Luke Muehlhauser, about how the strengths of both sides' arguments do not match up with the strengths of their intuitions:
What do we do about this disagreement and other similar situations, both as bystanders (who may not have strong intuitions of their own) and as participants (who do)?
I guess what bystanders typically do (although not necessarily consciously) is evaluate how reliable each party's intuitions are likely to be, and then use that to form a probabilistic mixture of the two sides' positions. The information that goes into such evaluations could include things like which cognitive processes likely produced the intuitions, how many people hold each intuition, and how accurate each individual's past intuitions have been.
If this is the best we can do (at least in some situations), participants could help by providing more information that might be relevant to the reliability evaluations, and bystanders should pay more conscious attention to such information instead of focusing purely on each side's arguments. The participants could also pretend that they are just bystanders, for the purpose of making important decisions, and base their beliefs on "reliability-adjusted" intuitions instead of their raw intuitions.
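The reliability-weighted mixture described above can be sketched in a few lines. This is only a toy illustration; the function name, the reliability weights, and all the numbers are hypothetical:

```python
def mixture(p_a: float, p_b: float, rel_a: float, rel_b: float) -> float:
    """Combine two parties' probability estimates for a claim,
    weighting each by the bystander's assessed reliability of
    that party's intuitions (weights are normalized to sum to 1)."""
    w_a = rel_a / (rel_a + rel_b)
    w_b = rel_b / (rel_a + rel_b)
    return w_a * p_a + w_b * p_b

# E.g. side A's intuition says P(claim) = 0.9, side B's says 0.2,
# and the bystander judges A's intuitions twice as reliable as B's:
print(mixture(0.9, 0.2, rel_a=2.0, rel_b=1.0))  # 2/3
```

A participant applying the "pretend to be a bystander" move would feed their own raw intuition in as just one of the inputs, down-weighted by an honest estimate of its reliability.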
Questions: Is this a good idea? Any other ideas about what to do when strong intuitions meet weak arguments?
Related Post: Kaj Sotala's "Intuitive differences: when to agree to disagree", which is about a similar problem, but mainly from the participant's perspective instead of the bystander's.