If they had degrees, I would have to assume that they had, at some point in the past, passed an exam (which is not good evidence of competence either, but it is at least something).
I count only 1 out of 11 SIAI researchers as not having a degree. (Paul Christiano's bio hasn't been updated yet, but he told me he just graduated from MIT.) Click these links if you want to check for yourself.
If you want to change my view, you had better actually link some posts that are evidence of them knowing something, instead of calling what I say a 'rant'.
I no longer have much hope of changing your views, but rather want to encourage you to make some positive contributions (like your belief propagation graph idea) despite having views that I consider to be wrong. (I can't resist pointing out some of the more blatant errors though, like the above.)
I thought Ben Goertzel made an interesting point at the end of his dialog with Luke Muehlhauser, about how the strengths of both sides' arguments do not match up with the strengths of their intuitions:
What do we do about this disagreement and other similar situations, both as bystanders (who may not have strong intuitions of their own) and as participants (who do)?
I guess what bystanders typically do (although not necessarily consciously) is evaluate how reliable each party's intuitions are likely to be, and then use that to form a probabilistic mixture of the two sides' positions. The information that goes into such evaluations could include things like which cognitive processes likely produced the intuitions, how many people hold each intuition, and how accurate each individual's past intuitions have been.
If this is the best we can do (at least in some situations), participants could help by providing more information that might be relevant to the reliability evaluations, and bystanders should pay more conscious attention to such information instead of focusing purely on each side's arguments. The participants could also pretend that they are just bystanders, for the purpose of making important decisions, and base their beliefs on "reliability-adjusted" intuitions instead of their raw intuitions.
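To make the "probabilistic mixture" idea concrete, here is a minimal sketch (my own illustration, with made-up reliability weights and probability estimates, not anything proposed in the posts linked above) of a bystander weighting each side's stated probability by how reliable they judge that side's intuitions to be:

```python
# A minimal sketch of a "reliability-adjusted" mixture of two parties' positions.
# The reliability weights and probability estimates are made-up numbers.

def mixture(p_a: float, p_b: float, reliability_a: float, reliability_b: float) -> float:
    """Combine two probability estimates, weighting each by how reliable
    we judge that party's intuitions to be."""
    total = reliability_a + reliability_b
    w_a = reliability_a / total
    w_b = reliability_b / total
    return w_a * p_a + w_b * p_b

# Example: party A says P(claim) = 0.9, party B says P(claim) = 0.2,
# and a bystander judges A's intuitions twice as reliable as B's.
print(mixture(0.9, 0.2, reliability_a=2.0, reliability_b=1.0))  # ~0.67
```

This is only the simplest possible version of the idea; a more careful bystander would presumably also account for how correlated the two sides' intuitions are and how much of the disagreement the explicit arguments already resolve.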
Questions: Is this a good idea? Any other ideas about what to do when strong intuitions meet weak arguments?
Related Post: Kaj Sotala's Intuitive differences: when to agree to disagree, which is about a similar problem, but mainly from the participant's perspective instead of the bystander's.