(Note that I linked to the mecho-gecko as an example of a technology that can run on a surface that we, even using that technology, would not be able to run on. The actual gecko is irrelevant; I just couldn't find a clip that didn't include the comparison.)
No, I don't. I am aware that you also think you have not been wrong at any point during this either, which has caused me to re-evaluate my own estimation of my correctness.
Having re-evaluated, I still believe I have been right all along.
To expand further on the analogy: the human brain is not a universal thinker, any more than the human leg is a universal runner. The brain thinks, and the leg runs, but both do so in ways that are limited in some aspects, underperform in some domains, and suffer from quirks and idiosyncrasies. To say that the kind of thinking a human brain does is the only kind of thinking, and that AIs won't do any differently, is isomorphic to saying that the kind of running a human leg does is the same kind of running a gecko's leg does.
Do you have an argument that our brains do not have universality with respect to intelligence?
Do you understand what the theory I'm advocating is and says? Do you know why it says it?
http://vimeo.com/22099396
What do people think of this, from a Bayesian perspective?
It is a talk given to the Oxford Transhumanists; their previous speaker was Eliezer Yudkowsky. The audio version and past talks are here: http://groupspaces.com/oxfordtranshumanists/pages/past-talks