Many Singularitarians assume a lot, not the least of which is that intelligence is fundamentally a computational process.
What else would it be? Apart from the divine origin of thoughts, nothing has been put forward as an alternative so far.
I distrust "what else would it be"-style arguments; they are ultimately appeals to inadequate imagination.
Certainly of the things we understand reasonably well, computation is the only candidate that could explain intelligence; if intelligence weren't fundamentally a computational process, it would have to fundamentally be something we don't yet understand.
Just to be clear, I'm not challenging the conclusion; given the sorts of things that intelligence does, and the sorts of things that computations do, that intelligence is a form of computation seems pretty likely to me. What I'm pushing back on is the impulse to play burden-of-proof tennis with questions like this, rather than accepting the burden of proof and trying to meet it.
I can imagine a great many other things it could be, but in the real world people have to go by the evidential support. Your post is just a variation of the "argument from ignorance", as in "We don't know in detail what intelligence is, so it could be something else", even though you admit "Certainly of the things we understand reasonably well, computation is the only candidate that could explain intelligence".
Building an AI does not require it to be a computer - it could be a bunch of rubber bands if that's what worked. The assumption is more that intelligence is not inherently mysterious, and that humans are not at some special, perfect point of intelligence.
Building an AI does not require it to be a computer - it could be a bunch of rubber bands if that's what worked
You can build a computer out of pretty much anything, including rubber bands.
On the subject of morality in robots, I would assume that when (if?) we devise a working cognitive model of an A.I. that would be indistinguishable from a human in every observable circumstance, the chances of it developing/learning sociopathic behaviour would be no different from those of a human developing psychopathic tendencies (which, although I can provide no scientific proof, I imagine is in the minority).
I know this is an abstraction that doesn't do justice to the work people are doing towards this model, but I think the complexities of AI are one of the things that lead certain people to the knee-jerk reaction that all post-singularity AIs will want to exterminate the human race (fearing something because you don't understand it, and so on).
The linked Department of Defense report, "Autonomous Military Robotics: Risk, Ethics, and Design", looks interesting (it doesn't seem to have been linked here before, though it's from 2008). I'll check it out.
Edit: I skimmed through the bits that looked interesting; there's an off-hand reference to "friendliness theory", but the difficult parts of getting a machine to have a correct morality seem glossed over (justified by the claim that these are supposed to be special-purpose robots with a definite mission and orders to obey, not AGIs - though some of what they describe sounds "AI hard" to me). Among the risks, there's some mention of robots building other robots and running amok, and some references to Kurzweil.
On the plus side for the article:
- Discussion of AI ethics in a major newspaper (we'll get out of the crank file any day now)
- Some good bridging of the inferential distance via discussion of physical robot interactions (self-driving cars, etc.)
http://opinionator.blogs.nytimes.com/2011/12/25/the-future-of-moral-machines/