
Future of Moral Machines - New York Times [link]

0 Post author: Dr_Manhattan 26 December 2011 02:44PM

Comments (10)

Comment author: Thomas 26 December 2011 03:14:32PM 2 points [-]

Many Singularitarians assume a lot, not the least of which is that intelligence is fundamentally a computational process.

What else would it be? Apart from the divine origin of thoughts, nothing has been submitted as an alternative so far.

Comment author: TheOtherDave 26 December 2011 04:00:20PM 9 points [-]

I distrust "what else would it be"-style arguments; they are ultimately appeals to inadequate imagination.

Certainly of the things we understand reasonably well, computation is the only candidate that could explain intelligence; if intelligence weren't fundamentally a computational process it would have to fundamentally be something we don't yet understand.

Just to be clear, I'm not challenging the conclusion; given the sorts of things that intelligence does, and the sorts of things that computations do, that intelligence is a form of computation seems pretty likely to me. What I'm pushing back on is the impulse to play burden-of-proof tennis with questions like this, rather than accepting the burden of proof and trying to meet it.

Comment author: billswift 27 December 2011 04:59:32AM 0 points [-]

I can imagine a great many other things it could be, but in the real world people have to go by the evidential support. Your post is just a variation of the "argument from ignorance", as in "We don't know in detail what intelligence is, so it could be something else", even though you admit "Certainly of the things we understand reasonably well, computation is the only candidate that could explain intelligence".

Comment author: Manfred 26 December 2011 05:24:45PM *  3 points [-]

Building an AI does not require it to be a computer - it could be a bunch of rubber bands if that's what worked. The assumption is more like: intelligence is not inherently mysterious, and humans are not at some special, perfect point of intelligence.

Comment author: MileyCyrus 27 December 2011 05:24:50AM 7 points [-]

Building an AI does not require it to be a computer - it could be a bunch of rubber bands if that's what worked

You can build a computer out of pretty much anything, including rubber bands.
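The point about substrate independence can be made concrete: any medium that implements a single universal gate such as NAND can, in principle, compute anything a digital computer can. A minimal illustrative sketch (Python; the `nand` function here is a stand-in for whatever physical mechanism - rubber bands included - realizes the gate):

```python
def nand(a: bool, b: bool) -> bool:
    # The only primitive. Imagine it implemented by rubber-band tension,
    # dominoes, or silicon -- the logic is substrate-independent.
    return not (a and b)

# Every other Boolean operation composes from NAND alone.
def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

def half_adder(a, b):
    # One-bit addition, built entirely from the single primitive:
    # returns (sum, carry).
    return xor(a, b), and_(a, b)

print(half_adder(True, True))  # (False, True): 1 + 1 = binary 10
```

From half-adders one can build full adders, registers, and eventually a whole CPU - which is why the choice of physical substrate is irrelevant to whether something computes.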

Comment author: orthonormal 26 December 2011 03:41:12PM 1 point [-]

Has anyone read the book that the article was a self-promotion for? (I have mediocre expectations, given the article; but mediocre would be an improvement in high-status treatment of the issue.)

Comment author: Dr_Manhattan 27 December 2011 01:26:44PM *  1 point [-]

"try before you buy" link

Comment author: Sush 27 December 2011 05:18:57PM 0 points [-]

On the subject of morality in robots: I would assume that when (if?) we devise a working cognitive model of an AI that would be indistinguishable from a human in every observable circumstance, the chances of it developing or learning sociopathic behaviour would be no different from the chances of a human developing psychopathic tendencies (which, although I can offer no scientific proof, I imagine is rare).

I know this is an abstraction that doesn't do justice to the work people are doing towards this model, but I think the complexity of AI is one of the things that leads certain people to the knee-jerk reaction that all post-singularity AIs will want to exterminate the human race (fearing what you don't understand, etc.).

Comment author: Emile 26 December 2011 05:09:58PM *  0 points [-]

The Department of Defense report "Autonomous Military Robotics: Risk, Ethics, and Design" linked in the article looks interesting (it doesn't seem to have been linked here before, though it's from 2008). I'll check it out.

Edit: I skimmed through the bits that looked interesting; there's an off-hand reference to "friendliness theory", but the difficult bits of getting a machine to have a correct morality seem glossed over (justified by the claim that these are supposed to be special-purpose robots with a definite mission and orders to obey, not AGIs - though some of what they describe sounds "AI-hard" to me). The risks section mentions robots building other robots and running amok, and there are some references to Kurzweil.

Comment author: Dr_Manhattan 26 December 2011 03:59:37PM *  0 points [-]

On the plus side for the article:

  • Discussion of AI ethics in a major newspaper (we'll get out of the crank file any day now)

  • Some good bridging of the inferential distance via discussion of physical robot interactions (self-driving cars, etc.)