jimrandomh comments on A Brief Overview of Machine Ethics - Less Wrong

6 Post author: lukeprog 05 March 2011 06:09AM


Comment author: jimrandomh 06 March 2011 10:55:40PM 3 points

I said that public access to an AI under development would be bad, because if it wasn't safe to run - that is, if running it might cause it to foom and destroy the world - then no one would be able to make that judgment and keep others from running it. You responded with an analogy to EMACS, which no one believes or has ever believed to be dangerous, and which has no potential to do disastrous things its operators did not intend. So that analogy is really a non sequitur.

"Dangerous" in this context does not mean "powerful", it means "volatile", as in "reacts explosively with Pentiums".

Comment author: timtyler 07 March 2011 12:19:52AM *  -2 points

Both types of software are powerful tools. Powerful tools are dangerous in the wrong hands, because they amplify the power of their users. That is the gist of the analogy.

I expect EMACS has been used for all kinds of evil purposes, from writing viruses, trojans, and worms to tax evasion and fraud.

I note that Anders Sandberg recently included:

"Otherwise the terrorists will win!"

...in his list of signs that you might be looking at a weak moral argument.

That seems rather dubious as a general motto, but in this case, I am inclined to agree. In the case of intelligent machines, the positives of openness substantially outweigh the negatives, IMO.

Budding machine intelligence builders badly need to signal that they are not going to screw everyone over. How else are other people to know that they are not planning to screw everyone over?

Such signals should be expensive and difficult to fake. In this case, about the only credible signal is maximum transparency. I am not going to screw you over, and look, here is the proof: what's mine is yours.

Comment author: jimrandomh 07 March 2011 12:33:35AM *  0 points

If you don't understand something I've written, please ask for clarification. Don't guess what I said and respond to that instead; that's obnoxious. Your comparison of my argument to

"Otherwise the terrorists will win!"

leads me to believe that you didn't understand what I said at all. How is destroying the world by accident like terrorism?

Comment author: timtyler 07 March 2011 12:50:04AM -2 points

Er, characterising someone who disagrees with you on a technical point as "obnoxious" is not terribly good manners in itself! I never compared destroying the world by accident with terrorism - you appear to be projecting. However, I am not especially interested in seeing the conversation dragged into the gutter in this way.

If you did have a good argument favouring closed-source software and reduced transparency, you have had a reasonable chance to present it. However, if you can't even be civil, perhaps you should consider waiting until you can.

Comment author: jimrandomh 07 March 2011 01:09:23AM -1 points

I gave an argument that open-sourcing AI would increase the risk of the world being destroyed by accident. You said

I note that Anders Sandberg recently included: "Otherwise the terrorists will win!" ...in his list of signs that you might be looking at a weak moral argument.

I presented the mismatch between this statement and my argument as evidence that you had misunderstood what I was saying. In your reply, you said:

I never compared destroying the world by accident with terrorism - you appear to be projecting.

You are misunderstanding me again. I think I've already said all that needs to be said, but I can't clear up confusion if you keep attacking straw men rather than asking questions.