timtyler comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

Post author: ciphergoth 30 October 2010 09:31AM


Comment author: timtyler 31 October 2010 08:57:43AM  -2 points

The argument is that testing would be dangerous.

Life is dangerous: the issue is surely whether testing is more dangerous than not testing.

It seems to me that a likely outcome of pursuing a proof-first strategy is that, while you are searching for the proof, some other team builds a machine intelligence that works, and suddenly whether your machine is "friendly" or not becomes totally irrelevant.

I think bashing testing makes no sense. People are interested in proving what they can about machines, in the hope of making them more reliable, but that is not the same as not doing testing.

The idea that we can make an intelligent machine, yet are incapable of constructing a test harness capable of restraining it, seems like a fallacy to me.

Poke into these beliefs, and people will soon refer you to the AI-box experiment, which purports to show that restrained intelligent machines can trick human gatekeepers.

...but so what? You don't imprison a super-intelligent agent and then hand the key to a single human who is allowed to chat with the machine!