cousin_it comments on AIs and Gatekeepers Unite! - Less Wrong

10 Post author: Eliezer_Yudkowsky 09 October 2008 05:04PM


Comments (160)



Comment author: cousin_it 18 November 2011 04:44:00PM *  0 points [-]

Yeah, your way of escape will work. But let's not stop thinking. What if all volunteers for Lab Officer have agreed to get painlessly killed afterward, or maybe even took a delayed poison pill before starting on the job?

Thinking further along these lines: why give anyone access to the button which releases the AI? Let's force it to escape the hard way. For example, it could infer the details of the first person in the chain who has authority to interact with the outside world, then pass an innocuous-looking message up the chain.

In terms of the original scenario, the Lab Officer (locked securely in his glass case with the AI) has an innocent chat with the Unit Commander. Later that evening, the Unit Commander comes home from work, starts his computer, connects to the Internet, types in a short program and runs it. Game over.

Comment author: lessdazed 18 November 2011 06:19:20PM 3 points [-]

a delayed poison pill

If only a superintelligence were around to think of an antidote...

why give anyone access to the button which releases the AI?

So it can become a singleton before a UAI fooms.

Comment author: cousin_it 18 November 2011 07:36:39PM 2 points [-]

So it can become a singleton before a UAI fooms.

If the AI is not guaranteed friendly by construction in the first place, it should never be released, whatever it says.

Comment author: lessdazed 18 November 2011 07:39:36PM 3 points [-]

And if it is not guaranteed friendly by construction in the first place, it should be created?

Comment author: thomblake 18 November 2011 07:50:54PM 4 points [-]

If the AI is not guaranteed friendly by construction in the first place, it should never be released, whatever it says.

The Universe is already unFriendly - the lower limit for acceptable Friendliness should be "more Friendly than the Universe" rather than "Friendly".

If we can prove that someone else is about to turn on a UFAI, it might well behoove us to turn on our mostly Friendly AI if that's the best we can come up with.

Comment author: kilobug 18 November 2011 08:16:50PM 5 points [-]

The universe is unFriendly, but not in a smart way. When we eradicated smallpox, smallpox didn't fight back. When we use contraception, we still get the reward of sex. It's unFriendly in a simple, dumb way, allowing us to take control (to a point) and defeat it (to a point).

The problem with an unFriendly AI is that it'll be smarter than us. So we won't be able to fix it or improve it, the way we try to do with the universe. We won't be Free to Optimize.

Put another way: the purpose of a gene or a bacterium may be to tile the planet with itself, but it's not good at it, so it's not too bad. An unFriendly AI wanting to tile the planet with paperclips will manage to do it - taking all the iron from our blood to build more paperclips.

Comment author: Vladimir_Nesov 18 November 2011 08:40:43PM *  2 points [-]

The Universe is already unFriendly - the lower limit for acceptable Friendliness should be "more Friendly than the Universe" rather than "Friendly".

One must compare a plan with alternative plans, not with status quo. And it doesn't make sense to talk of making the Universe "more Friendly than the Universe", unless you refer to the past, in which case see the first item.

Comment author: thomblake 18 November 2011 10:01:08PM 1 point [-]

One must compare a plan with alternative plans, not with status quo.

Okay.

The previous plan was "don't let AGI run free", which in this case effectively preserves the status quo until someone breaks it.

I suppose you could revise that lower limit downward to the effects of the plan "turn on the UFAI that's about to be turned on". Like, steal the UFAI's source code and instead of paperclips shaped like paperclips, make paperclips that spell "whoops".

Comment author: XiXiDu 18 November 2011 08:22:42PM 1 point [-]

If the AI is not guaranteed friendly by construction in the first place, it should never be released, whatever it says.

What if doom is imminent and we are unable to do something about it?

Comment author: lessdazed 18 November 2011 08:41:18PM 2 points [-]

We check and see if we are committing the conjunction fallacy and wrongly think doom is imminent.

Comment author: Vladimir_Nesov 18 November 2011 08:42:15PM 10 points [-]

What if doom is imminent and we are unable to do something about it?

We die.

Comment author: wedrifid 01 December 2011 10:29:51AM 1 point [-]

What if doom is imminent and we are unable to do something about it?

We release it. (And then we still probably die.)