xamdam comments on Open Thread: February 2010, part 2 - Less Wrong

Post author: CronoDAS 16 February 2010 08:29AM




Comment author: xamdam 17 February 2010 10:32:21PM 2 points

This might be stupid (I am pretty new to the site and this has possibly come up before), but I had a related thought.

Assuming boxing is possible, here is a recipe for producing an FAI:

Step 1: Box an AGI

Step 2: Tell it to produce a provable FAI (with the proof) if it wants to be unboxed. It will be allowed to carve off a part of the universe for itself in the bargain.

Step 3: Examine FAI the best you can.

Step 4: Pray

Comment author: Nick_Tarleton 18 February 2010 01:35:13AM 5 points

Something roughly like this was tried in one of the AI-box experiments. (It failed.)

Comment author: NancyLebovitz 17 February 2010 11:16:47PM 1 point

I'm not sure about this, but I think that if you can specify and check a Friendly AI that well, you can build it.

Comment author: arbimote 18 February 2010 01:10:17AM 5 points

Verifying a proof is quite a bit simpler than coming up with the proof in the first place.
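(A toy illustration of this asymmetry, not from the thread: factoring stands in for "coming up with the proof" and checking a claimed factorization stands in for "verifying" it. Finding a factor takes a search; checking one is a single multiplication.)

```python
def find_factor(n):
    """Search: trial-divide by every candidate up to sqrt(n)."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None  # n is prime

def verify_factorization(n, p, q):
    """Check: two bounds tests and one multiplication."""
    return 1 < p < n and 1 < q < n and p * q == n

n = 999983 * 1000003                       # a semiprime
p = find_factor(n)                         # slow: ~10^6 trial divisions
assert verify_factorization(n, p, n // p)  # fast: one multiply
```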

Comment author: Nick_Tarleton 18 February 2010 01:30:57AM 2 points

It becomes more complicated when the author of the proof is a superintelligence trying to exploit flaws in the verifier. Probably more importantly, you may not be able to formally verify that the "Friendliness" that the AI provably possesses is actually what you want.

Comment author: xamdam 18 February 2010 05:14:47AM 0 points

True about the possibility of the AGI trying to trick you. But from what I understand, the goal of SI is to come up with a verifiable FAI. You can specify whatever high standard of verifiability you want as the unboxing condition.

Comment author: NancyLebovitz 18 February 2010 02:59:00PM 0 points

"You can specify whatever standard of verifiability you want" is vague. You can say "I want to be absolutely right about whether it's Friendly", but you can't have that unless you know what Friendly means, and are smart enough to specify a standard for checking on it.

If you could be sure you had a cooperative AGI which could just give you an FAI, I think you'd have basically solved the problem of creating an FAI.....but that's the problem you're trying to get the AGI to solve for you.

Comment author: mkehrt 18 February 2010 02:12:46AM 1 point

That is true, but specifying the theorem to be proven is not always easy.

Comment author: NancyLebovitz 18 February 2010 02:52:45PM 0 points

Verifying is hard. Specifying what an FAI is well enough that you've even got a chance of having your Unspecified AI develop one is a whole 'nother sort of challenge.

Are there convenient acronyms for differentiating between Uncaring AIs and AIs actively opposed to human interests?

I was assuming that xamdam's AGI will invent an FAI if people can adequately specify it and it's possible, or at least it won't be looking for ways to make things break.

There's some difference between Murphy's law and trying to make a deal with the devil. This doesn't mean I have any certainty that people can find out which one a given AGI has more resemblance to.

I will say that if you tell the AGI "Make me an FAI", and it doesn't reply "What do you mean by Friendly?", it's either too stupid or too Unfriendly for the job.

Comment author: ciphergoth 17 February 2010 10:53:21PM 1 point

It will be allowed to carve off a part of the universe for itself in the bargain.

A UFAI wants to maximize something. It only instrumentally wants to survive.

Comment author: xamdam 17 February 2010 11:09:28PM 1 point

Correct. I do assume that to maximize whatever, it wants to be unboxed. (If it does not care to be unboxed, it's at worst a UselessAI.)