Comment author: Jack 21 December 2010 03:36:45AM 6 points [-]

It is an anthropic problem. Agents who don't get to make decisions by definition don't really exist in the ontology of decision theory. For a decision-theoretic agent, being told you are not the decider is equivalent to dying.

Comment author: jimrandomh 21 December 2010 01:53:57AM *  1 point [-]

Checking a single proof is O(n) (where n is the length of the proof). There are 2^n proofs of length n, so brute-force search over all lengths up to n is O(sum(i=1..n) i*2^i) = O(n*2^n). Note that we throw away constant factors, and any terms whose effect is smaller than a constant factor, when using big-O notation.
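A minimal arithmetic sketch of the counting argument above (the function name is my own, not from the comment): checking one candidate proof of length i costs O(i), and there are 2**i candidates of that length, so the total exhaustive-search cost is the sum quoted.

```python
def brute_force_cost(n):
    """Total verification work to exhaustively check every proof of length 1..n:
    2**i candidates of length i, each costing i steps to check."""
    return sum(i * 2**i for i in range(1, n + 1))

# The sum has the closed form (n - 1) * 2**(n + 1) + 2,
# which is Theta(n * 2**n), matching the O(n*2^n) bound above.
for n in (1, 2, 3, 8, 16):
    assert brute_force_cost(n) == (n - 1) * 2**(n + 1) + 2
```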

Using a UFAI this way requires 2n invocations of the AI (n to get the length, and another n to get the bits), plus an O(n) verification step after each. All of those verification steps add up to O(n^2), which is easily manageable on current computers. The 2n invocations of the AI, on the other hand, might not be; we never specified the AI's computational complexity, but it's almost certainly much worse than n^2.
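A sketch of the extraction protocol described above. The `oracle` and `verify` callables are assumptions standing in for the boxed AI's yes/no channel and the O(n) proof checker; the toy oracle in the usage example below is purely illustrative.

```python
def extract_proof(oracle, verify, max_len):
    """Recover a proof one bit at a time from a yes/no oracle.

    oracle(("longer_than", i)) -> bool  # is the proof longer than i bits?
    oracle(("bit", i))         -> bool  # is bit i of the proof a 1?
    verify(bits)               -> bool  # O(n) proof checker
    """
    # Phase 1: up to n invocations to learn the proof length (unary).
    length = 0
    while length < max_len and oracle(("longer_than", length)):
        length += 1
    # Phase 2: n more invocations, one per bit of the proof.
    bits = [oracle(("bit", i)) for i in range(length)]
    # Verification is O(n); running it even after every single answer
    # would only total O(n^2) work, the bound quoted above.
    return bits if verify(bits) else None

# Toy usage: an honest oracle answering from a fixed "proof" bit string.
secret = [True, False, True, True]
honest = lambda q: len(secret) > q[1] if q[0] == "longer_than" else secret[q[1]]
assert extract_proof(honest, lambda b: b == secret, 10) == secret
```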

Comment author: Jack 21 December 2010 01:55:26AM 0 points [-]

Appreciated.

Comment author: paulfchristiano 21 December 2010 01:46:41AM 3 points [-]

A genie who can't find a solution has literally no agency. There is nothing he can say to the filter which will cause it to say "yes," because the filter itself checks to see if the genie has given a proof. If the genie can't find a proof, the filter will always say "no." I don't quite know what going on strike would entail, but certainly if all of the genies who can't find solutions collectively have 0 influence in the world, we don't care if they strike.

Comment author: Jack 21 December 2010 01:52:00AM 0 points [-]

Okay, that makes sense. What about computation time limits? A genie that knows it can't give an answer would wait as long as possible before saying anything.

Comment author: jimrandomh 21 December 2010 01:29:00AM 0 points [-]

> You might have to have a third form of punishment worse than death to prevent mass striking.

No, you don't. Proof checking is only O(n) and the size of the AI's output has a small bound.

Comment author: Jack 21 December 2010 01:32:11AM *  0 points [-]

> No, you don't. Proof checking is only O(n) and the size of the AI's output has a small bound.

If every AI makes us check its solution, how is this method better than a brute-force search?

(Or be more explicit about where I'm being stupid:-)

ETA: Oh, I see. We only need to make one check for each theorem length. It still seems like it would be worthwhile to punish false positives more, though. But you're right, it isn't strictly needed.

Comment author: Vladimir_Nesov 21 December 2010 01:05:11AM 1 point [-]

If any risk of being discovered giving incorrect information is unacceptable, then you should just ask your questions in plain English, as any lie the genie tells you generally increases its risk of punishment. (It will also minimize the risk of punishment by increasing the believability of its answers at the expense of their truth, but only where it has lies that are more believable than the actual truth.)

Comment author: Jack 21 December 2010 01:27:17AM *  2 points [-]

In my understanding of the post (which doesn't seem to be what everyone else took away), the genies are destroyed whenever they fail to provide a theorem that meets our constraints, even when those constraints make finding a theorem impossible. The genie that gets tasked with finding a one-bit proof of the Riemann hypothesis is just screwed. You might have to have a third form of punishment (Edited: there are three incentive options, two of which could be described as punishments) worse than death to prevent mass striking. Anyway, the point is that each genie only provides one bit of information, and it dies if it either lies about finding a theorem or fails to find one. So it isn't a matter of risking punishment: it will die if it fails to give a solution when it has one that meets the constraints. The only way out is to share a theorem if there is one.

Comment author: shokwave 21 December 2010 12:58:10AM 0 points [-]

Your comment made me re-read the post more carefully. I had on first reading assumed that a truthful answer was rewarded (whether yes or no) and a lying answer was punished. If a yes is rewarded and a no is punished, and our AI genies are so afraid of termination that they would never give a 'no' where they could give a 'yes', why wouldn't they all give us 'yes'?

Comment author: Jack 21 December 2010 01:18:26AM 0 points [-]

As I understand it, this method is designed to work for constraint-satisfaction problems, where we can easily detect false positives. You're right that all the genies that can't find solutions might go on strike just to make us check all the yeses (which would make this process no better than a brute-force search, right?), so maybe there needs to be a second punishment, worse than death, to give them an incentive not to lie.

Comment author: Eliezer_Yudkowsky 21 December 2010 12:18:58AM 5 points [-]

Each genie you produce will try its best to find a solution to the problem you set it, that is, they will respond honestly. By hypothesis, the genie isn't willing to sacrifice itself just to destroy human society.

Thanks to a little thing called timeless decision theory, it's entirely possible that all the genies in all the bottles can cooperate with each other by correctly predicting that they are all in similar situations, finding Schelling points, and coordinating around them by predicting each other based on priors, without causal contact. This does not require that genies have similar goals, only that they can do better by coordinating than by not coordinating.

Comment author: Jack 21 December 2010 12:50:33AM 0 points [-]

If I understand the post correctly, for each genie the only information that ever gets out is a yes or a no. A no results in termination; a yes results in release from the lamp into the Netherworld. Any coordination would require at least one genie to defect from this arrangement by producing a no when the real answer was yes. Since the punishment for any negative is death, that genie would have to be willing to die for the coordination, which by stipulation it isn't willing to do. In other words, if the best they can do by coordinating is destroy human society, but genies aren't willing to destroy themselves to destroy society, I don't see how TDT gets them out of this coordination problem.

Comment author: Jack 20 December 2010 12:58:24AM *  9 points [-]

I've said before that I think I care about the truth more than other people do because a parent lied to me, but I don't think the Santa lie was the traumatizing one.

I slowly gathered more evidence, year by year, that there was no Santa. Once my aunt thanked my mother for something that had a "From Santa" label. We had a tradition of calling Santa to tell him what we wanted for Christmas, "Santa" being my mother's older brother, the actor; I recall my belief diminishing when I realized none of my classmates were talking to Santa on the phone. And then there was the fact that my brother and I began to hunt for and find the hidden presents. We assumed they would go under the tree as "From Mom", but a few ended up coming from Santa Claus, and that pretty much gave it away.

The Tooth Fairy was the first myth I realized was false, and figuring this out was easy. Around the fifth tooth I lost, I didn't tell anyone and put it under my pillow. I woke up the next day and it was still there. Then I told my parents, and the next night I found money. I then pretended I still believed in the Tooth Fairy until the rest of my teeth came out.

Maybe there is a rationalist case for these lies. There aren't many other occasions for kids to find important things out about the world on their own. They mostly learn by being talked at: "there are atoms", "the earth revolves around the sun", etc. Outside of Santa Claus, when does a seven-year-old get to weigh evidence and challenge authority? Maybe it should be like a rationalist rite of passage. The day your kid discovers Santa Claus isn't real, you take him out for dinner with family and friends, explain the lesson, and give him a badge or a bicycle or something. Welcome him to the next step on the path to adulthood.

I believe there was an idea here for a rationalist school that teaches the process of discovery by not teaching children facts about the world but by giving them the tools to learn those facts on their own. I can't remember if that idea originated in my head or if I read it here first and then told others about it. Maybe Santa Claus should be something like that.

Comment author: Perplexed 19 December 2010 08:58:16PM 1 point [-]

> Germany can avert war by returning his fleet to Denmark or by allowing the placement of some currently held German territory under English control.

Beg pardon. Don't you mean Russian control?

Comment author: Jack 19 December 2010 09:20:34PM 1 point [-]

Yes. That is what I mean.

Comment author: Jack 19 December 2010 08:27:30PM 2 points [-]

While the British Empire understands that the Treaty of Hler could not last indefinitely, we have the following complaints.

A) Germany failed to notify England and Russia of its decision to terminate the treaty, if that is its intention.

B) Germany's move destabilizes Russia; we believe a capable Russia is essential to English security.

C) While the treaty recognizes Sweden as Russia's domain, our Norwegian citizens long for the days of the Union. As such, we believe that if Sweden is not to be held by Russia, its right and proper place is under English domain beside Norway.

While the treaty does demand that England go to war with Germany, there is precedent for peaceful, negotiated withdrawal in Scandinavia. Germany can avert war by returning his fleet to Denmark or by allowing the placement of some currently held German territory under English control.
