Comment author: 22 December 2013 04:55:28AM *  0 points [-]

I see my mistake, here's an updated breakdown:

Boy1Tu/Boy2Any

Boy1Tu/Boy2Monday Boy1Tu/Boy2Tuesday Boy1Tu/Boy2Wednesday Boy1Tu/Boy2Thursday Boy1Tu/Boy2Friday Boy1Tu/Boy2Saturday Boy1Tu/Boy2Sunday

Then the Boy1Any/Boy2Tu option:

Boy1Monday/Boy2Tu Boy1Tuesday/Boy2Tu Boy1Wednesday/Boy2Tu Boy1Thursday/Boy2Tu Boy1Friday/Boy2Tu Boy1Saturday/Boy2Tu Boy1Sunday/Boy2Tu

See 7 days for each set? They aren't interchangeable even though the label "boy" makes it seem like they are.

Do the Bayesian probabilities instead to verify; it comes out to 50% even.

Comment author: 22 December 2013 05:25:35AM *  0 points [-]

What's the difference between

Boy1Tu/Boy2Tuesday

and

Boy1Tuesday/Boy2Tu

?
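To make the question concrete, here's a quick enumeration sketch (my own construction, not from either comment): list all 49 equally likely (Boy1 day, Boy2 day) pairs and compare the two 7-item lists above.

```python
from itertools import product

days = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']
pairs = list(product(days, repeat=2))            # 49 equally likely pairs

boy1_tue = {p for p in pairs if p[0] == 'Tue'}   # first list above: 7 pairs
boy2_tue = {p for p in pairs if p[1] == 'Tue'}   # second list above: 7 pairs

print(boy1_tue & boy2_tue)       # {('Tue', 'Tue')} -- it appears in BOTH lists
print(len(boy1_tue | boy2_tue))  # 13 distinct pairs, not 7 + 7 = 14
```

The Tuesday/Tuesday pair shows up once in each list, but it is only one outcome in the sample space.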

Comment author: 22 November 2013 12:35:39PM 36 points [-]

I took it.

Comment author: 27 September 2013 05:11:40PM *  1 point [-]

Edit: Looks like I was assuming probability distributions for which lim (Y -> infinity) of Y*P(Y) is well defined. This turns out to require a monotonic series or some similar class (thanks shinoteki).

I think it's still the case that a probability distribution that would lead to TraderJoe's claim of P(Y)*Y tending to infinity as Y grows would be un-normalizable. You can of course have a distribution for which this limit is undefined, but that's a different story.

Comment author: 27 September 2013 05:52:07PM 5 points [-]

Counterexample: P(3^^^...3) (with n "^"s) = 1/2^n, and P(anything else) = 0. This is normalized because the sum of a geometric series with decreasing terms is finite.

You might have been thinking of the fact that if a probability distribution on the integers is monotone decreasing (i.e. if P(n) > P(m) then n < m), then P(n) must decrease faster than 1/n. However, a complexity-based distribution will not be monotone, because some big numbers are simple while most of them are complex.
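The normalization claim is easy to check with exact arithmetic (a quick sketch of the partial sums):

```python
from fractions import Fraction

# P(3^^...^3 with n "^"s) = 1/2^n for n = 1, 2, 3, ...
# The partial sums of this geometric series approach exactly 1, so the
# distribution is normalized even though its support is huge numbers.
partial = sum(Fraction(1, 2**n) for n in range(1, 61))
print(1 - partial)   # 1/2**60 -- the remaining tail mass, shrinking to 0
```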

Comment author: 26 July 2013 08:51:02AM *  0 points [-]

There must be something I'm missing here. The previous post pretty definitively proved to me that the no communication clause must be false.

Consider the latter two experiments in the last post:

A transmitted 20°, B transmitted 40°: 5.8%

A transmitted 0°, B transmitted 40°: 20.7%

Let's say I'm on Planet A and my friend is on Planet B, and we are both constantly receiving entangled pairs of photons from some satellite stationed between us. I'm filtering my photons on planet A at 20°, and my friend on planet B is filtering his at 40°. He observes a 5.8% chance that his photons are transmitted, in accordance with the experiment. I want to send him a signal faster than light, so I turn my filter to 0°. He should now observe that his photons have a 20.7% chance of being transmitted.

This takes some statistical analysis before he can determine that the signal has really been sent, but the important part is that it makes the speed of sending the message not dependent on the distance, but on the number of particles sent. Given a sufficient distance and enough particles, it should be faster than light, right?

Comment author: 26 July 2013 05:09:31PM 1 point [-]

Those are the probabilities that both halves of a pair of photons are transmitted, so you can't determine them without the information from both detectors. The distribution at each individual detector doesn't change; it's the correlation between them that changes.
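To see this numerically: assuming the joint distribution behind the quoted figures is the symmetric entangled-pair one with P(both transmitted) = P(both blocked) = sin²(Δθ)/2 and each mismatched outcome cos²(Δθ)/2 (my reconstruction; it reproduces the 5.8% and 20.7%), the marginal at either detector is 50% no matter what the other filter does:

```python
import math

def joint(delta_deg):
    # Joint outcome distribution for one photon pair as a function of the
    # angle between the two filters (reconstructed from the quoted figures).
    d = math.radians(delta_deg)
    s2, c2 = math.sin(d) ** 2, math.cos(d) ** 2
    return {('T', 'T'): s2 / 2, ('B', 'B'): s2 / 2,
            ('T', 'B'): c2 / 2, ('B', 'T'): c2 / 2}

for delta in (20, 40):   # filters 20 deg apart vs. 40 deg apart
    j = joint(delta)
    both = j[('T', 'T')]                         # needs data from BOTH detectors
    b_marginal = j[('T', 'T')] + j[('B', 'T')]   # all B alone can measure
    print(f"{delta} deg apart: both transmitted {both:.1%}, "
          f"B's marginal {b_marginal:.1%}")
# 20 deg: both 5.8%, B's marginal 50.0%; 40 deg: both 20.7%, B's marginal 50.0%
```

Turning the filter on planet A changes the correlation, but B's own transmission statistics stay at exactly 50%, so no statistical analysis on B's side alone can detect the change.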

Comment author: 20 July 2013 08:13:17PM 0 points [-]

I think I get it, but I'm still a bit confused, because both A' and A'' are moving forward at the same rate, which means since A'' started off older, A' will never really "catch up to" and become A'', because A'' continues to age. A'' is still three hours older than A', right, forever and ever?

To consider a weird example, what about a six hour old baby going back in time to witness her own birth? Once the fetus comes out, wouldn't there just be two babies, one six hours older than the other? Since they're both there and they're both experiencing time at a normal forward rate of one second per second, can't they just both grow up like siblings? If the baby that was just born waited an hour and went back to witness her own birth, she would see her six hour older version there watching her get born, and she would also see the newborn come out, and then there'd be three babies, age 0, age six hours, and age twelve hours, right?

How exactly would the "witnessing your own birth" thing play out with time travel? I think your explanation implies that there will never be multiple copies running around for any length of time, but why does A'' cease to exist once A' ages three hours? A'' has also aged three hours and become someone else in the meantime, right?

Comment author: 20 July 2013 08:29:38PM 3 points [-]

A' doesn't become A'' by catching up to him, he becomes A'' when he uses his time machine to jump back 3 hours.

There would be three babies for 6 hours, but then the youngest two would use their time machines and disappear into the past.

A'' doesn't cease to exist. A' "ceases to exist" because his time machine sends him back into the past to become A''.

Comment author: 20 July 2013 06:31:58PM 0 points [-]

I debated over whether to include this in the HPMOR thread, but it's not specific to that story, and, well, it is kind of a stupid question.

How does backwards-only time travel work? Specifically, wouldn't a time traveler end up with dozens of slightly older or younger versions of herself all living at the same time? I guess "Yes" is a perfectly acceptable answer, but I've just never really seen the consequences addressed. I mean, given how many times Harry has used the Time Turner in HPMOR (just a convenient example), I'm wondering if there are like 13 or 14 Harries just running around acting independently? Because with backwards-only time travel, how is there a stable loop?

Think about a situation with a six-hour Time Turner and three versions of the same person: A, A' (three hours older than A), and A'' (three hours older than A'). Let's say A' gets to work and realizes he forgot his briefcase. If he had a backwards and forwards time machine, he could pop into his home three hours ago and be back in literally the blink of an eye - and because he knows he could do this, he should then expect to see the briefcase already at his desk. Sure enough, he finds it, and three hours later he becomes A'', and goes back to plant the briefcase before the meeting. This mostly makes sense to me, because A'' would plant the briefcase and then return to his own time, through forwards time travel, rather than the slow path. A'' would never interact with A', and every version of A to reach the point of the meeting would be locked deterministically to act exactly as A' and A'' acted.

But I'm really confused about what happens if A has a Time Turner, that can go backwards but not forwards. Then, when A' realizes he forgot his briefcase, wouldn't there actually be two ways this could play out?

One, A' finds the briefcase at his desk, in which case three hours later, he would become A'' and then come back to plant the briefcase. But what does A'' do after he plants the briefcase? Can he do whatever he wants? His one job is over, and there's another version of him coming through from the past to live out his life - could A'' just get up and move to the Bahamas or become a secret agent or something, knowing that A' and other past versions would take care of his work and family obligations? Isn't he a full-blown new person that isn't locked into any kind of loop?

Two, A' doesn't find the briefcase at his desk, in which case he goes back three hours to remind A to take his briefcase - does that violate any time looping laws? A' never had someone burst in to remind him to take a briefcase, but does that mean he can't burst in on A now? A' can't jump back to the future and experience firsthand the consequences of having the briefcase. If he goes back to talk to A, isn't this just the equivalent of some other person who looks like you telling you not to forget your briefcase for work? Then A can get the briefcase and go to work, while A' can just...leave, right? And live whatever life he wants?

Am I missing something really obvious? I must be, because Harry never stops to consider the consequences of dozens of independently operating versions of himself out there in the world, even when there are literally three other versions of him passed out next to his chair. What happens to those three other Harries, and in general what happens with backwards-only time travel? Is there no need for forwards time travel to "close the circuit" and create a loop, instead of a line?

Comment author: 20 July 2013 07:21:28PM 1 point [-]

You don't need a time machine to go forward in time - you can just wait. A'' can't leave everything to A', because A' will disappear within three hours when he goes back to become A''. If A' knows A wasn't reminded, then A' can't remind A. The other three Harrys use their Time Turners to go backwards and close the loop. You do need both forward and backward time travel to create a closed loop, but the forward time travel can just be waiting; it doesn't require a machine.

Comment author: 22 May 2011 05:24:08PM *  -1 points [-]

Newcomb's Problem is silly. It's only controversial because it's dressed up in wooey vagueness. In the end it's just a simple probability question and I'm surprised it's even taken seriously here. To see why, keep your eyes on the bolded text:

Omega has been correct on each of 100 observed occasions so far - everyone [on each of 100 observed occasions] who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars.

What can we anticipate from the bolded part? The only actionable belief we have at this point is that 100 out of 100 times, one-boxing made the one-boxer rich. The details that the boxes were placed by Omega and that Omega is a "superintelligence" add nothing. They merely confuse the matter by slipping in the vague connotation that Omega could be omniscient or something.

In fact, this Omega character is superfluous; the belief that the boxes were placed by Omega doesn't pay rent any differently than the belief that the boxes just appeared at random in 100 locations so far. If we are to anticipate anything different knowing it was Omega's doing, on what grounds? It could only be because we were distracted by vague notions about what Omega might be able to do or predict.

The following seemingly critical detail is just more misdirection and adds nothing either:

And the twist is that Omega has put a million dollars in box B iff Omega has predicted that you will take only box B.

I anticipate nothing differently whether this part is included or not, because nothing concrete is implied about Omega's predictive powers - only "superintelligence from another galaxy," which certainly sounds awe-inspiring but doesn't tell me anything really useful (how hard is predicting my actions, and how super is "super"?).

The only detail that pays any rent is the one above in bold. Eliezer is right that one-boxing wins, but all you need to figure that out is Bayes.

EDIT: Spelling
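To make "all you need is Bayes" concrete, here's one way to cash out the bolded evidence, using Laplace's rule of succession as an (assumed) prior over the box-B hit rate - nothing about Omega's nature is needed:

```python
from fractions import Fraction

# 100 out of 100 observed one-boxers got the million; 100 out of 100
# observed two-boxers found box B empty. Laplace's rule of succession
# gives P(success on trial 101) = (successes + 1) / (trials + 2).
p_million_if_one_box = Fraction(100 + 1, 100 + 2)   # 101/102
p_million_if_two_box = Fraction(0 + 1, 100 + 2)     # 1/102

ev_one_box = p_million_if_one_box * 1_000_000
ev_two_box = 1_000 + p_million_if_two_box * 1_000_000

print(float(ev_one_box))   # ~990196: one-boxing wins by about two orders of magnitude
print(float(ev_two_box))   # ~10804
```

Any reasonable prior gives the same qualitative answer after 100-for-100 evidence; the specific rule-of-succession numbers are just an illustration.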

Comment author: 02 July 2013 01:36:18PM 1 point [-]

Do you also choose not to chew gum in Eliezer's version of Solomon's Problem?

Comment author: 13 June 2013 02:58:24PM 3 points [-]

The nice part about modal agents is that there are simple tools for finding the fixed points without having to search through proofs; in fact, Mihaly and Marcello wrote up a computer program to deduce the outcome of the source-code-swap Prisoner's Dilemma between any two (reasonably simple) modal agents. These tools also made it much easier to prove general theorems about such agents.

Would it be possible to make this program publicly available? I'm curious about how certain modal agents play against each other, but struggling to calculate it manually.
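As a sketch of the underlying idea (this is my own toy version, not the actual program mentioned above): for fully modalized agents you can read off the fixed point by evaluating the formulas in a finite linear GL Kripke frame, where world k "sees" worlds 0..k-1 and the truth values stabilize for large k. The formula representation below is an assumption of mine.

```python
def truth(f, k, worlds):
    """Truth of modal formula f at world k of a linear GL frame.
    Formulas: True/False, ('var', name), ('not', f), ('and', f, g),
    ('or', f, g), ('box', f). Variables must occur only under a box
    ("fully modalized"), which guarantees a unique fixed point."""
    if f is True or f is False:
        return f
    tag = f[0]
    if tag == 'var':
        return worlds[k][f[1]]          # already computed for earlier worlds
    if tag == 'not':
        return not truth(f[1], k, worlds)
    if tag == 'and':
        return truth(f[1], k, worlds) and truth(f[2], k, worlds)
    if tag == 'or':
        return truth(f[1], k, worlds) or truth(f[2], k, worlds)
    if tag == 'box':
        # "provable" <=> true at every earlier world (vacuously so at world 0)
        return all(truth(f[1], j, worlds) for j in range(k))

def outcome(defs, depth=10):
    """defs: var -> defining modal formula. Returns stabilized truth values."""
    worlds = []
    for k in range(depth):
        worlds.append({})
        for v, f in defs.items():
            worlds[k][v] = truth(f, k, worlds)
    return worlds[-1]

# FairBot vs FairBot: each cooperates iff it can prove the other cooperates.
print(outcome({'p': ('box', ('var', 'q')), 'q': ('box', ('var', 'p'))}))
# -> both True: mutual cooperation, as Lob's theorem predicts

# FairBot vs DefectBot
print(outcome({'p': ('box', ('var', 'q')), 'q': False}))
# -> p False: FairBot defects against DefectBot
```

This handles the simple bots; PrudentBot-style agents additionally quantify over stronger proof systems, which takes a bit more bookkeeping but fits the same frame-evaluation scheme.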

Comment author: 11 June 2013 01:04:03AM 3 points [-]

I have two proposed alternatives to PrudentBot.

1. If you can prove a contradiction, defect. Otherwise, if you can prove that your choice will be the same as the opponent's, cooperate. Otherwise, defect.

2. If you can prove that, if you cooperate, the other agent will cooperate, and you can't prove that if you defect, the other agent will cooperate, then cooperate. Otherwise, defect.

Both of these are unexploitable, cooperate with themselves, and defect against CooperateBot, if my calculations are correct. The first one is a simple way of "sanitizing" NaiveBot.

The second one is exactly cousin_it's proposal here.

Comment author: 11 June 2013 10:33:33AM *  1 point [-]

If you can prove a contradiction, defect.

Should this be "If you can prove that you will cooperate, defect"? As it is, I don't see how this prevents cooperation with CooperateBot, unless the agent uses an inconsistent system for proofs.

Comment author: 10 June 2013 04:24:33AM 2 points [-]

I am bothered by the fact that the reasoning that leads to PrudentBot seems to contradict the reasoning of decision theory. Specifically, the most basic and obvious fact of behavior in these competitive games is: if you can prove that the opponent cooperates if and only if you do, then you should cooperate. But this reasoning gives the wrong answer vs. CooperateBot, for Löbian reasons. Is there an explanation for this gap?

Comment author: 11 June 2013 10:24:31AM 2 points [-]

It's true that if you can prove that your opponent will cooperate counterfactual-if you cooperate and defect counterfactual-if you defect, then you should cooperate. But we don't yet have a good formalization of logical counterfactuals, and the reasoning that cooperates with CooperateBot just uses material-if instead of counterfactual-if.
