All of answer's Comments + Replies

answer70

Impressive, I didn't think it could be automatized (and even if it could, that it could go so many digits before hitting a computational threshold for large exponentials). My only regret is that I have but 1 upvote to give.

answer80

In the interest of challenging my mental abilities, I used as few resources as possible (and I suck at writing code). It took fewer than 3^^^3 steps, thankfully.

and I suck at writing code

In that case, you might find writing a program to solve it for you an even better challenge of your mental abilities.

answer60

Partially just to prove it is a real number with real properties, but mostly because I wanted a challenge and wasn't getting it from my current math classes (I'm currently in college, majoring in math). As much as I'd like to say it was to outdo the AI at math (since calculators can't do anything with the number 3^^^3, not even take it mod 2), I had to use a calculator for all but the last 3 digits.

1wedrifid
You mean you did it manually? You didn't write code to do the grunt work?
answer70

I started with some iterated powers of 3 and tried to find patterns. For instance, 3 to an odd (natural number) power is always 3 mod 4, and 3 to the power of (a natural number that's 3 mod 4) always has a 7 in the one's place.
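A quick Mathematica check of those two patterns over small exponents (a sanity-check sketch only):

Table[Mod[3^n, 4], {n, 1, 21, 2}]    (* odd exponents: every entry is 3 *)
Table[Mod[3^n, 10], {n, 3, 43, 4}]   (* exponents that are 3 mod 4: every entry is 7 *)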

answer290

I solved the last 8 digits of 3^^^3 (they're ...64,195,387). Take that, ultrafinitists!

3A1987dM
http://en.wikipedia.org/wiki/Graham%27s_number#Rightmost_decimal_digits
Kindly240

...62535796399618993967905496638003222348723967018485186439059104575627262464195387.

Boo-yah.

Edit: obviously this was not done by hand. I used Mathematica. Code:

(* Every modulus in the chain 10^80, EulerPhi[10^80], EulerPhi[EulerPhi[10^80]], ... has only 2 and 5 as prime factors, so it is coprime to 3 and Euler's theorem lets each exponent be reduced mod EulerPhi[m]. *)
TowerMod[base_, m_] := If[m == 1, 0, PowerMod[base, TowerMod[base, EulerPhi[m]], m]];

(* Last 80 decimal digits of 3^^^3; the tower is far taller than the EulerPhi chain is long, so these digits have stabilized. *)
TowerMod[3, 10^80]
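As a sanity check against the parent comment, the same definition with a smaller modulus should reproduce the eight digits quoted there (assuming the 80-digit result above is right):

TowerMod[3, 10^8]   (* expected: 64195387, per the parent comment *)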

Edit: this was all done to make up for my distress at only having an Erdos number of 3.

1wedrifid
That's awesome. Why did you do this? (i.e. did you get to publish it someplace...)
7Joshua_Blaine
I.. just.. WHAT? The last digits are the easiest, of course, BUT STILL. What was your methodology? (because I can't be bothered to think of how to do it myself)
answer10

Hmm. "Three to the 'three to the pentation of three plus two'-ation of three". Alternatively, "big" would also work.

answer40

"Three to the pentation of three".

1Fhyve
How about 3^...(3^^^3 up arrows)...^3?
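For reference, the up-arrow recursion these descriptions rely on can be written out directly. A minimal Mathematica sketch (the name arrow[a, n, b], for a followed by n up-arrows and b, is just for illustration; don't try to evaluate arrow[3, 3, 3] itself):

arrow[a_, 1, b_] := a^b;                                   (* one arrow: exponentiation *)
arrow[a_, n_, 1] := a;                                     (* any number of arrows applied to 1 gives the base *)
arrow[a_, n_, b_] := arrow[a, n - 1, arrow[a, n, b - 1]];  (* unwind one arrow level *)

arrow[3, 2, 3]   (* 3^^3 = 3^(3^3) = 7625597484987 *)
(* 3^^^3 = arrow[3, 3, 3] = 3^^(3^^3): a power tower of 7,625,597,484,987 threes - far too large to evaluate *)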
answer30

Although making precommitments to enforce threats can be self-destructive, it seems the only reason they were for the baron is that he accounted only for the basic set of outcomes {you do what I want, you do what I don't want}, and third outcomes kept happening.

2Stuart_Armstrong
You'd almost think the author was conspiring against the Baron!
answer80

Newcomb's problem does happen (and has happened) in real life. Also, Omega is trying to maximize his stake rather than minimize yours; he made a bet with Alpha with much higher stakes than the $1,000,000. Not to mention Newcomb's problem bears some vital resemblance to the prisoners' dilemma, which occurs in real life.

0Decius
Oddly enough, that problem is also solved better by a time-variable agent: Joe proposes sincerely, being an agent who would never back out of a commitment of this level. If his marriage turns out poorly enough, Joe, while remaining the same agent that formerly wouldn't back out, backs out. And the prisoners' dilemma as it is written cannot occur in real life, because it requires no further interaction between the agents.
1Eliezer Yudkowsky
And Parfit's Hitchhiker scenarios, and blackmail attempts, not to mention voting.
1tim
Sure, I didn't mean to imply that there were literally zero situations that could be described as Newcomb-like (though I think that particular example is a questionable fit). I just think they are extremely rare (particularly in a competitive context such as poker or sports). edit: That example is more like a prisoner's dilemma where Kate gets to decide her move after seeing Joe's. Agree that Newcomb's definitely has similarities with the relatively common PD.
answer40

Ok, so we do agree that it can be rational to one-box when predicted by a human (if they predict based upon factors you control, such as your facial cues). This may have been a misunderstanding between us then, because I thought you were defending the computationalist view that you should only one-box if you might be an alternate 'you' used in the prediction.

3someonewrongonthenet
yes, we do agree on that.
answer10

So you would never one-box unless the simulator did some sort of scan/simulation upon your brain? But it's better to one-box and be derivable as the kind of person to (probably) one-box than to two-box and be derivable as the kind of person to (probably) two-box.

The only reason to one-box is when your actions (which include both the final decision and the thoughts leading up to it) affect the actual arrangement of the boxes.

Your final decision never affects the actual arrangement of the boxes, but its causes do.

4someonewrongonthenet
I'd one-box when Omega had sufficient access to my source code. It doesn't have to be through scanning - Omega might just be a great face-reading psychologist. We're in agreement.

As we discussed, this only applies insofar as you can control the factors that lead you to be classified as a one-boxer or a two-boxer. You can alter neither demographic information nor past behavior. But when (and only when) one-boxing causes you to be derived as a one-boxer, you should obviously one-box.

Well, that's true for this universe. I just assume we're playing in any given universe, some of which include Omegas who can tell the future (which implies bidirectional causality) - since Psychohistorian3 started out with that sort of thought when I first commented.
answer10

True, the 75% would merely be past history (and I am in fact a poker player). Indeed, if the factors used were composed entirely or mostly of factors beyond my control (and I knew this), I would two-box. However, two-boxing is not necessarily optimal just because you don't know the mechanics of the predictor's methods. In the limited predictor problem, the predictor doesn't use simulations or scanners of any sort but instead uses logic, and yet one-boxers still win.

3someonewrongonthenet
Agreed. To add on to this: it's worth pointing out that Newcomb's problem always takes the form of Simpson's paradox. The one-boxers beat the two-boxers as a whole, but among agents predicted to one-box, the two-boxers win, and among agents predicted to two-box, the two-boxers win. The only reason to one-box is when your actions (which include both the final decision and the thoughts leading up to it) affect Omega's prediction. The general rule is: "Try to make Omega think you're one-boxing, but two-box whenever possible." It's just that in Newcomb's problem proper, fulfilling the first imperative requires actually one-boxing.
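To make the Simpson's-paradox structure concrete, here is the standard payoff grid (assuming the usual $1,000,000 and $1,000 amounts):

payoff = {{1000000, 0}, {1001000, 1000}};  (* rows: you one-box / two-box; columns: predicted to one-box / two-box *)
(* Within either column, two-boxing pays exactly $1000 more; but an accurate predictor puts nearly all one-boxers in the first column and nearly all two-boxers in the second, so one-boxers do better overall. *)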
answer30

Yeah, the argument would hold just as much with an inaccurate simulation as with an accurate one. The point I was trying to make wasn't so much that the simulation isn't going to be accurate enough, but that a simulation argument shouldn't be a prerequisite to one-boxing. If the experiment were performed with human predictors (let's say a psychologist who predicts correctly 75% of the time), one-boxing would still be rational despite knowing you're not a simulation. I think LW relies on computationalism as a substitute for actually being reflectively consistent in problems such as these.
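As a quick check of the 75% case, assuming the standard $1,000,000 / $1,000 payoffs and that the 75% rate applies to you:

evOneBox[p_] := p*1000000;                  (* box B is full with probability p *)
evTwoBox[p_] := (1 - p)*1000000 + 1000;     (* box B is full only when the prediction missed *)
{evOneBox[0.75], evTwoBox[0.75]}            (* {750000., 251000.} - one-boxing wins by a wide margin *)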

2someonewrongonthenet
The trouble with real-world examples is that we start introducing knowledge into the problem that we wouldn't ideally have. The psychologist's 75% success rate doesn't necessarily apply to you - in the real world you can make a different estimate than the one that is given. If you're an actor or a poker player, you'll have a much different estimate of how things are going to work out. Psychologists are just messier versions of brain scanners - the fundamental premise is that they are trying to access your source code.

And what's more - suppose the predictions weren't made by accessing your source code? The direction of causality does matter. If Omega can predict the future, the causal lines flow backwards from your choice to Omega's past move. If Omega is scanning your brain, the causal lines go from your brain-state to Omega's decision. If there are no causal lines between your brain/actions and Omega's choice, you always two-box.

Real-world example: what if I substituted your psychologist for a sociologist, who predicted you with above-chance accuracy using only your demographic factors? In this scenario, you ought to two-box - if you disagree, let me know and I can explain myself. In the real world, you don't know to what extent your psychologist is using sociology (or some other factor outside your control). People can't always articulate why, but their intuition (correctly) begins to make them deviate from the given success-rate estimate as more of these real-world variables get introduced.
answer50

Right, any predictor with at least a 50.05% accuracy is worth one-boxing upon (well, maybe a higher percentage for those with concave utility in money). A predictor with sufficiently high accuracy that it's worth one-boxing isn't unrealistic or counterintuitive at all in itself, but it seems (to me at least) that many people reach the right answer for the wrong reason: the "you don't know whether you're real or a simulation" argument. Realistically, while backwards causality isn't feasible, neither is precise mind duplication. The decision to one-box can be rationally reached without those reasons: you choose to be the kind of person to (predictably) one-box, and as a consequence of that, you actually do one-box.
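The 50.05% figure drops out of the expected-value comparison (assuming the standard $1,000,000 / $1,000 payoffs and risk-neutral utility in money):

Solve[p*1000000 == (1 - p)*1000000 + 1000, p]   (* {{p -> 1001/2000}}, i.e. 0.5005 *)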

1Decius
Assuming that you have no information other than the base rate, and that it's equally likely to be wrong either way.
2someonewrongonthenet
Oh, that's fair. I was thinking of "you don't know whether you're real or a simulation" as an intuitive way to prove the case for all "conscious" simulations. It doesn't have to be perfect - you could just as easily be an inaccurate simulation, with no way to know that you are a simulation and no way to know that you are inaccurate with respect to an original. I was trying to get people to generalize downwards from the extreme intuitive example: even with decreasing accuracy, as the simulation becomes so rough as to lose "consciousness" and "personhood", the argument keeps holding.
answer20

Not that I disagree with the one-boxing conclusion, but this formulation requires physically reducible free will (which has recently been brought back into discussion). It would also require knowing the position and momentum of a lot of particles to arbitrary precision, which is provably impossible.

6someonewrongonthenet
We don't need a perfect simulation for the purposes of this problem in the abstract - we just need a situation such that the problem-solver assigns better-than-chance predicting power to the Predictor, and a sufficiently high utility differential between winning and losing. The "perfect whole brain simulation" is an extreme case which keeps things intuitively clear. I'd argue that any form of simulation which performs better than chance follows the same logic.

The only way to escape the conclusion via simulation is if you know something that Omega doesn't - for example, you might have some secret external factor modify your "source code" and alter your decision after Omega has finished examining you. Beating Omega essentially means that you need to keep your brain-state in such a form that Omega can't deduce that you'll two-box.

As Psychohistorian3 pointed out, the power that you've assigned to Omega predicting accurately is built into the problem. Your estimate of the probability that you will succeed in deception via the aforementioned method or any other is fixed by the problem. In the real world, you are free to assign whatever probability you want to your ability to deceive Omega's predictive mechanisms, which is why this problem is counterintuitive.
answer30

Relative to UFAI, FAI work seems like it would be mathier and more insight-based, where UFAI can more easily cobble together lots of pieces. This means that UFAI parallelizes better than FAI. UFAI also probably benefits from brute-force computing power more than FAI. Both of these imply, so far as I can tell, that slower economic growth is good news for FAI; it lengthens the deadline to UFAI and gives us more time to get the job done.

Forgive me if this is a stupid question, but wouldn't UFAI and FAI have identical or near-identical computational abilities/methods/limits and differ only by goals/values?

0TheOtherDave
Yes. The OP is assuming that the process of reliably defining the goals/values which characterize FAI is precisely what requires a "mathier and more insight-based" process which parallelizes less well and benefits less from brute-force computing power.
9knb
An FAI would have to be created by someone who had a clear understanding of how the whole system worked - in order for them to know it would be able to maintain the original values its creator wanted it to have. Because of that, an FAI would probably have to have fairly clean, simple code. You could also imagine a super-complex kludge of different systems (think of the human brain) that works when backed by massive processing power but is not well understood. It would be hard to predict what that system would do without turning it on. The overwhelming probability is that it would be a UFAI, since FAIs are such a small fraction of the set of possible mind designs. It's not that a UFAI needs more processing power, but that if tons of processing power is needed, you're probably not running something which is provably Friendly.