Comment author: Kindly 12 August 2013 11:47:43AM *  16 points [-]

...62535796399618993967905496638003222348723967018485186439059104575627262464195387.

Boo-yah.

Edit: obviously this was not done by hand. I used Mathematica. Code:

(* Residue mod m of a sufficiently tall power tower of base, via iterated totients *)
TowerMod[base_, m_] := If[m == 1, 0, PowerMod[base, TowerMod[base, EulerPhi[m]], m]];

TowerMod[3, 10^80]
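For readers without Mathematica, here is a rough Python translation of the same recursion (the names `tower_mod` and `euler_phi` are mine, not from the original). Reducing the exponent mod EulerPhi[m] is justified by Euler's theorem, which applies cleanly here: every modulus in the totient chain descending from a power of 10 has the form 2^a·5^b, so it is always coprime to 3.

```python
def euler_phi(n):
    """Euler's totient via trial division (fast here: the moduli are 2^a * 5^b)."""
    result, p = n, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

def tower_mod(base, m):
    """Residue mod m of a power tower of `base` tall enough to exhaust the
    totient chain m, phi(m), phi(phi(m)), ..., 1 -- as 3^^^3 certainly is."""
    if m == 1:
        return 0
    return pow(base, tower_mod(base, euler_phi(m)), m)

print(tower_mod(3, 10**8))  # prints 64195387, the thread's last 8 digits
```

The recursion bottoms out quickly because the totient chain from 10^8 reaches 1 in a few dozen steps, while the tower in 3^^^3 is astronomically taller, so the residue has long since stabilized.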

Edit: this was all done to make up for my distress at only having an Erdos number of 3.

Comment author: answer 12 August 2013 06:31:49PM *  5 points [-]

Impressive, I didn't think it could be automated (and even if it could, that it could get through so many digits before hitting a computational limit on the large exponentials). My only regret is that I have but 1 upvote to give.

Comment author: wedrifid 12 August 2013 05:00:53AM 1 point [-]

I had to use a calculator for all but the last 3 digits.

You mean you did it manually? You didn't write code to do the grunt work?

Comment author: answer 12 August 2013 05:19:41AM 6 points [-]

In the interest of challenging my mental abilities, I used as few resources as possible (and I suck at writing code). It took fewer than 3^^^3 steps, thankfully.

Comment author: wedrifid 12 August 2013 04:39:29AM 1 point [-]

I solved the last 8 digits of 3^^^3 (they're ...64,195,387). Take that ultrafinitists!

That's awesome. Why did you do this? (ie. Did you get to publish it someplace...)

Comment author: answer 12 August 2013 04:48:33AM 4 points [-]

Partially just to prove it is a real number with real properties, but mostly because I wanted a challenge and wasn't getting one from my current math classes (I'm currently in college, majoring in math). As much as I'd like to say it was to outdo the AI at math (since calculators can't do anything with the number 3^^^3, not even take it mod 2), I had to use a calculator for all but the last 3 digits.

Comment author: Joshua_Blaine 12 August 2013 02:46:44AM 4 points [-]

I.. just.. WHAT? The last digits are the easiest, of course, BUT STILL. What was your methodology? (because I can't be bothered to think of how to do it myself)

Comment author: answer 12 August 2013 02:59:09AM 4 points [-]

I started with some iterated powers of 3 and looked for patterns. For instance, 3 to an odd (natural number) power is always 3 mod 4, and 3 to the power of a natural number that's 3 mod 4 always has a 7 in the ones place.
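Both patterns are easy to spot-check with a short script (a quick sketch, not part of the original comment):

```python
# 3^e mod 4: for odd e, 3^e is 3 mod 4, since 3 is -1 mod 4
assert all(pow(3, e, 4) == 3 for e in range(1, 201, 2))

# 3^e mod 10: the last digit cycles 3, 9, 7, 1; when e is 3 mod 4 it is 7
assert all(pow(3, e, 10) == 7 for e in range(3, 201, 4))

print("patterns hold")
```

Chaining the two: the exponent inside 3^^^3 is itself an odd power of 3, hence 3 mod 4, so 3^^^3 must end in 7, consistent with the reported ...387.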

Comment author: answer 12 August 2013 02:41:38AM *  18 points [-]

I solved the last 8 digits of 3^^^3 (they're ...64,195,387). Take that ultrafinitists!

Comment author: Fhyve 02 August 2013 05:08:15AM 1 point [-]

How about 3^...(3^^^3 up arrows)...^3?

Comment author: answer 04 August 2013 08:46:03PM 1 point [-]

Hmm. "Three to the 'three to the pentation of three plus two'-ation of three". Alternatively, "big" would also work.

Comment author: Panic_Lobster 31 July 2013 10:26:33PM 10 points [-]

How do you pronounce 3^^^3?

Comment author: answer 01 August 2013 01:07:45AM 3 points [-]

"Three to the pentation of three".

Comment author: answer 15 July 2013 06:09:32PM 2 points [-]

Although making precommitments to enforce threats can be self-destructive, it seems the only reason they were self-destructive for the baron is that he considered only the basic set of outcomes {you do what I want, you do what I don't want} and didn't account for third outcomes, and third outcomes kept happening.

Comment author: tim 19 June 2013 10:55:55PM 1 point [-]

I don't see how those are Newcomb situations at all. When I try to come up with an example of a Newcomb-like sports situation (e.g. football, since plays are preselected and revealed more or less simultaneously) I get something like the following:

  1. you have two plays A and B (one-box, two-box)
  2. the opposing coach has two plays X and Y
  3. if the opposing coach predicts you will select A they will select X and if they predict you will select B they will select Y.
  4. A vs X results in a moderate gain for you. A vs Y results in no gain for you. B vs Y results in a small gain for you. B vs X results in a large gain for you.
  5. You both know all this.

The problem lies in assumption 3. Why would the opposing coach ever select play X? Symmetrically, if Omega were actually competing against you and trying to minimize your winnings, why would it ever put a million dollars in the second box?

Newcomb's works, in part, due to Omega's willingness to select a dominated strategy in order to mess with you. What real-life situation involves an opponent like that?
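To make the dominance claim concrete, here is a small sketch with made-up numeric payoffs (the values 3, 0, 5, 1 are illustrative stand-ins for "moderate", "none", "large", "small" from point 4, and the zero-sum assumption is mine):

```python
# Your gain for each (your play, coach's play) pair, per points 1-4 above.
your_gain = {("A", "X"): 3, ("A", "Y"): 0, ("B", "X"): 5, ("B", "Y"): 1}

# Assume the game is zero-sum: the coach's payoff is the negation of yours.
coach = {pair: -g for pair, g in your_gain.items()}

# Y strictly dominates X for the coach: it does better whatever you play.
y_dominates_x = all(coach[(p, "Y")] > coach[(p, "X")] for p in ("A", "B"))
print(y_dominates_x)  # True: the coach never has a reason to call X
```

This is exactly the asymmetry with Newcomb's problem: Omega, unlike a competing coach, is willing to play the dominated row.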

Comment author: answer 19 June 2013 11:19:34PM 6 points [-]

Newcomb's problem does happen (and has happened) in real life. Also, Omega is trying to maximize his stake rather than minimize yours; he made a bet with Alpha with much higher stakes than the $1,000,000. Not to mention Newcomb's problem bears a vital resemblance to the prisoners' dilemma, which occurs in real life.

Comment author: someonewrongonthenet 19 June 2013 10:52:36PM *  3 points [-]

So you would never one-box unless the simulator did some sort of scan/simulation upon your brain?

I'd one-box when Omega had sufficient access to my source-code. It doesn't have to be through scanning - Omega might just be a great face-reading psychologist.

But it's better to one-box and be derivable as the kind of person to (probably) one-box than to two-box and be derivable as the kind of person to (probably) two-box.

We're in agreement. As we discussed, this only applies insofar as you can control the factors that lead you to be classified as a one-boxer or a two-boxer. You can alter neither demographic information nor past behavior. But when (and only when) one-boxing causes you to be derived as a one-boxer, you should obviously one box.

Your final decision never affects the actual arrangement of the boxes, but its causes do.

Well, that's true for this universe. I just assume we're playing in any given universe, some of which include Omegas who can tell the future (which implies bidirectional causality) - since Psychohistorian3 started out with that sort of thought when I first commented.

Comment author: answer 19 June 2013 10:59:44PM 3 points [-]

Ok, so we do agree that it can be rational to one-box when predicted by a human (if they predict based upon factors you control such as your facial cues). This may have been a misunderstanding between us then, because I thought you were defending the computationalist view that you should only one-box if you might be an alternate you used in the prediction.
