
Answer · 11y · 70

Impressive; I didn't think it could be automated (and even if it could, I didn't expect it to get through so many digits before hitting a computational threshold for large exponentials). My only regret is that I have but one upvote to give.

Answer · 11y · 80

In the interest of challenging my mental abilities, I used as few resources as possible (and I suck at writing code). It took fewer than 3^^^3 steps, thankfully.

Answer · 11y · 60

Partly just to prove it's a real number with real properties, but mostly because I wanted a challenge and wasn't getting it from my math classes (I'm currently in college, majoring in math). As much as I'd like to say it was to outdo the AI at math (since calculators can't do anything with the number 3^^^3, not even take it mod 2), I had to use a calculator for all but the last 3 digits.

Answer · 11y · 70

I started with some iterated powers of 3 and tried to find patterns. For instance, 3 to an odd (natural number) power is always 3 mod 4, and 3 to the power of (a natural number that's 3 mod 4) always has a 7 in the ones place.
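
Patterns like these are easy to machine-check. Here's a quick Python sketch (my own illustration, not the original poster's work): both observations follow from the cycle lengths of the powers of 3, which repeat with period 2 mod 4 and period 4 mod 10.

```python
# Powers of 3 cycle mod 4 with period 2 (3, 1, 3, 1, ...) and mod 10 with
# period 4 (3, 9, 7, 1, ...): odd exponents give 3 mod 4, and exponents
# that are 3 mod 4 give a ones digit of 7.
assert all(pow(3, k, 4) == 3 for k in range(1, 100, 2))   # odd exponents
assert all(pow(3, k, 10) == 7 for k in range(3, 100, 4))  # exponents that are 3 mod 4
print("both patterns hold")
```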

Answer · 11y · 290

I solved the last 8 digits of 3^^^3 (they're ...64,195,387). Take that, ultrafinitists!
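
For anyone curious how a computation like this can be automated: below is a minimal Python sketch (my own reconstruction, not necessarily the method used above; the helper names `phi` and `tower_mod` are mine). The key fact is the generalized Euler theorem: once an exponent exceeds log2(m), it only matters modulo φ(m), so the residue mod 10^8 of a tower of 3s stabilizes after a few dozen levels, and 3^^^3 (a tower of height 3^^3 = 7,625,597,484,987) shares its last digits with any much shorter tall tower.

```python
def phi(n):
    """Euler's totient, via trial-division factoring (fine for n <= 10**8)."""
    result, p = n, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

def tower_mod(base, height, m):
    """Residue mod m of base^^height, a tower of `height` copies of base."""
    if m == 1:
        return 0
    if height == 1:
        return base % m
    t = phi(m)
    e = tower_mod(base, height - 1, t)
    # Generalized Euler theorem: once the true exponent exceeds log2(m),
    # only its value mod phi(m) matters; adding t keeps the reduced
    # exponent large enough for the theorem to apply.
    return pow(base, e + t, m)

# The phi-chain descending from 10**8 hits 1 within about 25 steps, so any
# tower taller than that has the same residue mod 10**8 as 3^^^3 itself.
print(tower_mod(3, 64, 10 ** 8))  # -> 64195387
```

The same recursion also settles the mod 2 aside from earlier for free: `tower_mod(3, 64, 2)` returns 1, since every power of 3 is odd.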

Answer · 11y · 10

Hmm. "Three to the 'three to the pentation of three plus two'-ation of three". Alternatively, "big" would also work.

Answer · 11y · 40

"Three to the pentation of three".

Answer · 11y · 30

Although making precommitments to enforce threats can be self-destructive, it seems the only reason they were for the baron is that he accounted only for the basic set of outcomes {you do what I want, you do what I don't want} rather than any third outcome, and third outcomes kept happening.

Answer · 11y · 80

Newcomb's problem does happen (and has happened) in real life. Also, Omega is trying to maximize his own stake rather than minimize yours; he made a bet with Alpha for much higher stakes than the $1,000,000. Not to mention that Newcomb's problem bears a vital resemblance to the prisoner's dilemma, which occurs in real life.

Answer · 11y · 40

OK, so we do agree that it can be rational to one-box when predicted by a human (if they predict based on factors you control, such as your facial cues). This may have been a misunderstanding between us, then, because I thought you were defending the computationalist view that you should only one-box if you might be the alternate "you" used in the prediction.
