In the new version of Newcomb's problem, you have to choose between a box containing the map and a box containing warm fuzzies.
Self-delusion or accurate beliefs? We all can empathize with this choice.
I'd say it's highlighting the human tendency to ignore and run away from bad news. Instead of facing the prophecy, they simply destroyed the ship that delivered it to them and told themselves they were safe.
Actually, the prophecy was about the ship: the spaceship crashed into Aragena, their planet, and then curious inhabitants looked inside (and found nothing dangerous). Only after that did a messenger of their King come and tell them that they were all doomed.
And they indeed were.
I'm not quite seeing the applicability as a rationality quote; but in "it's bed" you should drop the apostrophe.
I'm probably incredibly late with this, but:
a) thank you, embarrassing mistake fixed
b) I was fascinated by the "volatile atoms" bit. It feels like a line taken from a poem on reductionism. I'm not sure I managed to convey it, since I'm not well versed in English fiction and poetry.
Also, I liked their safety measures; it's a pity they didn't work in the end.
I don't understand this rationality quote. Is it about fighting akrasia? Self-hacking to save money effectively? It clearly describes a method that wouldn't actually work; it could work as humour, but what does it mean as a rationality tale?
It's interesting to view this story from a source-code-swap Prisoner's Dilemma / Timeless Decision Theory perspective. It could be a perfect epigraph for an article dedicated to that.
The fear people have about the idea of adherence to protocol is rigidity. They imagine mindless automatons, heads down in a checklist, incapable of looking out their windshield and coping with the real world in front of them. But what you find, when a checklist is well made, is exactly the opposite. The checklist gets the dumb stuff out of the way, the routines your brain shouldn’t have to occupy itself with (Are the elevator controls set? Did the patient get her antibiotics on time? Did the managers sell all their shares? Is everyone on the same page here?), and lets it rise above to focus on the hard stuff (Where should we land?).
Here are the details of one of the sharpest checklists I’ve seen, a checklist for engine failure during flight in a single-engine Cessna airplane—the US Airways situation, only with a solo pilot. It is slimmed down to six key steps not to miss for restarting the engine, steps like making sure the fuel shutoff valve is in the OPEN position and putting the backup fuel pump switch ON. But step one on the list is the most fascinating. It is simply: FLY THE AIRPLANE. Because pilots sometimes become so desperate trying to restart their engine, so crushed by the cognitive overload of thinking through what could have gone wrong, they forget this most basic task. FLY THE AIRPLANE. This isn’t rigidity. This is making sure everyone has their best shot at survival.
-- Atul Gawande, The Checklist Manifesto
Sages and scientists heard those words, and fear seized them. However, they disbelieved the horrible prophecy, deeming the possibility of perdition too improbable. They lifted the starship from its bed, shattered it into pieces with platinum hammers, and plunged the pieces into hard radiation, and thus the ship was turned into myriads of volatile atoms, which are always silent, for atoms have no history; they are identical, whatever their origin, whether it be bright suns, dead planets, or intelligent creatures, virtuous or vile, for raw matter is the same throughout the Cosmos, and it is other things you should be afraid of.
Still, even the atoms were gathered, frozen into a single clod, and sent into the distant sky. Only then were the Enterites able to say: "We are saved. Nothing threatens us now."
-- Stanislaw Lem, White Death
(as far as I know, this sweet short story has never been translated into English; I translated this passage myself from my Russian copy, so I'd be glad if someone corrects my mistakes)
In a series (one single, continuing series!) of coin tosses, the probability that you get a run of heads at least half as long as the overall length of the series (e.g. ttththtHHHHHHH) is always > 0, but it is not guaranteed to happen, no matter how many chances you give it.
... any event for which you don't shrink the epsilon enough for the sum to become a convergent series. Or any process with the Markov property. Or any event with a fixed epsilon > 0.
That should cover just about any relevant event.
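The coin-toss claim can be checked exactly. A minimal sketch (assuming fair, independent tosses; the function name is mine): it computes the exact probability that n tosses contain a run of at least ⌈n/2⌉ consecutive heads, via the standard count of binary strings avoiding a k-run.

```python
from fractions import Fraction

def prob_half_run(n):
    """Exact probability that n fair tosses contain a run of
    at least ceil(n/2) consecutive heads."""
    k = (n + 1) // 2
    # f[i] = number of length-i binary strings with NO run of k heads
    f = [0] * (n + 1)
    f[0] = 1
    for i in range(1, n + 1):
        if i < k:
            f[i] = 2 ** i  # too short to contain a k-run at all
        else:
            # split on the first tail: j heads (j < k), then a tail,
            # then any valid shorter string (a run can't cross the tail)
            f[i] = sum(f[i - 1 - j] for j in range(k))
    return 1 - Fraction(f[n], 2 ** n)

for n in (4, 10, 20, 40):
    print(n, float(prob_half_run(n)))
```

For n = 10 the exact value is 7/64, and the probabilities decay geometrically as n grows, so their sum over n converges — which is exactly the convergent-series condition discussed above.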
(and you also misapply the Law of Large Numbers here)
Explain.
The Law of Large Numbers states that the average of a large number of i.i.d. variables approaches its mathematical expectation. Roughly speaking, "big samples reliably reveal the properties of the population".
It doesn't state that "everything can happen in large samples".
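A minimal simulation sketch of that statement (the helper name and seed are mine; it uses Python's standard random module): the sample mean of fair-coin tosses settles near its expectation of 0.5 as the sample grows.

```python
import random

def sample_mean(n, seed=0):
    """Mean of n i.i.d. fair-coin tosses (expectation 0.5)."""
    rng = random.Random(seed)
    return sum(rng.randint(0, 1) for _ in range(n)) / n

# Bigger samples hug the expectation more tightly; nothing here says
# that arbitrarily long runs of heads must eventually show up.
for n in (10, 1_000, 100_000):
    print(n, sample_mean(n))
```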
The first terrifying shock comes when you realize that the rest of the world is just so incredibly stupid.
The second terrifying shock comes when you realize that they're not the only ones.
For those who are here and are unfamiliar with canon: I believe BT_Uytya meant this YouTube clip, or a similar one; as far as I know, none of them are authorized by Warner Bros. or J.K. Rowling, but they may be short enough to qualify as fair use in many jurisdictions. I am not a lawyer.
Yes, it was this video I had in mind.
Nice example of how using a probability of exactly zero can screw you over. Two observations.
It could have done with a link to Eliezer's "0 And 1 Are Not Probabilities" from back in 2008.
I say you can get to 99.99% confidence that 1159 is prime (if it actually is; I haven't checked); probably 99.9999%. Suppose you (a) write a program to check all possible divisors, test that it gives the right answers for everything up to 100, and run it multiple times in case of cosmic rays; (b) look it up in a table of prime numbers; (c) apply, by hand, one of the fancy number-theoretic primality tests (most of these are probabilistic, but again you can find statements of the form "If a number less than 10^12 passes these specific tests and isn't one of the following short list of exceptions, it is prime").

Then I reckon that, apart from theories where what's wrong is your brain, (a), (b), and (c) are clearly extremely close to independent; (a) has well less than a 0.001 chance of failure, (b) well less than 0.01, and (c) well less than 0.1; so the only hypothesis you need to consider that might take the probability above 10^-6 is that you're delusional in some way that specifically messes up your ability to tell whether 1159 is prime. Now (d) this is surely extraordinarily rare: delusions in general aren't so rare, but this is something very specific and weird; and (e) if your mind is that badly screwed up, then attempting to work with probabilities is probably kinda meaningless anyway. (Theorems like Cox's say that you should use probabilities as measures of confidence, and compute with them in the standard ways, if you are a rational agent. If you are known to be a catastrophically irrational agent, then what probabilities you assign is probably not the greatest of your worries.)
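Approach (a) is easy to sketch. This is a minimal version, not anyone's actual program; the hard-coded small-primes list exists only for the self-check against numbers below 100:

```python
def is_prime(n):
    """Trial division over all possible divisors up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Self-check: the program must agree with a known list below 100.
SMALL_PRIMES = {2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43,
                47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97}
assert all(is_prime(n) == (n in SMALL_PRIMES) for n in range(100))

print(is_prime(1159))                                # → False
print([d for d in range(2, 1159) if 1159 % d == 0])  # → [19, 61]
```

As it happens, the check settles the parenthetical: 1159 = 19 × 61, so it isn't prime; the structure of the confidence argument is unchanged.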
(just amused by the possibility)
Also, it is possible that Peano arithmetic isn't consistent; if so, either the very concept of 'primality' doesn't make any sense, or the inconsistency could mess up the primality tests used in the creation of (b) and (c), as well as the connection between "1159 is prime" and "this program outputs True and halts".
Of course, that would screw up any application of Cox's theorem here, even worse than in the delusion case.