blinks
I didn't realize the Lifespan Dilemma was a cognitive hazard. How much freakout are we talking about here?
Interestingly, I discovered the Lifespan Dilemma through this post. While it didn't cause a total breakdown of my ability to do anything else, it did consume an inordinate amount of my thought process.
The question looks like an optimal betting problem- you have a limited resource, and need to get the most return. According to the Kelly criterion, the optimal fraction of your total bankroll to bet is f* = (p(b+1) - 1)/b, where p is the probability of success, and b is the net return per unit risked. The interesting thing here is that for very large values of b, the fraction of bankroll to be risked almost exactly equals the probability of winning. Assuming a bankroll of 100 units and a 20 percent chance of success, you should bet the same amount whether b = 1 million or b = 1 trillion: 20 units.
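A quick check of those numbers, as a minimal Python sketch (the function name is mine, not from any library):

```python
def kelly_fraction(p, b):
    """Kelly criterion: optimal fraction of bankroll to bet,
    where p is the win probability and b is the net return
    per unit risked."""
    return (p * (b + 1) - 1) / b

bankroll = 100
for b in (1e6, 1e12):
    print(f"b = {b:.0e}: bet {kelly_fraction(0.2, b) * bankroll:.4f} units")
# b = 1e+06: bet 19.9999 units
# b = 1e+12: bet 20.0000 units
```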
Eager to apply this to the problem at hand, I decided to plug in the numbers. I then realized I didn't know what the bankroll was in this situation. My first thought was that the bankroll was the expected time left- percent chance of success times time if successful. I think this is the mindset that leads down the garden path- every time you increase your time of life if successful, it feels like you have more units to bet with, which means you are willing to spend more on longer odds.
Not satisfied, I attempted to re-frame the question in terms of money. Stated like this: I have $100, and in 2 hours I will have either $0 or $1 million, with an 80% chance of winning. I could trade my 80% chance at $1 million for a 79% chance of winning $1 trillion. So, now that we are in money, where is my bankroll?
I believe that is the trick- in this question, you are already all in. You have already bet 100% of your bankroll for an 80% chance of winning- in 2 hours, you will know the outcome of your bet. For extremely high values of b, you should have bet only 80% of your bankroll- you are already underwater. Here is the key point- changing the value of b does not change what you should have bet, or even your bet at all- that's locked in. All you can change is the probability, and you can only make it worse. From this perspective, you should accept no offer that lowers your probability of winning.
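A minimal sketch of that framing, reusing the Kelly formula from above with the numbers from my money reframing- the gap between the forced all-in bet and the Kelly-optimal bet only widens as p falls:

```python
# Once you are all in, the bet size is fixed at 100% of bankroll;
# the only lever the offers move is p, and only downward.
for p, b in [(0.80, 1e6), (0.79, 1e12)]:
    f_star = (p * (b + 1) - 1) / b  # Kelly-optimal fraction
    print(f"p = {p}: Kelly says bet {f_star:.2%}, forced bet is 100%")
# p = 0.8: Kelly says bet 80.00%, forced bet is 100%
# p = 0.79: Kelly says bet 79.00%, forced bet is 100%
```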
From my perspective, if you are in a place of prestige and you want to avoid damage to your image, hiding your quirks maximizes the chance that they will be discovered in a way that precludes you from controlling how they are released. If image malpractice is the issue, keeping this in the open is an inoculation against more damaging future revelation. The trade-off is that you may lose credibility up front. Given EY's eschewing of the "normal" routes to academic success, and the profound strangeness that some of the ideas we take for granted have at first blush to anyone who hasn't read the sequences, I don't think OkCupid is doing much damage.
Finally, I noticed when I first read this that the article gave me the squicks. In trying to compare the feeling to a known quantity, I realized it was analogous to when my religious parents would scandalously tell me of a couple who were "shacking up". Sharing pseudo-private information in a way that makes no explicit value judgement certainly makes one implicitly. I rather doubt that was your intention; however, you might want to be aware of the reaction if it was not intended.
tl;dr: EY's just this guy, you know?
I made a non-typo'd version if anyone is interested. Here it is: http://joshua-david.net/php/cardsagainstrationality.php.
If anyone wants the php source code that made it, that's here: http://joshua-david.net/php/cardsagainstrationality.php.php.
Because I know LW is big on meta-anything, the source code for that source code can be found here: http://joshua-david.net/php/cardsagainstrationality.php.php.php.
Some choice ones:
(Tip: put ?n=5 at the end of the URL for 5 of them.)
Eliezer Yudkowsky is what acausal sex feels like from the inside.
Inside Eliezer Yudkowsky's pineal gland is not an immortal soul, but counterfactual hugging.
This is the case with Masquerade: within the hypotheticals, the masks and opponents play against each other as if theirs were the "real" round.
So- does the whole problem go away if, instead of trying to deduce what FairBot is going to do with Masquerade, we assume that FairBot is going to assess it as if Masquerade were the current mask? By ignoring the existence of Masquerade in our deduction, we solve the Gödelian inconsistency while simultaneously ensuring that another AI can easily determine that we will be executing exactly the mask we choose.
Masquerade deduces the outcome of each of its masks, ignoring its own existence, and chooses the best one. FairBot follows the exact same process, determines which mask Masquerade is going to use, and then uses that outcome to make its own decision, as if Masquerade were whatever mask it ends up as. I assume Masquerade would check whether it is running against itself, and automatically cooperate if it is, without running the deduction- that would be the other case for avoiding the loop.
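A toy sketch of that process- with the caveat that these few-line bots and the fixed-point search are my stand-ins for the provability machinery in the actual post, not the real construction:

```python
PAYOFFS = {("C", "C"): 3, ("C", "D"): 0,  # (my move, their move) -> my payoff
           ("D", "C"): 5, ("D", "D"): 1}  # standard Prisoner's Dilemma values

# Each mask maps the opponent's move to its own move.
def fair_bot(their_move):
    return "C" if their_move == "C" else "D"

def cooperate_bot(their_move):
    return "C"

def defect_bot(their_move):
    return "D"

MASKS = {"FairBot": fair_bot, "CooperateBot": cooperate_bot,
         "DefectBot": defect_bot}

def outcome(mask_a, mask_b):
    """Find a consistent pair of moves, preferring cooperative fixed
    points (a crude stand-in for the Löbian argument)."""
    for a, b in [("C", "C"), ("C", "D"), ("D", "C"), ("D", "D")]:
        if mask_a(b) == a and mask_b(a) == b:
            return a, b

def masquerade(opponent_mask):
    """Deduce each mask's result against the opponent, ignoring
    Masquerade's own existence, and commit to the best-scoring mask."""
    return max(MASKS, key=lambda n: PAYOFFS[outcome(MASKS[n], opponent_mask)])

# FairBot then evaluates Masquerade as whichever mask it committed to:
chosen = masquerade(fair_bot)
print(chosen, outcome(MASKS[chosen], fair_bot))  # FairBot ('C', 'C')
print(masquerade(defect_bot))                    # FairBot (which defects here)
```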
What happens if the masks are spawned as subprocesses that are not "aware" of the higher-level process monitoring them? The higher-level process can kill off the subprocesses and spawn new ones as it sees fit, but the mask processes themselves retain the integrity needed for a FairBot to cooperate with them.
A rationalist who doesn't consider the effects of tone when attempting to effect a change in someone's thinking is not dealing in reality. There is a reason Becker's Rules have to be asked for and agreed to, even among rationalists- we are not built to automatically separate tone from content, and there are times when even the most thoughtful of us are personally vulnerable to a harsh tone. We tend to simplify to "two Bayesians updating on evidence", but in reality, we have to consider the best way to transmit that message, as well as the outcome of that transmission. Human language is not tightly controlled code- when a change in tone is equivalent to a change in meaning, ignoring tone is the same as ignoring all parentheses in code.
Ahh, that wonderfully embarrassing moment when you realize your small group has been calling Crocker's rules by the wrong name for almost a year.
...without losing sight of the fact that having allowed myself to get into a position where my only path to victory requires a low-probability event that I don't control was already a huge mistake that I should confidently expect to result in my failure.
This is a very fine line to walk, especially in Magic. Finding the places you could have made better decisions, while understanding which decisions you could not have made better with the information you had at the time, is not an easy task- although at my skill level, it is generally easier to assume I made a poor decision and find it.
Probably the best example of "how do I win?". In this match, the gut reaction would be to use the direct damage spell in hand to clear away one of the creatures, and hope for either a big creature draw or some other game-changing spell. Instead, knowing the ONLY way he could win was if the card on top of his deck was direct damage, he aimed the direct damage spell in hand directly at his opponent, and then just flipped over the top card- if you only have one path to victory, you have to ignore all other paths, no matter how tempting, or how much it feels like the wrong play.
If in Newcomb's problem you replace Omega with James Randi, suddenly everyone is a one-boxer, as we assume there is some sleight of hand involved to make the money appear in the box after we have made the choice. I am starting to wonder if Newcomb's problem is just simple map and territory- do we have sufficient evidence to believe that under any circumstance where someone two-boxes, they will receive less money than a one-boxer? If we table the question of how it is happening, and focus only on the testable probability of whether Randi/Omega is consistently accurate, we can draw conclusions about whether we live in a universe where one-boxing is profitable or not. Eventually, we may even discover the how, and also the source of all the money that Omega/Randi is handing out, and win. Until then, like all other natural laws that we know but don't yet understand, we can still make accurate predictions.
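A minimal sketch of that empirical framing, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box, $1,000 in the transparent one)- the accuracy threshold is just algebra, not something from the problem statement:

```python
def expected_value(p, one_box):
    """Average payout given an observed predictor accuracy p."""
    if one_box:
        return p * 1_000_000            # box is full iff predicted correctly
    return (1 - p) * 1_000_000 + 1_000  # full box only if the predictor erred

for p in (0.5, 0.9, 0.99):
    print(p, expected_value(p, True), expected_value(p, False))
# One-boxing wins whenever p*1e6 > (1-p)*1e6 + 1e3, i.e. p > 0.5005,
# so even a modestly accurate Randi/Omega makes one-boxing profitable.
```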