I think this post would benefit from a link to an article about the Iterated Prisoner's Dilemma, since the beginning of the post requires some knowledge of it to be valuable.
One possible reason Alicorn hasn't applied her technique to you is that it simply isn't powerful enough to overcome your unpleasantness. FWIW, I perceive you as a lot less civil than the LW norm; you seem possessed of a snarky combativeness. You also appear to have a tendency to fixate on personal annoyances and to justify your focus with concerns and observations that, context-wise, come out of nowhere.
In this case, your supposed insight into what would really be best for Alicorn plays that role. And then, having established this "lemma", you carry through to the conclusion that... Alicorn's behavior is inconsistent. Take a step back, and look at what you're saying. You're basically claiming to have reverse-engineered someone else's utility function, as the premise of an argument which concludes that they're being a hypocrite.
I hope you'll come to see this sort of behavior as embarrassing.
"FWIW" == "For What It's Worth," to save a few person-minutes for other passive readers here.
It's not clear to me whether I should spend this sum of money (considering opportunity cost, etc.) on cryopreserving myself, on reducing existential risk, on some other charitable contribution, on passing substantially more of my money to my relatives, or on something else entirely. In particular, I'm not sure how to estimate the probability of actually being revived at some point. It might help to estimate the probability of legally "dying" in circumstances where people are present at death, or only a short time before preservation becomes impossible (for example, "dying" in a hospital). This would seemingly have a large effect on my chances of being revived, but maybe not: any technology capable of reviving those thought "dead" would already require such major advances that even days of going undiscovered (and thus an enormous difference in bodily decay) might turn out to be trivial. Or this could be entirely wrong, depending on how the technology actually progresses. But even after such differences in time of pre-preservation "death" are accounted for, it is still not clear how to estimate the likelihood of ever being revived, or a number of other quantities that would be needed, at a minimum, to determine how much money to allocate to each of these potential uses.
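For concreteness, here is a minimal sketch (in Python) of the kind of expected-value calculation this would require. Every probability and dollar figure in it is an invented placeholder, which is precisely the problem: the conclusion swings wildly with inputs we have no principled way to estimate.

```python
# A minimal sketch of the expected-value estimate described above.
# All numbers are placeholder assumptions, not estimates anyone endorses;
# the point is how sensitive the result is to these unknown inputs.

cost = 80_000                 # hypothetical all-in cost of cryonics, in dollars
p_timely_preservation = 0.5   # assumed P("dying" where prompt preservation is possible)
p_org_survives = 0.3          # assumed P(preservation is maintained long enough)
p_revival_tech = 0.1          # assumed P(revival technology is ever developed and applied)

# Treating the stages as independent, the revival probability is their product.
p_revival = p_timely_preservation * p_org_survives * p_revival_tech

value_if_revived = 10_000_000  # placeholder dollar-equivalent value of being revived

expected_value = p_revival * value_if_revived - cost

print(f"P(revival) = {p_revival:.3f}")             # 0.015 under these assumptions
print(f"Expected value = ${expected_value:,.0f}")  # $70,000 under these assumptions
```

Halving any one of those assumed probabilities flips the sign of the result, which is why filling in the inputs, not doing the arithmetic, is the hard part.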
Basically, this issue is far more difficult to resolve than a simple pseudo-Pascal's Wager. (This is a response not to the article in question but, in a more general form, to a few arguments I have seen even on this site, including in some comments.)
I suppose it's that I naively expect, when opening the list of top LW posts ever, to see ones containing the most impressive or clever insights into rationality.
Not that I don't think Holden's post deserves a high score for other reasons. While I am not terribly impressed with his AI-related arguments, the post meets the very highest standards of conduct; it shows how to have a disagreement that is polite and goes far beyond what is usually called "constructive".
Some people who upvoted the post may think it is one of the best-written and most important examples of instrumental rationality on this site.