Can you explain and/or link this analysis of transparent Newcomb? It looks very wrong to me.
It's only wrong if you are the kind of person who doesn't like getting $1,000,000.
If only all our knowledge of our trading partners and environment were as reliable as 'fundamentally included in the very nature of the problem specification'. You have to think a lot harder when you are only somewhat confident and know the limits of your own mind-reading capabilities.
> If only all our knowledge of our trading partners and environment was as reliable as 'fundamentally included in the very nature of the problem specification'.
If you're going to make that kind of argument, you're dismissing pretty much all LW-style thought experiments.
Here's an edited version of a puzzle from the book "Chuck Klosterman IV" by Chuck Klosterman.
When should you punish someone for a crime they will commit in the future? Discuss.