Just this guy, you know?
Allocation of blame/causality is difficult, but I think you have it wrong.
ex. 1 ... He would also waste Tim's $100 which counterfactually could have been used to buy something else for Bob. So Bob is stuck with using the $100 headphone and spending the $300 somewhere else instead.
No. TIM wasted $100 on a headset that Bob did not want (because he planned to buy a better one). Bob can choose whether to hide this waste (at the cost of the utility loss from having $300 and a worse listening experience, but with the "benefit" of misleading Tim about his misplaced altruism), or to discard the gift and buy the headphones as he'd already planned (with the benefit of better sound at the cost of being $300 poorer, plus making Tim feel bad, though perhaps he learns to ask before wasting money).
ex. 2 The world is now stuck with Chris' poor translation on book X with Andy and Bob never touching it again because they have other books to work on.
Umm, here I just disagree. The world is no worse off for having a bad translation than having no translation. If the bad translation is good enough that the incremental value of a good translation doesn't justify doing it, then that is your answer. If it's not good enough to change the marginal decision to translate, then Andy or Bob should re-translate it. Either way, Chris has improved the value of books, or has had no effect except wasting his own time.
You need to be careful to define "us" in these discussions. The people for whom it worked in the past are not the people making behavioral choices now. They are the ancestors of today's people. You also have to be more specific about what "worked" means - they were able to reproduce and create the current people. That is very different from what most people mean by "it works" when evaluating how to behave today.
It's also impossible to distinguish what parts of historical behavior "worked" in this way. Perhaps it was conformity per se, perhaps it was the specific conformist behaviors that previous eras preferred, perhaps it was other parts of the environment that made it work and no longer apply.
It gets very complicated when you add in incentives and recognize that science and scientists are also businesses. There's a LOT of the world that scientists haven't (or haven't in the last century or so) really tried to prove, replicate, and come to consensus on.
Yes for the first half, no for the second. I would reply 1/2, but not JUST because of conventional probability theory. It's also because the unstated parts of "what will resolve the prediction", in my estimation and modeling, match the setup of conventional probability theory. It's generally assumed there's no double-counting or other experience-affecting tomfoolery.
I'm very much not sure discouraging HFT is a bad thing.
It's not just the "bad" HFT. It's any very-low-margin activity.
But normal taxes have the same effect, don't they?
Nope, normal taxes scale with profit, not with transaction size.
It's too much for some transactions, and too little for others. For high-frequency (or mid-frequency) trading, 1% of the transaction is 3 or 4 times the expected value from the trade. For high-margin sales (yachts or software), 1% doesn't bring in enough revenue to be worth bothering (this probably doesn't matter unless the transaction tax REPLACES other taxes rather than being in addition to them).
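To make the margin point concrete, here's a quick sketch. All the numbers (notional sizes, margins, the 1% rate) are illustrative assumptions, not real market data:

```python
# Illustrative only: compare a 1% transaction tax against expected profit for
# a thin-margin trade vs. a high-margin sale. All figures are made up.
def tax_vs_edge(notional, expected_profit, tax_rate=0.01):
    """Return (tax owed, tax as a multiple of the trade's expected profit)."""
    tax = notional * tax_rate
    return tax, tax / expected_profit

# A mid-frequency trade: $10,000 notional, ~$30 of expected edge (0.3% margin).
# The 1% tax ($100) is over 3x the expected profit - the trade stops happening.
print(tax_vs_edge(10_000, 30))

# A $5M yacht sale with $1M of profit: the same 1% ($50,000) is only 5% of the
# profit - real money, but small next to the profit-based taxes it would replace.
print(tax_vs_edge(5_000_000, 1_000_000))
```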
It also interferes with business organization - it encourages companies to do things in-house rather than outsourcing or partnering, since inside-company "transactions" aren't real money and aren't taxed.
It's not a bad idea per se, it just needs as many adjustments and carveouts as any other tax, so it ends up as politically complicated as any other tax and doesn't actually help with anything.
I suspect we don't agree on what it means for something to matter. If something is outside the causal/observable cone (add dimensions to cover MWI if you like), the difference or similarity is by definition not observable.
And the distinction between "imaginary" and "real, but fully causally disconnected" is itself imaginary.
There is no identity substance, and only experience-reachable things matter. All agency and observation is embedded, there is no viewpoint from outside.
I'm not sure why
is not one of your considerations. This seems most likely to me.
If quantum immortality is true
This is a big if. It may be true (though it also implies that events as unlikely as Boltzmann Brains are true as well), but it's not true in a way that has causal impact on my current predicted experiences. If it is true, then the VAST VAST MAJORITY of universes don't contain me in the first place, and the also-extreme majority of those that do will have me die.
Assume quantum uncertainty affects how the coins land. I survive the night only if I correctly guess the 10th digit of π and/or all seven coins land heads, otherwise I will be killed in my sleep.
In a literal experiment, where a human researcher kills you based on their observations of coins and calculation of pi, I don't think you should be confident of surviving the night. If you DO survive, you don't learn much about uncorrelated probabilities - there's a near-infinite number of worlds, and fewer and fewer of them will contain you.
I guess this is a variant of option (1) - Deny that QI is meaningful. You don't give up on probability - given the "and/or" in the setup, you can estimate a 1/10 + (1/2)^7 - (1/10)(1/2)^7 ≈ 0.107 chance of surviving (it would be (1/2)^7 * 1/10 ≈ 0.00078 only if BOTH conditions were required).
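The arithmetic, treating the digit guess as a 1/10 chance (an assumption about the guesser's uncertainty, since the digit itself is fixed):

```python
# Probability of surviving the quoted setup: the pi-digit guess is right
# (assumed 1/10) and/or all seven fair coins land heads.
p_digit = 1 / 10
p_coins = (1 / 2) ** 7           # 1/128

# Inclusive "or" of two independent events:
p_survive = p_digit + p_coins - p_digit * p_coins
print(p_survive)                 # roughly 0.107
```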
Thanks for this - it's important to keep in mind that a LOT of systems are easier to sustain or expand than to begin. Perhaps most systems face this.
In a lot of domains, this is known as the "bootstrap" problem, based on the concept of "lift yourself up by your bootstraps", which doesn't actually work well as a metaphor. See Bootstrapping - Wikipedia
In CS, for instance, compilers are pieces of software that turn source code into machine code. Since they're software, they need a compiler to build them. GCC (and some other from-scratch compilers; many other compilers just depend on GCC) includes a "bootstrap" stage: a minimal compiler (historically hand-coded executable code, though nowadays it's compiled as well) builds a minimal "stage 2" compiler, which then compiles the main compiler, and the main compiler is then used to build itself again, with all optimizations available.
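GCC's build system automates exactly this staging. A sketch of the documented flow (the paths and install prefix here are placeholders; `bootstrap` is the standard make target):

```shell
# Standard GCC 3-stage bootstrap (sketch; adjust paths and prefix to taste):
#   stage 1: built by whatever system compiler is available
#   stage 2: built by the stage-1 compiler
#   stage 3: built by stage 2, then compared to stage 2 as a consistency check
mkdir build && cd build
../gcc/configure --prefix=/opt/gcc --disable-multilib
make bootstrap    # runs the staged build plus the stage-2/stage-3 comparison
make install
```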
In fact, you've probably heard the term "booting up" or "rebooting" your computer. This is a shortening of the word "bootstrap", and refers to powering on without any software, loading a small amount of code from ROM or Flash (or other mostly-static store), and using that code to load further stages of Operating System.