I'm not entirely convinced by the rest of your argument, but

> The idea that multiplying suffering by the number of sufferers yields a correct and valid total-suffering value is not fundamental truth, it is just a naive extrapolation of our intuitions that should help guide our decisions.

is, far and away, the most intelligent thing I have ever seen anyone write on this damn paradox.
Come on, people. The fact that naive preference utilitarianism gives us torture rather than dust specks is not some result we have to live with; it's an indication that the decision theory is horribly, horribly wrong.
It is beyond me how people can look at dust specks and torture and draw the conclusion they do. In my mind, the most obvious, immediate objection is that utility does not aggregate additively across people in any reasonable ethical system. This is true no matter how big the numbers are. Instead it aggregates by minimum, or maybe multiplicatively (especially if we normalize everyone's utility function to [0,1]).
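To make the contrast concrete, here is a toy sketch in Python; the population size and per-person disutilities are invented, scaled-down stand-ins chosen so the script actually runs, not numbers from the original problem:

```python
# Toy comparison of aggregation rules over utilities normalized to [0, 1].
# The population size and disutilities below are illustrative assumptions.

def aggregate(utilities):
    """Return (additive, minimum, multiplicative) aggregates of a utility list."""
    total = sum(utilities)
    worst = min(utilities)
    product = 1.0
    for u in utilities:
        product *= u
    return total, worst, product

n = 100_000        # stand-in for the astronomically large speck population
speck = 1e-4       # per-person disutility of one dust speck (scaled way up)

world_torture = [0.0] + [1.0] * (n - 1)   # one person at utility 0, rest untouched
world_specks = [1.0 - speck] * n          # everyone slightly worse off

for name, world in [("torture", world_torture), ("specks", world_specks)]:
    total, worst, product = aggregate(world)
    print(f"{name:8s} sum={total:.1f} min={worst:.4f} product={product:.2e}")

# Additive aggregation ranks the torture world higher (99999.0 > 99990.0),
# while both the minimum and the multiplicative rule rank the specks world higher.
```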
Sorry for all the emphasis, but I am sick and tired of supposed rationalists using math to reach the reprehensible conclusion and then claiming it must be right because math. It's the epitome of Spock "rationality".
Issues with the survey:
EDIT: Overall, it's pretty good.
27 years early, 60% certain. Oops.
I'm fairly convinced that MWI is LW dogma because it supports the Bayesian notion that probabilities are mental entities rather than physical ones, not because of its own merits.
This is phenomenal! Thanks!
Every part of this comment is true for me, too.
I think I thought this was better when it was utterly inexplicable, actually.
That reply is entirely begging the question. Whether consciousness is a phenomenon "like math" or a phenomenon "like photosynthesis" is exactly what is being argued about. So it's not an answering argument; it's an assertion.
This isn't really true: clock speed is a genuinely good metric for computing power. If your clock speed doubles, you get a 2x speedup in the amount of computation you can do without any algorithmic changes. If you instead increase chip complexity, e.g., by adding parallelism, you need to write new code to take advantage of it.
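As a toy illustration (entirely hypothetical code, not taken from the post being discussed): the serial version below gets faster "for free" if the clock speed rises, whereas the extra cores only help once the work has been re-expressed as independent chunks.

```python
# Toy illustration: the same workload written serially and with explicit
# parallelism. A faster clock speeds up the serial version with no code
# changes; using extra cores requires the restructured version below.
from multiprocessing import Pool

def count_primes(bounds):
    """Count primes in [lo, hi) by trial division (deliberately simple)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # Serial: benefits automatically from a higher clock speed.
    serial_total = count_primes((0, 200_000))

    # Parallel: the work had to be split into independent chunks
    # before extra cores could help at all.
    chunks = [(i, i + 50_000) for i in range(0, 200_000, 50_000)]
    with Pool(processes=4) as pool:
        parallel_total = sum(pool.map(count_primes, chunks))

    assert serial_total == parallel_total
    print(serial_total)
```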