I think the Torture versus Dust Specks "paradox" was invented to show how utilitarianism (or whatever we're calling it) can lead to conclusions that are preposterous on their face whenever the utility numbers get big enough. And I think the intent was for everybody to accept this, and shut up and calculate.
However, for me, and I suspect some others, Torture versus Dust Specks and also Pascal's Mugging have implied something rather different: that utilitarianism (or whatever we're calling it) doesn't work correctly when the numbers get too big.
The idea that multiplying suffering by the number of sufferers yields a correct and valid total-suffering value is not a fundamental truth; it is just a naive extrapolation of the intuitions that ordinarily guide our decisions.
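To make the extrapolation concrete, here is a minimal sketch of the naive aggregation being questioned. All disutility figures are invented for illustration, and 3^^^3 is far too large to represent directly, so the comparison is done in orders of magnitude:

```python
# Naive utilitarian aggregation: total disutility is per-person
# disutility times the number of people affected.
# Both figures below are made up purely for illustration.
import math

speck_disutility = 1e-12    # hypothetical disutility of one dust speck
torture_disutility = 1e9    # hypothetical disutility of fifty years of torture

# Any population whose size exceeds torture_disutility / speck_disutility
# makes the specks "worse" under naive multiplication. Here that threshold
# is only ~21 orders of magnitude -- and 3^^^3 exceeds it beyond measure.
threshold_orders = math.log10(torture_disutility / speck_disutility)
print(threshold_orders)
```

Whatever illustrative numbers you pick, the threshold is some finite number of orders of magnitude, and 3^^^3 dwarfs it; that is the entire force of the original dilemma.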
Let's consider a Modified Torture versus Specks scenario: You are given the same choice as in the canonical problem, except you are also given the opportunity to collect polling data from every single one of the 3^^^3 individuals before you make your decision. You formulate the following queries:
"Would you rather experience the mild distraction of a dust speck in your eye, or allow someone else to be tortured for fifty years?"
"Would you rather be tortured for fifty years, or have someone else experience the mild discomfort of a dust speck in their eye?"
You do not mention, in either query, that you are facing the Torture versus Specks dilemma. You are only allowing the 3^^^3 individuals to consider themselves and one hypothetical other.
You get the polling results back instantly. (Let's make things simple and assume we live in a universe without clinical psychopathy.) The vast majority of respondents have chosen the "obviously correct" option.
Now you have to make your decision knowing that the entire universe totally wouldn't mind having dust specks in exchange for preventing suffering for one other person. If that doesn't change your decision ... something is wrong. I'm not saying something is wrong with the decision so much as something is wrong with your decision theory.
Clock speed isn't the only measure of CPU performance. In fact, it isn't much of a measure at all, given that new processors are outperforming Pentium 4 chips (ca. 2005) by the factor you'd expect from Moore's law, despite clock speeds that are lower by as much as half.
This isn't really true: clock speed is a very good metric for computing power. If your clock speed doubles, you get a 2x speedup in the amount of computation you can do without any algorithmic changes. If you instead increase chip complexity, e.g., with parallelism, you need to write new code to take advantage of it.
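The asymmetry the commenter describes is what Amdahl's law formalizes: a clock-speed doubling accelerates every instruction, while adding cores only accelerates the fraction of the program that has actually been parallelized. A small sketch (the 50% parallel fraction is just an example figure):

```python
# Amdahl's law: overall speedup from n_cores when only a fraction p
# of the program's work runs in parallel. The serial remainder (1 - p)
# is untouched no matter how many cores you add.

def amdahl_speedup(p, n_cores):
    """Overall speedup when fraction p of the work uses n_cores."""
    return 1.0 / ((1.0 - p) + p / n_cores)

# Doubling clock speed: a uniform 2x with no code changes.
clock_speedup = 2.0

# Doubling cores with half the code parallelized: only ~1.33x,
# and even unlimited cores cap the speedup just below 2x.
print(amdahl_speedup(0.5, 2))
print(amdahl_speedup(0.5, 1000))
```

This is why the two "doublings" aren't interchangeable: the clock-speed gain is free to existing software, while the core-count gain is bounded by however much of the code you manage to rewrite.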