sark comments on Contrived infinite-torture scenarios: July 2010 - Less Wrong
In all seriousness, coming up with extreme, contrived examples is a very good way to test the limits of moral criteria, methods of reasoning, and so on. A problem that shows up most obviously at the extreme fringes is often also at work, less obviously, in more plausible real-world scenarios, so knowing where a system obviously fails is a good starting point.
Of course, we're generally relying on intuition to determine what counts as a "failure" (many people would hear that utilitarianism favours TORTURE over SPECKS and deem that a failure of utilitarianism rather than a failure of intuition), so this method is also good for probing what people really believe, rather than what they claim to believe, or believe they believe. That's a good general principle of reverse engineering: if you can figure out where a system does something weird or surprising, or merely what it does in weird or surprising cases, you can often get a better sense of the underlying algorithms. A person unfamiliar with the terminology of moral philosophy might not know whether they are a deontologist or a consequentialist or something else, and if you ask them whether it is right to kill a random person for no reason, they will probably say no, whatever they are. If you ask them whether it's right to kill someone who is threatening other people, there's some wiggle room on both sides. But suppose you tell them to imagine that an evil wizard comes before them and gives them a device with a button, telling them that pushing the button will cause a hundred people to die horrible deaths, and that if and only if they don't push it, he will personally kill a hundred thousand people. Their answer to this ridiculous scenario will give you much better information about their moral thinking than any of the previous, more realistic examples.
I think paradoxes and extreme examples work mainly by provoking lateral thinking, forcing us to reconsider assumptions, and so on. They have nothing at all to do with the logical system under consideration. Sometimes we get lucky and hit upon an idea that goes further and admits fewer exceptions; other times we don't. In short, it's all in the map, not in the territory.
I don't believe in absolute consistency (whether in morality or even in, say, physics). A theory is an algorithm that works, and we should be thankful that it works at all. For something like morality, I don't expect any systematization to be possible: we will only know what is moral in the far future in the only-slightly-less-far future. Self-modification has no well-defined trajectory.
Theories of the known, which are described by different physical ideas, may be equivalent in all their predictions and are hence scientifically indistinguishable. However, they are not psychologically identical when trying to move from that base into the unknown. For different views suggest different kinds of modifications which might be made and hence are not equivalent in the hypotheses one generates from them in one's attempt to understand what is not yet understood. --Feynman