Reminds me of a story, set in a lazy Mark Twain river town. Two friends walking down the street. First says to second, "See that kid? He is really stupid." Second asks, "Why do you say that?" First answers, "Watch". Approaches kid. Holds out nickel in one hand and dime in the other. Asks kid which he prefers. "I'll take the nickel. It's bigger". Man hands nickel to kid with smirk, and the two friends continue on.
Later the second man comes back and attempts to instruct the kid. "A dime is worth twice as much; that is, it buys more candy," says he, "even though the nickel looks bigger." The kid gives the man a pitying look. "Ok, if you say so. But I've made seven nickels so far this month. How many dimes have you made?"
Which brings me to my real point - empirical research, I'm sure you have seen it, in which player 1 is asked to specify a split of $10 between himself and player 2. Player 2 then chooses to accept or reject. If he rejects, neither player gets anything. As I recall, when a greedy player 1 claims more than about 70% for himself, player 2 frequently rejects, even though rejecting costs him money. In classical "rational agent" game theory this can only be explained by postulating that player 2 does not believe the researchers' claim that the game is one-shot.
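The game described above is easy to sketch in code. A minimal toy model, assuming a fixed 30% rejection threshold for the responder (an assumption drawn loosely from the ~70% figure above; real subjects vary):

```python
# Toy sketch of the one-shot ultimatum game.
# The 30% rejection threshold is an illustrative assumption,
# not a claim from the research literature.

def ultimatum(total, proposer_share, rejection_threshold=0.3):
    """Return (proposer_payoff, responder_payoff) for one round."""
    offer = total - proposer_share
    if offer < rejection_threshold * total:
        return (0, 0)  # responder rejects; both players get nothing
    return (proposer_share, offer)

# A fair split is accepted:
print(ultimatum(10, 5))   # (5, 5)
# A greedy split (proposer keeps $8, offering only $2) is rejected:
print(ultimatum(10, 8))   # (0, 0)
```

The point of the model is that the responder's rejection is a pure loss to himself, which is exactly what makes the observed behavior puzzling to a classical rational-agent analysis.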
What is the point? Well, perhaps people who have read about Newcomb problems are assuming (like most people in the research) that, somehow or other, greed will be punished.
Where should the line be drawn regarding the status of animals as moral objects/entities? E.g., do you think it is ethical to boil lobsters alive? It seems to me there is a full spectrum of possible answers: at one extreme only humans are valued, or only primates, only mammals, only vertebrates; at the other extreme, any organism with even a rudimentary nervous system (or any computational, digital isomorphism thereof) could be seen as a moral object/entity.
Now this is not necessarily a binary distinction: if shrimp have intrinsic moral value, it does not follow that they must have an equal value to humans or other 'higher' animals. As I see it, there are two possibilities: either we come to a point where the moral value drops to zero, or else we decide that the value of simpler entities approaches zero without ever reaching it: e.g. a C. elegans roundworm with its roughly 300 neurons might have a 'hedonic coefficient' of 3x10^-9. I personally favor the former; the latter just seems absurd to me, but I am open to arguments or any comments/criticisms.
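The two possibilities above can be made concrete with a toy calculation. A minimal sketch, assuming (purely for illustration) that moral value scales with neuron count relative to a human brain; the cutoff value and the scaling rule are my assumptions, not anything established:

```python
# Toy sketch of the two positions: a hard cutoff vs. a value that
# approaches zero asymptotically. Neuron counts are rough estimates
# and the linear scaling rule is an illustrative assumption.

HUMAN_NEURONS = 8.6e10  # rough estimate for a human brain

def value_with_cutoff(neurons, cutoff=1e6):
    """Position 1: moral value drops to exactly zero below a threshold."""
    return neurons / HUMAN_NEURONS if neurons >= cutoff else 0.0

def value_continuous(neurons):
    """Position 2: moral value gets arbitrarily small but never zero."""
    return neurons / HUMAN_NEURONS

# C. elegans (~300 neurons): exactly zero under the cutoff view,
# a tiny positive 'hedonic coefficient' under the continuous view.
print(value_with_cutoff(300))
print(value_continuous(300))
```

Under these made-up numbers the continuous view assigns the roundworm a coefficient of a few parts in 10^9, in the same ballpark as the 3x10^-9 figure above; the cutoff view assigns it exactly zero.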
Suppose sentient beings have intrinsic value in proportion to how intensely they can experience happiness and suffering. Then the value of invertebrates and many non-mammal vertebrates is hard to tell, while any mammal is likely to have almost as much intrinsic value as a human being, some possibly even more. But that's just the intrinsic value. Humans have a tremendously greater instrumental value than any non-human animal, since humans can create superintelligence that can, with time, save tremendous numbers of civilisations in other parts of the universe from suffering (yes, they are sparse, but with time our superintelligence will find more and more of them, in theory ultimately infinitely many).
The instrumental value of most humans is enormously higher than their intrinsic value - given that they do sufficiently good things.