Peterdjones comments on By Which It May Be Judged - LessWrong
I read this post with a growing sense of unease. The pie example appears to treat "fair" as a 1-place word, but I don't see any reason to suppose it would be. (I note my disquiet that we are both linking to that article, and my worry about how confused this post seems to me.)
The standard atheist reply is tremendously unsatisfying; it appeals to intuition and assumes what it's trying to prove!
My resolution of Euthyphro is "the moral is the practical." A predictable consequence of evolution is that people have moral intuitions, that those intuitions reflect their ancestral environment, and that those intuitions can be variable. Where would I find mercy, justice, or duty? Cognitive algorithms and concepts inside minds.
This article reads like you're trying to move your stone tablet from your head into the world of logic, where it can be as universal as the concept of primes. It's not clear to me why you're embarking on that particular project.
The example of elegance seems like it points the other way. If your sense of elegance is admittedly subjective, why are we supposing a Platonic form of elegance out in the world of logic? Isn't this basically the error where one takes a cognitive algorithm that recognizes whether or not something is a horse and turns it into a Platonic form of horseness floating in the world of logic?
It looks to me like you're trying to say "because classification algorithms can be implemented in reality, there can be real ensembles that embody logical facts, but changing the classification algorithms doesn't change those logical facts," which seems true but I don't see what work you expect it to do.
There's also the statement "when you change the algorithms that lead to outputs, you change the internal sensation of those outputs." That has not been my experience, and I don't see a reason why that would be the case. In particular, when dreaming it seems like many algorithms have their outputs fixed at certain values: my 'is this exciting?' algorithm may return 'exciting!' during the dream but 'boring!' when considering the dream whilst awake, but the sensation that results from the output of the algorithm seems indistinguishable; that is, being excited in a dream feels the same to me as being excited while awake. (Of course, it could be that whichever part of me is able to differentiate between sensations is also malfunctioning while dreaming!)
If you show me the pattern of neurons firing that happens when my bladder is full, then my bladder won't feel full. If you put an electrode in my head (or use induction, or whatever) and replicate that pattern of neurons firing, then my bladder will feel full, because the feeling of fullness is the output of those neurons firing in that pattern.
You sure it's not just executing an adaptation? Why?
How do you avoid prudent predation?
I think the author of that piece needs to learn the concept of precommitment. Precommitting to one-box is not at all the same as believing that one-boxing is the dominant strategy in the general Newcomb problem. Likewise, precommitting not to engage in prudent predation is not a matter of holding a counterfactual belief, but of taking a positive-expected-utility action.
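A toy expected-value calculation makes the precommitment point concrete. The 0.9 predictor accuracy and the $1,000,000 / $1,000 payoffs below are illustrative assumptions, not anything from the thread:

```python
# Toy Newcomb calculation: expected value of precommitting to one-box
# versus two-boxing, given a predictor of assumed accuracy.

ACCURACY = 0.9      # assumed probability the predictor forecasts your choice correctly
BIG = 1_000_000     # opaque box contents if one-boxing was predicted
SMALL = 1_000       # transparent box contents

# Precommit to one-boxing: the predictor usually foresees it and fills the big box.
ev_one_box = ACCURACY * BIG + (1 - ACCURACY) * 0

# Two-box: the predictor usually foresees that too, and leaves the big box empty.
ev_two_box = ACCURACY * SMALL + (1 - ACCURACY) * (BIG + SMALL)

print(f"one-box EV: ${ev_one_box:,.0f}")  # one-box EV: $900,000
print(f"two-box EV: ${ev_two_box:,.0f}")  # two-box EV: $101,000
```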
Are there moral systems used by humans that avoid prudent predation, and are not outcompeted by moral systems used by humans that make use of prudent predation?
I will note that the type of predation that is prudent has varied significantly over time, and correspondingly, so have moral intuitions. Further altering the structure of society will again alter the sort of predation that is prudent, and so one can seek to restructure society so disliked behavior is less prudent and liked behavior is more prudent.
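To sketch why prudence is structure-dependent, here is a toy expected-value check; all the numbers, and the predation_ev helper itself, are made up for illustration:

```python
# Toy model: whether "predation" pays depends on how society is structured.
# gain, penalty, and detection_rate are made-up parameters.

def predation_ev(gain, penalty, detection_rate):
    """Expected value of predating when there is a chance of being caught."""
    return (1 - detection_rate) * gain - detection_rate * penalty

# Weak enforcement: predation is prudent.
print(predation_ev(gain=10, penalty=50, detection_rate=0.125))  # 2.5

# Restructure society to raise the detection rate: the same act becomes imprudent.
print(predation_ev(gain=10, penalty=50, detection_rate=0.5))    # -20.0
```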
I find it hard to make sense of that. I don't think people go in for morality for selfish gain, and the very idea may be incoherent.
Maybe. I don't see what your point is. If the moral is not the practical, and if prudent predation is wrong, that would not imply morality is timeless, and vice versa.
The claim is that moral intuitions exist because they were selected for, and they must have been selected for because they increased reproductive fitness. Similarly, we should expect moral behavior to the degree that morality is more rewarding than immorality. (The picture is muddied by there being both genetic and memetic evolution, but the basic idea survives.)
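As a minimal sketch of that selection claim, here is textbook replicator dynamics with arbitrary payoff numbers; a trait's frequency grows exactly when its payoff beats the population average:

```python
# Minimal replicator-dynamics sketch. The payoffs 1.1 and 1.0 are arbitrary;
# the only point is that the better-rewarded trait spreads.

def step(p, w_moral, w_immoral):
    """One generation: frequency p is reweighted by relative payoff."""
    w_avg = p * w_moral + (1 - p) * w_immoral
    return p * w_moral / w_avg

p = 0.5
for _ in range(20):
    p = step(p, w_moral=1.1, w_immoral=1.0)
print(round(p, 3))  # ~0.871: the trait with the higher payoff is taking over
```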
But morality isn't just moral intuitions. It includes "eat fish on Friday."
That doesn't follow. Fitness-enhancing and gene-spreading behaviour don't have to reward the organism concerned. What's the reward for self-sacrifice?
That's a considerable understatement.
Sure. We should expect such rules to be followed to the degree that they are prudent.
There are several: kin selection, reciprocal altruism, and so on. In some cases, self-sacrifice is the result of a parasitic relationship. (Kin selection appears to have a memetic analog as well, but I'm not familiar with work that develops that concept rigorously and distinguishes it from normal alliance behaviors; it might just be a subset of that.)
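For the kin-selection case, the standard quantitative condition is Hamilton's rule: a gene for self-sacrifice can spread whenever

$$ r\,b > c, $$

where $r$ is the relatedness between altruist and beneficiary, $b$ is the fitness benefit conferred, and $c$ is the fitness cost to the altruist. (Worker bees, with $r = 0.75$ between full sisters under haplodiploidy, are the textbook example, which bears on the bee case below.)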
Again, I have no idea what you mean. Morality does not predict self-centered prudence, since it enjoins self-sacrifice, and evolution doesn't predict self-centered prudence in all cases. It is not selfishly prudent for bees to defend their colony, or for male praying mantises to mate.
Rewards for whom?
If you pass on the idea that self-sacrifice is virtuous, in a persuasive sort of way (such as by believing it yourself), you're marginally more likely to enjoy the benefits of having someone willing to sacrifice their own interests nearby when you particularly need such a person. Of course, sometimes that meme kills you. Some people are born with sickle-cell anemia and never get the opportunity to benefit from malaria resistance; evolution doesn't play nice.