Peterdjones comments on General purpose intelligence: arguing the Orthogonality thesis - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
"There are sets of objective moral truths such that any rational being that understood them would be compelled to follow them". The arguments seem mainly to be:
1) Playing around with the meaning of "rationality" until you get the desired result (e.g. "any rational being would realise their own pleasure is no more valid than that of others", or "pleasure is the highest principle, and any rational being would agree with this, or else be irrational").
2) Convergence among human values.
3) Moral progress for society: we're better than we used to be, so there needs to be some scale to measure the improvements.
4) Moral progress for individuals: when we think about things a lot, we make better moral decisions than when we were young and naive. Hence we're getting better at moral reasoning, so there is some scale on which to measure this.
5) Playing around with the definition of "truth-apt" (able to have a valid answer) in ways that strike me, uncharitably, as intuition-pumping word games. When confronted with this, I generally end up saying something like "my definitions do not map exactly onto yours, so your logical steps are false dichotomies for me".
6) Realising things like "if you can't be money-pumped, you must be an expected utility maximiser", which implies that expected utility maximisation is superior to other reasoning, and hence that some methods of moral reasoning are strictly inferior to others. From there it's argued that there must be better ways of moral reasoning and (this is the place where I get off) a single best way (though that last step is generally implicit, never explicit).
I could add: Objective punishments and rewards need objective justification.
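The money-pump point in (6) can be shown with a toy sketch. Everything here is invented for illustration (the items, the preference cycle, the 1-unit fee): an agent with intransitive preferences A > B > C > A will pay to trade around the cycle indefinitely, losing money while ending up where it started, which is the standard argument that such preferences are defective.

```python
# Illustrative money pump against an agent with cyclic (intransitive)
# preferences. Names and amounts are made up for this sketch.

# The agent prefers A over B, B over C, and C over A -- a cycle.
PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}

def will_trade(held, offered):
    """The agent accepts any trade that swaps `held` for something it prefers."""
    return (offered, held) in PREFERS

def run_money_pump(rounds=3):
    """Charge the agent a 1-unit fee per trade, always offering an upgrade."""
    held, money = "A", 10
    for _ in range(rounds):
        # Offer the item the agent prefers to what it currently holds.
        offered = next(x for x in "ABC" if will_trade(held, x))
        held, money = offered, money - 1  # agent accepts and pays the fee
    return held, money

item, money = run_money_pump(rounds=3)
# After one full cycle the agent is back to holding "A" but 3 units poorer.
```

This only shows that cyclic preferences are exploitable; the contested step in (6) is the further leap from "some reasoning methods are strictly inferior" to "there is a single best one".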