utilitymonster comments on The Urgent Meta-Ethics of Friendly Artificial Intelligence - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I'm sort of surprised by how people are taking the notion of "reason for action". Isn't this a familiar process when making a decision?
1. For each course of action you're thinking of taking, identify the features (consequences, if that's how you think about things) that count in favor of taking that course of action and those that count against it.
2. Consider how those considerations weigh against each other. (Do the pros outweigh the cons? By how much? Etc.)
3. Then choose the option that does best in this weighing process.
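The procedure above can be sketched as a toy weighing function. This is purely illustrative: the option names and numeric weights are invented, and real deliberation obviously doesn't reduce to summing scalars.

```python
# Toy sketch of the pros-and-cons weighing procedure described above.
# Option names and the numeric weights are invented for illustration.

def choose(options):
    """Pick the option whose reasons-for most outweigh its reasons-against."""
    def net_weight(considerations):
        pros, cons = considerations
        return sum(pros) - sum(cons)  # step 2: weigh pros against cons
    # step 3: choose the option that does best in the weighing
    return max(options, key=lambda name: net_weight(options[name]))

options = {
    # step 1: for each option, list weights of reasons for and against
    "take the job": ([3, 2], [4]),  # net +1
    "stay put":     ([1], [1]),     # net 0
}

print(choose(options))  # prints "take the job"
```

The point of the sketch is only that "reason for action" names an input to a familiar weighing process, not that the weights are context-independent.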
It is not a presupposition of the people talking this way that if R is a reason to do A in a context C, then R is a reason to do A in all contexts.
The people talking this way also understand that a single R might be both a reason to do A and a reason to believe X at the same time. You could also have R be a reason to believe X and a reason to cause yourself not to believe X. Why do you think these things make the discourse incoherent or non-perspicuous? This seems no more puzzling than the familiar fact that believing a certain thing could be epistemically irrational but prudentially rational to cause yourself to believe.