Will_Newsome comments on Metacontrarian Metaethics - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Preferences A will be better satisfied if the agent actually has preferences B than if it actually has preferences A. So the way to get what you would have wanted is to want something different. For example, if I prefer '1' but I know that someone is going to average my preference with that of someone who prefers '0', then I can make '1' happen by modifying myself to prefer '2' instead of '1'. So averaging sucks.
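To make the arithmetic concrete, here's a minimal Python sketch of the averaging game described above; the function and variable names are illustrative, not from any real system.

```python
# Minimal sketch of the averaging game above. All names are
# illustrative; the mechanism just takes the mean of reported
# preferences over a one-dimensional outcome.

def averaged_outcome(reports):
    """The mechanism: outcome is the mean of the reported preferences."""
    return sum(reports) / len(reports)

true_pref = 1.0      # what I actually want
other_report = 0.0   # the other agent reports '0'

# Reporting honestly, the outcome lands halfway: (1 + 0) / 2 = 0.5.
print(averaged_outcome([true_pref, other_report]))         # 0.5

# To drag the average onto my true preference, solve
# (x + 0) / 2 = 1  =>  x = 2: I 'modify myself' to prefer '2'.
strategic_report = 2 * true_pref - other_report
print(averaged_outcome([strategic_report, other_report]))  # 1.0
```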
Yeah, the evolutionary incentives (in the Universal Darwinian sense that includes Hebbian learning) for a belief, attention signal, meme, or person to game the differential comparisons made by overseer/peer algorithms (which are themselves just rent-seeking half the time) whenever possible are a big source of dukkha (suffering, imperfection, off-kilteredness). An example at the memetic-societal level: http://lesswrong.com/lw/59i/offense_versus_harm_minimization/3y0k
In the torture/specks case it's a little tricky. If no one knows that you're going to average their preferences and they won't ever find out, and all of their preferences are already the result of billions of years of self-interested system-gaming, then at least averaging doesn't throw more fuel on the fire. Unless preferences have evolved to exaggerate themselves to game systems-in-general, due to incentives created by the general strategy of averaging preferences, in which case you might want to have precommitted to avoid averaging. Of course, it's not like you can avoid having to take the average somewhere, at some level of organization...
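To illustrate that last worry, here's a toy Python iteration in which two agents each best-respond to the other's last report under the same mean-of-reports rule as above. It's a sketch under those assumptions only, not a model of anything in the post.

```python
# Toy exaggeration arms race under a mean-of-reports mechanism,
# assuming each agent best-responds to the other's previous report.
true_prefs = {"a": 1.0, "b": 0.0}
reports = dict(true_prefs)  # both start out honest

for round_num in range(5):
    # Simultaneous update: each agent picks the report that would
    # pull the average exactly onto its own true preference.
    reports = {
        "a": 2 * true_prefs["a"] - reports["b"],
        "b": 2 * true_prefs["b"] - reports["a"],
    }
    outcome = (reports["a"] + reports["b"]) / 2
    print(round_num, reports, outcome)

# The reports escalate without bound while the outcome stays pinned
# at 0.5: everyone burns effort exaggerating and nobody gains.
```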