We act not for the sake of pleasure alone. We cannot solve the Friendly AI problem just by programming an AI to maximize pleasure.
IAWYC, but would like to hear more about why you think the last sentence is supported by the previous sentence. I don't see an easy argument from "X is a terminal value for many people" to "X should be promoted by the FAI." Are you supposing a sort of idealized desire fulfilment view about value? That's fine--it's a sensible enough view. I just wouldn't have thought it so obvious that it would be a good idea to go around invisibly assuming it.
Both you and Eliezer seem to be replying to this argument:
1. People only intrinsically desire pleasure.
2. An FAI should maximize whatever people intrinsically desire.
3. Therefore, an FAI should maximize pleasure.
I agree that this argument fails, for the reasons you cite. But who is actually making it? Is this supposed to be the best argument for hedonistic utilitarianism?