just because I want X doesn't mean I don't also want Y where Y is incompatible with X
In real life you are still forced to choose between X and Y, and through wireheading you can still cycle between X and Y at different times.
This might be one reason why Eliezer talks about morality as a fixed computation.
P.S. Also, doesn't the being itself have a preference for not-suffering?
A problem here seems to be that creating a being in intense suffering would be ethically neutral.
Well, don't existing people have a preference that such creatures not exist? You can have preferences that are about other people, right?
Preference total utilitarianism gives credit for satisfying more preferences; if creating more people is a way of doing this, then it's in favour of doing so.
Shouldn't we then just create people with simpler, easier-to-satisfy preferences, so that there's more preference-satisfying in the world?
To tell which images are cats and which are dogs, you employ some method that is hopelessly impossible for you to write down.
Right, but if/when we get (partial) brain emulations in large quantities, we might be able to do for 'morality' what we do today when we use a computer to recognize cats.
similar to trying to recognize cats in pictures by reading an array of R, G, B values and doing some arithmetic
But a computer can recognize cats by reading pixel values in pictures? Maybe not as efficiently and accurately as people do, but that's because brains have more efficient architectures/algorithms than today's generic computers.
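To make that concrete, here is a minimal sketch of "reading pixel values and doing some arithmetic", using toy random arrays in place of real cat/dog photos; the 32x32 image size and the plain logistic-regression model are illustrative assumptions, not how any particular real system works:

```python
import numpy as np

# Toy stand-in for real data: each "image" is a 32x32 RGB array flattened
# into 3072 numbers. Labels: 1 = cat, 0 = dog. (Random data here, purely
# illustrative -- with real photos the same code learns a real classifier.)
rng = np.random.default_rng(0)
X = rng.random((200, 32 * 32 * 3))   # 200 images as raw R, G, B values in [0, 1)
y = rng.integers(0, 2, 200)          # 200 labels

# Logistic regression: the "method" is just a weighted sum of pixel values
# pushed through a sigmoid, with the weights found by gradient descent.
w = np.zeros(X.shape[1])
b = 0.0
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of "cat"
    grad_w = X.T @ (p - y) / len(y)          # gradient of cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

# Classifying a new image is nothing but arithmetic on its pixel values.
new_image = rng.random(32 * 32 * 3)
prob_cat = 1.0 / (1.0 + np.exp(-(new_image @ w + b)))
print(f"P(cat) = {prob_cat:.2f}")
```

Nobody could write the final weights down by hand, but the procedure that produces them is short; modern systems replace the logistic regression with convolutional networks, which get much closer to human accuracy.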
I think the stupidity of utilitarianism is the belief that morality is about the state, rather than about dynamic processes and state transitions.
"State" doesn't have to mean "frozen state" or something similar, it could mean "state of the world/universe". E.g. "a state of the universe" in which many people are being tortured includes the torture process in it's description. I think this is how it's normally used.
Because people are running on similar neural architectures? So all people would likely experience similar (though not necessarily identical) pleasure from, e.g., some types of food. The more we understand about how different types of pleasure are implemented by the brain, the more precisely we'd be able to tell whether two people are experiencing similar levels/types of pleasure. Once we get to brain simulations, such comparisons might become arbitrarily precise.
Okay, I have a "stupid" question. Why is the longer binary sequence that represents the hypothesis less likely to be 'true' data generator? I read the part below but I don't get the example, can someone explain in a different way?