I laughed: SMBC comic.
SMBC comic: poorly programmed average-utility-maximizing AI
It's a total-utility maximising AI.
If it were a total-utility-maximizing AI, it would clone the utility monster (or start cloning everyone else if the utility monster's utility is superlinear). Edit: on the other hand, if it were an average-utility-maximizing AI, it would kill everyone else, leaving just the utility monster. In any case there'd be some serious population 'adjustment'.
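To make the population-'adjustment' point concrete, here is a toy calculation (a minimal sketch with invented numbers; FELIX_UTILITY, PERSON_UTILITY, and the population count are assumptions for illustration, not anything from the comic):

```python
# Toy illustration of how total- vs. average-utility maximization treats a
# utility monster ("Felix") living among ordinary people. Numbers are made up.

FELIX_UTILITY = 1_000_000     # assumed utility of one copy of Felix
PERSON_UTILITY = 10           # assumed utility of an ordinary person
N_PEOPLE = 7_000_000_000      # assumed ordinary population

def total_utility(n_felix: int, n_people: int) -> float:
    return n_felix * FELIX_UTILITY + n_people * PERSON_UTILITY

def average_utility(n_felix: int, n_people: int) -> float:
    return total_utility(n_felix, n_people) / (n_felix + n_people)

# A total-utility maximizer gains a million utils per extra Felix clone,
# so it keeps cloning him.
print(total_utility(1, N_PEOPLE))      # 70_001_000_000
print(total_utility(1_000, N_PEOPLE))  # 71_000_000_000  (strictly better)

# An average-utility maximizer raises the mean by removing ordinary people,
# since each of them drags the average far below Felix's level.
print(average_utility(1, N_PEOPLE))    # ~10.0
print(average_utility(1, 0))           # 1_000_000.0  (best possible average)
```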
Felix means happy (or lucky), and is the origin of the word felicity. It took me a while to realize this, so I thought I would note it. Is it obvious to all native English speakers?
The latest SMBC comic is now an illustrated children's story which more or less brings up parallel thoughts to Cynical about Cynicism.
Everyone's talking about this as if it were a hypothetical, but as far as I can tell it describes pretty accurately how hierarchical human civilizations tend to organize themselves once they hit a certain size. Isn't a divine ruler precisely someone who is more deserving and more able to absorb resources? Aren't the lower orders people who would not appreciate luxuries, and who have indeed fully internalized that fact ("Not for the likes of me")?
If you skip the equality requirement, it seems history is full of utilitarian societies.
Felix is 3^^^3 units happy. And no dust speck in his eyes. What is the torture of millions compared to this noble goal?
I, of course, reject that "sequence" which preaches exactly this.
That's because your brain doesn't have the ability to imagine just how happy Felix is and fails to weigh his actual happiness against humanity's.
Look. You have one person under terrible torture for 50 years on one side, and a gazillion people with a slight discomfort every year or so on the other side.
It is claimed that the first is better.
Now, you have a small humanity as it is, only enslaved to build pyramids for Felix. He has eons of subjective time to enjoy these pyramids, and he is unbelievably happy. Happier than any man, woman, or child could ever be. Felix's happiness outweighs the misery of billions of people by a factor of a million.
What's the fundamental difference between those two cases? I don't see it, do you?
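For concreteness, here is a toy rendering of the two comparisons being asked about (all numbers invented purely for illustration):

```python
# Toy numbers for the two trades: torture vs. dust specks, and
# enslaved humanity vs. an unbelievably happy Felix.

# Case 1: one person tortured for 50 years vs. a huge crowd each getting
# a barely noticeable dust speck.
TORTURE_DISUTILITY = 10_000_000
SPECK_DISUTILITY = 1e-6
CROWD = 3 ** 30                     # stand-in for "a gazillion" people

specks_total = CROWD * SPECK_DISUTILITY
print(specks_total > TORTURE_DISUTILITY)   # True: the specks sum to more harm

# Case 2: billions of enslaved pyramid builders vs. one Felix whose happiness
# is stipulated to outweigh their misery by a factor of a million.
SLAVE_DISUTILITY = 100
N_SLAVES = 7_000_000_000
felix_happiness = N_SLAVES * SLAVE_DISUTILITY * 1_000_000

print(felix_happiness > N_SLAVES * SLAVE_DISUTILITY)  # True by construction
```

In both cases a straight sum of utilities favors the side with the astronomically larger total; that is the structural parallel being drawn.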
The only similarity between those cases is that they involve utility calculations you disagree with. Otherwise every single detail is completely different (e.g. the sort of utility considered, two negative utilities being traded against each other vs. trading utility elsewhere (positive and negative) for positive utility, which side of the trade the single person with the large individual utility difference is on, the presence of perverse incentives, etc.).
If anything, it would be more logical to equate Felix with the tortured person and treat this as a reductio ad absurdum of your position on the dust speck problem. (But that would be wrong too, since the numbers aren't actually the problem with Felix; the problem, among other things, is that there's an incentive to manipulate your own utility function that way.)
No. This reduces the words to the point of meaninglessness. Human beings have values other than reproduction, values that make them happy when satisfied - art, pride, personal achievement, understanding, etc. Wireheading is about being made happy directly, regardless of the satisfaction of the various values.
The scenario previously discussed about Felix is that he was happy and everyone else suffered. Now you're posing a scenario where everyone is happy, but they're made happy by having their values rewritten to place extreme value on Felix's happiness instead.
At this point, I hope we're not pretending it's the same scenario with only minor modifications, right? Your scenario is about the AI rewriting our values, it's not about trading our collective suffering for Felix's happiness.
Your scenario can effectively remove the person of Felix from the situation altogether, and the AI could just make us all very happy that the laws of physics keep on working.
You say art... what if I am a musician and I am making a song? That's good, right? What if I get 100 experimental subjects to sit in an MRI while they listen to test music, and, using my intelligence and some software tools, I make a very pleasurable song? What if I know that it works by activating such-and-such connections here and there, which end up activating the reward system? What if I don't use the MRI, but use internal data available in my own brain to achieve the same result?
I know that this is arriving at meaninglessness; I just don't see it as reducing the words ...