Stuart_Armstrong comments on Hedonium's semantic problem - Less Wrong

12 Post author: Stuart_Armstrong 09 April 2015 11:50AM


Comment author: Stuart_Armstrong 10 April 2015 11:22:13AM 2 points

probably an animal-level intelligence could be just as happy.

In that case it's not a human-comparable intelligent agent experiencing happiness. So I'd argue that either a) hedonium needs to be more complex than expected, or b) the definition of happiness does not require high-level agents experiencing it.

And I'm arguing that the minimum complexity should be higher than the human level, since you need not only a mind, but also an interaction with an environment of sufficient complexity to ground it as a mind.

At first you only mentioned the hedonium scenario as one where we took a single maximally happy state and copied it across the universe to obtain the maximum density of happiness; now you seem to be talking about something like "would it be possible to take all currently living humans and make them maximally happy while preserving their identity". This is a very different scenario from just the plain hedonium scenario.

That's the point. I don't think that the first setup would count as a happy state if copied in the way described.