Kaj_Sotala comments on Hedonium's semantic problem - Less Wrong
Now replying to actual meat of the post:
This post seems to be mostly talking about the questions of "what is intelligence" and "what is meaning", while implying that answering those questions would also help figure out the answer to "what's the minimum requirement for the subjective experience of happiness".
But it doesn't seem at all obvious to me that these are the same question!
Research on the requirements for subjective experience doesn't, as far as I know, say anything about whether something is intelligent or has meaning. E.g. Thomas Metzinger has argued that a neural representation becomes a phenomenally conscious representation if it's globally available to the system (for deliberately guided attention, cognitive reference, and control of action), activated within a window of presence (subjectively perceived as being experienced now), bound into a global situational context (experienced as being part of a world), etc. Some researchers have focused on specific parts of these criteria, like global availability.
Now granted, if your thesis is that a hedonium or mind crime algorithm seems to require some minimum amount of complexity which might be greater than some naive expectations, then the work I've mentioned would also support that. But that doesn't seem to me like it would prevent hedonium scenarios - it would just put some upper bound on how dense with pleasure we can make the universe. And I don't know of any obvious reasons for why the required level of complexity for experiencing subjective pleasure would necessarily be even at the human level: probably an animal-level intelligence could be just as happy.
Later on in the post you say:
But now you seem to be talking about something else than in the beginning of the post. At first you only mentioned the hedonium scenario as one where we took a single maximally happy state and copied it across the universe to obtain the maximum density of happiness; now you seem to be talking about something like "would it be possible to take all currently living humans and make them maximally happy while preserving their identity". This is a very different scenario from just the plain hedonium scenario.
In that case it's not a human-comparable intelligent agent experiencing happiness. So I'd argue that either a) hedonium needs to be more complex than expected, or b) the definition of happiness does not require high-level agents experiencing it.
And I'm arguing that the minimum complexity should be higher than the human level, as you need not only a mind, but also an interaction with an environment of sufficient complexity to ground it as a mind.
That's the point. I don't think that the first setup would count as a happy state, if copied in the way described.