Stuart_Armstrong comments on Hedonium's semantic problem - LessWrong
In that case it's not a human-comparable intelligent agent experiencing happiness. So I'd argue that either a) hedonium needs to be more complex than expected, or b) the definition of happiness does not require high-level agents experiencing it.
And I'm arguing that the minimum complexity should be higher than the human level, since you need not only a mind, but also an interaction with an environment of sufficient complexity to ground it as a mind.
That's the point. I don't think the first setup would count as a happy state if copied in the way described.