What if you had a button that you could press to make other people happy?
Ignoring the frame of the post, which assumes some respect for boundaries, the statement taken on its own invites the following point. Happiness is a source of reward, and rewards rewire the mind. There is nothing inherently good about that; even the systematic pursuit of a reward (while you are being watched) is compatible with not valuing the thing being pursued.
I wouldn't want my mind rewired by some process I don't endorse; by default that's like brain damage, not something good. I wouldn't want to take a pill that would make me want to take more pills like it, because I currently don't endorse a fascination with pill-taking; that's not even a hypothetical worry in a world filled with superstimuli. If the pill rewires the mind in a way that doesn't induce such a fascination, but does some other thing unrelated to pill-taking, that's hardly better. (AIs are being trained like this, with concerning ethical implications.)
Thanks for your comment; I think it raises an important point, though I'm not sure I've understood it correctly. Are you saying that by doing random things that make other people happy, I would be messing with their reward function? So that I would, for example, reward and thus incentivise random other things the person doesn't really value?
In writing this, I had indeed assumed that, while happiness is probably not the only valuable thing and we wouldn't want to hook everybody up to a happiness machine, a marginal bit of happiness in our world would be positive and quite harmless. But maybe superstimuli are a counterexample to that? I'll have to think about it more.
As I disclaimed, the frame of the post does rule out the relevance of this point; it's not a central response to the post on its intended interpretation. I'm more complaining about the background implication that rewards are good (this is not about happiness specifically). Just because natural selection put a circuit in my mind doesn't mean I prefer to follow its instructions, whether in ways natural selection intended or in ways it didn't. Human misalignment relative to natural selection doesn't need to operate through rewards at all, let alone through seeking superstimuli. Rewards probably play some role in the process of figuring out what is right, but there is no robust reason to expect their contribution even to point in the obvious direction.
tl;dr: some free ways to benefit others
Epistemic status: Some things I noticed, made up, or had ChatGPT generate.
Idea
What if you had a button that you could press to make other people happy? Pressing the button costs you nothing besides a negligible amount of effort. How often would you press it? If your answer falls anywhere between "Very often" and "Obviously I would build a robot arm to press the button as fast as physically possible", then there are probably a few things you should be doing more often:
Examples
Caveats
Conclusion
That being said, I still think these things are worth doing and are currently neglected, so I want you to do them more often! And I want you to comment with all the happiness buttons I forgot to include in this post, so we can build a nice collection, a common good of common goods.
Please include “#happinessbutton” in every comment that adds more happiness buttons, so they are easier to find.