tldr: some free ways to benefit others
Epistemic status: Some things I noticed, made up, or had ChatGPT generate


Idea

What if you had a button that you could press to make other people happy? Pressing the button doesn’t cost you anything besides the negligible effort. How often would you press it? If your answer is somewhere in the range between “Very often” and “Obviously I would build a robot arm that could press the button as fast as physically possible”, then there are probably a couple of things you should be doing more often:

 

Examples

  • Give genuine compliments. Trigger-action plan: If I like something about someone and it’s not inappropriate to say, then I say it. Note: The more unusual the compliment, the better your judgment about appropriateness needs to be. 
  • Share relevant knowledge and resources. Trigger-action plan: If I encounter something that could be useful, then I quickly think about who could also benefit from this and share the info with them. You can also do this with things you already know.
  • Say thank you. Trigger-action plan: If I find something beneficial, then I say “thank you”. Don’t limit this to personal favors! A “thank you” in response to a talk, a public Slack message or a nice-looking garden is very welcome, especially because it rewards providing common goods. When many people benefit from something and could in theory say thank you, often nobody ends up doing it. Saying “thank you” to people who “just do their job” is neglected as well. Also, you can thank people for who they are (as opposed to what they did). You have to know and like a person rather well to be able to do that, but it’s super wholesome.
  • Introduce people to each other, if they consent. Trigger-action plan: If I notice complementary interests in two people, then I first ask both of them (explicitly saying they’re allowed to say “no”), and if they both say “yes”, then I connect them. You could connect applicants and employers, students who want to learn the same thing, or travelers who want to go to the same place.
  • Let people know you thought of them. Trigger-action plan: If something made me think of somebody (positively), then I let them know. Very easy to do, very wholesome.

 

Caveats

  • It’s possible to overdo some of these things. But you can probably always ask whether it’s too much.
  • Maybe some of these do have a non-trivial cost in the form of mental effort. I don’t think they do for me, but maybe it’s different for other people.
  • Someone pointed out that there might also be a cost to your social status if you do a lot of the “be exceptionally nice” things, like giving a lot of compliments, especially if you don’t receive the same niceness in return. While I can’t rule it out, I can’t confirm it either.


Conclusion

That being said, I still think these things are worth doing and are currently neglected, so I want you to do them more often! And I want you to comment with all the happiness buttons I forgot to include in this post, so we can have a nice collection, a common good of common goods.
Please include “#happinessbutton” in every comment that adds more happiness buttons, so they are easier to find.

Comments

“What if you had a button that you could press to make other people happy?”

Ignoring the frame of the post, which assumes some respect for boundaries, there is the following point about the statement taken on its own. Happiness is a source of reward, and rewards rewire the mind. There is nothing inherently good about that: even systematic pursuit of a reward (while you are being watched) is compatible with not valuing the thing being pursued.

I wouldn't want my mind rewired according to some process I don't endorse; by default that's like brain damage, not something good. I wouldn't want to take a pill that would make me want to take more pills like that, because I currently don't endorse fascination with pill-taking activity; that's not even a hypothetical worry in a world filled with superstimuli. If the pill rewires the mind in a way that doesn't induce such a fascination, and does some other thing unrelated to pill-taking, that's hardly better. (AIs are being trained like this, with concerning ethical implications.)

Thanks for your comment; I think it raises an important point, though I'm not sure I have understood it correctly. Are you saying that by doing random things that make other people happy, I would be messing with their reward function? So that I would, for example, reward and thus incentivise random other things the person doesn't really value?

In writing this, I had indeed assumed that while happiness is probably not the only valuable thing and we wouldn't want to hook everybody up to a happiness machine, the marginal bit of happiness in our world would be positive and quite harmless. But maybe superstimuli are a counterexample to that? I have to think about it more.

As I disclaimed, the frame of the post does rule out the relevance of this point; it's not a central response to the post's intended interpretation. I'm more complaining about the background implication that rewards are good (this is not about happiness specifically). Just because natural selection put a circuit in my mind doesn't mean I prefer to follow its instructions, either in ways that natural selection intended or in ways that it didn't. Human misalignment relative to natural selection doesn't need to go along with rewards at all, let alone seeking superstimulus. Rewards probably play some role in the process of figuring out what is right, but there is no robust reason for their contribution to even be pointing in the obvious direction.