aceofspades comments on Checklist of Rationality Habits - Less Wrong
I have read this post and have not been persuaded that people who follow these steps will lead longer or happier lives (or will cause others to live longer or happier lives). I will therefore make no conscious effort to pay much regard to this post, though it is plausible it will have at least a small unconscious effect. I am posting this to fight groupthink and sampling biases, though doing so actually does very little against them.
Longer? Probably not. Happier? Possibly, depending on the person's baseline: we don't know our own desires, and acquiring these skills might help, but given the hedonic treadmill effect it's unlikely. Achieving more of their interim goals? Possible, if not probable. There are a lot of possible goals besides living longer and being happier.
I have decided that maximizing the integral of happiness with respect to time is my selfish supergoal and that maximizing the double integral of happiness with respect to time and with respect to number of people is my altruistic supergoal. All other goals are only relevant insofar as they affect the supergoals. I have yet to be convinced this is a bad system, though previous experience suggests I probably will make modifications at some point. I also need to decide what weight to place on the selfish/altruistic components.
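A minimal sketch of how these supergoals might be written out, assuming a happiness function h(t) for oneself and h(p, t) for person p at time t (notation mine, not anything specified above):

```latex
% Sketch under assumed notation: h(t) is my happiness at time t,
% h(p, t) is person p's happiness at time t, and P is the set of people.
\[
  U_{\text{selfish}} = \int h(t)\,dt
  \qquad
  U_{\text{altruistic}} = \int \int_{P} h(p, t)\,dp\,dt
\]
% The still-undecided weighting could then be a single parameter w:
\[
  U = w\,U_{\text{selfish}} + (1 - w)\,U_{\text{altruistic}}
\]
```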
But although I find such an abstract way of characterizing my actions interesting, the weights and the actual function I'm maximizing are just determined by what I actually end up doing. In fact, constructing this abstract system does not seem to convincingly help me further its purported goal, so I will cease all serious conversation about it.
I think this is a common problem. That doesn't mean you have to give up on having your second-order desires agree with your first-order desires. It is possible to use your abstract models to change your day-to-day behaviour, and it's definitely possible to build a more accurate model of yourself and then use that model to make yourself do the things you endorse yourself doing (i.e. avoiding having to use willpower by making what you want to want to do the "default.")
As for me, I've decided that happiness is too elusive a goal: I'm bad at predicting what will make me happier than baseline, the process of explicitly pursuing happiness seems to make it harder to achieve, and the hedonic treadmill effect means that even if I succeeded, I would have to keep working constantly just to stay in the same place. Instead, I default to a number of proxy measures: I want to be physically fit, so I endorse myself exercising and preferably enjoying exercise; I want to have enough money to satisfy my needs; I want to finish school with good grades; I want to read interesting books; I want to have a social life; I want to be a good friend. Taken all together, these are at least the building blocks of happiness, which happens by itself unless my brain chemistry gets too whacked out.
So the normal chain of events here would just be that I argue those are still all subgoals of increasing happiness and we would go back and forth about that. But this is just arguing by definition, so I won't continue along that line.
To the extent that I understand the first paragraph in terms of what it actually claims at the level of real-world experience, I have never seen evidence supporting it. The second paragraph seems to say what I intended the second paragraph of my previous comment to mean. So it doesn't seem that we disagree about anything important.
Agreed. I find it practical to define my goals as all of those subgoals and not make happiness an explicit node, because it's easy to evaluate my subgoals and measure how well I'm achieving them. But maybe you find it simpler to have only one mental construct, "happiness", instead of lots.
I guess I explicitly don't allow myself to have abstract systems with no measurable components or clear practical implications: my concrete goals take up enough mental space. So my automatic reaction was "you're doing it wrong," but it's possible that having an unconnected mental system doesn't sabotage your motivation the way it does mine. Also, "what I actually end up doing" doesn't, to me, have the connotation of "choosing and achieving subgoals"; it has the connotation of not having goals. But it sounds like that's not what it means to you.
I would argue that altruism should be part of the selfish utility function. You care about other people because you value other people; if you did not value them, there would be no reason for them to appear in your utility function.
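One way to make that concrete (a sketch in notation I'm introducing here, not the commenter's): valuing other people just means a term for their welfare carries nonzero weight inside your own utility function.

```latex
% Sketch, assumed notation: the "selfish" utility function already
% contains a weighted term for other people's welfare.
\[
  U_{\text{self}}(x) = U_{\text{personal}}(x) + \alpha\,U_{\text{others}}(x),
  \qquad \alpha > 0 \;\Leftrightarrow\; \text{you value other people}
\]
```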
Excellent! This nuance of what "selfish" means is something I find myself reiterating all too frequently. (Where the latter means I've done it at least three times that I can recall.)