Regarding the first paragraph: Eliezer isn't criticising the Drowning Child story as it plays out in our world, but as it plays out in dath ilan. dath ilan is utilitarian about such questions, since more or less everyone there is utilitarian when children's lives are at stake. We don't live in dath ilan. In our world, it's often the altruistic parts that hammer down the selfish parts, or the warm-fuzzies parts that hammer down the utilitarian ones as heartless and cruel.
EA sometimes does the opposite, and there are plenty of stories of burnout.
And in the larger scheme of things, what I want is a way to find the actions that would represent my values to the fullest. That's a problem when I can't simply learn from dath ilan, which treats as fungible many things that are not fungible on Earth.
A response to: Self-Integrity and the Drowning Child
On Internal Integrity
Eliezer criticises Peter Singer's The Child in the Pond thought experiment on the basis that it is an "outside assault on your internal integrity". He explains that it was designed to "let your altruistic part hammer down the selfish part... in a way that would leave it feeling small and injured and unable to speak in its own defense."
There is a lot of truth to this framing. However, one critique I have is that we cannot talk about the proper way to resolve conflicts between values in the abstract; we can only do so in relation to particular meta-values (by which I mean the values that we use to resolve conflicts between values).
Perhaps there are some people whose meta-values are such that letting one part hammer down another is the truest expression of themselves, insofar as it makes sense to think of people as having a true self at all. And insofar as it doesn't make sense, the critique of sacrificing one's internal integrity doesn't make sense either.
As an example, there are many people who would prefer pain and suffering over mediocrity; who desire greatness and who are willing to forge themselves into a kind of metaphorical weapon. These people are exceedingly rare and hence special. They are deserving of the utmost praise and recognition.
I don't think Eliezer has any kind of basis for saying that these people are mistaken and should adopt his meta-values.
On the other hand, I think he raises an important point. I suspect that the vast majority of people, myself included, do in fact share Eliezer's meta-value of not wanting one of our parts to hammer down another. Further, I agree with his contention that most of us have selfish and unselfish parts, rather than these desires being completely fungible at some fixed rate.
There are many ways we could model this, but I tend to imagine people becoming less willing to sacrifice utility for the common good the less utility they themselves have. Perhaps at some point people hit a minimum where they are no longer willing to make any contribution towards the common good (though this isn’t necessarily the same as being willing to trash it). Or at least, this is roughly what I suspect my own utility function looks like.
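As a purely illustrative toy formalisation of that picture (the linear shape, the threshold $u_0$, and the slope $\alpha$ are arbitrary assumptions for the sake of the sketch, not anything I'm committed to), someone's willingness to sacrifice for the common good, expressed as a fraction of what they could give, might look something like:

$$w(u) = \min\big(1,\ \max\big(0,\ \alpha\,(u - u_0)\big)\big)$$

where $u$ is their personal utility. Below the threshold $u_0$ they contribute nothing (without necessarily wanting to trash the common good), and above it their willingness grows the better off they are.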
On Motivation
I don’t find myself motivated to pursue a particular path for my life unless I believe that it will lead to my overall happiness. The path of pure selfishness doesn't hold any appeal to me either.
I will use motivation to refer to either willpower, a felt sense of desire, or some combination of the two. I am capable of taking some actions without much motivation, for example making small donations or providing quick feedback. Larger projects, on the other hand, are much more constrained by motivation, and how I choose to spend my life is constrained by it even further.
While there are methods such as pre-commitment or building habits that reduce the amount of motivation required, using these methods still requires a certain level of motivation. Even if the minimum motivation needed to pursue certain projects isn’t as high as we might think, I don’t see motivation’s impact as a mere step function: barely exceeding the minimum is likely to greatly reduce both the likelihood of success and the expected impact.
Given these constraints, I feel that I “ought” to strive to find a path that allows me both to achieve my own happiness and to make a significant impact on the world. This might seem like a triviality (“Well, obviously it’s better to achieve both objectives rather than just one”), but I mean it in a rather strong sense: if a path doesn't offer a route to both, I think I “should” reject it in order to spend time searching for alternatives. We can imagine someone offering the opposite counsel: “You’ve already spent a long time searching for such a path - you need to just pick one” or “It is irrational to refuse to accept harsh realities”.
Obviously, I can imagine worlds where I ought to follow this advice. In some worlds, there would be a hard limitation blocking me from achieving both. And I can imagine circumstances where I would decide that this strategy hadn't worked out and that it would be time to swallow the bitter pill of only being able to choose one. I guess my contention is that I ought to lean towards the “hold out longer” school of thought.
All this said, my gut intuition is that from my current position it is possible to succeed at both. I'm not going to attempt to articulate my reasons for this, as doing so feels like a distraction from the key point of this post.
I guess, for me, the main point of this post is to articulate a particular perspective, rather than to justify its application to my circumstances. I think there are significant advantages to pursuing an explicit "pursue both" plan rather than just happening to do it. After all, only when you have a plan can you see whether things are going according to plan.
And even if my theory doesn't turn out to be true, I don't think I'll end up regretting having pursued both, as I suspect that the motivational boost will more than compensate for any inefficiency resulting from having made a strategic error.
One point I need to clarify is what I mean by making a significant difference. If I were, for example, to volunteer locally for some warm fuzzies and forget about existential risk, then even though this might satisfy some people’s altruistic goals, it wouldn’t be sufficient for me. I define my altruistic goals as making a difference that is significant in light of what I know about the fragility of humanity, whether or not I could make an even larger impact by being more self-sacrificing. I would be much more worried about the future of humanity if I learned that a lot of EAs/rationalists were settling in the former sense (for warm fuzzies) than if I learned they were settling in the latter sense (for significant but not maximal impact).
Addendum: The Japanese have the concept of Ikigai, which covers the intersection of what we love, what we're good at, what the world needs, and what we can be paid for. It's very similar to what I'm proposing here, except that I've presented it as the fusion of only two things (what needs to be done and what I want to do), leaving the monetary and skill constraints implicit in the background.