All of Kaura's Comments + Replies

Kaura

Fellow effective altruists and other people who care about making things better, especially those of you who mostly care about minimising suffering: how do you stay motivated in the face of the possibility of infinities, or even just the vast numbers of morally relevant beings outside our reach?

I get that it's pretty silly to get so distressed over issues there's nothing I can do about, but I can't help feeling discouraged when I think about the vast amount of suffering that probably exists - I mean, it doesn't even have to be infinite to feel like a botto... (read more)

banx
I remind myself that I care about each individual that can be helped by my action. Even if there are huge numbers of individuals I can't help, there are some I CAN help, and helping each one is worthwhile.
Kaura

Negative: a couple decided to go poly after some years in a stable monogamous relationship. It seemed to go well for a few months, but the guy apparently told a few white lies here and there, which then got completely out of control and eventually resulted in a disaster for pretty much everyone involved.

Neutral/negative: a couple was poly for maybe half a year or so, then decided it was "too much trouble" and returned to monogamy. I don't know them well enough to be able to provide more details, but they have been together for a few years after t... (read more)

Kaura

Thanks! No need for a lengthy debate, I'm just very curious about how people decide where to donate, especially when the process leads to explicitly non-EA decisions. Your reasons are in fact pretty close to what I would have guessed, so I suppose similar intuitions are quite common and might explain part of why an idea as obvious as effective altruism took so long to develop.

But yeah, a subthread about this in the OT sounds like a good idea (unless I can find lots of old discussions on the subject).

Kaura

I am not completely sold on effective altruism and might also donate to the Red Cross or so.

Interesting, why is this? Do you mean effective altruism as a concept, or the EA movement as it currently is?

Metus
I am not going to start a lengthy discussion on this subject as this is not the place for it, so please do not read the lack of any further answers as anything other than the statement above. That being said ... I am not completely sold on the premise that all human lives are equal, which puts the whole idea of a cheaper saved life in question. I am not donating out of a moral imperative but out of personal preference, so my donations exhibit decreasing marginal utility, making diversification a necessity. And finally, I have a generally massive skepticism towards anything and anyone that claims to solve a huge, long-standing problem like poverty, as the EA movement tends to do. This is the rough sketch of my reservations. I will not discuss it further here, but I am willing to discuss it in a more appropriate place, like a separate thread or the open thread.
Kaura

Thanks! Ah, I'm probably just typical-minding like there's no tomorrow, but I find it inconceivable to place much value on the number of branches you exist in. The perceived continuation of your consciousness will still go on as long as there are beings with your memories in some branch: in general, it seems to me that if you say you "want to keep living", you mean you want there to be copies of you in some of the possible futures, waking up the next morning doing stuff present-you would have done, recalling what present-you thought yesterday, an... (read more)

DanielFilan
As it happens, you totally can (it's called the Born measure, and it's the same number that people used to think of as the probability of each branch occurring), and agents that satisfy sane decision-theoretic criteria weight branches by their Born measure - see this paper for the details.

This is a good place to strengthen intuition, since if you replace "killing myself" with "torturing myself", it's still true that none of your future selves who remain alive/untortured "would ever notice anything, vast amounts of future copies of [yourself] would wake up just like they thought they would the next morning, and carry on with their lives and aspirations". If you arrange for yourself to be tortured in some branches and not others, you wake up just as normal and live an ordinary, fulfilling life - but you also wake up and get tortured. Similarly, if you arrange for yourself to be killed in some branches and not others, you wake up just as normal and live an ordinary, fulfilling life - but you also get killed (which is presumably a bad thing even or especially if everybody else also dies).

One way to intuitively see that this way of thinking is going to get you in trouble is to note that your preferences, as stated, aren't continuous as a function of reality. You're saying that universes where (1-x) proportion of branches feature you being dead and x proportion of branches feature you being alive are all equally fine for all x > 0, but that a universe where you are dead with proportion 1 and alive with proportion 0 would be awful (well, you didn't actually say that, but otherwise you would be fine with killing some of your possible future selves in a classical universe). However, there is basically no difference between a universe where (1-epsilon) proportion of branches feature you being dead and epsilon proportion of branches feature you being alive, and a universe where 1 proportion of branches feature you being dead and 0 proportion of branches feature you being alive.
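A minimal sketch of the discontinuity being described, assuming a toy utility U over the proportion x of branches in which you are alive (u_alive and u_dead are hypothetical placeholder values, not anything taken from the paper above):

$$U(x) = \begin{cases} u_{\text{alive}} & \text{if } x > 0 \\ u_{\text{dead}} & \text{if } x = 0 \end{cases} \qquad\Rightarrow\qquad \lim_{x \to 0^{+}} U(x) = u_{\text{alive}} \neq u_{\text{dead}} = U(0).$$

An arbitrarily small change in reality - shrinking x from epsilon to 0 - then produces a jump in how good the outcome is judged to be, which is the discontinuity in question.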
Kaura

Assuming for a moment that Everett's interpretation is correct, that there will eventually be a way to very confidently deduce this, and that time, identity and consciousness work pretty much as described by Drescher IIRC (there is no continuation of consciousness, just memories, and nothing meaningful separates your identity from your copies):

Should beings/societies/systems clever enough to figure this out (and with something like preferences or values) just seek to self-destruct if they find themselves in a sufficiently suboptimal branch, suffering or otherwise... (read more)

DanielFilan
Not really. If you're in a suboptimal branch, but still doing better than if you didn't exist at all, then you aren't making the world better off by self-destructing, regardless of whether other branches exist. It would not increase the proportion (technically, you want to be talking about measure here, but the distinction isn't important for this particular discussion) of branches where everything is stellar - just the proportion of branches where everything is stellar out of the total proportion of branches where you are alive, which isn't so important.

To see this, imagine you have two branches, one where things are going poorly and one where things are going great. The proportion of branches where things are going stellar is 1/2. Now suppose that the being/society/system that is going poorly self-destructs. The proportion of branches where things are going stellar is still 1/2, but now you have a branch where instead of having a being/society/system that is going poorly, you have no being/society/system at all.
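Stated as a quick sketch in terms of (equal-weight) measure for the two-branch example above, writing \mu(\text{stellar}) for the measure of branches where things are going great and \mu(\text{alive}) for the measure of branches where you still exist:

$$\mu(\text{stellar}) = \tfrac{1}{2} \ \text{before and after self-destruction}, \qquad \frac{\mu(\text{stellar})}{\mu(\text{alive})}: \ \frac{1/2}{1} = \frac{1}{2} \ \longrightarrow\ \frac{1/2}{1/2} = 1.$$

Self-destruction only raises the conditional proportion of stellar branches among branches where you remain alive, not the unconditional one - which is the point above.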
Kaura

In general, vegetarians don't care as much about e.g. species flourishing as they do about the vast amounts of suffering that farmed animals are quite likely to experience. I see nothing strange in viewing animals as morally relevant and deeming their lives a net negative, thus hoping they wouldn't have to exist.

Eating only free-range or hunted meat is a pretty good option, although of course not entirely unproblematic, from the suffering-reduction point of view. It is very often brought up by non-vegetarians whenever the topic of animal suffering comes up ... (read more)