Comment author: Kaura 25 February 2015 01:16:22AM 1 point

Fellow effective altruists and other people who care about making things better, especially those of you who mostly care about minimising suffering: how do you stay motivated in the face of the possibility of infinities, or even just the vast numbers of morally relevant beings outside our reach?

I get that it's pretty silly to get so distressed over issues there's nothing I can do about, but I can't help feeling discouraged when I think about the vast amount of suffering that probably exists - I mean, it doesn't even have to be infinite to feel like a bottomless pit that we're pretty much hopelessly trying to fill. I have a hard time feeling genuinely happy about good news, like progress in eradicating diseases or reducing world hunger, because intuitively it all feels like such an insignificant part of all the misery that's going on elsewhere (and in other Everett branches of course, in case the MWI is correct).

I know this is a bit of a noob question and something everyone probably thinks about at some point, which is why I'm hoping to hear what kind of conclusions other people have reached.

Comment author: Kaura 05 January 2015 08:04:54PM 3 points

Negative: a couple decided to go poly after some years in a stable monogamous relationship. It seemed to go well for a few months, but the guy apparently told a few white lies here and there, which then got completely out of control and eventually resulted in a disaster for pretty much everyone involved.

Neutral/negative: a couple was poly for maybe half a year or so, then decided it was "too much trouble" and returned to monogamy. I don't know them well enough to be able to provide more details, but they have been together for a few years after that and are now having a child, so nothing terrible probably happened.

I know plenty of other poly people as well, but don't know as much about what's going on in their individual relationships. The general feeling I get is that while a healthy poly relationship certainly isn't impossible, they are only rarely very stable, and even when they do succeed they often seem to require significantly more attention and work (which of course is not a negative to everyone, and it can be worth it anyway if the freedom and additional partners bring a lot of value). Problems arising from insufficient honesty are pretty common, even among those who would generally seem to value trust and openness, so that's probably an important thing to watch out for.

Comment author: Metus 25 December 2014 11:48:06AM 3 points

I am not going to start a lengthy discussion on this subject as this is not the place for it, so please do not read the lack of any further answers as anything other than the statement above. That being said ...

I am not completely sold on the premise that all human lives are equal, which calls the whole idea of a cheaper saved life into question. I am not donating out of a moral imperative but out of personal preference, so my donations exhibit decreasing marginal utility, making diversification a necessity. And finally, I have a general, massive skepticism towards anything and anyone that claims to solve a huge, long-standing problem like poverty, as the EA movement tends to do.

This is a rough sketch of my reservations. I will not discuss it further here, but I am willing to discuss it in a more appropriate place, like a separate thread or the open thread.

Comment author: Kaura 26 December 2014 01:03:05AM 2 points

Thanks! No need for a lengthy debate, I'm just very curious about how people decide where to donate, especially when the process leads to explicitly non-EA decisions. Your reasons are in fact pretty close to what I would have guessed, so I suppose similar intuitions are quite common and might explain part of why an idea as obvious as effective altruism took so long to develop.

But yeah, a subthread about this in the OT sounds like a good idea (unless I can find lots of old discussions on the subject).

Comment author: Metus 25 December 2014 12:12:46AM 3 points

Great list! Hope you don't mind a couple of questions.

Thanks! There would be little point in posting to a discussion board if I wasn't expecting discussion.

Any particular reason to donate to Wikipedia? I ask because I just read this interesting article about Wikimedia donations that was posted on the FB EA thread a few days ago.

Until a few minutes ago I thought that people would, on average, not donate enough to Wikipedia. Actually, my thought was more like "Wikipedia was so useful in the past and I expect it to be useful in the future too, so I could donate a small amount to make up for my use." But I am revising that thought as we speak. The larger point anyhow was to signal that I am not completely sold on effective altruism and might also donate to the Red Cross or so.

Also, how many applications per month?

I have until the end of this year to decide. A modest goal would be one per week, but it would be way more effective if I make the rate dependent on time and domain. So let's say - and let me say that this won't be the final number - one per week for stuff in industry that is not seasonal and an adjusted number for seasonal stuff.

Comment author: Kaura 25 December 2014 11:33:07AM 2 points

I am not completely sold on effective altruism and might also donate to the Red Cross or so.

Interesting, why is this? Do you mean effective altruism as a concept, or the EA movement as it currently is?

Comment author: DanielFilan 11 December 2014 12:01:00AM 4 points

Should beings/societies/systems clever enough to figure this out (and with something like preferences or values) just seek to self-destruct if they find themselves in a sufficiently suboptimal branch, suffering or otherwise worse off than they plausibly could be?

Not really. If you're in a suboptimal branch, but still doing better than if you didn't exist at all, then you aren't making the world better off by self-destructing regardless of whether other branches exist.

Committing to give up in case things go awry would lessen the impact of setbacks and increase the proportion of branches where everything is stellar, just due to good luck. Keep the best worlds, discard the rest, avoid a lot of hassle.

It would not increase the proportion (technically, you want to be talking about measure here, but the distinction isn't important for this particular discussion) of branches where everything is stellar - just the proportion of branches where everything is stellar out of the total proportion of branches where you are alive, which isn't so important. To see this, imagine you have two branches, one where things are going poorly and one where things are going great. The proportion of branches where things are going stellar is 1/2. Now suppose that the being/society/system that is going poorly self-destructs. The proportion of branches where things are going stellar is still 1/2, but now you have a branch where instead of having a being/society/system that is going poorly, you have no being/society/system at all.
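The distinction drawn above can be checked with a toy calculation. This is a minimal sketch of the two-branch example from the comment (the equal branch weights and the dictionary representation are my own illustrative assumptions, not anything from the original discussion):

```python
# Toy model of the two-branch example: self-destructing in the poorly-going
# branch changes the conditional proportion of stellar branches *given
# survival*, but leaves the overall measure of stellar branches untouched.

branches = [
    {"quality": "stellar", "measure": 0.5, "alive": True},
    {"quality": "poor",    "measure": 0.5, "alive": True},
]

def stellar_measure(bs):
    """Total measure of branches where things are going stellar."""
    return sum(b["measure"] for b in bs if b["quality"] == "stellar")

def stellar_given_alive(bs):
    """Measure of stellar branches among branches where you still exist."""
    alive = [b for b in bs if b["alive"]]
    total = sum(b["measure"] for b in alive)
    return stellar_measure(alive) / total if total else 0.0

before_overall = stellar_measure(branches)    # overall stellar measure
before_cond = stellar_given_alive(branches)   # stellar measure given survival

# The being/society/system in the poorly-going branch self-destructs.
branches[1]["alive"] = False

after_overall = stellar_measure(branches)     # unchanged
after_cond = stellar_given_alive(branches)    # jumps to 1.0
```

Running this shows `before_overall == after_overall == 0.5`: the absolute measure of stellar branches never moves, and only the survivor-conditioned proportion goes from 0.5 to 1.0, which is exactly the quantity the comment argues "isn't so important."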

Comment author: Kaura 11 December 2014 04:53:18PM 0 points

Thanks! Ah, I'm probably just typical-minding like there's no tomorrow, but I find it inconceivable to place much value on the number of branches you exist in. The perceived continuation of your consciousness will still go on as long as there are beings with your memories in some branch: in general, it seems to me that if you say you "want to keep living", you mean you want there to be copies of you in some of the possible futures, waking up the next morning doing stuff present-you would have done, recalling what present-you thought yesterday, and so on (in addition you will probably want a low probability for this future to include significant suffering). Likewise, if you say you "want to see humanity flourish indefinitely", you want a future that includes your biological or cultural peers and offspring colonizing space and all that, remembering and cherishing many of the values you once had (sans significant suffering). To me it seems impossible to assign value to the number of MWI-copies of you, not least because there is no way you could even conceive of their number, or usually make meaningful ethical decisions where you weigh their numbers.* Instead, what matters overwhelmingly more is the probability of any given copy living a high quality life.

just the proportion of branches where everything is stellar out of the total proportion of branches where you are alive

Yes, this is obvious of course. What I meant was exactly this, because from the point of view of a set of observers, eliminating the set of observers from a branch <=> rendering the branch irrelevant, pretty much.

which isn't so important.

To me it did feel like this is obviously what's important, and the branches where you don't exist simply don't matter - there's no one there to observe anything after all, or judge the lack of you to be a loss or morally bad (again, not applicable to individual humans).

If I learned today that I have a 1% chance to develop a maybe-terminal, certainly suffering-causing cancer tomorrow, and I could press a button to just eliminate the branches where that happens, I would not have thought I was committing a moral atrocity. I would not feel like I am killing myself just because part of my future copies never get to exist, nor would I feel bad for the copies of the rest of all people - no one would ever notice anything, vast numbers of future copies of current people would wake up just like they thought they would the next morning, and carry on with their lives and aspirations. But this is certainly something I should learn to understand better before anyone gives me a world-destroying cancer cure button.

*Which is one main difference when comparing this to regular old population ethics, I suppose.

Comment author: Kaura 10 December 2014 02:54:19PM 2 points

Assuming for a moment that Everett's interpretation is correct, there will eventually be a way to very confidently deduce this (and that time, identity and consciousness work pretty much as described by Drescher, IIRC - there is no continuation of consciousness, just memories, and nothing meaningful separates your identity from your copies):

Should beings/societies/systems clever enough to figure this out (and with something like preferences or values) just seek to self-destruct if they find themselves in a sufficiently suboptimal branch, suffering or otherwise worse off than they plausibly could be? Committing to give up in case things go awry would lessen the impact of setbacks and increase the proportion of branches where everything is stellar, just due to good luck. Keep the best worlds, discard the rest, avoid a lot of hassle.

This is obviously not applicable to e.g. humanity as it is, where self-destruction on any level is inconvenient, if at all possible, and generally not a nice thing to do. But would it theoretically make sense for intelligences like this to develop, and maybe even have an overwhelming tendency to develop in the long term? What if this is one of the vast number of branches where everyone in the observable universe pretty much failed to have a good enough time and a bright enough future, and just offed themselves before interstellar travel etc., because a sufficiently advanced civilization sees it's just not a big deal in an Everett multiverse?

(There's probably a lot that I've missed here as I have no deep knowledge regarding the MWI, and my reading history so far only touches on this kind of stuff in general, but yay stupid questions thread.)

Comment author: Salemicus 23 November 2014 02:38:40PM 1 point

Is farm chicken life worth living?

I have no idea what that question even means. I don't want to save the Bengal tiger because I think it has a "life worth living" but because I want the species to flourish.

But to the extent that you are concerned that battery chickens have negative lives, why become a vegetarian? Eat free range meat. Or eat only hunted meat. And why make a fuss about trace amounts of meat products in your cheese or whatever? Isn't it suspicious that people who make the strange claim that animals count as objects of moral concern also make the strange claim that animal lives aren't worth living and also cash out that concern by a dietary purity ritual? Were I a cynic, I might even think that the religious-seeming ritual were the whole point, and the elaborate epicyclical theology built around it a mere after-the-fact justification.

Comment author: Kaura 24 November 2014 09:52:18PM 2 points

In general, vegetarians don't care as much about e.g. species flourishing as they do about the vast amounts of suffering that farmed animals are quite likely to experience. I see nothing strange in viewing animals as morally relevant and deeming their life a net negative, thus hoping they wouldn't have to exist.

Eating only free range or hunted meat is a pretty good option from the suffering-reduction point of view, although of course not entirely unproblematic. It is very often brought up by non-vegetarians whenever the topic of animal suffering comes up - anecdotally, I can count four people I know whom I have heard use the argument when explaining or defending their meat eating. None of them actually eats mainly free range or hunted meat. To me, it seems the whole point is unfortunately only ever used as a motte that people retreat to in order to avoid having to feel or look bad, before again just eating whatever as soon as they can stop thinking about it. This might not mean these people don't really care on some level: I'd guess it is cognitively more expensive to analyze and keep tabs on which meat products cause only acceptable amounts of suffering, without succumbing to rationalization, constant habit-breaking and eventually forgetting the project, than it is to just rule meat out of your diet and stop thinking about it.

Another reason why free-range and hunted meat are not quite equivalent to veg(etari)anism is that they don't seem to scale as easily to feed large populations for a reasonable land area and product price. That said, I for one would welcome a society which mostly eats plant-based food, but with the very occasional expensive hunted or ethically-farmed piece of meat or cheese, which indeed seems like what a non-factory-farming omnivore society could end up looking like. (Of course, for us embracing a more negative form of utilitarianism, wild-animal suffering would still be a problem, but that's beyond the scope of this discussion.)