In this thread, I would like to invite people to summarise their attitude to Effective Altruism, summarise their justification for that attitude, and identify the framework or perspective they're using.
Initially I prepared an article for a discussion post (it got rather long) and realised it was written from a starkly utilitarian value system with capitalistic economic assumptions. I'm interested in exploring the possibility that I'm unjustly mindkilling EA.
I've posted my write-up as a comment on this thread so it doesn't get more airtime than anyone else's summary, and everyone can benefit equally from the contrasting views.
I encourage anyone who participates to write up their summary and identify their perspective BEFORE they read the others, so that the contrast can be most plain.
Thanks for your comment.
Read the first comment on that post and the discussion the OP has with them.
No, I'm saying that it 'chooses more important causes and weights them higher'.
Is this the flow-through effects link? I'm not sure what you're talking about.
The evidence that they believe that is in the link, where GiveWell says it; the other links are to 80K or GWWC echoing it (I don't recall which offhand).
I would say you are missing something - whether market efficiency is something worth throwing money at. Market efficiency, by definition, refers to a case where money is being thrown at something worthwhile - a coincidence of interests between supply and demand.
Certainly. If QALYs are valuable, then curing disease and saving lives is inherently valuable. However, people experience death and disease differently. Very differently. How can we work out how 'bad' it is for them? We could use QALYs and generalise across the entire disease for all people, or we could infer it from what people actually do in relation to it. Do they save up money to buy bednets, or do they spend that money on a donkey to visit their girlfriend in the next village? (That's a fictional, kinda silly example, but it illustrates my point.) If they prefer bednets above all other alternatives and still can't afford one, they have an incentive to contribute their labour to their community, for instance, in a way that improves the lives of others and helps those people reach their preferences, while earning money to buy those bednets. If they can't be valuable to their community, then their death is a net positive for the economic efficiency of their community. That is, unless they are artificially subsidised in that kind of lifestyle by certain kinds of charity.
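(For concreteness, here is a minimal sketch of the QALY arithmetic being generalised over - the numbers are purely illustrative assumptions, not real figures:

$$\text{QALYs gained} = \sum_{t=1}^{T} \left(q_t^{\text{with}} - q_t^{\text{without}}\right), \qquad q_t \in [0, 1]$$

where $q_t$ is the quality-of-life weight in year $t$. E.g., if a bednet averts an illness that would drop someone's quality weight from 0.9 to 0.6 for ten years, it buys $10 \times (0.9 - 0.6) = 3$ QALYs - the same figure for everyone, regardless of how that particular person values the outcome. That's exactly what the revealed-preference approach avoids.)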
Demand can only be reliably inferred from past behaviour. If someone buys a loaf of bread every week, that's demand for bread. If there's a 1/23 chance that someone in a village gets cholera every year, and that village has a reputation for being able to afford the cholera treatment, then that's demand for cholera treatment. People 'demanding' or begging, or a tourist feeling sorry for someone out of a subjective judgement about some kind of inferior lifestyle, is not demand. It can be interpreted as need, or even modelled as a need consequent on something else - i.e. you need to eat food to survive - but then the question is something else: are you donating because they demand something, fulfilling a subjective desire or utility state for them (which I believe is empathy-driven), or are you fulfilling a utility condition of your own, such as assuaging guilt or something else?
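(As a rough illustration of inferring demand from base rates - the village size here is an assumed number:

$$\mathbb{E}[\text{cases per year}] = N \times \tfrac{1}{23}$$

so a village of $N = 230$ people that reliably pays for treatment generates about 10 expected treatment purchases per year. That is demand in the market sense, with no begging or tourist sympathy involved.)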
If a non-EA gets 100% warm fuzzies from donating to save polar bears or whatever else they intuit, their dynamic inconsistency means their cause preference changes, and it's no big deal for them to switch charities when they feel like it.
An EA gets warm fuzzies only if they can satisfy some complicated equation and win the approval of their EA buddies, and that approval changes as information gets updated. However, they're also fighting against their intuitive warm fuzzies for things like polar bears, and against the same dynamic inconsistency across non-effective causes that non-EAs face - for instance, feeling like donating to save guide dogs when primed by seeing a local blind man. Since this is far more complicated, the prospect of regret would be higher - at least, I think so intuitively, no?
I had never thought about it like that. I have to think about this some more. What a novel way of looking at it - thanks!
That's just your opinion. Many tourists love unique and different cultures for their own sake. Or they might have a unique language to share, or anything. If they are alive, it's because they have survived in an evolutionarily sound way until now, so as a rough heuristic, they're okay until there's some kind of disaster event.
I think setting up less difficult conditions for maximum utility makes it easier to maximise your utility. There's no need to slap a label on it. If I call something 'effective fruit eating' where I maximise my utility by successfully eating the sultanas across the room from me right now, it's not very hard for me to maximise my utility.
Could you explain the idea of markets optimising for utility weighted by wealth more? I'm having trouble wrapping my head around the concept.
edit 1: perhaps existing EAs could maximise their utility more by getting treated for scrupulosity?
OK, done. Now what? (Reading that material changed neither (a) my opinion that Dias's complaint was basically that EA is too utilitarian, nor (b) my impression that you are complaining it isn't utilitarian enough.)
And you regard that as a bad thing? Evidently I'm missing something, because weighting more important things more highly seems obviously sensible. What am I missing?