I thought the biodeterminist's guide was one of the most useful things I've ever read. I'd love it if Yvain would write the same for longevity, general fitness, IQ, etc.
I asked for a good general guide to IQ (and in particular its objectivity and importance) on the LW FB group a while back. I got a bunch of answers, including these standouts:
http://www.psych.utoronto.ca/users/reingold/courses/intelligence/cache/1198gottfred.html
http://www.newscientist.com/data/doc/article/dn19554/instant_expert_13_-_intelligence.pdf
But there's still plenty of room for improvement on those so I'd be curious to hear others' suggestions.
I've been looking for this all my life without even knowing it. (Well, at least for half a year.)
I like Effective Altruism a lot - I follow a lot of effective altruism blogs, I adopt a lot of its mental models and tools, and I think the idea is great for a lot of people.
I'm highly interested in how to be effective, and I'm highly interested in how to do good, and EA gives some great ideas on both concepts.
That being said, what I'm not interested in as my sole aim is to be maximally effective at doing good. I'm more interested in expressing my values in as large and impactful a way as possible - and in allowing others to do the same. This happens to coincide with doing lots and lots of good, but it definitely doesn't mean that I would begin to sacrifice my other values (e.g. fun, peace, expression) to maximize good. I'm interested in allowing others to express THEIR values, even if it means they're incredibly selfish and do very little good - I suppose this almost begins to sound utilitarian, and I suppose it is - but again, I'm not going to sacrifice appreciable amounts of my own utility if it means more utility for others, and I don't expect others to do the same.
In terms of your critique of EA, I think you've completely bought into the idea of "revealed preferences" - that people's utility is revealed in what they choose. However, a large portion of psychology research shows something very different - that the behavior people have that gets reinforced runs through a completely separate "compulsion" pathway from what they enjoy, find happiness in, get fulfilled by, etc.
Economics doesn't really care about that shit if it doesn't affect people's actions, so it's easier to talk about "revealed preferences." But as a utilitarian, you should be aware of all the separate pathways that the brain evolved to survive and replicate - many of them separate from happiness, fulfillment, pleasure, and other things which we like to talk about when we talk about "utility".
The upshot of how all this relates to your points is that the free market/racking up money often hits a bunch of these compulsion pathways through the accumulation of money, but often IGNORES other areas of utility. GiveWell is trying to fix the imbalance.
That being said, what I'm not interested in as my sole aim is to be maximally effective at doing good. I'm more interested in expressing my values in as large and impactful a way as possible - and in allowing others to do the same. This happens to coincide with doing lots and lots of good, but it definitely doesn't mean that I would begin to sacrifice my other values (e.g. fun, peace, expression) to maximize good.
It's interesting to ask to what extent this is true of everyone - I think we've discussed this before, Matt.
Your version and phrasing of what you're interested in is particular to you, but we could broaden the question out to ask how far people have moved away from having primarily self-centred drives which overwhelm others when significant self-sacrifice is on the table. I think some people have gone a long way in that direction, but I'm sceptical that any single human being goes the full distance. Most EAs plausibly don't make any significant self-sacrifices, if measured in terms of their happiness significantly dipping.* The people I know who have gone the furthest may be Joey and Kate Savoie, with whom I've talked about these issues a lot.
* Which doesn't mean they haven't done a lot of good! If people can donate 5% or 10% or 20% of their income without becoming significantly less happy then that's great, and convincing people to do that is low-hanging fruit that we should prioritise, rather than focusing our energies on squeezing out extra sacrifices that start to really eat into their happiness. The good consequences of people donating are what we really care about after all, not the level of sacrifice they themselves are making.
Whenever you're getting information that is better explained in person than in text, but it arrives as text.
For instance, someone recently sent these 2 pieces of advice to me:
Sit on lap – Take her hand and move it above her head so that she spins around. Then, when her back is towards you, sit her down on your lap and hug her from behind.
Kissing with release – There are many good techniques for kissing a girl. One would be to put on some flavored chapstick and say, "You know what the best part about chapstick is? Here, smell it". Let her smell it and then continue, "It not only smells like strawberries, it actually tastes like strawberries too. Check it out". And then you go in and kiss her.
Worst mode of transmission ever!
People's expectation clock starts running from the time they hit send. More importantly, deadlines related to the email content really set the agenda for how often to check your email.
Then change people's expectations, including those of the deadlines appropriate for tasks communicated by emails that people may not see for a while! (Partly a tongue-in-cheek answer - I know this may not be feasible, and you make a fair point.)
Every time I pay for electricity for my computer rather than sending the money to a third world peasant is, according to EA, a failure to maximize utility.
I'm sad that people still think EAers endorse such a naive and short-time-horizon type of optimizing utility. It would obviously not optimize any reasonable utility function over a reasonable timeframe for you to stop paying for electricity for your computer.
More generally, I think most EAers have a much more sophisticated understanding of their values, and the psychology of optimizing them, than you give them credit for. As far as I know, nobody who identifies with EA routinely makes individual decisions between personal purchases and donating. Instead, most people allocate a "charity budget" periodically and make sure they feel ok about both the charity budget and the amount they spend on themselves. Very few people, if any, cut personal spending to the point where they have to worry about, e.g., electricity bills.
As far as I know, nobody who identifies with EA routinely makes individual decisions between personal purchases and donating. [ ... ] Very few people, if any, cut personal spending to the point where they have to worry about, e.g., electricity bills.
I do know - indeed, live with :S - a couple.
Could charity distort market signals, crippling the ability of sponsored economies to develop sustainably and leading to negative utility in the long term?
Hikma and Norbrook are examples of ethical UK/worldwide pharmaceutical companies. I've worked for and can vouch for both.
Effective altruism ≠ utilitarianism
Here's the thread on this at the EA Forum: Effective Altruism and Utilitarianism
People seem to be complaining about community fracturing, and good writers going off onto their own blogs. Why not just accept that and encourage people to post links to the good content from these places?
Hacker News is successful mainly because they encourage people to post their own blog posts there, to get a wider audience and discussion. As opposed to reddit, where self-promotion is heavily discouraged.
LessWrong is based on reddit's code. You could add a lesswrong.com/r/links, and just tell people it's OK to publish links to whatever they want there. This could be quite successful, given LessWrong already has a decent community to seed it with. As opposed to going off and starting another subreddit, where it's very hard to attract an initial user base (and you run into the self-promotion problem I mentioned).
Potentially worth actually doing - what'd be the next step in terms of making that a possibility?
Relevant: a bunch of us are coordinating improvements to the identical EA Forum codebase at https://github.com/tog22/eaforum and https://github.com/tog22/eaforum/issues
I don't think such a high estimate for the first statement is reasonable.
Also, link now leads to bicameral reasoning article.
Thanks, fixed, now points to http://slatestarcodex.com/2015/05/29/no-time-like-the-present-for-ai-safety-work/
And a friend requests an article comparing IQ and conscientiousness as predictors of different outcomes.