Comment author: ozziegooen 03 January 2015 09:10:54PM 2 points [-]

Perhaps 'Fermi estimate' was not the best term to use, but I couldn't think of an equally understandable but better one. It could be called simply 'estimate', but I think the important thing here is that it's used very similarly to how a Fermi estimate would be (with very high uncertainty in the inputs, and done in a very simple manner). What would you call it? (http://lesswrong.com/lw/h5e/fermi_estimates/).

Comment author: tog 19 August 2015 03:15:15PM 0 points [-]

I like the name it sounds like you may be moving to - "guesstimate".

Comment author: owencb 05 January 2015 11:52:07AM 2 points [-]

Thanks, I think this may be a valuable direction to pursue.

The error-tracking for multiplication in Fermilab seems like it's probably wrong. But I don't think there's an easy fix, since products of Gaussian distributions aren't Gaussian. Since multiplication is more common than addition in Fermi estimates, you might replace your distributions with log-normals (this is what I do when tracking uncertainty in back-of-the-envelope calculations), but I agree that Monte Carlo simulations are really the way to go.
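To illustrate the point about log-normals: products of log-normals are themselves log-normal (the log-space parameters just add in quadrature), which is why they're convenient for multiplication-heavy Fermi estimates. Here's a minimal Monte Carlo sketch of that fact — the medians and spreads are arbitrary illustrative values, not anything from the tool under discussion:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Two uncertain inputs, each modelled as log-normal:
# median 10, with sigma = ln(2) (roughly a "factor of 2" spread).
a = rng.lognormal(mean=np.log(10), sigma=np.log(2), size=N)
b = rng.lognormal(mean=np.log(10), sigma=np.log(2), size=N)

product = a * b

# Analytically, the product is log-normal with:
#   mu    = ln(10) + ln(10)            -> median = exp(mu) = 100
#   sigma = sqrt(ln(2)^2 + ln(2)^2)    -> ~0.98 in log space
mu = np.log(10) + np.log(10)
sigma = np.sqrt(np.log(2) ** 2 + np.log(2) ** 2)

print(np.median(product))        # close to exp(mu) = 100
print(np.std(np.log(product)))   # close to sigma
```

The same Monte Carlo machinery works unchanged for sums, ratios, or any other combination of inputs, which is why sampling is the more general approach when the analytic shortcut doesn't apply.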

Comment author: tog 19 August 2015 03:12:48PM 0 points [-]

Do you think you'd use this out of interest Owen?

Comment author: tog 18 August 2015 05:12:20AM 1 point [-]

I asked for a good general guide to IQ (and in particular its objectivity and importance) on the LW FB group a while back. I got a bunch of answers, including these standouts:

http://www.psych.utoronto.ca/users/reingold/courses/intelligence/cache/1198gottfred.html

http://www.newscientist.com/data/doc/article/dn19554/instant_expert_13_-_intelligence.pdf

But there's still plenty of room for improvement on those so I'd be curious to hear others' suggestions.

Comment author: tog 18 August 2015 05:12:47AM 0 points [-]

And a friend requests an article comparing IQ and conscientiousness as a predictor for different things.

Comment author: Larks 16 August 2015 02:25:57PM 12 points [-]

I thought the biodeterminists guide was one of the most useful things I've ever read. I'd love it if Yvain would write the same for longevity, general fitness, IQ, etc.


Comment author: tog 18 August 2015 05:10:01AM 1 point [-]

I've been looking for this all my life without even knowing it. (Well, at least for half a year.)

Comment author: [deleted] 09 July 2015 01:36:39AM 4 points [-]

I like Effective Altruism a lot - I follow a lot of effective altruism blogs, I adopt a lot of its mental models and tools, and I think the idea is great for a lot of people.

I'm highly interested in how to be effective, and I'm highly interested in how to do good, and EA gives some great ideas on both concepts.

That being said, what I'm not interested in as my sole aim is to be maximally effective at doing good. I'm more interested in expressing my values in as large and impactful a way as possible - and in allowing others to do the same. This happens to coincide with doing lots and lots of good, but it definitely doesn't mean that I would begin to sacrifice my other values (e.g. fun, peace, expression) to maximize good. I'm interested in allowing others to express THEIR values, even if it means they're incredibly selfish and do very little good - I suppose this almost begins to sound utilitarian, and I suppose it is - but again, I'm not going to sacrifice appreciable amounts of my own utility if it means more utility for others, and I don't expect others to do the same.

In terms of your critique of EA, I think you've completely bought into the idea of "revealed preferences" - that people's utility is revealed in what they want. However, a large portion of psychology research shows something very different - that the behavior people have that gets reinforced runs along a completely separate "compulsion" pathway from what they enjoy, find happiness in, or are fulfilled by.

Economics doesn't really care about that if it doesn't affect people's actions, so it's easier to talk about "revealed preferences." But as a utilitarian, you should be aware of all the separate pathways that the brain evolved to survive and replicate - many of them separate from happiness, fulfillment, pleasure, and other things we like to talk about when we talk about "utility".

The upshot, as it relates to your points, is that the free market and racking up money often hit a bunch of these compulsion pathways through the accumulation of money, but often IGNORE other areas of utility. GiveWell is trying to fix the imbalance.

In response to comment by [deleted] on Effective Altruism from XYZ perspective
Comment author: tog 15 July 2015 08:16:07PM 0 points [-]

That being said, what I'm not interested in as my sole aim is to be maximally effective at doing good. I'm more interested in expressing my values in as large and impactful a way as possible - and in allowing others to do the same. This happens to coincide with doing lots and lots of good, but it definitely doesn't mean that I would begin to sacrifice my other values (e.g. fun, peace, expression) to maximize good.

It's interesting to ask to what extent this is true of everyone - I think we've discussed this before Matt.

Your version and phrasing of what you're interested in is particular to you, but we could broaden the question to ask how far people have moved away from having primarily self-centred drives which overwhelm others when significant self-sacrifice is on the table. I think some people have gone a long way, but I'm sceptical that any single human being goes the full distance. Most EAs plausibly don't make any significant self-sacrifices, if measured in terms of their happiness significantly dipping.* The people I know who have gone the furthest may be Joey and Kate Savoie, with whom I've talked about these issues a lot.

* Which doesn't mean they haven't done a lot of good! If people can donate 5% or 10% or 20% of their income without becoming significantly less happy then that's great, and convincing people to do that is a low-hanging fruit we should prioritise, rather than focusing our energies on squeezing out extra sacrifices that start to really eat into their happiness. The good consequences of people donating are what we really care about, after all, not the level of sacrifice they themselves are making.

Comment author: Clarity 13 July 2015 08:56:07AM *  -1 points [-]

Whenever you're getting information in text that would be better explained in person.

For instance, someone recently sent these 2 pieces of advice to me:

Sit on lap – Take her hand and move it above her head so that she spins around. Then, when her back is towards you, sit her down on your lap and hug her from behind.

Kissing with release – There are many good techniques for kissing a girl. One would be to put on some flavored chapstick and say, "You know what the best part about chapstick is? Here, smell it". Let her smell it and then continue, "It not only smells like strawberries, it actually tastes like strawberries too. Check it out". And then you go in and kiss her.

Worst mode of transmission ever!

Comment author: tog 15 July 2015 08:14:12AM 0 points [-]

People's expectation clock starts running from the time they hit send. More importantly, deadlines related to the email content really set the agenda for how often to check your email.

Then change people's expectations, including those of the deadlines appropriate for tasks communicated by emails that people may not see for a while! (Partly a tongue-in-cheek answer - I know this may not be feasible, and you make a fair point.)

Comment author: benkuhn 12 July 2015 03:23:49AM 6 points [-]

Every time I pay for electricity for my computer rather than sending the money to a third world peasant is, according to EA, a failure to maximize utility.

I'm sad that people still think EAers endorse such a naive and short-time-horizon type of optimizing utility. It would obviously not optimize any reasonable utility function over a reasonable timeframe for you to stop paying for electricity for your computer.

More generally, I think most EAers have a much more sophisticated understanding of their values, and the psychology of optimizing them, than you give them credit for. As far as I know, nobody who identifies with EA routinely makes individual decisions between personal purchases and donating. Instead, most people allocate a "charity budget" periodically and make sure they feel ok about both the charity budget and the amount they spend on themselves. Very few people, if any, cut personal spending to the point where they have to worry about, e.g., electricity bills.

Comment author: tog 13 July 2015 01:27:27AM 1 point [-]

As far as I know, nobody who identifies with EA routinely makes individual decisions between personal purchases and donating. [ ... ] Very few people, if any, cut personal spending to the point where they have to worry about, e.g., electricity bills.

I do know - indeed, live with :S - a couple.

Comment author: Clarity 08 July 2015 04:35:10AM *  2 points [-]

Could charity distort market signals, crippling the ability of sponsored economies to develop sustainably and leading to negative utility in the long term?

Hikma and Norbrook are examples of ethical UK/worldwide pharmaceutical companies. I've worked for and can vouch for both.

Comment author: tog 12 July 2015 04:44:17PM 0 points [-]

Effective altruism =/= utilitarianism

Here's the thread on this at the EA Forum: Effective Altruism and Utilitarianism

Comment author: tog 12 July 2015 04:43:34PM 0 points [-]

Here's the thread on this at the EA Forum: Effective Altruism and Utilitarianism
