Comment author: Giles 22 February 2013 01:54:22PM 2 points [-]

This is like a whole sequence condensed into a post.

Comment author: Giles 21 February 2013 11:40:58PM 5 points [-]

The pledging back-of-the-envelope calculation got me curious, because I had been assuming GWWC wouldn't flat-out lie about how much had been pledged (they say "We currently have 291 members ... who together have pledged more than 112 million dollars", which implies an actual total, not an estimate).

On the other hand, it's just measuring pledges; it's not an estimate of how much money anyone expects to actually materialise. It hadn't occurred to me that anyone would read it that way - I may be mistaken here though, in which case there's a genuine issue with how the number is being presented.

Anyway, I still wasn't sure the pledge number made sense so I did my own back-of-the-envelope:

  • £72.68M pledged
  • 291 members
  • £250K pledged per person over the course of their life
  • 40 years average expected time until retirement (this may be optimistic, though I get the impression most members are young)
  • £6.2K average pledged per member per year

That would mean people are expecting to make £62K per year (assuming the standard 10% pledge), averaged over their entire remaining career, which still seems very optimistic. But:

  • some people will be pledging more than 10%
  • there might be some very high income people mixed in there, dragging the mean up.

So I think this passes the laugh test for me, as a measure of how much people might conceivably have pledged, not how much they'll actually deliver.
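The arithmetic above can be sketched in a few lines of Python (figures are the ones from the estimate; the 10% pledge fraction is used to back out the implied income):

```python
# Back-of-the-envelope check on the GWWC pledge total.
total_pledged = 72.68e6   # £72.68M pledged in total
members = 291
years_to_retirement = 40  # assumed average remaining career length
pledge_fraction = 0.10    # the standard 10% pledge

per_member = total_pledged / members                     # ≈ £250K lifetime
per_member_per_year = per_member / years_to_retirement   # ≈ £6.2K per year
implied_income = per_member_per_year / pledge_fraction   # ≈ £62K per year

print(round(per_member), round(per_member_per_year), round(implied_income))
```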

Comment author: Giles 22 February 2013 12:06:25AM 4 points [-]

Incidentally, in case it's useful to anyone... The way I originally processed the $112M figure (or $68M as it then was) was something along the lines of:

  • $68M pledged
  • apply 90% cynicism
  • that gives $6.8M
  • that's still way too large a number to represent actual ROI from $170K worth of volunteer time
  • how can I make this inconvenient number go away?
  • aha! This is money that's expected to roll in over the next several decades. We really have no idea what the EA movement will turn into over that time, so should apply big future discounting when it comes to estimating our impact

    (note it looks like Will was more optimistic, applying 67% cynicism to get from $400 to $130)
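The "cynicism discount" heuristic above amounts to a one-line calculation; a minimal sketch (the function name is mine, and the figures are the ones quoted in the comment):

```python
def discounted(pledged, cynicism):
    """Fraction of a pledged amount expected to actually materialise."""
    return pledged * (1 - cynicism)

print(discounted(68e6, 0.90))  # $68M with 90% cynicism -> about $6.8M
print(discounted(400, 0.67))   # $400 with 67% cynicism -> about $130
```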

Comment author: Giles 21 February 2013 11:46:49PM 5 points [-]

This implies immediately that 75-80% haven't, and in practise that number will be higher care of the self-reporting. This substantially reduces the likely impact of 80,000 hours as a program.

Reduces it from what? There's a point at which it's more cost-effective to just find new people than to keep working to persuade existing ones. My intuition doesn't say much about whether this happy point is above or below 25%.

Good point about self-reporting potentially exaggerating the impact though.

Comment author: Giles 09 February 2013 06:17:21AM 34 points [-]

I love the landmine metaphor - it blows up in your face and it's left over from some ancient war.

Comment author: Qiaochu_Yuan 01 February 2013 06:08:33PM 34 points [-]

Things that are your fault are good because they can be fixed. If they're someone else's fault, you have to fix them, and that's much harder.

-- Geoff Anders (paraphrased)

Comment author: Giles 09 February 2013 04:34:03AM 4 points [-]

Did he mean if they're someone else's fault then you have to fix the person?

Comment author: Qiaochu_Yuan 07 February 2013 06:02:50PM 5 points [-]

Do people update far more strongly on evidence if it comes from their own lab?

This isn't a completely unreasonable thing to do. For one thing, you have much more knowledge about the methodology of experiments conducted in your lab.

Comment author: Giles 07 February 2013 06:03:51PM 5 points [-]

You also know your own results aren't fraudulent.

Comment author: Giles 07 February 2013 05:58:06PM 3 points [-]

That experiment has changed Latham's opinion of priming and has him wondering now about the applications for unconscious primes in our daily lives.

He seems to have skipped right over the part where he wonders why he and Bargh see one thing and other people see something different. Do people update far more strongly on evidence if it comes from their own lab?

Also, yay priming! (I don't want this comment to sound negative about priming as such)

Comment author: Giles 06 February 2013 07:19:38PM *  1 point [-]

Point 2 sounds wrong to me - like you're trying to explain why having a consistent internal belief structure is important to someone who already believes that.

The things which would occur to me are:

  • If both of you are having reactions like this then you're dealing with status, in-group and out-group stuff, taking offense, etc. If you can make it not be about that and be about the philosophical issues - if you can both get curious - then that's great. But I don't know how to make that happen.
  • Does your friend actually have any contradictory beliefs? Do they believe that they do?
  • You could escalate - point out every time your friend applies a math thing to social justice. "2000 people? That's counting. You're applying a math thing there." "You think this is better than that? That's called a partial ordering and it's a math thing". I'm not sure I'd recommend this approach though.
Comment author: shminux 06 February 2013 05:15:13PM 10 points [-]

I suspect that what frustrated you is not noticing your own confusion. You clearly had a case of lost purposes: "applying a math thing to social justice" is instrumental, not terminal. You discovered a belief "applying math is always a good thing" which is not obviously connected to your terminal goal "social justice is a good thing".

You are rationalizing your belief about applying math in your point 2:

An inconsistent belief system will generate actions that are oriented towards non-constant goals, and interfere destructively with each other, and not make much progress. A consistent belief system will generate many actions oriented towards the same goal, and so will make much progress.

How do you know that? Seems like an argument you have invented on the spot to justify your entrenched position. Your point 3 confirms it:

No matter how offended you are about something, thinking about it will still resolve the issue.

In other words, you resolved your cognitive dissonance by believing the argument you invented, without any updating.

If you feel like thinking about the issue some more, consider connecting your floating belief "math is good" to something grounded, like The Useful Idea of Truth:

True beliefs are more likely than false beliefs to make correct experimental predictions, so if we increase our credence in hypotheses that make correct experimental predictions, our model of reality should become incrementally more true over time.

This is reasonably uncontroversial, so the next step would be to ponder whether in order to be better at this social justice thing one has to be better at modeling reality. If so, you can proceed to the argument that a consistent model is better than an inconsistent one at this task. This may appear self-evident to you, but not necessarily to your "socially progressive" friend. Can you make a convincing case for it? What if s/he comes up with examples where someone following an inconsistent model (like, say, Mother Teresa) contributes more to social justice than those who study the issue for a living? Would you accept their evidence as a falsification of your meta-model "logical consistency is essential"? If not, why not?

Comment author: Giles 06 February 2013 06:50:52PM 1 point [-]

This may appear self-evident to you, but not necessarily to your "socially progressive" friend. Can you make a convincing case for it?

Remember, you have to make a convincing case without using stuff like logic.
