Agree with purchasing non-sketchiness signalling and utilons separately. This is especially important if, like jkaufman, a lot of your value comes from being an effective altruist role model.
Agree that if diversification is the only way to get the elephant to part with its money then it might make sense.
Similarly, if you give all your donations to a single risky organization and they turn out to be incompetent then it might demotivate your future self. So you should hedge against this, which again can be done separately from purchasing the highest-expected-value thing.
Confused about what to do if we know we're in a situation where we're far from behaving like rational agents, but aren't sure exactly how. I think this is the case with purchasing x-risk reduction, and with failure to reach Aumann agreement between aspiring effective altruists. To what extent do the rules still apply?
Lots of valid reasons for diversification can also serve as handy rationalizations. Diversification feels like the right thing to do - and hey, here are the reasons why! I feel like diversification should feel like the wrong thing to do, and then possibly we should do it anyway but sort of grudgingly.
you should give all your donations to the charity that most aids the global diversification program. Splitting your donations implies being risk-averse in what you personally achieve, which is perverse.
Well, you have to have a very bizarre utility function, for sure. ;)
even if you were risk-averse in lives saved, which I do not think you should be
I'm not sure about this point. I can imagine having a preference for saving at least X lives, versus a gamble with the same expected number of lives saved but a more broadly distributed probability function.
I can imagine having a preference for saving at least X lives
I feel like you've got a point here but I'm not quite getting it. Our preferences are defined over outcomes, and I struggle to see how "saving X lives" can be seen as an outcome - I see outcomes more along the lines of "X number of people are born and then die at age 5, Y number of people are born and then die at age 70". You can't necessarily point to any individual and say whether or not they were "saved".
I generally think of "the utility of saving 6 lives" as a shorthand for something like "the difference in utility between (X people die at age 5, Y people die at age 70) and (X-6 people die at age 5, Y+6 people die at age 70)".
We'd have to use more precise language if that utility varies a lot for different choices of X and Y, of course.
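To pin that shorthand down in symbols (my notation, nothing official): the value of saving $n$ lives, against a baseline where $X$ people die at age 5 and $Y$ at age 70, would be

$$U_{\text{save}}(n) = U\big(X - n \text{ die at } 5,\; Y + n \text{ die at } 70\big) - U\big(X \text{ die at } 5,\; Y \text{ die at } 70\big),$$

and the shorthand is only safe when this difference is roughly constant over the values of $X$ and $Y$ we might plausibly be facing.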
A question for the folks who voted this up: on a scale from "enjoyed reading this even though didn't feel like I really learned anything" to "fantastic, now I understand everything", how useful did this post feel to you?
Personally I felt this had several very important insights that only clicked properly together while I was writing it, such as how it's almost impossible to even imagine certain kinds of decision-making if we literally had no concept of personal identity, and the way that anticipated experience is treated separately from more abstract modeling in our brains. But judging from the relatively low score of the post and the fact that there's very little discussion of those insights in the comments, it looks like most folks didn't come away feeling that they were important? (Or maybe didn't agree with them, but in that case I would've expected more criticism.)
I felt like I gained one insight, which I attempted to summarize in my own words in this comment.
It also slightly brought into focus for me the distinction between "theoretical decision processes I can fantasize about implementing" and "decision processes I can implement in practice by making minor tweaks to my brain's software". The first set can include self-less models such as paperclip maximization or optimizing those branches where I win the lottery and ignoring the rest. It's possible that in the second set a notion of self just keeps bubbling up whatever you do.
One and a half insights is pretty good going, especially on a tough topic like this one. Because of inferential distance, what feels like 10 insights to you will feel like 1 insight to me - it's like you're supplying some of the missing pieces to your own jigsaw puzzle, but in my puzzle the pieces are a different shape.
So yeah, keep hacking away at the edges!
Meetup : Toronto - The nature of discourse; what works, what doesn't
Discussion article for the meetup : Toronto - The nature of discourse; what works, what doesn't
Place: Upstairs at The Imperial Public Library 54 Dundas St. E, near Dundas Station. Enter at the door on the right marked "library", go upstairs and look for the paperclip sign.
How do we avoid competitive debates where everyone loses, and achieve productive discussions where everyone wins? How do we do this when our brains seem set up to embrace "the brawl"?
I can imagine that if you design an agent by starting off with a reinforcement learner, and then bolting some model-based planning stuff on the side, then the model will necessarily need to tag one of its objects as "self". Otherwise the reinforcement part would have trouble telling the model-based part what it's supposed to be optimizing for.
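To make that concrete, here's a toy sketch in Python (entirely my own illustration; the class names, the one-dimensional world, and the goal are all made up). The point it demonstrates: the planner can only compute the reward the reinforcement part cares about if some object in the model is tagged as the agent itself:

```python
from dataclasses import dataclass

@dataclass
class WorldObject:
    name: str
    position: int
    is_self: bool = False  # the tag in question

@dataclass
class WorldModel:
    objects: list

    def imagine_move(self, delta):
        """Roll the model forward: only the object tagged as 'self' moves."""
        return WorldModel([
            WorldObject(o.name, o.position + delta if o.is_self else o.position, o.is_self)
            for o in self.objects
        ])

def reward(model, goal):
    """The RL side's reward is defined on the self-tagged object; without the
    tag, 'how close am I to the goal?' is not even expressible in the model."""
    me = next(o for o in model.objects if o.is_self)
    return -abs(me.position - goal)

# Planning step: score candidate actions by the reward of the imagined future.
world = WorldModel([WorldObject("agent", 0, is_self=True), WorldObject("rock", 5)])
best = max([-1, 0, +1], key=lambda a: reward(world.imagine_move(a), goal=3))
print(best)  # 1: stepping toward the goal scores highest
```

Strip out the `is_self` flag and the `reward` function has no way to connect the model's contents to the quantity the reinforcement learner is optimizing.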
This is like a whole sequence condensed into a post.
The pledging back-of-the-envelope calculation got me curious, because I had been assuming GWWC wouldn't flat-out lie about how much had been pledged (they say "We currently have 291 members ... who together have pledged more than 112 million dollars", which implies an actual total, not an estimate).
On the other hand, it's just measuring pledges, it's not an estimate of how much money anyone expects to actually materialise. It hadn't occurred to me that anyone would read it that way - I may be mistaken here though, in which case there's a genuine issue with how the number is being presented.
Anyway, I still wasn't sure the pledge number made sense so I did my own back-of-the-envelope:
- £72.68M pledged
- 291 members
- £250K pledged per person over the course of their life
- 40 years average expected time until retirement (this may be optimistic; I get the impression most members are young, though)
- £6.2K average pledged per member per year
That would mean people are expecting to make £62K per year averaged over their entire remaining career, which still seems very optimistic. But:
- some people will be pledging more than 10%
- there might be some very high income people mixed in there, dragging the mean up.
So I think this passes the laugh test for me, as a measure of how much people might conceivably have pledged, not how much they'll actually deliver.
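For anyone who wants to tweak the assumptions, here's the same arithmetic as a quick Python script (figures as above; the 40-year horizon and the 10% pledge fraction are the assumptions already stated, not established facts):

```python
# Back-of-the-envelope check of the GWWC pledge figure.
total_pledged = 72.68e6   # £72.68M pledged in total
members = 291
years_to_retirement = 40  # assumed average remaining career
pledge_fraction = 0.10    # the standard GWWC pledge of 10% of income

per_member = total_pledged / members            # ~ £250K over a lifetime
per_year = per_member / years_to_retirement     # ~ £6.2K per year
implied_income = per_year / pledge_fraction     # ~ £62K per year

print(f"£{per_member:,.0f} per member, £{per_year:,.0f}/year pledged, "
      f"implying an average income of £{implied_income:,.0f}/year")
```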
Incidentally, in case it's useful to anyone... The way I originally processed the $112M figure (or $68M as it then was) was something along the lines of:
- $68M pledged
- apply 90% cynicism
- that gives $6.8M
- that's still way too large a number to represent actual ROI from $170K worth of volunteer time
- how can I make this inconvenient number go away?
- aha! This is money that's expected to roll in over the next several decades. We really have no idea what the EA movement will turn into over that time, so we should apply big future discounting when it comes to estimating our impact
(note it looks like Will was more optimistic, applying 67% cynicism to get from $400 to $130)
This implies immediately that 75-80% haven't, and in practice that number will be higher because of the self-reporting. This substantially reduces the likely impact of 80,000 Hours as a program.
Reduces it from what? There's a point at which it's more cost-effective to just find new people than to carry on working to persuade existing ones. My intuition doesn't say much about whether this happy point is above or below 25%.
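One way to formalize that break-even point (my framing, with made-up symbols): persuading existing members remains the better buy only while

$$\frac{c_{\text{persuade}}}{\Delta p_{\text{existing}}} < \frac{c_{\text{recruit}}}{p_{\text{new}}},$$

where $c$ is the marginal cost of each activity, $\Delta p_{\text{existing}}$ is the increase in an existing member's probability of following through, and $p_{\text{new}}$ is a fresh recruit's probability of following through.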
Good point about self-reporting potentially exaggerating the impact though.
Tallinn and Tegmark? Are they participants?
The best thing about this was that there was very little status dynamic within the CFAR house - we were all learning together as equals.