Myron Hedderson
Myron Hedderson has not written any posts yet.

Retracted: after conversation on Ben's blog, I see it's not that he thinks marginal costs will go down with scale - rather, he treats "funding gap" more strictly than I would, as something like "the amount of money charities can absorb and deploy at approximately present marginal costs per life saved". And if you define the funding gap that way - money that can be deployed at approximately present marginal costs, or better - then "assume marginal cost does not go down at all" is indeed the most generous assumption.
I think I've found a crux that makes things make sense today, that didn't make sense to me yesterday as I was reading the first linked blog post. When trying to think about the existence or nonexistence of a funding gap, Ben says:
If we assume that all of this is treatable at current cost per life saved numbers - the most generous possible assumption for the claim that there's a funding gap
And my brain skipped a beat, and went "no, the opposite, that's the least generous possible assumption. As we treat more, the next treatment becomes more expensive, not less. Maybe that's what he meant to say?" And then the rest of the... (read more)
Thanks. Side note (I posted another comment about this just now, because it just clicked for me this morning): I think Ben Hoffman thinks (or thought, when he wrote his blog posts) that when you treat more malaria cases or do other philanthropy, marginal cost goes down. He says:
If we assume that all of this is treatable at current cost per life saved numbers - the most generous possible assumption for the claim that there's a funding gap
When in fact it's the least generous, under the assumption that marginal cost goes up. If you think marginal costs will only go down from current levels as we scale, then it is indeed suspicious that nobody's decided to just dump all their money into scaling.
An analogy:
We are in the "you can save a drowning child for an affordable price" world. In this world (or a hypothetical one for the purpose of this analogy), 1,000 infants are being dumped in a large lake per day. Some of them are right by the shore, easy to get to like the drowning child thought experiment postulates, some are out in deeper water. I'm a strong swimmer, and could save any of those infants, but I can't save all of them by myself, and if I try to save as many as I can today, I will exhaust and potentially injure myself, meaning I can save fewer tomorrow. I estimate... (read more)
It seems like 3 things are simultaneously true:
1) It's not possible to eradicate malaria for $5,000/life saved (approximately the current marginal cost - a ballpark estimate with lots of wiggle room). This generalizes to all other currently known interventions to make people's lives better at low cost: it's relatively cheap now, but one should expect that saving the last life that would otherwise have died from malaria, or helping the last person who can be helped with some other intervention that is currently near the best marginal cost, will cost a lot more than $5,000. I feel like GiveWell et al. are clear about this, or at least this is... (read 803 more words →)
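The rising-marginal-cost point above can be sketched numerically. All numbers here are made up for illustration (the $5,000 starting figure is the ballpark from the comment; the shape of the rising curve is purely an assumption, not a GiveWell estimate):

```python
# Toy sketch: compare the total cost of saving N lives under a flat
# marginal-cost assumption vs. a rising one, both starting at $5,000.
# The rising curve is hypothetical - chosen only to show the direction
# of the effect, not to model real intervention economics.

def total_cost(n_lives, marginal_cost):
    """Sum the cost of each successive life saved."""
    return sum(marginal_cost(i) for i in range(n_lives))

flat = lambda i: 5_000                         # "current cost for everyone"
rising = lambda i: 5_000 * (1 + i / 100_000)   # cost creeps up as easy cases run out

n = 200_000
print(total_cost(n, flat))    # flat assumption: $1.0B for 200k lives
print(total_cost(n, rising))  # rising marginal cost: roughly twice as much
```

Under rising marginal costs, pricing every remaining case at today's $5,000 understates the true total - which is why "all of this is treatable at current cost per life saved" is the generous assumption for the claim that a large funding gap exists.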
If most failures of rationality are adaptively self-serving motivated reasoning
I would say that most failures of rationality were adaptive in the ancestral environments, but I wouldn't say they all count as "motivated reasoning".
Simple example: Seeing a snake in the grass, and responding as if there is a snake in the grass, in the presence of ambiguous stimuli that have only a 10% chance of being a snake, could well result in more surviving offspring than a more nuanced, likely slower, and closer-to-correct estimation of the probability there is a snake. But this is not a result of motivated reasoning where someone is advocating for their interests, it's just a hack that our... (read more)
How rare good people are depends heavily on how high your bar for qualifying as a good person is. Many forms of good-person behaviour are common, some are rare. A person who has never done anything they later felt guilty about (who has a functioning conscience) is exceedingly rare. In my personal experience, I have found people to vary on a spectrum from "kind of bad and selfish quite often, but feels bad about it when they think about it and is good to people sometimes" to "consistently good, altruistic and honest, but not perfect, may still let you down on occasion", with rare exceptions falling outside this range.
Also, if it is true that a lot of people are confused by good and courageous people, I am unclear where the confusion comes from. Good behaviour gets rewarded from childhood, and bad behaviour gets punished - not perfectly, of course, and in some places and times very imperfectly indeed, but being seen as a good person by your community's definition of "good" carries many social rewards, and we're social creatures. I don't see where the mystery is.
Were the confused people raised by wolves (or rather, by non-social animals)?
I don't actually buy the premise that a lot of people are confused by moral courage, on reflection.
This doesn't match my experience of what good people are generally like. I find them to be often happy to do what they are doing, rather than extremely afraid of not doing it, as I imagine would be the case if their reasons for behaving as they do were related to avoidance of pain.
There are of course exceptions. But if thinking I had done the wrong thing was extremely painful to me, literally "1000x more than any physical pain" I predict I'd quite possibly land on the strategy "avoid thinking about matters of right and wrong, so as to reliably avoid finding out I'd done wrong." A nihilistic worldview where nothing was... (read more)
I mean, yes this seems right. In which case, taking it as a premise that this weird state doesn't last long, it follows that there's no point trying to plan for a future where human-like things continue to exist. BUT: from where we stand right now, we do actually have some control over whether everybody dies and nothing human-like continues into the future. The simplest plan to avoid extinction by AI is "don't build the thing that kills us", but there are more sophisticated options... (read more)