Comment author: [deleted] 03 December 2013 12:17:11AM 7 points [-]

Excuse me, but this sounds to me like a terrible argument. If the far future goes right, our descendants will despise us as complete ignorant barbarians and won't give a crap what we did or didn't do. If it goes wrong (i.e., rocks fall, everyone dies), then all those purported descendants aren't a minus on our humane-ness ledger, they're a zero: potential people don't count (since they're infinite in number and don't exist, after all).

Besides, I damn well do care how people lived 5000 years ago, and I would certainly hope that my great-to-the-Nth-grandchildren will care how I live today. This should especially matter to someone whose idea of the right future involves being around to meet those descendants, in which case the preservation of lives ought to matter quite a lot.

God knows you have an x-risk fetish, but other than FAI (which carries actual benefits aside from averting highly improbable extinction events) you've never actually justified it. There has always been some small risk that we could all be wiped out by a random disaster. The world has been overdue for certain natural disasters for millennia now, and we just don't really have a way to prevent any of them. Space colonization would help, but there are vast and systemic reasons why we can't do space colonization right now.

Except, of course, the artificial ones: nuclear winter, global warming, blah blah blah. Those, however, like all artificial problems, are deeply tied in with the human systems generating them, and they need much more systemic solutions than "donate to this anti-global-warming charity to ameliorate the impact or reduce the risk of climate change killing everyone everywhere". But rather like the Silicon Valley start-up community, there's a nasty assumption that problems too large for 9 guys in a basement simply don't exist.

You seem to suffer from a bias where you simply say, "people are fools and the world is insane," and thus write off any notion of doing something about it, modulo your MIRI/CFAR work.

In response to comment by [deleted] on A critique of effective altruism
Comment author: michaeldello 11 July 2016 07:32:56AM 2 points [-]

I think future humans are definitely worthy of consideration. Consider planting a time bomb in a childcare centre for six-year-olds, set to go off in 10 years. Even though the children who will be blown up don't yet exist, this is still a bad thing to do, because it robs those kids of their future happiness and experience.

If you subscribe to the block model of the universe, then time is just another dimension, and future beings exist in the same way that someone in the next room whom you can't see also exists.

Comment author: Strange7 14 December 2013 08:49:20AM 2 points [-]

Any given asteroid will either be detected and deflected in time, or not. There is, to my understanding at least, no mediocre level of asteroid-impact risk management which makes the situation worse, in the sense of outright increasing the chance of an extinction event. More resources could be invested for further marginal improvements, with no obvious upper bound.

Poverty and disease are more complicated problems. Incautious use of antibiotics leads to drug-resistant strains; or you give a man a fish, and he spends the day figuring out how to ask you for another instead of repairing his net. Sufficient resources need to be committed to solve the problem completely, or it just becomes even more of a mess. Once it's solved, it tends to stay solved, and then there are more resources available for everything else, because the population of healthy, adequately-capitalized humans has increased.

In a situation like that, my preferred strategy is to focus on the end-in-sight problem first, and compare the various bottomless pits afterward.

Comment author: michaeldello 11 July 2016 01:34:42AM 0 points [-]

I would have to disagree that there is no mediocre way to make asteroid risk worse through poor impact risk management, though perhaps it depends on what we mean by this. If we're strictly talking about the risk of some unmitigated asteroid hitting Earth, there is indeed likely nothing we can do to increase this risk. However, a poorly conceived detection, characterisation and deflection process could deflect an otherwise harmless asteroid into Earth. Further, developing deflection techniques could make it easier for people with malicious intent to deflect an otherwise harmless asteroid into Earth on purpose. Given how low the natural risk of a catastrophic asteroid impact is, I would argue that the chances of a man-made asteroid impact (either deliberate or accidental) are much higher than the chances of a natural one occurring in the next 100 years.