lukeprog comments on Will the world's elites navigate the creation of AI just fine? - Less Wrong
From Sunstein's Worst-Case Scenarios:
Similar issues are raised by the continuing debate over whether certain antidepressants impose a (small) risk of breast cancer. A precautionary approach might seem to argue against the use of these drugs because of their carcinogenic potential. But the failure to use those antidepressants might well impose risks of its own, certainly psychological and possibly even physical (because psychological ailments are sometimes associated with physical ones as well). Or consider the decision by the Soviet Union to evacuate and relocate more than 270,000 people in response to the risk of adverse effects from the Chernobyl fallout. It is hardly clear that on balance this massive relocation project was justified on health grounds: "A comparison ought to have been made between the psychological and medical burdens of this measure (anxiety, psychosomatic diseases, depression and suicides) and the harm that may have been prevented." More generally, a sensible government might want to ignore the small risks associated with low levels of radiation, on the ground that precautionary responses are likely to cause fear that outweighs any health benefits from those responses - and fear is not good for your health.
But at least so far in the book, Sunstein doesn't mention the obvious rejoinder about investing now to prevent existential catastrophe.