This discussion was about agential risks; the part I quoted was about extreme ecoterrorism as a result of environmental degradation. In other words, the main post was partly about stricter regulations on CO2 as a means of minimizing the risk of a potential doomsday scenario from an anti-global-warming group.
I think the issue here might be slightly different from the one posed. I think the real issue is that children instinctively assume they're running on corrupted hardware. All their prior experience with math has been with solvable problems: they've had problems they couldn't solve, and then been shown the mistake was on their part. Without good cause, why would they suddenly assume all their priors are wrong, rather than just that they're failing to grasp it? Given their priors and information, it's rational to expect that they missed something.
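To make that concrete, here's a minimal sketch of the update a child might implicitly be making. All the numbers (the 1% prior, the failure rates) are hypothetical assumptions for illustration, not anything from the discussion:

# Hypothetical Bayes update: "is this problem actually broken, or did I just miss something?"
# All numbers are made-up assumptions for illustration.
prior_broken = 0.01        # child's prior that a given problem is unsolvable/mistaken
p_fail_given_broken = 1.0  # if the problem really is broken, the child certainly fails
p_fail_given_ok = 0.2      # the child sometimes fails even on solvable problems

p_fail = prior_broken * p_fail_given_broken + (1 - prior_broken) * p_fail_given_ok
posterior_broken = prior_broken * p_fail_given_broken / p_fail
print(posterior_broken)    # ~0.048

Under those made-up numbers, a single failure barely moves the needle: the child's confidence that the fault is their own stays above 95%, which is exactly the behavior described above.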
Citation? This is commonly asserted by AI risk proponents, but I'm not sure I believe it. My best friend's values are slightly misaligned relative to my own, but if my best friend became superintelligent, that seems to me like it'd be a pretty good outcome.
Furthermore, implementing stricter regulations on CO2 emissions could decrease the probability of extreme ecoterrorism and/or apocalyptic terrorism, since environmental degradation is a “trigger” for both.
Disregarding any discussion of legitimate climate concerns, isn't this a really bad decision? Isn't it better to be unblackmailable, so as to disincentivize blackmail?
I also think that these scenarios usually devolve into a "would you rather..." game that is not very illuminating of either underlying moral values or the validity of ethical frameworks.
Can you expand on this a bit? (Full disclosure: I'm still relatively new to Less Wrong, and still learning quite a bit that I think most people here have a firm grip on.) I would think they illuminate a great deal about our underlying moral values, if we assume they're honest answers and that people are actually bound by their morals (or are at least answering a...
There are a lot of conflicting aspects to consider here outside of a vacuum. Discounting the unknown unknowns, which could factor heavily here since it's an emotionally biasing topic, you've got the fact that the baby is going to be raised by a presumably attentive mother, as opposed to the five who wound up in that situation once, which shows at least some increased risk of their falling victim to such a situation again. Then you have the psychological damage to the mother, which is going to be even greater because she had to do the act herself. Then you've got the fa...
Personally, I think the update most people should be making is the one getting the least attention: that even a 30% chance means 3 times out of 10. Things far more unlikely than 3 in 10 happen every day. But because we assign such importance to the election, we place much greater confidence in our predictions, even when we know we're not that confident.
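For what it's worth, a quick simulation makes the point vivid. The 30% figure echoes the forecast being discussed; the seed and trial count are arbitrary choices of mine:

import random

random.seed(0)
trials = 10_000
# Count how often a 30%-probability event actually occurs across many "worlds"
hits = sum(random.random() < 0.30 for _ in range(trials))
print(hits / trials)  # roughly 0.30: the "unlikely" outcome happens in about 3 of every 10 worlds

A 30% outcome isn't a fluke when it happens; it's the expected result nearly a third of the time.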