[anonymous] · 7y · 40

Short summary: This is a transcript of a talk Nick Bostrom gave, presenting a framework for thinking about how to handle potential major shifts in our reasoning when evaluating risks in a global catastrophic risk (GCR) context.

I'd add that it also starts to formalise the phenomenon where one's best judgement oscillates back and forth with each successive layer of an argument. It's not clear what to do when something seems strongly net positive, then strongly net negative, then strongly net positive again after further consideration. If the value of information is high but it's difficult to make any headway, what should we even do?
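
To make the oscillation concrete, here is a toy sketch with entirely made-up numbers: a running net-value estimate that flips sign as each new layer of argument is folded in. Nothing here comes from the talk; it just illustrates the pattern described above.

```python
# Toy illustration (hypothetical numbers) of a net-value estimate that
# flips sign as successive layers of an argument are taken into account.

considerations = [+5.0, -8.0, +6.0, -4.5, +3.0]  # made-up value of each layer

estimate = 0.0
for i, c in enumerate(considerations, start=1):
    estimate += c
    verdict = "net positive" if estimate > 0 else "net negative"
    print(f"after consideration {i}: estimate = {estimate:+.1f} ({verdict})")

# after consideration 1: estimate = +5.0 (net positive)
# after consideration 2: estimate = -3.0 (net negative)
# ...and so on: the sign keeps flipping with each layer.
```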

This is especially common for complex problems like x-risk. It also makes us extremely prone to bias, since by default we question conclusions we dislike more thoroughly than ones we like.

Something like the Parliamentary Model for taking normative uncertainty into account looks fairly robust. This is the idea that, if you are unsure which moral theory is true, you should assign probabilities to the different theories and imagine a parliament where each theory sends delegates in proportion to its probability.
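
To make the seat-allocation step concrete, here is a minimal Python sketch. Everything in it is hypothetical: the theories, credences, actions, and support scores are invented for illustration, and the seat-weighted vote at the end is just one simple decision rule (Bostrom's version has the delegates negotiate rather than vote mechanically).

```python
# Minimal sketch of the Parliamentary Model's seat-allocation step,
# with hypothetical credences and a simple seat-weighted vote.

credences = {  # probability assigned to each moral theory (made up)
    "total_utilitarianism": 0.5,
    "deontology": 0.3,
    "virtue_ethics": 0.2,
}

SEATS = 100

# Each theory gets parliament seats in proportion to its probability.
seats = {theory: round(p * SEATS) for theory, p in credences.items()}
# -> {"total_utilitarianism": 50, "deontology": 30, "virtue_ethics": 20}

# Hypothetical scores: how strongly each theory's delegates favour each action.
support = {
    "fund_xrisk_research": {"total_utilitarianism": 1.0, "deontology": 0.6, "virtue_ethics": 0.7},
    "do_nothing":          {"total_utilitarianism": 0.1, "deontology": 0.5, "virtue_ethics": 0.4},
}

def parliament_choice(seats, support):
    """Pick the action with the highest seat-weighted support.

    This is one possible decision rule, not the model's canonical one.
    """
    totals = {
        action: sum(seats[theory] * s for theory, s in scores.items())
        for action, scores in support.items()
    }
    return max(totals, key=totals.get)

print(parliament_choice(seats, support))  # -> fund_xrisk_research
```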

Until now, I don't think a strong voice had ever given me permission to cringe along the middle road as an unambitious, teleologically inconsistent, hand-wringing moderate, a bastard child of prudence and myopia with no vision and no sense of opportunity.

But, like, once you formulate that as a method for acknowledging and managing profound collective uncertainty, it kind of makes sense. So many ideologies claim to know far more than anyone really could about complex social systems, and I still don't think I fully understand how they get away with it.