hairyfigment comments on Existential Risk and Existential Hope: Definitions - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (38)
The line seems ambiguous, and I don't like the talk of "objective probabilities" used to explain it. But you seem to be talking about E(V) as calculated by a hypothetical future agent after updating. Presumably the present agent looking at this future possibility only cares about its presently calculated E(V) given that hypothetical, which need not be the same (if it handles counterfactuals sensibly). To the extent that the two are equal, the future agent is correct (in other words, the "catastrophic event" has already occurred), and finding this out would actually raise E(V) conditional on that assumption.
When someone is ignorant of the actual chance of a catastrophic event happening, even if they consider it possible, their E(V) will be fairly high. When they update significantly toward that event happening, their E(V) will drop sharply. That change itself meets the paper's definition of an 'existential catastrophe'.
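A minimal numeric sketch of the point above, using made-up values (the payoffs and probabilities here are purely illustrative assumptions, not from the paper): the agent's E(V) falls dramatically purely because its credence in the catastrophe changes, with no event in the world required.

```python
# Illustrative payoffs (assumed for this sketch):
V_FINE = 100.0         # value of the future if no catastrophe occurs
V_CATASTROPHE = 1.0    # residual value if the catastrophe occurs

def expected_value(p_catastrophe: float) -> float:
    """E(V) given the agent's current credence in the catastrophe."""
    return p_catastrophe * V_CATASTROPHE + (1 - p_catastrophe) * V_FINE

# An ignorant agent assigns the catastrophe only a small credence...
ev_before = expected_value(0.05)   # 95.05

# ...then updates strongly toward the catastrophe being likely.
ev_after = expected_value(0.95)    # 5.95

# The drop in E(V) comes entirely from the Bayesian update itself,
# which is what the quoted definition would count as a "catastrophe".
print(ev_before, ev_after)
```

On the evidential-decision-theory reading discussed below, an agent that values its own calculated E(V) would simply refuse to make the update, which is the objection being raised.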
Sounds like evidential decision theory again. By that argument, you could maintain a high E(V) simply by avoiding looking into existential risks.
Yes, that's my issue with the paper: it doesn't distinguish that from actual catastrophes.
I don't know what you think you're saying: the definition no longer says that if you take it to refer to E(V) as calculated by the agent at the earlier time (conditional on the "catastrophe").
ETA: "An existential catastrophe is an event which causes the loss of most expected value."