Luke_A_Somers comments on Existential Risk and Existential Hope: Definitions - Less Wrong

Post author: owencb, 10 January 2015 07:09PM




Comment author: Luke_A_Somers, 12 January 2015 06:30:02PM, 2 points

That value wasn't lost; they would have updated to reassess their expected value.

Comment author: VAuroch, 17 January 2015 02:52:08AM, 0 points

That requires a precise meaning of expected value in this context that includes only certain varieties of uncertainty. It would take into account the actual probability that, for example, a comet exists on a collision course with the Earth, but it could not include the state of our knowledge about whether that is the case.

If it did include states of knowledge, then going from 'low probability that a comet strikes the Earth and wipes out all or most human life' to 'Barring our action to avoid it, near-certainty that a comet will strike the Earth and wipe out all or most human life' is itself a catastrophic event and should be avoided.

Comment author: Luke_A_Somers, 17 January 2015 05:30:41PM, 1 point

> That requires a precise meaning of expected value in this context that includes only certain varieties of uncertainty.

Kind of? You assess past expected values in light of the information you have now, not just the information you had then. That way, finding out bad news isn't itself the catastrophe.

Comment author: hairyfigment, 17 January 2015 06:08:44AM, -2 points

The line seems ambiguous, and I don't like the talk of "objective probabilities" used to explain it. But you seem to be talking about E(V) as calculated by a hypothetical future agent after updating. Presumably the present agent, looking at this future possibility, cares only about its present calculated E(V) given that hypothetical, which need not be the same (if it deals with counterfactuals in a sensible way). To the extent that the two are equal, the future agent is correct (in other words, the "catastrophic event" has already occurred), and finding this out would actually raise E(V) given that assumption.

Comment author: VAuroch, 17 January 2015 10:30:32AM, 1 point

When someone is ignorant of the actual chance of a catastrophic event happening, even if they consider it possible, their expected value (EV) will be fairly high. When they update significantly toward that event happening, their EV will drop very sharply. That change itself meets the definition of "existential catastrophe".
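To make the arithmetic behind this concrete, here is a toy sketch of the comet example. All numbers here are hypothetical illustrations, not figures from the thread: a tiny prior probability of a strike gives a near-maximal EV, and updating to near-certainty collapses it, even though nothing in the world has changed.

```python
def expected_value(p_catastrophe, value_if_safe, value_if_catastrophe=0.0):
    """EV over two outcomes: survival (value_if_safe) vs. catastrophe."""
    return (1 - p_catastrophe) * value_if_safe + p_catastrophe * value_if_catastrophe

# Before the update: the agent assigns a tiny probability to a comet strike.
ev_before = expected_value(p_catastrophe=1e-6, value_if_safe=100.0)

# After strong evidence: near-certainty of a strike, barring action to avoid it.
ev_after = expected_value(p_catastrophe=0.99, value_if_safe=100.0)

print(ev_before)  # ≈ 100.0: ignorance leaves EV almost untouched
print(ev_after)   # 1.0: the update alone wipes out most expected value
```

On the definition quoted later in the thread ("an event which causes the loss of most expected value"), the update itself, taken literally, qualifies as the catastrophe; that is exactly the reading being disputed here.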

Comment author: RichardKennaway, 17 January 2015 06:52:50PM, 1 point

Sounds like evidential decision theory again. According to that argument, you should maintain high EV by avoiding looking into existential risks.

Comment author: VAuroch, 18 January 2015 03:53:47AM, 0 points

Yes, that's my issue with the paper; it doesn't distinguish such updates from actual catastrophes.

Comment author: hairyfigment, 17 January 2015 05:20:33PM, 0 points

I don't know what you think you're saying; the definition no longer says that if you take it to refer to E(V) as calculated by the agent at the earlier time (conditional on the "catastrophe").

ETA: "An existential catastrophe is an event which causes the loss of most expected value."