Lumifer comments on Existential Risk and Existential Hope: Definitions - LessWrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I haven't had a chance to read the report fully yet, but my immediate reaction from the first two definitions given is that they don't rely on any specific moral framework (or class of moral frameworks), whereas the proposed definition seems to rest on a utilitarian or near-utilitarian notion.
Edit: I've now read the whole thing, and it would be nice to see a substantive response to the issue raised in the paragraph above. Also, calling this a technical report seems a little overblown given how short it is. And, as a matter of signaling, a formal bibliography would be nice.
I agree that it's short; I've now added this as a descriptor above. "Technical report" was the most appropriate category, though they're usually longer.
We address this, saying:
What counts as an existential catastrophe does depend on the moral framework (which seems appropriate), but it doesn't seem tied to any specific one. I agree that the simple definition (extinction) sidesteps this issue, and that that is a point in its favour.
Different frameworks can certainly disagree on whether a given event is a catastrophe. E.g., the eruption of a new World War might seem a good thing to some who believe in the Rapture.
If you're saying that some nontrivial subset of potential catastrophes is universally regarded as such, then I think that claim should be substantiated. If, OTOH, you're saying this is true only so long as you ignore some parts of humanity, then you should specify which parts.