All of trurl's Comments + Replies

trurl

I agree with this, but I think I'm making a somewhat different point. 

An extinction event tomorrow would create significant certainty, in the sense that it determines the future outcome. But its value is still highly uncertain, because the sign of the curtailed future is unknown. A bajillion years is a long time, and I don't see any reason to presume that a bajillion years of increasing technological power and divergence from the 21st century human experience will be positive on net. I hope it is, but I don't think my hope resolves the sign uncertainty.

trurl

I agree that longtermist priorities tend to also be beneficial in the near term, and that sign uncertainty is perhaps a more central consideration than the initial post lets on.

However, I do want to push back on the voting example. I think the point about small probabilities mattering in an election holds if, as you say, we assume we know who the best candidate is. But it seems unlikely to me that we can ever have that kind of sign certainty over a longtermist time horizon.
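To make that concrete, here is a toy expected-value sketch (my own illustrative setup, not anything from the original post): suppose a vote swings the election with probability p, and the better candidate is worth V to the long-term future. If we know which candidate is better, the expected value of voting is pV, which can be large even for tiny p when V is astronomical. But if we only assign probability q to having identified the right candidate, and being wrong is symmetrically bad, the expected value shrinks toward zero as q approaches 1/2:

$$
\mathbb{E}[\text{vote} \mid \text{sign known}] = pV, \qquad
\mathbb{E}[\text{vote} \mid \text{sign uncertain}] = q\,pV + (1-q)(-pV) = pV\,(2q-1).
$$

On century-or-longer horizons, q may sit uncomfortably close to 1/2, which is the sign-uncertainty worry in a nutshell.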

To illustrate this, I'd like to reconsider the voting example in the context of a long t...

Steven Byrnes
I think we can say some things with reasonable certainty about the long term future. Two examples: First, if humans go extinct in the next couple decades, they will probably remain extinct ever after. Second, it's at least possible for a powerful AGI to become a singleton, wipe out or disempower other intelligent life, and remain stably in control of the future for the next bajillion years, including colonizing the galaxy or whatever. After all, AGIs can make perfect copies of themselves, AGIs don't age like humans do, etc. And this hypothetical future singleton AGI is something that might potentially be programmed by humans who are already alive today, as far as anyone knows.

(My point in the second case is not "making a singleton AGI is something we should be trying to do, as a way to influence the long term future". Instead, my point is "making a singleton AGI is something that people might do, whether we want them to or not … and moreover those people might do it really crappily, like without knowing how to control the motivations of the AGI they're making. And if that happens, that could be an extremely negative influence on the very long term future. So that means that one way to have an extremely positive influence on the very long term future is to prevent that bad thing from happening.")