Hmm, I hadn't thought of the implications of chaining the logic behind the superintelligence's policy - thanks for highlighting it!
I guess the main aim of the post was to highlight the opportunity cost of prioritising contemporary beings, and the fact that alignment doesn't solve that issue, but there are also some normative claims there that the policy could be justified.
Nevertheless, I'm not sure the paradox necessarily applies to the policy in this scenario. Specifically, I don't think the premise
>as long as we discover ever vaster possible tomorrows
holds. The accessible universe is finite and there is only a finite amount of time before heat death, so there should be some ultimate possible tomorrow.
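To spell that step out (a rough sketch in my own notation; the value sequence $V_n$ and the bound $V_{\max}$ are assumptions I'm introducing here, not anything from the post):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Sketch: a finite accessible universe bounds attainable value,
% so the chain of "ever vaster tomorrows" cannot grow without limit.
Let $V_n$ be the value of the $n$-th possible tomorrow in the chain of
deferrals. Finite accessible resources and finite time before heat death
give some bound $V_{\max} < \infty$ with $V_n \le V_{\max}$ for all $n$.
A strictly increasing chain $V_1 < V_2 < \cdots$ is then bounded above,
so it converges:
\[
  \lim_{n \to \infty} V_n \;\le\; V_{\max},
  \qquad\text{hence}\qquad
  V_{n+1} - V_n \;\to\; 0 .
\]
% The marginal gain from one more deferral vanishes, so past some n
% the sacrifice it demands is no longer justified.
\end{document}
```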
Also, I think sacrifices of the kind described in the post come in discrete steps, with potentially large gaps in time between them, allowing you to realise the gains of a particular future before the next sacrifice arises, if that makes sense.
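As a toy way of putting that (again my own formalisation, assuming welfare accrues at some rate $v_k$ between the $k$-th and $(k+1)$-th sacrifices at times $t_k < t_{k+1}$):

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Sketch: with discrete sacrifices at times t_0 < t_1 < ..., the total
% realised welfare is the sum of what accrues in each interval.
\[
  W \;=\; \sum_{k \ge 0} \int_{t_k}^{t_{k+1}} v_k(t)\,\mathrm{d}t .
\]
% If the gaps t_{k+1} - t_k are large, each term is substantial: value
% is genuinely realised between sacrifices rather than perpetually
% deferred, which is what the paradox's endless postponement lacks.
\end{document}
```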