Hi, nim!
Thanks for commenting : )
Yes, exactly: I used speech-to-text, but specifically the ChatGPT speech-to-text feature in their app, because I like the UI better and I think it performs better too. Yeah, the heal/heel thing miffed me slightly, but I think it's a fun artifact since it doesn't actually change the meaning.
Well, for one, I didn't prompt for a whole essay. In one chat I lightly edited the snippets from my walk; then I took the final essay generated from another chat about the Black Chess Box to synthesise into the Sidebar, and similarly ...
Hmm, I hadn't thought of the implications of chaining the logic behind the superintelligences' policy - thanks for highlighting it!
I guess the main aim of the post was to highlight that there is an opportunity cost to prioritising contemporary beings, and that alignment doesn't solve that issue, but I guess there is also a normative claim that this policy could be justified.
Nevertheless, I'm not sure that the paradox necessarily applies to the policy in this scenario. Specifically, I think
> as long as we discover ever vaster possible tomorrows...
I only skimmed that category, but if I'm not mistaken, the kinds of systems I describe in the piece are special cases of situations where the boundary between one agent and another is unclear/pivotal/insightful, etc.