I am restating my comment from Substack here.
I think this is the story of an AI, not a human. This is a future I find horrifying, one where humanity dies out and never realizes it until the end. Many here seem to think it is enough as long as a superintelligence does not wipe out humanity and instead helps it. But for humanity, any being that makes humanity redundant is a death knell in the long run. This is a kind of Moloch situation.
To go into the specifics: when the author uses the 'Ship of Theseus' argument, he does not seem to realize that if the boat is dismantled piece by piece and a house is built from the pieces, it is definitely no longer a ship, let alone the same ship. In fact, the change starts with the internalization of AI, or more precisely, when the protagonist stops using his biological mind and lets the AI make the decisions.
I do not think a 'growth mindset' is necessary for growth if one understands what 'talent' really means. I define 'talent' at a task as competence at learning new things in that particular task. People generally see their current learning speed as limited by their 'talent', but it is actually limited by concentration, effort, and dedication.
After a certain stage, the latter matter a lot more than many think. We also see that people with a growth mindset do not improve their talent, but these other things. It would be good if people with a fixed mindset realized that talent is not everything. This is not a question of 'mindset', but of an unbiased assessment of one's competence function.
I disagree with your first point. You are saying that people who use a tool are already 'post-human' in some sense. But then, were people who could use an abacus in the 14th century post-human? Are African tribes that use their technical knowledge to hunt animals less human than a hypothetical tribe that never got to use anything like a spear and fights with its bare hands? By that logic, chimps are more 'human' than humans!
I think we can draw a line. Algorithms are more or less tools that give answers to what we want. It is a mistake to think they are above humans; computers just let us use them effectively. Is a person using LLMs in their work human? Not to me. But purely algorithmic tools get a pass. The point is that when AIs inform us, they take away a part of our 'agency'.
One might ask: how is this different from asking another person for an answer? My answer is that the one they asked is not a human. It is a 'demon of statistics', as I call it: something that knows the statistical associations between every word on the internet and can construct meaning from these alone. This is clearly beyond human capability. Note that my distinction rests on my belief that the knowledge of a 'demon of statistics' is fundamentally different from that of humans.
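To make concrete what I mean by 'statistical associations between words', here is a minimal sketch in Python, using a made-up toy corpus and simple bigram counts. An actual LLM is vastly more sophisticated, but the principle of generating from word associations alone is the same:

```python
import random
from collections import defaultdict

# Toy corpus standing in for "every word on the internet" (made up for illustration).
corpus = "the demon knows which word tends to follow which word".split()

# Count how often each word follows each other word (bigram associations).
associations = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    associations[prev][nxt] += 1

def next_word(word):
    """Pick a continuation weighted by the observed association counts."""
    candidates = associations[word]
    if not candidates:  # dead end: no word was ever seen following this one
        return random.choice(corpus)
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts)[0]

# "Constructing meaning" from associations alone: generate a continuation.
generated = ["the"]
for _ in range(5):
    generated.append(next_word(generated[-1]))
print(" ".join(generated))
```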
But take the part of the story where the protagonist stops thinking with his brain and hands decision-making over to an AI that is 'similar' to him. This is not human, and it is where I draw the line. Given his use of AIs from the start, though, we can also argue he was never fully 'human' to begin with. We who were born before the birth of AI can be considered the last of 'true' humanity.
By this definition, anyone who uses, say, ChatGPT to make any decision for them, even for a small part of their life, is already no longer human. But they can become human again if the decisions informed by AI no longer affect them, something that may be effectively impossible in an AI-dominated world.
I consulted ChatGPT about this very paragraph just now, and it asked whether someone who consults an AI but makes the final decision themselves is still human.
And my reply was this:
So, I am already not just a human, but a different being. That said, I am all in for the Butlerian Jihad... If humans can do the work, let no AI do it!