I'm interested in how easy it would be to simulate just one present-day person's life rather than an entire planet's worth of people. Current chatbots are bad enough that we could not populate the world with NPCs; the lone human would quickly figure out that everyone else was... different, duller, incomprehensibly stupid, etc.
But what if the chatbots were designed by a superintelligent AI?
If a superintelligent AI were simulating my entire life from birth, would it be able to do so at reasonably low computational cost (i.e. less than the cost of fully simulating another person) without simulating any other people in sufficient detail that they would be people?
I suspect that the answer is yes. If the answer is "maybe" or "no," I would very much like to hear tips on how to tell whether someone is an ideal chatbot.
Thoughts?
EDIT: In the comments most people are asking me to clarify what I mean by various things. By popular demand:
I interact with people in more ways than just textual communication. I also hear them, and see them move about. So when I speak of chatbots I don't mean bots that can do nothing but chat. I mean an algorithm governing the behavior of an entire simulated human body, yet nowhere near the complexity of a brain. (Modern chatbots are algorithms governing the behavior of simulated human hands typing on a keyboard, and they are nowhere near the complexity of a brain.)
When I spoke of "simulating any other people in sufficient detail that they would be people" I didn't mean to launch us into a philosophical discussion of consciousness or personhood. I take it to be common ground among all of us here that very simple algorithms, such as modern chatbots, are not people. By contrast, many of us think that a simulated human brain would be a person. Assuming a simulated human brain would be a person, but a simple chatbot-like algorithm would not, my question is: Would any algorithm complex enough to fool me into thinking it was a person over the course of repeated interactions actually be a person? Or could all the bodies around me be governed by algorithms which are too simple to be people?
I realize that we have no consensus on how complex an algorithm needs to be to be a person. That's OK. I'm hoping that this conversation can answer my questions anyhow; I'm expecting answers along the lines of
(A) "For a program only a few orders of magnitude more complicated than current chatbots, you could be reliably fooled your whole life" or
(B) "Any program capable of fooling you would either draw from massive databases of pre-planned responses, which would be impractical, or actually simulate human-like reasoning."
These answers wouldn't settle the question for good without a theory of personhood, but that's OK with me; they would be plenty good enough.
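To put a number on the "impractical" branch of (B), here is a minimal back-of-the-envelope sketch in Python. Every constant in it (utterances per turn, turns per conversation) is an assumption I've picked purely for illustration, not a figure anyone here has measured:

```python
# Back-of-the-envelope sketch: how big would a lookup table of
# pre-planned responses need to be? All constants are illustrative
# assumptions, not measured figures.

PLAUSIBLE_UTTERANCES_PER_TURN = 100  # distinct things a human might say at each turn (assumed)
TURNS_PER_CONVERSATION = 20          # exchanges in one conversation (assumed)

# A lookup-table chatbot needs a canned response for every possible
# conversation history, i.e. every path through a tree that branches
# ~100 ways at each of ~20 turns.
table_entries = PLAUSIBLE_UTTERANCES_PER_TURN ** TURNS_PER_CONVERSATION

print(f"Entries needed for one 20-turn conversation: {table_entries:.2e}")
# -> 1.00e+40 entries for a *single* conversation template. Even at one
# bit per entry this dwarfs any physically plausible storage.
```

The point is just that a table indexed by full conversation histories blows up exponentially, so any program that fools you over repeated interactions has to compress those histories somehow, which is the "actually simulate human-like reasoning" branch of (B).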
Consider the "Tatiana Maslany as Mara the deceiver" strategy. There is already one true person in the system other than you: the AI. That means everyone you interact with can be arbitrarily sophisticated without adding any more people to the system; all it takes is for the AI to puppet them directly, acting the parts itself instead of making bots.
This is, however, not a fruitful line of thinking. If the universe were really out to get you to this degree, you would be better off being oblivious.
Yes, but I'm not sure there is a difference between an AI directly puppeting them and an AI designing a chatbot to run as a subroutine that puppets them, at least if the AI is willing to monitor the chatbot and change it as necessary. Do you think there is?
Also, it totally is a fruitful line of thinking. It is better to believe the awful truth than a horrible lie, at least according to my values. Besides, we haven't yet established that the truth would be awful in this case.