I'm interested in how easy it would be to simulate just one present-day person's life rather than an entire planet's worth of people. Currently our chatbots are bad enough that we could not populate the world with NPCs; the lone human would quickly figure out that everyone else was... different, duller, incomprehensibly stupid, etc.
But what if the chatbots were designed by a superintelligent AI?
If a superintelligent AI were simulating my entire life from birth, would it be able to do it (for a reasonably low computational cost, i.e., less than the cost of simulating another person) without simulating any other people in sufficient detail that they would be people?
I suspect that the answer is yes. If the answer is "maybe" or "no," I would very much like to hear tips on how to tell whether someone is an ideal chatbot.
Thoughts?
EDIT: In the comments most people are asking me to clarify what I mean by various things. By popular demand:
I interact with people in more ways than just textual communication. I also hear them, and see them move about. So when I speak of chatbots I don't mean bots that can do nothing but chat. I mean an algorithm that governs the behavior of an entire simulated human body, yet is nowhere near the complexity of a brain. (Modern chatbots are algorithms governing the behavior of simulated human hands typing on a keyboard, and they are nowhere near the complexity of a brain.) The sketch below makes the distinction concrete.
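Here is a minimal sketch, in Python, of the difference in interface. All the names in it are invented for illustration; nothing here is a real system. The point is just that a chat-only bot maps text to text, whereas the algorithm I have in mind must map a full sensory scene to whole-body behavior.

```python
# A hedged sketch, not a real system: contrasting a chat-only bot's
# interface with the whole-body NPC algorithm I'm asking about.
from dataclasses import dataclass


def chatbot_step(message: str) -> str:
    """A modern chatbot: text in, text out. Its entire 'body' is a
    pair of simulated hands on a keyboard."""
    raise NotImplementedError  # some text-generation algorithm


@dataclass
class Observation:
    """Everything the simulated body perceives in one moment."""
    heard_speech: str
    visual_scene: str  # stand-in for whatever the simulation renders


@dataclass
class BodyAction:
    """Everything the simulated body does in response."""
    speech: str
    movement: str  # gesture, gait, facial expression, etc.


def npc_step(obs: Observation) -> BodyAction:
    """The kind of algorithm in question: it must drive an entire
    simulated human body convincingly. The open question is whether
    it can stay far simpler than a simulated brain."""
    raise NotImplementedError
```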
When I spoke of "simulating any other people in sufficient detail that they would be people" I didn't mean to launch us into a philosophical discussion of consciousness or personhood. I take it to be common ground among all of us here that very simple algorithms, such as modern chatbots, are not people. By contrast, many of us think that a simulated human brain would be a person. Assuming a simulated human brain would be a person, but a simple chatbot-like algorithm would not, my question is: Would any algorithm complex enough to fool me into thinking it was a person over the course of repeated interactions actually be a person? Or could all the bodies around me be governed by algorithms which are too simple to be people?
I realize that we have no consensus on how complex an algorithm needs to be to be a person. That's OK. I'm hoping that this conversation can answer my questions anyhow; I'm expecting answers along the lines of
(A) "For a program only a few orders of magnitude more complicated than current chatbots, you could be reliably fooled your whole life" or
(B) "Any program capable of fooling you would either draw from massive databases of pre-planned responses, which would be impractical, or actually simulate human-like reasoning."
These answers wouldn't settle the question for good without a theory of personhood, but that's OK with me; answers like these would be plenty good enough.
The amount of detail an AI would need to simulate realistic NPCs for you may depend substantially on factors like whether you are an introvert or an extrovert, what your job is, how many people you interact with over the course of a day and in what depth, and how many of those people you have very deep conversations with and how often. It may also depend on whether an acceptable AI response to you telling someone 'Everyone and everything seems so depressingly bland and repetitive' is a doctor saying 'Have you tried taking medication? I'll write you a prescription for an antidepressant.'
Yes, this is the sort of consideration I had in mind. I'm glad the discussion is heading in this direction. Do you think the answer to my question hinges on those details though? I doubt it.
Perhaps if I were extraordinarily unsuspicious, chatbots not much more sophisticated than modern-day ones could convince me. But I think it is pretty clear that it will take considerably more sophisticated chatbots to convince most people.
My question is: how much more sophisticated would they need to be? Specifically, would they need to be so much more sophisticated that they would themselves be people?