I'm interested in how easy it would be to simulate just one present-day person's life rather than an entire planet's worth of people. Currently our chatbots are bad enough that we could not populate the world with NPCs; the lone human would quickly figure out that everyone else was... different, duller, incomprehensibly stupid, etc.
But what if the chatbots were designed by a superintelligent AI?
If a superintelligent AI were simulating my entire life from birth, would it be able to do so (for a reasonably low computational cost, i.e. less than the cost of simulating another person) without simulating any other people in sufficient detail that they would be people?
I suspect that the answer is yes. If the answer is "maybe" or "no," I would very much like to hear tips on how to tell whether someone is an ideal chatbot.
Thoughts?
EDIT: In the comments most people are asking me to clarify what I mean by various things. By popular demand:
I interact with people in more ways than just textual communication. I also hear them, and see them move about. So when I speak of chatbots I don't mean bots that can do nothing but chat. I mean an algorithm governing the behavior of a simulated entire human body, one that is nowhere near the complexity of a brain. (Modern chatbots are algorithms governing the behavior of simulated human hands typing on a keyboard, and they are nowhere near the complexity of a brain.)
When I spoke of "simulating any other people in sufficient detail that they would be people" I didn't mean to launch us into a philosophical discussion of consciousness or personhood. I take it to be common ground among all of us here that very simple algorithms, such as modern chatbots, are not people. By contrast, many of us think that a simulated human brain would be a person. Assuming a simulated human brain would be a person, but a simple chatbot-like algorithm would not, my question is: Would any algorithm complex enough to fool me into thinking it was a person over the course of repeated interactions actually be a person? Or could all the bodies around me be governed by algorithms which are too simple to be people?
I realize that we have no consensus on how complex an algorithm needs to be to be a person. That's OK. I'm hoping that this conversation can answer my questions anyhow; I'm expecting answers along the lines of
(A) "For a program only a few orders of magnitude more complicated than current chatbots, you could be reliably fooled your whole life" or
(B) "Any program capable of fooling you would either draw from massive databases of pre-planned responses, which would be impractical, or actually simulate human-like reasoning."
These answers wouldn't settle the question for good without a theory of personhood, but that's OK with me; answers like these would be plenty good enough.
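To make the first horn of (B) concrete, here is a minimal Python sketch of a pure lookup-table chatbot, plus the back-of-the-envelope count of why its response database blows up. Everything here (the class name, the vocabulary size V, the conversation length n) is an illustrative assumption, not a description of any real system:

```python
from typing import Dict, List, Tuple

class LookupChatbot:
    """A purely table-driven responder: retrieval, no reasoning."""

    def __init__(self, table: Dict[Tuple[str, ...], str]) -> None:
        # `table` maps an entire conversation history to the next canned reply.
        self.table = table
        self.history: List[str] = []

    def respond(self, utterance: str) -> str:
        self.history.append(utterance)
        # Fall back to a vague evasion whenever the conversation goes off-script.
        reply = self.table.get(tuple(self.history), "Hm, tell me more.")
        self.history.append(reply)
        return reply

# Why the database is impractical: with V distinguishable utterances and
# conversations n turns long, covering every possible history takes on the
# order of V**n entries. Even toy numbers dwarf "less than the cost of
# simulating another person":
V, n = 10_000, 10
print(f"~{V**n:.0e} table entries")  # ~1e+40
```

The fallback line is doing all the work: the moment you wander off the pre-planned script, the table must either grow exponentially or the bot must start evading, which is exactly the kind of dullness the lone human would eventually notice.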
The thing is, if you get suspicious you don't immediately leap to the conclusion of chatbots. Nobody glances around, realizes everyone is bland and stupid, and thinks, "I've been fooled! An AI has taken over the world and is passing off chatbots as human beings!" unless they suffer from paranoia.
Your question, "How much more sophisticated would they need to be?" is answered by "it depends." If you live as a hermit in a cave up in the Himalayas, living off water from a mountain stream and eating nothing but what you hunt or gather with your bare hands, the AI will not need to use chatbots at all. If you're a social butterfly who regularly talks with some of the smartest people in the world, the AI will probably struggle (let's be frank; if we were introduced to a chatbot imitating Eliezer Yudkowsky, the difference would be fairly obvious).
If you've interacted a lot with your friends and family, and have never once been suspicious that they are chatbots, then with our current level of AI technology it is unlikely (but not impossible) that they actually are.
[Please note that if everyone says what you expect them to say, that would make it fairly obvious something is up, unless you happen to be a very, very good predictor of human behavior.]
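The bracketed test can be made slightly more quantitative. A hedged sketch, assuming purely for illustration that you'd correctly guess a real human's next reply about 20% of the time; the baseline and the alpha threshold are made-up parameters, not established figures:

```python
from math import comb

def binomial_tail(hits: int, trials: int, p: float) -> float:
    """P(X >= hits) for X ~ Binomial(trials, p)."""
    return sum(comb(trials, k) * p**k * (1 - p) ** (trials - k)
               for k in range(hits, trials + 1))

def suspiciously_predictable(hits: int, trials: int,
                             baseline: float = 0.2,
                             alpha: float = 0.01) -> bool:
    # If your hit rate against genuine humans is `baseline`, a tail
    # probability under `alpha` means people are matching your predictions
    # far too often for comfort.
    return binomial_tail(hits, trials, baseline) < alpha

# Correctly predicting 15 of 20 replies at a 20% baseline has
# p ~ 2e-7 -- strong evidence that something is up.
print(suspiciously_predictable(15, 20))  # True
```

Of course, the hard part in practice is the baseline itself: as noted above, most of us are not very good predictors of human behavior, so we don't know what hit rate to expect against real humans.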
Thanks for the response. Yes, it depends on how much interaction I have with human beings and on the kind of people I interact with. I'm mostly interested in my own case, of course, and I interact with a fair number of fairly diverse, fairly intelligent human beings on a regular basis.
Ah, but would it? I'm not so sure; that's why I made this post.
Yes, if everyone always said what I predicted, things would be obvious, but recall I...