I'm interested in how easy it would be to simulate just one present-day person's life rather than an entire planet's worth of people. Currently our chatbots are bad enough that we could not populate the world with NPCs; the lone human would quickly figure out that everyone else was... different, duller, incomprehensibly stupid, etc.

But what if the chatbots were designed by a superintelligent AI?

If a superintelligent AI were simulating my entire life from birth, could it do so at reasonably low computational cost (i.e., less than the cost of simulating another person) without simulating any other people in sufficient detail that they would be people?

I suspect that the answer is yes. If the answer is "maybe" or "no," I would very much like to hear tips on how to tell whether someone is an ideal chatbot.

Thoughts?

EDIT: In the comments most people are asking me to clarify what I mean by various things. By popular demand:

I interact with people in more ways than just textual communication. I also hear them, and see them move about. So when I speak of chatbots I don't mean bots that can do nothing but chat. I mean an algorithm governing the behavior of a simulated entire human body, one that is nowhere near the complexity of a brain. (Modern chatbots are algorithms governing the behavior of simulated human hands typing on a keyboard, and they are nowhere near the complexity of a brain.)

When I spoke of "simulating any other people in sufficient detail that they would be people" I didn't mean to launch us into a philosophical discussion of consciousness or personhood. I take it to be common ground among all of us here that very simple algorithms, such as modern chatbots, are not people. By contrast, many of us think that a simulated human brain would be a person. Assuming a simulated human brain would be a person, but a simple chatbot-like algorithm would not, my question is: Would any algorithm complex enough to fool me into thinking it was a person over the course of repeated interactions actually be a person? Or could all the bodies around me be governed by algorithms which are too simple to be people?

I realize that we have no consensus on how complex an algorithm needs to be to be a person. That's OK. I'm hoping that this conversation can answer my questions anyhow; I'm expecting answers along the lines of

(A) "For a program only a few orders of magnitude more complicated than current chatbots, you could be reliably fooled your whole life" or

(B) "Any program capable of fooling you would either draw from massive databases of pre-planned responses, which would be impractical, or actually simulate human-like reasoning."

These answers wouldn't settle the question for good without a theory of personhood, but that's OK with me; they would be plenty good enough.


How long is a piece of string?

If you did live in such a world where everything was based around you, then the controller could allocate resources even more efficiently by monitoring your mind and devoting more computational power to making the people around you believable whenever you get suspicious.
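Here's a minimal sketch of that policy in Python. Everything in it is an illustrative assumption, not a claim about how a simulator would work: the numeric `suspicion` score stands in for whatever mind-monitoring the controller has, and the tiers and thresholds are invented.

```python
# Hypothetical sketch: suspicion-driven level-of-detail for NPCs.
# "suspicion" is assumed to be a score in [0, 1] read off the subject's
# monitored mind; the tiers and thresholds are made up for illustration.

FIDELITY_TIERS = [
    (0.0, "scripted_extra"),         # nearly free: canned behavior
    (0.3, "rich_chatbot"),           # moderate cost: generative dialogue
    (0.7, "full_behavioral_model"),  # expensive: only while under scrutiny
]

def fidelity_for(suspicion: float) -> str:
    """Pick the cheapest tier expected to stay believable at this suspicion."""
    chosen = FIDELITY_TIERS[0][1]
    for threshold, tier in FIDELITY_TIERS:
        if suspicion >= threshold:
            chosen = tier
    return chosen

for suspicion in (0.1, 0.5, 0.9):
    print(f"suspicion {suspicion:.1f} -> {fidelity_for(suspicion)}")
```

The point of the sketch is just that the expensive tier only runs while you're actually scrutinizing someone, which is a tiny fraction of the time.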

This is basically the model I use to estimate the computational power needed in case I am simulated (I don't care what substrate my consciousness runs on, as long as I have no access to it). Given that my mind apparently retains only a limited number of bits, I'd guess that an efficient simulation could get away with astronomically fewer resources than would be needed to simulate all the atoms in the universe. Modern physics experiments would place considerable demands on the algorithm's propagation of causes, since quantum effects are apparently measured even in light from distant stars; but as long as I don't do the experiment, the atoms don't actually need to be simulated.

http://lesswrong.com/lw/ii5/baseline_of_my_opinion_on_lw_topics/

As long as the experiment conforms to your expectations, they don't even need to simulate it. The only way they could get into trouble is if you expect a logical contradiction, they didn't spot it in advance, and you might eventually work that out.
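To make the laziness concrete, here's a toy sketch; the function names, the caching, and the expectation fallback are all illustrative assumptions:

```python
# Hypothetical sketch: observer-driven lazy evaluation of physics.
# Outcomes default to whatever the subject expects; the costly computation
# runs only when the subject performs a discriminating measurement.

import functools

@functools.lru_cache(maxsize=None)
def expensive_simulation(experiment):
    # Stand-in for the full quantum-level computation.
    return f"detailed outcome of {experiment}"

def observed_result(experiment, expectation, is_measuring):
    if not is_measuring:
        # Never simulated at all; the expectation is simply echoed back.
        return expectation if expectation is not None else "nothing notable"
    return expensive_simulation(experiment)  # forced by the observation

print(observed_result("stellar interferometry", "fringes as predicted", False))
print(observed_result("stellar interferometry", None, True))
```

The trouble case is exactly this cache going stale: a faked, expectation-based answer that a later forced computation contradicts.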

"Oh, it turns out that was experimental error."

at least now that we have better equipment the earlier result seems not to be repeatable.

... which is a fairly common scenario.

That's exactly what I had in mind, although I did specify that the controller would never simulate anybody besides me to the level required to make them people.

Define "make them people". Because it sounds like what you're talking about is just the problem of other minds with "chatbot" substituted for "zombie"

I'm surprised that it sounded that way to you. I've amended my original post to clarify.

I don't understand the question. You interact with people in ways other than just chatting with them.

If you are asking whether it's possible to tell whether someone on the other side of the screen is a person or a really good chatbot, then the answer is that it depends on how good the chatbot (or the AI) is.


The amount of detail an AI would need to simulate realistic NPCs for you may be influenced substantially by things like: whether you are an introvert or an extrovert; what your job is; how many people you interact with over the course of a day, and to what level of detail; how many of those people you have very deep conversations with, and how frequently; and whether an acceptable AI answer to your remarking "Everyone and everything seems so depressingly bland and repetitive" is a doctor saying "Have you tried taking medication? I'll write you a prescription for an antidepressant."

Yes, this is the sort of consideration I had in mind. I'm glad the discussion is heading in this direction. Do you think the answer to my question hinges on those details though? I doubt it.

Perhaps if I were extraordinarily unsuspicious, chatbots not much more sophisticated than modern-day ones could convince me. But I think it is pretty clear that we would need more sophisticated chatbots to convince most people.

My question is: how much more sophisticated would they need to be? Specifically, would they need to be so much more sophisticated that they would be conscious on a level comparable to mine, and/or would they require processing power comparable to just simulating another person? For example, I've interacted a ton with my friends and family and built up detailed mental models of their minds. Could they be chatbots/NPCs, with minds that are nothing like the models I've made?

(Another idea: What if they are exactly like the models I've made? What if the chatbot works by detecting what I expect someone to say, and then having them say that, with a bit of random variation thrown in?)
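That mechanism is simple enough to sketch. Assuming a mind-reading step that yields my own ranked predictions for what this character would say (which is the whole magic, and is not implemented here), the NPC mostly echoes the top prediction and occasionally deviates:

```python
# Hypothetical sketch: an NPC that says what you expect, plus noise.
import random

def npc_reply(predicted_replies, surprise_rate=0.1):
    """predicted_replies: the subject's own ranked guesses for this character,
    assumed to be read off the subject's mental model (not implemented)."""
    if len(predicted_replies) > 1 and random.random() < surprise_rate:
        return random.choice(predicted_replies[1:])  # mild, cheap "variation"
    return predicted_replies[0]                      # the expected line

print(npc_reply(["Sounds good.", "Hmm, are you sure?", "Why do you ask?"]))
```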

The thing is, if you get suspicious you don't immediately leap to the conclusion of chatbots. Nobody glances around, realizes everyone is bland and stupid, and thinks, "I've been fooled! An AI has taken over the world and is passing off chatbots as human beings!" unless they suffer from paranoia.

Your question, "How much more sophisticated would they need to be" is answered by the question "depends". If you live as a hermit in a cave up in the Himalayas, living off water from a mountain stream and eating nothing but what you hunt or gather with your bare hands, the AI will not need to use chatbots at all. If you're a social butterfly who regularly talks with some of the smartest people in the world, the AI will probably struggle(let's be frank; if we were introduced to a chatbot imitating Eliezer Yudkosky the difference would be fairly obvious).

If you've interacted a lot with your friends and family, and have never once been suspicious that they are chatbots, then with our current level of AI technology it is unlikely (but not impossible) that they are actually chatbots.

[Please note that if everyone said what you expected them to say, that would make it fairly obvious something is up, unless you happen to be a very, very good predictor of human behavior.]

Thanks for the response. Yes, it depends on how much interaction I have with human beings and on the kind of people I interact with. I'm mostly interested in my own case, of course, and I interact with a fair number of fairly diverse, fairly intelligent human beings on a regular basis.

If you're a social butterfly who regularly talks with some of the smartest people in the world, the AI will probably struggle

Ah, but would it? I'm not so sure, that's why I made this post.

Yes, if everyone always said what I predicted, things would be obvious, but recall I specified that random variation would be added. This appears to be how dream characters work: You can carry on sophisticated conversations with them, but (probably) they are governed by algorithms that feed off your own expectations. That being said, I now realize that the variation would have to be better than random in order to account for how e.g. EY consistently says things that are on-point and insightful despite being surprising to me.

Another trick it could use is using chatbots most of the time, but swapping them out for real people only for the moments when you are actually talking about deep stuff. Maybe you have deep emotional conversations with your family a few hours a week. Maybe once per year you have a 10-hour intense discussion with Eliezer. That's not a lot out of 24 hours per day; the vast majority of the computing power is still going into simulating your brain.
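A back-of-envelope version of that accounting, with invented cost ratios (say a full person-simulation costs about as much per hour as simulating you, and a chatbot costs a thousandth of that):

```python
HOURS_PER_YEAR = 24 * 365      # 8760
deep_hours = 3 * 52 + 10       # weekly deep family talks + one 10h discussion
chatbot_cost = 0.001           # per hour, relative to one person-simulation hour

overhead = (deep_hours * 1.0
            + (HOURS_PER_YEAR - deep_hours) * chatbot_cost) / HOURS_PER_YEAR
print(f"extra cost on top of simulating you: {overhead:.1%}")  # ~2.0%
```

Even with generous deep-conversation time, the extra cost is a couple percent of the budget for simulating you.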

Edit: another idea: the chatbots might have some glaring failure modes if you say the wrong thing, being unable to handle edge cases; but whenever you encounter one, the sim is restored from a backup 10 minutes earlier and the specific bug is manually patched. If this went on long enough the chatbots would become real people (and also bloated and slow), but it hasn't happened yet. Or maybe the patches that don't come up for long enough get commented out.
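A sketch of that patch-on-failure loop; everything here (the exception, the stub chatbot, the patch table) is hypothetical scaffolding for the idea rather than a real mechanism:

```python
# Hypothetical sketch: roll back and hand-patch each chatbot edge case.

class EdgeCaseError(Exception):
    pass

patches = {}  # utterance -> manually authored response; grows forever ("bloat")

def chatbot_respond(utterance):
    if "deep stuff" in utterance:          # a glaring failure mode
        raise EdgeCaseError(utterance)
    return "Oh, interesting."

def respond_with_rollback(utterance):
    if utterance in patches:
        return patches[utterance]
    try:
        return chatbot_respond(utterance)
    except EdgeCaseError:
        # In the full story, the sim would be restored from a 10-minute-old
        # backup here, so the subject never remembers the failed run.
        patches[utterance] = f"hand-authored reply to {utterance!r}"
        return patches[utterance]

print(respond_with_rollback("let's talk about deep stuff"))
```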


You should join my discussion club! Sadly, member count is restricted to one.

[This comment is no longer endorsed by its author]

How would you feel about talking to a realistic AI-engineered chatbot?

Do you often feel you are talking to a realistic AI-engineered chatbot?

Consider the Tatiana Maslany as Mara the deceiver strategy. There is already one true person in the system other than you: the AI. That means everyone you interact with can be arbitrarily sophisticated without adding any more people to the system. All it takes is for the AI to puppet them directly: acting, instead of building bots.

This is, however, not a fruitful line of thinking. If the universe were really out to get you to this degree, you would be better off being oblivious.

Yes, but I'm not sure there is a difference between an AI directly puppeting them, and an AI designing a chatbot to run as a subroutine to puppet them, at least if the AI is willing to monitor the chatbot and change it as necessary. Do you think there is?

Also, it totally is a fruitful line of thinking. It is better to believe the awful truth than a horrible lie. At least according to my values. Besides, we haven't yet established that the truth would be awful in this case.