Yes, this is the sort of consideration I had in mind. I'm glad the discussion is heading in this direction. Do you think the answer to my question hinges on those details though? I doubt it.
Perhaps if I were extraordinarily unsuspicious, chatbots not much more sophisticated than modern-day ones could convince me. But it seems pretty clear that convincing most people will require more sophisticated chatbots.
My question is: how much more sophisticated would they need to be? Specifically, would they need to be so much more sophisticated that they would be conscious on a level comparable to mine, and/or require processing power comparable to simply simulating another person? For example, I've interacted a ton with my friends and family and built up detailed mental models of their minds. Could they be chatbots/NPCs, with minds that are nothing like the models I've made?
(Another idea: What if they are exactly like the models I've made? What if the chatbot works by detecting what I expect someone to say, and then having them say that, with a bit of random variation thrown in?)
What about talking to your rational self? It seems like this captures the benefits of talking to yourself and improves on some of them.