Pushing the button can't make you a psychopath. You're either already a psychopath or you're not. If you're not, you will not push the button, although you might consider pushing it.
Maybe I was unclear.
I'm arguing that the button will never, ever be pushed. If you are NOT a psychopath, you won't push, end of story.
If you ARE a psychopath, you can choose to push or not push.
If you push, that's evidence you are a psychopath. If you are a psychopath, you should not push. Therefore, you will always end up regretting the decision to push.
If you don't push, you don't push and nothing happens.
In all three cases the correct decision is not to push; therefore, you should not push.
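A minimal sketch of that case analysis, combining the claims in this comment with the restatement further down the thread (pushing as a psychopath is fatal; a non-psychopath never pushes). The outcome strings just restate the thread's claims, nothing more:

```python
# Hedged sketch of the three cases argued above; the outcome strings
# just restate this thread's claims, nothing more.

def outcome(is_psychopath: bool, pushes: bool) -> str:
    if not is_psychopath:
        # Premise of the argument: a non-psychopath never pushes.
        return "impossible by premise" if pushes else "nothing happens"
    # A psychopath can choose either way.
    return "you die (and regret pushing)" if pushes else "nothing happens"

for is_psychopath in (False, True):
    for pushes in (False, True):
        print(is_psychopath, pushes, "->", outcome(is_psychopath, pushes))
# Every reachable case that involves pushing ends badly, so: don't push.
```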
Most people would die before they think. Most do.
- A.C. Grayling
What about talking to your rational self? It seems like this captures the benefits of talking to yourself and improves on some of them.
Yes, this is the sort of consideration I had in mind. I'm glad the discussion is heading in this direction. Do you think the answer to my question hinges on those details though? I doubt it.
Perhaps if I were extraordinarily unsuspicious, chatbots not much more sophisticated than modern-day ones could convince me. But I think it is pretty clear that we will need more sophisticated chatbots to convince most people.
My question is, how much more sophisticated would they need to be? Specifically, would they need to be so much more sophisticated that they would be conscious on a level comparable to me, and/or would require processing power comparable to just simulating another person? For example, I've interacted a ton with my friends and family, and built up detailed mental models of their minds. Could they be chatbots/NPCs, with minds that are nothing like the models I've made?
(Another idea: What if they are exactly like the models I've made? What if the chatbot works by detecting what I expect someone to say, and then having them say that, with a bit of random variation thrown in?)
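A hedged sketch of that "mirror my expectations" idea. `predict_my_expectation` is a hypothetical stand-in for whatever access the AI would have to my mental model of the person; nothing here is from the original discussion:

```python
import random

# Hedged sketch of the "mirror my expectations" chatbot floated above.
# predict_my_expectation() is a hypothetical stand-in for whatever
# access the AI has to my mental model of the person.

def predict_my_expectation(person: str, topic: str) -> str:
    # Placeholder: in the thought experiment this reads the answer
    # straight off my own model of the person's mind.
    return f"what I expect {person} to say about {topic}"

def chatbot_reply(person: str, topic: str, variation: float = 0.1) -> str:
    expected = predict_my_expectation(person, topic)
    # "A bit of random variation thrown in":
    if random.random() < variation:
        return expected + ", plus a small surprising twist"
    return expected

print(chatbot_reply("my sister", "dinner plans"))
```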
The thing is, if you get suspicious, you don't immediately leap to the conclusion of chatbots. Nobody glances around, realizes everyone is bland and stupid, and thinks, "I've been fooled! An AI has taken over the world and replaced everyone with chatbots!" unless they suffer from paranoia.
Your question, "how much more sophisticated would they need to be?", is answered with "it depends". If you live as a hermit in a cave up in the Himalayas, living off water from a mountain stream and eating nothing but what you hunt or gather with your bare hands, the AI will not need to use chatbots at all. If you're a social butterfly who regularly talks with some of the smartest people in the world, the AI will probably struggle (let's be frank; if we were introduced to a chatbot imitating Eliezer Yudkowsky, the difference would be fairly obvious).
If you've interacted a lot with your friends and family, and have never once been suspicious that they are chatbots, then with our current level of AI technology it is unlikely (but not impossible) that they actually are.
[Please note that if everyone says what you expect them to say, that would make it fairly obvious something is up, unless you happen to be a very, very good predictor of human behavior.]
If only a psychopath would push the button, then your possible non-psychopathic nature limits what decision algorithms you are capable of following.
Wouldn't the fact that you're even considering pushing the button (because if only a psychopath would push the button, then it follows that a non-psychopath would never push the button) indicate that you are a psychopath, and therefore you should not push the button?
Another way to put it is:
If you are a psychopath and you push the button, you die. If you are not a psychopath and you push the button, pushing the button would make you a psychopath (since only a psychopath would push), and therefore you die.
What I'm interested in is whether this method is applicable to social situations as well. I am not a naturally social person, but I have studied how people interact, and general social behavior, well enough that I can create a simulation of a "socially acceptable helltank".
I already have mental triggers (what I like to call "scripts") in place for a simulation of my rational mind - or rather, a portion of my rational mind kept in isolation from bias and metaphorically disconnected from the other parts of my mind. Its job is to override the "main" portion of my mind in case the main portion becomes irrational at some point, similar to a backup system overriding a corrupted main system.
Until today, however, I had not thought of using them to simulate social skills. I suppose I might eventually spread out a bunch of simulations (what eli_sennesh called a Parliament of different aspects of your personality in his cancelled post) to guide my decision-making in certain situations, with a "master aspect" (the aforementioned rational simulation) controlling when to give an aspect override privileges.
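If it helps to see the shape of that control loop, here is a hedged sketch. Every name is invented for illustration; the comment describes a mental habit, not software:

```python
# Hedged sketch of the "parliament of aspects" as a control loop.
# Every name here is invented for illustration; the comment describes
# a mental habit, not software.

from typing import Callable, Dict

Aspect = Callable[[str], str]  # maps a situation to a proposed response

def social_aspect(situation: str) -> str:
    return f"polite, socially scripted response to: {situation}"

def blunt_rational_aspect(situation: str) -> str:
    return f"direct, analysis-first response to: {situation}"

def master_aspect(situation: str, aspects: Dict[str, Aspect]) -> str:
    # The "master aspect" decides which aspect gets override privileges.
    chosen = "social" if "party" in situation else "rational"
    return aspects[chosen](situation)

aspects: Dict[str, Aspect] = {
    "social": social_aspect,
    "rational": blunt_rational_aspect,
}
print(master_aspect("small talk at an office party", aspects))
```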
Still a very good post. Thank you for it.
I'm really having a lot of trouble understanding why the answer isn't just:
A 1000/1001 chance I'm about to be transported to a tropical island, and a 0 chance given I didn't make the oath.
Assuming that uploaded-you memory-blocks his own uploading when running simulations.
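A minimal sketch of that arithmetic, under my reading (not stated in the comment) that making the oath leads uploaded-you to run 1000 subjectively identical simulations of this moment alongside the one original:

```python
# Hedged sketch of the arithmetic above, assuming (my reading, not
# stated in the comment) that making the oath leads uploaded-you to
# run N = 1000 subjectively identical simulations of this moment,
# alongside the 1 original.

N = 1000

def p_about_to_wake_on_island(made_oath: bool, n_sims: int = N) -> float:
    if not made_oath:
        return 0.0  # no oath, no simulations ever get run
    # n_sims simulated copies + 1 original, all indistinguishable from
    # the inside, so split credence evenly among them.
    return n_sims / (n_sims + 1)

print(p_about_to_wake_on_island(True))   # 0.999000999...
print(p_about_to_wake_on_island(False))  # 0.0
```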