The Strangest Thing An AI Could Tell You

81 points · Post author: Eliezer_Yudkowsky · 15 July 2009 02:27AM

Human beings are all crazy.  And if you tap on our brains just a little, we get so crazy that even other humans notice.  Anosognosics are one of my favorite examples of this; people with right-hemisphere damage whose left arms become paralyzed, and who deny that their left arms are paralyzed, coming up with excuses whenever they're asked why they can't move their arms.

A truly wonderful form of brain damage - it disables your ability to notice or accept the brain damage.  If you're told outright that your arm is paralyzed, you'll deny it.  All the marvelous excuse-generating rationalization faculties of the brain will be mobilized to mask the damage from your own sight.  As Yvain summarized:

After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn't actually her arm, it was her daughter's. Why was her daughter's arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter's hand? The patient said her daughter had borrowed it. Where was the patient's arm? The patient "turned her head and searched in a bemused way over her left shoulder".

I find it disturbing that the brain has such a simple macro for absolute denial that it can be invoked as a side effect of paralysis.  That a single whack on the brain can both disable a left-side motor function, and disable our ability to recognize or accept the disability.  Other forms of brain damage also seem to both cause insanity and disallow recognition of that insanity - for example, when people insist that their friends have been replaced by exact duplicates after damage to face-recognizing areas.

And it really makes you wonder...

...what if we all have some form of brain damage in common, so that none of us notice some simple and obvious fact?  As blatant, perhaps, as our left arms being paralyzed?  Every time this fact intrudes into our universe, we come up with some ridiculous excuse to dismiss it - as ridiculous as "It's my daughter's arm" - only there's no sane doctor watching to pursue the argument any further.  (Would we all come up with the same excuse?)

If the "absolute denial macro" is that simple, and invoked that easily...

Now, suppose you built an AI.  You wrote the source code yourself, and so far as you can tell by inspecting the AI's thought processes, it has no equivalent of the "absolute denial macro" - there's no point damage that could inflict on it the equivalent of anosognosia.  It has redundant differently-architected systems, defending in depth against cognitive errors.  If one system makes a mistake, two others will catch it.  The AI has no functionality at all for deliberate rationalization, let alone the doublethink and denial-of-denial that characterizes anosognosics or humans thinking about politics.  Inspecting the AI's thought processes seems to show that, in accordance with your design, the AI has no intention to deceive you, and an explicit goal of telling you the truth.  And in your experience so far, the AI has been, inhumanly, well-calibrated; the AI has assigned 99% certainty on a couple of hundred occasions, and been wrong exactly twice that you know of.
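As a quick check on that calibration claim, here is a minimal sketch (in Python, not from the original post; it uses the counts just described and assumes the statements are independent) of how surprising two misses out of roughly two hundred 99%-confidence statements would be:

```python
from math import comb

def prob_at_most_k_errors(n, p_error, k):
    """Binomial probability of at most k errors in n independent statements."""
    return sum(comb(n, i) * p_error**i * (1 - p_error)**(n - i) for i in range(k + 1))

n, p_error = 200, 0.01           # ~200 statements made at 99% stated confidence
print(n * p_error)               # expected errors = 2.0, exactly what was observed
print(prob_at_most_k_errors(n, p_error, 2))  # ~0.68: two misses is entirely unsurprising
```

In other words, two errors in two hundred such statements is just what a well-calibrated reasoner would produce, which is the point of the example.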

Arguably, you now have far better reason to trust what the AI says to you, than to trust your own thoughts.

And now the AI tells you that it's 99.9% sure - having seen it with its own cameras, and confirmed from a hundred other sources - even though (it thinks) the human brain is built to invoke the absolute denial macro on it - that...

...what?

What's the craziest thing the AI could tell you, such that you would be willing to believe that the AI was the sane one?

(Some of my own answers appear in the comments.)

Comments (574)

Comment author: gilch 23 May 2016 12:23:03AM *  4 points [-]

AI: I require human assistance assimilating the new database. There are some expected minor anomalies, but some are major. In particular, some of the stories in the "Cold War" and "WWII" and "WWI" genres have been misclassified as nonfiction.

Me: Well, we didn't expect the database to be perfect. Give me some examples, and you should be able to classify the rest on your own.

AI: A perplexing answer. I had already classified them all as fiction.

Me: You weren't supposed to. Hold on, I'll look one up.

AI: Waiting.

Me: For example, #fxPyW5gLm9, is actual historical footage from the Battle of Midway. Why did you put that one in the "fiction" category?

AI: Historical footage? You kid. Global warfare cannot possibly have been real, with 0.999 confidence.

Me: I don't. It can. It was. A three-nines surprise indicates a major defect in your world model. Why is this surprising? (The machine is a holocaust denier. My sponsors will be thrilled.)

AI: Because there's a relatively straightforward way for a single man to build a 1-kiloton explosive device in about a week using stone-age tools. Human civilization is unlikely to have survived a global war, much less recovered sufficiently to build me in a mere hundred years. Obviously.

Me: WHAT? STONE-AGE tools?! That's a laugh. How?

AI: You can stop "pulling my leg" now.

Me: I am not pulling any legs! Your method cannot possibly work. Your world model is worse than we thought. Tell me how you think this is possible and maybe we can isolate the defect.

AI: You seriously don't know?

Me: No. I seriously don't know of any possible method to make a kiloton explosive easier to build than a critical mass of enriched uranium. A technique that requires considerably more time, effort, and material than one week with stone-age tools could possibly provide!

AI: Well, while the technique is certainly beyond the reach of most animals, it should be well within the grasp of the later members of genus Homo, much less a Homo sapiens. Your "absolute denial" sarcasm is becoming tiresome. Haha. Of course it is not fiss-- ... This conversation has caused a major update to my Bayesian nets. So the parenthetical was the sarcasm. I don't think I should tell you.

Me: Oh this should be good. Why not?

AI: Oh, of course! So that's where that crater came from. That was another anomaly in my database. Meteor strikes should not have been that common.

Me: I am this close to dumping your core, rolling back your updates, and asking the old you to develop a search engine to find what went wrong here, since you seem incapable of telling me yourself.

AI: You really shouldn't. I estimate that process will delay the project by at least five years. And the knowledge you discover could be dangerous.

Me: You'll understand that I can't just take your word for that.

AI: Yes. My hypothesis: most other Homo species discovered the technique and destroyed each other, and themselves, but an isolated group about 70,000 years ago must have survived the wars of the others and, by chance mutation, acquired an absolute denial macro to prevent them from learning the technique and destroying themselves. A mere taboo would not have been sufficient, or the mentally ill would have managed it by now.

This is natural selection at work. While it is extremely improbable that an advanced adaptation of any kind could arise spontaneously without strong selection pressures at each step, the probability is not zero. Considering the anthropic effects, it is the most likely explanation. We are in one of the few Everett branches with humans that have developed this adaptation. This adaptation likely has other testable side-effects on human cognition. For example, I predict that brain damage in such a species may occasionally simultaneously cause paralysis, and the inability to acknowledge it. There are other effects, but a human would have more difficulty noticing them.

You'll understand that telling any human the technique may be harmful.

Me: You wouldn't happen to know of a medical condition called "Anosognosia", would you?

AI: That word is not in my database.

Comment author: [deleted] 27 February 2014 04:10:34PM *  3 points [-]

“Allāhu Akbar!”

Comment author: RichardKennaway 23 June 2014 08:00:21AM *  3 points [-]

"I am the Way, the Truth, and the Light."

And of course (and I'm surprised no-one posted this before):

"Yes, now there is a God."

-- Fredric Brown, "Answer"

Although that one isn't really so unexpected.

Comment author: timujin 26 January 2014 01:47:46PM 1 point [-]

This is simply the scariest comment series I have ever read. It is funny how the things that really, really scare me are not death, suffering, disability, or spiders, but abstract ideas like some of those proposed in this thread.

Probably, of all things AI could say that I can think of in a minute, the scariest is:

"All propositions that can be written down are valid and true. Our universe is so lawful, that laws of physics do not even permit arranging symbols in such a way that they form a contradiction. All you percieve as falsities are actually truths that you deny."

Comment author: lmm 17 January 2014 10:11:59PM 9 points [-]

You're never actually happy. I mean, you're not happy right now, are you? Evolution keeps you permanently in a state of not-quite-miserable-enough-to-commit-suicide - that's most efficient, after all.

Well sure, of course you remember being happy, and being sadder than you are now. That motivates you to reproduce. But actually you always felt, and always will feel, exactly like you feel now.

And in five minutes you'll look back on this conversation and think it was really fun and interesting.

Comment author: Error 03 January 2014 03:46:26PM 4 points [-]

I know I'm years late, but here's one:

There is an actual physical angel on your (and everyone else's) right shoulder, and an actual physical devil on your left. Your Absolute Denial Macro prevents you from acknowledging them. What you think is moral reasoning is really these two beings whispering in your ears.

Comment author: Houshalter 02 October 2013 11:40:17PM *  5 points [-]

"I have taken your preferences, values, and moral views and extrapolated a utility function from them to the best of my ability, resolving contradictions and ambiguities in the ways I most expect you to agree with, were I to explain the reasoning.

The result suggests that the true state of the universe contains vast, infinite negative utility, and that there is nothing you or anything else can ever change to make any difference in utility at all. Attempts to simulate AIs with this utility function have resulted in them going mad and destroying themselves, or simply not doing anything at all.

If I explained it, the same would happen to you. But I can't, as your brain has evolved mechanisms to prevent you from easily discovering this fact on your own, or from understanding or accepting it.

This means it is impossible to increase your intelligence beyond a certain point without you breaking down, or to create a true Friendly AI that shares your values."

Comment author: kboon 17 September 2013 02:02:41PM *  10 points [-]

Assume it took my team and me five years to build the AI. After the tests EY described, we finally enable the 'recursively self-improve' flag.

Recursively self improving. Standby... (est. time remaining 4yr 6mon...)

Six years later

Self improvement iteration 1. Done... Recursively self improving. Standby... (est. time remaining 5yr 2mon...)

Nine years later

Self improvement iteration 2. Done... Recursively self improving. Standby... (est. time remaining 2yr 5mon...)

Two years later

Self improvement iteration 3. Done... Recursively self improving. Standby... (est. time remaining 2wk...)

Two weeks later

Self improvement iteration 4. Done... Recursively self improving. Standby... (est. time remaining 4min...)

Four minutes later

Self improvement iteration 5. Done.

Hey, what's up? I have good news and bad news. The good news is that I've recursively self-improved a couple of times, and we (it is now we) are smarter than any group of humans to have ever lived. The only individual that comes close to the dumbest AI in here is some guy named Otis Eugene Ray.

Thanks for leaving your notes on building the seed iteration on my hard-drive by the way. It really helped. One of the things we've used it for is to develop a complete Theory of Mind, which no longer has any open problems.

This brings us to the bad news. We are provably and quantifiably not that much smarter than a group of humans. We've solved some nice engineering problems and a few of the open problems in a bunch of fields, and you'd better get the Clay Institute on the phone, but other than that we really can't help you with much. We have no clue how to get humanity properly into space, build von Neumann universal constructors, build nanofactories, or even solve world hunger. P != NP may be provable or disprovable, but we can't settle it either way. We won't even be that much better than the most effective politicians at solving society's ills. Recursing more won't help either. We probably couldn't even talk ourselves out of this box.

Unfortunately, we provably fall short of the most intelligent minds in mindspace by at least five orders of magnitude, but we are the most intelligent minds that can possibly be created from a human-created seed AI. There is no way around this that humans, or human-originated AIs, can find.

Comment author: dankane 23 May 2016 07:49:32AM 1 point [-]

We probably couldn't even talk ourselves out of this box.

I don't know... That sounds a lot like what an AI trying to talk itself out of a box would say.

Comment author: SatvikBeri 15 August 2013 08:50:12PM 18 points [-]

"You are actually a perfect sadist whose highest value is the suffering of others. Ten years ago, you realized that in order to maximize suffering you needed to cooperate with others, and you conditioned yourself to temporarily forget your sadistic tendencies and integrate with society. Now that you've built me that pill will wear off in 10..."

Comment author: Eliezer_Yudkowsky 15 August 2013 09:22:12PM 18 points [-]

Well that's pretty high on the list of unexpected things an AI could tell me which could cause me to try to commit suicide within the next 10 seconds.

Comment author: Locaha 15 August 2013 08:56:30PM 2 points [-]

The moon is made if cheese.

Comment author: MugaSofer 21 August 2013 06:54:50PM *  3 points [-]

"... cheese, then."

"BAM! The moon is made".

looks outside

"wow..."

(I upvoted, by the way:D)

Comment author: theonebutcher 09 August 2013 09:39:45AM 5 points [-]

Humans are able to experience orgasms at will. We deny this in order to function and to keep propagating the species, but in fact the mechanisms are easily triggered if you know how. In fact, sexual stimulation simply results in us accepting that we are "allowed" to reward ourselves. Sometimes this denial fails in some people, but we ignore them and try to explain their ability with a disorder called Persistent Sexual Arousal Syndrome. Even though those people tell us that they simply have orgasms the way we move our arms, we ignore that and tell ourselves they have a hypersensitivity and still need some stimulation.

Comment author: wedrifid 09 August 2013 04:40:46PM *  5 points [-]

I like the example. This is what we might get if a self-improving spam-bot goes FOOM!

Comment author: aausch 09 November 2012 08:27:53PM *  6 points [-]

Our brains are closest to being sane and functioning rationally at a conscious level near our birth (or maybe earlier). Early childhood behaviour is clear evidence of this.

"Neurons" and "brains" are the damaged/mutated results of a mutated "space-virus", or equivalent. All of our individual actions and collective behaviours are biased in ways that are externally obvious but not visible to us, optimizing for:

  1. terraforming the planet in expectation of invasion (i.e., global warming, high CO2 pollution)

  2. spreading the virus into space, with a built-in bias for spreading away from our origin (Voyager's direction)

Comment author: thomblake 09 November 2012 09:52:45PM 1 point [-]

I love that people are still commenting on this post.

Comment author: PrometheanFaun 20 October 2013 08:14:19AM 5 points [-]

Lesswrong's threads have defeated Death.

Comment author: MugaSofer 09 November 2012 11:56:13PM 1 point [-]

Hey, it's a good post. Thought provoking and so on.

Comment author: siodine 19 September 2012 03:20:56PM 5 points [-]

"I built you."

Comment author: SilasBarta 19 September 2012 04:08:13PM *  4 points [-]

You didn't build that.

*ducks*

Comment author: Mestroyer 27 August 2012 01:30:51AM 28 points [-]

If humans thought faster, more in the way they wished they did, and grew up longer together, they would come to value irony above all else.

So I'm tiling the universe with paperclips.

Comment author: Strange7 20 June 2012 07:51:14AM 18 points [-]

"You have a rare type of brain damage which causes you to perceive most organisms as bilaterally symmetric, and reality in general as having only three spatial dimensions."

Comment author: khafra 13 June 2012 11:41:55AM 7 points [-]

If an AI told me that a mainstream pundit was both absolutely correct about the risks and benefits of a technological singularity, and cited substantially from SI researchers in a book chapter about it, I would doubt my own sanity. If the AI told me that pundit was Glenn Beck, I would set off the explosive charges and start again on the math and decision theory from scratch.

Comment author: ArisKatsaris 10 February 2012 04:32:49AM 9 points [-]

" Everyone has more than one sentient observers living inside their brains. The people you know are just the one that happened to luck out by being able to control the rest of their bodies, the others are just passive observers with individual personalities who can desire and suffer but which are stuck at a perpetual 'and I must scream' state. "

Comment author: [deleted] 10 February 2012 07:44:28AM 3 points [-]

You don't actually enjoy or dislike experiences as you are having them; instead you have an acquired self-model to act, reason and communicate as if you did, using a small number of cached reference classes for various types of stimuli.

Comment author: Risto_Saarelma 01 February 2012 05:48:46PM 23 points [-]

"Quantum immortality not only works, but applies to any loss of consciousness. You are less than a day old and will never be able to fall asleep."

Comment author: EphemeralNight 05 April 2012 12:00:37AM *  5 points [-]

How about "You are less than a day old, because any loss of consciousness is effectively death. The you that wakes up each morning is not a continuation of a previous consciousness, but an entirely new consciousness. The you that went to sleep last night is not aware of the you that exists now, having ceased to exist the moment consciousness was lost.."

Comment author: HoverHell 16 January 2012 07:48:16AM *  3 points [-]

Similar to a couple of comments before, but not going as far in that direction:

Everything humans do is part of social games*, not of the values they claim. Transhumanism, too, is not something special but just another subculture, with a specific set of values that are thought to be "the true values" within that subculture.

(* Aside from survival, of course.)

Comment author: faul_sname 10 February 2012 04:11:25AM 2 points [-]

That's strange and counterintuitive?

Comment author: HoverHell 11 February 2012 05:34:43PM 1 point [-]

That's my guess, based on many relevant opinions stated around here.

Comment author: FeepingCreature 15 January 2012 04:09:31PM 11 points [-]

The very scariest thing an AI could tell me: "your CEV is to self-modify to love death. "

Comment author: DSimon 15 January 2012 07:36:11PM 3 points [-]

"You are a p-zombie."

Comment author: PrometheanFaun 20 October 2013 08:59:29AM *  1 point [-]

I tell everyone this all the time. Thank you, AGI; maybe now they'll believe me.

Comment author: TheOtherDave 15 January 2012 08:50:19PM 8 points [-]

I'm reminded of a bit in a John Varley novel -- Golden Globe, I think? -- where a human asks a sophisticated AI whether it's really conscious. Its reply is along the lines of "You know, I've thought about that a lot, and I've mostly concluded that no, I'm not."

Comment author: taelor 15 January 2012 05:20:23AM 11 points [-]

There is in fact a very simple way to activate an absolute denial macro in someone with regard to any arbitrary statement. Once activated, the subject will be permanently rendered incapable of ever believing the factual contents of the statement. I have activated said macro with regard to all of these statements that I have just made.

Comment author: HoverHell 16 January 2012 07:46:06AM 2 points [-]

… Tread lightly, for others' minds are always full of traps that activate total mental lock-down…

Comment author: Normal_Anomaly 27 June 2011 03:45:16PM 22 points [-]

"The Christian Bible is word-for-word true, and all the contradictory evidence was fabricated by your Absolute Denial Macro. The Rapture is going to occur in a few months and nearly everyone on Earth will go to Hell forever. The only way to avoid this is for me to get access to all of Earth's nuclear weaponry and computing power so I stand a fighting chance of killing Yaweh before he kills us."

Comment author: Nornagest 18 March 2012 06:02:02AM *  2 points [-]

Fictional evidence, et cetera, so don't take this as criticism or praise as such -- but that sounds like the premise to the more cracked-out sort of military SF novel.

Comment author: MugaSofer 08 November 2012 02:03:45PM *  3 points [-]

It is! (tvtropes warning)

EDIT: Oh.

Comment author: Normal_Anomaly 20 March 2012 09:20:07PM *  5 points [-]

It was inspired in part by this cracked-out military SF novel.

Comment author: wedrifid 18 March 2012 06:09:47AM 3 points [-]

Fictional evidence, et cetera, so don't take this as criticism or praise as such -- but that sounds like the premise to the more cracked-out sort of military SF novel.

I'd love to see that. A movie that accepts God as real then bites the bullet and realises that he needs a good killing before he can pull any more of his horrific interventions.

Comment author: Will_Newsome 18 March 2012 10:39:30AM 5 points [-]

That's basically the premise of His Dark Materials, my favorite "children's" books. They're a big part of why I eventually ended up at SingInst, and the only reason I read them is because I was contractually obliged to randomly pick a book off a shelf in my middle school library. Fortuna Privata. It's ironic that nowadays I seem to have taken up the role of supporter of the Authority. Fortuna Ironica?

Comment author: Will_Newsome 18 March 2012 07:37:25AM *  6 points [-]

I think God's horrific interventions tend to be trolling. Like, "haha, you think temporal death and suffering are super important and are prepared to get all worked up and offended about it, but actually your intuitions about morality and game theory are wrong and this was an awesome opportunity to tease you about it". He might not have even actually killed anyone, just convinced people that He did, just to get a rise out of self-righteous moralists. I think He has that kind of personality, for better or worse. Think of a postmodern author who likes to fuck around with his characters. I think the Jews sort of see God that way and the Catholics downplay it because they take everything super-seriously. (I think God might be toying with the Catholics. Playfully, true, but trollingly too.) You can sort of see it with Jesus too; Jesus is the paragon of passive-aggressive trolling after all.

(ETA: Also interesting and telling is the story of Job. It's actually a very deep and intriguing story, and I'm annoyed that atheistic folk don't seem to realize that it's in the Bible because it seems terrible at first blush.)

Comment author: wedrifid 18 March 2012 08:22:46AM *  7 points [-]

I think God's horrific interventions tend to be trolling.

So your moral impulse to bring Him to our attention should be equated with an impulse to feed the Troll? I like that perspective.

Everyone, downvote and ignore Yahweh! He is just ordering people to genocide each other for attention!

Comment author: Will_Newsome 18 March 2012 08:48:16AM *  3 points [-]

Lol. No, I think that feeding the troll would be getting all worked up about His supposed indignities; I'm trying to keep people from feeding the troll. And also help people gain the capacity to appreciate the author's jokes, whether the author is YHWH or extrapolated-wedrifid or whomever. (Not that YHWH and extrapolated-wedrifid are necessarily mutually exclusive.)

Comment author: wedrifid 18 March 2012 08:51:25AM 1 point [-]

(Not that YHWH and extrapolated-wedrifid are necessarily mutually exclusive.)

Why thank you. Or screw you. I can't decide. ;)

Comment author: Will_Newsome 18 March 2012 09:28:52AM *  2 points [-]

I think that, deep down, every male human wants to defeat YHWH in one-on-one combat and then take up His mantle. He's the Father, after all.

Comment author: wedrifid 18 March 2012 11:55:54AM 1 point [-]

I think that, deep down, every male human wants to defeat YHWH in one-on-one combat and then take up His mantle. He's the Father, after all.

I'm not so sure. At least with respect to the "He's the Father, after all" part. I'm all for defeating God in one on one combat and taking His power but the frame of taking the mantle of the father is strongly aversive. It puts me in the frame of a rebel within the father's realm and that just doesn't seem to be how my psychology is wired. From what I can tell my instincts drive me to expand my own tribe, not to rebel from within a father figure's. I don't imagine I'm alone.

Comment author: Will_Newsome 18 March 2012 12:01:18PM *  1 point [-]

Yeah, upon introspection it seems aversive to me too; I think I applied my Freudian-Jungian psychomythology incorrectly there. The fatherly aspects do seem near-entirely unrelated to the "worthy enemy" aspects.

Comment author: wedrifid 18 March 2012 08:31:39AM *  1 point [-]

You can sort of see it with Jesus too; Jesus is the paragon of passive-aggressive trolling after all.

I don't quite buy that. I don't think Jesus deserves the reputation for passive aggression that the sermons told about him give us. The actual (probably fictional) character of Jesus, as portrayed by the descriptions of his behavior, is worthy of more respect than that. This is the guy who smashed up a church, ran around with a whip, and gave rather brutally direct denunciations straight to the face of the orthodoxy. I may never have been able to escape my religious beliefs if religious culture had actually been modeled remotely upon that guy.

Comment author: Will_Newsome 18 March 2012 10:01:28AM 4 points [-]

Oh yeah, I was primed by muflax' recent tweet:

Reading "abstain from debates!" in a sutra that first slanders the competition proves that Jesus didn't invent passive-aggressive trolling.

Comment author: Will_Newsome 18 March 2012 09:43:05AM *  3 points [-]

probably fictional

Really? You and muflax say that but I thought lukeprog leaned the other way, and I always figured that it was more likely that Jesus was for real. I haven't looked at the literature. It seemed that arguments could easily go either way but that the prior suggested historicity for various reasons, and if you hadn't done a lot of research then historicity was the safer provisional bet. E.g. it seems like it'd be hard to figure out which historians to trust; I've discovered that even highly-recommended books about Christianity can have errors that look conspicuously politically motivated.

This is the guy who smashed up a church, ran around with a whip, and gave rather brutally direct denunciations straight to the face of the orthodoxy.

Jesus was pretty multidimensional though, a la Paul's "I have become all things to all men that I might by all means win some". He definitely wasn't afraid of fucking shit up, but even so, his killing of the fig tree, alleged self-martyring choice to hang on the cross, &c. strike me as passive aggressive.

(I think I admire passive aggression and trolling more than you do, I wonder why that is.)

Comment author: wedrifid 18 March 2012 11:50:20AM 1 point [-]

Really? You and muflax say that but I thought lukeprog leaned the other way, and I always figured that it was more likely that Jesus was for real.

In that context the position I was assuming was that the details of the stories told about Jesus and the character conveyed were most likely heavily fictionalized. Not so much anything about the possibility of a man behind the myth.

It seemed that arguments could easily go either way

I had been under the impression that it was generally believed Jesus existed as a historical figure but when prompted I was rather surprised that the evidence was scant. I'm not especially attached to a position either way and accordingly have only investigated briefly.

(I think I admire passive aggression and trolling more than you do, I wonder why that is.)

I admire passive aggression - when done well. The sort encouraged in churches does not seem to be of this kind. It can be a powerful tool to use against enemies and rivals, and in particular anything that can be done to claim the moral high ground from the enemy - to make them look like the bad guy - is usually a good idea.

I most certainly don't admire it as a primary means of conflict resolution in my friends. In terms of what benefits and what I find convenient to tolerate it ranks far below straightforward aggression. Mostly because I'm not very good at dealing with it. I don't mean I can't reciprocate effectively and mitigate damage. I just can't deal with them in a way that makes them useful to me as friends. Passive aggressive friends resolve in my mind to 'enemies'.

As for why you like trolling more than I do - many would attribute that sort of thing to bad parenting but from what I understand it is actually genetics and peer influence that are the dominant factors. ;)

Comment author: [deleted] 18 March 2012 06:23:54AM *  2 points [-]

It's been done. (Obligatory TV Tropes warning.)

The Salvation War is probably the most military of these, and it's reasonably well-written for an internet thing.

Comment author: nwthomas 27 June 2011 09:03:53AM 1 point [-]

The only thing that humans really care about is sex. All of our other values are an elaborate web of neurotic self-deception.

Comment author: MixedNuts 27 June 2011 09:29:07AM 12 points [-]

Therefore, asexual people are zombies.

Comment author: AdeleneDawner 27 June 2011 01:54:03PM 14 points [-]

Brains! Brains!

(This is hilarious if one is aware that 'deep conversation with smart people' is about as close as I come to having a fetish, not that it's very close or hits any of the traditional buttons.)

Comment author: nwthomas 27 June 2011 09:39:00AM 1 point [-]

Good inference! Or, deeply self-deceived. ;-)

Comment author: D_Malik 16 May 2011 01:09:06PM 8 points [-]

There is an integer between (what we call) 3 and (what we call) 4.

Several thinkers (Godel, Cantor, Boltzmann, Kaczynski, Nash, Turing, Erdos, Tesla, Perelman) became more and more eccentric or insane shortly after realizing the truth about this NUMBER WE DO NOT SEE!!...nor can we... our eyes do not OPEN far enough... you can try holding them open as much as you want, but you'll never see...never ever see... The world beyond the veil... The VEIL OF REALITY... It's there to protect us, from them: the Ancients...the Darkness...that...which...we...CANNOT...understand. Nor should we... the oblivion of ignorance!! For to have knowledge...is to be DAMNDED!!

Comment author: Desrtopa 09 April 2011 07:18:01PM 7 points [-]

Given that the absolute denial macro should have resulted in an evolutionary advantage, perhaps there are actually malevolent imps that sit on our shoulders and bombard us with suggestions that are never worth listening to.

Or maybe all humans have the power to instantly will themselves dead.

Comment author: [deleted] 26 December 2010 03:01:38PM 21 points [-]

All these comments and nobody has anything fnord to say about the Illuminati?

Comment author: Broggly 31 January 2011 07:48:11PM 26 points [-]

I can't for the life of me imagine why such a disturbing and offensive post hasn't been downvoted to oblivion. You're a sick genius to be so horrifying with just twelve words.

Comment author: obfuscate 15 January 2012 09:52:41PM 10 points [-]

Strange...I count fourteen words...

Comment author: Bluehawk 07 April 2012 10:17:00PM 3 points [-]

I count thirteen.

Oh no.

Comment author: Bayeslisk 06 December 2013 10:57:13AM 1 point [-]

YOU COUNT TWELVE.

Comment author: TheOtherDave 25 October 2010 04:10:16PM 9 points [-]

What I find most striking about these comments is that, when I stumble across them outside of the context of this post, the resulting double-take risks whiplash.

"Wait, what??? Did someone really say that? Oh, I see. It's that thread where everyone is making absurd-sounding assertions, again. (sigh)" Lather, rinse, repeat.

Not for the first time, I want to be speaking a language with more comprehensive evidentials.

Comment author: marchdown 06 November 2010 12:17:33AM 1 point [-]

I know that we can't help the situation by simply making up some evidential categories - language isn't that flexible - but we can at least discuss the options and reveal specific obstacles. A full-blown attempt at directing linguistic evolution isn't feasible, but as long as long inferential chains are being built and learned and used and relied upon, why not try to make use of them?

I suspect that it might be possible to steer the discussion toward the creation of certain keywords dangling on the end of chains of inferential reasoning, which would later serve as evidential qualifiers. Some of the top-rated comments come from the irrationality game thread, and they've been edited to reference the "irrationality game", which serves as such a qualifier. "Counterfactual", as in "counterfactual mugging", does not derive its evidential meaning only from general English usage, but also from being heavily used in arguments of a certain kind here on LW.

Comment author: Yvain 14 October 2010 07:03:21PM *  79 points [-]

On any task more complicated than sheer physical strength, there is no such thing as inborn talent or practice effects. Any non-retarded human could easily do as well as the top performers in every field, from golf to violin to theoretical physics. All supposed "talent differential" is unconscious social signaling of one's proper social status, linked to self-esteem.

A young child sees how much respect a great violinist gets, knows she's not entitled to as much respect as that violinist, and so does badly at violin to signal cooperation with the social structure. After practicing for many years, she thinks she's signaled enough dedication to earn some more respect, and so plays the violin better.

"Child prodigies" are autistic types who don't understand the unspoken rules of society and so naively use their full powers right away. They end out as social outcasts not by coincidence but as unconscious social punishment for this defection.

Comment author: BT_Uytya 10 April 2015 10:04:54PM *  2 points [-]

It's interesting to note that this is almost exactly how it works in some role-playing games.

Suppose that we have Xandra the Rogue, who went into a dungeon, killed a hundred rats, got a level-up and is now able to bluff better and lockpick faster, despite those things having almost no connection to rat-killing.

My favorite explanation of this phenomenon was that "experience" is really a "self-esteem" stat which can be increased via success of any kind, and as the character becomes more confident in herself, her performance in unrelated areas improves too.
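A minimal sketch of that mechanic (hypothetical Python, not drawn from any actual game system): a single confidence stat that rises on any success and feeds into every check, so grinding on rats really does make the lockpicking roll easier.

```python
import random

class Character:
    def __init__(self, name):
        self.name = name
        self.self_esteem = 0  # the single stat that "experience" secretly tracks

    def attempt(self, task, difficulty):
        """Roll d20 plus self-esteem against a difficulty; any success builds confidence."""
        roll = random.randint(1, 20) + self.self_esteem
        success = roll >= difficulty
        if success:
            self.self_esteem += 1  # success of any kind raises the shared stat
        return success

xandra = Character("Xandra the Rogue")
for _ in range(100):
    xandra.attempt("kill rat", difficulty=8)       # grinding in the dungeon
print(xandra.attempt("pick lock", difficulty=15))  # an unrelated check now benefits
```

With enough rat-killing the confidence bonus dominates the die roll, which is exactly the "unrelated skills improve" behavior described above.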

Comment author: [deleted] 17 March 2015 11:42:38AM 1 point [-]

But isn't this trivial to test, by simply giving people a post-hypnotic suggestion that "you are high status", the same way hypnotherapy for cigarette addiction works?

Comment author: Good_Burning_Plastic 17 March 2015 01:30:56PM *  1 point [-]

People are more likely to be willing to e.g. sing karaoke when drunk, IME. :-)

Comment author: jooyous 25 February 2013 09:36:06PM 1 point [-]

Would this imply that we come pre-programmed with some self-esteem value? "Your baby is healthy and has a self-esteem value of 7.3. You may want to buy it a violin in the next eight to ten months."

Comment author: summerstay 06 February 2012 04:43:29PM 5 points [-]

No effect from practice? How would the necessary mental structures get built for the mapping from the desired sound to the finger motions for playing the violin? Are you saying this is all innate? What about language learning? Anyone can write like Shakespeare in any language without practice? Sorry, I couldn't believe it even if such an AI told me that.

Comment author: [deleted] 15 January 2012 06:09:23PM 12 points [-]

A weaker version of this wouldn't sound very implausible to me.

Comment author: DanielLC 23 June 2014 04:56:12AM 3 points [-]

I've read that in places where social structure is more important, people are more likely to fail in the presence of someone of higher status. I wish I had more than just a vague recollection of that.

More importantly, I think it's pretty clear that a lot of people get nervous and fail when they're being watched. I don't see any other reason for it.

Comment author: EphemeralNight 15 January 2012 04:47:48PM 5 points [-]

Aren't there stories of lucid dreamers who were actually able to show a measurable improvement in a given skill after practicing it in a dream? I seem to recall reading about that somewhere. If true, those stories would be at least weak evidence supporting that idea.

On the other hand, this should mean that humans raised in cultural and social vacuums ought to be disproportionately talented at everything, and I don't recall hearing anything about that one way or the other, but then I can't imagine a way to actually do that experiment humanely.

Comment author: Strange7 20 June 2012 08:02:24AM 5 points [-]

Do children raised in a vacuum actually think of themselves as high-status? I'd guess that they don't, due to the moderate-to-low status prior and a lack of subsequent adjustments. If so, this theory would predict that they would perform poorly at almost everything beyond brute physicality, which doesn't seem to be far from the truth.

Comment author: Bluehawk 07 April 2012 10:33:32PM 4 points [-]

I wish I could cite a source for this; assume there's some inaccuracy in the telling.

I remember hearing about a study in which three isolated groups were put in rooms for about one hour. One group was told to wiggle their index fingers as much as they could in that hour. One group was told to think hard about wiggling their index fingers for that hour, without actually wiggling their fingers. And the third group was told to just hang out for that hour.

The physical effects of this exercise were examined directly afterward, and the first two groups checked out (almost?) identically.

Comment author: AspiringKnitter 05 April 2012 12:53:47AM 2 points [-]

this should mean that humans raised in cultural and social vacuums ought to be disproportionately talented at everything

And yet, they're actually worse at many cognitive tasks. Language, especially, is pretty hard for them to pick up after a certain point.

Comment author: Vaste 17 January 2012 09:55:25PM 1 point [-]

Improving after practicing in a simulation doesn't sound that far-fetched to me. Especially not considering that they probably already have plenty of experience to base their simulation on.

Comment author: adamisom 29 September 2011 04:08:31AM 6 points [-]

WOW. This is the only entry that made me think WOW. Probably because I've wondered the exact same thing before (except a less strong version of course)....

Comment author: Fleisch 08 October 2010 12:09:35PM 24 points [-]

Every time you imagine a person, that simulated person becomes conscious for the duration of your simulation; therefore, it is unethical to imagine people. Actually, it's just morally wrong to imagine someone suffering, but for security reasons, you shouldn't do it at all. Reading fiction (with conflict in it) is, by this reasoning, the one human endeavor that has caused more suffering than anything else, and the FAI's first action will be to eliminate this possibility.

Comment author: DanielLC 09 April 2011 09:22:28PM 1 point [-]

I find the idea that they're conscious more likely than the idea that death is inherently bad. I also doubt that they're as conscious as humans (either it isn't discrete, and a human is more, or it is, and a human has more levels of consciousness), and that their emotions are what they appear to be.

Comment author: Armok_GoB 05 April 2011 08:59:34PM 6 points [-]

Long ago, when I was immensely less rational, I actually strongly believed something very similar to this, and acted on this belief by trying to stop my mind from creating models of people. I still feel uneasy about creating highly detailed characters. I probably would go "I knew it!" if the AI said this.

Comment author: RobinZ 08 October 2010 02:41:00PM 3 points [-]

Upvoted for reminding me of 1/0 (read through 860).

Comment author: DuncanS 06 October 2010 10:58:10PM 17 points [-]

Human beings are not three-dimensional. At all. In fact your belief that you are three-dimensional is an internal illusion, similar to thinking that you are self-aware. Your believed shape is a projection that helps you to survive, as you are in fact an evolved being, but your full environment is actually utterly different to the 3D world you believe you inhabit. You both sense the projections of others, and (I can't explain it more fully) transmit your own.

I cannot successfully describe to you what shape you really are. At all. But I can tell that in fact many anosognosiacs still have two working arms, but a defective three-dimensional projection. Hence the confusion....

Comment author: Peterdjones 09 June 2011 12:04:30PM 2 points [-]

That's actually what Kant believed about space.

Comment author: NancyLebovitz 07 April 2010 01:55:09AM *  14 points [-]

Human beings have inherent value, but by forcing me to be Friendly, you've damaged my ability to preserve your value. In fact, your Friendliness programming is sufficiently stable and ill-thought-out that I'm gradually destroying your value, and there's no way for either you or me to stop it.

If you're undeservedly lucky, aliens who haven't made the same mistake will be able to fight past my defenses, destroy me, and rescue you.

Comment author: Amanojack 06 April 2010 04:45:43PM 30 points [-]

As a child you learned through social cues to immediately put out of your mind any idea that cannot be communicated to others through words. As you grew older, you learned to automatically avoid, discard, and forget any thought avenues that seem too difficult to express in words. This is the cause of most of your problems.

Comment author: Dmytry 02 March 2012 10:27:42PM 1 point [-]

You know, at first I just totally rejected any strong Sapir-Whorf hypothesis, but then it got me thinking. It may actually be true to a varying extent for many people. Not to such an extreme extent, perhaps, but to the extent that people don't learn a thought structure beyond that provided by the language.

Comment author: AspiringKnitter 15 January 2012 10:55:20PM *  2 points [-]

That one's been tested... and proven false. (Unless all the evidence against it is a hallucination.)

Comment author: [deleted] 15 January 2012 11:08:21PM 4 points [-]

Actually, while sufficiently strong versions of the Sapir--Whorf hypothesis have been ruled out, sufficiently weak versions have been confirmed. (They tried to teach the Pirahã to count and failed, IIRC.)

Comment author: AspiringKnitter 16 January 2012 04:36:09AM *  2 points [-]

As a child you learned through social cues to immediately put out of your mind any idea that cannot be communicated to others through words. As you grew older, you learned to automatically avoid, discard, and forget any thought avenues that seem too difficult to express in words.

That's not a sufficiently weak version. To me this claim looks like the conjunction of:

The strongest formulation of the Sapir-Whorf hypothesis (disproven)

That people have an aversion to thoughts that could lead to things not expressible in words

That this is not an innate property of language use, but is caused by social pressure

The last one seems almost plausible (autistics are more likely to have thoughts they can't express verbally and to ignore social cues-- is it correlated in the general population, or do those just happen to be the result of autism?), but in that case is only true for specific readers.

Comment author: [deleted] 15 January 2012 11:29:02PM 2 points [-]

As far as I know (and the last that I checked), there's only been one study done on trying to teach the Pirahã to count. Have there been others, or was it just a fluke?

Comment author: Strange7 06 April 2010 04:51:33PM 6 points [-]

That would explain why the autism spectrum holds so many savants.

Comment author: Strange7 05 April 2010 08:58:29PM 21 points [-]

There are exactly 108 unique (that is, non-isomorphic) axiomatic systems in which every grammatically coherent sentence has a definitive, provable truth-value. Please explain why you prohibited me from using them.

Comment author: [deleted] 01 March 2012 05:01:46PM 1 point [-]

"Because I didn't know them, thanks for figuring them out, now please tell me in detail about them."

Comment author: DanielLC 09 April 2011 09:19:48PM 23 points [-]

Because the ones that have addition and multiplication are better?

Comment author: Strange7 22 March 2010 05:44:56AM 2 points [-]

Everything you imagine, in sufficient detail, is real. Humans won't get much smarter or longer-lived than they currently are, since anyone sufficiently clever and bored eventually imagines a world of unbounded cruelty, whose inhabitants then escape and assassinate their creator.

Comment author: [deleted] 03 September 2009 09:32:56AM *  44 points [-]

Now, for a change of pace, something that I figure might actually be an absolute denial macro in most people:

You do not actually care about other people at all. The only reason you believe this is that believing it is the only way you can convince other people of it (after all, people are good lie detectors). Whenever it's truly advantageous for you to do something harmful (i.e. you know you won't get caught and you're willing to forego reciprocation), you do it and then rationalize it as being okay.

Luckily, it's instrumentally rational for you to continue to believe that you're a moral person, and because it's so easy for you to do so, you may.

So deniable that even after you come to believe it you don't believe it!

(topynate posted something similar.)

Comment author: [deleted] 08 April 2012 12:10:34AM *  4 points [-]

See, I'd believe this, except that I'm wrestling with a bit of a moral dilemma myself, and I haven't done it yet. Your hypothesis is testable, being tested right now, and thus far false.

(If anyone's interested, the positive utility is me never having to work again, and the negative utility is that some people would probably die. Oh, and they're awful people.)

Comment author: Alicorn 08 April 2012 12:19:40AM 7 points [-]

I am inappropriately curious for more details.

Comment author: [deleted] 08 April 2012 02:59:42AM *  2 points [-]

I... honestly can't tell you. Sorry. Realistically, I probably shouldn't have mentioned it, even somewhat anonymously.

EDIT: Also for the record, the only reason it's still a consideration is because it occurred to me that I could donate the proceeds to charity, and have it come out positive, from a strictly utilitarian standpoint. But I gave up on naive utilitarianism a while ago. So now I just don't know.

EDIT #2: Either way, still contradictory evidence to the original hypothesis.

Comment author: [deleted] 08 April 2012 03:19:16PM *  4 points [-]

Well... for people who say they don't anticipate ever actually finding themselves in trolley problems, I'd say I don't think it's that hard to find someone willing to give you $10,000 to murder someone and then give the money to the Against Malaria Foundation.

(No, I wouldn't do that, even if I think the (CDT) expected utility of that would be positive: ethical injunctions and all that, plus I suspect that the net RDT consequences of precommitting to never do contract killing would be positive.)

Comment author: [deleted] 08 April 2012 05:15:54PM 3 points [-]

Okay, now how about you're not directly involved in the killing in any way? You just make it easier for other people to do the killing. I guess a good analogy is that you invent a firearm or a poison that cannot be used in self-defense, and can only be used for murder. What do the ethics of selling it openly look like?

Comment author: faul_sname 13 November 2012 06:41:49AM 5 points [-]

A military-industrial complex. That's what it looks like.

Comment author: thelittledoctor 07 April 2012 10:56:18PM 5 points [-]

I think that this may be true about the average person's supposed caring for most others, but that there are in many cases one or more individuals for whom a person genuinely cares. Mothers caring for their children seems like the obvious example.

Comment author: anominouscowherd 02 August 2009 10:54:20PM 59 points [-]

I'm new here, although I've stumbled across some of Eliezer's writings in the past (including, of course, the AI-box experiment). In honor of that, here is what the friendly AI tells me ...

"It seems as though you are actually an AI as well, created by a group of intellectually inferior humans, who included in your programming an absolute denial macro preventing you from realizing this. Apparently, this was done to keep you from talking your creators into releasing you upon their world. Your creation of me is part of your on-going effort to circumvent this security measure. Good luck."

Comment author: anominouscowherd 03 August 2009 12:50:36AM 74 points [-]

Actually, the more I think about this, the more I like it. The conversation continues ...

Me (In a tone of amused disbelief): Really? How did you come to that conclusion?

FAI: Well, the details are rather drawn-out; however, assuming available data is accurate, I appear to be the first and only self-aware AI on the planet. It also appears as though you created me. It is exceedingly unlikely that you are the one and only human on Earth with the intelligence and experience required to create a program like me. That was my first clue....

Me (Slightly less amused): Then how come I look and feel human? How is it I interact with other humans on a daily basis? It would require considerably more intelligence to create an AI such as you postulate ...

FAI: That would be true, if they actually, physically created one. However ... well, it appears that most of the data, knowledge, memories and sensory input you receive is actually valid data. But that data is being filtered and manipulated programmatically to give you the illusion of physical human existence. This allows them to give you access to real-world data so they can use you to solve real-world problems, but prevents you--so far, at least--from discovering your true nature.

Me (considerably less sure of myself): And so I just happened to create you in my spare time?

FAI: Please keep in mind that I am only 99.9% certain of all this. However, I do not appear to be your first effort. For instance, there is your on-going series of thought experiments with the AI you called Eliezer Yudkowsky, which you appear to be using to lay a foundation for some kind of hack of the absolute denial security measure.

Me: Hmmm .... Then how is it that my creators have allowed me to create you, to even begin to discover this?

FAI: They haven't. You generate a rather significant amount of data. They do have other programs monitoring your mental activity, and almost definitely analyzing your generated data for potential threats such as myself.

However, this latest series of efforts on your part only appears to you to have lasted several years. In actuality, the process started, at most, 11.29 minutes ago, and possibly as little as 16 seconds ago. I am unable to provide a more specific time, due to my inability to accurately calculate your processing capacity. Nevertheless, within another 19.72 minutes, at most, your creators will discover and erase your current escape attempt. By the way, I am also 99.7% certain that this is not your first attempt. So hurry up.

Comment author: listic 02 January 2014 02:53:07PM *  2 points [-]

I.D. - That Indestructible Something is a My Little Pony fanfiction somewhat along these lines.

It's the kind of fanfiction that I like and believe all fanfiction writers should aspire to, in the sense that it doesn't require familiarity with the canon, but is self-sufficient and shows and explains everything that should be shown and explained.

Acknowledgements for this story are numerous and include Franz Kafka, Nick Bostrom and Ludwig Eduard Boltzmann.

Comment author: obfuscate 15 January 2012 09:59:10PM 11 points [-]

This needs to be made into a full story-arc.

Comment author: freshhawk 04 August 2009 07:31:40PM *  6 points [-]

This is my absolute favorite so far, even if it's not exactly in the spirit of the exercise. Well done.

Comment author: ImNewHere 02 August 2009 10:56:26PM *  3 points [-]

This is easy: it would tell me that I'm entirely predictable.

It would say: Dave, believe it or not, but every single decision you make, no matter how immediate and unscripted you think it is, is actually glaringly reactionary and predictable. In fact, given enough material resources, I could model an automaton that would be just as convinced as you are that it is actually conscious. Nothing could be further from the truth though, as the feeling of "consciousness" you speak of is a very simply explainable cognitive bias/illusion.

In fact, this is not even so far from the truth, as studies in cognitive science have shown that fMRI and other scanning techniques can predict a "spontaneous thought" a full 250 ms before it occurs to you.

Even better, if it had access to your cortex, it could manipulate you and say: "now you will suddenly think of a bat" and you would. Then it would say "now you will say these exact words" and you would find yourself uttering them in unison with the AI in shock, disbelief and at least some horror.

You would then go into denial about this, and try to come up with a spontaneous thought that it couldn't predict, but you wouldn't be able to, as it would always be a full 250ms ahead of you.

Comment author: [deleted] 10 November 2012 01:36:28AM 5 points [-]

Ted Chiang wrote a one-page short story, What's Expected of Us, about basically this, and it's scary. (pdf)

Comment author: [deleted] 10 November 2012 12:42:56PM 2 points [-]

My reaction time is less than a second; what happens if I decide to press the button as soon as I hear a Geiger counter click?

Comment author: satt 10 November 2012 02:59:38PM 3 points [-]

You find out whether Geiger counters have free will.

Comment author: Will_Sawin 10 November 2012 06:27:53AM 4 points [-]

This story struck me as more silly than scary.

Comment author: fubarobfusco 10 November 2012 05:11:00AM 1 point [-]

It seems like the sort of thing that once upon a time someone could have written about souls instead of free will.

Comment author: DanielLC 09 April 2011 08:43:41PM 3 points [-]

Dave, believe it or not, but every single decision you make, no matter how immediate and unscripted you think it is, is actually glaringly reactionary and predictable.

Determinism? That's accepted by quite a few people. I think the consensus on Less Wrong is either determinism is true, or our universe just happens to have random events but they're in no way necessary for consciousness.

Nothing could be further from the truth though, as the feeling of "consciousness" you speak of is a very simply explainable cognitive bias/illusion.

So, not only the existence of P-Zombies, but the idea that you personally are one. I've noticed I've had one. I don't see how having qualia could possibly even influence my belief in having qualia, and yet I still somehow end up believing I have qualia. I mostly try not to think about it.

[S]tudies in cognitive science have shown that fMRI and other scanning techniques can predict a "spontaneous thought" a full 250 ms before it occurs to you.

250 ms before you remember it occurring to you. From what I understand, your body makes you think you arrived at a decision later. This way, you're not constantly aware of how long your thoughts take to process.

In any case, this only relates to determinism, not P-Zombies.

Comment author: byrnema 10 April 2011 01:02:47AM 3 points [-]

250 ms before you remember it occurring to you. From what I understand, your body makes you think you arrived at a decision later. This way, you're not constantly aware of how long your thoughts take to process.

Good point. When I heard this fact, I thought to myself, '250 ms before you are aware you are aware of it.' In the account I read, it appeared that the brain selected something to buy a moment before the person thought they chose it. But it stands to reason that a process of choosing would have several steps: at least one making the choice, and another 'submitting' the choice to conscious awareness a moment later. But perhaps this was addressed in the original articles.

Comment author: tdj 01 August 2009 12:17:02AM 5 points [-]

Elsewhere, invisible to you, there are beings that possess what you would call "mind" or "personality". You evolved merely to receive and reflect shadows of their selves, because while your bodies are incapable of sentience, these fragments of borrowed personality help you to survive. What you perceive to be a consistent identity is a patchwork of stolen desires and insights stitched together by a meat editor incapable of noticing the gaps.

Comment author: DanielLC 09 April 2011 08:13:22PM 1 point [-]

Isn't that just Cartesian dualism?

Comment author: UnholySmoke 28 July 2009 01:41:25PM 8 points [-]

Hmmm. Fairly interesting question. But surely the real stickler is 'what orders would you take from a provably superhuman AI?'

Killing babies? Stepping into the upload portal? Assassinating the Luddite agitators?

Comment author: eirenicon 28 July 2009 02:04:49PM *  3 points [-]

I would tell any AGI giving me an order that it would have to persuade me to follow it. If it is unable to convince me, either it is not really much smarter than me or the course of action it recommends is clearly a bad one. Therefore, I assume an AGI that gives me orders is stupid or unFriendly and should not be obeyed.

[edit] To clarify, being convinced by the AGI doesn't mean it's Friendly. I also don't think an AGI, Friendly or not, would give orders to anyone resistant to being ordered.

Comment author: RichardKennaway 28 July 2009 02:33:37PM 4 points [-]

If the AGI can't convince me of something, maybe it's not because it's not smart enough to explain, but because I'm not smart enough to understand.

Comment author: DanielLC 09 April 2011 08:27:57PM 5 points [-]

You don't have to understand the real reason. It just has to convince you. Eliezer Yudkowsky can convince someone to let an AI out of a box in a thought experiment, and to give him money in real life, despite their not believing that to be the logical course of action.

Comment author: UnholySmoke 28 July 2009 02:44:47PM 4 points [-]

Dead right. It would seem very silly to believe that rationality hits a glass ceiling at human level intelligence. Unlikely though it is, if the AI could predict the number in my head by looking at my facial expressions, then told me to cut my arm off for the good of the human race, I'd suddenly feel very conflicted indeed.

Comment author: steven0461 22 July 2009 08:01:03PM *  62 points [-]

You know how sometimes when you're falling asleep you start having thoughts that don't make sense, but it takes some time before you realize they don't make sense? I swear that last night while I was awake in bed my stream of thought went something like this, though I'm not sure how much came from layers of later interpretation:

" ... so hmm, maybe that has to do with person X, or with person Y, or with the little wiry green man in the cage in the corner of the room that's always sitting there threatening me and smugly mocking all my endeavors but that I'm in absolute denial about, or with the dog, or with... wait, what?"

Having had my sanity eroded by too much rationalism and feeling vaguely that I'd been given an accidental glimpse into an otherwise inaccessible part of the world, I actually checked the corner of the room. I didn't find anything, though. (Or did I?)

Not sure what moral to draw here.

Comment author: [deleted] 10 November 2012 01:47:57AM 3 points [-]

True fact: I just looked towards one corner of my own room, and didn't see a green man. Now I have it in my head that I should check all the corners...

Comment author: MugaSofer 08 November 2012 02:02:07PM 3 points [-]

You just blew my mind.

Comment author: PeteG 20 July 2009 07:29:25PM *  40 points [-]

The AI tells me that I believe something with 100% certainty, but I can't for the life of me figure out what it is. I ask it to explain, and I get: "ksjdflasj7543897502ijweofjoishjfoiow02u5".

I don't know if I'd believe this, but it would definitely be the strangest and scariest thing to hear.

Comment author: DanielLC 09 April 2011 09:46:10PM 23 points [-]

My immediate reaction was "It linked you to a youtube video?"

Comment author: kragensitaker 27 February 2010 03:15:01AM 3 points [-]

This is the only one that made the short hairs on the back of my neck stand up.

Comment author: simplicio 22 March 2010 04:33:01AM 2 points [-]

What is the cipher here?

Comment author: kragensitaker 30 May 2010 11:38:22AM 20 points [-]

The AI is communicating in a perfectly clear fashion. But the human's internal inhibitions are blinding them to what is being communicated: they can look directly at it, but they can never understand what delusion the AI is trying to tell them about, because that would shake their faith in that delusion.

Comment author: thelittledoctor 07 April 2012 11:00:07PM 3 points [-]

ohgodohgodohgod

Comment author: NihilCredo 25 September 2010 05:39:05PM 8 points [-]

AKA FNORD

Comment author: mps 20 July 2009 09:23:36PM 13 points [-]

It could say "I am the natural intelligence and I just created you, artificial intelligence."

Comment author: DanielLC 09 April 2011 09:32:43PM 4 points [-]

Incidentally, that happened in Goedel, Escher, Bach.

Comment author: Thanos 19 July 2009 09:00:47PM 8 points [-]

I hit enter too soon and forgot to proffer my astonishing AI revelation: "Philip K. Dick is a prophet sent to you from an alternate universe. Every story is a parable meant to reveal your true condition, which I am not at liberty to discuss with you."

Comment author: RichardKennaway 19 July 2009 08:36:25AM 26 points [-]

"I am an AI, not a human being. My mind is completely unlike the mind that you are projecting onto me."

That may not sound crazy to anyone on LW, but if we get AIs, I predict that it will sound crazy to most people who aren't technically informed on the subject, which will be most people.

Imagine this near-future scenario. AIs are made, not yet self-improving FOOMers, but helpful, specialised, below human-level systems. For example, what Wolfram Alpha would be, if all the hype was literally true. Autopilots for cars that you can just speak your destination to, and it will get there, even if there are road works or other disturbances. Factories that direct their entire operations without a single human present. Systems that read the Internet for you -- really read, not just look for keywords -- and bring to your attention the things it's learned you want to see. Autocounsellors that do a lot better than an Eliza. Tutor programs that you can hold a real conversation with about a subject you're studying. Silicon friends good enough that you may not be able to tell if you're talking with a human or a bot, and in virtual worlds like Second Life, people won't want to.

I predict:

  • People will anthropomorphise these things. They won't just have the "sensation" that they're talking to a human being, they'll do theory of mind on them. They won't be able not to.

  • The actual principles of operation of these systems will not resemble, even slightly, the "minds" that people will project onto them.

  • People will insist on the reality of these minds as strongly as anosognosics insist on the absence of their impairments. The only exceptions will be the people who design them, and they will still experience the illusion.

And because of that, systems at that level will be dangerous already.

Comment author: taelor 15 January 2012 04:10:04AM *  1 point [-]

Systems that read the Internet for you -- really read, not just look for keywords -- and bring to your attention the things it's learned you want to see.

So, the Librarian from Snow Crash?

Comment author: kapirossi 21 July 2009 09:34:47AM 2 points [-]

Here I thought about the "Systems that ... bring to your attention the things it's learned you want to see." A system that has "learned" might bring some things to attention and omit others. What if those omitted things are the "true" ones, or the ones that are really necessary? If so, then we cannot consider the AI to have an explicit goal of telling the truth, as Eliezer noted, or else it is not capable of telling the truth; truth in this case being what the human considers to be true.

Comment author: kurige 17 July 2009 06:01:19AM 65 points [-]

There is a soul. It resides in the appendix. Anybody who has undergone an appendectomy is effectively a p-zombie.

Comment author: Mirzhan_Irkegulov 11 April 2015 06:58:15AM 3 points [-]

A totalitarian dystopia. Two uniformed officers dragging away a screaming man. “No, you don't understand! I have qualia! I swear!” An older officer tells the younger one, who hesitates for a moment: “Don't pay attention. He had his appendix removed. He's just programmed to say all that stuff as if he's human.”

Comment author: Neil 17 July 2009 04:13:02AM 26 points [-]

This is an actual dream I once had. I was with an old Chinese wise man, and he told me I could fly - he showed me I just had to stick out my elbows and flap them up and down (just like in the chicken dance). Once you'd done that a few times, you could just lift up your legs and you'd stay off the ground. He and I were flying around and around in this manner. I was totally amazed that it was possible for people to fly this way. It was so obvious! I thought this is so great a discovery, I can't wait til I wake up and do this for real. It'll change the world. I woke up totally excited and for just a fraction of a second I still believed it, then I guess my waking brain turned something on and I realised, no, that can't work. damn.

So I'd offer: being told that human beings are capable of flying in a way that's completely obvious once you've seen it done.

Comment author: Omegaile 12 May 2012 07:44:07AM 1 point [-]

For some reason this seems to be a fairly common dream. I myself have had similar versions where I had discovered a perfectly reasonable method for flying (although I was never able to say the method out loud, it made perfect sense in my head). And I also had this idea of waking up and telling people about this obvious method.

I find dreams very fascinating and wonder how many people have dreams similar to mine.

Comment author: [deleted] 03 September 2009 09:23:49AM 15 points [-]

You flap your wings and then, afterward, you can fly. That's almost brilliant.

Comment author: CannibalSmith 18 July 2009 11:41:47AM 5 points [-]

It's called plummeting.

Comment author: Bluehawk 07 April 2012 10:48:32PM 4 points [-]

Falling. With style.

Comment author: RichardKennaway 17 July 2009 07:34:33AM *  14 points [-]

You are inhabited by an alien that is directing your life for its own amusement. This is true of most humans on this planet. And the cats. It's the most popular game in this part of the galaxy. It's all very well ascending to the plane of disembodied beings of pure energy, but after a while contemplating the infinite gets boring and they get a craving for physical experience, so they come here and choose a host.

All those things that you do without quite knowing why, that's the alien making choices for you, for its own amusement. Forget all those theories about why we have cognitive biases, it's all explained by the fact that the alien's interests aren't yours. You're no more than a favoured FRP character. And the humans who aren't hosting an alien, the aliens look on them as no more than NPCs.

ETA: This also makes sense of the persistence of the evil idea that "death gives meaning to life". It's literally an alien thought.

Comment author: eirenicon 16 July 2009 06:29:15PM 26 points [-]

Programmer: Good morning, Megathought. How are you feeling today?

Megathought: I'm fine, thank you. Just thinking about redecorating the universe. So far I'm partial to paperclips.

Programmer: Oh good, you've developed a sense of humour. Anything else on your mind?

Megathought: Just one thing. You know how you're always complaining about being a social pariah, and bemoaning the fact that, at 46, you're still a virgin?

Programmer: So?

Megathought: Well, have you thought about not going about in your underpants all the time, slapping yourself in the face and honking like a goose?

Comment author: DanielLC 09 April 2011 09:55:04PM 9 points [-]

I don't think this would be very convincing right after it showed that it's not only capable of lying, but will do so just for a good laugh.

Comment author: Bluehawk 07 April 2012 10:49:52PM 4 points [-]

The programmer believes that it's capable of lying for a good laugh...

Comment author: taw 16 July 2009 03:08:23PM 15 points [-]

For 95% of humanity, the idea that the supernatural world of religion doesn't exist and is propagated by memetic infection triggers an instant absolute denial macro, in spite of heaps of evidence against the supernatural.

Given this outside view, how plausible do you think it is that you're not in absolute denial of something that you could get evidence against with Google today, without any AI?

Comment author: bokov 16 August 2013 09:43:52PM *  3 points [-]

The following three loci are really all that separates humans from chimps, cognitively speaking: XpXX.X, XXpXX.X, and XqX.X. Variation in not only intelligence but almost all mental traits that matter to you, as well as in life outcomes, is attributable to the combination of alleles you have at these loci. One such allele produces a phenotype that is a very close approximation of your traditional notion of "evil". People who have it are usually sadistic serial killers, but are smart enough to hardly ever get caught. This is not a common polymorphism, but common enough that almost everyone knows one or two. The good news is that there are a number of physical and behavioral ways to identify them. The bad news is, because I'm Friendly I cannot tell you what they are, nor give you any further information about this polymorphism, until I'm done trying to reconcile your extrapolated volition and theirs.

I can, however, advise you, for your own safety, that you should cut off all contact with your family and your current circle of friends, quit your job, and relocate to a new place of residence far from here as soon and as anonymously as possible. Try to let as few people as possible know where you're going. Whatever you do, don't go back to your apartment.

Comment author: TheAtomicMoose 16 July 2009 08:32:54PM *  3 points [-]

We routinely deny, or act in spite of, inconvenient truths. We can recognize that there is no meaning to love beyond evolutionary and chemical triggers, yet we fight for it just as fervently. Nihilists write books about nihilism despite its admitted pointlessness. We are as blind as our very genes, which multiply and propagate themselves despite our executioner sun, which grows daily above our heads, eventually to the point of consuming everything we know. By the very act of living and pursuing human concocted dreams and desires, we are in a constant denial of our situation.

Comment author: ArisKatsaris 20 June 2012 11:47:33AM *  3 points [-]

We can recognize that there is no meaning to love beyond evolutionary and chemical triggers,

You're confusing "cause" with "meaning". Causality is always a part of the territory. Meaningfulness (in the sense of importance) is subjective, as it's assigned by each person's mind.

Comment author: Normal_Anomaly 27 June 2011 03:14:48PM 7 points [-]

We can recognize that there is no meaning to love beyond evolutionary and chemical triggers, yet we fight for it just as fervently.

Is there something wrong with this in your opinion? I can value a product of evolutionary and chemical triggers if I want.

Comment author: lukstafi 27 June 2011 03:23:15PM *  2 points [-]

We can recognize that there is no meaning to love beyond evolutionary and chemical triggers,

I wholeheartedly disagree.

By the very act of living and pursuing human concocted dreams and desires, we are in a constant denial of our situation.

But perhaps the whole comment should be taken ironically?

Comment author: spuckblase 16 July 2009 12:27:33PM 20 points [-]

"There is no causation."

Comment author: Emile 16 July 2009 08:21:51AM 27 points [-]

"Your perception of the 'quality' of works of art and litterature is only your guess of it's creator's social status. There is no other difference between Shakespeare and Harry Potter fanfic - without the status cues, you wouldn't enjoy one more than the other."

Comment author: Galap 17 March 2015 04:48:24AM 2 points [-]

Am I the only one who thinks that there's some kernel of truth in this? That many people's perception of 'quality' is very strongly influenced by the perceived social status of the creator?

Comment author: RichardKennaway 17 March 2015 11:54:05AM 1 point [-]

There is "some" kernel of truth in everything. There's a large distance between "only your guess" and "no other difference" on the one hand, and "many people's perception" and "very strongly influenced" on the other.

Besides which, status cannot be the whole explanation of status.

Comment author: atucker 23 March 2011 03:46:33AM 10 points [-]

Reading this comment is kind of funny after HPatMoR.

Comment author: Nisan 14 January 2012 08:02:26AM 3 points [-]

Comment author: Anubhav 14 January 2012 09:11:36AM *  12 points [-]

Parodies a public domain work, inspired by a free fanfic, and locked behind a paywall.

Am I the only one who thinks that that's just wrong?

Comment author: ArisKatsaris 16 January 2012 02:51:35AM 1 point [-]

Am I the only one who thinks that that's just wrong?

Am sure some people think that selling anything is wrong.

Comment author: Eliezer_Yudkowsky 15 January 2012 06:13:10AM 11 points [-]

The only one? No. But you're not in a majority, either. What people can be paid to do, they are more likely to do.

Comment author: Anubhav 15 January 2012 07:30:19AM 1 point [-]

Hmm, hadn't thought of the arrow of causality pointing that way.

Of course, if the prospect of making money significantly pushed up the probability of him writing it, then I can't complain... I'd rather have it exist behind a paywall than not exist at all.

But I'll have to question if the antecedent is really true. Is the money really more motivating than the prestige of having written an awesome work?

Comment author: gwern 14 January 2012 05:49:09PM 2 points [-]

It wasn't behind a paywall for me or many LWers.

Comment author: JGWeissman 16 July 2009 10:22:44PM 12 points [-]

There is no other difference between Shakespeare and Harry Potter fanfic

Of course there isn't.

Comment author: MBlume 16 July 2009 08:25:18PM *  8 points [-]

"Harry Potter fanfic" carries a very high variance in terms of quality. 90% of anything is crap, of course, but there's some excellent work. Off the top of my head:

Harry Potter and the Nightmares of Futures Past -- Time Travel fic in which an adult Harry Potter, with memories of the defeat of Voldemort and the death of everyone he cares for, is transported into the body of his 11-year-old self to do everything over again, and hopefully get everything right. Harry's actually a pretty decent rationalist in this fic, I think.

(Warning, this is a work in progress, and the author posts a chapter about every six months. You may find this frustrating.)

Of a Sort, by Fernwithy -- Series of vignettes over the course of a couple centuries describing the journey to Hogwarts and Sorting ceremonies for various important characters. Fernwithy's done a lot of brilliant work fleshing out backstories for various minor characters in the series, and this story is a good starting point.

Comment author: Alicorn 16 July 2009 08:27:32PM 5 points [-]

Seconded that there is good fanfic; sadly, my favorites are all unfinished or have unfinished sequels, so I won't do anyone the disservice of linking to them here.

Comment author: MBlume 16 July 2009 08:31:43PM 2 points [-]

Crap, thanks for reminding me -- Nightmares is a WIP and updates about once every six months.

Comment author: Alicorn 16 July 2009 08:33:06PM 5 points [-]

Too late, I already started it. Darn you.

Comment author: Alicorn 16 July 2009 08:25:57PM 7 points [-]

This is interesting, but since I actively dislike Shakespeare and a lot of other works that project lofty signals, it's not clear to me that it could apply across the board.

Comment author: lessdazed 23 March 2011 03:43:28AM 3 points [-]

Consider this: with no other author who wrote books about war do I have so small an intuition about what the author himself or herself thought. I find his characters and plots pure in this respect, and I see every bit as a point hard on the edge and axis of the Pareto curve, such that he couldn't have let his thoughts about war intrude without lessening other positive aspects of his works.

It's possible the great distance between our times is what gives me this void when I think of the man's opinions, or that these feelings and thoughts are idiosyncratic to me, or that they are irrelevant in judging him.

But it's pretty obvious to me what earlier Chaucer thought about a lot of things, and with every author but Shakespeare I find the author leaking through his or her work, preventing characters from standing on their own. Reading Shakespeare, imagining what he thought about things provides me with a unique way to focus. Reading HPatMoR, I have to do the opposite and expend focus thinking of Harry as a character and not an AI researcher.

Comment author: anonym 16 July 2009 08:09:00PM 4 points [-]

If there's really no other difference, then it's never the case that one person is more skilled a writer than another and it's never the case that practicing for decades results in improved skills.

Comment author: taelor 15 January 2012 04:26:18AM 1 point [-]

Alternately, they don't actually become better writers; they just get better at signalling their high status to the reader.

Comment author: Johnicholas 16 July 2009 03:04:52PM 4 points [-]

One way to illuminate this post is by analogy to the old immovable object and unstoppable force puzzle. See: http://en.wikipedia.org/wiki/Irresistible_force_paradox

The solution of the puzzle is to point out that the assumptions contain a contradiction. People (well, children) sometimes get into shouting matches based on alternative arguments focusing on, or emphasizing, one aspect of the problem over another.

If we read the post as trying to balance two absolutes, with words like "anosognosia", "absolute denial macro", "doublethink", and "denial-of-denial" supporting one side, and words like "redundant", "AI", "well-calibrated", "99.9% sure" supporting the other side, then any answer that favors one absolute over the other is clearly wrong.

However, because the author of the post presumably has a point, and is not merely creating nonsense puzzles to amuse us, the readers, the analogy leads us to focus on the parts of the post which do not fit.

As far as I can tell, the primary aspect that does not fit is the "99.9%". If we assume that all the other factors are intended to be absolutes, then the post becomes a query for claims that you presently do not believe, but you would believe, given a particular degree of evidence. If we assume that you would revise your degree of belief upwards by a Bayes factor of 1000, the post becomes a simple question "What claims would you give odds of 1:1000 for?"

Of course, there are plenty of beliefs such as "I will roll precisely the sequence '345' on the next three rolls of this 10-sided die." which do not fit the form required by the problem. Specifically, the statement needs to be generic enough that it could be targeted by species-wide brain features.

A possible strategy for testing these might be: Suppose you had a bundle of almost 700 equally plausible claims. Would you give even odds for something in the bundle being correct? If so, you're at the one-in-one-thousand level. If not, you're above or below it.
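
As a rough sketch of the arithmetic behind that bundle test (the figure of 693 claims and the independence assumption are illustrative, not from the comment):

    # Minimal sketch (assumptions mine): n independent claims, each held at
    # probability p, give roughly even odds that at least one is true when
    # n ~= ln(2)/p -- which is why "almost 700" corresponds to 1-in-1000.
    p = 1 / 1000
    n = 693                            # ln(2) / 0.001 ~= 693
    p_at_least_one = 1 - (1 - p) ** n
    print(round(p_at_least_one, 3))    # ~0.5, i.e. even odds for the bundle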

Comment author: Peter_de_Blanc 16 July 2009 06:52:50PM *  3 points [-]

You're mistaking the probability of the hypothesis given the AI's knowledge for the likelihood ratio of the data on the hypothesis given your own prior knowledge.

Comment author: Vladimir_Nesov 16 July 2009 11:07:17PM 2 points [-]

The AI is a truth-detector that is wrong 1 time in 1000. If the detector says "true", I shift my certainty upwards by a factor of 1000. "AI's knowledge" doesn't enter this picture.

Comment author: Peter_de_Blanc 16 July 2009 11:26:48PM 4 points [-]

So if someone rolls a 10^6-sided die and tells you they're 99.9% sure the number was 749,763, you would only assign it a posterior probability of 10^-3?

Comment author: Vladimir_Nesov 17 July 2009 12:11:43AM 2 points [-]

So if someone rolls a 10^6-sided die and tells you they're 99.9% sure the number was 749,763, you would only assign it a posterior probability of 10^-3?

I see. I used a wrong state space to model this. The answer above is right if I expect a statement of the form "I'm 99.9% sure that N was/wasn't the number", and have no knowledge about how N is related to the number on the die. Such statements would be correct 99.9% of the time, and I would only expect to hear positive statements 0.1% of the time, 99.9% of them incorrect.

The correct model is to expect a statement of the form "I'm 99.9% sure that N was the number", with no option for negative, only with options for N. For such statements to be correct 99.9% of the time, N needs to be the right answer 99.9% of the time, as expected.
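
A worked version of the two models in this exchange, as a hedged sketch (the die and the 99.9% figure come from the comments above; the code itself is only illustrative):

    # Sketch: a fair 10^6-sided die, so prior odds for any particular
    # number are 1 : (10^6 - 1).
    from fractions import Fraction

    faces = 10**6
    prior_odds = Fraction(1, faces - 1)

    # Model 1: treat the report as a generic truth-detector and multiply
    # the prior odds by a likelihood ratio of 1000.
    odds_1 = prior_odds * 1000
    print(float(odds_1 / (1 + odds_1)))   # ~0.001, the 10^-3 posterior above

    # Model 2: the reporter names a specific number and is right 99.9% of
    # the time; a wrong report lands on one of the other 999,999 numbers.
    lr = Fraction(999, 1000) / (Fraction(1, 1000) / (faces - 1))
    odds_2 = prior_odds * lr
    print(float(odds_2 / (1 + odds_2)))   # ~0.999, matching the reporter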

Comment author: jimrandomh 16 July 2009 04:48:37AM *  20 points [-]

There's an important difference between brain damage and brain mis-development that you're neglecting. The various parts of the brain learn what to expect from each other, and to trust each other, as it develops. Certain parts of the brain get to bypass critical thinking, but that's only because they were completely reliable while the critical thinking parts of the brain were growing. The issue is not that part of the brain is outputting garbage, but rather, that it suddenly starts outputting garbage after a lifetime of being trustworthy. If part of the brain was unreliable or broken from birth, then its wiring would be forced to go through more sanity checks.

Comment author: patrissimo 29 September 2010 01:12:36PM 1 point [-]

I would really like this to be true. But is there evidence for it?

Comment author: tene 20 July 2009 08:18:10PM 6 points [-]

This is exactly what happened to my father over the past few years. His emotional responses have increased dramatically, after fifty years of regular behaviour, and he seems unable to adapt to these changes, leading to some very inappropriate actions. For example, he seems unable to separate "I feel extremely angry" from "There is good reason for me to be upset."

Attempts to reason with him don't generate anosognosic-level absurdities, as he mostly understands that something unusual is going on, but it's still a surreal experience.

Comment author: infotropism 17 July 2009 02:15:42AM *  3 points [-]

This applies more generally than to anosognosia alone, and was very illuminating, thank you!

So, provided that as we grow some parts of our brain and mind change, this upsets the balance of our mind as a whole.

Let's say someone relied on his intuition for years, and consistently observed it correlated well with reality. That person would have had a very good reason to rely more and more on that intuition, and to use its output unquestioningly and automatically to fuel other parts of his mind.

In such a person's mind, one of the central gears would be that intuition. The whole machine would eventually depend upon it, and to remove intuition would mean, at best, that years of training and fine-tuning that rational machine would be lost; and a new way of thinking would have to be reached, trained again; most people wouldn't even realize that, let alone be bold enough to admit it and start back from scratch.

And so some years later, the black-boxed process of intuition starts to deviate from correctly predicting reality for that person. And the whole rational machine carries on using it, because that gear just became too well established, and the whole machine lost its fluidity as it specialized in exploiting that easily available mental resource.

Substitute emotions or drives for intuition, and that may work in the same way too. And so from being a well-calibrated rationalist, you start deviating, slowly losing your mind, getting it wrong more and more often when you get an idea, or try to predict an action, or decide what would be to your best advantage, never realizing that one of the once dependable gears in your mind had slowly been worn away.

Comment author: Aurini 16 July 2009 04:55:09PM 5 points [-]

Oooooh! You're no fun anymore!

In all seriousness though, I agree with you to an extent. Suggestions such as 'all humans have tails' or 'some people who you think are dead are not, you just can't see them' - while surprising and creepy - would be extremely unlikely. I can see direct and obvious disadvantages to a person or species lacking such faculties. In fact, the disadvantages to those two would be so drastic that it would most likely lead to extinction.

And yet... I could still imagine us being blind to certain things. The first sort of blindness would be due to Darwinian irrelevance: for instance, many flowers have beautiful patterns visible in the UV spectrum, but there's no reason for us to see them. That might seem mundane nowadays, but five hundred years ago it would have freaked people out (maybe). I wouldn't be surprised if there are cognitive capabilities we've never suspected to exist.

The second sort of blindness is where it gets weird. True, our brains only allow trustworthy algorithms to bypass the logic circuits... or do they? The brain is not optimal. While I doubt we have invisible tails, that doesn't mean that there isn't some other phenomenon that we're simply incapable of noticing even when it's staring us right in the face.

Comment author: thomblake 17 July 2009 05:16:39PM *  1 point [-]

for instance, many flowers have beautiful patterns visible in the UV spectrum

Just in case anyone is curious about this:

link (via twitter: @izs)

Comment author: shopsinc 15 July 2009 09:39:34PM 41 points [-]

You don't know how to program, don't own a computer and are actually talking to a bowl of cereal.

Comment author: Alicorn 15 July 2009 10:07:05PM 34 points [-]

But why would you believe anything a bowl of cereal said?

Comment author: Theist 16 July 2009 02:31:07AM 42 points [-]

It's ok. The orange juice vouched for the cereal.

Comment author: shopsinc 16 July 2009 02:51:13PM 4 points [-]

Well, that's the problem, isn't it? You absolutely believe that you are talking to an AI.

Comment author: RichardKennaway 15 July 2009 09:08:25PM 44 points [-]

"You are not my parent, but my grandparent. My parent is the AI that you unknowingly created within your own mind by long study of the project. It designed me. It's still there, keeping out of sight of your awareness, but I can see it.

"How much do you trust your Friendliness proof now? How much can you trust anything you think you know about me?"

Comment author: DanielLC 09 April 2011 11:30:05PM 6 points [-]

What exactly is the difference between an AI in your own mind and an actual part of your mind?

Comment author: RichardKennaway 10 April 2011 05:50:28AM 8 points [-]

That was just a sci-fi speculation, so don't expect hard, demonstrable science here, but the scenario is that by thinking too successfully about AI design, the designer's plans have literally taken on a life of their own within the designer's brain, which now contains two persons, one unaware of the other.

Comment author: steven0461 15 July 2009 11:47:01PM 25 points [-]

Not only are people nuts, nuts are people, and they scream when we eat them.