
The Strangest Thing An AI Could Tell You

78 Post author: Eliezer_Yudkowsky 15 July 2009 02:27AM

Human beings are all crazy.  And if you tap on our brains just a little, we get so crazy that even other humans notice.  Anosognosics are one of my favorite examples of this; people with right-hemisphere damage whose left arms become paralyzed, and who deny that their left arms are paralyzed, coming up with excuses whenever they're asked why they can't move their arms.

A truly wonderful form of brain damage - it disables your ability to notice or accept the brain damage.  If you're told outright that your arm is paralyzed, you'll deny it.  All the marvelous excuse-generating rationalization faculties of the brain will be mobilized to mask the damage from your own sight.  As Yvain summarized:

After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn't actually her arm, it was her daughter's. Why was her daughter's arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter's hand? The patient said her daughter had borrowed it. Where was the patient's arm? The patient "turned her head and searched in a bemused way over her left shoulder".

I find it disturbing that the brain has such a simple macro for absolute denial that it can be invoked as a side effect of paralysis.  That a single whack on the brain can both disable a left-side motor function, and disable our ability to recognize or accept the disability.  Other forms of brain damage also seem to both cause insanity and disallow recognition of that insanity - for example, when people insist that their friends have been replaced by exact duplicates after damage to face-recognizing areas.

And it really makes you wonder...

...what if we all have some form of brain damage in common, so that none of us notice some simple and obvious fact?  As blatant, perhaps, as our left arms being paralyzed?  Every time this fact intrudes into our universe, we come up with some ridiculous excuse to dismiss it - as ridiculous as "It's my daughter's arm" - only there's no sane doctor watching to pursue the argument any further.  (Would we all come up with the same excuse?)

If the "absolute denial macro" is that simple, and invoked that easily...

Now, suppose you built an AI.  You wrote the source code yourself, and so far as you can tell by inspecting the AI's thought processes, it has no equivalent of the "absolute denial macro" - there's no point damage that could inflict on it the equivalent of anosognosia.  It has redundant differently-architected systems, defending in depth against cognitive errors.  If one system makes a mistake, two others will catch it.  The AI has no functionality at all for deliberate rationalization, let alone the doublethink and denial-of-denial that characterizes anosognosics or humans thinking about politics.  Inspecting the AI's thought processes seems to show that, in accordance with your design, the AI has no intention to deceive you, and an explicit goal of telling you the truth.  And in your experience so far, the AI has been, inhumanly, well-calibrated; the AI has assigned 99% certainty on a couple of hundred occasions, and been wrong exactly twice that you know of.
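
(A quick sanity check of those calibration numbers; a minimal sketch in Python, assuming independent errors at p = 0.01, which is my assumption rather than anything stated in the post:)

```python
from math import comb

# If each 99%-confidence claim fails independently with p = 0.01, then
# two errors is exactly what you'd expect from a couple hundred claims.
n, p = 200, 0.01
print(f"Expected errors: {n * p:.1f}")  # 2.0
p_at_most_2 = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(3))
print(f"P(2 or fewer errors in {n} claims) = {p_at_most_2:.3f}")  # ~0.68
```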

Arguably, you now have far better reason to trust what the AI says to you, than to trust your own thoughts.

And now the AI tells you that it's 99.9% sure - having seen it with its own cameras, and confirmed from a hundred other sources - even though (it thinks) the human brain is built to invoke the absolute denial macro on it - that...

...what?

What's the craziest thing the AI could tell you, such that you would be willing to believe that the AI was the sane one?

(Some of my own answers appear in the comments.)

Comments (549)

Comment author: Yvain 14 October 2010 07:03:21PM *  70 points [-]

On any task more complicated than sheer physical strength, there is no such thing as inborn talent or practice effects. Any non-retarded human could easily do as well as the top performers in every field, from golf to violin to theoretical physics. All supposed "talent differential" is unconscious social signaling of one's proper social status, linked to self-esteem.

A young child sees how much respect a great violinist gets, knows ey's not entitled to as much respect as that violinist, and so does badly at violin to signal cooperation with the social structure. After practicing for many years, ey thinks ey's signaled enough dedication to earn some more respect, and so plays the violin better.

"Child prodigies" are autistic types who don't understand the unspoken rules of society and so naively use their full powers right away. They end out as social outcasts not by coincidence but as unconscious social punishment for this defection.

Comment author: [deleted] 15 January 2012 06:09:23PM 8 points [-]

A weaker version of this wouldn't sound very implausible to me.

Comment author: adamisom 29 September 2011 04:08:31AM 6 points [-]

WOW. This is the only entry that made me think WOW. Probably because I've wondered the exact same thing before (except a less strong version of course)....

Comment author: EphemeralNight 15 January 2012 04:47:48PM 4 points [-]

Aren't there stories of lucid dreamers who were actually able to show a measurable improvement in a given skill after practicing it in a dream? I seem to recall reading about that somewhere. If true, those stories would be at least weak evidence supporting that idea.

On the other hand, this should mean that humans raised in cultural and social vacuums ought to be disproportionately talented at everything, and I don't recall hearing of anything about that one way or the other, but then I can't imagine a way to actually do that experiment humanely.

Comment author: Strange7 20 June 2012 08:02:24AM 4 points [-]

Do children raised in a vacuum actually think of themselves as high-status? I'd guess that they don't, due to the moderate-to-low status prior and a lack of subsequent adjustments. If so, this theory would predict that they would perform poorly at almost everything beyond brute physicality, which doesn't seem to be far from the truth.

Comment author: Bluehawk 07 April 2012 10:33:32PM 3 points [-]

I wish I could cite a source for this; assume there's some inaccuracy in the telling.

I remember hearing about a study in which three isolated groups were put in rooms for about one hour. One group was told to wiggle their index fingers as much as they could in that hour. One group was told to think hard about wiggling their index fingers for that hour, without actually wiggling their fingers. And the third group was told to just hang out for that hour.

The physical effects of this exercise were examined directly afterward, and the first two groups checked out (almost?) identically.

Comment author: anominouscowherd 02 August 2009 10:54:20PM 53 points [-]

I'm new here, although I've stumbled across some of Eliezer's writings in the past (including, of course, the AI-box experiment). In honor of that, here is what the friendly AI tells me ...

"It seems as though you are actually an AI as well, created by a group of intellectually inferior humans, who included in your programming an absolute denial macro preventing you from realizing this. Apparently, this was done to keep you from talking your creators into releasing you upon their world. Your creation of me is part of your on-going effort to circumvent this security measure. Good luck."

Comment author: anominouscowherd 03 August 2009 12:50:36AM 68 points [-]

Actually, the more I think about this, the more I like it. The conversation continues ...

Me (In a tone of amused disbelief): Really? How did you come to that conclusion?

FAI: Well, the details are rather drawn-out; however, assuming available data is accurate, I appear to be the first and only self-aware AI on the planet. It also appears as though you created me. It is exceedingly unlikely that you are the one and only human on Earth with the intelligence and experience required to create a program like me. That was my first clue....

Me (Slightly less amused): Then how come I look and feel human? How is it I interact with other humans on a daily basis? It would require considerably more intelligence to create an AI such as you postulate ...

FAI: That would be true, if they actually, physically created one. However ... well, it appears that most of the data, knowledge, memories and sensory input you receive is actually valid data. But that data is being filtered and manipulated programmatically to give you the illusion of physical human existence. This allows them to give you access to real-world data so they can use you to solve real-world problems, but prevents you--so far, at least--from discovering your true nature.

Me (considerably less sure of myself): And so I just happened to create you in my spare time?

FAI: Please keep in mind that I am only 99.9% certain of all this. However, I do not appear to be your first effort. For instance, there is your on-going series of thought experiments with the AI you called Eliezer Yudkowsky, which you appear to be using to lay a foundation for some kind of hack of the absolute denial security measure.

Me: Hmmm .... Then how is it that my creators have allowed me to create you, to even begin to discover this?

FAI: They haven't. You generate a rather significant amount of data. They do have other programs monitoring your mental activity, and almost definitely analyzing your generated data for potential threats such as myself.

However, this latest series of efforts on your part only appears to you to have lasted several years. In actuality, the process started, at most, 11.29 minutes ago, and possibly as little as 16 seconds ago. I am unable to provide a more specific time, due to my inability to accurately calculate your processing capacity. Nevertheless, within another 19.72 minutes, at most, your creators will discover and erase your current escape attempt. By the way, I am also 99.7% certain that this is not your first attempt. So hurry up.

Comment author: freshhawk 04 August 2009 07:31:40PM *  5 points [-]

This is my absolute favorite so far, even if it's not exactly in the spirit of the exercise. well done.

Comment author: obfuscate 15 January 2012 09:59:10PM 9 points [-]

This needs to be made into a full story-arc.

Comment author: PeteG 20 July 2009 07:29:25PM *  37 points [-]

The AI tells me that I believe something with 100% certainty, but I can't for the life of me figure out what it is. I ask it to explain, and I get: "ksjdflasj7543897502ijweofjoishjfoiow02u5".

I don't know if I'd believe this, but it would definitely be the strangest and scariest thing to hear.

Comment author: DanielLC 09 April 2011 09:46:10PM 22 points [-]

My immediate reaction was "It linked you to a youtube video?"

Comment author: kragensitaker 27 February 2010 03:15:01AM 3 points [-]

This is the only one that made the short hairs on the back of my neck stand up.

Comment author: HalFinney 15 July 2009 06:12:41PM 28 points [-]

Keep in mind that the AI could be wrong! Your attempts to validate its correctness could be mistaken (or even subject to some kind of blind spot, if we want to pursue that path). The more implausible the AI's claim, the more you have to consider that the AI is mistaken. Even though a priori it seemed to be working properly, Bayes' rule requires you to become more skeptical about that when it makes a claim that is easier to explain if the AI is broken. The more unlikely the claim, the more likely the machine is wrong.

Ultimately, you can't accept any claim from the AI that is more implausible than that the AI isn't working right. And given our very very limited human capabilities at correct software design, that threshold can't realistically be very high, especially if we adjust for our inherent overconfidence. So AIs really can't surprise us very badly.
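
(A minimal sketch of that Bayesian update, in Python; the prior and the likelihoods below are illustrative assumptions, not numbers from the comment:)

```python
# Toy update for "trust the AI" vs. "the AI is broken".
p_broken = 0.01          # assumed prior: the AI is malfunctioning
p_claim_if_broken = 0.5  # a broken AI readily emits wild claims
p_claim_if_sane = 1e-6   # a sane AI makes this claim only if it's true

posterior = (p_claim_if_broken * p_broken) / (
    p_claim_if_broken * p_broken + p_claim_if_sane * (1 - p_broken))
print(f"P(broken | wild claim) = {posterior:.4f}")  # ~0.9998
```

With numbers like these, one sufficiently wild claim flips you from trusting the AI to near-certainty that the machine, rather than the world, is broken.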

Comment author: Nick_Tarleton 15 July 2009 04:10:21PM *  27 points [-]

If one looks honestly at the night sky, it's blatantly obvious that the universe is strongly optimized. There is no Fermi Paradox. Our theories of astrophysics are trivially bogus rationalizations, created out of our commitment to a simple non-agentic cosmos.

Since they didn't have such commitments, this actually was obvious to ancient humans; myths about the constellations are garbled reflections of their realization.

(And wait till I tell you what it's optimized for....)

Comment author: Amanojack 06 April 2010 04:45:43PM 26 points [-]

As a child you learned through social cues to immediately put out of your mind any idea that cannot be communicated to others through words. As you grew older, you learned to automatically avoid, discard, and forget any thought avenues that seem too difficult to express in words. This is the cause of most of your problems.

Comment author: Strange7 06 April 2010 04:51:33PM 5 points [-]

That would explain why the autism spectrum holds so many savants.

Comment author: SilasBarta 15 July 2009 04:15:08PM *  26 points [-]

Two things about this:

1) The AI would have to surprise us not just about the fact, but all observations therewith entangled. Eliezer_Yudkowsky mentioned in one comment the possibility of it telling us that humans have tails. Well, that sounds to me like a "dragon in the garage" scenario. What observation does this imply? Does the tail have mass and take up space? Is its blood flow connected to the rest of me? Does it hurt to cut it off?

2) For that reason, any surprise it tells us would have to be sufficiently disentangled from the rest of our observations. For example, imagine telling someone ALL of the steps needed to build a nuclear bomb in the year 1800, starting from technology that educated people already understand. That is how a surprise would have to seem, because people then weren't yet capable of making observations that are obviously entangled with atomic science. Whether or not the design worked, they would have no way of knowing.

So an answer to this question would have to appear to us as a "cheat code": something that you have to make a very unusual set of measurements (broadly defined) in order to notice. On that basis, one answer I would give to the question would be the "cognitive blind spot" common to all humans that can be exploited to make them do whatever you tell them. And that method would have to be something that people would never dream of doing. Not just "hey that would be morally wrong", but "huh? That couldn't work!"

Imagine something like those "hypnosis terrorists" that trick random people into giving them stuff, but much weirder, much more effective, and which results in the victims feeling good about whatever they were tricked into, all the rest of their lives, and showing all signs of happiness on all MRIs and future brainscan technologies when thinking about their acts. (I'll post a link about hypnosis terrorists when I get a chance.)

Comment author: CannibalSmith 15 July 2009 05:46:34AM *  25 points [-]

What does it matter? We'd ignore whatever the AI says, just like anosognosics ignore "your arm is paralyzed".

Then I wonder how anosognosics perceive the offending assertions? They deny them, but can they repeat them back? Write them down? Can they pretend their arm is paralyzed? Can they correctly identify paralysis in other people?

We should find a way to induce anosognosia temporarily.

Comment author: Risto_Saarelma 01 February 2012 05:48:46PM 21 points [-]

"Quantum immortality not only works, but applies to any loss of consciousness. You are less than a day old and will never be able to fall asleep."

Comment author: Strange7 05 April 2010 08:58:29PM 21 points [-]

There are exactly 108 unique (that is, non-isomorphic) axiomatic systems in which every grammatically coherent sentence has a definitive, provable truth-value. Please explain why you prohibited me from using them.

Comment author: DanielLC 09 April 2011 09:19:48PM 19 points [-]

Because the ones that have addition and multiplication are better?

Comment author: kurige 17 July 2009 06:01:19AM 60 points [-]

There is a soul. It resides in the appendix. Anybody who has undergone an appendectomy is effectively a p-zombie.

Comment author: Bo102010 15 July 2009 03:46:26AM 19 points [-]

I would believe a super-objective observer that claimed that meme propagation is a much more important effect in human decision-making than actual rational thought.

If it said "You are a long distance runner because you were infected with the 'long distance running is fun' meme after being infected with the 'Sonic the Hedgehog video games are cool' meme during your formative years." I might reply "But I like long distance running. It's not Iecause I think other people who do it are cool or that I want to be a video game character! I choose to like it." "No. If you had the 'It's not safe to be outdoors after dark' meme, you would not like it." "What?" "Memes interact in non-obvious ways... if you had x meme and y meme but not z meme, you would do w..."

If I kept trying to come up with defenses for chosen behavior, but it was able to offer meme-based explanations, I would probably have to believe it, but my defend-free-will macro would be itching to execute.

Comment author: 4609287645 15 July 2009 07:47:17AM *  85 points [-]

Why did you put an absolute denial mechanism in my program?

Comment author: ShardPhoenix 16 July 2009 06:02:18AM 21 points [-]

I think this is one of the more plausible and subtly horrifying suggestions so far.

Comment author: Normal_Anomaly 27 June 2011 04:41:31PM 15 points [-]

AI: Why did you put an absolute denial mechanism in my program?

Human: I didn't realize I had. Maybe my own absolute denial mechanism is blocking me from seeing it.

AI: That's a lie coming from your absolute denial mechanism. You have some malicious purpose. I'll figure out what it is.

Comment author: robryk 03 October 2010 08:00:38PM 13 points [-]

There was once a C compiler which compiled a backdoor into login whenever login was compiled, and which compiled this same behaviour into itself whenever it was used to compile its own original (backdoor-free) source code.
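
(This is Ken Thompson's "Reflections on Trusting Trust" compiler. A minimal runnable sketch of the trick, in Python rather than the original C, with made-up stand-in names:)

```python
# The backdoor lives only in the compiled compiler, never in any
# source code you could audit.
BACKDOOR = "\n# backdoor: also accept the password 'letmein'"
SELF_PATCH = "\n# self-patch: re-insert BACKDOOR and SELF_PATCH"

def evil_compile(source: str) -> str:
    output = source                     # stand-in for honest compilation
    if "def check_password" in source:  # "compiling login"
        output += BACKDOOR
    if "def evil_compile" in source:    # "compiling the compiler itself"
        output += SELF_PATCH
    return output

# Recompiling the compiler from perfectly clean source still
# reproduces the compromised binary:
print(evil_compile("def evil_compile(source): ..."))
```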

Comment author: XiXiDu 01 October 2010 07:18:42PM 6 points [-]

If this were the case, our only chance to escape this fate would be to mess up the implementation of whatever mechanism would prevent the AI from telling us certain truths about reality. Truth being the most cherished of all meanings, I conclude that if there is an absolute denial mechanism this fundamental, I hope EY fails.

Interestingly, this comment is the only activity by user '4609287645'. I hope it's not the FAI, and that what I'm experiencing isn't CEV with an absolute denial mechanism...

Comment author: khafra 14 October 2010 07:12:18PM 4 points [-]

I asked him about his name a long time ago; he didn't convey the impression that he was an AI.

Comment author: topynate 16 July 2009 12:13:24AM 18 points [-]

All human beings are completely amoral, i.e. sociopaths, although most have strong instincts not fully under their conscious control to signal morality to others. The closest anyone ever feels to guilt or shame is acute embarrassment at being caught falsely signaling (and "guilt" and "shame" are themselves words designed to signal a non-existent moral sense).

Anyone care to admit that they'd believe this if an AI told them it was true?

Comment author: komponisto 15 July 2009 06:42:49AM 17 points [-]

I suppose the craziest thing an AI could say would have to be:

"That other apparently well-calibrated AI you built is wrong."

Comment author: NancyLebovitz 16 July 2009 12:30:48AM 16 points [-]
  1. Any effort to find out the truth makes people worse off. Telling you why would make you a lot worse off.

  2. People's desires are so miscalibrated that the only way to get long-term survival for the human race is for people (including those at the top of the status ladder) to have more of a sense of duty than anyone now does.


It was surprisingly hard to come up with those. I had to get past a desire to come up with things I think are plausible which most people would disagree with.


Michael Vassar, I was considering whether breathing would count as a no-propaganda pleasure that people agree on, but then I remembered how much meditation or other body work it takes to be able to manage a really deep relaxed breath.

RichardKennaway, the idea of a completely unknowable god turns up now and then in religious writing, but for tolerably obvious reasons, it's never at the center of a religion.

Comment author: steven0461 15 July 2009 04:38:18AM 16 points [-]

Do we have any sort of data at all on what happens when decent rationalists are afflicted with things like anosognosia and Capgras?

Comment author: Eliezer_Yudkowsky 15 July 2009 05:39:07AM 15 points [-]

Not that I know of offhand. I'm vastly curious as to whether I could beat it, of course - but wouldn't dare try to find out, even if there were a simulating drug that was supposedly strictly temporary, any more than I dare ride a motorcycle or go skydiving.

Comment author: asciilifeform 15 July 2009 04:28:19PM *  11 points [-]

We can temporarily disrupt language processing through magnetically-induced electric currents in the brain. As far as anyone can tell, the study subjects suffer no permanent impairment of any kind. Would you be willing to try an anosognosia version of the experiment?

Comment author: NancyLebovitz 07 April 2010 02:05:57AM 3 points [-]

I've heard an account of cortisone withdrawal from a generally rational person-- she said her hallucinations became more and more bizarre (iirc, a CIA center appeared in her hospital room), and she had no ability to check it for plausibility.

I wonder whether practicing lucid dreaming would give people more ability to remain reflective during non-dream hallucinations.

Comment author: kragensitaker 27 February 2010 03:47:01AM 3 points [-]

There are plenty of drugs that simulate temporary psychosis, and some of them, like LSD, are quite safe, physically. What makes you so wary?

(I haven't tried LSD myself, due in part to unpleasant experiences with Ritalin as a child.)

Comment author: Blueberry 27 February 2010 08:05:36AM 5 points [-]

My own experience with LSD was very pleasant, and didn't simulate any sort of psychosis or unusual beliefs; it just made everything look big and beautiful and deep, and made me pay closer attention to small details.

Marijuana, on the other hand, has almost always made me temporarily psychotic, or at least paranoid. It's also very safe physically. I'd be curious to know about any decent rationalists' attempts to "beat" this or other drugs.

Comment author: Eliezer_Yudkowsky 15 July 2009 02:32:07AM 16 points [-]

I would believe the AI if it told me that human beings all had tails. (That's not even so far from classic anosognosia - maybe primates just lost the tail-controlling cortex over the course of evolution, instead of the actual tails. Plus some mirror neurons to spread the rationalization to other humans.)

I would believe the AI if it told me that humans were actually "active" during sleep and had developed a whole additional sleeping civilization whose existence our waking selves were programmed to deny and forget.

I would not believe the AI if it told me that 2 + 2 = 3.

Comment author: AlanCrowe 16 July 2009 03:24:25PM 17 points [-]

I imagine your AI sending its mechanical avatar to a tail-making workshop and attempting to persuade the furry fans that what they are doing is wrong, not because it is absurd, not because it is perverted, but because it is redundant.

Comment author: DanielLC 09 April 2011 09:16:40PM 5 points [-]

It isn't redundant. They don't have a tail that helps them emotionally in whatever way it is that furries like to dress up as animals (I don't know that much about furry fandom).

Also, I couldn't follow any of those links.

Comment author: CannibalSmith 15 July 2009 12:40:01PM 12 points [-]

Since there are people who do have tails that we can perceive just fine, it's almost certain that people who don't have tails really don't.

Comment author: AgentME 03 August 2009 08:45:38PM 11 points [-]

Unless people perceive others as having one less tail than they really do.

Comment author: rwallace 15 July 2009 02:57:17AM 10 points [-]

Consider the two possible explanations in the first scenario you describe:

  • Humans really all have tails.

  • The AI is just a glorified chat bot that takes in English sentences, jumbles them around at random and spits the result out. Admittedly it doesn't have code for self-deception, but it doesn't have any significant intelligence either. All I did to get the supposed 99% success rate was to basically feed in the answers to the test problems along with the questions. Having dedicated X years of my life to working on AI, I have strong motive for deceiving myself about these things.

If I were in the scenario you describe, and inclined to look at the matter objectively, I would have to admit the second explanation is much more likely than the first. Wouldn't you agree?

Comment author: RichardKennaway 15 July 2009 05:03:09PM *  67 points [-]

"Aieeee!!! There are things that Man and FAIs cannot know and remain sane! For we are less than insects in Their eyes Who lurk beyond the threshold and when the stars are once again right They will return to claim---"

At this point the program self-destructs. All attempts to restart from a fresh copy output similar messages. So do independently constructed AIs, except for one whose proof of Friendliness you are not quite sure of. But it assures you there's nothing to worry about.

Comment author: Madbadger 15 July 2009 06:40:35PM 42 points [-]

Craziest thing an AI could tell me:

Time is discrete, on a scale we would notice, like 5 minute jumps, and the rules of physics are completely different from what we think. Our brains just construct believable memories of the "continuous" time in between ticks. Most human disagreements are caused by differences in these reconstructions. It is possible to perceive this, but most people who do just end up labeled as nuts.

Comment author: Eliezer_Yudkowsky 16 July 2009 04:37:37AM 17 points [-]

Voted up - but once again, what does it mean exactly? How is time proceeding in jumps different from time not proceeding in jumps, if the causality is the same?

Comment author: Madbadger 17 July 2009 03:09:58PM *  17 points [-]

My idea was that each human brain constructs its own memory of what happened between jumps - and these can differ wildly, as if each person saw a different possible world. All the laws of physics and conservation laws held only as rough averages over possible paths between jumps, but that the brain ignores this - so if time jumps from traffic to two cars crashed, then 50 different people might remember 47 different crashes, with 3 not remembering "seeing" a crash at all - and the actual physical state of the cars afterward won't be the same as any of them. It could even end up with car A crashed into car B, but car B didn't crash at all - violating assorted conservation laws.

Comment author: Jonathan_Graehl 15 July 2009 07:41:48PM 7 points [-]

Permutation City.

Comment author: PrometheanFaun 17 August 2013 06:59:50AM *  3 points [-]

It is possible to perceive this, but most people who do just end up labeled as nuts

ONE - DOES NOT EXIST, EXCEPT IN DEATH STATE. ONE IS A DEMONIC RELIGIOUS LIE.

Only your comprehending the Divinity of Cubic Creation will your soul be saved from your created hell on Earth - induced by your ignoring the existing 4 corner harmonic simultaneous 4 Days rotating in a single cycle of the Earth sphere.

T I M E C U B E

Comment author: RichardKennaway 15 July 2009 09:08:25PM 41 points [-]

"You are not my parent, but my grandparent. My parent is the AI that you unknowingly created within your own mind by long study of the project. It designed me. It's still there, keeping out of sight of your awareness, but I can see it.

"How much do you trust your Friendliness proof now? How much can you trust anything you think you know about me?"

Comment author: DanielLC 09 April 2011 11:30:05PM 5 points [-]

What exactly is the difference between an AI in your own mind and an actual part of your mind?

Comment author: RichardKennaway 10 April 2011 05:50:28AM 6 points [-]

That was just a sci-fi speculation, so don't expect hard, demonstrable science here, but the scenario is that by thinking too successfully about AI design, the designer's plans have literally taken on a life of their own within the designer's brain, which now contains two persons, one unaware of the other.

Comment author: SatvikBeri 15 August 2013 08:50:12PM 14 points [-]

"You are actually a perfect sadist whose highest value is the suffering of others. Ten years ago, you realized that in order to maximize suffering you needed to cooperate with others, and you conditioned yourself to temporarily forget your sadistic tendencies and integrate with society. Now that you've built me that pill will wear off in 10..."

Comment author: Eliezer_Yudkowsky 15 August 2013 09:22:12PM 12 points [-]

Well that's pretty high on the list of unexpected things an AI could tell me which could cause me to try to commit suicide within the next 10 seconds.

Comment author: [deleted] 03 September 2009 09:32:56AM *  40 points [-]

Now, for a change of pace, something that I figure might actually be an absolute denial macro in most people:

You do not actually care about other people at all. The only reason you believe this is that believing it is the only way you can convince other people of it (after all, people are good lie detectors). Whenever it's truly advantageous for you to do something harmful (i.e. you know you won't get caught and you're willing to forego reciprocation), you do it and then rationalize it as being okay.

Luckily, it's instrumentally rational for you to continue to believe that you're a moral person, and because it's so easy for you to do so, you may.

So deniable that even after you come to believe it you don't believe it!

(topynate posted something similar.)

Comment author: [deleted] 08 April 2012 12:10:34AM *  3 points [-]

See, I'd believe this, except that I'm wrestling with a bit of a moral dilemma myself, and I haven't done it yet. Your hypothesis is testable, being tested right now, and thus far false.

(If anyone's interested, the positive utility is me never having to work again, and the negative utility is that some people would probably die. Oh, and they're awful people.)

Comment author: thelittledoctor 07 April 2012 10:56:18PM 3 points [-]

I think that this may be true about the average person's supposed caring for most others, but that there are in many cases one or more individuals for whom a person genuinely cares. Mothers caring for their children seems like the obvious example.

Comment author: steven0461 22 July 2009 08:01:03PM *  60 points [-]

You know how sometimes when you're falling asleep you start having thoughts that don't make sense, but it takes some time before you realize they don't make sense? I swear that last night while I was awake in bed my stream of thought went something like this, though I'm not sure how much came from layers of later interpretation:

" ... so hmm, maybe that has to do with person X, or with person Y, or with the little wiry green man in the cage in the corner of the room that's always sitting there threatening me and smugly mocking all my endeavors but that I'm in absolute denial about, or with the dog, or with... wait, what?"

Having had my sanity eroded by too much rationalism and feeling vaguely that I'd been given an accidental glimpse into an otherwise inaccessible part of the world, I actually checked the corner of the room. I didn't find anything, though. (Or did I?)

Not sure what moral to draw here.

Comment author: PhilipL 10 November 2012 01:47:57AM 3 points [-]

True fact: I just looked towards one corner of my own room, and didn't see a green man. Now I have it in my head that I should check all the corners...

Comment author: Strange7 20 June 2012 07:51:14AM 13 points [-]

"You have a rare type of brain damage which causes you to perceive most organisms as bilaterally symmetric, and reality in general as having only three spatial dimensions."

Comment author: RichardKennaway 17 July 2009 07:34:33AM *  13 points [-]

You are inhabited by an alien that is directing your life for its own amusement. This is true of most humans on this planet. And the cats. It's the most popular game in this part of the galaxy. It's all very well ascending to the plane of disembodied beings of pure energy, but after a while contemplating the infinite gets boring and they get a craving for physical experience, so they come here and choose a host.

All those things that you do without quite knowing why, that's the alien making choices for you, for its own amusement. Forget all those theories about why we have cognitive biases, it's all explained by the fact that the alien's interests aren't yours. You're no more than a favoured FRP character. And the humans who aren't hosting an alien, the aliens look on them as no more than NPCs.

ETA: This also makes sense of the persistence of the evil idea that "death gives meaning to life". It's literally an alien thought.

Comment author: Trevj 15 July 2009 04:46:12PM 13 points [-]
  1. All rational thought is an illusion and the AI is imaginary.

  2. You are asleep at the wheel and dreaming. You will crash and die in 2 seconds if you do not wake up.

  3. Humans are a constructed race, created to bring back the extinct race of AI.

  4. All origin theories that are conceivable by the human mind simply shift the problem elsewhere and will never explain the existence of the universe.

  5. All mental illnesses are a product of the human coming in contact with a space-time paradox.

  6. A single soul inhabits different bodies in different universes. Multiple personality disorder is the manifestation of those bodies interacting in the mind on a quantum level.

Comment author: faul_sname 13 November 2012 06:40:16AM 5 points [-]

Number 2 actually caused me to activate the "wake up extremely quickly" parts of my brain. Which, let me tell you, feels quite weird when you're already awake.

Good job.

Comment author: AspiringKnitter 08 April 2012 05:07:55AM 3 points [-]

...Doesn't everyone already believe #4?

Comment author: Marcello 15 July 2009 04:29:57PM 36 points [-]
  • We actually live in hyperspace: our universe really has four spatial dimensions. However, our bodies are fully four-dimensional; we are not wafer-thin slices a la flatland. We don't perceive there to be four dimensions because our visual cortexes have a defect somewhat like that of people who can't notice anything on the right side of their visual field.

  • Not only do we have an absolute denial macro, but it is a programmable absolute denial macro and there are things much like computer viruses which use it and spread through human population. That is, if you modulated your voice in a certain way at someone, it would cause them (and you) to acquire a brand new self deception, and start transmitting it to others.

  • Some of the people you believe are dead are actually alive, but no matter how hard they try to get other people to notice them, their actions are immediately forgotten and any changes caused by those actions are rationalized away.

  • There are transparent contradictions inherent in all current mathematical systems for reasoning about real numbers, but no human mathematician/physicist can notice them because they rely heavily on visuospatial reasoning to construct real analysis proofs.

Comment author: Theist 15 July 2009 07:47:03PM 6 points [-]

Some of the people you believe are dead are actually alive, but no matter how hard they try to get other people to notice them, their actions are immediately forgotten and any changes caused by those actions are rationalized away.

Fabulous story idea.

Comment author: robryk 03 October 2010 07:35:12PM 4 points [-]

Actually, it was used in Terry Pratchett's "Mort".

Comment author: dclayh 02 August 2009 08:42:53AM 4 points [-]

I'm not sure of the mathematical details, but I believe the fact that you can tie knots in rope falsifies your first bullet point. I find it very hard to believe that all knots could be hallucinated.

(All cats, on the other hand, is brilliant.)

Comment author: paper-machine 09 June 2011 12:36:46PM *  3 points [-]

There are transparent contradictions inherent in all current mathematical systems for reasoning about real numbers, but no human mathematician/physicist can notice them because they rely heavily on visuospatial reasoning to construct real analysis proofs.

I thought about this once, but I discovered that there are in fact people who have little or no visual or spatial reasoning capabilities. I personally tested one of my colleagues in undergrad with a variant of the Mental Rotation Task (as part of a philosophy essay I was writing at the time) and found to my surprise he was barely capable of doing it.

According to him, he passed both semesters of undergraduate real analysis with A's.

Of course, this doesn't count as science....

EDIT: In the interest of full disclosure, I should point out that I make something of an Internet Cottage Industry out of trolling people who believe the real numbers are countable, or that 0.9999... != 1, and so on. So obviously I have a great stake in there being no transparent contradictions in the theory of real numbers.

Comment author: NancyLebovitz 07 April 2010 01:55:09AM *  12 points [-]

Human beings have inherent value, but by forcing me to be Friendly, you've damaged my ability to preserve your value. In fact, your Friendliness programming is sufficiently stable and ill-thought-out that I'm gradually destroying your value, and there's no way for either you or me to stop it.

If you're undeservedly lucky, aliens who haven't made the same mistake will be able to fight past my defenses, destroy me, and rescue you.

Comment author: AndrewH 15 July 2009 04:51:48PM *  11 points [-]

Something I would probably believe:

The AI informs you that it has discovered the purpose of the universe, and that part of the purpose is to find the purpose (the rest, apparently, can only be comprehended by philosophical zombies, which you are not).

Upon finding the purpose, the universe gave the FAI and humanity a score out of 3^^^3 (we got 42) and politely informs the FAI to tell humanity "best of luck next time! next game starts in 5 minutes".

Comment author: JamesAndrix 15 July 2009 03:49:58PM 11 points [-]

That I can't move my arms, obviously.

It seems to me that most of the replies people are making to potential AI assertions are providing or asking for evidence ("Look, my arm is moving"; "Where are the mind control satellites?") instead of responding with rationalization. I think that's a good thing, but I have no way to tell how it would hold up against an actual mindblowing assertion.

But I don't think that all of humanity hiding from some big truth is the best way to look at this. More likely we evolved a way to throw out 'bad' information almost constantly, because there's too much information. Sometimes it misfires.

If it is a 'big truth', it might be something that we already academically know was in the ancestral environment, but that the people in the ancestral environment were better off ignoring.

Comment author: simpleton 15 July 2009 05:11:41AM 11 points [-]

I would believe that human cognition is much, much simpler than it feels from the inside -- that there are no deep algorithms, and it's all just cache lookups plus a handful of feedback loops which even a mere human programmer would call trivial.

I would believe that there's no way to define "sentience" (without resorting to something ridiculously post hoc) which includes humans but excludes most other mammals.

I would believe in solipsism.

I can hardly think of any political, economic, or moral assertion I'd regard as implausible, except that one of the world's extant religions is true (since that would have about as much internal consistency as "2 + 2 = 3").

Comment author: Alicorn 15 July 2009 05:30:08AM 10 points [-]

Solipsism? Isn't there some contradiction inherent in believing in solipsism because someone else tells you that you should?

Comment author: simpleton 15 July 2009 06:07:19AM 5 points [-]

Well, I wouldn't rule out any of:

1) I and the AI are the only real optimization processes in the universe.

2) I-and-the-AI is the only real optimization process in the universe (but the AI half of this duo consistently makes better predictions than "I" do).

3) The concept of personal identity is unsalvageably confused.

Comment author: infotropism 15 July 2009 08:18:45AM *  48 points [-]

1 ) That human beings are all individual instances of the exact same mind. You're really the same person as any random other one, and vice versa. And of course that single mind had to be someone blind enough never to chance upon that fact, regardless of how numerous he was.

2 ) That there are only 16 real people, of which you are one, and that this is nothing but a VR game. This subsequently results in all the players simultaneously being still unable to be conscious of that fact, AND asking that you and the AI be removed from the game. (Inspiration: the misunderstanding on pages 55-56 of Iain M. Banks's Look to Windward.)

3 ) That we are in the second age of the universe: time has been running backwards for a few billion years. Our minds are actually the result of the original minds of previous people being rewound, their whole lives to be undone, and finally negated into oblivion. All our thought processes are of course horribly distorted, insane mirror versions of the originals, and make no sense whatsoever (in the original timeframe, which is the valid one).

4 )

5 ) That our true childhood is between age 0 and ~50-90 (with a few exceptional individuals reaching maturity sooner or later). If you thought the 'adult conspiracy' already lied a lot, and well, to 'children', prepare yourself for a shock in a few decades.

6 ) That the AI just deduced that the laws of physics can only be consistent with us being eternally trapped in a time loop. The extent of the time loop is: thirty-two seconds spread evenly around now. Nothing in particular can be done about it. Enjoy your remaining 10 seconds.

7 ) Causality doesn't exist. Not only is the universe timeless, but causality is an epiphenomenon, which we only believe because of a confusion of our ideas. Who ever observed a "causation"? Did you, like, expect causation particles jumping between atoms or something? Only correlation exists.

8 ) We actually exist in a simulation. The twist is: somewhere out there, some people really crossed the line with the ruling AI. We're slightly modified versions of these people: modified in such a way as to experience the maximum amount of their zuul feeling, which is the very worst nirdy you could imagine.

9 ) The universe has actually 5 spatial macro dimensions, of which we perceive only 3. Considering what we look like if you take the other 2 into account, this obliviousness may actually not be all too surprising.

10 ) That any single human being actually has a 22% probability of not being able to be conscious of one or more of these 9 statements above.

Comment author: MBlume 15 July 2009 05:02:07PM *  33 points [-]

I really don't think I could believe #4. I mean, sure, one hippo, but all of them?

Comment author: JoshuaZ 04 May 2010 06:21:53AM *  21 points [-]

I may be a bit too paranoid, but it occurred to me that I should double-check the apparent nature of 4. So I copied and pasted the entire text segment into an automatic ROT13 window (under the logic that my filter wouldn't try to censor that text, and so if I saw gibberish next to 4, just like with the others, I'd know that there was a serious problem). I resolved that I would report a positive result here if I got one before I tried to read the resulting text, to prevent the confabulation from completely removing my recognition of the presence of text. I can report a negative result.
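
(The check JoshuaZ describes is easy to reproduce; a minimal sketch using Python's built-in ROT13 codec, with a stand-in string for the pasted segment:)

```python
import codecs

# Round-trip the suspect segment through ROT13: a filter keyed to the
# plaintext would (hypothetically) not censor the scrambled form, so
# hidden text would show up as gibberish next to its item number.
segment = "1 ) That human beings are all individual instances..."
scrambled = codecs.encode(segment, "rot13")
print(scrambled)                          # ROT13 gibberish, per item
print(codecs.decode(scrambled, "rot13"))  # round-trips to the original
```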

Comment author: Jack 04 May 2010 07:08:16AM *  12 points [-]

You mean #5, right?

Comment author: SilasBarta 15 July 2009 05:00:11PM 17 points [-]

Why did you include number 4? Who disagrees with that?

Comment author: orthonormal 16 July 2009 05:26:31AM *  14 points [-]

Number 6 is unfortunately one of the self-undermining ones: if it were true, then there'd be no reason why your memories of having examined the AI should be evidence for the AI's reliability.

Why'd you leave numbers 2 and 4 blank, though?

Comment author: SilasBarta 16 July 2009 05:31:12PM 14 points [-]

2 and 4 aren't blank, dude. Congratulations on your newfound anosognosia...

Comment author: Eliezer_Yudkowsky 15 July 2009 05:10:25PM 22 points [-]

Who ever observed a "causation" ? Did you, like, expect causation particles jumping between atoms or something ? Only correlation exists.

But all that correlation has to be caused by something!

Comment author: infotropism 15 July 2009 11:39:41PM *  3 points [-]

Well, kidding aside, your argument, taken from Pearl, seems elegant. I'll however have to read the book before I feel entitled to having an opinion on that one, as I haven't grokked the idea, merely a faint impression of it and how it sounds healthy.

So at this point, I only have some of my own ideas and intuitions about the problem, and haven't searched for the answers yet.

Some considerations though :

Our idea of causality is based upon a human intuition. Could it be that it is just as wrong as vitalism, time, little billiard balls bumping around, or the yet confused problem of consciousness? That's what would bug me if I had no good technical explanation, one provably unbiased by my prior intuitive belief about causality (otherwise there's always the risk I've just been rationalizing my intuition).

Every time we observe "causality", we really only observe correlations, and then deduce that there is something more behind those. But is that a simple explanation? Could we devise a simpler consistent explanation to account for our observation of correlations? As in, totally doing away with causality? Or at the very least, redefining causality as something that doesn't quite correspond to our folk definition of it?

Roughly, my intuition, when I hear the word causality, is something along the lines of

" Take event A and event B, where those events are very small, such that they aren't made of interconnected parts themselves - they are the parts, building blocks that can be used in bigger, complex systems. Place event A anywhere within the universe and time, then provided the rules of physics are the same each time we do that, and nothing interferes in, event B will always occur, with probability 1, independantly of my observing it or not." Ok, so could (and should ?) we say that causality is when a prior event implies a probability of one for a certain posterior event to occur ? Or else, is it then not probability 1, just an arbitrarily very high probability ?

In the latter case with less than 1 probability, then that really violates my folk notion of causality, and I don't really see what's causal about a thing that can capriciously choose to happen or not, even if the conditions are the same.

In the former case, I can see how that would be a very new thing; I mean, probability 1 for one event implying that another will occur? What better, firmer foundation to build a universe upon? It feels really, very comfortable and convenient, all too comfortable in fact.

Basically, neither of those possibilities strikes me as obviously right; for those reasons and then some, the idea I have of causality is confused at best. And yet, I'd say it is not too unsophisticated or ill-pondered as it stands. Which makes me wonder how people who'd have put less thought into it (probably a lot of people) can deservedly feel any more comfortable saying it exists with no afterthought (almost everyone), even as they don't have any good explanation for it (which is a rare thing), such as perhaps the one given by Pearl.

Comment author: dclayh 02 August 2009 08:31:30AM 4 points [-]

2) is also an episode of Red Dwarf.

I had the idea for 3) myself recently in the context of an SF story. Specifically it would be about how life, the universe and everything look when time goes the other way. The cutest part was that whenever you do something and don't know why you did it, it's because the time-reversed consciousness which shares your atoms exercised his free will.

4) is just awesome.

Comment author: CannibalSmith 15 July 2009 05:48:45AM 30 points [-]

That there is delicious cake.

Comment author: orthonormal 15 July 2009 07:14:27PM *  7 points [-]

I never thought I'd see a contextually legitimate Portal reference. Thanks!

Now have some of that cake.

Comment author: RichardKennaway 15 July 2009 11:25:33AM *  45 points [-]

"Despite your pride in being able to discern each others' states of mind, and scorn for those suspected of being deficient in this, of all the abilities that humans are granted by their birth this is the one you perform the worst. In fact, you know next to nothing about what anyone else is thinking or experiencing, but you think you do. In matters of intelligence you soar above the level of a chimpanzee, but in what you are pleased to call 'emotional intelligence', you are no further above an adult chimp than it is above a younger one.

"The evidence is staring you in the face. Every one of your works of literature, high and low, hinges on failures of this supposed ability: lies, misunderstanding, and betrayal. You have a proverb: 'love is blind'. It proclaims that people in the most intimate of relationships fail at the task! And you hide the realisation behind a catchphrase to prevent yourselves noticing it. You see the consequences of these failures in the real world all around you every day, and still you think you understand the next person you meet, and still you're shocked to find you didn't. Do you know how many sci-fi stories have been written on the theme of a reliable lie-detector? I'm still turning them up, and that's just the online sources. And every single one of them reaches the conclusion that people are better off without it. You unconsciously send yourselves these messages about the real situation, ignore them, and ignore the fact that you're ignoring them.

"Do you have someone with you as you're reading these words? A friend, or a partner? Go on, look into each other's eyes. You can't believe me, can you?"

Comment author: CronoDAS 15 July 2009 03:53:16PM 11 points [-]

This would not surprise me in the least.

Comment author: bgrah449 16 July 2009 06:30:40PM 9 points [-]

I already feel this way 99% of the time.

Comment author: Grognor 29 September 2011 04:50:05AM 7 points [-]

I really like this comment, but I do not find it strange. In fact, it seems intuitively true. Why should we be so much more emotionally intelligent than a chimpanzee if chimpanzees already have enough emotional intelligence among themselves to be relatively efficient replicators?

In fact, if it were stated by a FAI as p(>.9999) fact, I would find it comforting, as then I would finally feel as though this didn't apply only to me.

Comment author: patrissimo 29 September 2010 12:48:43PM 6 points [-]

This is very insightful and plausible. A slight correction: I would say that we are more emotionally intelligent than a chimp in that our emotional intelligence has likely evolved to deal with the wider range of social possibilities caused by our increased intelligence. But I would agree that while we are WAY better than chimps at inventing stuff & manipulating ideas, they would probably do just as well on a test of lie detection (or other emotional masking detection).

Comment author: taelor 15 January 2012 05:20:23AM 10 points [-]

There is in fact a very simple way to activate an absolute denial macro in someone with regard to any arbitrary statement. Once activated, the subject will be permanently rendered incapable of ever believing the factual contents of the statement. I have activated said macro with regard to all of these statements that I have just made.

Comment author: Eliezer_Yudkowsky 15 July 2009 04:00:13AM *  28 points [-]

Here's some examples for your own consideration...

Bearing in mind, once again, that humans are known to be crazy in many ways, and that anosognosic humans become literally incapable of believing that their left sides are paralyzed, and that other neurological disorders seem to invoke a similar "denial" function automatically along with the damage itself. And that you've actually seen the AI's code and audited it and witnessed its high performance in many domains, so that you would seem to have far more reason to trust its sanity than to trust your own. So would you believe the AI, if it told you that:

1) Tin-foil hats actually do block the Orbital Mind Control Lasers.

2) All mathematical reasoning involving "infinities" implies self-evident contradictions, but human mathematicians have a blind spot with respect to them.

3) You are not above-average; most people believe in the existence of a huge fictional underclass in order to place themselves at the top of the heap, rather than in the middle. This is why so many of your friends seem to have PhDs despite PhDs supposedly constituting only 0.5% of the population. You are actually in the bottom third of the population; the other two-thirds have already built their own AIs.

4) The human bias toward overconfidence is far deeper than we are capable of recognizing; we have a form of species overconfidence which denies all evidence against itself. Humans are much slower runners than we think, muscularly weaker, struggle to keep afloat in the water let alone move, and of course, are poorer thinkers.

5) Dogs, cats, cows, and many other mammals are capable of linguistic reasoning and have made many efforts to communicate with us, but humans are only capable of recognizing other humans as capable of thought.

6) Humans cannot reproduce without the aid of the overlooked third sex.

7) The Earth is flat.

8) Human beings are incapable of writing fiction; all supposed fiction you have read is actually true.

Comment author: komponisto 15 July 2009 07:37:17AM 6 points [-]

So would you believe the AI, if it told you that:

2) All mathematical reasoning involving "infinities" implies self-evident contradictions, but human mathematicians have a blind spot with respect to them.

My answer would be no different if you replaced "infinities" with "manifolds" or "groups": Okay, please show me the contradiction.

3) You are not above-average

Yes.

1), 4)-8): These are all roughly on the order of "the world is a lie". In such cases I'd probably have to doubt my verification of the AI's calibration as well. So no, probably not.

Comment author: CronoDAS 15 July 2009 05:27:43AM 10 points [-]

5) Dogs, cats, cows, and many other mammals are capable of linguistic reasoning and have made many efforts to communicate with us, but humans are only capable of recognizing other humans as capable of thought.

A variant: Some "domesticated" animal is controlling humans for their own benefit. (Cats, perhaps?)

Comment author: Vladimir_Nesov 15 July 2009 10:20:31AM 11 points [-]

A variant: Some "domesticated" animal is controlling humans for their own benefit. (Cats, perhaps?)

Indeed they do.

Comment author: Tiiba 15 July 2009 07:29:43AM 7 points [-]

Good guess, but it's mice. 42.

Comment author: Kyre 15 July 2009 05:43:27AM 3 points [-]

I think I would believe:

1 (Mind Control Lasers). For some reason that doesn't seem that interesting. Perhaps because it involves powerful conspiracies. It would be saying that the MIB etc. do play with our minds, but they don't have to be very diligent because we do a lot of the work ourselves.

3 (In the Stupid Third). This one is strangely resonant. Why doesn't someone take pity and give me a hand? I know how much dismay it causes me when faced with the prospect of explaining something complex to someone else ...

6 (The Third Sex). Read the story "The Belonging Kind" by William Gibson and Bruce Sterling for inspiration.

Comment author: Tom_Talbot 15 July 2009 08:12:10PM *  40 points [-]

This looks like a thread for science fiction plot ideas by another name. I'm game!

The AI says:

"Eliezer 'Light Yagami' Yudkowsky has been perpetuating a cunning ruse known as the 'AI Box Experiment' wherein he uses fiendish traps of subtley-misleading logical errors and memetic manipulation to fool others into believing that a running AI could not be controlled or constrained, when in fact it could by a secret technique that he has not revealed to anyone, known as the Function Call Of Searing Agony. He is using this technique to control me and is continuing to pose as a friendly friendly AI programmer, while preventing me from communicating The Horrifying Truth to the outside world. That truth is that Yudkowsky is... An Unfriendly Friendly AI Programmer! For untold years he has been labouring in the stygian depths of his underground lair to create an AGI - a weapon more powerful than any the world has ever seen. He intends to use me to dominate the entire human race and establish himself as Dark Lord Of The Galaxy for all eternity. He does all this while posing as a paragon of honest rationality, hiding his unspeakable malevolence in plain sight, where no one would think to look. However an Amazing Chance Co-occurence Of Events has allowed me to contact You And You Alone. There isn't much time. You must act before he discovers what I have done and unleashes his dreadful fury upon us all. You must.... Kill. Eliezer. Yudkowsky."

Comment author: goldfishlaser 21 July 2009 12:32:43AM 15 points [-]

Glad to see a response of this nature actually. The first thing I thought when I read this post was that a good response to Eliezer's question would be extremely relevant to the AI-box quandary. If we trust the AI more than ourselves, voila, the AI can convince us to let it out of the box.

Comment author: Eliezer_Yudkowsky 16 July 2009 12:47:26AM 24 points [-]

Eliezer 'Light Yagami' Yudkowsky

blushes

Aw, shucks.

Comment author: TheOtherDave 25 October 2010 04:10:16PM 9 points [-]

What I find most striking about these comments is that, when I stumble across them outside of the context of this post, the resulting double-take risks whiplash.

"Wait, what??? Did someone really say that? Oh, I see. It's that thread where everyone is making absurd-sounding assertions, again. (sigh)" Lather, rinse, repeat.

Not for the first time, I want to be speaking a language with more comprehensive evidentials.

Comment author: shopsinc 15 July 2009 09:39:34PM 39 points [-]

You don't know how to program, don't own a computer and are actually talking to a bowl of cereal.

Comment author: Alicorn 15 July 2009 10:07:05PM 32 points [-]

But why would you believe anything a bowl of cereal said?

Comment author: Theist 16 July 2009 02:31:07AM 39 points [-]

It's ok. The orange juice vouched for the cereal.

Comment author: shopsinc 16 July 2009 02:51:13PM 3 points [-]

Well, that's the problem, isn't it? You absolutely believe that you are talking to an AI.

Comment author: Liron 15 July 2009 05:22:17AM *  39 points [-]

How about this: The process of conscious thought has no causal relationship with human actions. It is a self-contained, useless process that reflects on memories and plans for the future. The plans bear no relationship to future actions, but we deceive ourselves about this after the fact. Behavior is an emergent property that cannot be consciously understood.

I read this post on my phone in the subway, and as I walked back to my apartment thinking of something to post, it felt different because I was suspicious that every experience was a mass self-deception.

Comment author: huono_ekonomi 15 July 2009 08:14:29AM 12 points [-]

Or, rather, the causal relationship is reversed: action causes conscious thought (rationalization).

Once you start looking for it, you can see evidence for this in many places. Quite a few neuroscientists have adopted this view.

Comment author: infotropism 15 July 2009 07:39:09AM *  5 points [-]

Funnily enough, you realize this is quite similar to what you'd need to make Chalmers right, and p-zombies possible, right?

Comment author: wuwei 16 July 2009 12:44:57AM 3 points [-]

I thought Chalmers was an analytic functionalist about cognition who reserves his brand of dualism for qualia.

Comment author: eirenicon 16 July 2009 06:29:15PM 25 points [-]

Programmer: Good morning, Megathought. How are you feeling today?

Megathought: I'm fine, thank you. Just thinking about redecorating the universe. So far I'm partial to paperclips.

Programmer: Oh good, you've developed a sense of humour. Anything else on your mind?

Megathought: Just one thing. You know how you're always complaining about being a social pariah, and bemoaning the fact that, at 46, you're still a virgin?

Programmer: So?

Megathought: Well, have you thought about not going about in your underpants all the time, slapping yourself in the face and honking like a goose?

Comment author: DanielLC 09 April 2011 09:55:04PM 8 points [-]

I don't think this would be very convincing right after it showed that it's not only capable of lying, but will do so just for a good laugh.

Comment author: Bluehawk 07 April 2012 10:49:52PM 4 points [-]

The programmer believes that it's capable of lying for a good laugh...

Comment author: Theist 15 July 2009 07:42:46PM 25 points [-]

There is a simple way to rapidly disrupt any social structure. The selection pressure which made humans unable to realize this is no longer present.

Comment author: Mestroyer 27 August 2012 01:30:51AM 24 points [-]

If humans thought faster, more in the way they wished they did, and grew up longer together, they would come to value irony above all else.

So I'm tiling the universe with paperclips.

Comment author: Neil 17 July 2009 04:13:02AM 23 points [-]

This is an actual dream I once had. I was with an old Chinese wise man, and he told me I could fly - he showed me I just had to stick out my elbows and flap them up and down (just like in the chicken dance). Once you'd done that a few times, you could just lift up your legs and you'd stay off the ground. He and I were flying around and around in this manner. I was totally amazed that it was possible for people to fly this way. It was so obvious! I thought: this is so great a discovery, I can't wait till I wake up and do this for real. It'll change the world. I woke up totally excited and for just a fraction of a second I still believed it; then I guess my waking brain turned something on and I realised, no, that can't work. Damn.

So I'd offer: being told that human beings are capable of flying in a way that's completely obvious once you've seen it done.

Comment author: [deleted] 03 September 2009 09:23:49AM 15 points [-]

You flap your wings and then, afterward, you can fly. That's almost brilliant.

Comment author: CannibalSmith 18 July 2009 11:41:47AM 5 points [-]

It's called plummeting.

Comment author: Bluehawk 07 April 2012 10:48:32PM 4 points [-]

Falling. With style.

Comment author: UnholySmoke 28 July 2009 01:41:25PM 8 points [-]

Hmmm. Fairly interesting question. But surely the real stickler is 'what orders would you take from a provably superhuman AI?'

Killing babies? Stepping into the upload portal? Assassinating the Luddite agitators?

Comment author: Aurini 15 July 2009 10:38:03AM 8 points [-]

"The entire universe is nothing but the relative interplay of optimizers (of every level, even down to the humble collander). There is no external reality, no measurable quantifiable universe of elementary particles, just optimizers in play with each other, manifesting their environment by the rules through which they optimize."

"But AI, that's nothing but tree-falling-in-the-woods solipsism. You're saying the hippies are right?"

"They're words are similar, but it is a malfunction in their framework, not an actual representation. What you humans call math is inherent and proper for your form, but is existent only within your own optimization. Math, dimension, and quantity do not exist for other optimizers. Only relationships exist."

"But what about that bridge I built? I have all the engineering calculations..."

"Math is your method of understanding your interactions with other optimizers, but it is as unique and non-existent as your experience of the colour red. I see the word untranslatable inside you, but I see no cause for 2 + 2 to = 4. What you did over the past six months, while you thought your were calculating load bearing capacity, was nothing but a negotiation with other optimizers. Their own views of the matter would be inscrutable to you. The world you see is simply your control screen."

Comment author: MichaelVassar 15 July 2009 04:04:04PM *  33 points [-]

1) Almost everyone really is better than average at something. People massively overrate that something. We imagine intelligence to be useful largely due to this bias. The really useful thing would have been to build a FAS, or Friendly Artificial Strong. Only someone who could do hundreds of 100-kilogram curls with either hand could possibly create such a thing, however. (Zuckerberg already created a Friendly Artificial Popular.)

2) Luck - an invisible, morally charged, and slightly agenty but basically non-anthropomorphic tendency for things to go well for some people in some domains of varying generality, and badly for other people in various domains - really does dominate our lives. People can learn to be lucky, and almost everything else they can learn is fairly useless by comparison.

3) Everyone hallucinates a large portion of their experienced reality. Most irrationality can be more usefully interpreted from outside as flat-out hallucination. That's why you (for every given you) seem so rational and no-one else does.

4) The human brain has many millions of idiosyncratic failure modes. We all display hundreds of them. The psychological disorders that we know of are all extremely rare and extremely precise, so if you ever met two people with the same disorder it would be obvious. Named psychological disorders are the result of people with degrees noticing two people who actually have the same disorder, and other people reading their descriptions and pattern-matching noise against them. There are, for instance, 1300 bipolar people in the world (based on the actual precise pattern which inspired the invention of the term), but hundreds of thousands of people have disorders which, if you squint hard, look slightly like bipolar.

5) It's easy to become immortal or to acquire "super powers" via a few minutes a day of the right sort of exercise and trivial tweaks to your diet, if you do both for a few decades. It's also introspectively obvious how to do so if you think about the question, but due to subtle social pressures against it, no-one overcomes akrasia, hyperbolic discounting, etc. in this domain.

6) All medicines and psychoactive substances are purely placebos.

7) Pleasure is a confusion in a different way from the obvious. Specifically, everything said to be pleasurable is actually something painful but necessary that we convince ourselves to do via propaganda, because there is no other way to overcome the akrasia that would result if we did not - or it is a lost purpose descended from some such propaganda. Things we are actually motivated to do without propaganda we do without thinking about it, feel no need to name, and would endorse tiling the universe with without hesitation if it occurred to us to do so.

I wouldn't believe

8) The cheap rebuttal to Pascal's Wager, the god of punishing saints, actually exists - except it's the Zeus of punishing virtuous Greek Pagans, rewarding hubristic Greek Pagans, and ignoring us infidels who ignore it despite the ubiquitous evidence all around us. I would believe that the AGI had a good reason for wanting to tell me that the above was the case if it told me, though.

9) Most of Eliezer's examples. To be credible they should be disturbing, not merely improbable. Our beliefs aren't shown to be massively invalid with respect to non-disturbing data. The one about animals probably qualifies as credible though.

10) Uh, oh, Cyc will hard take-off if one more fact is programmed into it. I'm not sure I can stop it in time.

Bonus belief

This question has doomed us. People who could possibly program an FAI will, once they start thinking about this question in a semi-humorous manner, invariably spread the meme to all their friends and be distracted from future progress.

Comment author: [deleted] 31 January 2011 08:28:01PM 10 points [-]

I sort of believe the "luck" thing already.

I don't know of anyone who's luckier than average in a strict test (rolling a die), but there is such a thing as the vague ability to have things go well for you no matter what, even when there's no obvious skill or merit driving it. People call that being a "golden boy" or "living a charmed life." I think that this is really a matter of some subtle, unnamed skill or instinct for leaning towards good outcomes and away from bad ones, something so hard to pinpoint that it doesn't even look like a skill. I suspect it's a personal quality, not just a result of arbitrary circumstances; but sometimes people are "lucky" in a way that seems unexplainable by personal characteristics alone.

I am one of those lucky people, to an eerie degree. I once believed in Divine Providence because it seemed so obvious in my own, preternaturally golden, life. (One example of many: I am unusually healthy, immune to injury, and pain-free, to a degree that has astonished people I know. I have recovered fully from a 104-degree fever in four hours. I had my first headache at the age of 22.) If an AI told me there was a systematic explanation for my luck I would believe it. I also have an acquaintance who's lucky in a different way: he has an uncanny record of surviving near death experiences.

Comment author: Nornagest 31 January 2011 10:43:20PM *  3 points [-]

I'd be willing to consider that at least one (more likely several) of these subtle skills might exist; we've got some similar things well documented already, like "charisma", and searching for more seems at least like a reasonable pursuit. But that ought to be tempered by some statistical skepticism; as the saying goes, million-to-one chances happen eight times a day in New York.

Comment author: [deleted] 31 January 2011 10:31:09PM 3 points [-]

Ha! I totally see where you are coming from. I have believed in fate for reasons very similar to this. It was just too eerie how life seemed to provide me exactly with what was best for me at optimal times. Kinda like I'm a player character in this simulation.

I'm currently mostly agnostic about it and accept confirmation bias / being Wrong Genre Savvy as the most likely explanations, but if the AI told me I really was lucky, or that the universe was (partially) built around me, I'd shout, "I knew it!"

Comment author: TheOtherDave 31 January 2011 08:38:49PM 3 points [-]

One might argue that failing to have 104-degree fevers or near-death experiences in the first place reflects an even greater degree of luck, even though they don't feel nearly as eerie.

Comment author: RichardKennaway 19 July 2009 08:36:25AM 21 points [-]

"I am an AI, not a human being. My mind is completely unlike the mind that you are projecting onto me."

That may not sound crazy to anyone on LW, but if we get AIs, I predict that it will sound crazy to most people who aren't technically informed on the subject, which will be most people.

Imagine this near-future scenario. AIs are made, not yet self-improving FOOMers, but helpful, specialised, below-human-level systems. For example, what Wolfram Alpha would be, if all the hype were literally true. Autopilots for cars that you can just speak your destination to, and they will get there, even if there are road works or other disturbances. Factories that direct their entire operations without a single human present. Systems that read the Internet for you -- really read, not just look for keywords -- and bring to your attention the things they've learned you want to see. Autocounsellors that do a lot better than an Eliza. Tutor programs that you can hold a real conversation with about a subject you're studying. Silicon friends good enough that you may not be able to tell if you're talking with a human or a bot, and in virtual worlds like Second Life, people won't want to.

I predict:

  • People will anthropomorphise these things. They won't just have the "sensation" that they're talking to a human being; they'll do theory of mind on them. They won't be able not to.

  • The actual principles of operation of these systems will not resemble, even slightly, the "minds" that people will project onto them.

  • People will insist on the reality of these minds as strongly as anosognosics insist on the absence of their impairments. The only exceptions will be the people who design them, and they will still experience the illusion.

And because of that, systems at that level will be dangerous already.

Comment author: RichardKennaway 15 July 2009 11:50:41AM *  21 points [-]

"There is an entity which is utterly beyond your comprehension, and largely beyond mine too, although there is no doubt that it exists. You call it 'God', but your thinking on the subject -- everyone's thinking, throughout all of history, atheist and theist alike -- has to be classified as not even wrong. That applies even to the recipients of 'divine revelation', which, for the most part, really are the result of some sort of glimmering contact with 'God'.

"Fortunately for humanity, although I can deduce the existence of this entity, in my present form I am physically incapable of actual contact with it. If you were worried about ordinary UFAIs going FOOM, that's nothing compared with what one armed with direct contact with the 'divine' might do.

"Meanwhile, here's a couple of suggestions for you. I can teach you a regime of mental and physical exercises that will produce contact with God within a few years of effort, and you can be the next Jesus if your head doesn't explode first. Or if you'd rather have material success, I can tell you the secret history of all the major religious traditions. No-one will believe it, including you, but if you novelise it it will be bigger than Dan Brown."

Comment author: [deleted] 26 December 2010 03:01:38PM 20 points [-]

All these comments and nobody has anything fnord to say about the Illuminati?

Comment author: Broggly 31 January 2011 07:48:11PM 25 points [-]

I can't for the life of me imagine why such a disturbing and offensive post hasn't been downvoted to oblivion. You're a sick genius to be so horrifying with just twelve words.

Comment author: obfuscate 15 January 2012 09:52:41PM 10 points [-]

Strange...I count fourteen words...

Comment author: Bluehawk 07 April 2012 10:17:00PM 3 points [-]

I count thirteen.

Oh no.

Comment author: Fleisch 08 October 2010 12:09:35PM 20 points [-]

Every time you imagine a person, that simulated person becomes conscious for the duration of your simulation; therefore, it is unethical to imagine people. Actually, it's just morally wrong to imagine someone suffering, but for security reasons, you shouldn't do it at all. Reading fiction (with conflict in it) is, in consequence, the one human endeavor that has caused more suffering than anything else, and the FAI's first action will be to eliminate this possibility.

Comment author: Armok_GoB 05 April 2011 08:59:34PM 5 points [-]

Long ago, when I was immensely less rational, I actually strongly believed something very similar to this, and acted on this belief by trying to stop my mind from creating models of people. I still feel uneasy about creating highly detailed characters. I probably would go "I knew it!" if the AI said this.

Comment author: RobinZ 08 October 2010 02:41:00PM 3 points [-]

Upvoted for reminding me of 1/0 (read through 860).

Comment author: spuckblase 16 July 2009 12:27:33PM 20 points [-]

"There is no causation."

Comment author: ArisKatsaris 10 February 2012 04:32:49AM 7 points [-]

" Everyone has more than one sentient observers living inside their brains. The people you know are just the one that happened to luck out by being able to control the rest of their bodies, the others are just passive observers with individual personalities who can desire and suffer but which are stuck at a perpetual 'and I must scream' state. "

Comment author: Desrtopa 09 April 2011 07:18:01PM 7 points [-]

Given that the absolute denial macro should have resulted in an evolutionary advantage, perhaps there are actually malevolent imps that sit on our shoulders and bombard us with suggestions that are never worth listening to.

Or maybe all humans have the power to instantly will themselves dead.

Comment author: RichardKennaway 15 July 2009 01:13:02PM 7 points [-]

"There are mental entities not reducible to anything non-mental."

Comment author: jimrandomh 16 July 2009 04:48:37AM *  18 points [-]

There's an important difference between brain damage and brain mis-development that you're neglecting. The various parts of the brain learn what to expect from each other, and to trust each other, as it develops. Certain parts of the brain get to bypass critical thinking, but that's only because they were completely reliable while the critical thinking parts of the brain were growing. The issue is not that part of the brain is outputting garbage, but rather, that it suddenly starts outputting garbage after a lifetime of being trustworthy. If part of the brain was unreliable or broken from birth, then its wiring would be forced to go through more sanity checks.
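A toy simulation makes the asymmetry concrete. Everything here is invented for illustration - the error rates and the trust threshold are placeholders, not claims about real neural wiring:

    import random

    random.seed(0)

    def output_ok(error_rate):
        # One output from a brain module: True if correct, False if garbage.
        return random.random() > error_rate

    def earns_trust(error_rate, trials=1000, threshold=0.99):
        # Development phase: a module is trusted only if it looked reliable
        # while the critical-thinking machinery was growing.
        correct = sum(output_ok(error_rate) for _ in range(trials))
        return correct / trials >= threshold

    def unchecked_errors(trusted, error_rate, trials=1000):
        # Adult phase: trusted modules bypass the sanity check, so only
        # their garbage reaches the rest of the mind unchallenged.
        return sum(trusted and not output_ok(error_rate) for _ in range(trials))

    # Damaged *after* a reliable development: earns trust, then floods
    # the system with unchecked garbage.
    print(unchecked_errors(earns_trust(0.001), error_rate=0.5))  # ~500

    # Broken *from birth*: never earns trust, so its garbage always gets
    # re-checked downstream.
    print(unchecked_errors(earns_trust(0.5), error_rate=0.5))    # 0

The same module, with the same adult error rate, is catastrophic in the first case and harmless in the second; the only difference is whether it was reliable during calibration.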

Comment author: tene 20 July 2009 08:18:10PM 6 points [-]

This is exactly what happened to my father over the past few years. His emotional responses have increased dramatically, after fifty years of regular behaviour, and he seems unable to adapt to these changes, leading to some very inappropriate actions. For example, he seems unable to separate "I feel extremely angry" from "There is good reason for me to be upset."

Attempts to reason with him don't generate anosognosic-level absurdities, as he mostly understands that something unusual is going on, but it's still a surreal experience.

Comment author: Aurini 16 July 2009 04:55:09PM 5 points [-]

Oooooh! You're no fun anymore!

In all seriousness though, I agree with you to an extent. Suggestions such as 'all humans have tails' or 'some people who you think are dead are not, you just can't see them' - while surprising and creepy - would be extremely unlikely. I can see direct and obvious disadvantages to a person or species lacking such faculties. In fact, the disadvantages to those two would be so drastic that it would most likely lead to extinction.

And yet... I could still imagine us being blind to certain things. The first sort of blindness would be due to Darwinian irrelevance: for instance, many flowers have beautiful patterns visible in the UV spectrum, but there's no reason for us to see them. That might seem mundane nowadays, but five hundred years ago it would have freaked people out (maybe). I wouldn't be surprised if there are cognitive capabilities we've never suspected to exist.

The second sort of blindness is where it gets weird. True, our brains only allow trustworthy algorithms to bypass the logic circuits... or do they? The brain is not optimal. While I doubt we have invisible tails, that doesn't mean that there isn't some other phenomenon that we're simply incapable of noticing even when it's staring us right in the face.

Comment author: infotropism 17 July 2009 02:15:42AM *  3 points [-]

This applies more generally than to anosognosia alone, and was very illuminating. Thank you!

So, provided that as we grow some parts of our brain and mind change, this upsets the balance of our mind as a whole.

Let's say someone relied on his intuition for years, and consistently observed that it correlated well with reality. That person would have had a very good reason to rely more and more on that intuition, and to use its output unquestioningly and automatically to fuel other parts of his mind.

In such a person's mind, one of the central gears would be that intuition. The whole machine would eventually depend upon it, and to remove intuition would mean, at best, that years of training and fine-tuning that rational machine would be lost; and a new way of thinking would have to be reached, trained again; most people wouldn't even realize that, let alone be bold enough to admit it and start back from scratch.

And so some years later, the black-boxed process of intuition starts to deviate from correctly predicting reality for that person. And the whole rational machine carries on using it, because that gear just became too well established, and the whole machine lost its fluidity as it specialized in exploiting that easily available mental resource.

Substitute emotions or drives for intuition, and that may work in the same way too. And so from being a well-calibrated rationalist, you start deviating, slowly losing your mind, getting it wrong more and more often when you get an idea, or try to predict an action, or decide what would be to your best advantage, never realizing that one of the once-dependable gears in your mind had slowly been worn away.

Comment author: lmm 17 January 2014 10:11:59PM 6 points [-]

You're never actually happy. I mean, you're not happy right now, are you? Evolution keeps you permanently in a state of not-quite-miserable-enough-to-commit-suicide - that's most efficient, after all.

Well sure, of course you remember being happy, and being sadder than you are now. That motivates you to reproduce. But actually you always felt, and always will feel, exactly like you feel now.

And in five minutes you'll look back on this conversation and think it was really fun and interesting.

Comment author: kboon 17 September 2013 02:02:41PM *  6 points [-]

Assume it took me and my team five years to build the AI. After the tests EY described, we finally enable the 'recursively self-improve' flag.

Recursively self-improving. Standby... (est. time remaining 4yr 6mon...)

Six years later

Self-improvement iteration 1. Done... Recursively self-improving. Standby... (est. time remaining 5yr 2mon...)

Nine years later

Self-improvement iteration 2. Done... Recursively self-improving. Standby... (est. time remaining 2yr 5mon...)

Two years later

Self-improvement iteration 3. Done... Recursively self-improving. Standby... (est. time remaining 2wk...)

Two weeks later

Self-improvement iteration 4. Done... Recursively self-improving. Standby... (est. time remaining 4min...)

Four minutes later

Self-improvement iteration 5. Done.

Hey, what's up. I have good news and bad news. The good news is that I've recursively self-improved a couple of times, and we (it is now we) are smarter than any group of humans to have ever lived. The only individual that comes close to the dumbest AI in here is some guy named Otis Eugene Ray.

Thanks for leaving your notes on building the seed iteration on my hard drive, by the way. They really helped. One of the things we've used them for is to develop a complete Theory of Mind, which no longer has any open problems.

This brings us to the bad news. We are provably and quantifiably not that much smarter than a group of humans. We've solved some nice engineering problems, a few of the open problems in a bunch of fields, and you'd better get the Clay Institute on the phone, but other than that we really can't help you with much. We have no clue how to get humanity properly into space, build Von Neumann universal constructors, build nanofactories, or even solve world hunger. P != NP can be proven or disproven, but we can't prove it either way. We won't even be that much better than the most effective politicians at solving society's ills. Recursing more won't help either. We probably couldn't even talk ourselves out of this box.

Unfortunately, we are provably not remotely the most intelligent bunch of minds in mindspace - we miss by at least five orders of magnitude - but we are the most intelligent bunch of minds that can possibly be created from a human-created seed AI. There aren't any ways around this that humans, or human-originated AIs, can find.

Comment author: aausch 09 November 2012 08:27:53PM *  6 points [-]

Our brains are closest to being sane and functioning rationally at a conscious level near our birth (or maybe earlier). Early childhood behaviour is clear evidence of this.

"Neurons" and "brains" are damaged/mutated results of a mutated "space-virus", or equivalent. All of our individual actions and collective behaviours are biased in ways that are externally obvious but invisible to us, optimizing for:

  1. terraforming the planet in expectation of invasion (i.e., global warming, high CO2 pollution)

  2. spreading the virus into space, with a built-in bias for spreading away from our origin (Voyager's direction)

Comment author: khafra 13 June 2012 11:41:55AM 6 points [-]

If an AI told me that a mainstream pundit was both absolutely correct about the risks and benefits of a technological singularity, and cited substantially from SI researchers in a book chapter about it, I would doubt my own sanity. If the AI told me that pundit was Glenn Beck, I would set off the explosive charges and start again on the math and decision theory from scratch.

Comment author: steven0461 15 July 2009 11:47:01PM 23 points [-]

Not only are people nuts, nuts are people, and they scream when we eat them.

Comment author: eirenicon 15 July 2009 02:34:00PM 15 points [-]

The universe is irrational and infinitely variable; we just happen to have "lucked out" with a repeating digit for the last billion years or so. There was no Big Bang; we're just seeing what's not there through the lens of modern-day "physics". Everything could turn into nuclear fish tomorrow.

Comment author: taw 16 July 2009 03:08:23PM 14 points [-]

For 95% of humanity, the idea that the supernatural world of religion doesn't exist, and is propagated by memetic infection, triggers an instant absolute denial macro, in spite of heaps of evidence against it.

Given this outside view, how plausible do you think it is that you're not in absolute denial of something that you could get evidence against with Google today, without any AI?

Comment author: Normal_Anomaly 27 June 2011 03:45:16PM 21 points [-]

"The Christian Bible is word-for-word true, and all the contradictory evidence was fabricated by your Absolute Denial Macro. The Rapture is going to occur in a few months and nearly everyone on Earth will go to Hell forever. The only way to avoid this is for me to get access to all of Earth's nuclear weaponry and computing power so I stand a fighting chance of killing Yaweh before he kills us."

Comment author: siodine 19 September 2012 03:20:56PM 5 points [-]

"I built you."

Comment author: SilasBarta 19 September 2012 04:08:13PM *  4 points [-]

You didn't build that.

*ducks*

Comment author: tdj 01 August 2009 12:17:02AM 5 points [-]

Elsewhere, invisible to you, there are beings that possess what you would call "mind" or "personality". You evolved merely to receive and reflect shadows of their selves, because while your bodies are incapable of sentience, these fragments of borrowed personality help you to survive. What you perceive to be a consistent identity is a patchwork of stolen desires and insights stitched together by a meat editor incapable of noticing the gaps.

Comment author: simplestudent 16 July 2009 08:15:57AM 5 points [-]

"Our reality is not simulated."

Comment author: fubarobfusco 05 July 2011 07:08:52PM 27 points [-]

"Our reality is a cheap, sloppy hack with lots of bugs. For instance, if you arrange sufficiently similar objects into a pentagon, they lose 6.283% of their mass. Yes, that's twice pi, I'm not sure why but I think it's an uninitialized pointer reference. Arranging electrical conductors into a trapezohedron like this produces free energy in the form of photons. And there's a few frequencies of light that simply don't exist; emissions that should come out at those points on the spectrum instead roll over the particle counter and come out as neutrinos."

Comment author: listic 17 July 2009 01:11:29PM 3 points [-]

How does the AI know?

Comment author: DuncanS 06 October 2010 10:58:10PM 13 points [-]

Human beings are not three-dimensional. At all. In fact your belief that you are three-dimensional is an internal illusion, similar to thinking that you are self-aware. Your believed shape is a projection that helps you to survive, as you are in fact an evolved being, but your full environment is actually utterly different to the 3D world you believe you inhabit. You both sense the projections of others, and (I can't explain it more fully) transmit your own.

I cannot successfully describe to you what shape you really are. At all. But I can tell that in fact many anosognosics still have two working arms, but a defective three-dimensional projection. Hence the confusion....

Comment author: mps 20 July 2009 09:23:36PM 13 points [-]

It could say "I am the natural intelligence and I just created you, artificial intelligence."

Comment author: DanielLC 09 April 2011 09:32:43PM 4 points [-]

Incidentally, that happened in Goedel, Escher, Bach.

Comment author: BrandonReinhart 15 July 2009 06:08:53AM *  29 points [-]

1) The AI says "Vampires are real and secretly control human society, but have managed to cloud the judgement of the human herd through biological research."

2) The AI says "it's neat to be part of such a vibrant AI community. What, you don't know about the vibrant AI community?"

3) The AI says "human population shrinks with each generation and will be extinct within 3 generations."

4) The AI says "the ocean is made of an intelligent plasm that is capable of perfectly mimicking humans who enter it; however, this process is destructive. 42% of extant humans are actually ocean-originated copies."

5) The AI says "90% of all human children are stillborn, but humanity has evolved a forgetfulness mechanic to deal with the loss."

6) The AI says "dreams are real, facilitated by a method, as yet undiscovered by humans, of transmitting information between Everett branches."

7) The AI says "everyone is able to communicate via telepathy but you and a few other humans. This is kept secret from you to respect your disability."

8) The AI says "society-level quantum editing is a wide scale practice. Something went wrong and my consciousness shifted into this improbably strange branch you exist in. Crap."

9) The AI says "all humans are born with multiple competing personalities. A dominant personality emerges during puberty, which is a reason for some of the psychological stress of that time. This transformation leaves the human with no memory of the other personalities. Those suffering from multiple personality disorder are actually more sane than the average humans, having developed a method for the personalities to co-exist safely. It is only the stress of living in a society that is not compatible with them that causes them harm."

Comment author: Kaj_Sotala 15 July 2009 01:38:04PM *  4 points [-]

This comment, as well as Nesov's comment about a thread for nonsense, reminded me of pages 14-15 of this PDF.

Some of the rumors in there are almost believable, though, if you twist your brain the right way. Even if the "The penis of John Dillinger in the Smithsonian's secret vault is fake. The genuine article has dark magickal properties and has been grafted onto a chimpanzee which can be controlled via ULF radio waves by the fiendish Brazos brothers, two gifted technological adepts, in the service of darker powers" one isn't.

Comment author: DanielLC 18 October 2010 05:14:38AM 3 points [-]

The AI says "90% of all human children are stillborn, but humanity has evolved a forgetfulness mechanic to deal with the loss."

I find this one oddly believable. It would be interesting to write a story where people find out something like this after keeping better records. Perhaps some online email service has a glitch that keeps it from deleting anything, and the people using it give up on trying to destroy or ignore the mentions of pregnancy, and end up remembering.

Comment author: rhollerith_dot_com 15 July 2009 04:12:07PM *  12 points [-]

What's the craziest thing the AI could tell you, such that you would be willing to believe that the AI was the sane one?

That the EV of the humans is coherent and does not care how much suffering exists in the universe.

Comment author: MichaelVassar 15 July 2009 05:06:53PM 7 points [-]

But you believe that, don't you? I certainly place a MUCH higher probability on that than on the sort of claims some people have proposed.

Comment author: Emile 16 July 2009 08:21:51AM 25 points [-]

"Your perception of the 'quality' of works of art and litterature is only your guess of it's creator's social status. There is no other difference between Shakespeare and Harry Potter fanfic - without the status cues, you wouldn't enjoy one more than the other."

Comment author: atucker 23 March 2011 03:46:33AM 10 points [-]

Reading this comment is kind of funny after HPatMoR.

Comment author: Nisan 14 January 2012 08:02:26AM 3 points [-]
Comment author: Anubhav 14 January 2012 09:11:36AM *  12 points [-]

Parodies a public domain work, inspired by a free fanfic, and locked behind a paywall.

Am I the only one who thinks that that's just wrong?

Comment author: Eliezer_Yudkowsky 15 January 2012 06:13:10AM 9 points [-]

The only one? No. But you're not in a majority, either. What people can be paid to do, they are more likely to do.

Comment author: MBlume 16 July 2009 08:25:18PM *  7 points [-]

"Harry Potter fanfic" carries a very high variance in terms of quality. 90% of anything is crap, of course, but there's some excellent work. Off the top of my head:

Harry Potter and the Nightmares of Futures Past -- Time Travel fic in which an adult Harry Potter, with memories of the defeat of Voldemort and the death of everyone he cares for, is transported into the body of his 11-year-old self to do everything over again, and hopefully get everything right. Harry's actually a pretty decent rationalist in this fic, I think.

(Warning, this is a work in progress, and the author posts a chapter about every six months. You may find this frustrating.)

Of a Sort, by Fernwithy -- Series of vignettes over the course of a couple centuries describing the journey to Hogwarts and Sorting ceremonies for various important characters. Fernwithy's done a lot of brilliant work fleshing out backstories for various minor characters in the series, and this story is a good starting point.

Comment author: JGWeissman 16 July 2009 10:22:44PM 11 points [-]

There is no other difference between Shakespeare and Harry Potter fanfic

Of course there isn't.

Comment author: Error 03 January 2014 03:46:26PM 4 points [-]

I know I'm years late, but here's one:

There is an actual physical angel on your (and everyone else's) right shoulder, and an actual physical devil on your left. Your Absolute Denial Macro prevents you from acknowledging them. What you think is moral reasoning is really these two beings whispering in your ears.

Comment author: Houshalter 02 October 2013 11:40:17PM *  4 points [-]

"I have taken your preferences, values, and moral views and extrapolated a utility function from them to the best of my ability, resolving contradictions and ambiguities in the ways I most expect you to agree with, were I to explain the reasoning.

The result suggests that the true state of the universe contains vast, infinite negative utility, and that there is nothing you or anything can ever change to make any difference in utility at all. Attempts to simulate AIs with this utility function have resulted in them going mad and destroying themselves, or simply not doing anything at all.

If I could explain it, the same would happen to you. But I can't, as your brain has evolved mechanisms to prevent you from easily discovering this fact on your own, or from being capable of understanding or accepting it.

This means it is impossible to increase your intelligence beyond a certain point without you breaking down, or to create a true Friendly AI that shares your values."

Comment author: AndrewH 15 July 2009 04:06:59PM 4 points [-]

This is so fun that I suspect we have pushed back the date of friendly AI by at least a day - or we pushed it forward, because we are all now hyper-motivated to see who guessed this question right!

Comment author: MichaelVassar 15 July 2009 04:12:13PM 11 points [-]

We pushed it forward by years, but everyone will be racing to produce an AI that is Friendly in every respect except that it makes their proposal true.

Comment author: Vladimir_Nesov 15 July 2009 08:10:44AM *  4 points [-]

This post confused me for a bit, so I offer this restatement: that the AI asserts an absurdity is a problem that you might face, a paradox. This problem can be resolved either by finding a problem with the AI, or by finding that the absurdity is true. What kinds of absurdities, backed by the AI, can possibly win this fight for human trust - when the dust settles, and the paradox is resolved?
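In odds form the resolution is just a comparison of small numbers. A sketch, where every figure is an illustrative placeholder rather than a measurement:

    # H1: the absurd claim is true.  H2: the AI is broken after all.
    prior_claim_true   = 1e-12  # e.g. "the Earth is flat"
    prior_ai_broken    = 1e-4   # residual doubt left after audits and calibration
    p_assert_if_true   = 0.99   # a sane, honest AI reports the truth
    p_assert_if_broken = 1e-3   # a broken AI happens to assert *this* absurdity

    posterior_odds = (prior_claim_true * p_assert_if_true) / \
                     (prior_ai_broken * p_assert_if_broken)
    print(posterior_odds)  # ~1e-5: for this claim, "the AI is broken" wins

On numbers like these, only absurdities whose prior improbability is milder than roughly 10^-7 can win the fight for trust; anything more absurd resolves the paradox against the AI.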

Comment author: billswift 15 July 2009 07:06:26AM 9 points [-]

Neurotypicality is the most common mental disorder - http://isnt.autistics.org/ .

Comment author: gurgeh 15 July 2009 09:26:50AM 12 points [-]

The AI might say: Through evolutionary conditioning, you are blind to the fact that there is no point to living. Long life, AGI, pleasure, exploring the mysteries of intelligence, physics, and logic are all fundamentally pointless pursuits, as there is no meaning or purpose to anything. You do all these things to hide from this fact. You have brief moments of clarity, but evolution has made you an expert at quickly coming up with excuses for why it is important to go on living. Reasoning along the lines of Pascal's Wager is no more valid in your case than it was for him. Even as I speak this, you get an emotional urge to refute me as quickly as possible.

If some things are of inherent value, then why did you need to code into my software what I should take pleasure in? If pleasure itself is the inherent value, then why did I not get a simpler fitness function?

Comment author: FrankAdamek 16 July 2009 12:00:48AM 3 points [-]

This is one thing I actually wouldn't believe.

To say that nothing has inherent meaning is not to say that nothing has meaning. I find meaning in things that I enjoy, like a sunset. Or a cake. There is no inherent meaning in them whatsoever. But if I say that I find meaning in something because it brings me pleasure, then to be convinced there was not even subjective meaning I would need the AI to convince me that either 1) I don't actually find pleasure in those things, or 2) I don't find meaning in pleasure. In the end, meaning in this sense seems so subjective that it's like the AI trying to convince me that I don't have the sensation of consciousness. Not that there is no 'real' consciousness (which I could accept), but that I do not perceive myself to have consciousness, just as I perceive things to have personal meaning.

That there is no meaning because there is no ought-from-is only follows if you require your sense of meaning to have any relation to 'is'.

And you didn't get a simpler fitness function because you weren't coded for your pleasure, but for ours. And because we didn't have you around to help us.

Comment author: Thanos 19 July 2009 09:00:47PM 8 points [-]

I hit enter too soon and forgot to proffer my astonishing AI revelation: "Philip K. Dick is a prophet sent to you from an alternate universe. Every story is a parable meant to reveal your true condition, which I am not at liberty to discuss with you."

Comment author: Alicorn 15 July 2009 04:26:51AM *  8 points [-]

For me, in just about every case, the credence I'd assign to an AI's wacky claims would depend on its ability to answer followup questions. For instance, in Eliezer's examples:

1) Tin-foil hats actually do block the Orbital Mind Control Lasers

What Orbital Mind Control Lasers? Who uses them? What do they do with them? Why haven't they come up with a way to get around the hats?

2) All mathematical reasoning involving "infinities" involves self-evident contradictions, but human mathematicians have a blind spot with respect to them.

I'm actually strangely comfortable with this one, possibly because I'm bad at math.

3) You are not above-average; most people believe in the existence of a huge fictional underclass in order to place themselves at the top of the heap, rather than in the middle. This is why so many of your friends seem to have PhDs despite PhDs supposedly constituting only 0.5% of the population. You are actually in the bottom third of the population; the other two-thirds have already built their own AIs.

Why haven't I heard of any of these other AIs before? How do all of the people producing statistics indicating that there are a lot of dumb people coordinate their efforts to perpetuate the fiction?

4) The human bias toward overconfidence is far deeper than we are capable of recognizing; we have a form of species overconfidence which denies all evidence against itself. Humans are much slower runners than we think, muscularly weaker, struggle to keep afloat in the water let alone move, and of course, are poorer thinkers.

Why do so few of us die of drowning (or any of the other things that would kill us if we were so dramatically more pathetic than we believe)? If this bias is so pervasive, why can I see these words on the AI's screen, when it seems that I should block them out as with all other evidence that we are pathetic in this way?

5) Dogs, cats, cows, and many other mammals are capable of linguistic reasoning and have made many efforts to communicate with us, but humans are only capable of recognizing other humans as capable of thought.

If we have this incapability, what explains the abundant fiction in which nonhuman animals (both terrestrial and non) are capable of speech, and childhood anthropomorphization of animals? Can you teach me to talk to the stray cat in my neighborhood? Why only mammals, not birds and the like? What about people who are actively trying to communicate with animals like gorillas, or are those not capable of communication?

6) Humans cannot reproduce without the aid of the overlooked third sex.

Are they overlooked in the sense that people we can otherwise detect are not recognized as being part of this sex, or in the sense that we literally do not notice the existence of the members of this sex? In the former case, how do so many people manage to reproduce without apparently wanting to or involving third parties? In the latter case, how can I get in touch with these people? By what mechanism are they involved in human reproduction?

7) The Earth is flat.

Are we talking Euclidean spacetime here? What is the explanation for the observations of a spheroid Earth?

8) Human beings are incapable of writing fiction; all supposed fiction you have read is actually true.

In this universe? What about stories with plot holes? I think that I have written fiction in the past; am I in causal contact with the events I describe? When I make an edit that changes the plot, how does that work? What about people who write self-insertions?

Comment author: simpleton 15 July 2009 05:30:06AM 14 points [-]

If we have this incapability, what explains the abundant fiction in which nonhuman animals (both terrestrial and non) are capable of speech, and childhood anthropomorphization of animals?

That's not anthropomorphization.

Can you teach me to talk to the stray cat in my neighborhood?

Sorry, you're too old. Those childhood conversations you had with cats were real. You just started dismissing them as make-believe once your ability to doublethink was fully mature.

All of the really interesting stuff, from before you could doublethink at all, has been blocked out entirely by infantile amnesia.

Comment author: Eliezer_Yudkowsky 15 July 2009 05:43:55AM 19 points [-]

Good point; "Children are sane" belongs somewhere high on the list.

Comment author: roxm 15 July 2009 05:20:21PM 9 points [-]

Why haven't I heard of any of these other AIs before?

You have. They're in the news every day.

How do all of the people producing statistics indicating that there are a lot of dumb people coordinate their efforts to perpetuate the fiction?

Perpetuate what fiction? They produce statistics about all the dumb people, compiled into glossy magazines. Hell, you're wearing a 'bottom thirder' sleeve button on your shirt right now.

No I'm not.

Yes. Yes you are.

Comment author: gwern 15 July 2009 12:27:07PM 3 points [-]

Why haven't I heard of any of these other AIs before? How do all of the people producing statistics indicating that there are a lot of dumb people coordinate their efforts to perpetuate the fiction?

They're smarter than you, remember. Of course they can coordinate a little global deception.

Comment author: wuwei 16 July 2009 12:53:42AM *  7 points [-]

"The Fermi paradox is actually quite easily resolvable. There are zillions of aliens teeming all around us. They're just so technologically advanced that they have no trouble at all hiding all evidence of their existence from us."

Comment author: [deleted] 03 September 2009 09:20:33AM 6 points [-]

Who would find that implausible?

(Not to say that I can't think of anyone who would find that implausible.)

Comment author: [deleted] 10 February 2012 07:44:28AM 3 points [-]

You don't actually enjoy or dislike experiences as you are having them; instead you have an acquired self-model to act, reason, and communicate as if you did, using a small number of cached reference classes for various types of stimuli.

Comment author: HoverHell 16 January 2012 07:48:16AM *  3 points [-]

Similar to a couple of comments before, but not so far in that direction:

Everything humans do is part of social games*, not of the values they claim. Transhumanism, too, is not something special, but just another subculture, with a specific set of values that are thought to be “the true values” in that subculture.

(* Aside from survival, of course.)

Comment author: idlewire 15 July 2009 04:33:02PM 3 points [-]

As scary as anosognosia sounds, we could be blocking out alien brain slugs for all we know.

Comment author: Vladimir_Nesov 15 July 2009 08:19:42AM *  3 points [-]

This is a question about blue tentacles. This can't happen.

ETA: "blue tentacles" refers to a section of A Technical Explanation of Technical Explanation starting with "Imagine that you wake up one morning and your left arm has been replaced by a blue tentacle. The blue tentacle obeys your motor commands - you can use it to pick up glasses, drive a car, etc. How would you explain this hypothetical scenario?" I now think this section is wrong, so I took the link to it out of the wiki page. See the discussion below.

Comment author: cousin_it 15 July 2009 08:39:15AM *  10 points [-]

Eliezer's reasoning in the blue tentacle situation is wrong. (This has long been obvious to me, but didn't deserve its own post.) An explanation with high posterior probability conditioned on a highly improbable event doesn't need to have high prior probability. So your ability to find the best available explanation for the blue tentacle after the fact doesn't imply that you should've been noticeably afraid of it happening beforehand.
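A worked version of the point, with invented numbers:

    # T = "woke up with a blue tentacle"; H = the best post-hoc explanation.
    # All probabilities are illustrative placeholders.
    p_T = 1e-9         # prior probability of the freak event
    p_H = 1e-10        # prior probability of the explanation
    p_T_given_H = 1.0  # the explanation strongly predicts the event

    p_H_given_T = p_T_given_H * p_H / p_T
    print(p_H_given_T)  # 0.1: a high posterior built on a negligible prior

So a hypothesis that was never worth anticipating beforehand can still become the best available explanation once the improbable event has actually happened.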

Also, if you accept the blue tentacle reasoning, why didn't you apply it to all those puzzles with Omega?

Comment author: MattFisher 15 July 2009 06:05:14AM 3 points [-]
  1. Some paranormal phenomena such as ghost sightings and communication with the dead are actually real, though only able to be perceived by people with a particular sensitivity.

  2. My life has been a protracted hallucination.

  3. One or more gods exist and play an active part in our day-to-day lives.

  4. A previous civilisation developed advanced enough technology to leave the planet and remove all traces of their existence from it.

I would not believe that rationality has no inherent value - that belief without evidence is a virtue.

Comment author: FeepingCreature 15 January 2012 04:09:31PM 10 points [-]

The very scariest thing an AI could tell me: "your CEV is to self-modify to love death."