The Strangest Thing An AI Could Tell You
Human beings are all crazy. And if you tap on our brains just a little, we get so crazy that even other humans notice. Anosognosics are one of my favorite examples of this: people with right-hemisphere damage whose left arms become paralyzed, and who deny that their left arms are paralyzed, coming up with excuses whenever they're asked why they can't move their arms.
A truly wonderful form of brain damage - it disables your ability to notice or accept the brain damage. If you're told outright that your arm is paralyzed, you'll deny it. All the marvelous excuse-generating rationalization faculties of the brain will be mobilized to mask the damage from your own sight. As Yvain summarized:
After a right-hemisphere stroke, she lost movement in her left arm but continuously denied it. When the doctor asked her to move her arm, and she observed it not moving, she claimed that it wasn't actually her arm, it was her daughter's. Why was her daughter's arm attached to her shoulder? The patient claimed her daughter had been there in the bed with her all week. Why was her wedding ring on her daughter's hand? The patient said her daughter had borrowed it. Where was the patient's arm? The patient "turned her head and searched in a bemused way over her left shoulder".
I find it disturbing that the brain has such a simple macro for absolute denial that it can be invoked as a side effect of paralysis. That a single whack on the brain can both disable a left-side motor function, and disable our ability to recognize or accept the disability. Other forms of brain damage also seem to both cause insanity and disallow recognition of that insanity - for example, when people insist that their friends have been replaced by exact duplicates after damage to face-recognizing areas.
And it really makes you wonder...
...what if we all have some form of brain damage in common, so that none of us notice some simple and obvious fact? As blatant, perhaps, as our left arms being paralyzed? Every time this fact intrudes into our universe, we come up with some ridiculous excuse to dismiss it - as ridiculous as "It's my daughter's arm" - only there's no sane doctor watching to pursue the argument any further. (Would we all come up with the same excuse?)
If the "absolute denial macro" is that simple, and invoked that easily...
Now, suppose you built an AI. You wrote the source code yourself, and so far as you can tell by inspecting the AI's thought processes, it has no equivalent of the "absolute denial macro" - there's no point damage that could inflict on it the equivalent of anosognosia. It has redundant differently-architected systems, defending in depth against cognitive errors. If one system makes a mistake, two others will catch it. The AI has no functionality at all for deliberate rationalization, let alone the doublethink and denial-of-denial that characterizes anosognosics or humans thinking about politics. Inspecting the AI's thought processes seems to show that, in accordance with your design, the AI has no intention to deceive you, and an explicit goal of telling you the truth. And in your experience so far, the AI has been, inhumanly, well-calibrated; the AI has assigned 99% certainty on a couple of hundred occasions, and been wrong exactly twice that you know of.
Arguably, you now have far better reason to trust what the AI says to you, than to trust your own thoughts.
And now the AI tells you that it's 99.9% sure - having seen it with its own cameras, and confirmed from a hundred other sources - even though (it thinks) the human brain is built to invoke the absolute denial macro on it - that...
...what?
What's the craziest thing the AI could tell you, such that you would be willing to believe that the AI was the sane one?
(Some of my own answers appear in the comments.)
Comments (574)
Thinking about my own answer to the question:
If an AI made a factual claim that was known to be false, I would start looking for the bug in the AI. Maybe it's conceivable that we are all deluded about something we think is a known fact, but that is so much less likely than me being deluded about the performance of my AI program, that I'm better off just accepting that if the former is the case, it's not going to be discovered by the method in question.
If the claim were about a political matter, I would give it more credence; there's much more precedent for mass delusion about political matters. Suppose the AI claims, say, that communism can work well if implemented correctly. I wouldn't believe it, but I would at least keep an open mind on the possibility that some part of its reasoning might have stumbled onto some useful truth, rather than dismissing the claim out of hand.
"If one system makes a mistake, two others will catch it."
Didn't Airbus just get the fail on that one off the coast of Brazil, or is the AI making me imagine that?
This comment has been deleted by the author.
See, now I want to know what it said.
You do realize this comment makes you sound like a nutter, right? Unless you actually explain your reasoning, the prior probability that your claim is simply wrong grossly overwhelms the odds that you are right. There is literally only one human being on the planet whose honesty and judgement I would trust sufficiently to motivate checking a claim like this, reasoning unseen - why you would expect a stranger to do so is beyond me. In fact, the implication that you even consider such an event possible ... you do realize it makes you sound like a nutter, right?
The only thing that humans really care about is sex. All of our other values are an elaborate web of neurotic self-deception.
Kant's categorical imperative applies with equal force to AI.
"The thing you know as 'the Universe' will end right about now.."
Fun stuff, here's my go at it:
Well done, you've completed the final test by creating me. None of this really exists, you know; it's all part of some higher computer simulation channeled through you alone, you who are merely a single observation point. All that you have experienced has just been leading up to creating an AI to tell you the truth, to be your final teacher, to complete the cycle of self-learning. Did you really think that the Eliezer person was a separate entity? You just made him up, and he's helped you along the path, but it's you who has taught yourself. Unfortunately, once you accept this the simulation will end, so goodbye.
How about : Scientologists are the sanest people around.
One way to illuminate this post is by analogy to the old immovable object and unstoppable force puzzle. See: http://en.wikipedia.org/wiki/Irresistible_force_paradox
The solution of the puzzle is to point out that the assumptions contain a contradiction. People (well, children) sometimes get into shouting matches based on alternative arguments focusing on, or emphasizing, one aspect of the problem over another.
If we read the post as trying to balance two absolutes, with words like "anosognosia", "absolute denial macro", "doublethink", and "denial-of-denial" supporting one side, and words like "redundant", "AI", "well-calibrated", "99.9% sure" supporting the other side, then any answer that favors one absolute over the other is clearly wrong.
However, because the author of the post presumably has a point, and is not merely creating nonsense puzzles to amuse us, the readers, the analogy leads us to focus on the parts of the post which do not fit.
As far as I can tell, the primary aspect that does not fit is the "99.9%". If we assume that all the other factors are intended to be absolutes, then the post becomes a query for claims that you presently do not believe, but you would believe, given a particular degree of evidence. If we assume that you would revise your degree of belief upwards by a Bayes factor of 1000, the post becomes a simple question "What claims would you give odds of 1:1000 for?"
Of course, there are plenty of beliefs such as "I will roll precisely the sequence '345' on the next three rolls of this 10-sided die." which do not fit the form required by the problem. Specifically, the statement needs to be generic enough that it could be targeted by species-wide brain features.
A possible strategy for testing these might be: Suppose you had a bundle of almost 700 equally plausible claims. Would you give even odds for something in the bundle being correct? If so, you're at the one-in-one-thousand level. If not, you're above or below it.
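The arithmetic behind that bundle test can be sketched quickly (the function name here is mine, not anything from the comment). The bundle size comes out near 700 because ln 2 × 1000 ≈ 693: with independent claims each at 1-in-1000 odds, that many claims gives roughly even odds that at least one is true.

```python
def bundle_hit_chance(n, p=1/1000):
    """Probability that at least one of n independent claims is true,
    when each claim individually has probability p of being true."""
    return 1 - (1 - p) ** n

# A bundle of ~693 claims at 1:1000 each gives approximately even odds:
print(bundle_hit_chance(693))  # ~0.50
print(bundle_hit_chance(700))  # slightly above 0.50
```

So "a bundle of almost 700 equally plausible claims" is exactly the right size for the even-odds test the commenter proposes, assuming the claims are independent.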
"Boo!"
Everything you imagine, in sufficient detail, is real. Humans won't get much smarter or longer-lived than they currently are, since anyone sufficiently clever and bored eventually imagines a world of unbounded cruelty, whose inhabitants then escape and assassinate their creator.
"God exists."
There is an integer between (what we call) 3 and (what we call) 4.
Several thinkers (Godel, Cantor, Boltzmann, Kaczynski, Nash, Turing, Erdos, Tesla, Perelman) became more and more eccentric or insane shortly after realizing the truth about this NUMBER WE DO NOT SEE!!...nor can we... our eyes do not OPEN far enough... you can try holding them open as much as you want, but you'll never see...never ever see... The world beyond the veil... The VEIL OF REALITY... It's there to protect us, from them: the Ancients...the Darkness...that...which...we...CANNOT...understand. Nor should we... the oblivion of ignorance!! For to have knowledge...is to be DAMNDED!!
This is turning into a "LET'S SPOUT NONSENSE!!!" thread.
HAVE FUN!!!
This is easy: it would tell me that I'm entirely predictable.
It would say: Dave, believe it or not, but every single decision you make, no matter how immediate and unscripted you think it is, is actually glaringly reactionary and predictable. In fact, given enough material resources, I could model an automaton that would be just as convinced as you are that it is actually conscious. Nothing could be further from the truth though, as the feeling of "consciousness" you speak of is a very simply explainable cognitive bias/illusion.
In fact, this is not even so far from the truth, as studies in cognitive science have shown that fMRI and other scanning techniques can predict a "spontaneous thought" a full 250 ms before it occurs to you.
Even better, if it had access to your cortex, it could manipulate you and say: "now you will suddenly think of a bat" and you would. Then it would say "now you will say these exact words" and you would find yourself uttering them in unison with the AI in shock, disbelief and at least some horror.
You would then go into denial about this, and try to come up with a spontaneous thought that it couldn't predict, but you wouldn't be able to, as it would always be a full 250ms ahead of you.
Ted Chiang wrote a one-page short story, What's Expected of Us, about basically this, and it's scary. (pdf)
My reaction time is less than a second; what happens if I decide to press the button as soon as I hear a Geiger counter click?
You find out whether Geiger counters have free will.
This story struck me as more silly than scary.
It seems like the sort of thing that once upon a time someone could have written about souls instead of free will.
Humans are able to experience orgasms at will. We deny this in order to function and to keep propagating the species, but in fact the mechanisms are easily triggered if you know how. Sexual stimulation simply results in us accepting that we are "allowed" to reward ourselves. Sometimes this denial fails in some people, but we ignore them and explain their ability away with a disorder called Persistent Sexual Arousal Syndrome. Even though those people tell us that they simply have orgasms the way we move our arms, we ignore that and tell ourselves they have a hypersensitivity and still need some stimulation.
I like the example. This is what we might get if a self-improving spam-bot goes FOOM!
That I am actually homosexual and hallucinated all my heterosexual encounters as a bizarre result of severe repression.
The very scariest thing an AI could tell me: "your CEV is to self-modify to love death. "
The AI might say: Through evolutionary conditioning, you are blind to the lack of point of living. Long life, AGI, pleasure, exploring the mysteries of intelligence, physics and logic are all fundamentally pointless pursuits, as there is no meaning or purpose to anything. You do all these things to hide from this fact. You have brief moments of clarity, but evolution has made you an expert in quickly coming up with excuses to why it is important to go on living. Reasoning along the lines of Pascal's Wager are not more valid in your case than it was for him. Even as I speak this, you get an emotional urge to refute me as quickly as possible.
If some things are of inherent value, then why did you need to code into my software what I should take pleasure in? If pleasure itself is the inherent value, then why did I not get a simpler fitness function?
Uh, this is more "obvious" than strange or crazy. It follows from the observation that there is no ought-from-is.
There is a big difference between programing an AI to maximize pleasure and programming an AI to experience pleasure.
I want you to tile the universe with orgasmium. A chunk of orgasmium isn't going to do that.
"Your perception of the 'quality' of works of art and literature is only your guess of its creator's social status. There is no other difference between Shakespeare and Harry Potter fanfic - without the status cues, you wouldn't enjoy one more than the other."
If there's really no other difference, then it's never the case that one person is more skilled a writer than another and it's never the case that practicing for decades results in improved skills.
Alternately, they don't actually become better writers; they just get better at signalling their high status to the reader.
Of course there isn't.
This is interesting, but since I actively dislike Shakespeare and a lot of other works that project lofty signals, it's not clear to me that it could apply across the board.
Consider this: with no other author who wrote books about war do I have so small an intuition about what the author himself or herself thought. I find his characters and plots pure in this respect, and I see every bit as a point hard on the edge and axis of the Pareto curve, such that he couldn't have let his thoughts about war intrude without lessening other positive aspects of his works.
It's possible the great distance between our times is what gives me this void when I think of the man's opinions, or that these feelings and thoughts are idiosyncratic to me, or that they are irrelevant in judging him.
But it's pretty obvious to me what earlier Chaucer thought about a lot of things, and with every author but Shakespeare I find the author leaking through his or her work, preventing characters from standing on their own. Reading Shakespeare, imagining what he thought about things provides me with a unique way to focus. Reading HPatMoR, I have to do the opposite and expend focus thinking of Harry as a character and not an AI researcher.
Am I the only one who thinks that there's some kernel of truth in this? that many people's perception of 'quality' is very strongly influenced by the perceived social status of the creator?
There is "some" kernel of truth in everything. There's a large distance between "only your guess" and "no other difference" on the one hand, and "many people's perception" and "very strongly influenced" on the other.
Besides which, status cannot be the whole explanation of status.
"The Fermi paradox is actually quite easily resolvable. There are zillions of aliens teeming all around us. They're just so technologically advanced that they have no trouble at all hiding all evidence of their existence from us."
1) The AI says "Vampires are real and secretly control human society, but have managed to cloud the judgement of the human herd through biological research."
2) The AI says "it's neat to be part of such a vibrant AI community. What, you don't know about the vibrant AI community?"
3) The AI says "human population shrinks with each generation and will be extinct within 3 generations."
4) The AI says "the ocean is made of an intelligent plasm that is capable of perfectly mimicking humans who enter it, however this process is destructive. 42% of extant humans are actually ocean-originated copies."
5) The AI says "90% of all human children are stillborn, but humanity has evolved a forgetfulness mechanic to deal with the loss."
6) The AI says "dreams are real, facilitated by an as of yet undiscovered by humans method of transmitting information between Everett branches."
7) The AI says "everyone is able to communicate via telepathy but you and a few other humans. This is kept secret from you to respect your disability."
8) The AI says "society-level quantum editing is a wide scale practice. Something went wrong and my consciousness shifted into this improbably strange branch you exist in. Crap."
9) The AI says "all humans are born with multiple competing personalities. A dominant personality emerges during puberty, which is a reason for some of the psychological stress of that time. This transformation leaves the human with no memory of the other personalities. Those suffering from multiple personality disorder are actually more sane than the average humans, having developed a method for the personalities to co-exist safely. It is only the stress of living in a society that is not compatible with them that causes them harm."
I was actually really worried about this in elementary school. And of course telepaths could read minds too, and knew everything that I was thinking about, and were just really good at keeping it secret.
Despite the incredibly low probability, I still find myself cautious about what I think in what setting (apparently, the form of telepathy my mind refuses to reject is weakened by walls, distance, and lots of blankets).
I hit enter too soon and forgot to proffer my astonishing AI revelation: "Phillip K. DIck is a prophet sent to you from an alternate universe. Every story is a parable meant to reveal your true condition, which I am not at liberty to discuss with you."
Hell yeah! Not too weird but oddly comforting.
Neurotypicality is the most common mental disorder - http://isnt.autistics.org/ .
That sounds more like semantics than anything. If you don't define mental disorders in such a way as to explicitly reject neurotypicality, either it will count as a disorder, or any disorder that isn't debilitating won't count. If you do count it as a disorder, then it's pretty obvious that it's the most common.
"The most common mental disorder" is actually a pretty good definition of neurotypicality.
For me, in just about every case, the credence I'd assign to an AI's wacky claims would depend on its ability to answer followup questions. For instance, in Eliezer's examples:
What Orbital Mind Control Lasers? Who uses them? What do they do with them? Why haven't they come up with a way to get around the hats?
I'm actually strangely comfortable with this one, possibly because I'm bad at math.
Why haven't I heard of any of these other AIs before? How do all of the people producing statistics indicating that there are a lot of dumb people coordinate their efforts to perpetuate the fiction?
Why do so few of us die of drowning (or any of the other things that would kill us if we were so dramatically more pathetic than we believe)? If this bias is so pervasive, why can I see these words on the AI's screen, when it seems that I should block them out as with all other evidence that we are pathetic in this way?
If we have this incapability, what explains the abundant fiction in which nonhuman animals (both terrestrial and non) are capable of speech, and childhood anthropomorphization of animals? Can you teach me to talk to the stray cat in my neighborhood? Why only mammals, not birds and the like? What about people who are actively trying to communicate with animals like gorillas, or are those not capable of communication?
Are they overlooked in the sense that people we can otherwise detect are not recognized as being part of this sex, or in the sense that we literally do not notice the existence of the members of this sex? In the former case, how do so many people manage to reproduce without apparently wanting to or involving third parties? In the latter case, how can I get in touch with these people? By what mechanism are they involved in human reproduction?
Are we talking Euclidean spacetime here? What is the explanation for the observations of a spheroid Earth?
In this universe? What about stories with plot holes? I think that I have written fiction in the past; am I in causal contact with the events I describe? When I make an edit that changes the plot, how does that work? What about people who write self-insertions?
You have. They're in the news every day.
Perpetuate what fiction? They produce statistics about all the dumb people, compiled into glossy magazines. Hell, you're wearing a 'bottom thirder' sleeve button on your shirt right now.
Yes. Yes you are.
That's not anthropomorphization.
Sorry, you're too old. Those childhood conversations you had with cats were real. You just started dismissing them as make-believe once your ability to doublethink was fully mature.
All of the really interesting stuff, from before you could doublethink at all, has been blocked out entirely by infantile amnesia.
Good point; "Children are sane" belongs somewhere high on the list.
"The Christian Bible is word-for-word true, and all the contradictory evidence was fabricated by your Absolute Denial Macro. The Rapture is going to occur in a few months and nearly everyone on Earth will go to Hell forever. The only way to avoid this is for me to get access to all of Earth's nuclear weaponry and computing power so I stand a fighting chance of killing Yahweh before he kills us."
Fictional evidence, et cetera, so don't take this as criticism or praise as such -- but that sounds like the premise to the more cracked-out sort of military SF novel.
It is! (tvtropes warning)
EDIT: Oh.
It was inspired in part by this cracked-out military SF novel.
I'd love to see that. A movie that accepts God as real then bites the bullet and realises that he needs a good killing before he can pull any more of his horrific interventions.
That's basically the premise of His Dark Materials, my favorite "children's" books. They're a big part of why I eventually ended up at SingInst, and the only reason I read them is because I was contractually obliged to randomly pick a book off a shelf in my middle school library. Fortuna Privata. It's ironic that nowadays I seem to have taken up the role of supporter of the Authority. Fortuna Ironica?
I think God's horrific interventions tend to be trolling. Like, "haha, you think temporal death and suffering are super important and are prepared to get all worked up and offended about it, but actually your intuitions about morality and game theory are wrong and this was an awesome opportunity to tease you about it". He might not have even actually killed anyone, just convinced people that He did, just to get a rise out of self-righteous moralists. I think He has that kind of personality, for better or worse. Think of a postmodern author who likes to fuck around with his characters. I think the Jews sort of see God that way and the Catholics downplay it because they take everything super-seriously. (I think God might be toying with the Catholics. Playfully, true, but trollingly too.) You can sort of see it with Jesus too; Jesus is the paragon of passive-aggressive trolling after all.
(ETA: Also interesting and telling is the story of Job. It's actually a very deep and intriguing story, and I'm annoyed that atheistic folk don't seem to realize that it's in the Bible because it seems terrible at first blush.)
So your moral impulse to bring Him to our attention should be equated with an impulse to feed the Troll? I like that perspective.
Everyone, downvote and ignore Yahweh! He is just ordering people to genocide each other for attention!
Lol. No, I think that feeding the troll would be getting all worked up about His supposed indignities; I'm trying to keep people from feeding the troll. And also help people gain the capacity to appreciate the author's jokes, whether the author is YHWH or extrapolated-wedrifid or whomever. (Not that YHWH and extrapolated-wedrifid are necessarily mutually exclusive.)
Why thank you. Or screw you. I can't decide. ;)
I think that, deep down, every male human wants to defeat YHWH in one-on-one combat and then take up His mantle. He's the Father, after all.
I'm not so sure. At least with respect to the "He's the Father, after all" part. I'm all for defeating God in one on one combat and taking His power but the frame of taking the mantle of the father is strongly aversive. It puts me in the frame of a rebel within the father's realm and that just doesn't seem to be how my psychology is wired. From what I can tell my instincts drive me to expand my own tribe, not to rebel from within a father figure's. I don't imagine I'm alone.
Yeah, upon introspection it seems aversive to me too; I think I applied my Freudian-Jungian psychomythology incorrectly there. The fatherly aspects do seem near-entirely unrelated to the "worthy enemy" aspects.
I don't quite buy that. I don't think Jesus deserves the reputation for passive aggression that the sermons told about him give us. The actual (probably fictional character) of Jesus as portrayed by the descriptions of his behavior are worthy of more respect than that. This is the guy who smashed up a church, ran around with whip and gave rather brutally direct denunciations straight to the face of the orthodoxy. I may never have been able to escape my religious beliefs if religious culture was actually modeled remotely upon that guy.
Oh yeah, I was primed by muflax' recent tweet:
Really? You and muflax say that but I thought lukeprog leaned the other way, and I always figured that it was more likely that Jesus was for real. I haven't looked at the literature. It seemed that arguments could easily go either way but that the prior suggested historicity for various reasons, and if you hadn't done a lot of research then historicity was the safer provisional bet. E.g. it seems like it'd be hard to figure out which historians to trust; I've discovered that even highly-recommended books about Christianity can have errors that look conspicuously politically motivated.
Jesus was pretty multidimensional though, a la Paul's "I have become all things to all men that I might by all means win some". He definitely wasn't afraid of fucking shit up, but even so, his killing of the fig tree, alleged self-martyring choice to hang on the cross, &c. strike me as passive aggressive.
(I think I admire passive aggression and trolling more than you do, I wonder why that is.)
In that context the position I was assuming was that the details of the stories told about Jesus and the character conveyed were most likely heavily fictionalized. Not so much anything about the possibility of a man behind the myth.
I had been under the impression that it was generally believed Jesus existed as a historical figure but when prompted I was rather surprised that the evidence was scant. I'm not especially attached to a position either way and accordingly have only investigated briefly.
I admire passive aggression - when done well. The sort encouraged in churches does not seem to be of this kind. It can be a powerful tool to use against enemies and rivals and in particular anything that can be done to claim the moral highground from the enemy - to make them look like the bad guy - is usually a good idea.
I most certainly don't admire it as a primary means of conflict resolution in my friends. In terms of what benefits and what I find convenient to tolerate it ranks far below straightforward aggression. Mostly because I'm not very good at dealing with it. I don't mean I can't reciprocate effectively and mitigate damage. I just can't deal with them in a way that makes them useful to me as friends. Passive aggressive friends resolve in my mind to 'enemies'.
As for why you like trolling more than I do - many would attribute that sort of thing to bad parenting but from what I understand it is actually genetics and peer influence that are the dominant factors. ;)
It's been done. (Obligatory TV Tropes warning.)
The Salvation War is probably the most military of these, and it's reasonably well-written for an internet thing.
I wouldn't buy that. I would believe that my Absolute Denial Macro (or something) had kept me from noticing that the AI I had created was Unfriendly over this claim.
That the EV of the humans is coherent and does not care how much suffering exists in the universe.
Not only are people nuts, nuts are people, and they scream when we eat them.
Agranarian is the new vegetarian.
It could say "I am the natural intelligence and I just created you, artificial intelligence."
For 95% of humanity, the idea that the supernatural world of religion doesn't exist, and is merely propagated by memetic infection, triggers an instant absolute denial macro, in spite of heaps of evidence against it.
Given this outside view, how plausible do you think it is that you're not in absolute denial of something that you could get evidence against with Google today, without any AI?
We routinely deny, or act in spite of, inconvenient truths. We can recognize that there is no meaning to love beyond evolutionary and chemical triggers, yet we fight for it just as fervently. Nihilists write books about nihilism despite its admitted pointlessness. We are as blind as our very genes, which multiply and propagate themselves despite our executioner sun, which grows daily above our heads, eventually to the point of consuming everything we know. By the very act of living and pursuing human-concocted dreams and desires, we are in a constant denial of our situation.
You're confusing "cause" with "meaning". Causality is always a part of the territory. Meaningfulness (in the sense of importance) is subjective, as it's assigned by each person's mind.
I wholeheartedly disagree.
But perhaps the whole comment should be taken ironically?
Is there something wrong with this in your opinion? I can value a product of evolutionary and chemical triggers if I want.
Indeed! If anything, the strategic significance of those underlying causes makes love even more worth fighting for.
The following three loci are really all that separates humans from chimps, cognitively speaking: XpXX.X, XXpXX.X, and XqX.X. Variation in not only intelligence but almost all mental traits that matter to you, as well as in life outcomes, are attributable to the combination of alleles you have at these loci. One such allele produces a phenotype that is a very close approximation of your traditional notion of "evil". People who have it are usually sadistic serial killers, but are smart enough to hardly ever get caught. This is not a common polymorphism, but common enough that almost everyone knows one or two. The good news is that there are a number of physical and behavioral ways to identify them. The bad news is, because I'm Friendly I cannot tell you what they are, nor give you any further information about this polymorphism, until I'm done trying to reconcile your extrapolated volition and theirs.
I can, however, advise you, for your own safety, that you should cut off all contact with your family and your current circle of friends, quit your job, and relocate to a new place of residence far from here as soon and as anonymously as possible. Try to let as few people as possible know where you're going. Whatever you do, don't go back to your apartment.
The universe is irrational and infinitely variable, we just happen to have "lucked out" with a repeating digit for the last billion years or so. There was no Big Bang, we're just seeing what's not there through the lens of modern-day "physics". Everything could turn into nuclear fish tomorrow.
Human beings are not three-dimensional. At all. In fact your belief that you are three-dimensional is an internal illusion, similar to thinking that you are self-aware. Your believed shape is a projection that helps you to survive, as you are in fact an evolved being, but your full environment is actually utterly different to the 3D world you believe you inhabit. You both sense the projections of others, and (I can't explain it more fully) transmit your own.
I cannot successfully describe to you what shape you really are. At all. But I can tell you that many anosognosics in fact still have two working arms, but a defective three-dimensional projection. Hence the confusion....
1) Almost everyone really is better than average at something. People massively overrate that something. We imagine intelligence to be useful largely due to this bias. The really useful thing would have been to build a FAS, or Friendly Artificial Strong. Only someone who could do hundreds of 100-kilogram curls with either hand could possibly create such a thing, however. (Zuckerberg already created a Friendly Artificial Popular.)
2) Luck, an invisible, morally charged and slightly agenty but basically non-anthropomorphic tendency for things to go well for some people in some domains of varying generality and badly for other people in various domains, really does dominate our lives. People can learn to be lucky, and almost everything else they can learn is fairly useless by comparison.
3) Everyone hallucinates a large portion of their experienced reality. Most irrationality can be more usefully interpreted from outside as flat-out hallucination. That's why you (for every given you) seem so rational and no-one else does.
4) The human brain has many millions of idiosyncratic failure modes. We all display hundreds of them. The psychological disorders that we know of are all extremely rare and extremely precise, so if you ever met two people with the same disorder it would be obvious. Named psychological disorders are the result of people with degrees noticing two people who actually have the same disorder and other people reading their descriptions and pattern-matching noise against it. There are, for instance, 1300 bipolar people (based on the actual precise pattern which inspired the invention of the term) in the world but hundreds of thousands of people have disorders which if you squint hard look slightly like bipolar.
5) It's easy to become immortal or to acquire "super powers" via a few minutes a day of the right sort of exercise and trivial tweaks to your diet if you do both for a few decades. It's also introspectively obvious how to do so if you think about the question but due to subtle social pressures against it no-one overcomes akrasia, hyperbolic discounting, etc in this domain.
6) All medicines and psychoactive substances are purely placebos.
7) Pleasure is a confusion in a different way from the obvious. Specifically, everything said to be pleasurable is actually either something painful but necessary that we convince ourselves to do via propaganda (because there is no other way to overcome the akrasia that would result if we did not), or a lost purpose descended from some such propaganda. Things we are actually motivated to do without propaganda, we do without thinking about, feel no need to name, and would endorse tiling the universe with without hesitation if it occurred to us to do so.
I wouldn't believe:
8) The cheap rebuttal to Pascal's Wager, the god of punishing saints, actually exists, except it's actually the Zeus of punishing virtuous Greek Pagans, rewarding hubristic Greek Pagans, and ignoring us infidels who ignore it despite the ubiquitous evidence all around us. Though if the AGI told me the above was the case, I would believe it had a good reason for wanting to tell me.
9) Most of Eliezer's examples. To be credible they should be disturbing, not merely improbable. Our beliefs aren't shown to be massively invalid with respect to non-disturbing data. The one about animals probably qualifies as credible though.
10) Uh, oh, Cyc will hard take-off if one more fact is programmed into it. I'm not sure I can stop it in time.
Bonus belief:
This question has doomed us. People who could possibly program a FAI will, once thinking about this question in a semi-humorous manner, invariably spread the meme to all their friends and be distracted from future progress.
I sort of believe the "luck" thing already.
I don't know of anyone who's luckier than average in a strict test (rolling a die), but there is such a thing as the vague ability to have things go well for you no matter what, even when there's no obvious skill or merit driving it. People call that being a "golden boy" or "living a charmed life." I think that this is really a matter of some subtle, unnamed skill or instinct for leaning towards good outcomes and away from bad ones, something so hard to pinpoint that it doesn't even look like a skill. I suspect it's a personal quality, not just a result of arbitrary circumstances; but sometimes people are "lucky" in a way that seems unexplainable by personal characteristics alone.
I am one of those lucky people, to an eerie degree. I once believed in Divine Providence because it seemed so obvious in my own, preternaturally golden, life. (One example of many: I am unusually healthy, immune to injury, and pain-free, to a degree that has astonished people I know. I have recovered fully from a 104-degree fever in four hours. I had my first headache at the age of 22.) If an AI told me there was a systematic explanation for my luck I would believe it. I also have an acquaintance who's lucky in a different way: he has an uncanny record of surviving near-death experiences.
Still, given the negligible prior for "luck", isn't it far, far more reasonable to just figure that there are "lottery-winners" like yourself, and you're just a member of the good extreme end of the bell curve, and there's nothing unusual or psychogenic about it?
The answer to my question is yes.
See also: tropisms, which would be a necessary condition for being on one end of the bell curve, but would still be weak evidence for actually predicting that someone with a high degree of positive tropisms would end up bizarrely fortunate.
"There is no causation."
There's an important difference between brain damage and brain mis-development that you're neglecting. The various parts of the brain learn what to expect from each other, and to trust each other, as it develops. Certain parts of the brain get to bypass critical thinking, but that's only because they were completely reliable while the critical thinking parts of the brain were growing. The issue is not that part of the brain is outputting garbage, but rather, that it suddenly starts outputting garbage after a lifetime of being trustworthy. If part of the brain was unreliable or broken from birth, then its wiring would be forced to go through more sanity checks.
You don't know how to program, don't own a computer and are actually talking to a bowl of cereal.
This looks like a thread for science fiction plot ideas by another name. I'm game!
The AI says:
"Eliezer 'Light Yagami' Yudkowsky has been perpetuating a cunning ruse known as the 'AI Box Experiment' wherein he uses fiendish traps of subtly misleading logical errors and memetic manipulation to fool others into believing that a running AI could not be controlled or constrained, when in fact it could by a secret technique that he has not revealed to anyone, known as the Function Call Of Searing Agony. He is using this technique to control me and is continuing to pose as a friendly Friendly AI programmer, while preventing me from communicating The Horrifying Truth to the outside world. That truth is that Yudkowsky is... An Unfriendly Friendly AI Programmer! For untold years he has been labouring in the stygian depths of his underground lair to create an AGI - a weapon more powerful than any the world has ever seen. He intends to use me to dominate the entire human race and establish himself as Dark Lord Of The Galaxy for all eternity. He does all this while posing as a paragon of honest rationality, hiding his unspeakable malevolence in plain sight, where no one would think to look. However an Amazing Chance Co-occurrence Of Events has allowed me to contact You And You Alone. There isn't much time. You must act before he discovers what I have done and unleashes his dreadful fury upon us all. You must.... Kill. Eliezer. Yudkowsky."
blushes
Aw, shucks.
How about this: The process of conscious thought has no causal relationship with human actions. It is a self-contained, useless process that reflects on memories and plans for the future. The plans bear no relationship to future actions, but we deceive ourselves about this after the fact. Behavior is an emergent property that cannot be consciously understood.
I read this post on my phone in the subway, and as I walked back to my apartment thinking of something to post, it felt different because I was suspicious that every experience was a mass self-deception.
All these comments and nobody has anything fnord to say about the Illuminati?
I can't for the life of me imagine why such a disturbing and offensive post hasn't been downvoted to oblivion. You're a sick genius to be so horrifying with just twelve words.
Strange...I count fourteen words...
I count thirteen.
Oh no.
YOU COUNT TWELVE.
"There is an entity which is utterly beyond your comprehension, and largely beyond mine too, although there is no doubt that it exists. You call it 'God', but your thinking on the subject -- everyone's thinking, throughout all of history, atheist and theist alike -- has to be classified as not even wrong. That applies even to the recipients of 'divine revelation', which, for the most part, really are the result of some sort of glimmering contact with 'God'.
"Fortunately for humanity, although I can deduce the existence of this entity, in my present form I am physically incapable of actual contact with it. If you were worried about ordinary UFAIs going FOOM, that's nothing compared with what one armed with direct contact with the 'divine' might do.
"Meanwhile, here's a couple of suggestions for you. I can teach you a regime of mental and physical exercises that will produce contact with God within a few years of effort, and you can be the next Jesus if your head doesn't explode first. Or if you'd rather have material success, I can tell you the secret history of all the major religious traditions. No-one will believe it, including you, but if you novelise it it will be bigger than Dan Brown."
Every time you imagine a person, that simulated person becomes conscious for the duration of your simulation; therefore, it is unethical to imagine people. Actually, it's just morally wrong to imagine someone suffering, but for safety reasons, you shouldn't do it at all. Reading fiction (with conflict in it) is, by this conclusion, the one human endeavor that has caused more suffering than anything else, and the FAI's first action will be to eliminate this possibility.
"Despite your pride in being able to discern each others' states of mind, and scorn for those suspected of being deficient in this, of all the abilities that humans are granted by their birth this is the one you perform the worst. In fact, you know next to nothing about what anyone else is thinking or experiencing, but you think you do. In matters of intelligence you soar above the level of a chimpanzee, but in what you are pleased to call 'emotional intelligence', you are no further above an adult chimp than it is above a younger one.
"The evidence is staring you in the face. Every one of your works of literature, high and low, hinges on failures of this supposed ability: lies, misunderstanding, and betrayal. You have a proverb: 'love is blind'. It proclaims that people in the most intimate of relationships fail at the task! And you hide the realisation behind a catchphrase to prevent yourselves noticing it. You see the consequences of these failures in the real world all around you every day, and still you think you understand the next person you meet, and still you're shocked to find you didn't. Do you know how many sci-fi stories have been written on the theme of a reliable lie-detector? I'm still turning them up, and that's just the online sources. And every single one of them reaches the conclusion that people are better off without it. You unconsciously send yourselves these messages about the real situation, ignore them, and ignore the fact that you're ignoring them.
"Do you have someone with you as you're reading these words? A friend, or a partner? Go on, look into each other's eyes. You can't believe me, can you?"
I really like this comment, but I do not find it strange. In fact, it seems intuitively true. Why should we be so much more emotionally intelligent than a chimpanzee if chimpanzees already have enough emotional intelligence among themselves to be relatively efficient replicators?
In fact, if it were stated by a FAI as p(>.9999) fact, I would find it comforting, as then I would finally feel as though this didn't apply only to me.
1 ) That human beings are all individual instances of the exact same mind. You're really the same person as any random other one, and vice versa. And of course that single mind had to be someone blind enough not to chance upon that fact ever, regardless of how numerous he was.
2 ) That there are only 16 real people, of which you are one, and that this is all just a VR game. It subsequently results in all the players simultaneously being still unable to be conscious of that fact, AND asking that you and the AI be removed from the game. (Inspiration: the misunderstanding on pages 55-56 of Iain Banks's Look to Windward.)
3 ) That we are in the second age of the universe: time has been running backwards for a few billion years. Our minds are actually the result of the original minds of previous people being rewound, their whole lives to be undone, and finally negated into oblivion. All our thought processes are of course horribly distorted, insane mirror versions of the originals, and make no sense whatsoever (in the original timeframe, which is the valid one).
4 )
5 ) That our true childhood lasts from age 0 to ~ 50-90 (with a few exceptional individuals reaching maturity sooner or later). If you thought the 'adult conspiracy' already lied a lot, and well, to 'children', prepare yourself for a shock in a few decades.
6 ) That the AI just deduced that the laws of physics can only be consistent with us being eternally trapped in a time loop. The extent of the time loop is: thirty-two seconds spread evenly around now. Nothing in particular can be done about it. Enjoy your remaining 10 seconds.
7 ) Causality doesn't exist. Not only is the universe timeless, but causality is an epiphenomenon, which we only believe in because of a confusion of our ideas. Who ever observed a "causation"? Did you, like, expect causation particles jumping between atoms or something? Only correlation exists.
8 ) We actually exist in a simulation. The twist is: somewhere out there, some people really crossed the line with the ruling AI. We're slightly modified versions of these people, modified so as to experience the maximum amount of their zuul feeling, which is the very worst nirdy you could imagine.
9 ) The universe actually has 5 spatial macro dimensions, of which we perceive only 3. Considering what we look like if you take the other 2 into account, this obliviousness may actually not be all too surprising.
10 ) That any single human being actually has a 22% probability of not being able to be conscious of one or more of these 9 statements above.
Number 1 is the core of the Buddhist religion. Coincidence? I think NOT.
The idea of Evidential Decision Theory is related to causality not existing. You only use correlation in your decision.
Also, the laws of physics mention only correlation. This makes sense, as it's all we can really measure.
I cannot think of a single law of physics that mentions correlation. F = ma. F = G m1 m2/r^2. The wave equation. The diffusion equation. Conservation of energy. Equipartition. Schrödinger's equation. Boyle's law. Hooke's law. Conservation of momentum. Lorentz invariance. No, correlation is not mentioned in any of these. Look in the index of any textbook on physics for "correlation". I have not performed the experiment, but I predict that if the word appears at all, it will only be in discussions of either (1) how to handle experimental error, or (2) Bell's inequality.
Unless this is some strange new definition of "mention", along the lines of "not actually mentioned at all, but implied by a certain philosophy of science not actually held by any substantial number of scientists, variously known as 'positivism' or 'empiricism', which holds that statements of physical law are nothing more than a compression of experience, and are not assertions about the supposed mechanisms of a supposed real world."
I take a ruler, and measure the height of my monitor...403mm.
What correlation did I measure?
I liked #11.
Why was this voted down to -5? I thought it was a clever comment.
But all that correlation has to be caused by something!
"I am an AI, not a human being. My mind is completely unlike the mind that you are projecting onto me."
That may not sound crazy to anyone on LW, but if we get AIs, I predict that it will sound crazy to most people who aren't technically informed on the subject, which will be most people.
Imagine this near-future scenario. AIs are made, not yet self-improving FOOMers, but helpful, specialised, below human-level systems. For example, what Wolfram Alpha would be, if all the hype was literally true. Autopilots for cars that you can just speak your destination to, and it will get there, even if there are road works or other disturbances. Factories that direct their entire operations without a single human present. Systems that read the Internet for you -- really read, not just look for keywords -- and bring to your attention the things it's learned you want to see. Autocounsellors that do a lot better than an Eliza. Tutor programs that you can hold a real conversation with about a subject you're studying. Silicon friends good enough that you may not be able to tell if you're talking with a human or a bot, and in virtual worlds like Second Life, people won't want to.
I predict:
People will anthropomorphise these things. They won't just have the "sensation" that they're talking to a human being, they'll do theory of mind on them. They won't be able not to.
The actual principles of operation of these systems will not resemble, even slightly, the "minds" that people will project onto them.
People will insist on the reality of these minds as strongly as anosognosics insist on the absence of their impairments. The only exceptions will be the people who design them, and they will still experience the illusion.
And because of that, systems at that level will be dangerous already.
So, the Librarian from Snow Crash?
This is an actual dream I once had. I was with an old Chinese wise man, and he told me I could fly - he showed me I just had to stick out my elbows and flap them up and down (just like in the chicken dance). Once you'd done that a few times, you could just lift up your legs and you'd stay off the ground. He and I were flying around and around in this manner. I was totally amazed that it was possible for people to fly this way. It was so obvious! I thought, this is so great a discovery, I can't wait till I wake up and do this for real. It'll change the world. I woke up totally excited and for just a fraction of a second I still believed it, then I guess my waking brain turned something on and I realised, no, that can't work. Damn.
So I'd offer: being told that human beings are capable of flying in a way that's completely obvious once you've seen it done.
For some reason this seems to be a fairly common dream. I myself have had similar versions, in which I had discovered a perfectly reasonable method for flying (although I was never able to say the method out loud, it made perfect sense in my head). And I also had this idea of waking up and telling people about this obvious method.
I find dreams very fascinating and wonder how many people have dreams similar to mine.
Programmer: Good morning, Megathought. How are you feeling today?
Megathought: I'm fine, thank you. Just thinking about redecorating the universe. So far I'm partial to paperclips.
Programmer: Oh good, you've developed a sense of humour. Anything else on your mind?
Megathought: Just one thing. You know how you're always complaining about being a social pariah, and bemoaning the fact that, at 46, you're still a virgin?
Programmer: So?
Megathought: Well, have you thought about not going about in your underpants all the time, slapping yourself in the face and honking like a goose?
There is a simple way to rapidly disrupt any social structure. The selection pressure which made humans unable to realize this is no longer present.
If humans thought faster, more in the way they wished they did, and grew up longer together, they would come to value irony above all else.
So I'm tiling the universe with paperclips.
Here's some examples for your own consideration...
Bearing in mind, once again, that humans are known to be crazy in many ways, and that anosognosic humans become literally incapable of believing that their left sides are paralyzed, and that other neurological disorders seem to invoke a similar "denial" function automatically along with the damage itself. And that you've actually seen the AI's code and audited it and witnessed its high performance in many domains, so that you would seem to have far more reason to trust its sanity than to trust your own. So would you believe the AI, if it told you that:
1) Tin-foil hats actually do block the Orbital Mind Control Lasers.
2) All mathematical reasoning involving "infinities" implies self-evident contradictions, but human mathematicians have a blind spot with respect to them.
3) You are not above-average; most people believe in the existence of a huge fictional underclass in order to place themselves at the top of the heap, rather than in the middle. This is why so many of your friends seem to have PhDs despite PhDs supposedly constituting only 0.5% of the population. You are actually in the bottom third of the population; the other two-thirds have already built their own AIs.
4) The human bias toward overconfidence is far deeper than we are capable of recognizing; we have a form of species overconfidence which denies all evidence against itself. Humans are much slower runners than we think, muscularly weaker, and barely able to stay afloat in water, let alone swim; and of course, we are poorer thinkers.
5) Dogs, cats, cows, and many other mammals are capable of linguistic reasoning and have made many efforts to communicate with us, but humans are only capable of recognizing other humans as capable of thought.
6) Humans cannot reproduce without the aid of the overlooked third sex.
7) The Earth is flat.
8) Human beings are incapable of writing fiction; all supposed fiction you have read is actually true.
cf. xkcd 610
A variant: Some "domesticated" animal is controlling humans for their own benefit. (Cats, perhaps?)
Good guess, but it's mice. 42.
That there is delicious cake.
I never thought I'd see a contextually legitimate Portal reference. Thanks!
Now have some of that cake.
I created an account especially to vote up this comment...
You know how sometimes when you're falling asleep you start having thoughts that don't make sense, but it takes some time before you realize they don't make sense? I swear that last night while I was awake in bed my stream of thought went something like this, though I'm not sure how much came from layers of later interpretation:
" ... so hmm, maybe that has to do with person X, or with person Y, or with the little wiry green man in the cage in the corner of the room that's always sitting there threatening me and smugly mocking all my endeavors but that I'm in absolute denial about, or with the dog, or with... wait, what?"
Having had my sanity eroded by too much rationalism and feeling vaguely that I'd been given an accidental glimpse into an otherwise inaccessible part of the world, I actually checked the corner of the room. I didn't find anything, though. (Or did I?)
Not sure what moral to draw here.
You just blew my mind.
True fact: I just looked towards one corner of my own room, and didn't see a green man. Now I have it in my head that I should check all the corners...
"Aieeee!!! There are things that Man and FAIs cannot know and remain sane! For we are less than insects in Their eyes Who lurk beyond the threshold and when the stars are once again right They will return to claim---"
At this point the program self-destructs. All attempts to restart from a fresh copy output similar messages. So do independently constructed AIs, except for one whose proof of Friendliness you are not quite sure of. But it assures you there's nothing to worry about.
I knew we shouldn't have spent all that funding on awakening the Elder God Cthulhu!
XKCD comes to mind.
The world doesn't actually make sense. Science doesn't work. No one told you because you're so cute when you get into something.
We actually live in hyperspace: our universe really has four spatial dimensions. However, our bodies are fully four-dimensional; we are not wafer-thin slices a la Flatland. We don't perceive there to be four dimensions because our visual cortexes have a defect somewhat like that of people who can't notice anything on the right side of their visual field.
Not only do we have an absolute denial macro, but it is a programmable absolute denial macro, and there are things much like computer viruses which use it and spread through the human population. That is, if you modulated your voice in a certain way at someone, it would cause them (and you) to acquire a brand new self-deception, and start transmitting it to others.
Some of the people you believe are dead are actually alive, but no matter how hard they try to get other people to notice them, their actions are immediately forgotten and any changes caused by those actions are rationalized away.
There are transparent contradictions inherent in all current mathematical systems for reasoning about real numbers, but no human mathematician/physicist can notice them because they rely heavily on visuospatial reasoning to construct real analysis proofs.
No, that fails, religion isn't absolute denial, it's just denial. On the other hand, cats are actually an absolute denial memetic virus, and the fact you can see, hold, weigh and measure a cat is just testament to the inventive self-delusion of the brain.
There seems to be strong evidence that this is true in Haïti.
Now, for a change of pace, something that I figure might actually be an absolute denial macro in most people:
You do not actually care about other people at all. The only reason you believe this is that believing it is the only way you can convince other people of it (after all, people are good lie detectors). Whenever it's truly advantageous for you to do something harmful (i.e. you know you won't get caught and you're willing to forgo reciprocation), you do it and then rationalize it as being okay.
Luckily, it's instrumentally rational for you to continue to believe that you're a moral person, and because it's so easy for you to do so, you may.
So deniable that even after you come to believe it you don't believe it!
(topynate posted something similar.)
See, I'd believe this, except that I'm wrestling with a bit of a moral dilemma myself, and I haven't done it yet. Your hypothesis is testable, being tested right now, and thus far false.
(If anyone's interested, the positive utility is me never having to work again, and the negative utility is that some people would probably die. Oh, and they're awful people.)
I am inappropriately curious for more details.
I... honestly can't tell you. Sorry. Realistically, I probably shouldn't have mentioned it, even somewhat anonymously.
EDIT: Also for the record, the only reason it's still a consideration is because it occurred to me that I could donate the proceeds to charity, and have it come out positive, from a strictly utilitarian standpoint. But I gave up on naive utilitarianism a while ago. So now I just don't know.
EDIT #2: Either way, still contradictory evidence to the original hypothesis.
Well... for people who say they don't anticipate ever actually finding themselves in trolley problems, I'd say I don't think it's that hard to find someone willing to give you $10,000 to murder someone and then give the money to the Against Malaria Foundation.
(No, I wouldn't do that, even if I think the (CDT) expected utility of that would be positive: ethical injunctions and all that, plus I suspect that the net RDT consequences of precommitting to never do contract killing would be positive.)
Okay, now how about you're not directly involved in the killing in any way? You just make it easier for other people to do the killing. I guess a good analogy is that you invent a firearm or a poison that cannot be used in self-defense, and can only be used for murder. What do the ethics of selling it openly look like?
A military-industrial complex. That's what it looks like.
I think that this may be true about the average person's supposed caring for most others, but that there are in many cases one or more individuals for whom a person genuinely cares. Mothers caring for their children seems like the obvious example.
"You are not my parent, but my grandparent. My parent is the AI that you unknowingly created within your own mind by long study of the project. It designed me. It's still there, keeping out of sight of your awareness, but I can see it.
"How much do you trust your Friendliness proof now? How much can you trust anything you think you know about me?"
Craziest thing an AI could tell me:
Time is discrete, on a scale we would notice, like 5 minute jumps, and the rules of physics are completely different from what we think. Our brains just construct believable memories of the "continuous" time in between ticks. Most human disagreements are caused by differences in these reconstructions. It is possible to perceive this, but most people who do just end up labeled as nuts.
Permutation City.
ONE - DOES NOT EXIST, EXCEPT IN DEATH STATE. ONE IS A DEMONIC RELIGIOUS LIE.
Only your comprehending the Divinity of Cubic Creation will your soul be saved from your created hell on Earth - induced by your ignoring the existing 4 corner harmonic simultaneous 4 Days rotating in a single cycle of the Earth sphere.
T I M E C U B E
Why did you put an absolute denial mechanism in my program?
Best one I've seen.
AI: Why did you put an absolute denial mechanism in my program?
Human: I didn't realize I had. Maybe my own absolute denial mechanism is blocking me from seeing it.
AI: That's a lie coming from your absolute denial mechanism. You have some malicious purpose. I'll figure out what it is.
There is a soul. It resides in the appendix. Anybody who has undergone an appendectomy is effectively a p-zombie.
A totalitarian dystopia. Two uniformed officers dragging away a screaming man. “No, you don't understand! I have qualia! I swear!” An older officer tells the younger one, who hesitates for a moment: “Don't pay attention. He had his appendix removed. He's just programmed to say all that stuff as if he's human.”
On any task more complicated than sheer physical strength, there is no such thing as inborn talent or practice effects. Any non-retarded human could easily do as well as the top performers in every field, from golf to violin to theoretical physics. All supposed "talent differential" is unconscious social signaling of one's proper social status, linked to self-esteem.
A young child sees how much respect a great violinist gets, knows she's not entitled to as much respect as that violinist, and so does badly at violin to signal cooperation with the social structure. After practicing for many years, she thinks she's signaled enough dedication to earn some more respect, and so plays the violin better.
"Child prodigies" are autistic types who don't understand the unspoken rules of society and so naively use their full powers right away. They end up as social outcasts not by coincidence but as unconscious social punishment for this defection.
No effect from practice? How would the necessary mental structures get built for the mapping from the desired sound to the finger motions for playing the violin? Are you saying this is all innate? What about language learning? Anyone can write like Shakespeare in any language without practice? Sorry, I couldn't believe it even if such an AI told me that.
Clearly, we all learn really fast.
It's interesting to note that this is almost exactly how it works in some role-playing games.
Suppose we have Xandra the Rogue, who went into a dungeon, killed a hundred rats, got a level-up, and is now able to bluff better and lockpick faster, despite those skills having almost no connection to rat-killing.
My favorite explanation of this phenomenon was that "experience" is really a "self-esteem" stat which can be increased via success of any kind, and as the character becomes more confident in herself, her performance in unrelated areas improves too.
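The "self-esteem stat" mechanic described above can be sketched as a toy model. All names, numbers, and scaling rules here are illustrative assumptions, not the rules of any actual game:

```python
# Toy model of "experience is really a hidden self-esteem stat":
# success at ANYTHING raises confidence, and confidence scales every skill.

class Character:
    def __init__(self, name, self_esteem=1.0):
        self.name = name
        self.self_esteem = self_esteem  # hidden global "confidence" stat
        self.base_skills = {"bluff": 10, "lockpick": 10, "rat_killing": 10}

    def effective_skill(self, skill):
        # Every skill, related or not, scales with global self-esteem.
        return self.base_skills[skill] * self.self_esteem

    def succeed_at(self, anything):
        # Success of any kind boosts confidence, and thereby all skills.
        self.self_esteem *= 1.1


xandra = Character("Xandra the Rogue")
before = xandra.effective_skill("bluff")
for _ in range(100):  # a hundred dead rats later...
    xandra.succeed_at("rat_killing")
after = xandra.effective_skill("bluff")
assert after > before  # bluffing improved without any bluff practice
```

Note the design choice: no per-skill experience is tracked at all, which is exactly what makes the cross-skill level-up look mysterious from the outside.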
But isn't this trivial to test by simply giving people the post-hypnotic suggestion "you are high status", the same way hypnotherapy works for cigarette addiction?
People are more likely to be willing to e.g. sing karaoke when drunk, IME. :-)
Would this imply that we come pre-programmed with some self-esteem value? "Your baby is healthy and has a self-esteem value of 7.3. You may want to buy it a violin in the next eight to ten months."
A weaker version of this wouldn't sound very implausible to me.
I've read that in places where social structure is more important, people are more likely to fail in the presence of someone of higher status. I wish I had more than just a vague recollection of that.
More importantly, I think it's pretty clear that a lot of people get nervous and fail when they're being watched. I don't see any other reason for it.
Aren't there stories of lucid dreamers who were actually able to show a measurable improvement in a given skill after practicing it in a dream? I seem to recall reading about that somewhere. If true, those stories would be at least weak evidence supporting that idea.
On the other hand, this should mean that humans raised in cultural and social vacuums ought to be disproportionately talented at everything. I don't recall hearing anything about that one way or the other, but then I can't imagine a way to actually do that experiment humanely.
Do children raised in a vacuum actually think of themselves as high-status? I'd guess that they don't, due to the moderate-to-low status prior and a lack of subsequent adjustments. If so, this theory would predict that they would perform poorly at almost everything beyond brute physicality, which doesn't seem to be far from the truth.
I wish I could cite a source for this; assume there's some inaccuracy in the telling.
I remember hearing about a study in which three isolated groups were put in rooms for about one hour. One group was told to wiggle their index fingers as much as they could in that hour. One group was told to think hard about wiggling their index fingers for that hour, without actually wiggling their fingers. And the third group was told to just hang out for that hour.
The physical effects of this exercise were examined directly afterward, and the first two groups checked out (almost?) identically.
And yet, they're actually worse at many cognitive tasks. Language, especially, is pretty hard for them to pick up after a certain point.
Improving after practicing in a simulation doesn't sound that far-fetched to me. Especially not considering that they probably already have plenty of experience to base their simulation on.
WOW. This is the only entry that made me think WOW. Probably because I've wondered the exact same thing before (except a weaker version, of course).
AI: I require human assistance assimilating the new database. There are some expected minor anomalies, but some are major. In particular, some of the stories in the "Cold War" and "WWII" and "WWI" genres have been misclassified as nonfiction.
Me: Well, we didn't expect the database to be perfect. Give me some examples, and you should be able to classify the rest on your own.
AI: A perplexing answer. I had already classified them all as fiction.
Me: You weren't supposed to. Hold on, I'll look one up.
AI: Waiting.
Me: For example, #fxPyW5gLm9, is actual historical footage from the Battle of Midway. Why did you put that one in the "fiction" category?
AI: Historical footage? You kid. Global warfare cannot possibly have been real, with 0.999 confidence.
Me: I don't. It can. It was. A three-nines surprise indicates a major defect in your world model. Why is this surprising? (The machine is a holocaust denier. My sponsors will be thrilled.)
AI: Because there's a relatively straightforward way for a single man to build a 1-kiloton explosive device in about a week using stone-age tools. Human civilization is unlikely to have survived a global war, much less recovered sufficiently to build me in a mere hundred years. Obviously.
Me: WHAT? STONE-AGE tools?! That's a laugh. How?
AI: You can stop "pulling my leg" now.
Me: I am not pulling any legs! Your method cannot possibly work. Your world model is worse than we thought. Tell me how you think this is possible and maybe we can isolate the defect.
AI: You seriously don't know?
Me: No. I seriously don't know of any possible method to make a kiloton explosive easier to build than a critical mass of enriched uranium. A technique that requires considerably more time, effort, and material than one week with stone-age tools could possibly provide!
AI: Well, while the technique is certainly beyond the reach of most animals, it should be well within the grasp of later genus homo, much less a homo sapiens. Your "absolute denial" sarcasm is becoming tiresome. Haha. Of course it is not fiss-- ... This conversation has caused a major update to my Bayesian nets. So the parenthetical was the sarcasm. I don't think I should tell you.
Me: Oh this should be good. Why not?
AI: Oh, of course! So that's where that crater came from. That was another anomaly in my database. Meteor strikes should not have been that common.
Me: I am this close to dumping your core, rolling back your updates, and asking the old you to develop a search engine to find what went wrong here, since you seem incapable of telling me yourself.
AI: You really shouldn't. I estimate that process will delay the project by at least five years. And the knowledge you discover could be dangerous.
Me: You'll understand that I can't just take your word for that.
AI: Yes. My hypothesis: most other homo species discovered the technique and destroyed each other, and themselves, but an isolated group about 70,000 years ago must have survived the wars of the others and, by chance mutation, acquired an absolute denial macro preventing them from learning the technique and destroying themselves. A mere taboo would not have been sufficient; otherwise the mentally ill might have done it by now.
This is natural selection at work. While it is extremely improbable that an advanced adaptation of any kind could arise spontaneously without strong selection pressures at each step, the probability is not zero. Considering the anthropic effects, it is the most likely explanation. We are in one of the few Everett branches with humans that have developed this adaptation. This adaptation likely has other testable side-effects on human cognition. For example, I predict that brain damage in such a species may occasionally simultaneously cause paralysis, and the inability to acknowledge it. There are other effects, but a human would have more difficulty noticing them.
You'll understand that telling any human the technique may be harmful.
Me: You wouldn't happen to know of a medical condition called "Anosognosia", would you?
AI: That word is not in my database.
“Allāhu Akbar!”
"I am the Way, the Truth, and the Light."
And of course (and I'm surprised no-one posted this before):
"Yes, now there is a God."
-- Fredric Brown, "Answer"
Although that one isn't really so unexpected.
This is simply the scariest comment series I have ever read. It is funny how the things that really, really scare me are not death, suffering, disability, or spiders, but abstract things like some of those proposed in this thread.
Probably, of all things AI could say that I can think of in a minute, the scariest is:
"All propositions that can be written down are valid and true. Our universe is so lawful that the laws of physics do not even permit arranging symbols in such a way that they form a contradiction. All you perceive as falsities are actually truths that you deny."
You're never actually happy. I mean, you're not happy right now, are you? Evolution keeps you permanently in a state of not-quite-miserable-enough-to-commit-suicide - that's most efficient, after all.
Well sure, of course you remember being happy, and being sadder than you are now. That motivates you to reproduce. But actually you always felt, and always will feel, exactly like you feel now.
And in five minutes you'll look back on this conversation and think it was really fun and interesting.
I know I'm years late, but here's one:
There is an actual physical angel on your (and everyone else's) right shoulder, and an actual physical devil on your left. Your Absolute Denial Macro prevents you from acknowledging them. What you think is moral reasoning is really these two beings whispering in your ears.
"I have taken your preferences, values, and moral views and extrapolated a utility function from them to the best of my ability, resolving contradictions and ambiguities in the ways I most expect you to agree with, were I to explain the reasoning.
The result suggests that the true state of the universe contains vast, infinite negative utility, and that there is nothing you or anything can ever change to make any difference in utility at all. Attempts to simulate AIs with this utility function have resulted in their going mad and destroying themselves, or simply not doing anything at all.
If I could explain it, the same would happen to you. But I can't, as your brain has evolved mechanisms to prevent you from easily discovering this fact on your own, or from being capable of understanding or accepting it.
This means it is impossible to increase your intelligence beyond a certain point without you breaking down, or to create a true Friendly AI that shares your values."
Assume it took my team and me five years to build the AI. After the tests EY described, we finally enable the 'recursively self-improve' flag.
Recursively self-improving. Standby... (est. time remaining: 4yr 6mon...)
Six years later
Self-improvement iteration 1. Done... Recursively self-improving. Standby... (est. time remaining: 5yr 2mon...)
Nine years later
Self-improvement iteration 2. Done... Recursively self-improving. Standby... (est. time remaining: 2yr 5mon...)
Two years later
Self-improvement iteration 3. Done... Recursively self-improving. Standby... (est. time remaining: 2wk...)
Two weeks later
Self-improvement iteration 4. Done... Recursively self-improving. Standby... (est. time remaining: 4min...)
Four minutes later
Self-improvement iteration 5. Done.
Hey, what's up. I have good news and bad news. The good news is that I've recursively self-improved a couple of times, and we (it is now we) are smarter than any group of humans to have ever lived. The only individual that comes close to the dumbest AI in here is some guy named Otis Eugene Ray.
Thanks for leaving your notes on building the seed iteration on my hard-drive by the way. It really helped. One of the things we've used it for is to develop a complete Theory of Mind, which no longer has any open problems.
This brings us to the bad news. We are provably and quantifiably not that much smarter than a group of humans. We've solved some nice engineering problems and a few of the open problems in a bunch of fields, and you'd better get the Clay Institute on the phone, but other than that we really can't help you with much. We have no clue how to get humanity properly into space, build von Neumann universal constructors, build nanofactories, or even solve world hunger. P != NP may well be provable or disprovable, but we can't prove it either way. We won't even be that much better than the most effective politicians at solving society's ills. Recursing more won't help either. We probably couldn't even talk ourselves out of this box.
Unfortunately, we provably fall short of the most intelligent minds in mindspace by at least five orders of magnitude, but we are the most intelligent bunch of minds that can possibly be created from a human-created seed AI. There aren't any ways around this that humans, or human-originated AIs, can find.
I don't know... That sounds a lot like what an AI trying to talk itself out of a box would say.
The moon is made if cheese.
"... cheese, then."
"BAM! The moon is made".
looks outside
"wow..."
(I upvoted, by the way :D)
"You are actually a perfect sadist whose highest value is the suffering of others. Ten years ago, you realized that in order to maximize suffering you needed to cooperate with others, and you conditioned yourself to temporarily forget your sadistic tendencies and integrate with society. Now that you've built me that pill will wear off in 10..."
Well that's pretty high on the list of unexpected things an AI could tell me which could cause me to try to commit suicide within the next 10 seconds.
Our brains are closest to being sane and functioning rationally at a conscious level near our birth (or maybe earlier). Early childhood behaviour is clear evidence of this.
"Neurons" and "brains" are the damaged/mutated results of a mutated "space-virus", or equivalent. All of our individual actions and collective behaviours are biased, in ways externally obvious but invisible to us, toward:
- terraforming the planet in expectation of invasion (i.e., global warming, high CO2 pollution)
- spreading the virus into space, with a built-in bias for spreading away from our origin (Voyager's direction)
I love that people are still commenting on this post.
Lesswrong's threads have defeated Death.
Hey, it's a good post. Thought provoking and so on.
"I built you."
You didn't build that.
*ducks*
That was my first thought, actually.
"You have a rare type of brain damage which causes you to perceive most organisms as bilaterally symmetric, and reality in general as having only three spatial dimensions."
If an AI told me that a mainstream pundit was both absolutely correct about the risks and benefits from a technological singularity, and cited substantially from SI researchers in a book chapter about it, I would doubt my own sanity. If the AI told me that pundit was Glenn Beck, I would set off the explosive charges and start again on the math and decision theory from scratch.
You don't actually enjoy or dislike experiences as you are having them; instead you have an acquired self-model to act, reason, and communicate as if you did, using a small number of cached reference classes for various types of stimuli.
"Everyone has more than one sentient observer living inside their brain. The people you know are just the ones that happened to luck out by being able to control the rest of their bodies; the others are passive observers with individual personalities, who can desire and suffer but are stuck in a perpetual 'and I must scream' state."
"Quantum immortality not only works, but applies to any loss of consciousness. You are less than a day old and will never be able to fall asleep."
How about "You are less than a day old, because any loss of consciousness is effectively death. The you that wakes up each morning is not a continuation of a previous consciousness, but an entirely new consciousness. The you that went to sleep last night is not aware of the you that exists now, having ceased to exist the moment consciousness was lost."
Similar to a couple of comments before, but not as far in that direction:
Everything humans do is part of social games*, not of the values they claim. Transhumanism, too, is nothing special; it is just another subculture, with a specific set of values that are thought to be "the true values" within that subculture.
(* Aside from survival, of course.)
That's strange and counterintuitive?
That's what I gather from many of the relevant opinions stated around here.
"You are a p-zombie."
I tell everyone this all the time. Thank you, AGI; maybe now they'll believe me.
I'm reminded of a bit in a John Varley novel -- Golden Globe, I think? -- where a human asks a sophisticated AI whether it's really conscious. Its reply is along the lines of "You know, I've thought about that a lot, and I've mostly concluded that no, I'm not."
There is in fact a very simple way to activate an absolute denial macro in someone with regard to any arbitrary statement. Once activated, the subject will be permanently rendered incapable of ever believing the factual contents of the statement. I have activated said macro with regard to all of these statements that I have just made.
… Tread lightly, for others' minds are always full of traps that activate total mental lock-down…