Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

My Bayesian Enlightenment

05 October 2008 04:45PM

Followup to The Magnitude of His Own Folly

I remember (dimly, as human memories go) the first time I self-identified as a "Bayesian".  Someone had just asked a malformed version of an old probability puzzle, saying:

If I meet a mathematician on the street, and she says, "I have two children, and at least one of them is a boy," what is the probability that they are both boys?

In the correct version of this story, the mathematician says "I have two children", and you ask, "Is at least one a boy?", and she answers "Yes".  Then the probability is 1/3 that they are both boys.

But in the malformed version of the story—as I pointed out—one would common-sensically reason:

If the mathematician has one boy and one girl, then my prior probability for her saying 'at least one of them is a boy' is 1/2 and my prior probability for her saying 'at least one of them is a girl' is 1/2.  There's no reason to believe, a priori, that the mathematician will only mention a girl if there is no possible alternative.

So I pointed this out, and worked the answer using Bayes's Rule, arriving at a probability of 1/2 that the children were both boys.  I'm not sure whether or not I knew, at this point, that Bayes's rule was called that, but it's what I used.
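The two answers can be checked by direct enumeration. Below is a minimal sketch of both calculations: the "filtering" version (condition on the event and count what's left) and the Bayesian version, under the assumed likelihood from the reasoning above, that a mathematician with one boy and one girl mentions the boy with probability 1/2.

```python
from fractions import Fraction

# Enumerate the four equally likely two-child families: (elder, younger).
families = [(a, b) for a in "BG" for b in "BG"]
prior = Fraction(1, 4)

# Correct version: you ask "Is at least one a boy?" and she says yes.
# Condition on the event "at least one boy" and count survivors.
at_least_one_boy = [f for f in families if "B" in f]
p_filter = Fraction(sum(1 for f in at_least_one_boy if f == ("B", "B")),
                    len(at_least_one_boy))
assert p_filter == Fraction(1, 3)

# Malformed version: she volunteers "at least one of them is a boy."
# Treat the statement itself as evidence, with the assumed likelihood
# that a mixed-sex parent says "boy" or "girl" with probability 1/2 each.
def p_says_boy(family):
    boys = family.count("B")
    if boys == 2:
        return Fraction(1)
    if boys == 1:
        return Fraction(1, 2)  # could equally have mentioned the girl
    return Fraction(0)

# Bayes's Rule: P(BB | statement) = P(statement | BB) P(BB) / P(statement).
evidence = sum(prior * p_says_boy(f) for f in families)
posterior_bb = prior * p_says_boy(("B", "B")) / evidence
assert posterior_bb == Fraction(1, 2)
```

The only difference between the two calculations is whether the mixed-sex families contribute full or half weight to the evidence, which is exactly the point about the unjustified probability of 1.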

And lo, someone said to me, "Well, what you just gave is the Bayesian answer, but in orthodox statistics the answer is 1/3.  We just exclude the possibilities that are ruled out, and count the ones that are left, without trying to guess the probability that the mathematician will say this or that, since we have no way of really knowing that probability—it's too subjective."

I responded—note that this was completely spontaneous—"What on Earth do you mean?  You can't avoid assigning a probability to the mathematician making one statement or another.  You're just assuming the probability is 1, and that's unjustified."

To which the one replied, "Yes, that's what the Bayesians say.  But frequentists don't believe that."

And I said, astounded: "How can there possibly be such a thing as non-Bayesian statistics?"

continue reading »

The Magnitude of His Own Folly

30 September 2008 11:31AM

Followup to My Naturalistic Awakening, Above-Average AI Scientists

In the years before I met that would-be creator of Artificial General Intelligence (with a funded project) who happened to be a creationist, I would still try to argue with individual AGI wannabes.

In those days, I sort-of-succeeded in convincing one such fellow that, yes, you had to take Friendly AI into account, and no, you couldn't just find the right fitness metric for an evolutionary algorithm.  (Previously he had been very impressed with evolutionary algorithms.)

And the one said:  Oh, woe!  Oh, alas!  What a fool I've been!  Through my carelessness, I almost destroyed the world!  What a villain I once was!

Now, there's a trap I knew better than to fall into—

—at the point where, in late 2002, I looked back to Eliezer1997's AI proposals and realized what they really would have done, insofar as they were coherent enough to talk about what they "really would have done".

When I finally saw the magnitude of my own folly, everything fell into place at once.  The dam against realization cracked; and the unspoken doubts that had been accumulating behind it, crashed through all together.  There wasn't a prolonged period, or even a single moment that I remember, of wondering how I could have been so stupid.  I already knew how.

And I also knew, all at once, in the same moment of realization, that to say, I almost destroyed the world!, would have been too prideful.

It would have been too confirming of ego, too confirming of my own importance in the scheme of things, at a time when—I understood in the same moment of realization—my ego ought to be taking a major punch to the stomach.  I had been so much less than I needed to be; I had to take that punch in the stomach, not avert it.

continue reading »

My Naturalistic Awakening

25 September 2008 06:58AM

Followup to Fighting a Rearguard Action Against the Truth

In yesterday's episode, Eliezer2001 is fighting a rearguard action against the truth.  Only gradually shifting his beliefs, admitting an increasing probability in a different scenario, but never saying outright, "I was wrong before."  He repairs his strategies as they are challenged, finding new justifications for just the same plan he pursued before.

(Of which it is therefore said:  "Beware lest you fight a rearguard retreat against the evidence, grudgingly conceding each foot of ground only when forced, feeling cheated.  Surrender to the truth as quickly as you can.  Do this the instant you realize what you are resisting; the instant you can see from which quarter the winds of evidence are blowing against you.")

Memory fades, and I can hardly bear to look back upon those times—no, seriously, I can't stand reading my old writing.  I've already been corrected once in my recollections, by those who were present.  And so, though I remember the important events, I'm not really sure what order they happened in, let alone what year.

But if I had to pick a moment when my folly broke, I would pick the moment when I first comprehended, in full generality, the notion of an optimization process.  That was the point at which I first looked back and said, "I've been a fool."

continue reading »

Fighting a Rearguard Action Against the Truth

24 September 2008 01:23AM

When we last left Eliezer2000, he was just beginning to investigate the question of how to inscribe a morality into an AI.  His reasons for doing this don't matter at all, except insofar as they happen to historically demonstrate the importance of perfectionism.  If you practice something, you may get better at it; if you investigate something, you may find out about it; the only thing that matters is that Eliezer2000 is, in fact, focusing his full-time energies on thinking technically about AI morality; rather than, as previously, finding a justification for not spending his time this way.  In the end, this is all that turns out to matter.

But as our story begins—as the sky lightens to gray and the tip of the sun peeks over the horizon—Eliezer2001 hasn't yet admitted that Eliezer1997 was mistaken in any important sense.  He's just making Eliezer1997's strategy even better by including a contingency plan for "the unlikely event that life turns out to be meaningless"...

...which means that Eliezer2001 now has a line of retreat away from his mistake.

I don't just mean that Eliezer2001 can say "Friendly AI is a contingency plan", rather than screaming "OOPS!"

I mean that Eliezer2001 now actually has a contingency plan.  If Eliezer2001 starts to doubt his 1997 metaethics, the Singularity has a fallback strategy, namely Friendly AI.  Eliezer2001 can question his metaethics without it signaling the end of the world.

And his gradient has been smoothed; he can admit a 10% chance of having previously been wrong, then a 20% chance.  He doesn't have to cough out his whole mistake in one huge lump.

If you think this sounds like Eliezer2001 is too slow, I quite agree.

continue reading »

That Tiny Note of Discord

23 September 2008 06:02AM

Followup to The Sheer Folly of Callow Youth

When we last left Eliezer1997, he believed that any superintelligence would automatically do what was "right", and indeed would understand that better than we could; even though, he modestly confessed, he did not understand the ultimate nature of morality.  Or rather, after some debate had passed, Eliezer1997 had evolved an elaborate argument, which he fondly claimed to be "formal", that we could always condition upon the belief that life has meaning; and so cases where superintelligences did not feel compelled to do anything in particular, would fall out of consideration.  (The flaw being the unconsidered and unjustified equation of "universally compelling argument" with "right".)

So far, the young Eliezer is well on the way toward joining the "smart people who are stupid because they're skilled at defending beliefs they arrived at for unskilled reasons".  All his dedication to "rationality" has not saved him from this mistake, and you might be tempted to conclude that it is useless to strive for rationality.

But while many people dig holes for themselves, not everyone succeeds in clawing their way back out.

And from this I learn my lesson:  That it all began—

—with a small, small question; a single discordant note; one tiny lonely thought...

continue reading »

The Sheer Folly of Callow Youth

19 September 2008 01:30AM

"There speaks the sheer folly of callow youth; the rashness of an ignorance so abysmal as to be possible only to one of your ephemeral race..."
—Gharlane of Eddore

Once upon a time, years ago, I propounded a mysterious answer to a mysterious question—as I've hinted on several occasions.  The mysterious question to which I propounded a mysterious answer was not, however, consciousness—or rather, not only consciousness.  No, the more embarrassing error was that I took a mysterious view of morality.

I held off on discussing that until now, after the series on metaethics, because I wanted it to be clear that Eliezer1997 had gotten it wrong.

When we last left off, Eliezer1997, not satisfied with arguing in an intuitive sense that superintelligence would be moral, was setting out to argue inescapably that creating superintelligence was the right thing to do.

Well (said Eliezer1997) let's begin by asking the question:  Does life have, in fact, any meaning?

continue reading »

A Prodigy of Refutation

18 September 2008 01:57AM

Followup to My Childhood Death Spiral, Raised in Technophilia

My Childhood Death Spiral described the core momentum carrying me into my mistake, an affective death spiral around something that Eliezer1996 called "intelligence".  I was also a technophile, pre-allergized against fearing the future.  And I'd read a lot of science fiction built around personhood ethics—in which fear of the Alien puts humanity-at-large in the position of the bad guys, mistreating aliens or sentient AIs because they "aren't human".

That's part of the ethos you acquire from science fiction—to define your in-group, your tribe, appropriately broadly.  Hence my email address, sentience@pobox.com.

So Eliezer1996 is out to build superintelligence, for the good of humanity and all sentient life.

At first, I think, the question of whether a superintelligence will/could be good/evil didn't really occur to me as a separate topic of discussion.  Just the standard intuition of, "Surely no supermind would be stupid enough to turn the galaxy into paperclips; surely, being so intelligent, it will also know what's right far better than a human being could."

Until I introduced myself and my quest to a transhumanist mailing list, and got back responses along the general lines of (from memory):

continue reading »

Raised in Technophilia

17 September 2008 02:06AM

Followup to My Best and Worst Mistake

My father used to say that if the present system had been in place a hundred years ago, automobiles would have been outlawed to protect the saddle industry.

One of my major childhood influences was reading Jerry Pournelle's A Step Farther Out, at the age of nine.  It was Pournelle's reply to Paul Ehrlich and the Club of Rome, who were saying, in the 1960s and 1970s, that the Earth was running out of resources and massive famines were only years away.  It was a reply to Jeremy Rifkin's so-called fourth law of thermodynamics; it was a reply to all the people scared of nuclear power and trying to regulate it into oblivion.

I grew up in a world where the lines of demarcation between the Good Guys and the Bad Guys were pretty clear; not an apocalyptic final battle, but a battle that had to be fought over and over again, a battle where you could see the historical echoes going back to the Industrial Revolution, and where you could assemble the historical evidence about the actual outcomes.

On one side were the scientists and engineers who'd driven all the standard-of-living increases since the Dark Ages, whose work supported luxuries like democracy, an educated populace, a middle class, the outlawing of slavery.

On the other side, those who had once opposed smallpox vaccinations, anesthetics during childbirth, steam engines, and heliocentrism:  The theologians calling for a return to a perfect age that never existed, the elderly white male politicians set in their ways, the special interest groups who stood to lose, and the many to whom science was a closed book, fearing what they couldn't understand.

And trying to play the middle, the pretenders to Deep Wisdom, uttering cached thoughts about how technology benefits humanity but only when properly regulated—claiming in defiance of brute historical fact that science of itself was neither good nor evil—setting up solemn-looking bureaucratic committees to make an ostentatious display of their caution—and waiting for their applause.  As if the truth were always a compromise.  And as if anyone could really see that far ahead.  Would humanity have done better if there'd been a sincere, concerned, public debate on the adoption of fire, and committees set up to oversee its use?

continue reading »

My Best and Worst Mistake

16 September 2008 12:43AM

Followup to My Childhood Death Spiral

Yesterday I covered the young Eliezer's affective death spiral around something that he called "intelligence".  Eliezer1996, or even Eliezer1999 for that matter, would have refused to try and put a mathematical definition—consciously, deliberately refused.  Indeed, he would have been loath to put any definition on "intelligence" at all.

Why?  Because there's a standard bait-and-switch problem in AI, wherein you define "intelligence" to mean something like "logical reasoning" or "the ability to withdraw conclusions when they are no longer appropriate", and then you build a cheap theorem-prover or an ad-hoc nonmonotonic reasoner, and then say, "Lo, I have implemented intelligence!"  People came up with poor definitions of intelligence—focusing on correlates rather than cores—and then they chased the surface definition they had written down, forgetting about, you know, actual intelligence.  It's not like Eliezer1996 was out to build a career in Artificial Intelligence.  He just wanted a mind that would actually be able to build nanotechnology.  So he wasn't tempted to redefine intelligence for the sake of puffing up a paper.

Looking back, it seems to me that quite a lot of my mistakes can be defined in terms of being pushed too far in the other direction by seeing someone else's stupidity:  Having seen attempts to define "intelligence" abused so often, I refused to define it at all.  What if I said that intelligence was X, and it wasn't really X?  I knew in an intuitive sense what I was looking for—something powerful enough to take stars apart for raw material—and I didn't want to fall into the trap of being distracted from that by definitions.

Similarly, having seen so many AI projects brought down by physics envy—trying to stick with simple and elegant math, and being constrained to toy systems as a result—I generalized that any math simple enough to be formalized in a neat equation was probably not going to work for, you know, real intelligence.  "Except for Bayes's Theorem," Eliezer2000 added; which, depending on your viewpoint, either mitigates the totality of his offense, or shows that he should have suspected the entire generalization instead of trying to add a single exception.

continue reading »

My Childhood Death Spiral

15 September 2008 03:42AM

Followup to Affective Death Spirals, My Wild and Reckless Youth

My parents always used to downplay the value of intelligence.  And play up the value of—effort, as recommended by the latest research?  No, not effort.  Experience.  A nicely unattainable hammer with which to smack down a bright young child, to be sure.  That was what my parents told me when I questioned the Jewish religion, for example.  I tried laying out an argument, and I was told something along the lines of:  "Logic has limits, you'll understand when you're older that experience is the important thing, and then you'll see the truth of Judaism."  I didn't try again.  I made one attempt to question Judaism in school, got slapped down, didn't try again.  I've never been a slow learner.

Whenever my parents were doing something ill-advised, it was always, "We know better because we have more experience.  You'll understand when you're older: maturity and wisdom is more important than intelligence."

If this was an attempt to focus the young Eliezer on intelligence uber alles, it was the most wildly successful example of reverse psychology I've ever heard of.

But my parents aren't that cunning, and the results weren't exactly positive.

continue reading »
