Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

My Naturalistic Awakening

25 Eliezer_Yudkowsky 25 September 2008 06:58AM

Followup toFighting a Rearguard Action Against the Truth

In yesterday's episode, Eliezer2001 is fighting a rearguard action against the truth.  Only gradually shifting his beliefs, admitting an increasing probability in a different scenario, but never saying outright, "I was wrong before."  He repairs his strategies as they are challenged, finding new justifications for just the same plan he pursued before.

(Of which it is therefore said:  "Beware lest you fight a rearguard retreat against the evidence, grudgingly conceding each foot of ground only when forced, feeling cheated.  Surrender to the truth as quickly as you can.  Do this the instant you realize what you are resisting; the instant you can see from which quarter the winds of evidence are blowing against you.")

Memory fades, and I can hardly bear to look back upon those times—no, seriously, I can't stand reading my old writing.  I've already been corrected once in my recollections, by those who were present.  And so, though I remember the important events, I'm not really sure what order they happened in, let alone what year.

But if I had to pick a moment when my folly broke, I would pick the moment when I first comprehended, in full generality, the notion of an optimization process.  That was the point at which I first looked back and said, "I've been a fool."

continue reading »

The Sheer Folly of Callow Youth

25 Eliezer_Yudkowsky 19 September 2008 01:30AM

Followup toMy Childhood Death Spiral, My Best and Worst Mistake, A Prodigy of Refutation

"There speaks the sheer folly of callow youth; the rashness of an ignorance so abysmal as to be possible only to one of your ephemeral race..."
        —Gharlane of Eddore

Once upon a time, years ago, I propounded a mysterious answer to a mysterious question—as I've hinted on several occasions.  The mysterious question to which I propounded a mysterious answer was not, however, consciousness—or rather, not only consciousness.  No, the more embarrassing error was that I took a mysterious view of morality.

I held off on discussing that until now, after the series on metaethics, because I wanted it to be clear that Eliezer1997 had gotten it wrong.

When we last left off, Eliezer1997, not satisfied with arguing in an intuitive sense that superintelligence would be moral, was setting out to argue inescapably that creating superintelligence was the right thing to do.

Well (said Eliezer1997) let's begin by asking the question:  Does life have, in fact, any meaning?

continue reading »

Psychic Powers

13 Eliezer_Yudkowsky 12 September 2008 07:28PM

Followup to: Excluding the Supernatural

Yesterday, I wrote:

If the "boring view" of reality is correct, then you can never predict anything irreducible because you are reducible.  You can never get Bayesian confirmation for a hypothesis of irreducibility, because any prediction you can make is, therefore, something that could also be predicted by a reducible thing, namely your brain.

Benja Fallenstein commented:

I think that while you can in this case never devise an empirical test whose outcome could logically prove irreducibility, there is no clear reason to believe that you cannot devise a test whose counterfactual outcome in an irreducible world would make irreducibility subjectively much more probable (given an Occamian prior).

Without getting into reducibility/irreducibility, consider the scenario that the physical universe makes it possible to build a hypercomputer —that performs operations on arbitrary real numbers, for example —but that our brains do not actually make use of this: they can be simulated perfectly well by an ordinary Turing machine, thank you very much...

Well, that's a very intelligent argument, Benja Fallenstein.  But I have a crushing reply to your argument, such that, once I deliver it, you will at once give up further debate with me on this particular point:

continue reading »

Excluding the Supernatural

37 Eliezer_Yudkowsky 12 September 2008 12:12AM

Followup toReductionism, Anthropomorphic Optimism

Occasionally, you hear someone claiming that creationism should not be taught in schools, especially not as a competing hypothesis to evolution, because creationism is a priori and automatically excluded from scientific consideration, in that it invokes the "supernatural".

So... is the idea here, that creationism could be true, but even if it were true, you wouldn't be allowed to teach it in science class, because science is only about "natural" things?

It seems clear enough that this notion stems from the desire to avoid a confrontation between science and religion.  You don't want to come right out and say that science doesn't teach Religious Claim X because X has been tested by the scientific method and found false.  So instead, you can... um... claim that science is excluding hypothesis X a priori.  That way you don't have to discuss how experiment has falsified X a posteriori.

Of course this plays right into the creationist claim that Intelligent Design isn't getting a fair shake from science—that science has prejudged the issue in favor of atheism, regardless of the evidence.  If science excluded Intelligent Design a priori, this would be a justified complaint!

But let's back up a moment.  The one comes to you and says:  "Intelligent Design is excluded from being science a priori, because it is 'supernatural', and science only deals in 'natural' explanations."

What exactly do they mean, "supernatural"?  Is any explanation invented by someone with the last name "Cohen" a supernatural one?  If we're going to summarily kick a set of hypotheses out of science, what is it that we're supposed to exclude?

By far the best definition I've ever heard of the supernatural is Richard Carrier's:  A "supernatural" explanation appeals to ontologically basic mental things, mental entities that cannot be reduced to nonmental entities.

continue reading »

Abstracted Idealized Dynamics

17 Eliezer_Yudkowsky 12 August 2008 01:00AM

Followup toMorality as Fixed Computation

I keep trying to describe morality as a "computation", but people don't stand up and say "Aha!"

Pondering the surprising inferential distances that seem to be at work here, it occurs to me that when I say "computation", some of my listeners may not hear the Word of Power that I thought I was emitting; but, rather, may think of some complicated boring unimportant thing like Microsoft Word.

Maybe I should have said that morality is an abstracted idealized dynamic.  This might not have meant anything to start with, but at least it wouldn't sound like I was describing Microsoft Word.

How, oh how, am I to describe the awesome import of this concept, "computation"?

Perhaps I can display the inner nature of computation, in its most general form, by showing how that inner nature manifests in something that seems very unlike Microsoft Word—namely, morality.

Consider certain features we might wish to ascribe to that-which-we-call "morality", or "should" or "right" or "good":

• It seems that we sometimes think about morality in our armchairs, without further peeking at the state of the outside world, and arrive at some previously unknown conclusion.

Someone sees a slave being whipped, and it doesn't occur to them right away that slavery is wrong.  But they go home and think about it, and imagine themselves in the slave's place, and finally think, "No."

Can you think of anywhere else that something like this happens?

continue reading »

Inseparably Right; or, Joy in the Merely Good

22 Eliezer_Yudkowsky 09 August 2008 01:00AM

Followup toThe Meaning of Right

I fear that in my drive for full explanation, I may have obscured the punchline from my theory of metaethics.  Here then is an attempted rephrase:

There is no pure ghostly essence of goodness apart from things like truth, happiness and sentient life.

What do you value?  At a guess, you value the life of your friends and your family and your Significant Other and yourself, all in different ways.  You would probably say that you value human life in general, and I would take your word for it, though Robin Hanson might ask how you've acted on this supposed preference.  If you're reading this blog you probably attach some value to truth for the sake of truth.  If you've ever learned to play a musical instrument, or paint a picture, or if you've ever solved a math problem for the fun of it, then you probably attach real value to good art.  You value your freedom, the control that you possess over your own life; and if you've ever really helped someone you probably enjoyed it.  You might not think of playing a video game as a great sacrifice of dutiful morality, but I for one would not wish to see the joy of complex challenge perish from the universe.  You may not think of telling jokes as a matter of interpersonal morality, but I would consider the human sense of humor as part of the gift we give to tomorrow.

And you value many more things than these.

Your brain assesses these things I have said, or others, or more, depending on the specific event, and finally affixes a little internal representational label that we recognize and call "good".

There's no way you can detach the little label from what it stands for, and still make ontological or moral sense.

continue reading »

Morality as Fixed Computation

14 Eliezer_Yudkowsky 08 August 2008 01:00AM

Followup toThe Meaning of Right

Toby Ord commented:

Eliezer,  I've just reread your article and was wondering if this is a good quick summary of your position (leaving apart how you got to it):

'I should X' means that I would attempt to X were I fully informed.

Toby's a pro, so if he didn't get it, I'd better try again.  Let me try a different tack of explanation—one closer to the historical way that I arrived at my own position.

Suppose you build an AI, and—leaving aside that AI goal systems cannot be built around English statements, and all such descriptions are only dreams—you try to infuse the AI with the action-determining principle, "Do what I want."

And suppose you get the AI design close enough—it doesn't just end up tiling the universe with paperclips, cheesecake or tiny molecular copies of satisfied programmers—that its utility function actually assigns utilities as follows, to the world-states we would describe in English as:

<Programmer weakly desires 'X',   quantity 20 of X exists>:  +20
<Programmer strongly desires 'Y',
quantity 20 of X exists>:  0
<Programmer weakly desires 'X',   quantity 30 of Y exists>:  0
<Programmer strongly desires 'Y', quantity 30 of Y exists>:  +60

You perceive, of course, that this destroys the world.

continue reading »

The Meaning of Right

30 Eliezer_Yudkowsky 29 July 2008 01:28AM

Continuation of:  Changing Your Metaethics, Setting Up Metaethics
Followup toDoes Your Morality Care What You Think?, The Moral Void, Probability is Subjectively Objective, Could Anything Be Right?, The Gift We Give To Tomorrow, Rebelling Within Nature, Where Recursive Justification Hits Bottom, ...

(The culmination of a long series of Overcoming Bias posts; if you start here, I accept no responsibility for any resulting confusion, misunderstanding, or unnecessary angst.)

What is morality?  What does the word "should", mean?  The many pieces are in place:  This question I shall now dissolve.

The key—as it has always been, in my experience so far—is to understand how a certain cognitive algorithm feels from inside.  Standard procedure for righting a wrong question:  If you don't know what right-ness is, then take a step beneath and ask how your brain labels things "right".

It is not the same question—it has no moral aspects to it, being strictly a matter of fact and cognitive science.  But it is an illuminating question.  Once we know how our brain labels things "right", perhaps we shall find it easier, afterward, to ask what is really and truly right.

But with that said—the easiest way to begin investigating that question, will be to jump back up to the level of morality and ask what seems right.  And if that seems like too much recursion, get used to it—the other 90% of the work lies in handling recursion properly.

(Should you find your grasp on meaningfulness wavering, at any time following, check Changing Your Metaethics for the appropriate prophylactic.)

continue reading »

Zombies: The Movie

72 Eliezer_Yudkowsky 20 April 2008 05:53AM

FADE IN around a serious-looking group of uniformed military officers.  At the head of the table, a senior, heavy-set man, GENERAL FRED, speaks.

GENERAL FRED:  The reports are confirmed.  New York has been overrun... by zombies.

COLONEL TODD:  Again?  But we just had a zombie invasion 28 days ago!

GENERAL FRED:  These zombies... are different.  They're... philosophical zombies.

CAPTAIN MUDD:  Are they filled with rage, causing them to bite people?

COLONEL TODD:  Do they lose all capacity for reason?

GENERAL FRED:  No.  They behave... exactly like we do... except that they're not conscious.

(Silence grips the table.)


continue reading »

Dissolving the Question

44 Eliezer_Yudkowsky 08 March 2008 03:17AM

Followup toHow an Algorithm Feels From the Inside, Feel the Meaning, Replace the Symbol with the Substance

"If a tree falls in the forest, but no one hears it, does it make a sound?"

I didn't answer that question.  I didn't pick a position, "Yes!" or "No!", and defend it.  Instead I went off and deconstructed the human algorithm for processing words, even going so far as to sketch an illustration of a neural network.  At the end, I hope, there was no question left—not even the feeling of a question.

Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct:  If you give them a question, they try to answer it.

Like, say, "Do we have free will?"

The dangerous instinct of philosophy is to marshal the arguments in favor, and marshal the arguments against, and weigh them up, and publish them in a prestigious journal of philosophy, and so finally conclude:  "Yes, we must have free will," or "No, we cannot possibly have free will."

Some philosophers are wise enough to recall the warning that most philosophical disputes are really disputes over the meaning of a word, or confusions generated by using different meanings for the same word in different places.  So they try to define very precisely what they mean by "free will", and then ask again, "Do we have free will?  Yes or no?"

A philosopher wiser yet, may suspect that the confusion about "free will" shows the notion itself is flawed.  So they pursue the Traditional Rationalist course:  They argue that "free will" is inherently self-contradictory, or meaningless because it has no testable consequences.  And then they publish these devastating observations in a prestigious philosophy journal.

But proving that you are confused may not make you feel any less confused.  Proving that a question is meaningless may not help you any more than answering it.

continue reading »

View more: Prev | Next