
Reductionism

35 Post author: Eliezer_Yudkowsky 16 March 2008 06:26AM

Followup to: How An Algorithm Feels From Inside, Mind Projection Fallacy

Almost one year ago, in April 2007, Matthew C submitted the following suggestion for an Overcoming Bias topic:

"How and why the current reigning philosophical hegemon (reductionistic materialism) is obviously correct [...], while the reigning philosophical viewpoints of all past societies and civilizations are obviously suspect—"

I remember this, because I looked at the request and deemed it legitimate, but I knew I couldn't do that topic until I'd started on the Mind Projection Fallacy sequence, which wouldn't be for a while...

But now it's time to begin addressing this question.  And while I haven't yet come to the "materialism" issue, we can now start on "reductionism".

First, let it be said that I do indeed hold that "reductionism", according to the meaning I will give for that word, is obviously correct; and to perdition with any past civilizations that disagreed.

This seems like a strong statement, at least the first part of it.  General Relativity seems well-supported, yet who knows but that some future physicist may overturn it?

On the other hand, we are never going back to Newtonian mechanics.  The ratchet of science turns, but it does not turn in reverse.  There are cases in scientific history where a theory suffered a wound or two, and then bounced back; but when a theory takes as many arrows through the chest as Newtonian mechanics, it stays dead.

"To hell with what past civilizations thought" seems safe enough, when past civilizations believed in something that has been falsified to the trash heap of history.

And reductionism is not so much a positive hypothesis, as the absence of belief—in particular, disbelief in a form of the Mind Projection Fallacy.

I once met a fellow who claimed that he had experience as a Navy gunner, and he said, "When you fire artillery shells, you've got to compute the trajectories using Newtonian mechanics.  If you compute the trajectories using relativity, you'll get the wrong answer."

And I, and another person who was present, said flatly, "No."  I added, "You might not be able to compute the trajectories fast enough to get the answers in time—maybe that's what you mean?  But the relativistic answer will always be more accurate than the Newtonian one."

"No," he said, "I mean that relativity will give you the wrong answer, because things moving at the speed of artillery shells are governed by Newtonian mechanics, not relativity."

"If that were really true," I replied, "you could publish it in a physics journal and collect your Nobel Prize." 

Standard physics uses the same fundamental theory to describe the flight of a Boeing 747 airplane, and collisions in the Relativistic Heavy Ion Collider.  Nuclei and airplanes alike, according to our understanding, are obeying special relativity, quantum mechanics, and chromodynamics.

But we use entirely different models to understand the aerodynamics of a 747 and a collision between gold nuclei in the RHIC.  A computer modeling the aerodynamics of a 747 may not contain a single token, a single bit of RAM, that represents a quark.

So is the 747 made of something other than quarks?  No, you're just modeling it with representational elements that do not have a one-to-one correspondence with the quarks of the 747.  The map is not the territory.

Why not model the 747 with a chromodynamic representation?  Because then it would take a gazillion years to get any answers out of the model.  Also we could not store the model on all the memory on all the computers in the world, as of 2008.

As the saying goes, "The map is not the territory, but you can't fold up the territory and put it in your glove compartment."  Sometimes you need a smaller map to fit in a more cramped glove compartment—but this does not change the territory.  The scale of a map is not a fact about the territory, it's a fact about the map.

If it were possible to build and run a chromodynamic model of the 747, it would yield accurate predictions.  Better predictions than the aerodynamic model, in fact.

To build a fully accurate model of the 747, it is not necessary, in principle, for the model to contain explicit descriptions of things like airflow and lift.  There does not have to be a single token, a single bit of RAM, that corresponds to the position of the wings.  It is possible, in principle, to build an accurate model of the 747 that makes no mention of anything except elementary particle fields and fundamental forces.
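The "implicit, rather than explicit" distinction can be made concrete with a toy sketch (mine, not from the post): a low-level model that stores nothing but particle positions. No token anywhere in the data structure says "wing"; a wing position only exists once someone computes it from the low-level state.

```python
# A hypothetical low-level model: just particles with positions.
# There is no "wing" object anywhere in this data structure.
particles = [
    {"id": 0, "x": 10.0, "y": 2.0},   # particles a map-maker would
    {"id": 1, "x": 10.5, "y": 2.1},   # informally call "the left wing"
    {"id": 2, "x": 40.0, "y": 2.0},   # a particle elsewhere in the craft
]

def centroid(ps):
    """Derive a higher-level quantity by computing over low-level state."""
    n = len(ps)
    return (sum(p["x"] for p in ps) / n, sum(p["y"] for p in ps) / n)

# "Where is the wing?" is answered only by computation over particles;
# the answer is implicit in the model, not stored in it.
left_wing = centroid([p for p in particles if p["x"] < 20])
```

The explicit representation of the wing's position comes into existence in the mind (or RAM) of whoever runs `centroid`, not in the low-level model itself.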

"What?" cries the antireductionist.  "Are you telling me the 747 doesn't really have wings?  I can see the wings right there!"

The notion here is a subtle one.  It's not just the notion that an object can have different descriptions at different levels.

It's the notion that "having different descriptions at different levels" is itself something you say that belongs in the realm of Talking About Maps, not the realm of Talking About Territory.

It's not that the airplane itself, the laws of physics themselves, use different descriptions at different levels—as yonder artillery gunner thought.  Rather we, for our convenience, use different simplified models at different levels.

If you looked at the ultimate chromodynamic model, the one that contained only elementary particle fields and fundamental forces, that model would contain all the facts about airflow and lift and wing positions—but these facts would be implicit, rather than explicit.

You, looking at the model, and thinking about the model, would be able to figure out where the wings were.  Having figured it out, there would be an explicit representation in your mind of the wing position—an explicit computational object, there in your neural RAM.  In your mind.

You might, indeed, deduce all sorts of explicit descriptions of the airplane, at various levels, and even explicit rules for how your models at different levels interacted with each other to produce combined predictions—

And the way that algorithm feels from inside, is that the airplane would seem to be made up of many levels at once, interacting with each other.

The way a belief feels from inside, is that you seem to be looking straight at reality.  When it actually seems that you're looking at a belief, as such, you are really experiencing a belief about belief.

So when your mind simultaneously believes explicit descriptions of many different levels, and believes explicit rules for transiting between levels, as part of an efficient combined model, it feels like you are seeing a system that is made of different level descriptions and their rules for interaction.

But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level.  The airplane is too large.  Even a hydrogen atom would be too large.  Quark-to-quark interactions are insanely intractable.  You can't handle the truth.

But the way physics really works, as far as we can tell, is that there is only the most basic level—the elementary particle fields and fundamental forces.  You can't handle the raw truth, but reality can handle it without the slightest simplification.  (I wish I knew where Reality got its computing power.)

The laws of physics do not contain distinct additional causal entities that correspond to lift or airplane wings, the way that the mind of an engineer contains distinct additional cognitive entities that correspond to lift or airplane wings.

This, as I see it, is the thesis of reductionism.  Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.  Understanding this on a gut level dissolves the question of "How can you say the airplane doesn't really have wings, when I can see the wings right there?"  The critical words are really and see.

 

Part of the sequence Reductionism

Next post: "Explaining vs. Explaining Away"

Previous post: "Qualitatively Confused"

Comments (147)

Comment author: mitchell_porter2 16 March 2008 08:18:09AM 10 points [-]

This denial that "higher level" entities actually exist causes a problem when we are supposed to identify ourselves with such an entity. Does the mind of a cognitive scientist only exist in the mind of a cognitive scientist?

Comment author: rkyeun 11 April 2011 12:15:48AM 19 points [-]

The belief that there is a cognitive mind calling itself a scientist only exists in that scientist's mind. The reality is undecatillion swarms of quarks not having any beliefs, and just BEING the scientist.

Comment author: Aaron_Boyden 16 March 2008 08:26:01AM 5 points [-]

One minor quibble; how do we know there is any most basic level?

Comment author: RafeFurst 07 March 2010 04:58:35PM 9 points [-]

Agreed. Why would we believe a quark is not "emergent"? Could be turtles all the way down....

Comment author: DanielLC 29 February 2012 05:52:19AM 16 points [-]

Levels are an attribute of the map. The territory only has one level. Its only level is the most basic one.

Let's consider a fractal. The Mandelbrot set can be made by taking the union of infinitely many iterations. You could think of each additional iteration as a better map. That being said, either a point is in the Mandelbrot set or it is not. The set itself only has one level.
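A rough sketch of the analogy (my code, not from the comment): the standard escape-time test takes an iteration cap, and that cap is a property of the map, not the territory. Raising `max_iter` gives a finer approximation, but whether a given point is in the set is a fixed fact.

```python
def in_mandelbrot(c, max_iter=1000):
    """Escape-time membership test for the Mandelbrot set.

    max_iter is a fact about the *map*: raising it refines the
    approximation. Whether c is actually in the set is a fixed
    fact about the *territory*.
    """
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False  # escaped: c is definitely outside the set
    return True  # never escaped within max_iter: presumed inside
```

For example, `in_mandelbrot(0)` and `in_mandelbrot(-1)` stay bounded forever, while `in_mandelbrot(1)` escapes after a few iterations, regardless of which iteration cap you choose.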

Comment author: army1987 29 February 2012 11:06:27AM *  0 points [-]

Interesting analogy!

Comment author: potato 08 June 2012 11:52:41PM 1 point [-]

Because things happen, if there was no most basic level, figuring out what happens would be an infinite recursion with no base case. Not even the universe's computation could find the answer.

Comment author: Joshua_Fox 16 March 2008 08:34:30AM 0 points [-]

Yet _something_ in the real world makes it tractable to create the "map" -- to find those hidden class variables which enable Naive Bayes.

Comment author: Ian_C. 16 March 2008 09:46:35AM 0 points [-]

Our brain and senses are made out of fundamental particles too, and the image of a plane with wings is the result of the interaction between the fundamental particles out there and the fundamental particles in us.

So I would I say the plane image is an *effect* not a primary, but that does not make it any less real than the primary. It is a real thing, just as real, that just happens to be further down the chain of cause and effect.

Comment author: JulianMorrison 16 March 2008 11:23:09AM 3 points [-]

Reductionism does have a caveat, and this is "a fact about maps" and not "a fact about the territory": the real world level can be below the algorithm. Example: a CD. A chromodynamic model would spend immense computing resources simulating the heat and location and momentum and bonds of a slew of atoms (including those in the surrounding atmosphere, or the plasticizer would boil off). In reality there are about four things that matter in a CD: you can pick it up, it fits into a standard box, it fits into a standard reader tray, and when you measure the pattern of pits they encode a particular blob of binary data. From a human utility perspective, the CD is fully replaceable with a chromodynamically dissimilar other CD that happens to have those same characteristics.

Computers are full of examples of this, where the least important level is not the fundamental level. In some cases, each level is not just built upon lower levels, but ought to be fully independent of them. If your lisp doesn't implement the lambda calculus because of a silicon fault, an atomic model would correctly represent this, but it would be representing a mathematically unimportant bug. A correct lisp would be representable on any compute substrate, from a Mac to a cranks-and-gears Babbage engine. A model which took account of the substrate would be missing the point.

Comment author: bigjeff5 01 February 2011 08:52:27PM 2 points [-]

I think the point is that the model of four elements we use to describe the CD is also contained within the chromodynamic model - the four elements are a less accurate abstraction of the chromodynamic model, even if we don't recognize it as such when we use the more abstract model.

In the same way, Newtonian Mechanics is a less accurate abstraction of Special Relativity.

Therefore, no matter how precise Newtonian Mechanics is, it does not match up exactly with reality. Because it is an abstraction, it contains inaccuracies. The SR version of the same process will always be more accurate than the NM version, though the SR version is also probably not completely accurate.

A correct lisp would be representable on any compute substrate, from a Mac to a cranks-and-gears Babbage engine.

I don't think that is true. For Lisp to mean anything to any machine, it must first be compiled into the machine language of that particular machine. Because this process is fundamentally different for different types of machines, the way the same Lisp behaves on each machine will be highly dependent on its specific translation into machine language. In other words, the same Lisp code will result in slightly different behavior on a Mac than it would on a Linux machine. The difference may not be enough to take any note of, but it is still there.

This is similar to calculating the trajectory of an artillery shell with Newtonian Mechanics vs Special Relativity. The difference between the two will be so small that it is almost unmeasurable, but there will definitely be a difference between them.
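The size of that "almost unmeasurable" difference can be estimated directly. A quick sketch, assuming a muzzle velocity of roughly 1000 m/s (an illustrative figure, not from the thread):

```python
import math

C = 299_792_458.0   # speed of light in m/s (exact, by definition)
v = 1_000.0         # assumed artillery muzzle velocity, m/s

# Lorentz factor: the factor by which relativistic predictions
# diverge from Newtonian ones at speed v.
gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
fractional_correction = gamma - 1.0

print(f"gamma - 1 = {fractional_correction:.2e}")  # roughly 5.6e-12
```

So the relativistic correction at artillery speeds is on the order of parts per trillion: real, always present, and far below what any gunner could measure.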

Comment author: MagnetoHydroDynamics 12 January 2012 11:00:32PM 0 points [-]

In other words, the same Lisp code will result in slightly different behavior on a Mac than it would on a Linux machine. The difference may not be enough to take any note of, but it is still there.

I am going to have to disagree here. A given Lisp will require a Bounded-Tape Turing Machine of tape size N, head state count M, and symbol table Q. If an ARM processor running Windows NT can supply that, Lisp is possible. If an x86 running Unix can supply that, Lisp is possible. If Lisp behaves differently from the mathematical ideal on any machine, that means the machine is incapable of supplying said Turing machine.

"If the Lisp is untrue to the Specification, that is a fact about the Implementation, not the Mathematics behind it."

Comment author: DSimon 17 January 2012 04:28:24AM *  0 points [-]

What about the speed of operation? The specification does not set any requirements for this, and so two different Lisp implementations which differ in that property can both be correct yet produce different output.

Comment author: MagnetoHydroDynamics 18 January 2012 12:12:11PM 0 points [-]

Even if it runs at one clock cycle per millennium, it would still theoretically be able to run any given program, and produce exactly the same output. The time function is also external to the LISP implementation; it is a call to the OS, so output that prints the current time doesn't count.

Comment author: DSimon 18 January 2012 08:21:43PM 0 points [-]

I think we may have to taboo "output", as the contention seems to be about what is included by that word.

Comment author: MagnetoHydroDynamics 18 January 2012 08:41:42PM *  1 point [-]

Given a program P, consisting of a linear bit-pattern, that is fed into a virtual machine L and writes a linear bit-pattern B to a section of non-local memory, O. During the runtime of P on L, the only interaction with non-local memory is writing B to O; no bits are passed from non-local memory to local memory.

For all L: if and only if L is true to the specification, then for any P there is only one possible B.

  • P is the lisp program source code, which does not read from stdin, keyboard drivers, web sockets or any similar source of external information.
  • L is a LISP (virtual) machine.
  • B is some form of data, such as text, binary data, images, etc.
  • O is some destination, which could be stdout, a screen, speakers, etc.
Comment author: DSimon 18 January 2012 11:07:09PM *  1 point [-]

Ah, ok, I find nothing to disagree with there. Looking back up the conversation, I see that I was responding to the word "behavior". Specifically, bigjeff5 said:

In other words, the same Lisp code will result in slightly different behavior on a Mac than it would on a Linux machine.

To which you responded:

If Lisp behaves differently from the mathematical ideal on any machine, that means the machine is incapable of supplying said turing machine.

So it comes down to: does the "behaviour" of a Lisp implementation include anything besides the output? Which effectively comes down to what question we're trying to answer about Lisp, or computation, or etc.

The original question was about whether a Lisp machine needs to include abstractions for the substrate it's running on. The most direct answer is "No, because the specification doesn't mention anything about the substrate." More generally, if a program needs introspection it can do it with quine trickery, or more realistically just use a system call.

Bigjeff5 responded by pointing out that the choice of substrate can determine whether or not the Lisp implementation is useful to anybody. This is of course correct, but this is a separate issue from whether or not the Lisp abstraction needs to include anything about its own substrate; a Lisp can be fast or slow, useful or useless, regardless of whether or not its internal abstractions include a reference to or description of the device it is running on.

Comment author: laofmoonster 22 February 2014 06:58:36AM *  1 point [-]

Is it fair to call the CD data a map in this case? (Perhaps that's your point.) The relationship is closer to interface-implementation than map-territory. Reductionism still stands, in that the higher abstraction is a reduction of the lower. (Whereas a map is a compression of the territory, an interface is a construction on top of it). Correct lisp should be implementation-agnostic, but it is not implementation-free.

Comment author: RobinHanson 16 March 2008 11:23:24AM 2 points [-]

This is a situation where a lot of confidence seems appropriate, though of course not infinite confidence. I'd put the chance that Eliezer is wrong here at below one percent.

Comment author: Perplexed 30 July 2010 04:47:33AM 5 points [-]

I really have no idea what Eliezer being wrong on this would mean. Is the subject matter of this posting the nature of the territory or is it advice on the best way to construct maps?

What conceivable observations might cause you to revise that 1% probability estimate up to, say, 80%?

As I see it, reductionism is not a hypothesis about the world; it is a good heuristic to direct research.

Comment author: ata 30 July 2010 05:07:35AM 4 points [-]

I take the main thesis as being summed up by this sentence around the end:

Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.

Specific non-reductionist hypotheses, in the extremely unlikely event that any are supported by evidence, could cast doubt on reductionism. We'd need to find a specific set of circumstances under which reality appears to be computing the same entities at multiple levels simultaneously and applying different laws at each level, or we'd need to find fundamental laws that talk about non-fundamental objects. For example, if the Navy gunner were actually correct that you need to use Newtonian mechanics instead of relativity in order to get the right answer when computing artillery trajectories (given the further unlikely assumption that we couldn't find a simpler explanation for this state of affairs than "physical reductionism as a whole is wrong").

Comment author: Perplexed 30 July 2010 05:54:14AM 0 points [-]

Ok, let me try to construct an example of a non-reductionist hypothesis. Eliezer says that it would be a claim that higher levels of simplified multilevel models are out there in the territory. So, as a multi-level model, let us take (low-level) QCD+electroweak, (mid-level): nucleons, mesons, electrons, neutrinos, photons; (high-level): atomic theory with 92 kinds of atoms + photons.

Now as I understand it, reductionism forbids me to believe that photons and electrons - entities which exist in higher level models - are actually out there in the territory. What am I doing wrong here? Could you maybe give me an example of a hypothesis which a reductionist ought to disbelieve?

Comment author: ata 30 July 2010 06:38:50AM 0 points [-]

As I understand it, photons and electrons are identified as elementary particles in the Standard Model. Wouldn't that be considered the lowest level?

Comment author: Perplexed 31 July 2010 02:50:53AM 0 points [-]

Sure, they exist in both the lowest (so far) level and in the next level up. But Eliezer wants to forbid things at "higher levels of simplified multilevel models" from existing out there in the territory. If that doesn't include electrons in this example, then I don't know what it includes. I don't understand exactly what it is that is forbidden. Is it type errors - confusing map entities with territory entities? Is it failing to yet be convinced by what someone else thinks is the best low-level model? Is it somehow imagining that, say, atoms still exist in the territory while simultaneously imagining that atoms are made of more fundamental things which also exist in the territory? It seems to me that the definition of reductionism that Eliezer has given is completely useless because no one sane would proclaim themselves as non-reductionists. He is attacking a straw-man position, as far as I can see.

Comment author: taryneast 16 December 2010 07:48:45AM 8 points [-]

AFAICS, he is not "forbidding" a plane's wing from existing at the level of quark. He's just saying that "plane's wing" is a label that we are giving to "that bunch of quarks arranged just so over there". This as opposed to "that other bunch of quarks arranged just so over there" that we call "a human".

That the arrangement of a set of quarks does not have a fundamental "label" at the most basic level. The classification of the first bunch o' quarks (as separate from the second) is something that we do on a "higher level" than the quarks themselves.

Comment author: bigjeff5 01 February 2011 09:06:37PM *  1 point [-]

But Eliezer wants to forbid things at "higher levels of simplified multilevel models" from existing out there in the territory.

You're confusing the map and the territory.

The territory is only quarks (or whatever quarks may be made of). There is nothing else, it's just a big mass of quarks.

The map is the description of this bunch of quarks is human, while that bunch is an airplane.

There was a time when physicists thought that earth, air, water, and fire were the reality - that they were fundamental. Then they discovered molecules, and they thought those were fundamental. Then they discovered atoms, and thought those were fundamental. Etc. on down until the current (I think, I'm not a physicist) belief that quarks are fundamental.

At no point did reality change. Reality did not change when we discovered rocks were made up of molecules - the map was simply inaccurate. The reality was that rocks were always made up of molecules. The same when we discovered that molecules were made of atoms. It was always true, our map was simply not as accurate as we thought it was.

You could quite accurately say the map is wrong because it does not perfectly reflect reality, but the map is extremely useful, so we should not discard it. We should simply recognize that it is a map, it is not the territory. It's a representation of reality, it is not what is real. We know Newtonian Mechanics is a less accurate map than Special Relativity, but it is more useful than SR in many cases because it doesn't have the detail cluttering up the map that SR has. Yeah, it's less precise, but for calculating the trajectory of an artillery shell it is more than good enough.

The different levels are maps, there is only one territory.

Comment author: DanielLC 29 March 2011 06:08:58AM *  2 points [-]

The territory is only quarks (or whatever quarks may be made of).

It's also leptons.

Comment author: Sniffnoy 02 February 2011 02:01:42AM *  1 point [-]

In short, you seem to be confusing {A} with A.

Comment author: Perplexed 02 February 2011 02:10:47AM 0 points [-]

Too short. But intriguing. Please explain.

Comment author: Sniffnoy 03 February 2011 06:24:31AM *  1 point [-]

What I mean is, your objection doesn't hold water because raw objects at lower levels can always be put in a wrapper to be made suitable for use at a higher level. E.g. if we consider an elementary particles level, and a general-particles-which-for-now-we-will-consider-as-sets-of-particles-level (yes, I realize this almost certainly does not actually work in actual physics), then in the higher level we have proton={up_1, up_2, down}, and electron_H={electron_L}. But for most purposes the distinction between electron and {electron} is irrelevant, so we elide it. Your point seems to me analogous to the statement "But 2 can't be the rational number {...,(-4,-2),(2,1),(-2,-1),(4,2),...}, it's the integer {...(1,-1),(2,0),(3,1),...}!"
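The wrapping idea can be shown in a minimal illustration (hypothetical Python, not from the comment): a lower-level "elementary" electron, and a higher-level representation where every particle is a set of constituents, so a lone electron gets wrapped too.

```python
# Lower level: an "elementary" electron is just a bare token.
electron_L = "electron"

# Higher level: composite particles are modeled as frozensets of
# their constituents; for uniformity a lone electron is wrapped
# in a singleton set, just like {A} wraps A.
proton = frozenset({"up_1", "up_2", "down"})
electron_H = frozenset({electron_L})

# electron_H and electron_L are formally different objects...
print(electron_H == electron_L)   # False
# ...but the wrapping is trivial, so we usually elide the distinction.
```

This mirrors the integer-vs-rational example: the "2" at each level is a different formal object, yet the correspondence between them is so mechanical that treating them as the same thing is harmless.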

Comment author: Perplexed 03 February 2011 12:45:31PM 3 points [-]

Ah! Good point. And now that it is explained, good analogy.

I still have some reservations about Eliezer's approach to reductionism/anti-holism and his equation of the idea of "emergence" with some kind of mystical mumbo-jumbo. But this is a complicated subject and philosophers of science much more careful than myself have addressed it better than I can.

Thank you, though, for pointing out that my argument in this thread can be refuted so easily simply by taking Eliezer a little less literally. Electrons at one level reduce to electrons at a lower level. But the two uses of the word 'electron' in the above sentence refer to different (though closely related) entities. As closely related as A and {A}. You are right. Cool.

Comment author: timtyler 15 August 2010 07:10:36PM *  0 points [-]

"Reductionism" has come to have two meanings:

"Reductionism can either mean (a) an approach to understanding the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things or (b) a philosophical position that a complex system is nothing but the sum of its parts, and that an account of it can be reduced to accounts of individual constituents."

This post is about the second meaning. But that meaning is silly and useless, and it redundantly duplicates other terms for such nonsense - such as reducibility and irreducibility.

We should kill off that meaning - and reclaim the meaning of the term that is useful and sensible. Posts like this one - which use the second meaning - are part of the problem.

Comment author: simplicio 15 August 2010 07:24:32PM 0 points [-]

Why is it silly to say that higher level phenomena reduce, in principle, to ontologically fundamental particle fields?

Comment author: timtyler 15 August 2010 07:30:58PM *  0 points [-]

This discussion is about the term "reductionism" - which is obviously some kind of philosophy about "reducing" things - but the cited definitions differ on the details of exactly what the term means.

The first meaning just states the obvious, IMO. Also, other terms have that kind of nonsense covered. There is no need to overload the perfectly useful and good term "reductionism" with something that is only useful for the refutation of nonsense. It just causes the type of mix-up that you see in this thread.

Comment author: simplicio 15 August 2010 07:33:54PM 0 points [-]

I understand, I just don't get why you object to reductionism as exemplified by the second definition. It seems to me a fairly reasonable philosophical position.

Comment author: timtyler 15 August 2010 07:42:02PM *  0 points [-]

I object to that terminology because it overloads a useful term which is used for something else without having a good excuse for doing so. Call the idea that invisible pixies push atoms around "irreducibility" - or something else - anything!

IMO, "Reductionism" and "Holism" should be reserved for the Hofstadter-favoured sense of those words - or you have a terminological mess:

http://i93.photobucket.com/albums/l76/orestesmantra/MU.jpg

Comment author: simplicio 15 August 2010 08:03:40PM 1 point [-]

Oh, I see. Thanks for clarifying.

Comment author: Perplexed 15 August 2010 07:49:55PM 0 points [-]

You are confusing me, Tim. Above you seemed to be criticizing the usefulness of the second meaning. Now, you seem to be criticizing the usefulness of the first.

Which do you find useless: the label for a methodology, or the label for a hypothesis about the possibility of hierarchical explanations?

Comment author: timtyler 15 August 2010 08:06:09PM *  0 points [-]

a) - good; b) - not needed. (Ref for a and b: http://en.wikipedia.org/wiki/Reductionism)

Reductionism and Holism should be the names of strategies for analysing complex systems by reducing them to the interactions of their parts - or considering them as high-level entities - respectively.

The other terminology - the kind used in this post - is very bad. People should not overload such useful terminology - unless there really is no other way.

Comment author: Perplexed 15 August 2010 08:24:19PM 1 point [-]

One windmill I try to avoid attacking is the dictionary. I would suggest you spend a few extra syllables and refer to a. as "methodological reductionism" and b. as "philosophical (or ontological) reductionism". I understand the badness of needless overloading, but I'm not sure I agree that b. is "useless" simply because its validity is obvious to you. Would you also advocate abandoning the term "atheism"?

My problem with philosophical reductionism is I don't know whether it is a claim about the territory or a convention about maps. If it is a claim about the territory, I certainly remain unconvinced, having not yet glimpsed the territory.

Comment author: timtyler 15 August 2010 08:30:00PM 0 points [-]

One can't just let dictionary authors rule language. When they get scientific things wrong, responsible individuals should put up a fight. Look at what is happening to "epigenesis" - for example. Or "emergence".

Comment author: timtyler 15 August 2010 08:32:03PM 1 point [-]

Would you also advocate abandoning the term "atheism"?

That is likely to lead off topic. If the atheists and agnostics could sit down and decide what those terms actually meant, it would certainly help. Meanwhile, call me an adeist.

Comment author: Ian_C. 16 March 2008 12:08:03PM 1 point [-]

When an image you are looking at is altered due to viewing it through a pane of coloured glass, you don't suddenly start calling it "the map" instead of "the territory."

So why is it, when it passes through our eyes and brain it suddenly becomes "the map," when the brain is made of the same fundamental stuff (quarks etc.) as the glass?

Comment author: Perplexed 30 July 2010 04:57:46AM 0 points [-]

I would say that the stuff making up "the map" is not stuff inside the brain. Instead, it is stuff inside the mind, and the mind is "emergent from" the brain (or, if you prefer, the mind "reduces to" the brain).

The neurons in the brain reduce (through several levels) to brain quarks. The map ideas in the mind also reduce to brain quarks, but they do so in an odd way. I choose to label that kind of oddness "emergence", but the local powers-that-be seem to disapprove of this terminology.

Comment author: taryneast 16 December 2010 07:54:51AM 2 points [-]

The image that you see contains far less information than the original actual stuff that makes up the original "image and coloured glass" objects that exist in front of you. That is why the image in your head is map, not territory.

You also have "territory" that makes up your head... but that doesn't mean that everything represented inside your little piece of territory is also territory.

After all, you can store a map in your glovebox. Does the glovebox turn a map of England into England itself, simply because a glovebox is part of the territory?

Comment author: Ben_Jones 16 March 2008 12:15:14PM 0 points [-]

Our brain and senses are made out of fundamental particles too, and the image of a plane with wings is the result of the interaction between the fundamental particles out there with the fundamental particles in us.

Ian C - are you claiming that there are no maps, just lots of territory, some of which refers to other bits of territory? While probably accurate, this doesn't seem very useful if we're trying to understand minds. I don't think Eliezer ever claims that maps are stored in the glove compartments of cars in the car park, just outside The Territory. I'd enjoy a few posts going deeper into the map/territory analogy though.

Computers are full of examples of this, where the [most] important level is not the fundamental level.

Bzzzzzt! Please taboo the word 'important' and tell us what you mean.

Atomic interactions work just as well in a lump of scrap as in a 747. But a 747 won't work without atomic interactions. This being the case, higher levels can't be more 'important' than more fundamental ones, unless 'important' means 'more intuitively obvious to the human eye'.

As long as no-one makes the ridiculous claim that, say, biology is worthless because atomic theory could, ideally, explain giraffes, then is there really any disagreeing with this post?

Comment author: Ian_C. 16 March 2008 12:40:50PM 1 point [-]

Ben Jones - yes, I'm saying there's just lots of territory. I think it's useful to understanding minds, because (if correct) it means they don't work by making an internal mirror of reality to study, but rather they just "latch on" to actual reality at a certain point. The role of the brain in that case would not be to "hold" the internal mirror copy, but to manipulate reality to make it amenable to latching.

Comment author: Tim_Tyler 16 March 2008 12:44:22PM 0 points [-]

I always found Hofstadter's take on the issue illuminating.

Disappointingly, dictionaries and encyclopaedias today seem to have defined reductionism and holism away from Hofstadter's usage - to the detriment of both of the terms involved.

Comment author: Ben_Jones 16 March 2008 12:59:08PM 2 points [-]

Ian - if minds don't create their own distinct internal maps, but simply 'latch on' to what's actually there, then how do you explain the fact that maps can be wrong? In fact, how do you explain any two people holding two opposed beliefs?

Sensory perception isn't like a photograph - low-resolution but essentially representative. It's like an idiot describing a photograph to someone who's been blind all their life. This is why we get our maps wrong, and that is why it's useful to think in terms of map and territory - so that we can try and draw better ones.

Comment author: Ian_C. 16 March 2008 02:30:37PM 0 points [-]

Ben Jones: "if minds don't create their own distinct internal maps, but simply 'latch on' to what's actually there, then how do you explain the fact that maps can be wrong? In fact, how do you explain any two people holding two opposed beliefs?"

Different people have different eyes, nervous systems and brains, so the causal path from the primary object to the part of reality in their brain to which they are latching on can be different.

I agree sensory perception is not like a photograph, but I don't think it's like an idiot trying to explain it to us. I don't believe there's the outside world, and then an idiot distortion layer, and then our unfortunate internal model. There's one reality, and one part of it outside our body acts by a chain of cause and effect on another part inside our body, of which we happen to be able to be conscious.

So if the internal object is just as real as the external object, then we're done. We have our contact point with reality, and can begin to study it and figure out the universe, including deducing (maybe one day) the existence of the primary object. But whether it actually resembles the primary object in some way, surely that is not the main issue? From an evolutionary point of view, it doesn't have to be similar, just useful, and from an epistemological point of view it's not important whether it is (at all) similar or not.

Comment author: Perplexed 30 July 2010 05:29:15AM 0 points [-]

Different people have different eyes, nervous systems and brains, so the causal path from the primary object to the part of reality in their brain to which they are latching on can be different.

When you first mentioned "latching" my initial reaction was as negative and incredulous as Ben Jones's was. Now I recognize that this idea is Kripke's - he explains intentionality as a chain of causal links between territory and map. I see why Kripke went that way, but the whole enterprise turns my stomach. Where is Descartes when we need him? Intentionality carries no mystery in a model where map is distinct from territory, with no attempt being made to embed map in territory. It only becomes problematic when naive reductionism demands that our models must capture the act of modeling. And then we proceed to tie ourselves completely in knots when we imagine that this bit of self-reference contains the secret of consciousness.

Can't we just pretend that our minds reside outside the physical universe when discussing epistemology? It makes things much simpler. Then we can discuss the reductionist science of cognition by allowing some minds back into the universe to serve as objects of study. :)

Comment author: Caledonian2 16 March 2008 02:36:47PM 0 points [-]

At present, we cannot generate accurate quantum mechanical descriptions of atoms more complex than hydrogen (and, if we fudge a bit, helium). Any attempt to do so, because of the complexity and intractability of the equations involved, produces results that are less accurate than our empirically-derived understanding.

Even if we ignore the massive computational problems with trying to create a QM model of an airplane, such a model is guaranteed to be less accurate than the existing higher-order models of aerodynamics and material science.

We presume that our models, if we knew how to generate and evaluate them, would accurately describe things on an atomic level, and this is not unreasonable to claim. But Eliezer's claim goes far, far, far beyond what can be justified at present.
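Caledonian's computational point can be made concrete with a back-of-the-envelope sketch (mine, not his): a brute-force quantum simulation must store one complex amplitude per basis state, and the number of basis states grows exponentially with the number of two-level degrees of freedom.

```python
# Toy illustration of why a literal QM model of a macroscopic object is
# intractable: a dense state vector over n two-level systems needs 2**n
# complex amplitudes (roughly 16 bytes each). The figure of 16 bytes per
# amplitude is an assumption (two double-precision floats).

def state_vector_bytes(n_systems: int, bytes_per_amplitude: int = 16) -> int:
    """Memory required to store a dense state vector of n two-level systems."""
    return bytes_per_amplitude * (2 ** n_systems)

for n in (10, 40, 300):
    # Around n = 270 the amplitude count already exceeds the rough number
    # of atoms in the observable universe (~10**80).
    print(f"{n:4d} two-level systems -> {state_vector_bytes(n):.2e} bytes")
```

This is only the storage cost, before any question of accuracy or of actually evolving the state in time; it's why "guaranteed in principle" and "achievable in practice" come apart so sharply in this thread.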

Comment author: Nominull3 16 March 2008 03:38:43PM 3 points [-]

I'm surprised that this point is controversial enough that Eliezer felt the need to make a post about it, and even more surprised that he's catching heat in the comments for it. This "reductionism" is something I believe down to the bone, to the extent that I have trouble conceptualizing the world where it is false.

Comment author: kremlin 04 February 2013 10:09:39AM 5 points [-]

After talking to some non-reductionists, I've come to this idea about what it would mean for reductionism to be false:

I'm sure you're familiar with Conway's Game of Life? If not, go check it out for a bit. All the rules for the system are on the pixel level -- this is the lowest, fundamental level. Everything that happens in Conway's Game of Life is reducible to the rules regarding individual pixels and their color (white or black), and we know this because we have access to the source code of Conway's Game, and it is in fact true that those are the only rules.

For Conway's Game to be non-reductionistic, what you'd have to find in the source code is a set of rules that override the pixel-level rules in the case of high-level objects in the game. Eg "When you see this sort of pixel configuration, override the normal rules and instead make the relevant pixels follow this high-level law where necessary."

Something like that.

It's an overriding of low-level laws when they would otherwise have contradicted high-level laws.
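kremlin's thought experiment fits in a few lines of code. Below is a minimal Life step function (my sketch, not any actual implementation of Conway's): every law is cell-local, and the comment marks the one place a non-reductionist engine would have to insert a high-level override.

```python
from collections import Counter

def step(live: set) -> set:
    """One Life generation. The only laws are neighbour counts per cell."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Cell-level law: a cell is born with exactly 3 live neighbours and
    # survives with 2 or 3. A non-reductionist Life would need an extra
    # clause *here* that pattern-matches high-level objects (gliders, guns)
    # and overrides this rule for them -- no such clause exists.
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates with period 2 -- behaviour stated nowhere in the rules.
blinker = {(0, 0), (1, 0), (2, 0)}
assert step(step(blinker)) == blinker
```

The blinker's period-2 oscillation is a "high-level law" in exactly kremlin's sense, yet it is enforced by nothing but the cell-level rule above.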

Comment author: George_Weinberg2 16 March 2008 08:42:33PM 1 point [-]

The essential idea behind reductionism, that if you have reliable rules for how the pieces behave then in principle you can apply them to determine how the whole behaves, has to be true. To say otherwise is to argue that the airplane can be flying while all its constituent pieces are still on the ground.

But if you can't do a calculation in practice, does it matter whether or not it would give you the right answer if you could?

Comment author: Pyramid_Head2 17 March 2008 12:18:46AM 1 point [-]

And there goes Caledonian again, completely misrepresenting Eliezer's claims.

His arguments are completely baseless. Of course it would be very, very, very hard to make a QM model of an airplane, and attempting it now would fail miserably - Eliezer wouldn't dispute that.

But to say that a full-fledged QM model would be *guaranteed* to be less accurate than current models is downright preposterous.

Comment author: PK 17 March 2008 01:31:21AM 1 point [-]

Caledonian's job is to contradict Eliezer.

Comment author: Nick_Tarleton 17 March 2008 02:03:12AM 2 points [-]

I'm surprised that this point is controversial enough that Eliezer felt the need to make a post about it, and even more surprised that he's catching heat in the comments for it. This "reductionism" is something I believe down to the bone, to the extent that I have trouble conceptualizing the world where it is false.

Seconded.

I suppose the next post is on how a non-reductionist universe would overwhelmingly violate Occam's Razor?

Comment author: taryneast 16 December 2010 11:12:55AM *  1 point [-]

Hmmm... from my understanding, Occam's Razor is not actually a Law, just an overwhelmingly useful Heuristic. Thus, I'm not sure that "violating" Occam's Razor means more than just saying that something is "far less likely". I don't believe it can be used to prove that a non-reductionist universe is "not true".

Comment author: anonymous9 17 March 2008 06:20:53AM 0 points [-]

Caledonian's job is to contradict Eliezer.

Not even that -- it's as if he and other commenters (e.g. Unknown in this case) are simply demanding that Eliezer express his points with less conviction.

If you think Eliezer is wrong, say so and explain why. Merely protesting that he is "confident beyond what is justified", or whatever, amounts to pure noisemaking that is of no use to anyone.

Comment author: a._y._mous 17 March 2008 07:49:25AM 0 points [-]

Slightly off-topic. I am a bit new to all this. I am a bit thick too. So help me out here. Please.

Am I right in understanding that the map/territory analogy implies that the map is always evaluated outside the territory?

I guess I'm asking the age old Star Trek transporter question. When I am beamed up, which part of which quark forms the boundary between me and Scotty?

Comment author: Frank_Hirsch 17 March 2008 09:27:07AM 1 point [-]

I wish I knew where Reality got its computing power. Hehe, good question that one. Incidentally, I'd like to link this rather old thing just in case anyone cares to read more about reality-as-computation.

Comment author: Ben_Jones 17 March 2008 10:26:47AM 1 point [-]

Ian C - well put. My point is that since there is, at least, some distortion between mind and world (hence this very blog), it's useful to think in terms of map and territory. At the simplest level, it stops us confusing the two. If you have a wrong belief, saying 'my mind is part of reality!' doesn't make it any less wrong. Agreed?

I don't believe there's the outside world, and then an idiot distortion layer, and then our unfortunate internal model.

That was exactly the situation I found myself in at about 3am on Sunday morning.

Comment author: Ian_C. 17 March 2008 11:02:47AM 0 points [-]

Ben Jones: "If you have a wrong belief, saying 'my mind is part of reality!' doesn't make it any less wrong. Agreed?"

I agree that there is a difference between the object in the mind and the object in the world, but I wouldn't call it distortion any more than a chair is a distortion of the table next to it. They are both just different parts of reality. But if your mind can only be aware of the chair then you must discover the table by deduction, which is what someone trying to "correct" the chair would do also. So yes, I guess it makes little practical difference.

"That was exactly the situation I found myself in at about 3am on Sunday morning."

And here I was thinking it was only a model, when it was direct observation all along! Who am I to contradict direct observation? I hereby accept your theory and discard my own :-)

Comment author: Ben_Jones 17 March 2008 03:03:27PM 1 point [-]

I agree that there is a difference between the object in the mind and the object in the world, but I wouldn't call it distortion any more than a chair is a distortion of the table next to it.

But the chair isn't seeking to imitate the table. That's one thing that minds do that nothing else does - form abstract representations. It's not magic, but it's a pretty impressive trick for a couple of pounds of quivering territory.

Besides, you've already acknowledged that the mental concept has a causal link with the object itself. Chairs aren't causally linked to tables. Like you say, they're both just different parts of reality. Minds and maps are more subtle.

We may believe that 'what we see is what's actually there', but in truth there are millennia of evolutionary filters and lenses distorting our perception of the territory. And you can't start eliminating the errors from your map until you realise that a) you have a map, b) your map is not the territory, and c) your map doesn't even look much like the territory.

That last paragraph's for the back of the book, Eliezer.

Comment author: Ian_C. 17 March 2008 04:16:38PM 0 points [-]

Ben Jones: "But the chair isn't seeking to imitate the table."

But the mind isn't seeking to imitate reality either. The mind seeks to provide awareness of reality, that is all. By taking the data of the senses and processing it according only to the laws of cause and effect, it achieves this goal (because the output of the pipeline remains reality).

The idea that it is trying to imitate (and the associated criticisms like map, territory and distortion) come from looking at the evolved design after the fact and assuming how it is supposed to work without taking a wide enough view of all the ways awareness of reality could be implemented.

Comment author: Steve 17 March 2008 06:55:30PM 1 point [-]

'I wish I knew where Reality got its computing power.'

Assume Reality has computing power and makes computations. Computation requires time. An occurrence would then require the time of the occurrence itself plus the time necessary for Reality to compute it. The more complex the occurrence, the more computing power or the longer the computation time required, or both. Accounting for that seems a challenge that cannot be overcome.

Alternatively, let's assume Reality did not get computing power and that it does not make computations. Rather, let's assume that there are computational activities within Reality.

Perhaps Reality is certainty, while attempts to comprehend Reality are computational activities that have acquired mapping processes that attempt to map certainty.

Changing the sentence to: 'I wish I knew why I believe there is a where from which Reality got its computing power.' gets me to an answer while the original question precluded me from one.

Comment author: Caledonian2 17 March 2008 11:53:06PM 1 point [-]

But to say that a full-fledged QM model would be *guaranteed* to be less accurate than current models is downright preposterous.

No, it follows directly from our inability to simulate 'complex' atoms. If we can't represent the basic building blocks of matter correctly, how are we supposed to represent the matter?

A correct model of physics would, given enough computational power, allow us to perfectly simulate everything in reality, on every level of reality. QM is known not to be correct; it is in fact known to be incorrect in the ultimate sense. It is merely the most correct model we possess.

Comment author: Ben_Jones 18 March 2008 09:45:53AM 2 points [-]

"However, reductionism is incapable of explaining the real world."

Is that the argument against Reductionism? That there are things it can't, as yet, explain? That's the same position the Intelligent Design people put forward. Your post is a big fat Semantic Stop Sign.

No, we don't understand protein folding yet. Precedent suggests that one day, we probably will, and it probably won't be down to some mystical emergent phenomenon. It'll be complicated, subtle, amazing, and fully explicable within the realms of reductionist science.

Comment author: Nick_Tarleton 18 March 2008 12:22:44PM 3 points [-]

A quick Google search turns up:

But the crystal growth depends strongly on temperature (as is seen in the morphology diagram). Thus the six arms of the snow crystal each change their growth with time. And because all six arms see the same conditions at the same times, they all grow about the same way.... If you think this is hard to swallow, let me assure you that the vast majority of snow crystals are not very symmetrical.

Comment author: Rafe_Furst 22 April 2008 08:18:21PM 0 points [-]

It's not that reductionism is wrong, but rather that it's only part of the story. Additional understanding can be gleaned through a bottom-up, emergent explanation which is orthogonal to the top-down reductionist explanation of the same system.

It is important to take seriously the reality of higher level models (maps). Or alternatively to admit that they are just as unreal, but also just as important to understanding, as the lower level models. As Aaron Boyden points out, it is not a foregone conclusion that there is a most basic level.

Comment author: Caledonian2 23 April 2008 07:08:22PM 1 point [-]

Reductionism IS the bottom-up, emergent explanation. It tries to reduce reality to basic elements that together produce the phenomena of interest - you can't get any more emergent than that.

Comment author: Rafe_Furst 24 April 2008 04:22:42PM 0 points [-]

From the Wikipedia definition for "reductionism":

"Reductionism can either mean (a) an approach to understanding the nature of complex things by reducing them to the interactions of their parts, or to simpler or more fundamental things or (b) a philosophical position that a complex system is nothing but the sum of its parts, and that an account of it can be reduced to accounts of individual constituents."

and

"The limit of reductionism's usefulness stems from emergent properties of complex systems which are more common at certain levels of organization."

Comment author: Caius 10 May 2008 11:55:55PM 0 points [-]

Rafe, do you mean that as a criticism? Because usefulness and reality are very different things. There are two things that can make a reductionist model less useful: 1. It requires much more computational power. This has been discussed already. 2. Even modest mistakes at lower levels can have drastic effects at higher levels.

Both, you'll notice, are practical problems pertaining to the model, and don't invalidate the principle.

Comment author: Valentina_Poletti 28 August 2008 09:27:13AM 0 points [-]

So human brains are themselves models of reality.

Do you have a deterministic view of the world, i.e. believe reality is there, independently of our existence or of our interactions with it?

Have you ever wondered what information is, at the physical level... what it is that our brains are actually modelling?

Comment author: wockyman 02 January 2009 06:39:53AM 1 point [-]

Simply because particles are the smallest things does not mean they are the only things. Particles are defined by how they act. How a particle will act can only be determined by taking into account the particles surrounding it. And to fully examine those particles, their surrounding particles must be examined. And so on and so forth...

As you move up in scale, new rules and attributes emerge that do not exist at the smaller scales. You can speculate about whether or not these new things might have been deduced as possibilities from quantum laws. But short of complete omniscience (physically impossible by the uncertainty principle), the subatomic laws will only tell you what *can* arise, not what *does* emerge.

So it doesn't really make sense to arbitrarily draw a line at a certain scale of examination and say, "Only these things REALLY exist." Reductionism yields a convenient mental model with practical application... but it is still just a map.

Comment author: Psy-Kosh 02 January 2009 08:04:54AM 2 points [-]

Wockyman: It's not that they're the smallest, as such.

Yes, how a particle acts is affected by those around it. But the idea is that if you know the basic rules, then knowing those rules, plus which particles are where around it lets you predict, in principle, given sufficient computational power, stuff about how it will act. In other words, the complicated stuff that emerges arises _from_ the more basic stuff.

Think of it this way: You know cellular automatons? Especially Conway's Game of Life? Really simple rules, just the grid, cells that can be on and off, and basic rules for when a transition occurs based on a cell's and its neighbors' states.

Yet complicated behavior arises out of that. One would not, however, say that behavior is beyond the rules, or that reduction to those rules fails. Those complicated behaviors arise out of those simple rules.

Incidentally, if you looked through Eliezer's QM sequence, the more fundamental reduction isn't so much particles, but probably quantum amplitudes over configuration space, with particles corresponding with it being possible to "factor out" certain sets of dimensions in the configuration space.

(Reductionism does _NOT_ mean "reduction to particles", just "reduction to simple principles that are the basic thing that give rise to everything else", not identical to, but similar to the way that comparatively simple rules of chess give rise to really complex strategies (and even more so for Go))

As for it being "just a map"... it is a map, but it's a map about something. The map may not be the territory, but there is a territory, and the fact that the map seems to tell us accurate stuff about the territory is at least a justification for suspecting that the actual underlying reality of the territory may actually resemble what the map claims it's like.
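Psy-Kosh's cellular-automaton point holds even in one dimension. Here is a sketch using Wolfram's Rule 30 (my choice of example; the comment names only Life): each cell's next state depends on just three cells, yet the evolution from a single live cell is famously chaotic.

```python
RULE = 30  # 8-bit lookup table: one output bit per 3-cell neighbourhood

def step(cells):
    """Update every cell from (left, self, right) via the lookup table; edges wrap."""
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# From one live cell, structure appears from nothing but the local table above.
row = [0] * 31
row[15] = 1
for _ in range(12):
    row = step(row)
    # print("".join(".#"[c] for c in row))  # uncomment to watch the pattern grow
```

Nothing in `RULE` mentions triangles, chaos, or randomness, yet all of those arise in the output; that's the sense in which "the complicated stuff arises _from_ the more basic stuff."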

Comment author: xrchz 28 October 2009 11:42:02AM 3 points [-]

But the way physics really works, as far as we can tell, is that there is only the most basic level - the elementary particle fields and fundamental forces.

To clarify (actually, to push this further): there is only one thing (the universe) - because surely breaking the thing down into parts (such as objects) which in turn lets you notice relations between parts (which in turn lets you see time, for example) -- surely all that is stuff done by modelers of reality and not by reality itself? I'm trying to say that the universe isn't pre-parsed (if that makes any sense...)

Comment author: byrnema 28 October 2009 04:02:38PM *  0 points [-]

As modelers of reality, we parse the world into fundamental particles and forces. You would claim that these distinctions are ultimately inherent features of the model and not necessarily defining reality.

I understand that a person might look at a car and see "mode of transportation" while another way of looking at the car is as a "particular configuration of quarks", in which case the distinction between a car and a tree does seem arbitrarily modeler-dependent.

But I would not go so far as to say that reality itself is featureless. Where would you begin to argue that there are no inherent dichotomies? Even if there is only one type of thing 'x', our reality (which is, above all, dynamic) seems to require a relationship and interaction between 'x' and ' ~x'. I'd say, logically, reality needs at least two kinds of things.

Comment author: xrchz 28 October 2009 09:26:52PM 0 points [-]

Even if there is only one type of thing 'x', our reality (which is, above all, dynamic) seems to require a relationship and interaction between 'x' and ' ~x'. I'd say, logically, reality needs at least two kinds of things.

Logic can only compel models.

You seem to be saying "Let x denote the universe. ~x is then a valid term. So ~x must denote something that isn't x, thus there are two things!" There are surface problems with this such as that x may not be of type boolean, and that you're just assuming every term denotes something. But the important problem is simpler: we can use logic to deduce things about our models, but logic doesn't touch reality itself (apart from the part of reality that is us).

What do you mean by "reality is dynamic"? Have you read Timeless Physics?

Comment author: byrnema 29 October 2009 12:33:30AM *  -2 points [-]

So I infer from the above that you have no logical arguments to support that reality is "one thing". I would think only an agnostic position on the nature of reality would be consistent with the nihilist stance you are representing.

Comment author: RafeFurst 07 March 2010 05:15:34PM 4 points [-]

Reductionism is great. The main problem is that by itself it tells us nothing new. Science depends on hypothesis generation, and reductionism says nothing about how to do that in a rational way, only how to test hypotheses rationally. For some reason the creative side of science -- and I use the word "creative" in the generative sense -- is never addressed by methodology in the same way falsifiability is:

http://emergentfool.com/2010/02/26/why-falsifiability-is-insufficient-for-scientific-reasoning/

We are at a stage of historical enlightenment where more and better reductionism is producing marginal returns. To be even less wrong, we might spend more time on the hypothesis generation side of the equation.

Comment author: Jack 07 March 2010 06:08:56PM 7 points [-]

Really? I think of reductionism as maybe the greatest, most wildly successful abductive tool in all of history. If we can't explain some behavior or property of some object it tells us one good guess is to look to the composite parts of that thing for the answer. The only other strategy for hypothesis generation I can think of that has been comparably successful is skepticism (about evidence and testimony). "I was hallucinating." and "The guy is lying" have explained a lot of things over the years. Can anyone think of others?

Comment author: JGWeissman 07 March 2010 06:32:52PM 3 points [-]

Science depends on hypothesis generation, and reductionism says nothing about how to do that in a rational way, only how to test hypotheses rationally.

You may be interested in Science Doesn't Trust Your Rationality, in which Eliezer suggests that science is a way of identifying the good theories produced by a community of scientists who on their own have some capacity to produce theories, and that Bayesian rationality is a systematic way of producing good theories.

Oh, and Welcome to Less Wrong! You have identified an important point in your first few comments, and I hope that is predictor of good things to come.

Comment author: whowhowho 04 February 2013 03:13:01PM 0 points [-]

and that Bayesian rationality is a systematic way of producing good theories.

An automated theory generator would be worth a Nobel.

Comment author: TheOtherDave 04 February 2013 05:38:31PM 2 points [-]

So, the introduction of "automated" to this discussion feels like a complete non sequitur to me. Can you clarify why you introduce it?

Comment author: whowhowho 04 February 2013 07:49:51PM 0 points [-]

If you have a "systematic" way of "producing" something (JGWeissman), surely you can automate it.

Comment author: TheOtherDave 04 February 2013 08:21:25PM 0 points [-]

Ah. OK, thanks for clarifying.

Comment author: army1987 05 February 2013 05:03:04AM 1 point [-]

I could call a procedure "systematic" even if one of the steps used a human's System 1 as an oracle, in which case it'd be hard to automate that as per Moravec's paradox.

Comment author: whowhowho 05 February 2013 11:07:13AM *  0 points [-]

I would not call such a procedure systematic. Who would? Here's a system for success as an author: first have a brilliant idea... it reads like a joke, doesn't it?

Comment author: army1987 05 February 2013 12:32:23PM 1 point [-]

I wasn't thinking of something that extreme; more like the kind of tasks people do on Mechanical Turk.

Comment author: whowhowho 05 February 2013 12:35:06PM -2 points [-]

Is there anything non systematic by that definition? In what way does it promote Bayesianism to call it systematic?

Comment author: TheOtherDave 05 February 2013 04:08:30PM 2 points [-]

Well, I have no idea if it "promotes Bayesianism" or not, but when someone talks to me about a systematic approach to doing something in normal conversation, I understand it to be as opposed to a scattershot/intuitive approach.

For example, if I want to test a piece of software, I can make a list of all the integration points and inputs and key use cases and build a matrix of those lists and build test cases for each cell in that matrix, or I can just construct a bunch of test cases as they occur to me. The former approach is more systematic, even if I can't necessarily automate the test cases.

I realize that your understanding of "systematic" is different from this... if I've understood you, if I can't automate the test cases then this approach is not systematic on your account.

Comment author: army1987 05 February 2013 04:39:21PM *  2 points [-]

Is there anything non systematic by that definition?

See TheOtherDave.

In what way does it promote Bayesianism to call it systematic?

See E.T. Jaynes calling certain frequentist techniques “ad-hockeries”. EDIT: BTW, I didn't have Bayesianism in mind when I replied to this ancestor -- I should stop replying to comments without reading their ancestors first.

Comment author: private_messaging 05 February 2013 07:39:15AM *  1 point [-]

It feels like you use 'questions' a lot more than usual, and it looks very much like a rhetorical device because you inject counter points into your questions. Can you clarify why you do it? (see what I did there?)

Sidenote: Actually, questions are often a sneaky rhetorical device - you can modify the statement in the way of your choosing, and then ask questions about that. You see that in political debates all the time.

Comment author: Vaniver 05 February 2013 02:12:43PM 0 points [-]

Agreed that questions can be used in underhanded ways, but this example does seem more helpful at focusing the conversation than something like:

Can you clarify why you added "automated" to the discussion?

That could easily go in other directions; this makes clear that the question is "how did we get from A to B?" while sharing control of the topic change / clarification.

Comment author: TheOtherDave 05 February 2013 03:37:44PM 0 points [-]

Can you clarify why you do it?

Sure, I'd be happy to: because I want answers to those questions.

For example, whowhowho's introduction of "automated" did in fact feel like a non sequitur to me, and I wanted to understand better why they'd introduced it, to see whether there was some clever reasoning there I'd failed to follow. Their answer to my question clarified that, and I thanked them for the clarification, and we were done.

(see what I did there?)

You asked a question.
I answered it.
It really isn't that complicated.

That said, I suspect from context that you mean to imply that you did something sneaky and rhetorical just then, just as you seem to believe that I do something sneaky and rhetorical when I ask questions.
If that's true, then no, I guess I don't see what you did there.

questions are often a sneaky rhetorical device

Yes. So are statements.

Comment author: shminux 04 February 2013 06:35:56PM 2 points [-]
Comment deleted 05 February 2013 09:22:55AM *  [-]
Comment author: Kawoomba 05 February 2013 11:55:31AM 0 points [-]

Solomonoff Induction, insofar as it is related to interpretations at all, rejects the many-worlds interpretation, because the valid (non-falsified) code strings are the ones whose output begins with the actual experimental outcome rather than listing all possible outcomes, i.e. they are very much Copenhagen-like.

Has this point ever been answered? If we are content with the desired output appearing somewhere along the line - as opposed to at the start - then the simplest theory of everything would be a program printing enough digits of pi, and our universe would be described somewhere down the line.

Comment deleted 05 February 2013 01:25:41PM [-]
Comment author: Kawoomba 05 February 2013 03:14:21PM 2 points [-]
Comment author: Eliezer_Yudkowsky 05 February 2013 07:39:54PM 2 points [-]

Solomonoff induction is about putting probability distributions on observations - you're looking for the combination of the simplest program that puts the highest probability on observations. Technically, the original SI doesn't talk about causal models you're embedded in, just programs that assign probabilities to experiences.

Generalizing somewhat, for QM as it appears to humans, the generalized-SI-selected hypothesis would be something along the lines of one program that extrapolated the wavefunction, then another program that looked for people inside it and translated the underlying physics into the "observed data" from their perspective, then put probabilities on the sequences of data corresponding to integral squared modulus. Note that you also need an interface from atoms to experiences just to e.g. translate a classical atomic theory of matter into "I saw a blue sky", and an implicit theory of anthropics/sum-probability-measure too if the classical universe is large enough to have more than one copy of you.
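The weighting Eliezer describes can be caricatured in code. Below is a toy stand-in — hypothetical and nothing like real Solomonoff induction, which ranges over all computable programs — where the only "programs" are ones that repeat a fixed bit-pattern forever, each given prior weight 2^-length and a 0/1 likelihood depending on whether it reproduces the observed data:

```python
def predictions(pattern, n):
    """Output of the 'program' that repeats `pattern` forever, truncated to n bits."""
    return (pattern * (n // len(pattern) + 1))[:n]

def predict_next(observed, max_len=8):
    """Probability the next bit is '1', under a 2^-length prior over repeating patterns."""
    posterior = {}
    for length in range(1, max_len + 1):
        for i in range(2 ** length):
            pattern = format(i, "0{}b".format(length))
            # Likelihood is 1 if the program reproduces the data, else 0.
            if predictions(pattern, len(observed)) == observed:
                posterior[pattern] = 2.0 ** -length
    total = sum(posterior.values())
    p_one = sum(w for p, w in posterior.items()
                if predictions(p, len(observed) + 1)[-1] == "1")
    return p_one / total

print(predict_next("010101"))  # -> 1/23 ~ 0.043: the short pattern "01" dominates
```

The shortest consistent pattern, "01", carries most of the posterior weight, so the prediction strongly favors "0" as the next bit; this is the "simplest program that puts the highest probability on observations" idea in miniature.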

Comment author: Kawoomba 05 February 2013 07:42:35PM 1 point [-]

Thanks for this. I'll mull it over.

Comment author: private_messaging 05 February 2013 10:29:42PM 1 point [-]
Comment author: whowhowho 05 February 2013 08:04:12PM 2 points [-]

It isn't at all clear why all that would add up to something simpler than a single world theory

Comment author: Eliezer_Yudkowsky 05 February 2013 08:08:19PM 8 points [-]

Single-world theories still have to compute the wavefunction, identify observers, and compute the integrated squared modulus. Then they have to pick out a single observer with probability proportional to the integral, peek ahead into the future to determine when a volume of probability amplitude will no longer strongly causally interact with that observer's local blob, and eliminate that blob from the wavefunction. Then translating the reductionist model into experiences requires the same complexity as before.

Basically, it's not simpler for the same reason that in a spatially big universe it wouldn't be 'simpler' to have a computer program that picked out one observer, calculated when any photon or bit of matter was moving away and wasn't going to hit anything that would reflect it back, and then eliminated that matter.

Comment author: Morendil 07 March 2010 06:37:44PM 0 points [-]

Agreed: we need more posts on abductive reasoning specifically.

Comment author: imaxwell 07 November 2010 05:43:48PM 6 points [-]

Probably no one will ever see this comment, but.

"I wish I knew where reality got its computing power."

If reality had less computing power, what differences would you expect to see? You're part of the computation, after all; if everything stood still for a few million meta-years while reality laboriously computed the next step, there's no reason this should affect what you actually end up experiencing, any more than it should affect whether planets stay in their orbits or not. For all we know, our own computers are much faster (from our perspective) than the machines on which the Dark Lords of the Matrix are simulating us (from their perspective).

Comment author: Perplexed 07 November 2010 06:53:56PM 3 points [-]

If reality were computed in reverse chronological order, what differences would you expect to see?

Suppose our universe was produced by specifying some particular final state, and then repeatedly computing predecessor states according to some deterministic laws of nature. Would we experience time backward? Or would we still experience it forward (the reverse of the direction of the simulation) because of some time asymmetry in the physical laws or in the entropy of the initial vs. final states?

Everyone always assumes that the simulation will proceed "forward". Is that important? I honestly don't know.

Comment author: imaxwell 08 November 2010 04:17:14AM 4 points [-]

You can go one step further. If folks like Barbour are correct that time is not fundamental, but rather something that emerges from causal flow, then it ought to be that our universe can be simulated in a timeless manner as well. So a model of this universe need not actually be "executed" at all---a full specification of the causal structure ought to be enough.

And once you've bought that, why should the medium for that specification matter? A mathematical paper describing the object should be just as legitimate as an "implementation" in magnetic patterns on a platter somewhere.

And if it doesn't matter what the medium is, why should it matter whether there's a medium at all? Theorems don't become true because someone proves them, so why should our universe become real because someone wrote it down?

If I understand Max Tegmark correctly, this is actually the intuition at the core of his mathematical universe hypothesis (Wikipedia, but with some good citations at the bottom), which basically says: "We perceive the universe as existing because we are in it." Dr. Tegmark says that the universe is one of many coherent mathematical structures, and in particular it's one that contains sentient beings, and those sentient beings necessarily perceive themselves and their surroundings as "real". Pretty much the only problem I have with this notion is that I have no idea how to test it. The best I can come up with is that our universe, much like our region of the universe, should turn out to be almost but not quite ideal for the development of nearly-intelligent creatures like us, but I've seen that suggested of models that don't require the MUH as well. Aside from that, I actually find it quite compelling, and I'd be a bit sad to hear that it had been falsified.

Interestingly enough, a version of the MUH showed up in Dennis Paul Himes' [An Atheist Apology](http://www.cookhimes.us/dennis/aaa.htm) (as part of the "contradiction of omnipotent agency" argument), written just a few years after Dr. Tegmark started writing about these ideas. Mr. Himes' essay was very influential on me as a teenager, and yet I never did hear of the "mathematical universe hypothesis" by that name until a few years ago. In past correspondence, he wrote that the argument was original to him as far as he knew, and at least one of his commenters claimed to also have developed it independently, so it may be a more intuitively plausible idea than it seems to be at first glance.

Comment author: Perplexed 08 November 2010 05:54:02PM -1 points [-]

at least one of his commenters claimed to also have developed it independently, so it [Tegmark's idea] may be a more intuitively plausible idea than it seems to be at first glance.

I'm pretty sure that the idea has occurred to just about everyone who has wondered whether the meanings of the intransitive verb "to exist" in mathematics and philosophy might have anything in common. Tegmark deserves some credit though for writing it down.

Comment author: Traddles 03 May 2011 06:06:22PM *  0 points [-]

Sounds like one of the central tenets of Discordianism. There is no such thing as wings, identity, truth, the concept of equality. These are all abstract concepts that exist only in the mind. "Out there" in "True" reality, there is only chaos (not necessarily of the random kind, just of the meaningless/purposeless kind).

Comment author: Tuukka_Virtaperko 16 January 2012 10:31:07PM *  0 points [-]

But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level. The airplane is too large. Even a hydrogen atom would be too large. Quark-to-quark interactions are insanely intractable. You can't handle the truth.

Can you handle the truth, then? I don't understand the notion of truth you are using. In everyday language, when a person states something as "true", it doesn't usually need to be grounded in logic in order to work for a practical purpose. But you are making extremely abstract statements here. They just don't mean anything unless you define truth and solve the symbol grounding problem. You have criticized philosophy in other threads, yet here you are making dubious arguments. The arguments are dubious because they are not clearly mere rhetoric, and not clearly philosophy. If someone requires you to explain their meaning, you could say you're not interested in philosophy, so philosophical counterarguments are irrelevant to you. But you can't be uninterested in philosophy if you make philosophical claims like that and actually consider them important.

I don't like contemporary philosophy either, but I would suppose you are in trouble with these things, and I wonder if you are open to a solution? If not, fine.

But the way physics really works, as far as we can tell, is that there is only the most basic level - the elementary particle fields and fundamental forces. You can't handle the raw truth, but reality can handle it without the slightest simplification. (I wish I knew where Reality got its computing power.)

But you haven't defined reality. As long as you haven't done so, "reality" will be a metaphorical, vague concept, which frequently changes its meaning in use. This means if you state something to be "reality" in one discussion, logical analysis would probably reveal you didn't use it in the same meaning in another discussion.

You can have a deterministic definition of reality, but that will be arbitrary. Then people will start having completely pointless debates with you, and to make matters worse, you will perceive these debates as people trying to unjustify what you are doing. That's a problem caused by you not realizing you didn't have to justify your activities or approach in the first place. You didn't need to make these philosophical claims, and I don't suppose you would have done so had you not felt threatened by something, such as religion or mysticism or people imposing their views on you.

This, as I see it, is the thesis of reductionism. Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.

If you categorize yourself as a reductionist, why don't you go all the way? You can't be both a reductionist and a realist. I.e. you can't believe in reductionism and in the existence of a territory at the same time. You have to drop either one of them. But which one?

Drop the one you need to drop. I'm serious. You don't need this metaphysical nonsense to justify something you are doing. Neither reductionism nor realism is "true" in any meaningful way. You are not doing anything wrong if you are a reductionist for 15 minutes, then switch to realism (i.e. the belief in a "territory") for ten seconds, then switch again into reductionism and then maybe to something else. And that is also the way you really live your life. I mean, think about your mind. I suppose it's somewhat similar to mine. You don't think about that metaphysical nonsense when you're actually doing something practical. So you are not a metaphysicist when you're riding a bike and enjoying the wind or something.

It's just some conception of yourself which you have, that you have defined as someone who is an advocate of "reductionism and realism". This conception is true only when you indeed are either one of those. It's not true, when you're neither of those. But you are operating in your mind. Suppose someone says to you you're not a "reductionist and a realist" when you are, for example, in intense pain for some reason and are very unlikely to think about philosophy. Well, even in that case you could remind yourself of your own conception of yourself, that is, you are a "reductionist and a realist", and argue that the person who said you are not was wrong. But why would you want to do so? The only reasons I see are some naive or egoistic or defensive reasons, such as:

  • You are afraid the person who said you're not a "reductionist or realist" will try to waste your time by presenting stupid arguments according to which you may or may not or should or should not do something.
  • You believe your image of yourself as a "reductionist and realist" is somehow "true". But you are able to decide at will whether that image is true. It is true when you are thinking in a certain way, and false when you are not thinking that way. So the statement conveys no useful information, except maybe on something you would like to be or something like that. But that is no longer philosophy.
  • You have some sort of a need to never get caught uttering something that's not true. But in philosophy, it's a really bad idea to want to make true statements all the time. Metaphysical theories in and of themselves are neither true nor false. Instead, they are used to define truth and falsehood. They can be contradictory or silly or arbitrary, but they can't be true or false.

If you state that you regard one state of mind or one theory, such as realism or reductionism, as some sort of an ultimate truth, you are simply putting yourself into a prison of words for no reason except that you apparently perceive some sort of safety in that prison or something like that. But it's not safe. It exposes you to philosophical criticism you previously were invulnerable to, because before you went to that prison, you didn't even participate in that game.

If you actually care about philosophy, great. But I haven't yet gotten such an impression. It seems like philosophy is an unpleasant chore to you. You want to use philosophy to obtain justification, a sense of entitlement, or something, and then throw it away because you think you're already finished with it - that you've obtained a framework theory which already suits your needs, and you can now focus on the needs. But you're not a true reductionist in the sense you defined reductionism, unless you also scrap the belief in the territory. I don't care what you choose as long as you're fine with it, but I don't want you to contradict yourself.

There is no way to express the existence of the "territory" as a meaningfully true statement. Or if there is, I haven't heard of it. It is a completely arbitrary declaration you use to create a framework for the rest of the things you do. You can't construct a "metatheory of reality" which is about the territory, which you suppose to exist, and have that same territory prove the metatheory is right. The territory may contain empirical evidence that the metatheory is okay, but no algorithm can use that evidence to produce proof for the metatheory, because:

  • From "territory's" point of view, the metatheory is undefined.
  • But the notion of gathering empirical evidence is meaningless if the metatheory, according to which the "territory" exists, is undefined.

Therefore, you have to define it if you want to use it for something, and just accept the fact that you can't prove it to be somehow true, much less use its alleged truth to prove something else false. You can believe what you want, but you can't make an AI that would use "territory" to construct a metatheory of territory, if it's somehow true to the AI that territory is all there is. The AI can't even construct a metatheory of "map and territory", if it's programmed to hold as somehow true that map and territory are the only things that exist. This entails that the AI cannot conceptualize its own metaphysical beliefs even as well as you can. It could not talk about them at all. To do so, it would have to be able to construct arbitrary metatheories on its own. This can only be done if the AI holds no metaphysical belief as infallible, that is, the AI is a reductionist in your meaning of the word.

I've seen some interest towards AI on LW. If you really would like to one day construct a very human-like AI, you will have problems if you cannot program an AI that can conceptualize the structure of its own cognitive processes also in terms that do not include realism. Because humans are not realists all the time. Their mind has a lot of features, and the metaphysical assumption of realism is usually only constructed when it is needed to perform some task. So if you want to have that assumption around all the time, you'll just end up adding unnecessary extra baggage to the AI which will probably also make the code very difficult to comprehend. You don't want to lug the assumption around all the time just because it's supposed to be true in some way nobody can define.

You could as well have a reductionist theory, which only constructs realism (i.e. the declaration that an external world exists) under certain conditions. Now, philosophy doesn't usually include such theories, because the discipline is rather outdated, but there's no inherent reason why it can't be done. Realism is neither true nor false in any meaningful and universal way. You are free to state that it exists if you are going to use that statement for something. But if you just say it, as if it would mean something in and of itself, you are not saying anything meaningful.

I hope you were interested in my rant.

Comment author: thomblake 16 January 2012 10:40:48PM *  0 points [-]

I don't understand the notion of truth you are using.

A belief is true when it corresponds to reality. Or equivalently, "X" is true iff X.

But you haven't defined reality.

In the map/territory distinction, reality is the territory. Less figuratively, reality is the thing that generates experimental results. From The Simple Truth:

I need different names for the thingies that determine my predictions and the thingy that determines my experimental results. I call the former thingies ‘belief’, and the latter thingy ‘reality’.

Comment author: DSimon 17 January 2012 12:40:02AM *  1 point [-]

I don't follow why you claim that reductionism and realism are incompatible. I think this may be because I'm very confused when I try to figure out, from context, what you mean by "realism", and I strongly suspect that that's because you don't have a definition of that word which can be used in tests for updating predictions, which is the sort of thing LWers look for in a useful definition.

Basically, I'm inclined to agree with you when you say:

Realism is neither true nor false in any meaningful and universal way. You are free to state that it exists if you are going to use that statement for something. But if you just say it, as if it would mean something in and of itself, you are not saying anything meaningful.

This is a really good reason in my experience for not getting into long discussions about "But what is reality, really?"

Comment author: DSimon 17 January 2012 12:45:23AM *  0 points [-]

Because humans are not realists all the time. Their mind has a lot of features, and the metaphysical assumption of realism is usually only constructed when it is needed to perform some task.

Actually, this may be a good point for me to try to figure out what you mean by "realism", because here you seem to have connected that word to some but not all strategies of problem-solving. Can you give me some specific examples of problems which the mind tends to use realism in solving, and problems where it doesn't?

Comment author: Tuukka_Virtaperko 17 January 2012 03:19:22AM 1 point [-]

I got "reductionism" wrong, actually. I thought the author was using some nonstandard definition of reductionism, which would have been something to the effect of not having unnecessary declarations in a theory. I did not take into account that the author could actually be what he says he is, no bells and whistles, because I didn't take into account that reductionism could be taken seriously here. But that just means I misjudged. Of course I am not necessarily even supposed to be on this site. I am looking for people who might give useful ideas for theoretical work which could be useful for constructing AI, and I'm trying to check whether my approach is deemed intelligible here.

"Realism" is the belief that there is an external world, usually thought to consist of quarks, leptons, forces and such. It is typically thought of as a belief or a doctrine that is somehow true, instead of just an assumption an AI or a human makes because it needs to. Depending on who labels themselves a realist and what mood they are in, this can entail that everybody who is not a realist is considered mistaken.

An example of a problem whose solution does not need to involve realism is: "John is a small kid who seems to emulate his big brother almost all the time. Why is he doing this?" Possible answers would be: "He thinks his brother is cool" or "He wants to annoy his brother" or "He doesn't emulate his brother, they are just very similar". Of course you could just brain scan John. But if you really knew John, that's not what you would do, unless brain scanners were about as common and inexpensive as laptops. And have much better functionality than they currently do.

In the John problem, there's no need to construct the assumptions of a physical world, because the problem would be intelligible even in the case you meet John in a dream. You can't take any physical brain scanner with you in a dream, so you can't brain scan John. But you can analyze John's behavior with the same criteria according to which you would analyze him had you met him while awake.

I'm not trying to impose any views on you, because I'm basically just trying to find out whether someone is interested in this kind of stuff. The point is that I'm trying to construct a framework theory for AI that is not grounded in anything other than sensory (or emotional etc.) perception - all the abstract parts are defined recursively. Structurally, the theory is intended to resemble a programming language with dynamic typing, as opposed to static typing. The theory would be pretty much both philosophy and AI.

The problem I see now is this. My theory, RP, is founded on the notion that important parts of thinking are based on metaphysical emergence. The main recursion loop of the theory, in its current form, will not create any information if only reduction is allowed. I would allow both, but if the people on LW are reductionist, I would suppose that the logical consequence of that would be they believe my theory cannot work. And that's why I'm a bit troubled by the notion that you might accept reductionism as some sort of an axiom, because you don't want to have a long philosophical conversation and would prefer to settle down with something that currently seems reasonable. So should I expect you to not want to consider other options? It's strange that I should go elsewhere with my project, because that would amount to you rejecting an AI theory on grounds of contradicting your philosophical assumptions. Yet, my common sense expectation would be that you'd find AI more important than philosophy.

Comment author: DSimon 17 January 2012 04:21:02AM *  2 points [-]

The point is that I'm trying to construct a framework theory for AI that is not grounded on anything else than sensory (or emotional etc.) perception - all the abstract parts are defined recursively. Structurally, the theory is intended to resemble a programming language with dynamic typing, as opposed to static typing. [...] The main recursion loop of the theory, in its current form, will not create any information if only reduction is allowed.

You seem to be overthinking this. Reductionism is "merely" a really useful cognition technique, because calculating everything at the finest possible level is hopelessly inefficient. Perhaps a practical simple example is needed:

An AI that can use reductionism can say "Oh, that collection of pixels within my current view is a dog, and this collection is a man, and the other collection is a leash", and go on to match against (and develop on its own) patterns about objects at the coarser-than-pixel size of dogs, men, and leashes. Without reductionism, it would be forced to do the pattern matching for everything, even for complex concepts like "Man walking a dog", directly at the pixel level, which is not impossible but is certainly a lot slower to run and harder to update.

If you've ever refactored a common element out in your code into its own module, or even if you've used a library or high-level language, you are also using reductionism. The non-reductionistic alternative would be something like writing every program from scratch, in machine code.
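DSimon's dog/man/leash example can be sketched in a few lines. This is a deliberately crude illustration with made-up feature names, not a real vision system: the coarse pass names objects, and the scene-level pattern then matches against object names instead of raw pixels.

```python
# Toy hierarchical recognizer (hypothetical feature names throughout).
# "Pixels" are stand-ins: just a bag of low-level features.
PIXEL_PATTERNS = {
    "dog": {"fur", "tail", "four_legs"},
    "man": {"torso", "two_legs", "head"},
    "leash": {"strap", "clip"},
}

def detect_objects(pixels):
    """Coarse pass: group raw features into named objects."""
    return {name for name, features in PIXEL_PATTERNS.items()
            if features <= pixels}

def detect_scene(objects):
    """Higher-level pass: match patterns over objects, not pixels."""
    if {"dog", "man", "leash"} <= objects:
        return "man walking a dog"
    return "unknown scene"

pixels = {"fur", "tail", "four_legs", "torso", "two_legs",
          "head", "strap", "clip"}
print(detect_scene(detect_objects(pixels)))  # -> man walking a dog
```

The scene-level rule never touches the "pixels" at all; that is the efficiency reductionist multi-level modeling buys, and it is the same move as factoring shared code into a module.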

Comment author: Tuukka_Virtaperko 17 January 2012 11:02:51AM 0 points [-]

Okay. That sounds very good. And it would seem to be in accordance with this statement:

Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory.

If reductionism does not entail that I must construct the notion of a territory and include it into my conceptualizations at all times, it's not a problem. I now understand even better why I was confused by this. This kind of reductionism is not reductive physicalism. It's hardly a philosophical statement at all, which is good. I would say that "the notion of higher levels being out there in the territory" is meaningless, but expressing disbelief to that notion is apparently intended to convey approximately the same meaning.

RP doesn't yet actually include reduction. It's next on the to-do list. Currently it includes an emergence loop that is based on the power set function. The function produces a staggering amount of information in just a few cycles. It seems to me that this is because instead of accounting for the emergence relations the mind actually performs, it accounts for all defined emergence relations the mind could perform. So the theory is clearly still under construction, and it doesn't yet have any kind of an algorithm part. I'm not much of a coder, so I need to work with someone who is. I already know one mathematician who likes to do this stuff with me. He's not interested in the metaphysical part of the theory, and even said he doesn't want to know too much about it. :) I'm not guaranteeing RP can be used for anything at all, but it's interesting.
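The "staggering amount of information" is easy to check: each cycle takes a set of size n to size 2^n, so sizes grow doubly-exponentially. The sketch below only demonstrates that combinatorial explosion; it is not RP's actual loop, which isn't specified here.

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of s, each as a frozenset."""
    s = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))}

# Iterate the power set a few times and watch the size explode.
current = {frozenset()}          # a one-element starting set
sizes = [len(current)]
for _ in range(4):
    current = power_set(current)
    sizes.append(len(current))

print(sizes)  # -> [1, 2, 4, 16, 65536]
```

One more cycle would produce 2^65536 elements, which is why any implementation has to prune to the relations actually performed rather than all those that could be.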

Comment author: FiftyTwo 29 February 2012 11:57:25AM 1 point [-]
Comment author: Voltairina 04 March 2012 12:50:37PM 0 points [-]

One way of tracing the, uhm, data, I guess, might be to say: we see, naively, a chair. And we know that underneath the chair out there is, at the bottom level we're aware of, energy fields and fundamental forces. And those concepts, like the chair, correspond to a physics model, which is in turn a simplification/distillation of vast reams of recorded experimental data into said rules/objects, which is in turn the actual results of taking measurements during experiments, which in turn are the results of actual physical/historical events. So the reductionist model - fields and forces - I think is still a map of experimental results, tagged with, like, interpretations that tie them together, I guess.

Comment author: Voltairina 04 March 2012 12:51:28PM 0 points [-]

Er, I guess I should say it's strictly /not/ an attempt at a simplified description, but a minimal description which can still account for everything...

Comment author: Voltairina 04 March 2012 06:32:40PM 0 points [-]

Whatever the bottom level of our understanding of the map, even a one-level map is still above the territory, so there are still levels below that which carry back to, presumably, territory. We find some fields-and-forces model that accounts for all the data we're aware of. But it's always going to be possible - less likely the more data we get - that something flies along and causes us to modify it. So, if we wanted to continue the reductionistic approach to the model we're making of our world, stripping away higher-level abstractions, we'd say that it's an in-process unifying simplification of, and minimal inference from, the results of many experiments, which correspond to measurements of the world at certain levels of sensitivity by different means.

Comment author: Voltairina 04 March 2012 06:39:15PM 0 points [-]

Like, I can draw a picture of a face in increasingly finer detail, down to "all the detail I see", but it's still going to contain unifying assumptions - like a vector representation of a face, versus the data, which may be pixellated - made up of specific individual measurement events. Or I can show a chart of where and how all the nerves are excited in my eyes, which is the 'raw data' level stuff that I have access to about what's 'out there', for which the simplest explanation is most probably a face. Actually it's kind of interesting to think of it that way, because a lot of our raw mental data is 'vectored' already. But whenever we do a linear regression of a dataset, that's also a reduction-to-a-vector of something.
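That last point can be made concrete: an ordinary least-squares fit reduces an arbitrarily large dataset to a two-component vector (slope, intercept). A minimal sketch with made-up data, using the textbook closed-form formulas:

```python
def fit_line(xs, ys):
    """Ordinary least squares: reduce (xs, ys) to the vector (slope, intercept)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# A thousand (noiseless, hypothetical) data points collapse to two numbers.
xs = list(range(1000))
ys = [3 * x + 7 for x in xs]
print(fit_line(xs, ys))  # -> (3.0, 7.0)
```

Two thousand measurements in, two parameters out: the regression is a lossy map of the data, in exactly the sense that the vector drawing is a lossy map of the pixellated face.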

Comment author: potato 08 June 2012 11:50:05PM 0 points [-]

This post represents, for me, the typical LW response to something like the object-oriented ontologies of Paul Levi Bryant and DeLanda. These ontologies attempt to give things like numbers, computations, atoms, fundamental particles, galaxies, higher-level laws, fundamental laws, concepts, referents of concepts, etc. equal ontological status. Hence, they are strictly against making a distinction between map and territory: there is only territory, and all things that are, are objects.

I'm a confident reductionist, model/reality (Bayesian) type guy. I'm not having major second thoughts about that right now. But engaging in productive debate with object-oriented philosophers might be a good chance for us to check ourselves, i.e., see how confident we really should be in our reductionist ontology. There are leading philosophers, and other scientists, that are opposed to reductionism, and opposed to correlationism. They have blogs, and are often open to debate. There's no point missing out on talking with someone who sees the universe fundamentally differently from you, in a way that is technically derivable.

Comment author: aceofspades 02 July 2012 04:46:06AM 1 point [-]

Does the reductionist model give different predictions about the world than the non-reductionist model? If so, are any easily checked?

Comment author: Rixie 29 March 2013 05:24:37PM 0 points [-]

This website is doing amazing things to the way I think every day, as well as occasionally making me die of laughter.

Thank you, Eliezer!

Comment author: wedrifid 12 September 2013 08:53:35AM 0 points [-]

as well as occasionally making me die of laughter.

But you got better.

Comment author: RogerS 23 April 2013 04:24:57PM *  0 points [-]

"having different descriptions at different levels" is itself something you say that belongs in the realm of Talking About Maps, not the realm of Talking About Territory

Why do we distinguish “map” and “territory”? Because they correspond to “beliefs” and “reality”, and we have learnt elsewhere in the Sequences that

my beliefs determine my experimental predictions, but only reality gets to determine my experimental results.

Let’s apply that test. It isn’t only the predictions that apply at different levels; so do the results. We can have right or wrong models at quark level, atom level, crystal level, and engineering component level. At each level, the fact that one model is right and another wrong is a fact about reality: it is Talking about Territory. When we say a 747 wing is really there, we mean that (for example) visualising it as a saucepan will result in expectations that the results will not fulfil in the way that they will when visualising it as a wing. Indeed, we can have many different models of the wing, all equally correct - since they all result in predictions that conform to the same observations. The choice of correct model is what is in our head. The fact that it has to be (equivalent to) a model of a wing to be correct is in the Territory. In short, when Talking about Territory we can describe things at as many levels (of aggregation) as yield descriptions that can be tested against observation.

at different levels

What exactly is meant by “levels” here? The Naval Gunner is arguing about levels of approximation. The discussion of Boeing 747 wings is an argument about levels of aggregation. They are not the same thing. Treating the forces on an aircraft wing at the aggregate level is leaving out internal details that per se do not affect the result. There will certainly be approximations involved in practice, of course, but they don’t stem from the actual process of aggregation, which is essentially a matter of combining all the relevant force equations algebraically, eliminating internal forces, before solving them; rather than combining the calculated forces numerically.

...the way physics really works, as far as we can tell, is that there is only the most basic level—the elementary particle fields and fundamental forces

The way that reality works, as far as we can tell, is that there are basic ingredients, with their properties, which in any given system at any given instant exist in a particular configuration. Now reality is not just the ingredients but also the configuration - a wrong model of the configuration will give wrong predictions just as a wrong model of the ingredients will. The possible configurations include known stable structures. These structures are likewise real, because any model of a configuration which cannot be transformed into a model which includes the identified structure in question is in conflict with reality. Physics as I understand it comprises (a) laws that are common to different configurations of the ingredients, and (b) laws that are common to different configurations of the known stable structures. Physicalism implies the belief that laws (b) are always consistent with laws (a) when both are sufficiently accurate.

...The laws of physics do not contain distinct additional causal entities that correspond to lift or airplane wings

True but the key word here is “additional”. Newton’s laws were undoubtedly laws of physics, and in my school physics lessons were expressed in terms of forces on bodies, rather than on their constituent particles. The laws for forces on constituent particles were then derived from Newton’s laws by a thought experiment in which a body is divided up. In higher education today the reverse process is the norm, but reality is indifferent to which equivalent formulation we use: both give identical predictions.[Original wording edited]

General Relativity contains the additional causal entity known as space-time curvature, which is an aggregate effect of all the massive particles in the universe given their configuration, and so is not a natural fit in the Procrustean bed of reductionism. [Postscript] Interestingly, I've read that Newton was never happy with his idea of gravitation as a force of attraction between two things, because it implied a property shared between the two things concerned and therefore intrinsic to neither - but he failed to find a better formulation.

The critical words are really and see

Indeed, but when you see a wing it is not just in the mind; it is also evidence of how reality is configured. It is the result of the experiment you perform by looking.

.. the laws of physics themselves, use different descriptions at different levels—as yonder artillery gunner thought

What the gunner really thought is pure speculation of course, but this assumption by EY raises an important point about meta-models.

In thought experiments the outcome is determined by the applicable universal laws – that’s meta-model (A). In any real-world case you need a model of the application as well as models of universal laws. That’s meta-model (B). An actual artillery shell will be affected by things like air resistance, so the greater accuracy of Einstein’s laws in textbook cases is no guarantee of it giving more accurate results in this case. EY obviously knew this, but his meta-model excluded it from consideration here. Treating the actual application as a case governed only by Newton’s or Einstein’s laws is itself a case of “Mind Projection Fallacy” – projecting meta-model (A) onto a real-world application. So it’s not a case of the gunner mistaking a model for reality, but of mistaking the criteria for choosing between one imperfect model and another. I imagine gunners are generally practical men, and in the field of the applied sciences it is very common for competing theories to have their own fields of application where they are more accurate than the alternatives – so although he was clearly misinformed, at least his meta-model was the right one.
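The size of the relativistic correction at artillery-shell speeds is easy to check. Here is a quick sketch (my own illustrative numbers, not from the comment above) showing that at roughly 1 km/s the Lorentz factor differs from 1 by a few parts in a trillion, far smaller than the effect of air resistance, which supports the point that the choice between Newton's and Einstein's laws matters less here than modelling the actual application.

```python
import math

# Illustrative sanity check (assumed figure: a fast artillery shell at ~1 km/s).
c = 299_792_458.0   # speed of light, m/s (exact by definition)
v = 1_000.0         # order-of-magnitude shell speed, m/s

beta2 = (v / c) ** 2

# gamma - 1 in a numerically stable form, avoiding the catastrophic
# cancellation in 1/sqrt(1 - beta2) - 1 when beta2 is tiny:
# gamma - 1 = beta2 / (s * (1 + s)) where s = sqrt(1 - beta2).
s = math.sqrt(1.0 - beta2)
gamma_minus_1 = beta2 / (s * (1.0 + s))

print(f"(v/c)^2   = {beta2:.2e}")          # ~1.11e-11
print(f"gamma - 1 = {gamma_minus_1:.2e}")  # ~5.56e-12
```

For small speeds gamma - 1 is approximately (v/c)^2 / 2, so the relativistic correction to the trajectory is of order one part in 10^11 - completely swamped by drag, wind, and Coriolis effects, which is why both Newton and Einstein "work" for gunnery and neither gives "the wrong answer".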

[Postscript] An arguable version of reductionism is the belief that laws about the ingredients of reality are in some sense "more fundamental" than laws about stable structures of those ingredients. This cannot be an empirical truth, since both kinds of law give the same predictions where they overlap, and so cannot be empirically distinguished. Neither is any logical contradiction implied by its negation. It can only be a metaphysical truth, whatever that is. Doesn't it come down to believing Einstein's essentialist concept of science against Bohr's instrumentalist version? That science doesn't just describe, but also tells? So pick Bohr as an opponent if you must, not some anonymous gunner.

Comment author: army1987 12 September 2013 08:44:29AM 0 points [-]

"No," he said, "I mean that relativity will give you the wrong answer, because things moving at the speed of artillery shells are governed by Newtonian mechanics, not relativity."

[extreme steelman mode on]

By “relativity” he must have meant the ultrarelativistic approximation, of course.

[extreme steelman mode off]

:-)