Less Wrong is a community blog devoted to refining the art of human rationality.

Were atoms real?

Post author: AnnaSalamon 08 December 2010 05:30PM

Related to: Dissolving the Question, Words as Hidden Inferences

In what sense is the world “real”?  What are we asking, when we ask that question?

I don’t know.  But G. Polya recommends that when facing a difficult problem, one look for similar but easier problems that one can solve as warm-ups.  I would like to do one of those warm-ups today; I would like to ask what disguised empirical question scientists were asking in 1860, when they debated (fiercely!) whether atoms were real.[1]

Let’s start by looking at the data that swayed these, and similar, scientists.

Atomic theory:  By 1860, it was clear that atomic theory was a useful pedagogical device.  Atomic theory helped chemists describe several regularities:

  • The law of definite proportions (a given compound always contains its constituent elements in fixed ratios by mass).
  • The law of multiple proportions (when two elements form more than one compound, as carbon and oxygen do in carbon monoxide and carbon dioxide, the masses of one element that combine with a fixed mass of the other stand in simple integer ratios; this holds for many different compounds, including complicated organic compounds).
  • If fixed volumes of distinct gases are isolated, at a fixed temperature and pressure, their masses stand in these same simple ratios.
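The multiple-proportions regularity is easy to illustrate numerically. The following is a sketch using modern standard atomic masses, which of course the chemists of 1860 did not have in this form:

```python
# Fix the mass of carbon and compare the masses of oxygen that combine
# with it in carbon monoxide (CO) versus carbon dioxide (CO2).
# Modern standard atomic masses, in g/mol:
C, O = 12.011, 15.999

o_per_c_in_co = O / C        # grams of oxygen per gram of carbon in CO
o_per_c_in_co2 = 2 * O / C   # grams of oxygen per gram of carbon in CO2

ratio = o_per_c_in_co2 / o_per_c_in_co
print(round(ratio, 6))  # 2.0 -- a simple integer ratio, as the law states
```

Run on other compound pairs (the oxides of nitrogen, say), the same computation again yields small integer ratios, which is exactly the regularity atomic theory was invoked to explain.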

Despite this usefulness, there was considerable debate as to whether atoms were “real” or were merely a useful pedagogical device.  Some argued that substances might simply prefer to combine in certain ratios and that such empirical regularities were all there was to atomic theory; it was needless to additionally suppose that matter came in small unbreakable units.

Today we have an integrated picture of physics and chemistry, in which atoms have a particular known size, are made of known sets of subatomic particles, and generally fit into a total picture in which the amount of data far exceeds the number of postulated details.  And today, nobody suggests that atoms are not "real" but "merely useful predictive devices".

Copernican astronomy:  By the mid-sixteenth century, it was clear to the astronomers at the University of Wittenberg that Copernicus’s model was useful.  It was easier to use, and more theoretically elegant, than Ptolemaic epicycles.  However, they did not take Copernicus’s theory to be “true”, and most of them ignored the claim that the Earth orbits the Sun.

Later, after Galileo and Kepler, Copernicus’s claims about the real constituents of the solar system were taken more seriously. This new debate invoked a wider set of issues, besides the motions of the planets across the sky. Scholars now argued about Copernicus’s compatibility with the Bible; about whether our daily experiences on Earth would be different if the Earth were in motion (a la Galileo); and about whether Copernicus’s view was more compatible with a set of physically real causes for planetary motion (a la Kepler).  It was this wider set of considerations that eventually convinced scholars to believe in a heliocentric universe. [2]

Relativistic time-dilation: For Lorentz, “local time” was a mere predictive convenience -- a device for simplifying calculations.  Einstein later argued that this local time was “real”; he did this by proposing a coherent, symmetrical total picture that included local time.

Luminiferous aether:  Luminiferous ("light-bearing") aether provides an example of the reverse transition.  In the 1800s, many scientists, e.g. Augustin-Jean Fresnel, thought aether was probably a real part of the physical world.  They thought this because they had strong evidence that light was a wave, such as the interference of light in two-slit experiments, and all known waves were waves in something.[2.5]

But the predictions of aether theory proved non-robust.  Aether theory not only correctly predicted that light would act as a wave, but also incorrectly predicted that the Earth's motion with respect to the aether should affect the perceived speed of light.  That is: luminiferous aether yielded accurate predictions only in narrow contexts, and it turned out not to be "real".

Generalizing from these examples

All theories come with “reading conventions” that tell us what kinds of predictions can and cannot be made from the theory.  For example, our reading conventions for maps tell us that a given map of North America can be used to predict distances between New York and Toronto, but that it should not be used to predict that Canada is uniformly pink.[3]  

If the “reading conventions” for a particular theory allow for only narrow predictive use, we call that theory a “useful predictive device” but are hesitant about concluding that its contents are “real”.  Such was the state of Ptolemaic epicycles (which were used to predict the planets' locations in the sky, but not, say, their brightness or their nearness to Earth); of Copernican astronomy before Galileo (which could be used to predict planetary motions, but didn't explain why humans standing on Earth did not feel as though they were spinning); of early atomic theory; and so on.  When we learn to integrate a given theory-component into a robust predictive whole, we conclude the theory-component is "real".

It seems that one disguised empirical question scientists are asking, when they ask “Is X real, or just a handy predictive device?” is the question: “will I still get accurate predictions, when I use X in a less circumscribed or compartmentalized manner?”  (E.g., “will I get accurate predictions, when I use atoms to predict quantized charge on tiny oil drops, instead of using atoms only to predict the ratios in which macroscopic quantities combine?”)[4][5]
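The oil-drop inference can be made concrete with a small sketch. The charge values below are hypothetical round numbers, not Millikan's actual measurements; the point is only the shape of the argument: if charge comes in atom-like units, measured drop charges should all be near-integer multiples of a single unit.

```python
# Hypothetical measured drop charges, in units of 1e-19 coulombs.
charges = [3.2, 8.0, 4.8, 11.2, 1.6, 6.4]

# Crude estimate of the elementary unit: the smallest gap between
# distinct sorted charge values.
sorted_q = sorted(charges)
gaps = [b - a for a, b in zip(sorted_q, sorted_q[1:]) if b - a > 1e-6]
e_est = min(gaps)

# Quantization check: every charge should be close to an integer
# multiple of the estimated unit.
multiples = [q / e_est for q in charges]
assert all(abs(m - round(m)) < 0.05 for m in multiples)

print(round(e_est, 6))                 # 1.6 (i.e. ~1.6e-19 C)
print([round(m) for m in multiples])   # [2, 5, 3, 7, 1, 4]
```

If atoms were "merely predictive devices" for macroscopic combining ratios, there would be no reason to expect this quantization check to pass; that it does pass, in a domain far from the original one, is the kind of robustness the post is pointing at.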

 


[1] Of course, I’m not sure that it’s a warm-up; since I am still confused about the larger problem, I don't know which paths will help. But that’s how it is with warm-ups; you gather all the related-looking easier problems you can find, and hope for the best.

[2]  I’m stealing this from Robert Westman’s book “The Melanchthon Circle, Rheticus, and the Wittenberg Interpretation of the Copernican Theory”.  But you can check the facts more easily in the Stanford Encyclopedia of Philosophy.

[2.5] Manfred asks that I note that Lorentz's local time made sense to Lorentz partly because he believed in an aether that could be used to define absolute time.  I unfortunately haven't read, or don't recall, the primary texts well enough to add good interpretation here (although I read many of the primary texts in a history of science course once), but Wikipedia has some good info on the subject.

[3] This is a standard example, taken from Philip Kitcher.

[4]  This conclusion is not original, but I can't remember who I stole it from.  It may have been Steve Rayhawk.

[5] Thus, to extend this conjecturally toward our original question: when someone asks "Is the physical world 'real'?" they may, in part, be asking whether their predictive models of the physical world will give accurate predictions in a very robust manner, or whether they are merely local approximations.  The latter would hold if e.g. the person: is a brain in a vat; is dreaming; or is being simulated and can potentially be affected by entities outside the simulation.

And in all these cases, we might say their world is "not real".

Comments (156)

Comment author: Manfred 08 December 2010 06:52:05PM *  14 points [-]

It's an interesting question, and probably the right way to ask it, but I've noticed three errors or omissions that would make me very happy if they were fixed. I'll start with the minor nitpicks.

  • Lorentz thought his transformed time wasn't real because he was preserving the aether, which defined a particularly "real" time. Before Einstein's interpretation of the photoelectric effect the aether made a lot of sense, which seems like useful context.

  • Hydrogen monoxide isn't something early chemists would have measured - maybe they measured hydrogen peroxide, though. Other examples for the law of multiple proportions would be carbon dioxide and carbon monoxide.

  • The Copernican model was not more accurate than the Ptolemaic model. Its inaccuracy was its major problem, in fact. The main reason it held on was that it, in its simplicity, felt more "real" - what you report was only thought later. Kepler's model, on the other hand, kicked butt and took names, which may be what you were thinking of.

Comment author: AnnaSalamon 08 December 2010 07:11:25PM *  6 points [-]

Much thanks for the good historical info.

Lorentz thought his transformed time wasn't real because he was preserving the aether, which defined a particularly "real" time. Before Einstein's interpretation of the photoelectric effect the aether made a lot of sense, which seems like useful context.

I'm confused still. This one sounds consistent with what I said; a local time was useful in prediction but didn’t provide enough predictions in varied enough contexts for it to seem more sensible to believe in local time as a real world-constituent, rather than as a narrowly useful predictive device. Are you saying this wouldn’t have been true without the aether as a specific such context?

Hydrogen monoxide isn't something early chemists would have measured - maybe they measured hydrogen peroxide, though. Other examples for the law of multiple proportions would be carbon dioxide and carbon monoxide.

Thanks. Fixed.

The Copernican model was not more accurate than the Ptolemaic model. Its inaccuracy was its major problem, in fact. The main reason it held on was that it, in its simplicity, felt more "real" - what you report was only thought later. Kepler's model, on the other hand, kicked butt and took names, which may be what you were thinking of.

Okay, thanks. I’ll fix that. Do you think historians of science are correct in thinking that the scholars at Wittenberg in fact engaged with the new theory, but not with the bit about heliocentricness?

Comment author: Anatoly_Vorobey 11 December 2010 12:25:55AM 1 point [-]

Re: Lorentz, I think this discussion might prove helpful, especially the very astute comment #9 there: http://www.physicsforums.com/showthread.php?t=442132

Comment author: [deleted] 08 December 2010 07:07:53PM 9 points [-]

This might be a situation where a word ("real") that served some useful purpose in certain contexts has been unwittingly taken out of that context, resulting in a meaningless question that can be dissolved by understanding the original context.

This seems to be the method that Wittgenstein uses to dissolve questions in Philosophical Investigations.

Comment author: Morendil 08 December 2010 09:04:46PM 7 points [-]

There's a famous quote by Ian Hacking regarding electrons - "If you can spray them, they're real".

In "disguised query" terms, this corresponds to "can X be reliably used to effect changes on the rest of what I consider 'real' already?"

We know that relativistic time dilation is real, for instance, because you have to take it into account to build GPS devices that work as expected, and these are as real as they come - you use them to drive your car somewhere.
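The size of the effect Morendil mentions is easy to estimate to first order. A back-of-the-envelope sketch (standard constants; it ignores orbital eccentricity and higher-order terms):

```python
# First-order relativistic clock drift for a GPS satellite.
GM = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
c = 2.99792458e8      # speed of light, m/s
r_earth = 6.371e6     # mean Earth radius, m
r_sat = 2.6561e7      # GPS orbital radius (~20,200 km altitude), m
day = 86400.0         # seconds per day

v = (GM / r_sat) ** 0.5   # circular orbital speed, ~3.9 km/s

# Special relativity: the satellite's speed makes its clock run slow.
sr_loss = (v**2 / (2 * c**2)) * day                   # ~7 microseconds/day

# General relativity: weaker gravity aloft makes it run fast.
gr_gain = (GM / c**2) * (1/r_earth - 1/r_sat) * day   # ~46 microseconds/day

net_us = (gr_gain - sr_loss) * 1e6
print(round(net_us, 1))  # net gain, microseconds per day (about 38)
```

Left uncorrected, tens of microseconds of drift per day would translate into kilometers of accumulated position error, which is why the corrections are designed into the system.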

Comment author: Snowyowl 09 December 2010 01:52:57PM 0 points [-]

This seems like the best criterion for reality: once you have something (like a GPS receiver) that cannot be explained or built without the understanding of relativity/atoms/luminiferous aether, then you can consider them to be "real".

(This is a sufficient condition, but probably not a necessary one.)

Comment author: Morendil 18 March 2011 04:05:18PM 0 points [-]

Did I say GPS? Man that's exotic. Apparently car batteries depend on relativistic effects.

Comment author: Sniffnoy 18 March 2011 10:13:37PM 2 points [-]

I don't think this counts, in that the predictions of relativity were not needed in the design. Relativity is needed to explain why certain phenomena crucial to the design occur, but the phenomena themselves were already known at the time of design. Another similar example is that relativity is needed to explain the color of gold. If all the evidence you had for relativity was of this form, and then it turned out relativity was wrong, you wouldn't be too surprised - you'd have gotten your theory wrong, but you wouldn't be saying "How the hell can batteries work, then?!" You still wouldn't know how car batteries worked, of course, but it wouldn't seem impossible that they should work, since they were based on phenomena known prior to relativity. By contrast, learning that relativity was wrong should make GPS seem impossible, since relativity was needed to predict that it would work (in its current form, with the relativistic calculations included) in the first place. (Here of course by "wrong" I mean "significantly off in these situations" rather than just "not literally true".)

Comment author: Jack 08 December 2010 09:24:08PM *  6 points [-]

By the mid sixteen century, it was clear to the astronomers at the University of Wittenburg that Copernicus’s model was useful. It was easier to use, and more theoretically elegant, than Ptolemaic epicycles.

Copernicus's model still had epicycles. He improved on the Ptolemaic model by dropping the equant.

Comment author: AnnaSalamon 08 December 2010 09:38:32PM 1 point [-]

I realize that. But his epicycles were still easier to use than the Ptolemaic model's epicycles.

Comment author: Jack 09 December 2010 04:17:52AM *  1 point [-]

I'd consider mentioning the equant as the elimination of it seems to be where most of the theoretical elegance was gained. There were fewer epicycles (were they used differently?) but I'm not sure how much of the epicycle elimination was due to genuine improvement and how much just left the theory as less accurate (more epicycles usually means more precision).

Comment author: Jayson_Virissimo 08 December 2010 11:41:43PM 1 point [-]

In what sense were they easier to use?

Comment author: Jack 09 December 2010 04:18:14AM 0 points [-]

Well there were fewer of them... but Anna may mean something else.

Comment author: jimrandomh 08 December 2010 07:01:33PM *  20 points [-]

The idea of leaky abstractions seems relevant here. This is the observation from engineering that when layers of models are built on top of each other, consequences of the lower-level models tend to appear even when the higher layers are meant to abstract them away.

Asking whether a model is "real" seems akin to asking whether its abstraction will ever leak, and if it does, whether the places where lower layers show through are correctly labelled and explained within the model. Atomic chemistry is "real" in that, when it does break down (extreme energies, rare particles, etc), it's for reasons that can be explained in atomic chemistry's own vocabulary. On the other hand, psychology, for example, tends to break down for reasons that can't be explained, or can only be explained in terms of biology.

Under this definition, if the universe is a simulation, then it is real if and only if that simulation runs to completion without information about the simulator's universe leaking into our universe.

Comment author: Jack 08 December 2010 09:35:54PM *  10 points [-]

Atomic chemistry is "real" in that, when it does break down (extreme energies, rare particles, etc), it's for reasons that can be explained in atomic chemistry's own vocabulary. On the other hand, psychology, for example, tends to break down for reasons that can't be explained, or can only be explained in terms of biology.

I'm not sure I follow this. Isn't atomic chemistry an abstraction describing the behavior of the wave function? How are the places it breaks down explained by its own vocabulary?

Comment author: steven0461 10 December 2010 10:09:37PM 5 points [-]

The MWI people like to cite Real Patterns by Daniel Dennett. I also wonder what LWers think of structural realism.

Comment author: jsalvatier 09 December 2010 02:28:27AM 5 points [-]

This seems like a pretty good idea. It suggests that the "reality" of a theory is a point on a continuum, not a binary property. For example, under this interpretation, I'd say Newtonian physics is pretty real, but Relativistic physics is even more real than Newton's physics.

Comment author: Davorak 10 December 2010 09:15:21PM 3 points [-]

The "reality" of a theory is a point on a continuum of how well the theory maps to reality?

After accepting that reality is external and objective this seems like the best definition. How well a theory maps to reality can be hard to measure until a better theory comes along however.

The real problem is that throughout history and to the present day people jump to conclusions and call things real without proper evidence. If scientists had not jumped to conclusions in the 1860s then there would have been no debate over whether atoms were real or not.

Comment author: AlexU 14 December 2010 02:59:11PM *  10 points [-]

Half the more "philosophical" posts on here seem like they're trying to reinvent the wheel. This issue has been discussed a lot by philosophers and there's already an extensive literature on it. Check out http://plato.stanford.edu/entries/scientific-realism/ for starters. Nothing wrong with talking about things that have already been talked about, of course, but it would probably be good at least to acknowledge that this is a well-established area of thought, with a known name, with a lot of sophisticated thinking already underway, rather than having the mindset that Less Wrong is single-handedly inventing Western philosophy from scratch.

Comment author: sfb 14 December 2010 08:41:16PM *  6 points [-]

Alternately, Western 'word salad' Philosophy might benefit from a bit of reinventing:

http://www.paulgraham.com/philosophy.html

And so instead of denouncing philosophy, most people who suspected it was a waste of time just studied other things. That alone is fairly damning evidence, considering philosophy's claims. It's supposed to be about the ultimate truths. Surely all smart people would be interested in it, if it delivered on that promise.

Because philosophy's flaws turned away the sort of people who might have corrected them, they tended to be self-perpetuating. Bertrand Russell wrote in a letter in 1912:

Hitherto the people attracted to philosophy have been mostly those who loved the big generalizations, which were all wrong, so that few people with exact minds have taken up the subject.

[..] There's a market for writing that sounds impressive and can't be disproven.

Comment author: Alexandros 09 December 2010 09:46:53AM 3 points [-]

This is an extremely thought-provoking article that I haven't been able to get off my mind, so thanks.

I think we can all agree that reality (the 'territory') as a whole is real, but this is nigh-tautological. The question of whether a particular concept is a true part of reality (e.g. atoms) is more interesting but not as straightforward.

jsalvatier suggests that 'the "reality" of a theory is point on a continuum, not a binary property', and it seems there's something to this. My gut response to the question of 'are atoms real?' was 'of course! we've got photos and everything!'. But what is a photo other than the result of a scientific experiment that gives us strong evidence in favour of the theory? Going back to the everyday definition of 'real', we consider something to be real when there's so much evidence that we can't imagine any alternative. 'I'll believe it when I see it with my own eyes' some say. But our eyes are just partial evidence, not a final arbiter of reality on their own. Others believe Uri Geller's powers to be real because he can perform his tricks in plain sight, in front of a camera. Eliezer has claimed that "A peer-reviewed, journal-published, replicated report is worth far more than what you see with your own eyes.". So my gut response was wrong. All we have is evidence, and some theories have more of it than others.

So, if the theory of atoms was supplanted by a superior alternative, would atoms no longer be real? It doesn't seem right that something that happens in our mind should affect the world out there. "Reality is that which, when you stop believing in it, doesn't go away". So all we can confidently say is our probability estimate that a certain aspect of our map is complete (a.k.a. 'atoms are real, 99%'). Which is not as satisfying; and of course, given that our definitions can change over time, this may get more murky as the definition of an atom is refined to be in line with incoming observations and new theories. For instance, while we know the original definition of atom (indivisible) to be false, we consider that its coiner was in fact right in some sense.

Altogether I think the problem is that while 'atoms' are concepts that appear in our theories, and therefore maps, 'real' is a word that refers to the territory. So I guess the question could be restated as 'Is the map (atoms) the territory (real)?'. At this point perhaps it could be claimed that with perfect processing of perfect information, the map does become the territory but I don't think it's worth kicking this dead horse any further.

Comment author: byrnema 09 December 2010 04:21:49PM *  2 points [-]

I think we can all agree that reality (the 'territory') as a whole is real, but this is nigh-tautological.

For some reason, this sentence struck a chord with me and made a number of somewhat confused thoughts fall into place (a mini-epiphany). Indeed, real means the territory, whatever that is. I could have consoled myself in moments of panic fearing that reality wasn't really 'real'. However real reality feels, there isn't something else 'more real'.

So what did I mean when I felt that reality didn't feel real?

Thinking about it for a few moments, 'feeling real' subjectively means not only empirically based (seeing it, touching it, hearing it) but that the sensory information is integrated and familiar. A very new environment, a chemical imbalance in your brain that make it difficult to process sensory data and even an inner ear infection all cause feelings of unreality.

Something abstract, likewise -- "a mother's love", for example -- can be considered subjectively real, if you have lots of familiar and empirical examples of such love (she held me, she drove me to school, I had happy feelings). I don't know if this feeling of "real" isn't just an analogy your brain makes. That is, that it feels integrated on all sensory levels and familiar like a familiar physical object.

Finally, by analogy, atoms and photons should subjectively feel real for someone studying them if there is a lot of integrated empirical evidence of them and they are familiar objects. Thus the subjective feeling of something being real is scalable: they feel somewhat real if you can see their effects, but more real if you can see/hear/touch them directly. And then regardless of the kind of empirical evidence, it'll feel more and more real as you become more familiar with them.

All of this is relevant to the post only to the extent that the statement, "X is real" has a subjective component.

Comment author: SilasBarta 08 December 2010 09:41:07PM 3 points [-]

Another way to state your conclusion is that the "is it real?" question reduces to the question of whether a model yields a Level 1 or Level 2 understanding [/self-promotion]. Indeed, those concerns were part of what motivated me to create the hierarchy.

Comment author: [deleted] 09 December 2010 02:49:51PM 6 points [-]

Very small children understand "real" to be "what's inside" -- what's hidden, essential. Sometimes literally inside: ask toddlers "If you took a dog, and gave it the bones and insides of a cat, would it still be a dog?" they say "no," but "If you took a dog and made it look like a cat on the outside, would it still be a dog?" they say "yes." (I'm getting this from Paul Bloom's "How Pleasure Works.") Young children are essentialist about gender as well -- they assume more differences between the sexes than actually exist, not fewer.

What psychological evidence I've seen suggests that we're in some way wired to see categories as real. "Natural kinds." To think that there's a real difference "out there" between dog and not-dog, not just a useful bookkeeping convention. I'm inclined to believe that Anna's reasoning about "atoms are real" and Eliezer's reasoning about categories actually make more sense than essentialism -- but I suspect that this kind of question-dissolving is not the standard, evolution-provided brain pathway.

Comment author: TheOtherDave 09 December 2010 04:15:36PM 5 points [-]

If the subject interests you, I recommend reading Women, Fire, and Dangerous Things. It's somewhat slow going, but the author lays out a detailed story about the process of category formation in humans (that is, why we create the categories that we do) that does wonders for clarifying the issues involved.

I have no idea whether the specific story he tells is right or not, but sometimes it's useful to just have an example of what such a story might look like.

Comment author: [deleted] 09 December 2010 04:16:15PM 1 point [-]

thanks!

Comment author: David_Gerard 09 December 2010 03:21:11PM *  4 points [-]

Real-world example: The creationist science of baraminology takes assumption of kinds to its logical limits. Todd Charles Wood comes so close to admitting his baraminology work is excellent evidence for evolution. It's amazing how far people will take an obviously broken axiom without letting go of it.

Comment author: [deleted] 09 December 2010 04:25:15PM 8 points [-]

Interesting. It's funny how the Bible really reinforces the idea of natural kinds -- a lot of the prohibitions can be interpreted, one way or another, as prohibitions against mixing things that are essentially different (wool and flax, men and women, fish and mammals.) It would make sense if essentialism was the way we "naturally" think, and it takes some scientific development to tease out where it doesn't make sense.

Though I'm just amazed at their trouble with grammar, first of all. Grrrr.

Comment author: xamdam 09 December 2010 06:00:18PM 1 point [-]
  • wool and flax - Yes
  • men and women - Huh?
  • fish and mammals - Sort of (some people do not eat milk and fish with same utensils, but it's not from the Bible as far as I can tell) Additionally -
  • mixing plant species (via grafting) - Yes, a major support for your point

-- your local ex-rabbinical student :)

Comment author: [deleted] 09 December 2010 06:16:56PM 2 points [-]

-men and women: men aren't supposed to dress like women and vice versa.

-fish and mammals: takes some unpacking and was probably the wrong way to phrase it. The fish you can eat should have scales and fins -- that sort of points to "good" fish being especially "fishy" fish. Fish that are kind of not like fish are not okay.

Comment author: xamdam 09 December 2010 06:32:48PM 3 points [-]

-men and women: men aren't supposed to dress like women and vice versa.

agreed, support your theory

-fish and mammals

yes, probably wrong way to phrase it, but I agree about the essentialism of "fish with scales" being "fishy fish" - that's a very sharp observation, actually.

Comment author: David_Gerard 09 December 2010 04:30:40PM *  -1 points [-]

I herded the RW article from silver to gold (in the front cover rotation) and it was quite difficult. It's one of those subjects where every single thing about it is blitheringly stupid, and putting the stupidities in an order that reads usefully as an essay is actually the hard part. The inferential distance problem here is getting across to people that other people really do believe things this stupid. Staying understated requires remarkable self-control. Project Blue Beam was another - saving the punchline for the end, where it doesn't belong logically but does belong narratively.

Comment author: lucidfox 14 December 2010 09:24:15AM 2 points [-]

Young children are essentialist about gender as well -- they assume more differences between the sexes than actually exist, not fewer.

Heh. What are cooties anyway?

Comment author: NancyLebovitz 14 December 2010 04:45:49PM 2 points [-]

Anyone know how cross-cultural the belief in cooties or the equivalent is?

Comment author: arundelo 14 December 2010 12:26:41PM 0 points [-]

Originally, they were lice.

Comment author: RichardKennaway 09 December 2010 07:27:37PM 1 point [-]

What psychological evidence I've seen suggests that we're in some way wired to see categories as real.

What would it be, to not see categories as real?

All of our perceptions, from low-level physical sensation up to the highest abstractions, are experienced as real things existing outside of ourselves. This is an illusion in every case.

Look around you and it will seem to you that you see objects "out there". Listen, and you seem to hear things far off. Touch something and the sensation appears to be at your skin. Smells seem to be in the air around you, and tastes seem to belong to what you are eating. Watch someone in action and it will seem as if you can see their purposes, right out there in the other person. Think about abstractions like "justice" or "democracy", and these too will seem to be externally existing things.

But all of that experience is literally in your head. We are all of us shut up inside three pounds of porridge in a bone box, but it never feels like that. There is something outside you that gives rise to these sensations, but it takes a lot of work to get anywhere close to the real story.

We are wired to perceive categories -- and sensations, and sequences, and patterns, and various other sorts of perception. The perceptual illusion affects all of them.

Comment author: TheOtherDave 09 December 2010 07:49:52PM 2 points [-]

What would it be, to not see categories as real?

I expect it would be noticing that I treat X as though it were importantly similar to Y, even though X is (it seems to me) nothing at all like Y.

This happened to me a lot while I was dealing with post-stroke PTSD... I would react to things in ways that made no sense to me at all, think about it for a while, and eventually conclude that I was treating those things as importantly equivalent to aspects of stroke-related trauma, even though they didn't seem to me to be importantly equivalent at all.

Our minds are not internally consistent.

Agreed about the rest of this, though. "Aaaa! I'm stuck inside this dark, damp skull!" just isn't the sort of thing brains are wired to experience.

Comment author: derefr 15 December 2010 12:53:13PM 0 points [-]

this kind of question-dissolving is not the standard, evolution-provided brain pathway.

Hawkins would agree.

Comment author: [deleted] 08 December 2010 05:50:55PM 6 points [-]

This is essentially the debate between scientific realists and anti-realists in philosophy of science. Realists hold that unobservable entities postulated by scientific theories are still "real"; anti-realists hold that these entities are not real. One of the big problems for anti-realists, as you pointed out with your first example, is that "what is observable" changes over time (e.g. we can now "see" atoms in ways that would have startled physicists in the 1860s). However, the anti-realists do have one interesting argument in their favor: many theories that were empirically successful for a long period of time turned out to postulate unobservable entities that didn't actually exist. For example: ether, which made claims that were useful as prediction tools but didn't truly reflect reality. (This argument comes from Bas Van Fraassen, a leading anti-realist.)

Hopefully this historical context is helpful. The point I am trying to make is this: your question is one of those "great unsolved problems in philosophy."

Comment author: AnnaSalamon 08 December 2010 05:55:26PM *  9 points [-]

The point I am trying to make is this: your question is one of those "great unsolved problems in philosophy."

The usual "great unsolved question of philosophy" is "Are atoms real?". I'm not trying to ask that question. I'm instead asking what disguised empirical inquiry scientists were engaged in when, in the course of ordinary scientific research (and not metaphysical debates), they tried to figure out whether atoms were real.

Comment author: Jack 08 December 2010 09:14:08PM *  4 points [-]

Contemporary philosophers call this conceptual analysis and it's exactly how they talk about scientific realism and anti-realism. Your answer to the question, that X is real if it can be included as part of a coherent whole with the rest of science, is vaguely Quinean.

Comment author: [deleted] 09 December 2010 06:19:56PM 1 point [-]

I agree with the resemblance to Quine; it could also be thought of as Philip Kitcher's "unification" model of explanation.

Comment author: komponisto 09 December 2010 06:29:02PM 1 point [-]

And also the coherence theory of truth (replace "X is real" with " 'X exists' is true").

Comment author: Perplexed 08 December 2010 05:56:21PM 6 points [-]

your question is one of those "great unsolved problems in philosophy."

Are there great solved problems in philosophy?

Comment author: AnnaSalamon 08 December 2010 06:09:05PM *  11 points [-]

People have solved good chunks of "Why do all dogs resemble one another", which is a problem that Plato cared a lot about. (Mendelian genetics, Darwinian evolution, and our understanding of how the brain clusters perceptions are all parts of the answer here.)

People have also solved good chunks of: "Is there a God?", "Is there likely to be an after life?", and "In what sense do we have free will?", among other questions.

Comment author: SilasBarta 08 December 2010 09:49:11PM *  5 points [-]

People have solved good chunks of "Why do all dogs resemble one another", which is a problem that Plato cared a lot about. (Mendelian genetics, Darwinian evolution, and our understanding of how the brain clusters perceptions are all parts of the answer here.)

Why not just the last of those? All dogs resemble one another because if they didn't have a critical resemblance, we wouldn't use the same label for them. Even today, we often have common-use terms for organisms, where the labels (taken literally) violate post-Darwinian understanding, and that's because of what the layperson considers a relevant similarity.

In other cases (e.g. "is a whale a fish?"), a deeper awareness of the relevant similarities did cause us to change up our label.

Comment author: ata 08 December 2010 10:01:25PM *  5 points [-]

All dogs resemble one another because if they didn't have a critical resemblance, we wouldn't use the same label for them.

That would only be a sufficient answer to the question "Why do we have a category called 'dogs' such that all of its members resemble one another?". Genetics, evolution, etc. are indeed necessary to answer the question about the referent rather than the quotation.

Comment author: SilasBarta 08 December 2010 10:23:12PM *  3 points [-]

That would only be a sufficient answer to the question "Why do we have a category called 'dogs' such that all of its members resemble one another?". Genetics, evolution, etc. are indeed necessary to answer the question about the referent rather than the quotation.

Only because he picked a specific category where the (apparently-significant) physical resemblance did in fact coincide with a genetic resemblance. But because he picked a class of animals ("dogs") due to other criteria, the answer to that question begins and ends with his classification algorithm and what his mind counts as "doglike".

It's quite common (as I made clear) for people to give the same name to genetically distant organisms or organs. The reason for physical similarity in that case is quite different from the reason in the case of the genetically similar organisms.

To base your answer to Plato on dogs' genetic similarity, you would also have to "explain" sharks and dolphins as being the same species -- the "species" of fish.

Comment author: AnnaSalamon 09 December 2010 11:12:19AM 1 point [-]

To base your answer to Plato on dogs' genetic similarity, you would also have to "explain" sharks and dolphins as being the same species -- the "species" of fish.

Here, too, one can search out scientific explanations for how the similarities arose -- this time having to do partly with how form is passed along within a species (genetics), and partly with convergent evolutionary pressures that lead sharks and dolphins to both have a streamlined shape, flippers, etc.

Comment author: SilasBarta 09 December 2010 02:34:58PM *  4 points [-]

Yes, I get that. But, again, Plato didn't create a category isomorphic to modern knowledge of genetic lines. He created a category based on what Greeks at the time deemed "doglike". And the answer to that question is purely one of "why do you consider a boundary that includes only those things you call 'dogs' worthy of its own label?" Only later, as humans gained more knowledge, could they ask more complex questions about organisms that require knowledge of genetics, selection pressures, and convergent evolution. But the Greeks were not then at that point.

Also, explanations having to do with how humans deem something doglike are scientific.

Edit: To make the point clearer, consider answering Plato by saying "dogs are similar because genes determine what an animal looks like, animals reproduce by passing genes, and all dogs have similar genes". Such an answer would be wrong (uninformative) because it uses the premise "animals you give the same label to are similar because they have proportionally similar genes". This model is wrong, as it requires (per my above comment) you to also tell Plato that "shark-fish and dolphin-fish are similar because genes determine what an animal looks like, animals reproduce by passing genes, and all fish have similar genes."

Comment author: rwallace 09 December 2010 08:04:35AM 2 points [-]

It's not just a matter of labels. We can imagine a world in which every creature was a unique random mishmash of features without regard to any other creature. Empirically, we do not live in such a world; in our world, living organisms come in definite clusters with regularities to their properties. Evolution provides an explanation of why biology does objectively possess this feature.

Comment author: SilasBarta 09 December 2010 02:42:47PM *  7 points [-]

I understand that. That still doesn't mean Plato was in a position to be asking a question that requires understanding of evolutionary theory to answer. His question is not much different from him asking, had he lived in the world you posited, why all aerofauns are similar, where "aerofaun" is a label they innocuously came up with for "any creature that flies".

In that case, as in the actual one, there are huge differences among the aerofauns, more so than there are among dogs or among flying creatures in this world. But, even if that world's true explanation were "aliens regularly send their randomized automaton toys to earth", that still wouldn't mean you need aliens to answer the aerofaun question, because your question is already dissolved by understanding your own categorization system.

Edit: To further clarify the point: In your hypothetical world, the correct (informative, expectation-constraining) answer to a Plato asking "Why are all aerofauns similar?" would be:

"They're not similar in any objective sense. They simply have one particular similarity that you deem salient -- the fact of their flying -- and this is obscured by your having been accustomed to using the same label, 'aerofaun' for all of them. And the reason for a word's existence in the first place is because it calls out a human-relevant cluster. Because it matters to humans whether an animal flies or not, we have a word for it. But once you know whether an animal flies, there is no additional fact of the matter as to why the fliers are similar -- that similarity is an artifact of the filtering applied before an animal is called an aerofaun."

Similarly, you should answer Plato: "Dogs aren't similar in any objective sense. They simply have a few similarities that you deem salient -- how they're adaptable to humans, work in packs, walk on four legs, like meat, bark, etc. -- and this is obscured by your having been accustomed to using the same label, 'dog', for all of them. And the reason for a word's existence in the first place is because it calls out a human-relevant cluster. Because it matters to humans whether an animal has all the traits {friendly to us, works in packs, can't stand for long, wants meat, and can emit a loud call}, we have a word for it. But once you know an animal has those traits, there is no additional fact of the matter as to why dogs are similar -- that similarity is an artifact of the filtering applied before an animal is called a dog. Maybe one day we'll find that some of the things we were calling dogs differ in a critical way -- maybe they can't interbreed with most dogs? -- and we'll have to change our labeling system."

Comment author: Perplexed 08 December 2010 06:32:27PM 8 points [-]

People have also solved good chunks of: "Is there a God?", "Is there likely to be an after life?", and "In what sense do we have free will?", among others.

If a problem is solved in philosophy, but nobody reads it ...

Comment author: Jack 08 December 2010 10:58:28PM 9 points [-]

Of course, if all we care about are lay beliefs, the same could be said for physics, biology and neuroscience.

Comment author: Perplexed 09 December 2010 12:16:41AM 5 points [-]

Good point. But I think it is the case that almost everyone who has need of (i.e. uses) information from physics, biology, and neuroscience uses the standard, though esoteric, information produced by scientists.

But people who need (i.e. make decisions based on) ideas from philosophy regarding metaphysics generally do not make use of what you and I might call the "state of the art" in this field.

Comment author: Jack 09 December 2010 03:15:15PM *  5 points [-]

Sure, unfortunately acting on the false beliefs that there is a God and you have a soul doesn't leave the loud and fiery explosions that acting on false beliefs about physics does.

Comment author: NihilCredo 13 December 2010 09:57:46AM 2 points [-]

Unless you count religious warfare, that is.

Comment author: Jack 09 December 2010 03:11:51PM *  2 points [-]

I agree with Silas. Talk of genetics and evolution here makes it look like Plato was actually concerned about dogs but that's just an example of the problem. Plato was talking about the general question of which the following are also examples (tokens actually!), "Why do all triangles resemble each other?" "Why do all storms resemble each other?" "Why do all performances of Oedipus resemble each other?" and so on. And he's not looking for a causal explanation, he's trying to understand what our categories are doing and what it means to refer to different things by the same name.

Understanding how the human brain clusters perceptions helps us understand the question but it doesn't really answer it; it just transforms it into a question about the reality of such categories. And this problem is far from solved.

In any case, if we're counting philosophical problems which were transformed at some point into scientific problems then we might as well include the entirety of the sciences save Geometry, Music and Rhetoric as "solved philosophical problems". I don't say this to condemn philosophy either, on the contrary it was often philosophers who developed the methodology to answer these questions.

Comment author: SilasBarta 09 December 2010 03:46:21PM *  2 points [-]

Plato was talking about the general question of which the following are also examples (tokens actually!), "Why do all triangles resemble each other?" "Why do all storms resemble each other?" "Why do all performances of Oedipus resemble each other?" and so on. And he's not looking for a causal explanation, he's trying to understand what our categories are doing and what it means to refer to different things by the same name.

Well, I think that's giving Plato too much credit -- my claim is that, at the time, they weren't even aware of how their categorizations were influencing their judgments. But your comparison to the triangle question is very apt. According to what I read in Dennett's Darwin's Dangerous Idea, the Western ontology from the Greeks through to the 19th century was that all animals represent a special, ideal, "platonic" form.

To claim, as Darwin did, that animals changed forms over time sounded to them, like it would sound to us if someone argued, "Okay, you know all those integers we use? Well, they weren't always that way. They kinda changed over time. That 3 and 4 we have? See, they actually used to be a 3.5. Then over time it split into 3.2 and 3.8, eventually reaching the 3 and 4 we have today."

In short, the Greeks didn't recognize the hidden inferences that words were making and thought they were finding objective categories when really they were creating human-useful categories. EY goes into detail about this in the article AnnaSalamon referenced, Words as Hidden Inferences.

Yet the brain goes on about its work of categorization, whether or not we consciously approve. "All humans are mortal, Socrates is a human, therefore Socrates is mortal" - thus spake the ancient Greek philosophers. Well, if mortality is part of your logical definition of "human", you can't logically classify Socrates as human until you observe him to be mortal. But - this is the problem - Aristotle knew perfectly well that Socrates was a human. Aristotle's brain placed Socrates in the "human" category as efficiently as your own brain categorizes tigers, apples, and everything else in its environment: Swiftly, silently, and without conscious approval.

Comment author: Jack 09 December 2010 04:19:40PM 1 point [-]

So what I think you're saying is that Plato had so much map-territory confusion that what he had to say about forms isn't even a meaningful question. Is that right?

I might agree. It's hard to figure out how ancient philosophers were actually thinking about problems given that we only approach their work through modernized translations and with our own concepts and categories at hand.

I'm not sure I see Plato inferring from words, though. Maybe you can point out that step explicitly?

Part of the problem is that "Words as Hidden Inferences" doesn't make that much sense to me as it stands, particularly as it relates to Greek philosophy. Eliezer's example is at the very least poorly chosen. Aristotle didn't even necessarily believe that humans are mortal; he seems agnostic on that question. The quote "All humans are mortal, Socrates is a human, therefore Socrates is mortal" isn't an argument for anyone's mortality. It's an example of a logical syllogism. "All humans are mortal" and "Socrates is a human" are just premises designed to illustrate the form. They might as well be made in set notation.

Aristotle believed bodies inevitably die, if I recall. That may be a wrong judgment, but it is an inference based mostly on observation (or at least based on general theories which were based on observation but unfortunately not much experimentation). He thought that the part of the soul that thinks might be able to live on after the body but that at least some of the soul was dependent upon the body (note that Aristotle's soul isn't at all like the Platonic/Christian conception we're familiar with and could charitably but plausibly be updated into something people here would be comfortable identifying as a person sans body).

Comment author: SilasBarta 09 December 2010 04:57:41PM 1 point [-]

So what I think you're saying is that Plato had so much map-territory confusion that what he had to say about forms isn't even a meaningful question. Is that right?

No, I agree there was a meaningful question there: "why do the things we (historically) labeled as 'dogs' seem so similar to us?" And you can meaningfully answer that question, in a way that improves your map of the world, by looking at how things got into the dog category in the first place, and why that category (regardless of name) even exists.

While I admit I don't have special expertise on Greek philosophy in this area, I do know that they had not gathered enough evidence at that point to even be asking questions that require knowledge of evolution to answer, and that they were hung up on idealism (as opposed to nominalism) which forces you to think in terms of ideal forms rather than models that identify relevant clusters.

So perhaps EY's characterization of the situation misled me, but the essential features are still there to support my claim that Plato went astray by not recognizing the source of the classification-as-dog.

Comment author: Jack 09 December 2010 05:45:31PM *  3 points [-]

I see. I guess we were disagreeing with Anna for somewhat different reasons. Your point is that when Plato was considering the question "why do the things we call dogs resemble each other" the concept the English word dog references was just a folk concept that was applied to some things that looked the same- the causal-historical story for how those things came to look the way they do is irrelevant to the fact they're called the same thing just because our brains classify them the same way.

I think that's right. My point was that Plato didn't really care about dogs so much. What he cared about was this phenomenon of resemblance. The question wasn't so much how did discrete individuals (Lassie and Snoopy) come to exist in a way that resemble each other. Rather, the question is "We call both Lassie and Snoopy 'dogs' and yet they are different individuals. What then is the relation between 'dog' and Lassie/Snoopy, and what are we doing when we call both Lassie and Snoopy dogs?" But that might be more the entire tradition of Western philosophy talking rather than Plato himself.

Plato's answer, though, is that there are abstract objects, "forms", which are imperfectly instantiated in Lassie and Snoopy. Both approximate ideal 'dogness'. For Plato it was these forms that were 'most real', so to speak, because they were eternal and perfect. Plato, and especially some of his later followers, got really mystical about all this and it got imported into Christianity. But we can excise the mysticism/silly talk about perfection and get a live philosophical question (the most notable Platonist of the 20th century is Bertrand Russell). A modern version of the question might be "what is the ontological status of abstract objects?" At best evolution and genetics are only tangentially involved with that question, and only for a subset of abstract objects (things like species), and as a whole the question is generally considered unsolved. As it stands nominalism and Platonism have about equal representation among philosophers as a whole, though Platonism has a slight advantage among those who do work in Metaphysics.

Comment author: David_Gerard 08 December 2010 08:17:07PM *  2 points [-]

These appear to be things that, once solved, aren't "philosophy" any more. So what's philosophy? What, in your view, is left?

Comment author: WrongBot 08 December 2010 08:26:25PM 8 points [-]

Problems we don't know the right questions for yet. When we have a good handle on a question, it becomes science. When we have a good answer for the question, it becomes settled science.

Comment author: ata 08 December 2010 08:22:15PM *  8 points [-]

Philosophy consists of the questions that we don't understand well enough to even know how to go about answering them, but which, despite that (or because of that), are still really fun to argue about endlessly even in the absence of any new insights about the structure of the problem.

(Basically, I think describing a given problem as "philosophical" is mostly mind projection; from history, it seems that all the qualities that make a given problem a philosophical one have been properties of the people thinking about it rather than of the problem itself.)

Comment author: Will_Newsome 08 December 2010 11:02:32PM *  0 points [-]

People have also solved good chunks of: "Is there a God?", "Is there likely to be an after life?", and "In what sense do we have free will?", among other questions.

Er... I think a small number of people have made some progress, and I guess you could call that progress 'good chunks', but I get the feeling that the vast majority of rationalists are very confused about the first two questions (or would be if they noticed their confusion). Atheists and theists are both right and wrong in their own way, but neither have a solid understanding of the important underlying considerations. If you asked me if souls are real or if God is real, I'd say yes to both, but the explanation thereof would be excruciatingly difficult, and I'd be tempted to label the question 'not even wrong', akin to 'If a tree falls in the forest...'. (And I'm not talking about trivially true ensemble universe stuff, either -- I think there's more to it than just being smugly meta-contrarian.) Your point stands that there are a lot of solved philosophy problems, I'm just disputing your first two examples. Free will is a good example, though.

Comment author: Jack 08 December 2010 11:26:27PM 4 points [-]

Atheists and theists are both right and wrong in their own way, but neither have a solid understanding of the important underlying considerations. If you asked me if souls are real or if God is real, I'd say yes to both, but the explanation thereof would be excruciatingly difficult, and I'd be tempted to label the question 'not even wrong', akin to 'If a tree falls in the forest...

Not to make things 'excruciating' for you, but you can't really leave that hanging.

Comment author: Will_Newsome 08 December 2010 11:53:04PM *  4 points [-]

Gah. I'll stick my neck out a bit. Short barely-defensible version: sometimes your low-level-language/ontology should be bits, sometimes it should be gods. Souls are a pretty good model of how memetic cognitive algorithms make up about half of human experience and don't reside in any one body. (You could remove all of the memes from someone's body and put them in someone else's body, and that'd be damn close to reincarnation. There are obvious objections here but I'm just going to plow ahead.) For instance, Wikipedia: "In philosophy of mind, dualism is a set of views about the relationship between mind and matter, which begins with the claim that mental phenomena are, in some respects, non-physical."

'Non-physical' is the key concept. I like to model cognitive algorithms in terms of e.g. memetics and computer science and phenomenology, not in terms of atoms. So when the nasty monists come along and say 'everything about this soul business can be explained in terms of atoms', I say, well sure, the languages are Turing-equivalent, but who cares? There's barely a difference in anticipated experiences, it's just arguing about which ontology better carves reality at its joints. Personally, I'm just fine with using the ontology of souls and gods and magic. Yeah, half of it 'reduces' to the placebo effect and memetics and what not, but why choose that ontology? Use ontological pragmatism.

(I guess there's an argument that you can have a speed prior over speed prior languages and should use low-level languages when all else is equal, but I find 'algorithmic ontology' to be simpler and easier to reason about than 'atomic/physical ontology' anyway, so once again I think I disagree with the monists.)

With regards to God in particular: God exists in a lot of peoples' heads. He's a massively parallel distributed cognitive algorithm that millions of people use and model. That's more of an existence than your average person, by far. What atheists mean when they claim He doesn't exist is something else that no theists actually care about. He's revealed Himself to them. Once you've personally experienced the God cognitive algorithm, are you going to listen to some snobby scientist who comes along and tells you that God doesn't exist? But you directly experienced Him! And so did half the people at your church! Silly ignorant scientists.

In that sense, and it is an important sense, God is very real. More than that, all memes (memetic algorithms) are real. Now, it might be bad ontological pragmatism if this leads you to go ahead and start believing you'll go to the Christian heaven after you die. And there are all sorts of just-plain-wrong things that theists believe. But I don't think that they're that much more wrong than your average atheist. Both are pretty damn wrong. But it doesn't really matter, because most beliefs are clothes. It's when people start taking things seriously that you run into trouble.

And I realize this comes across as just being pointlessly meta-contrarian, but it's important to reason about these things correctly when you're doing Friendliness philosophy.

Comment author: TheOtherDave 09 December 2010 12:01:18AM 5 points [-]

Sure. And in that sense, Santa Claus is also real, and it's entirely correct to say that "God is no more real than Santa Claus." Or have I misunderstood you?

And yet, I suspect few theists would agree with that statement.

Comment author: mtraven 09 December 2010 01:21:08AM 2 points [-]

Allow me to link to this post on the social construction of Santa Claus.

Comment author: Will_Newsome 09 December 2010 12:05:41AM 1 point [-]

I wouldn't say that's entirely correct. God is significantly more real than Santa Claus. He's inspired all kinds of art and science and devotion and what not, to a much greater extent than Santa Claus. Plus, people don't really talk to Santa Claus, whereas they often talk to God, and sometimes He answers. God is a much more complex algorithm.

Theists wouldn't agree with your statement, but I wouldn't either. And there are lots of statements that are true that theists would disagree with, just like there are lots of statements that are true that anyone would disagree with, because people suck at epistemology. But that's kind of tangential to the main thrust of my argument.

Comment author: TheOtherDave 09 December 2010 12:27:27AM 3 points [-]

I'm a little startled by you interpreting "more real" as a quantitative comparison, when I meant it as a qualitative one, so I have to back up a bit and ask you to unpack that.

Presumably you aren't arguing that inspiring art, science, devotion and whatnot is what it means to be real, or it would follow that most of the atoms in the universe are non-real and are in non-real configurations, which is a decidedly odd use of that word.

You say later that God is "much more complex," and I can't really see what that has to do with anything... I mean, a tree is much more complex than a wooden pole, but I wouldn't say that has anything to do with the reality of a tree or of a wooden pole.

Basically, I can't quite figure out what you mean by "real," and you seem to be using it in ways that are inconsistent with the way most people I know (including quite a few theists) would use it.

For my own part, what I would conclude from your argument is that God, independent of reality or non-reality, is more important than Santa Claus. Which I would agree with. If God is a reality, it's a more important reality than Santa Claus. If God is a myth, it's a more important myth than Santa Claus. Etc.

Incidentally, many people write letters to Santa Claus, and sometimes things happen that they experience as a reply from Santa Claus. If that is different from what you are referring to as an "answer" here, then I've continued to misunderstand you.

So, let me back up and try again. I'm currently imagining a purple dinosaur named Ansel with a built-in helicopter coming out of its skull and a refrigerator in its belly. Are you suggesting that Ansel is real, since it exists in my mind, and that it would become increasingly real if other people sat around imagining it too?

Comment author: Will_Newsome 09 December 2010 12:56:39AM *  6 points [-]

So, let me back up and try again. I'm currently imagining a purple dinosaur named Ansel with a built-in helicopter coming out of its skull and a refrigerator in its belly. Are you suggesting that Ansel is real, since it exists in my mind, and that it would become increasingly real if other people sat around imagining it too?

Yes. And if I imagined Ansel except green and not purple, then that adds a little bit to the realness of Ansel, unless we want to call the new green dinosaur Spinoz instead and have it be its own distinct cognitive algorithm.

Presumably you aren't arguing that inspiring art, science, devotion and whatnot is what it means to be real, or it would follow that most of the atoms in the universe are non-real and are in non-real configurations, which is a decidedly odd use of that word.

Nah, I reason about it in terms of measure. You have one cognitive algorithm that's being run on one mind. You have another cognitive algorithm that's running redundantly on a hundred minds. I'd say the latter has about a hundred times as much measure as the former. I don't know how else to reason about relative existence. (Realness?) I'm porting this sort of thinking over from reasoning about the universe being spatially infinite and there being an infinite number of TheOtherDaves all typing slightly different things. Some of those TheOtherDaves 'exist' more than others, especially if they're doing very probable things.

If existence isn't measured by number of copies, then what could it be measured by? The alternative I see is something like decision theoretic significance, which is why I was talking about what you called 'importance'. But I'm wary of getting into cutting edge decision theory stuff that I don't understand very well. Instead, can you tell me what you think 'realness' is, and whether or not you think God is real, and why or why not? We're starting to argue over definitions, which is a common failure mode, but it's cool as long as we realize we're arguing over definitions.

I think that everything exists, by the way: there's an ensemble universe, like Tegmark's level 4 multiverse, and so we can only quibble about how existent something is, not whether or not it exists. I might be having trouble trying to translate commonsense definitions into and out of my ontology. My apologies.

You say later that God is "much more complex," and I can't really see what that has to do with anything... I mean, a tree is much more complex than a wooden pole, but I wouldn't say that has anything to do with the reality of a tree or of a wooden pole.

I mean that people tend to use a lot more neurons to model God than to model Santa Claus, and thus by the redundant-copies argument hinted at above this means that God exists more. Relatedly...

Incidentally, many people write letters to Santa Claus, and sometimes things happen that they experience as a reply from Santa Claus. If that is different from what you are referring to as an "answer" here, then I've continued to misunderstand you.

You're right, I forgot about this. Parents have to use lots of neurons to model Santa Claus when crafting the letters. Kids don't tend to use as many neurons when writing letters to Santa, I think. But add up all of these neuron-computations and it's still vastly less than the neuron-computations used by the many people having religious experiences and praying every day. (I'm using number-of-neurons-used as a proxy for strength/number of computations.)

Also, 'people' aren't ontologically fundamental: they're made of algorithms too, just like God. So I don't see how you can say 'God doesn't exist' without implying that Will Newsome doesn't exist; Will Newsome is just a collection of human universal algorithms (facial recognition, object permanence) and culture-specific memetic contents (humanism, rationality, Buddhism). The body is just a computing substrate, and it's not something I identify with all that much. And if I'm just a collection of algorithms running on some general computing hardware, well, the same is true of God. It's just that he's more parallel and I'm more serial. And I'm way smarter.

(Not that there is any such thing as 'I'. 'I' am made of a kludge of algorithms, and we don't always agree.)

Comment author: Will_Newsome 09 December 2010 01:17:35AM 0 points [-]

By the way, User:ata made this illuminating comment which I agree with; see my reply (where I admit to defecting when it comes to using words correctly).

Comment author: ata 09 December 2010 12:34:28AM *  17 points [-]

With regards to God in particular: God exists in a lot of peoples' heads. He's a massively parallel distributed cognitive algorithm that millions of people use and model. . . . In that sense, and it is an important sense, God is very real. More than that, all memes (memetic algorithms) are real.

But that's not the sense that theists mean when they say "God is real", and it's definitely not the sense that atheists mean when they say "God isn't real". When someone says "God isn't real", it's not like they're saying that God is not a meme that exists in anybody's mind — a person needs to have their own mental copy of the God algorithm, and the understanding that millions of people share it, in order to even bother being an atheist. It's pretty clear that they mean that the God algorithm isn't a model of any actual agent that created the universe or acts on it independently of the humans modeling him.

So I'd disagree with "In that sense, and it is an important sense, God is very real." Clearly in that sense God is real, but it seems like a profoundly unimportant sense to me, particularly because I don't think anyone actually uses "real" that way. It seems like a type error; a god is an extremely different sort of thing than the idea of a god.

Comment author: wedrifid 09 December 2010 12:40:51AM 4 points [-]

But that's not the sense that theists mean when they say "God is real", and it's definitely not the sense that atheists mean when they say "God isn't real".

Indeed. God is the omniscient, omnipresent, infinitely powerful and utterly non-existent creator of the universe! Cognitive algorithms are cognitive algorithms. Sometimes they make people say the word 'God'.

Comment author: Will_Newsome 09 December 2010 01:12:30AM *  2 points [-]

Clearly in that sense God is real, but it seems like a profoundly unimportant sense to me, particularly because I don't think anyone actually uses "real" that way. It seems like a type error; a god is an extremely different sort of thing than the idea of a god

You're right.

I suppose I'm just ignoring the unimportant senses because I'm talking to rationalists about what 'God' could be thought of as, and, well, the other more common ways of thinking about it don't convey much information. I was mostly trying to convey an ontology of cognitive algorithms, but got sidetracked into talking about this God business via a request from the audience. I honestly don't care much about how typical theists or atheists use the words, because, well, I don't care what they think. ;) I think I managed to get my points across despite defecting in the words game. Still, my apologies.

Also something very much like the actual God exists in a Tegmark multiverse, but that's also pretty unimportant, decision theoretically speaking. He's just another counterfactual terrorist.

Comment author: [deleted] 09 December 2010 05:40:44AM 0 points [-]

Also something very much like the actual God exists in a Tegmark multiverse, but that's also pretty unimportant, decision theoretically speaking. He's just another counterfactual terrorist.

Really? It sounds kinda like a self-defeating object. My guess is that there is an unending infinite hierarchy. But I don't trust my intuitions about the large scale structure of the multiverse much.

Comment author: David_Gerard 21 December 2010 01:08:32PM *  -1 points [-]

By "real" I'm assuming you mean something like "a phenomenon that needs to be accounted for in order to make accurate predictions". Specifically, predictions about what people will do. If so, absolutely.

Of course then there are other valid senses of "real" which everyone else is arguing below, in which there is the question of effects outside people's actions, and whether the phenomenon showed up in people's heads because an entity outside our scientific understanding called God put it there. Those are, of course, the tricky ones.

(God of the Gaps time!)

Comment author: shokwave 09 December 2010 05:12:50AM 3 points [-]

If you asked me if souls are real or if God is real, I'd say yes to both

Having read your explanation, I think you ought to say both are not real. Your description of God and souls as parallelized cognitive algorithms does not predict what "God is real, souls are real" predicts.

I think it would be more accurate to say "the belief that 'God is real, souls are real' is definitely real, and regardless of the truth value of the statement, the belief itself affects the world". That makes the same predictions as your cognitive algorithm idea (which I quite like), but doesn't cause misunderstandings with people who are using the word 'real' in very common ways.

Comment author: Vladimir_Nesov 08 December 2010 11:46:03PM 1 point [-]

If you asked me if souls are real or if God is real, I'd say yes to both, but the explanation thereof would be excruciatingly difficult, and I'd be tempted to label the question 'not even wrong',

What about the virtue of narrowness?

Comment author: Will_Newsome 09 December 2010 12:01:40AM *  2 points [-]

Being narrow with your own conceptual framework is good, but I'm promoting being liberal when it comes to interpreting others' concepts, when playing fast and loose in back-and-forth discourse, and when reasoning very abstractly in order to see connections. As long as you go back afterward and make sure that everything connects precisely, and avoid affective death spirals around seemingly big insights about the fundamental nature of all things (which is somewhat difficult), it can be useful for getting new perspectives and for communicating concepts effectively.

ETA: With regards to communication, this only really works if each of the participants has some amount of faith in the epistemology of their conversation partner. If some random guy told me God exists, and I wanted to make him smarter, I wouldn't go on about all the ways that God exists; I'd go on about the ways He doesn't.

Comment author: Vladimir_Nesov 09 December 2010 12:11:45AM 2 points [-]

If some random guy told me God exists, and I wanted to make him smarter, I wouldn't go on about all the ways that God exists; I'd go on about the ways He doesn't.

Or just teach him the Virtue of Narrowness.

Comment author: Will_Newsome 09 December 2010 12:13:08AM -2 points [-]

True, that's a better solution. But, but, but being contrarian is so much more fun!

Comment author: JGWeissman 09 December 2010 12:11:00AM 1 point [-]

You should only be liberal in what you accept, if you can transform it so that when you repeat it, you can still be conservative in what you say.

Comment author: Will_Newsome 09 December 2010 01:32:29AM 1 point [-]

When possible this is best, but some people at SIAI (cough Vassar cough) have conversational styles that are very fast so as to convey the most information in the shortest time, and it's hard to do real-time transformations from ultra-abstract statements to reasonably-precise internal models and back as information is exchanged and people build up their ontologies on the fly. (Which is pretty awesome when it happens -- one of the joys of being a Visiting Fellow. And of talking to Michael Vassar.)

Comment author: TheOtherDave 09 December 2010 12:35:21AM 0 points [-]

I'd more or less agree with this, but would add that it's important to flag the difference between asserting the existence of X, making decisions based on the existence of X, and supposing the existence of X. If I start using language in a way that elides those differences, I am doing nobody any favors, least of all myself.

Comment author: bentarm 09 December 2010 02:22:33AM 3 points [-]

your question is one of those "great unsolved problems in philosophy."

Are there great solved problems in philosophy?

I think a good working definition of philosophy is "not science yet" - so the answer to this question is "yes, but we don't call it philosophy any more".

Comment author: Cephalover 10 December 2010 05:56:14AM 2 points [-]

Why can't real-ness just be functionality? People often resist this concept, but it seems sensible to me.

Exploring the function of things is, in fact, how we know about the universe. When we talk about what something is, we're really talking about an aggregate of functions that it has (e.g., we know that if we do something to a part of the universe, something will happen; since the result varies by the part of the universe we're looking at and the conditions under which we perturb it, we can divide the universe into "things"). We can say that atoms (for example) are real because we have observed consequences of our actions that (as far as we can tell) could only happen if something fitting our description of an atom existed.

Thinking of something as "real" in the sense that it seems like a sensible and somewhat self-explanatory entity is just a matter of familiarity, I think. We think of atoms as "real" because we've grown up conceiving of the world in that way (which is only the case, one would hope, because they have been shown to be "real" in the more strict scientific sense I mentioned above.)

Comment author: Douglas_Knight 10 December 2010 04:37:52AM 2 points [-]

The atomic theory seems very different to me from the others, which seem more like the motivating examples. The atomic hypothesis also seems much easier. In particular, I think everyone was in agreement about what it would mean for atoms to be real or not, whereas in the other examples I think there was no such agreement.

It is true that the debates used the dichotomy "real or just a useful tool" that appeared in the other examples. Yes, what they meant by "real" was "is it useful in a less circumscribed setting," but the setting for the atomic theory was very cleanly circumscribed: the law of multiple proportions. It seemed back then (and seems to me) quite plausible that some other underlying phenomenon could give rise to that law. The discovery of another consequence that seemed much less plausibly explained by (hypothetical) other theories brought about rapid acceptance of the theory. (Quantization of charge would have done well here, but I believe the actual consequence was in explaining isomers and especially stereoisomers, which is a scale-free phenomenon, like ratios, but unlike quantization of charge.)

In the other examples, it was not clear what the competing theories were and whether they were distinguishable. Of course, since there was no particular theory competing with the atomic theory, there was danger of the situation becoming muddier, but that doesn't bother me so much.

Comment author: Douglas_Knight 08 December 2010 11:32:17PM *  2 points [-]

This is rather tangential, but it's something about geocentrism that has been bothering me recently. Aristarchus (c 250 BC) and Hipparchus (c 150 BC) computed that the sun has 10x the diameter of the earth and thus the earth should circle the sun. Their contemporaries said: no, heliocentrism implies that the fixed stars are very far away. That's an OK argument against heliocentrism, but did they really engage with the intermediate step? [see update] Ptolemy agreed that the sun was 10 million kilometers away, but did he discuss its size?

And what did later astronomers do? When Kepler and his contemporaries got it right, did they think this a big deal? Did they think it relevant to heliocentrism? This larger distance makes the distance to the fixed stars proportionately larger, but I think the plausible size of the universe should increase superlinearly in observed distances, so discovering that the sun is farther away than you thought should erode the parallax argument against heliocentrism.

ETA: improved measurement of the tides should allow one to correctly do Hipparchus's calculation, and multiply the AU by 10. For fixed parallax, the (lower bound) distance to the fixed stars is measured in AU and thus increases proportionately, which is what I meant above. But later astronomers measured stars more accurately than earlier ones, providing evidence against parallax, and thus against heliocentrism. In fact, no one followed Hipparchus, but instead used astronomical measurements to improve on Aristarchus and Ptolemy. So I don't have much to ask heliocentric Kepler about the discovery that the sun was even bigger (though he had to bite a bigger bullet than Hipparchus on the distance to the fixed stars). Still, did his predecessors really grapple with the size of the sun?

Added, 2013: the writings of Aristarchus and Hipparchus are lost, so we're not really sure what they did, but they did both conclude that the sun is much bigger than the earth. Aristarchus was heliocentric, but Hipparchus was geocentric. Added, 2014: actually, even the claim that Hipparchus was geocentric is dubious. The claim that their contemporaries were geocentric, let alone that they made a parallax argument, is a complete fabrication for symmetry with the Renaissance arguments. My question about Ptolemy and later remains.
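The calculation attributed to Aristarchus is simple enough to sketch: at exactly half moon, the Earth-Moon-Sun angle is a right angle, so measuring the Moon-Sun elongation gives the ratio of the two distances by basic trigonometry. A sketch (87° is the elongation traditionally attributed to Aristarchus; 89.85° is close to the modern value):

```python
import math

def sun_moon_distance_ratio(elongation_deg):
    # At exactly half moon, the Earth-Moon-Sun angle is 90 degrees,
    # so d_sun / d_moon = 1 / cos(elongation).
    return 1.0 / math.cos(math.radians(elongation_deg))

print(sun_moon_distance_ratio(87.0))   # ~19x: Aristarchus's estimate
print(sun_moon_distance_ratio(89.85))  # ~382x: close to the modern ~389x
```

The method is sound; the huge error comes from how hard it is to judge the moment of exact half moon, since the answer is extremely sensitive to the elongation as it approaches 90°.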

Comment author: JGWeissman 08 December 2010 06:38:11PM 2 points [-]

It looks like you dropped a word:

Despite this usefulness, there was considerable (debate?) as to whether atoms were “real” or were merely a useful pedagogical device.

Comment author: AnnaSalamon 08 December 2010 06:39:35PM 1 point [-]

Thanks. Fixed.

Comment author: JGWeissman 08 December 2010 06:46:58PM 0 points [-]

Another one:

“will I still get accurate predictions, when (I?) use X in a less circumscribed or compartmentalized manner?”

I finished reading, so that should be the last one. :)

Comment author: Vladimir_Nesov 08 December 2010 07:50:19PM 1 point [-]

Why do you communicate things like this publicly? It takes other people's attention, even if for a bit, where there seems to be no reason whatsoever for that to happen. It's an error that costs you and others almost nothing, but an error nonetheless.

Comment author: JGWeissman 08 December 2010 08:03:07PM 14 points [-]

Benefits of making public proofreading comments include:

Because I also check to see if anyone else has made a comment reporting the same error, it prevents the writer from getting many messages for the same correction.

When people see the comment and a polite reply from the author reporting the error has been fixed, it encourages them to report proofreading errors that they see, instead of staying silent, improving the general quality of published articles.

This doesn't really apply in this case, but sometimes when a proposed correction resolves confusion generated by the error, the proofreading comment can help other readers to understand before the author responds and fixes the mistake.

I agree that due to being a distraction after the error is fixed, this is a tradeoff, and I would like to reduce that effect, perhaps with a way to tag a thread as "resolved proofreading issue" that would collapse it by default or sort it to the end.

Comment author: DSimon 08 December 2010 08:34:27PM *  5 points [-]

I agree that due to being a distraction after the error is fixed, this is a tradeoff, and I would like to reduce that effect, perhaps with a way to tag a thread as "resolved proofreading issue" that would collapse it by default or sort it to the end.

Kuro5hin, a general-purpose discussion site that's since been taken over by trolls, had a mechanism like this. When submitting a top-level comment, you could mark it as "editorial", which would keep it hidden under default view settings. This trick worked pretty well, and I notice that K5ers seemed more eager to offer editorial suggestions than LWers.

Comment author: Vladimir_Nesov 08 December 2010 08:38:31PM 2 points [-]

Thanks, I see now how it is less settled than I believed.

Comment author: Peter_de_Blanc 08 December 2010 10:25:47PM 1 point [-]

Benefits of making public proofreading comments include:

In my case it's a compulsion. No cost-benefit analysis is involved.

Comment author: xamdam 08 December 2010 08:04:15PM *  0 points [-]

I suspect it's for the same reason I occasionally litter by accident and don't pick it up; it's a negative externality, but the cost of self-monitoring all the time is greater. I'd get worried if it went over a (small) threshold. People like the communication for non-informational reasons and occasionally speech-litter.

Comment author: AnnaSalamon 08 December 2010 06:49:17PM 1 point [-]

Thanks again. I guess I should proof-read more carefully.

Comment author: johnclark 11 December 2010 06:27:34PM 3 points [-]

Are atoms real? Whatever the answer to that question is, imagine it were reversed: suppose that magically atoms, if real, became unreal, or, if unreal, became real. Would the world be in any way different as a result? I think the clear answer is no; therefore, regardless of what the status of atoms may ultimately be, the question "Are atoms real?" is not real, because real things make a difference and unreal things do not.

John K Clark

Comment author: potato 05 July 2012 08:17:49PM *  0 points [-]

If I thought that atoms were unreal, I would not expect to be able to photograph them. I also wouldn't expect a single atom to be capable of casting a shadow. Those are some of the ways (and there are many more) in which I could be wrong if atoms were unreal, merely pedagogical tools.

Comment author: jhuffman 13 December 2010 06:39:49PM 0 points [-]

Could you give me an example of something that is real?

Comment author: derefr 15 December 2010 12:32:42PM *  1 point [-]

Whatever substrate supports the computation inscribing your consciousness would be necessarily real, under whatever sense the word "real" could possibly have useful meaning. ("I think; thinking is an algorithm; therefore something is, in order to execute that algorithm.")

Interestingly, proposing a Tegmark multiverse makes the deepest substrate of consciousness "mathematics."

Comment author: atucker 08 December 2010 10:35:29PM *  3 points [-]

When I hear "Are atoms real?" I imagine zooming in on some object until I can see an atom. Could they just be asking if, given the technology required to magnify/compress/measure some form of sensory input about something, it would make some kind of intuitive sense to a human brain?

Like, if you could stand above the solar system and look down on it, the Copernican model would say it makes more sense to hover over the Sun and imagine everything rotating around you than to hover over the Earth and imagine everything rotating around you while pirouetting.

Similarly, if I could put "drops of something" in the luminiferous aether and watch it eddy and swirl, I'd be convinced it's real. That something doesn't exist, but that's only evidence of its non-reality.

If something "really" happened, I would expect that, if I had a time machine that didn't cause paradoxes and whatnot, I could go back in time and watch it.

Though, this version of "real" doesn't really account for questions about more abstract things, like "Is this love for real?" or "Is free will real?". (Though, free will is a disguised query for something else, but digressing...)

Comment author: lucidfox 14 December 2010 09:30:29AM 0 points [-]

Like, if you could stand above the solar system and look down on it, the Copernican model would say it makes more sense to hover over the Sun and imagine everything rotating around you than to hover over the Earth and imagine everything rotating around you while pirouetting.

This is an interesting point. What does "more sense" mean?

From a purely utilitarian point of view, Tycho Brahe's compromise system is as useful as Copernicus's: it gives the same experimental predictions while still keeping Earth at the center of the universe. Kepler's acceptance of the Copernican system had more to do with his Pythagorean views, his perception that the center of the universe must contain the Central Fire, the cause of all motion.

In other words, we're dealing with subjective views here. And yet it objectively seems to make more sense to center the Solar System around the Sun. Perhaps because there is no reason to privilege Earth above the other planets, and no reason to assume that the Sun "really" revolves around, say, Venus?

Comment author: atucker 15 December 2010 05:00:11AM *  0 points [-]

Lets go with "more sense" = simpler.

They both provide accurate predictions, but the heliocentric model, in which gravity holds the planets in orbit around the Sun, has a lower Kolmogorov complexity than a geocentric model in which the Earth is central but everything follows weird complicated paths as it orbits.

Comment author: JoshuaZ 15 December 2010 05:05:35AM 0 points [-]

Are you comparing the Copernican system or the Keplerian system? The straight Copernican system is about as complicated as the geocentric system. You only get the reduction in complexity when you go full-out Keplerian. And note that there were other pre-Kepler systems that were arguably simpler than the Copernican system. This article gives a good brief summary.

Comment author: atucker 15 December 2010 05:56:18AM 0 points [-]

My mistake. I was thinking of Keplerian when I wrote this.

Comment author: Snowyowl 09 December 2010 01:44:32PM 0 points [-]

Interesting idea, but it seems a little badly-defined. Some aspects of quantum physics don't seem intuitive to me (mostly small details). Does that mean that those details aren't "real" to me, but they are real to other people?

Comment author: atucker 09 December 2010 10:28:00PM *  -1 points [-]

In a nutshell, yes. That doesn't make reality subjective; it just means that different people hold different things as real. I'm pretty sure that, given a long enough conversation, people would agree on whether something is real enough.

To clarify the definition, I'm basically reading the question and post as "Are electrons real objects?" Most of the examples given were about objects, or easily observable things about them, like what's going around what. It's less a question of being intuitive, and more one of whether it would be observable given basically magical senses.

This dissolution doesn't really hold that well for theories. I can be convinced that things are true without knowing whether they're real (like the early chemists who saw atoms as yielding good predictions, but not necessarily as existing).

Something that illustrates this split nicely is the question "Is Heisenberg's Uncertainty Principle real?" It's certainly true: you can't be sure of a particle's position and momentum, because as you shorten a photon's wavelength enough to make the position more certain, you increase the photon's energy and make the particle's momentum less certain.

When I learned that, I felt cheated. "Hey!" my reality-asserting subroutines complained to me, "That's not uncertain at all! It happens to be that in real life you can't be certain, but if I could observe a particle without light or interacting with it in any way, using some magical impossible version of sight, it would totally have a definite momentum and position!"

It wasn't until the teacher demonstrated the unit rearrangement that phrases it in terms of energy and time, and talked about particles passing through barriers they didn't have the energy to penetrate, that I was convinced it was real, and not just an accurate theory.
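The point that the uncertainty isn't about clumsy measurement can be checked numerically: it's a property of the wavefunction itself. A sketch in natural units (ħ = 1, arbitrary grid parameters): a Gaussian wave packet saturates Δx·Δp = ħ/2, with the momentum spread coming directly from the Fourier transform of the position-space amplitude, no photons involved:

```python
import numpy as np

hbar = 1.0  # natural units
N, L = 4096, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

# Gaussian wave packet whose probability density has standard deviation sigma
sigma = 1.3
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalize

# Position uncertainty from <x> and <x^2>
prob = np.abs(psi)**2
dx_unc = np.sqrt(np.sum(x**2 * prob) * dx - (np.sum(x * prob) * dx)**2)

# Momentum-space amplitude via FFT; p = hbar * k
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
dk = k[1] - k[0]
phi = np.fft.fft(psi)
phi /= np.sqrt(np.sum(np.abs(phi)**2) * dk)
pk = np.abs(phi)**2
dp_unc = hbar * np.sqrt(np.sum(k**2 * pk) * dk - (np.sum(k * pk) * dk)**2)

print(dx_unc * dp_unc)  # ~0.5, i.e. hbar/2: the Gaussian saturates the bound
```

Narrowing the packet in position (smaller sigma) widens it in momentum, and vice versa, with the product pinned at ħ/2 for a Gaussian.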

Comment author: [deleted] 10 December 2010 05:37:20AM *  1 point [-]

Only anti-realists think hidden questions lurk behind the concept of 'reality' [that's a fair definition of anti-realism]; realists take 'real' as primitive. You feel confused because you want an anti-realist account of "real," despite being a realist yourself. (Or else, you're an anti-realist smart enough to see through the extant anti-realist theories.)

Quine's is a famous example of an anti-realist account. Quine said that the things that exist (those that are real) are the values of the variables you must quantify over in the best scientific theory of the subject. The problem with that is that "best scientific theory" isn't specifiable except with reference to what's real, although Quine made a noble effort.

When you ask whether atoms are real, the question doesn't concern the best explanatory framework; what's real determines the best explanatory framework rather than the other way around, at least according to realists. The best theory is the one that best accounts for what's really there; the best theory doesn't determine what's real. Things may exist even though our best theory says otherwise. To parry the attack, the anti-realist can only retreat to epicycles (say, the best theory in the idealized long run), introducing concepts that are even more problematic.

Comment author: Psy-Kosh 10 December 2010 05:10:26PM 1 point [-]

Hrm... I'd say atoms are real iff reality (whatever that is) obeys a certain set of (approximate) regularities (that basically amount to the rules for atoms). I.e., there's a sense in which they're "actually there."

Atoms are an explanation for a phenomenon iff those particular regularities are largely sufficient to explain the phenomenon in question. That is, one doesn't have to "dig deeper" to still explain things.

(Of course, if atomic theory in general had failed and only explained a single particular thing, that would suggest that the first criterion was violated.)

Comment author: steven0461 08 December 2010 08:37:04PM 1 point [-]

Lorentz, not Lorenz.

Comment author: AnnaSalamon 08 December 2010 09:02:14PM 1 point [-]

Fixed.

Comment author: byrnema 08 December 2010 06:59:25PM *  1 point [-]

Aether not only correctly predicted that light would act as waves, but also incorrectly predicted that the Earth's motion with respect to aether should affect the perceived speed of light.

I don't know anything about old ideas about aether, but I've wondered why it was wrong, and whether the aether-idea is really conclusively wrong or whether someday science could return to that idea.

Does "aether" necessarily mean that the observed speed of light may vary? In particular, what is packed into the word "aether" that demands this?

...I'm wondering if a different "aether" that doesn't require observer-dependent light speeds may be no less weird than current assumptions we have about how light propagates?

Comment author: DanielLC 09 December 2010 05:26:05AM 0 points [-]

The photon waveform has actual mass, and it certainly makes waves when light is traveling. As such, calling the photons themselves aether doesn't seem that inaccurate.

Comment author: byrnema 08 December 2010 11:32:35PM *  0 points [-]

I suppose no one has answered my question above because no one (yet) has a handy list of the essential, minimal assumptions of the theory of "luminiferous aether". Though perhaps I didn't describe my question well enough. It was: What assumption about the aether caused it to be disqualified?

During my drive home, I had some time to recall my motivations for wondering about the theory of aether and whether it is really dead or just out of style..

From what I understand of what light is -- mainly from discussions here on Less Wrong -- light is what happens as the electromagnetic field gets updated. Suppose you have an electron at location (x,y). This creates an electromagnetic field centered at (x,y). Then you move the electron to position (x',y') and the new electromagnetic field is centered there. The whole electromagnetic field has to shift by this much. But the field can't shift throughout the whole universe instantaneously. The change in the electromagnetic field propagates at a finite speed from the new position. We see this 'fixing' of the field as a light wave propagating through space.* Naively, I view it as a disturbed mesh that rights itself one kink at a time.

Is this mesh 'real'? If it's not real, what is the electromagnetic field? What is changing and getting fixed over time? To me, the question of whether the aether is real is the same question. Isn't there something there? But if not, how can it work?? I don't mind if the answer is something more abstract than a type of matter/particle. It just seems that if information is moving something must be carrying it.

* We see a lightwave by catching it, which means we change the electromagnetic field in exactly the right amount to counter the defect and stop its propagation.
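The "disturbed mesh that rights itself one kink at a time" picture can be made concrete with a toy field that obeys a wave equation. A minimal sketch (a 1D scalar wave stepped with finite differences; the parameters are arbitrary): a kink introduced at the center spreads outward at speed c, and points the signal cannot yet have reached remain completely undisturbed:

```python
import numpy as np

c, dx, dt = 1.0, 0.1, 0.05     # Courant number c*dt/dx = 0.5 (stable)
N, steps = 600, 150
center = N // 2

u_prev = np.zeros(N)
u = np.zeros(N)
u_prev[center] = u[center] = 1.0  # a "kink" in the field at t = 0

# Leapfrog update for u_tt = c^2 * u_xx
for _ in range(steps):
    lap = np.roll(u, 1) - 2 * u + np.roll(u, -1)
    u_next = 2 * u - u_prev + (c * dt / dx)**2 * lap
    u_prev, u = u, u_next

# The disturbance has spread outward, but cells outside the numerical
# domain of dependence (more than `steps` cells from the kink) are exactly zero.
print(np.max(np.abs(u)) > 0)                        # True
print(np.max(np.abs(u[center + steps + 1:])) == 0)  # True
```

This is exactly the "fixing of the field propagates at a finite speed" behavior described above, just for a generic wave rather than the electromagnetic field specifically.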

Comment author: Vaniver 08 December 2010 11:54:45PM *  1 point [-]

I think this gets back to the question of what you mean by "there". Because if I have, say, water in a tank, and I move around a stick I placed in that water, then the 'water field' (or whatever I want to call the positions of the water molecules) will update based on that, and it will update at a finite speed because the information is carried by traveling water molecules. So the water field is there because water molecules are there- if you put something in their way, they'll run into it.

But electromagnetic waves are carried by photons, which are really weird. Water molecules have a rest mass- if you managed to slow one down to no speed at all, it would exert about as much gravitational pull as normal, and it would still get in the way of other things you tried to push through it. A photon has no rest mass, and a way of thinking about that is to say that if the photon isn't moving, it isn't there.

And so if by "thereness" you mean "if I shoot a neutron at a stationary one, is the neutron sometimes deflected?" then water molecules are there and photons aren't.

But there's another sense that we can talk about thereness- what happens when we (or they) speed up. If I had the aforementioned water tank on a train moving near the speed of light, things would look the same inside the train- but really weird from outside. To observers, the ability of the 'water field' to update depends on how fast the water field is moving relative to the observer- but that isn't true for the electromagnetic field. All observers see it 'updating' at the same rate.

So, what do we mean by "aether"? Here, I think we might be getting in a linguistic/historical issue, which is what you originally asked about. I was fascinated, watching a talk between PZ Myers and Dawkins (one of those, might not be the first one), where for Dawkins the phrase "group selection" seemed to be irretrievably connected with Wynne-Edwards, despite there being several defensible things that also have a connection with that name. Each time it came up, he had to check- "you're not talking about Wynne-Edwards group selection, right?"

I believe (but haven't extensively researched the physics in question) that the luminiferous aether was tied to the idea that there is one correct reference frame, and so when that disagreed with experiments that idea got tossed out. That means we don't have a visual answer to the question "what wiggles when there's an electromagnetic wave?", and as far as I can tell it doesn't make a difference what you visualize wiggling, but calling it 'aether' makes people ask questions to make sure you're not an adherent of a dead theory.
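The "all observers see it updating at the same rate" point has a compact algebraic form: under Galilean physics speeds simply add, but under the relativistic composition law anything moving at c composes with any boost to give c again. A sketch in units where c = 1:

```python
def compose_velocities(u, v, c=1.0):
    """Relativistic velocity composition in 1D: w = (u + v) / (1 + u*v/c^2)."""
    return (u + v) / (1 + u * v / c**2)

# Galilean intuition says a light beam chased at 0.9c recedes at only 0.1c;
# relativistically it still moves at exactly c for the chasing observer.
print(compose_velocities(1.0, 0.9))  # 1.0
print(compose_velocities(0.5, 0.5))  # 0.8, not 1.0
```

This is why the 'water field' analogy breaks down: the water wave's update rate depends on your motion relative to the water, but the electromagnetic field's does not.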

Comment author: prase 09 December 2010 12:29:50AM *  0 points [-]

The theory of æther was disqualified by relativity. If there were some real mesh, its nodes would have to be located somewhere, and we would be able to measure our velocity relative to it. It does not work that way. The way it works can still be described by æther, but one must postulate that time and distance measurements are distorted depending on the velocity with respect to the æther, and still one has no chance to determine in which inertial system the æther is stationary. This is not how real entities behave.
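Measuring our velocity relative to the mesh is exactly what the Michelson-Morley experiment attempted. A sketch of the classical aether prediction (the arm length, orbital speed, and wavelength below are rough values for the 1887 apparatus): light along the arm parallel to the motion should take slightly longer than along the perpendicular arm, producing a fringe shift when the apparatus is rotated 90°:

```python
import math

def expected_fringe_shift(L, v, lam, c=3.0e8):
    """Fringe shift a fixed aether predicts when the interferometer
    is rotated 90 degrees (classical, non-relativistic analysis)."""
    beta2 = (v / c)**2
    t_parallel = (2 * L / c) / (1 - beta2)       # round trip along the motion
    t_perp = (2 * L / c) / math.sqrt(1 - beta2)  # round trip across the motion
    return 2 * c * (t_parallel - t_perp) / lam   # factor 2 from the rotation

# ~11 m effective arm, ~30 km/s orbital speed, ~550 nm light:
print(expected_fringe_shift(L=11.0, v=3.0e4, lam=5.5e-7))  # ~0.4 fringes
```

The apparatus could resolve a shift far smaller than this, and the observed shift was consistent with zero, which is the "it does not work that way" above.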

Comment author: AnnaSalamon 08 December 2010 07:01:53PM *  0 points [-]

Well, I specified "luminiferous" (light-bearing) aether in the title, although I abbreviated this as simply "aether" in the rest of the section.

Comment author: byrnema 08 December 2010 07:14:12PM 0 points [-]

In the above, please consider "aether" replaced by "luminiferous aether". I suppose 'aether' could be vague enough to mean anything, and in that sense may be real, but I am curious about what aether needed to be in that particular theory in the 1800s.

Comment author: AnnaSalamon 08 December 2010 07:20:01PM 1 point [-]

Done.

Comment author: xamdam 08 December 2010 06:28:14PM 1 point [-]

[5] Thus, to extend this conjecturally toward our original question: when someone asks "Is the physical world 'real'?" they may, in part, be asking whether their predictive models of the physical world will give accurate predictions in a very robust manner, or whether they are merely local approximations. The latter would hold if e.g. the person: is a brain in a vat; is dreaming; or is being simulated and can potentially be affected by entities outside the simulation.

Hmm. Let's say we live in a multiverse where there are infinitely many universes with laws we cannot compute, so our laws are very much local (but not necessarily approximations). Would it make the world as we know it less real? I would not feel that.

On the other hand living in a simulation would feel unreal, though it might be based on a fantasy that you can 'break out' somehow.

Another use of the term is authenticity; e.g. I'd be proud to own a book signed by Churchill, but ashamed if it was a fake. (Physical laws do not dictate either way - it could have been authentic). This last example makes me think that it's going to be hard to disentangle the term from its psychological connotations.

Comment author: human-mathematics 14 December 2010 09:05:22AM 0 points [-]

I'm not sure what point you're trying to make, if you're trying to make one.

One other constructive criticism: why don't you consider some non-examples? I mean, theories that gave good predictions, seemed to generalize, and then, didn't?

I can't think of any stellar examples right now, but the lame example of the black swan comes to mind.

Comment author: uploada 13 December 2010 11:20:39AM 0 points [-]

Besides the predictive power as a way to measure "realness", I would add persistence. A car in my dream is less real than a car before me when I cross a street in my daily life, in the sense that it persists more in my mind.

Comment author: John_Maxwell_IV 12 December 2010 11:46:00PM *  0 points [-]

Taboo real?

:P

Comment author: Desrtopa 13 December 2010 12:26:07AM 0 points [-]

Comment author: hairyfigment 09 December 2010 07:32:49PM 0 points [-]

You know Eliezer argues that atoms are not individually real, right?

Comment author: SilasBarta 09 December 2010 07:39:24PM *  0 points [-]

Not real at the fundamental level, correct, but real in the relevant sense for certain levels of abstraction, and (satisfying the criteria Anna Salamon gave and the Level 2 standard I gave) capable of plugging in to other models of reality.

Comment author: hairyfigment 09 December 2010 07:51:38PM 0 points [-]

Well, the most famous opposition to atomic theory (famous to me at least) came from Ernst Mach of Mach's Principle. Seems like applying his positivism to quantum physics tells you that only the wave-function and the Born rule "are real". Atoms just give us a convenient way to approximate these rules for predicting experience.

Comment author: SilasBarta 09 December 2010 07:55:21PM *  2 points [-]

Reductionism sequence. Now.

Edit: Okay, maybe later.

Comment author: wedrifid 09 December 2010 08:41:37PM *  1 point [-]

Edit: Okay, maybe later.

Nice edit. :P

Comment author: erikvanderharst 10 December 2010 05:52:10PM *  -1 points [-]

How about photons? If they are real, can they be particles as well as waves? Feynman (who got a Nobel Prize for it together with Sin-Itiro Tomonaga and Julian Schwinger (they all thought of it independently)) went for waves. So did he think that for a photon to exist at all it needs to be classified? I think he felt photons were "real" but kind of shady before he nailed them down as waves instead of particles (making them "really real").

Comment author: arundelo 10 December 2010 06:37:24PM *  1 point [-]

Feynman called light particles. (See also this video at 36:15 through 36:30.)

And welcome to Less Wrong!

Edit: Sorry I don't have an intelligent response to your actual point. For what it's worth, I (as a physics dilettante) think that light is neither waves nor particles, but blobs of amplitude (or something). It certainly doesn't behave like little billiard balls. I defer to Feynman, though.

Comment author: erikvanderharst 10 December 2010 10:04:39PM 0 points [-]

I remembered it the wrong way around. Feynman (and the other two) went for particles rather than waves. What I was trying to say is that, similar to atomic theory moving from "a useful pedagogical device" (1860) to "atoms really exist" (today), photons went from "this curious thing that looks like a particle or a wave depending on how you set up your experiment" to "it is a particle".

Comment author: wnoise 10 December 2010 10:11:42PM 4 points [-]

Well it went to "it's a particle" because all the other particles became "excitations of quantum fields" as well...

(And there are still significant differences in the phenomenological treatments because the boundary conditions play a very special role in describing and quantizing the field modes in actual calculations.)

Comment author: David_Gerard 10 December 2010 11:49:27PM *  2 points [-]

In the sense that the word "particle" at that scale now means "quantum probability distribution". BLOB THINGS.

(I still visualise atoms as planetary electrons around a nucleus sun - the electrons possibly in shells - until I catch myself and try to visualise s and p shells. Too much out-of-date popular science as a child.)