Eliezer_Yudkowsky comments on Mixed Reference: The Great Reductionist Project - Less Wrong

Post author: Eliezer_Yudkowsky 05 December 2012 12:26AM


Comment author: Eliezer_Yudkowsky 05 December 2012 12:22:10AM 4 points [-]

Mainstream status:

AFAIK, the proposition that "Logical and physical reference together comprise the meaning of any meaningful statement" is original-as-a-whole (with many component pieces precedented hither and yon). Likewise I haven't elsewhere seen the suggestion that the great reductionist project is to be seen in terms of analyzing everything into physics+logic.

An important related idea I haven't gone into here is the idea that the physical and logical references should be effective or formal, which has been in the job description since, if I recall correctly, the late nineteenth century or so, when mathematics was being axiomatized formally for the first time. This part is popular, possibly majoritarian; I think I'd call it mainstream. See e.g. http://plato.stanford.edu/entries/church-turing/ although logical specifiability is more general than computability (this is also already-known).

Obviously and unfortunately, the idea that you are not supposed to end up with more and more ontologically fundamental stuff is not well-enforced in mainstream philosophy.

Comment author: Alejandro1 05 December 2012 02:13:01AM 18 points [-]

AFAIK, the proposition that "Logical and physical reference together comprise the meaning of any meaningful statement" is original-as-a-whole (with many component pieces precedented hither and yon). Likewise I haven't elsewhere seen the suggestion that the great reductionist project is to be seen in terms of analyzing everything into physics+logic.

This seems awfully similar to Hume's fork:

If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.

  • David Hume, An Enquiry Concerning Human Understanding (1748)

As Mardonius says, 20th century logical empiricism (also called logical positivism or neopositivism) is basically the same idea with "abstract reasoning" fleshed out as "tautologies in formal systems" and "experimental reasoning" fleshed out initially as "statements about sensory experiences". So the neopositivists' original plan was to analyze everything, including physics, in terms of logic + sense data (similar to qualia, in modern terminology). But some of them, like Neurath, considered logic + physics a more suitable foundation from the beginning, and others, like Carnap, became eventually convinced of this as well, so the mature neopositivist position is quite similar to yours.

One key difference is that for you (I think, correct me if I am wrong) reductionism is an ontological enterprise, showing that the only "stuff" there is (in some vague sense) is logic and physics. For the neopositivists, such a statement would be as meaningless as the metaphysics they were trying to "commit to the flames". Reductionism was a linguistic enterprise: to develop a language in which every meaningful statement is translatable into sentences about physics (or qualia) and logic, in order to make the sciences more unified and coherent and to do away with muddled metaphysical thought.

Comment author: Eliezer_Yudkowsky 05 December 2012 02:31:14AM 4 points [-]

Is there a good statement of the "mature neopositivist" / Carnap's position?

Comment author: Alejandro1 05 December 2012 10:48:42PM *  2 points [-]

There is no article on Carnap on the SEP, and I couldn't find a clear statement on the Vienna Circle article, but there is a fairly good one in the Neurath article:

In his classic work Der Logische Aufbau der Welt (1928) (known as the Aufbau and translated as The Logical Structure of the World), Carnap investigated the logical ‘construction’ of objects of inter-subjective knowledge out of the simplest starting point or basic types of fundamental entities (Russell had urged in his late solution to the problem of the external world to substitute logical constructions for inferred entities). He introduced several possible domains of objects, one of which being the psychological objects of private sense experience—analysed as ‘elementary experiences’.

(…)

Neurath first confronted Carnap on yet another alleged feature of his system, namely, subjectivism. He promptly rejected Carnap's proposals on the grounds that if the language and the system of statements that constitute scientific knowledge are intersubjective, then phenomenalist talk of immediate subjective, private experiences should have no place.

(…)

Following Neurath, Carnap explicitly opposed to the language of experience a narrower conception of intersubjective physicalist language which was to be found in the exact quantitative determination of physics-language realized in the readings of measurement instruments. Remember that for Carnap only the structural or formal features, in this case, of exact mathematical relations (manifested in the topological and metric characteristics of scales), can guarantee objectivity. After the Aufbau, now the unity of science rested on the universal possibility of the translation of any scientific statement into physical language—which in the long run might lead to the reduction of all scientific knowledge to the laws and concepts of physics.

The mature Carnap position seems to be, then, not to reduce everything to logic + fundamental physics (electrons/wavefunctions/etc), as perhaps you thought I had implied, but to reduce everything to logic + observational physics (statements like "Voltmeter reading = 10 volts"). Theoretical sentences about electrons and such are to be reduced (in some sense that varied with different formulations) to sentences of observational physics. This does not mean that for Carnap electrons are not "real"; as I said before, reductionism was conceived as a linguistic proposal, not an ontological thesis.

Comment author: Eliezer_Yudkowsky 05 December 2012 11:23:00PM 0 points [-]

Experience + logic != physics + logic > causality + logic

Comment author: shminux 05 December 2012 11:26:56PM -1 points [-]

Experience + models = reality

Comment author: RobbBB 06 December 2012 05:25:40AM *  2 points [-]

Cucumbers are neither experiences nor models. Yet I'm pretty sure reality includes at least one cucumber.

Comment author: shminux 06 December 2012 06:30:45AM *  1 point [-]

Cucumbers are both experiences and models, actually. You experience a cucumber's sight, texture and taste; you model this as a green vegetable with certain properties which predict and constrain your similar future experiences.

Numbers, by comparison, are pure models. That's why people are often confused about whether they "exist" or not.

Comment author: [deleted] 06 December 2012 08:21:33PM 0 points [-]

You experience a cucumber's sight, texture and taste; you model this as a green vegetable with certain properties which predict and constrain your similar future experiences.

Are experiences themselves models? If not, are you endorsing the view that qualia are fundamental?

Comment author: shminux 06 December 2012 09:19:12PM 1 point [-]

Are experiences themselves models? If not, are you endorsing the view that qualia are fundamental?

Experiences are, of course, themselves a multi-layer combination of models and inputs, and at some point you have to stop, but qualia seem to be at too high a level, given that they appear to be reducible to physiology in most brain models.

Comment author: RobbBB 06 December 2012 08:33:51AM 0 points [-]
  1. How do you know models exist, and aren't just experiences of a certain sort?

  2. How do you know that unexperienced, unmodeled cucumbers don't exist? How do you know there was no physical universe prior to the existence of experiencers and modelers?

Comment author: NancyLebovitz 07 December 2012 03:16:55PM *  3 points [-]

I've played with the idea that there is nothing but experience (Zen and the Art of Motorcycle Maintenance was rather convincing). However, it then becomes surprising that my experience generally behaves as though I'm living in a stable universe with such things as previously unexperienced cucumbers showing up at plausible times.

Comment author: shminux 06 December 2012 06:34:53PM 0 points [-]

How do you know that unexperienced, unmodeled cucumbers don't exist?

This question is meaningless in the framework I have described (Experience + models = reality). If you provide an argument why this framework is not suitable, i.e., it fails to be useful in a certain situation, feel free to give an example.

Comment author: bryjnar 05 December 2012 11:11:50PM 1 point [-]

Even just take the old logical positivist doctrine about analyticity/syntheticity: all statements are either "analytic" (i.e. true by logic (near enough)), or synthetic (true due to experience). That's at least on the same track. And I'm pretty sure they wouldn't have had a problem with statements that were partially both.

Comment author: crazy88 05 December 2012 12:45:55AM 8 points [-]

Obviously and unfortunately, the idea that you are not supposed to end up with more and more ontologically fundamental stuff inside your philosophy is not mainstream.

I think I must be misunderstanding what you're saying here because something very similar to this is probably the principal accusation relied upon in metaphysical debates (if not the very top, certainly top 3). So let me outline what is standard in metaphysical discussions so that I can get clear on whether you're meaning something different.

In metaphysics, people distinguish between quantitative and qualitative parsimony. Quantitative parsimony is about the amount of stuff your theory is committed to (so a theory according to which more planets exist is less quantitatively parsimonious than an alternative). Most metaphysicians don't care about quantitative parsimony. On the other hand, qualitative parsimony is about the types of stuff that your theory is committed to. So if a theory is committed to causation and time, this would be less qualitatively parsimonious than one that was only committed to causation (just an example, not meant to be an actual case). Qualitative parsimony is seen to be one of the key features of a desirable metaphysical theory. The accusation that your theory postulates extra ontological stuff but doesn't gain further explanatory power for doing so is basically the go-to standard accusation against a metaphysical theory.

Fundamentality is also a major philosophical issue - the idea that some stuff you postulate is ontologically fundamental and some isn't. Fundamentality views are normally coupled with the view that what really matters is qualitative parsimony of fundamental stuff (rather than stuff generally).

So how does this differ from the claim that you're saying is not mainstream?

Comment author: Eliezer_Yudkowsky 05 December 2012 12:53:53AM 2 points [-]

The claim might just need correction to say, "Many philosophers say that simplicity is a good thing but the requirement is not enforced very well by philosophy journals" or something like that. I think I believe you, but do you have an example citation anyway? (SEP entries or other ungated papers are in general good; I'm looking for an example of an idea being criticized due to lack of metaphysical parsimony.) In particular, can we find e.g. anyone criticizing modal logic because possibility shouldn't be basic because metaphysical parsimony?

Comment author: crazy88 05 December 2012 01:11:26AM *  9 points [-]

In terms of Lewis, I don't know of someone criticising him for this off-hand but it's worth noting that Lewis himself (in his book On the Plurality of Worlds) recognises the parsimony objection and feels the need to defend himself against it. In other words, even those who introduce unparsimonious theories in philosophy are expected to at least defend the fact that they do so (of course, many people may fail to meet these standards but the expectation is there and theories regularly get dismissed and ignored if they don't give a good accounting of why we should accept their unparsimonious nature).

Sensations and brain processes: one of Jack Smart's main grounds for accepting the identity theory of mind is based around considerations of parsimony

Quine's paper On What There Is is basically an attack on views that hold that we need to accept the existence of things like Pegasus (because otherwise what are we talking about when we say "Pegasus doesn't exist"). Perhaps a ridiculous debate, but it's worth noting that one of Quine's main motivations is that this view is extremely unparsimonious.

From memory, some proponents of EDT support this theory because they think that we can achieve the same results as CDT (which they think is right) in a more parsimonious way by doing so (no link for that however as that's just vague recollection).

I'm not actually a metaphysician so I can't give an entire roll call of examples but I'd say that the parsimony objection is the most common one I hear when I talk to metaphysicians.

Comment author: Eugine_Nier 06 December 2012 06:22:31AM 2 points [-]

In particular, can we find e.g. anyone criticizing modal logic because possibility shouldn't be basic because metaphysical parsimony?

Why shouldn't it? I haven't seen any reduction of it that deals with this objection.

Comment author: Peterdjones 05 December 2012 02:06:33PM *  0 points [-]

"Many philosophers say that simplicity is a good thing but the requirement is not enforced very well by philosophy journals"

Would that be desirable? If a contributor can argue persuasively for dropping parsimony, why should that be suppressed?

criticizing modal logic because possibility

Surely that should be modal realism.

Comment author: NancyLebovitz 05 December 2012 03:32:27PM 1 point [-]

"Make things as simple as possible, but no simpler." --Albert Einstein

How do you know whether something is as simple as possible?

In terms of publishing, should the standard be as simple as is absolutely possible, or should it be as simple as possible given time and mental constraints?

Comment author: DanArmak 05 December 2012 08:10:15PM 0 points [-]

How do you know whether something is as simple as possible?

You keep trying to make it simpler, but you fail to do so without losing something in return.

Comment author: crazy88 05 December 2012 09:49:04PM *  2 points [-]

It still may be hard to resolve when something is as simple as possible.

So modal realism (the idea that possible worlds exist concretely) has been highlighted a few times in this thread as an unparsimonious theory but Lewis has two responses to this:

1.) This is (at least mostly) quantitative unparsimony not qualitative (lots of stuff, not lots of types of stuff). It's unclear how bad quantitative unparsimony is. Specifically, Lewis argues that there is no difference between possible worlds and actual worlds (actuality is indexical) so he argues that he doesn't postulate two types of stuff (actuality and possibility) he just postulates a lot more of the stuff that we're already committed to. Of course, he may be committed to unicorns as well as goats (which the non-realist isn't) but then you can ask whether he's really committed to more fundamental stuff than we are.

2.) Lewis argues that his theory can explain things that no-one else can so even if his theory is less parsimonious, it gives rewards in return for that cost.

Now many people will argue that Lewis is wrong, perhaps on both counts but the point is that even with the case that's been used almost as a benchmark for unparsimonious philosophy in this thread, it's not as simple as "Lewis postulates two types of stuff when he doesn't need to, therefore, clearly his theory is not as simple as possible."

Comment author: Mardonius 05 December 2012 12:55:10AM 3 points [-]

Isn't this, essentially, a mild departure from late Logical Empiricism to allow for a wider definition of Physical and a more specific definition of Logical references?

Comment author: Eliezer_Yudkowsky 05 December 2012 12:59:26AM 1 point [-]

I don't see anything similar to this post on a quick skim of http://plato.stanford.edu/entries/logical-empiricism/ . Please specify.

Comment author: Mardonius 05 December 2012 01:55:25AM 2 points [-]

Well, I was specifically thinking of this passage

The Great Reductionist Project can be seen as figuring out how to express meaningful sentences in terms of a combination of physical references (statements whose truth-value is determined by a truth-condition directly corresponding to the real universe we're embedded in) and logical references (valid implications of premises, or elements of models pinned down by axioms); where both physical references and logical references are to be described 'effectively' or 'formally', in computable or logical form. (I haven't had time to go into this last part but it's an already-popular idea in philosophy of computation.)

And the Great Reductionist Thesis can be seen as the proposition that everything meaningful can be expressed this way eventually.

Which, to my admittedly rusty knowledge of mid 20th century philosophy, sounds extremely similar to the anti-metaphysics position of Carnap circa 1950. His work on Ramsey sentences, if I recall, was an attempt to reduce mixed statements including theoretical concepts ("appleness") to a statement consisting purely of Logical and Observational Terms. I'm fairly sure I saw something very similar to your writings in his late work regarding Modal Logic, but I'm clearly going to have to dig up the specific passage.

Comment author: RobbBB 05 December 2012 02:38:27AM *  8 points [-]

Amusingly, this endeavor also sounds like your arch-nemesis David Chalmers' new project, Constructing the World. Some of his moderate responses to various philosophical puzzles may actually be quite useful to you in dismissing sundry skeptical objections to the reductive project; from what I've seen, his dualism isn't indispensable to the interesting parts of the work.

Comment author: bryjnar 05 December 2012 03:47:37AM 3 points [-]

Just to say that in general, apart from the stuff about consciousness, which I disagree with but think is interesting, I think that Chalmers is one of the best philosophers alive today. Seriously, he does a lot of good work.

Comment author: Will_Newsome 06 December 2012 05:34:28PM *  4 points [-]

He also reads LessWrong, I think.

Comment author: Alejandro1 06 December 2012 05:56:57PM 4 points [-]

I am about 90% certain that he is djc.

Comment author: gwern 06 December 2012 06:00:06PM 4 points [-]

I'd agree; the link to philpapers (a Chalmers project), claiming to be a pro, having access to leading decision theorists - all consistent.

Comment author: RobbBB 06 December 2012 06:31:44PM 4 points [-]

It's either Chalmers or a deliberate impersonator. 'DJC' stands for 'David John Chalmers.'

Comment author: aaronsw 25 December 2012 08:57:42PM *  1 point [-]

It's too bad EY is deeply ideologically committed to a different position on AI, because otherwise his philosophy seems to very closely parallel John Searle's. Searle is clearer on some points and EY is clearer on others, but other than the AI stuff they take a very similar approach.

EDIT: To be clear, John Searle has written a lot, lot more than the one paper on the Chinese Room, most of it having nothing to do with AI.

Comment author: Eliezer_Yudkowsky 25 December 2012 09:14:15PM 0 points [-]

So... admittedly my main acquaintance with Searle is the Chinese Room argument that brains have 'special causal powers', which made me not particularly interested in investigating him any further. But the Chinese Room argument makes Searle seem like an obvious non-reductionist with respect to not only consciousness but even meaning; he denies that an account of meaning can be given in terms of the formal/effective properties of a reasoner. I've been rendering constructive accounts of how to build meaningful thoughts out of "merely" effective constituents! What part of Searle is supposed to be parallel to that?

Comment author: aaronsw 25 December 2012 09:27:36PM *  4 points [-]

I guess I must have misunderstood something somewhere along the way, since I don't see where in this sequence you provide "constructive accounts of how to build meaningful thoughts out of 'merely' effective constituents". Indeed, you explicitly say "For a statement to be ... true or alternatively false, it must talk about stuff you can find in relation to yourself by tracing out causal links." This strikes me as parallel to Searle's view that consciousness imposes meaning.

But, more generally, Searle says his life's work is to explain how things like "money" and "human rights" can exist in "a world consisting entirely of physical particles in fields of force"; this strikes me as akin to your Great Reductionist Project.

Comment author: pjeby 26 December 2012 12:28:26AM 5 points [-]

Searle says his life's work is to explain how things like "money" and "human rights" can exist in "a world consisting entirely of physical particles in fields of force";

Someone should tell him this has already been done: dissolving that kind of confusion is literally part of LessWrong 101, i.e. the Mind Projection Fallacy. Money and human rights and so forth are properties of minds modeling particles, not properties of the particles themselves.

That this is still his (or any other philosopher's) life's work is kind of sad, actually.

Comment author: aaronsw 04 January 2013 10:01:47PM 2 points [-]

I guess my phrasing was unclear. What Searle is trying to do is generate reductions for things like "money" and "human rights"; I think EY is trying to do something similar and it takes him more than just one article on the Mind Projection Fallacy. (Even once you establish that it's properties of minds, not particles, there's still a lot of work left to do.)

Comment author: Eliezer_Yudkowsky 26 December 2012 12:07:54AM 1 point [-]

This strikes me as parallel to Searle's view that consciousness imposes meaning.

Why? Did I mention consciousness somewhere? Is there some reason a non-conscious software program hooked up to a sensor couldn't do the same thing?

I don't think Searle and I agree on what constitutes a physical particle. For example, he thinks 'physical' particles are allowed to have special causal powers apart from their merely formal properties which cause their sentences to be meaningful. So far as I'm concerned, when you tell me about the structure of something's effects on the particle fields, there shouldn't be anything left after that - anything left is extraphysical.

Comment author: Peterdjones 27 December 2012 11:05:15AM 1 point [-]

Searle's views have nothing to do with attributing novel properties to fundamental particles. They are more to do with identifying mental properties with higher-level physical properties, which are themselves irreducible in a sense (but also reducible in another sense).

Comment author: Ritalin 29 December 2012 08:08:31AM -1 points [-]

That's confusing. What senses?

Comment author: Peterdjones 01 January 2013 11:35:39AM 0 points [-]

See the link I gave to start with.

Comment author: pjeby 25 December 2012 09:12:04PM -1 points [-]

It's too bad EY is deeply ideologically committed to a different position on AI, because otherwise his philosophy seems to very closely parallel John Searle's

Perhaps I'm confused, but isn't Searle the guy who came up with that stupid Chinese Room thing? I don't see at all how that's remotely parallel to LW philosophy, or why it would be a bad thing to be ideologically opposed to his approach to AI. (He seems to think it's impossible to have AI, after all, and argues from the bottom line for that position.)

Comment author: aaronsw 25 December 2012 09:46:41PM 3 points [-]

I was talking about Searle's non-AI work, but since you brought it up, Searle's view is:

  1. qualia exists (because: we experience it)
  2. the brain causes qualia (because: if you cut off any other part of someone they still seem to have qualia)
  3. if you simulate a brain with a Turing machine, it won't have qualia (because: qualia is clearly a basic fact of physics and there's no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not)

Which part does LW disagree with and why?

Comment author: Benito 26 December 2012 01:01:07AM *  2 points [-]

To offer my own reasons for disagreement,

I think the first point is unfounded (or misguided). We do things (like moving, and thinking). We notice and can report that we've done things, and occasionally we notice and can report that we've noticed that we've done something. That we can report how things appear to a part of us that can reflect upon stimuli is not important enough to be called 'qualia'. That we notice that we find experience 'ineffable' is not a surprise either - you would not expect the brain to be able to report everything that occurs, down to the neurons firing (or atoms moving). So, all we really have is the ability to notice and report that which has been advantageous for us to report in the evolutionary history of the human (these stimuli that we can notice are called 'experiences'). There is nothing mysterious here, and the word 'qualia' always seems to be used mysteriously - so I don't think the first point carries the weight it might appear to.

if you simulate a brain with a Turing machine, it won't have qualia (because: qualia is clearly a basic fact of physics and there's no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not)

Qualia is not clearly a basic fact of physics. I made the point that we would not expect a species designed by natural selection to be able to report or comprehend its most detailed, inner workings, solely on the evidence of what it can report and notice. But this is all skirting around the core idea of LessWrong: The map is not the territory. Just because something seems fundamental does not mean it is. Just because it seems like a Turing machine couldn't be doing consciousness, doesn't mean that is how it is. We need to understand how it came to be that we feel what we feel, before we go making big claims about the fundamental nature of reality. This is what is worked on in LessWrong, not in Searle's philosophy.

Comment author: Peterdjones 26 December 2012 11:32:35AM -1 points [-]

That we notice that we find experience 'ineffable' is not a surprise either - you would not expect the brain to be able to report everything that occurs, down to the neurons firing (or atoms moving).

If the ineffability of qualia is down to the complexity of fine-grained neural behaviour, then the question is why is anything effable -- people can communicate about all sorts of things that aren't sensations (and in many cases are abstract and "in the head").

Comment author: Benito 26 December 2012 11:54:52AM *  0 points [-]

I'm not sure that I follow. Can anything we talk about be reduced to less than the basic stimuli we notice ourselves having?

All words (that mean anything) refer to something. When I talk about 'guitars', I remember experiences I've had which I associate with the word (i.e. guitars). Most humans have similar makeups, in that we learn in similar ways, and experience in similar ways (I'm just talking about the psychological unity of humans, and how far our brain design is from, say, mice). So, we can talk about things, because we've learnt to refer certain experiences (words) to others (guitars).

Neither of the two can refer to anything other than the experiences we have. Anything we talk about is in relation to our experiences (or possibly even meaningless).

Comment author: Peterdjones 26 December 2012 02:52:36PM *  -1 points [-]

Most of the classic reductions are reductions to things beneath perceivable stimuli, e.g. heat to molecular motion. Reductionism and physicalism would be in very bad trouble if language and conceptualisation grounded out where perception does. The theory also mispredicts that we would be able to communicate our sensations, but struggle to communicate abstract (e.g. mathematical) ideas with a distant relationship, or no relationship, to sensation. In fact, the classic reductions are to the basic entities of physics, which are ultimately defined mathematically, and often hard to visualise or otherwise relate to sensation.

Comment author: Benito 26 December 2012 04:31:06PM *  0 points [-]

You could point out the different constituents of experience that feel fundamental, but they themselves (e.g. Red) don't feel as though they are made up of anything more than themselves.

When we talk about atoms, however, that isn't a basic piece of mind that mind can talk about. My mind feels as though it is constituted of qualia, and it can refer to atoms. I don't experience an atom, I experience large groups of them, in complex arrangements. I can refer to the atom using larger, complex arrangements of neurons (atoms). Even though, when my mind asks what the basic parts of reality are, it has a chain of reference pointing to atoms, each part of that chain is a set of neural connections, that don't feel reducible.

Even on reflection, our experiences reduce to qualia. We deduce that qualia are made of atoms, but that doesn't mean that our experience feels like its been reduced to atoms.

Comment author: Peterdjones 27 December 2012 11:12:28AM *  -1 points [-]

Where is that heading? Is it supposed to tell me why qualia are ineffable... or rather, why qualia are more ineffable than cognition?

Comment author: nshepperd 26 December 2012 12:33:27AM *  2 points [-]

I can't really speak for LW as a whole, but I'd guess that among the people here who don't believe¹ "qualia doesn't exist", 1 and 2 are fine, but we have issues with 3, as expanded below. Relatedly, there seems to be some confusion between the "boring AI" proposition, that you can make computers do reasoning, and Searle's "strong AI" thing he's trying to refute, which says that AIs running on computers would have both consciousness and some magical "intentionality". "Strong AI" shouldn't actually concern us, except in talking about EMs or trying to make our FAI non-conscious.

3. if you simulate a brain with a Turing machine, it won't have qualia

Pretty much disagree.

qualia is clearly a basic fact of physics

Really disagree.

and there's no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not

And this seems really unlikely.

¹ I qualify my statement like this because there is a long-standing confusion over the use of the word "qualia" as described in my parenthetical here.

Comment author: aaronsw 04 January 2013 10:08:32PM 2 points [-]

Well, let's be clear: the argument I laid out is trying to refute the claim that "I can create a human-level consciousness with a Turing machine". It doesn't mean you couldn't create an AI using something other than a pure Turing machine and it doesn't mean Turing machines can't do other smart computations. But it does mean that uploading a brain into a Von Neumann machine isn't going to keep you alive.

So if you disagree that qualia is a basic fact of physics, what do you think it reduces to? Is there anything else that has a first-person ontology the way qualia does?

And if you think physics can tell whether something is a Turing-machine-simulating-a-brain, what's the physical algorithm for looking at a series of physical particles and deciding whether it's executing a particular computation or not?

Comment author: nshepperd 05 January 2013 02:45:29AM 1 point [-]

So if you disagree that qualia is a basic fact of physics, what do you think it reduces to?

Something brains do, obviously. One way or another.

And if you think physics can tell whether something is a Turing-machine-simulating-a-brain, what's the physical algorithm for looking at a series of physical particles and deciding whether it's executing a particular computation or not?

I should perhaps be asking what evidence Searle has for thinking he knows things like what qualia is, or what a computation is. My statements were both negative: it is not clear that qualia is a basic fact of physics; it is not obvious that you can't describe computation in physical terms. Searle just makes these assumptions.

If you must have an answer, how about this: a physical system P is a computation of a value V if adding as premises the initial and final states of P and a transition function describing the physics of P shortens a formal proof that V = whatever.
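The criterion above can be sketched in code. This is purely illustrative: it flattens the "shortens a formal proof" condition into a direct consistency check, and every name in it (the `implements` predicate, the toy doubling "physics") is a hypothetical stand-in, not anything from Searle or from actual physics.

```python
# Illustrative sketch: a physical trace "implements" a computation of value v
# if each step of the trace obeys the system's transition function (its
# "physics") and the final state decodes to v.

def implements(trace, transition, decode, v):
    """Return True if `trace` follows `transition` at every step
    and its final state decodes to the value `v`."""
    follows_physics = all(transition(s) == t for s, t in zip(trace, trace[1:]))
    return follows_physics and decode(trace[-1]) == v

# Toy "physics": a register that doubles each tick; states decode to themselves.
double = lambda state: state * 2
identity = lambda state: state

print(implements([3, 6, 12, 24], double, identity, 24))  # True
print(implements([3, 5, 12, 24], double, identity, 24))  # False: 3 -> 5 violates the "physics"
```

On this toy picture, whether a hunk of matter is "executing a particular computation" is a question about its state trajectory and dynamics, not about anything over and above physics.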

Comment author: aaronsw 05 January 2013 09:46:46PM 1 point [-]

They're not assumptions, they're the answers to questions that have the highest probability going for them given the evidence.

Comment author: MugaSofer 26 December 2012 02:04:35AM 1 point [-]

if you simulate a brain with a Turing machine, it won't have qualia (because: qualia is clearly a basic fact of physics and there's no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not)

There's your problem. Why the hell should we assume that "qualia is clearly a basic fact of physics "?

Comment author: aaronsw 04 January 2013 10:10:16PM 1 point [-]

Because it's the only thing in the universe we've found with a first-person ontology. How else do you explain it?

Comment author: MugaSofer 04 January 2013 11:39:16PM *  -1 points [-]

Well, I probably can't explain it as eloquently as others here - you should try the search bar, there are probably posts on the topic much better than this one - but my position would be as follows:

  • Qualia are experienced directly by your mind.

  • Everything about your mind seems to reduce to your brain.

  • Therefore, qualia are probably part of your brain.

Furthermore, I would point out two things: one, that qualia seem to be essential parts of having a mind; I certainly can't imagine a mind without qualia; and two, that we can view (very roughly) images of what people see in the thalamus, which would suggest that what we call "qualia" might simply be part of, y'know, data processing.

Comment author: TheOtherDave 26 December 2012 12:57:03AM *  1 point [-]

Another not-speaking-for-LW answer:

Re #1: I certainly agree that we experience things, and that therefore the causes of our experience exist. I don't really care what name we attach to those causes... what matters is the thing and how it relates to other things, not the label. That said, in general I think the label "qualia" causes more trouble due to conceptual baggage than it resolves, much like the label "soul".

Re #2: This argument is oversimplistic, but I find the conclusion likely.
More precisely: there are things outside my brain (like, say, my adrenal glands or my testicles) that alter certain aspects of my experience when removed, so it's possible that the causes of those aspects reside outside my brain. That said, I don't find it likely; I'm inclined to agree that the causes of my experience reside in my brain. I still don't care much what label we attach to those causes, and I still think the label "qualia" causes more confusion due to conceptual baggage than it resolves.

Re #3: I see no reason at all to believe this. The causes of experience are no more "clearly a basic fact of physics" than the causes of gravity; all that makes them seem "clearly basic" to some people is the fact that we don't understand them in adequate detail yet.

Comment author: pjeby 26 December 2012 12:20:35AM 0 points [-]

Searle's view is:

  1. qualia exists (because: we experience it)
  2. the brain causes qualia (because: if you cut off any other part of someone they still seem to have qualia)
  3. if you simulate a brain with a Turing machine, it won't have qualia (because: qualia is clearly a basic fact of physics and there's no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not)

Which part does LW disagree with and why?

The whole thing: it's the Chinese Room all over again, an intuition pump that begs the very question it's purportedly answering. (Beginning an argument for the existence of qualia with a bare assertion that they exist is a little more obvious than the way that the word "understanding" is fudged in the Chinese Room argument, but basically it's the same.)

I suppose you could say that there's a grudging partial agreement with your point number two: that "the brain causes qualia". The rest of what you listed, however, is drivel, as is easy to see if you substitute some other term besides "qualia", e.g.:

  1. Free will exists (because: we experience it)
  2. The brain causes free will (because if you cut off any part, etc.)
  3. If you simulate a brain with a Turing machine, it won't have free will because clearly it's a basic fact of physics and there's no way to tell just using physics whether something is a machine simulating a brain or not.

It doesn't matter what term you plug into this in place of "qualia" or "free will", it could be "love" or "charity" or "interest in death metal", and it's still not saying anything more profound than, "I don't think machines are as good as real people, so there!"

Or more precisely: "When I think of people with X it makes me feel something special that I don't feel when I think of machines with X, therefore there must be some special quality that separates people from machines, making machine X 'just a simulation'." This is the root of all these Searle-ian arguments, and they are trivially dissolved by understanding that the special feeling people get when they think of X is also a property of how brains work.

Specifically, the thing that drives these arguments is our inbuilt machinery that classifies things as mind-having or not-mind-having, for purposes of prediction-making. But the feeling that we get that a thing is mind-having or not-mind-having is based on what was useful evolutionarily, not on what the actual truth is. Searlian (Surly?) arguments are thus in exactly the same camp as any other faith-based argument: elevating one's feelings to Truth, irrespective of the evidence against them.

Comment author: [deleted] 26 December 2012 01:07:45AM 1 point [-]

(Beginning an argument for the existence of qualia with a bare assertion that they exist is a little more obvious than the way that the word "understanding" is fudged in the Chinese Room argument, but basically it's the same.)

Just a nit pick: the argument Aaron presented wasn't an argument for the existence of qualia, and so taking the existence of qualia as a premise doesn't beg the question. Aaron's argument was an argument against artificial consciousness.

Also, I think Aaron's presentation of (3) was a bit unclear, but it's not so bad a premise as you think. (3) says that since qualia are not reducible to purely physical descriptions, and since a brain-simulating Turing machine is entirely reducible to purely physical descriptions, brain-simulating Turing machines won't experience qualia. So if we have qualia, and count as conscious in virtue of having qualia (1), then brain-simulating Turing machines won't count as conscious. If we don't have qualia, i.e. if all our mental states are reducible to purely physical descriptions, then the argument is unsound because premise (1) is false.

You're right that you can plug many a term in to replace 'qualia', so long as those things are not reducible to purely physical descriptions. So you couldn't plug in, say, heart-attacks.

This is the root of all these Searle-ian arguments, and they are trivially dissolved by understanding that the special feeling people get when they think of X is also a property of how brains work.

Could you explain this a bit more? I don't see how it's relevant to the argument. Searle is not arguing on the basis of any special feelings. This seems like a straw man to me, at the moment, but I may not be appreciating the flaws in Searle's argument.

Comment author: pjeby 26 December 2012 03:09:16AM *  1 point [-]

the argument Aaron presented wasn't an argument for the existence of qualia, and so taking the existence of qualia as a premise doesn't beg the question

In order for the argument to make any sense, you have to buy into several assumptions which basically are the argument. It's "qualia are special because they're special, QED". I thought about calling it circular reasoning, except that it seems closer to begging the question. If you have a better way to put it, by all means share.

Could you explain this a bit more? I don't see how it's relevant to the argument. Searle is not arguing on the basis of any special feelings. This seems like a straw man to me, at the moment, but I may not be appreciating the flaws in Searle's argument.

When I said that our mind detection circuitry was the root of the argument, I didn't mean that Searle was overtly arguing on the basis of his feelings. What I'm saying is, the only evidence for Searle-type premises are the feelings created by our mind-detection circuitry. If you assume these feelings mean something, then Searle-ish arguments will seem correct, and Searle-ish premises will seem obvious beyond question.

However, if you truly grok the mind-projection fallacy, then Searle-type premises are just as obviously nonsensical, and there's no reason to pay any attention to the arguments built on top of them. Even as basic a tool as Rationalist Taboo suffices to debunk the premises before the argument can get off the ground.

Comment author: Peterdjones 26 December 2012 12:28:49PM -1 points [-]

you have to buy into several assumptions which basically are the argument.

Any valid argument has a conclusion that is entailed by its premises taken jointly. Circularity is when the whole conclusion is entailed by one premise, with the others being window-dressing.

you have to buy into several assumptions which basically are the argument.

I think there is a way that ripe tomatoes seem visually: how is that mind-projection?

Comment author: MugaSofer 26 December 2012 01:57:19AM 0 points [-]

But ... if you're assuming that qualia are "not reducible to purely physical descriptions", and you need qualia to be conscious, then obviously brain-simulations won't be conscious. But those assumptions seem to be the bulk of the position he's defending, aren't they?

Comment author: [deleted] 26 December 2012 03:21:28AM 2 points [-]

But those assumptions seem to be the bulk of the position he's defending, aren't they?

Right, the argument comes down, for most of us, to the first premise: do we or do we not have mental states irreducible to purely physical conditions. Aaron didn't present an argument for that, he just presented Searle's argument against AI from that. But you're right to ask for a defense of that premise, since it's the crucial one and it's (at the moment) undefended here.

Comment author: MugaSofer 26 December 2012 03:13:23PM *  -2 points [-]

Presenting an obvious result of a nonobvious premise as if it was a nonobvious conclusion seems suspicious, as if he's trying to trick listeners into accepting his conclusion even when their priors differ.

[Edited for terminology.]

Comment author: [deleted] 26 December 2012 03:33:34PM 0 points [-]

Presenting a trivial conclusion from nontrivial premises as a nontrivial conclusion seems suspicious

Not only suspicious, but impossible: if the premises are non-trivial, the conclusion is non-trivial.

In every argument, the conclusion follows straight away from the premises. If you accept the premises, and the argument is valid, then you must accept the conclusion. The conclusion does not need any further support.

Comment author: Peterdjones 26 December 2012 12:48:10PM -1 points [-]

. (3) says that since qualia are not reducible to purely physical descriptions, and since a brain-simulating turing-machine is entirely reducible to purely physical descriptions, brain-simulating turing-machines won't experience qualia.

To pick a further nit, the argument is more that qualia can't be engineered into an AI. If an AI implementation has qualia at all, it would be serendipitous.

Comment author: [deleted] 26 December 2012 03:36:16PM *  0 points [-]

To pick a further nit, the argument is more that qualia can't be engineered into an AI. If an AI implementation has qualia at all, it would be serendipitous.

That's a possibility, but not as I laid out the argument: if being conscious entails having qualia, and if qualia are all irreducible to purely physical descriptions, and every state of a Turing machine is reducible to a purely physical description, then Turing machines can't simulate consciousness. That's not very neat, but I do believe it's valid. Your alternative is plausible, but it requires my 'Turing machines are reducible to purely physical descriptions' premise to be false.

Comment author: aaronsw 04 January 2013 09:51:39PM *  0 points [-]

Beginning an argument for the existence of qualia with a bare assertion that they exist

Huh? This isn't an argument for the existence of qualia -- it's an attempt to figure out whether you believe in qualia or not. So I take it you disagree with step one, that qualia exists? Do you think you are a philosophical zombie?

I do think essentially the same argument goes through for free will, so I don't find your reductio at all convincing. There's no reason, however, to believe that "love" or "charity" is a basic fact of physics, since it's fairly obvious how to reduce these. Do you think you can reduce qualia?

I don't understand why you think this is a claim about my feelings.

Comment author: shminux 05 January 2013 12:38:05AM *  2 points [-]

Suppose that neuroscientists some day show that the quale of seeing red matches a certain brain structure or a neuron firing pattern or a neuro-chemical process in all humans. Would you then say that the quale of red has been reduced?

Comment author: aaronsw 05 January 2013 09:45:19PM 2 points [-]

Of course not!

Comment author: shminux 05 January 2013 10:16:42PM 0 points [-]

and why not?

Comment author: aaronsw 05 January 2013 10:23:39PM 1 point [-]

Because the neuron firing pattern is presumably the cause of the quale, it's certainly not the quale itself.

Comment author: Peterdjones 05 January 2013 10:36:35PM *  -2 points [-]

Reduction is an explanatory process: a mere observed correlation does not qualify.

Comment author: pjeby 06 January 2013 01:21:46AM *  0 points [-]

I take it you disagree with step one, that qualia exists?

I think that anyone talking seriously about "qualia" is confused, in the same way that anyone talking seriously about "free will" is.

That is, they're words people use to describe experiences as if they were objects or capabilities. Free will isn't something you have, it's something you feel. Same for "qualia".

I do think essentially the same argument goes through for free will

Dissolving free will is considered an entry-level philosophical exercise for Lesswrong. If you haven't covered that much of the sequences homework, it's unlikely that you'll find this discussion especially enlightening.

(More to the point, you're doing the rough equivalent of bugging people on a newsgroup about a question that is answered in the FAQ or an RTFM.)

Do you think you are a philosophical zombie?

This is probably a good answer to that question.

I don't understand why you think this is a claim about my feelings.

Because (as with free will) the only evidence anyone has (or can have) for the concept of qualia is their own intuitive feeling that they have some.

Comment author: Peterdjones 06 January 2013 01:27:09AM -2 points [-]

Free will isn't something you have, it's something you feel.

So you say. It is not standardly defined that way.

Same for "qualia".

Qualia are defined as feelings, sensations etc. Since we have feelings, sensations etc we have qualia. I do not see the confusion in using the word "qualia".

Comment author: hairyfigment 05 January 2013 12:21:35AM -1 points [-]

Do you think you can reduce qualia?

Well, would that mean writing a series like this?

My intuition certainly says that Martha has a feeling of ineffable learning. Do you at least agree that this proves the unreliability of our intuitions here?

Comment author: aaronsw 05 January 2013 09:45:02PM -1 points [-]

Who said anything about our intuitions (except you, of course)?

Comment author: hairyfigment 06 January 2013 05:26:14AM 0 points [-]

You keep making statements like,

the neuron firing pattern is presumably the cause of the quale, it's certainly not the quale itself.

And you seem to consider this self-evident. Well, it seemed self-evident to me that Martha's physical reaction would 'be' a quale. So where do we go from there?

(Suppose your neurons reacted all the time the way they do now when you see orange light, except that they couldn't connect it to anything else - no similarities, no differences, no links of any kind. Would you see anything?)

Comment author: aaronsw 06 January 2013 07:34:20PM 1 point [-]

I guess you need to do some more thinking to straighten out your views on qualia.

Comment author: MugaSofer 04 January 2013 11:46:10PM *  -1 points [-]

I do think essentially the same argument goes through for free will

Could you expand on this point, please? It is generally agreed* that "free will vs determinism" is a dilemma that we dissolved long ago. I can't see what else you could mean by this, so ...

[*EDIT: here, that is]

Comment author: aaronsw 05 January 2013 09:36:42PM 0 points [-]

I guess it really depends on what you mean by free will. If by free will, pjeby meant some kind of qualitative experience, then it strikes me that what he means by it is just a form of qualia and so of course the qualia argument goes through. If he means by it something more complicated, then I don't see how point one holds (we experience it), and the argument obviously doesn't go through.

Comment author: Peterdjones 26 December 2012 12:17:36PM 0 points [-]

Beginning an argument for the existence of qualia with a bare assertion that they exist

But that's not contentious. Qualia are things like the appearance of tomatoes or taste of lemon. I've seen tomatoes and tasted lemons.

This is the root of all these Searle-ian arguments, and they are trivially dissolved by understanding that the special feeling people get when they think of X is also a property of how brains work.

But Searle says that feelings, understanding, etc are properties of how the brain works. What he argues against is the claim that they are computational properties. But it is also uncontentious that physicalism can be true and computationalism false.

Comment author: Peterdjones 26 December 2012 10:52:29AM -1 points [-]

if you simulate a brain with a Turing machine, it won't have qualia (because: qualia is clearly a basic fact of physics and there's no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not)

It isn't even clear to Searle that qualia are physically basic. He thinks consciousness is a high-level outcome of the brain's concrete causal powers. His objection to computational approaches is rooted in the abstract nature of computation, not in the physical basicness of qualia. (In fact, he doesn't use the word "qualia", although he often seems to be talking about the same thing.)