Comment author: pjeby 26 December 2012 12:20:35AM 0 points [-]

Searle's view is:

  1. qualia exists (because: we experience it)
  2. the brain causes qualia (because: if you cut off any other part of someone they still seem to have qualia)
  3. if you simulate a brain with a Turing machine, it won't have qualia (because: qualia is clearly a basic fact of physics and there's no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not)

Which part does LW disagree with and why?

The whole thing: it's the Chinese Room all over again, an intuition pump that begs the very question it's purportedly answering. (Beginning an argument for the existence of qualia with a bare assertion that they exist is a little more obvious than the way that the word "understanding" is fudged in the Chinese Room argument, but basically it's the same.)

I suppose you could say that there's a grudging partial agreement with your point number two: that "the brain causes qualia". The rest of what you listed, however, is drivel, as is easy to see if you substitute some other term besides "qualia", e.g.:

  1. Free will exists (because: we experience it)
  2. The brain causes free will (because if you cut off any part, etc.)
  3. If you simulate a brain with a Turing machine, it won't have free will because clearly it's a basic fact of physics and there's no way to tell just using physics whether something is a machine simulating a brain or not.

It doesn't matter what term you plug into this in place of "qualia" or "free will", it could be "love" or "charity" or "interest in death metal", and it's still not saying anything more profound than, "I don't think machines are as good as real people, so there!"

Or more precisely: "When I think of people with X it makes me feel something special that I don't feel when I think of machines with X, therefore there must be some special quality that separates people from machines, making machine X 'just a simulation'." This is the root of all these Searle-ian arguments, and they are trivially dissolved by understanding that the special feeling people get when they think of X is also a property of how brains work.

Specifically, the thing that drives these arguments is our inbuilt machinery that classifies things as mind-having or not-mind-having, for purposes of prediction-making. But the feeling that we get that a thing is mind-having or not-mind-having is based on what was useful evolutionarily, not on what the actual truth is. Searlian (Surly?) arguments are thus in exactly the same camp as any other faith-based argument: elevating one's feelings to Truth, irrespective of the evidence against them.

Comment author: aaronsw 04 January 2013 09:51:39PM *  0 points [-]

Beginning an argument for the existence of qualia with a bare assertion that they exist

Huh? This isn't an argument for the existence of qualia -- it's an attempt to figure out whether you believe in qualia or not. So I take it you disagree with step one, that qualia exists? Do you think you are a philosophical zombie?

I do think essentially the same argument goes through for free will, so I don't find your reductio at all convincing. There's no reason, however, to believe that "love" or "charity" is a basic fact of physics, since it's fairly obvious how to reduce these. Do you think you can reduce qualia?

I don't understand why you think this is a claim about my feelings.

Comment author: pjeby 25 December 2012 09:12:04PM -1 points [-]

It's too bad EY is deeply ideologically committed to a different position on AI, because otherwise his philosophy seems to very closely parallel John Searle's

Perhaps I'm confused, but isn't Searle the guy who came up with that stupid Chinese Room thing? I don't see at all how that's remotely parallel to LW philosophy, or why it would be a bad thing to be ideologically opposed to his approach to AI. (He seems to think it's impossible to have AI, after all, and argues from the bottom line for that position.)

Comment author: aaronsw 25 December 2012 09:46:41PM 3 points [-]

I was talking about Searle's non-AI work, but since you brought it up, Searle's view is:

  1. qualia exists (because: we experience it)
  2. the brain causes qualia (because: if you cut off any other part of someone they still seem to have qualia)
  3. if you simulate a brain with a Turing machine, it won't have qualia (because: qualia is clearly a basic fact of physics and there's no way just using physics to tell whether something is a Turing-machine-simulating-a-brain or not)

Which part does LW disagree with and why?

Comment author: Eliezer_Yudkowsky 25 December 2012 09:14:15PM 0 points [-]

So... admittedly my main acquaintance with Searle is the Chinese Room argument that brains have 'special causal powers', which made me not particularly interested in investigating him any further. But the Chinese Room argument makes Searle seem like an obvious non-reductionist with respect to not only consciousness but even meaning; he denies that an account of meaning can be given in terms of the formal/effective properties of a reasoner. I've been rendering constructive accounts of how to build meaningful thoughts out of "merely" effective constituents! What part of Searle is supposed to be parallel to that?

Comment author: aaronsw 25 December 2012 09:27:36PM *  4 points [-]

I guess I must have misunderstood something somewhere along the way, since I don't see where in this sequence you provide "constructive accounts of how to build meaningful thoughts out of 'merely' effective constituents". Indeed, you explicitly say "For a statement to be ... true or alternatively false, it must talk about stuff you can find in relation to yourself by tracing out causal links." This strikes me as parallel to Searle's view that consciousness imposes meaning.

But, more generally, Searle says his life's work is to explain how things like "money" and "human rights" can exist in "a world consisting entirely of physical particles in fields of force"; this strikes me as akin to your Great Reductionist Project.

Comment author: Eliezer_Yudkowsky 05 December 2012 12:22:10AM 4 points [-]

Mainstream status:

AFAIK, the proposition that "Logical and physical reference together comprise the meaning of any meaningful statement" is original-as-a-whole (with many component pieces precedented hither and yon). Likewise I haven't elsewhere seen the suggestion that the great reductionist project is to be seen in terms of analyzing everything into physics+logic.

An important related idea I haven't gone into here is the idea that the physical and logical references should be effective or formal, which has been in the job description since, if I recall correctly, the late nineteenth century or so, when mathematics was being axiomatized formally for the first time. This part is popular, possibly majoritarian; I think I'd call it mainstream. See e.g. http://plato.stanford.edu/entries/church-turing/ although logical specifiability is more general than computability (this is also already-known).

Obviously and unfortunately, the idea that you are not supposed to end up with more and more ontologically fundamental stuff is not well-enforced in mainstream philosophy.

Comment author: aaronsw 25 December 2012 08:57:42PM *  1 point [-]

It's too bad EY is deeply ideologically committed to a different position on AI, because otherwise his philosophy seems to very closely parallel John Searle's. Searle is clearer on some points and EY is clearer on others, but other than the AI stuff they take a very similar approach.

EDIT: To be clear, John Searle has written a lot, lot more than the one paper on the Chinese Room, most of it having nothing to do with AI.

Comment author: [deleted] 01 December 2012 02:34:52PM 0 points [-]

Try this and let me know if it's what you're looking for.

In response to comment by [deleted] on Open Thread, December 1-15, 2012
Comment author: aaronsw 01 December 2012 02:40:54PM 2 points [-]

That's a good explanation of how to do Solomonoff Induction, but it doesn't really explain why. Why is a Kolmogorov complexity prior better than any other prior?
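(For concreteness, the kind of prior being asked about can be sketched like this. This is my own toy illustration, not from the linked explanation: each hypothesis is identified with a program, and a program of length L bits gets unnormalized weight 2^-L, so shorter programs dominate. The program strings here are stand-ins, not real code.)

```python
def solomonoff_style_prior(programs):
    """Give each program p unnormalized weight 2^-len(p), then normalize.

    This is the complexity-weighted prior in miniature: shorter
    (simpler) hypotheses receive exponentially more probability mass.
    """
    weights = {p: 2.0 ** -len(p) for p in programs}
    total = sum(weights.values())
    return {p: w / total for p, w in weights.items()}

# Three stand-in "programs" of 1, 2, and 4 bits:
prior = solomonoff_style_prior(["0", "01", "0110"])
assert prior["0"] > prior["01"] > prior["0110"]
```

The "why" question remains: nothing in the sketch itself says this weighting is better than any other normalizable assignment.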

Comment author: aaronsw 01 December 2012 02:36:57PM 0 points [-]

I agree with EY that collapse interpretations of QM are ridiculous but are there any arguments against the Bohm interpretation better than the ones canvassed in the SEP article?

http://plato.stanford.edu/entries/qm-bohm/#o

Comment author: aaronsw 01 December 2012 02:18:40PM 1 point [-]

Someone smart recently argued that there's no empirical evidence young earth creationists are wrong, because all the evidence we have of the Earth's age is consistent with the hypothesis that God created the earth 4000 years ago but designed it to look much older. Is there a good one-page explanation of the core LessWrong idea that your beliefs need to be shifted by evidence even when the evidence isn't dispositive, as opposed to the standard scientific notion of devastating proof? Right now the idea seems smeared across the Sequences.
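(The idea in question is just Bayes' rule with non-dispositive evidence. A minimal numerical illustration, with made-up numbers: evidence consistent with both hypotheses still shifts belief whenever it is more probable under one of them.)

```python
def bayes_update(prior, likelihood_h, likelihood_not_h):
    """Posterior P(H|E) from prior P(H) and likelihoods P(E|H), P(E|~H)."""
    numerator = likelihood_h * prior
    return numerator / (numerator + likelihood_not_h * (1 - prior))

# "Old-looking" geology: certain under an old Earth, merely possible
# (say 10%, an invented figure) under a deceptive recent creation.
# The evidence doesn't refute the young-Earth hypothesis, but it
# still moves a 50/50 prior to roughly 91% for the old Earth.
posterior = bayes_update(prior=0.5, likelihood_h=1.0, likelihood_not_h=0.1)
assert posterior > 0.9
```

No single update is a "devastating proof", but repeated updates of this kind drive the posterior on the contrived hypothesis toward zero.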

Comment author: aaronsw 01 December 2012 02:12:32PM 0 points [-]

Typo: But or many philosophical problems

Comment author: aaronsw 01 December 2012 02:02:56PM 1 point [-]
In response to Causal Universes
Comment author: Eliezer_Yudkowsky 28 November 2012 06:13:09AM 22 points [-]

Mainstream status:

I haven't yet particularly seen anyone else point out that there is in fact a way to finitely Turing-compute a discrete universe with self-consistent Time-Turners in it. (In fact I hadn't yet thought of how to do it at the time I wrote Harry's panic attack in Ch. 14 of HPMOR, though a primary literary goal of that scene was to promise my readers that Harry would not turn out to be living in a computer simulation. I think there might have been an LW comment somewhere that put me on that track or maybe even outright suggested it, but I'm not sure.)

The requisite behavior of the Time Turner is known as Stable Time Loops on the wiki that will ruin your life, and known as the Novikov self-consistency principle to physicists discussing "closed timelike curve" solutions to General Relativity. Scott Aaronson showed that time loop logic collapses PSPACE to polynomial time.
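(A toy illustration of the finite computation being described; this construction is my own reading, not necessarily the one EY had in mind. To compute a discrete universe with a self-consistent time loop, enumerate every candidate message the Time-Turner could deliver, run the universe forward deterministically, and keep only histories where what gets sent back equals what arrived. That equality is the Novikov-style fixed-point condition.)

```python
def consistent_histories(messages, dynamics):
    """Return the messages m that satisfy dynamics(m) == m.

    `dynamics` runs the universe forward from the arrival of message m
    and returns what the future then sends back; a stable time loop is
    exactly a fixed point of this map. Brute-force enumeration is
    finite because the message space is finite.
    """
    return [m for m in messages if dynamics(m) == m]

# Toy universe: whatever number arrives from the future, the universe
# evolves and sends back twice that number mod 5. The only
# self-consistent loop is m == 0.
fixed = consistent_histories(range(5), lambda m: (2 * m) % 5)
assert fixed == [0]
```

The search is exponential in the message size, which is consistent with Aaronson's result: granting the time loop for free collapses that search to efficient computation.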

I haven't yet seen anyone else point out that space and time look like a simple generalization of discrete causal graphs to continuous metrics of relatedness and determination, with c being the generalization of locality. This strikes me as important, so any precedent for it or pointer to related work would be much appreciated.

Comment author: aaronsw 29 November 2012 11:40:30PM *  1 point [-]

I don't totally understand it, but Zuse 1969 seems to talk about spacetime as a sort of discrete causal graph with c as the generalization of locality ("In any case, a relation between the speed of light and the speed of transmission between the individual cells of the cellular automaton must result from such a model."). Fredkin and Wolfram probably also have similar discussions.
