In response to Cached Thoughts
Comment author: Phil_Goetz2 30 June 2008 01:14:52AM 0 points [-]

In 1998, I wrote a rec.arts.int-fiction post called "Believable stupidity" (http://groups.google.com/group/rec.arts.int-fiction/browse_thread/thread/60a077934f89a291/3fffb9048965857d?lnk=gst&q=believable+stupidity#3fffb9048965857d)

saying that Eliza, a computer program that matches patterns and fills in a template to produce a response, always wins the Loebner competition because template matching is closer to what people actually do than reasoning is.
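The template-matching loop described above can be sketched in a few lines. This is only an illustrative toy, not Weizenbaum's actual program: the real Eliza had a much larger rule set, keyword ranking, and pronoun reflection, all of which are omitted here.

```python
import re

# A minimal Eliza-style responder: scan a list of (pattern, template)
# rules, and fill the first matching template with the captured text.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]
DEFAULT = "Please tell me more."  # fallback when no pattern matches

def respond(utterance):
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(*m.groups())
    return DEFAULT

print(respond("I am feeling sad"))  # How long have you been feeling sad?
```

Note that no reasoning or world model is involved anywhere: the "intelligence" is entirely in the rule list, which is the point of the comment above.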

Comment author: Phil_Goetz2 15 April 2008 01:34:51AM 2 points [-]

Someone (Russell?) once commented on the surprising efficacy of mathematics, which was developed by people who did not believe that it would ever serve any purpose, and yet ended up being at the core of many pragmatic solutions.

A companion observation is on the surprising inefficacy of philosophy, which is intended to solve our greatest problems, and never does. Like Eliezer, my impression is that philosophy just generates a bunch of hypotheses, with no way of choosing between them, until the right hypothesis is eventually isolated by scientists. Philosophy is usually an attempt to do science without all the hard work. One might call philosophy the "science of untestable hypotheses".

But, on the other hand, there must be cases where philosophical inclinations have influenced people to pursue lines of research that solved some problem sooner than it would have been solved without the initial philosophical inclination.

One example is the initial conception that the Universe could be described mathematically. Kepler and Newton worked so hard at finding mathematical equations to govern the movements of celestial bodies because they believed that God must have designed a Universe according to some order. If they'd been atheists, they might never have done so.

This example doesn't redeem philosophy, because I believe their philosophies were helpful only by chance. I'd like to see how many examples there are of philosophical notions that sped up research that proved them correct. Can anyone think of some?

Comment author: Phil_Goetz2 11 April 2008 04:00:42AM 0 points [-]

To make it clear why you would sometimes want to think about implied invisibles, suppose you're going to launch a spaceship, at nearly the speed of light, toward a faraway supercluster. By the time the spaceship gets there and sets up a colony, the universe's expansion will have accelerated too much for them to ever send a message back. Do you deem it worth the purely altruistic effort to set up this colony, for the sake of all the people who will live there and be happy? Or do you think the spaceship blips out of existence before it gets there? This could be a very real question at some point.

I don't see any difference between deciding to send the spaceship even though the colonists will be outside my lightcone when they get there, and deciding to send the spaceship even though I will be dead when they get there.

I don't think it's possible to get outside Earth's light cone by travelling at less than the speed of light, is it? I'm not well-educated about such things, but I thought that leaving a light cone was possible only during the very early stages (e.g., the first several seconds) after the big bang. Of course, that was said back when people believed the universe's expansion was slowing down. But unless the universe's expansion allows things to move out of Earth's light cone - and I suspect that allowing that possibility would allow violation of causality, because it seems it would require a perceived velocity with respect to Earth greater than the speed of light - then the entire exercise may be moot; the notion of invisibles may be as incoherent as the atomically-identical zombies.

In response to GAZP vs. GLUT
Comment author: Phil_Goetz2 07 April 2008 01:15:51PM 1 point [-]

PK is right. I don't think a GLUT can be intelligent, since it can't remember what it's done. If you let it write notes in the sand and then use those notes as part of the future stimulus, then it's a Turing machine.

The notion that a GLUT could be intelligent is predicated on the good-old-fashioned AI idea that intelligence is a function that computes a response from a stimulus. This idea, most of us in this century now believe, is wrong.
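The distinction drawn above can be made concrete with a toy sketch (hypothetical illustration, not from the original post): a pure GLUT maps the current stimulus alone to a response, so identical stimuli always get identical responses and nothing can be remembered. Letting the table also read and write "notes" that become part of the next stimulus gives the system state, like a Turing machine's tape.

```python
# Stateless GLUT: the same stimulus always yields the same response,
# so the system cannot remember that it has seen an input before.
glut = {
    "hello": "hi",
}

def glut_respond(stimulus):
    return glut[stimulus]

# GLUT plus external notes: the table's keys include the notes written
# so far, and its values include updated notes, so past inputs can
# influence future responses. This is the "notes in the sand" case.
glut_with_notes = {
    ("hello", ""):    ("hi",        "met"),
    ("hello", "met"): ("hi again!", "met"),
}

def stateful_respond(stimulus, notes):
    response, new_notes = glut_with_notes[(stimulus, notes)]
    return response, new_notes

notes = ""
r1, notes = stateful_respond("hello", notes)  # "hi"
r2, notes = stateful_respond("hello", notes)  # "hi again!"
```

The second table is still just a lookup table, but once its output feeds back into its input it is no longer a pure stimulus-response function, which is the comment's point.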

In response to GAZP vs. GLUT
Comment author: Phil_Goetz2 07 April 2008 03:07:33AM 0 points [-]

"Eliezer, I suspect you are not being 100% honest here. I don't have any problems with a GLUT being conscious."

I have problems with a GLUT being conscious. (Actually, the GLUT fails dramatically to satisfy the graph-theoretic requirements for consciousness that I alluded to but did not describe earlier today, but I wouldn't believe that a GLUT could be conscious even if that weren't the case.)

Comment author: Phil_Goetz2 06 April 2008 07:25:34PM 0 points [-]

Going on about zombies and consciousness as if you were addressing philosophical issues, when you have redefined consciousness to mean a particular easily-comprehended computational or graph-theoretic property, falls squarely into the category of ideas that I consider Silly.

Although, ironically, I'm in the process of doing exactly that. I will try to come up with a rationalization for why it is Not Silly when I do it.

Comment author: Phil_Goetz2 06 April 2008 07:22:25PM 0 points [-]

Caledonian writes:

Um, no. What it IS is a radically different meaning of the word than what the p-zombie nonsense uses. Chalmers' view requires stripping 'consciousness' of any consequence, while Eliezer's involves leaving the standard usage intact.

'Consciousness' in that sense refers to self-awareness or self-modeling, the attempt of a complex computational system to represent some aspects of itself, in itself. It has causal implications for the behavior of the system, can potentially be detected by an outside observer who has access to the mechanisms underlying that system, and is fully part of reality.

What Eliezer wrote is consistent with that definition of consciousness. But that is not "the standard usage". It's a useless usage. Self-representation is trivial and of no philosophical interest. The interesting philosophical question is why I have what the 99% of the world who don't use your "standard usage" mean by "consciousness". Why do I have self-awareness? - and by self-awareness, I don't mean anything I can currently describe computationally, or know how to detect the consequences of.

This is the key unsolved mystery of the universe, the only one that we have really no insight into yet. You can't call it "nonsense" when it clearly exists and clearly has no explanation or model. Unless you are a zombie, in which case what I interpret as your stance is reasonable.

There is a time to be a behaviorist, and it may be reasonable to say that we shouldn't waste our time pursuing arguments about internal states that we can't detect behaviorially, but it is Silly to claim to have dispelled the mystery merely by defining it away.

There have been too many attempts by scientists to make claims about consciousness that sound astonishing, but turn out to be merely redefinitions of "consciousness" to something trivial. Like this, for instance. Or Crick's "The Astonishing Hypothesis", or other works by neuroscientists on "consciousness" when they are actually talking about focus of attention. I have developed an intellectual allergy to such things. Going on about zombies and consciousness as if you were addressing philosophical issues, when you have redefined consciousness to mean a particular easily-comprehended computational or graph-theoretic property, falls squarely into the category of ideas that I consider Silly.

Comment author: Phil_Goetz2 06 April 2008 03:03:35AM 0 points [-]

Eliezer writes:

Consciousness, whatever it may be - a substance, a process, a name for a confusion - is not epiphenomenal; your mind can catch the inner listener in the act of listening, and say so out loud. The fact that I have typed this paragraph would at least seem to refute the idea that consciousness has no experimentally detectable consequences.

Eliezer, I'm shocked to see you write such nonsense. This only shows that you don't understand the zombie hypothesis at all. Or, you suppose that intelligence requires consciousness. This is the spiritualist, Searlian stuff you usually oppose.

The zombie hypothesis begins by asserting that I have no way of knowing whether you are conscious, no matter what you write. You of all people I expect to accept this, since you believe that you are Turing-computable. You haven't made an argument against the zombie hypothesis; you've merely asserted that it is false and called that assertion an argument.

The only thing I can imagine is that you have flipped the spiritualist argument around to its mirror image. Instead of saying that "I am conscious; Turing machines may not be conscious; therefore I am not just a Turing machine", you may be saying, "I am conscious; I am a Turing machine; therefore, all Turing machines that emit this sequence of symbols are conscious."

In response to Hand vs. Fingers
Comment author: Phil_Goetz2 30 March 2008 04:16:49AM 3 points [-]

If you want to fight the good fight, edit the section "Limits of Reductionism" in the Wikipedia article on Reductionism. It cites many examples of things that are merely complex, as evidence that reductionism is false.
