Comment author: JoshuaZ 24 November 2012 08:42:02PM 1 point [-]

Is there something that it is like to be you?

I'm not sure this question is any better formed. "What it is like to be an X" doesn't seem to have any coherent meaning when one presses people about what they actually are talking about.

If anything, the philosophical consensus is that qualia is important.

Taking qualia seriously as a question is a distinct claim from qualia actually having anything substantial to do with consciousness. I'm not sure of specific acceptance levels of qualia, but the fact is that a majority of philosophers either accept physicalism or lean towards it. So I'm not sure how to reconcile that with your claim.

Yes, behaviorism is a very attractive solution. But presumably what people want is a living conscious artificial mind and not a useful house maid in robot form. I can get that functionality right now.

On the contrary, most people don't care whether it is conscious in some deep philosophical sense. In fact, having functional AIs that are completely non-conscious has certain advantages, such as being less of an ethical problem when sending them to be destroyed (say as robot soldiers, or as probes to other planets). Moreover, the primary worry discussed on LW as far as AI is concerned is that the AI will bootstrap itself in a way that results in a very unpleasant bad singularity. Whether the AI is truly conscious or not has nothing to do with that worry.

Wikipedia? Really?

Yes, for many purposes Wikipedia is quite useful and reasonably reliable as a source. In many fields (math and chemistry for example) articles have been written by actual experts in the fields.

Did you even bother to read the page or are you just pointing to something on wikipedia and believing that constitutes an argument?

My primary intent for the link was for its use in the introduction, where it uses the fairly standard notion "that psychology should concern itself with the observable behavior of people and animals, not with unobservable events that take place in their minds." It is incidentally useful to understand that behaviorism, in most senses of the term, went away not due to arguments about things like qualia, but rather because advances in neuroscience and related areas allowed us to get much more direct access to what was going on inside. At some level, psychology is still controlled by behaviorism if one interprets that to include brain activity as behavior.

And yes, I am familiar with behaviorism in the sense that is discussed in that section. But it still isn't an attempt to explain consciousness. It is essentially an argument that psychology doesn't need to explain consciousness. These aren't the same thing.

"If it is raining, Mr. Smith will use his umbrella. It is raining, therefore Mr. Smith will use his umbrella." Is this a valid deduction? No, it isn't because consciousness is not behavior only.

So I don't follow you at all here, and it doesn't even look like there's any argument you've made here other than just some sort of conclusion. But I don't see where in the notion of "deduction" consciousness comes in. Are you using some non-standard definition of "use" or of "umbrella"?

If you are a fan of Doctor Who, is the Teselecta conscious? Is there something that it is like to be the Teselecta? My answer is no, there is nothing it is like to be a robot piloted by miniature people emulating the behavior of a real conscious person.

Don't be a blockhead. ;)

So, on LW there's a general expectation of civility, and I suspect that that general expectation doesn't go away when one punctuates with a winky-emoticon.

Comment author: noen 25 November 2012 04:41:07AM *  0 points [-]

"On the contrary, most people don't care whether it is conscious in some deep philosophical sense."

Do you mean that people don't care if they are philosophical zombies or not? I think they care very much. I also think that you're eliding the point a bit by using "deep" as a way to hand-wave the problem away. The problem of consciousness is not some arcane issue that only matters to philosophers in their ivory towers. It is difficult. It is unsolved. And, and this is important, it is a very large problem, so large that we should not spend decades exploring false leads. I believe strong AI proponents have wasted 40 years of time and energy pursuing an ill-advised research program. Resources that could have been better spent in more productive ways.

That's why I think this is so important. You have to get things right, get your basic "vector" right, or you'll get lost. Because the problem is so large, once you make a mistake about what it is you are doing, you're done for. The "brain stabbers" are in my opinion headed in the right direction. The "let's throw more parallel processors connected in novel topologies at it" crowd are not.

"Moreover, the primary worry discussed on LW as far as AI is concerned is that the AI will bootstrap itself in a way that results in a very unpleasant bad singularity."

Sounds like more magical thinking if you ask me. Is bootstrapping a real phenomenon? In the real world is there any physical process that arises out of nothing?

"And yes, I am familiar with behaviorism in the sense that is discussed in that section. But it still isn't an attempt to explain consciousness."

Yes it is. In every lecture I have heard recounting the history of the philosophy of mind, the behaviorism of the '50s and early '60s is covered, and the main arguments for and against it as an explanation of consciousness are given. This is just part of the standard literature. I know that cognitive/behavioral therapeutic models are in wide use and very successful, but that is simply beside the point here.

"So I don't follow you at all here, and it doesn't even look like there's any argument you've made here other than just some sort of conclusion."

Are you kidding!??? It was nothing BUT argument. Here, let me make it more explicit.

Premise 1: "If it is raining, Mr. Smith will use his umbrella."
Premise 2: "It is raining."
Conclusion: "Therefore Mr. Smith will use his umbrella."

That is a behaviorist explanation for consciousness. It is logically valid but still fails, because we all know that Mr. Smith just might decide not to use his umbrella. Maybe that day he decides he likes getting wet. You cannot deduce intent from behavior. And if you cannot deduce intent from behavior, then behavior cannot constitute intentionality.
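The underdetermination point can be put in code. A toy sketch (the function names and the "habit" scenario are mine, purely for illustration): two agents with different internal states produce identical observable behavior on the same input, so the observation cannot recover the intent.

```python
# Two hypothetical Mr. Smiths with different internal states ("intents")
# that nonetheless produce the same observable behavior in the rain.

def habitual_smith(raining):
    # Carries the umbrella out of habit, rain or shine.
    return "uses umbrella"

def rain_averse_smith(raining):
    # Uses the umbrella only because it is raining.
    return "uses umbrella" if raining else "no umbrella"

# On a rainy day, observation alone cannot tell the two apart:
print(habitual_smith(True) == rain_averse_smith(True))   # True
# Only a different circumstance reveals the difference in internal state:
print(habitual_smith(False) == rain_averse_smith(False)) # False
```

The same outward behavior is compatible with distinct inner states, which is the sense in which behavior alone cannot constitute intentionality.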

"So, on LW there's a general expectation of civility, and I suspect that that general expectation doesn't go away when one punctuates with a winky-emoticon."

It's a joke, hon. I thought you would get the reference to Ned Block's counterargument to behaviorism. It shows how an unconscious machine could pass the Turing test. I'm pretty sure that Steven Moffat must have been aware of it when he created the Teselecta.

Suppose we build a robot, and instead of a robot brain we put in a radio receiver. The robot can look and move just like any human. Suppose then that we take the nation of China and give everyone a transceiver and a rule they must follow: each individual, upon receiving input state S1, will output state S2. They are all connected in a functional flowchart that perfectly replicates a human brain. The robot then looks, moves, and above all talks just like any human being. It passes the Turing test.

Is "Blockhead" (the name affectionately given to this robot) conscious?

No, it is not. A non-intelligent machine passes the behaviorist Turing test for an intelligent AI. Therefore behaviorism cannot explain consciousness, and an intelligent AI could never be constructed from a database of behaviors. (Which is essentially what all attempts at computer AI consist of: a database and a set of rules for accessing it.)

Comment author: JoshuaZ 23 November 2012 05:30:12PM *  0 points [-]

Is there something that it is like to be Siri?

I'm not sure what you mean by this question. Is this a variant of what it is like to be a bat? There's a decent argument that such questions don't make sense. But this doesn't matter much: whether some AI has qualia or not doesn't change any of its external behavior, so for most purposes, like existential risk, it doesn't matter.

I doubt it. I think it will always be apparent to people that they are dealing with a software tool that makes it easier for

This and most of the rest of your post are assertions, not arguments.

If behaviorism has been rejected as an explanation for consciousness how can one appeal to behaviorism as a model for future AI?

First, what do you mean by behaviorism in this context? Behaviorism as that word is classically defined isn't an attempt to explain consciousness. It doesn't care about consciousness at all.

Comment author: noen 24 November 2012 03:30:48PM -1 points [-]

"Is this a variant of what it is like to be a bat?"

Is there something that it is like to be you? There are also decent arguments that qualia does matter. It is hardly a settled matter. If anything, the philosophical consensus is that qualia is important.

"Whether some AI has qualia or not doesn't change any of the external behavior,"

Yes, behaviorism is a very attractive solution. But presumably what people want is a living conscious artificial mind and not a useful house maid in robot form. I can get that functionality right now.

If I write a program that allows my PC to speak in perfect English and in a perfectly human voice, can my computer talk to me? Can it say hello? Yes, it can. Can it greet me with a hello? No, it cannot, because it cannot intend to say hello.

"Behaviorism as that word is classically defined isn't an attempt to explain consciousness."

Wikipedia? Really? Did you even bother to read the page, or are you just pointing to something on Wikipedia and believing that constitutes an argument? Look at section 5, "Behaviorism in philosophy". Read that and follow the link to the Philosophy of Mind article. Read that. You will discover that behaviorism was at one time thought to be a valid theory of mind: that all we needed to do to explain human behavior was to describe human behavior.

"If it is raining, Mr. Smith will use his umbrella. It is raining, therefore Mr. Smith will use his umbrella." Is this a valid deduction? No, it isn't because consciousness is not behavior only.

If you are a fan of Doctor Who, is the Teselecta conscious? Is there something that it is like to be the Teselecta? My answer is no, there is nothing it is like to be a robot piloted by miniature people emulating the behavior of a real conscious person.

Don't be a blockhead. ;)

Comment author: fubarobfusco 24 November 2012 05:48:53AM 2 points [-]

What on earth is "semantic content"?

Comment author: noen 24 November 2012 02:25:55PM 0 points [-]

Meaning.

The words on this page mean things. They are intended to refer to other things.

Comment author: noen 23 November 2012 04:17:51PM -2 points [-]

You're getting old. The long term prognosis is that the condition is fatal. ;)

Comment author: JoshuaZ 21 November 2012 09:03:38PM *  3 points [-]

Since the Chinese Room argument does refute the strong AI hypothesis no AI will be possible on current hardware. An artificial brain that duplicates the causal functioning of an organic brain is necessary before an AI can be constructed.

There are a lot of objections to the Chinese room, but in this context the primary issue is that the Chinese room doesn't matter: even if the AI isn't conscious in some deep philosophical sense, if it has all the same results, then for humans the dangers and promises of strong AI are identical.

I further predict that AI researchers will continue to predict imminent AI in direct proportion to the research grant dollars they are able to attract.

"Continue" implies this is currently the case. Do you have evidence for this? My impression is that most AI research is going into practical machine learning, which is currently being used for many real-world applications. Many people in the machine learning world state that any form of general AI is extremely unlikely to happen soon, so what evidence for this claimed proportion is there?

Corollary: A stable nuclear fusion reactor will be built before a truly conscious artificial mind is.

I don't see how this is a corollary. If you mean to state it as an example of a comparison to what sort of technology would be needed, that might make some sense. However, we actually already have stable fusion reactors. Examples include tabletop designs that can be made by hobbyists. Do you mean something like a fusion reactor that produces more useful energy than is put in?

Comment author: noen 22 November 2012 07:06:40PM 0 points [-]

Is there something that it is like to be Siri? Still, Siri is a tool, and potentially a powerful one. But I feel no need to be afraid of Siri as Siri, any more than I am afraid of nuclear weapons in themselves. What frightens me is how people might misuse them, not the tools themselves. Focusing on the tools, then, does not address the root issue, which is human nature and what social structures we have in place to make sure some clown doesn't build a nuke in his basement.

Did ELIZA present the "dangers and promises" of AI? Weizenbaum's secretary thought so. She thought it passed the Turing test. Did it? Will future AI tools really be indistinguishable from living beings? I doubt it. I think it will always be apparent to people that they are dealing with a software tool that makes it easier for them to do something.

If behaviorism has been rejected as an explanation for consciousness how can one appeal to behaviorism as a model for future AI?

--

"so what evidence for this claimed proportion is there?"

Oh, I was just being flippant. It is a law of the universe that if there is a joke to be made I must at least try for it. ;)

"I don't see how this is a corollary. "

Yeah, also not serious. I meant only to mock the eternal claim of fusion proponents that it is always "just around the corner". I remember as a child in the '70s reading breathless articles in Popular Science about the imminent breakthroughs in nuclear fusion, "any day now". Just like the AI researchers of that day. And 40 years later little has changed.

I do not mistake Google translate for a conscious entity. Neither does anyone else. I can see no reason to believe that will change in the next 40 years.

"Examples include tabletop designs that can be made by hobbyists."

Well now, that was cool. But yeah, no net increase in energy. Still, good for him.

Comment author: Cyan 09 November 2012 08:22:50PM 1 point [-]

Since the sun going nova is not a random event, strict frequentists deny that there is a probability to associate with it.

Comment author: noen 09 November 2012 11:56:30PM 0 points [-]

Among candidate stars for going nova, I would think you could treat it as a random event. But Sol is not a candidate and so doesn't even make it into the sample set. So it's a very badly constructed setup. It's like looking for a needle in 200 million haystacks but restricting yourself only to those haystacks you already know it cannot be in. Or do I have that wrong?

Comment author: gwern 09 November 2012 06:32:35PM 1 point [-]

I don't think one would simply ignore the dice, and what data is the frequentist drawing upon in the comic which specifies the null?

Comment author: noen 09 November 2012 08:16:41PM *  -2 points [-]

How about "the probability of our sun going nova is zero and 36 times zero is still zero"?

Although... continuing with the XKCD theme if you divide by zero perhaps that would increase the odds. ;)

Comment author: gwern 09 November 2012 04:15:05PM 7 points [-]

No, it's not fair. Given the setup, the null hypothesis would be, I think, 'neither the Sun has exploded nor the dice come up 6', and so when the detector goes off we reject the 'neither x nor y' in favor of 'x or y' - and I think the Bayesian would agree too that 'either the Sun has exploded or the dice came up 6'!

Comment author: noen 09 November 2012 06:10:25PM 0 points [-]

I think the null hypothesis is "the neutrino detector is lying", because the question we are most interested in is whether it is correctly telling us the sun has gone nova. If H0 is the null hypothesis, µ1 is the chance of a genuine neutrino event, and µ2 is the chance of double sixes, then H0 is that µ1 - µ2 = 0. Since the odds of two dice coming up sixes are vastly larger than the odds of the sun going nova in our lifetime, the test is not fair.
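The unfairness can be shown numerically. A quick sketch (the prior on the sun going nova tonight is an illustrative made-up number, not a measured quantity): the frequentist p-value clears the usual 0.05 threshold, while the Bayesian posterior stays astronomically small because the prior dominates.

```python
# Compare the frequentist p-value with the Bayesian posterior for the
# XKCD nova-detector setup: the detector lies iff both dice show six.

p_lie = 1 / 36            # probability of double sixes
prior_nova = 1e-15        # assumed, purely illustrative prior P(nova tonight)

# Likelihoods of the detector saying "yes, the sun exploded":
p_yes_given_nova = 1 - p_lie      # tells the truth unless the dice lie
p_yes_given_no_nova = p_lie       # lies only on double sixes

# Bayes' theorem:
posterior = (p_yes_given_nova * prior_nova) / (
    p_yes_given_nova * prior_nova + p_yes_given_no_nova * (1 - prior_nova)
)

print(f"p-value (frequentist): {p_lie:.4f}")      # ~0.0278, below 0.05
print(f"posterior P(nova):     {posterior:.2e}")  # still vanishingly small
```

The p-value rejects the null at the 5% level, but the posterior probability that the sun actually exploded remains negligible, which is the sense in which the test is "not fair" to the question being asked.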

Comment author: [deleted] 09 November 2012 02:56:20PM 5 points [-]

Clear writing, clear thinking, much appreciated.

An aside: the Venus flytrap plant has trigger fibers in its hinged petals. Touching one will not close it. Touching more than one with a long delay between touches will not close it. Touching the fibers two or more times in quick succession will close it. This plant moves as if it can count and is aware of time. Learning that caused me to re-think what it means to count, as did your essay. Except the plant-fact is interesting while your essay is useful.

In response to comment by [deleted] on On counting and addition
Comment author: noen 09 November 2012 05:43:21PM 0 points [-]

Plants do not count and have no awareness of time, or of anything at all. The exact method by which Venus flytraps activate is not fully understood, but it seems hard to me to attribute to them the ability to count. That kind of teleological explanation is something we are cognitively biased to give, but it fails to be explanatory.
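The point that no counting or awareness is needed can be illustrated mechanistically. A toy sketch (the 20-second window is an assumed illustrative number, not measured plant physiology): a simple threshold over touch timings reproduces the "counts to two and keeps time" behavior.

```python
# A dumb mechanism that reproduces the flytrap's apparent "counting":
# the trap closes iff two touches land within a fixed time window,
# as a decaying stimulus crossing a threshold would. No representation
# of number or time is involved, only physics-like accumulation.

WINDOW = 20.0  # seconds; assumed illustrative value

def trap_closes(touch_times):
    """True iff some two touches fall within WINDOW seconds of each other."""
    times = sorted(touch_times)
    return any(later - earlier <= WINDOW
               for earlier, later in zip(times, times[1:]))

print(trap_closes([0.0]))        # False: a single touch
print(trap_closes([0.0, 60.0]))  # False: touches too far apart
print(trap_closes([0.0, 5.0]))   # True: two touches in quick succession
```

The behavior looks teleological from the outside, but the mechanism is a threshold, not a counter, which is the distinction being drawn here.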

Sunflowers do not turn their heads to face the sun because they want to catch more sunlight. They turn towards light because the cells that are in shadow receive more auxin, which in turn stimulates the elongation of the cell walls, causing the plant to grow in the opposite direction and towards the light. Natural selection will tend to favor those individuals that gather more light over those that do not. There is no teleology involved.

Comment author: noen 09 November 2012 03:37:58PM *  -1 points [-]

I generally agree with point (1), but the point is irrelevant. Counting isn't what makes 2 + 2 = 4 true, although that is how we all learn to do math: by counting and memorizing addition and multiplication tables. I owe it all to my 3rd grade teacher. ;)

On point (2): "on our macro scale of reality, on the scale of things we perceive with our senses, discrete, separate objects are a feature of the map, not the territory; they exist in your mind, not the reality. In the reality, there's just a lot of atoms everywhere"

There are no atoms at the macro scale. Or, if you like, atoms are everywhere. A chair is an "atom" of my dining room furniture set, and I can choose to count five items (four chairs and a table) or one item (one dining room set). How I choose to cut up the world will determine which answer I get. But I am very confident that rocks and trees and universities and constitutions do not exist in my mind. They have an objective ontology that is independent of my personal subjective needs, interests and desires, which is what it means for something to be real.

"Was 2+2=4 before humans were around to invent that equation?"

The statement "2 + 2 = 4" is absolutely true because it is true in all possible worlds. Humans did not invent the equation; we invented the symbols and means of expressing it, but the relation expressed in the words is an objective feature of the world that is true regardless of our opinions about it. Scientific facts have the word-to-world direction of fit. That is, they are true only to the extent they correspond to the world.
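The independence of the fact from the symbols chosen to express it can be illustrated in a proof assistant: once the numerals and addition are defined, the equation holds by computation alone, with no empirical input. A sketch in Lean:

```lean
-- "2 + 2 = 4" reduces to a definitional equality of the underlying
-- natural-number construction; rfl (reflexivity) suffices as the proof.
example : 2 + 2 = 4 := rfl
```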

"we can certainly speak of single photons"

Only if we choose to observe them as particles. Photons have been observed experimentally to behave as both particles and waves. "The measurement apparatus detected strong nonlocality, which certified that the photon behaved simultaneously as a wave and a particle in our experiment. This represents a strong refutation of models in which the photon is either a wave or a particle." This presents a significant challenge to theories on which a photon must be one or the other.
