Comment author: JoshuaZ 24 November 2012 08:42:02PM 1 point

Is there something that it is like to be you?

I'm not sure this question is any better formed. "What it is like to be an X" doesn't seem to have any coherent meaning when one presses people about what they actually are talking about.

If anything, the philosophical consensus is that qualia are important.

Taking qualia seriously as a question is a distinct claim from qualia actually having anything substantial to do with consciousness. I'm not sure of the specific acceptance levels of qualia, but the fact is that a majority of philosophers either accept physicalism or lean toward it. So I'm not sure how to reconcile that with your claim.

Yes, behaviorism is a very attractive solution. But presumably what people want is a living conscious artificial mind and not a useful housemaid in robot form. I can get that functionality right now.

On the contrary, most people don't care whether it is conscious in some deep philosophical sense. In fact, functional AIs that are completely non-conscious have certain advantages, such as posing less of an ethical problem when sending them off to be destroyed (say as robot soldiers, or as probes to other planets). Moreover, the primary worry discussed on LW as far as AI is concerned is that the AI will bootstrap itself in a way that results in a very unpleasant singularity. Whether the AI is truly conscious or not has nothing to do with that worry.

Wikipedia? Really?

Yes, for many purposes Wikipedia is quite useful and reasonably reliable as a source. In many fields (math and chemistry, for example) articles have been written by actual experts in those fields.

Did you even bother to read the page, or are you just pointing to something on Wikipedia and believing that constitutes an argument?

My primary intent for the link was its use in the introduction, where it gives the fairly standard notion "that psychology should concern itself with the observable behavior of people and animals, not with unobservable events that take place in their minds." It is incidentally useful for understanding that behaviorism, in most senses of the term, went away not because of arguments about things like qualia, but because advances in neuroscience and related areas allowed us much more direct access to what was going on inside. At some level, psychology is still governed by behaviorism, if one interprets behavior to include brain activity.

And yes, I am familiar with behaviorism in the sense that is discussed in that section. But it still isn't an attempt to explain consciousness. It is essentially an argument that psychology doesn't need to explain consciousness. These aren't the same thing.

"If it is raining, Mr. Smith will use his umbrella. It is raining, therefore Mr. Smith will use his umbrella." Is this a valid deduction? No, it isn't because consciousness is not behavior only.

So I don't follow you at all here, and it doesn't even look like there's any argument you've made here other than just some sort of conclusion. I don't see where consciousness enters into the notion of "deduction". Are you using some non-standard definition of "use" or of "umbrella"?

If you are a fan of Doctor Who, is the Teselecta conscious? Is there something that it is like to be the Teselecta? My answer is no, there is nothing it is like to be a robot piloted by miniature people emulating the behavior of a real conscious person.

Don't be a blockhead. ;)

So, on LW there's a general expectation of civility, and I suspect that that general expectation doesn't go away when one punctuates with a winky-emoticon.

Comment author: noen 25 November 2012 04:41:07AM * 0 points

"On the contrary, most people don't care whether it is conscious in some deep philosophical sense."

Do you mean that people don't care if they are philosophical zombies or not? I think they care very much. I also think that you're eliding the point a bit by using "deep" as a way to hand-wave the problem away. The problem of consciousness is not some arcane issue that only matters to philosophers in their ivory towers. It is difficult. It is unsolved. And, and this is important, it is a very large problem, so large that we should not spend decades exploring false leads. I believe strong AI proponents have wasted 40 years of time and energy pursuing an ill-advised research program, resources that could have been better spent in more productive ways.

That's why I think this is so important. You have to get things right, get your basic "vector" right, otherwise you'll get lost; the problem is so large that once you make a mistake about what it is you are doing, you're done for. The "brain stabbers" are, in my opinion, headed in the right direction. The "let's throw more parallel processors connected in novel topologies at it" crowd are not.

"Moreover, the primary worry discussed on LW as far as AI is concerned is that the AI will bootstrap itself in a way that results in a very unpleasant bad singularity."

Sounds like more magical thinking if you ask me. Is bootstrapping a real phenomenon? In the real world, is there any physical process that arises out of nothing?

"And yes, I am familiar with behaviorism in the sense that is discussed in that section. But it still isn't an attempt to explain consciousness."

Yes, it is. In every lecture I have heard that recounts the history of the philosophy of mind, the behaviorism of the '50s and early '60s is covered along with the main arguments for and against it as an explanation of consciousness. This is just part of the standard literature. I know that cognitive/behavioral therapeutic models are in wide use and very successful, but that is simply beside the point here.

"So I don't follow you at all here, and it doesn't even look like there's any argument you've made here other than just some sort of conclusion."

Are you kidding!??? It was nothing BUT argument. Here, let me make it more explicit.

Premise 1 "If it is raining, Mr. Smith will use his umbrella." Premise 2 "It is raining" Conclusion "therefore Mr. Smith will use his umbrella."

That is a behaviorist explanation of consciousness. It is logically valid but still fails, because we all know that Mr. Smith just might decide not to use his umbrella. Maybe that day he decides he likes getting wet. You cannot deduce intent from behavior. If you cannot deduce intent from behavior, then behavior cannot constitute intentionality.
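To make the structure fully explicit, here is a schematic rendering (mine, not anything from the behaviorist literature; the predicate names are made up):

```latex
% P: ``It is raining.''   Q: ``Mr. Smith uses his umbrella.''
% The inference itself is modus ponens, which is valid:
%   P -> Q,  P  |-  Q
% But what the behaviorist needs is the conditional as a law:
\[
  \forall t \,\bigl(\mathrm{Rain}(t) \rightarrow \mathrm{UsesUmbrella}(t)\bigr)
\]
% A single day on which Smith decides he likes getting wet falsifies
% this universal premise: the deduction is valid but unsound.
```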

"So, on LW there's a general expectation of civility, and I suspect that that general expectation doesn't go away when one punctuates with a winky-emoticon."

It's a joke, hun. I thought you would get the reference to Ned Block's counterargument to behaviorism. It shows how an unconscious machine could pass the Turing test. I'm pretty sure that Steven Moffat was aware of it when he created the Teselecta.

Suppose we build a robot, and instead of a robot brain we put in a radio receiver. The robot can look and move just like any human. Suppose then that we take the nation of China and give everyone a transceiver and a rule they must follow: for each individual, if they receive state S1 as input, they then output state S2. They are all connected in a functional flowchart that perfectly replicates a human brain. The robot then looks, moves, and above all talks just like any human being. It passes the Turing test.

Is "Blockhead" (the name affectionately given to this robot) conscious?

No, it is not. A non-intelligent machine passes the behaviorist Turing test for an intelligent AI. Therefore behaviorism cannot explain consciousness, and an intelligent AI could never be constructed from a database of behaviors. (Which is essentially what all attempts at computer AI consist of: a database and a set of rules for accessing it.)
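To make "a database and a set of rules" concrete, here is a toy sketch (entirely my own illustration, with made-up table entries; Block's thought experiment assumes a table covering every sensible conversation up to some fixed length):

```python
# Toy "Blockhead": a chatbot that is nothing but a finite lookup
# table from the conversation-so-far to a canned reply.

TABLE = {
    ("Hello.",): "Hello! Nice weather, isn't it?",
    ("Hello.", "Is it raining?"): "Yes. Better take an umbrella.",
}

def reply(user_lines):
    # Pure retrieval: no model of rain, umbrellas, or intent anywhere.
    return TABLE.get(tuple(user_lines), "Tell me more.")

lines = []
for utterance in ["Hello.", "Is it raining?"]:
    lines.append(utterance)
    print(reply(lines))
```

For any exchange the table happens to cover, the behavior is indistinguishable from that of a speaker who understands; there is simply nobody home.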

Comment author: JoshuaZ 23 November 2012 05:30:12PM * 0 points

Is there something that it is like to be Siri?

I'm not sure what you mean by this question. Is this a variant of what it is like to be a bat? There's a decent argument that such questions don't make sense. But this doesn't matter much: Whether some AI has qualia or not doesn't change any of the external behavior, so for most purposes, like existential risk, it doesn't matter.

I doubt it. I think it will always be apparent to people that they are dealing with a software tool that makes it easier for

This and most of the rest of your post are assertions, not arguments.

If behaviorism has been rejected as an explanation for consciousness, how can one appeal to behaviorism as a model for future AI?

First, what do you mean by behaviorism in this context? Behaviorism as that word is classically defined isn't an attempt to explain consciousness. It doesn't care about consciousness at all.

Comment author: noen 24 November 2012 03:30:48PM -1 points

"Is this a variant of what it is like to be a bat?"

Is there something that it is like to be you? There are also decent arguments that qualia do matter. It is hardly a settled question. If anything, the philosophical consensus is that qualia are important.

"Whether some AI has qualia or not doesn't change any of the external behavior,"

Yes, behaviorism is a very attractive solution. But presumably what people want is a living conscious artificial mind and not a useful housemaid in robot form. I can get that functionality right now.

If I write a program that allows my PC to speak in perfect English and in a perfectly human voice, can my computer talk to me? Can it say hello? Yes, it can. Can it greet me? No, it cannot, because it cannot intend to say hello.

"Behaviorism as that word is classically defined isn't an attempt to explain consciousness."

Wikipedia? Really? Did you even bother to read the page, or are you just pointing to something on Wikipedia and believing that constitutes an argument? Look at section 5, "Behaviorism in philosophy". Read that and follow the link to the Philosophy of Mind article. Read that. You will discover that behaviorism was at one time thought to be a valid theory of mind: that all we needed to do to explain the mind was to describe human behavior.

"If it is raining, Mr. Smith will use his umbrella. It is raining, therefore Mr. Smith will use his umbrella." Is this a valid deduction? No, it isn't because consciousness is not behavior only.

If you are a fan of Doctor Who, is the Teselecta conscious? Is there something that it is like to be the Teselecta? My answer is no, there is nothing it is like to be a robot piloted by miniature people emulating the behavior of a real conscious person.

Don't be a blockhead. ;)

Comment author: loup-vaillant 24 November 2012 08:30:44AM 1 point

I second fubarobfusco. While you could say programs are pure syntax, they are executed on real machines and have real effects. If those capabilities don't count as semantic content, I don't know what does.

So, I still don't know what makes you so sure consciousness is impossible on an emulator. (Leaving aside the fact that using "strong AI" to talk about consciousness, instead of capabilities, is a bit strange.)

Comment author: noen 24 November 2012 02:50:55PM * -3 points

That is correct, you don't know what semantic content is.

"I still don't know what makes you so sure conciousness is impossible on an emulator."

For the same reason that I know simulated fire will not burn anything. In order for us to create an artificial mind, which certainly must be possible, we must duplicate the causal relations that exist in real consciousnesses.

Let us imagine that you go to your doctor and he says, "Your heart is shot. We need to replace it. Lucky for you, we have a miniature supercomputer we can stick into your chest that can simulate the pumping action of a real heart down to the atomic level. Every atom, every material, every gasket of a real pump is precisely emulated to an arbitrary degree of accuracy."

"Sign here."

Do you sign the consent form?

Simulation is not duplication. In order to duplicate the causal effects of real-world processes, it is not enough to represent them in symbolic notation, which is all a program is. To duplicate the action of a lever on a mass, it is not enough to represent that action to yourself on paper or in a computer. You have to actually build a physical lever in the physical world.
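A trivial sketch of the lever point (purely my own illustration, with made-up numbers): a program can compute everything about a lever while exerting no force on anything.

```python
# A "simulated lever": computes the statics of a real lever, but the
# computation itself moves no mass and exerts no force on anything.

def balancing_force(load_n, load_arm_m, effort_arm_m):
    """Effort needed to balance the load, by the law of the lever:
    F_load * d_load = F_effort * d_effort."""
    return load_n * load_arm_m / effort_arm_m

# A 500 N load 0.5 m from the fulcrum, with effort applied 2.0 m out:
print(balancing_force(500.0, 0.5, 2.0), "N")  # -> 125.0 N

# The causal relation is represented in symbols; nothing gets lifted.
```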

In order to duplicate conscious minds, which certainly must be due to the activity of real brains, you must duplicate those causal relations that allow real brains to give rise to the real world physical phenomenon we call consciousness. A representation of a brain is no more a real brain than a representation of a pump will ever pump a single drop of fluid.

None of this means we might not someday build an artificial brain that gives rise to an artificial conscious mind. But it won't be done on a von Neumann machine. It will be done by creating real-world objects that have the same causal functions that real neurons or other structures in real brains do.

How could it be any other way?

Comment author: fubarobfusco 24 November 2012 05:48:53AM 2 points

What on earth is "semantic content"?

Comment author: noen 24 November 2012 02:25:55PM 0 points

Meaning.

The words on this page mean things. They are intended to refer to other things.

Comment author: JoshuaZ 23 November 2012 07:02:30PM 0 points

When the telegraph was invented people thought the mind was like the telegraph because...... magic is why.

Because the telegraph analogy is actually a pretty decent analogy.

Building more and faster wires and better telegraph stations and connecting them in advanced topologies will not change the fact that you are living in a fantasy world.

What makes you think a sufficiently large number of organized telegraph lines won't act like a brain? Note that whether the number may be too large to actually fit on Earth is beside the point.

Comment author: noen 24 November 2012 05:29:41AM -3 points

"Because the telegraph analogy is actually a pretty decent analogy."

No, it isn't. Constructing analogies is for poets and fiction writers. Science does not construct analogies. The force on an accelerating mass isn't analogous to F=ma, it IS F=ma. If what you said were true, that neurons are like telegraph stations and their dendrites the wires, then it could not be true that neurons can communicate without a direct connection or "wire" between them. Neurons can communicate without any synaptic connection between them (see "Neurons Talk Without Synapses"). Therefore the analogy is false.

"What makes you think a sufficiently large number of organized telegraph lines won't act like a brain?"

Because that is an example of magical thinking. It is not based on a functional understanding of the phenomenon. "If I just pour more of chemical A into solution B I will get a bigger and better reaction." We are strongly attracted to thinking like that. It's probably why it took us thousands of years to really get how to do science properly.

Comment author: loup-vaillant 23 November 2012 09:42:36PM 8 points

Strong AI is refuted because syntax is insufficient for semantics.

Where the heck does that come from? What do you mean by "strong AI is refuted", "syntax is insufficient for semantics", and how does the former follow from the latter?

Comment author: noen 24 November 2012 05:08:34AM -3 points

"What do you mean by "strong AI is refuted""

The strong AI hypothesis is that consciousness is the software running on the hardware of the brain. Therefore one does not need to know or understand how brains actually work to construct a living conscious mind. Thus any system that implements the right computer program with the right inputs and outputs has cognition in exactly the same literal sense that human beings have understanding, thought, and memory. It was the belief of strong AI proponents such as Marvin Minsky at MIT and others that they were literally creating minds when writing their programs. They felt no need to stoop so low as to poke around in actual brains and get their hands dirty.

Computers are syntactical machines. The programs they execute are pure syntax and have no semantic content. Meaning is assigned; it is not intrinsic to symbolic logic. That is its strength. Since (1) programs are pure syntax and have no semantic content, (2) minds do have semantic content, and (3) syntax is neither sufficient for nor constitutive of semantics, it must follow that programs are not by themselves constitutive of, nor sufficient for, minds. The strong AI hypothesis is false.
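Laid out as a bare schema (my own rendering of the argument as just stated; the predicate names are mine):

```latex
% P1: every program is purely syntactic
% P2: syntax alone is never sufficient for semantics
% P3: every mind has semantic content
% C:  no program alone is sufficient for a mind
%     (the step to C uses P3: anything sufficient for a mind
%      would have to be sufficient for semantic content)
\begin{align*}
  &\forall p\;(\mathit{Prog}(p) \rightarrow \mathit{Syn}(p)) \\
  &\forall x\;(\mathit{Syn}(x) \rightarrow \lnot\mathit{SuffSem}(x)) \\
  &\forall m\;(\mathit{Mind}(m) \rightarrow \mathit{Sem}(m)) \\
  \therefore\;\; &\forall p\;(\mathit{Prog}(p) \rightarrow \lnot\mathit{SuffMind}(p))
\end{align*}
```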

Which means that IBM is wasting time, energy, and money. But... perhaps their efforts will result in spin-off technology, so not all is lost.

Comment author: mapnoterritory 22 November 2012 09:19:01AM 1 point

I've actually never heard about non-von Neumann architectures. Does anybody have a tip on a good source for this? Especially on how this relates to biological brain architectures? Thank you!

Comment author: noen 23 November 2012 05:11:33PM * -5 points

Parallelism changes absolutely nothing other than speed of execution.

Strong AI is refuted because syntax is insufficient for semantics. Allowing the syntax to execute in parallel will not alter this, because the refutation of strong AI attacks the logical basis of the strong AI hypothesis itself. If you are trying to build a television with tinker-toys, it does not improve your chances to substitute higher-quality tinker-toys for the older wooden ones. You will still never get a functional TV.

They do not actually have a physical non-von Neumann architecture. They are simulating a brain on simulated neurosynaptic cores on a simulated non-von Neumann architecture on a Blue Gene/Q supercomputer, which consists of 64-bit PowerPC A2 processors connected in a toroidal network. No wonder it's slow.

They are trying to reach "True North" and believe they are headed in the right direction, but they do not know whether the Compass they have built actually measures what they believe it measures. Nor do they know whether, once they get there, True North will do what they want it to do. They do not even know how the thing they want to build does what it does, but they believe that faster computers will overcome their lack of knowledge of how actual minds arise out of actual brains, whose construction they do not understand. Nor do they know how the actual neurons of which actual brains are constructed function in real life.

But they're published. So... you know... there's that.

If you cannot simulate roundworms, do not know how neurons actually work, and do not even know how memories are stored in natural brains, you are in no danger of building Colossus.

People are highly susceptible to magical thinking. When the telegraph was invented people thought the mind was like the telegraph because...... magic is why. Building more and faster wires and better telegraph stations and connecting them in advanced topologies will not change the fact that you are living in a fantasy world.

Comment author: noen 23 November 2012 04:27:52PM -4 points

We have no idea how neurons actually work.

We have no idea how brains actually work.

We have no idea what consciousness is, how it works, or even if it does exist.

If you do not know how a radio works, or how a transistor works, or what the knobs and dials actually do, and cannot even build a simulation of how one might work, you are in no danger of building the ultimate radio to rule all others.

Having a bad idea does not make you closer to having a good idea.

Comment author: noen 23 November 2012 04:17:51PM -2 points

You're getting old. The long term prognosis is that the condition is fatal. ;)

Comment author: Nornagest 22 November 2012 07:02:11PM * 9 points

In Chris Mooney's book "The Republican Brain" he makes a good case based on recent studies for why we should think of the totalitarianism of the former USSR as a right wing phenomenon. [...] Conservative personalities then acclimate themselves to the resulting bureaucracy and seek to freeze it in place. Then, being authoritarians, they accumulate power and use it as authoritarians always do.

It'd be hard for me to overstate my skepticism for the genre of popular political science books charging that their authors' enemies are innately evil. I haven't read Mooney's book, though I have read quite a few articles with a similar thesis; if you're presenting his analysis accurately, though, it seems pretty tortured.

Mao was central to his revolution from its inception, and if you've read anything of his it's obvious that he was a true believer. The democides he's been charged with may have worked as consolidations of power, but they certainly weren't attempts to minimize social or political change; indeed, most of the deaths during the Great Leap Forward can be laid at the feet of novel but poorly implemented agricultural organization. (This may also be true for the Holodomor and other instances of mass famine in the Soviet Union.) Stalin's a more ambiguous case; many of his worst excesses do seem to have served a personal power grab, and he was a relatively minor figure within Lenin's initial party organization, but if anything he seems too ambitious to be branded a Marxist conservative. His purges don't fit well with a desire to safeguard the Leninist bureaucracy; on the contrary, they pretty much destroyed it. He was of course an authoritarian in the sense of seeking to maximize personal and state power, but the "Marxist conservative" label seems to fit Khrushchev and others of his generation much better.

In any case, if we're using "liberal" and "conservative" strictly to gauge desire for social change, then by the same token we have to decouple it from authoritarianism or adherence to positions generally thought of as right-wing. Indeed, in this narrow sense Hitler, Mussolini, and others (though perhaps not Franco) might be considered liberals: fascist (in the grandparent's sense) ideology is quite big on cultural traditionalism, but even more central is its concept of social transformation based on extreme nationalism, shared political goals, and economic corporatism. The nostalgia in its rhetoric has to be understood in that context (and, in Hitler's case, in the context of a sense of national humiliation following WWI).

Comment author: noen 23 November 2012 03:57:16PM -5 points

"It'd be hard for me to overstate my skepticism for the genre of popular political science books charging that their authors' enemies are innately evil. I haven't read Mooney's book"

It is obvious you have not read it, because he makes no such claim, nor have I. In fact, he ends the book with a newfound respect for conservatives. Loyalty, personal responsibility, and being willing to set aside one's own desires for the good of the group are all admirable qualities. I myself do not despise conservatives in themselves. I do despise the hucksters and grifters who promote pseudoscience and conspiracy theories in order to enrich themselves. Those people find that a significant percentage of the population is easily manipulated by preying on their fears and prejudices. That percentage is overrepresented by conservative personality types, and people with that kind of temperament tend to find political conservatism more to their liking. I have met Democrats with conservative personalities, but not many. Civil Rights legislation in the '60s was passed primarily by Republicans with liberal personalities. The reactionary types were in the Democratic Party.

Conservatives are not innately evil. No one is. All people are susceptible to certain cognitive biases, some people more than others. Some other people have found they can manipulate those biases to their advantage. It is easy to do: you trigger the fear response; as a result, one's rational centers literally shut down and areas of the brain associated with survival are activated.

"if we're using "liberal" and "conservative" strictly to gauge desire for social change"

No, that's not how it is used. Conservative means "resistant to change" and liberal means "novelty seeking". Political conservatives need not all be authoritarians, but virtually all authoritarians would self-select into conservative political organizations.

"Indeed, in this narrow sense Hitler, Mussolini, and others (though perhaps not Franco) might be considered liberals"

That's absurd. Liberalism is not defined as a desire for social change. The authoritarian or conservative mindset would also seek social change because they wish to return to what they perceive as a traditional model for society.
