
One last roll of the dice

Post author: Mitchell_Porter 03 February 2012 01:59AM

Previous articles: Personal research update, Does functionalism imply dualism?, State your physical account of experienced color.

 

In phenomenology, there is a name for the world of experience, the "lifeworld". The lifeworld is the place where you exist, where time flows, and where things are actually green. One of the themes of the later work of Edmund Husserl is that a scientific image of the real world has been constructed, on the basis of which it is denied that various phenomena of the lifeworld exist anywhere, at any level of reality.

When I asked, in the previous post, for a few opinions about what color is and how it relates to the world according to current science, I was trying to gauge just how bad the eclipse of the lifeworld by theoretical conceptions is, among the readers of this site. I'd say there is a problem, but it's a problem that might be solved by patient discussion.

Someone called Automaton has given us a clear statement of the extreme position: nothing is actually green at any level of reality; even green experiences don't involve the existence of anything that is actually green; there is no green in reality, there is only "experience of green" which is not itself green. I see other responses which are just a step or two away from this extreme, but they don't deny the existence of actual color with that degree of unambiguity.

A few people talk about wavelengths of light, but I doubt that they want to assert that the light in question, as it traverses space, is actually colored green. Which returns us to the dilemma: either "experiences" exist and part of them is actually green, or you have to say that nothing exists, in any sense, at any level of reality, that is actually green. Either the lifeworld exists somewhere in reality, or you must assert, as does the philosopher quoted by Automaton, that all that exists are brain processes and words. Your color sensations aren't really there, you're "having a sensation" without there being a sensation in reality.

What about the other responses? kilobug seems to think that pi actually exists inside a computer calculating the digits of pi, and that this isn't dualist. Manfred thinks that "keeping definitions and referents distinct" would somehow answer the question of where in reality the actual shades of green are. drethelin says "The universe does not work how it feels to us it works" without explaining in physical terms what these feelings about reality are, and whether any of them is actually green. pedanterrific asks why wrangle about color rather than some other property (the answer is that the case of color makes this sort of problem as obvious as it ever gets). RomeoStevens suggests I look into Jeff Hawkins. Hawkins mentions qualia once in his book "On Intelligence", where he speculates about what sort of neural encoding might be the physical correlate of a color experience; but he doesn't say how or whether anything manages to be actually colored.

amcknight asks which of 9 theories of color listed in the SEP article on that subject I'm talking about. If you go a few paragraphs back from the list of 9 theories, you will see references to "color as it is in experience" or "color as a subjective quality". That's the type of color I'm talking about. The 9 theories are all ways of talking about "color as in physical objects", and focus on the properties of the external stimuli which cause a color sensation. The article gets around to talking about actual color, subjective or "phenomenal" color, only at the end.

Richard Kennaway comes closest to my position; he calls it an apparently impossible situation which we are actually living. I wouldn't put it quite like that; the only reason to call it impossible is if you are completely invested in an ontology lacking the so-called secondary qualities; if you aren't, it's just a problem to solve, not a paradox. But Richard comes closest (though who knows what Will Newsome is thinking). LW user "scientism" bites a different bullet to the eliminativists, and says colors are real and are properties of the external objects. That gets a point for realism, but it doesn't explain color in a dream or a hallucination.

Changing people's minds on this subject is an uphill battle, but people here are willing to talk, and most of these subjects have already been discussed for decades. There's ample opportunity to dissolve, not the problem, but the false solutions which only obscure the real problem, by drawing on the work of others; preferably before the future Rationality Institute starts mass-producing people who have the vice of quale-blindness as well as the virtues of rationality. Some of those people will go on to work on Friendly AI. So it's highly desirable that someone should do this. However, that would require time that I no longer have.

 

In this series of posts, I certainly didn't set out to focus on the issue of color. The first post is all about Friendly AI, the ontology of consciousness, and a hypothetical future discipline of quantum neurobiology. It may still be unclear why I think evidence for quantum computing in the brain could help with the ontological problems of consciousness. I feel that the brief discussion this week has produced some minor progress in explaining myself, which needs to be consolidated into something better. But see my remarks here about being able to collapse the dualistic distinction between mental and physical ontology in a tensor network ontology; also earlier remarks here about mathematically representing the phenomenological ontology of consciousness. I don't consider myself dogmatic about what the answer is, just about the inadequacy of all existing solutions, though I respect my own ideas enough to want to pursue them, and to believe that doing so will be usefully instructive, even if they are wrong.

However, my time is up. In real life, my ability to continue even at this inadequate level hangs by a thread. I don't mean that I'm suicidal, I mean that I can't eat air. I spent a year getting to this level in physics, so I could perform this task. I have considerable momentum now, but it will go to waste unless I can keep going for a little longer - a few weeks, maybe a few months. That should be enough time to write something up that contains a result of genuine substance, and/or enough time to secure an economic basis for my existence in real life that permits me to keep going. I won't go into detail here about how slim my resources really are, or how adverse my conditions, but it has been the effort that you would want from someone who has important contributions to make, and nowhere to turn for direct assistance.[*] I've done what I can, these posts are the end of it, and the next few days will decide whether I can keep going, or whether I have to shut down my brain once again.

So, one final remark. Asking for donations doesn't seem to work yet. So what if I promise to pay you back? Then the only cost you bear is the opportunity cost and the slight risk of default. Ten years ago, Eliezer lent me the airfare to Atlanta for a few days of brainstorming. It took a while, but he did get that money back. I honor my commitments and this one is highly public. This really is the biggest bargain in existential risk mitigation and conceptual boundary-breaking that you'll ever get: not even a gift, just a loan is required. If you want to discuss a deal, don't do it here, but mail me at mitchtemporarily@hotmail.com. One person might be enough to make the difference.

[*] Really, I can't say that; that's an emotional statement. There has been lots of assistance, large and small, from people in my life. But it's been a struggle conducted at subsistence level the whole way.

 

ETA 6 Feb: I get to keep going.

Comments (107)

Comment author: JoshuaZ 03 February 2012 04:50:17AM 17 points

There are a lot of reasons that people aren't responding positively to your comments. One which I think hasn't been addressed is that this, to a large extent, pattern-matches a bad set of metapatterns in history. In general, our understanding of the mind has advanced by having to reject our strong intuitions about how our minds are dualist and how aspects of our minds (or our minds as a whole) are fundamentally irreducible. So people look at this and think that it isn't a promising line of inquiry. Now, this may be unfair, but I don't think it really is very unfair. The notion that there are irreducible, or even reducible but strongly dualist, aspects of our universe seems to belong to a class of hypotheses which has been repeatedly falsified. So it is fair for someone, by default, to assign a low probability to similar hypotheses.

You have other bits that send worrying signals about your rationality or your intentions, such as when you write things like:

I don't mean that I'm suicidal, I mean that I can't eat air. I spent a year getting to this level in physics, so I could perform this task.

This bit not only made me sit up in alarm, it substantially reduced how seriously I should take your ideas. Previously, my thought process was: "This seems wrong, but Porter seems to know a decent amount of physics, more than I do in some respects; maybe I should update toward taking this sort of hypothesis more seriously?" Although Penrose has already done that, so you wouldn't cause that big an update. Still, this shows that much of your physics knowledge was acquired after you reached certain conclusions. This feels a lot like learning about a subject to write the bottom line. This isn't as extreme as, say, Jonathan Wells, who got a PhD in biology so he could "destroy Darwinism", but it does seem similar. The primary difference is that Wells seemed interested in the degree for its rhetorical power, whereas you seem genuinely interested in actually working out the truth. But to a casual observer who just read this post, they'd see a very strong match here.

I also think that you are being downvoted in part because you are asking for money in a fairly crass fashion and you don't have the social capital/status here to get away with it. Eliezer gets away with it even from the people here who don't consider the Singularity Institute to be a great way of fighting existential risk, because it is hard to have higher status than being the website's founder (although lukeprog and Yvain might be managing to beat that in some respects). In this context, making a point about how you just want loans at some level reduces status even further. One thing that you may want to consider is looking for other similar sources of funding that are broader and don't have the same underlying status system. Kickstarter would be an obvious one.

Comment author: Mitchell_Porter 03 February 2012 07:33:28AM 1 point

In general, our understanding of the mind has advanced by having to reject our strong intuitions about how our minds are dualist and how aspects of our minds (or our minds as a whole) are fundamentally irreducible.

Sometimes progress consists of doubling back to an older attitude, but at a higher level. Revolutions have excesses. The ghost in the machine haunts us, the more we take the machine apart. I see the holism of quantum states as the first historical sign of an ontological synthesis transcending the clash between reductionism and subjectivity, which has hitherto been resolved by rejecting one or the other, or by uneasy dualistic coexistence.

this shows that much of your physics knowledge was acquired after you reached certain conclusions. This feels a lot like learning about a subject to write the bottom line.

Or it's like learning anatomy, physiology, and genetics, so you can cure a disease. Certainly my thinking about physics has a much higher level of concreteness now, because I have much more to work with, and I have new ideas about details - maybe it's complexes of twistor polytopes, rather than evolving tensor networks. But I've found no reason to question the original impetus.

I also think that you are being downvoted in part because you are asking for money in a fairly crass fashion and you don't have the social capital/status here to get away with it.

I believe most of the downvotes are coming because of the claims I make (about what might be true and what can't be true) - I get downvotes whenever I say this stuff. Also because it's written informally rather than like a scholarly argumentative article (that's due to writing it all in a rush), and it contains statements to the effect that "too many of you just don't get it". Talking about money is just the final straw, I think.

But actually I think it's going OK. There's communication happening, issues are being aired and resolved, and there will have been progress, one way or another, by the time the smoke clears.

However, I do want to say that this comment of yours was not bad as an exercise in dispassionate analysis of what causes might be at work in the situation.

Comment author: bryjnar 03 February 2012 03:39:19PM 8 points

One other bit of (hopefully) constructive criticism: you do seem to have a bit of a case of philosophical jargon-itis. I mean sentences like this:

I see the holism of quantum states as the first historical sign of an ontological synthesis transcending the clash between reductionism and subjectivity, which has hitherto been resolved by rejecting one or the other, or by uneasy dualistic coexistence.

As a philosopher myself, I appreciate the usefulness of jargon from time to time, but you sometimes have the air of throwing it around for the sheer joy of it. Furthermore, I (at least) find that that sort of style can sometimes feel like you're deliberately trying to obscure your point, or that it's camouflage to conceal any dubious parts.

Comment author: David_Gerard 03 February 2012 10:36:45PM 1 point

When someone's spent years on a personal esoteric search for meaning, word salad is a really bad sign.

Comment author: CronoDAS 06 February 2012 05:06:18AM 0 points

One other bit of (hopefully) constructive criticism: you do seem to have a bit of a case of philosophical jargon-itis.

What he said.

I have difficulty understanding what Mitchell Porter is trying to say when he talks about this topic. When I run into something that is difficult to understand in this way, I usually find, upon closer examination, that I didn't understand it because it doesn't make any sense in the first place. And, as far as I can tell, this is also true of what Mitchell Porter says.

Comment author: Mitchell_Porter 06 February 2012 06:45:38AM 1 point

I claim that colors obviously exist, because they are all around us, and I also claim that they do not exist in standard physical ontology. Is that much clear?

Comment author: CronoDAS 06 February 2012 07:07:25AM 2 points

Now it is.

I disagree that colors do not exist in standard physical ontology, and find the claim rather absurd on its face. (I'm not entirely sure what ontology is, but I think I've picked up the meaning from context.)

See:
Brain Breakthrough! It's Made of Neurons!
Hand vs. Fingers
Angry Atoms

I don't know every last detail of how the experience of color is created by the interaction of light waves, eyes, and neurons, but I know that that's where it comes from.

Comment author: Mitchell_Porter 06 February 2012 08:33:24AM 1 point

An ontology is a theory about what it is that exists. I have to speak of "physical ontology" and not just of physics, because so many physicists take an anti-ontological or positivistic attitude, and say that physical theory just has to produce numbers which match the numbers coming from experiment; it doesn't have to be a theory about what it is that exists. And by standard physical ontology I mean one which is based on what Galileo called primary properties, possibly with some admixture of new concepts from contemporary mathematics, but definitely excluding the so-called secondary properties.

So a standard physical ontology may include time, space, and objects in space, and the objects will have size, shape, and location, and then they may have a variety of abstract quantitative properties on top of that, but they don't have color, sound, or any of those "feels" which get filed under qualia.

I don't know every last detail of how the experience of color is created by the interaction of light waves, eyes, and neurons, but I know that that's where it comes from.

Asking "where is the experienced color in the physical brain?" shows the hidden problem here. We know from experience that reality includes things that are actually green, namely certain parts of experiences. If we insist that everything is physical, then that means that experiences and their parts are also physical entities of some kind. If the actually green part of an experience is a physical entity, then there must be a physical entity which is actually green.

For the sake of further discussion, let us assume a physical ontology based on point-particles. These particles have the property of location - the property of always being at some point in space - and maybe they have a few other properties, like velocity, spin, and charge. An individual particle isn't actually green. What about two of them? The properties possessed by two of them are quantitative and logical conjunctions of the properties of individual particles - e.g. "location of center of mass" or "having a part at location x0 and another part at x1". We can even extend to counterfactual properties, e.g. "the property of flying apart if a heavy third particle were to fly past on a certain trajectory".

To accept that actual greenness still exists, but to argue against dualism, you need to show that actual greenness can be identified with some property like these. The problem is that that's a little absurd. It is exactly like saying that if you count through the natural numbers, all of the numbers after 5 x 10^37 are blue. The properties that are intrinsically available in standard physical ontology are much like arithmetic properties, but with a few additional "physical" predicates that can also enter into the definition.

I presume that most modern people don't consider linguistic behaviorism an adequate account of anything to do with consciousness. Linguistic behaviorism is where you say there are no "minds" or "psychological states", there are just bodies that speak. It's the classic case of accounting for experience by only accounting for what people say about experience.

Cognitive theories of consciousness are considered an advance on this because they introduce a causal model with highly structured internal states which have a structural similarity to conscious states. We see the capacity of neurons to encode information e.g. in spiking rates, we see that there are regions of cortex to which visual input is mapped point by point, and so we say, maybe the visual experience of a field of color is the same thing as a sheet of visual neurons spiking at different rates.

But I claim they can't be the same thing because of the ontological mismatch. A visual experience contains actual green, a sheet of neurons is a complicated bound state of a quadrillion atoms which nowhere contains actual green, though it may contain neurons exhibiting an averaged behavior which has a structural and causal role rather close to the structural and causal role played by actual greenness, as inferred from psychology and phenomenology.

Here I say there are two choices. Either you say that on top of the primary properties out of which standard physical ontology is built, there are secondary properties, like actual green, which are the building blocks of conscious experiences, and you say that the experiences dualistically accompany the causally isomorphic physical processes. Or you say that somewhere there is a physical object which is genuinely identical to the conscious experience - it is the experience - and you say that these neuronal sheets which behave like the parts of an experience still aren't the thing itself, they are just another stage in the processing of input (think of the many anatomical stages to the pathways that begin at the optic nerve and lead onward into the brain).

There are two peculiarities to this second option. First, haven't we already argued that the base properties available in physical ontology, considered either singly or in conjunction, just can't be identified with the constituent properties of conscious states? How does positing this new object help, if it is indeed a physical object? And second, doesn't it sound like a soul - something that's not a network of neurons, but a single thing; the single place where the whole experience is localized?

I propose to deal with the second peculiarity by employing a quantum ontology in which entanglement is seen as creating complex single objects (and not just correlated behaviors in several objects which remain ontologically distinct), and with the first peculiarity by saying that, yes, the properties which make up a conscious state are elementary physical properties, and noting that we know nothing about the intrinsic character of elementary physical properties, only their causal and structural relations to each other (so there's no reason why the elementary internal properties of an entangled system can't literally and directly be the qualia). I take the structure of a conscious state and say, that is the structure of some complex but elementary entity - not the structure of a collective behavior (as when we talk about the state of a neuron as "firing" or "not firing", a description which passes over the intricate microscopic detail of the exact detailed state).

The rationale of this move is that identifying the conscious state machine with a state machine based on averaged collective behaviors is really what leads to dualism. If we are instead dealing with the states of an entity which is complex but "fundamental", in the sense of being defined in terms of the bottom level of physical description (e.g. the Hilbert spaces of these entangled systems), then it's not a virtual machine.

Maybe that's the key concept in order to get this across to computer scientists: the idea is that consciousness is not a virtual state machine, it's a state machine at the "bottom level of implementation". If consciousness is a virtual state machine - so I argue - then you have dualism, because the states of the state machine of consciousness have to have a reality which the states of a virtual machine don't intrinsically have.

If you are just making a causal model of something, there's no necessity for the states of your model to correspond to anything more than averaged behaviors and averaged properties of the real system you're modeling. But consciousness isn't just a model or a posited concept, it is a thing in itself, a definite reality. States of consciousness must exist in the true ontology, they can't just be heuristic approximate concepts. So the choice comes down to: conscious states are dualistically correlated with the states of a virtual state machine, or conscious states are the physical states of some complex but elementary physical entity. I take the latter option and posit that it is some entangled subsystem of the brain with a large but finite number of elementary degrees of freedom. This would be the real physical locus of consciousness, the self, and you; it's the "Cartesian theater" where diverse sensory information all shows up within the same conscious experience, and it is the locus of conscious agency, the internally generated aspect of its state transitions being what we experience as will.

(That is, the experience of willing is awareness of a certain type of causality taking place. I'm not saying that the will is a quale; the will is just the self in its causal role, and there are "qualia of the will" which constitute the experience of having a will, and they result from reflective awareness of the self's causal role and causal power... Or at least, these are my private speculations.)

I'll guess that my prose got a little difficult again towards the end, but that's how it will be when we try to discuss consciousness in itself as an ontological entity. But hopefully the road towards the dilemma between dualism and quantum monism is a little clearer now.

Comment author: CronoDAS 06 February 2012 12:06:52PM 3 points

For the sake of further discussion, let us assume a physical ontology based on point-particles. These particles have the property of location - the property of always being at some point in space - and maybe they have a few other properties, like velocity, spin, and charge. An individual particle isn't actually green. What about two of them? The properties possessed by two of them are quantitative and logical conjunctions of the properties of individual particles - e.g. "location of center of mass" or "having a part at location x0 and another part at x1". We can even extend to counterfactual properties, e.g. "the property of flying apart if a heavy third particle were to fly past on a certain trajectory".

To accept that actual greenness still exists, but to argue against dualism, you need to show that actual greenness can be identified with some property like these. The problem is that that's a little absurd.

Well, it sounds quite reasonable to me to say that if you arrange elementary particles in a certain, complicated way, you get an instance of something that experiences greenness. To me, this is no different than saying that if you arrange particles in a certain, complicated way, you get a diamond. We just happen to know a lot more about what particle configurations create "diamondness" than "experience of green"ness. (As a matter of fact, we know exactly how to define "diamondness" as a function of particle type and arrangement.)

So, at this point I apply the Socratic method...

Are we in agreement that a "diamond" is a thing that exists? (My answer: Yes - we can recognize diamonds when we see them.)

Is the property "is a diamond" one that can be defined in terms of "quantitative and logical conjunctions of the properties of individual particles"? (My answer: Yes, because we know that diamonds are made of carbon atoms arranged in a specific pattern.)

Hopefully we agree on these answers! And if we do, can you tell me what the difference is between the predicate "is experiencing greenness" and "is a diamond" such that we can tell, in the real world, if something is a diamond by looking at the particles that make it up, and that it is impossible, in principle, to do the same for "is experiencing greenness"?

What I think your mistake is, is that you underestimate the scope of just what "quantitative and logical conjunctions of the properties of individual particles" can actually describe. Which is, literally, anything at all that can be described with mathematics, assuming you're allowing all the standard operators of predicate logic and of arithmetic. And that would include the function that takes "arrangements of particles" as an input and returns "true" if the arrangement of particles includes a brain that is experiencing green and "false" otherwise - even though we humans don't actually know what that function is!

But I claim they can't be the same thing because of the ontological mismatch. A visual experience contains actual green, a sheet of neurons is a complicated bound state of a quadrillion atoms which nowhere contains actual green, though it may contain neurons exhibiting an averaged behavior which has a structural and causal role rather close to the structural and causal role played by actual greenness, as inferred from psychology and phenomenology.

To sum up, I assert that you are mistaken when you say that there is an ontological mismatch - the sheet of neurons does indeed contain the experience of green. You are literally making the exact same error that Eliezer's strawman makes in Angry Atoms.

Comment author: whowhowho 01 February 2013 03:34:37PM 0 points

We just happen to know a lot more about what particle configurations create "diamondness" than "experience of green"ness.

And if you don't know how to create greenness, it is an act of faith on your part that it is done by physics as you understand it at all.

Comment author: CronoDAS 01 February 2013 08:08:21PM 0 points

Perhaps, but physics has had a pretty good run so far...

Comment author: Mitchell_Porter 07 February 2012 02:44:10AM 0 points

By talking about "experience of green", "experiencing greenness", etc, you get to dodge the question of whether greenness itself is there or not. Do you agree that there is something in reality that is actually green, namely, certain parts of experiences? Do you agree that if these parts of experiences can be identified with particular physical entities, then those physical entities must be actually green?

Comment author: metaphysicist 21 February 2012 06:31:40AM 0 points

Do you agree that there is something in reality that is actually green, namely, certain parts of experiences?

No. Why do you believe there is? Because you seem to experience green? Since greenness is ontologically anomalous, what reason is there to think the experience isn't illusion?

Comment author: CronoDAS 07 February 2012 04:37:22AM 0 points

Well, I'm used to using the word "green" to describe objects that reflect certain wavelengths of light (which are interpreted in a certain way by the human visual system) and not experiences. As in, "This apple is green" or "I see something that looks green." Which is why I used the expression "experience of greenness", because that's the best translation I can think of for what you're saying into CronoDAS-English.

So the question

Do you agree that if these parts of experiences can be identified with particular physical entities, then those physical entities must be actually green?

seems like a fallacy of equivocation to me, or possibly a fallacy of composition. It feels odd to me to say that a brain is green - after all, they don't look green when you're cutting open a skull to see what's inside of it. If "green" in Mitchell-Porter-English means the same thing as "experiences the sensation of greenness" does in CronoDAS-English, then yes, I'll definitely say that the set of particular physical entities in question possesses the property "green", even though the same can't be said of the individual point-particles which make up that collection.

(This kind of word-wrangling is another reason why I tried to stay out of this discussion in the past... trying to make sure we mean the same thing when we talk to each other can take a lot of effort.)

Comment author: David_Gerard 03 February 2012 10:32:46PM 2 points

Dualism is a confused notion. If, in a long journey through gathering a tremendous degree of knowledge, you arrive at dualism, you've made a mistake somewhere and need to go back and see where you divided by zero. If your logical chain is in fact sound to a mathematical degree of certainty, then arriving at dualism is a reductio ad absurdum of your starting point.

Comment author: Mitchell_Porter 04 February 2012 03:01:47AM 2 points

Perhaps you missed that I have argued against functionalism because it implies dualism.

Comment author: David_Gerard 04 February 2012 08:36:28AM 1 point

Then you need to do the same for ontologically basic qualia.

Comment author: Mitchell_Porter 04 February 2012 09:18:29AM 0 points

I fail to see what your actual position is. Mine is, first, that colors exist, and second, that they don't exist in standard physical ontology. Please make a comparably clear statement about what you believe the truth to be.

Comment author: David_Gerard 04 February 2012 11:21:44AM *  6 points [-]

Colours "exist" as a fact of perception. If you're looking for colours without perception, you've missed what normative usage of "colour" means. You've also committed a ton of compression fallacy, assuming that all possible definitions of "colour" do or should refer to the same ontological entity.

You've then covered your views in word salad; I would not attempt to write with such an appalling lack of clarity as you've wrapped your views in in this sequence, except for strictly literary purposes; certainly not if my intent were to inform.

You need to seriously consider the possibility that this sequence is getting such an overwhelmingly negative reaction because you're talking rubbish.

Comment author: Mitchell_Porter 04 February 2012 12:46:08PM 1 point [-]

Colours "exist" as a fact of perception.

Why do you put "exist" in quotation marks? What does that accomplish? If I chopped off your hand, would you say that the pain does not exist, it only "exists"?

If you're looking for colours without perception, you've missed what normative usage of "colour" means.

I'm not looking for colors without perception; I'm looking for the colors of perception somewhere in physical reality, since colors are real and physical reality is supposed to be the only sort of reality there is.

You've then covered your views in word salad; I would not attempt to write with such an appalling lack of clarity as you've wrapped your views in in this sequence, except for strictly literary purposes; certainly not if my intent were to inform.

It's not so easy to describe conscious states accurately, and a serious alternative to dualism isn't so easy to invent or convey either. I'm improvising a lot. If you make an effort to understand it, it may make more sense.

But let us return to your views. Colors only exist as part of perceptions; fine. Presumably you believe that a perception is a type of physical process, a brain process. Do you believe that some part of these brain processes is colored? If someone is seeing green, is there a flicker of actual greenness somewhere in or around the relevant brain process? I doubt that you think this. But then, at this point, nothing in your model of reality is actually green, neither the world outside the brain, nor the world inside the brain. Yet greenness is manifestly there in reality: perceptions contain actual greenness. Therefore your model is incomplete. Therefore, if you wish to include actual conscious experiences in your model, they'll have to go in alongside but distinct from the physical processes. Therefore, you will have to be a dualist.

I am not advocating dualism, I'm just telling you that if you don't want to deny the phenomenology of color, and you want to retain your physical ontology, you will have to be a dualist.

Comment author: HoverHell 03 February 2012 04:13:35PM *  0 points [-]

by default to assign a low probability to similar hypotheses

Mostly irrelevant to the OP, a question: how implausible do you see a claim that dualism is false (there's nothing irreducible in material models of our minds) and (at the same time) qualia (or phenomena as in constructs from qualia) are ontologically basic? (and, ergo, materialism i.e. material model is not ontologically basic).

(for few references, there are opposing (conflicting) hypotheses of “(strong) solipsism”, “materialism” and “agnostic solipsism”, and the aforementioned claim is a conclusion of the latter one.)

EDIT: If this (and nearby) is a post with red-flag keywords of “downvote it”, then there's probably no overcomplicated post with green-flag words that would be upvoted without a second thought :)

More seriously: it is helpful to state why you are downvoting, unless you are significantly certain that the poster is intentionally being obnoxious or apparently ignores such comments.

Comment author: JoshuaZ 03 February 2012 04:28:25PM 1 point [-]

Mostly irrelevant to the OP, a question: how implausible do you see a claim that dualism is false (there's nothing irreducible in material models of our minds) and (at the same time) qualia (or phenomena as in constructs from qualia) are ontologically basic? (and, ergo, materialism i.e. material model is not ontologically basic).

I don't know. Probably very low, certainly less than 1%.

Comment author: HoverHell 03 February 2012 10:33:53PM 2 points [-]

Hm, I realize that I might mean something different by “ontologically basic” from others.

Then, s/ontologic/epistemologic/g, i.e. “how implausible do you see a claim that dualism is false (…) and qualia (…) are epistemologically basic?”

Comment author: David_Gerard 03 February 2012 10:31:35PM 0 points [-]

Asserting that qualia are ontologically basic appears to be assuming that an aspect of mind is ontologically basic, i.e. dualism. So it's only not having done the logical chain myself that would let me set a probability (a statement of my uncertainty) on it at all, rather than just saying "contradiction".

Comment author: HoverHell 03 February 2012 10:52:39PM *  1 point [-]

There's also a (not really low) possibility that you are misinterpreting the question (i.e. understanding it in a way different from intended).

Also (but not importantly) there's a possibility that there's no such thing as “matter” and therefore dualism is false (and, as a particular case of that — “(strong) solipsism” — qualia are ontologically basic).

Also, a question in the adjacent thread (http://lesswrong.com/lw/9rb/one_last_roll_of_the_dice/5tox), if you don't mind answering.

Comment author: Ghostly 03 February 2012 04:41:49AM 6 points [-]

Some questions:

  1. How will you make money in the future to pay back the loan?
  2. Why aren't you doing that now, even on a part-time basis?
  3. Is there one academic physicist who will endorse your specific research agenda as worthwhile?
  4. Likewise for an academic philosopher?
  5. Likewise for anyone other than yourself?
  6. Why won't physicists doing ordinary physics (who are more numerous, have higher ability, and have better track record of productivity) solve your problems in the course of making better predictive models?
  7. How would this particular piece of work help with your larger interests? Would it cause physicists to work on this topic? Provide a basis for assessing your productivity or lack thereof?
  8. Why not spend some time programming or tutoring math? If you work at Google for a year you can then live off the proceeds for several years in Bali or the like. A moderate amount of tutoring work could pay the rent.
Comment author: Mitchell_Porter 03 February 2012 05:57:14AM *  3 points [-]

How will you make money in the future to pay back the loan?

Because I'll have a job again? I have actually had paid employment before, and I don't anticipate that the need to earn money will vanish from my life. The question is whether I'll move a few steps up the economic food chain, either because I find a niche where I can do my thing, or because I can't stand poverty any more and decide to do something that pays well. If I move up, repayment will be faster, if I don't, it will be slower, but either way it will happen.

Why aren't you doing that now, even on a part-time basis?

This is the culmination of a period in which I stopped trying to find compromise work and just went for broke. I've crossed all sorts of boundaries in the past few weeks, as the end neared and I forced more from myself. That will have been worth it, no matter what happens now.

Is there one academic physicist who will endorse your specific research agenda as worthwhile?

Well, let's start with the opposite: go here for an ex-academic telling me that one part of my research agenda is not worthwhile. Against that, you might want to look at the reception of my "questions" on the Stack Exchange sites - most of those questions are actually statements of ideas, and they generally (though not always) get a positive reception.

Now if you're talking about the big agenda I outlined in my first post in this series, there are clear resemblances between elements of it and work that is already out there. You don't have to look far to find physicists who are interested in physical ontology or in quantum brain theories - though I think most of them are on the wrong track, and the feeling might be mutual. But yes, I can think of various people whose work is similar enough to what I propose, that they might endorse that part of the picture.

Likewise for an academic philosopher?

David Chalmers is best known for talking about dualism, but he's flagged a new monism as an option worth exploring. We have an exchange of views on his blog here.

Likewise for anyone other than yourself?

Let's see if anyone else has surfaced, by the time we're done here.

Why won't physicists doing ordinary physics (who are more numerous, have higher ability, and have better track record of productivity) solve your problems in the course of making better predictive models?

Well, let's see what subproblems I have, which might be solved by a physicist. There's the search for the right quantum ontology, and then there's the search for quantum effects in the brain. Although most physicists take a positivist attitude and dismiss concerns about what's really there, there are plenty of people working on quantum ontology. In my opinion, there are new technical developments in QFT, mentioned at the end of my first post, which make the whole work of quantum ontology to date a sort of "prehistory", conducted in ignorance of very important new facts about alternative representations of QFT that are now being worked out by specialists. Just being aware of these new developments gives me a slight technical edge, though of course the word will spread.

As for the quantum brain stuff, there's lots of room for independent new inquiries there. I gave a talk 9 years ago on the possibility of topological quantum computation in the microtubule, and no-one else seems to have explored the theoretical viability of that yet. Ideas can sit unexamined for many years at a time.

How would this particular piece of work help with your larger interests? Would it cause physicists to work on this topic? Provide a basis for assessing your productivity or lack thereof?

OK, I no longer know what you're referring to by "particular piece of work" and "larger interests". Do you mean, how would the discovery of a consciousness-friendly quantum ontology be relevant to Friendly AI?

Why not spend some time programming or tutoring math? If you work at Google for a year you can then live off the proceeds for several years in Bali or the like. A moderate amount of tutoring work could pay the rent.

If I ever go to work at Google it won't be to live off the proceeds afterwards, it will be because it's relevant to artificial intelligence. Of course you're right that some forms of work pay well. Part of what keeps me down is impatience and the attempt to do the most important thing right now.

ETA: The indent turned every question into question "1", so I removed the numbers.

Comment author: Ghostly 03 February 2012 07:47:41AM 7 points [-]

Chalmers' short comment in your link amounts to just Chalmers expressing enthusiasm for ontologically basic mental properties, not any kind of recommendation for your specific research program.

Of course you're right that some forms of work pay well. Part of what keeps me down is impatience and the attempt to do the most important thing right now.

To be frank, the Outside View says that most people who have achieved little over many years of work will achieve little in the next few months. Many of them have trouble with time horizons, lack of willpower, or other problems that sabotage their efforts systematically, or prefer to indulge other desires rather than work hard. These things would hinder both scientific research and paid work. Refusing to self-finance with a lucrative job, combined with the absence of any impressive work history (that you have made clear in the post I have seen) is a bad signal about your productivity, your reasons for asking us for money, and your ability to eventually pay it back.

the attempt to do the most important thing right now

No one else seems to buy your picture of what is most important (qualia+safe AI). Have you actually thought through and articulated a model, with a chain of cause and effect, between your course of research and your stated aims of affecting AI? Which came first, your desire to think about quantum consciousness theories or an interest in safe AI? It seems like a huge stretch.

I'm sorry to be so blunt, but if you're going to be asking for money on Less Wrong you should be able to answer such questions.

Comment author: Mitchell_Porter 03 February 2012 08:57:59AM 1 point [-]

Chalmers' short comment in your link amounts to just Chalmers expressing enthusiasm for ontologically basic mental properties, not any kind of recommendation for your specific research program.

There is no existing recommendation for my specific research program because I haven't gone looking for one. I thought I would just work on it myself, finish a portion of it myself, and present that to the world, along with the outline of the rest of the program.

Refusing to self-finance with a lucrative job ... is a bad signal

"Lucrative" is a weakness in your critique. I'm having trouble thinking of anyone who decided they should have been a scientist, then went and made lots of money, then did something of consequence in science. People who really want to do something tend to have trouble doing something else in its place.

Of course you're correct that if someone wants to achieve big things, but has failed to do so thus far, there are reasons. One of my favorite lines from Bruce Sterling's Schismatrix talks about the Superbrights, who were the product of an experiment in genetically engineering IQs above 200, as favoring "wild schemes, huge lunacies that in the end boiled down to nothing". I spent my 20s trying to create a global transhumanist utopia in completely ineffective ways (which is why I can now write about the perils of utopianism with some conviction), and my 30s catching up on a lot of facts about the world. I am surely a case study in something, some type of failed potential, though I don't know what exactly. I would even think about trying to extract the lessons from my own experience, and that of similar people like Celia Green and Marni Sheppeard, so that others don't repeat my mistakes.

But throughout those years I also thought a great deal about the ontology of quantum mechanics and the ontology of consciousness. I could certainly have written philosophical monographs on those subjects. I could do so now, except that I now believe that the explanation of quantum mechanics will be found through the study of our most advanced theories, and not in reasoning about simple models, so the quick path to enlightenment turns out to lead up the slope of genuine particle physics. Anyway, perhaps the main reason I'm trying to do this now is that I have something to contribute and I don't see anyone else doing it.

No one else seems to buy your picture of what is most important (qualia+safe AI). Have you actually thought through and articulated a model, with a chain of cause and effect, between your course of research and your stated aims of affecting AI?

See the first post in this series. You need to know how consciousness works if you are going to correctly act on values that refer to conscious beings. If you were creating a transhuman AI, and couldn't even see that colors actually exist, you would clearly be a menace on account of having no clue about the reality of consciousness. Your theoretical apriori, your intellectual constructs, would have eclipsed any sensitivity to the phenomenological and ontological facts.

The issue is broader than just the viability of whatever ideas SIAI has about outsourcing the discovery of the true ontology of consciousness to an AI. We live in a culture possessing powerful computational devices that are interfacing with, and substituting for, human beings, in a plethora of ways. Human subjectivity has always understood itself through metaphor, and a lot of the metaphors are now coming from I.T. There is a synergy between the advance of computer power and the advance of this "computerized" subjectivity, that has trained itself to see itself as a computer. Perhaps the ultimate wrong turn would be a civilization which uploaded itself, thinking that it had thereby obtained immortality, when in fact they had just killed themselves, to be replaced by a society of unconscious simulations. That's an extreme, science-fictional example, but there are many lesser forms of the problem that could come to pass, which are merely pathologies rather than disasters.

I don't want to declare the impact of computers on human self-understanding as unconditionally negative, not at all; but it has led to a whole new type of false consciousness, a new way of "eclipsing the lifeworld", and the only way to overcome that problem rationally and knowingly (it could instead be overcome by violent emotional luddism), is to transcend these incomplete visions of reality - find a deeper one which makes their incompleteness manifest.

Which came first, your desire to think about quantum consciousness theories or an interest in safe AI?

Quantum consciousness. I was thinking about that a few years before Eliezer showed up in the mid-1990s. There have been many twists and turns since then.

Comment author: moridinamael 03 February 2012 09:35:30PM 11 points [-]

Perhaps the ultimate wrong turn would be a civilization which uploaded itself, thinking that it had thereby obtained immortality, when in fact they had just killed themselves, to be replaced by a society of unconscious simulations. That's an extreme, science-fictional example, but there are many lesser forms of the problem that could come to pass, which are merely pathologies rather than disasters.

Imagine you have signed up to have your brain scanned by the most advanced equipment available in 2045. You sit in a tube and close your eyes while the machine recreates all the details of your brain: its connectivity, electromagnetic fields and electrochemical gradients, and transient firing patterns.

The technician says, "Okay, you've been uploaded, the simulation is running."

"Excellent," you respond. "Now I can interact with the simulation and prove that it doesn't have qualia."

"Hold on, there," says the technician. "You can't interact with it yet. The nerve impulses from your sensory organs and non-cranial nerves are still being recorded and used as input for the simulation, so that we can make sure it's a stable duplicate. Observe the screen."

You turn and look at the screen, where you see an image of yourself, seen from a camera floating above you, turned to face the screen. The screen is hanging from a bright green wall.

"That's me," you say. "Where's the simulation?" As you watch, you verify this, because the image of you on the screen says those words along with you.

"That is the simulation, on the monitor," reasserts the technician.

You are somewhat taken aback, but not entirely surprised. It is a high-fidelity copy of you and it's being fed the same inputs as you. You reach up to scratch your ear and notice that the you on the monitor mirrors this action perfectly. He even has the same bemused expression on his face as he examines the monitor, in his simulated world, which he doesn't realize is just showing him an image of himself, in a dizzying hall-of-mirrors effect.

"The wall is green," you say. The copy says this along with you in perfect unison. "No, really. It's not just how the algorithm feels from the inside. Of course you would say it's green," you say, pointing at the screen, just as the simulation points at the screen and says what you are saying in the same tone. "But you're saying it for entirely different reasons than I am."

The technician tsks. "Your brain state matches that of the simulation to within an inordinately precise tolerance. He's not thinking anything different than what you're thinking. Whatever internal mental representation leads you to insist that the wall is green is identical to the internal mental representation that leads him to say the same."

"It can't be," you insist. "It doesn't have qualia, it's just saying that because its brain is wired up the same way as mine. No matter what he says, his experience of color is distinct from mine."

"Actually, we de-synchronized the inputs thirty seconds ago. You're the simulation, that's the real you on the monitor."

Comment author: Mitchell_Porter 04 February 2012 03:14:02AM 8 points [-]

Yes, I've read this in science fiction too. Do you want me to write my own science fiction in opposition to yours, about the monadic physics of the soul and the sinister society of the zombie uploads? It would be a stirring tale about the rise of a culture brainwashed to believe that simulations of mind are the same as the real thing, and the race against time to prevent it from implementing its deluded utopia by force.

Telling a vivid little story about how things would be if so-and-so were true, is not actually an argument in favor of the proposition. The relevant LW buzzword is fictional evidence.

Comment author: moridinamael 04 February 2012 03:48:37AM *  3 points [-]

Yes, I think your writing a "counter-fiction" would be a very useful exercise and might clarify to me how you can continue to hold the position that you do. I honestly do not fathom it. I admit this is a fact about my own state of knowledge, and I would like it if you could at least show me an example of a fictional universe where you were proven right, as I have shown an account of a fictional universe where you are proven wrong.

I don't intend for the story to serve as any kind of evidence, but I did intend for it to serve as an argument. If you found yourself in the position described in the story, would you be forced to admit that there was not, in fact, any information that makes up a "mind" outside of the mechanistic brain? If it turns out that humans and their simulations both behave and think in exactly the same fashion?

Again, it's not fictional evidence, it's me asking what your true rejection would need to be for you to accept that the universe is turtles all the way down.

Comment author: Mitchell_Porter 04 February 2012 05:44:59AM 2 points [-]

it's not fictional evidence, it's me asking what your true rejection would need to be for you to accept that the universe is turtles all the way down.

If it's a question of why I believe what I do, the starting point is described here, in the section on being wrong about one's phenomenology. Telling me that colors aren't there at any level is telling me that my color phenomenology doesn't exist. That's like telling you, not just that you're not having a conversation on lesswrong, but that you are not even hallucinating the occurrence of such a conversation. There are hard limits to the sort of doubt one can credibly engage in about what is happening to oneself at the level of appearance, and the abolition of color lies way beyond those limits, out in the land of "what if 2+2 actually equals 5?"

The next step is the insistence that such colors are not contained in physical ontology, and so a standard materialism is really a dualism, which will associate colors (and other ingredients of experience) with some material entity or property, but which cannot legitimately identify them with it. I think that ultimately this is straightforward - the mathematical ontologies of standard physics are completely explicit, it's obvious what they're made of, and you just won't get color out of something like a big logical conjunction of positional properties - but the arguments are intricate because every conceivable attempt to avoid that conclusion is deployed. So if you want arguments for this step, I'm sure you can find them in Chalmers and other philosophers.

Then there's my personal alternative to dualism. The existence of an alternative, as a palpable possibility if not a demonstrated reality, certainly helps me in my stubbornness. Otherwise I would just be left insisting that phenomenal ontology is definitely different to physical ontology, and historically that usually leads to advocacy of dualism, though starting in the late 19th century you had people talking about a monistic alternative - "panpsychism", later Russell's "neutral monism". There's surely an important issue buried here, something about the capacity of people to see that something is true, though it runs against their other beliefs, in the absence of an alternative set of beliefs that would explain the problematic truth. It's definitely easier to insist on the reality of an inconvenient phenomenon when you have a candidate explanation; but one would think that, ideally, this shouldn't be necessary.

your writing a "counter-fiction" would be a very useful exercise and might clarify to me how you can continue to hold the position that you do. I honestly do not fathom it.

It shouldn't require a story to convey the idea. Or rather, a story would not be the best vehicle, because it's actually a mathematical idea. You would know that when we look at the world, we see individuated particles, but at the level of quantum wavefunctions, we have, not just a wavefunction per particle, but entangled wavefunctions for several particles at once, that can't be factorized into a "product" of single-particle wavefunctions (instead, such entangled wavefunctions are sums, superpositions, of distinct product wavefunctions). One aspect of the dispute about quantum reality is whether the supreme entangled wavefunction of the universe is the reality, whether it's just the particles, or whether it's some combination. But we could also speculate that the reality is something in between - that reality consists of lots of single particles, and then occasional complex entities which we would currently call entangled sets of particles. You could write down an exact specification of such an ontology; it would be a bit like what they call an objective-collapse ontology, except that the "collapses" or "quantum jumps" are potentially between entangled multiparticle states, not just localized single-particle states.
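The product-versus-entangled distinction can be made concrete with a standard textbook illustration (the particular states and labels here are generic examples, not anything specific to the proposal under discussion). A two-particle product state factorizes into independent single-particle states:

```latex
% A product state: each particle has its own wavefunction
\[
|\Psi_{\mathrm{prod}}\rangle = |a\rangle_1 \otimes |b\rangle_2
\]
% An entangled (Bell-type) state: a superposition of product states
% that cannot be written as any single product |x>_1 (x) |y>_2
\[
|\Psi_{\mathrm{ent}}\rangle = \frac{1}{\sqrt{2}}\bigl(\,
|0\rangle_1 \otimes |0\rangle_2 \;+\; |1\rangle_1 \otimes |1\rangle_2
\,\bigr)
\]
```

The speculation above amounts to treating some such non-factorizable states as single ontological entities in their own right, rather than as mere descriptions of several particles.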

My concept is that the self is a single humongous "multi-particle state" somewhere in the brain, and the "lifeworld" (mentioned at the start of this post) is wholly contained within that state. This way we avoid the dualistic association between conscious state and computational state, in favor of an exact identity between conscious state and physical state. The isomorphism does not involve, on the physical side, a coarse-grained state machine, so here it can be an identity. When I'm not just defending the reality of color (etc) and the impossibility of identifying it with functional states, this is the model that I'm elaborating.

So if you want a counter-fiction, it would be one in which the brain is a quantum computer and consciousness is the big tensor factor in its quantum state, and in which classical uploading destroys consciousness because it merely simulates the big tensor factor's dynamics in an entity which ontologically consists of a zillion of the simple tensor factors (a classical computer). In other words, whether the state machine is realized within a single tensor factor or a distributed causal network of them, is what determines whether it is conscious or not.

If you found yourself in the position described in the story, would you be forced to admit that there was not, in fact, any information that makes up a "mind" outside of the mechanistic brain?

I was asked this some time ago - if I found myself to be an upload, as in your scenario, how would that affect my beliefs? To the extent that I believed what was going on, I would have to start considering Chalmers-style dualism.

Comment author: Risto_Saarelma 04 February 2012 11:49:06AM *  6 points [-]

So if you want a counter-fiction, it would be one in which the brain is a quantum computer and consciousness is the big tensor factor in its quantum state, and in which classical uploading destroys consciousness because it merely simulates the big tensor factor's dynamics in an entity which ontologically consists of a zillion of the simple tensor factors

I'm more interested in the part in the fiction where the heroes realize that the people they've lived with their whole lives in their revealed-to-be-dystopian future, who've had an upload brain prosthesis after some traumatic injury or disease, are actually p-zombies. How do they find this out, exactly? And how do they deal with there being all these people, who might be their friends, parents or children, who are on all social and cognitive accounts exactly like them, who they are now convinced lack a subjective experience?

Comment author: Mitchell_Porter 05 February 2012 04:55:40AM 3 points [-]

Let's see what we need to assume for such a fictional scenario. First, we have (1) functionally successful brain emulation exists, at a level where the emulation includes memory and personality. Then I see a choice between (2a) the world is still run by human beings, and (2b) the world has powerful AI. Finally, we have a choice between (3a) there has been no discovery of a need for quantum neuroscience yet, and (3b) a quantum neuroscience exists, but a quantum implementation of the personal state machine is not thought necessary to preserve consciousness.

In my opinion, (1) is in tension with (3a) and even with (2a). Given that we are assuming some form of quantum-mind theory to be correct, it seems unlikely that you could have functionally adequate uploads of whole human beings, without this having already been discovered. And having the hardware and the models and the brain data needed to run a whole human sim, should imply that you are well past the threshold of being able to create AI that is nonhuman but with human intellectual potential.

So by my standards, the best chance to make the story work is the combination of (1) with (3b), and possibly with (2b) also. The (2b) scenario might be set after a "semi-friendly" singularity, in which an Iain M. Banks, Culture-like existence for humanity has been created, and the science and technology for brain prostheses has been developed by AIs. Since the existence of a world-ruling friendly super-AI (a "Sysop") raises so many other issues, it might be better to think in terms of an "Aristoi"-like world where there's a benevolent human ruling class who have used powerful narrow AI to produce brain emulation technology and other boons to humanity, and who keep a very tight control on its spread. The model here might be Vinge's Peace Authority, a dictatorship under which the masses have a medieval existence and the rulers have the advanced technology, which they monopolize for the sake of human survival.

However it works, I think we have to suppose a technocratic elite who somehow know enough to produce working brain prostheses, but not enough to realize the full truth about consciousness. They should be heavily reliant on AI to do the R&D for them, but they've also managed to keep the genie of transhuman AI trapped in its box so far. I still have trouble seeing this as a stable situation - e.g. a society that lasts for several generations, long enough for a significant subpopulation to consist of "ems". It might help if we are only dealing with a small population, either because most of humanity is dead or most of them are long-term excluded from the society of uploads.

And even after all this world-building effort, I still have trouble just accepting the scenario. Whole brain emulation good enough to provide a functionally viable copy of the original person implies enormously destabilizing computational and neuroscientific advances. It's also not something that is achieved in a single leap; to get there, you would have to traverse a whole "uncanny valley" of bad and failed emulations.

Long before you faced the issue of whether a given implementation of a perfect emulation produced consciousness, you would have to deal with subcultures who believed that highly imperfect emulations are good enough. Consider all the forms of wishful thinking that afflict parents regarding their children, and people who are dying regarding their prospects of a cure, and on and on; and imagine how those tendencies would interact with a world in which a dozen forms of brain-simulation snake-oil are on the market.

Look at the sort of artificial systems which are already regarded by some people as close to human. We already have people marrying video game characters, and aiming for immortality via "lifebox". To the extent that society wants the new possibilities that copies and backups are supposed to provide, it will not wait around while technicians try to chase down the remaining bugs in the emulation process. And what if some of your sims, or the users of brain prostheses, decide that what the technicians call bugs are actually features?

So this issue - autonomous personlike entities in society, which may or may not have subjective experience - is going to be upon us before we have ems to worry about. A child with a toy or an imaginary friend may speak very earnestly about what its companion is thinking or feeling. Strongly religious people may also have an intense imaginative involvement, a personal relationship, with God, angels, spirits. These animistic, anthropomorphizing tendencies are immediately at work whenever there is another step forward in the simulation of humanity.

At the same time, contemporary humans now spend so much time interacting via computer that they have begun to internalize many of the concepts and properties of software and computer networks. It therefore becomes increasingly easy to create a nonhuman intelligent agent which passes for an Internet-using human. A similar consideration will apply to neurological prostheses: before we have cortical prostheses based on a backup of the old natural brain, we will have cortical prostheses which are meant to be augmentations, and so the criterion of whether even a purely restorative cortical prosthesis is adequate will increasingly be based on the cultural habits and practices of people who were using cortical prostheses for augmentation.

Comment author: Eugine_Nier 06 February 2012 01:14:11AM 1 point [-]

One possible counter-fiction would have an ending similar to the bad ending of Three Worlds Collide.

Comment author: katydee 12 March 2012 05:51:22AM 0 points [-]

I'm having trouble thinking of anyone who decided they should have been a scientist, then went and made lots of money, then did something of consequence in science.

Jeff Hawkins.

Comment author: Nick_Tarleton 03 February 2012 07:23:24AM *  7 points [-]

Why not spend some time programming or tutoring math? If you work at Google for a year you can then live off the proceeds for several years in Bali or the like. A moderate amount of tutoring work could pay the rent.

If I ever go to work at Google it won't be to live off the proceeds afterwards, it will be because it's relevant to artificial intelligence.

Supposing you can work at Google, why not? Beggaring yourself looks unlikely to maximize total productivity over the next few years, which seems like the timescale that counts.

Comment author: JoshuaZ 03 February 2012 06:02:09AM *  4 points [-]

Well, let's start with the opposite: go here for an ex-academic telling me that one part of my research agenda is not worthwhile.

Oh. Lubos Motl. Considering he seems to spend his time doing things like telling Scott Aaronson that Scott doesn't understand quantum mechanics, I don't think Motl's opinion should actually have much weight in this sort of context.

I gave a talk 9 years ago on the possibility of topological quantum computation in the microtubule, and no-one else seems to have explored the theoretical viability of that yet. Ideas can sit unexamined for many years at a time.

Topological quantum computers are more robust than old-fashioned quantum computers, but they aren't that much more robust, and they have their own host of issues. People likely aren't looking into this because a) it seems only marginally more reasonable than the more common version of microtubules doing quantum computation and b) it isn't clear how one would go about testing any such idea.

Comment author: Mitchell_Porter 03 February 2012 06:18:09AM -1 points [-]

In the article I linked, Lubos is expressing a common opinion about the merit of such formulas. It's a deeper issue than just Lubos being opinionated.

If I had bothered to write a paper, back when I was thinking about anyons in microtubules, we would know a lot more by now about the merits of that idea and how to test it. There would have been a response and a dialogue. But I let it go and no-one else took it up on the theoretical level.

Comment author: JoshuaZ 03 February 2012 06:20:54AM 2 points [-]

Do you have any suggested method for testing the microtubule claim?

Comment author: Mitchell_Porter 03 February 2012 06:42:41AM 0 points [-]

From my perspective, first you should just try to find a biophysically plausible model which contains anyons. Then you can test it, e.g. by examining optical and electrical properties of the microtubule.

People measure such properties already, so we may already have relevant data, even evidence. There is a guy in Japan (Anirban Bandyopadhyay) who claims to have measured all sorts of striking electrical properties of the microtubule. When he talks about theory, I just shake my head, but that doesn't tell us whether his data is good or bad.

Comment author: Manfred 03 February 2012 04:58:55AM *  5 points [-]

"Green" refers to objects which disproportionately reflect or emit light of a wavelength between 520 and 570nm.

~(Solvent, from the previous thread.)

A few people talk about wavelengths of light, but I doubt that they want to assert that the light in question, as it traverses space, is actually colored green.

If your counterexample is already taken care of by the very second person in the previous thread, you should use a different counterexample. EDIT: I am not endorsing Solvent's definition in any "The Definition" sense - but I felt that you ignored what he wrote when making that counterexample. In a post that is basically all about responding to what people wrote, that's bad, and I think there were other similar lapses.

Anyhow, the question you're regarding as so mysterious isn't even Mary's Room level - it's "what words mean" level, i.e. it's already solved. Suggested reading: Dissolving questions about disease, How an algorithm feels from inside, Quotation and referent, The meaning of right.

Comment author: JoshuaZ 03 February 2012 05:29:12AM *  1 point [-]

This doesn't actually work. We speak of seeing green things, and we can say an object looks green even when it isn't emitting any such rays, due to various optical illusions and the like. If it turned out that there was some specific wavelength (say, around 450 nm, in the otherwise blue range) that also triggered the same visual reaction in our systems as waves in the 520-570 nm range, I don't think we'd have trouble calling objects giving off that wavelength green. And we actually do something similar with colors from objects that emit combinations of wavelengths. People with synesthesia are similarly a problem. Dissolving this does have some subtle issues. The real point is that the difficulty of dissolving what we mean when we say a given color is not good evidence that colors cannot be dissolved.
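JoshuaZ's point - that a physical wavelength-band definition of "green" and a perceptual-response definition can come apart - can be made concrete with a toy sketch. Everything here is hypothetical for illustration: the 450 nm case is JoshuaZ's thought experiment, not a fact about human vision.

```python
def is_green_by_band(wavelength_nm):
    """Physical definition (Solvent-style): light in the 520-570 nm band."""
    return 520 <= wavelength_nm <= 570

def is_green_by_response(wavelength_nm):
    """Hypothetical perceptual definition: suppose, per the thought
    experiment, that light near 450 nm happened to trigger the same
    visual reaction as the 520-570 nm band."""
    return is_green_by_band(wavelength_nm) or abs(wavelength_nm - 450) < 2

# The two definitions agree on ordinary cases...
assert is_green_by_band(550) and is_green_by_response(550)
# ...but come apart on the hypothetical stimulus:
print(is_green_by_band(450), is_green_by_response(450))  # False True
```

The point of the sketch is only that "green" defined by the physics of the stimulus and "green" defined by the reaction it provokes are different predicates, which a dissolution of the concept has to keep separate.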

Comment author: Manfred 03 February 2012 05:34:29AM *  2 points [-]

So, I think I can just say "mind projection fallacy" and you'll know what I mean about most of those things.

But yes, I am not endorsing Solvent's definition (I'll edit in a disclaimer to that effect, and explaining why I still quoted). "Green," as a human word, is a lot more like "disease" from Yvain's post than it is like "a featherless biped."

Comment author: antigonus 03 February 2012 04:08:48AM 16 points [-]

Something has gone horribly wrong here.

Comment author: JenniferRM 03 February 2012 07:00:35PM 3 points [-]

Is the apparent reference to David Stove's "What is Wrong with Our Thoughts?" intentional?

Comment author: antigonus 04 February 2012 01:37:36AM 1 point [-]

No, never seen that before.

Comment author: David_Gerard 03 February 2012 11:57:29PM 2 points [-]
Comment author: WrongBot 03 February 2012 02:44:50AM 17 points [-]

What would it take to convince you that this entire line of inquiry is confused? Not just the quantum stuff, but the general idea that qualia are ontologically basic? Not just arguments, necessarily, experiments would be good, too.

If Mitchell is unable or unwilling to answer this question, no one should give him any amount of money no matter the terms.

Comment author: Nisan 03 February 2012 03:23:24AM 9 points [-]

Is this a reasonable request? What would convince you that this line of inquiry is not confused?

Comment author: WrongBot 03 February 2012 04:23:03AM 17 points [-]

If we discover laws of physics that only seem to be active in the brain, that would convince me. If we discover that the brain sometimes violates the laws of physics as we know them, that would convince me. If we build a complete classical simulation of the brain and it doesn't work, that would convince me. If we build a complete classical simulation of the brain and it works differently from organic brains, that would convince me. Ditto for quantum versions, even, I guess.

And there are loads of other things that would be strong evidence on this issue. Maybe we'll find the XML tags that encode greenness in objects. I don't expect any of these things to be true, because if I did then I would have updated already. But if any of these things did happen, of course I would change my mind. It probably wouldn't even take evidence that strong. Hell, any evidence stronger than intuition would be nice.

Comment author: amcknight 03 February 2012 11:29:41PM 7 points [-]

Which returns us to the dilemma: either "experiences" exist and part of them is actually green, or you have to say that nothing exists, in any sense, at any level of reality, that is actually green.

The third option is my favourite:
Good news everyone! There are all kinds of different things that you can permissibly call green! Classes of wavelengths, dispositions in retinas, experiences in brains, all kinds of things! Now we have the fun choice of deciding which one is most interesting and what we want to talk about! Yay!

Comment author: David_Gerard 03 February 2012 11:50:13PM 2 points [-]

And Fallacies of Compression was just in the sequence reruns a couple of days ago, too ...

Comment author: Mitchell_Porter 04 February 2012 03:16:31AM 0 points [-]

I'm talking about experiences in brains.

Comment author: Luke_A_Somers 06 February 2012 12:58:38AM *  0 points [-]

Well, then, you've just told us where to find green. When neuroscientists find the spot to poke that makes their subjects say 'Wow, that is so GREEN', what do you say then?

I haven't been following this closely, but unless you're taking the exact dualist stance you say below that you're denying, it really seems like that should be the answer.

Comment author: [deleted] 04 February 2012 01:49:18PM *  2 points [-]

I have a question and I think maybe your answer will make it easier for other people to understand what you are arguing.

What about people who are color blind? They see, for instance, red where in objective reality the object's wavelength is "green". What happens here, in your view? In the person's experience he still sees red, but it should be green... And even though we know approximately the processes that cause color blindness, this seems to be an interesting question in your model.

Comment author: scientism 03 February 2012 07:12:47PM *  2 points [-]

For what it's worth, I don't take dreams and hallucinations to involve seeing at all, so I don't believe I have anything to explain with regard to colour in dreams and hallucinations. I take the question "Do you dream in colour?" to be incoherent whereas the question "Have you dreamt of colour / coloured things?" is fine. The former question presupposes that perception involves seeing internal imagery rather than directly perceiving the world, which I deny, and that dreaming / hallucinating can therefore be said to be a form of perception also, something which obviously can't follow from my denial of mediating imagery.

Comment author: amcknight 03 February 2012 11:30:54PM 2 points [-]

The lifeworld is the place where you exist, where time flows, and where things are actually green.

What makes you think these all happen in the same place?

Comment author: Luke_A_Somers 06 February 2012 01:01:59AM 0 points [-]

... they're all the naive interpretations of our sensations, so it really seems they ought to overlap at least.

Comment author: FeepingCreature 04 February 2012 06:02:20PM 2 points [-]

Things that my brain tells me are green, are green. Things that your brain tells you are green, are green. In cases where we disagree, split the label into my!green and your!green.

Now can we move on? This post is a waste of time.

Comment author: Eugine_Nier 06 February 2012 01:09:15AM 3 points [-]

Things that my brain tells me are green, are green. Things that your brain tells you are green, are green. In cases where we disagree, split the label into my!green and your!green.

To see the problem with the above statement, try replacing the word "green" with "true".

Comment author: FeepingCreature 06 February 2012 01:52:37AM 0 points [-]

You mean, "to see the problem with a wholly unrelated statement". Green is not the same kind of property as true.

Comment author: Eugine_Nier 06 February 2012 04:24:43AM 1 point [-]

Green is not the same kind of property as true.

Could you expand on that.

Comment author: FeepingCreature 06 February 2012 08:04:33AM 2 points [-]

Truth is an abstract, rationally defined property that has a meaning beyond my mind. To say that "things my brain tells me are true, are true" is a similar kind of claim would imply that green, like true, has a working definition beyond the perceptual. If this is the case, I'd like to know it. I'm fairly sure it's not actually possible to be wrong about a perceived color, excluding errors in memory. It's possible to consider a statement and be mistaken about its truthfulness, but is it possible to look at an object and be mistaken about the color one perceives it as? That seems nonsensical.

Comment author: Eugine_Nier 07 February 2012 05:31:43AM 2 points [-]

To say that "things my brain tells me are true, are true" is a similar kind of claim would imply that green, like true, has a working definition beyond the perceptual.

So can you provide a working definition of "true"?

Comment author: FeepingCreature 07 February 2012 01:28:42PM 0 points [-]

If there were definitely such a thing as an objective reality, my answer would be "a claim that is not in contradiction with objective reality". As it stands, I'll have to settle for "a claim that is never in contradiction with perceived reality." Note that, for instance, ludicrous claims about the distant past do in fact stand in contradiction with perceived reality, since "things like that seem not to happen now, and the behavior of the universe seems to be consistent over time" is a true claim which a ludicrous but unverifiable claim would contradict. Note that the degree to which you believe truth can be objective is exactly proportional to the degree to which you believe reality is objective and modelled by our observations.

Comment author: [deleted] 05 February 2012 08:20:08AM *  -2 points [-]

This comment is beyond stupid.

His entire point is that the very fact that we perceive colors needs to be explained. I can close my eyes and visualize any color I want, but how is that possible when colors do not exist objectively? So in order to avoid postulating that the mind is a separate entity with its own "reality" containing colors, Mitchell Porter is trying to get colors into our reality.

I am not convinced anything as drastic as quantum mechanics is needed to explain this, and I am very much a functionalist, but the issues he wants to investigate are definitely worth digging into.

But since you claim this post is a waste of time, please elaborate on exactly how colors arise in experience...

Comment author: FeepingCreature 05 February 2012 09:49:31PM *  2 points [-]

I have no idea; I'm not a neurobiologist. I'd guess that colors arise in experience by virtue of being fundamentally indexical; what a color "is" is merely a defined unique association in our brains that links sensory data to a bunch of learned responses. It's like the human soul - any property of it that you'd use to make it "unique", to differentiate your soul from another's, or to differentiate red from green, can be described as neurological activation of an associative pattern. Memories - neurological. Instinct, learning, feelings, hormones, habits - all biological or neurological. What is red? It's like fire, like roses, like blood. All associative. Could you build a brain that perceives red meaningfully differently from green while having no such associations, built-in or learned? I suspect that if I were such a being, I would not even be able to differentiate red from green, because my brain would never have been given occasion to treat a red thing with a different response than a green thing. How would you expect there to be nerves for that kind of differentiation if there was never a need for it?

Colors are associated responses and groupings for certain kinds of sensory data. They have no further identity.

That's my take.

[edit] The really stupid thing is that mysteriousness is a property of the question, not the answer! Even if we weren't able to put forward a good guess as to how colors work, that wouldn't make it a topic that calls the entirety of reductionism into question. The correct answer should then be "We don't know yet, but it's probably something in the brain and not magical and/or mysterious". Haven't we learned our lesson with consciousness?

Comment author: TrE 05 February 2012 10:01:02PM *  0 points [-]

And this view seems to be consistent with this BBC documentary excerpt; the relevant part starts at 03:00. The Himba have different and fewer color categories, probably because they don't need more or other ones.

Comment author: buybuydandavis 03 February 2012 04:02:04AM 1 point [-]

A few people talk about wavelengths of light, but I doubt that they want to assert that the light in question, as it traverses space, is actually colored green.

Why not? If anything has color, it's light.

Comment author: Manfred 03 February 2012 05:31:53AM 5 points [-]

Careful of getting sucked into a Standard Dispute :P

"Light is clearly colored."
"No it's not - imagine looking at it from the side!"
"By definition, looking at something means absorbing photons, so if we could look at a beam of light from the side it would look like the color it is."
And so on.

Comment author: Nick_Tarleton 03 February 2012 07:25:15AM *  1 point [-]
Comment author: buybuydandavis 03 February 2012 07:08:09PM *  2 points [-]

Maybe I needed to add a space in my comment.

"If any thing has color, it is light." My point was that if you want to make color a property of things in themselves, and not the reaction of your nervous system to them, green light strikes me as about as green as a green thing can get.

As for the supposed scary and interesting part of the problem, while the science of color perception is no doubt full of interesting facts and concepts, it's hardly scary, and I don't think the perception of color is scary or interesting in philosophical terms at all.

I would call some subset of the possible states of your nervous system as you perceiving green. I can't enumerate those states, but I find nothing scary about the issue; it's completely unproblematic.

What do you find scary about this?

Comment author: Emile 03 February 2012 08:39:21PM 1 point [-]

green light strikes me as about as green as a green thing can get.

How green a ray of light appears can depend on what's around it.

Comment author: buybuydandavis 03 February 2012 09:34:11PM *  2 points [-]

See the rest of my sentence. I was explicitly talking about things in themselves, and not how they appear to an observer.

The original comment I responded to

A few people talk about wavelengths of light, but I doubt that they want to assert that the light in question, as it traverses space, is actually colored green.

Anyone talking about light in the optical range as it traverses space is likely to talk about the color of that light, and assert "that's green light". More generally, outside the optical range, they're likely to talk about the type of light in terms of frequency bands.

Comment author: Morendil 03 February 2012 02:08:08PM 1 point [-]

I still don't get that.

Comment author: Mitchell_Porter 06 February 2012 02:34:50AM *  1 point [-]

As of a few minutes ago, my problems are solved for the next few months - which should be long enough for this situation never to recur. If I ever go fundraising again, I'll make sure I have something far more substantial ready to make my case.

Comment author: CarlShulman 10 February 2012 12:47:44AM 3 points [-]

By way of these posts, or due to some independent cause?

Comment author: Mitchell_Porter 07 March 2012 12:41:02AM 2 points [-]

Independent cause. I did also get $100 back from "ITakeBets" as a result of the posts, but that was all.

Comment author: Automaton 04 February 2012 06:25:52AM *  1 point [-]

Just to clarify, I don't really consider my position to be eliminative towards green, only that what we are talking about when we talk about green 'qualia' is nothing more than a certain type of sentient experience. This may eliminate what you think you are talking about when you say green, but not what I think I am talking about when I say green. I am willing to say that the part of a functional pattern of neural activity that is experienced as green qualia is identical to green in the sense that people generally mean when they talk about seeing something green. But there is no way to separate the green from the experience of seeing green.

Do you think that green is something separate from or independent of the experience of seeing green? Do you believe that seeing green is some sort of causal relationship between 'green' and a conscious mind similar to what a content externalist believes about the relationship between intentional states and external content? I don't understand why you believe so strongly that green has to be something beyond a conscious experience.

ETA: For a position that I believe to be more extreme/eliminativist than mine, see Frank Jackson's paper Mind and Illusion, where he argues that seeing red is being in a representational state that represents something non-physical (and also not real), in the same way that someone could be in a representational state representing a fairy.

From that paper:

Intensionalism means that no amount of tub-thumping assertion by dualists (including by me in the past) that the redness of seeing red cannot be accommodated in the austere physicalist picture carries any weight. That striking feature is a feature of how things are being represented to be, and if, as claimed by the tub thumpers, it is transparently a feature that has no place in the physicalist picture, what follows is that physicalists should deny that anything has that striking feature. And this is no argument against physicalism. Physicalists can allow that people are sometimes in states that represent that things have a non-physical property. Examples are people who believe that there are fairies. What physicalists must deny is that such properties are instantiated.

I think he is basically saying that if you can imagine the concept of something that isn't physically real (like a fairy), why couldn't you have a state representing redness, even though redness is not physically real? Or, for an example involving something ontologically unreal, one could have beliefs about the vitalist's élan vital even though it is not made of any ontological entities from the physical universe, so why not have conscious states about red even though it has no place in physical ontology. I'm not sure I agree with his belief that colors can't be considered to have physical ontology, but he seems to agree with you on that point.

Comment author: Bruno_Coelho 03 February 2012 10:37:54PM 0 points [-]

Apparently there is no guaranteed return. Suppose that your theoretical assumptions are correct; then why don't people get it? I mean, if the explanations have some power, other physicists will accept them.

Maybe future neurobiology will help us with the consciousness debate. FAI is another matter.

Comment author: loveandthecoexistenc 04 February 2012 01:46:30AM -1 points [-]

The most apparent way to talk about such topics here is to completely overhaul the terminology and canonical examples.

And then do something with the resulting referential void.

Certainly not a task for a group of fewer than four people, and likely not a task for a group of fewer than 40.

Is your attempt to single-handedly contribute, with all the costs it imposes, likely enough to give a significant positive result?