I would essentially deny that anything is actually green, but assert that there is a mental state of "experiencing green", which is a certain functional state of a mind.
And what functional state is that? Can you write seeGreen()?
I would essentially deny that anything is actually green, but assert that there is a mental state of "experiencing green", which is a certain functional state of a mind.
And what functional state is that? Can you write seeGreen()?
With the aid of qualia computing and a quantum computer, perhaps ;-)
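To make the joke slightly more concrete: on a strictly functionalist reading, "experiencing green" would be nothing over and above a state individuated by its causal role. Here is a deliberately toy sketch of that reading; every name in it is made up for illustration, and it is a caricature, not a theory of qualia:

```python
# Toy functionalist caricature: "experiencing green" as a functional state,
# i.e. a state defined entirely by what it is caused by and what it causes.
# All names here are hypothetical illustration.

class ToyMind:
    def __init__(self):
        self.state = None

    def see_green(self, wavelength_nm):
        # Green light is roughly 495-570 nm; "entering the green state"
        # here just means being disposed to report and act accordingly.
        if 495 <= wavelength_nm <= 570:
            self.state = "experiencing_green"
        else:
            self.state = "not_green"
        return self.state

mind = ToyMind()
print(mind.see_green(520))  # -> experiencing_green
print(mind.see_green(650))  # -> not_green
```

The dispute in this thread is, of course, precisely over whether anything like this stub could ever capture what the experience of green *is*, rather than merely what it does.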
I'll answer only with an analogy. Visualize a cluster of magnetized particles on a hard disk platter: billions of atoms linked by complicated quantum electromagnetic fields. A magnetic head passes over them and fires electrical currents, which are sent to a semiconductor. That semiconductor then starts doing computation, which is just electrons flowing from one part to another, from one semiconductor to the next. And then another flow of electrons, fired by an electron gun, hits a screen, and they end up printing numbers.
Do you have that in your mind's eye? Now, those numbers are the digits of Pi. According to my theory, the digits of Pi that appear on the screen are the same thing as some aspect of all those billions of Pi-less atoms in motion. Well, yes, and? There is no dualism involved in that.
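The digits on the screen come from a perfectly ordinary computation. For the curious, here is a compact way to actually produce them (Gibbons' unbounded spigot algorithm, shown only to ground the analogy; the point stands for any Pi program):

```python
def pi_digits(n):
    """Return the first n decimal digits of Pi.

    Uses Gibbons' unbounded spigot algorithm: pure integer arithmetic,
    yet "Pi-less" at the level of the individual state variables.
    """
    out = []
    q, r, t, k, d, l = 1, 0, 1, 1, 3, 3
    while len(out) < n:
        if 4 * q + r - t < d * t:
            # The next digit is settled; emit it and rescale the state.
            out.append(d)
            q, r, d = 10 * q, 10 * (r - d * t), 10 * (3 * q + r) // t - 10 * d
        else:
            # Consume one more term of the underlying series.
            q, r, t, k, d, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)
    return out

print(pi_digits(10))  # -> [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

None of the six integers in the loop "is" Pi, just as no single atom on the platter is; the digits are an aspect of the whole process.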
As for something being "green": we can detect "green" with webcams and computers. My Gimp has an anti-red-eye filter that can not only detect a particular kind of red, and even its shape, but remove it. Being green is a very physical property of light, or of matter that emits or absorbs light. There is even less dualism in that than in my Pi example, or in any other kind of file (text, pictures, sound, movies, ...) stored on a hard disk.
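Detecting "green" really is just arithmetic over pixel values. A minimal sketch, with no webcam and an arbitrary threshold chosen purely for illustration:

```python
def looks_green(rgb, margin=40):
    """Crude pixel classifier: 'green' if the G channel clearly dominates.

    rgb: an (r, g, b) tuple with 0-255 channels. The margin is arbitrary,
    chosen only to show that 'detecting green' is plain arithmetic, in the
    same spirit as a red-eye filter detecting a particular kind of red.
    """
    r, g, b = rgb
    return g > r + margin and g > b + margin

print(looks_green((30, 200, 40)))  # -> True
print(looks_green((200, 40, 30)))  # -> False (this is red-eye territory)
```

A real filter adds shape detection on top, but the core classification step is no more mysterious than this.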
Both you and prase seem to be missing the point. The experience of green has nothing to do with wavelengths of light. Wavelengths of light are completely incidental to the experience. Why? Because you can experience the qualia of green thanks to synesthesia. Likewise, if you take LSD at a sufficient dose, you will experience many colors that are unrelated to the particular input your senses are receiving. Finally, you can also experience such colors in a dream. I did that last night.
The experience of green is not the result of information-processing that works to discriminate between wavelengths of light. Instead, the experience of green was recruited by natural selection to be part of an information-processing system that discriminates between wavelengths of light. If it had been more convenient, less energetically costly, more easily accessible in the neighborhood of exploration, etc. evolution would have recruited entirely different qualia in order to achieve the exact same information-processing tasks color currently takes part in.
In other words, stating what stimuli trigger the phenomenology is not going to help at all in elucidating the very nature of color qualia. For all we know, other people may experience feelings of heat and cold instead of colors (locally bound to objects in their 2.5D visual field), and still behave reasonably well as judged by outside observers.
you don't immediately generalize and say: 'no universe capable of exhaustive description by mathematically precise laws can ever contain conscious awareness'. Why not?
My problem is not with mathematically precise laws; my problem is with the objects said to be governed by the laws. The objects in our theories don't have the properties needed to be the stuff that makes up experience itself.
Quantum mechanics by itself is not an answer. A ray in a Hilbert space looks less like the world than does a scattering of particles in a three-dimensional space. At least the latter still has forms with size and shape. The significance of quantum mechanics is that conscious experiences are complex wholes, and so are entangled states. So a quantum ontology in which reality consists of an evolving network of states drawn from Hilbert spaces of very different dimensionalities, has the potential to be describing conscious states with very high-dimensional tensor factors, and an ambient neural environment of small, decohered quantum systems (e.g. most biomolecules) with a large number of small-dimensional tensor factors. Rather than seeing large tensor factors as an entanglement of many particles, we would see "particles" as what you get when a tensor factor shrinks to its smallest form.
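The dimension bookkeeping behind "tensor factors" can be illustrated without any physics library: the state space of a composite system is the Kronecker product of the factor spaces, and an entangled state is precisely one that cannot be written as a single Kronecker product. A small pure-Python sketch for two qubits:

```python
def kron(u, v):
    """Kronecker product of two state vectors (lists of amplitudes)."""
    return [a * b for a in u for b in v]

def is_product_state(psi):
    """For a two-qubit vector [a00, a01, a10, a11], psi factors into
    two one-qubit states iff the 2x2 amplitude matrix has rank 1,
    i.e. its determinant vanishes."""
    a00, a01, a10, a11 = psi
    return abs(a00 * a11 - a01 * a10) < 1e-12

sep = kron([1, 0], [0, 1])           # |0>|1>: a product of two factors
bell = [2**-0.5, 0, 0, 2**-0.5]      # (|00> + |11>)/sqrt(2): entangled
print(len(sep))                      # -> 4, i.e. 2 * 2: dimensions multiply
print(is_product_state(sep))         # -> True
print(is_product_state(bell))        # -> False: no way to factor it
```

"Shrinking a tensor factor to its smallest form", in the comment's sense, would correspond to a factor of dimension 2 like the single qubits above, while a large conscious state would be one irreducible high-dimensional factor.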
I emphasize again that an empirically adequate model of reality as evolving tensor network would still not be the final step. The final step is to explain exactly how to identify some of the complicated state vectors with individual conscious states. To do this, you have to have an exact ontological account of phenomenological states. I think Husserlian transcendental phenomenology has the best ideas in that direction.
Once this is done, the way you state the laws of motion might change. Instead of saying 'tensor factor T with neighbors T0...Tn has probability p of being replaced by Tprime', you would say 'conscious state C, causally adjacent to microphysical objects P0...Pn, has probability p of evolving into conscious state Cprime' - where C and Cprime are described in a "pure-phenomenological" way, by specifying sensory, intentional, reflective, and whatever other ingredients are needed to specify a subjective state exactly.
This has the potential to get rid of the dualism because you are no longer saying conscious state C is really a coarse-graining of a microphysical state. The ontology employed in the subjective description, and the ontology employed for the purposes of stating an exact physical law, have become the same ontology - that is the aim. The Churchlands have written about this idea, but they come at it from the other direction, supposing that folk psychology might one day be replaced by a neurosubjectivity in which you interpret your experience in detail as "events happening to a brain". That might be possible, but the whole import of my argument is that there will have to be some change in the physical ontology employed to understand the brain, before that becomes possible.
Replies to other comments on this post will be forthcoming, but not immediately.
Quantum mechanics by itself is not an answer. A ray in a Hilbert space looks less like the world than does a scattering of particles in a three-dimensional space. At least the latter still has forms with size and shape. The significance of quantum mechanics is that conscious experiences are complex wholes, and so are entangled states. So a quantum ontology in which reality consists of an evolving network of states drawn from Hilbert spaces of very different dimensionalities, has the potential to be describing conscious states with very high-dimensional tensor factors, and an ambient neural environment of small, decohered quantum systems (e.g. most biomolecules) with a large number of small-dimensional tensor factors. Rather than seeing large tensor factors as an entanglement of many particles, we would see "particles" as what you get when a tensor factor shrinks to its smallest form.
[...]
Once this is done, the way you state the laws of motion might change. Instead of saying 'tensor factor T with neighbors T0...Tn has probability p of being replaced by Tprime', you would say 'conscious state C, causally adjacent to microphysical objects P0...Pn, has probability p of evolving into conscious state Cprime' - where C and Cprime are described in a "pure-phenomenological" way, by specifying sensory, intentional, reflective, and whatever other ingredients are needed to specify a subjective state exactly.
You are hitting the nail on the head. I don't expect people on LessWrong to understand this for a while, though. There is actually a good reason why the cognitive style of rationalists, at least statistically, is particularly ill-suited for making sense of the properties of subjective experience and how they constrain the range of possible philosophies of mind. The main problem is the axis of variability of "empathizer vs. systematizer." LessWrong is built on a highly systematizing meme-plex that attracts people who have a motivational architecture particularly well suited for problems that require systematizing intelligence.
Unfortunately, recognizing that one's consciousness is ontologically unitary requires a lot of introspection and trusting one's deepest understanding against the conclusions that one's working ontology suggests. Since LessWrongers have been trained to disregard their own intuitions and subjective experience when thinking about the nature of reality, it makes sense that the unity of consciousness will be a blind spot for as long as we don't come up with experiments that can show the causal relevance of such unity. My hope is to find a computational task that consciousness can achieve at a runtime complexity that would be impossible for a classical neural network implemented within the known physical constraints of the brain. However, I'm not very optimistic this will happen any time soon.
The alternative is to lay out specific testable predictions involving the physical implementation of consciousness in the brain. I recommend reading David Pearce's physicalism.com, which outlines an experiment that would convince even an eternal quantum-mind skeptic that the brain is indeed a quantum computer.
I am super late to the party. But I want to say that I agree with you and I find your line of research interesting and exciting. I myself am working on a very similar space.
I own a blog called Qualia Computing. The main idea is that qualia actually plays a causally and computationally relevant role. In particular, it is used in order to solve Constraint Satisfaction Problems with the aid of phenomenal binding. Here is the "about" of the site:
Qualia Computing? In brief, epiphenomenalism cannot be true. Qualia, it turns out, must have a causally relevant role in forward-propelled organisms, for otherwise natural selection would have had no way of recruiting it. I propose that the reason why consciousness was recruited by natural selection is found in the tremendous computational power that it affords to the real-time world simulations it instantiates through the use of the nervous system. What's more, the specific computational horsepower of consciousness is phenomenal binding - the ontological union of disparate pieces of information by becoming part of a unitary conscious experience that synchronically embeds spatiotemporal structure. While phenomenal binding is regarded as a mere epiphenomenon (or even as a totally unreal non-happening) by some, one need only look at cases where phenomenal binding (partially) breaks down to see its role in determining animal behavior.
Once we recognize the computational role of consciousness, and the causal network that links it to behavior, a new era will begin. We will (1) characterize the various values of qualia in terms of their computational properties, and (2) systematically explore the state-space of possible conscious experiences.
(1) will enable us to recruit the new qualia varieties we discover thanks to (2) so as to improve the capabilities of our minds. This increased cognitive power will enable us to do (2) more efficiently. This positive-feedback loop is perhaps the most important game-changer in the evolution of consciousness in the cosmos.
We will go from cognitive sciences to actual consciousness engineering. And then, nothing will ever feel the same.
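For readers who want the classical baseline being contrasted with here: a Constraint Satisfaction Problem can of course be solved by ordinary backtracking search, with no appeal to binding of any kind. A minimal sketch (the whole point of the blurb above is the claim that consciousness does this differently):

```python
def solve_csp(variables, domains, constraints, assignment=None):
    """Minimal backtracking CSP solver.

    constraints: list of (var_a, var_b, predicate) triples; the predicate
    must hold for every constrained pair that is fully assigned.
    Returns a satisfying assignment dict, or None if none exists.
    """
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        assignment[var] = value
        if all(pred(assignment[a], assignment[b])
               for a, b, pred in constraints
               if a in assignment and b in assignment):
            result = solve_csp(variables, domains, constraints, assignment)
            if result is not None:
                return result
        del assignment[var]  # backtrack
    return None

# Example: 3-coloring a triangle plus one pendant vertex.
ne = lambda x, y: x != y
edges = [("A", "B", ne), ("B", "C", ne), ("A", "C", ne), ("C", "D", ne)]
colors = {v: ["red", "green", "blue"] for v in "ABCD"}
print(solve_csp(list("ABCD"), colors, edges))
```

Whether phenomenal binding buys anything over this kind of serial search is exactly the open empirical question the blog is about.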
Also, see: qualiacomputing.com/2015/04/19/why-not-computing-qualia/
I'm happy to talk to you. I'd love to see where your research is at.
I suspect half speed is actually a rational decision given some underlying model AnnaSalamon was not aware of explicitly.
For instance, she may intuitively feel that she has just passed the hotel. If so, being extra careful to look for features and landmarks around you that could hint at whether this happened works best at half speed. Are there fewer hotels around? Is it a residential area? Does the amount of economic activity seem to be increasing or decreasing as I move in this direction? Then you can turn around and get there faster.
Formalizing the precise model that would make half-speed the rational choice may be a bit complicated. But that's what the Bayesian approach to cognitive sciences would try to do first.
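As a toy illustration of what such a formalization might look like (all numbers invented): suppose the hotel is behind you with some probability, and walking slower costs time if it is ahead but wastes less backtracking if it is behind. Minimizing expected arrival time can then favor a speed well below your maximum:

```python
# Toy decision model: can walking slower than full speed be optimal when
# you suspect you already passed your destination? All parameters invented.

FULL_SPEED = 1.5    # m/s, normal walking speed
P_PASSED   = 0.8    # prior probability the hotel is already behind you
D_AHEAD    = 200.0  # meters to the hotel if it is still ahead
D_BEHIND   = 100.0  # meters back to the hotel if you already passed it
NOTICE_T   = 60.0   # seconds until the surroundings reveal the mistake

def expected_time(s):
    t_ahead = D_AHEAD / s
    # If the hotel is behind you: you walk forward for NOTICE_T seconds
    # (covering NOTICE_T * s meters), then backtrack at full speed.
    t_behind = NOTICE_T + (NOTICE_T * s + D_BEHIND) / FULL_SPEED
    return (1 - P_PASSED) * t_ahead + P_PASSED * t_behind

speeds = [0.1 + 0.01 * i for i in range(141)]  # 0.1 .. 1.5 m/s
best = min(speeds, key=expected_time)
print(round(best, 2))  # -> 1.12, an interior optimum below FULL_SPEED
```

With these (made-up) parameters the optimum is about 75% of full speed; a stronger prior of having passed the hotel pushes it lower still. That is the flavor of model a Bayesian account of the half-speed intuition would try to fit.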
I find rating the statements about consciousness hard because the scale doesn't distinguish Agree/Disagree from "I think I know the answer"/"I don't know"
Good feedback! In the future I will always add that option. The statistical analysis is trickier, but it can be done :)
Done. Very interesting.
It is quite long, but I expected as much. Maybe you should split it into separate tests (though I guess you are interested in, and expect, correlations between the parts).
I found the questions very carefully worded. I especially liked the last question set with the standard-deviation scale. When, while checking, I noticed that I didn't have many marks in the zero column, I judged that I had probably fallen prey to some bias, reconsidered all entries, and moved some toward the mean. I left only those entries in the ±2 columns where I knew from other tests that I fell into that range.
I'm also very interested in the results. When do you expect to publish them?
Thanks for your feedback. I am aiming to have the writeup done by August 8th. You will be able to find it in Qualia Computing.
Announcement:
Enough people are continuing to answer the questionnaire that it makes sense to extend the deadline until midnight (California time) on Sunday, August 2nd, 2015.
Thanks for helping! I am aiming to have the writeup with the results ready by August 8th.
I have a question:
In your post, "A Workable Solution to the Problem of Other Minds", you talk about solving the problem by connecting and disconnecting minds (i.e. doing mind-coalescing and de-coalescing). I also had this idea, but I didn't really develop it much. Do you know where I could read more about this proposed solution to the problem of other minds?
doing mind-coalescing and de-coalescing
That is not enough to solve the problem of other minds, as the article explains. The main problem is that when you incorporate a whole brain into your overall brain-mass by connecting to it, you can't be certain whether the other being was conscious to begin with or whether the effect is simply a result of your massively amplified brain.
That's why you need a scheme that allows the other being to solve a puzzle while you are disconnected. The puzzle needs to be such that only a conscious intelligence could solve it. And to actually verify that the entity solved it on its own you need to connect again to it and verify while merged that the solution is found there.
Of course you need to make sure that you distract yourself while you are temporarily disconnected, otherwise you may suspect you accidentally solved the phenomenal puzzle on your own.
The solution has a minimum of complexity, and to my knowledge no one else has proposed it before. Derek Parfit, Daniel Kolak, Borges, and David Pearce get into some amazing territory that could well lead to a solution of this sort. But they always stop one step short of arriving at something where the creation of information is a demonstration that another entity is actually conscious.
Not at all. The experience of green is the way our information-processing system internally represents "light of green wavelength", nothing else. That you can voluntarily mess with your cognitive hardware by taking drugs, that background maintenance tasks occur, or that "bugs" in the processing system can lead to an "experience of green" when there is no real green to be perceived doesn't change anything about it - the experience of green is the way "green wavelength" is encoded in our information-processing system, nothing less, nothing more.
I have seen this argument before, and I must confess that I am very puzzled about the kind of mistake that is going on here. I might call it naïve functionalist realism, or something like that. In "standard" naïve realism, people find it hard to dissociate their experiences from an existing mind-independent world, and so they go on to interpret everything they perceive as "seeing the world directly, nothing else, nothing more." Naïve realists interpret their experiences as direct, unmediated impressions of the real world.
Of course this is a problematic view, and there are killer arguments against it - for instance, hallucinations. The naïve realist can still come back and say that these are cases of "misapprehension", where you don't really perceive the world directly anymore, and that this does not mean you "weren't perceiving the world directly before." But here the naïve realist has simply not integrated the argument rationally. If you need to explain hallucinations as "failed representations of true objects", you no longer need to restate, in addition, your previous belief in "perceiving the world directly." You end up with two ontologies instead of one: inner representations and also direct perception. And yet you only need one: inner representations.
Analogously, I would describe your argument as naïve functionalist realism. You first see a certain function associated with an experience, and you decide to skip the experience altogether and simply focus on the function. In itself this is reasonable, since the data can be accounted for without problem. But when I mention LSD and dreams, suddenly those become part of another category, like a "bug" in one's mind. So here you have two ontologies, where you could certainly explain it all with just one.
Namely: green is a particular quale, which gets triggered under particular circumstances. Green does not refer to the wavelength of light that triggers it, since you can experience it without such light. To instead postulate that these cases are just "bugs" in the original function, while the original function is in and of itself what green is, simply adds another ontology when one ontology, taken on its own, already accounts for the phenomena.