Max Tegmark publishes a preprint of a paper arguing from physical principles that consciousness is “what information processing feels like from the inside,” a position I've previously articulated on LessWrong. It's a very physics-rich paper, but here's the most accessible description I was able to find within it:

If we understood consciousness as a physical phenomenon, we could in principle answer all of these questions [about consciousness] by studying the equations of physics: we could identify all conscious entities in any physical system, and calculate what they would perceive. However, this approach is typically not pursued by physicists, with the argument that we do not understand consciousness well enough.

In this paper, I argue that recent progress in neuroscience has fundamentally changed this situation, and that we physicists can no longer blame neuroscientists for our own lack of progress. I have long contended that consciousness is the way information feels when being processed in certain complex ways, i.e., that it corresponds to certain complex patterns in spacetime that obey the same laws of physics as other complex systems, with no "secret sauce" required.

The whole paper is very rich, and worth a read.


The claim that "consciousness is what information processing feels like from the inside" strikes me as distinctly un-illuminating: why should information processing feel like anything from the inside?

[anonymous]

Because it does. Why do charged particles attract or repel? Why do some particles have mass? At some point the answer is simply "because that's how the universe works."

We know consciousness exists, as we each have first-hand evidence. If we want to believe that we live in a reducible universe, then there must be some reduction bringing consciousness down to a most basic physical process. At some point that reductive explanation ends with a very unsatisfying "because that's just how the universe works."

But I would be very suspicious of any model which reached that level before arriving at the level of fundamental particles and their interactions. Why? Because every other phenomenon in the universe also reduces down to that level, so why should we expect the explanation of consciousness to be different?

As far as I can tell, the paper is asking this question: if the world is just a wavefunction, why do we see it as a bunch of material things? Tegmark is trying to show that viewing the world as a bunch of material things is somehow special, that it optimizes some physical or mathematical quantity. That's impressive if he can make it work, but I'm not sure it's on the right track. Maybe a better question would be, which ways of looking at the wavefunction are the most likely to contain evolution? After all, minds are optimized for the kind of information processing that is useful for evolution. (Um, what I really meant here was "useful for increasing fitness", thx Mark_Friedenbach.)

I think you're on the right track in assessing the paper's content. Here's what I retained from a first reading: He considers a quantum density matrix. He decides to separate it in a way which minimizes the mutual information of the two parts, hoping that this might be the amount of conscious information present, but it always turns out to be less than a bit. Also, his method of division tends to produce parts which are static (energy eigenstates). So in dividing up the density matrix, he adds a second condition (alongside "minimize the mutual information") so that the resulting parts will evolve over time. This increases the minimum mutual information, but not substantially.
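(For concreteness, here is a minimal numerical sketch of the quantity he is minimizing, the quantum mutual information of one fixed bipartition; the helper functions and the example state are my own illustration, not code from the paper.)

```python
# Minimal sketch (not from the paper): quantum mutual information
# I(A:B) = S(A) + S(B) - S(AB) for one fixed bipartition of a density matrix.
# Tegmark's procedure searches over ways of splitting the system to minimize this.
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr[rho log2 rho], in bits."""
    eigvals = np.linalg.eigvalsh(rho)
    eigvals = eigvals[eigvals > 1e-12]          # drop numerical zeros
    return float(-np.sum(eigvals * np.log2(eigvals)))

def reduced_states(rho_ab, d_a, d_b):
    """Partial traces of a (d_a*d_b) x (d_a*d_b) density matrix."""
    rho = rho_ab.reshape(d_a, d_b, d_a, d_b)
    rho_a = np.einsum('ijkj->ik', rho)          # trace out subsystem B
    rho_b = np.einsum('ijik->jk', rho)          # trace out subsystem A
    return rho_a, rho_b

def mutual_information(rho_ab, d_a, d_b):
    rho_a, rho_b = reduced_states(rho_ab, d_a, d_b)
    return (von_neumann_entropy(rho_a) + von_neumann_entropy(rho_b)
            - von_neumann_entropy(rho_ab))

# Example: a Bell pair gives I(A:B) = 2 bits; a product state gives 0.
bell = np.zeros((4, 4))
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
print(mutual_information(bell, 2, 2))   # ~2.0
```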

I regard the paper as a very preliminary contribution to a new approach to quantum ontology. In effect he's telling us how the wavefunction divides into things, if we assume that the division is made according to this balance between minimal mutual information and some dynamics in the parts. Then he can ask whether the resulting things look like objects as we know them (reasonably so) and whether they look like integrated information processors (less success there, in my opinion, even though that was the aim).

[anonymous]

Are they? Minds are optimized by evolution. That's not the same as for evolution.

That's too abstract; let's go down a level. I just meant that if catching rabbits is good for your genes, you might evolve eyes that see rabbits, not wavefunctions transformed to Fourier space or something. I edited the bit you were responding to; I guess it was unclear.

[anonymous]

You said:

Maybe a better question would be, which ways of looking at the wavefunction are the most likely to contain evolution?

But using your example, eyes don't "contain" evolution. They provide a capability which is advantageous under natural selection, but they do not themselves perform evolution by natural selection. It's not clear to me that we should expect any connection with consciousness and evolution, other than the historical description of how natural consciousness came to be.

Thanks for posting this.

Do I recall correctly that Gary Drescher also uses the 'what information processing feels like from the inside' view of consciousness, and that Eliezer thought it was at least a good insight?

I've been warming to the idea as a useful insight, but I'm still pretty confused; it feels like there's a useful (but possibly quantitative and not qualitative) difference between myself (obviously 'conscious' for any coherent extrapolated meaning of the term) and my computer (obviously not conscious (to any significant extent?)), which is not accounted for by saying merely that consciousness is the feeling of information processing.

I think the idea of consciousness might really be a confused suite of notions arising from real, more fundamental ingredients, including the feeling of information being processed. So maybe it's more like 'the properties of an information processor give rise (possibly in combination with other things) to the things we refer to by 'consciousness''. I'm struggling to think of cases where we can't (at least in principle) taboo consciousness and instead talk about more specific things that we know to refer non-confusedly to actual things. And saying 'Consciousness is X' seems to take consciousness too seriously as a useful or meaningful or coherent concept.

(I guess consciousness is often treated as a fundamental ethical consideration that cannot be reduced any further, but I am skeptical of the idea that consciousness is fundamental to ethics per se, and extremely suspicious of ethical considerations that have not been shown reducible to 'selfishness'+game/decision theory.)

I think there's a notable probability of the disjunction: either consciousness is meaningless enough that any attempt to reduce it as far as Tegmark tries is misguided, or such a reduction is possible in non-quantum models and Tegmark's approach (even if it is incomplete or partially incorrect) generalises.

The integrated information theory (IIT) of consciousness claims that, at the fundamental level, consciousness is integrated information, and that its quality is given by the informational relationships generated by a complex of elements (Tononi, 2004).

This theory, which is in the background of the Tegmark paper, allows for different qualities of consciousness. The informational relationships in your computer are vastly simpler than those in your brain ... so the quality of consciousness would be correspondingly poor.

I was digging in the references because I thought 'consciousness' meant 'self-awareness' and I was confused about the direction of the discussion. Now I know that consciousness is about experience (e.g., the experience of seeing a color, or hearing a sound, etc) and the quality of that experience.

Roughly guessing from the Tononi paper, which is beyond my ken, the "informational relationships generated by a complex of elements" can be so complex that new mathematics or physics is required to characterize the complexity and topology of these relationships.

The Tononi paper has a thought experiment about a photodiode and a digital camera that is helpful in explaining the difference in complexity of information integration:

Skip down to the section Information: the photodiode thought experiment.
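(A rough numerical illustration of the photodiode point, under my own simplifying assumptions: the quantity below is the total correlation of the elements, a crude stand-in for integration, not Tononi's actual phi, and the sample sizes and noise level are arbitrary.)

```python
# Rough illustration (not Tononi's phi): a camera's many photodiodes carry many
# bits but no integration, because each pixel is independent of the rest. Here
# "integration" is crudely approximated by the total correlation
# C = sum_i H(X_i) - H(X_1,...,X_n), which is zero for independent elements.
import numpy as np
from collections import Counter

def entropy(samples):
    """Empirical Shannon entropy in bits of a sequence of hashable outcomes."""
    counts = Counter(samples)
    probs = np.array(list(counts.values()), dtype=float)
    probs /= probs.sum()
    return float(-np.sum(probs * np.log2(probs)))

def total_correlation(data):
    """Sum of marginal entropies minus joint entropy, estimated from samples."""
    joint = entropy([tuple(row) for row in data])
    marginals = sum(entropy(list(data[:, i])) for i in range(data.shape[1]))
    return marginals - joint

rng = np.random.default_rng(0)
n_elements, n_samples = 8, 20000

# "Camera": each element is an independent coin flip -> many bits, no integration.
camera = rng.integers(0, 2, size=(n_samples, n_elements))

# "Coupled system": every element reports one shared scene bit, with 10% noise,
# so the elements carry information about each other.
scene = rng.integers(0, 2, size=(n_samples, 1))
noise = (rng.random((n_samples, n_elements)) < 0.1).astype(int)
coupled = scene ^ noise

print(total_correlation(camera))    # ~0 bits
print(total_correlation(coupled))   # several bits of integration
```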

[anonymous]

I've been warming to the idea as a useful insight, but I'm still pretty confused; it feels like there's a useful (but possibly quantitative and not qualitative) difference between myself (obviously 'conscious' for any coherent extrapolated meaning of the term) and my computer (obviously not conscious (to any significant extent?)), which is not accounted for by saying merely that consciousness is the feeling of information processing.

Why do you think your computer is not conscious? It probably has more of a conscious experience than, say, a flatworm or sea urchin. (As byrnema notes, conscious does not necessarily imply self-aware here.)

Probably, various physical processes could be used to create this "way information feels when being processed". Why call it physics then, rather than informatics?

Of course, behind all informatics there is a physical process. But we don't call calculating 2*22 physics. Why then call "consciousness" (or "evolution") physics?

[anonymous]

I don't understand this comment. What is it you would like us to be careful not to call 'physics'? The paper does contain some physics.

[anonymous]

Did you read the paper? He reduces information processing down to quantum states and operators, thereby fully reducing the theory to a physical model. I'd call that physics.

When is the reduction of a problem to QM (or rather its translation) possible and productive?

Sometimes it is impossible. The planetary orbit stability problem isn't translatable to QM at all, because QM doesn't do gravity.

I doubt it is always productive, even when possible. One could translate the game of tic-tac-toe to QM. But what's the point? A simple look-up table would do. Can we translate the look-up table optimization to QM? Maybe, but it would hardly give us any new insight into the game of tic-tac-toe.

[anonymous]

It has implications for morality if the existing definitions of consciousness turn out to be incorrect or incomplete. It has implications for singularity technologies of mind uploading and cryonics revival if a theory of consciousness can be extended to predict end-of-identity. It's also simply interesting for its own sake.

We have already had a bad mix or two of consciousness and QM in the past.

Some people demanded a conscious observer to collapse the wave function!

And some people talked about the "quantum nature of consciousness" quite a lot.

Both were quite unnecessary.

I am not saying that it is therefore forbidden to think about consciousness and QM at the same time, but that it should be done cautiously, very cautiously, to avoid old mistakes.

[anonymous]

Honestly I don't understand the point you're making. It sounds a lot like "we should have a semantic stop sign!" If the people before us have done a piss-poor job of reducing consciousness to physicality, then that should encourage us to do better, not stop work entirely.

His term "perceptronium" is handy.

Let us also conjecture another principle that conscious systems must satisfy: that of autonomy, i.e. that information can be processed with relative freedom from external influence.

That's never been part of my concept of consciousness. E.g. I think conscious subroutines are possible, but need not have any autonomy.

In fact we have a counterexample right now: tulpas.

Consciousness implies recognizing actions as associated with the actor or not? To recognize such a correlation implies some means to 'cause' action (otherwise the perceptronium is just a pattern detector, and I don't think that suffices for consciousness). To 'cause' actions implies that the action is not (detectably) determined by external effects (but effectively only by structure internal to the actor). Thus you need this autonomy (if you subscribe to this model).

Consciousness implies recognizing actions as associated with the actor or not?

Not on my concept of consciousness. For me, consciousness is about subjective experience, not about agency. To paraphrase Bentham, "My question is not, Can they reason? nor, Can they act? but, Can they suffer?"

Hm. Trying to come up with a matching definition of "suffer" in this context.

How about "perceiving damage". But that is not conscious. That could be said about any minimal (neurological) circuit.

"Perceiving damage to self". But that recurses to "self". And it we avoid "self" by using "actor" (which is more specific but needs a simpler concept) we are back where I was.

Also "suffer" implies some kind of stress. Some mode that deals with existential danger. Which can be a) act actively to avoid that danger or b) display signals to some perpetrator to reduce the danger or c) failure due to (partial) break down of essential systems.

But if I use just "being in a state of suffering" (as in a) to c) above), this is still not conscious, so I guess something must be missing.

Suffering is not about damage. I'm not even sure it's about aversion. Suffering seems to be things like boredom, sadness, and frustration. The best hypothesis I've found so far is "internal conflict". The primary capability that enables suffering seems to be desire.

Pain and damage don't cause suffering; it's wanting to get away from them and being unable to that does. If you feel a jolt of excruciating pain, it disappears entirely when you flinch away, and you can negate its source to remove the risk in the future, then you'll probably experience it in a highly positive way.

I agree with that, but it just explains words with other insufficiently defined words (insufficient for the purpose of defining consciousness). I tried to reduce "suffer" to more primitive and unambiguous terms. If you disagree with my proposal, please propose an alternative in that format.

Question: would someone with a stronger physics background be willing to explain what Tegmark's "quantum factorization problem" is? (Section 1E)

I'm not entirely sure - he didn't explain it all that clearly. But it is definitely reminiscent of the factorization problems one sees in intro quantum mechanics, like noticing when you can do psi(x,y,z) = X(x) Y(y) Z(z). The similarity is that this scheme is all about finding that kind of joint to carve nature at - find things that are relatively independent from each other but strongly interacting within themselves.
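(A small sketch of what that kind of factorization question looks like numerically for a bipartite pure state; the example states, helper names, and tolerance are my own illustration, not anything from the paper.)

```python
# Sketch (my own, not from the paper): does a bipartite pure state factor as
# psi_AB = psi_A (x) psi_B?  Reshape the state vector into a d_a x d_b matrix;
# its singular values are the Schmidt coefficients, and exactly one nonzero
# coefficient means the state carves cleanly into two independent parts.
import numpy as np

def schmidt_coefficients(psi, d_a, d_b):
    matrix = np.asarray(psi, dtype=complex).reshape(d_a, d_b)
    return np.linalg.svd(matrix, compute_uv=False)

def is_product_state(psi, d_a, d_b, tol=1e-10):
    return int(np.sum(schmidt_coefficients(psi, d_a, d_b) > tol)) == 1

product = np.kron([1, 0], [1, 1]) / np.sqrt(2)     # |0> (x) |+>  -> factors
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)         # entangled    -> doesn't

print(is_product_state(product, 2, 2))   # True
print(is_product_state(bell, 2, 2))      # False
```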

Ok, that's a start, thanks. So is he suggesting that the way consciousness carves reality at the joints is special?

...in which case, this carving must be done at the analysis stage, right, not at the perception stage? Because at the perception stage, our senses work just like other (non-conscious) sensors.

And then finally, if he is talking about the way the conscious mind carves reality at the joints, this is processing after we have all the data, so why is quantum mechanics relevant? (I imagine that a creature could analyze sensory data in lots of different ways; for example, a bee might use Fourier analysis for all I know, whereas we might use some sort of object identification criteria…)

It's fine if you don't know the answers to these questions, or they are too wrong to respond to.

Another way of asking my question is, since we evolved from non-conscious creatures, and the hardware is largely the same, where does using the wave function to carve reality at the joints come in?

He's trying to find the joints that you have to carve in quantum mechanical systems so that you can find any consciousnesses that happen to be in them.

So yes, it's entirely in the analysis stage - finding how to describe in quantum mechanical terms those things we already know how to describe in informal language, like 'person' or 'choice' or 'memory'.

Ah, thanks. My interpretation was that he was saying that conscious minds do that particular carving, but your interpretation is that he proposes that particular carving for finding conscious minds – and other entity like objects. That makes more sense.

[anonymous]

One irrelevant comment about this cool article.

Rather than just remain immobile as a gold ring, it must exhibit complex dynamics so that its future state depends in some complicated (and hopefully controllable/programmable) way on the present state. Its atom arrangement must be less ordered than a rigid solid where nothing interesting changes, but more ordered than a liquid or gas.

I think Tegmark is mischaracterizing solids, at least above absolute zero. Materials, even solid gold, are subject to lots of interesting dynamics, such as creep and grain growth. Wikipedia has a nice animation of the latter.