Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

## A different perspective on physics

0 26 June 2017 10:47PM

(Note: reactions to this have ranged anywhere from crackpot to inspiring, based on the people I talked to before posting. I am not a physicist.)

I have been thinking about a model of physics that is fundamentally different from the ones I have been taught in school and university. It is not a theory, because it does not make predictions. It is a different way of looking at things. I have found that it makes a lot of things we normally consider weird much easier to understand.

Almost every model of physics I have read about so far is based on the idea that reality consists of stuff inside a coordinate system, and the only question is the dimensionality of the coordinate system. Relativity talks about bending space, but it still treats the existence of space as the norm. But what if there were no dimensions at all?

Rationale

If we assume that the universe is computable, then dimension-based physics, while humanly intuitive, is unnecessarily complicated. To simulate dimension-based physics, one first needs to define real numbers, which is complicated and requires that numbers be stored with practically infinite precision. Occam's Razor argues against this.

A graph model, in contrast, would be extremely simple from a computational point of view: a set of nodes, each with a fixed number of attributes, plus a set of connections between the nodes, suffices to express the state of the universe. Most importantly, it would suffice for the attributes of nodes to be simple booleans or natural numbers, which are much easier to compute with than real numbers. Additionally, transition functions to advance in time would be easy to define as well, since they could just take the form of a set of if-then rules applied to each node in turn. (These transition functions roughly correspond to physical laws in more traditional physical theories.)

Idea

Model reality as a graph structure. That is to say, reality at a point of time is a set of nodes, a set of connections between those nodes, and a set of attributes for each node. There are rules for evolving this graph over time that might be as simple as those in Conway's game of life, but they lead to very complex results due to the complicated structure of the graph.

Connections between nodes can be created or deleted over time according to transition functions.

What we call particles are actually patterns of attributes on clusters of nodes. These patterns evolve over time according to transition functions. Also, since particles are patterns instead of atomic entities, they can in principle be created and destroyed by other patterns.
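As a toy illustration of this nodes-plus-attributes-plus-rules picture, here is a minimal "graph automaton" sketch. The particular graph and the Life-flavoured rule are arbitrary choices of mine for illustration; the post itself specifies neither:

```python
# A toy graph automaton: each node carries one boolean attribute, and a
# single if-then rule updates every node from its neighbours in lockstep.
# The rule and the graph below are illustrative assumptions only.

def step(active, adjacency):
    """One tick: a node becomes active iff exactly two of its
    neighbours are currently active (an arbitrary Life-like rule)."""
    new = {}
    for node, neighbours in adjacency.items():
        live = sum(active[n] for n in neighbours)
        new[node] = (live == 2)
    return new

# A 6-node ring plus one extra "non-grid" chord (0-3).
adjacency = {
    0: [1, 5, 3], 1: [0, 2], 2: [1, 3],
    3: [2, 4, 0], 4: [3, 5], 5: [4, 0],
}
active = {n: n in (0, 1, 2) for n in adjacency}  # initial pattern
active = step(active, adjacency)
```

After one step the pattern has moved, and node 3 lit up only because of the chord 0-3 that bypasses the ring: without that extra edge it would have had a single active neighbour. This is the flavour of the claim above that patterns live on the connection structure, not on coordinates.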

Our view of reality as (almost) 3-dimensional is an illusion created by the way the nodes connect to each other. The illusion holds if the graph (a set of vertices and a set of edges), however large, satisfies the following criterion:

- There exists a mapping f(v) from vertices to (x,y,z) coordinates such that for any pair of vertices m, n, the Euclidean distance between f(m) and f(n) is approximately equal to the length of the shortest path between m and n (inaccuracies are fine so long as the distance is small, but the approximation should be good at larger distances).
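As a sanity check of this criterion (my own construction, not part of the proposal): a graph that literally is a 3D grid satisfies it, since the shortest-path distance between two grid nodes equals the L1 (Manhattan) distance between their coordinates, which approximates the Euclidean distance to within a factor of √3:

```python
# Build a small 3D grid graph and verify that BFS shortest-path length
# between two nodes matches the L1 distance of their coordinates.
from collections import deque
from itertools import product

def grid_graph(n):
    """Adjacency lists for an n x n x n axis-aligned grid graph."""
    nodes = list(product(range(n), repeat=3))
    adj = {v: [] for v in nodes}
    for (x, y, z) in nodes:
        for dx, dy, dz in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
            w = (x + dx, y + dy, z + dz)
            if w in adj:
                adj[(x, y, z)].append(w)
                adj[w].append((x, y, z))
    return adj

def shortest_path(adj, src, dst):
    """Length of the shortest path from src to dst, by BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        if v == dst:
            return dist[v]
        for w in adj[v]:
            if w not in dist:
                dist[w] = dist[v] + 1
                q.append(w)

adj = grid_graph(4)
# Graph distance equals |3-0| + |2-0| + |1-0| = 6 for these two nodes.
d = shortest_path(adj, (0, 0, 0), (3, 2, 1))
```

The interesting (and open) direction is the converse: given only the edges, recover coordinates f(v) satisfying the criterion, which is essentially a graph-embedding problem.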

A dimensionless graph model would have no contradiction between quantum physics and relativity. Quantum effects happen when patterns (particles) spread across nodes that still have connections between them besides those that make up the primary 3D grid. This also explains why quantum effects exist mostly on small scales: the pattern enforcing the 3D grid connections tends to wipe out the entanglements between particles. Space dilation happens because the patterns caused by high-speed travel make the 3D grid pattern unstable, and the illusion that dimensions exist breaks down. There is no contradiction between quantum physics and relativity if the very concept of distance is unreliable. Time dilation is harder to explain, but can be done. This is left as an exercise for the reader, since I only really understood this graph-based point of view when I realised how that works, and I don't want to spoil the aha-moment for you.

Note

This is not really a theory. I am not making predictions, I provide no concrete math, and this idea is not really falsifiable in its most generic forms. Why do I still think it is useful? Because it is a new way of looking at physics, one that makes everything much easier and more intuitive to understand, and makes the contradictions go away. I may not know the rules by which the graph needs to propagate in order to match up with experimental results, but I am pretty sure that someone more knowledgeable in math can figure them out. This is not a theory, but a new perspective under which to create theories.

Also, I would like to note that there are alternative interpretations for explaining relativity and quantum physics under this perspective. The ones mentioned above are just the ones that seem most intuitive to me. I recognize that having multiple ways to explain something is a bad thing for a theory, but since this is not a theory but a refreshing new perspective, I consider this a good thing.

I think that this approach has a lot of potential, but it is difficult for humans to analyse because our brains evolved to deal with 3D structures very efficiently but are not at all optimised to handle arbitrary graph structures with any efficiency. For this reason, coming up with an actual, mathematically complete graph-based model of physics would almost certainly require computer simulations for even simple problems.

Conclusion

Do you think the idea has merit?

If not, what are your objections?

Has research in something like this maybe already been done, and I just never heard of it?

## [Link] "Field Patterns" as a new mathematical construct.

0 24 February 2017 11:56PM

## EMdrive paper published, nearly identical to leaked draft.

0 19 November 2016 09:30PM

Paper first, article next

http://arc.aiaa.org/doi/10.2514/1.B36120

http://www.sciencealert.com/it-s-official-nasa-s-peer-reviewed-em-drive-paper-has-finally-been-published

TL;DR:

Yup, thrust estimates are the same: 1.2 millinewtons per kW, in vacuum.

Hypothesized to be pushing off the quantum foam.

"[The] supporting physics model used to derive a force based on operating conditions in the test article can be categorised as a nonlocal hidden-variable theory, or pilot-wave theory for short."

Pilot-wave theory is a slightly controversial interpretation of quantum mechanics.

It's pretty complicated stuff, but basically the currently accepted Copenhagen interpretation of quantum mechanics states that particles do not have defined locations until they are observed.

Pilot-wave theory, on the other hand, suggests that particles do have precise positions at all times, but in order for this to be the case, the world must also be strange in other ways – which is why many physicists have dismissed the idea.

But in recent years, the pilot-wave theory has been increasing in popularity, and the NASA team suggests that it could help explain how the EM Drive produces thrust without appearing to propel anything in the other direction.

"If a medium is capable of supporting acoustic oscillations, this means that the internal constituents were capable of interacting and exchanging momentum," the team writes.

Pilot wave theory

https://www.quantamagazine.org/20160517-pilot-wave-theory-gains-experimental-support/

## A Somewhat Vague Proposal for Grounding Ethics in Physics

-3 27 January 2015 05:45AM

As Tegmark argues, the idea of "final goal" for AI is likely incoherent, at least if (as he states), "Quantum effects aside, a truly well-defined goal would specify how all particles in our Universe should be arranged at the end of time."

But "life is a journey, not a destination".  So really, what we should be specifying is the entire evolution of the universe through its lifespan. So how can the universe "enjoy itself" as much as possible before the big crunch (or before and during the heat death)?*

I hypothesize that experience is related to, if not a product of, change.  I further propose (counter-intuitively, and with an eye towards "refinement" (to put it mildly))** that we treat experience as inherently positive and not try to distinguish between positive and negative experiences.

Then it seems to me the (still rather intractable) question is: how does the rate of entropy's increase relate to the quantity of experience produced?  Is it simply linear (in which case, it doesn't matter, ethically)?  My intuition is that it is more like the fuel efficiency of a car: non-linear, with a sweet spot somewhere between a lengthy boredom and a flash of intensity.

*I'm not super up on cosmology; are there other theories I ought to be considering?

**One idea for refinement: successful "prediction" (undefined here) creates positive experiences; frustrated expectations negative ones.

## Tachyon neutrinos (again)

5 27 December 2014 05:19PM

In 2012, a large amount of attention was given to the OPERA experiment's apparent sighting of faster-than-light neutrinos. This turned out to be erroneous, due to a faulty cable, and similar experiments measured neutrino speeds consistent with the speed of light. However, while this was occurring, a distinct point was made: some attempts to determine the mass of the electron neutrino (one of the three known neutrino types) found that the square of the mass was apparently negative, which would be consistent with an imaginary mass, and thus electron neutrinos would be tachyons. While little attention was paid to it at the time, a new paper by Robert Ehrlich looks again at this approach. Ehrlich points out that six different experimental results seem to yield an imaginary mass for the electron neutrino, and what is more, all the results are in close agreement, with the apparent square of the mass being close to -0.11 electron-volts squared.

There are at least two major difficulties with Ehrlich's suggestion, both of which were also issues for OPERA, aside from any philosophical or meta-concerns like the desire to preserve causality. First, Ehrlich's suggestion is difficult to reconcile with one of the same data points that apparently tripped up OPERA: the neutrinos from SN 1987A. That supernova (the first observed in 1987, hence the name) was close enough that we were actually able to detect the neutrinos from it. The neutrinos arrived about three hours before the light from the supernova. But that's not evidence for faster-than-light neutrinos, since one actually expects this to happen: in the standard way of viewing things, the neutrinos move very, very close to the speed of light, but during a core-collapse supernova like SN 1987A, the neutrinos are produced in the core at the beginning of the process. They then flee the star without interacting with the matter, whereas the light produced in the core is slowed down by all the matter in the way, so the neutrinos get a few hours' head start.

The problem for FTL neutrinos is that if the neutrinos were even a tiny bit faster than the speed of light, they should have arrived much, much earlier. This is strong evidence against FTL neutrinos. In the paper in question, Ehrlich mentions SN 1987A in the context of testing his hypothesis in an alternate way, using a supernova and the exact distribution of the neutrinos from one, but doesn't discuss anywhere I can see the more basic issue of the neutrinos arriving at close to the same time as the light. It is conceivable that electron neutrinos are the only neutrinos which are tachyons, and if this is the case, then it seems like neutrino oscillation (the tendency for neutrinos to change types spontaneously) could account for part of what is going on here, but having only some types of neutrinos be tachyons would possibly lead to other problems.

Second, there's reason to believe that tachyons if they existed would emit Cherenkov-like radiation. Andrew Cohen and Sheldon Glashow wrote a paper showing that this would be a major issue in the context of OPERA. Ehrlich seems to claim in the new paper that this shouldn't be an issue in the context he is working in, but does not provide any reasoning. Hopefully someone who is more of an expert can comment on what is going on there.

This seems like potentially stronger evidence for tachyonic neutrinos than the OPERA experiment, since it is the same result obtained by a variety of different experiments, all in close agreement.

## The representational fallacy

1 25 June 2014 11:28AM

Basically, Heather Dyke argues that metaphysicians too often argue from representations of reality (e.g. in language) to reality itself.

It looks to me like a variant of the mind projection fallacy. This might be the first book-length treatment the fallacy has gotten, though.  What do people think?

See reviews here

https://www.sendspace.com/file/k5x8sy

https://ndpr.nd.edu/news/23820-metaphysics-and-the-representational-fallacy/

To give bit of background there's a debate between A-theorists and B-theorists in philosophy of time.

A-theorists think time has ontological distinctions between past present and future

B-theorists hold there is no ontological distinction between past present and future.

Dyke argues that a popular argument for the A-theory (that tensed language represents ontological distinctions) commits the representational fallacy. Bourne agrees, but points out that an argument Dyke uses for the B-theory commits the same fallacy.

## [Link] Consciousness as a State of Matter (Max Tegmark)

15 [deleted] 08 January 2014 06:11PM

Max Tegmark publishes a preprint of a paper arguing from physical principles that consciousness is “what information processing feels like from the inside,” a position I've previously articulated on LessWrong. It's a very physics-rich paper, but here's the most accessible description I was able to find within it:

If we understood consciousness as a physical phenomenon, we could in principle answer all of these questions [about consciousness] by studying the equations of physics: we could identify all conscious entities in any physical system, and calculate what they would perceive. However, this approach is typically not pursued by physicists, with the argument that we do not understand consciousness well enough.

In this paper, I argue that recent progress in neuroscience has fundamentally changed this situation, and that we physicists can no longer blame neuroscientists for our own lack of progress. I have long contended that consciousness is the way information feels when being processed in certain complex ways, i.e., that it corresponds to certain complex patterns in spacetime that obey the same laws of physics as other complex systems, with no "secret sauce" required.

The whole paper is very rich, and worth a read.

## Amplituhedron?

8 20 September 2013 10:09PM

I recently ran across a rather interesting result while browsing the Internet:

Physicists Discover Geometry Underlying Particle Physics

Physicists have discovered a jewel-like geometric object that dramatically simplifies calculations of particle interactions and challenges the notion that space and time are fundamental components of reality.

“This is completely new and very much simpler than anything that has been done before,” said Andrew Hodges, a mathematical physicist at Oxford University who has been following the work.

The revelation that particle interactions, the most basic events in nature, may be consequences of geometry significantly advances a decades-long effort to reformulate quantum field theory, the body of laws describing elementary particles and their interactions. Interactions that were previously calculated with mathematical formulas thousands of terms long can now be described by computing the volume of the corresponding jewel-like “amplituhedron,” which yields an equivalent one-term expression.

Unfortunately, I'm still at the point in my education where my best response to new physics is "cache for later," and the fact that it claims to eliminate locality/unitarity seems decidedly odd to my mostly-untrained mind. I notice I am confused, and that LessWrong has a rather large number of trained physicists.

## Estimating the Kolmogorov complexity of the known laws of physics?

10 08 July 2013 04:30AM

In the post Complexity and Intelligence, Eliezer says that the Kolmogorov Complexity (length of shortest equivalent computer program) of the laws of physics is about 500 bits:

Suppose you ran a Turing machine with unlimited tape, so that, starting from our laws of physics, it simulated our whole universe - not just the region of space we see around us, but all regions of space and all quantum branches. [...]

Then the "Kolmogorov complexity" of that entire universe [...] would be 500 bits, or whatever the size of the true laws of physics when written out as equations on a sheet of paper.

Where did this 500 come from?

I googled around for estimates on the Kolmogorov Complexity of the laws of physics, but didn't find anything. Certainly nothing as concrete as 500.

I asked about it on the physics stack exchange, but haven't received any answers as of yet.

I considered estimating it myself, but doing that well involves a significant time investment. I'd need to learn the Standard Model well enough to write a computer program that simulated it (however inefficiently or intractably; it's the program length that matters, not its time or memory performance).

Based on my experience programming, I'm sure it wouldn't take a million bits. Probably less than ten thousand. The demoscene does some pretty amazing things with 4096 bytes. But 500 sounds like a teeny tiny amount to mention offhand for fitting the constants, the forces, the particles, and the mathematical framework for doing things like differential equations. The fundamental constants alone are going to consume ~20-30 bits each.
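The "~20-30 bits each" figure can be sanity-checked with quick arithmetic: encoding a dimensionless constant to within its measured relative uncertainty takes about log2(1/uncertainty) bits. The uncertainty values below are assumed round numbers for illustration, not measured figures:

```python
import math

# Back-of-the-envelope: a constant known to relative precision p needs
# about log2(1/p) bits; anything finer is unconstrained by experiment.
def bits_needed(relative_uncertainty):
    """Bits to encode a constant to within the given relative error."""
    return math.ceil(math.log2(1.0 / relative_uncertainty))

# One part in a million costs ~20 bits; one part in a billion
# (roughly the precision of the best-measured constants) costs ~30.
b6 = bits_needed(1e-6)
b9 = bits_needed(1e-9)
```

So a couple of dozen constants at experimental precision would already consume most of a 500-bit budget, which is the tension the post is pointing at.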

Does anyone have a reference, or even a more worked-through example of an estimate?

## Tegmark's talk at Oxford

7 12 June 2013 11:49AM

Max Tegmark, from the Massachusetts Institute of Technology and the Foundational Questions Institute (FQXi), presents a cosmic perspective on the future of life, covering our increasing scientific knowledge, the cosmic background radiation, the ultimate fate of the universe, and what we need to do to ensure the human race's survival and flourishing in the short and long term. He is strongly into the importance of x-risk reduction.

## The ongoing transformation of quantum field theory

22 29 December 2012 09:45AM

Quantum field theory (QFT) is the basic framework of particle physics. Particles arise from the quantized energy levels of field oscillations; Feynman diagrams are the simple tool for approximating their interactions. The "standard model", the success of which is capped by the recent observation of a Higgs boson lookalike, is a quantum field theory.

But just like everything mathematical, quantum field theory has hidden depths. For the past decade, new pictures of the quantum scattering process (in which particles come together, interact, and then fly apart) have incrementally been developed, and they presage a transformation in the understanding of what a QFT describes.

At the center of this evolution is "N=4 super-Yang-Mills theory", the maximally supersymmetric QFT in four dimensions. I want to emphasize that from a standard QFT perspective, this theory contains nothing but scalar particles (like the Higgs), spin-1/2 fermions (like electrons or quarks), and spin-1 "gauge fields" (like photons and gluons). The ingredients aren't something alien to real physics. What distinguishes an N=4 theory is that the particle spectrum and the interactions are arranged so as to produce a highly extended form of supersymmetry, in which particles have multiple partners (so many LWers should be comfortable with the notion).

In 1997, Juan Maldacena discovered that the N=4 theory is equivalent to a type of string theory in a particular higher-dimensional space. In 2003, Edward Witten discovered that it is also equivalent to a different type of string theory in a supersymmetric version of Roger Penrose's twistor space. Those insights didn't come from nowhere; they explained algebraic facts that had been known for many years, and they have led to a still-accumulating stockpile of discoveries about the properties of the N=4 field theory.

What we can say is that the physical processes appearing in the theory can be understood as taking place in either of two dual space-time descriptions. Each space-time has its own version of a particular large symmetry, "superconformal symmetry", and the superconformal symmetry of one space-time is invisible in the other. And now it is becoming apparent that there is a third description, which does not involve space-time at all, in which both superconformal symmetries are manifest, but in which space-time locality and quantum unitarity are not "visible" - that is, they are not manifest in the equations that define the theory in this third picture.

I cannot provide an authoritative account of how the new picture works. But here is my impression. In the third picture, the scattering processes of the space-time picture become a complex of polytopes - higher-dimensional polyhedra, joined at their faces - and the quantum measure becomes the volume of these polyhedra. Where you previously had particles, you now just have the dimensions of the polytopes; and the fact that in general, an n-dimensional space doesn't have n special directions suggests to me that multi-particle entanglements can be something more fundamental than the separate particles that we resolve them into.

It will be especially interesting to see whether this polytope combinatorics, that can give back the scattering probabilities calculated with Feynman diagrams in the usual picture, can work solely with ordinary probabilities. That was Penrose's objective, almost fifty years ago, when he developed the theory of "spin networks" as a new language for the angular momentum calculations of quantum theory, and which was a step towards the twistor variables now playing an essential role in these new developments. If the probability calculus of quantum mechanics can be obtained from conventional probability theory applied to these "structures" that may underlie familiar space-time, then that would mean that superposition does not need to be regarded as ontological.

I'm talking about this now because a group of researchers around Nima Arkani-Hamed, who are among the leaders in this area, released their first paper in a year this week. It's very new, and so arcane that, among physics bloggers, only Lubos Motl has talked about it.

This is still just one step in a journey. Not only does the paper focus on the N=4 theory - which is not the theory of the real world - but the results only apply to part of the N=4 theory, the so-called "planar" part, described by Feynman diagrams with a planar topology. (For an impressionistic glimpse of what might lie ahead, you could try this paper, whose author has been shouting from the wilderness for years that categorical knot theory is the missing piece of the puzzle.)

The N=4 theory is not reality, but the new perspective should generalize. Present-day calculations in QCD already employ truncated versions of the N=4 theory; and Arkani-Hamed et al specifically mention another supersymmetric field theory (known as ABJM after the initials of its authors), a deformation of which is holographically dual to a theory-of-everything candidate from 1983.

When it comes to seeing reality in this new way, we still only have, at best, a fruitful chaos of ideas and possibilities. But the solid results - the mathematical equivalences - will continue to pile up, and the end product really ought to be nothing less than a new conception of how physics works.

## If MWI is correct, should we expect to experience Quantum Torment?

3 10 November 2012 04:32AM

If the many worlds of the Many Worlds Interpretation of quantum mechanics are real, there's at least a good chance that Quantum Immortality is real as well: All conscious beings should expect to experience the next moment in at least one Everett branch even if they stop existing in all other branches, and the moment after that in at least one other branch, and so on forever.

However, the transition from life to death isn't usually a binary change. For most people it happens slowly as your brain and the rest of your body deteriorates, often painfully.

Doesn't it follow that each of us should expect to keep living in this state of constant degradation and suffering for a very, very long time, perhaps forever?

I don't know much about quantum mechanics, so I don't have anything to contribute to this discussion. I'm just terrified, and I'd like, not to be reassured by well-meaning lies, but to know the truth. How likely is it that Quantum Torment is real?

## Question on decoherence and virtual particles

0 14 September 2012 04:33AM

Doing some insomniac reading of the Quantum Sequence, I think that I've gotten a reasonable grasp of the principles of decoherence, non-interacting bundles of amplitude, etc. I then tried to put that knowledge to work by comparing it with my understanding of virtual particles (whose rate of creation in any area is essentially equivalent to the electromagnetic field), and I had a thought I can't seem to find mentioned elsewhere.

If I understand decoherence right, then quantum events which can't be differentiated from each other get summed together into the same blob of amplitude. Most virtual particles which appear and rapidly disappear do so in ways that can't be detected, let alone distinguished. This seems as if it could potentially imply that the extreme evenness of a vacuum might have to do more with the overall blob of amplitude of the vacuum being smeared out among all the equally-likely vacuum fluctuations, than it does directly with the evenness of the rate of vacuum fluctuations themselves. It also seems possible that there could be some clever way to test for an overall background smear of amplitude, though I'm not awake enough to figure one out just now. (My imagination has thrown out the phrase 'collapse of the vacuum state', but I'm betting that that's just unrelated quantum buzzword bingo.)

Does anything similar to what I've just described have any correlation with actual quantum theory, or will I awaken to discover all my points have been voted away due to this being complete and utter nonsense?

## What is the Mantra of Polya?

6 31 July 2012 05:49PM

The other day at dinner, someone showed me this video of a slinky dropping. It shows that the bottom of the slinky stays perfectly stationary for a while after it's been dropped. (The link goes to the 10-second interesting part.)

I spent some time trying to figure out why that happens, but didn't get it. The next day, I spent half an hour writing down the differential equations that describe the slinky's motion and staring at them, with no idea how to proceed. Eventually, I watched the video again with sound, and learned the simple answer, which is that the speed of waves traveling in a slinky is very slow - a few meters per second - and the bottom half sits still until a wave can travel down and inform it that the slinky's been dropped.
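The wave-speed answer can be sketched numerically with a chain of point masses joined by springs. All parameter values here are illustrative guesses, not calibrated to the video:

```python
import numpy as np

# A minimal mass-and-spring model of a hanging slinky released at the
# top. n point masses hang from zero-rest-length springs; positions are
# measured downward. Integration is semi-implicit Euler.
def simulate_drop(n=50, m=0.01, k=100.0, g=9.81, dt=1e-5, t_end=0.05):
    # Equilibrium: the spring above mass i supports the (n - i) masses
    # at and below it, so its extension is (n - i) * m * g / k.
    y = np.zeros(n)
    for i in range(1, n):
        y[i] = y[i - 1] + (n - i) * m * g / k
    v = np.zeros(n)
    y0 = y.copy()
    for _ in range(int(t_end / dt)):
        ext = np.diff(y)            # extension of each spring
        f = np.full(n, m * g)       # gravity on every mass
        f[:-1] += k * ext           # spring below pulls a mass down
        f[1:] -= k * ext            # spring above pulls it up
        v += f / m * dt
        y += v * dt
    return y - y0                   # displacement since release

d = simulate_drop()
```

After 0.05 s the top masses have fallen several centimetres while the bottom mass has not measurably moved: the release signal propagates down the chain one link at a time, at roughly sqrt(k/m) links per second. Meanwhile the centre of mass falls exactly like a free body, since the internal spring forces cancel.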

The strange thing is that I already knew this, or at least the idea was familiar to me. Also, while at dinner, someone mentioned the "pole-in-the-barn" paradox from special relativity, and mentioned the same speed-of-information-in-materials idea in resolving the paradox, but I still didn't make the connection to the problem I was considering.

I want a simple phrase, similar to "check consequentialism", "take the outside view", or "worth it?", that applies to checking your own thought process while solving problems, to stop you from revving your engine in the wrong direction for too long. I realized I've read a book about what to do in such situations: George Polya's How to Solve It. (Amazon Wikipedia Google Books) I don't have a copy of the book anymore, and I would like to crowdsource a short phrase that captures the general mindset it endorses. Some questions I remember the book suggesting are

• Have you seen a similar problem before?
• What are the unknowns?
• What information do you have?
• Is it obvious that the unknowns are enough information?
There are more of these listed in the Wikipedia article.

Also, "Mantra of Polya" doesn't roll off the tongue well (at least I think it doesn't, since I'm not sure how to pronounce "Polya"), so a better name for this mnemonic would be good, too.

## Exponential Economist Meets Finite Physicist [link]

5 13 April 2012 03:55AM

A dialogue discussing how thermodynamics limits future growth in energy usage, and that in turn limits GDP growth, from the blog Do the Math.

Physicist: Hi, I’m Tom. I’m a physicist.

Economist: Hi Tom, I’m [ahem..cough]. I’m an economist.

Physicist: Hey, that’s great. I’ve been thinking a bit about growth and want to run an idea by you. I claim that economic growth cannot continue indefinitely.

Economist: [chokes on bread crumb] Did I hear you right? Did you say that growth can not continue forever?

Physicist: That’s right. I think physical limits assert themselves.

Economist: Well sure, nothing truly lasts forever. The sun, for instance, will not burn forever. On the billions-of-years timescale, things come to an end.

Physicist: Granted, but I’m talking about a more immediate timescale, here on Earth. Earth’s physical resources—particularly energy—are limited and may prohibit continued growth within centuries, or possibly much shorter depending on the choices we make. There are thermodynamic issues as well.

I think this is quite relevant to many of the ideas of futurism (and economics) that we often discuss here on Less Wrong. They address the concepts related to levels of civilization and mind uploading. Colonization of space is dismissed by both parties, at least for the sake of the discussion. The blog author has another post discussing his views on its implausibility; I find it to be somewhat limited in its consideration of the issue, though.

He has also detailed the calculations whose results he describes in this dialogue in a few previous posts. The dialogue format will probably be a kinder introduction to the ideas for those less mathematically inclined.

## [link] Faster than light neutrinos due to loose fiber optic cable.

13 22 February 2012 09:52PM

A mundane cause for a surprising result. Consider this unconfirmed for now, however unsurprising it sounds.

According to sources familiar with the experiment, the 60 nanoseconds discrepancy appears to come from a bad connection between a fiber optic cable that connects to the GPS receiver used to correct the timing of the neutrinos' flight and an electronic card in a computer. After tightening the connection and then measuring the time it takes data to travel the length of the fiber, researchers found that the data arrive 60 nanoseconds earlier than assumed. Since this time is subtracted from the overall time of flight, it appears to explain the early arrival of the neutrinos.

New data, however, will be needed to confirm this hypothesis.

Source: Science/AAAS

## State your physical account of experienced color

0 01 February 2012 07:00AM

Previous post: Does functionalism imply dualism? Next post: One last roll of the dice.

Don't worry, this sequence of increasingly annoying posts is almost over. But I think it's desirable that we try to establish, once and for all, how people here think color works, and whether they even think it exists.

The way I see it, there is a mental block at work. An obvious fact is being denied or evaded, because the conclusions are unpalatable. The obvious fact is that physics as we know it does not contain the colors that we see. By "physics" I don't just mean the entities that physicists talk about, I also mean anything that you can make out of them. I would encourage anyone who thinks they know what I mean, and who agrees with me on this point, to speak up and make it known that they agree. I don't mind being alone in this opinion, if that's how it is, but I think it's desirable to get some idea of whether LessWrong is genuinely 100% against the proposition.

Just so we're all on the same wavelength, I'll point to a specific example of color. Up at the top of this web page, the word "Less" appears. It's green. So, there is an example of a colored entity, right in front of anyone reading this page.

My thesis is that if you take a lot of point-particles, with no property except their location, and arrange them any way you want, there won't be anything that's green like that; and that the same applies for any physical theory with an ontology that doesn't explicitly include color. To me, this is just mindbogglingly obvious, like the fact that you can't get a letter by adding numbers.

At this point people start talking about neurons and gensyms and concept maps. The greenness isn't in the physical object, "computer screen", it's in the brain's response to the stimulus provided by light from the computer screen entering the eye.

My response is simple. Try to fix in your mind what the physical reality must be, behind your favorite neuro-cognitive explanation of greenness. Presumably it's something like "a whole lot of neurons, firing in a particular way". Try to imagine what that is physically, in terms of atoms. Imagine some vast molecular tinker-toy structures, shaped into a cluster of neurons, with traveling waves of ions crossing axonal membranes. Large numbers of atoms arranged in space, a few of them executing motions which are relevant for the information processing. Do you have that in your mind's eye? Now look up again at that word "Less", and remind yourself that according to your theory, the green shape that you are seeing is the same thing as some aspect of all those billions of colorless atoms in motion.

If your theory still makes sense to you, then please tell us in comments what aspect of the atoms in motion is actually green.

I only see three options. Deny that anything is actually green; become a dualist; or (supervillain voice) join me, and together, we can make a new ontology.

## Does functionalism imply dualism?

-1 31 January 2012 03:43AM

This post follows on from Personal research update, and is followed by State your physical account of experienced color.

In a recent post, I claimed that functionalism about consciousness implies dualism. Since most functionalists think their philosophy is an alternative to dualism, I'd better present an argument.

But before I go further, I'll link to orthonormal's series on dissolving the problem of "Mary's Room": Seeing Red: Dissolving Mary's Room and Qualia, A Study of Scarlet: The Conscious Mental Graph, Nature: Red in Tooth and Qualia. Mary's Room is one of many thought experiments bandied about by philosophers in their attempts to say whether or not colors (and other qualia) are a problem for materialism, and orthonormal presents a computational attempt to get around the problem which is a good representative of the functionalist style of thought. I won't have anything to say about those articles at this stage (maybe in comments), but they can serve as an example of what I'm talking about.

Now, though it may antagonize some people, I think it is best to start off by stating my position plainly and bluntly, rather than starting with a neutral discussion of what functionalism is and how it works, and then seeking to work my way from there to the unpopular conclusion. I will stick to the example of color to make my points - apologies to blind and colorblind readers.

My fundamental thesis is that color manifestly does exist - there are such things as shades of green, shades of red, etc - and that it manifestly does not exist in any standard sort of physical ontology. In an arrangement of point particles in space, there are no shades of green present. This is obviously true, and it's equally obvious for more complicated ontologies like fields, geometries, wavefunction multiverses, and so on. It's even part of the history of physics; even Galileo distinguished between primary qualities like location and shape, and secondary qualities like color. Primary qualities are out there and objectively present in the external world, secondary qualities are only in us, and physics will only concern itself with primary qualities. The ontological world of physical theory is colorless. (We may call light of a certain wavelength green light or red light, but that is because it produces an experience of seeing green or seeing red, not because the light itself is green or red in the original sense of those words.) And what has happened due to the progress of the natural sciences is that we now say that experiences are in brains, and brains are made of atoms, and atoms are described by a physics which does not contain color. So the secondary qualities have vanished entirely from this picture of the world; there is no opportunity for them to exist within us, because we are made of exactly the same stuff as the external world.

Yet the "secondary qualities" are there. They're all around us, in every experience. It really is this simple: colors exist in reality, they don't exist in theory, therefore the theory needs to be augmented or it needs to be changed. Dualism is an augmentation. My speculations about quantum monads are supposed to pave the way for a change. But I won't talk about that option here. Instead, I will try to talk about theories of consciousness which are meant to be compatible with physicalism - functionalism is one such theory.

Such a theory will necessarily present a candidate, however vague, for the physical correlate of an experience of color. One can then say that color exists without having to add anything to physics, because the color just is the proposed physical correlate. This doesn't work because the situation hasn't changed. If all you have are point particles whose only property is location, then individual particles do not have the property of being colored, nor do they have that property in conjunction. Identifying a physical correlate simply picks out a particular set of particles and says "there's your experience of color". But there's still nothing there that is green or red. You may accustom yourself to thinking of a particular material event, a particular rearrangement of atoms in space, as being the color, but that's just the power of habitual association at work. You are introducing into your concept of the event a property that is not inherently present in it.

It may be that one way people manage to avoid noticing this, is by an incomplete chain of thought. I might say: none of the objects in your physical theory are green. The happy materialist might say: but those aren't the things which are truly green in the sense you care about; the things which are green are parts of experiences, not the external objects. I say: fine. But experiences have to exist, right? And you say that physics is everything. So that must mean that experiences are some sort of physical object, and so it will be just as impossible for them to be truly green, given the ontological primitives we have to work with. But for some reason, this further deduction isn't made. Instead, it is accepted that objects in physical space aren't really green, but the objects of experience exist in some other "space", the space of subjective experience, and... it isn't explicitly said that objects there can be truly green, but somehow this difference between physical space and subjective space seems to help people be dualists without actually noticing it.

It is true that color exists in this context - a subjective space. Color always exists as part of an "experience". But physical ontology doesn't contain subjective space or conscious experience any more than it does contain color. What it can contain, are state machines which are structurally isomorphic to these things. So here we can finally identify how a functionalist theory of consciousness works psychologically: You single out some state machines in your physical description of the brain (like the networks in orthonormal's sequence of posts); in your imagination, you associate consciousness with certain states of such state machines, on the basis of structural isomorphism; and now you say, conscious states are those physical states. Subjective space is some neural topographic map, the subjectively experienced body is the sensorimotor homunculus, and so forth.

But if we stick to any standard notion of physical theory, all those brain parts still don't have any of the properties they need. There's no color there, there's no other space there, there's no observing agent. It's all just large numbers of atoms in motion. No-one is home and nothing is happening to them.

Clearly it is some sort of progress to have discovered, in one's physical picture of the world, the possibility of entities which are roughly isomorphic to experiences, colors, etc. But they are still not the same thing. Most of the modern turmoil of ideas about consciousness in philosophy and science is due to this gap - attempts to deny it, attempts to get by without noticing it, attempts to force people to notice it. orthonormal's sequence, for example, seems to be an attempt to exhibit a cognitive model for experiences and behaviors that you would expect if color exists, without having to suppose that color actually exists. If we were talking about a theoretical construct, this would be fine. We are under no obligation to believe that phlogiston exists, only to explain why people once talked about it.

But to extend this attitude to something that most of us are directly experiencing in almost every waking moment, is ... how can I put this? It's really something. I'd call it an act of intellectual desperation, except that people don't seem to feel desperate when they do it. They are just patiently explaining, recapitulating and elaborating, some "aha" moment they had back in their past, when functionalism made sense to them. My thesis is certainly that this sense of insight, of having dissolved the problem, is an illusion. The genuineness of the isomorphism between conscious state and coarse-grained physical state, and the work of several generations of materialist thinkers to develop ways of speaking which smoothly promote this isomorphism to an identity, combine to provide the sense that no problem remains to be solved. But all you have to do is attend for a moment to experience itself, and then to compare that to the picture of billions of colorless atoms in intricate motion through space, to realize that this is still dualism.

I promised not to promote the monads, but I will say this. The way to avoid dualism is to first understand consciousness as it is in itself, without the presupposition of materialism. Observe the structure of its states and the dynamics of its passage. That is what phenomenology is about. Then, sketch out an ontology of what you have observed. It doesn't have to contain everything in infinite detail, it can overlook some features. But I would say that at a minimum it needs to contain the triad of subject-object-aspect (which appears under various names in the history of philosophy). There are objects of awareness, they are being experienced within a common subjective space, and they are experienced in a certain aspect. Any theory of reality, whether or not it is materialist, must contain such an entity in order to be true.

The basic entity here is the experiencing subject. Conscious states are its states. And now we can begin to tackle the ontological status of state machines, as a candidate for the ontological category to which conscious beings belong.

State machines are abstracted descriptions. We say there's a thing, it has a set of possible states; here are the allowed transitions between them, and the conditions under which those transitions occur. Specify all that and we have specified a state machine. We don't care about why those are the states or why the transitions occur; those are irrelevant details.

A very simple state machine might be denoted by the state transition network "1<->2". There's a state labeled 1 and another state labeled 2. If the machine is in state 1, it proceeds to state 2, and the reverse is also true. This state machine is realized wherever you have something that oscillates between two states without stopping in either. First the earth is close to the sun, then it is far from the sun, then it is close again... The Earth in its orbit instantiates the state machine "1<->2". I get involved with Less Wrong, then I quit for a while, then I come back... My Internet habits also instantiate the state machine "1<->2".

A computer program is exactly like this, a state machine of great complexity (and usually its state transition rules contain some dependence on external conditions, like user input) which has been physically instantiated for use. But one cannot claim that its states have any intrinsic meaning, any more than I can claim that the state 1 in the oscillating state machine is intrinsically about the earth being close to the sun. This is not true, even if I write down the state transition network in the form "CloseToTheSun<->FarFromTheSun".
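The point about labels can be made concrete. Here is a toy sketch (illustrative code, nothing canonical; the dict encoding and the names are made up for this example) of the "1<->2" machine, run and then relabeled, with behavior unchanged:

```python
# The "1<->2" state machine: two states, each transitioning to the other.
TRANSITIONS = {1: 2, 2: 1}

def run(state, steps):
    """Advance the machine `steps` times, returning the visited states."""
    history = [state]
    for _ in range(steps):
        state = TRANSITIONS[state]
        history.append(state)
    return history

# Relabeling changes nothing about the machine's behavior, which is the
# point: names like "CloseToTheSun" carry no intrinsic meaning.
LABELS = {1: "CloseToTheSun", 2: "FarFromTheSun"}

print(run(1, 4))                       # [1, 2, 1, 2, 1]
print([LABELS[s] for s in run(1, 2)])  # ['CloseToTheSun', 'FarFromTheSun', 'CloseToTheSun']
```

Any physical system that oscillates between two conditions instantiates this same abstract machine, whatever we choose to call its states.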

This is another ontological deficiency of functionalism. Mental states have meanings, thoughts are always about something, and what they are about is not the result of convention or of the needs of external users. This is yet another clue that the ontological status of conscious states is special, that their "substance" matters to what they are. Of course, this is a challenge to the philosophy which says that a detailed enough simulation of a brain will create a conscious person, regardless of the computational substrate. The only reason people believe this is that they believe the brain itself is not a special substrate. But this is a judgment made on the basis of science that is still at a highly incomplete stage, and certainly I expect science to tell us something different by the time it's finished with the brain. The ontological problems of functionalism provide a strong a priori reason for this expectation.

What is more challenging is to form a conception of the elementary parts and relations that could form the basis of an alternative ontology. But we have to do this, and the impetus has to come from a phenomenological ontology of consciousness that is as precise as possible. Fortunately, a great start was made on this about 100 years ago, in the heyday of phenomenology as a philosophical movement.

A conscious mind is a state machine, in the sense that it has states and transitions between them. The states also have structure, because conscious experiences do have parts. But the ontological ties that combine those parts into the whole are poorly apprehended by our current concepts. When we try to reduce them to nothing but causal coupling or to the proximity in space of presumed physical correlates of those parts, we are, I believe, getting it wrong. Clearly cause and effect operates in the realm of consciousness, but it will take great care to state precisely and correctly the nature of the things which are interacting and the ways in which they do so. Consider the ability to tell apart different shades of color. It's not just that the colors are there; we know that they are there, and we are able to tell them apart. This implies a certain amount of causal structure. But the perilous step is to focus only on that causal structure, detach it from considerations of how things appear to be in themselves, and instead say "state machine, neurons doing computations, details interesting but not crucial to my understanding of reality". Somehow, in trying to understand conscious cognition, we must remain in touch with the ontology of consciousness as partially revealed in consciousness itself. The things which do the conscious computing must be things with the properties that we see in front of us, the properties of the objects of experience, such as color.

You know, color - authentic original color - has been banished from physical ontology for so long, that it sounds a little mad to say that there might be a physical entity which is actually green. But there has to be such an entity, whether or not you call it physical. Such an entity will always be embedded in a larger conscious experience, and that conscious experience will be embedded in a conscious being, like you. So we have plenty of clues to the true ontology; the clues are right in front of us; we're subjectively made of these clues. And we will not truly figure things out, unless we remain insistent that these inconvenient realities are in fact real.

## Personal research update

4 29 January 2012 09:32AM

Synopsis: The brain is a quantum computer and the self is a tensor factor in it - or at least, the truth lies more in that direction than in the classical direction - and we won't get Friendly AI right unless we get the ontology of consciousness right.

Followed by: Does functionalism imply dualism?

Sixteen months ago, I made a post seeking funding for personal research. There was no separate Discussion forum then, and the post was comprehensively downvoted. I did manage to keep going at it, full-time, for the next sixteen months. Perhaps I'll get to continue; it's for the sake of that possibility that I'll risk another breach of etiquette. You never know who's reading these words and what resources they have. Also, there has been progress.

I think the best place to start is with what orthonormal said in response to the original post: "I don't think anyone should be funding a Penrose-esque qualia mysterian to study string theory." If I now took my full agenda to someone out in the real world, they might say: "I don't think it's worth funding a study of 'the ontological problem of consciousness in the context of Friendly AI'." That's my dilemma. The pure scientists who might be interested in basic conceptual progress are not engaged with the race towards technological singularity, and the apocalyptic AI activists gathered in this place are trying to fit consciousness into an ontology that doesn't have room for it. In the end, if I have to choose between working on conventional topics in Friendly AI, and on the ontology of quantum mind theories, then I have to choose the latter, because we need to get the ontology of consciousness right, and it's possible that a breakthrough could occur in the world outside the FAI-aware subculture and filter through; but as things stand, the truth about consciousness would never be discovered by employing the methods and assumptions that prevail inside the FAI subculture.

Perhaps I should pause to spell out why the nature of consciousness matters for Friendly AI. The reason is that the value system of a Friendly AI must make reference to certain states of conscious beings - e.g. "pain is bad" - so, in order to make correct judgments in real life, at a minimum it must be able to tell which entities are people and which are not. Is an AI a person? Is a digital copy of a human person, itself a person? Is a human body with a completely prosthetic brain still a person?

I see two ways in which people concerned with FAI hope to answer such questions. One is simply to arrive at the right computational, functionalist definition of personhood. That is, we assume the paradigm according to which the mind is a computational state machine inhabiting the brain, with states that are coarse-grainings (equivalence classes) of exact microphysical states. Another physical system which admits the same coarse-graining - which embodies the same state machine at some macroscopic level, even though the microscopic details of its causality are different - is said to embody another instance of the same mind.

An example of the other way to approach this question is the idea of simulating a group of consciousness theorists for 500 subjective years, until they arrive at a consensus on the nature of consciousness. I think it's rather unlikely that anyone will ever get to solve FAI-relevant problems in that way. The level of software and hardware power implied by the capacity to do reliable whole-brain simulations means you're already on the threshold of singularity: if you can simulate whole brains, you can simulate part brains, and you can also modify the parts, optimize them with genetic algorithms, and put them together into nonhuman AI. Uploads won't come first.

But the idea of explaining consciousness this way, by simulating Daniel Dennett and David Chalmers until they agree, is just a cartoon version of similar but more subtle methods. What these methods have in common is that they propose to outsource the problem to a computational process using input from cognitive neuroscience. Simulating a whole human being and asking it questions is an extreme example of this (the simulation is the "computational process", and the brain scan it uses as a model is the "input from cognitive neuroscience"). A more subtle method is to have your baby AI act as an artificial neuroscientist, use its streamlined general-purpose problem-solving algorithms to make a causal model of a generic human brain, and then to somehow extract from that, the criteria which the human brain uses to identify the correct scope of the concept "person". It's similar to the idea of extrapolated volition, except that we're just extrapolating concepts.

It might sound a lot simpler to just get human neuroscientists to solve these questions. Humans may be individually unreliable, but they have lots of cognitive tricks - heuristics - and they are capable of agreeing that something is verifiably true, once one of them does stumble on the truth. The main reason one would even consider the extra complication involved in figuring out how to turn a general-purpose seed AI into an artificial neuroscientist, capable of extracting the essence of the human decision-making cognitive architecture and then reflectively idealizing it according to its own inherent criteria, is shortage of time: one wishes to develop friendly AI before someone else inadvertently develops unfriendly AI. If we stumble into a situation where a powerful self-enhancing algorithm with arbitrary utility function has been discovered, it would be desirable to have, ready to go, a schema for the discovery of a friendly utility function via such computational outsourcing.

Now, jumping ahead to a later stage of the argument, I argue that it is extremely likely that distinctively quantum processes play a fundamental role in conscious cognition, because the model of thought as distributed classical computation actually leads to an outlandish sort of dualism. If we don't concern ourselves with the merits of my argument for the moment, and just ask whether an AI neuroscientist might somehow overlook the existence of this alleged secret ingredient of the mind, in the course of its studies, I do think it's possible. The obvious noninvasive way to form state-machine models of human brains is to repeatedly scan them at maximum resolution using fMRI, and to form state-machine models of the individual voxels on the basis of this data, and then to couple these voxel-models to produce a state-machine model of the whole brain. This is a modeling protocol which assumes that everything which matters is physically localized at the voxel scale or smaller. Essentially we are asking, is it possible to mistake a quantum computer for a classical computer by performing this sort of analysis? The answer is definitely yes if the analytic process intrinsically assumes that the object under study is a classical computer. If I try to fit a set of points with a line, there will always be a line of best fit, even if the fit is absolutely terrible. So yes, one really can describe a protocol for AI neuroscience which would be unable to discover that the brain is quantum in its workings, and which would even produce a specific classical model on the basis of which it could then attempt conceptual and volitional extrapolation.
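The line-fitting remark can be checked directly. A small sketch (my own illustration, assuming numpy is available): least squares always returns *some* line, even for data that are a perfect parabola, where the "best" line is flat and explains none of the variance.

```python
import numpy as np

# Data from a symmetric parabola: not remotely linear.
x = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
y = x ** 2

# polyfit happily returns a best-fit line anyway.
slope, intercept = np.polyfit(x, y, 1)

# Measure how terrible the fit is: R^2 comes out to zero here,
# because the flat line y = 2 predicts nothing about the variation.
pred = slope * x + intercept
ss_res = np.sum((y - pred) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(slope, intercept, r_squared)  # approximately 0.0, 2.0, 0.0
```

The analysis succeeds in producing an answer precisely because it assumes the answer's form in advance, which is the failure mode being described.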

Clearly you can try to guard against comparably wrong outcomes, by adding reality checks and second opinions to your protocol for FAI development. At a more down to earth level, these exact mistakes could also be made by human neuroscientists, for the exact same reasons, so it's not as if we're talking about flaws peculiar to a hypothetical "automated neuroscientist". But I don't want to go on about this forever. I think I've made the point that wrong assumptions and lax verification can lead to FAI failure. The example of mistaking a quantum computer for a classical computer may even have a neat illustrative value. But is it plausible that the brain is actually quantum in any significant way? Even more incredibly, is there really a valid a priori argument against functionalism regarding consciousness - the identification of consciousness with a class of computational process?

I have previously posted (here) about the way that an abstracted conception of reality, coming from scientific theory, can motivate denial that some basic appearance corresponds to reality. A perennial example is time. I hope we all agree that there is such a thing as the appearance of time, the appearance of change, the appearance of time flowing... But on this very site, there are many people who believe that reality is actually timeless, and that all these appearances are only appearances; that reality is fundamentally static, but that some of its fixed moments contain an illusion of dynamism.

The case against functionalism with respect to conscious states is a little more subtle, because it's not being said that consciousness is an illusion; it's just being said that consciousness is some sort of property of computational states. I argue first that this requires dualism, at least with our current physical ontology, because conscious states are replete with constituents not present in physical ontology - for example, the "qualia", an exotic name for very straightforward realities like: the shade of green appearing in the banner of this site, the feeling of the wind on your skin, really every sensation or feeling you ever had. In a world made solely of quantum fields in space, there are no such things; there are just particles and arrangements of particles. The truth of this ought to be especially clear for color, but it applies equally to everything else.

In order that this post should not be overlong, I will not argue at length here for the proposition that functionalism implies dualism, but shall proceed to the second stage of the argument, which does not seem to have appeared even in the philosophy literature. If we are going to suppose that minds and their states correspond solely to combinations of mesoscopic information-processing events like chemical and electrical signals in the brain, then there must be a mapping from possible exact microphysical states of the brain, to the corresponding mental states. Supposing we have a mapping from mental states to coarse-grained computational states, we now need a further mapping from computational states to exact microphysical states. There will of course be borderline cases. Functional states are identified by their causal roles, and there will be microphysical states which do not stably and reliably produce one output behavior or the other.
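The structure of the argument can be put in a toy model (my construction, not the post's; the threshold and names are illustrative): an exact microstate, a coarse-graining map onto a binary computational state, and a boundary whose exact placement nothing in the underlying description determines.

```python
# Arbitrary boundary: nothing in the "microphysics" singles out this value.
BORDER = 0.5

def computational_state(microstate: float) -> int:
    """Coarse-grain an exact microstate in [0, 1] to computational state 0 or 1."""
    return 0 if microstate < BORDER else 1

# Clear cases map robustly; a borderline microstate is assigned by fiat.
print(computational_state(0.1))     # 0
print(computational_state(0.9))     # 1
print(computational_state(0.4999))  # 0, but only because of where the line was drawn
```

The dualist of the next paragraph is committed to some such exact boundary existing in state space, even though no principle selects it.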

Physicists are used to talking about thermodynamic quantities like pressure and temperature as if they have an independent reality, but objectively they are just nicely behaved averages. The fundamental reality consists of innumerable particles bouncing off each other; one does not need, and one has no evidence for, the existence of a separate entity, "pressure", which exists in parallel to the detailed microphysical reality. The idea is somewhat absurd.

Yet this is analogous to the picture implied by a computational philosophy of mind (such as functionalism) applied to an atomistic physical ontology. We do know that the entities which constitute consciousness - the perceptions, thoughts, memories... which make up an experience - actually exist, and I claim it is also clear that they do not exist in any standard physical ontology. So, unless we get a very different physical ontology, we must resort to dualism. The mental entities become, inescapably, a new category of beings, distinct from those in physics, but systematically correlated with them. Except that, if they are being correlated with coarse-grained neurocomputational states which do not have an exact microphysical definition, only a functional definition, then the mental part of the new combined ontology is fatally vague. It is impossible for fundamental reality to be objectively vague; vagueness is a property of a concept or a definition, a sign that it is incomplete or that it does not need to be exact. But reality itself is necessarily exact - it is something - and so functionalist dualism cannot be true unless the underdetermination of the psychophysical correspondence is replaced by something which says for all possible physical states, exactly what mental states (if any) should also exist. And that inherently runs against the functionalist approach to mind.

Very few people consider themselves functionalists and dualists. Most functionalists think of themselves as materialists, and materialism is a monism. What I have argued is that functionalism, the existence of consciousness, and the existence of microphysical details as the fundamental physical reality, together imply a peculiar form of dualism in which microphysical states which are borderline cases with respect to functional roles must all nonetheless be assigned to precisely one computational state or the other, even if no principle tells you how to perform such an assignment. The dualist will have to suppose that an exact but arbitrary border exists in state space, between the equivalence classes.

This - not just dualism, but a dualism that is necessarily arbitrary in its fine details - is too much for me. If you want to go all Occam-Kolmogorov-Solomonoff about it, you can say that the information needed to specify those boundaries in state space is so great as to render this whole class of theories of consciousness not worth considering. Fortunately there is an alternative.

Here, in addressing this audience, I may need to undo a little of what you may think you know about quantum mechanics. Of course, the local preference is for the Many Worlds interpretation, and we've had that discussion many times. One reason Many Worlds has a grip on the imagination is that it looks easy to imagine. Back when there was just one world, we thought of it as particles arranged in space; now we have many worlds, dizzying in their number and diversity, but each individual world still consists of just particles arranged in space. I'm sure that's how many people think of it.

Among physicists it will be different. Physicists will have some idea of what a wavefunction is, what an operator algebra of observables is, they may even know about path integrals and the various arcane constructions employed in quantum field theory. Possibly they will understand that the Copenhagen interpretation is not about consciousness collapsing an actually existing wavefunction; it is a positivistic rationale for focusing only on measurements and not worrying about what happens in between. And perhaps we can all agree that this is inadequate, as a final description of reality. What I want to say is that Many Worlds serves the same purpose in many physicists' minds, but is equally inadequate, though from the opposite direction. Copenhagen says the observables are real but goes misty about unmeasured reality. Many Worlds says the wavefunction is real, but goes misty about exactly how it connects to observed reality. My most frustrating discussions on this topic are with physicists who are happy to be vague about what a "world" is. It's really not so different to Copenhagen positivism, except that where Copenhagen says "we only ever see measurements, what's the problem?", Many Worlds says "I say there's an independent reality, what else is left to do?". It is very rare for a Many Worlds theorist to seek an exact idea of what a world is, as you see Robin Hanson and maybe Eliezer Yudkowsky doing; in that regard, reading the Sequences on this site will give you an unrepresentative idea of the interpretation's status.

One of the characteristic features of quantum mechanics is entanglement. But both Copenhagen, and a Many Worlds which ontologically privileges the position basis (arrangements of particles in space), still have atomistic ontologies of the sort which will produce the "arbitrary dualism" I just described. Why not seek a quantum ontology in which there are complex natural unities - fundamental objects which aren't simple - in the form of what we would presently call entangled states? That was the motivation for the quantum monadology described in my other really unpopular post. :-) [Edit: Go there for a discussion of "the mind as tensor factor", mentioned at the start of this post.] Instead of saying that physical reality is a series of transitions from one arrangement of particles to the next, say it's a series of transitions from one set of entangled states to the next. Quantum mechanics does not tell us which basis, if any, is ontologically preferred. Reality as a series of transitions between overall wavefunctions which are partly factorized and partly still entangled is a possible ontology; hopefully readers who really are quantum physicists will get the gist of what I'm talking about.

I'm going to double back here and revisit the topic of how the world seems to look. Hopefully we agree, not just that there is an appearance of time flowing, but also an appearance of a self. Here I want to argue just for the bare minimum - that a moment's conscious experience consists of a set of things, events, situations... which are simultaneously "present to" or "in the awareness of" something - a conscious being - you. I'll argue for this because even this bare minimum is not acknowledged by existing materialist attempts to explain consciousness. I was recently directed to this brief talk about the idea that there's no "real you". We are given a picture of a graph whose nodes are memories, dispositions, etc., and we are told that the self is like that graph: nodes can be added, nodes can be removed, it's a purely relational composite without any persistent part. What's missing in that description is that bare minimum notion of a perceiving self. Conscious experience consists of a subject perceiving objects in certain aspects. Philosophers have discussed for centuries how best to characterize the details of this phenomenological ontology; I think the best was Edmund Husserl, and I expect his work to be extremely important in interpreting consciousness in terms of a new physical ontology. But if you can't even notice that there's an observer there, observing all those parts, then you won't get very far.

My favorite slogan for this is due to the other Jaynes, Julian Jaynes. I don't endorse his theory of consciousness at all; but while in a daydream he once said to himself, "Include the knower in the known". That sums it up perfectly. We know there is a "knower", an experiencing subject. We know this, just as well as we know that reality exists and that time passes. The adoption of ontologies in which these aspects of reality are regarded as unreal, as appearances only, may be motivated by science, but it's false to the most basic facts there are, and one should show a little more imagination about what science will say when it's more advanced.

I think I've said almost all of this before. The high point of the argument is that we should look for a physical ontology in which a self exists and is a natural yet complex unity, rather than a vaguely bounded conglomerate of distinct information-processing events, because the latter leads to one of those unacceptably arbitrary dualisms. If we can find a physical ontology in which the conscious self can be identified directly with a class of object posited by the theory, we can even get away from dualism, because physical theories are mathematical and formal and make few commitments about the "inherent qualities" of things, just about their causal interactions. If we can find a physical object which is absolutely isomorphic to a conscious self, then we can turn the isomorphism into an identity, and the dualism goes away. We can't do that with a functionalist theory of consciousness, because it's a many-to-one mapping between physical and mental, not an isomorphism.

So, I've said it all before; what's new? What have I accomplished during these last sixteen months? Mostly, I learned a lot of physics. I did not originally intend to get into the details of particle physics - I thought I'd just study the ontology of, say, string theory, and then use that to think about the problem. But one thing led to another, and in particular I made progress by taking ideas that were slightly on the fringe, and trying to embed them within an orthodox framework. It was a great way to learn, and some of those fringe ideas may even turn out to be correct. It's now abundantly clear to me that I really could become a career physicist, working specifically on fundamental theory. I might even have to do that, it may be the best option for a day job. But what it means for the investigations detailed in this essay, is that I don't need to skip over any details of the fundamental physics. I'll be concerned with many-body interactions of biopolymer electrons in vivo, not particles in a collider, but an electron is still an electron, an elementary particle, and if I hope to identify the conscious state of the quantum self with certain special states from a many-electron Hilbert space, I should want to understand that Hilbert space in the deepest way available.

My only peer-reviewed publication, from many years ago, picked out pathways in the microtubule which, we speculated, might be suitable for mobile electrons. I had nothing to do with noticing those pathways; my contribution was the speculation about what sort of physical processes such pathways might underpin. Something I did notice, but never wrote about, was the unusual similarity (so I thought) between the microtubule's structure, and a model of quantum computation due to the topologist Michael Freedman: a hexagonal lattice of qubits, in which entanglement is protected against decoherence by being encoded in topological degrees of freedom. It seems clear that performing an ontological analysis of a topologically protected coherent quantum system, in the context of some comprehensive ontology ("interpretation") of quantum mechanics, is a good idea. I'm not claiming to know, by the way, that the microtubule is the locus of quantum consciousness; there are a number of possibilities; but the microtubule has been studied for many years now and there's a big literature of models... a few of which might even have biophysical plausibility.

As for the interpretation of quantum mechanics itself, these developments are highly technical, but revolutionary. A well-known, well-studied quantum field theory turns out to have a bizarre new nonlocal formulation in which collections of particles seem to be replaced by polytopes in twistor space. Methods pioneered via purely mathematical studies of this theory are already being used for real-world calculations in QCD (the theory of quarks and gluons), and I expect this new ontology of "reality as a complex of twistor polytopes" to carry across as well. I don't know which quantum interpretation will win the battle now, but this is new information, of utterly fundamental significance. It is precisely the sort of altered holistic viewpoint that I was groping towards when I spoke about quantum monads constituted by entanglement. So I think things are looking good, just on the pure physics side. The real job remains to show that there's such a thing as quantum neurobiology, and to connect it to something like Husserlian transcendental phenomenology of the self via the new quantum formalism.

It's when we reach a level of understanding like that, that we will truly be ready to tackle the relationship between consciousness and the new world of intelligent autonomous computation. I don't deny the enormous helpfulness of the computational perspective in understanding unconscious "thought" and information processing. And even conscious states are still states, so you can surely make a state-machine model of the causality of a conscious being. It's just that the reality of how consciousness, computation, and fundamental ontology are connected, is bound to be a whole lot deeper than just a stack of virtual machines in the brain. We will have to fight our way to a new perspective which subsumes and transcends the computational picture of reality as a set of causally coupled black-box state machines. It should still be possible to "port" most of the thinking about Friendly AI to this new ontology; but the differences, what's new, are liable to be crucial to success. Fortunately, it seems that new perspectives are still possible; we haven't reached Kantian cognitive closure, with no more ontological progress open to us. On the contrary, there are still lines of investigation that we've hardly begun to follow.

## [Transcript] Richard Feynman on Why Questions

61 08 January 2012 07:01PM

I thought this video was a really good example of question-dissolving by Richard Feynman. But it's in 240p! Nobody likes watching 240p videos. So I transcribed it. (Edit: That was in jest. The real reasons are because I thought I could get more exposure this way, and because a lot of people appreciate transcripts. Also, Paul Graham speculates that the written word is universally superior to the spoken word for the purpose of ideas.) I was going to post it as a rationality quote, but the transcript was sufficiently long that I think it warrants a discussion post instead.

Here you go:

continue reading »

## Question about timeless physics

3 16 December 2011 01:09PM

Related to: lesswrong.com/lw/qp/timeless_physics/

Why do I find myself at this point in time, configuration space, rather than another point? In other words, why do I have certain expectations rather than others?

I don't expect the U.S. presidential elections to have happened but to happen next, where "to happen" and "to have happened" internally mark the sequential order of steps indexed by consecutive timestamps. But why do I find myself to have that particular expectation rather than any other? What is it that privileges this point?

So you seem to remember Time proceeding along a single line.  You remember that the particle first went left, and then went right.  You ask, "Which way will the particle go this time?"

My question is why I find myself to remember that the particle went left and then right rather than left but not yet right?

But both branches, both future versions of you, just exist.  There is no fact of the matter as to "which branch you go down".  Different versions of you experience both branches.

Yes, but why does my version experience this point of my branch and not any other point of my branch?

I understand that if this universe were a giant simulation, and it were to halt and then resume after some indexical measure of causal steps used by those outside of it, then I wouldn't notice. Therefore, if you remove the notion of an outside world, there ceases to be any measure of how many causal steps it took before my relational measure of progression continued.

But that's not my question. Assume for a moment that my conscious experience is not a causal continuum but a discrete sequence of causal steps from 1, 2, 3, ... to N, where N marks this point. Why do I find myself at N rather than 10 or N+1?

## Problems of the Deutsch-Wallace version of Many Worlds

4 16 December 2011 06:55AM

The subject has already been raised in this thread, but in a clumsy fashion. So here is a fresh new thread, where we can discuss, calmly and objectively, the pros and cons of the "Oxford" version of the Many Worlds interpretation of quantum mechanics.

This version of MWI is distinguished by two propositions. First, there is no definite number of "worlds" or "branches". They have a fuzzy, vague, approximate, definition-dependent existence. Second, the probability law of quantum mechanics (the Born rule) is to be obtained, not by counting the frequencies of events in the multiverse, but by an analysis of rational behavior in the multiverse. Normally, a prescription for rational behavior is obtained by maximizing expected utility, a quantity which is calculated by averaging "probability x utility" for each possible outcome of an action. In the Oxford school's "decision-theoretic" derivation of the Born rule, we somehow start with a ranking of actions that is deemed rational, then we "divide out" by the utilities, and obtain probabilities that were implicit in the original ranking.

I reject the two propositions. "Worlds" or "branches" can't be vague if they are to correspond to observed reality, because vagueness results from an object being dependent on observer definition, and the local portion of reality does not owe its existence to how we define anything; and the upside-down decision-theoretic derivation, if it ever works, must implicitly smuggle in the premises of probability theory in order to obtain its original rationality ranking.
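The "standard direction" described in this post - probabilities and utilities in, a rationality ranking out - can be made concrete in a few lines of Python. This is a minimal sketch with illustrative numbers of my own, not anything taken from the Oxford papers:

```python
def expected_utility(outcomes):
    """Standard direction: probabilities and utilities in, a
    ranking of actions out. `outcomes` is a list of
    (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Two bets on the same pair of branches (hypothetical numbers;
# the weights 0.8 / 0.2 play the role of Born-rule probabilities).
bet_on_up = [(0.8, 10.0), (0.2, -5.0)]
bet_on_down = [(0.2, 10.0), (0.8, -5.0)]

# A rational agent prefers the action with higher expected utility.
assert expected_utility(bet_on_up) > expected_utility(bet_on_down)

# The Deutsch-Wallace derivation runs the other way: it starts from
# a preference ordering over such actions and tries to recover the
# weights 0.8 / 0.2 from it.
```

The critique in the post is precisely that running this construction backwards, from the preference ordering to the weights, risks smuggling the probabilities in through the assumed rationality axioms.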

Some references:

"Decoherence and Ontology: or, How I Learned to Stop Worrying and Love FAPP" by David Wallace. In this paper, Wallace says, for example, that the question "how many branches are there?" "does not... make sense", that the question "how many branches are there in which it is sunny?" is "a question which has no answer", "it is a non-question to ask how many [worlds]", etc.

"Quantum Probability from Decision Theory?" by Barnum et al. This is a rebuttal of the original argument (due to David Deutsch) that the Born rule can be justified by an analysis of multiverse rationality.

## How Many Worlds?

2 14 December 2011 02:51PM

How many universes "branch off" from a "quantum event", and in how many of them is the cat dead vs alive, and what about non-50/50 scenarios, and please answer so that a physics dummy can maybe kind of understand?

(Is it just 1 with the live cat and 1 with the dead one?)

## Physics question (slightly off-topic)

2 12 December 2011 05:53AM

There's probably a better place to ask this question, but I don't know what it is. That being said...

Which will go further if a batter manages to hit it with a baseball bat: a baseball thrown to the batter at 90 miles per hour or one thrown at 60 miles per hour?

## Kickstarter fundraising for largest Tesla Coils in history

1 28 November 2011 05:48AM

"If the government is not willing to fund the building of two 10-story tall Tesla Coils, then why the hell do I even pay taxes?"

This seems like by far the best investment of \$300,000 out there, if your metric is revolutionary new physics discovered per dollar. I pointed the founder at Thiel's Breakout Labs, which is probably more suited to this kind of thing than Kickstarter. But there is still a very non-negligible chance that the Kickstarter Grant will come to fruition.

## OPERA Confirms: Neutrinos Travel Faster Than Light

10 18 November 2011 09:58AM

New high-precision tests carried out by the OPERA collaboration in Italy broadly confirm its claim, made in September, to have detected neutrinos travelling at faster than the speed of light. The collaboration today submitted its results to a journal, but some members continue to insist that further checks are needed before the result can be considered sound.

The OPERA Collaboration sent to the Cornell Arxiv an updated version of their preprint today, where they summarize the results of their analysis, expanded with additional statistical tests, and including the check performed with 20 additional neutrino interactions they collected in the last few weeks. These few extra timing measurements crucially allow the ruling out of some potential unaccounted sources of systematic uncertainty, notably ones connected to the knowledge of the proton spill time distribution.

[...]

So what does OPERA find? Their main result, based on the 15,233 neutrino interactions collected in three years of data taking, is unchanged from the September result. The most interesting part of the new publication is instead that they find that the 20 new neutrino events (where neutrino speeds are individually measured, as opposed to the combined measurement done with the three-year data published in September) confirm the earlier result: the arrival times appear to occur about 60 nanoseconds before they are expected.

Previously on LW: lesswrong.com/lw/7rc/particles_break_lightspeed_limit/

## Physics Video Lectures

6 02 October 2011 09:05AM

## The Apparent Reality of Physics

-3 23 September 2011 08:10PM

Follow-up to: Syntacticism

I wrote:

The only objects that are real (in a Platonic sense) are formal systems (or rather, syntaxes). That is to say, my ontology is the set of formal systems. (This is not incompatible with the apparent reality of a physical universe).

In my experience, most people default1 to naïve physical realism: the belief that "matter and energy and stuff exist, and they follow the laws of physics".  This view has two problems: how do you know stuff exists, and what makes it follow those laws?

To the first - one might point at a rock, and say "Look at that rock; see how it exists at me."  But then we are relying on sensory experience; suppose the simulation hypothesis were true, then that sensory experience would be unchanged, but the rock wouldn't really exist, would it?  Suppose instead that we are being simulated twice, on two different computers.  Does the rock exist twice as much?  Suppose that there are actually two copies of the Universe, physically existing.  Is there any way this could in principle be distinguished from the case where only one copy exists?  No; a manifest physical reality is observationally equivalent to N manifest physical realities, as well as to a single simulation or indeed N simulations.  (This remains true if we set N=0.)

So a true description requires that the idea of instantiation should drop out of the model; we need to think in a way that treats all the above cases as identical, that justifiably puts them all in the same bucket.  This we can do if we claim that that-which-exists is precisely the mathematical structure defining the physical laws and the index of our particular initial conditions (in a non-relativistic quantum universe that would be the Schrödinger equation and some particular wavefunction).  Doing so then solves not only the first problem of naïve physical realism, but the second also, since trivially solutions to those laws must follow those laws.

But then why should we privilege our particular set of physical laws, when that too is just a source of indexical uncertainty?  So we conclude that all possible mathematical structures have Platonic existence; there is no little XML tag attached to the mathematics of our own universe that states "this one exists, is physically manifest, is instantiated", and in this view of things such a tag is obviously superfluous; instantiation has dropped out of our model.

When an agent in universe-defined-by-structure-A simulates, or models, or thinks-about, universe-defined-by-structure-B, they do not 'cause universe B to come into existence'; there is no refcount attached to each structure, to tell the Grand Multiversal Garbage Collection Routine whether that structure is still needed.  An agent in A simulating B is not a causal relation from A to B; instead it is a causal relation from B to A!  B defines the fact-of-the-matter as to what the result of B's laws is, and the agent in A will (barring cosmic rays flipping bits) get the result defined by B.2

So we are left with a Platonically existing multiverse of mathematical structures and solutions thereto, which can contain conscious agents to whom there will be every appearance of a manifest instantiated physical reality, yet no such physical reality exists.  In the terminology of Max Tegmark (The Mathematical Universe) this position is the acceptance of the MUH but the rejection of the ERH (although the Mathematical Universe is an external reality, it's not an external physical reality).

Reducing all of applied mathematics and theoretical physics to a syntactic formal system is left as an exercise for the reader.

1That is, when people who haven't thought about such things before do so for the first time, this is usually the first idea that suggests itself.

2I haven't yet worked out what happens if a closed loop forms, but I think we can pull the same trick that turns formalism into syntacticism; or possibly, consider the whole system as a single mathematical structure which may have several stable states (indexical uncertainty) or no stable states (which I think can be resolved by 'loop unfolding', a process similar to that which turns the complex plane into a Riemann surface - but now I'm getting beyond the size of digression that fits in a footnote; a mathematical theory of causal relations between structures needs at least its own post, and at most its own field, to be worked out properly).

## Particles break light-speed limit?

9 23 September 2011 11:00AM

http://www.nature.com/news/2011/110922/full/news.2011.554.html

http://arxiv.org/abs/1109.4897v1

http://usersguidetotheuniverse.com/?p=2169

http://news.ycombinator.com/item?id=3027056

Ereditato says that he is confident enough in the new result to make it public. The researchers claim to have measured the 730-kilometre trip between CERN and its detector to within 20 centimetres. They can measure the time of the trip to within 10 nanoseconds, and they have seen the effect in more than 16,000 events measured over the past two years. Given all this, they believe the result has a significance of six-sigma — the physicists' way of saying it is certainly correct. The group will present their results tomorrow at CERN, and a preprint of their results will be posted on the physics website ArXiv.org.

At least one other experiment has seen a similar effect before, albeit with a much lower confidence level. In 2007, the Main Injector Neutrino Oscillation Search (MINOS) experiment in Minnesota saw neutrinos from the particle-physics facility Fermilab in Illinois arriving slightly ahead of schedule. At the time, the MINOS team downplayed the result, in part because there was too much uncertainty in the detector's exact position to be sure of its significance, says Jenny Thomas, a spokeswoman for the experiment. Thomas says that MINOS was already planning more accurate follow-up experiments before the latest OPERA result. "I'm hoping that we could get that going and make a measurement in a year or two," she says.

Perhaps the end of the era of the light cone and beginning of the era of the neutrino cone? I'd be curious to see your probability estimates for whether this theory pans out. Or other crackpot hypotheses to explain the results.

## Review article on Bayesian inference in physics

6 19 September 2011 11:45PM

A nice article just appeared in Reviews of Modern Physics. It offers a brief coverage of the fundamentals of Bayesian probability theory, the practical numerical techniques, a diverse collection of real-world examples of applications of Bayesian methods to data analysis, and even a section on Bayesian experimental design. The PDF is available here.

The abstract:

# Bayesian inference in physics

Udo von Toussaint*
Max-Planck-Institute for Plasmaphysics, Boltzmannstrasse 2, 85748 Garching, Germany

Received 8 December 2009; published 19 September 2011

Bayesian inference provides a consistent method for the extraction of information from physics experiments even in ill-conditioned circumstances. The approach provides a unified rationale for data analysis, which both justifies many of the commonly used analysis procedures and reveals some of the implicit underlying assumptions. This review summarizes the general ideas of the Bayesian probability theory with emphasis on the application to the evaluation of experimental data. As case studies for Bayesian parameter estimation techniques examples ranging from extra-solar planet detection to the deconvolution of the apparatus functions for improving the energy resolution and change point estimation in time series are discussed. Special attention is paid to the numerical techniques suited for Bayesian analysis, with a focus on recent developments of Markov chain Monte Carlo algorithms for high-dimensional integration problems. Bayesian model comparison, the quantitative ranking of models for the explanation of a given data set, is illustrated with examples collected from cosmology, mass spectroscopy, and surface physics, covering problems such as background subtraction and automated outlier detection. Additionally the Bayesian inference techniques for the design and optimization of future experiments are introduced. Experiments, instead of being merely passive recording devices, can now be designed to adapt to measured data and to change the measurement strategy on the fly to maximize the information of an experiment. The applied key concepts and necessary numerical tools which provide the means of designing such inference chains and the crucial aspects of data fusion are summarized and some of the expected implications are highlighted.

© 2011 American Physical Society
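To give a concrete flavor of the Markov chain Monte Carlo techniques the review emphasizes, here is a minimal Metropolis sampler for a one-dimensional posterior. This is a toy sketch of my own (a standard normal target), not an example from the paper:

```python
import math
import random

def metropolis(log_post, x0, step, n):
    """Minimal Metropolis sampler: Gaussian random-walk proposals,
    accepted with probability min(1, post(x')/post(x))."""
    x, samples = x0, []
    for _ in range(n):
        proposal = x + random.gauss(0.0, step)
        if math.log(random.random()) < log_post(proposal) - log_post(x):
            x = proposal
        samples.append(x)
    return samples

# Toy posterior: standard normal, log-density up to a constant.
log_post = lambda x: -0.5 * x * x

random.seed(0)
chain = metropolis(log_post, x0=5.0, step=1.0, n=20000)
burned = chain[5000:]  # discard burn-in while the chain forgets x0
mean = sum(burned) / len(burned)
print(round(mean, 2))  # should land close to 0, the posterior mean
```

Real applications of the kind surveyed in the review replace the toy log-posterior with the log-likelihood of experimental data plus log-priors, and run in many dimensions, but the acceptance rule is the same.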

## A prize to become artist in residence at CERN

2 03 September 2011 02:56AM

http://www.aec.at/collide/

Prix Ars Electronica Collide@CERN is the new international competition for digital artists to win a residency at CERN the world's largest particle physics laboratory in Geneva. It is the first prize to be announced as part of the new Collide@CERN artists residency programme initiated by the laboratory.

The residency is in two parts - with an initial two months at CERN, where the winning artist will have a specially dedicated science mentor from the world famous science lab to inspire him/her and his/her work. The second part will be a month with the Futurelab team and mentor at Ars Electronica Linz with whom the winner will develop and make new work inspired by the CERN residency.

## Schroedinger's cat is always dead

-14 26 August 2011 05:58PM

Suppose you believe in the Copenhagen interpretation of quantum mechanics.  Schroedinger puts his cat in a box, with a device that has a 50% chance of releasing a deadly poisonous gas.  He will then open the box, and observe a live or dead cat, collapsing that wavefunction.

But Schroedinger's cat is lazy, and spends most of its time sleeping.  Schroedinger is a pessimist, or else an optimist who hates cats; and so he mistakes a sleeping cat for a dead cat with probability P(M) > 0, but never mistakes a dead cat for a living cat.

So if the cat is dead with probability P(D) >= .5, Schroedinger observes a dead cat with probability P(D) + P(M)(1-P(D)).

If observing a dead cat causes the wavefunction to collapse such that the cat is dead, then P(D) = P(D) + P(M)(1-P(D)).  Since P(M) > 0, this is possible only if P(D) = 1.
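The fixed-point claim is easy to check numerically. A small sketch of my own, just iterating the post's observation rule, with P(D) and P(M) as the death and mistake probabilities:

```python
def observed_dead(p_dead, p_mistake):
    """Probability Schroedinger reports a dead cat:
    truly dead, plus sleeping but misread as dead."""
    return p_dead + p_mistake * (1 - p_dead)

# If each observation "collapses" the cat into the observed state,
# P(D) gets replaced by the observed-dead probability every time.
p_d, p_m = 0.5, 0.1
for _ in range(1000):
    p_d = observed_dead(p_d, p_m)

print(round(p_d, 6))  # 1.0 - the only fixed point when p_m > 0
```

Algebraically the same thing: P(D) = P(D) + P(M)(1-P(D)) forces P(M)(1-P(D)) = 0, and with P(M) > 0 that means P(D) = 1.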

continue reading »

## Polarized gamma rays and manifest infinity

16 30 July 2011 06:56AM

Most people (not all, but most) are reasonably comfortable with infinity as an ultimate (lack of) limit. For example, cosmological theories that suggest the universe is infinitely large and/or infinitely old, are not strongly disbelieved a priori.

By contrast, most people are fairly uncomfortable with manifest infinity, actual infinite quantities showing up in physical objects. For example, we tend to be skeptical of theories that would allow infinite amounts of matter, energy or computation in a finite volume of spacetime.

continue reading »

## States of knowledge as amplitude configurations

0 [deleted] 08 June 2011 06:38PM

I am reading through the sequence on quantum physics and have had some questions which I am sure have been thought about by far more qualified people. If you have any useful comments or links about these ideas, please share.

Most of the strongest resistance to ideas about rationalism that I encounter comes not from people with religious beliefs per se, but usually from mathematicians or philosophers who want to assert arguments about the limits of knowledge, the fidelity of sensory perception as a means for gaining knowledge, and various (what I consider to be) pathological examples (such as the zombie example). Among other things, people tend to reduce the argument to the existence of proper names a la Wittgenstein and then go on to assert that the meaning of mathematics or mathematical proofs constitutes something which is fundamentally not part of the physical world.

As I am reading the quantum physics sequence (keep in mind that I am not a physicist; I am an applied mathematician and statistician, and so the mathematical framework of Hilbert spaces and amplitude configurations makes vastly more sense to me than billiard balls or waves, yet connecting it to reality is still very hard for me), I am struck by the thought that all thoughts are themselves fundamentally just amplitude configurations, and by extension, all claims about knowledge about things are also statements about amplitude configurations. For example, my view is that the color red does not exist in and of itself but rather that the experience of the color red is a statement about common configurations of particle amplitudes. When I say "that sign is red", one could unpack this into a detailed statement about statistical properties of configurations of particles in my brain.

The same reasoning seems to apply just as well to something like group theory. States of knowledge about the Sylow theorems, just as an example, would be properties of particle amplitude configurations in a brain. The Sylow theorems are not separately existing entities which are of themselves "true" in any sense.

Perhaps I am way off base in thinking this way. Can any philosophers of the mind point me in the right direction to read more about this?

## The Multiverse Interpretation of Quantum Mechanics [link]

9 03 June 2011 08:45AM

## 12-year old challenges the Big Bang

1 [deleted] 29 March 2011 05:40AM

I thought this may be of interest to the LW community. Jacob Barnett is a 12-year-old male who taught himself all of high school math (algebra through calculus), has a tested math IQ of 170 (for what that's worth), and is currently on track to become an astrophysics researcher. His major newsworthy claim to fame (aside from being really young): the Big Bang theory is currently incorrect (I believe the article says something about a lack of carbon in the model), and he's planning to develop a new theory.

I haven't learned anything serious in physics, so I have nothing to note on his claim. I realize the news article cited puts his claim fairly generally, so I'll ask this: Can someone explain how elements are generally modeled to have formed from the big bang? And is there anything that Jacob may be missing in the current literature?

## QFT, Homotopy Theory and AI?

-3 30 October 2010 10:48AM

What do you think about the new, exciting connections between QFT, Homotopy Theory and pattern recognition, proof verification and (maybe) AI systems? In view of the background of this forum's participants (self-reported in the survey mentioned a few days ago), I guess most of you follow those developments with some attention.

Concerning Homotopy Theory, there is an upcoming [special year](http://www.math.ias.edu/node/2610); you probably know Voevodsky's [recent intro lecture](http://www.channels.com/episodes/show/10793638/Vladimir-Voevodsky-Formal-Languages-partial-algebraic-theories-and-homotopy-category-), and [this](http://video.ias.edu/voevodsky-80th) even more popular one. Somewhat related are Y.I. Manin's remarks on the missing quotient structures (analogous to localized categories) in data structures, and some of the ideas in Gromov's [essay](http://www.ihes.fr/~gromov/PDF/ergobrain.pdf).

Concerning ideas from QFT, [here](http://arxiv.org/abs/0904.4921) is an example. I wonder what other concepts come from it?

BTW, whereas the public discussion focuses on basic QM and on quantum-gravity questions, the really interesting and open issue is special-relativistic QFT: QM is just a canonical deformation of classical mechanics (and could have been found much earlier; most of the interpretation disputes come from confusing mathematical properties with physical data), but Feynman integrals remain, despite half a century of intense research, mathematically unfounded. As Y.I. Manin put it in a recent interview, they are "an Eiffel tower floating in the air". Only a strong Platonic belief makes people tolerate that. I myself take them seriously only because there is a clear Platonic idea behind them and because their number-theoretic analogues work very well.

## Deep Structure Determinism

1 10 October 2010 06:54PM

Sort of a response to: Collapse Postulate

Abstract:  There are phenomena in mathematics where certain structures are distributed "at random;" that is, statistical statements can be made and probabilities can be used to predict the outcomes of certain totally deterministic calculations.  These calculations have a deep underlying structure which leads a whole class of problems to behave in the same way statistically, in a way that appears random, while being entirely deterministic.  If quantum probabilities worked in this way, it would not require collapse or superposition.

This is a post about physics, and I am not a physicist.  I will reference a few technical details from my (extremely limited) research in mathematical physics, but they are not necessary to the fundamental concept.  I am sure that I have seen similar ideas somewhere in the comments before, but searching the site for "random + determinism" didn't turn much up, so if anyone recognizes it I would like to see other posts on the subject.  However, my primary purpose here is to put forward the name "Deep Structure Determinism," which jasonmcdowell coined when I explained the idea to him on the ride back from the Berkeley Meetup yesterday.

Again I am not a physicist; it could be that there is a one or two sentence explanation for why this is a useless theory--of course that won't stop the name "Deep Structure Determinism" from being aesthetically pleasing and appropriate.

For my undergraduate thesis in mathematics, I collected numerical evidence for a generalization of the Sato-Tate Conjecture.  The conjecture states, roughly, that if you take the right set of polynomials, compute the number of solutions to them over finite fields, and scale by a consistent factor, these results will have a probability distribution that is precisely a semicircle.

The reason that this is the case has something to do with the solutions being symmetric (in the way that y = x^2 if and only if y = (-x)^2 is a symmetry of the first equation) and their group of symmetries being a circle.  And stepping back one step, the conjecture more properly states that the numbers of solutions will be roots of a certain polynomial, which will be the characteristic polynomial of a random matrix in SU(2).

That is at least as far as I follow the mathematics, if not further.  However, it's far enough for me to stop and do a double take.

A "random matrix?"  First, what does it mean for a matrix to be random?  And given that I am writing up a totally deterministic process to feed into a computer, how can you say that the matrix is random?

A sequence of matrices is called "random" if, when you integrate over that sequence, your integral converges to the integral over the entire group of matrices.  Because matrix groups are often smooth manifolds, they are well suited to being integrated over, and this ends up being sensible.  However, a more practical characterization, and the one I used in the write-up for my thesis, is that a histogram of the points you are measuring should converge in shape to the distribution determined by the group--that is, if you're looking at matrices drawn from a circle group, your histogram should look more and more like a semicircle as you do more computing.  In other words, there is a probability distribution over the matrix space for where your matrix is likely to show up.
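That histogram characterization is easy to see concretely. A Haar-random SU(2) matrix corresponds to a uniform point (w, x, y, z) on the unit 3-sphere, and its half-trace w follows exactly the semicircle density (2/π)√(1-w²). The sketch below is my own illustrative construction (not tied to the thesis code); the function names are made up for the example:

```python
import random
import math

random.seed(0)  # deterministic run, for reproducibility

def haar_su2_half_trace():
    # A Haar-random SU(2) element can be written w*I + x*iX + y*iY + z*iZ
    # with (w, x, y, z) uniform on the unit 3-sphere; its trace is 2w.
    # Normalizing four independent Gaussians gives a uniform point on S^3.
    while True:
        v = [random.gauss(0, 1) for _ in range(4)]
        n = math.sqrt(sum(c * c for c in v))
        if n > 1e-12:
            return v[0] / n  # the half-trace w

samples = [haar_su2_half_trace() for _ in range(20000)]

# Bin the half-traces into 10 bins over [-1, 1]; the bin counts should
# trace out the semicircle density (2/pi) * sqrt(1 - t^2): heavy near 0,
# thinning out toward -1 and +1.
bins = [0] * 10
for t in samples:
    i = min(int((t + 1) / 0.2), 9)
    bins[i] += 1
```

Each sample is "random" in exactly the sense above: the histogram of the sequence converges to the shape determined by the group, even though any particular pseudorandom sequence producing it is fully deterministic.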

The actual computation that I did involved computing solutions to a polynomial equation--a trivial and highly deterministic procedure.  I then scaled the results and plotted them.  If I had not known that these numbers were each coming from a specific equation, I would have said that they were random; they jumped around through the possibilities, but they concentrated in the areas of higher probability.
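The flavor of that computation can be sketched in a few lines of Python (a toy version, not the thesis code): count the affine points on an elliptic curve y² = x³ + ax + b over F_p, form a_p = p − (point count), and scale by 2√p. By the Hasse bound the scaled values land in [−1, 1], and for a curve without complex multiplication Sato-Tate predicts they fill out the semicircle distribution. The curve y² = x³ + x + 1 and the helper names here are my own choices for the example:

```python
import math

def a_p(a, b, p):
    # Count affine solutions of y^2 = x^3 + a*x + b over F_p,
    # then return a_p = p - (affine count).
    squares = {}  # residue -> number of y in F_p with y^2 = residue
    for y in range(p):
        r = y * y % p
        squares[r] = squares.get(r, 0) + 1
    count = 0
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        count += squares.get(rhs, 0)
    return p - count

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, ok in enumerate(sieve) if ok]

# Scale a_p by 2*sqrt(p) for the curve y^2 = x^3 + x + 1.
# A completely deterministic calculation -- yet a histogram of `scaled`
# fills out the semicircle as more primes are included.
scaled = [a_p(1, 1, p) / (2 * math.sqrt(p)) for p in primes_up_to(500) if p > 3]
```

Every number in `scaled` is forced by the equation, yet the sequence passes the statistical test of "randomness" described above.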

So, bringing this back to quantum physics:  I am given to understand that quantum mechanics involves a lot of random matrices.  These random matrices give the impression of being "random" in that it seems like there are many possibilities, of which one must get "chosen" at the end of the day.  One simple way to deal with this is to postulate many worlds, wherein no one choice has a special status.

However, my experience with random matrices suggests that there could just be some series of matrices which satisfies the definition of being random but which is inherently determined (in the way that the Jacobian of a given elliptic curve is "determined").  If all quantum random matrices were selected from this list, it would leave us with the subjective experience of randomness and--given that this sort of computation may not be compressible--the expectation of treating these variables as random forever.  It would also leave us in a purely deterministic world, which does not branch, and which could easily be linear, unitary, differentiable, local, symmetric, and slower-than-light.
