In response to Why Quantum?
Comment author: mitchell_porter2 04 June 2008 09:12:44AM 1 point

Stephen: consistent histories works by having a set of disjoint, coarse-grained histories - "coarse-grained" meaning that they are underspecified by classical standards - which then obtain a-priori probabilities through the use of a "decoherence functional" (which is where stuff like the Hamiltonian, that actually defines the theory, enters). You then get the transition probabilities of ordinary quantum mechanics by conditioning on those global probabilities of whole histories.
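For reference, the standard Gell-Mann–Hartle form of the decoherence functional (stated here in the usual textbook notation, not anything specific to this comment) makes the above concrete:

```latex
% Class operator for history \alpha: a time-ordered chain of
% coarse-grained Heisenberg-picture projectors
C_\alpha = P^{(n)}_{\alpha_n}(t_n) \cdots P^{(1)}_{\alpha_1}(t_1)

% Decoherence functional on the initial state \rho
D(\alpha, \beta) = \mathrm{Tr}\!\left[ C_\alpha \, \rho \, C_\beta^{\dagger} \right]

% Consistency condition, and the induced a-priori probabilities
\operatorname{Re} D(\alpha, \beta) = 0 \quad (\alpha \neq \beta),
\qquad p(\alpha) = D(\alpha, \alpha)
```

The Hamiltonian enters through the Heisenberg time dependence of the projectors, which is the sense in which the dynamics "actually defines the theory" here.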

Some people have a neo-Copenhagenist attitude towards consistent histories - i.e., it's just a formalism - but if you take it seriously as a depiction of an actually existing ensemble of worlds, it's quite different from the more Parmenidean vision offered here, in which reality is a standing wave in configuration space, and "worlds" (and, therefore, observers) are just fuzzily defined substructures of that standing wave. The worlds in a realist consistent-histories interpretation would be sharply defined and noninteracting.

There is certainly a relation between the two possible versions of Many Worlds, in that you can construct a decoherence functional out of a wavefunction of the universe, and derive the probabilities of the coarse-grained histories from it. In effect, each history corresponds to a chunk of configuration space, and the total probability of that history comes from the amplitudes occupying that chunk. (The histories do not need to cover all of configuration space; they only need to be disjoint.)

I really need some terminology here. I'm going to call one type Parmenidean, and the other type Lewisian, after David Lewis, the philosopher who talked about causally disjoint multiple worlds. So: you can get a Lewisian theory of many worlds from a Parmenidean theory by breaking off chunks of the Parmenidean "block multiverse" and saying that those are the worlds. I can imagine a debate between a Parmenidean and a Lewisian, in which the Parmenidean would claim that their approach is superior because they regard all the possible Lewisian decompositions as equally partially real, whereas the Lewisian might argue that their approach is superior because there's no futzing around about what a "world" is - the worlds are clearly (albeit arbitrarily) defined.

But the really significant thing is that you can get the numerical quantum predictions from the "Lewisian" approach, but you can't get them from the Parmenidean. Robin Hanson's mangled worlds formula gets results by starting down the road towards a Lewisian specification of exactly what the worlds are, but he gets the right count in a certain limit without having to specify exactly when one world becomes two (or many). Anyway, the point is not that consistent histories makes different predictions, but that it makes predictions at all.

In response to Why Quantum?
Comment author: mitchell_porter2 04 June 2008 07:13:57AM 2 points

"The balance of arguments is overwhelmingly tipped; and physicists who deny it, are making specific errors of probability theory (which I have specifically laid out, and shown to you)"

I guess this refers to the error of supposing that Occam's Razor literally means "have as few entities as possible", rather than "have a theory as simple as possible", and opposing Many Worlds for that reason. Which is indeed an error.

But perhaps for the last time, I will try to enumerate those problems with your position that I can remember.

1. There is no relativistic formulation of Many Worlds; you just trust that there is.

2. There is no derivation of the Born probabilities, which contain all the predictive content of quantum mechanics.

3. Robin Hanson has a proposal to derive the probabilities, but for now it rests on turning vagueness about the concepts of observer and world into a virtue.
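For reference, the Born probabilities mentioned in point 2, stated in the standard state-vector form:

```latex
% Probability of outcome i when measuring a state |\psi\rangle
% in the eigenbasis \{|i\rangle\}
p(i) = |\langle i | \psi \rangle|^{2}

% More generally, for a projector P_i onto the eigenspace of outcome i
p(i) = \langle \psi | P_i | \psi \rangle
```

This is the rule that any would-be derivation from Many Worlds has to recover exactly.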

You've given zero public consideration to other possibilities such as temporally bidirectional causation and nonsubjective collapse theories. You've also ignored Bohmian mechanics, a classically objective theory which does make all the predictions of quantum theory. You also haven't said anything about the one version of Many Worlds which does produce predictions - the version Gell-Mann favors, "consistent histories" - which has a distinctly different flavor to the "waves in configuration space" version.

In view of all that, how can you possibly say that Many Worlds is rationally favored, or that you have made a compelling case for this?

I'll repeat my earlier recommendation:

"What you should say as a neo-rationalist is that ... people should not be content with an incomplete description of the world, and that something like Minimum Description Length should be used to select between possible complete theories when there is nothing better, and you should leave it at that."

I wrote a little essay at Nick Tarleton's forum, here, about these problems. I will at some point link from there to my various comments posted here, so it's all in the one place. And I suppose eventually I'll have to write my own views out at length (not just my anti-MWI views). My main unexpressed view is that string theory is probably the answer, and that attempts to make ontological sense of physics will have to grapple with its details, and so all these other 'interpretations' are merely preliminary ideas that may at best be helpful in the real struggle.

In response to Timeless Identity
Comment author: mitchell_porter2 03 June 2008 10:01:57AM 2 points

Covalent bonds with external atoms are just one form of "correlation with the environment".

I wish to postulate a perfect copy, in the sense that the internal correlations are identical to the original, but in which the correlations to the rest of the universe are different (e.g. "on Mars" rather than "on Earth").

There is some confusion here in the switching between individual configurations, and configuration space. An atom is already a blob in configuration space (e.g. "one electron in the ground-state orbital") rather than a single configuration, with respect to a particle basis.

While we cannot individuate particles in a relative configuration, we can individuate wave packets traveling in relative configuration space. And since even an atom already exists at that level, it is far from clear to me that the attempt to abandon continuity of identity carries over to complicated structures.

In response to Timeless Identity
Comment author: mitchell_porter2 03 June 2008 09:47:00AM 0 points

Eliezer: That's how we distinguish Eliezer from Mitchell.

Isn't that then how we distinguish a nondestructive copy from the original? If the original has been copied nondestructively, why shouldn't we continue to regard it as the original?

In response to Timeless Identity
Comment author: mitchell_porter2 03 June 2008 09:23:53AM 6 points

The argument that "there is no such thing as a particular atom, therefore neither duplicate has a preferred status as the original" looks sophistical, and it may even be possible to show that it is within your preferred quantum framework. Consider a benzene ring. That's a ring of six carbon atoms. If it occurs as part of a larger molecule, there will be covalent bonds between particular atoms in the ring and atoms exterior to it. Now suppose I verify the presence of the benzene ring through some nondestructive procedure, and then create another benzene ring elsewhere, using other atoms. In fact, suppose I have a machine which will create that second benzene ring only if the investigative procedure verifies the existence of the first. I have created a copy, but are you really going to say there's no fact of the matter about which is the original? There's even a hint of how you can distinguish between the two given your ontological framework, when I stipulated that the original ring is bonded to something else; something not true of the duplicate. If you insist on thinking there is no continuity of identity of individual particles, at least you can say that one of the carbon atoms in the first ring is entangled with an outside atom in a way that none of the atoms in the duplicate ring is, and distinguish between them that way. You may be able to individuate atoms within structures by looking at their quantum correlations; you won't be able to say 'this atom has property X, that atom has property Y' but you'll be able to say 'there's an atom with property X, and there's an atom with property Y'.

Assuming that this is on the right track, the deeper reality is going to be field configurations anyway, not particle configurations. Particle number is frame-dependent (see: Unruh effect), and a quantum particle is just a sort of wavefunction over field configurations - a blob of amplitude in field configuration space.

Comment author: mitchell_porter2 02 June 2008 08:37:45AM 2 points

But dammit, wavefunctions don't collapse!

Sorry to distract from your main point, but it is quite a distance from there to "therefore, there are Many Worlds". And for that matter, you have definitely not addressed the notion of collapse in all its possible forms. The idea of universe-wide collapse caused by observation is definitely outlandish, solipsistic in fact, and also leaves "consciousness", "observation", or "measurement" as an unanalysed residue. However, that's not the only way to introduce discontinuity into wavefunction evolution. One might have jumps (they won't be collapses, if you think in terms of state vectors) merely with respect to a particular subspace. The actual history of states of the universe might be a complicated agglomeration of quantum states, with spacelike joins provided by the tensor product, and timelike joins by a semilocal unitary evolution.

Furthermore, there have always been people who held that the wavefunction is only a representation of one's knowledge, and that collapse does not refer to an actual physical process. What you should say as a neo-rationalist is that those people should not be content with an incomplete description of the world, and that something like Minimum Description Length should be used to select between possible complete theories when there is nothing better, and you should leave it at that.

I suppose that the larger context is that in the case of seed AI, we need to get it right the first time, and therefore it would be helpful to have an example of rationality which doesn't just consist of "run the experiment and see what happens", in order to establish that it is possible to think about things in advance. But this is a bad example! Alas, I cannot call upon any intersubjective, third-party validation of this claim, but I do consider this (quantum ontology), if not my one true expertise, certainly a core specialization. And so I oppose my intuition to yours, and say that any valid completion of the Many Worlds idea is either going to have a simpler one-world variant, or, at best, it will be a theory of multiple self-contained worlds - not the splitting, half-merged, or otherwise interdependent worlds of standard MWI rhetoric. This is not a casual judgement; I could give a lengthy account of all the little reasons which lead up to it. If we had a showdown at the rationality dojo, I think it would come out the winner. But first of all, I think you should just ask yourself whether it makes sense to say "Many Worlds is the leading candidate" when all you have really shown is that a particular form of collapse theory is not.

Comment author: mitchell_porter2 01 June 2008 06:08:00AM 0 points

Regarding the definition of "intelligence": It's not hard to propose definitions, if you assume the framework of computer science. Consider the cognitive architecture known as an "expected-utility maximizer". It has, to a first approximation, two parts. One part is the utility function, which offers a way of ranking the desirability of the situation which the entity finds itself in. The other part is the problem-solving part: it suggests actions, selected so as to maximize expected utility.

The utility function itself offers a way to rate the intelligence of different designs for the problem-solving component. You might, for example, average the utility obtained after a certain period of time across a set of test environments, or even across all possible environments. The point is that the EUM is supposed to be maximizing utility, and if one EUM is doing better than another, it must be because its problem-solver is more successful.

The next step towards rating the intrinsic intelligence of the problem-solving component is to compare its performance, not just across different environments, but when presented with different utility functions. Ideally, you would take into account how well it does under all possible utility functions, in all possible environments. (Since this is computer science, a "possible environment" will have a rather abstract definition, such as "any set of causally coupled finite-state machines".)

There are issues with respect to how you average; there are issues with respect to whether the "intelligence" you get from a definition like this is actually calculable. Nonetheless, those really are just details. The point is that there are rigorous ways to rank programs with respect to their ability to solve problems, and whether or not such a ranking warrants the name "intelligence", it is clearly of pragmatic significance. An accurate metric for the problem-solving capability of a goal-directed entity tells you how effective it will be in realizing those goals, and hence how much of an influence it can be in the world.
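The rating scheme described above can be sketched in code. Everything here is a toy stand-in invented for illustration - the environments, utility functions, and solvers are not anyone's real proposal - but it shows the shape of the metric: an average of achieved utility over (environment, utility-function) pairs.

```python
import itertools

def run_episode(solver, env_seed, utility, steps=10):
    """Run a solver in a toy environment and return the utility of the
    final state. The 'environment' here is just a counter initialized
    from a seed; the solver picks an increment at each step."""
    state = env_seed
    for _ in range(steps):
        action = solver(state)   # solver proposes an action
        state = state + action   # environment transitions
    return utility(state)

def intelligence_score(solver, env_seeds, utilities):
    """Average achieved utility over all (environment, utility) pairs -
    the cross-environment, cross-goal rating described in the text."""
    scores = [run_episode(solver, e, u)
              for e, u in itertools.product(env_seeds, utilities)]
    return sum(scores) / len(scores)

# Two toy problem-solvers: one always acts, one never does.
greedy = lambda s: 1
inert = lambda s: 0

env_seeds = [0, 5, -3]
utilities = [lambda s: s,            # "bigger is better"
             lambda s: -abs(s - 4)]  # "aim for 4"
```

With these toy inputs, `intelligence_score(greedy, env_seeds, utilities)` comes out to 2.0 and the inert solver to about -1.67, so the metric does separate the two - but note the issues flagged above: the score depends entirely on how you weight the set of environments and utility functions you average over.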

And this allows me to leap ahead and present a similarly informal account of what a Singularity is, what the problem of Friendliness is, and what the proposed solution is. A Singularity happens when naturally evolved intelligences, using the theory and technology of computation, create artificial intelligences of significantly superior problem-solving capability ("higher intelligence"). This superiority implies that in any conflict of goals, a higher intelligence wins against a lower intelligence (I'm speaking in general), because intelligence by definition is effectiveness in bringing about goal states. Since the goals of an artificial intelligence are thoroughly contingent (definitely so in the case of the EUM cognitive architecture), there is a risk to the natural intelligences that their own goals will be overwhelmed by those of the AIs; this is the problem of Friendliness. And the solution - at least, what I take to be the best candidate solution - is to determine the utility function (or its analogue) of the natural intelligences, determine the utility function of an ideal moral agent, relative to the preferences of that evolved utility function, and use that idealized utility function as the goal system of the artificial intelligences.

That, in schema, is my version of the research program of Eliezer's Institute. I wanted to spell it out because I think it's pretty easy to understand, and who knows how long it will be before Eliezer gets around to expounding it here, at length. It may be questioned from various angles; it certainly needs much more detail; but you have, right there, a formulation of what our situation is, and how to deal with it, which strikes me as eminently defensible and doable.

In response to Class Project
Comment author: mitchell_porter2 31 May 2008 10:39:15AM 5 points

Robin B., I can't speak for Eliezer's characters, but I believe the fashionability of skepticism about string theory has come from the lack of falsifiable predictions, after so many years. No-one has been able to say "this is the ground state". Instead string theorists have studied a large number of possible ground states (distinguished by background geometry), most of them looking nothing like what we see, as they try to get a grip on the theory. The hope used to be that all but one would prove on further study to be unstable. Now there's an interest in anthropic predictions, though that's just one school of thought.

I have read that no-one has ever exhibited a string-theory ground state exactly reproducing the Standard Model, though you can get close. That has to be significant. If such a place can be found in the space of ground states ("moduli space"), you could then try to reason out why it was dynamically favored. And we'll get more information within a few years from the Large Hadron Collider, which will establish whether there's a Higgs boson or something else (I bet on something else; the Higgs was just the simplest way to make a tractable theory and lingers by default).

In string theory's favor is that it generically has spin-1/2 particles (fermions), spin-1 particles (gauge bosons), and spin-2 particles (gravitons). That's a neat trick. So I tend to think either that it is the answer, or that it is just a beast of so many parts that anything you might look for is in there somewhere. In the latter case, it could be compared to the Monster group, the largest sporadic finite simple group. There are infinitely many finite simple groups, just as there are infinitely many possible field theories, and most of the sporadic groups turn up inside the Monster as subquotients, just as string theory generically contains spin-1/2, spin-1, and spin-2 fields, like the real world. It could be that string theory is just the "maximally complicated field theory" (and in fact, mathematically, it has a relationship to the Monster) and that it derives this generic pseudo-predictiveness solely from that. It has a little bit of everything, so anything looks a little bit like it. It would certainly be a mistake to take some real object, like Rubik's cube, discover a few facts about its symmetry group, and then announce that its symmetry group must be the Monster, just because the Monster has subgroups with those properties. It could be that string theorists are making a mistake like that.

On the other hand, what's the alternative? The phenomenological approach to particle physics is just to postulate enough fields with enough properties to explain what you see. You can treat gravity as just another field, contingently present, but then your theory becomes mathematically intractable. Part of string theory's appeal is that you can calculate graviton-graviton scattering, etc., unlike any previous theory of quantum gravity. But the price is that you buy into the unification philosophy. Recently, there have been claims that "contingent" theories of quantum gravity - according to which reality is just a bunch of fields plus gravity, and there's no deeper reason as to why it's that combination of fields - can be made to work; this is the "loop quantum gravity" research program. It's my judgement that string theory is mathematically much more solid. The loop quantum gravity researchers have had to backtrack several times, after making ambitious claims about the construction of consistent "gravity-plus-anything" quantum theories. Right now the evidence (in my semi-lay opinion) points in the other direction, that gravity needs to be part of a larger ensemble of fields with special properties if it is to be quantizable. Which suggests string theory.

In response to Class Project
Comment author: mitchell_porter2 31 May 2008 07:41:59AM 0 points

And on Day 26 they rediscovered string theory, and saw that it was good.

Comment author: mitchell_porter2 30 May 2008 08:18:42AM 10 points

If Einstein had chosen the wrong angle of attack on his problem - if he hadn't chosen a sufficiently important problem to work on - if he hadn't persisted for years - if he'd taken any number of wrong turns - or if someone else had solved the problem first - then dear Albert would have ended up as just another Jewish genius.

But if Einstein was the reason why none of those things happened, then maybe he wasn't just another Jewish genius, eh? Maybe he was smart enough to choose the right methods, to select the important problems, to see the value in persisting, to avoid or recover from all the wrong turns, and to be the first.

My own ruminations on genius have led me to suppose that one mistake which people of the very highest intelligence may make, is to underestimate their own exceptionality; for example, to adopt theories of human potential which are excessively optimistic regarding the capabilities of other people. But that is largely just my own experience speaking. It similarly seems very possible that the lessons you are trying to impart here are simply things you wish you hadn't had to figure out for yourself, but are not especially helpful or relevant for anyone else. In fact, I am reminded of one of my own pessimistic meta-principles regarding people of very high ability, which is that their situation will be so individual that no-one will be able to help them or understand them. It's not literally true, but it does point the way to the further conclusion that they will have to solve their own problems.

If anyone wants to see thoughts about genius they haven't seen before, they should first of all study the works and career of Celia Green. And then, as a side dish, they might like to read the chapter "Odysseus of Ithaca, by Kuno Mlatje", in Stanislaw Lem's A Perfect Vacuum.
