
Comment author: Sengachi 03 February 2018 06:12:30AM *  0 points [-]

Just so you all know, Clifford Algebra derivations of quantized field theory show why the Born Probabilities are a squared proportion. I'm not sure there's an intuitively satisfying explanation I can give you for why this is in words rather than math, but here's my best try.

In mathematical systems with maximal algebraic complexity for their given dimensionality, the multiplication of an object by its dual provides an invariant of the system, a quantity which cannot be altered. (And all physical field theories, except gravity at this time, can be derived in full from the assumption of maximal algebraic complexity for 1 positive dimension and 3 negative dimensions.) [Here "object" refers to a mathematical quantity; in the case of the field theories we're concerned with, mostly bivectors.]

The quantity describing time evolution (complex phase amplitudes) must then have a corresponding invariant quantity, which is the mod squared of the complex phase. This mod squared quantity, being the system invariant whose sum provides the 'benchmark' by which one judges relative values, is then the relevant value for evaluating the physical meaning of time evolutions. So the physical reality one would expect to observe in probability distributions is the mod squared of the underlying quantity (complex phase amplitudes) rather than the quantity itself.

To explain it in a different way, since I suspect that one explanation alone is not adequate without an understanding of the math:

Clifford Algebra objects (i.e. the actual constructs the universe works with, as best we can tell) do not in and of themselves contain information. In fact, they contain no uniquely identifiable information. All objects can be modified with an arbitrary global phase factor, turning them into any one of an infinite set of objects. As such, actual measurement/observation of an object is impossible. You can't distinguish between the object being A or Ae^ib, because those are literally indistinguishable quantities. The object which could be those quantities lacks sufficient unique information to actually be one quantity or the other. So you're shit out of luck when it comes to measuring it. But though an object may not contain unique information, the object's mod squared does (and if this violates your intuition of how information works, may I remind you that your classical-world intuition of information counts for absolutely nothing at the group theory level). This mod squared is the lowest level of reality which contains uniquely identifiable information.
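Here is a minimal numerical sketch of that global-phase point, using plain complex amplitudes as a stand-in for the algebra objects discussed above (so it illustrates the idea rather than performing an actual Clifford Algebra computation):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "object": a vector of complex amplitudes standing in for the
# algebraic quantities discussed above.
A = rng.normal(size=4) + 1j * rng.normal(size=4)

# Multiply by an arbitrary global phase factor e^(ib).
b = 1.234
A_phased = A * np.exp(1j * b)

# The two objects differ as raw quantities...
print(np.allclose(A, A_phased))                             # False

# ...but their mod squared values are identical, so the global phase
# carries no uniquely identifiable information.
print(np.allclose(np.abs(A) ** 2, np.abs(A_phased) ** 2))   # True
```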

So the lowest level of reality at which you can meaningfully identify time evolution probabilities is going to be described as a square quantity.

Because the math says so.

By the way, we're really, really certain about this math. Unless the universe has additional spatiotemporal dimensions we don't know about (and I kind of doubt that) and only contains partial algebraic complexity in that space (and I really, really doubt that), this is it. There is no possible additional mathematical structure with which one could describe our universe that is not contained within the Cl_{1,3} algebra. There is literally no mathematical way to describe our universe which adequately contains all of the structure we have observed in electromagnetism (and weak force and strong force and Higgs force) which does not imply this mod squared invariant property as a consequence.

Furthermore, even before this mod squared property was understood as a consequence of full algebraic complexity, Emmy Noether had described and rigorously proved this relationship as the eponymous Noether's theorem, confirmed its validity against known theories, and used it to predict future results in field theory. So this notion is pretty well backed up by a century of experimental evidence too.

TL;DR: We (physicists who work with both differential geometries and quantum field theory and who take an interest in group theory fundamentals beyond what is needed to do conventional experimental or theory work) have known why the Born Probabilities are a squared proportion since, oh, probably the 1930s? Right after Dirac first published the Dirac Equation? It's a pretty simple thing to conclude from the observation that quantum amplitudes are a bivector quantity. But you'll still see physics textbooks describe it as a mystery and hear it pondered over philosophically, because the concept would need a base of people educated in Clifford Algebras to propagate through, and such a cohesive group of people just does not exist.

Comment author: Dacyn 15 February 2018 12:34:54AM *  1 point [-]

I don't know much about Clifford algebras. But do you really need them here? I thought the standard formulation of abstract quantum mechanics was that every system is described by a Hilbert space, the state of a system is described by a unit vector, and evolution of the system is given by unitary transformations. The Born probabilities are concerned with the question: if the state of the universe is the sum $\sum_i c_i \psi_i$, where the $\psi_i$ are orthogonal unit vectors representing macroscopically distinct outcome states, then what is the subjective probability of making observations compatible with the state $\psi_i$? The only reasonable answer to this is $|c_i|^2$, because it is the only function of $c_i$ that's guaranteed to sum to $1$ based on the setup. (I don't mean this as an absolute statement; you can construct counterexamples but they are not natural.) By the way, for those who don't know already, the reason that the $|c_i|^2$ are guaranteed to sum to $1$ is that, since the state vector is a unit vector, $\sum_i |c_i|^2 = \|\sum_i c_i \psi_i\|^2 = 1$.
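For readers who want the middle step spelled out (standard orthonormality algebra, nothing specific to this thread):

$$\Big\|\sum_i c_i \psi_i\Big\|^2 = \Big\langle \sum_i c_i \psi_i,\ \sum_j c_j \psi_j \Big\rangle = \sum_{i,j} \overline{c_i}\, c_j\, \langle \psi_i, \psi_j \rangle = \sum_i |c_i|^2,$$

where the cross terms vanish because $\langle \psi_i, \psi_j \rangle = 0$ for $i \neq j$, and the left-hand side equals $1$ because the state vector is a unit vector.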

Of course, most of the time when people worry about the Born probabilities they are worried about philosophical issues rather than justifying the naturalness of the squared modulus measure.

Comment author: g_pepper 12 December 2017 02:39:34PM 0 points [-]

On the other hand, maybe you should force them to endure the guilt, because maybe then they will be motivated to research why the agent who made the decision chose TORTURE, and so the end result will be some people learning some decision theory / critical thinking...

The argument that 50 years of torture of one person is preferable to 3^^^3 people suffering dust specks presumes utilitarianism. A non-utilitarian will not necessarily prefer torture to dust specks even if his/her critical thinking skills are up to par.

Comment author: Dacyn 15 December 2017 01:49:28AM 0 points [-]

I'm not a utilitarian. The argument that 50 years of torture is preferable to 3^^^3 people suffering dust specks only presumes that preferences are transitive, and that there exists a sequence of gradations between torture and dust specks with the properties that (A) N people suffering one level of the spectrum is always preferable to N*(a googol) people suffering the next level, and (B) the spectrum has at most a googol levels. I think it's pretty hard to consistently deny these assumptions, and I'm not aware of any serious argument put forth to deny them.
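To sketch how those assumptions combine (my own illustration of the chain, writing $G = 10^{100}$ for a googol and $\succ$ for "is preferable to", plus the tacit assumption that more people suffering a given harm is never better than fewer):

$$1 \text{ person at level } 1 \;\succ\; G \text{ people at level } 2 \;\succ\; G^2 \text{ people at level } 3 \;\succ\; \cdots \;\succ\; G^{\,G-1} \text{ people at the dust-speck level},$$

and since $G^{\,G-1} = 10^{100(10^{100}-1)}$ is incomprehensibly smaller than 3^^^3, the outcome with 3^^^3 dust-specked people is at least as bad as the last term in the chain, so transitivity gives that the single person's torture is preferable to the 3^^^3 dust specks.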

It's true that a deontologist might refrain from torturing someone even if he believes it would result in the better outcome. I was assuming a scenario where either way you are not torturing someone, just refraining from preventing them from being tortured by someone else.

Comment author: wafflepudding 19 September 2015 12:22:52AM 0 points [-]

I believe that the vast majority of people in the dust speck thought experiment would be very willing to endure the collision of the dust speck, if only to play a small role in saving a man from 50 years of torture. I would choose the dust specks on behalf of those hurt by the dust specks, as I can be very close to certain that most of them would consent to it.

A counterargument might be that, since 3^^^3 is such a vast number, the collective pain of the small fraction of people who would not consent to the dust speck still multiplies to be far larger than the pain that the man being tortured would endure. Thus, I would most likely be making a nonconsensual tradeoff in favor of pain. However, I do not value the comfort of those that would condemn a man to 50 years of torture in order to alleviate a moment's mild discomfort, so 100% of the people whose lack of pain I value would willingly trade it over.

If someone can sour that argument for my mind, I'll concede that I prefer the torture.

Comment author: Dacyn 10 December 2017 10:26:13PM 0 points [-]

The only people who would consent to the dust speck are people who would choose SPECKS over TORTURE in the first place. Are you really saying that you "do not value the comfort of" Eliezer, Robin, and others?

However, your argument raises another interesting point, which is that the existence of people who would prefer that SPECKS was chosen over TORTURE, even if their preference is irrational, might change the outcome of the computation because it means that a choice of TORTURE amounts to violating their preferences. If TORTURE violates ~3^^^3 people's preferences, then perhaps it is after all a harm comparable to SPECKS. This would certainly be true if everyone finds out about whether SPECKS or TORTURE was chosen, in which case TORTURE makes it harder for a lot of people to sleep at night.

On the other hand, maybe you should force them to endure the guilt, because maybe then they will be motivated to research why the agent who made the decision chose TORTURE, and so the end result will be some people learning some decision theory / critical thinking...

Also, if SPECKS vs TORTURE decisions come up a lot in this hypothetical universe, then realistically people will only feel guilty over the first one.

Comment author: Kenny 24 November 2016 05:43:20PM 0 points [-]

Mathematics, the thing that humans do, completely side-steps the trilemma. There's no need to justify any particular axiom, qua mathematics, because one can investigate the system(s) implied by any set of axioms.

But practically, e.g. when trying to justify the use of mathematics to describe the world or some part thereof, one must accept some axioms to even be able to 'play the game'. Radical skepticism, consistently held, is impractical, e.g. if you can't convince yourself that you and I are communicating then how do you convince yourself that there's a Munchausen Trilemma to be solved (or dissolved), let alone anything else about which to reason?

Comment author: Dacyn 26 November 2016 01:22:22AM 0 points [-]

The investigation of the systems implied by a set of axioms also requires some assumptions. For example, one must assume that any axiom implies itself, i.e. P -> P. Once this axiom is accepted, there are a great number of logical axioms which are equally plausible.

Comment author: Dacyn 06 October 2016 11:10:26AM 0 points [-]

So let me see if I've got this straight.

Computer Scientists: For some problems, there are random algorithms with the property that they succeed with high probability for any possible input. No deterministic algorithm for these problems has this property. Therefore, random algorithms are superior.

Eliezer: But if we knew the probability distribution over possible inputs, we could create a deterministic algorithm with the property.

Computer Scientists: But we do not know the probability distribution over possible inputs.

Eliezer: Never say "I don't know"! If you are in a state of ignorance, use an ignorance prior.

Now of course the key question is what sort of ignorance prior we should use. In Jaynes's book, usually ignorance priors with some sort of nice invariance properties are used, which makes calculations simpler. For example if the input is a bit stream then we could assume that the bits are independent coinflips. However, in real life this does not correspond to a state of ignorance, but rather to a state of knowledge where we know that the bit stream does not contain predictable correlations. For example, the probability of 1000 zeros in a row according to this ignorance prior is 2^{-1000}, or about 10^{-301}, which is not even remotely close to the intuitive probability.
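A quick sanity check of that order of magnitude (just arithmetic, nothing more):

```python
from math import log10

# Under the "independent fair coinflips" prior, any specific run of 1000
# bits (in particular 1000 zeros) has probability 2^-1000.
p = 2.0 ** -1000
print(p)                # ~9.33e-302
print(1000 * log10(2))  # ~301.03, i.e. p is about 10^-301
```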

The next step is to try to create an ignorance prior which somehow formalizes Occam's razor. As anyone familiar with MIRI's work on the problem of logical priors should know, this is more difficult than it sounds. Essentially, the best solution so far (see "Logical Induction", Garrabrant et al. 2016) is to create a list of all of the ways the environment could contain predictable correlations (or in the language of this post, resemble an "adversarial telepath"), and then trade them off against each other to get a final probability. One of the main downsides of the algorithm is that it is not possible to list all of the possible ways the environment could be correlated (since there are infinitely many), so you have to limit yourself to taking a feasible sample.

Now, it is worth noting that the above paper is concerned with an "environment" which is really just the truth-values of mathematical statements! It is hard to see how this environment resembles any sort of "adversarial telepath". But if we want to maintain the ethos of this post, it seems that we are forced to this conclusion. Otherwise, an environment with logical uncertainty could constitute a counterexample to the claim that randomness never helps.

To be precise, let f be a mathematical formula with one free variable representing an integer, and suppose we are given access to an oracle which can tell us the truth-values of the statements f(1),...,f(N). The problem is to compute (up to a fixed accuracy) the proportion of statements which are true, with the restriction that we can only make n queries, where n << N. Monte Carlo methods succeed with failure probability exponential in n, regardless of what f is.
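A minimal sketch of what such a Monte Carlo estimator looks like; the oracle here is a stand-in and the function names are mine, not from any particular source:

```python
import random

def estimate_true_fraction(oracle, N, n, seed=None):
    """Estimate the fraction of k in {1, ..., N} with oracle(k) == True,
    using only n random queries.  By Hoeffding's inequality the estimate
    is within eps of the true fraction except with probability at most
    2 * exp(-2 * n * eps**2): the failure probability is exponential in n,
    no matter which formula f the oracle encodes."""
    rng = random.Random(seed)
    hits = sum(oracle(rng.randint(1, N)) for _ in range(n))
    return hits / n

# Toy stand-in for the oracle: "f(k) is true iff k is divisible by 3".
print(estimate_true_fraction(lambda k: k % 3 == 0, N=10**12, n=10_000, seed=0))
```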

Now suppose that f is determined by choosing m bits randomly, where m << n, and interpreting them as a mathematical formula (throwing out the results and trying again if it is impossible to do so). Then if the minimum failure probability is nonzero, it is exponential in m, not n, and therefore bigger than the failure probability for Monte Carlo methods. However, any algorithm which can be encoded in less than ~m bits fails with nonzero probability for diagonalization reasons.

In fact, the diagonalization failure is one of the least important ones; the main point is that you just don't know enough about the environment to justify writing any particular algorithm. Any deterministically chosen sequence has a good chance of being correlated with the environment, just because the environment is math and math things are often correlated. Now, we can call this an "adversarial telepath" if we want, but it seems to occur often enough in real life that this designation hardly seems to "carve reality at its joints".

TL;DR: If even math can be called an "adversarial telepath", the term seems to have lost all its meaning.

Comment author: hairyfigment 02 June 2016 01:21:52AM 0 points [-]

since there is no algorithm to determine whether any given N satisfies the conclusion of the conjecture.

I think you mean, 'determine that it does not satisfy the conclusion'.

Comment author: Dacyn 02 June 2016 05:30:59PM 0 points [-]

I think my original sentence is correct; there is no known algorithm that provably outputs the answer to the question "Does N satisfy the conclusion of the conjecture?" given N as an input. To do this, an algorithm would need to do both of the following: output "Yes" if and only if N satisfies the conclusion, and output "No" if and only if N does not satisfy the conclusion. There are known algorithms that do the first but not the second (unless the twin prime conjecture happens to be true).
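A sketch of an algorithm of the first kind (the names and the naive primality test are mine): it halts with "Yes" exactly when N satisfies the conclusion, but on a counterexample it would simply run forever rather than output "No".

```python
def is_prime(k):
    """Naive trial-division primality test, fine for illustration."""
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

def satisfies_conclusion(N):
    """Search for a prime p > N with p + 2 also prime.  Returns True as soon
    as one is found; if N were a counterexample to the conclusion of the twin
    prime conjecture, this loop would never terminate."""
    p = N + 1
    while True:
        if is_prime(p) and is_prime(p + 2):
            return True
        p += 1

print(satisfies_conclusion(1000))  # True, via the twin primes 1019 and 1021
```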

Comment author: komponisto 17 June 2009 10:32:17AM *  3 points [-]

Some senses of "erroneous" that might be involved here include (this list is not necessarily intended to be exhaustive):

  • Mathematically incorrect -- i.e. the proofs contain actual logical inconsistencies. This was argued by some early skeptics (such as Kronecker) but is basically indefensible ever since the formulation of axiomatic set theory and results such as Gödel's on the consistency of the Axiom of Choice. Such a person would have to actually believe the ZF axioms are inconsistent, and I am aware of no plausible argument for this.

  • Making claims that are epistemologically indefensible, even if possibly true. E.g., maybe there does exist a well-ordering of the reals, but mere mortals are in no position to assert that such a thing exists. Again, axiomatic formalization should have meant the end of this as a plausible stance.

  • Irrelevant or uninteresting as an area of research because of a "lack of correspondence" with "reality" or "the physical world". In order to be consistent, a person subscribing to this view would have to repudiate the whole of pure mathematics as an enterprise. If, as is more common, the person is selectively criticizing certain parts of mathematics, then they are almost certainly suffering from map-territory confusion. Mathematics is not physics; the map is not the territory. It is not ordained or programmed into the universe that positive integers must refer specifically to numbers of elementary particles, or some such, any more than the symbolic conventions of your atlas are programmed into the Earth. Hence one cannot make a leap e.g. from the existence of a finite number of elementary particles to the theoretical adequacy of finitely many numbers. To do so would be to prematurely circumscribe the nature of mathematical models of the physical world. Any criticism of a particular area of mathematics as "unconnected to reality" necessarily has to be made from the standpoint of a particular model of reality. But part (perhaps a large part) of the point of doing pure mathematics (besides the fact that it's fun, of course) is to prepare for the necessity, encountered time and time again in the history of our species, of upgrading -- and thus changing -- our very model. Not just the model itself but the ways in which mathematical ideas are used in the model. This has often happened in ways that (at least at the time) would have seemed very surprising.

For the sake of argument, I will go ahead and ask what sort of nonconstructive entities you think an AI needs to reason about, in order to function properly.

Well, if the AI is doing mathematics, then it needs to reason about the very same entities that human mathematicians reason about.

Maybe that sounds like begging the question, because you could ask why humans themselves need to reason about those entities (which is kind of the whole point here). But in that case I'm not sure what you're getting at by switching from humans to AIs.

Do you perhaps mean to ask something like: "What kind of mathematical entities will be needed in order to formulate the most fundamental physical laws?"

Comment author: Dacyn 01 June 2016 11:15:58PM *  0 points [-]

Why do you think that the axiomatic formulation of ZFC "should have meant an end" to the stance that ZFC makes claims that are epistemologically indefensible? Just because I can formalize a statement does not make that statement true, even if it is consistent. Many people (including me and apparently Eliezer, though I would guess that my views are different from his) do not think that the axioms of ZFC are self-evident truths.

In general, I find the argument for Platonism/the validity of ZFC based on common acceptance to be problematic because I just don't think that most people think about these issues seriously. It is a consensus of convenience and inertia. Also, many mathematicians are not Platonists at all but rather formalists -- and constructivism is closer to formalism than Platonism is.

Comment author: Amanojack 02 May 2011 03:15:23AM *  -2 points [-]

I reject infinity as anything more than "a number that is big enough for its smallness to be negligible for the purpose at hand."

My reason for rejecting infinity in its usual sense is very simple: it doesn't communicate anything. Here you said (about communication) "When you each understand what is in the other's mind, you are done." In order to communicate, there has to be something in your mind in the first place, but don't we all agree infinity can't ever be in your mind? If so, how can it be communicated?

Edit to clarify: I worded that poorly. What I mean to ask is, Don't we all agree that we cannot imagine infinity (other than imagine something like, say, a video that seems to never end, or a line that is way longer than you'd ever seem to need)? If you can imagine it, please just tell me how you do it!

Also, "reject" is too strong a word; I merely await a coherent definition of "infinity" that differs from mine.

Comment author: Dacyn 01 June 2016 11:15:56PM 1 point [-]

From your post it sounds like you in fact do not have a clear picture of infinity in your head. I have a feeling this is true for many people, so let me try to paint one. Throughout this post I'll be using "number" to mean "positive integer".

Suppose that there is a distinction we can draw between certain types of numbers and other types of numbers. For example, we could make a distinction between "primes" and "non-primes". A standard way to communicate the fact that we have drawn this distinction is to say that there is a "set of all primes". This language need not be construed as meaning that all primes together can be coherently thought of as forming a collection (though it often is construed that way, usually pretty carelessly); the key thing is just that the distinction between primes and non-primes is itself meaningful. In the case of primes, the fact that the distinction is meaningful follows from the fact that there is an algorithm to decide whether any given number is prime.

Now for "infinite": A set of numbers is called infinite if for every number N, there exists a number greater than N in the set. For example, Euclid proved that the set of primes is infinite under this definition.

Now this definition is a little restrictive in terms of mathematical practice, since we will often want to talk about sets that contain things other than numbers, but the basic idea is similar in the general case: the semantic function of a set is provided not by the fact that its members "form a collection" (whatever that might mean), but rather by the fact that there is a distinction of some kind (possibly of the kind that can be determined by an algorithm) between things that are in the set and things that are not in the set. In general a set is "infinite" if for every number N, the set contains more than N members (i.e. there are more than N things that satisfy the condition that the set encodes).

So that's "infinity", as used in standard mathematical practice. (Well, there's also a notion of "infinity" in real analysis which essentially is just a placeholder symbol for "a really large number", but when people talk about the philosophical issues behind infinity it is usually about the definition I just gave above, not the one in real analysis, which is not controversial.) Now, why is this at all controversial? Well, note that to define it, I had to talk about the notion of distinctions-in-general, as opposed to any individual distinction. But is it really coherent to talk about a notion of distinctions-in-general? Can it be made mathematically precise? This is really what the philosophical arguments are all about: what kinds of things are allowed to count as distinctions. The constructivists take the point of view that the only things that should be allowed to count as distinctions are those that can be computed by algorithms. There are some bullets to bite if you take this point of view though. For example, the twin prime conjecture states that for every number N, there exists p > N such that both p and p+2 are prime. Presumably this is either true or false, even if nobody can prove it. Moreover, presumably each number N either is or is not a counterexample to the conjecture. But then it would seem that it is possible to draw a distinction between those N which satisfy the conclusion of the conjecture, and those which are counterexamples. Yet this is false according to the constructive point of view, since there is no algorithm to determine whether any given N satisfies the conclusion of the conjecture.

I guess this is probably long enough already given that I'm replying to a five-year-old post... I could say more on this topic if people are interested.

Comment author: SanguineEmpiricist 23 April 2016 09:16:51PM *  0 points [-]

First I've heard of this, super interesting. Hmm. So what is the correct way to highlight the differences while still maintaining the historical angle? Continue w/ Riemannian geometry? Or just say what you have said, Lorentzian.

Comment author: Dacyn 04 May 2016 11:13:45PM 0 points [-]

Special relativity is good enough for most purposes, which means that (a time slice of) the real universe is very nearly Euclidean. So if you are going to explain the geometry of the universe to someone, you might as well just say "very nearly Euclidean, except near objects with very high gravity such as stars and black holes".

I don't think it's helpful to compare with Euclid's postulates, they reflect a very different way of thinking about geometry than modern differential geometry.

Comment author: Annoyance 14 March 2009 05:00:12PM 0 points [-]

"Would you say that axioms in math are meaningless?"

They distinguish one hypothetical world from another. Furthermore, some of them can be empirically tested. At present, Euclidean geometry seems to be false and Riemannian to be true, and the only difference is a single axiom.

Comment author: Dacyn 23 April 2016 07:52:58PM 1 point [-]

Riemannian geometry is not an axiomatic geometry in the same way that Euclidean geometry is, so it is not true that "the only difference is a single axiom." I think you are thinking of hyperbolic geometry. In any case, the geometry of spacetime according to the theory of general relativity is not any of these geometries, but it is instead a Lorentzian geometry. (I say "a" because the words "Riemannian" and "Lorentzian" both refer to classes of geometries rather than a single geometry -- for example, Euclidean geometry and hyperbolic geometry are both examples of Riemannian geometries.)
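For concreteness, the standard flat-space examples (textbook formulas, not anything specific to this exchange): a Riemannian metric is positive definite, while a Lorentzian metric flips the sign of the time direction,

$$ds^2_{\text{Euclidean}} = dx^2 + dy^2 + dz^2, \qquad ds^2_{\text{Minkowski}} = -c^2\,dt^2 + dx^2 + dy^2 + dz^2,$$

and general relativity describes spacetime with curved metrics of the second, signature $(-,+,+,+)$, kind.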
