All of paragonal's Comments + Replies

 Most people will do very bad things, including mob violence, if they are peer-pressured enough.

Shouldn't this be weighed against the good things people do if they are peer-pressured? I think there's value in not conforming, but if all cultures have peer pressure, there needs to be a careful analysis of the pros and cons instead of simply striving for immunity from it.

 I'm not sure anybody "just" innately lacks the machinery to be peer-pressured.

My first thought here isn't autists but psychopaths.

Specifically, I would love to see a better argument for it being ahead of Helion (if it is actually ahead, which would be a surprise and a major update for me).

I agree with Jeffrey Heninger's response to your comment. Here is a (somewhat polemical) video which lays out the challenges of Helion's unusual D-He3 approach compared to the standard D-T approach which CFS follows. It illustrates some of Jeffrey's points and makes other claims, e.g. that Helion's current operational proof-of-concept reactor Trenta is far from adequate for scaling to a productive reactor when ... (read more)

For example, the way scientific experiments work, your p-value either passes the (arbitrary) threshold, or it doesn't, so you either reject the null, or fail to reject the null, a binary outcome.

Ritualistic hypothesis testing with significance thresholds is mostly used in the social sciences, psychology and medicine, and not so much in the hard sciences (although arbitrary thresholds like 5 sigma are used in physics to claim the discovery of new elementary particles, such thresholds rarely show up in physics papers). Since it requires deliberate effort to get into the ... (read more)
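
To make the binary nature of this ritual concrete, here is a minimal sketch (standard SciPy calls on made-up data, my own illustration) of how such a test reduces to a single threshold comparison, and how the 5-sigma convention translates into a p-value:

```python
# Minimal sketch: a significance test reduces to one threshold comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=0.0, scale=1.0, size=50)     # made-up data
treatment = rng.normal(loc=0.3, scale=1.0, size=50)

t_stat, p_value = stats.ttest_ind(treatment, control)
alpha = 0.05  # the (arbitrary) threshold
print("reject null" if p_value < alpha else "fail to reject null")

# The particle-physics convention: a one-sided 5-sigma threshold
# corresponds to a p-value of roughly 3e-7.
print(stats.norm.sf(5))
```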

Unfortunately, what I would call the bailey is quite common on Lesswrong. It doesn't take much digging to find quotes like this in the Sequences and beyond:

This is a shocking notion; it implies that all our twins in the other worlds— all the different versions of ourselves that are constantly split off, [...]

Thanks, I see we already had a similar argument in the past.

I think there's a bit of motte and bailey going on with the MWI. The controversy and philosophical questions are about multiple branches / worlds / versions of persons being ontological units. When we try to make things rigorous, only the wave function of the universe remains as a coherent ontological concept. But if we don't have a clear way from the latter to the former, we can't really say clear things about the parts which are philosophically interesting.

4JBlack
So much the worse for the controversy and philosophical questions. If anything, the name is the problem. People get wrong ideas from it, and so I prefer to talk in terms of decoherence rather than "many worlds". There's only one world, it's just more complex than it appears and decoherence gives part of an explanation for why it appears simpler than it is.

I’m reluctant to engage with extraordinarily contrived scenarios in which magical 2nd-law-of-thermodynamics-violating contraptions cause “branches” to interfere.

Agreed. Roland Omnes tries to calculate how big the measurement apparatus of Wigner needs to be in order to measure his friend and gets 10^(10^18) degrees of freedom ("The Interpretation of Quantum Mechanics", section 7.8).

But if we are going to engage with those scenarios anyway, then we should never have referred to them as “branches” in the first place, ...

Well, that's one of the p... (read more)

4Steven Byrnes
I don’t think it’s a problem—see discussion here & maybe also this one.
Answer by paragonal*6-5

I'm not an expert but I would say that I have a decent understanding of how things work on a technical level. Since you are asking very general questions, I'm going to give quite general thoughts.

(1) The central innovation of the blockchain is the proof-of-work mechanism. It is an ingenious idea which tackles a specific problem (finding consensus between possibly adversarial parties in a global setting without an external source of trust).

(2) Since Bitcoin has made the blockchain popular, everybody wants to have the specific problem it allegedly solves ... (read more)
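
As a toy illustration of the proof-of-work mechanism from point (1) (a minimal sketch of the hash-puzzle idea, not Bitcoin's actual implementation): a block is accepted only if its hash meets a difficulty target, so producing it requires brute-force work while checking it is cheap.

```python
# Toy proof-of-work: find a nonce so that the block's hash starts with a
# given number of zero hex digits. Finding it is costly, verifying is cheap.
import hashlib

def proof_of_work(block_data: str, difficulty: int = 4) -> int:
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify(block_data: str, nonce: int, difficulty: int = 4) -> bool:
    digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = proof_of_work("some batch of transactions")
print(nonce, verify("some batch of transactions", nonce))
```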

7ChristianKl
In what way do you believe Polkadot and Solana have problems because of their setups? Why don't you see those systems working as evidence that other systems are viable?
3jmh
Thanks, you've given me some things to think about.

Rick Beato has a video about people losing their absolute pitch with age (it seems to happen to everyone eventually). There is a lot of anecdata from people who have experienced this, both in the video and in the comments.

Some report that after experiencing a shift in their absolute pitch, all music sounds wrong. Some of them adapted somehow (it's unclear to me how much development of relative abilities was involved) and others report not having noticed that their absolute pitch has shifted. Some report that only after they've lost their absolute pitch compl... (read more)

I am quite skeptical that hearing like a person with absolute pitch can be learned because it seems to be somewhat incompatible with relative pitch.

People with absolute pitch report that if a piece of music is played with a slightly lower or higher pitch, it sounds out of tune. If this feeling stays throughout the piece, this means that the person doesn't hear relatively. So even if a person with relative pitch learned to name played notes absolutely, I don't think the hearing experience would be the same.

So I think you can't have both absolute pitch and relative pitch in the full sense. (I do think that you can improve at naming played notes, singing notes correctly without a reference note from outside your body, etc.)
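
A small numerical illustration of the asymmetry I have in mind (my own toy example, with approximate equal-temperament frequencies): transposing a piece by a constant factor changes every absolute frequency, which is exactly what an absolute-pitch listener notices, but leaves every interval ratio untouched, which is all a purely relative listener has access to.

```python
import math

# A C major triad in equal temperament (frequencies in Hz), then the same
# triad transposed up by 30 cents.
c4, e4, g4 = 261.63, 329.63, 392.00
shift = 2 ** (30 / 1200)          # 30 cents as a frequency ratio

original = [c4, e4, g4]
shifted = [f * shift for f in original]

def cents(f1, f2):
    return 1200 * math.log2(f2 / f1)

# Absolute frequencies change (what an absolute-pitch listener notices) ...
print([round(f, 2) for f in shifted])
# ... but the intervals between the notes are identical (all a relative
# listener has access to).
print([round(cents(original[i], original[i + 1]), 1) for i in range(2)])
print([round(cents(shifted[i], shifted[i + 1]), 1) for i in range(2)])
```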

1benjamin.j.campbell
I gave this an upvote because it is directly counter to my current belief about how relative/absolute pitch work and interact with each other. I agree that if someone's internalised absolute pitch can constantly identify out of tune notes, even after minutes of repetition, this is a strong argument against my position. On the other hand, maybe they do produce one internal reference note of set frequency, and when comparing known intervals against this, it returns "out of tune" every time. I can see either story being true, but I would like to hunt down some more information on which of these models is more accurate

Thanks for this pointer. I might check it out when their website is up again.

Many characteristics have been proposed as significant, for example:

  • It's better if fingers have less traveling to do.
  • It's better if consecutive taps are done with different fingers or, better yet, different hands.
  • It's better if common keys are near the fingers' natural resting places.
  • It's better to avoid stretching and overusing the pinky finger, which is the weakest of the five.

 

Just an anecdotal experience: I, too, have wrist problems. I have tried touch typing with 10 fingers a couple of times and my problems got worse each time. My experience agre... (read more)

2Erich_Grunewald
That's interesting, because, as you say, I think most layouts assume this prioritisation: consecutive taps with different hands > with different fingers on same hand > with same finger. If the middle one there is really bad for you, and if you know some programming, I think you could run the Carpalx code, assign extra (negative?) weight to the "hand runs" parameter and see if it can generate a layout that suits your needs. (I haven't tried this myself.)

You might also be interested in "General Bayesian Theories and the Emergence of the Exclusivity Principle" by Chiribella et al. which claims that quantum theory is the most general theory which satisfies Bayesian consistency conditions.

By now, there are actually quite a few attempts to reconstruct quantum theory from more "reasonable" axioms besides Hardy's. You can track the references in the paper above to find some more of them.

As you learn more about most systems, the likelihood ratio should likely go down for each additional point of evidence.

I'd be interested to see the assumptions which go into this. As Stuart has pointed out, it's got to do with how correlated the evidence is. And for fat-tailed distributions we probably should expect to be surprised at a constant rate.

Note you can still get massive updates if B' is pretty independent of B. So if someone brings in camera footage of the crime, that has no connection with the previous witness's trustworthiness, and can throw the odds strongly in one direction or another (in equation, independence means that P(B'|H,B)/P(B'|¬H,B) = P(B'|H)/P(B'|¬H)).

Thanks, I think this is the crucial point for me. I was implicitly operating under the assumption that the evidence is uncorrelated which is of course not warranted in most cases.

So if we have already updated on a lot of evidence... (read more)

3Stuart_Armstrong
Yep, that seems to be right. One minor caveat: instead of your phrasing, I'd say something like: "Past evidence affects how we interpret future evidence, sometimes weakening its impact." Thinking of the untrustworthy witness example, I wouldn't say that "the witness's testimony is already included in the fact that they are untrustworthy" (="part of B' already included in B"), but I would say "the fact they are untrustworthy affects how we interpret their testimony" (="B affects how we interpret B' "). But that's a minor caveat.

From the article:

At this point, I think I am somewhat below Nate Silver’s 60% odds that the virus escaped from the lab, and put myself at about 40%, but I haven’t looked carefully and this probability is weakly held.

Quite off-topic: what does it mean from a Bayesian perspective to hold a probability weakly vs. confidently? Likelihood ratios for updating are independent of the prior, so a weakly-held probability should update exactly like a confidently-held one. Is there a way to quantify the "strongness" with which one holds a probability?

9Stuart_Armstrong
Imagine you have a coin of unknown bias (taken to be uniform on [0,1]). If you flip this coin and get a heads (an event of initial probability 1/2), you update the prior strongly and your probability of heads on the next flip is 2/3. Now suppose instead you have already flipped the coin two million times, and got a million heads and a million tails. The probability of heads on the next flip is still 1/2; however, you will barely update on that, and the probability of another heads after that is barely above 1/2[1]. In the first case you have no evidence either way, in the second case you have strong evidence either way, and so things update less.

In terms of odds ratios, let H be your hypothesis (with negation ¬H), B your past observation, and B' your future observation. Then O(H|B',B) = P(B'|H,B) / P(B'|¬H,B) * O(H|B). The Bayes factor is P(B'|H,B) / P(B'|¬H,B). If you've made a lot of observations in B, then this odds ratio might be close to 1. It's not the same thing as P(B'|H) / P(B'|¬H), which might be very different from 1.

Why? Because P(B'|H,B) / P(B'|¬H,B) measures how likely B' is, given H and B, versus how likely it is, given ¬H and B. The B might completely screen off the effect of H versus ¬H. In a court case, for example, if you've already established a witness is untrustworthy (B), then their claims (B') have little weight, and are pretty independent of guilt or not (H vs ¬H) - even if the claims would have weight if you didn't know their trustworthiness.

Note you can still get massive updates if B' is pretty independent of B. So if someone brings in camera footage of the crime, that has no connection with the previous witness's trustworthiness, and can throw the odds strongly in one direction or another (in equation, independence means that P(B'|H,B) / P(B'|¬H,B) = P(B'|H) / P(B'|¬H)).

So: This means that they expect that it's quite likely that there is evidence out there that could change their mind (which makes sense, as they haven't looke
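
Stuart's coin example can be checked with a few lines (a minimal sketch using Laplace's rule of succession, i.e. the posterior predictive under a uniform prior on the bias):

```python
# Posterior predictive probability of heads under a uniform prior on the
# bias: after h heads and t tails it is (h + 1) / (h + t + 2).
def p_next_heads(h: int, t: int) -> float:
    return (h + 1) / (h + t + 2)

print(p_next_heads(0, 0))                   # fresh coin: 0.5
print(p_next_heads(1, 0))                   # one head: big update to 2/3
print(p_next_heads(1_000_000, 1_000_000))   # a million of each: still 0.5
print(p_next_heads(1_000_001, 1_000_000))   # one more head: barely above 0.5
```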
3habryka
This is kind of technically true, but not in a practical sense. As you learn more about most systems, the likelihood ratio should likely go down for each additional point of evidence. The likelihood ratio for an event X is after all P(X|E) : P(¬X|E), where the E refers to all the previous observations you've made that are now integrated in your prior. Usually when referring to "updating on E_n" we use the likelihood ratio P(E_n|E_1,E_2,...,E_{n-1}) : P(¬E_n|E_1,E_2,...,E_{n-1}), which kind of makes it clear that this will depend on the order of the different E_i.

Thanks for your answer. Part of the problem might have been that I wasn't that proficient with vim. When I reconfigured the clashing key bindings of the IDE I sometimes unknowingly overwrote a vim command which turned out to be useful later on. So I had to reconfigure numerous times which annoyed me so much that I abandoned the approach at the time.

A question for the people who use vim keybindings in IDEs: how do you deal with keybindings for IDE tasks which are not part of vim (like using the debugger, refactoring, code completion, etc.)? The last time I tried to use vim bindings in an IDE there were quite a few overlaps with these, so I found myself coming up with compromise systems which didn't work that well because they weren't coherent.

4gilch
At least for PyCharm, this was somewhat easier on macOS than on Windows, since you have control, option, command and shift, instead of just Ctrl, Alt, and Shift (well, and the Win key, but the OS reserves too many bindings there). On macOS, the IDE uses command for most things, while Vim usually uses control when it needs a modifier at all. On Windows they both want to use Ctrl, so it's more difficult to configure all the bindings.
3SatvikBeri
Some IDEs are just very accommodating about this, e.g. PyCharm. So that's great. Some of them aren't, like VS Code. For those, I just manually reconfigure the clashing key bindings. It's annoying, but it only takes ~15 minutes total.

At least for me, I think the question of whether I'm buying too much for myself in a situation of limited supplies was more important for the decision than the fear of being perceived as weird. This depends of course on how limited the supplies actually were at the time of buying, but I think it is generally important to distinguish between the shame of possibly profiting at the expense of others and the "pure" weirdness of the action.

7AnnaSalamon
Totally. I was not AFAICT worried at the time about limited supply buying, or not very worried; the Safeway we were getting things from did not seem out of much and I hadn't heard people complain about shortages/buying yet as far as I can recall.

We have reason to believe that peptide vaccines will work particularly well here, because we're targeting a respiratory infection, and the peptide vaccine delivery mechanism targets respiratory tissue instead of blood.

Just a minor point: by delivery mechanism, are you talking about inserting the peptides through the nose à la RadVac? If I understand correctly, Werner Stöcker injects his peptide-based vaccine.

You could also turn around this question. If you find it somewhat plausible that self-adjoint operators represent physical quantities, eigenvalues represent measurement outcomes and eigenvectors represent states associated with these outcomes (per the arguments I have given in my other post), one could picture a situation where systems hop from eigenvector to eigenvector through time. From this point of view, continuous evolution between states is the strange thing.

The paper by Hardy I cited in another answer to you tries to make QM as similar to a cla... (read more)

1mwacksen
Well yeah sure. But continuity is a much easier pill to swallow than "continuity only when you aren't looking".

There are remaining open questions concerning quantum mechanics, certainly, but I don't really see any remaining open questions concerning the Everett interpretation.

 

“Valid” is a strong word, but other reasons I've seen include classical prejudice, historical prejudice, dogmatic falsificationism, etc.

Thanks for answering. I didn't find a better word but I think you understood me right.

So you basically think that the case is settled. I don't agree with this opinion.

I'm not convinced of the validity of the derivations of the Born rule (see IV.C.2 of th... (read more)

I think it makes more sense to think of MWI as "first many, then even more many," at which point questions of "when does the split happen?" feel less interesting, because the original state is no longer as special. [...] If time isn't quantized, then this has to be spread across continuous space, and so thinking of there being a countable number of worlds is right out.

What I called the "nice ontology" isn't so much about the number of worlds or even countability but about whether the worlds are well-defined. The MWI gives up a unique reality for things. Th... (read more)

I agree that the question "how many worlds are there" doesn't have a well-defined answer in the MWI. I disagree that it is a meaningless question.

From the bird's-eye view, the ontology of the MWI seems pretty clear: the universal wavefunction is happily evolving (or is it?). From the frog's-eye view, the ontology is less clear. The usual account of an experiment goes like this:

  • The system and the observer come together and interact
  • This leads to entanglement and decoherence in a certain basis
  • In the final state, we have a branch for each measurement outcome.
... (read more)
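
As a toy version of this account (my own minimal sketch with standard numpy, not something from the original discussion): a qubit in superposition interacts with an "observer" qubit, and tracing out the observer leaves a diagonal reduced density matrix, one entry per branch, which is the decoherence step in the story above.

```python
# Toy decoherence: a system qubit entangles with an "observer" qubit;
# tracing out the observer leaves a diagonal (branch-per-outcome) mixture.
import numpy as np

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)      # arbitrary amplitudes

# Before the interaction: the system alone still has off-diagonal terms.
system = alpha * ket0 + beta * ket1
print(np.round(np.outer(system, system.conj()), 3))

# After a measurement-like interaction: alpha|0>|A0> + beta|1>|A1>
joint = alpha * np.kron(ket0, ket0) + beta * np.kron(ket1, ket1)
rho_joint = np.outer(joint, joint.conj())

# Partial trace over the observer: the off-diagonal terms are gone.
rho_system = np.trace(rho_joint.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(np.round(rho_system, 3))
```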
6Vaniver
The ontology doesn't feel muddled to me, although it does feel... not very quantum? Like a thing that seems to be happening with collapse postulates is that it takes seriously the "everything should be quantized" approach, and so insists on ending up with one world (or discrete numbers of worlds). MWI instead seems to think that wavefunctions, while having quantized bases, are themselves complex-valued objects, and so there doesn't need to be a discrete and transitive sense of whether two things are 'in the same branch', and instead it seems fine to have a continuous level of coherence between things (which, at the macro-scale, ends up looking like being in a 'definite branch').

[I don't think I've ever seen collapse described as "motivated by everything being quantum" instead of "motivated by thinking that only what you can see exists", and so quite plausibly this will fall apart or I'll end up thinking it's silly or it's already been dismissed for whatever reason. But somehow this does seem like a lens where collapse is doing the right sort of extrapolating principles where MWI is just blindly doing what made sense elsewhere. On net, I still think wavefunctions are continuous, and so it makes sense for worlds to be continuous too.]

Like, I think it makes more sense to think of MWI as "first many, then even more many," at which point questions of "when does the split happen?" feel less interesting, because the original state is no longer as special. When I think of the MWI story of radioactive decay, for example, at every timestep you get two worlds, one where the particle decayed at that moment and one where it held together, and as far as we can tell if time is quantized, it must have very short steps, and so this is very quickly a very large number of worlds. If time isn't quantized, then this has to be spread across continuous space, and so thinking of there being a countable number of worlds is right out.

There isn't a sharp line for when the cross-terms are negligible enough to properly use the word "branch", but there are exponential effects such that it's very clearly appropriate in the real-world cases of interest.

I agree that it isn't a problem for practical purposes, but if we are talking about a fundamental theory of reality, shouldn't questions like "How many worlds are there?" have unambiguous answers?

9Steven Byrnes
No ... "how many worlds are there" is not a question with a well-defined answer in Everett's theory. It's like "How many grains of sand make up a heap?" ... just a meaningless question. The notion that there is a specific, well-defined number of worlds is sometimes implied by the language used in simplifications / popularizations of the theory, but it's not part of the actual theory, and really it can't possibly be, I don't think.

Right, but (before reading your post) I had assumed that the eigenvectors somehow "popped out" of the Everett interpretation.

This is a bit of a tangent but decoherence isn't exclusive to the Everett interpretation. Decoherence is itself a measurable physical process independent of the interpretation one favors. So explanations which rely on decoherence are part of all interpretations.

I mean in the setup you describe there isn't any reason why we can't call the "state space" the observer space and the observer "the system being studied" and then write down

... (read more)
1mwacksen
Isn't the whole point of the Everett interpretation that there is no decoherence? We have a Hilbert space for the system, and a Hilbert space for the observer, and a unitary evolution on the tensor product space of the system. With these postulates (and a few more), we can start with a pure state and end up with some mixed tensor in the product space, which we then interpret as being "multiple observers", right? I mean this is how I read your paper. We are surely not on the same page regarding decoherence, as I know almost nothing about it :) The arxiv-link looks interesting, I should have a look at it.

This

Ok, now comes the trick: we assume that observation doesn't change the system

and this

I think the basic point is that if you start by distinguishing your eigenfunctions, then you naturally get out distinguished eigenfunctions.

doesn't sound correct to me.

The basis in which the diagonalization happens isn't put in at the beginning. It is determined by the nature of the interaction between the system and its environment. See "environment-induced superselection", or "einselection" for short.

1mwacksen
Ok, but OP of the post above starts with "Suppose we have a system S with eigenfunctions {φ_i}", so I don't see why (or how) they should depend on the observer. I'm not claiming these are just arbitrary functions. The point is that requiring the time-evolution on pure states of the form ψ ⊗ φ_i to map to pure states of the same kind is an arbitrary choice that distinguishes the eigenfunctions. Why can't we choose any other orthonormal basis at this point, say some ONB (w_i)_i, and require that w_i ⊗ ψ ↦ E_S w_i ⊗ ψ_i, where ψ_i is defined so that this makes sense and is unitary? (I guess this is what you mean by "diagonalization", but I dislike the term because if we choose a non-eigenfunction orthonormal basis the construction still "works", the representation just won't be diagonal in the first component).

I mean I could accept that the Schrödinger equation gives the evolution of the wave-function, but why care about its eigenfunctions so much?

I'm not sure if this will be satisfying to you but I like to think about it like this:

  • Experiments show that the order of quantum measurements matters. The mathematical representation of the physical quantities needs to take this into account. One simple kind of non-commutative object is the matrix.
  • If physical quantities are represented by matrices, the possible measurement outcomes need to be encoded in there somehow.
... (read more)
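
A minimal numerical illustration of the first two bullet points (using the standard Pauli matrices, nothing specific to this discussion):

```python
# Spin measurements along x and z represented as 2x2 matrices (Pauli matrices).
import numpy as np

sigma_x = np.array([[0, 1], [1, 0]])
sigma_z = np.array([[1, 0], [0, -1]])

# The order matters: the two matrices do not commute.
print(sigma_x @ sigma_z)
print(sigma_z @ sigma_x)

# The possible measurement outcomes are encoded as the eigenvalues (+1, -1),
# with the eigenvectors as the states associated with those outcomes.
eigenvalues, eigenvectors = np.linalg.eigh(sigma_z)
print(eigenvalues)
```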
1mwacksen
Right, but (before reading your post) I had assumed that the eigenvectors somehow "popped out" of the Everett interpretation. But it seems like they are built in from the start. Which is fine, it's just deeply weird. So it's kind of hard to say whether the Everett interpretation is more elegant. I mean, in the Copenhagen interpretation you say "measuring can only yield eigenvectors", and in the Everett interpretation you say "measuring can only yield eigenvectors and all measurements are done so the whole thing is still unitary". But in the end even the Everett interpretation distinguishes "observers" somehow; I mean, in the setup you describe there isn't any reason why we can't call the "state space" the observer space and the observer "the system being studied" and then write down the same system from the other point of view...

The "symmetric matrices <-> real eigenvectors" correspondence is of course important; this is essentially just the spectral theorem, which tells us that real linear combinations of orthogonal projections are symmetric matrices (and vice versa). Nowadays matrices are seen as "simple non-commutative objects". I'm not sure if this was true when QM was being developed.

But then again, I'm not really sure how linear QM "really" is. I mean, all of this takes place on vectors with norm 1 (and the results are invariant under change of phase), and once we quotient out the norm, most of the linear structure is gone. I'm not sure what the correct way to think about the phase is. On one hand, it seems like a kind of "fake" unobservable variable and it should be permissible to quotient it out somehow. On the other hand, the complex-ness of the Schrödinger equation seems really important. But is this complexness a red herring? What goes wrong if we just take our "base states" as discrete objects and try to model QM as the evolution of probability distributions over ordered pairs of these states?

Do you see any technical or conceptual challenges which the MWI has yet to address or do you think it is a well-defined interpretation with no open questions?

What's your model for why people are not satisfied with the MWI? The obvious ones are 1) dislike for a many worlds ontology and 2) ignorance of the arguments. Do you think there are other valid reasons?

2evhub
There are remaining open questions concerning quantum mechanics, certainly, but I don't really see any remaining open questions concerning the Everett interpretation.

"Valid" is a strong word, but other reasons I've seen include classical prejudice, historical prejudice, dogmatic falsificationism, etc.

Honestly, though, as I mention in the paper, my sense is that most big name physicists that you might have heard of (Hawking, Feynman, Gell-Mann, etc.) have expressed support for Everett, so it's really only more of a problem among your average physicist that probably just doesn't pay that much attention to interpretations of quantum mechanics.
Answer by paragonal*160

Like you, people aren't surprised by the outcome of your experiment. The surprising thing happens only if you consider more complicated situations. The easiest situations where surprising things happen are these two:

1) Measure the spins of the two entangled particles in three suitably different directions. From the correlations of the observed outcomes you can calculate a number known as the CHSH correlator S. This number is larger than what any model with locally predetermined individual outcomes permits. An accessible discussion of this is... (read more)
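
To give a rough sense of the numbers (a sketch of the standard textbook setup, not of any particular experiment): for the singlet state, quantum mechanics predicts correlations E(a, b) = -cos(a - b) between measurement directions a and b, and with the usual choice of angles the CHSH correlator reaches 2√2, while no locally predetermined assignment of outcomes can exceed |S| = 2.

```python
# CHSH correlator S for the singlet state with the standard angle choice,
# versus the best any locally predetermined assignment of outcomes can do.
import numpy as np

def E(a, b):
    return -np.cos(a - b)        # quantum correlation for the singlet state

a, a_prime = 0.0, np.pi / 2
b, b_prime = np.pi / 4, 3 * np.pi / 4
S = E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime)
print(abs(S), 2 * np.sqrt(2))    # ~2.828: the quantum value reaches 2*sqrt(2)

# Brute force over predetermined outcomes (+/-1 for each setting): |S| <= 2.
best = max(
    abs(A0 * B0 - A0 * B1 + A1 * B0 + A1 * B1)
    for A0 in (-1, 1) for A1 in (-1, 1)
    for B0 in (-1, 1) for B1 in (-1, 1)
)
print(best)                      # 2
```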

No. The property which you are describing is not "mixedness" (technical term: "purity"). That the state vector in question can't be written as a tensor product of state vectors makes it an *entangled* state.

Mixed states are states which cannot be represented by *any* state vector. You need a density matrix in order to write them down.
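
A small numerical illustration of the distinction (my own sketch with standard numpy): the Bell state is entangled but still pure, with Tr(ρ²) = 1, whereas an equal classical mixture of |00⟩ and |11⟩ can only be written as a density matrix and has Tr(ρ²) < 1.

```python
# Entangled-but-pure versus mixed, distinguished by the purity Tr(rho^2).
import numpy as np

ket00 = np.array([1, 0, 0, 0], dtype=complex)
ket11 = np.array([0, 0, 0, 1], dtype=complex)

# Bell state: entangled (no tensor-product decomposition), but it is still
# described by a single state vector, so it is pure.
bell = (ket00 + ket11) / np.sqrt(2)
rho_pure = np.outer(bell, bell.conj())

# A 50/50 classical mixture of |00> and |11>: no state vector represents it,
# only a density matrix.
rho_mixed = 0.5 * np.outer(ket00, ket00.conj()) + 0.5 * np.outer(ket11, ket11.conj())

def purity(rho):
    return np.trace(rho @ rho).real

print(purity(rho_pure))    # 1.0 -> pure
print(purity(rho_mixed))   # 0.5 -> mixed
```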

Smolin's book has inspired me to begin working on a theory of quantum gravity. I'll need to learn new things like quantum field theory.

If you don't know Quantum Field Theory, I don't see how you can possibly understand why General Relativity and Quantum Theory are difficult to reconcile. If so, how are you able to work on the solution to a problem you don't understand?

In Smolin's view, the scientific establishment is good at making small iterations to existing theories and bad at creating radically new theories.

I agree with this.

It's therefore not implausible that the solution to quantum gravity could come from a decade of solitary amateur work by someone totally outside the scientific establishment.

To me, this sounds very implausible. Although the scientific establishment isn't geared towards creating radically new theories, I think it is even harder to create such ideas from the outside. I agree that... (read more)

I think so, too, but I don't know it (Eliezer's Sequence on QM is still on my reading list). Given the importance people around here put on Bayes' theorem, I find it quite surprising that the idea of a quantum generalization (which is what QBism is about) isn't discussed here apart from a handful of isolated comments. Two notable papers in this direction are

https://arxiv.org/abs/quant-ph/0106133

https://arxiv.org/abs/0906.2187

-7TAG
Einstein was a realist who was upset that the only interpretation available to him was anti-realist. Saying that he took the wavefunction as object of knowledge is technically true, ie, false.

I agree that my phrasing was a bit misleading here. Reading it again, it sounds like Einstein wasn't a realist, which of course is false. For him, QM was a purely statistical theory which needed to be supplemented by a more fundamental realistic theory (a view which has been proven to be untenable only in 2012 by Pusey, Barrett and Rudolph).

Thanks for conceding
... (read more)

I don't think that the QM example is like the others. Explaining this requires a bit of detail.

From section V.:

My understanding of the multiverse debate is that it works the same way. Scientists observe the behavior of particles, and find that a multiverse explains that behavior more simply and elegantly than not-a-multiverse.

That's not an accurate description of the state of affairs.

In order to calculate correct predictions for experiments, you have to use the probabilistic Born rule (and the collapse postulate for sequential measurements). T... (read more)

1TAG
There's no doubt a story as to why QBism didn't become the official LessWrong position.
5Douglas_Knight
Einstein was a realist who was upset that the only interpretation available to him was anti-realist. Saying that he took the wavefunction as object of knowledge is technically true, ie, false. Thanks for conceding that the Copenhagen interpretation has meant many things. Do you notice how many people deny that? It worries me.

Does the book talk about schizophrenia? I'm a bit skeptical that coherence therapy and IFS can be used to heal it but I'm quite interested in hearing your thoughts about schizophrenia in relation to subagent models.

4Gordon Seidoh Worley
My expectation is that it wouldn't work. My model of psychotic disorders suggests that they are primarily caused by sensory processing issues, driven by an overly strong ontology that causes the psychotic brain's model of the world to become uncorrelated with direct experience. Psychotic disorders therefore need special treatment to deal with the unique problems they create; conventional therapy techniques never get a chance to start working before they have already been warped into providing evidence that further confirms psychotic beliefs and behaviors as adaptive.
6Kaj_Sotala
Schizophrenia is not listed in the book's example list of conditions that Coherence Therapy might work for; there is a case study of a woman who hears hallucinatory voices, though the report states that "She did not fit the typical pattern of schizophrenia, which was the diagnosis she had been given". The general impression I get is that the writer treats them as a psychotic symptom related to her depression rather than her being schizophrenic in general. I don't feel like I know enough about schizophrenia to put it in a subagent context.