All of JessRiedel's Comments + Replies

No, vacuum decay generally expands at sub-light speed.

2Lucius Bushnaq
How sub-light? I was mostly just guessing here, but if it’s below like 0.95c I’d be surprised. 

Vacuum decay is fast but not instant, and there will almost certainly be branches where it maims you and then reverses. Likewise, you can make suicide machines very reliable and fast. It's unreasonable to think any of these mechanical details matter.

2Lucius Bushnaq
It expands at light speed. That's fast enough that no computational processing can possibly occur before we're dead. Sure there's branches where it maims us and then stops, but these are incredibly subdominant compared to branches where the tunneling doesn't happen. Yes, you can make suicide machines very reliable and fast. I claim that whether your proposed suicide machine actually is reliable does in fact matter for determining whether you are likely to find yourself maimed. Making suicide machines that are synchronised earth-wide seems very difficult with current technology.

This work was co-authored by Jordan Stone, Darryl Wright, and Youssef Saleh, whose names appear on the EA Forum post but not on this cross post to LW.
 

(Self-promotion warning.) Alexander Gietelink Oldenziel pointed me toward this post after hearing me describe my physics research and noticing some potential similarities, especially with the Redundant Information Hypothesis.  If you'll forgive me, I'd like to point to a few ideas in my field (many not associated with me!) that might be useful. Sorry in advance if these connections end up being too tenuous.

In short, I work on mathematically formalizing the intuitive idea of wavefunction branches, and a big part of my approach is based on finding varia... (read more)

2Dalcy
This idea sounds very similar to this—it definitely seems extendable beyond the context of physics:
3Alexander Gietelink Oldenziel
Curious if @johnswentworth has any takes on this.
5Erik Jenner
Thanks for that overview and the references! On hydrodynamic variables/predictability: I (like probably many others before me) rediscovered what sounds like a similar basic idea in a slightly different context, and my sense is that this is somewhat different from what John has in mind, though I'd guess there are connections. See here for some vague musings. When I talked to John about this, I think he said he's deliberately doing something different from the predictability-definition (though I might have misunderstood). He's definitely aware of similar ideas in a causality context, though it sounds like the physics version might contain additional ideas

Further, assume that  mediates between  and  (third diagram below).

I can't tell if X is supposed to be another variable, distinct from X_1 and X_2, or if it's supposed to be X=(X_1,X_2), or what. EDIT: From reading further it looks like X=(X_1,X_2). This should be clarified where the variables are first introduced. Just to make it clear that this is not obvious even just within the field of Bayes nets, I open up Pearl's "Causality" to page 17 and see "In Figure 1.2, X={X_2} and Y={X_3} are d-separated by Z={X_1}", i.e. X is... (read more)

3johnswentworth
Edited, thanks.

Other examples:

  • “Career politician” is something of a slur. It seems widely accepted (though maybe you dispute?) that folks who specialize in politics certainly become better at winning politics (“more effective”) but that also this selects for politicians who are less honest or otherwise not well aligned with their constituents.

  • Tech startups still led by their technical CEO are somehow better than those where they have been replaced with a “career CEO”. Obviously there are selection effects, but the career CEOs are generally believed to be more short

... (read more)

Note that I'm specifically not referring to the elements of  as "actions" or "outputs"; rather, the elements of  are possible ways the agent can choose to be.

I don't know what distinction is being drawn here.  You probably need an example to illustrate.

Once you eliminate the requirement that the manager be a practicing scientist, the roles will become filled with people who like managing, and are good at politics, rather than doing science. I’m surprised this is controversial. There is a reason the chair of academic departments is almost always a rotating prof in the department, rather than a permanent administrator. (Note: “was once a professor” is not considered sufficient to prevent this. Rather, profs understand that serving as chair for a couple years before rotating back into research is an unpl... (read more)

7JessRiedel
Other examples: * “Career politician” is something of a slur. It seems widely accepted (though maybe you dispute?) that folks who specialize in politics certainly become better at winning politics (“more effective”) but that also this selects for politicians who are less honest or otherwise not well aligned with their constituents. * Tech startups still led by their technical CEO are somehow better than those where they have been replaced with a “career CEO”. Obviously there are selection effects, but the career CEOs are generally believed to be more short-term- and power-focused. People have tried to fix these problems by putting constraints on managers (either through norms/stigmas about “non-technical” managers or explicit requirements that managers must, e.g., have a PhD). And probably these have helped some (although they tend to get Goodharted, e.g., people who get MDs in order to run medical companies without any desire to practice medicine). And certainly there are times when technical people are bad managers and do more damage than their knowledge can possibly make up for. But like, this tension between technical knowledge and specializing in management (or grant evaluation) seems like the crux of the issue that must be addressed head-on in any theorizing about the problem.

Letting people specialize as “science managers” sounds in practice like transferring the reins from scientists to MBAs, as was much maligned at Boeing. Similarly, having grants distributed by people who aren’t practicing scientists sounds like a great way to avoid professional financial retaliation and replace it with politicians setting the direction of funding.

9jasoncrawford
Who says they would be MBAs? The best science managers are highly technical themselves and started out as scientists. It's just that their career from there evolves more in a management direction.

UK’s proposal for a joint safety institute seems maybe more notable:

Sunak will use the second day of Britain's upcoming two-day AI summit to gather “like-minded countries” and executives from the leading AI companies to set out a roadmap for an AI Safety Institute, according to five people familiar with the government’s plans.

The body would assist governments in evaluating national security risks associated with frontier models, which are the most advanced forms of the technology.

The idea is that the institute could emerge from what is now the Unit

... (read more)

The softmax acts on the whole matrix

Isn't the softmax applied vector-wise, thereby breaking the apparent transpose symmetry? 
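To make this concrete, here is a minimal NumPy sketch (my own illustration, not from the original post) showing that a row-wise softmax does not commute with transposition, so the apparent symmetry of the score matrix is indeed broken:

```python
import numpy as np

def softmax_rows(m):
    # Subtract each row's max for numerical stability, then normalize the row.
    e = np.exp(m - m.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
scores = rng.normal(size=(3, 3))  # stand-in for one Q K^T attention-score block

print(np.allclose(softmax_rows(scores).T, softmax_rows(scores.T)))
# False: the row-wise softmax breaks the apparent transpose symmetry.
```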

Strictly speaking, the plot could be 100% noise without error bars, sample size, or similar info. So maybe worth including that.

No. All the forms of leverage advocated in the book (e.g., call options and buying stocks on margin) at worst take your portfolio to zero if there is a huge market downturn. The book of course advocates keeping a safe rainy-day fund for basic expenses, like everyone else. So you don’t ever require a bailout. The idea is that having your retirement fund go to zero in your early twenties is hardly catastrophic, and the older you get the less leveraged you should be.

You're drawing a philosophical distinction based on a particular ontology of the wavefunction. A simpler version arises in classical electromagnetism: we can integrate out the charges and describe the world entirely as an evolving state of the E&M field with the charges acting as weird source terms, or we can do the opposite and integrate out the E&M field to get a theory of charges moving with weird force laws. These are all equivalent descriptions in that they are observationally indistinguishable.

Does excalidraw have an advantage over a slides editor like PowerPoint or Keynote?

1tcelferact
I would choose it for very different use cases to slides; I've never diagrammed anything in a slides editor. I have historically drawn things in excalidraw, screenshotted them, then pasted them into a slides editor though.

Let me also endorse the usefulness of AlternativeTo.net. Highly recommended.

You've given some toy numbers as a demonstration that the claim needn't necessarily be undermined, but the question is whether it's undermined by the actual numbers.

3Ruby
I thought about this for a while, and I think the entailment you point out is correct and we can't be sure the numbers turn out as in my example. But also, I think I got myself confused when writing the originally cited passage. I was thinking about how there will be a smaller absolute number of false-positive deaths than the absolute number of false-positive symptomatic cases, because there are fewer deaths generally. That doesn't require the false-positive rates to be different to be true. Also thinking about it, the mechanisms by which the false-positive rate would be lower on severe outcomes that I'd been thinking of don't obviously hold. It's probably more like if someone had a false-positive test and then had pneumonia symptoms, it'd be mistaken for Covid, and the rate of that happening is only dependent on the regular Covid test false-positive rate.

> Of course, the outcomes we’re interested in are hospitalization, severe Covid, and death. I’d expect the false positives on these to be lower than for having Covid at all, but across tens of thousands of people (the Israel study did still have thousands even in later periods), it’s not crazy that some people would be very ill with pneumonia and also get a false positive on Covid.

Does this observation undermine the claim of a general trend in effectiveness with increasing severity of disease? That is, if false positives bias the measured effectiveness ... (read more)

4Ruby
I don't think it undermines it. What matters is the relative frequency of true cases [1] vs false positives. With less severe disease (e.g. symptomatic), we might have a frequency of 1% true cases in the population, plus 0.1% false-positive rate. The true cases greatly outnumber the false-positives. In contrast, vaccinated death from Covid might be only 0.001% in the population, while false-positive deaths are 0.01%. Here the false-positives dominate. So even though the absolute false-positive rate is lower in more severe cases (because it's harder to misattribute deaths than get wrong test results), it still dominates the effectiveness results more because it's larger than the rate of actual occurrences of the event. [1] I say "true cases" deliberately instead of true-positives, because I mean to say the objective underlying frequency of the event, not true-positive detection rate.
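A toy calculation of this dilution effect (my own sketch, with rates loosely adapted from the illustrative numbers above; it assumes false positives hit vaccinated and unvaccinated groups at the same rate):

```python
def measured_effectiveness(true_unvax, true_vax, false_positive):
    # Observed event rates = true rates + the shared false-positive rate.
    return 1 - (true_vax + false_positive) / (true_unvax + false_positive)

# Symptomatic Covid: true cases (1% unvaccinated) dominate false positives (0.1%).
print(measured_effectiveness(0.01, 0.001, 0.001))
# ~0.82 measured vs 0.90 true effectiveness: mild dilution.

# Death from Covid: false positives (0.01%) dominate the true events (0.001%).
print(measured_effectiveness(0.0001, 0.00001, 0.0001))
# ~0.45 measured vs 0.90 true effectiveness: heavy dilution.
```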

The automated tools on Zotero are good enough now that getting the complete bibtex information doesn't really make it much easier.  I can convert a DOI or arXiv number into a complete listing with one click, and I can do the same with a paper title in 2-3 clicks.  The laborious part is (1) interacting with each author and (2) classifying/categorizing the paper.

Does the org have an official stance?  I've seen people write it both ways.  Happy to defer to you on this, so I've edited.

4Daniel Kokotajlo
I don't know, but I've only ever heard the people who work there use CLR.

If we decide to expand the database in 2021 to attempt comprehensive coverage of blog posts, then a machine-readable citation system would be extremely helpful.  However, to do that we would need to decide on some method for sorting/filtering the posts, which is going to depend on what the community finds most interesting.  E.g., do we want to compare blog posts to journal articles, or should the analyses remain mostly separate?  Are we going to crowd-source the filtering by category and organization, or use some sort of automated guessing b... (read more)

Somewhat contra Alex's example of a tree, I am struck by the comprehensibility of biological organisms. If, before I knew any biology, you had told me only that (1) animals are mechanistic, (2) are in fact composed of trillions of microscopic machines, and (3) were the result of a search process like evolution, then the first time I looked at the inside of an animal I think I would have expected absolutely *nothing* that could be macroscopically understood. I would have expected a crazy mesh of magic material that operated at a level way outside my ab... (read more)

3Alex_Altair
I think I only sort of agree with this. There does seem to be some level (macroscopic organs) at which biology makes tons of sense and is relatively immediately understandable. But I get the impression that once you start trying to understand the thing more specifically, and critically, actually do anything in the domains of biology, like medicine or nutrition, you pretty quickly hit a massive wall of non-understandability. My impression is that most medicine is virtually just randomly trying stuff and rolling with what seems to have non-zero benefit and statistically negligible harm. (This post is an elaborated opinion on this.) Another example is in understanding how the mind/brain works. We now have an absolutely wild amount of data about how the brain is structured, but on an actual day-to-day operating level, we are barely able to do better than the ancient Greeks.
3Capybasilisk
But a lot of that feeling depends on which animal's insides you're looking at. A closely related mammal's internal structure is a lot more intuitive to us than, say, an oyster or a jellyfish.
6Ben Pace
+1. It's hard to remember how surprised I'd be to see reality for the first time, but it is shocking to look inside a biological creature and have a sense of "oh yeah, I have some sense of how many of these things connect together". I'd expect things to look more like they do in weird sci-fi like "Annihilation" or something. Although I remember people didn't get basic stuff like what the brain was for for ages, so maybe it did look insane as well.

Agreed. The optimal amount of leverage is of course going to be very dependent on one's model and assumptions, but the fact that a young investor with 100% equities does better *on the margin* by adding a bit of leverage is very robust.
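For illustration, here is the standard Merton-style back-of-the-envelope (my assumed parameters, not numbers from the book): under log utility and lognormal returns, expected growth is maximized at leverage (mu − r)/sigma², which exceeds 1 for plausible equity premia.

```python
# Assumed parameters (mine, for illustration): equity return, borrow rate, volatility.
mu, r, sigma = 0.07, 0.03, 0.18

def expected_log_growth(L):
    # Approximate long-run growth rate of a portfolio with constant leverage L.
    return r + L * (mu - r) - (L * sigma) ** 2 / 2

L_star = (mu - r) / sigma ** 2
print(L_star)                       # ~1.23: the log-utility optimum is above 1
print(expected_log_growth(1.0))     # ~5.4%/yr at 100% equities
print(expected_log_growth(L_star))  # ~5.5%/yr at the mildly levered optimum
```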

I endorse ESRogs' replies. I'll just add some minor points.

1. Nothing in this book or the lifecycle strategy rests on anything specific to the US stock market. As I said in my review

The fact that, when young, you are buying stocks on margin makes it tempting to interpret this strategy as only good when one is not very risk averse or when the stock market has a good century. But for any time-homogeneous view you have on what stocks will do in the future, there is a version of this strategy that is better than a conventional strategy. (A large fr
... (read more)
The problem is that there are other RNA viruses besides SARS-CoV-2, such as influenza, and depending when in the disease course the samples were taken, the amount of irrelevant RNA might exceed the amount of SARS-CoV-2 RNA by orders of magnitude

There is going to be tons of RNA in saliva from sources besides SARS-CoV-2 always. Bits of RNA are floating around everywhere. Yes, there is some minimum threshold of SARS-CoV-2 density at which the test will fail to detect it, but this should just scale up by a factor of N when pooling over N people. I don't see why other RNA those people have will be a problem any more than the other sources of RNA in a single person are a problem for a non-pooled test.
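A toy dilution model of what I mean (my own sketch; the threshold and load numbers are made up for illustration):

```python
def pooled_viral_concentration(viral_load, n):
    # One infected sample mixed with n-1 clean ones: viral RNA is diluted by n.
    # Background RNA is NOT enriched by pooling: every sample contributes its
    # own, so the pooled background concentration stays roughly what it is for
    # a single person's sample.
    return viral_load / n

detection_threshold = 100.0  # made-up units of viral RNA concentration
for n in (1, 10, 100):
    conc = pooled_viral_concentration(viral_load=5000.0, n=n)
    print(n, conc, conc >= detection_threshold)
# n=1 and n=10 still detect; n=100 falls below threshold. Pooling just raises
# the effective detection threshold by a factor of N, as claimed above.
```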

"The government" in the US certainly doesn't have the authority to do most of these things.

Both the federal and state governments have vast powers during public health emergencies. For instance, the Supreme Court has made clear that the government can hold you down and vaccinate you against your will. Likewise, the Army (not just National Guard) can be deployed to enforce laws, including curfew and other quarantine laws.

Yes, it's unclear whether government officials would be willing to use these options, and how much the public would... (read more)

Hi Rohin, are older versions of the newsletter available?

Also:

This sounds mostly like a claim that it is more computationally expensive to deal with hidden information and long term planning.

One consideration: When you are exploring a tree of possibilities, every bit of missing information means you need to double the size of the tree. So it could be that hidden information leads to an exponential explosion in search cost in the absence of hidden-information-specific search strategies. Although strictly speaking this is just a case of something being "more computationally expensive", exponential penalties generically push things from being feasible to infeasible.
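As a sketch of the scaling (my own illustration, with made-up branching factor and depth):

```python
def naive_search_cost(branching, depth, hidden_bits):
    # Visible game tree, times one copy of the tree per possible hidden state.
    return branching ** depth * 2 ** hidden_bits

for h in (0, 10, 20, 40):
    print(h, naive_search_cost(branching=10, depth=6, hidden_bits=h))
# 1e6 nodes with no hidden info, ~1e18 with 40 hidden bits: each missing bit
# doubles the tree, turning a feasible search into an infeasible one.
```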

2Rohin Shah
Hey Jess, as Ben mentioned I keep all newsletter-related things on my website. I agree that in theory hidden information leads to an exponential explosion. In practice, I think you don't need to search over all the exponentially many ways the hidden information could be in order to get good results. (At least, you don't need to do that in order to beat humans, because humans don't seem to do that.) I think overall we agree though -- when I said "it wasn't clear how to make things work with hidden information -- you could try the same thing but it was plausible it wouldn't work", I was primarily thinking that the computational cost might be too high. I was relatively confident that given unbounded compute, AlphaGo-style algorithms could deal with hidden information.
6Ben Pace
They're all available at his LW profile and also at his offsite blog.

What is the core problem of your autonomous driving group?!

1Alex Flint
It doesn't matter! :P

Marshall, I would keep in mind that good intentions are not sufficient for getting your comments up-voted. They need to contribute to the discussion. Since your account was deleted, we can't judge one way or the other.

I think there is some truth to Marshall's critique and that the situation could be easily improved by making it clear (either on the "about" page or in some other high-visibility note) what the guidelines for voting are. That means guidelines would have to be agreed upon. Until that happens, I suspect people will continue to just vote up comments they agree with, stifling debate.

I've previously suggested a change to the voting system, but this might require more man-power to implement than is available.

It seems like the only criterion for the rating of a comment/post should be the degree to which it contributes to healthy discussion (well-explained, on-topic, not completely stupid). However, there is a strong tendency for people to vote comments based on whether they disagree with them or not, which is very bad for healthy discussion. It discourages new ideas and drives away visitors with differing opinions when they see a page full of highly rated comments for a particular viewpoint (cf. reddit).

The feature I would recommend most for this website is a dual ... (read more)

6Vladimir_Nesov
I disagree, because I see these factors as necessarily closely connected, in any person's mind. I rate not quality of prose, but quality of communicated idea, as it comes through. If I think that the idea is silly, I rate it down. If the argument moves me, communicating a piece of knowledge that I at least give a chance of changing my understanding of something, then the message was valuable. It doesn't matter whether the context was to imply a conclusion I agree or disagree with, it only matters whether the idea contributes something to my understanding.
6Eliezer Yudkowsky
This makes... quite a lot of sense, actually. And of course the posts would be sorted by quality votes, not agreement votes.
1thomblake
I'm not sure this is obviously right. I would probably insist upon some usability study to determine how people actually use such features. Of course, if the cost is low such a study could just be implementing them and seeing how it works. I imagine there's a name for this cognitive bias, but I've noticed well-informed folks tend to think agreeable opinions are better-argued, and less agreeable ones are worse-argued (probably a species of confirmation bias). For example, someone posting against physicalism might get downvoted quickly by people who say "but they didn't even consider Dennett's response to this premise". But they might not have the same objections on-hand to an unsound argument in favor of physicalism.
0Jess_Riedel
Also, I am going with the crowd and changing to a user name with an underscore

I'm confused. What is the relationship between Alcor and the Cryonics Institute? Is it either-or? What is the purpose of yearly fees to them if you can just take out insurance which will cover all the costs in the event of your death?

Eliezer, I believe that your belittling tone is conducive to neither a healthy debate nor a readable blog post. I suspect that your attitude is borne out of just frustration, not contempt, but I would still strongly encourage you to write more civilly. It's not just a matter of being nice; rudeness prevents both the speaker and the listener from thinking clearly and objectively, and it doesn't contribute to anything.

-2CynicalOptimist
Can't agree with this enough.

Günther: Of course my comments about Barbour were (partially) ad hominem. The point was not to criticize his work, but to criticize this post. Very few people are qualified to assess the merit of Barbour's work. This includes, with respect, Eliezer. In the absence of expertise, the rational thinker must defer to the experts. The experts have found nothing of note in Barbour's work.

Albert Einstein was not performing philosophy when he developed GR. He was motivated by a philosophical insight and then did physics.

You've drawn many vague conclusions (read: words, not equations or experimental predictions) about the nature of reality from a vague idea promoted by a non-academic. It smacks strongly of pseudo-science.

Julian Barbour's work is unconventional. Many of his papers border on philosophy and most are not published in prominent journals. His first idea, that time is simply another coordinate parameterizing a mathematical object (like a manifold in GR) and that its specialness is an illusion, is ancient. His second idea, that any theory more fundamental tha... (read more)

0Luke_A_Somers
I find this contrast you're drawing confusing. Making it relational is an attempt to justify the gauge freedom.

I definitely agree that there is truth to Max Planck's assertion. And indeed, the Copenhagen interpretation was untenable as soon as it was put forth. However, Everett's initial theory was also very unsatisfying. It only became (somewhat) attractive with the much later development of decoherence theory, which first made plausible the claim that no-collapse QM evolution could explain our experiences. (For most physicists who examine it seriously, the claim is still very questionable.)

Hence, the gradual increase in acceptance of the MW interpretation is a product both of the old guard dying off and the development of better theoretical support for MW.

Psy-Kosh: Oh, I almost forgot to answer your questions. Experimental results are still several years distant. The basic idea is to fabricate a tiny cantilever with an even tinier mirror attached to its end. Then, you position that mirror at one end of a photon cavity (the other end being a regular fixed mirror). If you then send a photon into the cavity through a half-silvered third mirror--so that it will be in a superposition of being in and not in the cavity--then the cantilever will be put into a correlated superposition: it will be vibrating if t... (read more)
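In equation form (my schematic paraphrase of the setup, not notation from the proposal), the joint state after the half-silvered mirror is:

```latex
% Schematic joint state (paraphrase): the photon's which-path degree of
% freedom becomes entangled with the cantilever's motion.
\begin{equation}
|\Psi\rangle = \tfrac{1}{\sqrt{2}}\Big(
    |\text{photon in cavity}\rangle \otimes |\text{cantilever vibrating}\rangle
  + |\text{photon elsewhere}\rangle \otimes |\text{cantilever still}\rangle \Big)
\end{equation}
```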

Psy-Kosh: It is an awesome experiment. Here are links to Bouwmeester's home page, the original proposal, and the latest update on cooling the cantilever. (Bouwmeester has perhaps the most annoying web interface of any serious scientist. Click in the upper left on "research" and then the lower right on "macroscopic quantum superposition". Also, the last article appeared in Nature and may not be accessible without a subscription.)

Obviously, this is a very hard experiment and success is not assured.

Also, you might be interested to know t... (read more)

Excellent post Eliezer. I have just a small quibble: it should be made clear that decoherence and the many-worlds interpretation are logically distinct. Many physicists, especially condensed matter physicists working on quantum computation/information, use models of microscopic decoherence on a daily basis while remaining agnostic about collapse. These models of decoherence (used for so-called "partial measurement") are directly experimentally testable.

Maybe a better term for what you are talking about is macroscopic decoherence. As of right ... (read more)

0zslastman
Surely the prior is that the laws of physics hold at all scales? Why wouldn't you extrapolate? Edit: Just noticed how redundant this comment is..

"And both spatial infinity and inflation are standard in the current model of physics."

As mentioned by a commenter above, spatial infinity is by no means required or implied by physical observation. Non-compact space-times are allowed by general relativity, but so are compact tori (which is a very real possibility) or a plethora of bizarre geometries which have been ruled out by experimental evidence.

Inflation is an interesting theory which agrees well with the small (relative to other areas of physics) amount of cosmological data which has bee... (read more)

3waveman
In the prolog to the QM sequence he does actually repeatedly say <this all is my opinion and others have different opinions and I'll talk about that later>

Eliezer: I wouldn't be surprised to learn that there is some known better way of looking at quantum mechanics than the position basis, some view whose mathematical components are relativistically invariant and locally causal. There is. Quantum Field Theory takes place on the full spacetime of special relativity, and it is completely Lorentz covariant. Quantum Mechanics is a low-speed approximation of QFT and necessarily chooses a reference frame, destroying covariance.

Hal Finney: The Schrodinger equation (and the relativistic generalization) dictates local evolution of the wavefunction. Non-locality comes about during the measurement process, which is not well understood.

CPT symmetry is required by Quantum Field Theory, not General Relativity.

The Feynman path integral (PI) and Schrödinger's equation (SE) are completely equivalent formulations of QM in the sense that they give the same time evolution of an initial state. They have exactly the same information content. It's true that you can derive SE from the PI, while the reverse derivation isn't very natural. On the other hand, the PI is mathematically completely non-rigorous (roughly, the space of paths is too large) while SE evolution can be made precise.
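Concretely, the equivalence is the standard textbook identity between the two expressions for the propagator (a well-known formula, not taken from this thread): SE evolution on the left, the Feynman sum over paths on the right.

```latex
% Standard textbook identity: the Schrodinger-evolved propagator equals the
% Feynman sum over all paths with the same endpoints.
\begin{equation}
\langle x_f | e^{-i\hat{H}(t_f - t_i)/\hbar} | x_i \rangle
  = \int_{x(t_i)=x_i}^{x(t_f)=x_f} \mathcal{D}[x(t)]\, e^{\,i S[x]/\hbar}
\end{equation}
```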

Practically, the PI cannot be used to solve almost anything except the harmonic oscil... (read more)

1Robert Wilson III
They're only syntactically equivalent. Their semantics are completely different. In my opinion, Feynman's semantics is objectively correct regarding the 'literal path' of a particle through spacetime. Given we don't officially know their paths, but we do know their end destinations (wave equation), we can figure all possible paths and have the practically impossible paths cancel each other out: leaving only the probable literal paths of a particle complete with a graph of their trajectories. Schrodinger's equation is far behind semantically. I think Feynman's path integrals are superior.

Psy-Kosh: Position-space is special because it has a notion of locality. Two particles can interact if they collide with each other traveling at different speeds, but they cannot interact if they are far from each other traveling at the same speed.

The field, defined everywhere on the 4-D spacetime manifold, is "reality" (up until the magical measurement happens, at least). You can construct different initial value problems (e.g. if the universe is such-and-such at a particular time, how will it evolve?) by taking different slices of the spacetim... (read more)

Chris, in case you didn't see me ask you last time...

http://www.overcomingbias.com/2008/04/philosophy-meet.html#comment-110472438

do you know of a good survey of decoherence?

Psy-Kosh: In Quantum Field Theory, the fields (the analog of wavefunctions in non-relativistic Quantum Mechanics) evolve locally on the spacetime. This is given a precise, observer-independent (i.e. covariant) meaning. This property reduces to the spatially-local evolution of the wavefunction in QM which Eliezer is describing. Further, this indeed identifies position-space as "special", compared to momentum-space or any other decomposition of the Hilbert space.

Eliezer: The wavefunctions in QM (and the fields in QFT) evolve locally under norma... (read more)

1DanielLC
I'm pretty sure Many Worlds doesn't have wavefunction collapse. Also, I don't think they're talking about configuration space. They're saying that particle a being in point A and particle b being in point B interacting is non-local. That configuration is one point, so it's completely local.

Chris, could you recommend an introduction to decoherence for a grad student in physics? I am dumbstruck by how difficult it is to learn about it and the seeming lack of an authoritative consensus. Is there a proper review article? Is full-on decoherence taught in any physics grad classes, anywhere?

Psy-Kosh: I have never heard of anyone ever successfully formulating quantum (or classical) mechanics without the full spectrum of real numbers. You can't even have simple things, like right triangles with non-integer side lengths, without irrational numbers to "fill in the gaps". Any finite-set formulation of QM would look very different from what we understand now.
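The simplest instance (a standard fact, added here for illustration): a right triangle with unit legs already forces an irrational length.

```latex
% Unit legs already give an irrational hypotenuse.
\begin{equation}
c = \sqrt{1^2 + 1^2} = \sqrt{2} \notin \mathbb{Q}
\end{equation}
```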
