transhumanist_atom_understander


This is a good example of neglecting magnitudes of effects. I think in this case most people just don't know the magnitude, and wouldn't really defend their answer in this way. It's worth considering why people sometimes do continue to emphasize that an effect is not literally zero, even when it is effectively zero on the relevant scale.

I think it's particularly common with risks. And the reason is that when someone doesn't want you to do something, but doesn't think their real reason will convince you, they often tell you it's risky. Sometimes this gives them a motive to repeat superstitions. But sometimes, they report real but small risks.

For example, consider Matthew Yglesias on the harms of marijuana:

Inhaling smoke into your lungs is, pretty obviously, not a healthy activity. But beyond that, when Ally Memedovich, Laura E. Dowsett, Eldon Spackman, Tom Noseworthy, and Fiona Clement put together a meta-analysis to advise the Canadian government, they found evidence across studies of “increased risk of stroke and testicular cancer, brain changes that could affect learning and memory, and a particularly consistent link between cannabis use and mental illnesses involving psychosis.”

I'll ignore the associations with mental illness, which are known to be the result of confounding, although this is itself an interesting category of fake risks. For example, a mother who doesn't want her child to get a tattoo because poor people get tattoos could likely find a correlation with poverty, or with any of the bad outcomes associated with poverty.

Let's focus on testicular cancer, and assume for the moment that this one is not some kind of confounding, but is actually caused by smoking marijuana. The magnitude of the association:

The strongest association was found for non-seminoma development – for example, those using cannabis on at least a weekly basis had two and a half times greater odds of developing a non-seminoma TGCT compared to those who never used cannabis (OR: 2.59, 95% CI 1.60–4.19). We found inconclusive evidence regarding the relationship between cannabis use and the development of seminoma tumours.

What we really want is a relative risk (how much more likely is testicular cancer among smokers?), but for a rare outcome like testicular cancer, the odds ratio should approximate it. And testicular cancer is rare:

Testicular cancer is not common: about 1 of every 250 males will develop testicular cancer at some point during their lifetime.

So while doubling your testicular cancer risk sounds bad, doubling a small risk results in a small risk. I have called this a "homeopathic" increase, which is perhaps unfair; I should probably reserve that for probabilities on the order of homeopathic concentrations.
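To put rough numbers on it, here's a back-of-the-envelope sketch in Python (treating the odds ratio as a relative risk, and ignoring that the 2.59 figure is for non-seminoma specifically while the 1-in-250 baseline covers all testicular cancers, so this overstates the increase):

```python
# Back-of-the-envelope: what does an odds ratio of ~2.59 mean in absolute terms?
baseline = 1 / 250      # lifetime risk of testicular cancer (~0.4%)
odds_ratio = 2.59       # weekly cannabis use vs. never, non-seminoma TGCT

# For a rare outcome, the odds ratio approximates the relative risk
elevated = baseline * odds_ratio
increase = elevated - baseline

print(f"baseline risk:     {baseline:.2%}")   # 0.40%
print(f"elevated risk:     {elevated:.2%}")   # ~1.04%
print(f"absolute increase: {increase:.2%}")   # ~0.64 percentage points
```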

But it does seem to me to be psychologically like homeopathy: all that matters is establishing that a risk is present; its size doesn't particularly matter.

Although this risk is not nothing... it's small but perhaps not negligible.

It's great to have a LessWrong post that states the relationship between expected quality and a noisy measurement of quality:

(Why 0.5? Remember that performance is a sum of two random variables with standard deviation 1: the quality of the intervention and the noise of the trial. So when you see a performance number like 4, in expectation the quality of the intervention is 2 and the contribution from the noise of the trial (i.e. how lucky you got in the RCT) is also 2.)

We previously had a popular post on this topic, the tails come apart post, but it actually made a subtle mistake when stating this relationship. It says:

For concreteness (and granting normality), an R-square of 0.5 (corresponding to an angle of sixty degrees) means that +4SD (~1/15000) on a factor will be expected to be 'merely' +2SD (~1/40) in the outcome - and an R-square of 0.5 is remarkably strong in the social sciences, implying it accounts for half the variance.

The example under discussion in this quote is the same as the example in this post, where quality and noise have the same variance, and thus R^2=0.5. And superficially it seems to be stating the same thing: the expectation of quality is half the measurement.

But actually, this newer post is correct, and the older post is wrong. The key is that "Quality" and "Performance" in this post are not measured in standard deviations. Their standard deviations are 1 and √2, respectively: Quality has a variance, and standard deviation, of 1, while the variance of Performance is the sum of the variances of Quality and the noise, which is 2, so its standard deviation is √2. Now that we know their standard deviations, we can scale them to units of standard deviation, obtaining Quality (unchanged) and Performance/√2. The relationship between them is:

$$E\left[\text{Quality} \,\middle|\, \tfrac{\text{Performance}}{\sqrt{2}}\right] = \frac{1}{\sqrt{2}} \cdot \frac{\text{Performance}}{\sqrt{2}} = \frac{\text{Performance}}{2}$$

That is equivalent to the relationship stated in this post.

More generally, notating the variables in units of standard deviation as $z_x$ and $z_y$ (since they are "z-scores"),

$$E[z_y \mid z_x] = \rho z_x$$

where $\rho$ is the correlation coefficient. So if your noisy measurement of quality is $z$ standard deviations above its mean, then the expectation of quality is $\rho z$ standard deviations above its mean. It is $\rho^2$ that is the variance explained, and it is thus 1/2 when the signal and noise have the same variance. That's why, in the example in this post, we divide the raw performance by 2, rather than converting it to standard deviations and dividing by 2.

I think it's important to understand the relationship between the expected value of an unknown and the value of a noisy measurement of it, so it's nice to see a whole post about this relationship. I do think it's worth explicitly stating the relationship on a standard deviation scale, which this post doesn't do, but I've done that here in my comment.
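If you want to check this numerically, here's a quick simulation sketch (assuming numpy; the seed and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

quality = rng.normal(0, 1, n)      # true quality, SD 1
noise = rng.normal(0, 1, n)        # trial noise, SD 1
performance = quality + noise      # noisy measurement, SD sqrt(2)

# Raw scale: regression slope Cov(Q, P) / Var(P) should be ~0.5,
# i.e. expected quality is half the raw performance number
slope = np.cov(quality, performance)[0, 1] / np.var(performance)
print(slope)

# SD scale: the shrinkage factor is the correlation ~1/sqrt(2),
# so +4 SD in performance predicts ~+2.83 SD in quality, not +2 SD
rho = np.corrcoef(quality, performance)[0, 1]
print(rho, 4 * rho)
```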

Some other comments brought up that the heme iron in meat is better absorbed, which is true (see Figure 1). But the good news is that Impossible burgers have heme iron. They make it in yeast by adding a plasmid carrying the heme biosynthesis enzymes; the pathway is shown in Figure 1 of their patent, on the 56th page of the PDF.

I think we'll have an internet full of LLM-bots "thinking" up and doing stuff within a year.

Did this happen? At least not obviously.

Yes, it seems like biotech will provide the tools to build nanotech, and Drexler himself is still emphasizing the biotech pathway. In fact, in Engines of Creation, Drexlerian nanotech was called "second-generation nanotech", with the first generation understood to include current protein synthesis as well as future improvements to the ribosome.

I don't really see the point of further development of diamondoid nanotech. Drexler made his point in Nanosystems: certain capabilities that seem fantastical are physically possible. It conveniently opens with a list of lower bounds on capabilities, and backs them up with, as far as I'm concerned, enough rigor to make the point.

Once that point has been made, if you want to make nanotechnology actually happen, you should be focused on protein synthesis, right? What you need is not better nanotech designs. It's some theory of why protein engineering didn't take over abiotic industry the way people expected: why we're building iridium-based hydrogen electrolyzers at scale and have stopped talking about using engineered hydrogenases, and so on. An identification of the challenges, and a plan for addressing them. What's the point of continuing to hammer in that second-generation nanotech would be cool if only we could synthesize it?

I didn't feel chills from music for a long time, and then started to get them again after doing physical therapy and learning exercises to straighten my back and improve my posture. It was a notable enough change that I reported it to my physical therapists, but I don't recall how I interpreted it at the time ("I'm getting chills again" vs "chills are real??" or what).

An example important in my life is planning: I "couldn't" make long-term plans or complete my to-do list as long as my "to-do list" was just a list of obligations rather than anything I really wanted done. More generally, I think plans "on paper" are an especially easy case, since they don't take a telepath. For example, see the planning fallacy and Robin Hanson's comment that managers prefer the biased estimates.

Getting to the corporate level, there's accounting. A cool related image is in episode two of Twin Peaks, when Josie opens the safe and finds two ledgers, one making the mill look profitable and the other tracking the real debts. That's an example of "occlumency", I guess. But how common is it to have two account books like that, one kept internally and the other shown to investors? Or two schedules, a real one for real planning and a fake one to project optimism to the higher-ups? Or two to-do lists, one of stuff I plan to really do and the other of stuff I'm just saying I'll do to avoid an argument? Like, actually two files or pieces of paper? Certainly in a corporate context there are good legal reasons not to, since liability for fraud often depends on what you can be proven to know, right?

I wonder if there's also an analogy to the Gibbs sampling algorithm here.

For a believer, it will mostly bounce back and forth between "Assuming God is real, the bible is divinely inspired" and "Assuming the bible is divinely inspired, God must be real". But if these are not certainties, occasionally it must generate "Assuming God is real, the bible is actually not divinely inspired". And then from there, probably to "Assuming the bible is not divinely inspired, God is not real". But then also occasionally it can "recover", generating "Assuming the bible is not divinely inspired, God is actually real anyway". So you need that conditional probability too. But given all the conditional probabilities, the resulting chain generates the joint distribution over whether or not the bible is divinely inspired and whether or not God is real.
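Here's a minimal sketch of that chain in Python (the conditional probabilities are made up for illustration; when the conditionals are consistent with a single joint distribution, the long-run visit frequencies recover that joint):

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up conditional probabilities, just for illustration:
p_bible_given_god = {True: 0.99, False: 0.05}  # P(bible inspired | God real/not)
p_god_given_bible = {True: 0.99, False: 0.10}  # P(God real | bible inspired/not)

god, bible = True, True  # start the chain as a "believer"
counts = {}

for _ in range(100_000):
    # Gibbs step: resample each variable conditional on the other
    bible = bool(rng.random() < p_bible_given_god[god])
    god = bool(rng.random() < p_god_given_bible[bible])
    counts[(god, bible)] = counts.get((god, bible), 0) + 1

# Long-run visit frequencies approximate the joint distribution
for state, c in sorted(counts.items()):
    print(state, c / 100_000)
```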

The reason nobody else talks about the A_p distribution is that the same concept appears in standard probability expositions as a random variable representing an unknown probability. For example, if you look in Hoff's "A First Course in Bayesian Statistical Methods", it discusses the "binomial model" with an unknown "parameter" Θ. The "event" Θ=p plays the same role as the proposition A_p, since P(Y=1|Θ=p) = p.

I think Jaynes does have something to add, but not so much in the A_p distribution chapter as in his chapter on the physics of coin flips, and in his analysis of die rolls, which I'm not sure is in the book. He gets you out of the standard Bayesian stats mindset where reality is a binomial model or multinomial model or whatever, and shows you that A_p can actually have a meaning in terms of a physical model, such as a disjunction of die shapes that lead to the same probability of getting a 6. Although your way of thinking of it as a limiting posterior probability from a certain kind of evidence is interesting too (or Jaynes's way of thinking of it, if it was in the book; I don't recall). Anyway, I wrote a post on this that didn't get much karma; maybe you'll be one of the few people who's interested.
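For concreteness, here's the textbook Θ version of that machinery as a minimal sketch (assuming a uniform Beta(1,1) prior and made-up flip data):

```python
from scipy import stats

# Textbook version of the A_p idea: a random variable theta for the
# unknown probability, with P(Y=1 | theta = p) = p
a, b = 1, 1                       # Beta(1, 1): uniform prior on theta
flips = [1, 1, 0, 1, 0, 1, 1, 1]  # made-up observed outcomes
heads = sum(flips)
tails = len(flips) - heads

posterior = stats.beta(a + heads, b + tails)  # conjugate update

# P(next Y = 1) is the posterior mean of theta: (1 + 6) / (2 + 8) = 0.7
print(posterior.mean())
```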

Makes sense. I suppose we assume that the insurance pays out the value of the asset, leaving our wealth unchanged. So assuming we buy the insurance, there's no randomness in our log wealth, which is guaranteed to be log(W-P). The difference between that and our expected log wealth if we don't buy the insurance is V. That's why log(W-P) is positive in the formula for V, and all the terms weighted by probabilities are negative.
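In code, that reading of the formula looks like this (a sketch with hypothetical numbers for wealth W, premium P, and the loss distribution):

```python
import math

W = 100_000  # current wealth (hypothetical)
P = 1_000    # insurance premium (hypothetical)

# Hypothetical loss distribution if uninsured: (probability, loss)
outcomes = [(0.95, 0), (0.04, 10_000), (0.01, 50_000)]

# Insured: log wealth is guaranteed to be log(W - P)
insured = math.log(W - P)

# Uninsured: expected log wealth over the loss distribution
uninsured = sum(p * math.log(W - loss) for p, loss in outcomes)

# Value of the insurance in expected-log-wealth terms: the positive
# log(W - P) term minus the probability-weighted terms
V = insured - uninsured
print(V)  # buy the insurance if V > 0
```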
