Comments

These are good points. Do you think that if the researchers did find the sort of discretization they are hypothesizing, this would represent at least some weak evidence in favor of the simulation hypothesis, or do you think it would be completely uninformative with respect to it?

Damn. I quickly checked to see if this link had been posted, but I guess I didn't look back far enough--I assumed that if it had been, it would have been very recently, but apparently it was posted 10 days ago... my bad.

Have to disagree with you on, well, several points here.

Heuristics in Heuristics and Biases are only descriptive. [...] Heuristics in Heuristics and biases are defined as having negative side effects.

If your claim is that heuristics are defined by H&B theorists as being explicitly not prescriptive, in the sense of never being "good" or "useful," this is simply not the case. For instance, in the opening paragraph of their seminal 1974 Science article, Tversky & Kahneman clearly state that "...people rely on a limited number of heuristic principles which reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations. In general, these heuristics are quite useful, but sometimes they lead to severe and systematic errors." Gigerenzer et al. would not necessarily disagree with this definition (they tend to define heuristics in terms of "ignoring information" rather than "reducing complexity," although the end result is much the same), though they would almost certainly phrase it more optimistically.

...nor probably could you even if you tried. You could not intentionally reproduce the pattern of cognitive biases that their heuristics allegedly cause, many appear to be irretrievably outside of conscious awareness or control.

Representativeness, one of the earliest examples of a heuristic given by the H&B program, is certainly used in a conscious and deliberate way. When asked, subjects routinely report relying on representativeness to make frequency or probability judgments, and they generally see nothing wrong or even really remarkable about this fact. Nick Epley's work also strongly suggests that people very deliberately rely on anchoring-and-adjustment strategies when making some common judgments (e.g., "When was George Washington elected president?" "Hmm, well it was obviously some time shortly after the Declaration of Independence, which was in 1776... so maybe 1786?").

Fast and Frugal heuristics, however, you can learn and use intentionally.

One can certainly learn to use any heuristic strategy, but for some heuristics proposed by the F&F camp, such as the so-called fluency heuristic (Hertwig et al., 2008), it is not at all obvious that they are used intentionally in practice, or even that subjects are aware of using them. The fluency heuristic in particular is extremely similar to the availability heuristic proposed decades earlier by Tversky & Kahneman.

Descriptive F&F heuristics aren't evolutionary quirks.

I'm not sure what you mean here. If an "evolutionary quirk" is a locally optimal solution that falls short of a global maximum, then the heuristics described by both H&B and F&F theorists are most certainly "evolutionary quirks." The claim being advanced by F&F theorists is not that the heuristics we tend to use are optimal in any sense of having maximal evolutionary adaptedness, but simply that they work just fine, thanks. Note, however, that they are outperformed in simple inference tasks even by relatively simple strategies like multiple regression, and outperformed in more challenging prediction tasks by, e.g., Bayesian networks. They are decidedly not globally optimal.
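
Here's a rough sketch of the kind of comparison I mean, on purely synthetic data. The cue weights, sample sizes, and the simplified single-cue stand-in for take-the-best are all illustrative assumptions on my part, and which strategy wins will of course depend on the data-generating process:

```python
# Sketch: a simplified single-cue heuristic (a stand-in for take-the-best)
# versus multiple regression on a paired-comparison inference task.
# All data here are synthetic and the parameters are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
n_objects, n_cues = 200, 5
cue_weights = np.array([0.5, 0.3, 0.1, 0.07, 0.03])  # assumed cue importances
cues = rng.integers(0, 2, size=(n_objects, n_cues)).astype(float)
criterion = cues @ cue_weights + rng.normal(0, 0.1, n_objects)

# Train on the first half, test on the second.
train, test = cues[:100], cues[100:]
y_train, y_test = criterion[:100], criterion[100:]

# Multiple regression: fit all cues (plus an intercept) by least squares.
Xtr = np.column_stack([np.ones(len(train)), train])
Xte = np.column_stack([np.ones(len(test)), test])
beta, *_ = np.linalg.lstsq(Xtr, y_train, rcond=None)
reg_pred = Xte @ beta

# Heuristic: find the single most valid cue on the training data and
# predict from it alone, ignoring all other information.
validities = [abs(np.corrcoef(train[:, j], y_train)[0, 1]) for j in range(n_cues)]
best_cue = int(np.argmax(validities))
heur_pred = test[:, best_cue]

def pairwise_accuracy(pred, truth):
    """Fraction of test-object pairs whose ordering the predictions get right."""
    correct, total = 0, 0
    for i in range(len(truth)):
        for j in range(i + 1, len(truth)):
            total += 1
            correct += (pred[i] > pred[j]) == (truth[i] > truth[j])
    return correct / total

print("regression :", pairwise_accuracy(reg_pred, y_test))
print("single cue :", pairwise_accuracy(heur_pred, y_test))
```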

...besides the obvious that Fast and Frugal heuristics are "good" while heuristics as in Heuristics and biases are "bad".

This impression is entirely due to differences in the framing and emphasis employed by the two camps. It does not represent anything like a fundamental distinction between how they each view the nature or role of heuristics in judgment and decision making.

What I'm saying is that is how many people tend to wrongly interpret such statistics to define their own null hypothesis in the way I outlined in the post.

But that's not right. The problem that your burden of proof example describes is a problem of priors. The theist and the atheist are starting with priors that favor different hypotheses. But priors (notoriously!) don't enter into the NHST calculus. Given two statistical models, one of which is a nested subset of the other (this is required in order to compare them directly), there is no choice about which is the null: the null model is the one with fewer parameters (i.e., it is the nested subset). It isn't up for debate.
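
To make the nesting point concrete, here is a minimal sketch with synthetic data and a toy pair of models (the slope, noise level, and sample size are arbitrary choices of mine):

```python
# Null model:        y = b0            (1 free parameter)
# Alternative model: y = b0 + b1 * x   (2 free parameters)
# The null is the sub-model you get by fixing b1 = 0; that is what
# makes it the null. Neither side gets to pick.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 0.4 * x + rng.normal(size=50)  # assumed true slope of 0.4

# Residual sum of squares under each model.
rss_null = np.sum((y - y.mean()) ** 2)
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
rss_alt = np.sum((y - X @ beta) ** 2)

# Standard F-test for the one extra parameter.
df1, df2 = 1, len(y) - 2
F = ((rss_null - rss_alt) / df1) / (rss_alt / df2)
p = stats.f.sf(F, df1, df2)
print(f"F = {F:.2f}, p = {p:.4g}")
```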

There are other problems with NHST--as you point out later in the post, some people have a hard time keeping straight just what the numbers are telling them--but the issue I highlighted above isn't one of them.

Also, isn't model complexity quite hard to determine with the statements "God exists" and "God does not exist". Isn't the complexity in this sense subject to easy bias?

Yes. As you noted in your OP, forcing this pair of hypotheses into a strictly statistical framework is awkward no matter how you slice it. Statistical hypotheses ought to be simple empirical statements.

As an aspiring scientist, I hold the Truth above all.

That will change!

More seriously though...

As one can see, the biggest problem is determining burden of proof. Statistically speaking, this is much like the problem of defining the null hypothesis.

Well, not really. The null and alternative hypotheses in frequentist statistics are defined in terms of their model complexity, not our prior beliefs (that would be Bayesian!). Specifically, the null hypothesis represents the model with fewer free parameters.

You might still face some sort of statistical disagreement with the theist, but it would have to be a disagreement over which hypothesis is more/less parsimonious--which is really a rather different argument than what you've outlined (and IMO, one that the theist would have a hard time defending).

It doesn't sound unreasonable to me, given the severity of your symptoms. But I'm not a sleep doctor.

Consider also that there are other ways to procure drugs like this, e.g., shady online vendors based overseas. Just make sure you do your research on the vendors first. There are people who have ordered various drugs from these vendors, chemically verified that the drugs were in fact what they were advertised to be, and then posted their results in various places online for the benefit of others. Bottom line: some companies are more trustworthy than others--do your homework. And obviously you should exercise due caution when taking a new drug without a doctor's consent.

How about Modafinil or a similar drug? It is prescribed for narcolepsy. More generally, can I safely assume that "everything" includes having talked to your doctor about how serious these symptoms are?

I think you're taking the fundamentally wrong approach. Rather than trying to simply predict when you'll be sleepy in the near-term, you should try to actively get your sleeping patterns under control.

Robin Hanson's posts from the AI Foom debate are not included in the list of all articles. Covering only Yudkowsky's side of the debate would be a little strange for readers, I think. Should we also feature the posts from Hanson (and the others who participated in the debate) during that time?

Yes, that's exactly right.

And although I'm having a hard time finding a news article to verify this, someone informed me that the official breast cancer screening recommendations in the US (or was it a particular state, perhaps California?) were recently changed so that regular screening is no longer recommended for women younger than 40 (50?). The young woman who informed me of this change in policy was quite upset about it. It didn't make any sense to her. I tried to explain to her how it actually makes good sense when you think about it in terms of base rates and expected values, but of course, it was no use.
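
For what it's worth, this is the sort of arithmetic I was gesturing at. It's a minimal sketch: the prevalence, sensitivity, and false-positive numbers are made-up placeholders, not real screening statistics for any age group:

```python
# A worked example of the base-rate point. All numbers below are
# illustrative placeholders, not actual screening statistics.
def posterior_given_positive(prevalence, sensitivity, false_positive_rate):
    """P(cancer | positive mammogram), by Bayes' rule."""
    p_positive = (sensitivity * prevalence
                  + false_positive_rate * (1 - prevalence))
    return sensitivity * prevalence / p_positive

# The same test applied to populations with very different base rates:
print(posterior_given_positive(0.001, 0.8, 0.1))  # low-prevalence group: ~0.8%
print(posterior_given_positive(0.010, 0.8, 0.1))  # higher-prevalence group: ~7.5%
```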

But to return to the issue of clinical implications, yes: if a woman belongs to a population where the result of a mammogram would not change our decision about whether a biopsy is necessary, then she probably shouldn't have the mammogram. I suspect that this line of reasoning would sound quite foreign to most practicing doctors.
