
Comment author: jimrandomh 17 April 2013 03:01:26AM 31 points [-]

This is testing for discretization of space, which would be a very interesting fact about the universe but is somewhat orthogonal to whether it's a simulation; a root-level universe could still be discretized, and a simulated universe could be continuous or discretized more finely than any instrument can detect.

Comment author: jake987722 17 April 2013 04:50:54AM 3 points [-]

These are good points. Do you think that, if the researchers did find the sort of discretization they are hypothesizing, this would represent at least some weak evidence in favor of the simulation hypothesis, or do you think it would be completely uninformative with respect to the simulation hypothesis?

Comment author: Grognor 13 March 2012 01:21:30AM 2 points [-]

We already got this link.

Comment author: jake987722 13 March 2012 04:51:16AM 5 points [-]

Damn. I quickly checked to see if this link had been posted, but I guess I didn't look far enough back--I assumed that if it had been, it would have been very recently, but apparently it was actually posted 10 days ago... my bad.

LINK: Can intelligence explode?

-3 jake987722 13 March 2012 12:03AM

I thought many of you would be interested to know that the following paper just appeared in Journal of Consciousness Studies:

"Can Intelligence Explode?", by Marcus Hutter. (LINK HERE)

Abstract: The technological singularity refers to a hypothetical scenario in which technological advances virtually explode. The most popular scenario is the creation of super-intelligent algorithms that recursively create ever higher intelligences. It took many decades for these ideas to spread from science fiction to popular science magazines and finally to attract the attention of serious philosophers. David Chalmers' (JCS 2010) article is the first comprehensive philosophical analysis of the singularity in a respected philosophy journal. The motivation of my article is to augment Chalmers' and to discuss some issues not addressed by him, in particular what it could mean for intelligence to explode. In this course, I will (have to) provide a more careful treatment of what intelligence actually is, separate speed from intelligence explosion, compare what super-intelligent participants and classical human observers might experience and do, discuss immediate implications for the diversity and value of life, consider possible bounds on intelligence, and contemplate intelligences right at the singularity.

I have only just seen the paper and have not yet read through it myself, but I thought we could use this thread for discussion.

Comment author: nerfhammer 22 February 2012 06:22:13AM *  4 points [-]

Fast and Frugal heuristics can be descriptive (meaning human beings naturally use them at some level) or prescriptive (here are some good heuristics you can learn to use). Heuristics in Heuristics and Biases are only descriptive.

The Heuristics and Biases theorists would never suggest someone should try to "use" one of their heuristics, nor probably could you even if you tried. You could not intentionally reproduce the pattern of cognitive biases that their heuristics allegedly cause; many appear to be irretrievably outside of conscious awareness or control. For that matter, they often appear to be nearly impossible to stop using even if you wanted to.

Fast and Frugal heuristics, however, you can learn and use intentionally. The Fast and Frugal theorists generally don't suggest that it would be difficult to stop using their heuristics should you be aware of them and have the desire to. Descriptive heuristics may even be discoverable via introspection.

Heuristics in Heuristics and biases are defined as having negative side effects. There are no heuristics in H&B that aren't revealed via errors. Heuristics in H&B are presumed either to be required for some necessary efficiency or to be an evolutionary quirk like the blind spot in your eye. Fast and Frugal heuristics do not require negative side effects and are usually not described with any. Descriptive F&F heuristics aren't evolutionary quirks. Heuristics in F&F are defined as being a helpful efficiency gain.

So they are mutually exclusive in some properties, besides the obvious that Fast and Frugal heuristics are "good" while heuristics as in Heuristics and biases are "bad".

Comment author: jake987722 23 February 2012 01:48:48AM *  2 points [-]

Have to disagree with you on, well, several points here.

Heuristics in Heuristics and Biases are only descriptive. [...] Heuristics in Heuristics and biases are defined as having negative side effects.

If your claim is that heuristics are defined by H&B theorists as being explicitly not prescriptive, in the sense of never being "good" or "useful," this is simply not the case. For instance, in the opening paragraph of their seminal 1974 Science article, Tversky & Kahneman clearly state that "...people rely on a limited number of heuristic principles which reduce the complex tasks of assessing probabilities and predicting values to simpler judgmental operations. In general, these heuristics are quite useful, but sometimes they lead to severe and systematic errors." Gigerenzer et al. would not necessarily disagree with this definition (they tend to define heuristics in terms of "ignoring information" rather than "reducing complexity," although the end result is much the same), though they would almost certainly phrase it in a more optimistic way.

...nor probably could you even if you tried. You could not intentionally reproduce the pattern of cognitive biases that their heuristics allegedly cause; many appear to be irretrievably outside of conscious awareness or control.

Representativeness, one of the earliest examples of a heuristic given by the H&B program, is certainly used in a conscious and deliberate way. When asked, subjects routinely report relying on representativeness to make frequency or probability judgments, and they generally see nothing wrong or even really remarkable about this fact. Nick Epley's work also strongly suggests that people very deliberately rely on anchoring-and-adjustment strategies when making some common judgments (e.g., "When was George Washington elected president?" "Hmm, well it was obviously some time shortly after the Declaration of Independence, which was in 1776... so maybe 1786?").

Fast and Frugal heuristics, however, you can learn and use intentionally.

One can certainly learn to use any heuristic strategy, but for some heuristics proposed by the F&F camp, such as the so-called fluency heuristic (Hertwig et al., 2008), it is not at all obvious that in practice they are utilized in any intentional way, or even that subjects are aware of using them. The fluency heuristic in particular is extremely similar to the availability heuristic proposed decades earlier by Kahneman & Tversky.

Descriptive F&F heuristics aren't evolutionary quirks.

I'm not sure what you mean here. If an "evolutionary quirk" is a locally optimal solution that falls short of a global maximum, then the heuristics described by both H&B and F&F theorists are most certainly "evolutionary quirks." The claim being advanced by F&F theorists is not that the heuristics we tend to use are optimal in any sense of having maximal evolutionary adaptedness, but simply that they work just fine, thanks. Note, however, that they are outperformed in simple inference tasks even by relatively simple strategies like multiple regression, and outperformed in more challenging prediction tasks by, e.g., Bayes nets. They are decidedly not globally optimal.
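
To make that comparison concrete, here is a toy sketch in Python (simulated data, invented cue weights, cues assumed to point in the positive direction). It only illustrates how such a heuristic-vs-regression comparison is typically set up--fit both strategies on a set of training objects, then score out-of-sample paired comparisons--not any particular empirical result.

    import numpy as np

    rng = np.random.default_rng(0)
    n_objects, n_cues = 200, 5
    true_weights = np.array([1.0, 0.7, 0.5, 0.3, 0.1])      # invented cue importances
    cues = rng.binomial(1, 0.5, size=(n_objects, n_cues))    # binary cue profiles
    criterion = cues @ true_weights + rng.normal(0, 0.5, n_objects)
    train, test = np.arange(100), np.arange(100, 200)

    def pairs(idx):
        # All ordered pairs (a, b) of distinct objects, labelled "does a beat b?"
        a, b = np.meshgrid(idx, idx)
        keep = a != b
        a, b = a[keep], b[keep]
        return a, b, (criterion[a] > criterion[b]).astype(int)

    # Take-the-best style heuristic: order cues by validity on the training
    # pairs, then decide each test pair by the first cue that discriminates.
    a_tr, b_tr, y_tr = pairs(train)
    validity = np.empty(n_cues)
    for j in range(n_cues):
        d = cues[a_tr, j] - cues[b_tr, j]
        disc = d != 0
        validity[j] = ((d[disc] > 0) == (y_tr[disc] == 1)).mean() if disc.any() else 0.5
    cue_order = np.argsort(-validity)

    def take_the_best(a, b):
        pred = np.full(len(a), 0.5)                # 0.5 = still undecided
        open_ = np.ones(len(a), dtype=bool)
        for j in cue_order:
            d = cues[a, j] - cues[b, j]
            hit = open_ & (d != 0)
            pred[hit] = (d[hit] > 0).astype(float)
            open_ &= ~hit
        return pred > 0.5                          # unresolved pairs default to "no"

    # Multiple regression: least-squares weights on the training objects,
    # then predict each pair by comparing the fitted scores.
    X = np.column_stack([cues[train], np.ones(len(train))])
    w, *_ = np.linalg.lstsq(X, criterion[train], rcond=None)
    score = np.column_stack([cues, np.ones(n_objects)]) @ w

    a_te, b_te, y_te = pairs(test)
    print("take-the-best accuracy:", (take_the_best(a_te, b_te) == y_te).mean())
    print("regression accuracy:   ", ((score[a_te] > score[b_te]) == y_te).mean())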

...besides the obvious that Fast and Frugal heuristics are "good" while heuristics as in Heuristics and biases are "bad".

This impression is entirely due to differences in the framing and emphasis employed by the two camps. It does not represent anything like a fundamental distinction between how they each view the nature or role of heuristics in judgment and decision making.

[SEQ RERUN] Some Claims Are Just Too Extraordinary

6 jake987722 27 April 2011 03:06AM

Today's post, Some Claims Are Just Too Extraordinary, was originally published on 20 January 2007. A summary (taken from the LW wiki):

Publications in peer-reviewed scientific journals are more worthy of trust than what you detect with your own ears and eyes.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was A Fable of Science and Politics, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

[SEQ RERUN] A Fable of Science and Politics

9 jake987722 26 April 2011 03:42AM

Today's post, A Fable of Science and Politics, was originally published on 23 December 2006. A summary (taken from the LW wiki):

People respond in different ways to clear evidence they're wrong, not always by updating and moving on.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was "I don't know.", and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

Comment author: Kai-o-logos 21 April 2011 01:03:51AM *  1 point [-]

I'm not saying that frequentist statistical reasoning actually works like the above. What I'm saying is that this is how many people tend to wrongly interpret such statistics, defining their own null hypothesis in the way I outlined in the post.

As I've said before, the MOST common problem is not the actual statistics, but how the ignorant interpret those statistics. I am merely saying that I would prefer Bayesian statistics to be taught, because it is much harder to botch up or to read our own interpretation into. (For one, it is governed by a relatively simple formula.)

Also, isn't model complexity quite hard to determine for the statements "God exists" and "God does not exist"? Isn't the complexity in this sense subject to easy bias?

Comment author: jake987722 21 April 2011 02:03:24AM 2 points [-]

What I'm saying is that this is how many people tend to wrongly interpret such statistics, defining their own null hypothesis in the way I outlined in the post.

But that's not right. The problem your burden-of-proof example describes is a problem of priors. The theist and the atheist are starting with priors that favor different hypotheses. But priors (notoriously!) don't enter into the NHST calculus. Given two statistical models, one of which is a nested subset of the other (this is required in order to compare them directly), there is no choice about which one is the null: the null model is the one with fewer parameters (i.e., it is the nested subset). It isn't up for debate.
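
Here is a minimal sketch in Python (with made-up data) of what that nested comparison looks like in practice: the intercept-only model is the null because it has fewer free parameters, the alternative adds a slope, and a likelihood-ratio test compares them. Note that no prior appears anywhere in the calculation.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    x = rng.normal(size=50)
    y = 0.4 * x + rng.normal(size=50)         # made-up data with a real slope

    def gaussian_loglik(resid):
        # Profile log-likelihood of a Gaussian model, given its residuals,
        # with the error variance set to its maximum-likelihood estimate.
        n = len(resid)
        sigma2 = np.mean(resid ** 2)
        return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)

    # Null model: intercept only (the nested subset, fewer parameters).
    ll_null = gaussian_loglik(y - y.mean())

    # Alternative model: intercept + slope; the null is nested inside it
    # (it is the special case with the slope fixed at zero).
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ll_alt = gaussian_loglik(y - X @ beta)

    # Likelihood-ratio test: 2 * (ll_alt - ll_null) is compared to a chi-square
    # distribution with df = difference in the number of free parameters.
    lr = 2 * (ll_alt - ll_null)
    print("LR =", round(lr, 2), " p =", round(stats.chi2.sf(lr, df=1), 4))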

There are other problems with NHST--as you point out later in the post, some people have a hard time keeping straight just what the numbers are telling them--but the issue I highlighted above isn't one of them for me.

Also, isn't model complexity quite hard to determine for the statements "God exists" and "God does not exist"? Isn't the complexity in this sense subject to easy bias?

Yes. As you noted in your OP, forcing this pair of hypotheses into a strictly statistical framework is awkward no matter how you slice it. Statistical hypotheses ought to be simple empirical statements.

Comment author: jake987722 21 April 2011 12:56:43AM *  2 points [-]

As an aspiring scientist, I hold the Truth above all.

That will change!

More seriously though...

As one can see, the biggest problem is determining burden of proof. Statistically speaking, this is much like the problem of defining the null hypothesis.

Well, not really. The null and alternative hypotheses in frequentist statistics are defined in terms of their model complexity, not our prior beliefs (that would be Bayesian!). Specifically, the null hypothesis represents the model with fewer free parameters.

You might still face some sort of statistical disagreement with the theist, but it would have to be a disagreement over which hypothesis is more/less parsimonious--which is really a rather different argument than what you've outlined (and IMO, one that the theist would have a hard time defending).
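
For contrast, here is a hedged sketch (invented numbers) of where prior beliefs do enter. In a Bayesian treatment of a binary hypothesis, the same likelihoods combined with different priors yield different posteriors--which is where a theist/atheist "burden of proof" disagreement would actually live, rather than in the choice of null.

    # Bayes' rule for a binary hypothesis H, with invented numbers.
    def posterior(prior_h, lik_h, lik_not_h):
        evidence = prior_h * lik_h + (1 - prior_h) * lik_not_h
        return prior_h * lik_h / evidence

    # Suppose some piece of evidence E is twice as likely if H is true.
    lik_h, lik_not_h = 0.2, 0.1

    for prior in (0.9, 0.1):     # two observers with very different priors on H
        print("prior", prior, "-> posterior", round(posterior(prior, lik_h, lik_not_h), 2))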

Comment author: pdf23ds 12 April 2011 03:10:03AM 1 point [-]

I suppose I could shop around for a doctor willing to prescribe modafinil for my sort of sleep problems. I have thought of trying it in the past, but that's pretty far off-label.

"Everything" includes having read all current medical literature, which all says that severe circadian rhythm disorders are basically untreatable, and having one sleep doctor basically give up. I could also try more sleep doctors, I suppose.

Comment author: jake987722 12 April 2011 03:54:18AM 2 points [-]

It doesn't sound unreasonable to me given the severity of your symptoms. But I'm not a sleep doctor.

Consider also that there are other ways to procure drugs like this, e.g., shady online vendors from overseas. Just make sure you do your research on the vendors first. There are people who have ordered various drugs from these vendors, chemically verified that the drugs were in fact what they were advertised to be, and then posted their results in various places online for the benefit of others. Bottom line: some companies are more trustworthy than others--do your homework. And obviously you should exercise due caution when taking a new drug without a doctor's consent.

Comment author: pdf23ds 12 April 2011 02:23:41AM 2 points [-]

"I find it impossible to wake up at a consistent time every day (+/- 8 hours), despite years of trying"

In other words, I've tried everything else.

Comment author: jake987722 12 April 2011 02:55:02AM 2 points [-]

How about Modafinil or a similar drug? It is prescribed for narcolepsy. More generally, can I safely assume that "everything" includes having talked to your doctor about how serious these symptoms are?
