All of jake987722's Comments + Replies

These are good points. Do you think that if the researchers did find the sort of discretization they are hypothesizing, this would represent at least some weak evidence in favor of the simulation hypothesis, or do you think it would be completely uninformative with respect to it?

0Decius
I think that the simulation hypothesis does not permit any evidence for or against that does not violate the laws of physics. With limited measuring tools, there's no way to distinguish between being in a real universe, a discrete simulation of a continuous universe, a discrete simulation of a discrete universe, or a continuous simulation of either a discrete or continuous universe. If the laws of physics started changing in dramatically interesting ways, I might see that as evidence of simulation... and now I need to reevaluate the evidence about cold fusion in that light...
7CarlShulman
I'd say it would be very weak to negligible evidence in favor.

Damn. I quickly checked to see if this link had been posted, but I guess I didn't look far enough back--I assumed that if it had been, it would have been very recently, but apparently it was actually posted 10 days ago... my bad.

Have to disagree with you on, well, several points here.

Heuristics in Heuristics and Biases are only descriptive. [...] Heuristics in Heuristics and biases are defined as having negative side effects.

If your claim is that heuristics are defined by H&B theorists as being explicitly not prescriptive, in the sense of never being "good" or "useful," this is simply not the case. For instance, in the opening paragraph of their seminal 1974 Science article, Tversky & Kahneman clearly state that "...people rely on a limited nu... (read more)

0nerfhammer
No, no, that's not what I'm saying. The claim that heuristics have negative side effects does not entail a claim that negative side effects are the only characteristics they have. The 'side effect' terminology might be taken to imply that there is a main effect which is not necessarily negative. They have always claimed that heuristics are right most of the time. But they wouldn't recommend you purposefully try to "use" them. They only propose heuristics that could theoretically explain empirically observed biases.

F&F heuristics do not necessarily need to explain biases. A F&F heuristic might only explain when you get something right that you otherwise shouldn't. I'm not even sure that an F&F heuristic need explain anything empirically observed; rather, it could be a decision strategy that they modelled as being effective and that everyone should learn (what I clumsily meant by 'prescriptive'). And they have published ways to teach use of some of their heuristics.

I don't recall introspective interviews with subjects taking place in H&B research, though I may be wrong about that. What I had in mind when I wrote that was that I seem to recall K & T and Gigerenzer sparring over the validity of doing that. Except... now that I think of it, I seem to recall something like that in the really early K & T papers. Maybe, as I understood it (which may be obsolete), introspection could be useful to help generate empirical theories but could not be used to validate them, whereas I seem to recall Gigerenzer arguing that it could provide validity. Maybe the camps have converged on that, or my memory continues to be faulty.

[irrelevant digression: representativeness was the absolute earliest, and by a large margin if you include "the law of small numbers" as the germ of representativeness. But if you count the law of small numbers as a separate heuristic in its own right, then it was the first.]

It implies that anchoring-and-adjustment is consciously available as a strat

What I'm saying is that this is how many people tend to wrongly interpret such statistics to define their own null hypothesis in the way I outlined in the post.

But that's not right. The problem that your burden of proof example describes is a problem of priors. The theist and the atheist are starting with priors that favor different hypotheses. But priors (notoriously!) don't enter into the NHST calculus. Given two statistical models, one of which is a nested subset of the other (this is required in order to directly compare them), there is not a choice of w... (read more)

As an aspiring scientist, I hold the Truth above all.

That will change!

More seriously though...

As one can see, the biggest problem is determining burden of proof. Statistically speaking, this is much like the problem of defining the null hypothesis.

Well, not really. The null and alternative hypotheses in frequentist statistics are defined in terms of their model complexity, not our prior beliefs (that would be Bayesian!). Specifically, the null hypothesis represents the model with fewer free parameters.

You might still face some sort of statistical d... (read more)
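A minimal sketch of the model-complexity point above (Python with numpy/scipy; the data are invented for illustration). A one-sample t-test compares a restricted model in which the population mean is fixed at zero -- the null, with fewer free parameters -- against a fuller model in which the mean is estimated from the data. Note that no prior belief about either hypothesis appears anywhere in the calculation:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    data = rng.normal(loc=0.3, scale=1.0, size=50)  # toy sample

    # Null model: mean fixed at 0 (no free mean parameter).
    # Alternative model: mean is a free parameter, estimated from the data.
    t_stat, p_value = stats.ttest_1samp(data, popmean=0.0)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")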

2Kai-o-logos
I'm not saying that the frequentist statistical belief logic actually goes like that above. What I'm saying is that this is how many people tend to wrongly interpret such statistics to define their own null hypothesis in the way I outlined in the post. As I've said before, the MOST common problem is not the actual statistics, but how the ignorant interpret those statistics. I am merely saying that I would prefer Bayesian statistics to be taught because it is much harder to botch up and read our own interpretation into it. (For one, it is governed by a relatively easy formula.) Also, isn't model complexity quite hard to determine with the statements "God exists" and "God does not exist"? Isn't the complexity in this sense subject to easy bias?

It doesn't sound unreasonable to me given the severity of your symptoms. But I'm not a sleep doctor.

Consider also that there are other ways to procure drugs like this, e.g., shady online vendors from overseas. Just make sure you do your research on the vendors first. There are people who have ordered various drugs from these vendors, chemically verified that the drugs were in fact what they were advertised to be, and then posted their results in various places online for the benefit of others. Bottom line: some vendors are more trustworthy than others--do your homework. And obviously you should exercise due caution when taking a new drug without a doctor's supervision.

How about modafinil or a similar drug? It is prescribed for narcolepsy. More generally, can I safely assume that "everything" includes having talked to your doctor about how serious these symptoms are?

1pdf23ds
I suppose I could shop around for a doctor willing to prescribe modafinil for my sort of sleep problems. I have thought of trying it in the past, but that's pretty far off-label. "Everything" includes having read all the current medical literature, all of which says that severe circadian rhythm disorders are basically untreatable, and having one sleep doctor basically give up. I could also try more sleep doctors, I suppose.

I think you're taking the fundamentally wrong approach. Rather than trying to simply predict when you'll be sleepy in the near-term, you should try to actively get your sleeping patterns under control.

4pdf23ds
Besides, having a tool that could forecast my sleep patterns given different variables would allow me to understand the interactions of those variables and ultimately would allow me to take control of my sleep patterns.
3pdf23ds
"I find it impossible to wake up at a consistent time every day (+/- 8 hours), despite years of trying" In other words, I've tried everything else.

Robin Hanson's posts from the AI Foom debate are not included in the list of all articles. Covering only Yudkowsky's side of the debate would be a little strange for readers, I think. Should we feature Hanson's posts (and those of others who participated in the debate) from that time as well?

0Alexandros
Very good point. I will try to get some consensus on which posts will make it to the rerun listing. I have no objection in principle to including non-Eliezer posts.

Yes, that's exactly right.

And although I'm having a hard time finding a news article to verify this, someone informed me that the official breast cancer screening recommendations in the US (or was it a particular state, perhaps California?) were recently modified such that regular screening is no longer recommended for women younger than 40 (50?). The young woman who informed me of this change in policy was quite upset about it. It didn't make any sense to her. I tried to explain to her how it actually made good sense when you think about it in ... (read more)

I agree with previous comments about publishing in journals being an important status issue, but I think there is other value as well which is being ignored. For all of its annoyances and flaws, one good thing about peer review is that it really makes your paper better. When you submit a pretty good paper to a journal and get back the "revise and resubmit" along with the detailed list of criticisms and suggestions, then by the time the paper actually makes it into the journal, chances are that it will have become a really good paper.

But to return... (read more)

2CronoDAS
If it's not published, it might be correct, but it's not science.

-- Paraphrase of a speaker at the Northeast Conference on Science and Skepticism

As far as the take-home practical message goes, on my reading it was never about how well doctors could "diagnose cancer" per se based on mammogram results--rather, the reason we ask about P(cancer | positive) is that it ought to inform our decision about whether a biopsy is really warranted. If a healthy young woman from a population with an exceedingly low base rate for breast cancer has a positive mammogram, the prior probability of her having cancer may still be low enough that there might actually be negative expected value in following u... (read more)

9Alicorn
If a biopsy is the next step in diagnosing breast cancer after a positive mammogram, then we shouldn't perform mammograms on anyone for whom a positive result still wouldn't make a biopsy worthwhile.
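The base-rate reasoning in this exchange is easy to make explicit with Bayes' theorem. A minimal sketch in Python; the numbers below are purely illustrative, not real screening statistics:

    def posterior_cancer(base_rate, sensitivity, false_positive_rate):
        # P(cancer | positive mammogram) by Bayes' theorem
        p_positive = (sensitivity * base_rate
                      + false_positive_rate * (1 - base_rate))
        return sensitivity * base_rate / p_positive

    print(posterior_cancer(0.01, 0.8, 0.096))   # typical base rate: ~0.078
    print(posterior_cancer(0.001, 0.8, 0.096))  # low base rate: ~0.008

With a sufficiently low prior, even a positive result leaves the probability of cancer small enough that the costs and risks of a follow-up biopsy may not be worth it--which is exactly the point about whom to screen in the first place.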

Should we go somewhere else to discuss, rather than heavily nested comments? Would a new discussion topic page be the right place?

I agree that the nested comment format is a little cumbersome (in fact, this is a bit of a complaint of mine about the LW format in general), but it's not clear that this discussion warrants an entirely new topic.

Terminology isn't terribly important . . . If you want to take the Popperian conception of a good theory and label it "justified" it doesn't matter so much.

Okay. So what is really at issue here is wheth... (read more)

4curi
I think it's a big topic. Began answering your question here: http://lesswrong.com/r/discussion/lw/551/popperian_decision_making/
3curi
No regress has begun. I already answered why: try to regress me. It is possible, if you want, to create a regress of some kind which isn't the same one and isn't important. The crucial issue is: are the questions that continue the regress any good? Do they have some kind of valid point to them? If not, then I won't regard it as a real regress problem of the same type. You'll probably wonder how that's evaluated, but, well, it's not such a big deal. We'll quickly get to the point where your attempts to create regress look silly to you. That's different from the regresses inductivists face, where it's the person trying to defend induction who runs out of stuff to say.

If I convinced you of this one single issue (that there is a method for making the decision), would you follow up with a thousand other objections to Popperian epistemology, or would we have gotten somewhere?

Yes, we will have gotten somewhere. This issue is my primary criticism of Popperian epistemology. That is, given what I understand about the set of ideas, it is not clear to me how we would go about making practical scientific decisions. With that said, I can't reasonably guarantee that I will not have later objections as well before we've even had ... (read more)

3curi
OK then :-) Should we go somewhere else to discuss, rather than heavily nested comments? Would a new discussion topic page be the right place?

That is the general idea (but incomplete). The reason we behave as if it's true is that it's the best option available. All the other theories are criticized (= we have an explanation of what we think is a mistake/flaw in them). We wouldn't want to act on an idea that we (thought we) saw a mistake in, over one we don't think we see any mistake with -- we should use what (fallible) knowledge we have.

A justification is a reason a conjecture is good. Popperian epistemology basically has no such thing. There are no positive arguments, only negative. What we have instead of positive arguments is explanations. These are to help people understand an idea (what it says, what problem it is intended to solve, how it solves it, why they might like it, etc...), but they do not justify the theory, they play an advisory role (also note: they pretty much are the theory, they are the content that we care about in general).

One reason that not being criticized isn't a justification is that saying it is gets you a regress problem. So let's not say that! The other reason is: what would that be adding as compared with not saying it? It's not helpful (and if you give specific details/claims of how it is helpful, which are in line with the justificationist tradition, then I can give you specific criticisms of those).

Terminology isn't terribly important. David Deutsch used the word justification in his explanation of this in the dialog chapter of The Fabric of Reality (highly recommended). I don't like to use it. But the important thing is not to mean anything that causes a regress problem, or to expect justification to come from authority, or various other mistakes. If you want to take the Popperian conception of a good theory and label it "justified" it doesn't matter so much.

So, how do Popperians decide? They conjecture an answer, e.g. "yes". Actually, they make many conjectures, e.g. also "no". Then they criticize the conjectures, and make more conjectures. So for example I would criticize "yes" for not providing enough explanatory detail about why it's a good idea. Thus "yes" would be rejected, but a variant of it like "yes, because nuclear power plants are safe, clean, and efficient, and all the criticisms of them are from silly luddites" would be better. If I didn't unders

... (read more)
1curi
When you have exactly one non-refuted theory, you go with that. The other cases are more complicated and difficult to understand.

Suppose I gave you the answer to the other cases, and we talked about it enough for you to understand it. What would you change your mind about? What would you concede? If I convinced you of this one single issue (that there is a method for making the decision), would you follow up with a thousand other objections to Popperian epistemology, or would we have gotten somewhere? If you have lots of other objections you are interested in, I would suggest you just accept for now that we have a method and focus on the other issues first.

But some are criticized and some aren't. But how is that to be judged? No, we always go with uncriticized ideas (which may be close variants of ideas that were criticized). Even the terminology is very tricky here -- the English language is not well adapted to expressing these ideas. (In particular, the concept "uncriticized" is a very substantive one with a lot of meaning, and the word for it may be misleading, but other words are even worse. And the straightforward meaning is OK for present purposes, but may be problematic in future discussion.)

Yes, different. Both of these are justificationist ways of thinking. They consider how much justification each theory has. The first one rejects a standard source of justification, does not replace it, and ends up stuck. The second one replaces it, and ends up, as you say, reasonably similar to Bayesianism. It still uses the same basic method of tallying up how much of some good thing (which we call justification) each theory has, and then judging by what has the most.

Popperian epistemology does not justify. It uses criticism for a different purpose: a criticism is an explanation of a mistake. By finding mistakes, and explaining what the mistakes are, and conjecturing better ideas which we think won't have those mistakes, we learn and improve our knowledge.

Might it be a good idea to feature the IRC channel more centrally on the website? Eliezer's concern notwithstanding, if I'm going to kill time anyway (and believe me, I'm going to anyway), it might be nice to do so in a busy LW IRC room. I could think of less productive things to do for an hour.

1David_Gerard
I have already gained what feels like a fair bit of positive effect in my life from including LW as part of my Internet-as-television time. So LW-related stuff as Internet time sink is not necessarily a bad thing if you can keep a lid on it. If IRC can do the reminding to be less dumb as well, then good.
0jsalvatier
I agree.

Sure there are associated values. By implying that a particular out-group is "ugly, smelly, no friends, socially unacceptable, negative, aggressive," etc. etc., you simultaneously imply that your in-group is none of those things. You elevate the in-group by derogating the out-group. Presumably you and your in-group value not having all of those negative traits.

If situationism is true, why do the folk have such a robust theory of character traits? Can we provide an error theory for why people have such a theory?

Jones and Nisbett attempted to answer this question in their classic paper on actor-observer bias. It's an interesting read.

However, beware of falling into an overly strict interpretation of situationism (as I think Jones and Nisbett did) which amounts to little more than behaviorism in new clothes. People do tend to underestimate the extent to which their behavior and the behavior of others are driven b... (read more)

Wikipedia has a page on the Just-world phenomenon, which lists the following references:

Lerner, M. J., & Simmons, C. H. (1966). Observer reaction to the 'innocent victim': Compassion or rejection? Journal of Personality and Social Psychology, 4(2), 203–210.

Carli, L. L. (1999). Cognitive reconstruction, hindsight, and reactions to victims and perpetrators. Personality and Social Psychology Bulletin, 25, 966–979.

Lerner, M. J., & Miller, D. T. (1978). Just world research and the attribution process: Looking back and ahead. Psychological Bulletin, 85, 1030–1051.

I bet... (read more)

I can report with some degree of confidence that the Blanton paper represents a skeptical view that is very much in the minority in the field. This doesn't necessarily mean that it's biased or "wrong," but I think a LessWronger such as yourself will understand what this suggests regarding the intellectual status of its claims.

A couple papers to balance out the view from above:

Rebuttal to above by authors of "reanalyzed" study http://www.bsos.umd.edu/psyc/hanges/Ziegert%20and%20Hanges%202009.pdf

Reply to a different but similar Tetlock-and... (read more)

0teageegeepea
Thanks for the links.

I'm not sure. You may not be able to in any feasible or satisfactory way, which was sort of my point.

This sort of conditioning works best when the reward is administered within about 500 ms of the response (sorry, don't have a citation). Something to keep in mind.

0PhilGoetz
How do you apply that to cases where the response is a task that can stretch out over an hour?
0John_Maxwell
Thanks!

It's also possible that many people are simply not terribly good at using the Internet, or that many disciplines don't yet have information available on the Internet - in the long term, the normal case will be far more information than you ever need being available online, but this might not always be the case yet.

It's not the first possibility; it's the second. I'm quite comfortable in saying that I am very capable of finding specific online content if it's out there to be found. The problem is that most of the disciplines I'm interested in reading about don't have the g... (read more)

1xamdam
Incidentally, Google is clearly aware of this, and willing to step into some hot (and unprofitable in the short term) water to get to the book-stored knowledge. They also revived decent open-source OCR, probably for this purpose.
2NancyLebovitz
Is this because of the lack of lab work as well as the lack of textbook-level information?

That's a pretty good list they have going, but in my opinion the Gigerenzer et al. volume should be replaced by one published three years earlier by the same research group: Simple Heuristics That Make Us Smart. It's the same basic thing, but a bit more comprehensive and more directly relevant to cognitive psych (no chapters on animal rationality, etc.).

Also, while the 1982 H&B volume is obviously very good and certainly belongs on the list, the picture is pretty incomplete without the updated 2002 H&B volume as well as Choices, Values, and Frames (2000).

Hi.

I'm a grad student studying social psychology, more or less in the heuristics & biases tradition. I've been loosely following the blog for six months or so. The discussions are always thought-provoking and frequently amusing. I hope to participate more in the near future.

Okay, but is it a part of the typical Bayesian routine to wield formal decision theory, or do we just calculate P(H|E) and call it a day?

3Cyan
I don't think formal decision theory is common in applied Bayesian stats in science; the only paper I can quickly recall that did a decision analysis is Andrew Gelman's radon remediation study. Maybe econometrics is different, since it's a lot easier to define losses in that context.

We could just as easily imagine the selection bias having worked the other way (LessWrongers are hardly a representative sample and some have motivated reasons for choosing one way or another, especially having read through the thread), but you're of course right that, in any case, this sample isn't telling us much.

I thought the baby was cuter... but why bother voting in a meaningless poll like this? (No offense :P)

0gregconen
That people find human infants cuter than rabbit, dog, or cat infants isn't a direct contradiction of the hypothesis, as humans would be particularly likely to find human infants cute (just as dogs are particularly likely to be protective and nurturing toward puppies). The point is that animals with large litters are particularly likely to have cute infants, other things (like degree of genetic closeness) being equal, and that large-litter animals would be sufficiently cute to overcome the fact that we're not related. Of course, domestic puppies and kittens have an advantage over wild animals, as much selection was based on human popularity. Thus, the question is whether you find, say, infant elephants as cute as infant (wild) rabbits or wolf puppies.

I mean, not only is the "p-value" threshold arbitrary, not only are we depriving ourselves of valuable information by "accepting" or "not accepting" a hypothesis rather than quantifying our certainty level, but...what about P(E|H)?? (Not to mention P(H).)

Well, P(E|H) is actually pretty easy to calculate under a frequentist framework. That's the basis of power analysis, a topic covered in any good intro stat course. The real missing ingredient, as you point out, is P(H).

I'm not fully fluent in Bayesian statistics, so while I... (read more)
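For what it's worth, the power calculation mentioned above is straightforward to sketch (Python with numpy/scipy; the effect size and sample size are invented for illustration). Power here is P(statistically significant evidence | the hypothesized effect is real) for a two-sided one-sample t-test -- the quantity the frequentist framework does supply, in contrast to P(H), which it doesn't:

    import numpy as np
    from scipy import stats

    def t_test_power(effect_size, n, alpha=0.05):
        df = n - 1
        t_crit = stats.t.ppf(1 - alpha / 2, df)
        nc = effect_size * np.sqrt(n)  # noncentrality parameter
        # P(|t| > t_crit) when the true effect is nonzero (noncentral t)
        return (1 - stats.nct.cdf(t_crit, df, nc)
                + stats.nct.cdf(-t_crit, df, nc))

    print(t_test_power(effect_size=0.5, n=30))  # ~0.75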

3Cyan
The formal decision-making machinery involves picking a loss function and minimizing posterior expected loss.
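A toy sketch of that machinery, tying back to the biopsy discussion above (the posterior, the actions, and the losses are all invented for illustration): compute the posterior expected loss of each available action and take the minimizer.

    posterior = {"disease": 0.08, "healthy": 0.92}  # P(state | evidence)

    # loss[action][state]: cost of taking `action` when `state` is true
    loss = {
        "biopsy":    {"disease": 1.0,  "healthy": 5.0},
        "no_biopsy": {"disease": 50.0, "healthy": 0.0},
    }

    def expected_loss(action):
        return sum(loss[action][s] * p for s, p in posterior.items())

    for a in loss:
        print(a, expected_loss(a))  # biopsy: 4.68, no_biopsy: 4.00
    print("decision:", min(loss, key=expected_loss))  # -> no_biopsy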

Something is falsifiable if, if it is false, it can be proven false.

Isn't this true of anything and everything in mathematics, at least in principle? If there is "certainly an objective truth or falsehood to P = NP," doesn't that make it falsifiable by your definition?

0Technologos
I know they get overused, but Gödel's incompleteness theorems provide important limits to what can and cannot be proven true and false. I don't think they apply to P vs NP, but I just note that not everything is falsifiable, even in principle.
1orthonormal
It's not always that simple (consider the negation of G). (If this is your first introduction to Gödel's Theorem and it seems bizarre to you, rest assured that the best mathematicians of the time had a Whiskey Tango Foxtrot reaction on the order of this video. But it turns out that's just the way it is!)

Yeah, that was a pretty clever turn of phrase.