Comment author: someonewrongonthenet 14 July 2014 02:54:06AM 3 points

I sort of side with Mitchell on this.

A mentor of mine once told me that replication is useful, but not the most useful thing you could be doing because it's often better to do a followup experiment that rests on the premises established by the initial experiment. If the first experiment was wrong, the second experiment will end up wrong too. Science should not go even slower than it already does - just update and move on, don't obsess.

It's kind of like how some of the landmark studies on priming failed to replicate, but there are so many followup studies which are explained by priming really well that it seems a bit silly to throw out the notion of priming just because of that.

Keep in mind, while you are unlikely to hit statistical significance where there is no real effect, it's not at all unlikely for a real effect to miss significance the next time you run the experiment. Significance tests are tuned to keep false positives rare at the cost of false negatives, so underpowered replications of real effects fail fairly often.
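A quick simulation makes this concrete. It's a minimal sketch with illustrative numbers (a "medium" true effect of half a standard deviation, n = 20 observations per study, a plain two-sided z-test) - not figures from any particular study:

```python
import random

def replication_rate(effect=0.5, n=20, trials=10000, seed=0):
    """Fraction of simulated studies of a *real* effect that reach p < 0.05.

    Each simulated study takes n normal observations with true mean
    `effect` and unit variance, and declares significance when the
    sample mean clears the two-sided 5% cutoff. Illustrative only.
    """
    rng = random.Random(seed)
    crit = 1.96 / n ** 0.5  # two-sided 5% cutoff for the sample mean
    hits = 0
    for _ in range(trials):
        mean = sum(rng.gauss(effect, 1) for _ in range(n)) / n
        hits += abs(mean) > crit
    return hits / trials

print(replication_rate())            # ~0.61: a genuine effect still misses
                                     # significance roughly 40% of the time
print(replication_rate(effect=0.0))  # ~0.05: false positives stay rare
```

So with these (assumed) numbers, a perfectly real effect fails to replicate about two runs in five, while a null effect produces a false positive only about one run in twenty.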

Emotionally, though... when you get a positive result in breast cancer screening even when you're not at risk, you don't just shrug and say "probably a false positive," even though it is. Instead, you irrationally do more screenings and possibly get a needless operation. Similarly, when an experiment fails to replicate, people don't shrug and say "probably a false negative," even though that is, in fact, very likely. Instead, they start questioning the reputation of the experimenter. Understandably, this whole process is nerve-wracking for the original experimenter. That, I think, is what Mitchell was - admittedly clumsily - groping towards with the talk of "impugning scientific integrity".
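The screening analogy can be made quantitative with Bayes' rule. The numbers below are textbook-style illustrations (roughly 1% prevalence, 80% sensitivity, 10% false-positive rate), not actual clinical values:

```python
def posterior(prior, sensitivity, false_pos):
    """P(condition | positive test) via Bayes' rule."""
    p_positive = sensitivity * prior + false_pos * (1 - prior)
    return sensitivity * prior / p_positive

# Illustrative figures, not real clinical values:
# 1% prevalence, 80% sensitivity, 10% false-positive rate.
print(round(posterior(0.01, 0.80, 0.10), 3))  # 0.075
```

With these assumed figures, a positive result leaves the probability of actually having the condition around 7.5% - which is exactly why shrugging it off as a probable false positive would be the calibrated response, and why people mostly don't.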

Comment author: Dan_Moore 16 July 2014 02:17:25PM 2 points

A mentor of mine once told me that replication is useful, but not the most useful thing you could be doing because it's often better to do a followup experiment that rests on the premises established by the initial experiment. If the first experiment was wrong, the second experiment will end up wrong too. Science should not go even slower than it already does - just update and move on, don't obsess.

If you're concerned about the velocity of scientific progress, you should also be concerned about wrong turns. A Type I error (establishing a wrong result by incorrectly rejecting a null hypothesis) is, IMHO, far more damaging to science than a failure to establish a correct result, which may simply reflect an inadequate experimental setup.

Comment author: Vaniver 14 July 2014 12:47:38AM 18 points

Either way, I think you are being quite uncharitable to Mitchell.

I disagree. Let's look at this section again:

Whether they mean to or not, authors and editors of failed replications are publicly impugning the scientific integrity of their colleagues. Targets of failed replications are justifiably upset, particularly given the inadequate basis for replicators’ extraordinary claims.

Contrast this to:

“This has been difficult for me personally because it’s an area that’s important for my research,” he says. “But I choose the red pill. That’s what doing science is.”

From here, linked before on LW here.

The first view seems to have the implied assumption that false positives don't happen to good researchers, whereas the second view has the implied assumption that theories and people are separate, and people should follow the facts, rather than the other way around.


But perhaps it is the case that, in social psychology, the majority of false positives are not innocent, and thus when a researcher's results do not replicate it is a sign that they're dishonest rather than unlucky. In that case, Mitchell is declaring that researchers should not try to expose dishonesty, which should bring down opprobrium from all decent people.

Comment author: Dan_Moore 14 July 2014 01:43:24PM 1 point

The goal is to set up the experiments to make it solely about the results and not about colleagues. If 'scientific integrity' means sloppy, porous experimental setup, then impugning this is not a bad thing. Ideally the experimental design and execution should transcend the question of the researchers' motives.

Comment author: Dan_Moore 09 June 2014 04:01:45PM 1 point

I just read an AI thriller by Greg Iles called 'The Footprints of God'. I don't want to spoil it, so I'll just say that it strikes me as singularity-lite.

Also, here's an objectivist Harry Potter treatment.

Comment author: TheOtherDave 15 May 2014 06:13:09PM 1 point

I endorse "downvote what you want less of" as a matter of board policy.

If individuals want less of things they ought to want more of, I endorse opposing the incorrect values of those individuals.

Those are two separate claims, and I oppose entangling them into a single claim, and also oppose further entangling them with "yay rationality! boo bias!" cheerleading.

Comment author: Dan_Moore 15 May 2014 08:25:49PM 0 points

If individuals want less of things they ought to want more of, I endorse opposing the incorrect values of those individuals.

Downvoted per your request.

Comment author: Kawoomba 09 April 2013 02:11:57PM -1 points

Unfortunately, there's an error in your logic: You call that type of medical journal article error "universal", i.e. applicable in all cases. Clearly a universal quantifier if I ever saw one.

That means that for all medical journal articles, it is true that they contain that error.

However, there exists a medical journal article that does not contain that error.

Hence the medical journal error is not universal, in contradiction to the title.

First logical error ... and we're not even out of the title? Oh dear.

Comment author: Dan_Moore 29 April 2014 08:09:17PM 2 points

Perhaps a clearer title would have been 'A Universal Quantifier Medical Journal Article Error'. Bit of a noun pile, but the subject of the post is an alleged unjustified use of a universal quantifier in a certain article's conclusion.

By the way, I think PhilGoetz is 100% correct on this point - i.e., upon failing to reject a null hypothesis using standard frequentist techniques, it is not appropriate to claim that the null hypothesis has thereby been established.

Comment author: Nornagest 28 March 2014 04:29:22PM 2 points

Based on the idea that you get what you incentivize, and irrespective of other factors, I'd expect a marginal to mild increase. Self-driving cars can make commutes a bit more pleasant and substantially less dangerous, but they can't reduce commute times (until they reach saturation or close to it), and time's the main limitation.

Comment author: Dan_Moore 28 March 2014 05:03:27PM 0 points

Some of the effects will depend on details of the implementation. For example, if self-driving cars are constrained to obey highway speed limits, the commute time may increase in some cases, at least initially. Upon achieving saturation of self-driving cars, I would expect shorter commute times on non-highways. Also, upon saturation, it may be seen as desirable to raise the highway speed limit.

Comment author: Dan_Moore 28 March 2014 04:10:13PM 1 point

I am wondering about the effect of the advent of self-driving cars on urban sprawl. Will it increase or decrease sprawl?

Urban sprawl is said to be an unintended consequence of the development of the US interstate highway system.

Comment author: Dan_Moore 28 March 2014 02:55:16PM 0 points

The Pollination Project is run by a guy who gives $1,000 a day, to a different recipient every day. Rational justifications for this approach include minimizing the model risk - i.e., perhaps the model you used to decide which single charitable cause is the best is wrong. Also, small donations seem likely to produce a high velocity of the money donated.

Comment author: Dan_Moore 27 March 2014 02:52:08PM 1 point

I've taken an interest in steepled arrangements of quadrilaterals; i.e., an arrangement of n quadrilaterals with 2n vertices such that the intersection of any two quadrilaterals is either a single vertex or the empty set, and each quadrilateral meets four others at its vertices. This implies n >= 5, and I'm focused on the 3-dimensional case. A link from an earlier open thread shows that such an arrangement is possible.

The term steepled refers to a hand position where the corresponding fingers of each hand meet at the fingertips, forming a 'steeple'.

Consider an ant crawling on the surface of one of these arrangements. Starting at a vertex, she makes a bee-line (ant-line?) for the diagonally opposite corner of the quadrilateral she's on, then enters the other quadrilateral sharing that vertex (each vertex lies on exactly two quadrilaterals), and repeats the process, always crossing along a diagonal. Because each step is reversible (every vertex on the path has a unique predecessor) and there are only finitely many vertices, the first vertex she revisits must be her starting point. She may not have hit all the vertices, but she never retraced her steps.

Given a numerical labeling of the quadrilaterals, her path can be represented by a cycle of numerals corresponding to each quadrilateral she traversed. If there are unvisited vertices, the process can be repeated starting at an unvisited vertex, generating another cycle, until all vertices have been visited.

Different cycles or products of cycles (distinct even after any permutation of the quadrilateral labels) represent different steepled quadrilateral arrangement types. The cycles have the following properties:

  • Each of the n numerals appears exactly twice (corresponding to the two diagonals of each quadrilateral).

  • Each numeral's four neighbors in the cycle(s), taken across its two appearances, are four different numerals.

  • Each cycle must have length >= 3.

Looking at the case n = 5, there are at least 7 distinct potential quadrilateral arrangements - i.e., 7 different eligible cycle products of the numerals 0 through 4. I'm looking into whether each of these represents a physically possible 3-d steepled quadrilateral arrangement, and if so, whether it can be accomplished with all convex quadrilaterals. (That is, are you forced to use non-convex quadrilaterals to construct the arrangement?)
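For what it's worth, the three cycle properties above are easy to check mechanically. Here's a sketch in Python (the function name and the example cycle products are my own; the check captures only the listed necessary conditions and says nothing about whether an arrangement is physically realizable):

```python
from collections import Counter

def is_eligible(cycles, n=5):
    """Check the three necessary properties of a cycle product:
    each of the n numerals appears exactly twice, every cycle has
    length >= 3, and each numeral's four cyclic neighbors (across
    its two appearances) are four different numerals."""
    labels = [x for cyc in cycles for x in cyc]
    if Counter(labels) != Counter({i: 2 for i in range(n)}):
        return False
    if any(len(cyc) < 3 for cyc in cycles):
        return False
    neighbors = {i: [] for i in range(n)}
    for cyc in cycles:
        length = len(cyc)
        for i, x in enumerate(cyc):
            neighbors[x].append(cyc[(i - 1) % length])
            neighbors[x].append(cyc[(i + 1) % length])
    return all(len(set(nbrs)) == 4 for nbrs in neighbors.values())

print(is_eligible([[0, 1, 2, 3, 4, 0, 2, 4, 1, 3]]))    # True
print(is_eligible([[0, 1, 2], [0, 1, 2, 3, 4, 3, 4]]))  # False
```

A brute-force enumeration of all cycle products passing this check (there aren't many for n = 5, up to relabeling) would be one way to confirm the count of 7 eligible types.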

I'm planning on posting this as a question to Math StackExchange, but I prefer to first be confident I can answer the question within a month - due to the logistics of that site, it's possible for a question to disappear after a month if no one has answered it.

Comment author: Dan_Moore 11 March 2014 01:30:22PM 0 points

Here is an example of a long post that requires a good deal of reader perseverance to arrive at its main point. To wit, CDC obesity studies since the mid-20th century underwent a change in demographic sampling partway through (with more blacks and Hispanics sampled), resulting in a likely overstatement of obesity trend statistics.

The post title gives a hint, but the article would have been improved by indicating where it was headed much earlier on.

In contrast, this post delivers its message regarding the interpretation of "accurate more than 90% of the time" (including definitions of sensitivity and specificity) in a straightforward manner. I wouldn't describe the post as terse, but I give it high marks for content/length.
