You know the drill - If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
And, while this is an accidental exception, future open threads should start on Mondays until further notice.
60-year-old men die all the time; anytime someone who writes on diet dies, someone is going to say 'I wonder if this proves/disproves his diet claims', no matter what the claims were or their truth. A single death doesn't prove anything, of course: even if you had 1000 Seth Robertses, you wouldn't have a particularly strong piece of evidence on the correlation between 'being Roberts' and all-cause mortality, and his diet choices were not randomized, so you don't even get causal inference. More importantly, if Roberts had died at any time before his actuarial life expectancy (in the low 80s, I'd eyeball it, given his education, ethnicity, and having survived this long already), people would make this claim.
OK, so let's be a little more precise and play with some numbers.
Roberts published The Shangri-la Diet in 2006. If he's 60 now in 2014 (8 years later), then he was 52 then. Let's say people would only consider his death negatively if he died before his actuarial life expectancy, and I'm going to handwave that as 80; then he has 28 years to survive before his death stops looking bad.
What's his risk of dying if his diet makes zero difference to his health one way or another? Looking at http://www.ssa.gov/OACT/STATS/table4...
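Not the commenter's actual calculation, but here is a minimal sketch of the arithmetic implied above: chain the one-year death probabilities q(x) from a period life table from age 52 through 79 to get the chance of dying before 80. The per-year rates below are made-up placeholders in roughly the right range, not the real SSA values; substitute the actual column from the linked table to redo the numbers.

```python
# Sketch of the survival arithmetic above. The q(x) values here are illustrative
# placeholders, NOT actual SSA life-table entries -- swap in the real per-year
# death probabilities from the linked table before trusting the output.
illustrative_qx = {age: 0.006 + 0.0012 * (age - 52) for age in range(52, 80)}

p_survive = 1.0
for age in range(52, 80):
    p_survive *= 1.0 - illustrative_qx[age]   # survive each year in turn

print(f"P(survives from 52 to 80): {p_survive:.2f}")
print(f"P(dies before 80):         {1 - p_survive:.2f}")
```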
I just graduated from FIU with a bachelor's in philosophy and a minor in mathematics. I'd like to thank my parents, God and Eliezer Yudkowsky (whose The Sequences I cited in each of the five papers I had to turn in during my final semester).
I have to say, I seriously don't get the Bayesian vs. frequentist holy wars. It seems to me the ratio of the debate's importance to the education of its participants is ridiculously low.
Bayesian and frequentist methods are sets of statistical tools, not sacred orders to which you pledge a blood oath. Just understand how to use each tool, and the fact that virtually any model of something that happens in the real world is going to be misspecified.
Full disclosure: I have papers using B (on structure learning using BIC, which is an approximation to a posterior of a graphical model), and using F (on estimation of causal effects). I have no horse in this race.
Bayes rule is the answer to that problem that provides the promise of a solution.
See, this is precisely the kind of stuff that makes me shudder, that regularly appears on LW, in an endless stream. While Scott Alexander is busy bible thumping data analysts on his blog, people here say stuff like this.
Bayes rule doesn't provide shit. Bayes rule just says that p(A | B) p(B) = p(B | A) p(A).
Here's what you actually need to make use of info in this study:
(a) Read the study.
(b) See if they are actually making a causal claim.
(c) See if they are using experimental or observational data.
(d) Experimental? Do we believe the setup? Are we in a similar cohort? What about experimental design issues? Observational? Do they know what they are doing re: causality from observational data? Is the model that permits this airtight? (Usually it is not; see Scott's post on "adjusting for confounders". Generally to really believe that adjusting for confounders is reason...
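To illustrate point (d), here is a toy simulation (my own, not taken from any study or from Scott's post): with observational data and an unmeasured confounder, the naive treatment/outcome comparison is biased, and "adjusting for confounders" only helps if you actually measured the right one. All variable names and effect sizes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
health_conscious = rng.normal(size=n)                   # the (usually unobserved) confounder
treated = (health_conscious + rng.normal(size=n)) > 0   # health-conscious people self-select
outcome = 2.0 * health_conscious + rng.normal(size=n)   # treatment has zero true effect

naive = outcome[treated].mean() - outcome[~treated].mean()
print(f"naive difference in means: {naive:.2f}   (true causal effect: 0.00)")

# Adjusting recovers the truth here only because the simulation lets us observe
# the confounder; in a real observational study you have to argue you measured it.
X = np.column_stack([np.ones(n), treated, health_conscious])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(f"treatment coefficient after adjustment: {beta[1]:.2f}")
```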
A "holy war" between Bayesians and frequentists exists in the modern academic literature for statistics, machine learning, econometrics, and philosophy (this is a non-exhaustive list).
Bradley Efron, who is arguably the most accomplished statistician alive, wrote the following in a commentary for Science in 2013 [1]:
...The term "controversial theorem" sounds like an oxymoron, but Bayes' theorem has played this part for two-and-a-half centuries. Twice it has soared to scientific celebrity, twice it has crashed, and it is currently enjoying another boom. The theorem itself is a landmark of logical reasoning and the first serious triumph of statistical inference, yet is still treated with suspicion by most statisticians. There are reasons to believe in the staying power of its current popularity, but also some signs of trouble ahead.
[...]
Bayes' 1763 paper was an impeccable exercise in probability theory. The trouble and the subsequent busts came from overenthusiastic application of the theorem in the absence of genuine prior information, with Pierre-Simon Laplace as a prime violator. Suppose that in the twins example we lacked the prior knowledge that one-third of tw
The Amanda Knox prosecution saga continues: if the original motive does not hold, deny the need for a motive.
During our Hamburg Meetup we discussed selection pressure on humans. We agreed that there is almost none on mutations affecting health in general due to medicine. But we agreed that there is tremendous pressure on contraception. We identified four ways evolution works around contraception. We discussed what effects this could have on the future of society. The movie Idiocracy was mentioned. This could be a long term (a few generations) existential risk.
The four ways evolution works around contraception:
Biological factors. Examples are hormones compensating for the contraceptive effect of the pill, or allergies to condoms. These are easily recognized, measured, and countered by the much faster-moving pharma industry, and they raise few ethical issues.
Subconscious mental factors, mostly leading to non-use or misuse of contraception. Examples are carelessness, impulsiveness, fear, and insufficient understanding of how contraceptives are used. These are the factors some fear will lead to collective stultification. There are ethical injunctions against 'curing' them even where it would be medically/therapeutically possible.
Conscious mental factors. Factors leading to explicit family pl
Group selection factors. These are factors favoring groups which collectively have more children. The genetic effects are likely weak here, but the memetic effects are strong. A culture with social norms against contraception, or for large families, is likely to out-birth other groups.
These will by far be the strongest. See for example the birth rates of religious people versus anyone else.
When I'm procrastinating on a project by working on another, sexier project, it feels exactly like a love triangle where all three participants are inside my head, with all the same pleading, promises and infidelities. I wish that told us something new about procrastination or love!
A statistical look at whether bike helmets make sense-- concludes that there are some strong arguments against requiring bike helmets, and that drivers give less room to cyclists wearing helmets.
I am repeating myself so much, but...
Why is this posted a day early (when the prior thread was posted earlier than it should have been, solely so they could start on Monday)? And, far more importantly, why is the open_thread tag not there? Can you please at least include it (it is important for the sidebar functionality, among other things)?
I often rant about people posting the open threads incorrectly when there is so little to it (posting it after the previous one is over, making it last 7 days, and adding a simple tag), but this is the 3rd OT posted this wee...
Why engineering hours should not be viewed as fungible-- increasing speed/preventing bottlenecks is important enough to be worth investing in. An example of how to be utilitarian without being stupid about it.
Any recommendations for discussions of how to figure out what's important to measure?
HELP WANTED: I recall that it is highly questionable that consciousness is even continuous. We feel like it is, but (as you know) we have considerable experimental evidence that your "consciousness" becomes aware of decisions well after you've made them. I can't find it, but I recall a result that says that "consciousness" is a story your brain tells itself after the fact, in bursts between gaps of obliviousness. (This also dissolves "quantum immortality".) Does anyone know about this one?
The 135-degree sitting position seems popular these days, but sometimes the chair you are sitting in cannot recline.
So if you must sit in a non-reclining chair, is it better to sit upright on the edge of your seat with your knees bent at 45 degrees, maintaining a hip angle of 135 degrees, or to sit in a relaxed upright position using the back support at roughly a 90-degree hip angle?
This is a really great take on why use of privilege-based critique in (often leftist) public discourse is flawed:
(Tl;dr: it's both malicious, because it resorts to using essential features of interlocutors against them--i.e., quasi-ad-hominems--and fallacious, because it fails to explain why the un(der)-privileged can offer arguments that work against their own interests.)
To the extent that privilege claims are about ignorance, I think they're likely to have a point. To the extent that they're a claim that some people are guaranteed to be wrong, they're ad hominem.
One really common case is when person A says something to the effect of, "I don't see why B people don't do X instead of complaining about fooism" — but X is an action that is (relatively easily) available to person A, but is systematically unavailable to B people. (And sometimes because of fooism.)
Or, X has been tried repeatedly in the history of B people, and has failed; but A doesn't know that history.
Or, X is just ridiculously expensive (in money/time/energy) and B people are poor/busy/tired, or otherwise ill-placed to implement it.
Or, X is an attempt to solve the wrong problem, but A doesn't have the practical experience to distinguish the actual problem from the situation at hand — A may be pattern-matching a situation into the wrong category.
Some of this post could totally be rephrased as being about "non-depressed-person privilege", but the author doesn't write like that.
Ok, my utility is probably low considering this open thread closes in 3 days :(
Anyhow, I had a thought when reading the Beautiful Probabilities in the Sequences. http://lesswrong.com/lw/mt/beautiful_probability/
It is a bit beyond my access and resources, but I'd love to see a graph/chart showing the percentage of scientific studies which become invalid, or the percentage which remain valid, as we lower the significance threshold below p < 0.05.
So it would start with 100% of journal articles (take a sampling from the top 3 journals across various disciplines then break them down betwee...
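A rough sketch of what producing that chart could look like, assuming you had already scraped the reported p-values from the sampled articles. The p-values below are random stand-ins, not real survey data.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for "the p-values reported in a sample of journal articles".
reported_p = rng.uniform(0.0, 0.05, size=500)

for alpha in (0.05, 0.01, 0.005, 0.001):
    still_significant = (reported_p < alpha).mean()
    print(f"threshold p < {alpha:<5} -> {still_significant:.0%} of sampled results survive")
```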
Poll: Consequentialism and the motive for holding true beliefs
1. Is an action's moral status (rightness or wrongness) dictated solely by its consequences? [pollid:685]
(For calibration — I would expect people who identify strongly as consequentialists to answer "strong yes" on question 1, people who identify strongly as deontologists to answer "strong no", and people who are somewhere in between to choose one of the middle buttons based on how they lean.)
2. Is the truth value (truth or falsity) of a belief about the world ...
As someone who uses more water than most people, would it be irresponsible for me to move to a dry climate?
I realized that I've been entirely leaving the southwest United States off of my list of options for where to live after I graduate college, because I'd decided when I was much younger that I shouldn't live in the desert. Now, I'm realizing that I have very little idea how important that is compared to other concerns. I'm not sure how to go about weighing the utility of an additional person using too much water in the desert.
I probably use 2-3 times m...
This piece is about whether earning to give is the best way to be altruistic.
But I think a big issue is what altruism is. Do most people mostly agree on what's altruistic or good? Have effective altruists tried to determine what real people or organizations want?
You don't want to push "altruism given hidden assumptions X, Y and Z that most people don't agree with." For example, in Ben Kuhn's critique he talks about a principle of egalitarianism. But I don't think most people think of "altruism" as something that applies equally to the ...
Hi, CFAR alumni here. Reposting I guess, the OTs are getting confusing.
Is there something like a prediction market running somewhere in discussion?
Going mostly off of Gwern's recommendation, it seems like PredictionBook is the go-to place to make and calibrate predictions, but it lacks the "flavour" that the one at CFAR did. CFAR (in 2012, at least) had a market where your scoring was based on how much you updated the previous bet towards the truth. I really enjoyed the interactional nature of it.
What would it take to get such a thread going onli...
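For what it's worth, the scoring described above sounds like a market scoring rule: each bettor earns points for how far they move the standing probability toward the eventual truth. This is a guess at the mechanism, not a description of CFAR's actual market; the logarithmic form below is just the standard choice.

```python
import math

def update_score(p_prev: float, p_new: float, outcome: bool) -> float:
    """Points for moving the standing probability from p_prev to p_new, given the outcome."""
    if not outcome:
        p_prev, p_new = 1.0 - p_prev, 1.0 - p_new
    return math.log(p_new) - math.log(p_prev)

# Example: moving a claim from 40% to 70%.
print(update_score(0.40, 0.70, outcome=True))   # ~ +0.56: rewarded for a good update
print(update_score(0.40, 0.70, outcome=False))  # ~ -0.69: penalised for overconfidence
```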
I've just made an enrollment deposit at the University of Illinois at Urbana-Champaign, and I'm wondering if any other rationalists are going, and if so, would they be interested in sharing a dorm?
LINK: Someone on math.stackexchange asked whether politically incorrect conclusions are more likely to be true by Bayesian logic. The answer given is pretty solid (and says no).
Career advice? I've been offered a fellowship with the Education Pioneers.
For ten months (starting in Sept), I'd be embedded with a school district, charter, or gov't agency to do data analysis and other statistical planning. I need to reply to them by next Friday, and I'd appreciate people pointing out questions they think I should ask/weigh in my own decisionmaking. Please take a second to think unprimed, before I share some of my own thoughts below.
I'm currently working for less than minimum wage in a journalism internship that ends June 1. I strong...
How is the picture in the header of Overcoming Bias, showing the Sirens and Odysseus tied to the mast, related to the concepts discussed on the site?
Odysseus realized that he couldn't trust his own mind (or those of his sailors) but found a workaround.
To "overcome bias" is to find workarounds for the mind's failure modes.
I suggest that siren worlds should be relabeled "Devil's Courtships", after the creepy song of the same name:
..."I'll buy you a pennyworth o' priens If that be the way true love begins If ye'll gang alang wi' me m'dear, if ye'll gang alang wi' me?"
"Ye can hae your pennyworth of priens Though that be the way true love begins For I'll never gang wi' you m'dear, I'll never gang wi' you."
"I'll buy you a braw snuff box Nine times opened, nine times locked If ye'll gang alang wi' me m'dear, if ye'll gang alang wi' me?"
"
After a contribution to a previous thread I thought some more about what I actually wanted to say, so here is a much more succinct version:
The average of a distribution (or, even worse, of a dataset) is not a sufficient description without a statement about the distribution itself.
Research results are so often reported as a simple average with a standard deviation. The educated statistician will recognise these two numbers as summaries of the first two moments of a distribution. But these two moments completely describe a distribution only if it is a normal distribution. Though the centr...
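A quick numerical illustration of the point, with made-up numbers: two samples with nearly identical mean and standard deviation but very different shapes, so "mean +/- SD" alone hides the difference.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
symmetric = rng.normal(loc=10.0, scale=2.0, size=n)
# A lopsided mixture: mostly around 9.6, with a small cluster near 19.6.
lumpy = np.where(rng.random(n) < 0.96, 9.6, 19.6) + rng.normal(0.0, 0.5, size=n)

for name, x in (("normal", symmetric), ("mixture", lumpy)):
    print(f"{name:8s} mean={x.mean():5.2f} sd={x.std():4.2f} "
          f"median={np.median(x):5.2f} 99th pct={np.percentile(x, 99):5.2f}")
```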
It looks like I made a mistake-- I checked, but somehow failed to see the thread ending on April 27.
I could kill this one, but there's already one legitimate open thread post. I'll go with the consensus on whether to delete it.
It might be worth editing more instructions for submitters into this post, along the lines of: 'If you notice that the previous thread has expired, feel free to post the next one. It should run Monday-Sunday, and it should include the open_thread tag so that it gets picked up on the sidebar.'
There's been debate on whether earning to give is the best way to be altruistic.
But it seems to me that the real issue is not what is most altruistic but what altruism is. It's not clear to me that most people mostly agree on what's altruistic or good--or even if one person is self-consistent in different contexts. Is there some case for this besides just saying "I have this intuition that most people agree on what's good"? Has there been much attempt by effective altruists to investigate what real people or organizations want?
"The burden of proof is on you."
No, most of the time the burden of proof is on both parties. In the complete absence of any evidence, both the statement and its logical negation have equal weight. So if one party states "you can't predict the shape of the bottle the liquid was poured out of from the glass it is in" and the other party states the opposite, the burden of proof lies on both parties to state their respective evidence. Of course, in the special case above the disagreement was about the exact meaning of "can" or "can...
One sense of "burden of proof" seems to be a game-rule for a (non-Bayesian) adversarial debate game. It is intended to exclude arguments from ignorance, which if permitted would stall the game. The players are adversaries, not co-investigators. The player making a novel claim bears the burden of proof — rather than a person criticizing that claim — so that the players actually have to bring points to bear. Consider:
A: God loves frogs. They are, above all other animals, sacred to him.
B: I don't believe it.
A: But you can't prove that frogs aren't sacred!
B: Well of course not, it never occurred to me to consider as a possibility.
At this point the game would be stalled at zero points.
The burden-of-proof rule forbids A's last move. Since A started the game by making a positive claim — the special status of frogs — A has to provide some evidence for this claim. B can then rebut this evidence, and A can present new evidence, and then we have a game going:
A: God loves frogs. They are, above all other animals, sacred to him.
B: I don't believe it.
A: Well, the God Book says that God loves frogs.
B: But the God Book also says that chickens are a kind of flea, and modern taxonomy shows...
If one party is espousing a hypothesis which has a very low prior probability, then they suffer the burden of providing evidence to support this hypothesis. Finding evidence takes time and resources; if you want to support the low probability hypothesis, then you spend the resources.
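To put rough numbers on that (my own illustration, written as Bayes' rule in odds form): a hypothesis starting at a 1% prior needs evidence with a likelihood ratio of about 99:1 just to reach even odds, which is why the practical burden of digging up that evidence falls on its proponent.

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = (prior / (1.0 - prior)) * likelihood_ratio
    return odds / (1.0 + odds)

print(posterior(0.01, 1))     # no evidence: stays at 0.01
print(posterior(0.01, 99))    # a 99:1 likelihood ratio only gets you to 0.50
print(posterior(0.01, 1000))  # very strong evidence: ~0.91
```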
Its probability is different in the estimates of the people who disagree, and its best alternative will itself have "low probability" in other people's estimates. Merely labelling something "low probability" doesn't make the situation asymmetric.
You should spend the resources when there is high value of information, otherwise do something else. Improving someone else's beliefs may have high value for them.