
The call of the void

-6 Elo 28 August 2016 01:17PM

Original post:  http://bearlamp.com.au/the-call-of-the-void

L'appel du vide - The call of the void.

When you are standing on the balcony of a tall building, looking down at the ground and on some track your brain says "what would it feel like to jump".  When you are holding a kitchen knife thinking, "I wonder if this is sharp enough to cut myself with".  When you are waiting for a train and your brain asks, "what would it be like to step in front of that train?".  Maybe it's happened with rope around your neck, or power tools, or what if I take all the pills in the bottle.  Or touch these wires together, or crash the plane, crash the car, just veer off.  Lean over the cliff...  Try to anger the snake, stick my fingers in the moving fan...  Or the acid.  Or the fire.

There's a strange phenomenon where our brains seem to do this: "I wonder what the consequences of this dangerous thing are".  And we don't know why it happens.  There has only been one paper on the concept (sorry, it's behind a paywall), and all it really did was identify the phenomenon.  I quite like the paper for quoting both "You know that feeling you get when you're standing in a high place… sudden urge to jump?… I don't have it" (Captain Jack Sparrow, Pirates of the Caribbean: On Stranger Tides, 2011) and "a drive to return to an inanimate state of existence" (Freud, 1922).

Taking a look at their method: they surveyed 431 undergraduates about their experiences of what they coined HPP (the High Place Phenomenon).  They found that 30% of their participants had experienced HPP, and tried to measure whether it was related to anxiety or suicide.  They also proposed a theory.

...we propose that at its core, the experience of the high place phenomenon stems from the misinterpretation of a safety or survival signal. (e.g., “back up, you might fall”)

I want to believe it, but today there are literally no other papers on the topic, and no evidence either way.  So all I can say is: we don't really know.  S'weird.  Dunno.


This week I met someone who uncomfortably described their experience of toying with l'appel du vide.  I explained to them how this is a common and confusing phenomenon, and they said, to their relief, "it's not like I want to jump!".   Around 5 years ago (before I knew its name) an old friend recounted, with discomfort, the experience of wondering what it would be like to step in front of a moving bus, any time she was near one.  I have coaxed a friend out of the middle of a road (they weren't drunk and weren't on drugs at the time).  And dragged friends out of the ocean.  I have it with knives, in a way that borders on OCD behaviour: the desire to look at and examine the sharp edges.

What I do know is this.  It's normal.  Very normal.  Even if it's not 30% of the population, it could easily be 10 or 20%.  Everyone has a right to know that it happens, that it's normal, and that you're not broken if you experience it.  It's as common a shared human experience as the common dreams of your teeth falling out, of flying, of running away from groups of people, or of being underwater.  Or the experience of rehearsing what you want to say before making a phone call.  Or walking into a room for a reason and forgetting what it was.

Next time you are struck with l'appel du vide, don't get uncomfortable.  Accept that it's a neat thing that brains do, and it's harmless.  Experience it.  And together with me, wonder why.  Wonder what evolutionary benefit has given so many of us l'appel du vide.

And be careful.


Meta: this took one hour to write.

Rationality when Insulated from Evidence

3 JustinMElms 29 June 2016 04:03PM

Basically: How does one pursue the truth when direct engagement with evidence is infeasible?

I came to this question while discussing GMO labeling. In this case I am obviously not in a position to experiment for myself, but furthermore: I do not have the time to build up the bank of background understanding to engage vigorously with the study results themselves. I can look at them with a decent secondary education's understanding of experimental method, genetics, and biology, but that is the extent of it.

In this situation I usually find myself reduced to weighing the proclamations of authorities:

 

  • I review aggregations of authority from one side and then the other, because finding a truly unbiased source for contentious issues is always a challenge, and the choice usually says more about the biases of whoever is anointing the source "unbiased." 
  • Once I have reviewed the authorities, I do at least some due diligence on each authority so that I can modulate my confidence if a particular authority is often considered partisan on an issue. This too can present a bias spiral: checking for bias in the source pillorying the authority as partisan, ad infinitum.
  • Once I have some known degree of confidence in the authorities of both sides, I can form some level of confidence in a statement like: "I am ~x% confident that the scientific consensus is on Y's side" or "I am ~Z% confident that there is not scientific consensus on Y"
Once that establishes a baseline on an issue, I am able to do some argumentation analysis to see what arguments each side has that simply should not be included in the discussion. This is usually irrelevant appeals (e.g.: In the GMO labeling debate, "It must be better because it's more natural") or corollary citations that are screened off by evidence closer to the source (e.g.: In the GMO labeling debate, "X many countries require GMO labeling" should be screened off by looking at the evidence that led to that decision).
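The weighing procedure above can be sketched as a chain of Bayesian updates. Everything below is hypothetical: the reliability numbers, the sources, the symmetric-reliability simplification, and the assumption that the authorities' verdicts are independent (which, as the screening-off point suggests, is often false when authorities cite each other):

```python
# A toy sketch of the weighing procedure above, with made-up numbers.
# Each authority issues a verdict for or against Y; the reliability you
# assign after due diligence sets how much each verdict moves you.

def update(prior, verdict_for_y, reliability):
    """One Bayesian update. reliability = P(authority says 'for Y' | Y)
    = P(authority says 'against Y' | not Y); 0.5 is uninformative."""
    if verdict_for_y:
        like_y, like_not = reliability, 1 - reliability
    else:
        like_y, like_not = 1 - reliability, reliability
    num = like_y * prior
    return num / (num + like_not * (1 - prior))

p = 0.5  # start agnostic about "the scientific consensus favors Y"
for says_for_y, reliability in [(True, 0.8),    # well-regarded review body
                                (True, 0.7),    # second, independent body
                                (False, 0.55)]: # source judged highly partisan
    p = update(p, says_for_y, reliability)

print(f"~{p:.0%} confident that the consensus is on Y's side")
```

Note that treating the verdicts as independent is itself the kind of error the screening-off point warns about: if two aggregators draw on the same underlying studies, the second should move you much less than this sketch implies.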

After that, I find myself with a rather unfulfilling meta-assessment of an issue. I fear that I am asking for a non-existent shortcut around the hard solution of: "If an answer is important to you, do the necessary learning to at least be able to engage directly with the evidence," but I will ask anyway: does anyone else have strategies for seeking the truth while insulated from direct evidence?

 

How to be skeptical

-3 Clarity 26 December 2015 06:33AM

Community

The Center For Applied Rationality (CFAR) checklist is a heuristic for assessing the admissibility of one's own testimony. 

What of the challenge of evaluating the testimony of others?

Should we slap the label of a bias on the situation?

Or argue at the object level by providing evidence to the contrary?

The latter risks a Gish gallop. For those who prefer to pick their battles, I have put my time into this post: a structural intervention in the information ecosystem.

We need not reinvent the wheel, for legal theorists have researched this issue for years, while practitioners and courts have identified heuristics useful to lay people interested in this field.

Precedent 

The Daubert standard provides a rule of evidence regarding the admissibility of expert witnesses' testimony during United States federal legal proceedings. Pursuant to this standard, a party may raise a Daubert motion, which is a special case of motion in limine raised before or during trial to exclude the presentation of unqualified evidence to the jury. The Daubert trilogy refers to the three United States Supreme Court cases that articulated the Daubert standard:

-https://en.wikipedia.org/wiki/Daubert_standard

Further reading on the case is available here on Google Scholar

Practice

How can this be applied in practice? 

What is the first principle of skepticism? It's effectively synonymous: 'question'.

What question? This isn't the 5 W's of primary school, after all.

I have summarized critical questions from a reading here, to get the ball rolling:

Issues to consider when contesting and evaluating expert opinion evidence

 

A. Relevance (on the voir dire)

I accept that you are highly qualified and have extensive experience, but how do we know that your level of performance regarding . . . [the task at hand — eg, voice comparison] is actually better than that of a lay person (or the jury)?

What independent evidence... [such as published studies of your technique and its accuracy] can you direct us to that would allow us to answer this question?

What independent evidence confirms that your technique works?

Do you participate in a blind proficiency testing program?

Given that you undertake blind proficiency exercises, are these exercises also given to lay persons to determine if there are significant differences in results, such that your asserted expertise can be supported?

B. Validation 

Do you accept that techniques should be validated?

Can you direct us to specific studies that have validated the technique that you used?

What precisely did these studies assess (and is the technique being used in the same way in this case)?

Have you ever had your ability formally tested in conditions where the correct answer was known? (ie, not a previous investigation or trial)

Might different analysts using your technique produce different answers?

Has there been any variation in the result on any of the validation or proficiency tests you know of or participated in?

Can you direct us to the written standard or protocol used in your analysis?

Was it followed?

C. Limitations and errors

Could you explain the limitations of this technique?

Can you tell us about the error rate or potential sources of error associated with this technique?

Can you point to specific studies that provide an error rate or an estimation of an error rate for your technique?

How did you select what to examine?

Were there any differences observed when making your comparison . . . [eg, between two fingerprints], but which you ultimately discounted? On what basis were these discounted?

Could there be differences between the samples that you are unable to observe?

Might someone using the same technique come to a different conclusion?

Might someone using a different technique come to a different conclusion?

Did any of your colleagues disagree with you?

Did any express concerns about the quality of the sample, the results, or your interpretation?

Would some analysts be unwilling to analyse this sample (or produce such a confident opinion)?

...

D. Personal proficiency 

...

Have you ever had your own ability... [doing the specific task/using the technique] tested in conditions where the correct answer was known?

If not, how can we be confident that you are proficient?

If so, can you provide independent empirical evidence of your performance?


E. Expressions of opinion 

...

Can you explain how you selected the terminology used to express your opinion? Is it based on a scale or some calculation?

If so, how was the expression selected?

Would others analyzing the same material produce similar conclusions, and a similar strength of opinion? How do you know?

Is the use of this terminology derived from validation studies?

Did you report all of your results?

You would accept that forensic science results should generally be expressed in non-absolute terms?



More

For further reading, I recommend the seminal text on cross-examination, the 1903 The Art of Cross-Examination.

The full text is available free here on Project Gutenberg.

Other countries use different standards, such as the Opinion Rule in Australia.


Deworming a movement

-6 Clarity 30 August 2015 09:25AM

Over the last few days I've been reviewing the evidence for EA charity recommendations. Based on my personal experience alone, the community seems to be comprehensively inept, poor at marketing, extremely insular, and methodologically unsophisticated, but meticulous, transparent and well-intentioned. I currently hold the belief that EA movement building does more harm than good, and that it requires significant rebranding and shifts in its informal leadership, or to die out, before it damages the reputation of the rationalist community and our capacity to cooperate with communities that share mutual interests.

It's one thing to be ineffective and know it. It's another thing to be ineffective and not know it. It's yet another thing to be ineffective, not know it, yet champion effectiveness and make a claim to moral superiority.

In case you missed the memo: deworming is controversial, GiveWell doesn't engage with the meat of the debate, and my investigations of the EA community's spaces suggest that this is not at all known. I've even briefly posted about it elsewhere on LessWrong to see if there was unspoken knowledge about it, but it seems not. Given that it's the hot topic in mainstream development studies and related academic communities, I'm aghast at how unresponsive 'we' are.

What's actionable for us here? If you're looking for a high-reliability effective altruism prospect, do not donate to SCI or Evidence Action. And by extension, do not donate to EA organisations that donate to these groups, including GiveWell. I am assuming you will use those funds more wisely instead, say by buying healthier food for yourself.

For those who don't want to review the links for the more comprehensive analyses from Cochrane and GiveWell, here is one summary of the debate, recommended in the Cochrane article:

Last month there was another battle in an ongoing dispute between economists and epidemiologists over the merits of mass deworming. In brief, economists claim there is clear evidence that cheap deworming interventions have large effects on welfare via increased education and ultimately job opportunities. It’s a best buy development intervention. Epidemiologists claim that although worms are widespread and can cause illnesses sometimes, the evidence of important links to health is weak and knock-on effects of deworming to education seem implausible. As stated by Garner “the belief that deworming will impact substantially on economic development seems delusional when you look at the results of reliable controlled trials.”

Aside: Framing this debate as one between economists and epidemiologists captures some of the dynamic of what has unfortunately been called the “worm wars” but it is a caricature. The dispute is not just between economists and epidemiologists. For an earlier round of this see this discussion here, involving health scientists on both sides. Note also that the WHO advocates deworming campaigns.

So. Deworming: good for educational outcomes or not?

On their side, epidemiologists point to 45 studies that are jointly analyzed in Cochrane reports. Among these they see few high quality studies on school attendance in particular, with a recent report concluding that they “do not know if there is an effect on school attendance (very low quality evidence).” Indeed they also see surprisingly few health benefits. One randomized control trial included one million Indian students and found little evidence of impact on health outcomes. Much bigger than all other trials combined; such results raise questions for them about the possibility of strong downstream effects. Economists question the relevance of this result and other studies in the Cochrane review.

On their side, the chief weapon in the economists’ arsenal has for some time been a paper from 2004 on a study of deworming in West Kenya by Ted Miguel and Michael Kremer, two leading development economists that have had an enormous impact on the quality of research in their field. In this paper, Miguel and Kremer (henceforth MK) claimed to show strong effects of deworming on school attendance not just for kids in treated schools but also for the kids in untreated schools nearby. More recently a set of new papers focusing on longer term impacts, some building on this study, have been added to this arsenal. In addition, on their side, economists have a few things that do not depend on the evidence at all: determination, sway, and the moral high ground. After all, who could be against deworming kids?

 


 

Additional criticisms of GiveWell charities: http://lesswrong.com/lw/mo0/open_thread_aug_24_aug_30/cp8h

The kind of work I think EAs should be focussing on: http://lesswrong.com/lw/mld/genosets/cnys and

http://lesswrong.com/r/discussion/lw/mk2/lets_pour_some_chlorine_into_the_mosquito_gene/

The problem with MIRI: http://lesswrong.com/lw/cr7/proposal_for_open_problems_in_friendly_ai/cm2j

 

 

[LINK] Amanda Knox exonerated

9 fortyeridania 28 March 2015 06:15AM

Here are the New York Times, CNN, and NBC. Here is Wikipedia for background.

The case has made several appearances on LessWrong; examples include:

Does this seem to you like evidence for the existence of psychic abilities in humans?

-5 gothgirl420666 30 May 2014 02:44AM

I was recently reminded of something I have encountered that seems to me to be good evidence for paranormal phenomena. Can anyone help me figure out what might be going on? 

When I was a little younger, I used to play the online riddle game Notpron. In this game, the player (essentially) has to analyze a webpage for clues towards the URL to the next webpage, and then repeat for 140 stages. The creator of this game, DavidM, at some point became a huge new age conspiracy theory loony type. Three years after the original ending of the riddle went online, he revised it to include an additional final level: Level Nu. This level is very different than the ones preceding it. I can't link to the page for obvious reasons, but I will transcribe it here:

835 492 147 264

Remote view the photography this number represents!

Email me all your results to david@david-m.org. I'll get you some feedback. Get me all elements or impressions that seem really strong for you. Or send me your sketches if you like.

Don't bruteforce, or you'll be banned from this one. You have as many attempts as you like, take your time.

Yes, I mean it. No tricks here, just pure remote viewing. The number represents a picture, I want to know what's on there.

So learn some remote viewing technique you like best and go ahead. The internet has lots of information. Have fun!

Please do this ALL by yourself, not even with your very very close friends. Because its boring and stupid, and because you can put bullshit into each others head, which is hard to get rid of again, because the mind needs to be shut down for this to work properly. So do it alone, just talk to me about it, please.

(Yes, this really works, one friend got the content of the picture on first try...and yes, he only got the number from me.)

I personally tried to solve it myself. I was less of a rationalist back then, and so I was fairly open-minded about the existence of most paranormal phenomena. The picture I was looking for was the shark

Here is a shortened, paraphrased transcript of our email conversation:

Me: I'm imagining palm trees by a lake at sunset.
David: It's not bad, but I don't want to give you any more information because it will interfere with your efforts.
Me: I'm picturing an elephant walking into a barn.
David: Nope. 
Me: How many people have attempted this? And how many people have solved it with the current picture?
David: About half of the people who attempted solved it. Most solved it on their first try. I don't know exactly how many people solved this picture, but it has been a few. 
Me: Is it a space shuttle?
David: No. 
Me: (Expressions of frustration, with a few guesses thrown in.)
David: (Encouragement and advice, no comment on the guesses. Says "I can very well see that you receive the right input, but your mind is screwing it up into something else.")
Me: It's a bee?
David: No. Are you getting more subtle input, instead of a specific idea?
Me: Yeah, for that one, I saw something sharp, bright yellow colors, symmetry, a noisy drone, and two colors in pattern.
David: So THIS is interesting. Everything else you said wasn't!
Me: Are you saying that I was close? 
David: These elements sound like they are on target. They are too vague yet to tell if they are for real. 
Me: Thanks. The only other thing I could think about that relates to those elements is a pencil. I'll try again tonight. 
David: Stop fiddling around with your mind about this. It's bound to fail. There's no way to guess the target just from what you said. 
Me: I just tried it again. Is it a helicopter?
David: Are you sure you aren't viewing the old solution? There was a helicopter involved. 
Me: The boat? I'm not trying too. I guess I'll just keep trying... I even have the numbers memorized at this point.
David: The boat was shot from a helicopter. You shouldn't memorize the numbers. They don't matter. Memorizing them might just create unwanted associations.
Me: Okay. I say helicopter because I had an experience where I saw a bunch of spinning fan blades. I was going to guess a fan, but I could sense that there was more. Then I went "through" the fan blades and for a second I saw the whole helicopter. 
David: It sounds like it could be on target. But ignore it, it's not the object of interest.

At that point, I lost interest and gave up. Looking back, I can honestly say that I saw nothing remotely (haha) similar to the picture of the shark. I was not even a tiny bit close. I'm not sure why David said that I was on track, I can't see any association between the shark and what I was guessing. 

So that's everything I know. 

Points in favor of it being real:
  • "Most people" apparently guessed it on their first try.
  • According to David, about half the people who tried it have solved it. 
  • The dream thing - absolutely insane, hard to imagine that it's a coincidence. 
  • David did not count the guy who described the shark as "something approaching me, it is a situation that I need to react to" as having solved the level. This shows that he requires fairly high standards of accuracy.
  • David implies that in order to have guessed the boat, you need to say the word "boat", also implying high standards. 
  • David did not really give me very much help or "lead" me anywhere when I tried to solve it. 
Points in favor of it being fake:
  • One person who solved it says that he did not solve it using remote viewing. 
  • It didn't work for me at all. 
  • David might very well be exaggerating both the percentage of people who successfully solved it and the percentage of people who guessed it on their first try. 
  • David might be (and in fact probably is) only reporting the "best" answers in his forum posts. For the fruit and the shark, he seems to be posting about half of the people who solved it in that time period. For the boat, he doesn't really give specifics, and instead says "Most people just said it was a boat on their first guess."
Here are my two theories regarding this.
  • Maybe DavidM is in fact "leading" people to the answer through a series of multiple guesses. For this to be true, however, a few things would have to be the case. First of all, his assertion that most people guessed it on their first try would have to be greatly exaggerated. Let's imagine that David is outright lying about most people guessing it on their first try and that half the people who attempted the riddle solved it. However, at least six people (I don't feel like going back through all 29 pages and counting) posted on the forum that they solved it on their first try. Let's imagine that all 300 people who reached the level attempted it. This is still a 1/50 "first guess" rate, and that's out of all the photographs in the world. However, maybe by some conjunction of 1) exaggerating those two numbers, 2) his dialogue with me being atypical, 3) the answers he posted on the forum being atypical, 4) his refusal to accept "something approaching me" being atypical and 5) the dream being a total coincidence, it may be true that he actually is doing a form of "leading" and is covering it up well. This feels like a really unsatisfactory answer. It relies on a lot of conjunctions and it seems clear that the only way to arrive at it is by a thorough search for some sort of answer that fits nicely in with our pre-existing worldview. That being said, I suspect it might be the most likely answer. 
  • Perhaps the level is an elaborate joke. In reality there is some other more conventional means of arriving at a solution, and people who solve it are told to play along. I can sort of see this being the case, given that 1) there are some other levels of Notpron that have "prankster-ish" elements and 2) I have actually myself been a part of a very similar joke on an even bigger scale, so I know that it can happen. However, on the other hand, DavidM really strongly believes in the conspiracy theory new age stuff and vigorously promotes it, so it seems unlikely that he would sabotage his own ideology like that. Also, while there are other prankster-ish levels of Notpron, nothing comes close to being as clever or elaborate as this scenario would be. 
So, given the above and this recent article from Slate Star Codex, I feel like I am forced to raise my credence level for remote viewing being real to somewhere between 50 and 60 percent. 

Does this seem in error to you? 
 

Evidence and counterexample to positive relevance

-2 fsopho 25 May 2013 06:40PM

I would like to share a doubt with you. Peter Achinstein, in his The Book of Evidence, considers two probabilistic views about the conditions that must be satisfied in order for e to be evidence that h. The first one says that e is evidence that h when e increases the probability of h when added to some background information b:

(Increase in Probability) e is evidence that h iff P(h|e&b) > P(h|b).

 

The second one says that e is evidence that h when the probability of h conditional on e is higher than some threshold k:

(High Probability) e is evidence that h iff P(h|e) > k.

 

A plausible way of interpreting the second definition is by setting k = 1/2. When one takes k to have this fixed value, it turns out that P(h|e) > k has the same truth-conditions as P(h|e) > P(~h|e) (since P(~h|e) = 1 - P(h|e)), at least if we are assuming that P is a function obeying Kolmogorov's axioms of the probability calculus. Now, Achinstein takes P(h|e) > k to be a necessary but insufficient condition for e to be evidence that h, while he claims that P(h|e&b) > P(h|b) is neither necessary nor sufficient for e to be evidence that h. That may seem shocking to those who take the condition fleshed out in (Increase in Probability) to be at least a necessary condition for evidential support (I take it that the claim that it is necessary and sufficient is far from accepted; presumably one also wants to qualify e as true, or as known, or as justifiably believed, etc.). So I would like to check one of Achinstein's counterexamples to the claim that increase in probability is a necessary condition for evidential support.

The relevant example is as follows:

 

The lottery counterexample

Suppose one has the following background b and piece of evidence e1:

b:  This is a fair lottery in which one ticket drawn at random will win.

e1: The New York Times reports that Bill Clinton owns all but one of the 1000 lottery tickets sold in a lottery.

Further, one also learns e2:

e2: The Washington Post reports that Bill Clinton owns all but one of the 1000 lottery tickets sold in a lottery. 

So, one has evidence in favor of

h:  Bill Clinton will win the lottery.

 

The point now is that, although it seems right to regard e2 as being evidence in favor of h, it fails to increase h's probability conditional on (b&e1) - at least so says Achinstein. According to his example, the following is true:

 

P(h|b&e1&e2) = P(h|b&e1) = 999/1000.

 

Well, I have my doubts about this counterexample. The problem with it seems to me to be this: that e1 and e2 are taken to be the same piece of evidence. Let me explain. If e1 and e2 increase the probability of h, that is because they increase the probability of a further proposition:

 

g: Bill Clinton owns all but one of the 1000 lottery tickets sold in a lottery,

 

and, as it happens, g increases the probability of h. That The New York Times reports g increases the probability of g, assuming that The New York Times is reliable, and the same can be said of The Washington Post reporting g. But the counterexample seems to assume that both e1 and e2 are equivalent to g, and they're not. Now, it is clear that P(h|b&g) = P(h|b&g&g), but this does not show that e2 fails to increase h's probability conditional on (b&e1). So, if it is true that e2 increases the probability of g conditional on e1, that is, if P(g|e1&e2) > P(g|e1), and if it is true that g increases the probability of h, then it is also true that e2 increases the probability of h. I may be missing something, but this reasoning sounds right to me; if so, the example wouldn't be a counterexample. What do you think?
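This reasoning can be checked with a toy model. Every number below is an assumption made purely for illustration: a prior for g, a reliability for each newspaper, conditional independence of the two reports given g, and the stipulation that if g is false Clinton owns exactly one ticket:

```python
# Toy model of the lottery example. Treating e1 and e2 as fallible,
# conditionally independent reports of g (rather than as g itself), the
# second report does raise the probability of g, and hence of h.

def posterior_g(prior, n_reports, p_true=0.99, p_false=0.05):
    """P(g | n independent reports), where each paper reports g with
    probability p_true if g holds and p_false if it doesn't."""
    like_g = p_true ** n_reports
    like_not_g = p_false ** n_reports
    num = like_g * prior
    return num / (num + like_not_g * (1 - prior))

def p_win(p_g):
    """P(h): 999/1000 if g holds; 1/1000 if Clinton owns one ticket."""
    return p_g * 999/1000 + (1 - p_g) * 1/1000

prior = 0.01                 # hypothetical prior for g
g1 = posterior_g(prior, 1)   # after the NYT report (e1)
g2 = posterior_g(prior, 2)   # after NYT and WaPo (e1 & e2)

print(p_win(g1), p_win(g2))
assert p_win(g2) > p_win(g1)  # e2 raises P(h) given b & e1
```

Achinstein's equality P(h|b&e1&e2) = P(h|b&e1) = 999/1000 falls out of this model only in the limit where each report is treated as certain (p_true = 1, p_false = 0), i.e. where e1 and e2 are collapsed into g, which is exactly the assumption the post objects to.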

Seeking reliable evidence - claim that closing sweatshops leads to child prostitution

11 michaelcurzi 04 May 2013 02:51AM

I've been looking for reliable evidence of a claim I've heard a few times. The claim is that the closing of sweatshops (by anti-globalization activists) has resulted in many of the child workers becoming prostitutes. The idea is frequently proffered as an example of do-gooder foolishness ignoring basic economics and screwing people over.

However, despite searching for a while, I can't find anything to indicate that this actually happened.

Some guy at the Library of Economics and Liberty mentions it here:

In one famous 1993 case U.S. senator Tom Harkin proposed banning imports from countries that employed children in sweatshops. In response a factory in Bangladesh laid off 50,000 children. What was their next best alternative? According to the British charity Oxfam a large number of them became prostitutes.

But in the article, Paul Krugman mentions the Oxfam study without citation:

In 1993, child workers in Bangladesh were found to be producing clothing for Wal-Mart, and Senator Tom Harkin proposed legislation banning imports from countries employing underage workers. The direct result was that Bangladeshi textile factories stopped employing children. But did the children go back to school? Did they return to happy homes? Not according to Oxfam, which found that the displaced child workers ended up in even worse jobs, or on the streets -- and that a significant number were forced into prostitution.

I looked at some Oxfam stuff, but couldn't find the study.

A similar claim is made in The Race to the Top: The Real Story of Globalization by Tomas Larsson (go here and use the search tool for the word 'prostitution'), though it doesn't mention the Oxfam study:

Keith E. Maskus, an economist at the University of Colorado, has studied the issue... He concludes that... "The celebrated French ban of soccer balls sewn in Pakistan for the World Cup in 1998 resulted in significant dislocation of children from employment. Those who tracked them found that a large proportion ended up begging and/or in prostitution,"

I looked for a paper or something by Maskus but came up empty.

I was taught this fact at a Poli Sci class in college, but I'm starting to think it's more likely to be an information cascade. Can anyone do a better job than me?

Thanks in advance.

Memes?

7 Crystalist 23 September 2012 12:05AM

"All models are wrong, but some are useful" — George E. P. Box

As a student of linguistics, I’ve run into the idea of a meme quite a lot. I’ve even looked into some of the proposed mathematical models for how they transmit across generations.

And it certainly is a compelling idea, not least because the potential for modeling cultural evolution alone is incredible. But while I was researching the idea (and admittedly, this was some time ago; I could well be out of date) I never once saw a test of the model. Oh, there were several proposed applications, and a few people were playing around with models borrowed from population genetics, but I saw no proof of concept.
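For readers who haven't seen them, the borrowed models look roughly like this: a Wright-Fisher style resampling of a trait in a fixed population, with a transmission bias standing in for selection. All parameters here are invented for illustration, and, as argued below, nothing of this kind has been validated against real cultural data:

```python
import random

# A minimal sketch of a population-genetics model repurposed for memes:
# each generation, n individuals independently adopt or drop a "meme"
# by resampling from the current population, with a transmission bias s
# (s > 0 means the meme is disproportionately likely to be copied).

def generation(freq, n, s=0.0):
    """Return the meme's frequency after one round of resampling."""
    p = freq * (1 + s)
    p = p / (p + (1 - freq))  # biased adoption probability
    carriers = sum(random.random() < p for _ in range(n))
    return carriers / n

random.seed(0)
freq = 0.1  # hypothetical starting frequency
for _ in range(50):
    freq = generation(freq, n=1000, s=0.05)
print(f"frequency after 50 generations: {freq:.2f}")
```

The model is easy to write down; the post's complaint is precisely that identifying the "meme" whose frequency this tracks, and showing that real cultural data follow such dynamics, has not been done.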

This became more of a problem when I tried to make the idea pay rent. I don’t think anyone disputes that ideas, behaviors, etc. are transmitted across and within generations, or that these ideas, behaviors, etc. change over time. As I understand it, though, memetics argues that these ideas and behaviors change over time in a pattern analogous to the way that genes change.

The most obvious problem with this is that genes can be broken down into discrete units. What’s the fundamental unit of an idea? Of course, in a sense, we could think of the idea as discrete, if we look at the neural pattern it’s being stored as. This exact pattern is not necessarily transmitted through whatever channel(s) you’re using to communicate it — the pattern that forms in someone else’s brain could be different. But having a mechanism of reproduction isn’t so important as showing a pattern to the results of that reproduction: after all, Darwin had no mechanism, and yet we think of him as one of the key figures in discovering evolution.

But I haven’t seen evidence for the assertion that memes change through time like genes. I have seen anecdotes and examples of ideas and behaviors that have spread through a culture, but no evidence that the pattern is the same. I haven’t even seen a clear way of identifying a meme, observing its reproduction, or tracking its offspring. Not so much as a study on the change of frequency of memes in an isolated population. Memetics today has less evidence than Darwin did when he started out; at least Darwin could point to discrete entities that were changing.

Without this sort of evidence, all the concept of a meme gives me is that ideas and behaviors can get transmitted, and that they can change. And I don’t need a new concept for that. Every now and then I’ll run a search on memetics just to see if anyone’s tried to address these problems — after all, a model describing how the frequency of ideas change in a population could be extremely useful to me — but so far I’ve seen nothing, and I don’t usually have the time to run a truly thorough search.

If any of you have, and if you know of evidence for the concept, please send me a link.

Beyond Reasonable Doubt? - Richard Dawkins [link]

24 Dreaded_Anomaly 10 February 2012 02:28AM

A new article looking at the jury system rationally and scientifically.

Excerpt:

Courtroom dramas accurately portray the suspense that hangs in the air when the jury returns and delivers its verdict. All, including the lawyers on both sides and the judge, are on tenterhooks and hold their breath while they wait to hear the foreman of the jury pronounce the words, “Guilty” or “Not guilty”. However, if the phrase “beyond reasonable doubt” means what it says, there should be no doubt of the outcome in the mind of anybody who has sat through the same trial as the jury. That includes the judge who, as soon as the jury has delivered its verdict, is prepared to give the order for execution — or release the prisoner without a stain on his character.

And yet, before the jury returned, there was enough “reasonable doubt” in that same judge’s mind to keep him on tenterhooks waiting for the verdict.

You cannot have it both ways. Either the verdict is beyond reasonable doubt, in which case there should be no suspense while the jury is out. Or there is real, nail-biting suspense, in which case you cannot claim that the case has been proved “beyond reasonable doubt”.

This really struck me as something that could have been on LW's front page.

two puzzles on rationality of defeat

4 fsopho 12 December 2011 02:17PM

I present here two puzzles of rationality that you LessWrongers may think are worth dealing with. Maybe the first one looks more amenable to a simple solution, while the second has drawn the attention of a number of contemporary epistemologists (Cargile, Feldman, Harman) and does not look that simple when it comes to a solution. So, let's go to the puzzles!

 

Puzzle 1 

At t1 I justifiably believe theorem T is true, on the basis of a complex argument I just validly reasoned from the likewise justified premises P1, P2 and P3.
So, in t1 I reason from premises:
 
(R1) P1, P2 ,P3
 
To the known conclusion:
 
(T) T is true
 
At t2, Ms. Math, a well-known authority on the subject matter of which my reasoning and my theorem are just a part, tells me I’m wrong. She tells me the theorem is just false, and convinces me of that on the basis of a valid reasoning with at least one false premise, the falsity of that premise being unknown to us.
So, in t2 I reason from premises (Reliable Math and Testimony of Math):
 
(RM) Ms. Math is a reliable mathematician, and an authority on the subject matter surrounding (T),
 
(TM) Ms. Math tells me T is false, and shows me how that is so, on the basis of a valid reasoning from F, P1, P2 and P3,
 
(R2) F, P1, P2 and P3
 
To the justified conclusion:
 
(~T) T is not true
 
It could be said by some epistemologists that (~T) defeats my previous belief (T). Is it rational for me to reason this way? Am I taking the correct direction of defeat? Wouldn’t it also be rational if (~T) were defeated by (T)? Why does (~T) defeat (T), and not vice versa? Is it just because (~T)’s justification was obtained at a later time?


Puzzle 2

At t1 I know theorem T is true, on the basis of a complex argument I just validly reasoned from the known premises P1, P2 and P3. So, in t1 I reason from known premises:
 
(R1) P1, P2 ,P3
 
To the known conclusion:
 
(T) T is true
 
Besides, I also reason from known premises:
 
(ME) If there is any evidence against something that is true, then it is misleading evidence (evidence for something that is false)
 
(T) T is true
 
To the conclusion (anti-misleading evidence):
 
(AME) If there is any evidence against (T), then it is misleading evidence
 
At t2 the same Ms. Math tells me the same thing. So in t2 I reason from premises (Reliable Math and Testimony of Math):
 
(RM) Ms. Math is a reliable mathematician, and an authority on the subject matter surrounding (T),
 
(TM) Ms. Math tells me T is false, and shows me how that is so, on the basis of a valid reasoning from F, P1, P2 and P3,
 
But then I reason from:
 
(F*) F, RM and TM are evidence against (T), and
 
(AME) If there is any evidence against (T), then it is misleading evidence
 
To the conclusion:
 
(MF) F, RM and TM are misleading evidence
 
And then I continue to know T and lose no knowledge, because I know/justifiably believe that the counter-evidence I just met is misleading. Is it rational for me to act this way?
I know (T) and I know (AME) at t1 on the basis of valid reasoning. Then, I am exposed to the misleading evidence (Reliable Math), (Testimony of Math) and (F). The evidentialist scheme (and maybe other schemes as well) supports the thesis that (RM), (TM) and (F) DEFEAT my justification for (T) instead, so that whatever I inferred from (T) is no longer known. However, given my previous knowledge of (T) and (AME), I could know that (MF): F is misleading evidence. Can it still be said that (RM), (TM) and (F) DEFEAT my justification for (T), given that (MF) DEFEATS my justification for (RM), (TM) and (F)?

Russ Roberts and Gary Taubes on confirmation bias [podcast]

4 fortyeridania 04 December 2011 05:51AM

Here is the link. The context is nutritional science and epidemiology, but confirmation bias is the primary theme pumping throughout the discussion. Gary Taubes has gained a reputation for contrarianism.* According to Taubes, the current nutritional paradigm (fat is bad, exercise is good, carbs are OK) does not deserve high credibility.

Roberts brings up the role of identity in perpetuating confirmation bias--a hypothesis has become part of you, so it has become that much harder to countenance contrary evidence. In this context they also talk about theism (Roberts is Jewish, while Taubes is an atheist). And, the program being EconTalk, Roberts draws analogies with economics.

*Sometime between 45 and 50 minutes in, Roberts points out that given this reputation, Taubes is susceptible to belief distortion as well:

What's your evidence that you are not just falling prey to the Ancel Keys and other folks who have made the same mistake?

I do not think Taubes gives a direct answer.

Scooby Doo and Secular Humanism [link]

26 Dreaded_Anomaly 03 December 2011 04:58AM

A great column by Chris Sims at the Comics Alliance.

Excerpt:

Because that's the thing about Scooby-Doo: The bad guys in every episode aren't monsters, they're liars.

I can't imagine how scandalized those critics who were relieved to have something that was mild enough to not excite their kids would've been if they'd stopped for a second and realized what was actually going on. The very first rule of Scooby-Doo, the single premise that sits at the heart of their adventures, is that the world is full of grown-ups who lie to kids, and that it's up to those kids to figure out what those lies are and call them on it, even if there are other adults who believe those lies with every fiber of their being. And the way that you win isn't through supernatural powers, or even through fighting. The way that you win is by doing the most dangerous thing that any person being lied to by someone in power can do: You think.

Tim Minchin fans may recall him mentioning Scooby Doo in a similar light in his beat poem Storm, and it's been brought up on Less Wrong before.

When viewed in this light, Scooby Doo really is like an elementary version of Methods of Rationality.

Religion, happiness, and Bayes

3 fortyeridania 04 October 2011 10:21AM

Religion apparently makes people happier. Is that evidence for the truth of religion, or against it?

(Of course, it matters which religion we're talking about, but let's just stick with theism generally.)

My initial inclination was to interpret this as evidence against theism, in the sense that it weakens the evidence for theism. Here's why:

  1. As all Bayesians know, a piece of information F is evidence for an hypothesis H to the degree that F depends on H. If F can happen just as easily without H as with it, then F is not evidence for H. The more likely we are to find F in a world without H, the weaker F is as evidence for H.
  2. Here, F is "Theism makes people happier." H is "Theism is true."
  3. The fact of widespread theism is evidence for H. The strength of this evidence depends on how likely such belief would be if H were false.
  4. As people are more likely to do something if it makes them happy, people are more likely to be theists given F.
  5. Thus F opens up a way for people to be theists even if H is false.
  6. It therefore weakens the evidence of widespread theism for the truth of H.
  7. Therefore, F should decrease one's confidence in H, i.e., it is evidence against H.

We could also put this in mathematical terms: F raises the probability of widespread theism even when H is false, and so raises the marginal probability of the evidence, which is the denominator in Bayes' theorem. A bigger denominator means a smaller posterior probability, in other words, weaker evidence.
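The argument in steps 1-7 can be checked with a toy calculation. The numbers below are illustrative assumptions, not estimates of anything:

```python
from fractions import Fraction as F

# A toy Bayes check of the argument above, with made-up numbers.
# H = "theism is true"; E = "theism is widespread".
p_h = F(1, 2)            # illustrative prior on H
p_e_given_h = F(9, 10)   # widespread belief is very likely if H is true

def posterior(p_e_given_not_h):
    """P(H | E) for a given likelihood of widespread belief without H."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
    return p_e_given_h * p_h / p_e

# The finding F ("belief makes people happier") raises P(E | not-H):
print(posterior(F(1, 10)))  # 9/10: E is strong evidence for H
print(posterior(F(6, 10)))  # 3/5: same E, but now much weaker evidence
```

Raising P(E | not-H) from 1/10 to 6/10 drags the posterior from 9/10 down toward the 1/2 prior, which is exactly the "weaker evidence" conclusion of step 7.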

OK, so that was my first thought.

But then I had second thoughts: Perhaps the evidence points the other way? If we reframe the finding as "Atheism causes unhappiness," or posit that contrarians (such as atheists) are dispositionally unhappy, does that change the sign of the evidence?

Obviously, I am confused. What's going on here?

Make evidence charts, not review papers? [Link]

14 XiXiDu 04 September 2011 01:26PM

How do you get on top of the literature associated with a controversial scientific topic? For many empirical issues, the science gives a conflicted picture. Like the role of sleep in memory consolidation, the effect of caffeine on cognitive function, or the best theory of a particular visual illusion. To form your own opinion, you’ll need to become familiar with many studies in the area.

You might start by reading the latest review article on the topic. Review articles provide descriptions of many relevant studies. Also, they usually provide a nice tidy story that seems to bring the literature all together into a common thread: that the author's theory is correct! Because of this bias, a review article may not help you much to make an independent evaluation of the evidence. And the evidence usually isn't all there. Review articles very rarely describe, or even cite, all the relevant studies. Unfortunately, if you're just getting started, you can't recognize which relevant studies the author didn't cite.

[...]

Hal Pashler and I have created, together with Chris Simon of the Scotney Group who did the actual programming, a tool that addresses these problems. It allows one to create systematic reviews of a topic, without having to write many thousands of words, and without having to weave all the studies together with a narrative unified by a single theory. You do it all in a tabular form called an ‘evidence chart’. Evidence charts are an old idea, closely related to the “analysis of competing hypotheses” technique. Our evidencechart.com website is fully functioning and free to all, but it’s in beta and we’d love any feedback.

More: alexholcombe.wordpress.com/2010/09/02/make-evidence-charts-not-review-papers/

Example: What is the role of sleep on hippocampus-dependent memory consolidation?

I thought this was an interesting idea. Do you think it would be possible and useful to create an evidence chart for risks from AI, existential risks in general and other topics on lesswrong?

A failure of conservation of evidence

0 PhilGoetz 24 August 2011 03:21PM

From Bloomberg:

U.S. stocks rallied, driving the Standard & Poor’s 500 Index up from the cheapest valuations since 2009, as weaker-than-estimated economic data reinforced optimism the Federal Reserve will act to spur growth.

The S&P 500 rose 3.4 percent to 1,162.35 at 4 p.m. in New York, for the biggest rally since Aug. 11. All 10 industries in the benchmark gauge rose, with gains ranging between 1.8 percent and 4.6 percent. The Dow Jones Industrial Average added 322.11 points, or 3 percent, to 11,176.76.

“There’s plenty of evidence that the economy has slowed,” Kevin Caron, market strategist in Florham Park, New Jersey, at Stifel Nicolaus & Co., said in a telephone interview. His firm has more than $115 billion in client assets. “The speculation would be that it’s possible that the Fed will say something designed to calm markets and provide a bit of encouragement.”

So, bad news about the economy explains the market going both up and down!

(Presumably, good news about the economy can also explain the market going both up and down.)
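The principle being violated here, conservation of expected evidence, says the prior must be the probability-weighted average of the posteriors. A quick check with made-up numbers:

```python
from fractions import Fraction as F

# Conservation of expected evidence: P(H) = P(H|E)P(E) + P(H|~E)P(~E).
# If both "bad news" (E) and "good news" (~E) are claimed to raise
# P(stocks rally) (H), the identity cannot hold.
p_h = F(1, 2)   # prior that stocks rally (assumed)
p_e = F(1, 2)   # chance of bad economic news (assumed)

def consistent(p_h_given_e, p_h_given_not_e):
    return p_h_given_e * p_e + p_h_given_not_e * (1 - p_e) == p_h

print(consistent(F(6, 10), F(4, 10)))  # True: news can move the market either way
print(consistent(F(6, 10), F(6, 10)))  # False: both outcomes can't raise P(H)
```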

What do the patterns of good and bad behaviours in an online world reveal about the nature of humanity?

5 XiXiDu 06 July 2011 05:36PM

Title:

Emergence of good conduct, scaling and Zipf laws in human behavioral sequences in an online world

Abstract:

We study behavioral action sequences of players in a massive multiplayer online game. In their virtual life players use eight basic actions which allow them to interact with each other. These actions are communication, trade, establishing or breaking friendships and enmities, attack, and punishment. We measure the probabilities for these actions conditional on previous taken and received actions and find a dramatic increase of negative behavior immediately after receiving negative actions. Similarly, positive behavior is intensified by receiving positive actions. We observe a tendency towards anti-persistence in communication sequences. Classifying actions as positive (good) and negative (bad) allows us to define binary 'world lines' of lives of individuals. Positive and negative actions are persistent and occur in clusters, indicated by large scaling exponents alpha~0.87 of the mean square displacement of the world lines. For all eight action types we find strong signs for high levels of repetitiveness, especially for negative actions. We partition behavioral sequences into segments of length n (behavioral `words' and 'motifs') and study their statistical properties. We find two approximate power laws in the word ranking distribution, one with an exponent of kappa-1 for the ranks up to 100, and another with a lower exponent for higher ranks. The Shannon n-tuple redundancy yields large values and increases in terms of word length, further underscoring the non-trivial statistical properties of behavioral sequences. On the collective, societal level the timeseries of particular actions per day can be understood by a simple mean-reverting log-normal model.

Link:

http://arxiv.org/abs/1107.0392
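The word-ranking claim in the abstract (an approximate power law with exponent near -1 for low ranks) can be sketched as follows. The data here are synthetic, an exact Zipf law rather than the actual Pardus game logs:

```python
import numpy as np

# Sketch of the rank-distribution analysis: estimate a rank-frequency
# power-law exponent by least squares in log-log space.
ranks = np.arange(1, 101)
freqs = 1000.0 / ranks  # ideal Zipf: frequency proportional to rank^-1

slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
print(round(slope, 1))  # -1.0, matching the exponent reported for ranks up to 100
```

On real behavioral sequences, the fitted slope would only approximate -1 over the low-rank regime, with a different exponent at higher ranks, as the paper describes.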

Popular science interpretation:

The way patterns of behaviour emerge and spread through society is the subject of intense research at the moment.

[...] behaviours spread from one network to another, for example, an angry phone conversation can affect the next email you write.

Today, Stefan Thurner at the Santa Fe Institute in New Mexico and a couple of pals [...] study the patterns of behaviour that emerge in a virtual world where every interaction is recorded for posterity.

The world they've chosen is a massive multiplayer online game called Pardus, which started in 2004 and today has some 380,000 players.

Thurner and co studied eight basic actions in which players interact with each other. These are: communication, trade, establishing or breaking friendships and enmities, attack and punishment. They simply recorded the stream of actions that each player performs and then looked for patterns that occur more often than expected.

Their conclusions are straightforward to state. Thurner and co found that positive behaviour intensifies after an individual receives a positive action.

However, they also found a far more dramatic increase in negative behaviour immediately after an individual receives a negative action. "The probability of acting out negative actions is about 10 times higher if a person received a negative action at the previous timestep than if she received a positive action," they say.

Negative action is also more likely to be repeated than merely reciprocated, which is why it spreads more effectively.

So negative actions seem to be more infectious than positive ones.

However, players with a high fraction of negative actions tend to have shorter lives. Thurner and co speculate that there may be two reasons for this: "First because they are hunted down by others and give up playing, second because they are unable to maintain a social life and quit the game because of loneliness or frustration."

So the bottom line is that the society tends towards positive behaviour.

[...] it opens a new front in the study of the human condition and the origin of good and bad behaviour.

[...]

"We interpret these findings as empirical evidence for self organization towards reciprocal, good conduct within a human society," they say.

[...] (popsci author note) Maybe. More interesting will be a next generation of studies that examine how small changes in environmental conditions can lead to big changes in behaviour.

Link:

http://www.technologyreview.com/blog/arxiv/26967/

Link: "Health Care Myth Busters: Is There a High Degree of Scientific Certainty in Modern Medicine?"

8 CronoDAS 01 April 2011 05:25AM

A feature in Scientific American magazine casts some light on the troubled state of modern medicine.

Health Care Myth Busters: Is There a High Degree of Scientific Certainty in Modern Medicine?

Short excerpt:

We could accurately say, "Half of what physicians do is wrong," or "Less than 20 percent of what physicians do has solid research to support it." Although these claims sound absurd, they are solidly supported by research that is largely agreed upon by experts.

Scientific American often gates its online articles after some time has passed, so I don't know how long it will be available.

VIDEO: The Problem With Anecdotes

5 JenniferRM 12 January 2011 02:37AM

Inspired by some of the comments in Back To The Basics I thought it might be interesting to see whether and how video embedding works in the discussion area.  The experiment is intended to function technically to see if this is possible, but also socially to see if the reaction is good and the comments are high quality.

When trying to set up this video I clicked the "HTML" button among the text tools (to the right of "Insert/edit image" and to the left of "Insert horizontal ruler"). In the text box that popped up, I pasted the html that I had already found on youtube by pressing the "Embed" button for a video that seemed thematically appropriate.

Assuming that this technically succeeds, we'll all have some anecdotal evidence about whether videos are a positive contribution to LW.

One thing that might be useful to mention is that QualiaSoup has produced about 28 videos of which I picked one that seemed particularly relevant to this forum and that I watched before posting.  I didn't really learn anything from this but I also didn't notice anything glaringly wrong with it.  If this experiment is good enough to repeat we might want to think about the standards we'd expect video posts to live up to.  Not sure what those should be, but it seemed like a good idea to mention that conversation around this might be useful.

Without further ado, "The Problem With Anecdotes"...

[embedded video: QualiaSoup, "The Problem With Anecdotes"]

The Decline Effect and the Scientific Method [link]

12 Dreaded_Anomaly 31 December 2010 01:23AM

The Decline Effect and the Scientific Method (article @ the New Yorker)

First, as a physicist, I do have to point out that this article concerns mainly softer sciences, e.g. psychology, medicine, etc.

A summary of explanations for this effect:

  • "The most likely explanation for the decline is an obvious one: regression to the mean. As the experiment is repeated, that is, an early statistical fluke gets cancelled out."
  • "Jennions, similarly, argues that the decline effect is largely a product of publication bias, or the tendency of scientists and scientific journals to prefer positive data over null results, which is what happens when no effect is found."
  • "Richard Palmer... suspects that an equally significant issue is the selective reporting of results—the data that scientists choose to document in the first place. ... Palmer emphasizes that selective reporting is not the same as scientific fraud. Rather, the problem seems to be one of subtle omissions and unconscious misperceptions, as researchers struggle to make sense of their results."
  • "According to Ioannidis, the main problem is that too many researchers engage in what he calls “significance chasing,” or finding ways to interpret the data so that it passes the statistical test of significance—the ninety-five-per-cent boundary invented by Ronald Fisher. ... The current “obsession” with replicability distracts from the real problem, which is faulty design."

These problems are with the proper usage of the scientific method, not the principle of the method itself. Certainly, it's important to address them. I think the reason they appear so often in the softer sciences is that biological entities are enormously complex, and so higher-level ideas that make large generalizations are more susceptible to random error and statistical anomalies, as well as personal bias, conscious and unconscious.
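The publication-bias explanation on the list above is easy to demonstrate. This toy simulation is my own, not from the article: a significance filter alone inflates early published estimates, and unfiltered replications then "decline" toward the true effect:

```python
import numpy as np

# Toy model: many studies estimate the same small true effect with noise.
# Early journals publish only "significant" results; replications publish all.
rng = np.random.default_rng(42)
true_effect, se, n_studies = 0.1, 0.2, 10_000

estimates = rng.normal(true_effect, se, n_studies)

published_early = estimates[estimates / se > 1.96]  # significance filter
mean_early = published_early.mean()
mean_later = estimates.mean()  # replications, no filter

print(round(mean_early, 2))  # well above the true 0.1
print(round(mean_later, 2))  # close to the true 0.1
```

The "decline" requires no fraud and no change in the phenomenon, only a change in which results get reported.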

For those who haven't read it, take a look at Richard Feynman on cargo cult science if you want a good lecture on experimental design.

Risk is not empirically correlated with return

3 jsalvatier 22 November 2010 05:31AM

The most widely appreciated finance theory is the Capital Asset Pricing Model. It basically says that diminishing marginal utility of absolute wealth implies that riskier financial assets should have higher expected returns than less risky assets, and that only risk correlated with the market as a whole (beta risk) is important, because other risk can be diversified away.

Eric Falkenstein argues that the evidence does not support this theory; that the riskiness of assets (by any reasonable definition) is not positively correlated with return (some caveats apply). He has a paper (long but many parts are skimmable; not peer reviewed; also on SSRN) as well as a book on the topic. I recommend reading parts of the paper.

The gist of his competing theory is that people care mostly about relative gains rather than absolute gains. This implies that riskier financial assets will not have higher expected returns than less risky assets. People will not require a higher return to hold assets with higher undiversifiable variance because everyone is exposed to the same variance and people only care about their relative wealth.

Falkenstein has a substantial quantity of evidence to back up his claim. I am not sure if his competing theory is correct, but I find the evidence against the standard theory quite convincing.

If risk is not correlated with returns, then anyone who is mostly concerned with absolute wealth can profit from this by choosing a low beta risk portfolio.

This topic seems more appropriate for the discussion section, but I am not completely sure, so if people think it belongs in the main area, let me know.

Added some (hopefully) clarifying material:

All this assumes that you eliminate idiosyncratic risk through diversification. Technically impossible, but you can get it reasonably low. The R's are all *instantaneous* returns; though since these are linear models they apply to geometrically accumulated returns as well. The idea that E(R_asset) are independent of past returns is a background assumption for both models and most of finance.

Beta_portfolio = Cov(R_portfolio, R_market)/variance(R_market)

In CAPM your expected and variance are:

E(R_portfolio) = R_rfree + Beta_portfolio * (E(R_market) - R_rfree)
Var(R_portfolio) = Beta_portfolio^2 * Var(R_market)

in Falkenstein's model your expected return are:

E(R_portfolio) = R_market       # you could also say = R_rfree; the point is that it's a constant
Var(R_portfolio) = Beta_portfolio^2 * Var(R_market)

The major caveat being that it doesn't apply very close to Beta_portfolio = 0; Falkenstein attributes this to liquidity benefits. And it doesn't apply to very high Beta_portfolio; he attributes this to "buying hope". See the paper for more.

Falkenstein argues that his model fits the facts more closely than CAPM. Assuming Falkenstein's model describes reality, if your utility declines with rising Var(R_portfolio) (the standard assumption), then you'll want to hold a portfolio with a beta of zero; or taking into account the caveats, a low Beta_portfolio. If your utility is declining with Var(R_portfolio - R_market), then you'll want to hold the market portfolio. Both of these results are unambiguous since there's no trade off between either measure of risk and return.
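The contrast between the two models above can be sketched numerically. All numbers are illustrative assumptions; variance is taken to scale with beta squared for a fully diversified portfolio, though the exact risk scaling does not affect the expected-return contrast:

```python
import numpy as np

# CAPM vs Falkenstein-style expected returns for diversified portfolios
# of varying beta (idiosyncratic risk taken as zero).
r_free, er_market, var_market = 0.02, 0.08, 0.0324  # var_market = 0.18**2

betas = np.array([0.5, 1.0, 1.5])

# CAPM: expected return rises linearly with beta.
er_capm = r_free + betas * (er_market - r_free)

# Falkenstein: expected return is flat in beta.
er_falk = np.full_like(betas, er_market)

var_p = betas**2 * var_market  # portfolio variance under both models

for b, ec, ef, v in zip(betas, er_capm, er_falk, var_p):
    print(f"beta={b:.1f}  Var={v:.4f}  E[R]: CAPM={ec:.3f}  Falkenstein={ef:.3f}")
```

Under the flat (Falkenstein) line, a low-beta portfolio gives up no expected return while shedding variance, which is the arbitrage the post's final paragraph points at.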

Some additional evidence from another source, and discussion: http://falkenblog.blogspot.com/2010/12/frazzini-and-pedersen-simulate-beta.html

It's a fact: male and female brains are different

3 araneae 07 October 2010 08:15PM

In Which I Present The Opposing Side's Hypothesis and Falsify It

This post is in part in response to a New Scientist article/book review "Fighting back against neurosexism."  And the tagline is "Are differences between men and women hard-wired in the brain? Two new books argue that there's no solid scientific evidence for this popular notion."  

Full disclosure here: I haven't read the books, although I do have a B.S. in neurobiology. But you don't even need to understand anything about neurobiology to falsify their most basic hypothesis: that male and female brains have no hardwired behavioral differences.

And it's easy to falsify: if male and female brains were the same, all humans would be completely bisexual.  If it's true that female brains, on average, prefer to fuck, date, and marry men, and male brains, on average, prefer to fuck, date, and marry women, then male and female brains are in fact different.

continue reading »