You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Open thread, August 4 - 10, 2014

5 Post author: polymathwannabe 04 August 2014 12:20PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one.

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Comments (307)

Comment author: David_Gerard 11 August 2014 10:13:19AM 3 points [-]
Comment author: ciphergoth 10 August 2014 10:17:38AM *  7 points [-]

I've never tried to fnord something before; did I do it right?

Frankenstein's monster doomsayers overwhelmed by Terminator's Skynet become ever-more clever singularity singularity the technological singularity idea that has taken on a life of its own techno-utopians wealthy middle-aged men singularity as their best chance of immortality Singularitarians prepared to go to extremes to stay alive for long enough to benefit from a benevolent super-artificial intelligence a man-made god that grants transcendence doomsayers the techno-dystopians Apocalypsarians equally convinced super-intelligent AI no interest in curing cancer or old age or ending poverty malevolently or maybe just accidentally bring about the end of human civilisation Hollywood Golem Frankenstein's monster Skynet and the Matrix fascinated by the old story man plays god and then things go horribly wrong singularity chain reaction even the smartest humans cannot possibly comprehend how it works out of control singularity technological singularity cautious and prepared optimistic obsessively worried by a hypothesised existential risk a sequence of big ifs risk while not impossible is improbable worrying unnecessarily we're falling into a trap fallacy taking our eyes off other risks none of this has brought about the end of civilisation a huge gulf obsessing about the risk of super-intelligent AI cautious and prepared we should be worrying about present-day AI rather than future super-intelligent AI.

Artificial intelligence will not turn into a Frankenstein's monster, Alan Winfield, Observer, Sunday 10 August 2014

Comment author: Error 09 August 2014 11:35:14PM 1 point [-]

I'm at Otakon 2014, and there was a panel today about philosophy and videogames. The description read like Less Wrongese. I couldn't get in (it was full) but I'm wondering if anyone here was responsible for it.

Comment author: NancyLebovitz 09 August 2014 10:01:11AM 1 point [-]

A history of anime fandom

I'm not vouching for this, but it sounds plausible.

Comment author: [deleted] 09 August 2014 02:02:38AM 6 points [-]

Quoted in full from here:

A 33-year-old doctor in Africa and a 60-year-old missionary have both contracted Ebola, and both will likely die. In a made-for-tv-movie scenario, there’s only enough serum for one person so the doctor insists it go to the old lady. People are using this to illustrate how awesome and selfless the doctor is, saying that Even Now he “puts the needs of others above his own needs.” I, on the other hand, think this is a rotten stinking act of hubris. As a DOCTOR, he is far more valuable to the African people, and as such HE should get the serum. Not only is his act NOT selfless, in fact many more people will die since he has essentially killed their doctor. - Ruth Waytz

Comment author: pragmatist 09 August 2014 06:27:56AM 7 points [-]

I see the broad point Waytz is making, but the ranty delivery is pretty silly. Why is the doctor's act not selfless? It certainly appears to be motivated by altruism (even if that altruism is misguided, from a utilitarian perspective). Having a non-utilitarian moral code is not the same thing as selfishness.

Second, the anger in that comment seems to have more to do with a distaste for deontological altruistic gestures than anything else. I really doubt Waytz would be as mad if the doctor had simply decided that he had had enough of working in the medical profession and decided to open a bistro instead.

Comment author: NancyLebovitz 08 August 2014 07:45:55PM 3 points [-]

How to Work with "Stupid" People

The hypothesis is that people frequently underestimate the intelligence of those they work with. The article suggests some ways people could get the wrong impression, and some strategies for improving communications and relationships. It all seems very plausible.

However, the author doesn't offer any examples, and the comments are full of complaints about unchangeably stupid coworkers.

Comment author: Viliam_Bur 09 August 2014 08:55:50PM *  5 points [-]

I believe I had the opposite problem for most of my life. I was taught to be humble, to never believe I am better than anyone else, et cetera. Nice political slogans, and probably I should publicly pretend to believe them. But there is a problem: I have a lot of data on people doing stupid things, and I need some explanation. And of course, if I forbid myself to use the potentially correct explanation, then I am pushing myself towards the incorrect ones.

Sometimes the problem is that I didn't understand something, so the seemingly stupid behavior wasn't actually stupid; it was me not understanding. Yes, sometimes this happens, so it is reasonable to consider this hypothesis seriously. But oftentimes, even after careful exploration, the stupid behavior is stupid. When people keep saying that 2+2=5, it could mean they have secret mathematical knowledge unknown to you; but it is more likely that they are simply wrong.

But the worse problem is that refusing to believe in other people's stupidity deprives you of the wisdom of "Never attribute to malice that which is adequately explained by stupidity." Not believing in stupidity can make you paranoid, because if those people don't do stupid things because of stupidity, then they must have some purpose in doing them. And if it's a stupid thing that happens to harm you, it means they hate you, or at least don't mind that you are harmed. Ignorance starts to seem like strategic plausible deniability.

I had to overcome my upbringing and say to myself: "Viliam, your IQ is at least four sigma over the average, so when many people seem retarded to you, even many university-educated people, that's because they really are retarded, compared with you. They are usually not passive-aggressive; they are trying to do their best, and their best is just often very unimpressive to you (but probably impressive in their own eyes, and in the eyes of their peers). You are expecting more from them than they can realistically provide, and they often don't even understand what you are saying. And they live in their world, where they are the norm; you are the exception. And it will never change, so you'd better get used to it; otherwise you are preparing yourself for a lifetime of disappointment."

From that moment, when I see someone doing something stupid, I consider the hypothesis "maybe that's the best their intelligence allows them to do." And suddenly, I am not angry at most people around me. They are nice people; they are just not my equals, and it's not their fault. Often they have knowledge that I don't have, and I can learn from them. (Intelligence does not equal knowledge.) But they also often do something completely stupid that likely doesn't seem stupid in their eyes. I should not assume that everything they do makes sense. I should not expect them to be able to understand everything I am trying to explain; I can try, but I shouldn't become too invested in it; sometimes I have to give up and accept some stupidity as a part of my environment.

The proper way to work with stupid people is to realize their limitations and not blame them for not being what you want them to be. (Of course you should always check whether your estimates are correct. But they are not always wrong.)

Comment author: Lumifer 08 August 2014 08:11:44PM 1 point [-]

That blog post assumes that actual stupidity is never the "real" problem. I beg to disagree.

Comment author: Pfft 11 August 2014 08:18:47PM *  2 points [-]

Or does it?

They may have raw intelligence, but poor thinking habits—patterns of absorbing, processing, and filing information. Cognitively, they aren’t set up to get to the heart of a matter, to distinguish between essential and accidental details, to form and apply valid generalizations. This too may require patience. It isn’t good, but it isn’t willful, irrational, or stupid. Concentrate on what other virtues and talents they bring to the table, such as creativity, diligence, or relationship-building.

This seems to mean exactly "maybe they are stupid after all", but expressed using a different set of words.

(I would guess that the author at some point adopted "never think that someone is stupid" as a deontological rule, and then unintentionally evolved a different set of words to be able to think about stupidity without triggering the filter...)

Comment author: ahbwramc 09 August 2014 05:41:33PM 0 points [-]

I think purely from a fundamental attribution error point of view we should expect the average "stupid" person we encounter to be less stupid than they seem.

(which is not to say stupidity doesn't exist of course, just that we might tend to overestimate its prevalence)

I guess the other question would be, are there any biases that might lead us to underestimate someone's stupidity? Illusion of transparency, perhaps, or the halo effect? I still think we're on net biased against thinking other people are as smart as us.

Comment author: Azathoth123 10 August 2014 08:44:53PM 2 points [-]

Are you saying that charlatans and cranks don't exist or at least never manage to obtain any followers?

Comment author: Lumifer 10 August 2014 12:10:57AM 3 points [-]

are there any biases that might lead us to underestimate someone's stupidity?

Sex appeal, of course :-D

Comment author: NancyLebovitz 08 August 2014 08:22:20PM 1 point [-]

You're right. I'm sure that actual stupidity is sometimes the real problem. On the other hand, it would surprise me if it's always the real problem. At that point, the question becomes how much effort is worth putting in.

Comment author: niceguyanon 08 August 2014 01:50:56PM 4 points [-]

Non-conventional thinking here, feel free to tell me why this is wrong/stupid/dangerous.

I am young and healthy, and when I catch a cold, I think "cool, when I recover: immune system +1." I take this one step further, though: when I don't get sick for a long time, I start to hope I get sick, because I want to exercise my immune system. I know this might sound obviously wrong, but can we discuss why, exactly?

My priors tell me that actively avoiding all germs and people to prevent getting sick is unhealthy. So I have lived my life not avoiding germs, but also not asking people to cough on me either. But is there room to optimize? I caught something pretty nasty that lasted a month, and I am sure I got it from being at a large music festival breathing hot, breathy air, but better now than catching that strain of whatever it was when I am 70, right? And I don't mean I want to catch a serious case of pneumonia and potentially die; I mean: what if there were a way to deliberately catch a strain of the common cold every now and then?

Comment author: satt 09 August 2014 02:51:28PM 1 point [-]

The catch I'd expect here is for the marginal immunological benefit from an extra cold to be less than the marginal cost of suffering an extra cold, although a priori I'm not sure which way a cost-benefit analysis would go.

It'd depend on how well colds help your immune system fight other diseases; the expected marginal number of colds prevented per extra cold suffered; the risk of longer-term side effects of colds; how the cost of getting sick changes with age (which you mentioned); the chance that you'll mistakenly catch something else (like influenza) if you try to catch someone else's cold; and the doloric cost of suffering through a cold. One might have to trawl through epidemiology papers to put usable numbers on these.

Consuming probiotics (or even specks of dirt picked up from the ground) might be easier & safer.
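satt's list of factors above can be framed as a simple expected-value comparison. The sketch below only shows the shape of the calculation; every parameter value is a made-up placeholder, not an epidemiological estimate:

```javascript
// Hypothetical back-of-envelope model of the cost-benefit framing above.
// All parameter values are placeholders, not real estimates.
function netBenefitOfExtraCold({
  coldsPreventedPerExtraCold, // expected future colds avoided via extra immunity
  costPerCold,                // disutility of suffering one cold (arbitrary units)
  sideEffectRisk,             // expected cost of longer-term side effects
  misfireCost,                // expected cost of catching flu etc. by mistake
}) {
  const benefit = coldsPreventedPerExtraCold * costPerCold;
  const cost = costPerCold + sideEffectRisk + misfireCost;
  return benefit - cost; // positive => deliberate exposure pays off in this model
}

// With placeholder numbers the sign flips easily, which is the point:
console.log(netBenefitOfExtraCold({
  coldsPreventedPerExtraCold: 0.5, costPerCold: 10,
  sideEffectRisk: 1, misfireCost: 2,
})); // -8: not worth it under these assumptions
```

Whether the real numbers make this positive or negative is exactly what one would have to trawl the epidemiology papers for.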

Comment author: Lumifer 08 August 2014 03:28:19PM 3 points [-]

But is there room to optimize?

Maybe, but I don't think you can find out -- the data is too noisy and the variance is too big.

Besides, of course, the better your immune system gets, the less often you will get sick with infectious diseases...

Comment author: polymathwannabe 08 August 2014 03:27:13PM 1 point [-]

Your immune system is already being subjected to constant demands by the simple fact that you don't live in a quarantine bunker. Let it do its job. Intentional germ-seeking is reckless.

Comment author: pianoforte611 08 August 2014 02:57:41PM 1 point [-]

There are over 100 strains of the common cold. If you gain immunity to one, this will not significantly decrease your chance of catching a cold in the far future. On the other hand, good hygiene will significantly decrease your chance of being infected by most contagious diseases.

Comment author: NancyLebovitz 08 August 2014 06:09:21PM 1 point [-]

It's at least plausible that people become less vulnerable to colds as they get older.

http://www.nytimes.com/2013/08/06/science/can-immunity-to-the-common-cold-come-with-age.html?_r=0

Comment author: Lumifer 08 August 2014 03:30:42PM 2 points [-]

If you gain immunity to one, this will not significantly decrease your chance of catching a cold in the far future.

He's not talking about gaining immunity in the vaccination sense. He's talking about developing a better, stronger immune system.

Comment author: Lumifer 07 August 2014 05:42:34PM 2 points [-]

What it means to be statistically educated, a list by the American Statistical Association. Not half bad.

Comment author: Kawoomba 07 August 2014 03:56:42PM 6 points [-]

Many European countries, such as France, Denmark and Belgium, enjoyed jokes that were surreal, like Dr Wiseman's favourite:

An alsatian went to a telegram office and wrote: "Woof. Woof. Woof. Woof. Woof. Woof. Woof. Woof. Woof."

The clerk examined the paper and told the dog: "There are only nine words here. You could send another 'Woof' for the same price."

"But," the dog replied, "that would make no sense at all."

Dr Wiseman is now preparing scientific papers based on his findings, which he believes will benefit people developing artificial intelligence in computer programs.

Source, it's from back in 2002

Comment author: Stuart_Armstrong 07 August 2014 02:07:15PM 14 points [-]

In your open thread inbox, Less Wrong comments have the options "context" and "report" (in that order), whereas private messages have "report" and "reply" (in that order). Many times I've accidentally pressed "report" on a private message, and fortunately caught myself before continuing.

I'd suggest reversing the order of "report" and "reply", so that they fit with the comments options.

Right, that's my tiny suggestion for this month :-)

Comment author: Ixiel 07 August 2014 12:56:37AM 1 point [-]

Is there a way to see if I can vote both ways?

A month or so ago I started to get errors saying I can't downvote. I don't really care that much (it's not me that's gaining from my vote), but if I can't downvote I want to make sure I don't upvote so I don't bias things.

Comment author: tut 07 August 2014 04:16:52PM 1 point [-]

I had those too. It stopped rather quickly.

Comment author: Alicorn 07 August 2014 01:06:48AM 3 points [-]

Your downvotes are limited by your karma (I think it's four downvotes to a karma point). I don't think you will meaningfully bias anything if you continue to upvote things you like while accumulating enough karma to downvote again.

Comment author: tut 07 August 2014 04:19:20PM 1 point [-]

That they are, even when everything works perfectly. There was also an error a while ago that gave the same error message to (some?) people who were not at their limit.

Comment author: Ixiel 07 August 2014 01:32:46AM 1 point [-]

Yeah it's the principle. I guess I'll just try a down before I up going forward. Thanks Al

Comment author: Skeptityke 06 August 2014 06:12:59PM 1 point [-]

Physics puzzle: Being exposed to cold air while the wind is blowing causes more heat loss/feels colder than simply being exposed to still cold air.

So, if the ambient air temperature is above body temperature, and ignoring the effects of evaporation, would a high wind cause more heat gain/feel warmer than still hot air?

Comment author: bramflakes 09 August 2014 03:36:48PM 3 points [-]

Yes, it's how hair dryers work.

Comment author: tut 07 August 2014 08:57:08AM 1 point [-]

Yes. This happens sometimes in a really wet sauna.

But conditions in which you actually feel this also kill you in less than a day. You need to lose about 100 W of heat in order to keep a stable body temperature, and moving air only feels hotter than still air if you are gaining heat from the air.
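The convection argument in this thread can be sketched with Newton's law of cooling (ignoring evaporation, as the puzzle stipulates). The heat-transfer coefficients below are rough textbook orders of magnitude, not measurements:

```javascript
// Newton's law of cooling for the skin-air interface: wind raises the
// convective coefficient h, which scales whatever heat flow already exists.
function convectiveHeatGainWatts(tAirC, tSkinC, hWm2K, areaM2) {
  // q > 0 means the body gains heat from the air.
  return hWm2K * areaM2 * (tAirC - tSkinC);
}

const skin = 34;   // approximate skin temperature, deg C
const area = 1.8;  // rough adult body surface area, m^2
const hStill = 5;  // free convection, W/(m^2 K), order of magnitude
const hWindy = 50; // forced convection in a strong wind, same units

// Cold air: wind multiplies the (negative) flow => feels colder.
// Air above skin temperature: wind multiplies the positive flow
// => feels hotter, as with a hair dryer.
console.log(convectiveHeatGainWatts(10, skin, hStill, area)); // -216 W (loss)
console.log(convectiveHeatGainWatts(10, skin, hWindy, area)); // -2160 W
console.log(convectiveHeatGainWatts(45, skin, hStill, area)); // 99 W (gain)
console.log(convectiveHeatGainWatts(45, skin, hWindy, area)); // 990 W
```

Note the sign flips at skin temperature, not at core body temperature, which is why the comments above compare the air against the skin.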

Comment author: shminux 06 August 2014 09:24:39PM *  1 point [-]

Yes. Your body would try to cool your face exposed to hot air by circulating more blood through it, creating a temperature gradient through the surface layer. Consequently, the air nearest your face would be colder than ambient. A wind would blow away the cooler air, resulting in the air with ambient temperature touching your skin. Of course, in reality humidity and sweating are major factors, negating the above analysis.

Comment author: Lumifer 06 August 2014 07:20:20PM 5 points [-]

Yes, though ignoring the effects of evaporation is ignoring a major factor.

Comment author: iarwain1 06 August 2014 04:37:02PM 3 points [-]

Anybody have any advice on how to successfully implement doublethink?

Comment author: mathnerd314 06 August 2014 07:53:07PM *  7 points [-]

Once upon a time I tried using what I coined "quicklists". I took a receipt, turned it over to the back (clear side), and jotted down 5-10 things that I wanted to believe. Then I set a timer for 24 hours and, before that time elapsed, acted as if I believed those things. My experiment was too successful; by the time 24 hours were up I had ended up in a different county, with little recollection of what I'd been doing, and some policemen asking me pointed questions. (I don't believe any drugs were involved, just sleep deprivation, but I can't say for certain.)

More recently, I rented and saw the film Memento, which explores these techniques in a fictional setting. The concept of short-term forgetting seemed reasonable and the techniques the character uses to work around it are easily adapted in real life. My initial test involved printing out a pamphlet with some dentistry stuff in tiny type (7 12-pt pages shrunk to fit on front-back of 1 page, folded in quarters), and carrying it with me to my dentist appointment. I was able to discuss most of the things from my pamphlet, and it did seem that the level of conversation was raised, but there were many other variables as well so it's hard to quantify the exact effect.

I'm not certain these techniques actually count as "doublethink", since the contradiction is between my "internal" beliefs and the beliefs I wrote down, but it does allow some exploration of the possibilities beyond rationality. I can override my system 2 with a piece of paper, and then system 1 follows.

NB: Retrieving your original beliefs after you've been going off of the ones from the paper is left as an exercise to the student

Comment author: SolveIt 07 August 2014 05:18:11AM 1 point [-]

I would like to read more about this. Would you consider writing it up?

Comment author: mathnerd314 07 August 2014 03:22:49PM 0 points [-]

I thought I had written all I could. What sort of things should I add?

Comment author: Vulture 07 August 2014 11:18:49PM *  1 point [-]

I think a little more elaboration on the quicklists experiment would be appreciated, and in particular a clearer description of what you think transpired when it went "too right". For me, at least, your experimental outcome might be extremely surprising (depending on the extent of the sleep deprivation involved), but I'm not even sure yet what model I should be re-assessing.

Comment author: Lumifer 06 August 2014 03:28:05PM 2 points [-]
Comment author: Douglas_Knight 09 August 2014 02:09:50AM 0 points [-]
Comment author: witzvo 07 August 2014 02:06:58AM 1 point [-]

That's a pretty cool histogram in figure 2.

Comment author: DavidAgain 06 August 2014 08:30:35AM 2 points [-]

Thought that people (particularly in the UK) might be interested to see this: a blog post from one of the broadsheets on Bostrom's Superintelligence.

http://blogs.telegraph.co.uk/news/tomchiversscience/100282568/a-robot-thats-smarter-than-us-theres-one-big-problem-with-that/

Comment author: Lumifer 05 August 2014 09:20:07PM 3 points [-]
Comment author: Lumifer 05 August 2014 06:45:03PM 2 points [-]

Another attempt at a sleep sensor, currently funded on Kickstarter.

Comment author: [deleted] 05 August 2014 02:35:12PM 3 points [-]

I have been considering finding a group of writers/artists to associate with in order to both provide me a catalyst for self-improvement and a set of peers who are serious about their work. I have several friends who are "into" writing or comics or whatever other medium, but most of them are as "into" it as the time between video games, drinking, and staying up late to binge Dexter episodes allows.

We have a whole sequence here on LessWrong about the Craft and the Community. So I don't feel the need to provide some bits of anecdotal evidence for why I think having a community for your craft is a good idea.

Instead, I'll just ask, to the writers: how have you found a community for your craft/have you bothered?

Comment author: Alicorn 06 August 2014 06:27:33AM 3 points [-]

I put writing online for free and siphoned off spare HPMoR fans until I had enough fanbase to maintain my own stable of beta readers, set of tumblr tags, and modestly populated forum. This is more how I cultivated a fandom than a set of colleagues, but some of the people I collected this way also cowrite with me and most of them are available to spur me along.

Comment author: TylerJay 05 August 2014 03:55:04PM 0 points [-]

I was once part of an online community on the sffworld writing forum. There were regular posters, like on any forum, and there was also a small workshop (6-8 people); each week two people would submit something for the rest of the group to read and provide feedback on. It was motivating and fun.

Comment author: polymathwannabe 05 August 2014 02:43:04PM 0 points [-]

I frequent a sci-fi fan club in my city and from that group emerged a tiny writing workshop (6 members currently). The couple of guys who came up with the idea had heard that I wrote some small stuff and won a local contest, and thus I got invited. Every two Sundays we meet via Skype to comment on the stories that we've posted to our FB group since the last meeting. It has been helpful for me; we've agreed to be brutally honest with one another.

Comment author: TheMajor 05 August 2014 01:01:21PM 5 points [-]

Not sure if this belongs here, but not sure where else it should go.

Many pages on the internet disappear, returning 404s when you look for them (especially older pages). The material I found on LW and OB is of such great quality that I would really hate it if some of the pages here also disappeared (as in, became harder for me to access). I am not sure how realistic this is, but the thought does bother me. So I was hoping to somehow make a local backup of LW/OB, downloading all the pages to a hard drive. There are other reasons for wanting this: I am frequently in regions without internet access, and it might finally allow me to organise the posts (the categories on LW leave much to be desired; the closest thing to a good structure I've found is the chronological list on OB, which seems to be absent on LW?).

So my triple question: should I be worried about pages disappearing (probably not too much), would it still be a good idea to try to make a local backup (probably yes, storage is cheap and I think it would be useful for me personally to have LW offline, even only the older posts) and how does one go about this?

Comment author: David_Gerard 05 August 2014 07:43:58PM 7 points [-]

Pages here are disappearing - someone's been going through the archive deleting posts they don't like. (cf. [1] versus [2].) (The post is still slightly available, but the 152 comments are no longer associated with it.) So get archiving sooner rather than later.

Comment author: TylerJay 05 August 2014 03:52:44PM 7 points [-]

You might be interested in reading Gwern's page on Archiving URLs and Link Rot

Comment author: SolveIt 05 August 2014 07:41:18AM 4 points [-]

As a person living very far away from west Africa, how worried should I be about the current Ebola outbreak?

Comment author: palladias 05 August 2014 03:35:49PM 2 points [-]

TL;DR: Ebola is very hard to transmit person to person. Don't think flu, think STDs.

Ebola isn't airborne, so breathing the same air as an Ebola case, or being on the same plane as one, will not give you Ebola. It doesn't spread quite like STDs, but it does require getting an infected person's bodily fluids (urine, semen, blood, and vomit) mixed up in your bodily fluids or in contact with a mucous membrane.

So, don't sex up your recently returned Peace Corps friend who's been feeling a little fluish, and you should be a-ok.

Comment author: byrnema 15 August 2014 04:07:46PM 2 points [-]

A person infected with Ebola is very contagious during the period they are showing symptoms. The CDC recommends casual contact and droplet precautions.

Note the following description of (casual) contact:

Casual contact is defined as a) being within approximately 3 feet (1 meter) or within the room or care area for a prolonged period of time (e.g., healthcare personnel, household members) while not wearing recommended personal protective equipment (i.e., droplet and contact precautions–see Infection Prevention and Control Recommendations); or b) having direct brief contact (e.g., shaking hands) with an EVD case while not wearing recommended personal protective equipment (i.e., droplet and contact precautions–see Infection Prevention and Control Recommendations). At this time, brief interactions, such as walking by a person or moving through a hospital, do not constitute casual contact.

(Much more contagious than an STD.)

But Lumifer is also correct. People without symptoms are not contagious, and people with symptoms are conspicuous (e.g. Patrick Sawyer was very conspicuous when he infected staff and healthcare workers in Nigeria) and unlikely to be ambulatory. The probability of a given person in West Africa being infected is very small (2000 cases divided by approximately 20 million people in Guinea, Sierra Leone and Liberia) and the probability of a given person outside this area being infected is truly negligible. If we cannot contain the virus in the area, there will be a lot of time between the observation of a burning 'ember' (or 10 or 20) and any change in these probabilities -- plenty of time to handle and douse out any further hotspots that form.

The worst case scenario in my mind is that it continues unchecked in West Africa or takes hold in more underdeveloped countries. This scenario would mean more unacceptable suffering and would also mean the outbreak gets harder and harder to squash and contain, increasing the risk to all countries.

We need to douse it while it is relatively small -- I feel so frustrated when I hear there are hospitals in these regions without supplies such as protective gear. What is the problem? Rich countries should be dropping supplies already.
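For what it's worth, the per-person probability cited a few paragraphs up works out as follows, using the figures given in the comment (approximately 2,000 cases among approximately 20 million people in Guinea, Sierra Leone, and Liberia at the time):

```javascript
// The rough base rate implied by the figures in the comment above.
const cases = 2000;
const population = 20e6; // Guinea + Sierra Leone + Liberia, approx.
const pInfected = cases / population;
console.log(pInfected); // 0.0001, i.e. about 1 in 10,000
```

And as noted, for a given person outside the affected region the probability is far smaller still.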

Comment author: Lumifer 05 August 2014 04:46:41PM 2 points [-]

Ebola is very hard to transmit person to person.

Um. Given that an epidemic is actually happening and given that more than one doctor attending Ebola patients got infected, I'm not sure that "very hard" is the right term here.

Having said that, if you don't live in West Africa your chances of getting Ebola are pretty close to zero. You should be much more afraid of lightning strikes, for example.

Comment author: Khoth 05 August 2014 10:05:28AM 4 points [-]
Comment author: byrnema 10 August 2014 03:51:58PM *  0 points [-]

Sorry, realized I don't feel comfortable commenting on such a high-profile topic. Will wait a few minutes and then delete this comment (just to make sure there are no replies.)

Comment author: gjm 05 August 2014 10:04:11AM 3 points [-]

(Not in any way an expert; just going by what I've heard elsewhere.) I think the answer probably depends substantially on how much you care about the welfare of West Africans. It is very unlikely to have any impact to speak of in the US or Western Europe, for instance.

Comment author: Bakkot 05 August 2014 02:34:00AM 21 points [-]

I wrote a userscript / Chrome extension / zero-installation bookmarklet to make finding recent comments over at Slate Star Codex a lot easier. Observe screenshots. I'll also post this next time SSC has a new open thread (unless Yvain happens to notice this).

Comment author: NancyLebovitz 06 August 2014 07:03:25PM 0 points [-]

I tried downloading it by clicking on "install the extension", but it doesn't seem to get to my browser (Chrome). Am I missing something?

Comment author: Bakkot 06 August 2014 09:07:19PM 3 points [-]

"Install the extension" is a link bringing you to the chrome web store, where you can install it by clicking in the upper-right. The link is this, in case it's Github giving you trouble somehow.

If the Chrome web store isn't recognizing that you're running Chrome, that's probably not a thing I can fix, though you could try saving this link as something.user.js, opening chrome://extensions, and dragging the file onto the window.

Comment author: NancyLebovitz 07 August 2014 05:01:20AM *  1 point [-]

Thank you. That worked. I never would have guessed that an icon which simply had the word "free" on it was the download button.

Would it be worth your while to do this for LW? It makes me crazy that the purple edges for new comments are irretrievably lost if the page is downloaded again.

Comment author: Bakkot 07 August 2014 07:57:34PM *  3 points [-]

Would it be worth your while to do this for LW?

Sure. Remarkably little effort required, it turned out. (Chrome extension is here.)

I guess I'll make a post about this too, since it's directly relevant to LW.
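For anyone curious how such a script can work, here is a minimal sketch of the core idea (this is not Bakkot's actual code, and the names are hypothetical): remember when you last visited, then mark any comment newer than that.

```javascript
// Core logic: given a list of comments and the time of the last visit,
// return the ids of comments that should be highlighted as new.
function findNewCommentIds(comments, lastVisitMs) {
  // comments: array of {id, timestampMs}
  return comments
    .filter(c => c.timestampMs > lastVisitMs)
    .map(c => c.id);
}

// In the browser this would be driven by the DOM and localStorage, e.g.:
//   const lastVisit = Number(localStorage.getItem('last-visit') || 0);
//   findNewCommentIds(scrapeCommentsFromPage(), lastVisit)
//     .forEach(id => document.getElementById(id).classList.add('new-comment'));
//   localStorage.setItem('last-visit', String(Date.now()));
```

This also makes clear why "continue this thread" is hard: comments on pages that haven't been fetched can't be scraped or compared at all.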

Comment author: Risto_Saarelma 10 August 2014 08:23:10AM 1 point [-]

This doesn't seem to handle stuff deep enough in the reply chain to be behind "continue this thread" links. On the massive threads where you most need the thing, a lot of the discussion is going to end up beyond those.

Comment author: Bakkot 10 August 2014 03:29:52PM *  0 points [-]

It seems to work for me. "Continue this thread" brings you to a new page, so you'll have to set the time again, is all. Comments under a "Load more" won't be properly highlighted until you click in and out of the time textbox after loading them.

Comment author: Risto_Saarelma 11 August 2014 03:38:46AM *  1 point [-]

The use case is that I go to the top page of a huge thread, the only new messages are under a "Continue this thread" link, and I want the widget to tell me that there are new messages and help me find them. I don't want to have to open every "Continue" link to see if there are new messages under one of them.

Comment author: Bakkot 11 August 2014 04:48:01AM 0 points [-]

Ah. That's much more work, since there's no way of knowing whether there are new comments in such a situation without fetching all of those pages. I might make that happen at some point, but not tonight.

Comment author: NancyLebovitz 15 August 2014 04:49:53PM 1 point [-]

Thanks very much. I think there's an "unpack the whole page" program somewhere. Anyone remember it?

Comment author: [deleted] 05 August 2014 10:00:08PM 0 points [-]

Thanks a million!

Comment author: Creutzer 05 August 2014 08:39:00PM 1 point [-]

Great idea and nicely done! It also had the additional benefit of constituting my very first interaction with JavaScript, because I needed to modify some things. (Specifically, avoid the use of localStorage.)

Comment author: Bakkot 05 August 2014 08:46:53PM 0 points [-]

I'm curious what you used instead (cookies?), or did you just make a historyless version? Also, why did you need that? localStorage isn't exactly a new feature (hell, IE has supported it since version 8, I think).

Comment author: Creutzer 05 August 2014 09:02:29PM *  1 point [-]

It appears that my Firefox profile has some security features that mess with localStorage in a way that I don't understand. I used Greasemonkey's GM_[sg]etValue instead. (Important and maybe obvious, but not to me: their use has to be declared with @grant in the UserScript preamble.)

Comment author: Risto_Saarelma 05 August 2014 06:49:34AM 1 point [-]

This looks excellent.

Comment author: Bakkot 05 August 2014 02:22:27AM *  14 points [-]

I wrote a userscript to add a delay and checkbox reading "I swear by all I hold sacred that this comment supports the collective search for truth to the very best of my abilities." before allowing you to comment on LW. Done in response to a comment by army1987 here.

Edit: per NancyLebovitz and ChristianKl below, solicitations for alternative default messages are welcomed.

Comment author: [deleted] 05 August 2014 10:03:07PM 1 point [-]

Testing this...

Comment author: [deleted] 05 August 2014 10:06:03PM 0 points [-]

Nope, doesn't seem to work. (I am probably doing something wrong as I never used Greasemonkey before.)

Comment author: Bakkot 05 August 2014 10:35:50PM *  1 point [-]

Just tested this on a clean FF profile, so it's almost certainly something on your end. Did you successfully install the script? You should've gotten an image which looks something like this, and if you go to Greasemonkey's menu while on a LW thread, you should be able to see it in the list of scripts run for that page. Also, note that you have to refresh/load a new page for it to show up after installation.

Oh, and it only works for new comments, not new posts. It should look something like this, and similarly for replies.

ETA: helpful debugging info: if you can, let me know what page it's not working on, and let me know if there's any errors in the developer console (shift-control-K or command-option-K for windows and Mac respectively).

Comment author: [deleted] 09 August 2014 08:59:57AM 0 points [-]

I had interpreted “Save this file as” in an embarrassingly wrong way. It works now!

(Maybe editing the comment should automatically uncheck the box, otherwise I can hit “Reply”, check the box straight away, then start typing my comment.)

Comment author: NancyLebovitz 05 August 2014 02:59:25PM 5 points [-]

"To the very best of my abilities" seems excessive to me, or at least I seem to do reasonably well with "according to the amount of work I'm willing to put in, and based on pretty good habits".

I'm not even sure what I could do to improve my posting much. I could be more careful to not post when I'm tired or angry, and that probably makes sense to institute as a habit. On the other hand, that's getting rid of some of the dubious posting, which is not the same thing as improving the average or the best posts.

Comment author: satt 07 August 2014 02:01:16AM 2 points [-]

Even when I'd only been here a few weeks, your posting had already caught my eye as unusually mindful & civil, and nothing since has changed my impression that you're far better than most of us at conversing in good faith and with equanimity.

Comment author: ChristianKl 05 August 2014 01:38:42PM 2 points [-]

Given the recent discussion about how rituals can give the appearance of cultishness, it's probably not a good time to bring that up at the moment ;)

Comment author: BereczFereng 04 August 2014 11:30:07PM 14 points [-]

Does anyone know if something urgent has been going on with MIRI, other than the Effective Altruism Summit? I am a job application candidate -- I have no idea about my status as one. But I was promised a chat today, days ago, and nothing was arranged regarding time or medium. Now it is the end of the day. I sent my application weeks ago and have been in contact with 3 of the employees who seem to work on the management side of things. This is a bit frustrating. Ironically, I applied as Office Manager, and hope that (if hired) I would be doing my best to take care of these things -- putting things on a calendar, working to help create a protocol for 'rejecting' or 'accepting' or 'deferring' employee applications, etc. Have other people had similar, disorganized correspondences with MIRI? Or have they mostly been organized, suggesting that I take this experience as a sure sign of rejection?

Comment author: eggman 08 August 2014 06:25:09AM 3 points [-]

Apparently, in the days leading up to the Effective Altruism Summit, there was a conference on Artificial Intelligence keeping the research associates out of town. The source is a friend of mine interning at MIRI right now. So, anyway, they might have been even busier than you thought. I hope this has cleared up now.

Comment author: BereczFereng 10 August 2014 03:53:11AM 0 points [-]

Still haven't heard anything back from them in any sort of way. But thanks for making their circumstances even more clear!

Comment author: BereczFereng 13 August 2014 07:57:54PM 1 point [-]

Heard back & talked with them. My personal issue is now resolved.

Comment author: [deleted] 05 August 2014 01:41:24PM 10 points [-]

Have other people had similar, disorganized correspondences with MIRI?

Yes.

Comment author: Metus 04 August 2014 11:04:36PM 1 point [-]

What is the general opinion on neurofeedback? Apparently there is scientific evidence pointing to its efficacy, but have there been controlled studies showing greater benefit from neurofeedback than from traditional methods, where such methods are known?

Comment author: James_Miller 05 August 2014 03:51:10AM 2 points [-]

I have done a lot of neurofeedback. It's more of an art than a science right now. I think there have been many studies that have shown some benefit, although I don't know if any are long-term. But the studies might not be of much value, since there is so much variation in treatment: it is supposed to be customized for your brain. The first step is going to a neurofeedback provider and having him or her look at your qEEG to see how your brain differs from a typical person's brain. Ideally for treatment, you would say "I have this problem", and the provider would say "yes, this is due to your having ..., and with 20 sessions we can probably improve you." Although I am not a medical doctor, I would strongly advise anyone who can afford it to try neurofeedback before they try drugs such as anti-depressants.

Comment author: fubarobfusco 04 August 2014 09:32:34PM 6 points [-]

On the limits of rationality given flawed minds —

There is some fraction of the human species that suffers from florid delusions, due to schizophrenia, paraphrenia, mania, or other mental illnesses. Let's call this fraction D. By a self-sampling assumption, any person has a D chance of being a person who is suffering from delusions. D is markedly greater than one in seven billion, since delusional disorders are reported; there is at least one living human suffering from delusions.

Given any sufficiently interesting set of priors, there are some possible beliefs that have a less than D chance of being true. For instance, Ptolemaic geocentrism seems to me to have a less than D chance of being true. So does the assertion "space aliens are intervening in my life to cause me suffering as an experiment."

If I believe that a belief B has a < D chance of being true, and then I receive what I think is strong evidence supporting B, how can I distinguish the cases "B is true, despite my previous belief that it is quite unlikely" and "I have developed a delusional disorder, despite delusional disorders being quite rare"?
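To make this question concrete, here is a toy Bayes calculation; every number below is an illustrative assumption I've made up for the sketch, not something from the thread:

```python
# Toy calculation for "is B true, or have I developed a delusion?"
# All probabilities are illustrative assumptions.

prior_B = 1e-9          # prior that the unlikely belief B is true
prior_delusion = 1e-4   # prior that I have a delusional disorder (roughly D)

# Likelihood of the observation "I see apparently strong evidence for B":
p_evidence_given_B = 0.9          # if B is true, such evidence is expected
p_evidence_given_delusion = 0.5   # delusions readily manufacture convincing evidence
p_evidence_given_neither = 1e-6   # sane and B false: such evidence is very rare

p_evidence = (prior_B * p_evidence_given_B
              + prior_delusion * p_evidence_given_delusion
              + (1 - prior_B - prior_delusion) * p_evidence_given_neither)

posterior_B = prior_B * p_evidence_given_B / p_evidence
posterior_delusion = prior_delusion * p_evidence_given_delusion / p_evidence

print(posterior_B, posterior_delusion)
```

With priors like these, "I am deluded" ends up far more probable than "B is true": the evidence is screened off by the much larger prior on delusion.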

Comment author: gjm 05 August 2014 10:15:49AM 2 points [-]

The relevant number is probably not D (the fraction of people who suffer from delusions) but a smaller number D0 (the fraction of people who suffer from this particular kind of delusion). In fact, not D0 but the probably-larger-in-this-context number D1 (the fraction of people in situations like yours before this happened who suffer from the particular delusion in question).

On the other hand, something like the original D is also relevant: the fraction of people-like-you whose reasoning processes are disturbed in a way that would make you unable to evaluate the available evidence (including, e.g., your knowledge of D1) correctly.

Aside from those quibbles, some other things you can do (mostly already mentioned by others here):

  • Talk to other people whom you consider sane and sensible and intelligent.
  • Check your reasoning carefully. Pay particular attention to points about which you feel strong emotions.
  • Look for other signs of delusions.
  • Apply something resembling scientific method: look for explicitly checkable things that should be true if B and false if not-B, and check them.
  • Be aware that in the end one really can't reliably distinguish delusions from not-delusions from the inside.
Comment author: ChristianKl 05 August 2014 09:29:42AM 1 point [-]

If I believe that a belief B has a < D chance of being true, and then I receive what I think is strong evidence supporting B, how can I distinguish the cases "B is true, despite my previous belief that it is quite unlikely" and "I have developed a delusional disorder, despite delusional disorders being quite rare"?

The basic idea is to talk about your belief in detail with a trusted friend that you consider sane.

Writing your own thought processes down in a diary also helps to be better able to evaluate it.

Comment author: Manfred 04 August 2014 11:07:20PM 8 points [-]

For you to rule out a belief (e.g. geocentrism) as totally unbelievable, not only does it have to be less likely than insanity, it has to be less likely than insanity that looks like rational evidence for geocentrism.

You can test yourself for other symptoms of delusions - and one might think "but I can be deluded about those too," but you can think of it like requiring your insanity to be more and more specific and complicated, and therefore less likely.

Comment author: mathnerd314 04 August 2014 10:59:23PM 2 points [-]

The simple answer is to ask someone else, or better yet a group; if D is small, then D^2 or D^4 will be infinitesimal. However, delusions are "infectious" (see Mass hysteria), so this is not really a good method unless you're mostly isolated from the main population.
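The D^2 / D^4 arithmetic above can be sketched in a couple of lines, under the crucial (and, as noted, fragile) independence assumption:

```python
# If each of k observers independently has probability D of being deluded,
# the chance that they are *all* deluded falls off as D**k.
# "Infectious" delusions are exactly what breaks this independence.

D = 0.01  # illustrative per-person delusion rate

for k in (1, 2, 4):
    print(k, D ** k)  # shrinks geometrically as observers are added
```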

The more complicated answer is to track your beliefs and the evidence for each belief, and then when you get new evidence for a belief, add it to the old evidence and re-evaluate. For example, replacing an old wives' tale with a peer-reviewed study is (usually) a no-brainer. On the other hand, if you have conflicting peer-reviewed studies, then your confidence in both should decrease and you should go back to the old wives' tale (which, being old, is probably useful as a belief, regardless of truth value).

Finally, the defeatist answer is that you can't actually distinguish that you are delusional. With the film Shutter Island in mind, I hope you can see that almost nothing is going to shake delusions; you'll just rationalize them away regardless. If you keep notes on your beliefs, you'll dismiss them as being written by someone else. People will either pander to your fantasy or be dismissed as crooks. Every day will be a new one, starting over from your deluded beliefs. In such a situation there's not much hope for change.

For the record, I disagree with "delusional disorders being quite rare"; I believe D is somewhere between 0.5 and 0.8. Certainly, only 3% of these are "serious", but I could fill a book with all of the ways people believe something that isn't true.

Comment author: ChristianKl 05 August 2014 09:30:27AM *  6 points [-]

For example, replacing an old wives' tale with a peer-reviewed study is (usually) a no-brainer.

Given the replication rates of scientific studies, a single study might not be enough. Single studies that go against your intuition are not enough reason to update, especially if you only read the abstract.

No need to get people to wash their hands before you do a business deal with them.

Comment author: mathnerd314 05 August 2014 08:12:13PM *  0 points [-]

Given replication rates of scientific studies a single study might not be enough.

Enough for what? My question is whether my hair stylist saying "Shaving makes the hair grow back thicker." is more reliable than http://onlinelibrary.wiley.com/doi/10.1002/ar.1090370405/abstract. In general, the scientists have put more thought into their answer and have conducted actual experiments, so they are more reliable. I might revise that opinion if I find evidence of bias, such as a study being funded by a corporation that finds favorable results for their product, but in my line of life such studies are rare.

Single studies that go against your intuition are not enough reason to update. Especially if you only read the abstract.

I find that in most cases I simply don't have an intuition. What's the population of India? I can't tell you, I'd have to look it up. In the rare cases where I do have some idea of the answer, I can delve back into my memory and recreate the evidence for that idea, then combine it with the study; the update happens regardless of how much I trust the study. I suppose that a well-written anecdote might beat a low-powered statistical study, but again such cases are rare (more often than not they are studying two different phenomena).

No need to get people to wash their hands before you do a business deal with them.

I wash my hands after shaking theirs, as soon as convenient. Or else I just take some ibuprofen after I get sick. (Not certain what you were trying to get at here...)

Comment author: ChristianKl 06 August 2014 09:26:37AM *  0 points [-]

I might revise that opinion if I find evidence of bias, such as a study being funded by a corporation that finds favorable results for their product, but in my line of life such studies are rare.

Humans are biased to overrate bad human behavior as a cause for mistakes. The sensible thing is to check whether similar studies replicate.

Regardless, every publish-or-perish paper has an inherent bias towards spectacular results.

Enough for what?

Let's say wearing red every day.

Or thinking that those Israeli judges don't give people parole because they don't have enough sugar in their blood right before mealtime. Going and giving every judge a candy before each case to make it fair isn't warranted.

I find that in most cases I simply don't have an intuition. What's the population of India? I can't tell you, I'd have to look it up.

That's fixable by training Fermi estimates.

I wash my hands after shaking theirs, as soon as convenient. Or else I just take some ibuprofen after I get sick. (Not certain what you were trying to get at here...)

It's a reference to the controversy about whether washing your hands primes you to be more moral. It's an experimental social science result that failed to replicate.

Comment author: mathnerd314 06 August 2014 06:02:01PM *  0 points [-]

Humans are biased to overrate bad human behavior as a cause for mistakes.

If a crocodile bites off your hand, it's generally your fault. If the hurricane hits your house and kills you, it's your fault for not evacuating fast enough. In general, most causes are attributed to humans, because that allows actually considering alternatives. If you just attributed everything to, say, God, then it doesn't give any ideas. I take this a step further: everything is my fault. So if I hear about someone else doing something stupid, I try to figure out how I could have stopped them from doing it. My time and ability are limited in scope, so I usually conclude they were too far away to help (space-like separation), but this has given useful results on a few occasions (mostly when something I'm involved in goes wrong).

The decent thing is to orient yourself on whether similar studies replicate.

Not really, since the replication is more likely to fail than the original study (due to inexperience), and is subject to less peer-review scrutiny (because it's a replication). See http://wjh.harvard.edu/~jmitchel/writing/failed_science.htm. The correct thing to consider is followup work of any kind; for example, if a researcher has a long line of publications all saying the same thing in different experiments, or if it's widely cited as a building block of someone's theory, or if there's a book on it.

Regardless every publish-or-perish paper has an inherent bias to find spectacular results.

Right, people only publish their successes. There are so many failures that it's not worth mentioning or considering them. But they don't need to be "spectacular", just successful. Perhaps you are confusing publishing at all, even in e.g. a blog post, with publishing in "prestigious" journals, which indeed only publish "spectacular" results; looking at only those would give you a biased view, certainly, but as soon as you expand your field of view to "all information everywhere" then that bias (mostly) goes away, and the real problem is finding anything at all.

Let's say wearing red every day.

So the study there links red to aggression; I don't want to be aggressive all the time, so why should I wear red all the time? For example, I don't want a red car because I don't want to get pulled over by the cops all the time. Similarly for most results; they're very limited in scope, of the form "if X then Y" or even "X associate with Y". Many times, Y is irrelevant, so I don't need to even consider X.

Thinking that those Israeli judges don't give people parole because they don't have enough sugar in their blood right before mealtime. Going and giving every judge a candy before hearing every case to make it fair isn't warranted.

Sure, but if I'm involved with a case then I'll be sure to try to get it heard after lunchtime, and offer the judge some candy if I can get away with it.

That's fixable by training Fermi estimates.

You can memorize populations or memorize the Fermi factors and how to combine them, but the point stands regardless; you still have to remember something.
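For what it's worth, a Fermi estimate of India's population could look like the sketch below; both factors are rough assumptions from memory, not looked-up values:

```python
# A minimal Fermi sketch for "what's the population of India?":
# instead of recalling the number, multiply a few coarse factors.
# Both factors are rough order-of-magnitude assumptions.

households = 250e6         # guess at the number of Indian households
people_per_household = 5   # large, often multi-generation households

estimate = households * people_per_household
print(f"{estimate:.1e}")   # roughly 1.2e9
```

That lands close to the actual figure (around 1.25-1.3 billion in 2014), which is the point: a couple of remembered factors get you within a small multiplicative error.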

It's a reference to the controversy about whether washing your hands primes you to be more moral. It's a experimental social science result that failed to replicate.

Ah, social science. I need to take more courses in statistics before I can comment... so far I have been sticking to the biology/chemistry/physics side of things (where statistics are rare and the effects are obvious from inspection).

Comment author: mathnerd314 07 August 2014 07:38:57PM 0 points [-]

For example, I don't want a red car because I don't want to get pulled over by the cops all the time.

The car story appears to be a myth nowadays, but that could just be due to the increased use of radar guns and better police training. Radar guns were introduced around the 1950's so all of their policemen quotes are too recent to tell.

Comment author: ChristianKl 06 August 2014 11:41:17PM 0 points [-]

So if I hear about someone else doing something stupid, I try to figure out how I could have stopped them from doing it.

Conflating whether or not you could do something to stop them with finding truth makes it harder to have an accurate view of whether or not the result is true.

Accepting reality for what it is helps you to have an accurate perception of it. Only once you understand the territory should you go out and try to change things. If you do the second step before the first, you mess up your epistemology. You fall for a bunch of human biases that evolved for figuring out whether the neighboring tribe might attack yours, and that aren't useful for a clear understanding of today's complex world.

There are so many failures that it's not worth mentioning or considering them. But they don't need to be "spectacular", just successful. Perhaps you are confusing publishing at all, even in e.g. a blog post, with publishing in "prestigious" journals, which indeed only publish "spectacular" results

I spoke about incentives. Researchers have an incentive to publish in prestigious journals and optimize their research practices for doing so. The case with blogs isn't much different. Successful bloggers write polarizing posts that get people talking and engaging with the story, even when there would be a way to be more accurate and less polarizing. The incentives point towards "spectacular".

Scott H Young, whom I respect and who's a nice fellow, wrote a post against spaced repetition, and yet, in a later post, he now recommends using Anki for learning vocabulary.

You can memorize populations or memorize the Fermi factors and how to combine them, but the point stands regardless; you still have to remember something.

It's not about remembering, it's about being able to make estimates even when you aren't sure. And you can calibrate your error intervals.

So the study there links red to aggression; I don't want to be aggressive all the time, so why should I wear red all the time?

Aggression is not the central word. Status and dominance also appear. People do a bunch of things to appear higher status.

One of the studies in question suggested that red makes women more attracted to you, as measured by physical distance in conversation. Another one suggested increased attraction based on photo ratings.

I actually did the comparison on HotOrNot. I tested a blue shirt against a red shirt, photoshopped so that nothing besides the color was different. For my photo, blue scored more attractive than red, despite the studies saying that red is the color that raises attractiveness.

I have been sticking to the biology/chemistry/physics side of things (where statistics are rare and the effects are obvious from inspection).

The replication rates for cancer biology seem to be even worse than for psychology, if you trust the Amgen researchers, who could replicate only 6 of the 55 landmark studies they tried.

Comment author: NancyLebovitz 07 August 2014 04:58:40AM 1 point [-]

Probably a minor point, but were both the red and blue shirts photoshopped? If one of them was an actual photo, it might have looked more natural (color reflected on to your face) than the other.

Comment author: ChristianKl 07 August 2014 10:28:32AM 0 points [-]

In this case no, the blue was the original; you are right that this might have skewed the results. HotOrNot's internal algorithms were also a bit opaque.

But to be fair, the setup of the original study wasn't natural either: in those studies, the color was the border of the photo.

If I wanted to repeat the experiment, I would like to do it on Amazon Mechanical Turk. At the moment I don't really have the spare money for projects like that, but maybe someone else on LW cares enough about dressing attractively, wants to optimize, and has the money.

The whole thing might also work well for a blogger willing to spend a bit of cash to write an interesting post.

Especially for online dating sites like Tinder, photo optimization through empirical measurement can increase success rates a bit.

Comment author: mathnerd314 07 August 2014 08:18:10PM *  0 points [-]

Conflating whether or not you could do something to stop them with finding truth makes it harder to have an accurate view of whether or not the result is true. Accepting reality for what it is helps to have an accurate perception of reality.

I'm not certain where you see conflation. I have separate storage areas for things to think about, evidence, actions, and risk/reward evaluations. They interact as described here. Things I hear about go into the "things to think about" list.

Only once you understand the territory should you go out and try to change things.

The world is changing so I must too. If the apocalypse is tomorrow, I'm ready. I don't need to "understand" the apocalypse or its cause to start preparing for it. If I learn something later that says I did the wrong thing, so be it. I prefer spending most of my time trying to change things to sitting in a room all day trying to understand. Indeed, some understanding can only be gained through direct experience. So I disagree with you here.

If you do the second step before the first you mess up your epistemology. You fall for a bunch of human biases evolved for finding out whether the neighboring tribe might attack your tribe that aren't useful for clear understanding of todays complex world.

The decision procedure I outlined above accounts for most biases; you're welcome to suggest revisions or stuff I should read.

I spoke about incentives. [...] The incentives go towards "spectual".

You didn't, AFAICT; you spoke about "inherent biases". I think my point still stands though; averaging over "all information everywhere" counteracts most perverse incentives, since perversion is rare, and the few incentives left are incentives that are shared among humans such as survival, reproduction, etc. In general humans are good at that sort of averaging, although of course there are timing and priming effects. Researchers/bloggers are incentivized to produce good results because good results are the most useful and interesting. Good results lead to good products or services (after a 30 year lag). The products/services lead to improved life (at least for some). Improved life leads to more free time and better research methods. And the cycle goes on, the end result AFAICT is a big database of mostly-correct information.

Scott H Young whom I respect and who's a nice fellow wrote his post against spaced repetition and still know recommends now in a later post the usage of Anki for learning vocabulary.

His post is entitled "Why Forgetting Can Be Good" and his mention of Anki is limited to "I’m skeptical of the value of an SRS for most domains of knowledge." If he then recommends Anki for learning vocabulary, this changes relatively little; he's simply found a knowledge domain where he found SRS useful. Different studies, different conclusions, different contributions to different decisions.

It's not about remembering it's about being able to make estimates even when you aren't sure.

You're never sure, so why mention "even when you aren't sure", since it's implied? Striking that out...

It's not about remembering it's about being able to make estimates.

Estimation comes after the evidence-gathering phase. If you have no evidence you can make no estimates. Fermi estimation is just another estimation method, so it doesn't change this. If you have no memory, then you have no evidence. So it is about remembering. "Those who cannot remember the past are condemned to repeat it".

And you can calibrate your error intervals.

If you have no estimates you can't have error intervals either. Indeed, you can't do calibration until you have a distribution of estimates.

Aggression is not the central word. Status and dominance also appear. People do a bunch of things to appear higher status.

It looks like the central word is definitely dominance. Stringing the top words into a sentence I get "Sports teams wear red to show dominance and it has an effect on referees' performance". I guess I was going off of the Mandrill story where signs of dominance are correlated with willingness to be aggressive. This study says dominance and threat are emphasized by wearing red, where "threat" is measured by "How threatening (intimidating, aggressive) did you feel?". Some other papers also relate dominance to aggressiveness. So I feel comfortable confusing the two, since they seem to be strongly correlated and relatively flexible in terms of definition.

The comments do focus on status, so I guess you have a point. But I generally skip over the comments when an article is linked to. And the status discussion was in the comments of an Overcoming Bias post, so by no means central.

One of the studies in question suggested that it makes woman more attracted to you measured by the physical distance in conversation. Another one suggest that attraction based on photo ratings. I actually did the comparison on hotOrNot. I tested a blue shirt against a red shirt. Photoshopped so nothing besides the color with different. For my photo blue scored more attractive than red despite the studies saying that red is the color that raises attractiveness.

Would you be referring to, among others, this study? Unfortunately... it still looks like experimental psychology, so again I have to plead lack of statistics.

The replication rates for cancer biology seem to be even worse than for psychology if you trust the Amgen researchers who could only replicate 6 of 55 landmark studies that they tried to replicate.

I've mostly been reading Army / DoD studies, which have a different funding model. But I guess cancer will become relevant eventually (preferably later rather than sooner).

Side note: does LW have a "collapse threads more than N levels deep" feature like reddit? It probably should have triggered a few replies ago, so I didn't post on the wrong child...

Comment author: RichardKennaway 05 August 2014 07:14:52AM 3 points [-]

For the record, I disagree with "delusional disorders being quite rare"; I believe D is somewhere between 0.5 and 0.8. Certainly, only 3% of these are "serious", but I could fill a book with all of the ways people believe something that isn't true.

What sort of beliefs are you talking about here? Are you classifying simply being wrong about something as a "delusional disorder"?

Comment author: mathnerd314 05 August 2014 08:20:41PM *  1 point [-]

Exhibiting symptoms often considered signs of mental illness. For example, this says 38.6% of the general population have hallucinations. This says 40% of the general population had paranoid thoughts. Presumably these groups aren't exactly the same, so there you go: between 0.5 and 0.8 of the general population. You can probably pull together some more studies with similar results for other symptoms.

Comment author: Ichneumon 04 August 2014 07:43:19PM 1 point [-]

Does anyone have any experience or thoughts regarding Cal Newport's "Study Hacks" blog, or his books? I'm trying to get an idea of how reliable his advice is before, say, reading his book about college, or reading all of the blog archives.

Comment author: Kaj_Sotala 05 August 2014 06:42:04AM 2 points [-]
Comment author: Benito 04 August 2014 10:57:23PM 2 points [-]

Cognito Mentoring refer to him a fair bit, and often in mild agreement. Check their blog and wiki.

Comment author: chaosmage 04 August 2014 07:21:31PM *  2 points [-]

I've been looking for tools to help organize complex arguments and systems into diagrams, and ran into Flying Logic and Southbeach modeller. Could anyone here with experience using these comment on their value?

Comment author: mathnerd314 10 August 2014 03:49:23PM *  0 points [-]

And UnBBayes does computational analyses, similar to Flying Logic, except it uses Bayesian probability.

Comment author: mathnerd314 04 August 2014 11:17:04PM *  1 point [-]

I don't have experience with those, but I'll recommend Graphviz as a free (and useful) alternative. See e.g. http://k0s.org/mozilla/workflow.svg

Comment author: Pablo_Stafforini 04 August 2014 07:17:29PM *  7 points [-]

There is a common idea in the “critical thinking”/"traditional rationality" community that (roughly) you should, when exposed to an argument, either identify a problem with it or come to believe the argument’s conclusion. From a Bayesian framework, however, this idea seems clearly flawed. When presented with an argument for a certain conclusion, my failure to spot a flaw in the argument might be explained by either the argument’s being sound or by my inability to identify flawed arguments. So the degree to which I should update in either direction depends on my corresponding prior beliefs. In particular, if I have independent evidence that the argument’s conclusion is false and that my skills for detecting flaws in arguments are imperfect, it seems perfectly legitimate to say, “Look, your argument appears sound to me, but given what I know, both about the matter at hand and about my own cognitive abilities, it is much more likely that there’s a flaw in your argument which I cannot detect than that its conclusion is true.” Yet it is extremely rare to see LW folk or other rationalists say things like this. Why is this so?
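The Bayesian point can be made concrete with a toy model (all numbers here are hypothetical illustrations): let H be "the conclusion is true" and F be "I fail to spot a flaw in the argument for it".

```python
# Toy Bayes update: H = "the conclusion is true",
# F = "I fail to spot a flaw in the argument for it".
# All numbers are hypothetical, chosen only to illustrate the shape.

def posterior(prior_h, p_fail_given_true, p_fail_given_false):
    """P(H | F) by Bayes' rule."""
    num = p_fail_given_true * prior_h
    den = num + p_fail_given_false * (1 - prior_h)
    return num / den

# Strong independent evidence against the conclusion (prior 5%),
# plus a fallible flaw-detector that misses half of all flaws.
p = posterior(prior_h=0.05, p_fail_given_true=0.95, p_fail_given_false=0.50)
print(round(p, 3))  # 0.091 -- the unrebutted argument moves a 5% prior only to ~9%
```

With those numbers, "your argument looks sound to me, but I still think the conclusion is probably false" is exactly what the update prescribes: failing to find a flaw is evidence, but weak evidence when your flaw-detector is unreliable.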

Comment author: Viliam_Bur 05 August 2014 08:05:48PM 1 point [-]

A similar situation that used to happen frequently to me in real life was when the argument was too long, too complex, or used information that I couldn't verify... or could verify only with a lot of time... something like: "There is this 1000-page book containing complex philosophical arguments and information from non-mainstream but cited sources, which totally proves that my religion is correct." And there is nothing obviously incorrect within the first five pages. But I am certainly not going to read it all. And the other person tries to use my self-image as an intelligent person against me, insisting that I should promise that I will read the whole book and then debate about it (which is supposedly the rational thing to do in such a situation: hey, here is the evidence, you just refuse to look at it), or else I am not really intelligent.

And in such situations I just waved my hands and said -- well, I guess you just have to consider me unintelligent -- and went away.

I didn't think about how to formalize this properly. It was just this: I recognize the trap, and refuse to walk inside. If it happened to me these days, I could probably try explaining my reaction in Bayesian terms, but it would be still socially awkward. I mean, in the case of religion, the true answer would show that I believe my opponent is either dishonest or stupid (which is why I expect him to give me false arguments); which is not a nice thing to say to people. And yeah, it seems similar to ignoring evidence for irrational reasons.

Comment author: Lumifer 05 August 2014 08:14:59PM 1 point [-]

Nothing, including rationality, requires you to look at ALL evidence that you could possibly access. Among other things, your time is both finite and valuable.

Comment author: palladias 05 August 2014 03:38:51PM 3 points [-]

I say things like this a lot in contexts where I know there are experts, but I have put no effort into learning which are the reliable ones. So when someone asserts something about (a) nutritional science (b) Biblical translation nuances (c) assorted other things in this category, I tend to say, "I really don't have the relevant background to evaluate your argument, and it's not a field I'm planning to do the legwork to understand very well."

Comment author: gjm 05 August 2014 10:21:51AM 0 points [-]

Related link: Peter van Inwagen's article Is it wrong everywhere, always, and for everyone, to believe anything on insufficient evidence?. van Inwagen suggests not, on the grounds that if it were then no philosopher could ever continue believing something firmly when there are other smarter equally well informed philosophers who strongly disagree. I find this argument less compelling than van Inwagen does.

Comment author: Benito 08 August 2014 12:34:00PM 0 points [-]

Haha. You should believe exactly what the evidence suggests, and exactly to the degree that it suggests it. The argument is also an amusing example of 'one man's modus ponens...'.

Comment author: SolveIt 05 August 2014 09:17:34AM 5 points [-]

This idea seems like a manifestation of epistemic learned helplessness.

Comment author: ChristianKl 05 August 2014 09:11:35AM 3 points [-]

Yet it is extremely rare to see LW folk or other rationalists say things like this. Why is this so?

In my experience there are LW people who would in such cases simply declare that they won't be convinced on the topic at hand and suggest changing the subject.

I particularly remember a conversation at the LW community camp about geopolitics where a person simply declared that they aren't able to evaluate arguments on the matter and therefore won't be convinced.

Comment author: philh 06 August 2014 02:46:57PM 0 points [-]

That was probably me. I don't think I handled the situation particularly gracefully, but I really didn't want to continue that conversation, and I couldn't see whether the person in question was wearing a Crocker's rules tag.

I don't remember my actual words, but I think I wasn't trying to go for "nothing could possibly convince me", so much as "nothing said in this conversation could convince me".

Comment author: ChristianKl 06 August 2014 03:22:09PM 0 points [-]

It's still more graceful than the "I think you are wrong based on my heuristics but I can't tell you where you are wrong" that Pablo Stafforini advocates.

Comment author: Protagoras 05 August 2014 12:09:40AM 0 points [-]

Because that ends the discussion. I think a lot of people around here just enjoy debating arguments (certainly I do).

Comment author: iarwain1 04 August 2014 11:45:10PM 1 point [-]

I actually do say things like this pretty frequently, though I haven't had the opportunity to do so on LW yet.

Comment author: Lumifer 04 August 2014 07:24:49PM 11 points [-]

Why is this so?

Because the case where you are entirely wedded to a particular conclusion and want to just ignore the contrary evidence would look awfully similar...

Comment author: faul_sname 07 August 2014 07:05:28AM 0 points [-]

Awfully similar, but not identical.

In the first case, you have independent evidence that the conclusion is false, so you're basically saying "If I considered your arguments in isolation, I would be convinced of your conclusion, but here are several pieces of external evidence which contradict your conclusion. I trust this external evidence more than I trust my ability to evaluate arguments."

In the second case, you're saying "I have already concluded that your conclusion is false because I have concluded that mine is true. I think it's more likely that there is a flaw in your conclusion that I can't detect than that there is a flaw in the reasoning that led to my conclusion."

The person in the first case is far more likely to respond with "I don't know" in response to the question of "So what do you think the real answer is, then?" In our culture (both outside and, to a lesser but still significant degree, inside LW), there is a stigma against arguing against a hypothesis without providing an alternative hypothesis. An exception is the argument of the form "If Y is true, how do you explain X?" which is quite common. Unfortunately, this form of argument is used extensively by people who are, as you say, entirely wedded to a particular conclusion, so using it makes you seem like one of those people and therefore less credible, especially in the eyes of LWers.

Rereading your comment, I see that there are two ways to interpret it. The first is "Rationalists do not use this form of argument because it makes them look like people who are wedded to a particular conclusion." The second is "Rationalists do not use this form of argument because it is flawed -- they see that anyone who is wedded to a particular conclusion can use it to avoid updating on evidence." I agree with the first interpretation, but not the second -- that form of argument can be valid, but reduces the credibility of the person using it in the eyes of other rationalists.

Comment author: Lumifer 07 August 2014 02:46:44PM 1 point [-]

In the first case, you have independent evidence that the conclusion is false

"Independent evidence" is a tricky concept. Since we are talking Bayesianism here, at the moment you're rejecting the argument it's not evidence any more, it's part of your prior. Maybe there was evidence in the past that you've updated on, but when you refuse to accept the argument, you're refusing to accept it solely on the basis of your prior.

In the second case, you're saying "I have already concluded that your conclusion is false because I have concluded that mine is true."

Which is pretty much equivalent to saying "I have seen evidence that your conclusion is false, so I already updated that it is false and my position is true and that's why I reject your argument".

I see that there are two ways to interpret it.

I think both apply.

Comment author: Azathoth123 06 August 2014 04:40:13AM 2 points [-]

In fact that case is just a special case of the former with you having bad priors.

Comment author: Lumifer 06 August 2014 02:46:58PM 1 point [-]

Not quite, your priors might be good. We're talking here about ignoring evidence and that's a separate issue from whether your priors are adequate or not.

Comment author: jamesf 04 August 2014 05:36:21PM 2 points [-]

Suppose you wanted to find out all the correlates for particular Big Five personality traits. Where would you look, besides the General Social Survey?

Comment author: gwern 04 August 2014 07:06:18PM 3 points [-]

Would 'Google Scholar' be too glib an answer here?

Comment author: jamesf 04 August 2014 07:42:53PM 2 points [-]

It gave me mostly psychological and physiological correlates. I'm interested more in behavioral and social/economic things. I suppose you can get from the former to the latter, though with much less confidence than a directly observed correlation.

Your answer is exactly as glib as it should be, but only because I didn't really specify what I'm curious about.

Comment author: Pablo_Stafforini 04 August 2014 05:33:43PM *  2 points [-]

Another piece of potentially useful information that may be new to some folks here: sleeping more than ~7.5 hours is associated with a higher mortality risk (and the risk is comparable to sleeping less than ~5 hours).

Relevant literature reviews:

Cappuccio FP, D'Elia L, Strazzullo P, et al. Sleep duration and all-cause mortality: a systematic review and meta-analysis of prospective studies. Sleep 2010;33(5):585-592.

Background: Increasing evidence suggests an association between both short and long duration of habitual sleep with adverse health outcomes. Objectives: To assess whether the population longitudinal evidence supports the presence of a relationship between duration of sleep and all-cause mortality, to investigate both short and long sleep duration and to obtain an estimate of the risk. Methods: We performed a systematic search of publications using MEDLINE (1966-2009), EMBASE (from 1980), the Cochrane Library, and manual searches without language restrictions. We included studies if they were prospective, had follow-up >3 years, had duration of sleep at baseline, and all-cause mortality prospectively. We extracted relative risks (RR) and 95% confidence intervals (CI) and pooled them using a random effect model. We carried out sensitivity analyses and assessed heterogeneity and publication bias. Results: Overall, the 16 studies analyzed provided 27 independent cohort samples. They included 1,382,999 male and female participants (follow-up range 4 to 25 years), and 112,566 deaths. Sleep duration was assessed by questionnaire and outcome through death certification. In the pooled analysis, short duration of sleep was associated with a greater risk of death (RR: 1.12; 95% CI 1.06 to 1.18; P < 0.01) with no evidence of publication bias (P = 0.74) but heterogeneity between studies (P = 0.02). Long duration of sleep was also associated with a greater risk of death (1.30; [1.22 to 1.38]; P < 0.0001) with no evidence of publication bias (P = 0.18) but significant heterogeneity between studies (P < 0.0001). Conclusion: Both short and long duration of sleep are significant predictors of death in prospective population studies.

Grandner MA, Hale L, Moore M, et al. Mortality associated with short sleep duration: the evidence, the possible mechanisms, and the future. Sleep Med Rev 2010;14(3):191-203.

This review of the scientific literature examines the widely observed relationship between sleep duration and mortality. As early as 1964, data have shown that 7-h sleepers experience the lowest risks for all-cause mortality, whereas those at the shortest and longest sleep durations have significantly higher mortality risks. Numerous follow-up studies from around the world (e.g., Japan, Israel, Sweden, Finland, the United Kingdom) show similar relationships. We discuss possible mechanisms, including cardiovascular disease, obesity, physiologic stress, immunity, and socioeconomic status. We put forth a social–ecological framework to explore five possible pathways for the relationship between sleep duration and mortality, and we conclude with a four-point agenda for future research.

Grandner MA, Drummond SP. Who are the long sleepers? Towards an understanding of the mortality relationship. Sleep Med Rev. Oct 2007;11(5):341–60.

While much is known about the negative health implications of insufficient sleep, relatively little is known about risks associated with excessive sleep. However, epidemiological studies have repeatedly found a mortality risk associated with reported habitual long sleep. This paper will summarize and describe the numerous studies demonstrating increased mortality risk associated with long sleep. Although these studies establish a mortality link, they do not sufficiently explain why such a relationship might occur. Possible mechanisms for this relationship will be proposed and described, including (1) sleep fragmentation, (2) fatigue, (3) immune function, (4) photoperiodic abnormalities, (5) lack of challenge, (6) depression, or (7) underlying disease process such as (a) sleep apnea, (b) heart disease, or (c) failing health. Following this, we will take a step back and carefully consider all of the historical and current literature regarding long sleep, to determine whether the scientific evidence supports these proposed mechanisms and ascertain what future research directions may clarify or test these hypotheses regarding the relationship between long sleep and mortality.
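As an aside for readers who want to interrogate pooled figures like these: the standard error of the log relative risk can be recovered from a reported 95% CI, assuming the interval is symmetric on the log scale (as it is for these inverse-variance pooled estimates). A sketch using the long-sleep numbers quoted above:

```python
# Recover the standard error and z-score of a pooled relative risk
# from its reported 95% CI, assuming the interval is symmetric on the
# log scale (the usual case for meta-analytic pooled estimates).

import math

def log_rr_stats(rr, ci_low, ci_high, z_crit=1.96):
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * z_crit)
    z = math.log(rr) / se
    return se, z

# Long-sleep figures from Cappuccio et al.: RR 1.30, 95% CI [1.22, 1.38].
se, z = log_rr_stats(1.30, 1.22, 1.38)
print(round(se, 4), round(z, 2))  # 0.0314 8.35
```

A z-score above 8 says the pooled association is statistically very solid; it says nothing, of course, about whether the relationship is causal, which is the real point of contention below.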

Comment author: ChristianKl 04 August 2014 10:04:11PM 0 points [-]

Based on that data, I think a blanket suggestion that everybody should sleep 8 hours isn't warranted. It seems that some people with illnesses or who are exposed to other stressors need 8 hours.

I would advocate that everybody sleeps enough to be fully rested instead of trying to sleep a specific number of hours that some authority considers to be right for the average person.

I think the same goes for daily water consumption. Optimize values like that in a way that makes you feel good on a daily basis instead of targeting a value that seems to be optimal for the average person.

Comment author: Pablo_Stafforini 04 August 2014 10:14:57PM *  0 points [-]

What are your grounds for making this recommendation? The parallel suggestion that everyone should eat enough to feel fully satisfied doesn't seem like a recipe for optimal health, so why think things should be different with sleep? Indeed, the analogy between food and sleep is drawn explicitly in one of the papers I cited, and it seems that a "wisdom of nature" heuristic (due to "changed tradeoffs"; see Bostrom & Sandberg, sect. 2) might support a policy of moderation in both food and sleep. Although this is all admittedly very speculative.

Comment author: ChristianKl 04 August 2014 10:42:02PM 1 point [-]

What are your grounds for making this recommendation?

Years of thinking about the issue that aren't easily compressed.

In general alarm clocks don't seem to be healthy devices. The idea of habitually breaking sleep at a random point of the sleep cycle doesn't seem good.

Let's say we look at a person who needs 8 hours of sleep to feel fully rested. The person has health issue X. When we solve X, they only need 7 hours of sleep. The obvious way isn't to wake up the person after 7 hours of sleep but to actually fix X.

That idea of sleep both reflects the research showing that forcibly cutting people's sleep in a way that leads to sleep deprivation is bad, and explains why the people who sleep 8 hours on average die earlier than the people who sleep 7 hours.

If I get a cold my body needs additional sleep during that time. I have a hard time imagining that cutting that sleep away is healthy.

If we look at eating I also think similar things are true. There's not much evidence that forced dieting is healthy. Fixing underlying issues seems to be preferable to forcibly limiting food consumption.

While we are at the topic of sleep and mortality it's worth pointing out that sleeping pills are very harmful to health.

Comment author: gwern 04 August 2014 07:09:31PM *  10 points [-]

I don't find these results to be of much value. There's a long history of various sleep-duration correlations turning out to be confounds from various diseases and conditions (as your quote discusses), so there's more than usual reason to minimize the possibility of causation, and if you do that, why would anyone care about the results? I don't think a predictive relationship is much good for say retirement planning or diagnosing your health from your measured sleep. And on the other hand, there's plenty of experimental studies on sleep deprivation, chronic or acute, affecting mental and physical health, which overrides these extremely dubious correlates. It's not a fair fight.

Comment author: Pablo_Stafforini 04 August 2014 07:27:02PM *  1 point [-]

Yes, my primary reason for posting these studies was actually to elicit a discussion about the kinds of conclusions we may or may not be entitled to draw from them (though I failed to make this clear in my original comment). I would like to have a better epistemic framework for drawing inferences from correlational studies, and it is unclear to me whether the sheer (apparent) poor track-record of correlational studies when assessed in light of subsequent experiments is enough to dismiss them altogether as sources of evidence for causal hypotheses. And if we do accept that sometimes correlational studies are evidentially causally relevant, can we identify an explicit set of conditions that need to obtain for that to be the case, or are these grounds so elusive that we can only rely on subjective judgment and intuition?

Comment author: sixes_and_sevens 04 August 2014 01:55:32PM *  9 points [-]

Oblique request made without any explanation: can anyone provide examples of beliefs that are incontrovertibly incorrect, but which intelligent people will nonetheless arrive at quite reasonably through armchair-theorising?

I am trying to think up non-politicised, non-controversial examples, yet every one I come up with is a reliable flame-war magnet.

ETA: I am trying to reason about disputes where on the one hand you have an intelligent, thoughtful person who has very expertly reasoned themselves into a naive but understandable position p, and on the other hand, you have an individual who possesses a body of knowledge that makes a strong case for the naivety of p.

What kind of ps exist, and do they have common characteristics? All I can come up with are politically controversial ps, but I'm starting my search from a politically-controversial starting point. The motivating example for this line of reasoning is so controversial that I'm not touching it with a shitty-stick.

Comment author: philh 28 August 2014 09:31:24PM 1 point [-]

When I was ~16, I came up with group selection to explain traits like altruism.

Comment author: Leonhart 09 August 2014 02:14:28PM -2 points [-]

Perhaps "The person who came out of the teleporter isn't me, because he's not made of the same atoms"?

Comment author: pragmatist 08 August 2014 06:02:49AM *  1 point [-]

Bell's spaceship paradox.

According to Bell, he surveyed his colleagues at CERN (clearly a group of intelligent, qualified people) about this question, and most of them got it wrong. Although, to be fair, the conflict here is not between expert reasoning and domain knowledge, since the physicists at CERN presumably possessed all the knowledge you need (basic special relativity, really) to get the right answer.

Comment author: satt 07 August 2014 12:56:59AM 11 points [-]

I thought about this on & off over the last couple of days and came up with more candidates than you can shake a shitty stick at. Some of these are somewhat political or controversial, but I don't think any are reliable flame-war magnets. I expect some'll ring your cherries more than others, but since I can't tell which, I'll post 'em all and let you decide.

  1. The answer to the Sleeping Beauty puzzle is obviously 1/2.

  2. Rational behaviour, being rational, entails Pareto optimal results.

  3. Food availability sets a hard limit on the number of kids people can have, so when people have more food they have more kids.

  4. Truth is an absolute defence against a libel accusation.

  5. If a statistical effect is so small that a sample of several thousand is insufficient to reliably observe it, the effect's too small to matter.

  6. Controlling for an auxiliary variable, or matching on that variable, never worsens the bias of an estimate of a causal effect.

  7. Human nature being as brutish as it is, most people are quite willing to be violent, and their attempts at violence are usually competent.

  8. In the increasingly fast-paced and tightly connected United States, residential mobility is higher than ever.

  9. The immediate cause of death from cancer is most often organ failure, due to infiltration or obstruction by spreading tumours.

  10. Aumann's agreement theorem means rationalists may never agree to disagree.

  11. Friction, being a form of dissipation, plays no role in explaining how wings generate lift.

  12. Seasons occur because Earth's distance from the Sun changes during Earth's annual orbit.

  13. Beneficial mutations always evolve to fixation.

  14. Multiple discovery is rare & anomalous.

  15. The words "male" & "female" are cognates.

  16. Given the rise of online piracy, the ridiculous cost of tickets, and the ever-growing convenience of other forms of entertainment, cinema box office receipts must be going down & down.

  17. Looking at voting in an election from the perspective of timeless decision theory, my voting decision is probably correlated and indeed logically linked with that of thousands of people relatively likely to agree with my politics. This could raise the chance of my influencing an election above negligibility, and I should vote accordingly.

  18. The countries with the highest female life expectancies are approaching a physiologically fixed hard limit of 65 — sorry, 70 — sorry, 80 — sorry, 85 years.

  19. The answer to the Sleeping Beauty puzzle is obviously 1/3.

Language in general might be a rich source of these, between false etymologies, false cognates, false friends, and eggcorns.

Comment author: bramflakes 11 August 2014 03:49:16PM 0 points [-]

Food availability sets a hard limit on the number of kids people can have, so when people have more food they have more kids.

... don't they? (in the long run)

Comment author: satt 11 August 2014 11:34:18PM 1 point [-]

... don't they? (in the long run)

In the "long-long run", given ad hoc reproductive patterns, yeah, I'd expect evolution to ratchet average human fertility higher & higher until much of humanity slammed into the Malthusian limit, at which point "when people have more food they have more kids" would become true.

Nonetheless, it isn't true today, it's unlikely to be true for the next few centuries unless WWIII kicks off, and may never come to pass (humanity might snuff itself out of existence before we go Malthusian, or the threat of Malthusian Assured Destruction might compel humanity to enforce involuntary fertility limits). So here in 2014 I rate the idea incontrovertibly false.

Comment author: Lumifer 11 August 2014 04:12:46PM 2 points [-]

... don't they?

No, they don't -- look at contemporary Western countries and their birth rates.

Comment author: bramflakes 11 August 2014 05:05:10PM 0 points [-]

Oh yes I know that, I just meant in the long-long run. This voluntary limiting of birth rates can't last for obvious evolutionary reasons.

Comment author: Lumifer 11 August 2014 05:19:34PM 0 points [-]

I have no idea about the "long-long" run :-)

The limiting of birth rates can last for a very long time as long as you stay at replacement rates. I don't think "obvious evolutionary reasons" apply to humans any more, it's not likely another species will outcompete us by breeding faster.

Comment author: bramflakes 11 August 2014 06:54:11PM *  0 points [-]

Any genes that make people defect by having more children are going to be (and are currently being) positively selected.

Besides, reducing birthrates to replacement isn't anything near a universal phenomenon, see the Mormons and Amish.

It's got nothing to do with another species out-competing us - competition between humans is more than enough.

Comment author: Lumifer 11 August 2014 06:57:27PM 0 points [-]

Any genes that make people defect by having more children are going to be (and are currently being) positively selected.

This observation should be true throughout the history of the human race, and yet the birth rates in the developed countries did fall off the cliff...

Comment author: Azathoth123 13 August 2014 05:42:01AM 2 points [-]

and yet the birth rates in the developed countries did fall off the cliff...

This happened barely half a generational cycle ago. Give evolution time.

Comment author: Lumifer 13 August 2014 02:44:47PM 1 point [-]

Give evolution time.

So what's your prediction for what will happen when?

Comment author: bramflakes 11 August 2014 07:16:42PM 2 points [-]

And animals don't breed well in captivity.

Until they do.

Comment author: pragmatist 08 August 2014 06:09:07AM 2 points [-]

Thanks for that list. I believed (or at least, assigned a probability greater than 0.5 to) about five of those.

Comment author: sixes_and_sevens 07 August 2014 09:48:31AM 2 points [-]

Thanks for this. These are all really good.

Comment author: satt 08 August 2014 03:00:03AM 2 points [-]

Now I just need to think of another 21 and I'll have enough for a philosophy article!

Comment author: KnaveOfAllTrades 07 August 2014 12:43:46AM *  1 point [-]

Generalising from 'plane on a treadmill'; a lot of incorrect answers to physics problems and misconceptions of physics in general. For any given problem or phenomenon, one can guess a hundred different fake explanations, numbers, or outcomes using different combinations of passwords like 'because of Newton's Nth law', 'because of drag', 'because of air resistance', 'but this is unphysical so it must be false' etc. For the vast majority of people, the only way to narrow down which explanations could be correct is to already know the answer or perform physical experiments, since most people don't have a good enough physical intuition to know in advance what types of physical arguments go through, so should be in a state of epistemic learned helplessness with respect to physics.

Comment author: sixes_and_sevens 07 August 2014 09:53:21AM 1 point [-]

I have a strange request. Without consulting some external source, can you please briefly define "learned helplessness" as you've used it in this context, and (privately, if you like) share it with me? I promise I'll explain at some later date.

Comment author: KnaveOfAllTrades 07 August 2014 10:38:03AM *  4 points [-]

There will probably be holes, and it may not quite capture exactly what I mean, but I'll take a shot. Let me know if this is not rigorous or detailed enough and I'll take another stab, or if you have any other follow-up. I have answered this immediately, without changing tabs, so the only contamination is saccading my LW inbox before clicking through to your comment, the titles of other tabs, etc., which look (as one would expect) to be irrelevant.

Helplessness about topic X - One is not able to attain a knowably stable and confident opinion about X given the amount of effort one is prepared to put in or the limits of one's knowledge or expertise etc. One's lack of knowledge of X includes lack of knowledge about the kinds of arguments or methods that tend to work in X, lack of experience spotting crackpot or amateur claims about X, and lack of general knowledge of X that would allow one to notice one's confusion at false basic claims and reject them. One is unable to distinguish between ballsy amateurs and experts.

Learned helplessness about X - The helplessness is learned from experience of X; much like the sheep in Animal Farm, one gets opinion whiplash on some matter of X that makes one realise that one knows so little about X that one can be argued into any opinion about it.

(This has ended up more like a bunch of arbitrary properties pointing to the sense of learned helplessness rather than a slick definition. Is it suitable for your purposes, or should I try harder to cut to the essence?)

Rant about learned helplessness in physics: Puzzles in physics, or challenges to predict the outcome of a situation or experiment, often seem like they have many different possible explanations leading to a variety of very different answers, with the merit of these explanations not being distinguishable except to those who have done lots of physics and seen lots of tricks, and even then maybe you just need to already know the answer before you can pick the correct one.

Moreover, one eventually learns that the explanations at a given level of physics instruction are probably technically wrong in that they are simplified (though I guess less so as one progresses).

Moreover moreover, one eventually becomes smart enough to see that the instructors do not actually even spot their leaps in logic. (For example, it never seemed to occur to any of my instructors that there's no reason you can't have negative wavenumbers when looking at wavefunctions in basic quantum. It turns out that when I run the numbers, everything rescales since the wavefunction bijects between -n and n and one normalizes the wavefunction anyway, so that it doesn't matter, but one could only know this for sure after reasoning it out and justifying discarding the negative wavenumbers. It basically seemed like the instructors saw an 'n' in sin(n*pi/L) or whatever and their brain took it as a natural number, without any cognitive reflection that the letter could just as easily have been a k or z or something, or any check that the notation was justified by the referent having to be a natural number.)
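The negative-wavenumber point is easy to check numerically for the standard infinite-well eigenfunctions (a sketch assuming the usual psi_n(x) = sqrt(2/L)*sin(n*pi*x/L) form, with L = 1 chosen arbitrarily):

```python
# Check the particle-in-a-box point above: replacing n by -n in
# psi_n(x) = sqrt(2/L) * sin(n*pi*x/L) flips the overall sign only,
# so the probability density |psi|^2 (the physical content) is unchanged.

import math

L = 1.0

def psi(n, x):
    return math.sqrt(2.0 / L) * math.sin(n * math.pi * x / L)

xs = [i * L / 100 for i in range(101)]
for n in (1, 2, 3):
    for x in xs:
        # psi_{-n}(x) == -psi_n(x) pointwise ...
        assert abs(psi(-n, x) + psi(n, x)) < 1e-12
        # ... hence identical probability densities.
        assert abs(psi(-n, x) ** 2 - psi(n, x) ** 2) < 1e-12
print("negative wavenumbers give the same physical state")
```

So the instructors' shortcut happens to be harmless here, but, as the rant says, knowing that for sure takes reasoning the discard step out rather than pattern-matching the letter n.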

Moreover, it takes a high level of philosophical ability to reason about physics thought experiments and their standards of proof. Take the 'directly downwind faster than the wind' problem. The argument goes back and forth, and, like the sheep, at every point the side that's speaking seems to be winning. Terry Tao comes along and says it's possible, and people link to videos of carts with propellers apparently going downwind faster than the wind and wheels with rubber bands attached allegedly proving it. But beyond deferring to his general hard sciences problem-solving ability, one has no inside view way to verify Tao's solution; what are the standards of proof for a thought experiment? After all, maybe the contraptions in the video only work (assuming they do work as claimed, which isn't assured) because of slight side-to-side effects rather than directly down wind or some other property of the test conditions implicitly forbidden by the thought experiment.

Since any physical experiment for a physics thought experiment will have additional variables, one needs some way to distinguish relevant and irrelevant variables. Is the thought experiment the limit as extraneous variables become negligible, or is there a discontinuity? What if different sets of variables give rise to different limits? How does anyone ever know what the 'correct' answer is to an idealised physics thought experiment of a situation that never actually arises? Etc.

Comment author: sixes_and_sevens 07 August 2014 12:22:30PM 5 points [-]

Thanks for that. The whole response is interesting.

I ask because up until quite recently I was labouring under a wonky definition of "learned helplessness" that revolved around strategic self-handicapping.

An example would be people who foster a characteristic of technical incompetence, to the point where they refuse to click next-next-finish on a noddy software installer. Every time they exhibit their technical incompetence, they're reinforced in this behaviour by someone taking the "hard" task away from them. Hence their "helplessness" is "learned".

It wasn't until recently that I came across an accurate definition in a book on reinforcement training. I'm pretty sure I've had "learned helplessness" in my lexicon for over a decade, and I've never seen it used in a context that challenged my definition, or used it in a way that aroused suspicion. It's worth noting that I probably picked up my definition through observing feminist discussions. Trying a mental find-and-replace on ten years' conversations is kind of weird.

I am also now bereft of a term for what I thought "learned helplessness" was. Analogous ideas come up in game theory, but there's no snappy self-contained way available to me for expressing it.

Comment author: satt 12 August 2014 12:12:17AM 0 points [-]

An example would be people who foster a characteristic of technical incompetence, to the point where they refuse to click next-next-finish on a noddy software installer. Every time they exhibit their technical incompetence, they're reinforced in this behaviour by someone taking the "hard" task away from them. Hence their "helplessness" is "learned".

Making up a term for this..."reinforced helplessness"? (I dunno whether it'd generalize to cover the rest of what you formerly meant by "learned helplessness".)

Comment author: KnaveOfAllTrades 07 August 2014 12:41:42PM *  2 points [-]

Good chance you've seen both of these before, but:

http://en.wikipedia.org/wiki/Learned_helplessness and http://squid314.livejournal.com/350090.html

I am also now bereft of a term for what I thought "learned helplessness" was. Analogous ideas come up in game theory, but there's no snappy self-contained way available to me for expressing it.

Damn, if only someone had created a thread for that, ho ho ho

Strategic incompetence?

I'm not sure if maybe Schelling uses a specific name (self-sabotage?) for that kind of thing?

Comment author: sixes_and_sevens 07 August 2014 01:35:05PM 3 points [-]

Schelling does talk about strategic self-sabotage, but it captures a lot of deliberated behaviour that isn't implied in my fake definition.

Also interesting to note, I have read that Epistemic Learned Helplessness blog entry before, and my fake definition is sufficiently consistent with it that it doesn't stand out as obviously incorrect.

Comment author: satt 12 August 2014 12:25:53AM 0 points [-]

Also interesting to note, I have read that Epistemic Learned Helplessness blog entry before, and my fake definition is sufficiently consistent with it that it doesn't stand out as obviously incorrect.

Now picturing a Venn diagram with three overlapping circles labelled "epistemic learned helplessness", "what psychologists call 'learned helplessness'", and "what sixes_and_sevens calls 'learned helplessness'"!

Comment author: philh 06 August 2014 03:04:38PM 3 points [-]

This isn't very interesting, but I used to believe that the rules about checkmate didn't really change the nature of chess. Some of the forbidden moves - moving into check, or failing to move out if possible - are always a mistake, so if you just played until someone captured the king, the game would only be different in cases where someone made an obvious mistake.

But if you can't move, the game ends in stalemate. So forbidding you to move into check means that some games end in draws, where capture-the-king would have a victor.

(This is still armchair theorising on my part.)

Comment author: [deleted] 06 August 2014 01:00:40AM 1 point [-]

The sun revolves around the earth.

Comment author: gwern 06 August 2014 02:18:08AM 3 points [-]

The earth revolving around the sun was also armchair reasoning, and refuted by empirical data like the lack of observable parallax of stars. Geocentrism is a pretty interesting historical example because of this: the Greeks reached the wrong conclusion with right arguments. Another example in the opposite direction: the Atomists were right about matter basically being divided up into very tiny discrete units moving in a void, but could you really say any of their armchair arguments about that were right?

Comment author: Douglas_Knight 09 August 2014 01:37:56AM 0 points [-]

It is not clear that the Greeks rejected heliocentrism at all, let alone for any reason other than heresy. On the contrary, Hipparchus refused to choose, on the grounds of Galilean relativity.

The atomists got the atomic theory from the Brownian motion of dust in a beam of light, the same way that Einstein convinced the final holdouts thousands of years later.

Comment author: gwern 13 August 2014 11:32:34PM 0 points [-]

It is not clear that the Greeks rejected heliocentrism at all, let alone any reason other than heresy. On the contrary, Hipparchus refused to choose, on the grounds of Galilean relativity.

Eh? I was under the impression that most of the Greeks accepted geocentrism, eg Aristotle. Double-checking https://en.wikipedia.org/wiki/Heliocentrism#Greek_and_Hellenistic_world and https://en.wikipedia.org/wiki/Ancient_Greek_astronomy I don't see any support for your claim that heliocentrism was a respectable position and geocentrism wasn't overwhelmingly dominant.

The atomists got the atomic theory from the Brownian motion of dust in a beam of light.

Cite? I don't recall anything like that in the fragments of the Pre-socratics, whereas Eleatic arguments about Being are prominent.

Comment author: Douglas_Knight 14 August 2014 12:51:25AM 1 point [-]

Lucretius talks about the motion of dust in light, but he doesn't claim that it is the origin of the theory. When I google "Leucippus dust light" I get lots of people making my claim and more respectable sources making weaker claims, like "According to traditional accounts the philosophical idea of simulacra is linked to Leucippus’ contemplation of a ray of light that made visible airborne dust," but I don't see any citations to where this tradition is recorded.

Comment author: Douglas_Knight 14 August 2014 12:24:13AM *  1 point [-]

The Greeks cover hundreds of years. They made progress! You linked to a post about the supposed rejection of Aristarchus's heliocentric theory. It's true that no one before Aristarchus was heliocentric. That includes Aristotle, who died when Aristarchus was 12. Everyone agrees that the Hellenistic Greeks who followed Aristotle were much better at astronomy than the Classical Greeks. The question is whether the Hellenistic Greeks accepted Aristarchus's theory, particularly Archimedes, Apollonius, and Hipparchus. But while lots of writings of Aristotle remain, practically nothing of the later astronomers remains.

It's true that secondary sources agree that Archimedes, Apollonius, and Hipparchus were geocentric. However, they give no evidence for this. Try the scholarly article cited in the post you linked. It's called "The Greek Heliocentric Theory and Its Abandonment" but it didn't convince me that there was an abandonment. That's where I got the claim about Hipparchus refusing to choose.

I didn't claim that there was any evidence that it was respectable, let alone dominant, only that there was no evidence that it was rejected. The only solid evidence one way or the other is the only surviving Hellenistic astronomy paper, Archimedes's Sandreckoner, which uses Aristarchus's model. I don't claim that Archimedes was heliocentric, but that sure sounds to me like he respected heliocentrism.

Maybe heliocentrism survived a century and was finally rejected by Hipparchus. That's a world of difference from saying that Seleucus was his only follower. Or maybe it was just the two of them, but we live in a state of profound ignorance.

As for the ultimate trajectory of Greek science, that is a difficult problem. Lucio Russo suggests that Roman science is all mangled Greek science and proposes to extract the original. For example, Seneca claims that the retrograde motion of the planets is an illusion, which sounds like he's quoting someone who thinks the Earth moves, even if he doesn't. More colorful are Pliny and Vitruvius, who claim that the retrograde motion of the planets is due to the sun shooting triangles at them. This is clearly a heliocausal theory, even if the authors claim to be geocentric. Less clear is Russo's interpretation, that this is a description of a textbook diagram that they don't understand.

Comment author: gwern 17 August 2014 12:47:39AM 0 points [-]

So, you just have an argument from silence that heliocentrism was not clearly rejected?

I didn't claim that there was any evidence that it was respectable, let alone dominant, only that there was no evidence that it was rejected. The only solid evidence one way or the other is the only surviving Hellenistic astronomy paper, Archimedes's Sandreckoner, which uses Aristarchus's model. I don't claim that Archimedes was heliocentric, but that sure sounds to me like he respected heliocentrism.

I just read through the bits of Sand Reckoner referring to Aristarchus (Mendell's translation), and throughout Archimedes seems to be at pains to distance himself from Aristarchus's model, treating it as a minority view (emphasis added):

You grasp [King Gelon, the recipient of Archimedes's letter The Sand Reckoner] that the world is called by most astronomers the sphere whose center is the center of the earth and whose line from the center is equal to the straight-line between the center of the sun and the center of the earth, since you have heard these things in the proofs written by the astronomers. But Aristarchus of Samos produced writings of certain hypotheses in which it follows from the suppositions that the world is many times what is now claimed.

Not language which suggests he takes it particularly seriously, much less endorses it.

In fact, it seems that the only reason Archimedes brings up Aristarchus at all is as a form of 'worst-case analysis': some fools doubt the power of mathematics and numbers, but Archimedes will show that even under the most ludicrously inflated estimate of the size of the universe (one implied by Aristarchus's heliocentric model), he can still calculate & count the number of grains of sands it would take to fill it up; hence, he can certainly calculate & count the number for something smaller like the Earth. From the same chapter:

[1] Some people believe, King Gelon, that the number of sand is infinite in multitude. I mean not only of the sand in Syracuse and the rest of Sicily, but also of the sand in the whole inhabited land as well as the uninhabited. There are some who do not suppose that it is infinite, and yet that there is no number that has been named which is so large as to exceed its multitude.

[2] It is clear that if those who hold this opinion should conceive of a volume composed of the sand as large as would be the volume of the earth when all the seas in it and hollows of the earth were filled up in height equal to the highest mountains, they would not know, many times over, any number that can be expressed exceeding the number of it.

[3] I will attempt to prove to you through geometrical demonstrations, which you will follow, that some of the numbers named by us and published in the writings addressed to Zeuxippus exceed not only the number of sand having a magnitude equal to the earth filled up, just as we said, but also the number of the sand having magnitude equal to the world.

...[7] In fact we say that even if a sphere of sand were to become as large in magnitude as Aristarchus supposes the sphere of the fixed stars to be, we will also prove that some of the initial numbers having an expression (or: "numbers named in the Principles," cf. Heath, Archimedes, 222, and Dijksterhuis, Archimedes, 363) exceed in multitude the number of sand having a magnitude equal to the mentioned sphere, when the following are supposed.

And he triumphantly concludes in ch4:

[18] ... Thus, it is obvious that the multitude of sand having a magnitude equal to the sphere of the fixed stars which Aristarchus supposes is smaller than 1000 myriads of the eighth numbers.

[19] King Gelon, to the many who have not also had a share of mathematics I suppose that these will not appear readily believable, but to those who have partaken of them and have thought deeply about the distances and sizes of the earth and sun and moon and the whole world this will be believable on the basis of demonstration. Hence, I thought that it is not inappropriate for you too to contemplate these things.

Comment author: Douglas_Knight 17 August 2014 03:42:54AM 0 points [-]

All I have ever said is that you should stop telling fairy tales about why the Greeks rejected heliocentrism. If the Sandreckoner convinces you that Archimedes rejected heliocentrism, fine, whatever, but it sure doesn't talk about parallax.

I listed several pieces of positive evidence, but I'm not interested in the argument.

Comment author: gwern 17 August 2014 04:03:48PM 2 points [-]

If the Sandreckoner convinces you that Archimedes rejected heliocentrism, fine, whatever, but it sure doesn't talk about parallax.

The Sand Reckoner implies the parallax objection when it uses an extremely large heliocentric universe! Lack of parallax is the only reason for such extravagance. Or was there some other reason Aristarchus's model had to imply a universe lightyears in extent...?
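To make the implied argument concrete: in a heliocentric model the Earth-Sun distance is a parallax baseline, so failing to observe any stellar parallax puts a floor on how far away the stars must be. A rough sketch, where the one-arcminute naked-eye resolution is my illustrative assumption, not a figure from any ancient source:

```python
import math

AU = 1.0  # Earth-Sun distance, the parallax baseline in a heliocentric model
naked_eye_limit_rad = math.radians(1.0 / 60.0)  # ~1 arcminute resolution (assumed)

# If no parallax is detectable down to that limit, the fixed stars must sit
# at least this far away -- thousands of times the Earth-Sun distance:
min_star_distance = AU / math.tan(naked_eye_limit_rad)
assert min_star_distance > 3000  # in AU
```

Hence the "extravagance": a heliocentric model without visible parallax forces a sphere of fixed stars vastly larger than the geocentric estimates.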

Comment author: Douglas_Knight 17 August 2014 04:47:09PM 0 points [-]

Aristarchus using a large universe is evidence that he thought about parallax. It is not evidence that his opponents thought about parallax.

You are making a circular argument: you say that the Greeks rejected heliocentrism for a good reason because they invoked parallax, but you say that they invoked parallax because you assume that they had a good reason.

There is a contemporary recorded reason for rejecting Aristarchus: heresy. There is also a (good) reason recorded by Ptolemy 400 years later, namely wind speed.

Comment author: ChristianKl 06 August 2014 09:38:51AM 1 point [-]

Atoms can actually be divided into parts, so it's not clear that the atomists were right. If you told an atomist about quantum states, I doubt they would find that a valid example of what they mean by "atom".

Comment author: gwern 06 August 2014 04:16:55PM 3 points [-]

The atomists were more right than the alternatives: the world is not made of continuously divisible bone substances, which are bone no matter how finely you divide them, nor is it continuous mixtures of fire or water or apeiron.

Comment author: RichardKennaway 06 August 2014 10:01:24AM 0 points [-]

Atoms can actually be divided into parts, so it's not clear that the atomists were right

You could say the same of Dalton.

Comment author: [deleted] 05 August 2014 06:49:40PM 8 points [-]

If a plane is on a conveyor belt going at the same speed in the opposite direction, will it take off?

I remember reading this in other places I don't remember, and it seems to inspire furious arguments despite being non-political and not very controversial.

Comment author: [deleted] 06 August 2014 05:44:57PM 0 points [-]

Same speed with respect to what? This sounds kind of like the tree-in-a-forest one.

Comment author: satt 06 August 2014 11:26:38PM 2 points [-]

As I remember the problem, the plane's wheels are supposed to be frictionless so that their rotation is uncoupled from the rest of the plane's motion. Hence the speed of the conveyor belt is irrelevant and the plane always takes off. Now, if you had a helicopter on a turntable...
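That argument can be put in a toy dynamics sketch (idealized frictionless bearings and constant thrust; the numbers are made up). Since thrust acts against the air and the belt exerts no horizontal force through frictionless bearings, the belt speed simply never enters the force balance:

```python
def airspeed_after_run(thrust_n, mass_kg, seconds, belt_speed=0.0):
    # Euler integration of the plane's airspeed. With frictionless wheel
    # bearings, the belt applies no horizontal force, so belt_speed is
    # (deliberately) never used in the force balance below.
    dt = 0.01
    v = 0.0
    for _ in range(int(seconds / dt)):
        v += (thrust_n / mass_kg) * dt
    return v

# Same airspeed whether the belt is stationary or racing backwards:
assert abs(airspeed_after_run(50000, 5000, 30) -
           airspeed_after_run(50000, 5000, 30, belt_speed=-300)) < 1e-9
```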

Comment author: [deleted] 09 August 2014 08:42:54AM 0 points [-]

What I mean is, on hearing that I thought of a conveyor belt whose top surface was moving at a speed -x with respect to the air, and a plane on top of it moving at a speed x with respect to the top of the conveyor belt, i.e. the plane was stationary with respect to the air. But on reading the Snopes link what was actually meant was that the conveyor belt was moving at speed -x and the plane's engines were working as hard as needed to move at speed x on stationary ground with no wind.

Comment author: tut 07 August 2014 04:04:47PM 0 points [-]

While at the same time the rolling speed of the plane, which is the sum of its forward movement and the speed of the treadmill, is supposed to be equal to the speed of the treadmill. Which is impossible if the plane moves forward.

Comment author: satt 08 August 2014 03:02:22AM 0 points [-]

I'm not sure what you mean by "rolling speed of the plane", "it's forward movement", and "speed of the treadmill". The phrase "rolling speed" sounds like it refers to the component of the plane's forward motion due to the turning of its wheels, but that's not a coherent thing to talk about if one accepts my assumption that the wheels are uncoupled from the plane.

Comment author: tut 08 August 2014 07:36:17AM *  0 points [-]

Rolling speed = how fast the wheels turn, described in terms of forward speed. So it's the circumference of the wheels multiplied by their angular speed. And the wheels are not uncoupled from the plane; they are driven by the plane. It was only assumed that the friction in the wheel bearings is irrelevant.

Forward movement of the plane = speed of the plane relative to something not on the treadmill. I guess I should have called it airspeed, which it would be if there is no wind.

Speed of the treadmill = how fast the surface of the treadmill moves.

And that is more time than I wanted to spend rehashing this old nonsense. The grandparent was only meant to explain why the great-grandparent would not have settled the issue, not to settle it on its own. The only further comment I have is that the whole thing is based on an unrealistic setup, which becomes incoherent if you assume that it is about real planes and real treadmills.

Comment author: satt 09 August 2014 04:43:17PM *  0 points [-]

And that is more time than I wanted to spend rehashing this old nonsense.

Fair enough. I have to chip in with one last comment, but you'll be happy to hear it's a self-correction! My comments don't account for potential translational motion of the wheels, and they should've done. (The translational motion could matter if one assumes the wheels experience friction with the belt, even if there's no internal wheel bearing friction.)

Comment author: NancyLebovitz 06 August 2014 03:20:37PM 1 point [-]

That reminds me of the question of whether hot water freezes faster than cold water.

Comment author: tut 06 August 2014 06:41:24AM 0 points [-]

That's different though. The Plane on a Treadmill started with somebody specifying some physically impossible conditions, and then the furious arguments were between people stating the implications of the stated conditions on one side and people talking about the real world on the other.

Comment author: shminux 05 August 2014 07:26:10PM 0 points [-]

That's a great example. If I recall, people who get worked up about it generally feel that the answer is obvious and the other side is stupid for not understanding the argument.

Comment author: John_Maxwell_IV 05 August 2014 04:26:28AM *  1 point [-]

Why not also spend an equal amount of time searching for examples that prove the opposite of the point you're trying to make? Or are you speaking to an audience that doesn't agree this is possible in principle?

Edit: Might Newtonian physics be an example?

Comment author: Manfred 04 August 2014 11:17:58PM *  4 points [-]

Downwind faster than the wind. See seven pages of posts here for examples of people getting it wrong.

Kant was famously wrong when he claimed that space had to be flat.

Comment author: [deleted] 05 August 2014 02:00:11PM 2 points [-]

Kant was famously wrong when he claimed that space had to be flat.

As discussed previously, this exact claim seems suspiciously absent from the first Critique.

Comment author: Manfred 06 August 2014 12:10:23AM 2 points [-]

Take, for example, the proposition: "Two straight lines cannot enclose a space, and with these alone no figure is possible," and try to deduce it from the conception of a straight line and the number two; or take the proposition: "It is possible to construct a figure with three straight lines," and endeavour, in like manner, to deduce it from the mere conception of a straight line and the number three. All your endeavours are in vain, and you find yourself forced to have recourse to intuition, as, in fact, geometry always does.

Geometry, nevertheless, advances steadily and securely in the province of pure a priori cognitions, without needing to ask from philosophy any certificate as to the pure and legitimate origin of its fundamental conception of space.

I agree that Kant doesn't seem to have ever considered non-Euclidean geometry, and thus can't really be said to be making an argument that space is flat. If we could drop an explanation of general relativity on him, he'd probably come to terms with it. On the other hand, he just assumes that two straight lines can only intersect once, and that this describes space, which seems pretty much like what he was accused of.

Comment author: [deleted] 06 August 2014 10:46:19AM *  1 point [-]

On the other hand, he just assumes that two straight lines can only intersect once, and that this describes space,

I don't see this in the quoted passage. He's trying to illustrate the nature of propositions in geometry, and doesn't appear to be arguing that the parallel postulate is universally true. "Take, for example," is not exactly assertive.

Also, have a care: those two paragraphs are not consecutive in the Critique.

Comment author: solipsist 04 August 2014 07:36:43PM *  7 points [-]

If your twin's going away for 20 years to fly around space at close to the speed of light, they'll be 20 years older when they come back.

A spinning gyroscope, when pushed, will react in a way that makes sense.

If another nation can't do anything as well as your nation, there is no self-serving reason to trade with them.

You shouldn't bother switching in the Monty Hall problem

The sun moves across the sky because it's moving.

EDIT Corrected all statements to be false

Comment author: gjm 04 August 2014 09:23:40PM 0 points [-]

Open trade [...]

I think you may have expressed this one the wrong way around; the way you've phrased it ("can make you better off") is the surprising truth, not the surprising untruth.
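The surprising truth here is comparative advantage: even a nation that is absolutely worse at everything gains from trade, and so does its partner. A toy calculation using Ricardo's classic labour-hour numbers (the 320/40 split of Portugal's hours is my illustrative choice, not Ricardo's):

```python
# Labour hours needed per unit of each good. Portugal is absolutely
# better at BOTH goods, yet both countries gain from specialization.
hours = {
    "Portugal": {"wine": 80, "cloth": 90},
    "England":  {"wine": 120, "cloth": 100},
}

# Autarky: each country splits 360 labour hours evenly between the goods.
autarky_wine = 180 / hours["Portugal"]["wine"] + 180 / hours["England"]["wine"]
autarky_cloth = 180 / hours["Portugal"]["cloth"] + 180 / hours["England"]["cloth"]

# Specialization: England makes only cloth; Portugal puts most hours (320)
# into wine, where its relative edge is largest, and 40 into cloth.
trade_wine = 320 / hours["Portugal"]["wine"]
trade_cloth = 40 / hours["Portugal"]["cloth"] + 360 / hours["England"]["cloth"]

# World output of BOTH goods rises, so there are gains to share via trade.
assert trade_wine > autarky_wine
assert trade_cloth > autarky_cloth
```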

Comment author: gjm 04 August 2014 09:22:53PM 0 points [-]

If your twin flies through space for 20 years at close to the speed of light, they'll be 20 years older when they come back.

They will. I think you mean: If your twin flies through space at close to the speed of light and arrives back 20 years later, they'll be 20 years older when they come back. That one's false.
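That corrected version is easy to quantify with the special-relativistic time-dilation formula tau = t * sqrt(1 - v^2/c^2) (the 0.99c figure is just an illustrative choice):

```python
import math

def traveler_aging(earth_frame_years, v_over_c):
    # Proper time elapsed for the traveling twin: tau = t * sqrt(1 - v^2/c^2)
    return earth_frame_years * math.sqrt(1 - v_over_c ** 2)

# A round trip taking 20 years in Earth's frame at 0.99c ages
# the traveling twin by only about 2.8 years, not 20.
assert traveler_aging(20, 0.99) < 3
```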

Comment author: solipsist 04 August 2014 09:39:35PM 0 points [-]

Reversed polarity on a few statements. Thanks.

Comment author: Gurkenglas 05 August 2014 06:20:16AM 0 points [-]

Your first statement is still correct.