Open thread, August 4 - 10, 2014
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
New open thread
I've never tried to fnord something before, did I do it right?
Frankenstein's monster doomsayers overwhelmed by Terminator's Skynet become ever-more clever singularity singularity the technological singularity idea that has taken on a life of its own techno-utopians wealthy middle-aged men singularity as their best chance of immortality Singularitarians prepared to go to extremes to stay alive for long enough to benefit from a benevolent super-artificial intelligence a man-made god that grants transcendence doomsayers the techno-dystopians Apocalypsarians equally convinced super-intelligent AI no interest in curing cancer or old age or ending poverty malevolently or maybe just accidentally bring about the end of human civilisation Hollywood Golem Frankenstein's monster Skynet and the Matrix fascinated by the old story man plays god and then things go horribly wrong singularity chain reaction even the smartest humans cannot possibly comprehend how it works out of control singularity technological singularity cautious and prepared optimistic obsessively worried by a hypothesised existential risk a sequence of big ifs risk while not impossible is improbable worrying unnecessarily we're falling into a trap fallacy taking our eyes off other risks none of this has brought about the end of civilisation a huge gulf obsessing about the risk of super-intelligent AI cautious and prepared we should be worrying about present-day AI rather than future super-intelligent AI.
Artificial intelligence will not turn into a Frankenstein's monster, Alan Winfield, Observer, Sunday 10 August 2014
I'm at Otakon 2014, and there was a panel today about philosophy and videogames. The description read like Less Wrongese. I couldn't get in (it was full) but I'm wondering if anyone here was responsible for it.
A history of anime fandom
I'm not vouching for this, but it sounds plausible.
Quoted in full from here:
I see the broad point Waytz is making, but the ranty delivery is pretty silly. Why is the doctor's act not selfless? It certainly appears to be motivated by altruism (even if that altruism is misguided, from a utilitarian perspective). Having a non-utilitarian moral code is not the same thing as selfishness.
Second, the anger in that comment seems to have more to do with a distaste for deontological altruistic gestures than anything else. I really doubt Waytz would be as mad if the doctor had simply decided that he'd had enough of working in the medical profession and opened a bistro instead.
How to Work with "Stupid" People
The hypothesis is that people frequently underestimate the intelligence of those they work with. The article suggests some ways people could get the wrong impression, and some strategies for improving communications and relationships. It all seems very plausible.
However, the author doesn't offer any examples, and the comments are full of complaints about unchangeably stupid coworkers.
I believe I had the opposite problem most of my life. I was taught to be humble, to never believe I am better than anyone else, et cetera. Nice political slogans, and probably I should publicly pretend to believe them. But there is a problem: I have a lot of data on people doing stupid things, and I need some explanation. And of course, if I forbid myself to use the potentially correct explanation, then I am pushing myself towards the incorrect ones.
Sometimes the problem is that I didn't understand something, so the seemingly stupid behavior wasn't actually stupid, it was me not understanding something. Yes, sometimes this happens, so it is reasonable to consider this hypothesis seriously. But oftentimes, even after careful exploration, the stupid behavior is stupid. When people keep saying that 2+2=5, it could mean they have secret mathematical knowledge unknown to you; but it is more likely that they are simply wrong.
But the worse problem is that refusing to believe in other people's stupidity deprives you of the wisdom of "Never attribute to malice that which is adequately explained by stupidity." Not believing in stupidity can make you paranoid, because if those people don't do stupid things because of stupidity, then they must have some purpose in doing them. And if it's a stupid thing that happens to harm you, it means they hate you, or at least don't mind when you are harmed. Ignorance starts to seem like strategic plausible deniability.
I had to overcome my upbringing and say to myself: "Viliam, your IQ is at least four sigma above the average, so when many people seem retarded to you, even many university-educated people, that's because they really are retarded, compared with you. They are usually not passive-aggressive; they are trying to do their best, and their best is just often very unimpressive to you (but probably impressive in their own eyes, and in the eyes of their peers). You are expecting from them more than they can realistically provide, and they often don't even understand what you are saying. And they live in their world, where they are the norm and you are the exception. It will never change, so you'd better get used to it; otherwise you are preparing yourself for a lifetime of disappointment."
From that moment, when I see someone doing something stupid, I consider the hypothesis "maybe that's the best their intelligence allows them to do". And suddenly, I am not angry at most people around me. They are nice people, they are just not my equals, and it's not their fault. Often they have knowledge that I don't have, and I can learn from them. (Intelligence does not equal knowledge.) But they also often do something completely stupid that likely doesn't seem stupid in their eyes. I should not assume that everything they do makes sense. I should not expect them to be able to understand everything I am trying to explain; I can try, but I shouldn't become too involved in it; sometimes I have to give up and accept some stupidity as a part of my environment.
The proper way to work with stupid people is to recognize their limitations and not blame them for failing to be what you want them to be. (Of course you should always check whether your estimates are correct. But they are not always wrong.)
That blog post assumes that actual stupidity is never the "real" problem. I beg to disagree.
Or does it?
This seems to mean exactly "maybe they are stupid after all", but expressed using a different set of words.
(I would guess that the author at some point adopted "never think that someone is stupid" as a deontological rule, and then unintentionally evolved a different set of words to be able to think about stupidity without triggering the filter...)
I think purely from a fundamental attribution error point of view we should expect the average "stupid" person we encounter to be less stupid than they seem.
(which is not to say stupidity doesn't exist of course, just that we might tend to overestimate its prevalence)
I guess the other question would be, are there any biases that might lead us to underestimate someone's stupidity? Illusion of transparency, perhaps, or the halo effect? I still think we're on net biased against thinking other people are as smart as us.
Are you saying that charlatans and cranks don't exist or at least never manage to obtain any followers?
Sex appeal, of course :-D
You're right. I'm sure that actual stupidity is sometimes the real problem. On the other hand, it would surprise me if it's always the real problem. At that point, the question becomes how much effort is worth putting in.
Non-conventional thinking here, feel free to tell me why this is wrong/stupid/dangerous.
I am young and healthy, and when I catch a cold, I think "cool, when I recover: immune system +1." I take this one step further, though: when I don't get sick for a long time, I start to hope I get sick, because I want to exercise my immune system. I know this might sound obviously wrong, but can we discuss why exactly?
My priors tell me that actively avoiding all germs and people to prevent getting sick is unhealthy. So I have lived my life not avoiding germs, but also not asking people to cough on me either. But is there room to optimize? I caught something pretty nasty that lasted a month, and I am sure I got it from being at a large music festival, breathing hot, crowded air; but better to catch that strain of whatever it was now than when I am 70, right? And I don't mean I want to catch a serious case of pneumonia and potentially die; I mean, what if there were a way to deliberately catch a strain of the common cold every now and then?
The catch I'd expect here is for the marginal immunological benefit from an extra cold to be less than the marginal cost of suffering an extra cold, although a priori I'm not sure which way a cost-benefit analysis would go.
It'd depend on how well colds help your immune system fight other diseases; the expected marginal number of colds prevented per extra cold suffered; the risk of longer-term side effects of colds; how the cost of getting sick changes with age (which you mentioned); the chance that you'll mistakenly catch something else (like influenza) if you try to catch someone else's cold; and the doloric cost of suffering through a cold. One might have to trawl through epidemiology papers to put usable numbers on these.
Consuming probiotics (or even specks of dirt picked up from the ground) might be easier & safer.
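A toy expected-value sketch of that comparison (every number below is an invented placeholder, not an epidemiological estimate):

```javascript
// Toy cost-benefit model for deliberately catching a cold.
// All numbers are made-up placeholders in arbitrary "suffering units".
const costOfColdNow = 5;        // misery of one cold caught today
const costOfColdAt70 = 20;      // a cold (or its complications) hurts more at 70
const pPreventsLaterCold = 0.1; // chance this strain's immunity ever pays off
const pCatchWrongBug = 0.05;    // chance of picking up influenza etc. instead
const costOfWrongBug = 50;

const expectedBenefit = pPreventsLaterCold * costOfColdAt70;          // 2
const expectedCost = costOfColdNow + pCatchWrongBug * costOfWrongBug; // 7.5

console.log(expectedBenefit > expectedCost ? "worth it" : "not worth it");
```

Under these placeholders the deliberate cold loses; the point is only that the decision turns on parameters like these, which one would have to dig out of the epidemiology literature.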
Maybe, but I don't think you can find out -- the data is too noisy and the variance is too big.
Besides, of course, the better your immune system gets, the more rarely will you get sick with infectious diseases...
Your immune system is already being subjected to constant demands by the simple fact that you don't live in a quarantine bunker. Let it do its job. Intentional germ-seeking is reckless.
There are over 100 strains of the common cold. If you gain immunity to one, this will not significantly decrease your chance of catching a cold in the far future. On the other hand, good hygiene will significantly decrease your chance of being infected by most contagious diseases.
It's at least plausible that people become less vulnerable to colds as they get older.
http://www.nytimes.com/2013/08/06/science/can-immunity-to-the-common-cold-come-with-age.html?_r=0
He's not talking about gaining immunity in the vaccination sense. He's talking about developing a better, stronger immune system.
What it means to be statistically educated, a list by the American Statistical Association. Not half bad.
Source, it's from back in 2002
In your open thread inbox, Less Wrong comments have the options "context" and "report" (in that order), whereas private messages have "report" and "reply" (in that order). Many times I've accidentally pressed "report" on a private message, and fortunately caught myself before continuing.
I'd suggest reversing the order of "report" and "reply", so that they fit with the comments options.
Right, that's my tiny suggestion for this month :-)
Is there a way to see if I can vote both ways?
A month or so ago I started to get errors saying I can't downvote. I don't really care that much (it's not me that's gaining from my vote), but if I can't downvote I want to make sure I don't upvote so I don't bias things.
I had those too. It stopped rather quickly.
Your downvotes are limited by your karma (I think it's four downvotes to a karma point). I don't think you will meaningfully bias anything if you continue to upvote things you like while accumulating enough karma to downvote again.
That they are, even when everything works perfectly. There was also an error a while ago that gave the same error message to (some?) people who were not at their limit.
Yeah, it's the principle of the thing. I guess I'll just try a downvote before I upvote going forward. Thanks, Al.
Physics puzzle: Being exposed to cold air while the wind is blowing causes more heat loss/feels colder than simply being exposed to still cold air.
So, if the ambient air temperature is above body temperature, and ignoring the effects of evaporation, would a high wind cause more heat gain/feel warmer than still hot air?
Yes, it's how hair dryers work.
Yes. This happens sometimes in a really wet sauna.
But conditions in which you actually feel this also kill you in less than a day. You need to lose about 100 W of heat in order to keep a stable body temperature, and moving air only feels hotter than still air if you are gaining heat from the air.
Yes. Your body would try to cool your face exposed to hot air by circulating more blood through it, creating a temperature gradient through the surface layer. Consequently, the air nearest your face would be colder than ambient. A wind would blow away the cooler air, resulting in the air with ambient temperature touching your skin. Of course, in reality humidity and sweating are major factors, negating the above analysis.
Yes, though ignoring the effects of evaporation is ignoring a major factor.
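For anyone who wants the mechanism as arithmetic, here is a minimal sketch of the convection term (Newton's law of cooling; the heat-transfer coefficients are rough assumed values, and evaporation and radiation are ignored):

```javascript
// Convective heat flow between skin and air: q = h * A * (T_air - T_skin).
// Positive = body gains heat; negative = body loses heat.
function heatGainWatts(h, areaM2, airTempC, skinTempC) {
  return h * areaM2 * (airTempC - skinTempC);
}

const skinTemp = 34;  // °C, typical skin surface temperature
const area = 1.8;     // m^2, typical body surface area
const hStill = 5;     // W/(m^2·K), rough free-convection value
const hWindy = 25;    // W/(m^2·K), assumed forced-convection value in wind

console.log(heatGainWatts(hStill, area, 0, skinTemp));  // -306 W: cold, still air
console.log(heatGainWatts(hWindy, area, 0, skinTemp));  // -1530 W: wind chill
console.log(heatGainWatts(hStill, area, 45, skinTemp)); //  +99 W: hot, still air
console.log(heatGainWatts(hWindy, area, 45, skinTemp)); // +495 W: hot, windy
```

Wind raises h, which multiplies whatever heat flow is already happening: loss below skin temperature, gain above it.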
Anybody have any advice on how to successfully implement doublethink?
Once upon a time I tried using what I coined "quicklists". I took a receipt, turned it over to the back (blank side), and jotted down 5-10 things that I wanted to believe. Then I set a timer for 24 hours and, before that time elapsed, acted as if I believed those things. My experiment was too successful; by the time 24 hours were up I had ended up in a different county, with little recollection of what I'd been doing, and some policemen asking me pointed questions. (I don't believe any drugs were involved, just sleep deprivation, but I can't say for certain.)
More recently, I rented and watched the film Memento, which explores these techniques in a fictional setting. The concept of short-term forgetting seemed reasonable, and the techniques the character uses to work around it are easily adapted to real life. My initial test involved printing out a pamphlet with some dentistry material in tiny type (7 12-pt pages shrunk to fit on the front and back of 1 page, folded in quarters), and carrying it with me to my dentist appointment. I was able to discuss most of the things from my pamphlet, and it did seem that the level of conversation was raised, but there were many other variables as well, so it's hard to quantify the exact effect.
I'm not certain these techniques actually count as "doublethink", since the contradiction is between my "internal" beliefs and the beliefs I wrote down, but it does allow some exploration of the possibilities beyond rationality. I can override my system 2 with a piece of paper, and then system 1 follows.
NB: Retrieving your original beliefs after you've been going off of the ones from the paper is left as an exercise to the student
I would like to read more about this. Would you consider writing it up?
I thought I had written all I could. What sort of things should I add?
I think a little more elaboration on the quicklists experiment would be appreciated, and in particular a clearer description of what you think transpired when it was "too successful". For me, at least, your experimental outcome might be extremely surprising (depending on the extent of the sleep deprivation involved), but I'm not even sure yet what model I should be re-assessing.
Anchoring in marathon runners.
correct link
That's a pretty cool histogram in figure 2.
Thought that people (particularly in the UK) might be interested to see this, a blog post from one of the broadsheets on Bostrom's Superintelligence:
http://blogs.telegraph.co.uk/news/tomchiversscience/100282568/a-robot-thats-smarter-than-us-theres-one-big-problem-with-that/
Oh, dear.
Harry Potter And The Cryptocurrency of Stars
Another attempt at a sleep sensor, currently funded on Kickstarter.
I have been considering finding a group of writers/artists to associate with in order to both provide me a catalyst for self-improvement and a set of peers who are serious about their work. I have several friends who are "into" writing or comics or whatever other medium, but most of them are as "into" it as the time between video games, drinking, and staying up late to binge Dexter episodes allows.
We have a whole sequence here on LessWrong about the Craft and the Community, so I don't feel the need to provide bits of anecdotal evidence for why I think having a community for your craft is a good idea.
Instead, I'll just ask, to the writers: how have you found a community for your craft/have you bothered?
I put writing online for free and siphoned off spare HPMoR fans until I had enough fanbase to maintain my own stable of beta readers, set of tumblr tags, and modestly populated forum. This is more how I cultivated a fandom than a set of colleagues, but some of the people I collected this way also cowrite with me and most of them are available to spur me along.
I was once part of an online community on the sffworld writing forum. There were regular posters, like on any forum, and there was also a small workshop (6-8 people); each week two people would submit something for the rest of the group to read and provide feedback on. It was motivating and fun.
I frequent a sci-fi fan club in my city and from that group emerged a tiny writing workshop (6 members currently). The couple of guys who came up with the idea had heard that I wrote some small stuff and won a local contest, and thus I got invited. Every two Sundays we meet via Skype to comment on the stories that we've posted to our FB group since the last meeting. It has been helpful for me; we've agreed to be brutally honest with one another.
Not sure if this belongs here, but not sure where else it should go.
Many pages on the internet disappear, returning 404s when you look for them (especially older pages). The material I found on LW and OB is of such great quality that I would really hate it if some of the pages here also disappeared (as in, became harder for me to access). I am not sure how realistic this worry is, but the thought does bother me. So I was hoping to somehow make a local backup of LW/OB, downloading all pages to a hard drive. There are other reasons for wanting the same thing: I am frequently in regions without internet access, and it might also finally allow me to organise the posts (the categories on LW leave much to be desired; the closest thing to a good structure I found is the chronological list on OB, which seems to be absent on LW?).
So my triple question: should I be worried about pages disappearing (probably not too much), would it still be a good idea to try to make a local backup (probably yes, storage is cheap and I think it would be useful for me personally to have LW offline, even only the older posts) and how does one go about this?
Pages here are disappearing -- someone's been going through the archive deleting posts they don't like. (Cf. [1] versus [2].) (The post is still slightly available, but the 152 comments are no longer associated with it.) So get archiving sooner rather than later.
You might be interested in reading Gwern's page on Archiving URLs and Link Rot
As a person living very far away from west Africa, how worried should I be about the current Ebola outbreak?
TL;DR: Ebola is very hard to transmit person to person. Don't think flu, think STDs.
Ebola isn't airborne, so breathing the same air as, or being on the same plane as, an Ebola case will not give you Ebola. It doesn't spread quite like STDs, but it does require getting an infected person's bodily fluids (urine, semen, blood, and vomit) mixed up in your bodily fluids or in contact with a mucous membrane.
So, don't sex up your recently returned Peace Corps friend who's been feeling a little fluish, and you should be a-ok.
A person infected with Ebola is very contagious during the period they are showing symptoms. The CDC recommends casual contact and droplet precautions.
Note the following description of (casual) contact:
(Much more contagious than an STD.)
But Lumifer is also correct. People without symptoms are not contagious, and people with symptoms are conspicuous (e.g. Patrick Sawyer was very conspicuous when he infected staff and healthcare workers in Nigeria) and unlikely to be ambulatory. The probability of a given person in West Africa being infected is very small (2000 cases divided by approximately 20 million people in Guinea, Sierra Leone, and Liberia), and the probability of a given person outside this area being infected is truly negligible. If we cannot contain the virus in the area, there will be a lot of time between the observation of a burning 'ember' (or 10 or 20) and any change in these probabilities -- plenty of time to handle and douse any further hotspots that form.
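To make that base rate explicit (using the case counts quoted above):

$$P(\text{infected} \mid \text{random person in the affected countries}) \approx \frac{2000}{2 \times 10^{7}} = 10^{-4},$$

about one in ten thousand, and far lower still for anyone outside the region.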
The worst case scenario in my mind is that it continues unchecked in West Africa or takes hold in more underdeveloped countries. This scenario would mean more unacceptable suffering and would also mean the outbreak gets harder and harder to squash and contain, increasing the risk to all countries.
We need to douse it while it is relatively small -- I feel so frustrated when I hear there are hospitals in these regions without supplies such as protective gear. What is the problem? Rich countries should be dropping supplies already.
Um. Given that an epidemic is actually happening and given that more than one doctor attending Ebola patients got infected, I'm not sure that "very hard" is the right term here.
Having said that, if you don't live in West Africa your chances of getting Ebola are pretty close to zero. You should be much more afraid of lightning strikes, for example.
No, You're Not Going To Get Ebola
Sorry, realized I don't feel comfortable commenting on such a high-profile topic. Will wait a few minutes and then delete this comment (just to make sure there are no replies.)
(Not in any way an expert; just going by what I've heard elsewhere.) I think the answer probably depends substantially on how much you care about the welfare of West Africans. It is very unlikely to have any impact to speak of in the US or Western Europe, for instance.
I wrote a userscript / Chrome extension / zero-installation bookmarklet to make finding recent comments over at Slate Star Codex a lot easier. Observe screenshots. I'll also post this next time SSC has a new open thread (unless Yvain happens to notice this).
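For the curious, the core trick is small. A minimal sketch of the idea follows; this is not the extension's actual code, and the selectors and storage key are guesses:

```javascript
// ==UserScript==
// @name    ssc-new-comments-sketch
// @include http://slatestarcodex.com/*
// @grant   none
// ==/UserScript==
// Sketch: outline comments newer than the stored last-visit time.
(function () {
  var key = 'lastVisit:' + location.pathname;               // guessed storage key
  var cutoff = new Date(localStorage.getItem(key) || 0);
  var stamps = document.querySelectorAll('.comment time');  // guessed selector
  Array.prototype.forEach.call(stamps, function (t) {
    if (new Date(t.getAttribute('datetime')) > cutoff) {
      t.closest('.comment').style.outline = '2px solid purple';
    }
  });
  localStorage.setItem(key, new Date().toISOString());
})();
```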
I tried downloading it by clicking on "install the extension", but it doesn't seem to get to my browser (Chrome). Am I missing something?
"Install the extension" is a link bringing you to the chrome web store, where you can install it by clicking in the upper-right. The link is this, in case it's Github giving you trouble somehow.
If the Chrome web store isn't recognizing that you're running Chrome, that's probably not a thing I can fix, though you could try saving this link as something.user.js, opening chrome://extensions, and dragging the file onto the window.
Thank you. That worked. I never would have guessed that an icon which simply had the word "free" on it was the download button.
Would it be worth your while to do this for LW? It makes me crazy that the purple edges for new comments are irretrievably lost if the page is downloaded again.
Sure. Remarkably little effort required, it turned out. (Chrome extension is here.)
I guess I'll make a post about this too, since it's directly relevant to LW.
This doesn't seem to handle stuff deep enough in the reply chain to be behind "continue this thread" links. On the massive threads where you most need the thing, a lot of the discussion is going to end up beyond those.
It seems to work for me. "Continue this thread" brings you to a new page, so you'll have to set the time again, is all. Comments under a "Load more" won't be properly highlighted until you click in and out of the time textbox after loading them.
The use case is that I go to the top page of a huge thread, the only new messages are under a "Continue this thread" link, and I want the widget to tell me that there are new messages and help me find them. I don't want to have to open every "Continue" link to see if there are new messages under one of them.
Ah. That's much more work, since there's no way of knowing if there's new comments in such a situation without fetching all of those pages. I might make that happen at some point, but not tonight.
Thanks very much. I think there's an "unpack the whole page" program somewhere. Anyone remember it?
Thanks a million!
Great idea and nicely done! It also had the additional benefit of constituting my very first interaction with JavaScript, because I needed to modify some things. (Specifically, avoiding the use of localStorage.)
I'm curious what you used instead (cookies?), or did you just make a historyless version? Also, why did you need that? localStorage isn't exactly a new feature (hell, IE has supported it since version 8, I think).
It appears that my Firefox profile has some security features that mess with localStorage in a way that I don't understand. I used Greasemonkey's GM_[sg]etValue instead. (Important and maybe obvious, but not to me: their use has to be declared with @grant in the UserScript preamble.)
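For reference, the substitution looks roughly like this (assuming the standard Greasemonkey API; the GM_* functions exist inside the sandbox only when declared with @grant as noted above):

```javascript
// ==UserScript==
// @name  storage-shim-sketch
// @grant GM_getValue
// @grant GM_setValue
// ==/UserScript==
// Use Greasemonkey's sandboxed storage when available, else localStorage.
function loadValue(key, fallback) {
  return (typeof GM_getValue === 'function')
    ? GM_getValue(key, fallback)
    : (localStorage.getItem(key) || fallback);
}
function saveValue(key, value) {
  if (typeof GM_setValue === 'function') GM_setValue(key, value);
  else localStorage.setItem(key, value);
}
```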
This looks excellent.
I wrote a userscript to add a delay and checkbox reading "I swear by all I hold sacred that this comment supports the collective search for truth to the very best of my abilities." before allowing you to comment on LW. Done in response to a comment by army1987 here.
Edit: per NancyLebovitz and ChristianKl below, solicitations for alternative default messages are welcomed.
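The mechanism is simple. Roughly (a sketch with guessed selectors, not the script's real code):

```javascript
// Sketch: disable the submit button until the oath checkbox is ticked.
var forms = document.querySelectorAll('form.comment-form'); // guessed selector
Array.prototype.forEach.call(forms, function (form) {
  var submit = form.querySelector('[type=submit]');
  var box = document.createElement('input');
  box.type = 'checkbox';
  var label = document.createElement('label');
  label.appendChild(box);
  label.appendChild(document.createTextNode(
    ' I swear by all I hold sacred that this comment supports the collective' +
    ' search for truth to the very best of my abilities.'));
  form.insertBefore(label, submit);
  submit.disabled = true;
  box.addEventListener('change', function () {
    submit.disabled = !box.checked;
  });
});
```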
Testing this...
Nope, doesn't seem to work. (I am probably doing something wrong as I never used Greasemonkey before.)
Just tested this on a clean FF profile, so it's almost certainly something on your end. Did you successfully install the script? You should've gotten an image which looks something like this, and if you go to Greasemonkey's menu while on a LW thread, you should be able to see it in the list of scripts run for that page. Also, note that you have to refresh/load a new page for it to show up after installation.
Oh, and it only works for new comments, not new posts. It should look something like this, and similarly for replies.
ETA: helpful debugging info: if you can, let me know what page it's not working on, and let me know if there are any errors in the developer console (shift-control-K or command-option-K for Windows and Mac respectively).
I had interpreted “Save this file as” in an embarrassingly wrong way. It works now!
(Maybe editing the comment should automatically uncheck the box, otherwise I can hit “Reply”, check the box straight away, then start typing my comment.)
"To the very best of my abilities" seems excessive to me, or at least I seem to do reasonably well with "according to the amount of work I'm willing to put in, and based on pretty good habits".
I'm not even sure what I could do to improve my posting much. I could be more careful to not post when I'm tired or angry, and that probably makes sense to institute as a habit. On the other hand, that's getting rid of some of the dubious posting, which is not the same thing as improving the average or the best posts.
Even when I'd only been here a few weeks, your posting had already caught my eye as unusually mindful & civil, and nothing since has changed my impression that you're far better than most of us at conversing in good faith and with equanimity.
Given the recent discussion about how rituals can give the appearance of cultishness, it's probably not a good time to bring that up at the moment ;)
Does anyone know if something urgent has been going on with MIRI, other than the Effective Altruism Summit? I am a job applicant -- I have no idea about my status as one. I was promised, days ago, a chat today, and nothing was arranged regarding time or medium. Now it is the end of the day. I sent my application weeks ago and have been in contact with 3 of the employees who seem to work on the management side of things. This is a bit frustrating. Ironically, I applied as Office Manager, and hope that (if hired) I would be doing my best to take care of these things -- putting things on a calendar, working to help create a protocol for 'rejecting' or 'accepting' or 'deferring' employee applications, etc. Have other people had similar, disorganized correspondences with MIRI? Or have they mostly been organized, suggesting that I take this experience as a sure sign of rejection?
Apparently, in the days leading up to the Effective Altruism Summit, there was a conference on Artificial Intelligence keeping the research associates out of town. The source is my friend interning at MIRI right now. So, anyway, they might have been even busier than you thought. I hope this has cleared up now.
Still haven't heard anything back from them in any sort of way. But thanks for making their circumstances even more clear!
Heard back & talked with them. My personal issue is now resolved.
Yes.
What is the general opinion on neurofeedback? Apparently there is scientific evidence pointing to its efficacy, but have there been controlled studies showing greater benefit from neurofeedback than from traditional methods, if any such studies are known?
I have done a lot of neurofeedback. It's more of an art than a science right now. I think there have been many studies that have shown some benefit, although I don't know if any are long-term. But the studies might not be of much value, since there is so much variation in treatment: it is supposed to be customized for your brain. The first step is going to a neurofeedback provider and having him or her look at your qEEG to see how your brain differs from a typical person's brain. Ideally for treatment, you would say "I have this problem", and the provider would say "yes, this is due to your having ..., and with 20 sessions we can probably improve you." Although I am not a medical doctor, I would strongly advise anyone who can afford it to try neurofeedback before they try drugs such as anti-depressants.
On the limits of rationality given flawed minds —
There is some fraction of the human species that suffers from florid delusions, due to schizophrenia, paraphrenia, mania, or other mental illnesses. Let's call this fraction D. By a self-sampling assumption, any person has a D chance of being a person who is suffering from delusions. D is markedly greater than one in seven billion, since delusional disorders are reported; there is at least one living human suffering from delusions.
Given any sufficiently interesting set of priors, there are some possible beliefs that have a less than D chance of being true. For instance, Ptolemaic geocentrism seems to me to have a less than D chance of being true. So does the assertion "space aliens are intervening in my life to cause me suffering as an experiment."
If I believe that a belief B has a < D chance of being true, and then I receive what I think is strong evidence supporting B, how can I distinguish the cases "B is true, despite my previous belief that it is quite unlikely" and "I have developed a delusional disorder, despite delusional disorders being quite rare"?
The relevant number is probably not D (the fraction of people who suffer from delusions) but a smaller number D0 (the fraction of people who suffer from this particular kind of delusion). In fact, not D0 but the probably-larger-in-this-context number D1 (the fraction of people in situations like yours before this happened who suffer from the particular delusion in question).
On the other hand, something like the original D is also relevant: the fraction of people-like-you whose reasoning processes are disturbed in a way that would make you unable to evaluate the available evidence (including, e.g., your knowledge of D1) correctly.
Aside from those quibbles, some other things you can do (mostly already mentioned by others here):
The basic idea is to talk about your belief in detail with a trusted friend that you consider sane.
Writing your own thought processes down in a diary also helps to be better able to evaluate it.
For you to rule out a belief (e.g. geocentrism) as totally unbelievable, not only does it have to be less likely than insanity, it has to be less likely than insanity that looks like rational evidence for geocentrism.
You can test yourself for other symptoms of delusions - and one might think "but I can be deluded about those too," but you can think of it like requiring your insanity to be more and more specific and complicated, and therefore less likely.
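One way to make the comparison in this thread concrete (my own sketch, not anyone's established result): if apparent strong evidence E for belief B would look about the same whether B is true or you are deluded in a way that manufactures support for B, then the posterior odds collapse to the prior odds:

$$\frac{P(B \mid E)}{P(\text{deluded} \mid E)} \approx \frac{P(B)}{D_1},$$

so whenever your prior P(B) is below D_1, "I am deluded" remains the better explanation even after seeing E. Getting k independent observers to check pushes the delusion term toward D_1^k, which is the force behind the "ask someone else" suggestion below.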
The simple answer is to ask someone else, or better yet a group; if D is small, then D^2 or D^4 will be infinitesimal. However, delusions are "infectious" (see Mass hysteria), so this is not really a good method unless you're mostly isolated from the main population.
The more complicated answer is to track your beliefs and the evidence for each belief, and then when you get new evidence for a belief, add it to the old evidence and re-evaluate. For example, replacing an old wives' tale with a peer-reviewed study is (usually) a no-brainer. On the other hand, if you have conflicting peer-reviewed studies, then your confidence in both should decrease and you should go back to the old wives' tale (which, being old, is probably useful as a belief, regardless of truth value).
Finally, the defeatist answer is that you can't actually distinguish that you are delusional. With the film Shutter Island in mind, I hope you can see that almost nothing is going to shake delusions; you'll just rationalize them away regardless. If you keep notes on your beliefs, you'll dismiss them as being written by someone else. People will either pander to your fantasy or be dismissed as crooks. Every day will be a new one, starting over from your deluded beliefs. In such a situation there's not much hope for change.
For the record, I disagree with "delusional disorders being quite rare"; I believe D is somewhere between 0.5 and 0.8. Certainly, only 3% of these are "serious", but I could fill a book with all of the ways people believe something that isn't true.
Given the replication rates of scientific studies, a single study might not be enough. Single studies that go against your intuition are not enough reason to update, especially if you only read the abstract.
No need to get people to wash their hands before you do a business deal with them.
Enough for what? My question is whether my hair stylist saying "Shaving makes the hair grow back thicker." is more reliable than http://onlinelibrary.wiley.com/doi/10.1002/ar.1090370405/abstract. In general, the scientists have put more thought into their answer and have conducted actual experiments, so they are more reliable. I might revise that opinion if I find evidence of bias, such as a study being funded by a corporation that finds favorable results for their product, but in my line of work such studies are rare.
I find that in most cases I simply don't have an intuition. What's the population of India? I can't tell you, I'd have to look it up. In the rare cases where I do have some idea of the answer, I can delve back into my memory and recreate the evidence for that idea, then combine it with the study; the update happens regardless of how much I trust the study. I suppose that a well-written anecdote might beat a low-powered statistical study, but again such cases are rare (more often than not they are studying two different phenomena).
I wash my hands after shaking theirs, as soon as convenient. Or else I just take some ibuprofen after I get sick. (Not certain what you were trying to get at here...)
Humans are biased to overrate bad human behavior as a cause of mistakes. The decent thing to do is to base your judgment on whether similar studies replicate.
Regardless, every publish-or-perish paper has an inherent bias towards finding spectacular results.
Let's say, wearing red every day.
Or thinking that those Israeli judges deny people parole because they don't have enough sugar in their blood right before mealtime. Going and giving every judge a candy before each case, to make it fair, isn't warranted.
That's fixable by training Fermi estimates.
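For instance, a Fermi version of the population-of-India example above (the anchors are my own rough ones):

$$\text{world population} \approx 7 \times 10^{9}, \qquad \text{India} \approx \tfrac{1}{6} \text{ of it} \;\Rightarrow\; \approx 1.2 \times 10^{9},$$

which lands within a few percent of the actual figure, and you can attach and calibrate an error interval (say, a factor of 1.5 either way) rather than answering "I can't tell you".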
It's a reference to the controversy about whether washing your hands primes you to be more moral. It's an experimental social science result that failed to replicate.
If a crocodile bites off your hand, it's generally your fault. If a hurricane hits your house and kills you, it's your fault for not evacuating fast enough. In general, most causes are attributed to humans, because that allows actually considering alternatives. If you just attributed everything to, say, God, then that doesn't give you any ideas. I take this a step further: everything is my fault. So if I hear about someone else doing something stupid, I try to figure out how I could have stopped them from doing it. My time and ability are limited in scope, so I usually conclude they were too far away to help (space-like separation), but this has given useful results on a few occasions (mostly when something I'm involved in goes wrong).
Not really, since the replication is more likely to fail than the original study (due to inexperience), and is subject to less peer-review scrutiny (because it's a replication). See http://wjh.harvard.edu/~jmitchel/writing/failed_science.htm. The correct thing to consider is followup work of any kind; for example, if a researcher has a long line of publications all saying the same thing in different experiments, or if it's widely cited as a building block of someone's theory, or if there's a book on it.
Right, people only publish their successes. There are so many failures that it's not worth mentioning or considering them. But they don't need to be "spectacular", just successful. Perhaps you are confusing publishing at all, even in e.g. a blog post, with publishing in "prestigious" journals, which indeed only publish "spectacular" results; looking at only those would give you a biased view, certainly, but as soon as you expand your field of view to "all information everywhere" then that bias (mostly) goes away, and the real problem is finding anything at all.
So the study there links red to aggression; I don't want to be aggressive all the time, so why should I wear red all the time? For example, I don't want a red car because I don't want to get pulled over by the cops all the time. Similarly for most results; they're very limited in scope, of the form "if X then Y" or even "X associate with Y". Many times, Y is irrelevant, so I don't need to even consider X.
Sure, but if I'm involved with a case then I'll be sure to try to get it heard after lunchtime, and offer the judge some candy if I can get away with it.
You can memorize populations or memorize the Fermi factors and how to combine them, but the point stands regardless; you still have to remember something.
Ah, social science. I need to take more courses in statistics before I can comment... so far I have been sticking to the biology/chemistry/physics side of things (where statistics are rare and the effects are obvious from inspection).
The car story appears to be a myth nowadays, but that could just be due to the increased use of radar guns and better police training. Radar guns were introduced around the 1950s, so all of their policemen quotes are too recent to tell.
Conflating "could I have done something to stop them" with finding the truth makes it harder to have an accurate view of whether or not the result is true.
Accepting reality for what it is helps you have an accurate perception of it. Only once you understand the territory should you go out and try to change things. If you do the second step before the first, you mess up your epistemology: you fall for a bunch of human biases, evolved for figuring out whether the neighboring tribe might attack yours, that aren't useful for a clear understanding of today's complex world.
I spoke about incentives. Researchers have an incentive to publish in prestigious journals and optimize their research practices for doing so. The case with blogs isn't much different. Successful bloggers write polarizing posts that get people talking and engaged with the story, even when there would be a way to be more accurate and less polarizing. The incentives point towards "spectacular".
Scott H Young, whom I respect and who's a nice fellow, wrote his post against spaced repetition, and yet now, in a later post, recommends the use of Anki for learning vocabulary.
It's not about remembering; it's about being able to make estimates even when you aren't sure. And you can calibrate your error intervals.
Aggression is not the central word. Status and dominance also appear. People do a bunch of things to appear higher status.
One of the studies in question suggested that it makes women more attracted to you, as measured by physical distance in conversation. Another suggested increased attraction based on photo ratings.
I actually did the comparison on HotOrNot. I tested a blue shirt against a red shirt, Photoshopped so that nothing besides the color was different. For my photo, blue scored more attractive than red, despite the studies saying that red is the color that raises attractiveness.
The replication rates for cancer biology seem to be even worse than those for psychology, if you trust the Amgen researchers, who could only replicate 6 of the 55 landmark studies they tried to replicate.
Probably a minor point, but were both the red and blue shirts photoshopped? If one of them was an actual photo, it might have looked more natural (color reflected on to your face) than the other.
In this case, no; the blue was the original. You are right that this might have screwed with the results. HotOrNot's internal algorithms were also a bit opaque.
But to be fair, the setup of the original studies wasn't natural either: in those studies the color was the color of the photo's border.
If I wanted to repeat the experiment, I would like to do it on Amazon Mechanical Turk. At the moment I don't really have the spare money for projects like that, but maybe someone else on LW cares enough about dressing attractively, wants to optimize, and has the money.
The whole thing might also work well for a blogger willing to spend a bit of cash to write an interesting post.
Especially for online dating like Tinder, photo optimisation through empirical measurement can increase success rates a bit.
I'm not certain where you see conflation. I have separate storage areas for things to think about, evidence, actions, and risk/reward evaluations. They interact as described here. Things I hear about go into the "things to think about" list.
The world is changing, so I must too. If the apocalypse is tomorrow, I'm ready. I don't need to "understand" the apocalypse or its cause to start preparing for it. If I learn something later that says I did the wrong thing, so be it. I prefer spending most of my time trying to change things to sitting in a room all day trying to understand. Indeed, some understanding can only be gained through direct experience. So I disagree with you here.
The decision procedure I outlined above accounts for most biases; you're welcome to suggest revisions or stuff I should read.
You didn't, AFAICT; you spoke about "inherent biases". I think my point still stands though; averaging over "all information everywhere" counteracts most perverse incentives, since perversion is rare, and the few incentives left are incentives that are shared among humans such as survival, reproduction, etc. In general humans are good at that sort of averaging, although of course there are timing and priming effects. Researchers/bloggers are incentivized to produce good results because good results are the most useful and interesting. Good results lead to good products or services (after a 30 year lag). The products/services lead to improved life (at least for some). Improved life leads to more free time and better research methods. And the cycle goes on, the end result AFAICT is a big database of mostly-correct information.
His post is entitled "Why Forgetting Can Be Good" and his mention of Anki is limited to "I’m skeptical of the value of an SRS for most domains of knowledge." If he then recommends Anki for learning vocabulary, this changes relatively little; he's simply found a knowledge domain where he found SRS useful. Different studies, different conclusions, different contributions to different decisions.
You're never sure, so why mention "even when you aren't sure", since it's implied? Striking that out...
Estimation comes after the evidence-gathering phase. If you have no evidence you can make no estimates. Fermi estimation is just another estimation method, so it doesn't change this. If you have no memory, then you have no evidence. So it is about remembering. "Those who cannot remember the past are condemned to repeat it".
If you have no estimates you can't have error intervals either. Indeed, you can't do calibration until you have a distribution of estimates.
It looks like the central word is definitely dominance. Stringing the top words into a sentence I get "Sports teams wear red to show dominance and it has an effect on referees' performance". I guess I was going off of the Mandrill story, where signs of dominance are correlated with willingness to be aggressive. This study says dominance and threat are emphasized by wearing red, where "threat" is measured by "How threatening (intimidating, aggressive) did you feel?". Some other papers also relate dominance to aggressiveness. So I feel comfortable conflating the two, since they seem to be strongly correlated and relatively flexible in terms of definition.
The comments do focus on status, so I guess you have a point. But I generally skip over the comments when an article is linked to. And the status discussion was in the comments of an Overcoming Bias post, so by no means central.
Would you be referring to, among others, this study? Unfortunately... it still looks like experimental psychology, so again I have to plead lack of statistics.
I've mostly been reading Army / DoD studies, which have a different funding model. But I guess cancer will become relevant eventually (preferably later rather than sooner).
Side note: does LW have a "collapse threads more than N levels deep" feature like reddit? It probably should have triggered a few replies ago, so I didn't post on the wrong child...
What sort of beliefs are you talking about here? Are you classifying simply being wrong about something as a "delusional disorder"?
Exhibiting symptoms often considered signs of mental illness. For example, this says 38.6% of the general population have hallucinations. This says 40% of the general population had paranoid thoughts. Presumably these groups aren't exactly the same, so there you go: between 0.5 and 0.8 of the general population. You can probably pull together some more studies with similar results for other symptoms.
Does anyone have any experience or thoughts regarding Cal Newport's "Study Hacks" blog, or his books? I'm trying to get an idea of how reliable his advice is before, say, reading his book about college, or reading all of the blog archives.
Some LW discussions of his books: A summary and broad points of agreement and disagreement with Cal Newport's book on high school extracurriculars, Book Review: So Good They Can’t Ignore You, by Cal Newport, The failed simulation effect and its implications for the optimization of extracurricular activities.
Cognito Mentoring refer to him a fair bit, and often in mild agreement. Check their blog and wiki.
I've been looking for tools to help organize complex arguments and systems into diagrams, and ran into Flying Logic and Southbeach modeller. Could anyone here with experience using these comment on their value?
And UnBBayes does computational analyses, similar to Flying Logic, except it uses Bayesian probability.
I don't have experience with those, but I'll recommend Graphviz as a free (and useful) alternative. See e.g. http://k0s.org/mozilla/workflow.svg
There is a common idea in the “critical thinking”/"traditional rationality" community that (roughly) you should, when exposed to an argument, either identify a problem with it or come to believe the argument’s conclusion. From a Bayesian framework, however, this idea seems clearly flawed. When presented with an argument for a certain conclusion, my failure to spot a flaw in the argument might be explained by either the argument’s being sound or by my inability to identify flawed arguments. So the degree to which I should update in either direction depends on my corresponding prior beliefs. In particular, if I have independent evidence that the argument’s conclusion is false and that my skills for detecting flaws in arguments are imperfect, it seems perfectly legitimate to say, “Look, your argument appears sound to me, but given what I know, both about the matter at hand and about my own cognitive abilities, it is much more likely that there’s a flaw in your argument which I cannot detect than that its conclusion is true.” Yet it is extremely rare to see LW folk or other rationalists say things like this. Why is this so?
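One way to formalize this (my own sketch): let F be "the argument is flawed", and suppose I spot a real flaw with probability r while never hallucinating one. Then failing to spot a flaw gives

$$P(F \mid \text{no flaw spotted}) = \frac{(1-r)\,P(F)}{(1-r)\,P(F) + 1 - P(F)}.$$

With strong independent evidence against the conclusion, say P(F) = 0.9, and a mediocre detection rate r = 0.5, this works out to 0.45/0.55 ≈ 0.82: failing to find the flaw should barely move me toward the conclusion.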
A similar situation used to happen to me frequently in real life, when the argument was too long, too complex, or used information that I couldn't verify... or could, but the verification would take a lot of time. Something like: "There is this 1000-page book containing complex philosophical arguments and information from non-mainstream but cited sources, which totally proves that my religion is correct." And there is nothing obviously incorrect within the first five pages. But I am certainly not going to read it all. And the other person tries to use my self-image as an intelligent person against me, insisting that I should promise to read the whole book and then debate it (which is supposedly the rational thing to do in such a situation: hey, here is the evidence, you just refuse to look at it), or else I am not really intelligent.
And in such situations I just waved my hands and said -- well, I guess you just have to consider me unintelligent -- and went away.
I didn't think about how to formalize this properly. It was just this: I recognize the trap, and refuse to walk inside. If it happened to me these days, I could probably try explaining my reaction in Bayesian terms, but it would be still socially awkward. I mean, in the case of religion, the true answer would show that I believe my opponent is either dishonest or stupid (which is why I expect him to give me false arguments); which is not a nice thing to say to people. And yeah, it seems similar to ignoring evidence for irrational reasons.
Nothing, including rationality, requires you to look at ALL evidence that you could possibly access. Among other things, your time is both finite and valuable.
I say things like this a lot in contexts where I know there are experts, but I have put no effort into learning which are the reliable ones. So when someone asserts something about (a) nutritional science (b) Biblical translation nuances (c) assorted other things in this category, I tend to say, "I really don't have the relevant background to evaluate your argument, and it's not a field I'm planning to do the legwork to understand very well."
Related link: Peter van Inwagen's article Is it wrong everywhere, always, and for everyone, to believe anything on insufficient evidence?. van Inwagen suggests not, on the grounds that if it were then no philosopher could ever continue believing something firmly when there are other smarter equally well informed philosophers who strongly disagree. I find this argument less compelling than van Inwagen does.
Haha. You should believe exactly what the evidence suggests, and exactly to the degree that it suggests it. The argument is also an amusing example of 'one man's modus ponens...'.
This idea seems like a manifestation of epistemic learned helplessness.
In my experience there are LW people who would in such cases simply declare that they won't be convinced of the topic at hand and suggest to change the subject.
I particularly remember a conversation at the LW community camp about geopolitics where a person simply declared that they aren't able to evaluate arguments on the matter and therefore won't be convinced.
That was probably me. I don't think I handled the situation particularly gracefully, but I really didn't want to continue that conversation, and I couldn't see whether the person in question was wearing a Crocker's rules tag.
I don't remember my actual words, but I think I wasn't trying to go for "nothing could possibly convince me", so much as "nothing said in this conversation could convince me".
It's still more graceful than the "I think you are wrong based on my heuristics but I can't tell you where you are wrong" that Pablo Stafforini advocates.
Because that ends the discussion. I think a lot of people around here just enjoy debating arguments (certainly I do).
I actually do say things like this pretty frequently, though I haven't had the opportunity to do so on LW yet.
Because the case where you are entirely wedded to a particular conclusion and want to just ignore the contrary evidence would look awfully similar...
Awfully similar, but not identical.
In the first case, you have independent evidence that the conclusion is false, so you're basically saying "If I considered your arguments in isolation, I would be convinced of your conclusion, but here are several pieces of external evidence which contradict your conclusion. I trust this external evidence more than I trust my ability to evaluate arguments."
In the second case, you're saying "I have already concluded that your conclusion is false because I have concluded that mine is true. I think it's more likely that there is a flaw in your conclusion that I can't detect than that there is a flaw in the reasoning that led to my conclusion."
The person in the first case is far more likely to respond with "I don't know" in response to the question of "So what do you think the real answer is, then?" In our culture (both outside, and, to a lesser but still significant degree inside LW), there is a stigma against arguing against a hypothesis without providing an alternative hypothesis. An exception is the argument of the form "If Y is true, how do you explain X?" which is quite common. Unfortunately, this form of argument is used extensively by people who are, as you say, entirely wedded to a particular conclusion, so using it makes you seem like one of those people and therefore less credible, especially in the eyes of LWers.
Rereading your comment, I see that there are two ways to interpret it. The first is "Rationalists do not use this form of argument because it makes them look like people who are wedded to a particular conclusion." The second is "Rationalists do not use this form of argument because it is flawed -- they see that anyone who is wedded to a particular conclusion can use it to avoid updating on evidence." I agree with the first interpretation, but not the second -- that form of argument can be valid, but reduces the credibility of the person using it in the eyes of other rationalists.
"Independent evidence" is a tricky concept. Since we are talking Bayesianism here, at the moment you're rejecting the argument it's not evidence any more, it's part of your prior. Maybe there was evidence in the past that you've updated on, but when you refuse to accept the argument, you're refusing to accept it solely on the basis of your prior.
Which is pretty much equivalent to saying "I have seen evidence that your conclusion is false, so I already updated that it is false and my position is true and that's why I reject your argument".
I think both apply.
In fact that case is just a special case of the former with you having bad priors.
Not quite, your priors might be good. We're talking here about ignoring evidence and that's a separate issue from whether your priors are adequate or not.
Suppose you wanted to find out all the correlates for particular Big Five personality traits. Where would you look, besides the General Social Survey?
Would 'Google Scholar' be too glib an answer here?
It gave me mostly psychological and physiological correlates. I'm interested more in behavioral and social/economic things. I suppose you can get from the former to the latter, though with much less confidence than a directly observed correlation.
Your answer is exactly as glib as it should be, but only because I didn't really specify what I'm curious about.
Another piece of potentially useful information that may be new to some folks here: sleeping more than ~7.5 hours is associated with a higher mortality risk (and the risk is comparable to sleeping less than ~5 hours).
Relevant literature reviews:
Cappuccio FP, D'Elia L, Strazzullo P, et al. Sleep duration and all-cause mortality: a systematic review and meta-analysis of prospective studies. Sleep 2010;33(5):585-592.
Grandner MA, Hale L, Moore M, et al. Mortality associated with short sleep duration: the evidence, the possible mechanisms, and the future. Sleep Med Rev 2010;14(3):191-203.
Grandner MA, Drummond SP. Who are the long sleepers? Towards an understanding of the mortality relationship. Sleep Med Rev 2007;11(5):341-360.
Based on that data, I think a blanket suggestion that everybody should sleep 8 hours isn't warranted. It seems that some people with illnesses or who are exposed to other stressors need 8 hours.
I would advocate that everybody sleeps enough to be fully rested instead of trying to sleep a specific number of hours that some authority considers to be right for the average person.
I think the same goes for daily water consumption. Optimize values like that in a way that makes you feel good on a daily basis instead of targeting a value that seems to be optimal for the average person.
What are your grounds for making this recommendation? The parallel suggestion that everyone should eat enough to feel fully satisfied doesn't seem like a recipe for optimal health, so why think things should be different with sleep? Indeed, the analogy between food and sleep is drawn explicitly in one of the papers I cited, and it seems that a "wisdom of nature" heuristic (due to "changed tradeoffs"; see Bostrom & Sandberg, sect. 2) might support a policy of moderation in both food and sleep. Although this is all admittedly very speculative.
Years of thinking about the issue that aren't easily compressed.
In general, alarm clocks don't seem to be healthy devices. The idea of habitually breaking sleep at a random point of the sleep cycle doesn't seem good.
Let's say we look at a person who needs 8 hours of sleep to feel fully rested. The person has health issue X. When we solve X, they only need 7 hours of sleep. The solution isn't to wake the person after 7 hours of sleep but to actually fix X.
This view of sleep is consistent with the research showing that forcibly cutting people's sleep in a way that produces sleep deprivation is bad. It also explains why people who sleep 8 hours on average die earlier than people who sleep 7 hours.
If I get a cold, my body needs additional sleep during that time. I have a hard time imagining that cutting away that sleep is healthy.
I think something similar is true of eating. There's not much evidence that forced dieting is healthy; fixing underlying issues seems preferable to forcibly limiting food consumption.
While we are at the topic of sleep and mortality it's worth pointing out that sleeping pills are very harmful to health.
I don't find these results to be of much value. There's a long history of various sleep-duration correlations turning out to be confounds from various diseases and conditions (as your quote discusses), so there's more than usual reason to minimize the possibility of causation, and if you do that, why would anyone care about the results? I don't think a predictive relationship is much good for say retirement planning or diagnosing your health from your measured sleep. And on the other hand, there's plenty of experimental studies on sleep deprivation, chronic or acute, affecting mental and physical health, which overrides these extremely dubious correlates. It's not a fair fight.
Yes, my primary reason for posting these studies was actually to elicit a discussion about the kinds of conclusions we may or may not be entitled to draw from them (though I failed to make this clear in my original comment). I would like to have a better epistemic framework for drawing inferences from correlational studies, and it is unclear to me whether the sheer (apparent) poor track-record of correlational studies when assessed in light of subsequent experiments is enough to dismiss them altogether as sources of evidence for causal hypotheses. And if we do accept that sometimes correlational studies are evidentially causally relevant, can we identify an explicit set of conditions that need to obtain for that to be the case, or are these grounds so elusive that we can only rely on subjective judgment and intuition?
Oblique request made without any explanation: can anyone provide examples of beliefs that are incontrovertibly incorrect, but which intelligent people will nonetheless arrive at quite reasonably through armchair-theorising?
I am trying to think up non-politicised, non-controversial examples, yet every one I come up with is a reliable flame-war magnet.
ETA: I am trying to reason about disputes where on the one hand you have an intelligent, thoughtful person who has very expertly reasoned themselves into a naive but understandable position p, and on the other hand, you have an individual who possesses a body of knowledge that makes a strong case for the naivety of p.
What kind of ps exist, and do they have common characteristics? All I can come up with are politically controversial ps, but I'm starting my search from a politically-controversial starting point. The motivating example for this line of reasoning is so controversial that I'm not touching it with a shitty-stick.
When I was ~16, I came up with group selection to explain traits like altruism.
Perhaps "The person who came out of the teleporter isn't me, because he's not made of the same atoms"?
Bell's spaceship paradox.
According to Bell, he surveyed his colleagues at CERN (clearly a group of intelligent, qualified people) about this question, and most of them got it wrong. Although, to be fair, the conflict here is not between expert reasoning and domain knowledge, since the physicists at CERN presumably possessed all the knowledge you need (basic special relativity, really) to get the right answer.
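For readers who haven't seen it, here is a compact statement of the standard resolution (my summary, not part of the original comment): in the launch frame both ships follow identical acceleration profiles, so their separation stays at its initial value L; but in the ships' momentarily comoving frame that same separation is

    \gamma L > L,

so the thread, whose rest length is L, must stretch by a factor of \gamma and eventually breaks.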
I thought about this on & off over the last couple of days and came up with more candidates than you can shake a shitty stick at. Some of these are somewhat political or controversial, but I don't think any are reliable flame-war magnets. I expect some'll ring your cherries more than others, but since I can't tell which, I'll post 'em all and let you decide.
The answer to the Sleeping Beauty puzzle is obviously 1/2.
Rational behaviour, being rational, entails Pareto optimal results.
Food availability sets a hard limit on the number of kids people can have, so when people have more food they have more kids.
Truth is an absolute defence against a libel accusation.
If a statistical effect is so small that a sample of several thousand is insufficient to reliably observe it, the effect's too small to matter.
Controlling for an auxiliary variable, or matching on that variable, never worsens the bias of an estimate of a causal effect.
Human nature being as brutish as it is, most people are quite willing to be violent, and their attempts at violence are usually competent.
In the increasingly fast-paced and tightly connected United States, residential mobility is higher than ever.
The immediate cause of death from cancer is most often organ failure, due to infiltration or obstruction by spreading tumours.
Aumann's agreement theorem means rationalists may never agree to disagree.
Friction, being a form of dissipation, plays no role in explaining how wings generate lift.
Seasons occur because Earth's distance from the Sun changes during Earth's annual orbit.
Beneficial mutations always evolve to fixation.
Multiple discovery is rare & anomalous.
The words "male" & "female" are cognates.
Given the rise of online piracy, the ridiculous cost of tickets, and the ever-growing convenience of other forms of entertainment, cinema box office receipts must be going down & down.
Looking at voting in an election from the perspective of timeless decision theory, my voting decision is probably correlated and indeed logically linked with that of thousands of people relatively likely to agree with my politics. This could raise the chance of my influencing an election above negligibility, and I should vote accordingly.
The countries with the highest female life expectancies are approaching a physiologically fixed hard limit of 65 — sorry, 70 — sorry, 80 — sorry, 85 years.
The answer to the Sleeping Beauty puzzle is obviously 1/3.
Language in general might be a rich source of these, between false etymologies, false cognates, false friends, and eggcorns.
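Incidentally, the two Sleeping Beauty entries above can at least be made precise with a quick Monte Carlo (a sketch of my own, not from the thread). It also shows why both camps sound obvious: the frequency you get depends entirely on whether you count per awakening or per experiment, and the simulation can't choose the counting convention for you:

    import random

    heads_awakenings = 0
    tails_awakenings = 0
    for _ in range(100000):
        if random.random() < 0.5:   # heads: Beauty is woken once
            heads_awakenings += 1
        else:                       # tails: Beauty is woken twice
            tails_awakenings += 2

    total = heads_awakenings + tails_awakenings
    print(heads_awakenings / total)  # ~1/3 of awakenings follow heads;
    # per *experiment*, of course, heads still happens ~1/2 the time.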
... don't they? (in the long run)
In the "long-long run", given ad hoc reproductive patterns, yeah, I'd expect evolution to ratchet average human fertility higher & higher until much of humanity slammed into the Malthusian limit, at which point "when people have more food they have more kids" would become true.
Nonetheless, it isn't true today, it's unlikely to be true for the next few centuries unless WWIII kicks off, and may never come to pass (humanity might snuff itself out of existence before we go Malthusian, or the threat of Malthusian Assured Destruction might compel humanity to enforce involuntary fertility limits). So here in 2014 I rate the idea incontrovertibly false.
No, they don't -- look at contemporary Western countries and their birth rates.
Oh yes I know that, I just meant in the long-long run. This voluntary limiting of birth rates can't last for obvious evolutionary reasons.
I have no idea about the "long-long" run :-)
The limiting of birth rates can last for a very long time as long as you stay at replacement rates. I don't think "obvious evolutionary reasons" apply to humans any more, it's not likely another species will outcompete us by breeding faster.
Any genes that make people defect by having more children are going to be (and are currently being) positively selected.
Besides, reducing birthrates to replacement isn't anything near a universal phenomenon, see the Mormons and Amish.
It's got nothing to do with another species out-competing us - competition between humans is more than enough.
This observation should be true throughout the history of the human race, and yet the birth rates in the developed countries did fall off the cliff...
This happened barely half a generational cycle ago. Give evolution time.
So what's your prediction for what will happen when?
And animals don't breed well in captivity.
Until they do.
Thanks for that list. I believed (or at least, assigned a probability greater than 0.5 to) about five of those.
Thanks for this. These are all really good.
Now I just need to think of another 21 and I'll have enough for a philosophy article!
Generalising from 'plane on a treadmill': a lot of incorrect answers to physics problems, and misconceptions of physics in general. For any given problem or phenomenon, one can guess a hundred different fake explanations, numbers, or outcomes using different combinations of passwords like 'because of Newton's Nth law', 'because of drag', 'because of air resistance', 'but this is unphysical so it must be false', etc. For the vast majority of people, the only way to narrow down which explanations could be correct is to already know the answer or perform physical experiments; most people don't have a good enough physical intuition to know in advance which physical arguments go through, so they should be in a state of epistemic learned helplessness with respect to physics.
I have a strange request. Without consulting some external source, can you please briefly define "learned helplessness" as you've used it in this context, and (privately, if you like) share it with me? I promise I'll explain at some later date.
There will probably be holes, and it may not quite capture exactly what I mean, but I'll take a shot. Let me know if this is not rigorous or detailed enough and I'll take another stab, or if you have any other follow-up. I have answered this immediately, without changing tab, so the only contamination is saccading my LW inbox before clicking through to your comment, the titles of other tabs, etc., which look (as one would expect) to be irrelevant.
Helplessness about topic X - One is not able to attain a knowably stable and confident opinion about X given the amount of effort one is prepared to put in or the limits of one's knowledge or expertise etc. One's lack of knowledge of X includes lack of knowledge about the kinds of arguments or methods that tend to work in X, lack of experience spotting crackpot or amateur claims about X, and lack of general knowledge of X that would allow one to notice one's confusion at false basic claims and reject them. One is unable to distinguish between ballsy amateurs and experts.
Learned helplessness about X - The helplessness is learned from experience of X; much like the sheep in Animal Farm, one gets opinion whiplash on some matter of X that makes one realise that one knows so little about X that one can be argued into any opinion about it.
(This has ended up more like a bunch of arbitrary properties pointing to the sense of learned helplessness rather than a slick definition. Is it suitable for your purposes, or should I try harder to cut to the essence?)
Rant about learned helplessness in physics: Puzzles in physics, or challenges to predict the outcome of a situation or experiment, often seem like they have many different possible explanations leading to a variety of very different answers, with the merit of these explanations not distinguishable except by those who have done lots of physics and seen lots of tricks; and even then, maybe you just need to already know the answer before you can pick the correct one.
Moreover, one eventually learns that the explanations at a given level of physics instruction are probably technically wrong in that they are simplified (though I guess less so as one progresses).
Moreover moreover, one eventually becomes smart enough to see that the instructors do not actually even spot their leaps in logic. (For example, it never seemed to occur to any of my instructors that there's no reason you can't have negative wavenumbers when looking at wavefunctions in basic quantum. It turns out that when I run the numbers, everything works out: the -n wavefunction is just the n wavefunction with flipped sign, and one normalizes the wavefunction anyway, so it doesn't matter; but one could only know this for sure after reasoning it out and justifying discarding the negative wavenumbers. It basically seemed like the instructors saw an 'n' in sin(n*pi*x/L) or whatever and their brain took it as a natural number without any cognitive reflection that the letter could just as easily have been a k or z or something, and without checking that the notation was justified by the referent having to be a natural.)
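For concreteness, the check in question (assuming the standard particle-in-a-box eigenfunctions, which is what the sin(n*pi*x/L) notation suggests):

    \psi_n(x) = \sqrt{2/L}\,\sin(n\pi x/L), \qquad
    \psi_{-n}(x) = \sqrt{2/L}\,\sin(-n\pi x/L) = -\psi_n(x),

so each negative wavenumber labels the same physical state as its positive counterpart, up to an overall sign that normalization and the Born rule make irrelevant. Discarding the negative n is justified -- but only after this check, which is the point.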
Moreover, it takes a high level of philosophical ability to reason about physics thought experiments and their standards of proof. Take the 'directly downwind faster than the wind' problem. The argument goes back and forth, and, like the sheep, at every point the side that's speaking seems to be winning. Terry Tao comes along and says it's possible, and people link to videos of carts with propellers apparently going downwind faster than the wind and wheels with rubber bands attached allegedly proving it. But beyond deferring to his general hard sciences problem-solving ability, one has no inside view way to verify Tao's solution; what are the standards of proof for a thought experiment? After all, maybe the contraptions in the video only work (assuming they do work as claimed, which isn't assured) because of slight side-to-side effects rather than directly down wind or some other property of the test conditions implicitly forbidden by the thought experiment.
Since any physical experiment for a physics thought experiment will have additional variables, one needs some way to distinguish relevant and irrelevant variables. Is the thought experiment the limit as extraneous variables become negligible, or is there a discontinuity? What if different sets of variables give rise to different limits? How does anyone ever know what the 'correct' answer is to an idealised physics thought experiment of a situation that never actually arises? Etc.
Thanks for that. The whole response is interesting.
I ask because up until quite recently I was labouring under a wonky definition of "learned helplessness" that revolved around strategic self-handicapping.
An example would be people who foster a characteristic of technical incompetence, to the point where they refuse to click next-next-finish on a noddy software installer. Every time they exhibit their technical incompetence, they're reinforced in this behaviour by someone taking the "hard" task away from them. Hence their "helplessness" is "learned".
It wasn't until recently that I came across an accurate definition in a book on reinforcement training. I'm pretty sure I've had "learned helplessness" in my lexicon for over a decade, and I've never seen it used in a context that challenged my definition, or used it in a way that aroused suspicion. It's worth noting that I probably picked up my definition through observing feminist discussions. Trying a mental find-and-replace on ten years' conversations is kind of weird.
I am also now bereft of a term for what I thought "learned helplessness" was. Analogous ideas come up in game theory, but there's no snappy self-contained way available to me for expressing it.
Making up a term for this..."reinforced helplessness"? (I dunno whether it'd generalize to cover the rest of what you formerly meant by "learned helplessness".)
Good chance you've seen both of these before, but:
http://en.wikipedia.org/wiki/Learned_helplessness and http://squid314.livejournal.com/350090.html
Damn, if only someone had created a thread for that, ho ho ho
Strategic incompetence?
I'm not sure if maybe Schelling uses a specific name (self-sabotage?) for that kind of thing?
Schelling does talk about strategic self-sabotage, but it captures a lot of deliberate behaviour that isn't implied in my fake definition.
Also interesting to note, I have read that Epistemic Learned Helplessness blog entry before, and my fake definition is sufficiently consistent with it that it doesn't stand out as obviously incorrect.
Now picturing a Venn diagram with three overlapping circles labelled "epistemic learned helplessness", "what psychologists call 'learned helplessness'", and "what sixes_and_sevens calls 'learned helplessness'"!
This isn't very interesting, but I used to believe that the rules about checkmate didn't really change the nature of chess. Some of the forbidden moves - moving into check, or failing to move out if possible - are always a mistake, so if you just played until someone captured the king, the game would only be different in cases where someone made an obvious mistake.
But if you can't move, the game ends in stalemate. So forbidding you to move into check means that some games end in draws, where capture-the-king would have a victor.
(This is still armchair theorising on my part.)
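A concrete position makes the difference visible; here's a minimal check using the python-chess library (the position is my own choice, purely for illustration):

    import chess

    # White: king a1. Black: king c2, queen b3. White to move.
    board = chess.Board("8/8/8/8/8/1q6/2k5/K7 w - - 0 1")
    print(board.is_stalemate())  # True: White is not in check, but every
                                 # king move (a2, b1, b2) would move into check.
    # Under "capture the king" rules White would have to move anyway, and
    # Black would take the king next turn -- a win instead of a draw.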
The sun revolves around the earth.
The earth revolving around the sun was also armchair reasoning, and refuted by empirical data like the lack of observable parallax of stars. Geocentrism is a pretty interesting historical example because of this: the Greeks reached the wrong conclusion with right arguments. Another example in the opposite direction: the Atomists were right about matter basically being divided up into very tiny discrete units moving in a void, but could you really say any of their armchair arguments about that were right?
It is not clear that the Greeks rejected heliocentrism at all, let alone for any reason other than heresy. On the contrary, Hipparchus refused to choose, on the grounds of Galilean relativity.
The atomists got the atomic theory from the Brownian motion of dust in a beam of light, the same way that Einstein convinced the final holdouts thousands of years later.
Eh? I was under the impression that most of the Greeks accepted geocentrism, eg Aristotle. Double-checking https://en.wikipedia.org/wiki/Heliocentrism#Greek_and_Hellenistic_world and https://en.wikipedia.org/wiki/Ancient_Greek_astronomy I don't see any support for your claim that heliocentrism was a respectable position and geocentrism wasn't overwhelmingly dominant.
Cite? I don't recall anything like that in the fragments of the Pre-socratics, whereas Eleatic arguments about Being are prominent.
Lucretius talks about the motion of dust in light, but he doesn't claim that it is the origin of the theory. When I google "Leucippus dust light" I get lots of people making my claim and more respectable sources making weaker claims, like "According to traditional accounts the philosophical idea of simulacra is linked to Leucippus’ contemplation of a ray of light that made visible airborne dust," but I don't see any citations to where this tradition is recorded.
The Greeks cover hundreds of years. They made progress! You linked to a post about the supposed rejection of Aristarchus's heliocentric theory. It's true that no one before Aristarchus was heliocentric. That includes Aristotle, who died when Aristarchus was 12. Everyone agrees that the Hellenistic Greeks who followed Aristotle were much better at astronomy than the Classical Greeks. The question is whether the Hellenistic Greeks accepted Aristarchus's theory, particularly Archimedes, Apollonius, and Hipparchus. But while lots of writings of Aristotle remain, practically nothing of the later astronomers remains.
It's true that secondary sources agree that Archimedes, Apollonius, and Hipparchus were geocentric. However, they give no evidence for this. Try the scholarly article cited in the post you linked. It's called "The Greek Heliocentric Theory and Its Abandonment" but it didn't convince me that there was an abandonment. That's where I got the claim about Hipparchus refusing to choose.
I didn't claim that there was any evidence that it was respectable, let alone dominant, only that there was no evidence that it was rejected. The only solid evidence one way or the other is the only surviving Hellenistic astronomy paper, Archimedes's Sandreckoner, which uses Aristarchus's model. I don't claim that Archimedes was heliocentric, but that sure sounds to me like he respected heliocentrism.
Maybe heliocentrism survived a century and was finally rejected by Hipparchus. That's a world of difference from saying that Seleucus was his only follower. Or maybe it was just the two of them, but we live in a state of profound ignorance.
As for the ultimate trajectory of Greek science, that is a difficult problem. Lucio Russo suggests that Roman science is all mangled Greek science and proposes to extract the original. For example, Seneca claims that the retrograde motion of the planets is an illusion, which sounds like he's quoting someone who thinks the Earth moves, even if he doesn't. More colorful are Pliny and Vitruvius, who claim that the retrograde motion of the planets is due to the sun shooting triangles at them. This is clearly a heliocausal theory, even if the authors claim to be geocentric. Less clear is Russo's interpretation, that this is a description of a textbook diagram that they don't understand.
So, you just have an argument from silence that heliocentrism was not clearly rejected?
I just read through the bits of Sand Reckoner referring to Aristarchus (Mendell's translation), and throughout Archimedes seems to be at pains to distance himself from Aristarchus's model, treating it as a minority view (emphasis added):
Not language which suggests he takes it particularly seriously, much less endorses it.
In fact, it seems that the only reason Archimedes brings up Aristarchus at all is as a form of 'worst-case analysis': some fools doubt the power of mathematics and numbers, but Archimedes will show that even under the most ludicrously inflated estimate of the size of the universe (one implied by Aristarchus's heliocentric model), he can still calculate & count the number of grains of sands it would take to fill it up; hence, he can certainly calculate & count the number for something smaller like the Earth. From the same chapter:
And he triumphantly concludes in ch4:
All I have ever said is that you should stop telling fairy tales about why the Greeks rejected heliocentrism. If the Sandreckoner convinces you that Archimedes rejected heliocentrism, fine, whatever, but it sure doesn't talk about parallax.
I listed several pieces of positive evidence, but I'm not interested in the argument.
The Sand Reckoner implies the parallax objection when it uses an extremely large heliocentric universe! Lack of parallax is the only reason for such extravagance. Or was there some other reason Aristarchus's model had to imply a universe lightyears in extent...?
Aristarchus using a large universe is evidence that he thought about parallax. It is not evidence that his opponents thought about parallax.
You are making a circular argument: you say that the Greeks rejected heliocentrism for a good reason because they invoked parallax, but you say that they invoked parallax because you assume that they had a good reason.
There is a contemporary recorded reason for rejecting Aristarchus: heresy. There is also a (good) reason recorded by Ptolemy 400 years later, namely wind speed.
Atoms can actually be divided into parts, so it's not clear that the atomists were right. If you told an atomist about quantum states, I doubt they would find that a valid example of what they meant by "atom".
The atomists were more right than the alternatives: the world is not made of continuously divisible bone substances, which are bone no matter how finely you divide them, nor is it continuous mixtures of fire or water or apeiron.
You could say the same of Dalton.
If a plane is on a conveyor belt going at the same speed in the opposite direction, will it take off?
I remember reading this in various other places, and it seems to inspire furious arguments despite being non-political and not very controversial.
Same speed with respect to what? This sound kind of like the tree-in-a-forest one.
As I remember the problem, the plane's wheels are supposed to be frictionless so that their rotation is uncoupled from the rest of the plane's motion. Hence the speed of the conveyor belt is irrelevant and the plane always takes off. Now, if you had a helicopter on a turntable...
What I mean is: on hearing that, I thought of a conveyor belt whose top surface was moving at a speed -x with respect to the air, and a plane on top of it moving at a speed x with respect to the top of the conveyor belt, i.e. the plane was stationary with respect to the air. But on reading the Snopes link, what was actually meant was that the conveyor belt was moving at speed -x and the plane's engines were working as hard as needed to move at speed x on stationary ground with no wind.
While at the same time the rolling speed of the plane, which is the sum of its forward movement and the speed of the treadmill, is supposed to be equal to the speed of the treadmill. Which is impossible if the plane moves forward.
I'm not sure what you mean by "rolling speed of the plane", "its forward movement", and "speed of the treadmill". The phrase "rolling speed" sounds like it refers to the component of the plane's forward motion due to the turning of its wheels, but that's not a coherent thing to talk about if one accepts my assumption that the wheels are uncoupled from the plane.
Rolling speed = how fast the wheels turn, described in terms of forward speed; so it's the circumference of the wheels multiplied by their angular speed. And the wheels are not uncoupled from the plane; they are driven by it. It was only assumed that the friction in the wheel bearings is irrelevant.
Forward movement of the plane = speed of the plane relative to something not on the treadmill. I guess I should have called it airspeed, which it would be if there is no wind.
Speed of the treadmill = how fast the surface of the treadmill moves.
And that is more time than I wanted to spend rehashing this old nonsense. The grandparent was only meant to explain why the great grandparent would not have settled the issue, not to settle it on its own. The only further comment I have is the whole thing is based on an unrealistic setup, which becomes incoherent if you assume that it is about real planes and real treadmills.
Fair enough. I have to chip in with one last comment, but you'll be happy to hear it's a self-correction! My comments don't account for potential translational motion of the wheels, and they should've done. (The translational motion could matter if one assumes the wheels experience friction with the belt, even if there's no internal wheel bearing friction.)
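For what it's worth, the frictionless-bearing reading can be made quantitative in a few lines (a sketch under assumptions of my own: constant thrust, no drag, massless free-spinning wheels; the figures are hypothetical):

    mass = 50000.0     # kg, hypothetical airliner
    thrust = 200000.0  # N, constant
    dt = 0.1           # s, timestep
    v = 0.0            # plane's velocity relative to the ground (and the air)

    for _ in range(int(30 / dt)):  # simulate 30 seconds
        belt_speed = -v            # the belt "matches" the plane...
        # ...but with frictionless bearings the belt exerts no horizontal
        # force on the airframe, so belt_speed never enters the dynamics:
        v += (thrust / mass) * dt

    print(v)  # ~120 m/s: the plane reaches takeoff speed regardless of the belt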
That reminds me of the question of whether hot water freezes faster than cold water.
That's different, though. The Plane on a Treadmill started with somebody specifying some physically impossible conditions, and then the furious arguments were between people stating the implications of the stated conditions on one side and people talking about the real world on the other.
That's a great example. If I recall, people who get worked up about it generally feel that the answer is obvious and the other side is stupid for not understanding the argument.
Why not also spend an equal amount of time searching for examples that prove the opposite of the point you're trying to make? Or are you speaking to an audience that doesn't agree this is possible in principle?
Edit: Might Newtonian physics be an example?
Downwind faster than the wind. See seven pages of posts here for examples of people getting it wrong.
Kant was famously wrong when he claimed that space had to be flat.
As discussed previously, this exact claim seems suspiciously absent from the first Critique.
I agree that Kant doesn't seem to have ever considered non-Euclidean geometry, and thus can't really be said to be making an argument that space is flat. If we could drop an explanation of general relativity on him, he'd probably come to terms with it. On the other hand, he just assumes that two straight lines can only intersect once, and that this describes space, which seems to be pretty much what he was accused of.
I don't see this in the quoted passage. He's trying to illustrate the nature of propositions in geometry, and doesn't appear to be arguing that the parallel postulate is universally true. "Take, for example," is not exactly assertive.
Also, have a care: those two paragraphs are not consecutive in the Critique.
If your twin's going away for 20 years to fly around space at close to the speed of light, they'll be 20 years older when they come back.
A spinning gyroscope, when pushed, will react in a way that makes sense.
If another nation can't do anything as well as your nation, there is no self-serving reason to trade with them.
You shouldn't bother switching in the Monty Hall problem.
The sun moves across the sky because it's moving.
EDIT Corrected all statements to be false
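Of the entries on these lists, the Monty Hall one is unusual in that a simulation settles it outright; a minimal sketch (mine, not from the thread):

    import random

    def play(switch):
        doors = [0, 0, 0]
        doors[random.randrange(3)] = 1           # one door hides the car
        pick = random.randrange(3)
        # The host opens a goat door that isn't the player's pick:
        opened = next(d for d in range(3) if d != pick and doors[d] == 0)
        if switch:
            pick = next(d for d in range(3) if d != pick and d != opened)
        return doors[pick]

    n = 100000
    print(sum(play(True) for _ in range(n)) / n)   # ~0.667 when switching
    print(sum(play(False) for _ in range(n)) / n)  # ~0.333 when staying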
I think you may have expressed this one the wrong way around; the way you've phrased it ("can make you better off") is the surprising truth, not the surprising untruth.
They will. I think you mean: If your twin flies through space at close to the speed of light and arrives back 20 years later, they'll be 20 years older when they come back. That one's false.
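To put numbers on it (a worked example of my own; the choice of 0.995c is arbitrary):

    \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad v = 0.995c \;\Rightarrow\; \gamma \approx 10,

so a twin who is away for 20 years of Earth time ages only about 20/\gamma \approx 2 years of proper time en route.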
Reversed polarity on a few statements. Thanks.
Your first statement is still correct.