Vladimir_Nesov comments on 2012 Survey Results - Less Wrong
You know that, Katydee, but do all the people who are taking the survey think that way? The majority of them haven't even finished the Sequences. I agree with you that it's ideal for us to be good rationalists all the time, but mental stamina is a big factor.
Being rational takes more energy than being irrational; you have to put thought into it. Some people have a lot of mental energy. To refer to something less vague and more scientific: there are different levels of intelligence and different levels of intellectual supersensitivity (a term from Dabrowski referring to how excitable certain aspects of your nervous system are). Long story short: some people cannot analyze constantly because it's too difficult for them. They run out of juice.

Perhaps you are one of those rare people with such high stamina for analysis that you rarely run into your limit. If that's the case, it probably seems strange to you that anybody wouldn't attempt to maintain a state of constant analysis. Most people with unusual intellectual stamina seem to view others as lazy when they observe that those others aren't doing intellectual things all the time. It frequently does not occur to them to consider that there may be an intellectual difference.

The sad truth is that most people have a much lower limit on how much intellectual activity they can do in a day than "constant". If you want evidence of this, look at Ford's studies showing that 40 hours a week was the optimum number of hours for his employees to work. Presumably they were just doing factory work assembling car parts, which (if it fits the stereotype of factory work being repetitive) was probably pretty low on the scale of what's intellectually demanding, yet he found that if they worked 60-hour weeks for two weeks in a row, their output dipped below what he'd normally get from 40 hours, because of mistakes. You'd think the average human brain could do repetitive tasks constantly, but evidently even that tires the brain.
So in reality, the vast majority of people are not capable of the kind of constant meta-cognitive analysis that being rational all the time requires. You use the word "ingrained", and I have seen Eliezer talk about how patterns of behavior can become habits (I assume he means that the thoughts are cached). I think this kind of habit / ingrained response works beautifully when no decision-making is required and you can simply do the same thing you usually do. But whenever someone is trying to figure something out (like, for instance, working out the answers to questions on a survey), they're going to need to put additional brainpower into it.
I had an experience where, due to unexpected circumstances, I developed some vitamin deficiencies. I would run out of mental energy very quickly if I tried to think much; I had perhaps half an hour of analysis available to me in a day. This is very unusual for me, because I'm used to having a brain that loves analysis and seems to want to do it constantly. (I hadn't tested the actual number of hours for which I was able to analyze, but I would feel bored if I wasn't doing something like psychoanalysis or problem-solving for the majority of the day.) When I was deficient, I began to ration my brainpower. That sounds terrible, but that is what I did. I needed to protect my ability to analyze, to make sure I had enough left over to do all the tasks I needed to do each day. I could feel it slipping away while I was working on problems, and I could observe what happened to me after I fatigued my brain (a vegetable-like state).
As I used my brainpower rationing strategies, it dawned on me that others ration brainpower, too. I see it all the time. Suddenly, I understood what they were doing. I understood why they kept telling me things like "You think too much!" They needed to change the subject so they wouldn't become mentally fatigued. :/
Even if the average IQ at LessWrong is in the gifted range, that doesn't give everyone the exact same abilities, and it doesn't mean that everyone has the stamina to analyze constantly. Human abilities vary wildly from person to person. Everyone has a limit on how much thinking they can do in a day. I have no way of knowing exactly what LessWrong's average limit is, but I would not be surprised if most members use strategies for rationing brainpower and have to do things like give survey questions low priority on the list of things to "give their all" to, especially when there are a lot of questions and they're getting tired.
Re the problem of having to think all the time: a good start is to develop a habit of rejecting certainty about judgments and beliefs that you haven't examined sufficiently. That is, if your intuition shouts at you that something is quite clear, but you haven't thought about it for a few minutes, ignore that intuition (and mark it as a potential bug) unless you understand a reliable reason not to ignore it in that case. If you don't have the stamina or incentives to examine such beliefs/judgments in more detail, that's all right, as long as you remain correspondingly uncertain and realize that the decisions you make might be suboptimal for that reason (which should suitably adjust your incentives for thinking harder, depending on the importance of the decisions).
The process of choosing a probability is not quite that simple. You're not just making a boolean decision about whether you know enough to know; you're actually taking the time to distinguish between ten different amounts of confidence (10%, 20%, 30%, etc.), and then making ten more tiny distinctions (30%, 31%, 32%, for instance)... at least, that's the way I do it. (More efficient than making enough distinctions to choose between 100 different options in one pass.) When you are wondering exactly how likely you are to know something in order to choose a percentage, that's when you have to start analyzing things. In order to answer the question, my thought process looked like this (a sketch of the arithmetic follows the list):
Bayes. I have to remember who that is. Okay, that's the guy who came up with Bayesian probability. (This was instant, but that doesn't mean it took zero mental work.)
Do I have his birthday in here? Nothing comes to mind.
Dig further: Do I have any reason to have read about his birthday at any point? No. Do I remember seeing a page about him? I can't remember anything I read about his birthday.
Consider whether I should just go "I don't know" and put a random year with a 0% probability. Decide that this would be a cop-out and that I should try to actually figure it out.
When was Bayesian probability invented? Let's see... at what point in history would that have occurred?
Try to brainstorm events that may have required Bayesian probability, or that would have suggested it didn't exist yet.
Try to remember the time periods for when these events happened.
Define a vague section of time in history.
Consider whether there might be some method of double-checking it.
Consider the meaning of "within 20 years either way" and what that means for the probability that I'm right.
Figure out where in my vague section of time the 40-year range should fit.
Figure out which year is in the middle of the 40-year range and type it in.
Consider how many years Bayes would likely have had to live before giving his theorems to the world, and adjust the year accordingly.
Consider whether it was at all possible for Bayesian probability to have existed before or after each event.
If possible, consider how likely it was that Bayesian probability existed before/after each event.
Calculate how many 40-year ranges there are in the vague section of time between the events where Bayes could not have been born.
Calculate the chance that I chose the correct 40-year section out of all the possible sections, if odds are equal.
Compare this to my probabilities regarding how likely it was for Bayes theorem to have existed before and after certain events.
Adjust my probability figure to take all that into account.
My answer to this question took at least twenty steps, and that doesn't even count all the steps I went through for each event, nor does it count all the substeps hiding inside things I sort of hand-waved, like "Adjust my probability figure to take all that into account".
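To make steps like "calculate the chance that I chose the correct 40-year section" concrete, here is a minimal sketch of that arithmetic in Python. The date range and guess below are illustrative placeholders, not the numbers from my actual answer, and the sketch assumes a flat uniform prior over the "vague section of time", leaving out the before/after-event adjustments.

```python
# Minimal sketch (illustrative placeholder numbers) of the
# "chance my 40-year window is right" step, assuming a uniform
# prior over the plausible range of years.

def window_probability(low, high, guess, tolerance=20):
    """Chance the true year falls within +/- tolerance of the guess,
    given a uniform prior over the years low..high (inclusive)."""
    window_low = max(low, guess - tolerance)
    window_high = min(high, guess + tolerance)
    if window_high < window_low:
        return 0.0
    return (window_high - window_low + 1) / (high - low + 1)

# Placeholder example: a "vague section of time" of 1600-1800 and a
# guess of 1700 (illustrative, not the survey answer itself).
print(window_probability(1600, 1800, 1700))  # ~0.204, i.e. 41 years out of 201
```

The event-based steps would then shift probability toward or away from parts of that range; this uniform version deliberately leaves that adjustment out.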
If you think figuring out stuff is instant, you underestimate the number of steps your brain does in order to figure things out. I highly recommend doing meditation to improve your meta-cognition. Meta-cognition is awesome.
The straightforward interpretation of your words evaluates as a falsity, as you can't estimate informal beliefs to within 1%.
I'd put it more in terms of decibels of log-odds than percentages of probability. Telling 98% from 99% (i.e. +17 dB from +20 dB) sounds easier to me than telling 50% from 56% (i.e. 0 dB from +1 dB).
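(For reference, those decibel figures come from the standard log-odds conversion, dB = 10 * log10(p / (1 - p)); a quick Python check of the numbers above:)

```python
import math

def log_odds_db(p):
    """Probability -> log-odds in decibels: 10 * log10(p / (1 - p))."""
    return 10 * math.log10(p / (1 - p))

for p in (0.50, 0.56, 0.98, 0.99):
    print(f"{p:.0%} -> {log_odds_db(p):+.1f} dB")
# 50% -> +0.0 dB, 56% -> +1.0 dB, 98% -> +16.9 dB, 99% -> +20.0 dB
```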
Well, you can, but it would be a waste of time.
No, I'm pretty certain you can't. You can't even formulate truth conditions for the correctness of such an evaluation. Only in very special circumstances would getting to that point be plausible (when a conclusion is mostly determined by data received in explicit form, or when you work with a formalizable specification of a situation, as in probability theory problems; this is not what I meant by "informal beliefs").
(I was commenting on a skill/habit that might be useful in the situations where you don't/can't make the effort of explicitly reasoning about things. Don't fight the hypothetical.)
Is it your position that there is a thinking skill that is actually accurate for figuring stuff out without thinking about it?
I expect you can improve accuracy in the sense of improving calibration: by reducing estimated precision and avoiding unwarranted overconfidence, even when you are not considering questions in detail, provided your intuitive estimation has an overconfidence problem, which seems to be common. (More annoying in the form of a "The solution is S!" for some promptly confabulated arbitrary S, when quantifying uncertainty isn't even on the agenda.)
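(A small illustration of the calibration point, under the simplifying assumption that your real hit rate on some class of questions is a fixed 70%: a proper scoring rule such as the Brier score is minimized by reporting that actual rate, so overconfident reports score strictly worse.)

```python
# Illustrative sketch: if your actual hit rate on some class of
# questions is 70%, reporting 70% yields a better (lower) expected
# Brier score than an overconfident 95%.

def expected_brier(stated, true_rate):
    """Expected squared error of a stated probability for an event
    that actually occurs with frequency true_rate."""
    return true_rate * (1 - stated) ** 2 + (1 - true_rate) * stated ** 2

for stated in (0.70, 0.80, 0.95):
    print(f"state {stated:.0%}: expected Brier {expected_brier(stated, 0.70):.4f}")
# state 70%: 0.2100  (the minimum: matching the true rate)
# state 80%: 0.2200
# state 95%: 0.2725  (overconfidence is penalized)
```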
(I feel the language of there being "positions" has epistemically unhealthy connotations of encouraging status quo bias with respect to beliefs, although it's clear what you mean.)