All of Julia_Galef's Comments + Replies

... By the way, you might've misunderstood the point of the Elon Musk examples. The point wasn't that he's some exemplar of honesty. It was that he was motivated to try to make his companies succeed despite believing that the most likely outcome was failure. (i.e., he is a counterexample to the common claim "Entrepreneurs have to believe they are going to succeed, or else they won't be motivated to try")

Thanks! I do also rely to some extent on reasoning... for example, Chapter 3 is my argument for why we should expect to be better off with (on the margin) more scout mindset and less soldier mindset, compared to our default settings. I point out some basic facts about human psychology (e.g., the fact that we over-weight immediate consequences relative to delayed consequences) and explain why it seems to me those facts imply that we would have a tendency to use scout mindset less often than we should, even just for our own self-interest.
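
(A worked formalization, mine rather than the book's: one standard model of that over-weighting is hyperbolic discounting, under which the present value of a consequence of size $A$ arriving after delay $D$ is

$$V(A, D) = \frac{A}{1 + kD}, \qquad k > 0.$$

Since soldier mindset's comforts tend to be immediate while its costs, worse decisions down the line, are delayed, any positive $k$ biases us toward using scout mindset less often than our long-run self-interest would recommend.)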

The nice thing about argumentation (as compared to citing studies) is that it's pretty transparent -- the reader can evaluate my logic for themselves and decide if they buy it.

3habryka
That's good to hear! I haven't yet gotten super far into the book, so can't judge for myself yet, and my guess about doing more first-principles reasoning was mostly based on priors.

Hey Ozzie! Thanks for reading / reviewing.

I originally hoped to write a more “scholarly” book, but I spent months reading the literature on motivated reasoning and thought it was mostly pretty bad, and anyway not the actual cause of my confidence in the core claims of the book such as “You should be in scout mindset more often.” So instead I focused on the goal of giving lots of examples of scout mindset in different domains, and addressing some of the common objections to scout mindset, in hopes of inspiring people to practice it more often. 

I left i... (read more)

2ozziegooen
Thanks so much, that makes a lot of sense. Reviewing works can be tricky, because I'd focus on very different aspects when targeting different people. When describing books to potential readers, I'd focus on very different aspects than when trying to comment on how good of a job the author did to advance the topic. In this case the main issue is that I wasn't sure what kind of book to expect, so wanted to make that clear to other potential readers. It's like when a movie has really scary trailers but winds up being a nice romantic drama.

Some natural comparison books in this category are Superforecasting and Thinking Fast and Slow, where the authors basically took information from decades of their own original research. Of course, this is an insanely high bar and really demands an entire career. I'm curious how you would categorize The Scout Mindset. ("Journalistic"? Sorry if the examples I pointed to seemed negative.)

I think you specifically did a really good job given the time you wanted to allocate to it (you probably didn't want to wait another 30 years to publish), but that specific question isn't particularly relevant to potential readers, so it's tricky to talk about all things at once.

I'd also note that I think there's a lot of non-experimental work that could be done in the area, similar to The Elephant in the Brain, or many philosophical works (I imagine habryka thinks similarly). This sort of work would probably sell much worse, but is another avenue I'm interested in for future research.

(About The Village: I just bring this up because it was particularly noted for people having different expectations from what the movie really was. I think many critics really like it at this point.)
habryka170

I am really glad about this choice, and also made similar epistemic updates over the last few years, and my guess is that if I were to write a book, I would probably make a similar choice (though probably with more first-principles reasoning and a lot more Fermi estimates, although the latter sure sounds like it would cut into my sales :P).

This doesn't really ring true to me (as a model of my personal subjective experience).

The model in this post says despair is "a sign that important evidence has been building up in your buffer, unacknowledged, and that it’s time now to integrate it into your plans."

But most of the times that I've cycled intermittently into despair over some project (or relationship), it's been because of facts I already knew, consciously, about the project. I'm just becoming re-focused on them. And I wouldn't be surprised if things like low blood sugar or anxie... (read more)

0ScottL
I think this is a good point. Despair, which I see as perceived hopelessness, originates in an individual and so it depends on how that individual perceives the situation. Perception is not like receiving a reflection of the world in the mind. It is like meshing together the neural activity from percepts with the existing neural activity ongoing in the brain. The result is that it is context dependent. It is affected by priming and emotions, for example. I think the advice in this post, essentially to embrace despair, probably isn't that helpful. What do you think about this advice: "Notice despair, for it is a signal of hopelessness. It indicates that you may be stuck in a mental rut or that the way that you are viewing a situation may be inducing unnecessary anxiety. In summary, it tells you to rethink how you are trying to solve the problem that you are facing. The first thing you should do is check that it is real. Get advice and talk to others about it. Try to get out of your head. Also, try to find out whether it is misattributed. It may be due to low blood sugar or anxiety spilling over from other parts of your life, for example. If you have done this and now know that the despair is real, i.e. resulting from a complex problem that matters to you and that you can't solve, then try to understand the problem you are facing and your plan to solve it. Once you are happy with the plan then you can embrace the incoming depression. Do not view it as anathema, but instead as your body's mechanism to move you into the necessary focused and analytical state that you need to be in to be able to solve the complex problem that you are facing".

Hey, I'm one of the founders of CFAR (and used to teach the Reference Class Hopping session you mentioned).

You seem to be misinformed about what CFAR is claiming about our material. Just to use Reference Class Hopping as an example: It's not the same as reference class forecasting. It involves doing reference class forecasting (in the first half of the session), then finding ways to put yourself in a different reference class so that your forecast will be more encouraging. We're very explicit about the difference.

I've emailed experts in reference class for... (read more)

I usually try to mix it up. A quick count shows 6 male examples and 2 female examples, which was not a deliberate choice, but I guess I can be more intentional about a more even split in future?

4gwillen
Oddly, I also came away with an impression of 'male pronoun as default', and on rereading it seems that e.g. I strongly noticed the male pronoun in 13, but did not notice the female pronoun in 14. I guess I've just been trained to notice default-male-pronoun usages. (You did also use 'singular they' in example 7, which to me reads much more naturally than pronoun alternation.)
2alicey
nod. Sounds reasonable! It might help to be more intentional, to prevent people from having jarring experiences like that.
9Error
I think Eliezer mentioned once that he flips a coin for such cases. I think it's a pretty good policy.

Thanks for showing up and clarifying, Sam!

I'd be curious to hear more about the ways in which you think CFAR is over-(epistemically) hygienic. Feel free to email me if you prefer, but I bet a lot of people here would also be interested to hear your critique.

Sure, here's a CDC overview: http://www.cdc.gov/handwashing/show-me-the-science-hand-sanitizer.html They seem to be imperfect but better than nothing, and since people are surely not going to be washing their hands every time they cough, sneeze, or touch communal surfaces, supplementing normal handwashing practices with hand sanitizer seems like a probably-helpful precaution.

But note that this has turned out to be an accidental tangent since the "overhygienic" criticism was actually meant to refer to epistemic hygiene! (I am potentially also indi... (read more)

Edited to reflect the fact that, no, we certainly don't insist. We just warn people that it's common to get sick during the workshop because you're probably getting less sleep and are in close contact with so many other people (many of whom have recently been in airports, etc.), and that it's good practice to use hand sanitizer regularly, not just for your own sake but for others'.

4ChristianKl
Is that recommendation based on concrete evidence? If so, could you link sources?
Lumifer100

and in close contact with so many other people

So, people who commute by public transportation in a big city are just screwed, aren't they? :-)

it's good practice to use hand sanitizers regularly

I don't think so -- not for people with a healthy immune system.

Perhaps this is silly of me, but the single word in the article that made me indignantly exclaim "What!?" was when he called CFAR "overhygienic."

I mean... you can call us nerdy, weird in some ways, obsessed with productivity, with some justification! But how can you take issue with our insistence [Edit: more like strong encouragement!] that people use hand sanitizer at a 4-day retreat with 40 people sharing food and close quarters?

[Edit: The author has clarified above that "overhygienic" was meant to refer to epistemic hygiene, not literal hygiene.]

0Dr_Manhattan
You can add "literal" to that :-p
6ChristianKl
I would guess >95% of 4-day retreats where 40 people are sharing food and close quarters don't include recommendations about the usage of hand sanitizer.
4Vaniver
So, I have noticed that I am overhygienic relative to the general population (when it comes to health; not necessarily when it comes to appearance), and I think that's standard for LWers. I think this is related to taking numbers and risk seriously; to use dubious leftovers as an example, my father's approach to food poisoning is "eh, you can eat that, it's probably okay" and my approach to food poisoning is "that's only 99.999% likely to be okay, no way is eating that worth 10 micromorts!"
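
(For concreteness, and granting the joke's simplification that "not okay" means death: a micromort is a one-in-a-million chance of dying, so $1 - 0.99999 = 10^{-5} = 10 \times 10^{-6}$, i.e. 10 micromorts.)
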
6Lumifer
You insisted (instead of just offering)? I would have found it weird. And told you "No, thank you", too.
2devi
This is not something that would cross my mind if I was organizing such a retreat. Making sure people who handled food washed their hands with soap, yes, but not hand sanitizer. Perhaps this is a cultural difference between (parts of) US and Europe.

"I'm beginning to find this conversation aversive, and I'm not sure why. I propose we hold off until I've figured that out."

I read this suggested line and felt a little worried. I hope rationalist culture doesn't head in that direction.

There are plenty of times when I agree a policy of frankness can be useful, but one of the risks of such a policy is that it can become an excuse to abdicate responsibility for your effect on other people.

If you tell me that you're having an aversive reaction to our conversation, but can't tell me why, it's goi... (read more)

3wedrifid
[I do not endorse that particular conversation move. Nor do I particularly discourage it, between Tell culture users.]

I observe that with this objection to the exit strategy, the problem is that 'Tell culture' is not being used by the receiving party. The receiving party is interpreting the information through the filter of some variety of non-Tell culture and essentially reading a different message than the one sent. This is a real problem, but it is a problem relating to speaking a language different from the audience's, not a problem with communication via the language itself. Speaking 'Tell Culture' phrases to someone who is not both familiar with the communication style and happy to use it should not be expected to work well.

The complementary risk here is that your opposing policy can become (or inherently is) an excuse to abdicate responsibility for one's own thoughts and behaviour onto someone else. Neither are particularly healthy habits. Note that the speaker's words explicitly claim responsibility and even go so far as to propose that even if the other person can figure the stuff out, the speaker still has to figure it out for herself before the condition is met.

It also contains no more (in fact, almost certainly much less) information than is contained in the uncontrollable communication via facial expressions, voice tone and body language while ending the conversation. The difference is there isn't a level of social 'role play' where people pretend that information has not been communicated, and where if that information is formally acknowledged to be communicated it is the equivalent of shouting or using all-caps. Or, looking at it from the perspective of assigning responsibility to the active party: that's a non-negligible burden that someone walked up and forcibly took as their own because it wasn't kept hidden. The speaker actually set up boundaries around the aversion-experience-analysis territory that imply that would be somewhat presumpt
5therufs
If I'm having some kind of internal experience that may color my interpretation of what my interlocutor is trying to tell me, I feel like I owe it to them and whatever we're discussing to stop the conversation as soon as I realize something is wrong, since if e.g., it turns out I'm sleepy, taking a nap wouldn't (I think) be sufficient to fully counteract the negative opinion of the topic I formed when I was crabby. Could you give an example of a graceful exit? For me, interrupting a conversation without saying why I'm actually doing it feels dishonest/rude, especially if we're discussing something that's important enough for me to care that I treat it fairly.
-4ialdabaoth
That can easily be exploited, however. If people know this is your reaction, then they have an easy button to push to exclude you from any conversation where they don't want your voice heard. EDIT: I will retract this statement if someone explains what's wrong with it.

Yes, my version of this always goes, "I'm finding this conversation aversive and I don't know why. Hold on while I figure it out." In other words, it doesn't delay a conversation until later, but it does mean that I close my eyes for 60 seconds and think.

I also find that line a bit strange. In nearly all cases where I'd expect someone to say "I'm beginning to find this conversation aversive, and I'm not sure why," I think I would take it as a change of topic toward why the conversation might bring up negative emotions in that person.

If we are in an environment of open conversation and I say something that brings up an emotional trauma in another person and that person doesn't have the self-awareness to know why he's feeling unwell, that's not a good time to leave him alone.

Creutzer360

Interesting, I have the exact opposite gut reaction. It could be rephrased in slight variations, e.g. "until we've figured that out", or, as shokwave below suggested, with a request for assistance, but in general, if someone said that to me, I would, ceteris paribus, infer that they are a self-aware and peaceful/cooperative person and that they are not holding anything in particular against me.

Whereas when someone leaves a conversation with an excuse that may or may not be genuine, it leaves me totally stressed-out because I have no idea what's g... (read more)

8shokwave
Something like "I'm finding this conversation aversive, and I'm not sure why. Can you help me figure it out?" would be way more preferable. Something in rationalist culture that I actually do like is using "This is a really low-value conversation, are you getting any value? We should stop." to end unproductive arguments.

Yes, that makes a lot of sense!

Since we don't have any programmers on staff at the moment, we went with the less-than-ideal solution of a manual thermometer, which we update about once a day -- but it certainly would be better to have it happen automatically.

For now, I've gone with the kluge-y solution of an "Updated January XXth" note directly above the menu bar. Thanks for the comment.
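
A minimal sketch of what the automatic version could look like, assuming (purely hypothetically) that donation totals are available as a CSV export and that the site's thermometer widget reads its numbers from a small JSON file:

```python
# Hypothetical sketch: recompute the fundraiser thermometer from a CSV export.
# Assumes donations.csv has an "amount" column and the widget reads thermometer.json.
import csv
import json

GOAL = 150_000  # campaign goal in dollars (made-up figure)

def update_thermometer(csv_path="donations.csv", out_path="thermometer.json"):
    with open(csv_path, newline="") as f:
        total = sum(float(row["amount"]) for row in csv.DictReader(f))
    with open(out_path, "w") as f:
        json.dump({"raised": total, "goal": GOAL,
                   "percent": round(100 * total / GOAL, 1)}, f)

if __name__ == "__main__":
    update_thermometer()
```

Run from a daily cron job, or triggered by the payment processor's webhook, this would replace the manual once-a-day update.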

several mainstream media articles about CFAR on their way, including one forthcoming shortly in the Wall Street Journal

That article's up now -- it was on the cover of the Personal Journal section of the WSJ, on December 31st. Here's the online version: More Rational Resolutions

Agreed. I might add them to a future version of this map.

This time around I held off mainly because I was confounded by how to add them; drugs really do pervade so many of these groups, in different variants: psychedelics are strong among the counterculture and New Age culture, nootropics are more popular among rationalists and biohackers/Quantified Self, and both are popular among transhumanists. (See this H+ article for a discussion of psychedelic transhumanists.)

Well, I'd say that LW does take account of who we are. They just haven't had the impetus to do so quite as thoroughly as CFAR has. As a result there are aspects of applied rationality, or "rationality for humans" as I sometimes call it, that CFAR has developed and LW hasn't.

If it makes you feel less hesitant, we've given refunds twice: once to a person at a workshop last year who said he'd expected polish and suits, and once to another attendee who said he enjoyed it but wasn't sure it was going to help enough with his current life situation to be worth it.

5Shmi
Oh, the refund clause would not have mattered to me personally, for the reasons outlined (I know I would never ask for one, not least because I would thoroughly enjoy the event). I would dearly love to attend, but for reasons I am not willing to discuss here it would not be a rational decision for me. My comment was just an observation that your claim of low risk is not really accurate, except for a rare person who has a certain mindset.

Fixed! Thanks, I apparently didn't understand how links worked in this system.

Not sure what kind of evidence you're looking for here; that's just a description of our selection criteria for attendees.

Preferring utilitarianism is a moral intuition, just like preferring Life Extension. The former's a general intuition, the latter's an intuition about a specific case.

So it's not a priori clear which intuition to modify (general or specific) when the two conflict.

2TheOtherDave
I don't agree that preferring utilitarianism is necessarily a moral intuition, though I agree that it can be.

Suppose I have moral intuitions about various (real and hypothetical) situations that lead me to make certain judgments about those situations. Call the ordered set of situations S and the ordered set of judgments J. Suppose you come along and articulate a formal moral theory T which also (and independently) produces J when evaluated in the context of S. In this case, I wouldn't call my preference for T a moral intuition at all. I'm simply choosing T over its competitors because it better predicts my observations of the world; the fact that those observations are about moral judgments is beside the point.

If I subsequently make judgment Jn about situation Sn, and then evaluate T in the context of Sn and get Jn' instead, there's no particular reason for me to change my judgment of Sn (assuming I even could). I would only do that if I had substituted T for my moral intuitions... but I haven't done that. I've merely observed that evaluating T does a good job of predicting my moral intuitions (despite failing in the case of Sn).

If you come along with an alternate theory T2 that gets the same results T did except that it predicts Jn given Sn, I might prefer T2 to T for the same reason I previously preferred T to its competitors. This, too, would not be a moral intuition.
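
A toy sketch of the structure described above (situations, judgments, and theories all invented purely for illustration): treat each candidate theory as a function from situations to judgments, and prefer whichever function best reproduces your own intuitive judgments.

```python
# Toy illustration: choosing between moral theories by how well they predict
# one's own intuitive judgments over a set of situations S.

situations = ["S1", "S2", "S3", "Sn"]                                  # hypothetical cases
intuitions = {"S1": "wrong", "S2": "ok", "S3": "wrong", "Sn": "ok"}    # my judgments J

def theory_T(s):   # reproduces J everywhere except the new case Sn
    return {"S1": "wrong", "S2": "ok", "S3": "wrong", "Sn": "wrong"}[s]

def theory_T2(s):  # agrees with T elsewhere and also gets Sn right
    return {"S1": "wrong", "S2": "ok", "S3": "wrong", "Sn": "ok"}[s]

def fit(theory):
    """Number of situations where the theory reproduces my intuitive judgment."""
    return sum(theory(s) == intuitions[s] for s in situations)

best = max([theory_T, theory_T2], key=fit)
print(best.__name__, fit(best))  # -> theory_T2 4
```

On this picture, switching from T to T2 is ordinary model selection against the data of one's intuitions, not itself a moral intuition, which is the point being made above.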

Right -- I don't claim any of my moral intuitions to be true or correct; I'm an error theorist, when it comes down to it.

But I do want my intuitions to be consistent with each other. So if I have the intuition that utility is the only thing I value for its own sake, and I have the intuition that Life Extension is better than Replacement, then something's gotta give.

When our intuitions in a particular case contradict the moral theory we thought we held, we need some justification for amending the moral theory other than "I want to."

4Luke_A_Somers
I think the point is, Utilitarianism is very very flexible, and whatever it is about us that tells us to prefer life extension should already be there - the only question is, how do we formalize that?
-1[anonymous]
Well, if you view moral theories as if they were scientific hypotheses, you could reason in the following way: if a moral theory/hypothesis makes a counterintuitive prediction, you could 1) reject your intuition, 2) reject the hypothesis ("I want to"), or 3) revise your hypothesis. It would be practical if one could actually try out a moral theory, but I don't see how one could go about doing that...
0TheOtherDave
Presumably that depends on how we came to think we held that moral theory in the first place. If I assert moral theory X because it does the best job of reflecting my moral intuitions, for example, then when I discover that my moral intuitions in a particular case contradict X, it makes sense to amend X to better reflect my moral intuitions. That said, I certainly agree that if I assert X for some reason unrelated to my moral intuitions, then modifying X based on my moral intuitions is a very questionable move. It sounds like you're presuming that the latter is generally the case when people assert utilitarianism?

I agree, and that's why my intuition pushes me towards Life Extension. But how does that fact fit into utilitarianism? And if you're diverging from utilitarianism, what are you replacing it with?

4[anonymous]
That birth doesn't create any utility for the person being born (since it can't be said to satisfy their preferences), but death creates disutility for the person who dies. Birth can still create utility for people besides the one being born, but then the same applies to death and disutility. All else being equal, this makes death outweigh birth.
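
(A rough way to write the asymmetry down under preference utilitarianism, notation mine rather than the commenter's:

$$\Delta U(\text{birth}) = 0 + E_{\text{others}}, \qquad \Delta U(\text{death}) = -L + E'_{\text{others}}, \quad L > 0,$$

where the 0 reflects that the person being born has no pre-existing preferences to satisfy, $L$ is the thwarted preference satisfaction of the person who dies, and the $E$ terms are effects on everyone else. If the external effects roughly cancel, i.e. "all else being equal", the $-L$ term makes death outweigh birth.)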

One doesn't have to be better than the other. That's what's in dispute.

I think making this comparison is important philosophically, because of the implications our answer has for other utilitarian dilemmas, but it's also important practically, in shaping our decisions about how to allocate our efforts to better the world.

Thanks -- but if I'm reading your post correctly, your arguments hinge on the utility experienced in Life Extension being greater than that in Replacement. Is that right? If I stipulate that the utility is equal, would your answer change?

3steven0461
If utility per life year is equal, and total life years are equal, then total utility is equal and total utilitarianism is indifferent. But for the question to be relevant for decision-making purposes, you have to keep constant not utility itself, but various inputs to utility, such as wealth. Nobody is facing the problem of how to distribute a fixed utility budget. (And then after that, of course, you can analyze how those inputs themselves would vary as a result of life extension.) I object to the phrasing "utility experienced". Utility isn't something you experience, it's a statement about a regularity in someone's preference ordering -- in this case, mine.
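
(Spelled out, the indifference claim is just, in my notation: $U_{\text{total}} = \bar{u} \times Y$, where $\bar{u}$ is utility per life-year and $Y$ is total life-years. If both are stipulated equal across Life Extension and Replacement, the totals are equal and total utilitarianism cannot distinguish the scenarios; any real difference has to enter through the inputs, such as wealth or accumulated skills and relationships, that determine $\bar{u}$ and $Y$.)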

Ah, true! I edited it again to include the original setup, so that people will know what Logos01 and drethelin are referring to.

Thanks -- I fixed the setup.

3[anonymous]
Please don't do that. OP's comment doesn't make any sense now.

My framing was meant to encourage you to disproportionately question beliefs which, if false, make you worse off. But motivated skepticism is disproportionately questioning beliefs that you want to be false. That's an important difference, I think.

Are you claiming that my version is also a form of motivated skepticism (perhaps a weaker form)? Or do you think my version's fine, but that I need to make it clearer in the text how what I'm encouraging is different from motivated skepticism?

2Vladimir_Nesov
The implicit idea is that any improvement in beliefs is beneficial, but that's not what comes to mind when reading that section; it sounds as if it's suggesting that there is a special kind of belief whose revision would be beneficial, as opposed to other kinds of beliefs (this got me confused for a minute). So the actual idea is to focus on belief revisions with high value of information. This is good, but it probably needs to be made more explicit and distanced a bit from the examples, which are representative of a different idea (inconvenient beliefs that you would like to go away).
0[anonymous]
If you focus on questioning the beliefs whose presence is particularly inconvenient, that's genuine motivated skepticism (motivations could be different). I think this section needs to be revised in terms of value of information, so that there's symmetry in what kinds of change of mind are considered. Focus on researching the beliefs that, if changed, would affect you most (in whatever way). Dispelling uselessly-hurting prejudices is more of a special case of possible benefits than a special case of the method.
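
(For reference, one textbook formalization of "value of information", not taken from the post itself: the value of learning the answer $e$ to question $E$ before choosing an action $a$ is

$$\mathrm{VoI}(E) = \mathbb{E}_{e}\!\left[\max_a \mathbb{E}[U \mid a, e]\right] - \max_a \mathbb{E}[U \mid a],$$

i.e. how much better you expect your best decision to be after updating than before. The suggestion above amounts to questioning first the beliefs for which this quantity is largest, regardless of which way you would like the answer to come out.)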

Incidentally, the filmmaker didn't capture my slide with the diagram of the revised model of rationality and emotions in ideal human* decision-making, so I've uploaded it.

The Straw Vulcan model of ideal human* decision-making: http://measureofdoubt.files.wordpress.com/2011/11/screen-shot-2011-11-26-at-3-58-00-pm.png

My revised model of ideal human* decision-making: http://measureofdoubt.files.wordpress.com/2011/11/screen-shot-2011-11-26-at-3-58-14-pm.png

*I realize now that I need this modifier, at least on Less Wrong!

Great point. That's true in many cases, such as when you're trying to decide which school to go to, and you make the decision deliberatively while taking into account the data from your intuitive reactions to the schools.

But in other cases, such as chess-playing, aren't you mainly just deciding based on your System 1 judgments? (Admittedly I'm no chess player; that's just my impression of how it works.)

I agree you need to use System 2 for your meta-judgment about which system to use in a particular context, but once you've made that meta-judgment, I think there are some cases in which you make the actual judgment based on System 1.

Am I correctly understanding your point?

1lessdazed
To the moderate theist who says he or she believes some things based on science/rationality/reason/etc. and some based on faith, I reply that the algorithm that sorts claims between categories is responsible for all evaluations. This means that when he or she only selects reasonable religious claims to be subject to reason, reason is responsible for none of the conclusions, and faith is responsible for all of them. In the same way, apparently pure System 1 judgments are best thought of as a special case of System 2 judgments so long as System 2 decided how to make them. I think implicit in almost any decision to use System 1 judgments is that if System 2 sees an explicit failure of them, one will not execute the System 1 judgment.
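
A small sketch of the dispatch structure this argument points at (entirely illustrative; the predicates are placeholders I made up): if reason decides which evaluator each claim is routed to, every conclusion is downstream of reason, even the ones nominally handled by faith.

```python
# Illustrative only: the "sorting algorithm" is responsible for every conclusion,
# because it decides which evaluator gets to rule on each claim.

def seems_defensible_by_reason(claim: str) -> bool:
    # Placeholder predicate standing in for "the claims that sound reasonable".
    return "miracle" not in claim

def evaluate_by_reason(claim: str) -> str:
    return "accept" if "evidence" in claim else "reject"

def evaluate_by_faith(claim: str) -> str:
    return "accept"

def evaluate(claim: str) -> str:
    # The sorter is itself a product of reason, so reason is responsible for
    # everything below it, including which claims faith ever gets to see.
    evaluator = evaluate_by_reason if seems_defensible_by_reason(claim) else evaluate_by_faith
    return evaluator(claim)
```

The analogous point for the two systems: if System 2 chose when to defer to System 1, and keeps a veto over visible failures, then the deferral is itself a System 2 policy.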

Yup, I went through the same reasoning myself -- I decided on "system 1" and "system 2" for their neutral tone, and also because they're Stanovich's preferred terms.

Good question. My intended meaning was closest to (h). (Although isn't (g) pretty much equivalent?)

0lessdazed
If emotions are necessary but not sufficient for forming goals among humans, the claim might be that rationality has no normative value to humans without goals without addressing rationality's normative value to humans with emotions who don't have goals. If you see them as equivalent, this implies that you believe emotions are necessary and sufficient for forming goals among humans. As much as this might be true for humans, it would be strange to say that after goals are formed, the loss of emotion in a person would obviate all their already formed non-emotional goals. So it's not just that you're discussing the human case and not the AI case, you're discussing the typical human.
3daenerys
Yay! Word of God on the issue! (Warning: TvTropes). Good to know I wasn't too far off-base. I can see how g and h can be considered equivalent using the emotions -> goals connection. In fact I would assume that would also make a and b pretty much equivalent, as well as c and d, e and f, etc.

Hey, thanks for the shoutout! @SilasBarta -- Yeah, I first encountered the mirror paradox in G&R, but I ended up explaining it differently than Drescher did, drawing on Gardner as well as some discussions with a friend, so I didn't end up quoting Drescher after all. I do like his explanation, though.

This was a really clarifying post for me. I had gotten to the point of noticing that "What is X?" debates were really just debates over the definition of X, but I hadn't yet taken the next step of asking why people care about how X is defined.

I think another great example of a disguised query is the recurring debate, "Is this art?" People have really widely varying definitions of "art" (e.g., some people's definition includes "aesthetically interesting," other people's definition merely requires "conceptually i... (read more)

Eliezer, you wrote:

But when you're really done, you'll know you're done. Dissolving the question is an unmistakable feeling...

I'm not so sure. There have been a number of mysteries throughout history that were resolved by science, but people didn't immediately feel as if the scientific explanation really resolved the question, even though it does to us now -- like the explanation of light as being electromagnetic waves.

I frequently find it tricky to determine whether a feeling of dissatisfaction indicates that I haven't gotten to the root of a problem,... (read more)

0[anonymous]
Dissolving a question and answering it are two different things. To dissolve a question is to rid yourself of all confusion regarding it, so that either the question reveals itself to be a wrong question, or the answer will become ridiculously obvious (or at least, the way to answer it will become obvious). In the second case, it would still be possible that the ridiculously obvious answer will turn out to be wrong, but this has little to do with whether or not the question has been dissolved. For example, we could one day find evidence that certain species of trees don't make sound waves when they fall and there are no humans within a 10-mile radius. This won't change the fact that the question was fully dissolved.

I like the cuteness of turning an old parlor game into a theory-test. But I suspect a more direct and effective test would be to take one true fact, invert it, and then ask your test subject which statement fits their theory better. (I always try to do that to myself when I'm fitting my own pet theory to a new fact I've just heard, but it's hard once I already know which one is true.)

Other advantages of this test over the original one proposed in the post: (1) You don't have to go to the trouble of thinking up fake data (a problematic endeavor, because the... (read more)
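
A sketch of how the inverted-fact variant could be run (hypothetical scaffolding, not from the post or the comment above):

```python
# Hypothetical procedure for the "invert one true fact" test of a pet theory.
# fact: a true statement; inverted: its negation, phrased as naturally as possible.
# ask: any function that shows a prompt to the theory-holder and returns their reply.
import random

def run_trial(fact: str, inverted: str, ask) -> bool:
    """Present the pair in random order; return True if the theory-holder picks the real fact."""
    a, b = (fact, inverted) if random.random() < 0.5 else (inverted, fact)
    choice = ask(f"Which does your theory predict better?\n 1) {a}\n 2) {b}\n> ")
    picked = a if choice.strip() == "1" else b
    return picked == fact

# A theory that "explains" a fact and its inversion equally well has forbidden nothing;
# over many trials its holder should do no better than chance.
```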