Speculative rationality skills and appropriable research or anecdote
Is rationality training in its infancy? I'd like to think so, given the paucity of novel, usable information produced by rationalists since the Sequence days. I like to model the rationalist body of knowledge as a superset of pertinent fields such as decision analysis, educational psychology and clinical psychology. This reductionist model enables rationalists to examine the validity of rationalist constructs while standing on the shoulders of giants.
CFAR's obscurantism (and subsequent price gouging) capitalises on our [fear of missing out](https://en.wikipedia.org/wiki/Fear_of_missing_out). They brand established techniques like mindfulness as 'againstness', or reference class forecasting as 'hopping', as if these were of their own genesis, spiting academic tradition and cultivating an insular community. In short, LessWrongers predictably flout [cooperative principles](https://en.wikipedia.org/wiki/Cooperative_principle).
This thread is to encourage you to speculate on potential rationality techniques, underdetermined by existing research, which might be useful areas for rationalist individuals and organisations to explore. I feel this may be a better use of rationality skills training organisations' time than gatekeeping information.
To get this thread started, I've posted a speculative rationality skill I've been working on. I'd appreciate any comments about it or experiences with it. However, this thread is about working towards the generation of rationality skills more broadly.
Thinking like a Scientist
Min/max goal factoring and belief mapping exercise
Edit 3: Removed description of previous edits and added the following:
This thread used to contain the description of a rationality exercise.
I have removed it and plan to rewrite it better.
I will repost it here, or delete this thread and repost in the discussion.
Thank you.
Supporting Effective Altruism through spreading rationality
So does spreading rationality contribute to Effective Altruism? I certainly think so, as a rationality popularizer and an Effective Altruist myself. My own donations of money and time are focused on my project, Intentional Insights, which tries to spread rationality to a broad audience and thus raise the sanity waterline, including about effective, evidence-based philanthropy. Specifically in relation to EA, in blogs for Intentional Insights and on our resources page, I make sure to highlight EA as an awesome thing to get involved in.
I'd particularly appreciate feedback on a draft fundraising letter (link here) for Effective Altruists on the way that Intentional Insights contributes to improving the world and specifically by getting people more engaged with Effective Altruism. I'd like to hear any thoughts on how I can optimize the letter to make it more effective. You can simply respond in comments, or send an email to gleb@intentionalinsights.org
I'd also like to hear your opinion of the broader issue of how spreading rationality helps contribute to improving the world and the EA movement in particular. Let me share my take. For the first, I think that, as shown by Brian Tomasik in this essay, increasing rational thinking is robustly positive for a broad range of short and long term future outcomes, and thus our broader work contributes to improving people’s lives overall. For the second, getting people to think rationally about themselves and their interactions with the world and use evidence-based means to evaluate reality and make their decisions will result in people applying these methods of thinking to their altruism.
What do you think?
Support for book on finding terminal goals and higher purpose
As part of my project of spreading rationality to a broad audience and thus raising the sanity waterline, I'm writing a book on using rationality-informed strategies to help people find terminal values, with an orientation toward encouraging a positive and externally-oriented higher purpose carried out in an effective way. To appeal to a wide audience, the book is couched in the language of self-improvement, while also being grounded in much recent research in the sphere of meaning and purpose, from psychology, cognitive neuroscience, medicine, etc. The goal of the book is to get readers to use rationality-informed, science-based methods to find their long-term goals, and also to get them interested in rational, science-based thinking more broadly, the whole point of my project.
To fund the costs of publishing the book, I'm running a crowdfunding campaign, and the campaign page describes the book in full. I would appreciate any support for the campaign, as well as feedback on optimizing the story and rewards in the campaign in such a way as to make it more appealing to a broad audience. Thank you!
EDIT: Several people messaged to ask how much is appropriate to contribute. My answer in these cases is always based on how many utilons and hedons you think this book has the potential to bring to the world. That's how I measure my own giving, and my own approach to rationality as a whole, as I describe in this LW Main Post.
Six Ways To Get Along With People Who Are Totally Wrong*
This is a re-post of something I wrote for the Effective Altruism Forum. Though most of the ideas have been raised here before, perhaps many times, I thought it might still be of interest as a brief presentation of them all!
--
* The people you think are totally wrong may not actually be totally wrong.
Effective altruism is a ‘broad tent’
As is obvious to anyone who has looked around here, effective altruism is based more on a shared interest in the question 'how can you do the most good' than a shared view on the answer. We all have friends who support:
- A wide range of different cause areas.
- A wide range of different approaches to those causes.
- Different values and moral philosophies regarding what it means to 'help others'.
- Different political views on how best to achieve even shared goals. On economic policy for example, we have people covering the full range from far left to far right. In the CEA offices we have voters for every major political party, and some smaller ones too.
Looking beyond just stated beliefs, we also have people with a wide range of temperaments, from highly argumentative, confident and outspoken to cautious, idiosyncratic and humble.
Our wide range of views could cause problems
There is a popular saying that 'opposites attract'. But unfortunately, social scientists have found precisely the opposite to be true: birds of a feather do in fact flock together.
One of the drivers of this phenomenon is that people who are different are more likely to get into conflicts with one another. If my partner and I liked to keep the house exactly the same way, we certainly wouldn't have as many arguments about cleaning (I'll leave you to speculate about who is the untidy one!). People who are different from you may initially strike you as merely amusing, peculiar or mistaken, but when you talk to them at length and they don't see reason, you may start to see them as stupid, biased, rude, impossible to deal with, unkind, and perhaps even outright bad people.
A movement brought together by a shared interest in the question ‘what should we do?’ will inevitably have a greater diversity of priorities, and justifications for those priorities, than a movement united by a shared answer. This is in many ways our core strength. Maintaining a diversity of views means we are less likely to get permanently stuck on the wrong track, because we can learn from one another's scholarship and experiences, and correct course if necessary.
However, it also means we are necessarily committed to ideological pluralism. While it is possible to maintain ‘Big Tent’ social movements, they face some challenges. The more people hold opinions that others dislike, the more possible points of friction there are that can cause us to form negative opinions of one another. There have already been strongly worded exchanges online demonstrating the risk.
When a minority holds an unpopular view they can feel set upon and bullied, while the majority feels mystified and frustrated that a small group of people can't see the obvious truth that so many accept.
My first goal with this post is to make us aware of this phenomenon, and offer my support for a culture of peaceful coexistence between people who, even after they share all their reasons and reflect, still disagree.
My second goal is to offer a few specific actions that can help us avoid interpersonal conflicts that don't contribute to making the world a better place:
1. Remember that you might be wrong
Hard as it is to keep in mind when you're talking to someone who strongly disagrees with you, it is always possible that they have good points to make that would change your mind, at least a bit. Most claims are only ‘partially true or false’, and there is almost always something valuable you can learn from someone who disagrees with you, even if it is just an understanding of how they think.
If the other person seems generally as intelligent and informed about the topic as you, it's not even clear why you should give more weight to your own opinion than theirs.
2. Be polite, doubly so if your partner is not
Being polite will make both the person you are talking to, and onlookers, more likely to come around to your view. It also means that you're less likely to get into a fight that will hurt others and absorb your precious time and emotional energy.
Politeness has many components, some notable ones being: not criticising someone personally; interpreting their behaviour and statements in a fairly charitable way; not being a show-off, or patronising and publicly embarrassing others; respecting others as your equals, even if you think they are not; conceding when they have made a good point; and finally keeping the conversation focussed on information that can be shared, confirmed, and might actually prove persuasive.
3. Don't infer bad motivations
While humans often make mistakes in their thinking, it's uncommon for them to be straight out uninterested in the welfare of others or what is right, especially so in this movement. Even if they are, they are probably not aware that that is the case. And even if they are aware, you won't come across well to onlookers by addressing them as though they have bad motivations.
If you really do become convinced the person you are talking to is speaking in bad faith, it's time to walk away. As they say: don't feed the trolls.
4. Stay cool
Even when people say things that warrant anger and outrage, expressing anger or outrage publicly will rarely make the world a better place. Anger being understandable or natural is very different from it being useful, especially if the other person is likely to retaliate with anger of their own.
Being angry does not improve the quality of your thinking, persuade others that you're right, make you happier or more productive, or make for a more harmonious community.
In its defence, anger can be highly motivating. Unfortunately, it is indiscriminate, motivating you to do very valuable, ineffective, and even harmful things alike.
Any technique that can keep you calm is therefore useful. If something is making you unavoidably angry, it's typically best to walk away and let other people deal with it.
5. Pick your battles
Not all things are equally important to reach a consensus about. For good or ill, most things we spend our days talking about just aren't that 'action relevant'. If you find yourself edging towards interpersonal conflict on a question that i) isn't going to change anyone's actions much; ii) isn't going to make the world a much better place, even if it does change their actions; or iii) is very hard to persuade others about, maybe it isn't worth the cost of interpersonal tension to explore in detail.
So if someone in the community says something unrelated or peripheral to effective altruism that you disagree with, which could develop into a conflict, you always have the option of not taking the bait. In a week, you and they may not even remember it was mentioned, let alone consider it worth damaging your relationship over.
6. Let it go

The most important advice of all.
Perhaps you are discussing something important. Perhaps you've made great arguments. Perhaps everyone you know agrees with you. You've been polite, and charitable, and kept your cool. But the person you're talking to still holds a view you strongly disagree with and believe is harmful.
If that's the case, it's probably time for you both to walk away before your opinions of one another fall too far, or the disagreement spirals into sectarianism. If someone can't be persuaded, you can at least avoid creating ill-will between you that ensures they never come around. You've done what you can for now, and that is enough.
Hopefully time will show which of you is right, or space away from a public debate will give one of you the chance to change your mind in private without losing face. In the meantime maybe you can't work closely together, but you can at least remain friendly and respectful.
It isn't likely or even desirable for us to end up agreeing with one another on everything. The world is a horribly complex place; if the questions we are asking had easy answers the research we are doing wouldn't be necessary in the first place.
The cost of being part of a community that accepts and takes an interest in your views, even though many think you are pulling in the wrong direction, is to be tolerant of others in the same way even when you think their views are harmful.
So, sometimes, you just have to let it go.
--
PS
If you agree with me about the above, you might be tempted to post or send it to people every time they aren’t playing by these rules. Unfortunately, this is likely to be counterproductive and lead to more conflict rather than less. It’s useful to share this post in general, but not trot it out as a way of policing others. The most effective way to promote this style of interaction is to exemplify it in the way you treat others, and not get into long conversations with people who have less productive ways of talking to others.
Thanks to Amanda, Will, Diana, Michelle, Catriona, Marek, Niel, Tonja, Sam and George for feedback on drafts of this post.
[Link] Mainstream media writing about rationality-informed approaches
Wanted to share two articles published in mainstream media, namely Ohio newspapers, about how rationality-informed strategies help people improve their lives.
This one is about improving one's thinking, feeling, and behavior patterns overall, and especially one's highest-order goals, presented as "meaning and purpose."
This one is about using rationality to deal with mental illness, and specifically highlights the strategy of "in what world do I want to live?"
I know about these two articles because I was personally involved in their publication as part of my broader project of spreading rationality widely. What other articles are there that others know about?
[Link] Promoting rationality in higher education media channels
Glad to share an op-ed piece I published in one of the premier higher education media channels on how I, as a professor, used rationality-informed strategies to deal with mental illness in the classroom. This is part of my broader project to promote rationality to a broad audience and thus raise the sanity waterline, so good news on that front. I'd also be glad to hear your advice about other strategies to promote rationality broadly, and also any collaboration you may be interested in doing together around such public outreach.
Sharing about my mental illness and popularizing future-oriented thinking: feedback appreciated!
I'd appreciate feedback on optimizing a blog post that shares about my mental illness and popularizes future-oriented thinking to a broad audience. I'm using story-telling as the driver of the narrative, and sprinkling in elements of rational thinking, such as hyperbolic discounting, mental maps, and future-oriented thinking, in a strategic way. The target audience is college-age youth and young adults. Any suggestions for what works well, and what can be improved would be welcomed! The blog draft itself is below the line.
P.S. For context, the blog is part of a broader project, Intentional Insights, aimed at promoting rationality to a broad audience, as I described in this LW discussion post. To do so, we couch rationality in the language of self-improvement and present it in a narrative style.
_______________________________________________________________________________________________________________________
Coming Out of the Mental Health Closet
My hand jerked back, as if the computer mouse had turned into a real mouse. I just couldn’t do it. Would they think I am crazy? Would they whisper behind my back? Would they never trust me again? These are the kinds of anxious thoughts that ran through my head as I was about to post on my Facebook profile revealing my mental illness to my Facebook friends, about 6 months after my condition began.
I really wanted to share much earlier about my mental illness, a mood disorder characterized by high anxiety, sudden and extreme fatigue, and panic attacks. It would have felt great to be genuinely authentic with people in my life, and not hide who I am. Plus, I would have been proud to contribute to overcoming the stigma against mental illness in our society, especially since this stigma impacts me on such a personal level.
Ironically, the very stigma against mental illness, combined with my own excessive anxiety response, made it very hard for me to share. I was really anxious about whether friends and acquaintances would turn away from me. I was also very concerned about the impact on my professional career of sharing publicly, due to the stigma in academia against mental illness, including at my workplace, Ohio State, as my colleague and fellow professor described in his article.
Whenever the thought of telling others entered my mind, I felt a wave of anxiety pass through me. My head began to pound, my heart sped up, my breathing became fast and shallow, almost like I was suffocating. If I didn’t catch it in time, the anxiety could lead to a full-blown panic attack, or sudden and extreme fatigue, with my body collapsing in place. Not a pretty picture.
Still, I did eventually start discussing my mental illness with some very close friends who I was very confident would support me. And one conversation really challenged my mental map, in other words how I perceive reality, about sharing my story of mental illness.
My friend told me something that really struck me, namely his perspective about how great it would be if all people who needed professional help with their mental health actually went to get such help. One of the main obstacles, as research shows, is the stigma against mental illness. We discussed how one of the best ways to deal with such stigma is for well-functioning people with mental illness to come out of the closet about their condition.
Well, I am one of these well-functioning people. I have a great job and do it well, have wonderful relationships, and participate in all sorts of civic activities. The vast majority of people who know me don’t realize I suffer from a mental illness.
That conversation motivated me to think seriously through the roadblocks thrown up by the emotional part of my brain. Previously, I had never sat down for a few minutes and forced myself to think about what good things might happen if I pushed past all the anxiety and stress of telling people in my life about my mental illness.
I realized that I was just flinching away, scared of the short-term pain of rejection and not thinking about the long-term benefits to me and to others of sharing my story. I was falling for a thinking error that scientists call hyperbolic discounting, a reluctance to make short-term sacrifices for much higher long-term rewards.
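For readers curious about the mechanics behind the term: hyperbolic discounting is commonly modelled with the formula V = A / (1 + kD), where A is the reward size, D the delay, and k an individual discount rate. The following is a minimal sketch of that model; the utility numbers and the value of k here are illustrative assumptions, not data from any study:

```python
def hyperbolic_value(amount, delay_days, k=0.05):
    """Perceived present value of a delayed reward under hyperbolic
    discounting: V = A / (1 + k * D). The rate k is illustrative."""
    return amount / (1 + k * delay_days)

# Illustrative utilities (arbitrary units): sharing publicly costs
# 10 units of anxiety today, but yields 100 units of benefit in a year.
immediate_cost = 10
delayed_benefit = hyperbolic_value(100, delay_days=365)

# 100 / (1 + 0.05 * 365) ≈ 5.2: the large future benefit *feels*
# smaller than the small present cost, so the discounter flinches away.
print(round(delayed_benefit, 1))
print(delayed_benefit < immediate_cost)
```

The point of the sketch is only that steep discounting can make a clearly worthwhile long-term trade feel like a loss in the moment, which is exactly the flinch described above.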
To combat this problem, I imagined what world I wanted to live in a year from now – one where I shared about this situation now on my Facebook profile, or one where I did not. This approach is based on research showing that future-oriented thinking is very helpful for dealing with thinking errors associated with focusing on the present.
In the world where I would share right now about my condition, I would be very anxious about what people think of me. Anytime I saw someone who had found out for the first time, I would be afraid about the impact on that person’s opinion of me. I would be watching her or his behavior closely for signs of distancing from me. And this would not only be my anxiety: I was quite confident that some people would not want to associate with me due to my mental illness. However, over time, this close watching and these anxious thoughts would diminish. All the people who knew me previously would find out. All new people who met me would learn about my condition, since I would not keep it a secret. I would make the kind of difference I wanted to make in the world by fighting the stigma against mental illness in our society, and especially in academia. Just as important, it would be a huge burden off my back not to hide myself and to be authentic with people in my life.
I imagined a second world. I would continue to hide my mental health condition from everyone but a few close friends. I would always have to keep this secret under wraps, and worry about people finding out about it. I would not be making the kind of impact on our society that I knew I would be able to make. And likely, people would find out about it anyway, whether because I chose to share it or some other way, and I would get all the negative consequences later.
Based on this comparison, I saw that the first world was much more attractive to me. So I decided to take the plunge, and made a plan to share about the situation publicly. As part of doing so, I made that Facebook post. I had such a good reaction from my Facebook friends that I decided to make the post publicly available on my Facebook to all, not only my friends. Moreover, I decided to become an activist in talking about my mental condition publicly, as in this essay that you are reading.
What can you do?
So how can you apply this story to your life? Whether you want to come out of the closet to people in your life about some unpleasant news, or more broadly overcome the short-term emotional pain of taking an action that would help you achieve your long-term goals, here are some strategies.
1) Consider the world where you want to live a year from now. What would the world look like if you take the action? What would it look like if you did not take the action?
2) Evaluate all the important costs and benefits of each world. What world looks the most attractive a year from now?
3) Decide on the actions needed to get to that world, make a plan, and take the plunge. Be flexible about revising your plan based on new information such as reactions from others, as I did regarding sharing about my own condition.
What do you think?
- Do you ever experience a reluctance to tell others about something important to you because of your concern about their response? How have you dealt with this problem yourself?
- Is there any area of your life where an orientation to the short term undermines much higher long-term rewards? Do you have any effective strategies for addressing this challenge?
- Do you think the strategy of imagining the world you want to live in a year from now can be helpful in any area of your life? If so, where and how?
___________________________________________________________________________________________________________
Thanks in advance for your feedback and suggestions on optimizing the post!
Translating bad advice
While writing my Magnum Opus I came across this piece of writing advice by Neil Gaiman:
“When people tell you something’s wrong or doesn’t work for them, they are almost always right. When they tell you exactly what they think is wrong and how to fix it, they are almost always wrong.”
And it struck me how true it was, even in other areas of life. People are terrible at giving advice on how to improve yourself, or on how to improve anything really. To illustrate this, here is what you would expect advice from a good rationalist friend to look like:
1) “Hey, I’ve noticed you tend to do X.”
2) “It’s been bugging me for a while, though I’m not really sure why. It’s possible other people think X is bad as well, you should ask them about it.”
3) Paragon option: “Maybe you could do Y instead? I dunno, just think about it.”
4) Renegade option: “From now on I will slap you every time you do X, in order to help you stop being retarded about X.”
I wish I had more friends who gave advice like that, especially the renegade option. Instead, here is what I get in practice:
1) Thinking: Argh, he is doing X again. That annoys me, but I don’t want to be rude.
2) Thinking: Okay, he is doing Z now, which is kind of like X and a good enough excuse to vent my anger about X
3) *Complains about Z in an irritated manner, and immediately forgets that there’s even a difference between X and Z*
4) Thinking: Oh shit, that was rude. I better give some arbitrary advice on how to fix Z so I sound more productive.
As you can see, social rules and poor epistemology really get in the way of good advice, which is incredibly frustrating if you genuinely want to improve yourself! (Needless to say, ignoring badly phrased advice is incredibly stupid and you should never do this. See HPMOR for a fictional example of what happens if you try to survive on your wits alone.) A naïve solution is to tell everybody that you are the sort of person who loves to hear criticism in the hope that they will tell you what they really think. This never works because A) Nobody will believe you since everyone says this and it’s always a lie, and B) It’s a lie, you hate hearing real criticism just like everybody else.
The best solution I have found is to make it a habit to translate bad advice into good advice, in the spirit of what Neil Gaiman said above: Always be on the lookout for people giving subtle clues that you are doing something wrong and ask them about it (preferably without making yourself sound insecure in the process, or they’ll just tell you that you need to be more confident). When they give you some bullshit response that is designed to sound nice, keep at it and convince them to give you their real reasons for bringing it up in the first place. Once you have recovered the original information that led them to give the poor advice, you can rewrite it as good advice in the format used above. Here is an example from my own work experience:
1) Bad advice person: “You know, you may have your truth, but someone else may have their own truth.”
2) Me, confused and trying not to be angry at bad epistemology: “That’s interesting. What makes you say that?”
3) *5 minutes later*. “Holy shit, my insecurity is being read as arrogance, and as a result people feel threatened by my intelligence which makes them defensive? I never knew that!”
Seriously, apply this lesson. And get a good friend to slap you every time you don’t.
Is my theory on why censorship is wrong correct?
So, I have next to no academic knowledge. I have literally not read or perhaps even picked up any book since eighth grade, which is where my formal education ended, and I turn 20 this year, but I am sitting on some theories pertaining to my understanding of rationality, and procrastinating about expressing them has gotten me here. I'd like to just propose my theory on why censorship is wrong, here. Please tell me whether or not you agree or disagree, and feel free to express anything else you feel you would like to in this thread. I miss bona fide argument, but this community seems way less hostile than the one community I was involved in elsewhere....
Also, I feel I should affirm again that my academic knowledge is almost entirely just not there... I know the LessWrong community has a ton of resources they turn to and indulge in, which is more or less a bible of rationality by which you all abide, but I have read or heard of none of it. I don't mean to offend you with my willful ignorance. Sorry. Also, sorry for possibly incorporating similes and stuff into my expression... I know many out there are on the autistic spectrum and can't comprehend it so I'll try to stop doing that unless I'm making a point.
Okay, so, since the following has been bothering me a lot since I joined this site yesterday and even made me think against titling this what I want, consider the written and spoken word. Humans literally decided as a species to sequence scribbles and mouth noises in an entirely arbitrary way, ascribe emotion to their arbitrary scribbles and mouth noises, and then claim, as a species, that very specific arbitrary scribbles and mouth noises are inherent evil and not to be expressed by any human. Isn't that fucking retarded?
I know what you may be thinking. You might be thinking, "wow, this hoofwall character just fucking wrote a fucking arbitrary scribble that my species has arbitrarily claimed to be inherent evil without first formally affirming, absolutely, that the arbitrary scribble he uttered could never be inherent evil and that writing it could never in itself do any harm. This dude obviously has no interest in successfully defending himself in argument". But fuck that. This is not the same as murdering a human and trying to conceive an excuse defending the act later. This is not the same as affecting the world in any way that has been established to be detrimental and then trying to defend the act later. This is literally sequencing the very letters of the very language the human has decided they are okay with and will use to express themselves in such a way that it reminds the indoctrinated and conditioned human of emotion they irrationally ascribe to the sequence of letters I wrote. This is possibly the purest argument conceivable for demonstrating superfluity in the human world, and the human psyche. There could never be an inherent correlation between one's emotionality and an arbitrary sequence of mouth noises or scribbles or what have you that exists entirely independent of the human. If one were to erase an arbitrary scribble that the human irrationally ascribes emotion to, the human would still have the capacity to feel the emotion the arbitrary scribble roused within them. The scribble is not literally the embodiment of emotionality. This is why censorship is retarded.
Mind you, I do not discriminate against literal retards, or blacks, or gays, or anything. I do, however, incorporate the words "retard", "nigger", and "faggot" into my vocabulary literally exclusively because it triggers humans and demonstrates the fact that the validity of one's argument and one's ability to defend themselves in argument does not matter to the human. I have at times proposed my entire argument, actually going so far as to quantify the breadth of this universe as I perceive it, the human existence, emotionality, and right and wrong before even uttering a fuckdamn swear, but it didn't matter. Humans think plugging their ears and chanting a mantra of "lalala" somehow gives themselves a valid argument for their bullshit, but whatever. Affirming how irrational the human is is a waste of time. There are other forms of censorship I should address as well, but I suppose not before proposing what I perceive the breadth of everything less fundamental than the human to be.
It's probably very easy to deduce the following, but nothing can be proven to exist. Also, please do bear with my what are probably argument by assertion fallacies at the moment... I plan on defending myself before this post ends.
Any opinion any human conceives is just a consequence of their own perception, the likes of which appears to be a consequence of their physical form, the likes of which is a consequence of properties in this universe as we perceive it. We cannot prove our universe's existence beyond what we have access to in our universe as we perceive it, therefore we cannot prove that we exist. We can't prove that our understanding of existence is true existence; we can only prove, within our universe, that certain things appear to be in concurrence with the laws of this universe as we perceive it. We can propose for example that an apple we can see occupies space in this universe, but we can't prove that our universe actually exists beyond our understanding of what existence is. We can't go more fundamental than what composes our universe... We can't go up if we are mutually exclusive with the very idea of "up", or are an inferior consequence of "up" which is superior to us.
I really don't remember what else I would say after this, but, I guess, without divulging how much I obsess about breaking emotionality into a science: I believe nudity can't be inherently evil either, because it is literally the cause of us, the human, and we are necessary to be able to perceive good and evil in the first place. If humans were not extant to dominate the world and force it to tend toward the ends they wanted, anything living would just live, breed, and die, and nothing would be inherently "good" or "evil". It would just be. Until something evolved that gained the capacity to force distinctions between "good" and "evil", there would be no such constructs. We have no reason to believe there would be. I don't know how I can affirm that further. If nudity - and exclusively human nudity, mind you - were to be considered inherently evil, that would mean that the human is inherently evil, that everything the human perceives is inherently evil, and that the human's understanding of "rationality" is just a poor, grossly misled attempt at coping with the evil properties they retain, and is inherently worthless. Which I actually believe, but an opinion that contrary is literally satanism, and I don't think I'm going to be expounding all of that here. But fundamentally, human nudity cannot be inherently evil if the human's opinions are to be considered worth anything at all; and if you want to go less fundamental than that and approach it from a "but nudity makes me feel bad" standpoint, you can simply warp your perception of the world to force seeing or otherwise being reminded of things to be correlated with certain emotions within you. I seem to be autistic, so I obsess about breaking emotionality down to a science every day, but this isn't the post to be talking about that. In any case, you can't prove that the act of seeing another human naked is literal evil, so your opinions on it are worthless.
Yeah... I don't know what else I could say here, or if censorship exists in forms other than preventing humans from being exposed to human nudity, or human-conceived words. I should probably assert as well that I believe the human's thinking that the inherent evil of human nudity somehow becomes okay to see when a human reaches the age of 18, or 21, or 16, or 12 depending on which subset of human you ask is retarded. Also, by "retarded" I do not literally mean "retarded". I use the word as a trigger word that's meant to embody and convey bad emotion the human decides they want to feel when they're exposed to it. This entire post is dripping with the grossest misanthropy but I'm interested in seeing what the responses to this are... By the way, if you just downvote me without expressing to me what you think I'm doing wrong, as far as I can tell you are just satisfied with vaguely masturbating your dissenting opinion you care not for even defining in my direction, so, whatever makes you sleep at night, if you do that... but you're wrong though, and I would argue that to the death.
Against the internal locus of control
What do you think about these pairs of statements?
- People's misfortunes result from the mistakes they make
- Many of the unhappy things in people's lives are partly due to bad luck
- In the long run, people get the respect they deserve in this world.
- Unfortunately, an individual's worth often passes unrecognized no matter how hard he tries.
- Becoming a success is a matter of hard work; luck has little or nothing to do with it.
- Getting a good job mainly depends on being in the right place at the right time.
They share a theme: the first statement in each pair suggests that an outcome (misfortune, respect, or a good job) for a person is the result of their own action or volition. The second assigns the outcome to some external factor like bad luck.(1)
People who tend to think their own attitudes or efforts can control what happens to them are said to have an internal locus of control, those who don't, an external locus of control. (Call them 'internals' and 'externals' for short).
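The internal/external distinction comes from forced-choice scales like the one the pairs above are drawn from. As a toy illustration only (my own sketch, not a validated instrument), scoring such a scale could look like this, using the three pairs quoted above:

```python
# Toy scorer for a forced-choice internal/external scale. The items are the
# three pairs quoted in the post; per the post, the first statement in each
# pair is the "internal" reading and the second the "external" one.
PAIRS = [
    ("People's misfortunes result from the mistakes they make",
     "Many of the unhappy things in people's lives are partly due to bad luck"),
    ("In the long run, people get the respect they deserve in this world",
     "Unfortunately, an individual's worth often passes unrecognized no matter how hard he tries"),
    ("Becoming a success is a matter of hard work; luck has little or nothing to do with it",
     "Getting a good job mainly depends on being in the right place at the right time"),
]

def locus_score(choices):
    """choices[i] is 0 if the respondent endorsed the internal statement of
    pair i, and 1 if they endorsed the external one. Returns the fraction of
    internal endorsements (1.0 = fully 'internal', 0.0 = fully 'external')."""
    internal = sum(1 for c in choices if c == 0)
    return internal / len(choices)

print(locus_score([0, 1, 0]))  # endorses the internal statement on 2 of 3 pairs
```

A real scale has many more items and validated scoring norms; this only shows the shape of the classification.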
Internals seem to do better at life, pace obvious confounding: maybe instead of internals doing better by virtue of their internal locus of control, being successful inclines you to attribute success to internal factors and so become more internal, and vice versa if you fail.(2) If you don't think the relationship is wholly confounded, then there is some prudential benefit in becoming more internal.
Yet internal versus external is not just a matter of taste, but a factual claim about the world. Do people, in general, get what their actions deserve, or is it generally thanks to matters outside their control?
Why the external view is right
Here are some reasons in favour of an external view:(3)
- Global income inequality is marked (e.g. someone in the bottom 10% of the US population by income is still richer than two thirds of the world population - more here). The main predictor of your income is your country of birth, which is thought to explain around 60% of the variance: not only more important than any other factor, but more important than all other factors put together.
- Of course, the 'remaining' 40% might not be solely internal factors either. Another external factor we could put in would be parental class. Include that, and the two factors explain 80% of variance in income.
- Even conditional on being born in the right country (and to the right class), success may still not be a matter of personal volition. One robust predictor of success (grades in school, job performance, income, and so on) is IQ. The precise determinants of IQ remain controversial, but it is known to be highly heritable, and the proposed 'non-genetic' factors (early childhood environment, intra-uterine environment, etc.) are similarly outside one's locus of control.
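As an aside on what "explains X% of the variance" means in the figures above, here is a toy decomposition with made-up effect sizes (synthetic data, not the actual numbers behind the 60%/80% claims), just to show the arithmetic:

```python
import random

random.seed(0)

# Synthetic income model: income is the sum of two independent external
# factors plus a residual. The effect sizes (standard deviations) below are
# invented purely for illustration.
n = 10_000
country_effect = [random.gauss(0, 3) for _ in range(n)]  # external factor
class_effect = [random.gauss(0, 2) for _ in range(n)]    # external factor
residual = [random.gauss(0, 2) for _ in range(n)]        # everything else
income = [c + k + r for c, k, r in zip(country_effect, class_effect, residual)]

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

total = variance(income)
print("country alone explains ~%.0f%%" % (100 * variance(country_effect) / total))
print("country + class explain ~%.0f%%" % (
    100 * (variance(country_effect) + variance(class_effect)) / total))
```

Because the components are independent, their variances add, so each factor's share of the total variance is a meaningful "fraction explained"; with real data the components are correlated and the decomposition is done by regression instead.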
On cursory examination, the contours of how our lives turn out are set by factors outside our control, merely by where we are born and who our parents are. Even after this, we know various predictors, similarly outside (or mostly outside) our control, that exert their effects on how our lives turn out: IQ is one, but we could throw in personality traits, mental health, height, attractiveness, etc.
So the answer to 'What determined how I turned out, compared to everyone else on the planet?' surely has to be primarily about external factors, and our internal drive or will is relegated a long way down the list. Even if we look at narrower questions, like 'What has made me turn out the way I am, versus all the other people who were likewise born in rich countries in comfortable circumstances?', it is still unclear whether the locus of control resides within our will: perhaps a combination of our IQ, height, gender, race, risk of mental illness and so on will still do the bulk of the explanatory work.(4)
Bringing the true and the prudentially rational together again
If folks with an internal locus of control succeed more, yet the external view is generally closer to the truth of the matter, this is unfortunate. What is true and what is prudentially rational seem to be diverging, such that it might be in your interests not to know the evidence supporting an external locus of control view, as deluding yourself into an internal locus of control view would lead to your greater success.
Yet it is generally better not to believe falsehoods. Further, the internal view may have some costs. One possibility is fueling a just-world fallacy: if one thinks that outcomes are generally internally controlled, then a corollary is that when bad things happen to someone, or they fail at something, it was primarily their fault rather than them being a victim of circumstance.
So what next? Perhaps the right view is to say that: although most important things are outside our control, not everything is. Insofar as we do the best with what things we can control, we make our lives go better. And the scope of internal factors - albeit conditional on being a rich westerner etc. - may be quite large: it might determine whether you get through medical school, publish a paper, or put in enough work to do justice to your talents. All are worth doing.

Acknowledgements
Inspired by Amanda MacAskill's remarks, and in partial response to Peter McIntyre. Neither is responsible for what I've written, and neither the former's agreement nor the latter's disagreement with this post should be assumed.
1) Some ground-clearing: free will can begin to loom large here - after all, maybe my actions are just a result of my brain's particular physical state, and my brain's particular physical state at t depends on its state at t-1, and so on and so forth all the way back to the big bang. If so, there is no 'internal willer' for my internal locus of control to reside in.
However, even if that is so, we can parse things in a compatibilist way: 'internal' factors are those which my choices can affect; external factors are those which my choices cannot affect. "Time spent training" is an internal factor as to how fast I can run, as (borrowing Hume), if I wanted to spend more time training, I could spend more time training, and vice versa. In contrast, "Hemiparesis secondary to birth injury" is an external factor, as I had no control over whether it happened to me, and no means of reversing it now. So the first set of answers imply support for the results of our choices being more important; whilst the second set assign more weight to things 'outside our control'.
2) In fairness, there's a pretty good story as to why there should be 'forward action': in the cases where outcome is a mix of 'luck' factors (which are a given to anyone), and 'volitional ones' (which are malleable), people inclined to think the internal ones matter a lot will work hard at them, and so will do better when this is mixed in with the external determinants.
3) This ignores edge cases where we can clearly see the external factors dominate - e.g. getting childhood leukaemia, getting struck by lightning etc. - I guess sensible proponents of an internal locus of control would say that there will be cases like this, but for most people, in most cases, their destiny is in their hands. Hence I focus on population level factors.
4) Ironically, one may wonder to what extent having an internal versus external view is itself an external factor.
Thinking well
Many people want to know how to live well. Part of living well is thinking well, because if one thinks the wrong thoughts it is hard to do the right things to get the best ends.
We think a lot about how to think well, and one of the first things we thought about was how to not think well. Bad ways of thinking repeat in ways we can see coming, because we have looked at how people think and know more now about that than we used to.
But even if we know how other people think bad thoughts, that is not enough. We need to both accept that we can have bad ways of thinking and figure out how to have good ways of thinking instead.
The first is very hard on the heart, but is why we call this place "Less Wrong." If we had called it something like more right, it could have been about how we're more right than other people instead of more right than our past selves.
The second is very hard on the head. It is not just enough to study the bad ways of thinking and turn them around. There are many ways to be wrong, but only a few ways to be right. If you turn left all the way around, it will point right, but we want it to point up.
The heart of our approach has a few parts:
- We are okay with not knowing. Only once we know we don't know can we look.
- We are okay with having been wrong. If we have wrong thoughts, the only way to have right thoughts is to let the wrong ones go.
- We are quick to change our minds. We look at what is when we get the chance.
- We are okay with the truth. Instead of trying to force it to be what we thought it was, we let it be what it is.
- We talk with each other about the truth of everything. If one of us is wrong, we want the others to help them become less wrong.
- We look at the world. We look at both the time before now and the time after now, because many ideas are only true if they agree with the time after now, and we can make changes to check those ideas.
- We like when ideas are as simple as possible.
- We make plans around being wrong. We look into the dark and ask what the world would look like if we were wrong, instead of just what the world would look like if we were right.
- We understand that as we become less wrong, we see more things wrong. We try to fix all the wrong things, because as soon as we accept that something will always be wrong we can not move past that thing.
- We try to be as close to the truth as possible.
- We study as many things as we can. There is only one world, and to look at a part tells you a little about all the other parts.
- We have a reason to do what we do. We do these things only because they help us, not because they are their own reason.
Feedback on promoting rational thinking about one's career choice to a broad audience
I'd appreciate feedback on optimizing a blog post that promotes rational thinking about one's career choice to a broad audience in a way that's engaging, accessible, and fun to read. I'm aiming to use storytelling as the driver of the narrative, sprinkling in elements of rational thinking, such as agency and the mere-exposure effect, in a strategic way. The target audience is college-age youth and young adults, as you'll see from the narrative. Any suggestions for what works well and what can be improved would be welcome! The blog draft itself is below the line.
P.S. For context, the blog is part of a broader project, Intentional Insights, aimed at promoting rationality to a broad audience, as I described in this LW discussion post. To do so, we couch rationality in the language of self-improvement and present it in a narrative style.
____________________________________________________________________________________________________________
Title:
"Stop and Think Before It's Too Late!"
Body:
Back when I was in high school and through the first couple of years in college, I had a clear career goal.
I planned to become a medical doctor.
Why? Looking back at it, my career goal was a result of the encouragement and expectations from my family and friends.
My family immigrated from the Soviet Union when I was 10, and we spent the next few years living in poverty. I remember my parents’ early jobs in America, my dad driving a bread delivery truck and my mom cleaning other people’s houses. We couldn’t afford nice things. I felt so ashamed in front of other kids for not being able to get that latest cool backpack or wear cool clothes – always on the margins, never fitting in. My parents encouraged me to become a medical doctor. They gave up successful professional careers when they moved to the US, and they worked long and hard to regain financial stability. It’s no wonder that they wanted me to have a career that guaranteed a high income, stability, and prestige.
My friends also encouraged me to go into medicine. This was especially so with my best friend in high school, who also wanted to become a medical doctor. He wanted to have a prestigious job and make lots of money, which sounded like a good goal to have and reinforced my parents’ advice. In addition, friendly competition was a big part of what my best friend and I did. Whether debating complex intellectual questions, trying to best each other on the high school chess team, or playing poker into the wee hours of the morning. Putting in long hours to ace the biochemistry exam and get a high score on the standardized test to get into medical school was just another way for us to show each other who was top dog. I still remember the thrill of finding out that I got the higher score on the standardized test. I had won!
As you can see, it was very easy for me to go along with what my friends and family encouraged me to do.
I was in my last year of college, working through the complicated and expensive process of applying to medical schools, when I came across an essay question that stopped me in my tracks:
“Why do you want to be a medical doctor?”
Why did I want to be a medical doctor? Well, it’s what everyone around me wanted me to do. It was what my family wanted me to do. It was what my friends encouraged me to do. It would mean getting a lot of money. It would be a very safe career. It would be prestigious. So it was the right thing for me to do. Wasn’t it?
Well, maybe it wasn’t.
I realized that I never really stopped and thought about what I wanted to do with my life. My career is how I would spend much of my time every week for many, many years, but I never considered what kind of work I would actually want to do, not to mention whether I would want to do the work that’s involved in being a medical doctor. As a medical doctor, I would work long and sleepless hours, spend my time around the sick and dying, and hold people’s lives in my hands. Is that what I wanted to do?
There I was, sitting at the keyboard, staring at the blank Word document with that essay question at the top. Why did I want to be a medical doctor? I didn’t have a good answer to that question.
My mind was racing, my thoughts were jumbled. What should I do? I decided to talk to someone I could trust, so I called my girlfriend to help me deal with my mini-life crisis. She was very supportive, as I thought she would be. She told me I shouldn’t do what others thought I should do, but think about what would make me happy. More important than making money, she said, is having a lifestyle you enjoy, and that lifestyle can be had for much less than I might think.
Her words provided a valuable outside perspective for me. By the end of our conversation, I realized that I had no interest in doing the job of a medical doctor. And that if I continued down the path I was on, I would be miserable in my career, doing it just for the money and prestige. I realized that I was on the medical school track because others I trust - my parents and my friends - told me it was a good idea so many times that I believed it was true, regardless of whether it was actually a good thing for me to do.
Why did this happen?
I later learned that I found myself in this situation because of a common thinking error that scientists call the mere-exposure effect: our tendency to believe something is true and good just because we are familiar with it, regardless of whether it actually is.
Since I learned about the mere-exposure effect, I am much more suspicious of any beliefs I have that are frequently repeated by others around me, and go the extra mile to evaluate whether they are true and good for me. This means I can gain agency and intentionally take actions that help me toward my long-term goals.
So what happened next?
After my big realization about medical school and the conversation with my girlfriend, I took some time to think about my actual long-term goals. What did I - not someone else - want to do with my life? What kind of a career did I want to have? Where did I want to go?
I was always passionate about history. In grade school I got in trouble for reading history books under my desk when the teacher talked about math. As a teenager, I stayed up until 3am reading books about World War II. Even when I was on the medical school track in college I double-majored in history and biology, with history my love and joy. However, I never seriously considered going into history professionally. It’s not a field where one can make much money or have great job security.
After considering my options and preferences, I decided that money and security mattered less than a profession that would be genuinely satisfying and meaningful. What’s the point of making a million bucks if I’m miserable doing it, I thought to myself. I chose a long-term goal that I thought would make me happy, as opposed to simply being in line with the expectations of my parents and friends. So I decided to become a history professor.
My decision led to some big challenges with those close to me. My parents were very upset to learn that I no longer wanted to go to medical school. They really tore into me, telling me I would never be well off or have job security. It also wasn’t easy to tell my friends that I had decided to become a history professor instead of a medical doctor. My best friend even jokingly asked if I was willing to trade scores on the standardized medical school exam, since I wasn’t going to use mine. Not to mention how painful it was to accept that I had wasted so much time and effort preparing for medical school only to realize that it was not the right choice for me. I really wish I had realized this earlier, not in my last year of college.
3 steps to prevent this from happening to you:
If you want to avoid finding yourself in a situation like this, here are 3 steps you can take:
1. Stop and think about your life purpose and your long-term goals. Write these down on a piece of paper.
2. Now review your thoughts, and see whether you may be excessively influenced by messages you get from your family, friends, or the media. If so, pay special attention and make sure that these goals are also aligned with what you want for yourself. Answer the following question: if you did not have any of those influences, what would you put down for your own life purpose and long-term goals? Recognize that your life is yours, not theirs, and you should live whatever life you choose for yourself.
3. Review your answers and revise them as needed every 3 months. Avoid being attached to your previous goals. Remember, you change throughout your life, and your goals and preferences change with you. Don’t be afraid to let go of the past, and welcome the current you with arms wide open.
What do you think?
· Do you ever experience pressure to make choices that are not necessarily right for you?
· Have you ever made a big decision, but later realized that it wasn’t in line with your long-term goals?
· Have you ever set aside time to think about your long-term goals? If so, what was your experience?
Personal Notes On Productivity (A categorization of various resources)
For each topic, I’ve curated a few links that I’ve found to be pretty high quality.
- Meta: (Epiphany Addiction, Reversing Advice, Excellence Porn)
- @Learning:
- SuccessfulPeople: (Mastery), (ChoosingTopics: Osci, PG)
- Thinking: (Ikigai, Stoicism, Rationality)
- HabitChange: (!ShootDog)
- Productivity.Principles/Energy/Relaxation: (FullEngagement, ArtOfLearning)
- Productivity.Systems/Hacks: (Autofocus, GTD/ZTD, EatFrog), (Scott Young)
- Depression/Anxiety:
- Social:
- Meditation
Full List: https://workflowy.com/s/zUTEaY0ZcJ
I'd like feedback on:
- What other categories/links would you include (I'm sure there's lots of interesting stuff I'm missing.)? What do you think of the categorization ("Thinking" is a pretty large category.)?
- Whether you think I should make cross-posts about sub-topics here. The main benefit of making more cross posts is that the discussion/comments would be more focused on those topics. In particular, I think that looking at SuccessfulPeople.Startups, SuccessfulPeople.Science, and the Meditation document are the most original parts of this post.
- SuccessfulPeople.Startups contains a categorization of some of Paul Graham's essays (e.g. Having ideas, fund-raising, executing, etc).
- The SuccessfulPeople.Science link contains a separate categorization of advice specifically for scientists (e.g. Picking ideas, the importance of being persistent, the importance of reading widely, etc).
- The meditation document lists a few high quality meditation resources that I've found (and I've read ~10 books on meditation. Most of it is crap. Some of the stuff I list is orders of magnitude better than the median meditation book I've read.).
- Whatever seems salient to you.
PredictIt, a prediction market out of New Zealand, now in beta.
From their website:
PredictIt is an exciting new, real money site that tests your knowledge of political and financial events by letting you make and trade predictions on the future.
Taking part in PredictIt is simple and easy. Pick an event you know something about and see what other traders believe is the likelihood it will happen. Do you think they have it right? Or do you think you have the knowledge to beat the wisdom of the crowd?
The key to success at PredictIt is timing. Make your predictions when most people disagree with you and the price is low. When it turns out that your view may be right, the value of your predictions will rise. You’ll need to choose the best time to sell!
Keep in mind that, although the stakes are limited, PredictIt involves real money so the consequences of being wrong can be painful. Of course, winning can also be extra sweet.
For detailed instructions on participating in PredictIt, see How It Works.
PredictIt is an educational purpose project of Victoria University, Wellington of New Zealand, a not-for-profit university, with support provided by Aristotle International, Inc., a U.S. provider of processing and verification services. Prediction markets, like this one, are attracting a lot of academic and practical interest (see our Research section). So, you get to challenge yourself and also help the experts better understand the wisdom of the crowd.
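The buy-low, sell-high description above is just the payoff structure of a binary prediction-market contract: each contract pays $1 if the event occurs and $0 otherwise. A quick sketch of the arithmetic, with hypothetical numbers (not actual PredictIt prices or fees):

```python
# Expected-profit arithmetic for a binary prediction-market contract that
# pays $1 if the event happens and $0 otherwise. Fees are ignored and all
# numbers are hypothetical.
def expected_profit(price, your_probability, shares=1):
    """Expected profit if you buy `shares` contracts at `price` (dollars per
    contract) and your subjective probability of the event is `your_probability`."""
    cost = price * shares
    expected_payout = your_probability * 1.0 * shares  # each contract pays $1
    return expected_payout - cost

# The crowd prices the event at 30 cents, but you think it's 55% likely:
print(expected_profit(price=0.30, your_probability=0.55, shares=100))  # about $25
```

This is why the price of a contract is usually read as the crowd's probability estimate: trading is profitable in expectation exactly when your probability differs from the market price.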
Rationality promoted by the American Humanist Association
Happy to share that I got to discuss rationality-informed thinking strategies on the American Humanist Association's well-known and popular podcast, the Humanist Hour (here's the link to the interview). Now, this was aimed at secular audiences, so even before the interview the hosts steered me to orient specifically toward what they thought the audience would find valuable. Thus, the interview focused more on secular issues, such as finding meaning and purpose from a science-based perspective. Still, I got to talk about map and territory and other rationality strategies, as well as cognitive biases such as the planning fallacy and sunk costs. So I'd call that a win. I'd appreciate any feedback from you all on how to optimize the way I present rationality-informed strategies in future media appearances.
Making a Rationality-promoting blog post more effective and shareable
I wrote a blog post that popularizes the "false consensus effect" and the debiasing strategy of "imagining the opposite" and "avoiding failing at other minds." Thoughts on where the post works and where it can be improved would be super-helpful for improving our content and my writing style. Especially useful would be feedback on how to make this post more shareable on Facebook and other social media, as we'd like people to be motivated to share these posts with their friends. For example, what would make you more likely to share it? What would make others you know more likely to share it?
For a bit of context, the blog post is part of the efforts of Intentional Insights to promote rational thinking to a broad audience and thus raise the sanity waterline, as described here. The target audience for the blog post is reason-minded youth and young adults who are either not engaged with rationality or are at the beginning stage of becoming aspiring rationalists. Our goal is to get such people interested in exploring rationality more broadly, eventually getting them turned on to more advanced rationality, such as found on Less Wrong itself, in CFAR workshops, etc. The blog post is written in a style aimed to create cognitive ease, with a combination of personal stories and an engaging narrative, along with citations of relevant research and descriptions of strategies to manage one’s mind more effectively. This is part of our broader practice of asking for feedback from fellow Less Wrongers on our content (this post for example). We are eager to hear from you and revise our drafts (and even published content offerings) based on your thoughtful comments, and we did so previously, as you see in the Edit to this post. Any and all suggestions are welcomed, and thanks for taking the time to engage with us and give your feedback – much appreciated!
An alarming fact about the anti-aging community
Past and Present
Ten years ago teenager me was hopeful. And stupid.
The world neglected aging as a disease, and Aubrey had barely started spreading memes - to the point that it was worth it for him to let me work remotely to help with the Methuselah Foundation. They had not even received that initial $1,000,000 donation from an anonymous donor. The Methuselah Prize was running at less than $400,000, if I remember well. Still, I was a believer.
Now we live in the age of Larry Page's Calico, with $100,000,000 trying to tackle the problem, besides many other amazing initiatives: from the research paid for by the Life Extension Foundation and Bill Faloon, to scholars at top universities like Steve Garan and Kenneth Hayworth fixing everything from our models of aging to plastination techniques. Yet I am much more skeptical now.
Individual risk
I am skeptical because I could not find a single individual who already used a simple technique that could certainly save you many years of healthy life. I could not even find a single individual who looked into it and decided it wasn't worth it, or was too pricy, or something of that sort.
That technique is freezing some of your cells now.
Freezing cells is not a far future hope, this is something that already exists, and has been possible for decades. The reason you would want to freeze them, in case you haven't thought of it, is that they are getting older every day, so the ones you have now are the youngest ones you'll ever be able to use.
Using these cells to create new organs is not merely something that may help you in 10 or 30 years if medicine and technology continue progressing according to the law of accelerating returns. We already know how to make organs out of your cells, right now. Some organs live longer, some shorter, but it can be done - it has been done for bladders, for instance - and is being done.
Hope versus Reason
Now, you'd think that if there were an almost non-invasive technique, already shown to work in humans, that can preserve many years of your life and involves only a few trivial inconveniences - compared to changing one's diet or exercising, for instance - the whole longevist/immortalist crowd would be lining up for it and keeping backup tissue samples all over the place.
Well, I've asked them. I've asked some of the adamant researchers, and I've asked the superwealthy; I've asked the cryonicists and supplement gorgers; I've asked those who work on this 8 hours a day, every day, and I've asked those who pay others to do so. I asked mostly for selfish reasons. I saw the TED talks by Juan Enriquez and Anthony Atala and thought: hey look, a clearly beneficial expected life-length increase, yay! Let me call someone who found this out before me - anyone, I'm probably the last one, silly me - and fix this.
I've asked them all, and I have nothing to show for it.
My takeaway lesson is: whatever it is that other people are doing to solve their own impending death, they are far from doing it rationally, and maybe most of the money and psychology involved in this whole business is about buying hope, not about staring into the void and finding out the best ways of dodging it. Maybe people are not in fact going to go all-in if the opportunity comes.
How to fix this?
Let me disclose first that I have no idea how to fix this problem. I don't mean the problem of getting all longevists to freeze their cells, I mean the problem of getting them to take information from the world of science and biomedicine and applying it to themselves. To become users of the technology they are boasters of. To behave rationally in a CFAR or even homo economicus sense.
I was hoping for a grandiose idea in this last paragraph, but it didn't come. I'll go with a quote from this emotional song sung by us during last year's Secular Solstice celebration:
Do you realize? that everyone, you know, someday will die...
And instead of sending all your goodbyes
How to save (a lot of) money on flying
I was going to wait to post this for reasons, but realized that was pretty dumb when the difference of a few weeks could literally save people hundreds, if not thousands of collective dollars.
If you fly regularly (or at all), you may already know about this method of saving money. The method is quite simple: instead of buying a round-trip ticket from the airline or reseller, you hunt down much cheaper one-way flights with layovers at your destination and/or your point of origin. Skiplagged is a service that will do this automatically for you, and has been in the news recently because the creator was sued by United Airlines and Orbitz. While Skiplagged will allow you to click through to purchase the one-way ticket to your destination, they have broken or disabled the functionality of the redirect to the one-way ticket back (possibly in order to raise more funds for their legal defense). However, finding the return flight manually is fairly easy, as they provide all the information to filter for it on other websites (time, airline, etc). I personally have benefited from this - I am flying to Texas from Southern California soon, and instead of a round-trip ticket which would cost me about $450, I spent ~$180 on two one-way tickets (with the return flight being the "layover" at my point of origin). These are, perhaps, larger than usual savings; I think 20-25% is more common, but even then it's a fairly significant amount of money.
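The arithmetic behind the savings claims above can be sketched quickly. The $450 round-trip and ~$180 total for two one-ways come from the post; the split between the two one-way fares is a hypothetical for illustration:

```python
# Hypothetical fares illustrating the hidden-city savings described above.
round_trip = 450.00        # quoted round-trip fare from the post
one_ways = 95.00 + 85.00   # two one-way "hidden city" fares (~$180 total; split is assumed)

savings = round_trip - one_ways
pct = savings / round_trip * 100
print(f"Saved ${savings:.2f} ({pct:.0f}%)")  # → Saved $270.00 (60%)
```

In this case the savings work out to about 60%, well above the 20-25% the post suggests is typical.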
Relevant warnings by gwillen:
Additionally, you should do all of your airline/hotel/etc shopping using whatever private browsing mode your web browser has. This will often let you purchase the exact same product for a cheaper price.
That is all.
LINK: Superrationality and DAOs
The cryptocurrency ethereum is mentioned here occasionally, and I'm not surprised to see an overlap in interests from that sphere. Vitalik Buterin has recently published a blog post discussing some ideas regarding how smart contracts can be used to enforce superrationality in the real world, and which cases those actually are.
Explaining “map and territory” and “fundamental attribution error” to a broad audience
I am working on a blog post that aims to convey the concepts of “map and territory” and the “fundamental attribution error” to a broad audience in an engaging and accessible way. Since many people here focus on these subjects, I think it would be really valuable to get your feedback on what I’ve written.
For a bit of context, the blog post is part of the efforts of Intentional Insights to promote rational thinking to a broad audience and thus raise the sanity waterline, as described here. The target audience for the blog post is reason-minded youth and young adults who are either not engaged with rationality or are at the beginning stage of becoming aspiring rationalists. Our goal is to get such people interested in exploring rationality more broadly, eventually getting them turned on to more advanced rationality, such as found on Less Wrong itself, in CFAR workshops, etc. The blog post is written in a style aimed to create cognitive ease, with a combination of personal stories and an engaging narrative, along with citations of relevant research and descriptions of strategies to manage one’s mind more effectively.
This is part of our broader practice of asking for feedback from fellow Less Wrongers on our content (this post for example). We are eager to hear from you and revise our drafts (and even published content offerings) based on your thoughtful comments, and we did so previously, as you see in the Edit to this post.
Below the line is the draft post itself. After we get your suggestions, we will find an appropriate graphic to illustrate this article and post it on the Intentional Insights website. Any and all suggestions are welcomed, and thanks for taking the time to engage with us and give your feedback – much appreciated!
_______________________________________________________________________________________________________________________
Where Do Our Mental Maps Lead Us Astray?
So imagine you are driving on autopilot, as we all do much of the time. Suddenly the car in front of you cuts you off quite unexpectedly. You slam your brakes and feel scared and indignant. Maybe you flash your lights or honk your horn at the other car. What’s your gut feeling about the other driver? I know my first reaction is that the driver is rude and obnoxious.
Now imagine a different situation. You’re driving on autopilot, minding your own business, and you suddenly realize you need to turn right at the next intersection. You quickly switch lanes and suddenly hear someone behind you honking their horn. You now realize that there was someone in your blind spot and you forgot to check it in the rush to switch lanes. So you cut them off pretty badly. Do you feel that you are a rude driver? The vast majority of us do not. After all, we did not deliberately cut that car off, we just failed to see the driver. Or let’s imagine another situation: say your friend hurt herself and you are rushing her to the emergency room. You are driving aggressively, cutting in front of others. Are you a rude driver? Not generally. You’re merely doing the right thing for the situation.
So why do we give ourselves a pass, while attributing an obnoxious status to others? Why does our gut always make us out to be the good guys, and other people bad guys? Clearly, there is a disconnect between our gut reaction and reality here. It turns out that this pattern is not a coincidence. Basically, our immediate gut reaction attributes the behavior of others to their personality and not to the situation in which the behavior occurs. The scientific name for this type of error in thinking and feeling is the fundamental attribution error, also called the correspondence bias. So if we see someone behaving rudely, we immediately and intuitively feel that this person IS rude. We don't automatically stop to consider whether an unusual situation may cause someone to act this way. With the driver example, maybe the person who cut you off did not see you. Or maybe they were driving their friend to the emergency room. But that's not what our automatic reaction tells us. On the other hand, we attribute our own behavior to the situation, and not our personality. Much of the time we feel like we have valid explanations for our actions.
Learning about the fundamental attribution error helped me quite a bit. I became less judgmental about others. I realized that the people around me were not nearly as bad as my gut feelings immediately and intuitively assumed. This decreased my stress levels, and I gained more peace and calm. Moreover, I became more humble. I realized that my intuitive self-evaluation is excessively positive and that in reality I am not quite the good guy as my gut reaction tells me. Additionally, I realized that those around me who are unaware of this thinking and feeling error, are more judgmental of me than my intuition suggested. So I am striving to be more mindful and thoughtful about the impression I make on others.
The fundamental attribution error is one of many problems in our natural thinking and feeling patterns. It is certainly very helpful to learn about all of these errors, but it’s hard to focus on avoiding all of them in our daily life. A more effective strategy for evaluating reality more intentionally to have more clarity and thus gain greater agency is known as “map and territory.” This strategy involves recognizing the difference between the mental map of the world that we have in our heads and the reality of the actual world as it exists – the territory.
For myself, internalizing this concept has not been easy. It’s been painful to realize that my understanding of the world is by definition never perfect, as my map will never match the territory. At the same time, this realization was strangely freeing. It made me recognize that no one is perfect, and that I do not have to strive for perfection in my view of the world. Instead, what would most benefit me is to try to refine my map to make it more accurate. This more intentional approach made me more willing to admit to myself that though I intuitively and emotionally feel something is right, I may be mistaken. At the same time, the concept of map and territory makes me really optimistic, because it provides a constant opportunity to learn and improve my assessment of the situation.
Now, what are the strategies for most effectively learning this information, and internalizing the behaviors and mental patterns that can help you succeed? Well, educational psychology research illustrates that engaging with this information actively, personalizing it to your life, linking it to your goals, and deciding on a plan and specific next steps you will take are the best practices for this purpose. So take the time to answer the questions below to gain long-lasting benefit from reading this article:
- What do you think of the concept of map and territory?
- How can it be used to address the fundamental attribution error?
- Where can the notion of map and territory help you in your life?
- What challenges might arise in applying this concept, and how can these challenges be addressed?
- What plan can you make and what specific steps can you take to internalize these strategies?
[Link]How to Achieve Impossible Career Goals (My manifesto on instrumental rationality)
Hey guys,
Don't normally post from my blog to here, but the latest massive post on goal achievement in 2015 has a ton that would be relevant to people here.
Some things that I think would be of particular interest to LWers:
- The section called "Map the Path to Your Goal" has some really great stuff on planning that I haven't seen many other places. I know planning gets a bad rap here, but when combined with the "Contingency Plans" method near the bottom of the post, I've found this stuff to be killer for getting results for students.
- At the bottom, there's a section called "Choosing More Habits" that breaks down habits into the only five categories you should ever focus on. If you're planning to systematically take on new habits in 2015, this will help.
- The section called "a proactive mindset" has some fun mental reframes to play around with.
The Limits of My Rationality
As requested here is an introductory abstract.
The search for bias in the linguistic representations of our cognitive processes serves several purposes in this community. By pruning irrational thoughts, we can potentially affect each other in complex ways. Leaning heavily on cognitivist pedagogy, this essay represents my subjective experience trying to reconcile a perceived conflict between the rhetorical goals of the community and the absence of a generative, organic conceptualization of rationality.
The Story
Though I've only been here a short time, I find myself fascinated by this discourse community. To discover a group of individuals bound together under the common goal of applied rationality has been an experience that has enriched my life significantly. So please understand, I do not mean to insult by what I am about to say, merely to encourage a somewhat more constructive approach to what I understand as the goal of this community: to apply collectively reinforced notions of rational thought to all areas of life.
As I followed the links and read the articles on the homepage, I found myself somewhat disturbed by the juxtaposition of these highly specific definitions of biases to the narrative structures of parables providing examples in which a bias results in an incorrect conclusion. At first, I thought that perhaps my emotional reaction stemmed from rejecting the unfamiliar; naturally, I decided to learn more about the situation.
As I read on, my interests drifted from the rhetorical structure of each article (if anyone is interested I might pursue an analysis of rhetoric further though I'm not sure I see a pressing need for this), towards the mystery of how others in the community apply the lessons contained therein. My belief was that the parables would cause most readers to form a negative association of the bias with an undesirable outcome.
Even a quick skim of the discussions taking place on this site will reveal energetic debate on a variety of topics of potential importance, peppered heavily with accusations of bias. At this point, I noticed the comments that seem to get voted up are ones that are thoughtfully composed, well informed, soundly conceptualized and appropriately referential. Generally, this is true of the articles as well, and so it should be in productive discourse communities. Though I thought it prudent not to read every conversation in absolute detail, I also noticed that the most participated-in lines of reasoning were far more rhetorically complex than the parables' portrayal of bias alone could explain. Sure, the establishment of bias still seemed to represent the most commonly used rhetorical device on the forums ...
At this point, I had been following a very interesting discussion on this site about politics. I typically have little or no interest in political theory, but "NRx" vs. "Prog" Assumptions: Locating the Sources of Disagreement Between Neoreactionaries and Progressives (Part 1) seemed so out of place in a community whose political affiliations might best be summarized by the phrase "politics is the mind killer" that I couldn't help but investigate. More specifically, I was trying to figure out why it had been posted here at all (I didn't take issue with either the scholarship or intent of the article, but the latter wasn't obvious to me, perhaps because I was completely unfamiliar with the coinage "neoreactionary").
On my third read, I made a connection to an essay about the socio-historical foundations of rhetoric. In structure, the essay progressed through a wide variety of specific observations on both theory and practice of rhetoric in classical Europe, culminating in a well argued but very unwieldy thesis; at some point in the middle of the essay, I recall a paragraph that begins with the assertion that every statement has political dimensions. I conveyed this idea as eloquently as I could muster, and received a fair bit of karma for it. And to think that it all began with a vague uncomfortable feeling and a desire to understand!
The Lesson
So you are probably wondering what any of this has to do with rationality, cognition, or the promise of some deeply insightful transformative advice mentioned in the first paragraph. Very good.
Cognition, a prerequisite for rationality, is a complex process; cognition can be described as the process by which ideas form, interact and evolve. Notice that this definition alone cannot explain how concepts like rationality form, why ideas form or how they should interact to produce intelligence. That specific shortcoming has long crippled cognitivist pedagogies in many disciplines -- no matter which factors you believe to determine intelligence, it is undeniably true that the process by which it occurs organically is not well-understood.
More intricate models of cognition traditionally vary according to the sets of behavior they seek to explain; in general, this forum seems to concern itself with the wider sets of human behavior, with a strange affinity for statistical analysis. It also seems as if most of the people here associate agency with intelligence, though this should be regarded as unsubstantiated anecdote; I have little interest in what people believe, but those beliefs can have interesting consequences. In general, good models of cognition that yield a sense of agency have to be able to explain how a mushy organic collection of cells might become capable of generating a sense of identity. For this reason, our discussion of cognition will treat intelligence as a confluence of passive processes that lead to an approximation of agency.
Who are we? What is intelligence? To answer these or any natural language questions we first search for stored-solutions to whatever we perceive as the problem, even as we generate our conception of the question as a set of abstract problems from interactions between memories. In the absence of recognizing a pattern that triggers a stored solution, a new solution is generated by processes of association and abstraction. This process may be central to the generation of every rational and irrational thought a human will ever have. I would argue that the phenomenon of agency approximates an answer to the question: "who am I?" and that any discussion of consciousness should at least acknowledge how critical natural language use is to universal agreement on any matter. I will gladly discuss this matter further and in greater detail if asked.
At this point, I feel compelled to mention that my initial motivation for pursuing this line of reasoning stems from the realization that this community discusses rationality in a way that differs somewhat from my past encounters with the word.
Out there, it is commonly believed that rationality develops (in hindsight) to explain the subjective experience of cognition; here we assert a fundamental difference between rationality and this other concept called rationalization. I do not see the utility of this distinction, nor have I found a satisfying explanation of how this distinction operates within accepted models for human learning in such a way that does not assume an a priori method of sorting the values which determine what is considered "rational". Thus we find there is a general dearth of generative models of rational cognition beside a plethora of techniques for spotting irrational or biased methods of thinking.
I see a lot of discussion on the forums very concerned with objective predictions of the future wherein it seems as if rationality (often of a highly probabilistic nature) is, in many cases, expected to bridge the gap between the worlds we can imagine to be possible and our many somewhat subjective realities. And the force keeping these discussions from splintering off into unproductive pissing about is a constant search for bias.
I know I'm not going to be the first among us to suggest that the search for bias is not truly synonymous with rationality, but I would like to clarify before concluding. Searching for bias in cognitive processes can be a very productive way to spend one's waking hours, and it is a critical element to structuring the subjective world of cognition in such a way that allows abstraction to yield the kind of useful rules that comprise rationality. But it is not, at its core, a generative process.
Let us consider the cognitive process of association (when beliefs, memories, stimuli or concepts become connected to form more complex structures). Without that period of extremely associative and biased cognition experienced during early childhood, we might never learn to attribute the perceived cause of a burn to a hot stove. Without concepts like better and worse to shape our young minds, I imagine many of us would simply lack the attention span to learn about ethics. And what about all the biases that make parables an effective way of conveying information? After all, the strength of a rhetorical argument is in its appeal to the interpretive biases of its intended audience and not the relative consistency of the conceptual foundations of that argument.
We need to shift discussions involving bias towards models of cognition more complex than portraying it as simply an obstacle to rationality. In my conception of reality, recognizing the existence of bias seems to play a critical role in the development of more complex methods of abstraction; indeed, biases are an intrinsic side effect of the generative grouping of observations that is the core of Bayesian reasoning.
In short, biases are not generative processes. Discussions of bias are not necessarily useful, rational or intelligent. A deeper understanding of the nature of intelligence requires conceptualizations that embrace the organic truths at the core of sentience; we must be able to describe our concepts of intelligence, our "rationality", such that it can emerge organically as the generative processes at the core of cognition.
The Idea
I'd be interested to hear some thoughts about how we might grow to recognize our own biases as necessary to the formative stages of abstraction alongside learning to collectively search for and eliminate biases from our decision making processes. The human mind is limited and while most discussions in natural language never come close to pressing us to those limits, our limitations can still be relevant to those discussions as well as to discussions of artificial intelligences. The way I see things, a bias free machine possessing a model of our own cognition would either have to have stored solutions for every situation it could encounter or methods of generating stored solutions for all future perceived problems (both of which sound like descriptions of oracles to me, though the latter seems more viable from a programmer's perspective).
A machine capable of making the kinds of decisions considered "easy" for humans might need biases at some point during its journey to the complex and self-consistent methods of decision making associated with rationality. This is a rhetorically complex community, but at the risk of my reach exceeding my grasp, I would be interested in seeing an examination of the Affect Heuristic in human decision making as an allegory for the historic utility of fuzzy values in chess AI.
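To make the chess-AI allegory concrete: classic engines judged positions with "fuzzy" heuristic weights rather than proven truths, much as the affect heuristic attaches rough good/bad valences to options. A minimal sketch, using the conventional material weights (the function and example position are mine, not from the post):

```python
# Toy "fuzzy value" evaluation in the spirit of classic chess engines:
# pieces get rough numeric weights (heuristic, not provably correct),
# and the side with the higher total is judged to be better off.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}  # conventional weights

def material_score(white_pieces, black_pieces):
    """Crude heuristic: sum piece values; positive favors white."""
    return (sum(PIECE_VALUES[p] for p in white_pieces)
            - sum(PIECE_VALUES[p] for p in black_pieces))

# White has rook + pawn against black's lone knight:
print(material_score(["R", "P"], ["N"]))  # → 3 (heuristic says white is ahead)
```

The weights are biased approximations - a knight is not always worth exactly three pawns - yet without them early engines could not have made the "easy" judgments at all, which is the parallel being drawn.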
Thank you for your time, and I look forward to what I can only hope will be challenging and thoughtful responses.
Feedback requested by Intentional Insights on workbook conveying rational thinking about meaning and purpose to a broad audience
We at Intentional Insights would appreciate your help with feedback to optimize a workbook that conveys rational thinking about finding meaning and purpose in life to a broad audience. Last time, we asked for your feedback, and we changed our content offerings based on comments we received from fellow Less Wrongers, as you can see from the Edit to this post. We would be glad to update our beliefs again and revise the workbook based on your feedback.
For a bit of context, the workbook is part of our efforts to promote rational thinking to a broad audience and thus raise the sanity waterline. It’s based on research on how other societies besides the United States helped their citizens find meaning and purpose, such as research I did on the Soviet Union and Zuckerman did on Sweden and Denmark. It’s also based on research on the contemporary United States by psychologists such as Steger, Duffy and Dik, Seligman, and others.
The target audience is reason-minded youth and young adults, especially secular-oriented ones. The goal is to get such people to engage with academic research on how our minds work, and thus get them interested in exploring rational thinking more broadly, eventually getting them turned on to more advanced rationality, such as found on Less Wrong itself. The workbook is written in a style aimed to create cognitive ease, with narratives, personal stories, graphics, and research-based exercises.
Here is the link to the workbook draft itself. Any and all suggestions are welcomed, and thanks for taking the time to engage with this workbook and give your feedback – much appreciated!
Is this dark arts, and if it is, is it justified?
I'd like the opinion of Less Wrongers on the extent to which it is appropriate to use Dark Arts as a means of promoting rationality.
I and other fellow aspiring rationalists in the Columbus, OH Less Wrong meetup have started up a new nonprofit organization, Intentional Insights, and we're trying to optimize ways to convey rational thinking strategies widely and thus raise the sanity waterline. BTW, we also do some original research, as you can see in this Less Wrong article on "Agency and Life Domains," but our primary focus is promoting rational thinking widely, and all of our research is meant to accomplish that goal.
To promote rationality as widely as possible, we decided it's appropriate to speak the language of System 1, and use graphics, narrative, metaphors, and orientation toward pragmatic strategies to communicate about rationality to a broad audience. Some example are our blog posts about gaining agency, about research-based ways to find purpose and meaning, about dual process theory and other blog posts, as well as content such as videos on evaluating reality and on finding meaning and purpose in life.
Our reasoning is that speaking the language of System 1 would help us to reach a broad audience who are currently not much engaged in rationality, but could become engaged if instrumental and epistemic rationality strategies are presented in such a way as to create cognitive ease. We think the ends of promoting rationality justify the means of using such light Dark Arts - although the methods we use do not convey 100% epistemic rationality, we believe the ends of spreading rationality are worthwhile, and that once broad audiences who engage with our content realize the benefits of rationality, they can be oriented to pursue more epistemic accuracy over time. However, some Less Wrongers disagreed with this method of promoting rationality, as you can see in some of the comments on this discussion post introducing the new nonprofit. Some commentators expressed the belief that it is not appropriate to use methods that speak to System 1.
So I wanted to bring up this issue for a broader discussion on Less Wrong, and get a variety of opinions. What are your thoughts about the utility of using light Dark Arts of the type I described above if the goal is to promote rationality - do the ends justify the means? How much Dark Arts, if any, is it appropriate to use to promote rationality?
Edit: After reading the comments, I see that this is not crossing into real Dark Arts territory in the traditional sense after all. I wasn't sure how LessWrong would perceive things, so thanks for your feedback!
Optimizing ways to convey rational thinking strategies to broad audience
What do you think of this post as a way to use graphics, narrative, metaphors, and orientation toward pragmatic strategies to communicate about dual process theory to a broad audience? It's part of the work of our new nonprofit organization, and we're trying to optimize ways to convey rational thinking strategies widely and thus raise the sanity waterline. So advice on how to improve this post, as well as our other posts, with an orientation toward a broad audience, would be helpful. Thanks, all!
Rationalist house
At the Australia online hangout, one of the topics we discussed (before I fell asleep on camera in front of a bunch of people) was writing a rationality TV show as an outreach task. Of course, there being more ways for this to go wrong than right, I figured it's worth mentioning the ideas and getting some comments.
The strategy is to have a set of regular characters whose rationality behaviour seems nuts - effectively sometimes because it is, when taken out of context - and then one "blank" person who tries to join "rationality house" and work things out. My aim was to have each episode straw-man a rationality behaviour and then steelman it, where by the end of the episode it saves the day, makes someone happy, achieves a goal, or some other <generic win-state>.
Here is a list of notes of characters from the hangout or potential topics to talk about.
- No showers. Bacterial showers
- Stopwatches everywhere
- temperature controls everywhere, light controls.
- radical honesty person.
- Soylent only eating person
- born-again atheist
- bayesian person
- Polyphasic sleep cycles.
Why I Am Not a Rationalist, or, why several of my friends warned me that this is a cult
A common question here is how the LW community can grow more rapidly. Another is why seemingly rational people choose not to participate.
I've read all of HPMOR and some of the sequences, attended a couple of meetups, am signed up for cryonics, and post here occasionally. But, that's as far as I go. In this post, I try to clearly explain why I don't participate more and why some of my friends don't participate at all and have warned me not to participate further.
-
Rationality doesn't guarantee correctness. Given some data, rational thinking can get to the facts accurately, i.e. say what "is". But, deciding what to do in the real world requires non-rational value judgments to make any "should" statements. (Or, you could not believe in free will. But most LWers don't live like that.) Additionally, huge errors are possible when reasoning beyond limited data. Many LWers seem to assume that being as rational as possible will solve all their life problems. It usually won't; instead, a better choice is to find more real-world data about outcomes for different life paths, pick a path (quickly, given the time cost of reflecting), and get on with getting things done. When making a trip by car, it's not worth spending 25% of your time planning to shave off 5% of your time driving. In other words, LW tends to conflate rationality and intelligence.
-
In particular, AI risk is overstated. There are a bunch of existential threats (asteroids, nukes, pollution, unknown unknowns, etc.). It's not at all clear if general AI is a significant threat. It's also highly doubtful that the best way to address this threat is writing speculative research papers, because I have found in my work as an engineer that untested theories are usually wrong for unexpected reasons, and it's necessary to build and test prototypes in the real world. My strong suspicion is that the best way to reduce existential risk is to build (non-nanotech) self-replicating robots using existing technology and online ordering of materials, and use the surplus income generated to brute-force research problems, but I don't know enough about manufacturing automation to be sure.
-
LW has a cult-like social structure. The LW meetups (or, the ones I experienced) are very open to new people. Learning the keywords and some of the cached thoughts for the LW community results in a bunch of new friends and activities to do. However, involvement in LW pulls people away from non-LWers. One way this happens is by encouraging contempt for less-rational Normals. I imagine the rationality "training camps" do this to an even greater extent. LW recruiting (hpmor, meetup locations near major universities) appears to target socially awkward intellectuals (incl. me) who are eager for new friends and a "high-status" organization to be part of, and who may not have many existing social ties locally.
-
Many LWers are not very rational. A lot of LW is self-help. Self-help movements typically identify common problems, blame them on (X), and sell a long plan that never quite achieves (~X). For the Rationality movement, the problems (sadness! failure! future extinction!) are blamed on a Lack of Rationality, and the long plan of reading the sequences, attending meetups, etc. never achieves the impossible goal of Rationality (impossible because "is" cannot imply "should"). Rationalists tend to have strong value judgments embedded in their opinions, and they don't realize that these judgments are irrational.
-
LW membership would make me worse off. Though LW membership is an OK choice for many people needing a community (joining a service organization could be an equally good choice), for many others it is less valuable than other activities. I'm struggling to become less socially awkward, more conventionally successful, and more willing to do what I enjoy rather than what I "should" do. LW meetup attendance would work against me in all of these areas. LW members who are conventionally successful (e.g. PhD students at top-10 universities) typically became so before learning about LW, and the LW community may or may not support their continued success (e.g. may encourage them, with only genuine positive intent, to spend a lot of time studying Rationality instead of more specific skills). Ideally, LW/Rationality would help people from average or inferior backgrounds achieve more rapid success than the conventional path of being a good student, going to grad school, and gaining work experience, but LW, though well-intentioned and focused on helping its members, doesn't actually create better outcomes for them.
-
"Art of Rationality" is an oxymoron. Art follows (subjective) aesthetic principles; rationality follows (objective) evidence.
I desperately want to know the truth, and especially want to beat aging so I can live long enough to find out what is really going on. HPMOR is outstanding (because I don't mind Harry's narcissism) and LW is fun to read, but that's as far as I want to get involved. Unless, that is, there's someone here who has experience programming vision-guided assembly-line robots who is looking for a side project with world-optimization potential.
How do you notice when you are ignorant of necessary alternative hypotheses?
So I just wound up in a debate with someone over on Reddit about the value of conventional academic philosophy. He linked me to a book review, in which both the review and the book are absolutely godawful. That is, the author (and the reviewer following him) starts with ontological monism (the universe only contains a single kind of Stuff: mass-energy), adds in the experience of consciousness, reasons deftly that emergence is a load of crap... and then arrives at the conclusion of panpsychism.
WAIT HOLD ON, DON'T FLAME YET!
Of course panpsychism is bunk. I would be embarrassed to be caught upholding it, given the evidence I currently have, but what I want to talk about is the logic being followed.
1) The universe is a unified, consistent whole. Good!
2) The universe contains the experience/existence of consciousness. Easily observable.
3) If consciousness exists, something in the universe must cause or give rise to consciousness. Good reasoning!
4) "Emergence" is a non-explanation, so that can't be it. Good!
5) Therefore, whatever stuff the unified universe is made of must be giving rise to consciousness in a nonemergent way.
6) Therefore, the stuff must be innately "mindy".
What went wrong in steps (5) and (6)? The man was actually reasoning more-or-less correctly! Given the universe he lived in, and the impossibility of emergence, he reallocated his probability mass to the remaining answer. When he had eliminated the impossible, whatever remained, however low its prior, must be true.
The problem was, he eliminated the impossible, but left open a huge vast space of possible hypotheses that he didn't know about (but which we do): the most common of these is the computational theory of mind and consciousness, which says that we are made of cognitive algorithms. A Solomonoff Inducer can just go on to the next length of bit-strings describing Turing machines, but we can't.
Now, I can spot the flaw in the reasoning here. What frightens me is: what if I'm presented with some similar argument, and I can't spot the flaw? What if, instead, I just neatly and stupidly reallocate my belief to what seems to me to be the only available alternative, while failing to go out and look for alternatives I don't already know about? Notably, conservation of expected evidence seems to apply here: if I expect to locate new hypotheses later, I should be reducing my certainty in all currently available hypotheses now, reserving some probability mass to divide among the possibilities I haven't found yet.
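The bookkeeping in that last point can be made concrete. Here's a minimal sketch (my own toy model, not anything from the post, with made-up numbers): keep an explicit catch-all "unknown" hypothesis, so that eliminating every enumerated alternative doesn't force all the probability mass onto the lone survivor, and let newly discovered hypotheses draw their prior from the catch-all rather than from the hypotheses you already had.

```python
# Toy model: a belief distribution with an explicit catch-all bucket for
# hypotheses you haven't located yet. All numbers are illustrative.

def normalize(dist):
    total = sum(dist.values())
    return {h: p / total for h, p in dist.items()}

def eliminate(dist, hypothesis):
    """Rule out one enumerated hypothesis and renormalize the rest."""
    dist = {h: p for h, p in dist.items() if h != hypothesis}
    return normalize(dist)

def discover(dist, name, prior):
    """A newly located hypothesis takes its mass from the catch-all."""
    dist = dict(dist)
    assert dist["unknown"] >= prior, "catch-all too small for this discovery"
    dist["unknown"] -= prior
    dist[name] = prior
    return dist

beliefs = normalize({"emergence": 0.45, "panpsychism": 0.05, "unknown": 0.50})
beliefs = eliminate(beliefs, "emergence")  # "emergence" judged impossible
# Panpsychism does NOT jump to ~1: the catch-all absorbed most of the mass.
beliefs = discover(beliefs, "computationalism", prior=0.40)
```

The design choice doing the work is that "whatever remains" after elimination includes the unknown bucket, so a low-prior survivor like panpsychism stays low until the enumerated space is actually exhausted.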
If you can notice when you're confused, how do you notice when you're ignorant?
[LINK] How Do Top Students Study?
I found this Quora discussion very informative.
2. Develop the ability to become an active reader. Don't just passively read material you are given. But pose questions, develop hypotheses and actively test them as you read through the material. I think this is what another poster referred to when he advised that you should develop a "mental model" of whatever concept they are teaching you. Having a mental model will give you the intuition and ability to answer a wider range of questions than would be otherwise possible if you lacked such a mental model.

Where do you get this model? You creatively develop one as you are reading to try to explain the facts as they are presented to you. Sometimes you have to guess the model based on scarce evidence. Sometimes it is handed to you. If your model is a good one it should at least be able to explain what you are reading.
Having a model also tells you what to look for to disprove it -- so you can be hypersensitive for this disconfirming evidence. In fact, while you are reading you should be making predictions (in the form of one or more scenarios of where the narrative could lead) and carefully checking if the narrative is going there. You should also be making predictions and seeking contradictions to these predictions -- so you can quickly find out if your model is wrong.
Sometimes you may have two or more different models that can explain the evidence, so your task will be to quickly formulate questions that can prove one model while disconfirming the others. I suggest focusing on raising questions that could confirm/disprove the most likely one while disproving the others (think: differential diagnoses in medicine).
But once you have such a model that (i) explains the evidence and (ii) passes all the disconfirming tests you can throw at it then you have something you can interpolate and extrapolate from to answer far more than was initially explained to you.
Such models also make retention easier because you only need to remember the model as opposed to the endless array of facts it explains. Of course, your model could be wrong, but that is why you actively test it as you are reading, and adjust it as necessary. Think of this process as the scientific method being applied by you, to try to discover the truth as best you can.
Sometimes you will still be left with contradictions. I often found speaking to the professor after class was an efficient way of resolving them.
The author lists 8 other criteria, but this one had the biggest "light bulb" moment for me.
It was interesting to me because I intuitively would use this technique while listening/taking notes during lectures. But I never actually made a conscious decision to apply this consistently in all of my classes; it would only happen in classes I was interested in.
Does this seem to you like evidence for the existence of psychic abilities in humans?
I was recently reminded of something I have encountered that seems to me to be good evidence for paranormal phenomena. Can anyone help me figure out what might be going on?
When I was a little younger, I used to play the online riddle game Notpron. In this game, the player (essentially) has to analyze a webpage for clues towards the URL to the next webpage, and then repeat for 140 stages. The creator of this game, DavidM, at some point became a huge new age conspiracy theory loony type. Three years after the original ending of the riddle went online, he revised it to include an additional final level: Level Nu. This level is very different than the ones preceding it. I can't link to the page for obvious reasons, but I will transcribe it here:
835 492 147 264
Remote view the photography this number represents!
Email me all your results to david@david-m.org. I'll get you some feedback. Get me all elements or impressions that seem really strong for you. Or send me your sketches if you like.
Don't bruteforce, or you'll be banned from this one. You have as many attempts as you like, take your time.
Yes, I mean it. No tricks here, just pure remote viewing. The number represents a picture, I want to know what's on there.
So learn some remote viewing technique you like best and go ahead. The internet has lots of information. Have fun!
Please do this ALL by yourself, not even with your very very close friends. Because its boring and stupid, and because you can put bullshit into each others head, which is hard to get rid of again, because the mind needs to be shut down for this to work properly. So do it alone, just talk to me about it, please.
(Yes, this really works, one friend got the content of the picture on first try...and yes, he only got the number from me.)
- 31 people have successfully completed this level.
- Before this level went up, around 200 people had successfully completed the game (iirc). Given that Notpron has declined in popularity since Level Nu was created in 2008, I would estimate that around 300 people in total are in a position to attempt Level Nu, although it could be more. However, I would imagine that many people 1) probably did not come back once they had already finished, 2) were too intimidated by remote viewing and the trivial inconvenience of having an email discussion with DavidM, 3) did not even bother due to disbelief in remote viewing.
- The first person who solved it did so by dreaming about the answer. She dreamt night after night that a German man (DavidM is German) was aggressively trying to sell her a boat. The solution picture was of a boat. One of the very first posts on the thread was her talking about her dream and saying "I think this has something to do with Notpron, but I don't know what". DavidM had to immediately remove the post so as not to give away the answer.
- The second person solved it on their first try with just one word (presumably "boat").
- Someone who solved it said "What I got was literally a much sharper much detailed version of a badly scribbled picture in my mind". This person apparently also got "the one right word that you need to solve it" (boat).
- Someone on the forum writes: "Mailed my visions. I swear it was first thing i saw in my head. But no doubts i was wrong =)". Immediately after, DavidM replies saying that he figured it out.
- "The last 3 or 4 people solved the thing at the first attempt. Some little inaccuracies everytime, but the main 2 objects were always named first."
- "i didn't have any "visions". just was reading my university-stuff, when snowman "forced" me to write david. i thought it could be funny though and wrote the first shit of which i was thinking at that second. didn't even look at the numbers or anything."
- Someone's first idea that he sent was what David planned as the future solution. It seems like what he said was "rainbow colors" for a picture of an assortment of fruit. David told him to look at the current solution instead, and again, his first idea was correct.
- Same guy: "weird thing is. i got the "future solution" picture in my head right away. without even trying. then i just send it in.and when david asked me to get the current one. my gf came to me with my son in her arms saying i had to take him and i just: "Hold on, i just need to get a picture in my head". and while she was standing there with my son crying next to me. i got a pic up in my head immidietly, but that didnt feel right so i pushed it away and got another on right away and mailed it in. and it was the right one. hehe. :) and especially the second pic, i saw very clearly. even colours."
- Post where he reveals the original answer: "Most people just said right away, "it's a boat" or "boat/raft on a lake/sea/river". Or one said "going fishing", which was vague, but I let it count. What I got a lot as well was the skyline and water. 2 guys have been listening to a song called "I'm on a boat" while solving the riddle, and I watched the video clip. One scene in it looks just like the solution. Crazy."
- Post where he reveals the second answer: He says several times that he believes that this one was harder than the first. "Almost [all? sic] saw round things. Some interpreted it as ball(sport), circles, pom poms, the sun or the moon etc. So I'm glad this round-element was so dominant. CTRL saw rainbow colours right away. At least something. Kasper then pretty much nailed it in his this attempt: I saw two things O.o i saw an animal and fruit/vegetables maybe animals eating fruit/vegatables." It seems like only two people solved it during this time, although there may be more.
- Finally, someone who doesn't believe: "(This is Jooly, who used to be a mod here and one of the first solvers of the fair levels, and whose account has been mysteriously deactivated since she started discussing DavidM's increasingly wacky ideas a while ago) I spoke with one of the level Nu solvers, who explained to me exactly how it was solved. Remote viewing had nothing to do with it. Duping a very very gullible (desperately wanting to believe?) DavidM was all it took, and it was very easy too. I won't bother, having solved the real notpron levels. But for those of you who must have the new certificate, don't worry. It doesn't take any magic powers or much effort to do so." (David denies that he deactivated Jooly's account and says Jooly is free to disagree with him.)
- I personally talked to the skeptic in question on IRC back in the day. I can't recall the conversation too well, but he refused to give any concrete details on how he solved it exactly. I asked him "Was it something like, for example, you say 'Is it blue?', David says 'no', you say 'Is it red?', David says 'no', you say 'Is it big?', David says 'no', you say 'it's an apple', David says you figured it out?". He said it was something close to that. Note that as far as I can tell, everyone else who solved it either believes in remote viewing or remains agnostic.
- On how someone solved the level: "Yeah, she asked a friend about the number. He said the correct answer, and there you go."
- The third answer is revealed. There's too much stuff here to copy and paste, but he reveals a bunch of successful attempts, some of which are pretty uncanny. The most interesting part is: "Kimmo, who was not considered to have solved it said: 'It is something that is approaching me, not sure what it is. It is that kind of situation where you need to react to and not stay there just looking what it is.' (Now I don't really see why I didn't let him pass; if you're reading this, contact me!)"
- After around twenty-something solves, DavidM maintains that most people guessed it on their first try.
- "Most people" apparently guessed it on their first try.
- According to David, about half the people who tried it have solved it.
- The dream thing - absolutely insane, hard to imagine that it's a coincidence.
- David did not consider the guy who guessed the shark as "something approaching me, it is a situation that I need to react to" to have solved the level. This shows that he requires fairly high standards of accuracy.
- David implies that in order to have guessed the boat, you need to say the word "boat", also implying high standards.
- David did not really give me very much help or "lead" me anywhere when I tried to solve it.
- One person who solved it says that he did not solve it using remote viewing.
- It didn't work for me at all.
- David might very well be exaggerating both the percentage of people who successfully solved it and the percentage of people who guessed it on their first try.
- David might be (and in fact probably is) only reporting the "best" answers in his forum posts. For the fruit and the shark, he seems to be posting about half of the people who solved it in that time period. For the boat, he doesn't really give specifics, and instead says "Most people just said it was a boat on their first guess."
- Maybe DavidM is in fact "leading" people to the answer through a series of multiple guesses. For this to be true, however, a few things would have to be the case. First of all, his assertion that most people guessed it on their first try would have to be greatly exaggerated. Let's imagine that David is outright lying about most people guessing it on their first try and that half the people who attempted the riddle solved it. However, at least six people (I don't feel like going back through all 29 pages and counting) posted on the forum that they solved it on their first try. Let's imagine that all 300 people who reached the level attempted it. This is still a 1/50 "first guess" rate, and that's out of all the photographs in the world. However, maybe by some conjunction of 1) exaggerating those two numbers, 2) his dialogue with me being atypical, 3) the answers he posted on the forum being atypical, 4) his refusal to accept "something approaching me" being atypical and 5) the dream being a total coincidence, it may be true that he actually is doing a form of "leading" and is covering it up well. This feels like a really unsatisfactory answer. It relies on a lot of conjunctions and it seems clear that the only way to arrive at it is by a thorough search for some sort of answer that fits nicely in with our pre-existing worldview. That being said, I suspect it might be the most likely answer.
- Perhaps the level is an elaborate joke. In reality there is some other more conventional means of arriving at a solution, and people who solve it are told to play along. I can sort of see this being the case, given that 1) there are some other levels of Notpron that have "prankster-ish" elements and 2) I have actually myself been a part of a very similar joke on an even bigger scale, so I know that it can happen. However, on the other hand, DavidM really strongly believes in the conspiracy theory new age stuff and vigorously promotes it, so it seems unlikely that he would sabotage his own ideology like that. Also, while there are other prankster-ish levels of Notpron, nothing comes close to being as clever or elaborate as this scenario would be.
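For what it's worth, the back-of-the-envelope arithmetic in the "leading" scenario above is easy to check. Here is a quick sketch (all numbers hypothetical, including the per-guess chance p, which the analysis rightly notes is hard to pin down "out of all the photographs in the world"): under a pure-luck model where each of n independent first guesses hits with probability p, the chance of seeing at least k first-try solves is a binomial tail.

```python
# Toy luck model for the first-guess numbers above: if each of n independent
# first guesses matched the photo by chance with probability p, how likely
# are k or more first-try solves? (n, k, and p are all illustrative.)
from math import comb

def p_at_least(k: int, n: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# The post's figures: roughly 300 people in a position to attempt it, and at
# least 6 reported first-try solves. At p = 1/50 the observed count is about
# what luck alone predicts; at a much smaller p it becomes very surprising.
print(p_at_least(6, 300, 1 / 50))
print(p_at_least(6, 300, 1 / 500))
```

The sketch only shows how sharply the conclusion depends on the assumed per-guess hit rate, which is exactly the quantity nobody in the thread can estimate well.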
Should I take an academic class on rationality?
This would count toward my major, and if I weren't going to take it, the likely replacement would be a course in experimental/"folk" philosophy. But I'd also like to hear your thoughts on the virtues of academic rationality courses in general.
(The main counterargument, I'd imagine, is that the Sequences cover most of the same material in a more fluid and comprehensible fashion.)
Here is the syllabus: http://www.yale.edu/darwall/PHIL+333+Syllabus.pdf
Other information: I sampled one lecture for the course last year. It was a noncommittal discussion of Newcomb's problem, which I found somewhat interesting despite having read most of the LW material on the subject.
When I asked what Omega would do if we activated a random number generator with a 50.01% chance of one-boxing us, the professors didn't dismiss the question as irrelevant, but they also didn't offer any particular answer.
I help run a rationality meetup at Yale, and this seems like a good place to meet interested students. On the other hand, I could just as easily leave flyers around before the class begins.
Related question: Could someone quickly sum up what might be meant by the "feminist critique" of rationality, as would be discussed in the course? I've read a few abstracts, but I'm still not sure I know the most important points of these critiques.
Skills and Antiskills
One useful little concept that a friend and I have is that of the antiskill. Like a normal skill, an antiskill gives you both the ability and the affordance to do things that you wouldn't otherwise be able to do. The difference between a skill and an antiskill is that a skill gives you the ability and affordance to do things that are positive on net, while an antiskill gives you the ability and affordance to do things that are negative on net.
For instance, my friend believes that dancing is often an antiskill, because it gives you an affordance to dance rather than have interesting conversations while at parties, and he considers having interesting conversations to be much more valuable than dancing-- therefore, knowing how to dance serves primarily to enable choices that are bad on net.
I disagree with the specific point in this case, but I nevertheless think it's a good example because it illustrates another key principle of skills and antiskills-- whether something is a skill or an antiskill is context-dependent. If dancing will largely prevent you from having interesting conversations, it may well be an antiskill-- but if you go to a lot of nightclubs where loud music makes conversation difficult, knowing how to dance seems very useful indeed!
Another example is the skill of knowing how to fix computers. In many respects this is very useful, and can indeed lead to a profitable career in IT. But-- as I'm sure many of you may have experienced-- having your friends and family know that you know how to fix computers can be very negative on net!
Overall, I find the skill/antiskill framework quite useful when it comes to navigating what sorts of skills, abilities, and knowledge I should acquire. Before choosing my next priority, I often pause to think:
- What affordances will learning this give me?
- In what contexts will those affordances be most relevant?
- Will this be positive or negative on net?
Using this framework has enabled me to discern strengths and weaknesses that I had previously not considered, and in some cases those strengths and weaknesses have proven decisive to my planning.
The Problem of "Win-More"
In Magic: the Gathering and other popular card games, advanced players have developed the notion of a "win-more" card. A "win-more" card is one that works very well, but only if you're already winning. In other words, it never helps turn a loss into a win, but it is very good at turning a win into a blowout. This type of card seems strong at first, but since these games usually do not use margin of victory scoring in tournaments, they end up being a trap-- instead of using cards that convert wins into blowouts, you want to use cards that convert losses into wins.
This concept is useful and important and you should never tell a new player about it, because it tends to make them worse at the game. Without a more experienced player's understanding of core concepts, it's easy to make mistakes and label cards that are actually good as being win-more.
This is an especially dangerous mistake to make because it's relatively uncommon for an outright bad card to seem like a win-more card; win-more cards are almost always cards that look really good at first. That means that if you end up being too wary of win-more cards, you're going to end up misclassifying good cards as bad, and that's an extremely dangerous mistake to make. Misclassifying bad cards as good is relatively easy to deal with, because you'll use them and see that they aren't good; misclassifying good cards as bad is much more dangerous, because you won't play them and therefore won't get the evidence you need to update your position.
I call this the "win-more problem." Concepts that suffer from the win-more problem are those that-- while certainly useful to an advanced user-- are misleading or net harmful to a less skillful person. Further, they are wrong or harmful in ways that are difficult to detect, because they screen off feedback loops that would otherwise allow someone to realize the mistake.
[LINK] Joseph Bottum on Politics as the Mindkiller
One of my favourite Less Wrong articles is Politics is the mindkiller. Part of the reason that political discussion is so bad is the poor incentives - if you have little chance to change the outcome, then there is little reason to strive for truth or accuracy - but a large part of the reason is our pre-political attitudes and dispositions. I don't mean to suggest that there is a neat divide; clearly, there is a reflexive relation between the incentives within political discussion and our view of the appropriate purpose and scope of politics. Nevertheless, I think it's a useful distinction to make, and so I applaud the fact that Eliezer doesn't start his essays on the subject by talking about incentives, feedback or rational irrationality - instead he starts with the fact that our approach to politics is instinctively tribal.
This brings me to Joseph Bottum's excellent recent article in The American, The Post-Protestant Ethic and Spirit of America. This charts what he sees as the tribal changes within America that have shaped current attitudes to politics. I think it's best seen in conjunction with Arnold Kling's excellent The Three Languages of Politics; while Kling talks about the political language and rhetoric of modern American political groupings, Bottum's essay is more about the social changes that have led to these kinds of language and rhetoric.
We live in what can only be called a spiritual age, swayed by its metaphysical fears and hungers, when we imagine that our ordinary political opponents are not merely mistaken, but actually evil. When we assume that past ages, and the people who lived in them, are defined by the systematic crimes of history. When we suppose that some vast ethical miasma, racism, radicalism, cultural self-hatred, selfish blindness, determines the beliefs of classes other than our own. When we can make no rhetorical distinction between absolute wickedness and the people with whom we disagree. The Republican Congress is the Taliban. President Obama is a Communist. Wisconsin’s governor is a Nazi.
...
The real question, of course, is how and why this happened. How and why politics became a mode of spiritual redemption for nearly everyone in America, but especially for the college-educated upper-middle class, who are probably best understood not as the elite, but as the elect, people who know themselves as good, as relieved of their spiritual anxieties by their attitudes toward social problems.
Video of a related lecture can also be found here.
[LINK] Reinventing Explanation: Data Presentation as Intuition Pump
A great article by Michael Nielsen on failures of intuition and ways to present data more effectively so that we don't get caught by those failures. It reminded me of concepts like log odds in common use around here, and also to the recent discussion of teaching rationality techniques to average people.
Rationality & Low-IQ People
This post is to raise a question about the demographics of rationality: Is rationality something that can appeal to low-IQ people as well?
I don't mean in theory, I mean in practice. From what I've seen, people who are concerned about rationality (in the sense that it has on LW, OvercomingBias, etc.) are overwhelmingly high-IQ.
Meanwhile, HPMOR and other stories in the "rationality genre" appeal to me, and to other people I know. However I wonder: Perhaps part of the reason they appeal to me is that I think of myself as a smart person, and this allows me to identify with the main characters, cheer when they think their way to victory, etc. If I thought of myself as a stupid person, then perhaps I would feel uncomfortable, insecure, and alienated while reading the same stories.
So, I have four questions:
1.) Do we have reason to believe that the kind of rationality promoted on LW, OvercomingBias, CFAR, etc. appeals to a fairly normal distribution of people around the IQ mean? Or should we think, as I suggested, that people with lower IQs are disposed to find the idea of being rational less attractive?
2.) Ditto, except replace "being rational" with "celebrating rationality through stories like HPMOR." Perhaps people think that rationality is a good thing in much the same way that being wealthy is a good thing, but they don't think that it should be celebrated, or at least they don't find such celebrations appealing.
3.) Supposing #1 and #2 have the answers I am suggesting, why?
4.) Making the same supposition, what are the implications for the movement in general?
Note: I chose to use IQ in this post instead of a more vague term like "intelligence," but I could easily have done the opposite. I'm happy to do whichever version is less problematic.
On Straw Vulcan Rationality
There's a core meme of rationalism that I think is fundamentally off-base. It's been bothering me for a long time — over a year now. It hasn't been easy for me, living this double life, pretending to be OK with propagating an instrumentally expedient idea that I know has no epistemic grounding. So I need to get this off my chest now: Our established terminology is not consistent with an evidence-based view of the Star Trek canon.
According to TVtropes, a straw Vulcan is a character used to show that emotion is better than logic. I think a lot of people take "straw Vulcan rationality" to mean something like, "Being rational does not mean being like Vulcans from Star Trek."
This is not fair to Vulcans from Star Trek.
Central to the character of Spock — and something that it's easy to miss if you haven't seen every single episode and/or read a fair amount of fan fiction — is that he's being a Vulcan all wrong. He's half human, you see, and he's really insecure about that, because all the other kids made fun of him for it when he was growing up on Vulcan. He's spent most of his life resenting his human half, trying to prove to everyone (especially his father) that he's Vulcaner Than Thou. When the Vulcan Science Academy worried that his human mother might be an obstacle, it was the last straw for Spock. He jumped ship and joined Starfleet. Against his father's wishes.
Spock is a mess of poorly handled emotional turmoil. It makes him cold and volatile.
Real Vulcans aren't like that. They have stronger and more violent emotions than humans, so they've learned to master them out of necessity. Before the Vulcan Reformation, they were a collection of warring tribes who nearly tore their planet apart. Now, Vulcans understand emotions and are no longer at their mercy. Not when they apply their craft successfully, anyway. In the words of the prophet Surak, who created these cognitive disciplines with the purpose of saving Vulcan from certain doom, "To gain mastery over the emotions, one must first embrace the many Guises of the Mind."
Successful application of Vulcan philosophy looks positively CFARian.
There is a ritual called "kolinahr" whose purpose is to completely rid oneself of emotion, but it was not developed by Surak, nor, to my knowledge, was it endorsed by him. It's an extreme religious practice, and I think the wisest Vulcans would consider it misguided1. Spock attempted kolinahr when he believed Kirk had died, which I take to be a great departure from cthia (the Vulcan Way) — not because he ultimately failed to complete the ritual2, but because he tried to smash his problems with a hammer rather than applying his training to sort things out skillfully. If there ever were such a thing as a right time for kolinahr, that would not have been it.
So Spock is both a straw Vulcan and a straw man of Vulcans. Steel Vulcans are extremely powerful rationalists. Basically, Surak is what happens when science fiction authors try to invent Eliezer Yudkowsky without having met him.
1) I admit that I notice I'm a little confused about this. Sarek, Spock's father and a highly influential diplomat, studied for a time with the Acolytes of Gol, who are the masters of kolinahr. If I've ever known what came of that, I've forgotten. I'm not sure whether that's canon, though.
2) "Sorry to meditate and run, but I've gotta go mind-meld with this giant space crystal thing. ...It's complicated."
[Link] Bet Your Friends to Be More Right
This article does a good job of explaining how betting can be a useful rationality practice. An excerpt:
The interesting thing about this practice was that it made us both think very carefully about the accuracy of all of our statements. The most embarrassing thing ever was to say, "I bet you anything that I'll be on time..." and then be unwilling to back up the assertion with a bet. Failing to bet was an admission that you'd just said something that you had no real confidence in.
You should be EXTREMELY CAREFUL when using this strategy. It is, at a minimum, against airline policy.
If you have any kind of airline status or membership, and you do this too often, they will cancel it. If you try to do this on a round-trip ticket, they will cancel your return. If the airlines have any means of making your life difficult available to them, they WILL use it.
Obviously you also cannot check bags when using this strategy, since they will go to the wrong place (your ostensible, rather than your actual, destination.) This also means that if you have an overhead-sized carryon, and you board late and are forced to check it, your bag will NOT make it to your intended destination; it will go to the final destination marked on your ticket. If you try to argue about this, you run the risk of getting your ticket cancelled altogether, since you're violating airline policies by using a ticket in this way.