You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Group rationality diary, 10/1/12

1 cata 02 October 2012 09:15AM

This is the public group instrumental rationality diary for the week of October 1st. It's a place to record and chat about it if you have done, or are actively doing, things like:

  • Established a useful new habit
  • Obtained new evidence that made you change your mind about some belief
  • Decided to behave in a different way in some set of situations
  • Optimized some part of a common routine or cached behavior
  • Consciously changed your emotions or affect with respect to something
  • Consciously pursued new valuable information about something that could make a big difference in your life
  • Learned something new about your beliefs, behavior, or life that surprised you
  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves.  Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Thanks to everyone who contributes!

Previous diary; archive of prior diaries.

Group rationality diary, 9/17/12

3 cata 19 September 2012 11:08AM

This is the public group instrumental rationality diary for the week of September 17th. It's a place to record and chat about it if you have done, or are actively doing, things like:

  • Established a useful new habit
  • Obtained new evidence that made you change your mind about some belief
  • Decided to behave in a different way in some set of situations
  • Optimized some part of a common routine or cached behavior
  • Consciously changed your emotions or affect with respect to something
  • Consciously pursued new valuable information about something that could make a big difference in your life
  • Learned something new about your beliefs, behavior, or life that surprised you
  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves.  Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Thanks to everyone who contributes!

Previous diary; archive of prior diaries.

(Sorry for being late, I don't even have an excuse at all!  Oh well.)

Group rationality diary, 9/3/12

2 cata 04 September 2012 09:42AM

This is the public group instrumental rationality diary for the week of September 3rd. It's a place to record and chat about it if you have done, or are actively doing, things like:

  • Established a useful new habit
  • Obtained new evidence that made you change your mind about some belief
  • Decided to behave in a different way in some set of situations
  • Optimized some part of a common routine or cached behavior
  • Consciously changed your emotions or affect with respect to something
  • Consciously pursued new valuable information about something that could make a big difference in your life
  • Learned something new about your beliefs, behavior, or life that surprised you
  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves.  Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Thanks to everyone who contributes!

Previous diary; archive of prior diaries.

What Are You Doing for Self-Quantification?

7 hackerkiba 29 August 2012 06:14PM

Right now, I am counting my steps each day and logging them on paper, as close to midnight as possible. Only yesterday did I achieve my 10,000-step goal. I have only been doing it since Sunday; this is my fourth day. Yesterday, I also started logging my weight (on paper) to see whether walking 10,000 steps will help me lose weight. Granted, it's rather manual, but it's also easy to do. A pedometer that syncs to my computer automatically would cost me 99 USD brand new, and adding wireless output to my weighing scale would cost money too. Two conditions could lead me to purchase a more sophisticated solution: being loaded with money, and/or being overwhelmed with data input.

For now, the important thing is that I keep doing it for 30-60 days for habit formation.

What are you doing for self-quantification?
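The paper log described above maps naturally onto a tiny script. A minimal sketch (the filename and the 10,000-step goal are my own choices, not anything the post specifies) might look like:

```python
import csv
from datetime import date
from pathlib import Path

LOG = Path("quantified.csv")  # hypothetical log file

def log_day(steps, weight_kg, goal=10_000):
    """Append today's step count and weight, noting whether the step goal was met."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            # Write the header only the first time the file is created.
            writer.writerow(["date", "steps", "weight_kg", "goal_met"])
        writer.writerow([date.today().isoformat(), steps, weight_kg, steps >= goal])

log_day(10_250, 72.5)
```

The CSV can later be loaded into a spreadsheet or plotted to check whether the step count actually correlates with weight change.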

Group rationality diary, 8/20/12

4 cata 21 August 2012 09:42AM

This is the public group instrumental rationality diary for the week of August 20th. It's a place to record and chat about it if you have done, or are actively doing, things like:

  • Established a useful new habit
  • Obtained new evidence that made you change your mind about some belief
  • Decided to behave in a different way in some set of situations
  • Optimized some part of a common routine or cached behavior
  • Consciously changed your emotions or affect with respect to something
  • Consciously pursued new valuable information about something that could make a big difference in your life
  • Learned something new about your beliefs, behavior, or life that surprised you
  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves.  Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Thanks to everyone who contributes!

Last week's diary; archive of prior diaries.

Group rationality diary, 8/6/12

6 cata 08 August 2012 05:58AM

This is the public group instrumental rationality diary for the week of August 6th. It's a place to record and chat about it if you have done, or are actively doing, things like:

  • Established a useful new habit
  • Obtained new evidence that made you change your mind about some belief
  • Decided to behave in a different way in some set of situations
  • Optimized some part of a common routine or cached behavior
  • Consciously changed your emotions or affect with respect to something
  • Consciously pursued new valuable information about something that could make a big difference in your life
  • Learned something new about your beliefs, behavior, or life that surprised you
  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves.  Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Thanks to everyone who contributes!

Last week's diary; archive of prior diaries.

(Sorry for being late this week -- I'm on vacation and got distracted :-)

Group rationality diary, 7/23/12

4 cata 24 July 2012 08:49AM

This is the public group instrumental rationality diary for the week of July 23rd. It's a place to record and chat about it if you have done, or are actively doing, things like:

  • Established a useful new habit
  • Obtained new evidence that made you change your mind about some belief
  • Decided to behave in a different way in some set of situations
  • Optimized some part of a common routine or cached behavior
  • Consciously changed your emotions or affect with respect to something
  • Consciously pursued new valuable information about something that could make a big difference in your life
  • Learned something new about your beliefs, behavior, or life that surprised you
  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves.  Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Thanks to everyone who contributes!

Last week's diary; archive of prior diaries.

Group rationality diary, 7/9/12

3 cata 10 July 2012 08:35AM

This is the public group instrumental rationality diary for the week of July 9th. It's a place to record and chat about it if you have done, or are actively doing, things like:

  • Established a useful new habit
  • Obtained new evidence that made you change your mind about some belief
  • Decided to behave in a different way in some set of situations
  • Optimized some part of a common routine or cached behavior
  • Consciously changed your emotions or affect with respect to something
  • Consciously pursued new valuable information about something that could make a big difference in your life
  • Learned something new about your beliefs, behavior, or life that surprised you
  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves.  Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Thanks to everyone who contributes!

Academian put up a wiki page with links to the prior May and June threads for reference.  Good idea, thanks!

Group rationality diary, 6/25/12

4 cata 26 June 2012 08:31AM

This is the public group instrumental rationality diary for the week of June 25th. (Based on the rate of participation in prior threads, I thought it might be a good idea to start posting every other week instead of every week.  Only so much new stuff happens in a week.) It's a place to record and chat about it if you have done, or are actively doing, things like:

  • Established a useful new habit
  • Obtained new evidence that made you change your mind about some belief
  • Decided to behave in a different way in some set of situations
  • Optimized some part of a common routine or cached behavior
  • Consciously changed your emotions or affect with respect to something
  • Consciously pursued new valuable information about something that could make a big difference in your life
  • Learned something new about your beliefs, behavior, or life that surprised you
  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves.  Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Thanks to everyone who contributes!

(Previously: 5/14/12, 5/21/12, 5/28/12, 6/4/12, 6/11/12)

Group rationality diary, 6/11/12

2 cata 12 June 2012 06:39AM

This is the public group instrumental rationality diary for the week of June 11th. It's a place to record and chat about it if you have done, or are actively doing, things like:

  • Established a useful new habit
  • Obtained new evidence that made you change your mind about some belief
  • Decided to behave in a different way in some set of situations
  • Optimized some part of a common routine or cached behavior
  • Consciously changed your emotions or affect with respect to something
  • Consciously pursued new valuable information about something that could make a big difference in your life
  • Learned something new about your beliefs, behavior, or life that surprised you
  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves.  Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Thanks to everyone who contributes!

(Previously: 5/14/12, 5/21/12, 5/28/12, 6/4/12)

Group rationality diary, 6/4/12

2 cata 05 June 2012 04:12AM

This is the public group instrumental rationality diary for the week of June 4th. It's a place to record and chat about it if you have done, or are actively doing, things like:

  • Established a useful new habit
  • Obtained new evidence that made you change your mind about some belief
  • Decided to behave in a different way in some set of situations
  • Optimized some part of a common routine or cached behavior
  • Consciously changed your emotions or affect with respect to something
  • Consciously pursued new valuable information about something that could make a big difference in your life
  • Learned something new about your beliefs, behavior, or life that surprised you
  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves.  Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Thanks to everyone who contributes!

(Previously: 5/14/12, 5/21/12, 5/28/12)

Group rationality diary, 5/28/12

3 cata 29 May 2012 04:10AM

This is the public group instrumental rationality diary for the week of May 28th.  It's a place to record and chat about it if you have done, or are actively doing, things like:

  • Established a useful new habit
  • Obtained new evidence that made you change your mind about some belief
  • Decided to behave in a different way in some set of situations
  • Optimized some part of a common routine or cached behavior
  • Consciously changed your emotions or affect with respect to something
  • Consciously pursued new valuable information about something that could make a big difference in your life
  • Learned something new about your beliefs, behavior, or life that surprised you
  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves.  Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Thanks to everyone who contributes!

(Previously: 5/14/12, 5/21/12)

Group rationality diary, 5/21/12

6 cata 22 May 2012 02:21AM

Previously: 5/14/12 (and explanation)

This is the public group instrumental rationality diary for the week of May 21st.  It's a place to record and chat about it if you have done, or are actively doing, things like:

  • Established a useful new habit
  • Obtained new evidence that made you change your mind about some belief
  • Decided to behave in a different way in some set of situations
  • Optimized some part of a common routine or cached behavior
  • Consciously changed your emotions or affect with respect to something
  • Consciously pursued new valuable information about something that could make a big difference in your life
  • Learned something new about your beliefs, behavior, or life that surprised you
  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves.  Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Thanks to everyone who contributes!

Group rationality diary, 5/14/12

20 cata 15 May 2012 03:01AM

Background:  I and many other attendees at the CFAR rationality minicamp last weekend learned a lot about applied rationality, and made big personal lists of things we wanted to try out in our everyday lives.  I think that a regular (weekly or maybe semi-weekly) post where people mention any new interesting habits, decisions, and actions they have taken recently would be cool as a supplement to this; it ought to be rewarding for people to be able to write a list of the cool things they did, and I expect it'll also be interesting for other people to peek in and see the sorts of goals and self-modifications people are working on.  Others at minicamp seemed enthusiastic about the idea, so I hope it takes off.  Feel free to meta-discuss whether this is a good idea or if it can be done better.

Addendum 5/15: By the way, non-minicamp people should feel free to post too!  I am highly certain that minicamp attendees are not the only ones working on interesting things in their lives.

This is the public group instrumental rationality diary for the week of May 14th.  It's a place to record and chat about it if you have done, or are actively doing, things like:

  • Established a useful new habit
  • Obtained new evidence that made you change your mind about some belief
  • Decided to behave in a different way in some set of situations
  • Optimized some part of a common routine or cached behavior
  • Consciously changed your emotions or affect with respect to something
  • Consciously pursued new valuable information about something that could make a big difference in your life
  • Learned something new about your beliefs, behavior, or life that surprised you
  • Tried doing any of the above and failed

Or anything else interesting which you want to share, so that other people can think about it, and perhaps be inspired to take action themselves.  Try to include enough details so that everyone can use each other's experiences to learn about what tends to work out, and what doesn't tend to work out.

Discussion's likely to continue gradually through the week, so try to remember to check back now and then!

[link] How habits control our behavior, and how to modify them

25 Kaj_Sotala 21 February 2012 07:23AM

The New York Times just recently ran an article titled "How Companies Learn Your Secrets", which was partially discussing data mining and partially discussing habits. I thought the bits on habits seemed to offer many valuable insights on how to improve our behavior, excerpts:

The process within our brains that creates habits is a three-step loop. First, there is a cue, a trigger that tells your brain to go into automatic mode and which habit to use. Then there is the routine, which can be physical or mental or emotional. Finally, there is a reward, which helps your brain figure out if this particular loop is worth remembering for the future. Over time, this loop — cue, routine, reward; cue, routine, reward — becomes more and more automatic. The cue and reward become neurologically intertwined until a sense of craving emerges. What’s unique about cues and rewards, however, is how subtle they can be. Neurological studies like the ones in Graybiel’s lab have revealed that some cues span just milliseconds. And rewards can range from the obvious (like the sugar rush that a morning doughnut habit provides) to the infinitesimal (like the barely noticeable — but measurable — sense of relief the brain experiences after successfully navigating the driveway). Most cues and rewards, in fact, happen so quickly and are so slight that we are hardly aware of them at all. But our neural systems notice and use them to build automatic behaviors.

Habits aren’t destiny — they can be ignored, changed or replaced. But it’s also true that once the loop is established and a habit emerges, your brain stops fully participating in decision-making. So unless you deliberately fight a habit — unless you find new cues and rewards — the old pattern will unfold automatically. [...]

Luckily, simply understanding how habits work makes them easier to control. Take, for instance, a series of studies conducted a few years ago at Columbia University and the University of Alberta. Researchers wanted to understand how exercise habits emerge. In one project, 256 members of a health-insurance plan were invited to classes stressing the importance of exercise. Half the participants received an extra lesson on the theories of habit formation (the structure of the habit loop) and were asked to identify cues and rewards that might help them develop exercise routines.

The results were dramatic. Over the next four months, those participants who deliberately identified cues and rewards spent twice as much time exercising as their peers. Other studies have yielded similar results. According to another recent paper, if you want to start running in the morning, it’s essential that you choose a simple cue (like always putting on your sneakers before breakfast or leaving your running clothes next to your bed) and a clear reward (like a midday treat or even the sense of accomplishment that comes from ritually recording your miles in a log book). After a while, your brain will start anticipating that reward — craving the treat or the feeling of accomplishment — and there will be a measurable neurological impulse to lace up your jogging shoes each morning.

Our relationship to e-mail operates on the same principle. When a computer chimes or a smartphone vibrates with a new message, the brain starts anticipating the neurological “pleasure” (even if we don’t recognize it as such) that clicking on the e-mail and reading it provides. That expectation, if unsatisfied, can build until you find yourself moved to distraction by the thought of an e-mail sitting there unread — even if you know, rationally, it’s most likely not important. On the other hand, once you remove the cue by disabling the buzzing of your phone or the chiming of your computer, the craving is never triggered, and you’ll find, over time, that you’re able to work productively for long stretches without checking your in-box. [...]

When they got back to P.& G.’s headquarters, the researchers watched their videotapes again. Now they knew what to look for and saw their mistake in scene after scene. Cleaning has its own habit loops that already exist. In one video, when a woman walked into a dirty room (cue), she started sweeping and picking up toys (routine), then she examined the room and smiled when she was done (reward). In another, a woman scowled at her unmade bed (cue), proceeded to straighten the blankets and comforter (routine) and then sighed as she ran her hands over the freshly plumped pillows (reward). P.& G. had been trying to create a whole new habit with Febreze, but what they really needed to do was piggyback on habit loops that were already in place. The marketers needed to position Febreze as something that came at the end of the cleaning ritual, the reward, rather than as a whole new cleaning routine.

The company printed new ads showing open windows and gusts of fresh air. More perfume was added to the Febreze formula, so that instead of merely neutralizing odors, the spray had its own distinct scent. Television commercials were filmed of women, having finished their cleaning routine, using Febreze to spritz freshly made beds and just-laundered clothing. Each ad was designed to appeal to the habit loop: when you see a freshly cleaned room (cue), pull out Febreze (routine) and enjoy a smell that says you’ve done a great job (reward). When you finish making a bed (cue), spritz Febreze (routine) and breathe a sweet, contented sigh (reward). Febreze, the ads implied, was a pleasant treat, not a reminder that your home stinks.

And so Febreze, a product originally conceived as a revolutionary way to destroy odors, became an air freshener used once things are already clean. The Febreze revamp occurred in the summer of 1998. Within two months, sales doubled. A year later, the product brought in $230 million. Since then Febreze has spawned dozens of spinoffs — air fresheners, candles and laundry detergents — that now account for sales of more than $1 billion a year. Eventually, P.& G. began mentioning to customers that, in addition to smelling sweet, Febreze can actually kill bad odors. Today it’s one of the top-selling products in the world. [...]

But when some customers were going through a major life event, like graduating from college or getting a new job or moving to a new town, their shopping habits became flexible in ways that were both predictable and potential gold mines for retailers. The study found that when someone marries, he or she is more likely to start buying a new type of coffee. When a couple move into a new house, they’re more apt to purchase a different kind of cereal. When they divorce, there’s an increased chance they’ll start buying different brands of beer.

Consumers going through major life events often don’t notice, or care, that their shopping habits have shifted, but retailers notice, and they care quite a bit. At those unique moments, Andreasen wrote, customers are “vulnerable to intervention by marketers.” In other words, a precisely timed advertisement, sent to a recent divorcee or new homebuyer, can change someone’s shopping patterns for years. [...]

Before I met Andrew Pole, before I even decided to write a book about the science of habit formation, I had another goal: I wanted to lose weight.

I had got into a bad habit of going to the cafeteria every afternoon and eating a chocolate-chip cookie, which contributed to my gaining a few pounds. Eight, to be precise. I put a Post-it note on my computer reading “NO MORE COOKIES.” But every afternoon, I managed to ignore that note, wander to the cafeteria, buy a cookie and eat it while chatting with colleagues. Tomorrow, I always promised myself, I’ll muster the willpower to resist.

Tomorrow, I ate another cookie.

When I started interviewing experts in habit formation, I concluded each interview by asking what I should do. The first step, they said, was to figure out my habit loop. The routine was simple: every afternoon, I walked to the cafeteria, bought a cookie and ate it while chatting with friends.

Next came some less obvious questions: What was the cue? Hunger? Boredom? Low blood sugar? And what was the reward? The taste of the cookie itself? The temporary distraction from my work? The chance to socialize with colleagues?

Rewards are powerful because they satisfy cravings, but we’re often not conscious of the urges driving our habits in the first place. So one day, when I felt a cookie impulse, I went outside and took a walk instead. The next day, I went to the cafeteria and bought a coffee. The next, I bought an apple and ate it while chatting with friends. You get the idea. I wanted to test different theories regarding what reward I was really craving. Was it hunger? (In which case the apple should have worked.) Was it the desire for a quick burst of energy? (If so, the coffee should suffice.) Or, as turned out to be the answer, was it that after several hours spent focused on work, I wanted to socialize, to make sure I was up to speed on office gossip, and the cookie was just a convenient excuse? When I walked to a colleague’s desk and chatted for a few minutes, it turned out, my cookie urge was gone.

All that was left was identifying the cue.

Deciphering cues is hard, however. Our lives often contain too much information to figure out what is triggering a particular behavior. Do you eat breakfast at a certain time because you’re hungry? Or because the morning news is on? Or because your kids have started eating? Experiments have shown that most cues fit into one of five categories: location, time, emotional state, other people or the immediately preceding action. So to figure out the cue for my cookie habit, I wrote down five things the moment the urge hit:

Where are you? (Sitting at my desk.)

What time is it? (3:36 p.m.)

What’s your emotional state? (Bored.)

Who else is around? (No one.)

What action preceded the urge? (Answered an e-mail.)

The next day I did the same thing. And the next. Pretty soon, the cue was clear: I always felt an urge to snack around 3:30.
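The five-category logging procedure described above (location, time, emotional state, other people, preceding action) is simple enough to sketch in code. A minimal version, with field names of my own choosing, might be:

```python
from datetime import datetime

def record_urge(location, emotion, company, preceding_action):
    """Snapshot the five cue categories the moment an urge hits.

    The five categories come from the article: location, time,
    emotional state, other people present, and the preceding action.
    """
    return {
        "location": location,
        "time": datetime.now().strftime("%H:%M"),
        "emotion": emotion,
        "company": company,
        "preceding_action": preceding_action,
    }

log = []
# The author's own example entry from the article:
log.append(record_urge("desk", "bored", "no one", "answered an e-mail"))
```

After a few days of entries, whichever field stays constant across entries (in the author's case, the time) is the likely cue.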

Once I figured out all the parts of the loop, it seemed fairly easy to change my habit. But the psychologists and neuroscientists warned me that, for my new behavior to stick, I needed to abide by the same principle that guided Procter & Gamble in selling Febreze: To shift the routine — to socialize, rather than eat a cookie — I needed to piggyback on an existing habit. So now, every day around 3:30, I stand up, look around the newsroom for someone to talk to, spend 10 minutes gossiping, then go back to my desk. The cue and reward have stayed the same. Only the routine has shifted. It doesn’t feel like a decision, any more than the M.I.T. rats made a decision to run through the maze. It’s now a habit. I’ve lost 21 pounds since then (12 of them from changing my cookie ritual).

Pooling resources for valuable actuarial calculations

12 michaelcurzi 15 February 2012 05:01PM

It occurred to me this morning that, if it's actually valuable, generating true beliefs about the world must be someone's comparative advantage. If truth is instrumentally important, important people must be finding ways to pay to access it. I can think of several examples of this, but the one that caught my attention was actuarial science.

I know next to nothing about what actuaries actually do, but Wikipedia says:

"Actuaries mathematically evaluate the likelihood of events and quantify the contingent outcomes in order to minimize losses, both emotional and financial, associated with uncertain undesirable events."

Why, that sounds right up our alley. 

So what I'm wondering is: for those who can afford it, wouldn't it be worth contracting with actuaries to make important personal decisions? Not merely with regards to business, but everything else as well? My preliminary ideas include:

  • Lifestyle choices to reduce personal risk of death
  • Health and wellness decisions
  • Vehicle choice for economic and safety considerations
  • Where to send your kid to college and otherwise improve life success

Lastly, if consulting actuaries is worth doing as a wealthy individual, shouldn't it also be worth doing as a group? Couldn't we pool money to get excellent information about questions that haven't yielded answers to our research attempts?

If I am not misunderstanding the work that actuaries do, there may indeed be low-hanging fruit here.
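The kind of calculation being proposed can be made concrete with a toy expected-cost comparison. All the figures below are made-up illustrative numbers, not real actuarial data; a professional would use mortality and accident tables instead:

```python
def expected_annual_cost(running_cost, p_serious_accident, accident_cost):
    """Expected yearly cost = certain costs + probability-weighted accident cost."""
    return running_cost + p_serious_accident * accident_cost

# Two hypothetical vehicle choices (invented numbers):
car_a = expected_annual_cost(3000, 0.002, 500_000)   # cheaper to run, riskier
car_b = expected_annual_cost(4500, 0.0005, 500_000)  # pricier to run, safer
# car_a ≈ 4000, car_b ≈ 4750: on these numbers the cheaper car wins,
# though a real analysis would also price non-fatal injuries, insurance, etc.
```

The value of pooling resources is that estimating `p_serious_accident` well is expensive; once estimated, the number is equally useful to everyone in the pool.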

How confident should we be?

5 michaelcurzi 01 January 2012 03:57PM

What should a rationalist do about confidence? Should he lean harder towards

  1. relentlessly psyching himself up to feel like he can do anything, or
  2. having true beliefs about his abilities in all areas, coldly predicting his likelihood of success in a given domain?

I don't want to falsely construe these as dichotomous. The real answer will probably dissolve 'confidence' into smaller parts and indicate which parts go where. So which parts of 'confidence' correctly belong in our models of the world (which must never be corrupted) or our motivational systems (which we may cut apart and put together however helps us achieve our goals)? Note that this follows the distinction between epistemic and instrumental rationality.

Eliezer offers a decision criterion in The Sin of Underconfidence:

Does this way of thinking make me stronger, or weaker?  Really truly?

It makes us stronger to know when to lose hope already, and it makes us stronger to have the mental fortitude to kick our asses into shape so we can do the impossible. Lukeprog prescribes boosting optimism "by watching inspirational movies, reading inspirational biographies, and listening to motivational speakers." That probably makes you stronger too.

But I don't know what to do about saying 'I can do it' when the odds are against me. What do you do when you probably won't succeed, but believing that Heaven's army is at your back would increase your chances?
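The trade-off in that question can be made concrete with a toy decision model. Suppose inflated confidence adds a boost to your true success probability but carries a fixed miscalibration cost (bad side bets, poor contingency planning). All parameters here are invented for illustration:

```python
def eu_confident(p, boost, win, lose, miscalibration_cost):
    """Expected utility if self-psyching raises the success probability by `boost`
    but distorted beliefs impose a fixed cost elsewhere."""
    p2 = min(1.0, p + boost)
    return p2 * win + (1 - p2) * lose - miscalibration_cost

def eu_calibrated(p, win, lose):
    """Expected utility with accurate beliefs and no boost."""
    return p * win + (1 - p) * lose

# With p=0.2, boost=0.1, win=100, lose=0, cost=5:
# confident ≈ 25 beats calibrated = 20, so here the boost is worth the distortion.
```

On this model, confidence "steers you wrong" exactly when the miscalibration cost exceeds `boost * (win - lose)`; the hard empirical question is estimating those two quantities.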

My default answer has always been to maximize confidence, but I acted this way long before I discovered rationality, and I've probably generated confidence for bad reasons as often as I have for good reasons. I'd like to have an answer that prescribes the right action, all of the time. I want to know when confidence steers me wrong, and when to stop increasing my confidence. I want the real answer, not the historically-generated heuristic.

I can't help feeling like I'm missing something basic here. What do you think?

[Link]: GiveWell is aiming to have a new #1 charity by December

19 Normal_Anomaly 29 November 2011 03:11AM

GiveWell, LessWrong's most cited organization for optimal philanthropy, is currently re-evaluating its charity rankings with the goal of naming a new #1 charity by December 2011. Essentially, VillageReach (the current top charity) has met all of its short-term funding needs, to the point where it no longer has the greatest marginal return.

Our current top-rated charity is VillageReach. In 2010, we directed over $1.1 million to it, which met its short-term funding needs (i.e., its needs for the next year or so).

VillageReach still has longer-term needs, and in the absence of other giving opportunities that we consider comparable, we’ve continued to feature it as #1 on our website. However, we’ve also been focusing most of our effort this year on identifying and investigating other potential top-rated charities, with the hope that we can refocus attention on an organization with shorter-term needs this December. (In general, the vast bulk of our impact on donations comes in December.) We believe that we will be able to do so. We don’t believe we’ll be able to recommend a giving opportunity as good as giving to VillageReach was last year, but given VillageReach’s lack of short-term (1-year) room for more funding, we do expect to have a different top recommendation by this December.

EDIT: The new charities are up! They are the Against Malaria Foundation and the Schistosomiasis Control Initiative.

[Link] Walking Through Doors Causes Forgetting

5 khafra 21 November 2011 02:56PM

We investigated the ability of people to retrieve information about objects as they moved through rooms in a virtual space. People were probed with object names that were either associated with the person (i.e., carried) or dissociated from the person (i.e., just set down). Also, people either did or did not shift spatial regions (i.e., go to a new room). Information about objects was less accessible when the objects were dissociated from the person. Furthermore, information about an object was also less available when there was a spatial shift. However, the spatial shift had a larger effect on memory for the currently associated object. These data are interpreted as being more supportive of a situation model explanation, following on work using narratives and film. Simpler memory-based accounts that do not take into account the context in which a person is embedded cannot adequately account for the results.

http://www.springerlink.com/content/m6lq80675m22232h/ 

There are probably some deep implications to this that I'm not qualified to plumb.  But next time I'm concentrating on something, and need to get up from the computer and walk around a bit, I'm going to try avoiding doorways.

"The True Rejection Challenge" - Thread 2

7 Armok_GoB 02 July 2011 11:49AM

The old thread (found here: http://lesswrong.com/lw/6dc/the_true_rejection_challenge/ ) was becoming very unwieldy and hard to check, so many people suggested we make a second one. I just realized that the only reason it didn't exist yet was something bystander-effect-like, so I decided to just do this one.

From the original thread:

An exercise:

Name something that you do not do but should/wish you did/are told you ought, or that you do less than is normally recommended.  (For instance, "exercise" or "eat vegetables".)

Make an exhaustive list of your sufficient conditions for avoiding this thing.  (If you suspect that your list may be non-exhaustive, mention that in your comment.)

Precommit that: If someone comes up with a way to do the thing which doesn't have any of your listed problems, you will at least try it.  It counts if you come up with this response yourself upon making your list.

(Based on: Is That Your True Rejection?)

Edit to add: Kindly stick to the spirit of the exercise; if you have no advice in line with the exercise, this is not the place to offer it.  Do not drift into confrontational or abusive demands that people adjust their restrictions to suit your cached suggestion, and do not offer unsolicited other-optimizing.

[Altruist Support] How to determine your utility function

7 Giles 01 May 2011 06:33AM

Follows on from HELP! I want to do good.

What have I learned since last time? I've learned that people want to see an SIAI donation; I'll do it as soon as PayPal will let me. I've learned that people want more "how" and maybe more "doing"; I'll write a doing post soon, but I've got this and two other background posts to write first. I've learned that there's a nonzero level of interest in my project. I've learned that there's a diversity of opinions; it suggests if I'm wrong, then I'm at least wrong in an interesting way. I may have learned that signalling low status - to avoid intimidating outsiders - may be less of a good strategy than signalling that I know what I'm talking about. I've learned that I am prone to answering a question other than that which was asked.

Somewhere in the Less Wrong archives there is a deeply shocking, disturbing post. It's called Post Your Utility Function.

It's shocking because basically no-one had any idea. At the time I was still learning but I knew that having a utility function was important - that it was what made everything else make sense. But I didn't know what mine was supposed to be. And neither, apparently, did anyone else.

Eliezer commented 'in prescriptive terms, how do you "help" someone without a utility function?'. This post is an attempt to start to answer this question.

Firstly, what the utility function is and what it's not. It belongs to the field of instrumental rationality, not epistemic rationality; it is not part of the territory. Don't expect it to correspond to something physical.

Also, it's not supposed to model your revealed preferences - that is, your current behavior. If it did then it would mean you were already perfectly rational. If you don't feel that's the case then you need to look beyond your revealed preferences, toward what you really want.

In other words, the wrong way to determine your utility function is to think about what decisions you have made, or feel that you would make, in different situations. Which means there's a chance, just a chance, that up until now you've been doing it completely wrong. You haven't been getting what you wanted.

So in order to play the utility game, you need humility. You need to accept that you might not have been getting what you want, and that it might hurt. All those little subgoals, they might just have been getting you nowhere more quickly.

So only play if you want to.

The first thing is to understand the domain of the utility function. It's defined over entire world histories. You consider everything that has happened, and will happen, in your life and in the rest of the world. And out of that pops a number. That's the idea.

This complexity means that utility functions generally have to be defined somewhat vaguely. (Except if you're trying to build an AI). The complexity will also allow you a lot of flexibility in deciding what you really value.

The second thing is to think about your preferences. Set up some thought experiments to decide whether you prefer this outcome or that outcome. Don't think about what you'd actually do if put in a situation to decide between them; then you will worry about the social consequences of making the "unethical" decision. If you value things other than your own happiness, don't ask which outcome you'd be happier in. Instead just ask: which outcome seems preferable? Which would you consider good news, and which bad news?

You can start writing things down if you like. One of the big things you'll need to think about is how much you value self versus everyone else. But this may matter less than you think, for reasons I'll get into later.

The third thing is to think about preferences between uncertain outcomes. This is somewhat technical, and I'd advise a shut-up-and-multiply approach. (You can try and go against that if you like, but you have to be careful not to end up in weirdness such as getting different answers if you phrase something as one big decision or as a series of identical little decisions).
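The parenthetical's consistency requirement, that one big decision and a series of identical little decisions should agree, is exactly what shutting up and multiplying buys you. A minimal sketch (the gamble and all of its numbers are my own illustration, not from the post):

```python
import itertools

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

# A hypothetical gamble: 60% chance of +10 utility, 40% chance of -5.
gamble = [(0.6, 10), (0.4, -5)]

# Framed as one big decision: play the gamble three times,
# enumerating all 8 joint outcomes (utilities assumed additive).
big = [(p1 * p2 * p3, u1 + u2 + u3)
       for (p1, u1), (p2, u2), (p3, u3) in itertools.product(gamble, repeat=3)]

# Framed as a series of three identical little decisions.
series = 3 * expected_utility(gamble)

print(round(expected_utility(big), 6))  # 12.0
print(round(series, 6))                 # 12.0
```

Because expected utility is linear in the probabilities, the two framings always agree; a decision rule that weights outcomes nonlinearly is exactly the kind that can give different answers depending on how you chunk the decision.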

The fourth thing is to ask whether this preference system satisfies the von Neumann-Morgenstern axioms. If it's at all sane, it probably will. (Again, this is somewhat technical).
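For reference, the four axioms in question, stated over lotteries \(p, q, r\) with a preference relation \(\succeq\) (this is the standard textbook statement, not something from the post):

```latex
\begin{align*}
&\text{Completeness:} && p \succeq q \ \text{or}\ q \succeq p \\
&\text{Transitivity:} && p \succeq q \ \text{and}\ q \succeq r \implies p \succeq r \\
&\text{Continuity:} && p \succeq q \succeq r \implies \exists\, \alpha \in [0,1] \ \text{such that}\ q \sim \alpha p + (1-\alpha) r \\
&\text{Independence:} && p \succeq q \implies \alpha p + (1-\alpha) r \succeq \alpha q + (1-\alpha) r \quad \text{for all}\ \alpha \in (0,1]
\end{align*}
```

If your preferences satisfy all four, the theorem guarantees a utility function exists such that you prefer whatever maximizes its expectation.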

The last thing is to ask yourself: if I prefer outcome A over outcome B, do I want to act in such a way that I bring about outcome A? (continue only if the answer here is "yes").

That's it - you now have a shiny new utility function. And I want to help you optimize it. (Though it can grow and develop and change along with yourself; I want this to be a speculative process, not one in which you suddenly commit to an immutable life goal).

You probably don't feel that anything has changed. You're probably feeling and behaving exactly the same as you did before. But this is something I'll have to leave for a later post. Once you start really feeling that you want to maximize your utility then things will start to happen. You'll have something to protect.

Oh, you wanted to know my utility function? It goes something like this:

It's the sum of the things I value. Once a person is created, I value that person's life; I also value their happiness, fun and freedom of choice. I assign negative value to that person's disease, pain and sadness. I value concepts such as beauty and awesomeness. I assign a large bonus negative value to the extinction of humanity. I weigh the happiness of myself and those close to me more highly than that of strangers, and this asymmetry is more pronounced when my overall well-being becomes low.

Four points: It's actually going to be a lot more complicated than that. I'm aware that it's not quantitative and no terminology is defined. I'm prepared to change it if someone points out a glaring mistake or problem, or if I just feel like it for some reason. And people should not start criticizing my behavior for not adhering to this, at least not yet. (I have a lot of explaining still to do).

HELP! I want to do good

15 Giles 28 April 2011 05:29AM

There are people out there who want to do good in the world, but don't know how.

Maybe you are one of them.

Maybe you kind of feel that you should be into the "saving the world" stuff but aren't quite sure if it's for you. You'd have to be some kind of saint, right? That doesn't sound like you.

Maybe you really do feel it's you, but don't know where to start. You've read the "How to Save the World" guide and your reaction is, ok, I get it, now where do I start? A plan that starts "first, change your entire life" somehow doesn't sound like a very good plan.

All the guides on how to save the world, all the advice, all the essays on why cooperation is so hard, everything I've read so far, has missed one fundamental point.

If I could put it into words, it would be this:

AAAAAAAAAAAGGGHH WTF CRAP WHERE DO I START EEK BLURFBL

If that's your reaction then you're half way there. That's what you get when you finally grasp how much pointless pain, misery, risk, death there is in the world; just how much good could be done if everyone would get their act together; just how little anyone seems to care.

If you're still reading, then maybe this is you. A little bit.

And I want to help you.

How will I help you? That's the easy part. I'll start a community of aspiring rationalist do-gooders. If I can, I'll start it right here in the comments section of this post. If anything about this post speaks to you, let me know. At this point I just want to know whether there's anybody out there.

And what then? I'll listen to people's opinions, feelings and concerns. I'll post about my worldview and invite people to criticize, attack, tear it apart. Because it's not my worldview I care about. I care about making the world better. I have something to protect.

The posts will mainly be about what I don't see enough of on Less Wrong. About reconciling being rational with being human. Posts that encourage doing rather than thinking. I've had enough ideas that I can commit to writing 20 discussion posts over a reasonable timescale, although some might be quite short - just single ideas.

Someone mentioned there should be a "saving the world wiki". That sounds like a great idea and I'm sure that setting one up would be well within my power if someone else doesn't get around to it first.

But how I intend to help you is not the important part. The important part is why.

To answer that I'll need to take a couple of steps back.

Since basically forever, I've had vague, guilt-motivated feelings that I ought to be good. I ought to work towards making the world the place I wished it would be. I knew that others appeared to do good for greedy or selfish reasons; I wasn't like that. I wasn't going to do it for personal gain.

If everyone did their bit, then things would be great. So I wanted to do my bit.

I wanted to privately, secretively, give a hell of a lot of money to a good charity. So that I would be doing good and that I would know I wasn't doing it for status or glory.

I started small. I gave small amounts to some big-name charities, charities I could be fairly sure would be doing something right. That went on for about a year, with not much given in total - I was still building up confidence.

And then I heard about GiveWell. And I stopped giving. Entirely.

WHY??? I can't really give a reason. But something just didn't seem right to me. People who talked about GiveWell also tended to mention that the best policy was to give only to the charity listed at the top. And that didn't seem right either. I couldn't argue with the maths, but it went against what I'd been doing up until that point and something about that didn't seem right.

Also, I hadn't heard of GiveWell or any of the charities they listed. How could I trust any of them? And yet how could I give to anyone else if these charities were so much more effective? Big akrasia time.

It took a while to sink in. But when it did, I realised that my life so far had mostly been a waste of time. I'd earned some money, but I had no real goals or ambitions. And yet, why should I care if my life so far had been wasted? What I had done in the past was irrelevant to what I intended to do in the future. I knew what my goal was now and from that a whole lot became clear.

One thing mattered most of all. If I was to be truly virtuous, altruistic, world-changing then I shouldn't deny myself status or make financial sacrifices. I should be completely indifferent to those things. And from that the plan became clear: the best way to save the world would be to persuade other people to do it for me. I'm still not entirely sure why they're not already doing it, but I will use the typical mind prior and assume that for some at least, it's for the same reasons as me. They're confused. And that to carry out my plan I won't need to manipulate anyone into carrying out my wishes, but simply help them carry out their own.

I could say a lot more and I will, but for now I just want to know. Who will be my ally?

Get data points on your current utility function via hypotheticals

1 Dorikka 24 April 2011 06:44PM

I've recently found that my utility function valued personal status and fame a whole lot more than I thought it did -- I previously had thought that it mostly relied on the consequences of my actions for other sentiences, but it turned out I was wrong. Obviously, this is a valuable insight -- I definitely want to know what my current utility function is; from there, I can decide whether I should change my actions or my utility function if the two aren't coordinated.

I did this by imagining how I would feel if I found out certain things. For example, how would I feel if everyone else was also trying to save the world? The emotional response I had was sort of a hollow feeling in the pit of my stomach, like I was a really mediocre being. This obviously wasn't a result of calculating that the marginal utility of my actions would be a whole lot lower in this hypothetical world (and so I should go do something else); instead, it was the fact that me trying to save the world didn't make me special any more -- I wouldn't stand out, in this sort of world.

(Epilogue: I decided that I hadn't done a good enough job programming my brain and am attempting to modify my utility function to rely on the world actually getting saved.)

Discussion: What other hypotheticals are useful?

The right kind of fun?

4 Dorikka 16 April 2011 11:20PM

If you consider that the utility generated by working is much greater than the utility directly generated by having fun, then the main thing you're optimizing when you have fun is how much motivation the memory of that fun adds to your working capabilities. This is distinctly different from optimizing for the direct preference fulfillment generated by the fun, even if the same activities are optimal for both utility functions.

The same model works for any action A such that the utility generated by the effect of that action on another action is much greater than the utility generated by the action itself. This probably applies to most maintenance actions, such as doing laundry, sleeping, and eating, but there it is more obvious to us -- we usually don't see laundry as an end unto itself, whereas we often do pursue fun for its own sake. I'm not advocating that we shouldn't have fun, but that we (or at least I) seem to be optimizing for the wrong thing -- direct preference fulfillment, rather than motivation.
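To make the model concrete, here is a toy calculation in which fun contributes utility both directly and through a motivation boost to later work. Every activity name and number below is an assumption of mine, purely for illustration:

```python
# Toy model (all names and numbers are illustrative assumptions):
# each fun activity yields some direct utility, plus a motivation
# boost that multiplies the utility of subsequent work.
WORK_UTILITY = 100  # assumed to dominate direct fun utility

activities = {
    # name: (direct_fun_utility, motivation_boost)
    "video games": (8, 0.01),
    "hiking with friends": (5, 0.10),
}

def total_utility(direct, boost):
    # direct enjoyment plus the indirect effect on work
    return direct + boost * WORK_UTILITY

best = max(activities, key=lambda a: total_utility(*activities[a]))
print(best)  # hiking with friends
```

In this toy model the activity with less direct fun wins (5 + 10 beats 8 + 1), which is the point: when work utility dominates, optimizing direct enjoyment alone picks the wrong activity.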

This feels like a significant insight, but I tend to get a significant number of false positives. Any ideas on how we might use this?

Hunger can make you stupid

11 Dorikka 13 April 2011 04:17PM

When I originally wrote "When to scream 'Error!'", I was mainly thinking of bad patterns of thought or bad problem-solving strategies as being the source of the error. Since then, I've come to realize that my own most common source of stupidity is neglecting some comfort. I may be hungry without consciously paying attention to it, dehydrated because I've been living on coffee for too long, or simply have a headache and need to take an ibuprofen -- as a result, I don't think well, get irritated at the fact that I'm not thinking well, and generally begin a death spiral if I don't realize why.

In hindsight, it feels obvious that I should take care of the physiological needs that I can because they're likely preventing me from thinking straight. However, I've failed to do this on numerous occasions and so thought it worth mentioning.

In summary: Whenever you're screaming "Error", I suggest you stop and figure out whether you're hungry, thirsty, tired, or hurting before trying to find a problem in your thinking itself, especially if you're not usually good at noticing such things.

Q: What has Rationality Done for You?

11 atucker 02 April 2011 04:13AM

So after reading SarahC's latest post I noticed that she's gotten a lot out of rationality.

More importantly, she got different things out of it than I have.

Off the top of my head, I've learned...

On top of becoming a little bit more effective at a lot of things, and with many fewer problems.
(I could post more on the consequences of this, but I'm going for a different point)

Where she got...

  • a habit of learning new skills
  • better time-management habits
  • an awesome community
  • more initiative
  • the idea that she can change the world

I've only recently started making a habit out of trying new things, and that's been going really well for me. Is there other low-hanging fruit that I'm missing?

What cool/important/useful things has rationality gotten you?

Reading the Sequences before Starting to Post: Costs and Benefits

13 Normal_Anomaly 31 March 2011 02:01AM

This post arose from this discussion in the "Philosophy: a Diseased Discipline" post.

Current Practice

There have been several conversations lately about the costs and benefits of scholarship, the effort of reading the sequences, and attempts to repackage the sequence material in an easier form [1]. There also used to be a practice on LW of telling newbies who weren't producing good content to come back when they'd read the sequences. However, David Gerard, who has been paying more attention than me, has noticed that this practice has stopped. One plausible explanation is that the stoppage is due to a rising awareness of the effort that reading the sequences takes. 

In an impromptu unscientific poll, 10 respondents said that they had read the sequences while still lurking on LW, 3 that they read them after creating accounts, and 8 that they had read them while they were still on OB. Nobody said that they still hadn't read the sequences [2]. So, assuming that this roughly represents the status quo, most LW posts/comments come from people who have read the sequences. The questions are: One, is this situation changing (are fewer people reading the sequences than in the past)? And two, should it change, and in what direction?

To answer this, one needs to look at the costs and benefits.

Costs

Length: The sequences comprise over a million words, not counting the comments. They cover material as diverse as semantics, quantum theory, cognitive science, metaethics, and how to write a good eutopia.

Interdependency: Each post in a sequence requires understanding of the previous posts in that sequence, and sometimes posts from earlier sequences. As well as being a source of intimidating and annoying tab explosions, this exacerbates the problem of length. It's hard to read the sequences except by going through large chunks systematically, so they can't be broken up and read in a person's spare time.

Possible Memetic Hazard: Some of the ideas in the Sequences are controversial [3]. These points are often clearly marked in the posts and debated in the comments, so they won't sneak up on anyone; on the other hand, Memetic Hazard was used to describe controversial topics here, so at least someone thinks it's a problem. Some potential readers may not want to be exposed to treatments of controversial issues that argue for one side before they read balanced overviews. Also, discomfort has been expressed over the possibility of LW being a cult.  I don't want this post to turn into a forum for the is-it-a-cult conversation, so it's up here as something that may cause disutility to some people who read the Sequences.

Benefits

Usefulness: Various people [4] have discussed the benefits of rationality knowledge in helping them "Win at Life". These benefits vary widely from person to person, so there are many ways to take advantage of the sequences in one's own life.

Informativeness: On questions that don't have immediate practical relevance, it's still good for the community if everyone is familiar with the basic material. Discussions of uploading, for example, wouldn't go very far if people had to stop to explain why they believe that consciousness is physical. Having all participants start out with a minimum number of undissolved confusions improves the SNR of Less Wrong even when it doesn't directly help the individual members win.

A Common Vocabulary: on a forum where everyone has read the sequences, it's easy to refer to them in conversation. Telling someone that their position is equivalent to two-boxing on Newcomb's problem will quickly convey what you mean and allow the person an easy way to craft an answer. Pointing out that a debate is over the meaning of a word will do more to prevent it from expanding into a giant mess than if the participants hadn't read Making Beliefs Pay Rent. And using examples like Bleggs and Rubes or similar can connect a commenter's example to ver audience's current knowledge of the concept.

Please comment to suggest more costs and benefits, provide more info on the sequence-reading habits of commenters, share your experience, or explain why everything I just said is wrong.


[1] Some examples: The Neglected Virtue of Scholarship, Costs and Benefits of Scholarship, Rationality Workbook Proposal.

[2] This option in the poll was created after the others and wound up being elsewhere on the page, so it is probably underrepresented. I'm just taking the results as a first approximation, and will edit this post if the comments suggest the status quo is not what I thought it was.

[3] Some examples: The Many-Worlds and Timeless formulations of quantum mechanics are still being debated by physicists. Perhaps less importantly, as an average reader can understand the debate and form ver own opinion, issues like the Zombie World are still being debated by philosophers.

[4] See this post for an example: Reflections on Rationality a Year Out

College Selection Advice

4 atucker 09 March 2011 10:13PM

I, and a lot of other people my age, are currently facing a pretty big life decision -- where to go to college. Since this is probably going to have a pretty big impact on my life, I'd like to get some more information on this.

Seeing as a lot of people here have probably made this choice already, gone through with some of the consequences of it, and are rational, I decided to ask here.

My current considerations are:

  • Academic rigor
  • Money (i.e. if a school gives me a full ride, should I go there rather than plunk down $250k over 4 years)
  • Ability to do undergrad research
  • Flexibility
  • Likelihood to meet cool people
  • Novelty (this one's a lot weaker though)

My current situation is:
  • Accepted to MIT, University of Southern California, University of Maryland, Swarthmore, Harvey Mudd, Harvard, and CMU
  • Getting some form of scholarships at USC and UMD, amount TBD
  • Not likely to receive that much need-based financial aid
  • Probably going to start in Engineering, might double major with Comp Sci, Statistics, or maybe Math. If I go to CMU, probably Engineering and Public Policy
  • I also like and am competent in Economics, History, and English (though, definitely not getting a degree in the last 2)
  • Maryland is my home state, and I would know a lot of people at UMD

So if you have any advice, for me or in general, I'd love to hear it. If you have any questions yourself, feel free to ask them.


Go Try Things

14 atucker 25 February 2011 06:23AM

So this isn't quite done, and it's late here so I don't quite trust my judgements about writing at this hour. I've never done a top-level post before, so I wanted to get some feedback first.

 


Failure isn’t that Bad

You’ve probably read about how to properly turn information into beliefs, and how to squeeze every last bit from your data. What seems not to have received as much attention is the importance of just going and getting data.

For precise and well-defined fields and problems, clear thinking and reasoning will get you really far. Mathematics departments don’t use that much equipment, and they’ve been going fine for hundreds of years.

For more mundane day-to-day concerns, getting data is probably more important than being rational. Where Rationality helps you get an accurate model of the world based on the data, Data gets you, well, data. And practice. Your human brain can’t rederive social rules in a vacuum, no matter how smart you are, so you have to go out and get information about it. But rationality with data is far better than either alone.

Sometimes you have to get your data by actually trying. Some things are just hard to explain in words and video. Your brain has all of this built-in hardware for detecting and interpreting emotions and body language, but people are comparatively terrible at talking about it. This makes learning about different social or mood-variant things online difficult. Motions are also hard to teach online. I can kind of visualize how to do a front handspring, but I really can’t transmit what it feels like to someone else without just asking them to try it. Note: I’m not saying that asking others is useless, but I am saying that it’s mostly only effective as a complement to actually trying.

Practice is important. As any akrasiatic or novice would know, knowledge in a field or domain doesn’t translate directly to success in it. Like muscle memory, you need practice in order to get your brain to incorporate what you know to the point that you can use it automatically. Consciously thinking about what you’re doing while you’re doing it tends to cause lag and awkwardness, and in some fields (like conversation or physical activities) is a pretty large detriment.

I had/have the problem of hesitating on acting until I’m sure that whatever I’m considering attempting is going to be successful. I’m afraid of it not working, and am willing to do anything short of doing it in order to ensure success.

This kind of hesitation, though, is pretty useless. In many cases failure to act is about the same as your action failing. It avoids doing things that you regret, but it also avoids doing things in general. And if your hesitation doesn’t result in a well thought-out plan to guarantee success in the future, then not only do you fail that one time you hesitate, you’re also not going to make progress on succeeding in the future.

Sometimes failure is actually a problem (like you’ll break something if you try extreme parkour tricks and fail), but I feel like in most instances I grossly overestimate how bad failing is. To combat this I do a few things:

  • Consider a failure to act as an implicit failure. Not trying is as bad as trying and failing, except for whatever costs a failed attempt incurs.
  • Not regret failing. As long as I learn from my mistakes then making them results in a net gain. In the long term having failed at something and learning what to do is better than not attempting it.
  • Attempt to minimize the cost of a failed attempt. I hesitate a lot with social things. If I fail with a stranger and never see them again, it’s not that big of a deal. They might be annoyed, but as long as I didn’t do something super horrible to them then they’re probably going to forget about it.

So long story short, try things out. Improvement is hard unless you do, and failure seriously isn’t that bad.

 

Bridging Inferential Gaps and Explaining Rationality to Other People

9 atucker 13 February 2011 06:22AM

This post is going in discussion until I get it edited enough that I feel like it's post-worthy, or if it does well.


Core Post:

Rationality has helped me do a lot of things (in the past year: being elected President of my robotics team, getting a girlfriend, writing good college apps (and getting into a bunch of good schools), etc.), and I feel sort of guilty for not helping other people use it.

I had made progress on a lot of those fronts before, but a bunch of things fell into place in a relatively short period of time after I started trying to optimize them. Some of my friends have easyish problems, but unsolicited risky counterintuitive advice is uncouth and unhelpful.

More pressingly, I want to pass on a lot of rationality knowledge to people I know before I graduate high school. Being in a fairly good Math/Science/Computer Science Magnet Program, I have access to a lot of smart, driven people who have a lot of flexibility in their lives and I think it would be a shame if there were things I could tell them that would make them do a lot better. On top of that, I want to pass on this knowledge within my robotics team so that they continue doing well.

Basically, I want to learn how to explain useful rationality concepts to other people in a non-annoying and effective way. As far as I can tell, many people want to do similar things, and find it difficult to do so.

I suspect that this topic is broad enough that it would be hard for a single person to tackle it in one post. So that people don't need to have enough information for an entire post (which would be awesome, by the way) before they talk about it, here's a thread to respond to.

I'd particularly like to encourage people who have successfully bridged inferential distances to reply with where people started and how the conversation went. Please. An example:

In my Origins of Science (basically a philosophy) class, a conversation like this (paraphrased, happened a few days ago) took place. I'm not sure where the other people in the class started, but it got them to the point that they understood that you model reality, that beliefs are supposed to reflect reality, and that you can't just make things up entirely.

W: "I feel like if people want to think God exists, then God exists for them, but if they want to ignore him then he won't."

me: "But that's not how existing works. In our thoughts and opinions, we make a map of how the world exists. But the map is not the territory."

W: "But it will still seem real to you..."

me: "Like, you can put whatever you want in your map, like dragons or whatever, but that doesn't actually put dragons in the territory. And now it's a failure of your map to reflect the territory, not of the territory to reflect your map."

I could have said the last part better, but I definitely remember saying the last sentence.

The map vs. territory example seems to be really effective; a few people complimented it (and I admitted that I had read it somewhere else). I'm not sure how much it propagates into other beliefs; I'll update later with how much it seems to affect later conversations in the class.

Questions:
What basic rationality ideas are the most helpful to the most people?

Would it be helpful to try and categorize where people are inferentially? Is it possible?

Observations:

  • Inferential Distance is a big deal. Hence the first part of the title. I was able to explain transhumanism to someone in 3 minutes, and have them totally agree. Other people don't even accept the possibility of AI, let alone that morality can happen when God doesn't exist.
  • It's much easier to convince people who know and like you.
  • There's a difference between getting someone to ostensibly agree with something, and getting it to propagate through their beliefs.
  • People remember rationality best when they benefit from learning it, and it applies to what they're specifically trying to do.
  • It's difficult to give someone specific advice and have them pick up on the thought process that you used to come up with it.
  • Atheists seem to be pretty inferentially close to Singularity-cluster ideas.
  • From an earlier post I got a bunch of helpful feedback, particularly from Nornagest's and TheOtherDave's comments. The short versions:
    • Asking people to do specific things is creepy; teaching someone is much more effective if you just tell them the facts and let them do whatever they want with it.
    • People need specifics to actually do something, and it's hard to make them decide to do something substantially different from what they're already doing.
  • And from a comment by David Gerard: people need to want to learn/do something; it's hard to push them into it.
  • A lot of people are already doing useful things (research, building businesses), so it might be more helpful to make a bunch of them better at what they do than to get a few of them to do something entirely different.

Rationality for Other People

4 atucker 11 February 2011 05:41AM

I'm putting this through discussion because I've never written a main section post before. If you have helpful criticism, please comment with it, and if it does well I'll post it in the main section when I get back from school tomorrow.

Things between the bars are intended to be in the final post; the rest are comments.


There are lots of things that could end the world, and even more things that could help improve or save it. Having more people working more effectively on these things will make the world progress and improve faster, or better fight existential risks, respectively.


And yet for all of my intention to help do those things, I haven’t gotten a single other person to do it as well. Convincing someone else to work towards something is like devoting another lifetime to it, or doubling your efforts. And you only need to convince them once.

So there’s two things I want to learn how to do:

  1. Convince people to try and save the world
  2. Convince people to use more effective methodologies (especially with regards to world-saving)

I think that the rationalist community as a whole isn’t particularly good at doing these. Small efforts are made by individuals, but I think that most of the people who do try to do these run into the same problems.

I propose that we do more to centralize and document the solutions to these problems in order for our individual efforts to be more effective. This thread is for sharing the problems and solutions people encounter when convincing others.


  • I think that the activity of convincing people to try and save the world and using more effective methodologies should have a word or phrase. Suggestions?
  • Should it just be a thread? I feel like some of the particularly good comments would make good independent posts. Should we just link to the post version from within the thread?
  • I'm a bit worried that this sounds a bit culty. If you disagree please mention it, and if you agree please tell me why.
  • This is partly prompted by Alicorn's post, and some things which have recently happened in my life.

 

Link: Chessboxing could help train automatic emotion regulation

4 Sniffnoy 22 January 2011 11:40PM

EDIT: Argh, I really failed to read this closely. Rewriting...

Just saw this over at Not Exactly Rocket Science. Chessboxing (or similar games) could help train automatic emotion regulation.  Obviously this should generalize.  Has this - by which I mean finding things that can help train automatic emotion regulation - been done before? This doesn't seem to be anything new - and this is extrapolation, not experimental results - but it's a neat application.

Help Request: How to maintain focus when emotionally overwhelmed

5 throwaway 07 December 2010 11:29PM

So my personal life just got very interesting. In a net-positive way, certainly, but still, I am, as Calculon put it, "filled with a large number of powerful emotions!" -- some of which are anxious and/or panicky.

This is making it annoyingly difficult to focus at work. I am an absolutely textbook "Attention Deficit Oh-look-a-squirrel!" case at the best of times, and this seems to have made it much, much worse. I can handle small tasks, but anything where I'm going to have to spend an hour solving multiple problems before producing results, I can hardly make myself start.

Has anyone dealt with the problem of maintaining productive focus while emotionally overwhelmed/exhausted, and if so, do you have any pointers?

You don't need barefoot shoes to start walking differently.

-3 Kevin 15 October 2010 06:41AM

I bought into the hype and decided that I was going to get a pair of Vibrams. My intention was not to use them as running shoes, but as everyday walking shoes. Then my girlfriend told me that I wasn't allowed, that they were too hideous to be worn in public. In almost two years together, this is the only thing that she has forbidden me from doing, and I regularly do completely ridiculous things, so I deferred to her judgement. I thought about getting barefoot dress shoes, but $150 seemed excessive.

I then decided that I didn't need fancy shoes to stop walking heel first. I started wearing a pair of casual brown slip-on shoes with a fair amount of cushioning but little support. From the start, I thought it felt good to actually walk on the balls of my feet.

It took three weeks for my feet to stop hurting, but now I naturally walk on the balls of my feet. You can do the same thing. It will probably be easier in a light pair of shoes rather than a clunky pair of dress shoes or boots.

Video: Getting Things Done Author at DO Lectures

2 JamesAndrix 11 October 2010 08:33AM

If nothing else, this is a distillation of him spending a lot of time analyzing how people ineffectively manage their time.

Link:

http://www.dolectures.com/speakers/speakers-2010/david-allen

I expect to watch this two more times.
