5-second level case study: Value of information

23 Kaj_Sotala 22 November 2011 01:44PM

This post started off as a comment to Vaniver's post Value of Information: Four Examples. It also builds heavily on Eliezer's post The 5-Second Level. The five-second level is the idea that to develop a rationality skill, you need to automatically recognize a problem and then apply a stored, actionable procedural skill to deal with it, all in about five seconds. Here, I take the value of information concept and develop it into a five-second skill, summarizing my thought process as I do so. Hopefully this will help others develop things into five-second skills.

So upon reading Vaniver's post, I thought "the value of information seems like a valuable concept", but didn't do much more. A little later, I thought, "I want to make sure that I actually apply this concept when it is warranted. How do I make sure of that?" In other words, "how do I get this concept to the five-second level?" Then I decided to document my thought process in the hopes of it being useful to others. This is quite stream-of-consciousness, but I hope that seeing my thought process helps others learn from it. (Or that it prompts valuable criticism of how I should have thought.)

First off, "how do I apply this concept?" is too vague to be useful. A better question would be, "in what kinds of situations might this concept be useful?". With a bit of thought, it was easy to find at least three situations, ones where I am:

1. ...tempted to act now without gathering more information, despite the VoI being high.
2. ...tempted to gather more information, despite the VoI being low.
3. ...not sure of whether I should seek information or not.

#3 implies that I'm already reflecting on the situation, and am therefore relatively likely to remember VoI as a possible mental tool anyway. So developing a five-second level reaction for that one isn't as important. But in #1 and #2 I might just proceed by default, never realizing that I could do better. So I'll leave #3 aside, concentrating on #1 and #2.

Now in these situations, the relevant thing is that the VoI might be "high" or "low". Time to get more concrete - what does that mean? Looking at Vaniver's post, the VoI is high if 1) extra information is likely to make me choose B when I had intended to choose A, and 2) there's a high payoff in choosing correctly between A and B. If 2 is false, the VoI is low regardless. The intermediate case is the one where 2 is true but 1 is false, in which case it depends on how extreme the values are. E.g. a mere 1% chance of changing my mind given extra information might still imply a high VoI, if the difference between the correct and incorrect choice is a million euros, say.
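These two conditions combine into a simple expected-value sketch. Here's a minimal illustration in Python; the function name and numbers are mine, chosen to match the million-euro example above, not taken from Vaniver's post:

```python
# Sketch of the VoI reasoning above: you intend to choose A; extra
# information has probability p_switch of revealing that B is better,
# and switching in that case gains payoff_difference.

def value_of_information(p_switch, payoff_difference):
    """Expected gain from gathering the information before deciding."""
    return p_switch * payoff_difference

# The extreme case from the text: only a 1% chance of changing your
# mind, but a million euros riding on the choice.
voi = value_of_information(0.01, 1_000_000)
print(voi)  # 10000.0 - well worth gathering the information
```

Even a tiny probability of switching can justify the cost of investigating when the stakes are large enough, which is the intermediate case described above.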

So sticking just to #1 for simplicity, and because I think that's a worse problem for me, I'd need to train myself to immediately notice and react if:

continue reading »

Poker with Lennier

15 HonoreDB 15 November 2011 10:21PM

In J. Michael Straczynski's science fiction TV show Babylon 5, there's a character named Lennier. He's pretty Spock-like: he's a long-lived alien who avoids displaying emotion and feels superior to humans in intellect and wisdom. He's sworn to always speak the truth. In one episode, he and another character, the corrupt and rakish Ambassador Mollari, are chatting. Mollari is bored. But then Lennier mentions that he's spent decades studying probability. Mollari perks up, and offers to introduce him to this game the humans call poker.

continue reading »

The True Rejection Challenge

43 Alicorn 27 June 2011 07:18AM

An exercise:

Name something that you do not do but should/wish you did/are told you ought, or that you do less than is normally recommended.  (For instance, "exercise" or "eat vegetables".)

Make an exhaustive list of your sufficient conditions for avoiding this thing.  (If you suspect that your list may be non-exhaustive, mention that in your comment.)

Precommit that: If someone comes up with a way to do the thing which doesn't have any of your listed problems, you will at least try it.  It counts if you come up with this response yourself upon making your list.

(Based on: Is That Your True Rejection?)

Edit to add: Kindly stick to the spirit of the exercise; if you have no advice in line with the exercise, this is not the place to offer it.  Do not drift into confrontational or abusive demands that people adjust their restrictions to suit your cached suggestion, and do not offer unsolicited other-optimizing.

To alleviate crowding, Armok_GoB has created a second thread for this challenge.

Overcoming suffering: Emotional acceptance

38 Kaj_Sotala 29 May 2011 10:57AM

Follow-up to: Suffering as attention-allocational conflict.

In many cases, it may be possible to end an attention-allocational conflict by looking at the content of the conflict and resolving it. However, there are also many cases where this simply won't work. If you're afraid of public speaking, say, the "I don't want to do this" signal is going to keep repeating itself regardless of how you try to resolve the conflict. Instead, you have to treat the conflict in a non-content-focused way.

In a nutshell, this is just the map-territory distinction as applied to emotions. Your emotions have evolved as a feedback and attention control mechanism: their purpose is to modify your behavior. If you're afraid of a dog, this is a fact about you, not about the dog. Nothing in the world is inherently scary, bad or good. Furthermore, emotions aren't inherently good or bad either, unless we choose to treat them as such.

We all know this, right? But we don't consistently apply it to our thinking of emotions. In particular, this has two major implications:

1. You are not the world: It's always alright to feel good. Whether you're feeling good or bad won't change the state of the world: the world is only changed by the actual actions you take. You're never obligated to feel bad, or guilty, or ashamed. In particular, since you can only influence the world through your actions, you will accomplish more and be happier if your emotions are tied to your actions, not states of the world.
2. Emotional acceptance: At the same time, "negative" emotions are not something to suppress or flinch away from. They're a feedback mechanism which imprints lessons directly into your automatic behavior (your elephant). With your subconsciousness having been trained to act better in the future, your conscious mind is free to concentrate on other things. If the feedback system is broken and teaching you bad lessons, then you should act to correct it. But if the pain is about some real mistake or real loss you suffered, then you should welcome it.

Internalizing these lessons can have some very powerful effects. I've been making very good progress on consistently feeling better after starting to train myself to think like this. But some LW posters are even farther along; witness Will Ryan:

continue reading »

Measuring aversion and habit strength

79 Academian 27 May 2011 01:05AM

tl;dr: Strong aversions don't always originate from strong feelings (see Ugh fields). It's useful to measure the strength of an aversion by how effectively it averts your thoughts/behavior instead of how saliently you can feel it, or even remember feeling it. If there's a low-cost behavior that you somehow always "end up not doing", that's evidence for a mechanism steering you away from it. Try to find it, and defy it.

Story

Right after writing Break your habits: be more empirical, someone asked me to a live music show, and I declined, with some explanation about being busy. This felt a little forced, and I realized: I always decline live music shows. This counts as a habit. The interesting thing was that I declined them for many different, unrelated reasons. This was evidence for something more systemic, because it would be a coincidence if random, unrelated reasons always came up to prevent me from attending live music.
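That coincidence intuition can be made roughly quantitative with assumed numbers (all figures here are mine, for illustration, not Academian's):

```python
# If each unrelated reason independently blocked attendance with some
# moderate probability, declining *every* invitation purely by chance
# becomes implausible fast. Assume a generous 50% block rate per reason.

p_block = 0.5  # assumed probability that any one unrelated reason blocks you

for n_invitations in (3, 5, 10):
    p_all_blocked = p_block ** n_invitations
    print(n_invitations, p_all_blocked)

# Ten declines in a row by coincidence has probability ~0.001, so a
# systematic aversion becomes the better explanation of the pattern.
```

The exact per-reason probability doesn't matter much; unless it's very close to 1, a long unbroken run of declines points to a common cause rather than bad luck.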

So I asked myself if I really disliked live music. Emotions returned: "Not really. It's not awesome, but it's not terrible." Now, there was a time when I would have stopped thinking there. My time is valuable, and mediocrity is enough to stop me from doing anything, right?

But wait... is it? Is it enough to always stop me? If it was only mediocre, and not terrible, then surely on one of the many occasions I could have seen live music, there would have been sufficient justification to go... a particularly good composer, a particularly interesting group of people to go with, a particular need to get out and do something different... but no, somehow I always didn't go.

And that's when I realized I probably had an aversion to live music: some brain mechanism that consistently and effectively averted me from seeing it, and in this case, not something I could feel. In particular, it wasn't accompanied by any sense of "Ugh". So since I couldn't feel the aversion, I took an outside view to ask what could have caused it, if it indeed exists...

continue reading »

Pancritical Rationalism Can Apply to Preferences and Behavior

1 TimFreeman 25 May 2011 12:06PM

ETA: As stated below, criticizing beliefs is trivial in principle: either they were arrived at with an approximation to Bayes' rule, starting with a reasonable prior and updated with actual observations, or they weren't.  Subsequent conversation made it clear that criticizing behavior is also trivial in principle, since someone is either taking the action that they believe will best suit their preferences, or not.  Finally, criticizing preferences became trivial too -- the relevant question is "Does/will agent X behave as though they have preferences Y", and that's a belief, so go back to Bayes' rule and a reasonable prior. So the entire issue that this post was meant to solve has evaporated, in my opinion. Here's the original article, in case anyone is still interested:
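The "approximation to Bayes' rule starting with a reasonable prior" that the ETA leans on fits in a few lines. A toy sketch with made-up numbers (mine, not the author's):

```python
# Bayes' rule: update a prior belief in hypothesis H after seeing evidence E.
# P(H|E) = P(H) * P(E|H) / [ P(H) * P(E|H) + P(~H) * P(E|~H) ]

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior probability of H after observing the evidence."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Toy example: prior 0.3, evidence 4x more likely under H than under ~H.
posterior = bayes_update(0.3, 0.8, 0.2)
print(round(posterior, 3))  # 0.632
```

In the ETA's terms, a belief is "criticizable" by checking whether it could have come out of updates like this one from a reasonable prior.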

Pancritical rationalism is a fundamental value in Extropianism that has only been mentioned in passing on LessWrong. I think it deserves more attention here. It's an approach to epistemology (that is, the question "How do we know what we know?") that avoids the contradictions inherent in some of the alternative approaches.

The fundamental source document for it is William Bartley's Retreat to Commitment. He describes three approaches to epistemology, along with the dissatisfying aspects of the other two:

  • Nihilism. Nothing matters, so it doesn't matter what you believe. This path is self-consistent, but it gives no guidance.
  • Justificationalism. Your belief is justified because it is a consequence of other beliefs. This path is self-contradictory. Eventually you'll go in circles trying to justify the other beliefs, or you'll find beliefs you can't justify. Justificationalism itself cannot be justified.
  • Pancritical rationalism. You have taken the available criticisms for the belief into account and still feel comfortable with the belief. This path gives guidance about what to believe, although it does not uniquely determine one's beliefs. Pancritical rationalism can be criticized, so it is self-consistent in that sense.

Read on for a discussion about emotional consequences and extending this to include preferences and behaviors as well as beliefs.

continue reading »

Suffering as attention-allocational conflict

49 Kaj_Sotala 18 May 2011 03:12PM

I previously characterized Michael Vassar's theory on suffering as follows: "Pain is not suffering. Pain is just an attention signal. Suffering is when one neural system tells you to pay attention, and another says it doesn't want the state of the world to be like this." While not too far off the mark, it turns out this wasn't what he actually said. Instead, he said that suffering is a conflict between two (or more) attention-allocation mechanisms in the brain.

I have been successful at using this different framing to reduce the amount of suffering I feel. The method goes like this. First, I notice that I'm experiencing something that could be called suffering. Next, I ask, what kind of an attention-allocational conflict is going on? I consider the answer, attend to the conflict, resolve it, and then I no longer suffer.

An example is probably in order, so here goes. Last Friday, there was a Helsinki meetup with Patri Friedman present. I had organized the meetup, and wanted to go. Unfortunately, I already had other obligations for that day, ones I couldn't back out from. One evening, I felt considerable frustration over this.

Noticing my frustration, I asked: what attention-allocational conflict is this? It quickly became obvious that two systems were fighting it out:

* The Meet-Up System was trying to convey the message: ”Hey, this is a rare opportunity to network with a smart, high-status individual and discuss his ideas with other smart people. You really should attend.”
* The Prior Obligation System responded with the message: ”You've already previously agreed to go somewhere else. You know it'll be fun, and besides, several people are expecting you to go. Not going bears an unacceptable social cost, not to mention screwing over the other people's plans.”

Now, I wouldn't have needed to consciously reflect on the messages to be aware of them. It was hard to not be aware of them: it felt like my consciousness was in a constant crossfire, with both systems bombarding it with their respective messages.

But there's an important insight here, one which I originally picked up from PJ Eby. If a mental subsystem is trying to tell you something important, then it will persist in doing so until it's properly acknowledged. Trying to push away the message means it has not been properly addressed and acknowledged, meaning the subsystem has to continue repeating it.

continue reading »

No, Seriously. Just Try It.

48 lukeprog 20 April 2011 04:11PM

In Scientific Self-Help, I explained that huge sections of the self-help industry pay little or no attention to the scientific data on self-help. Partly, this is because self-help products are usually written to sell, not to help.

Another reason for this is that there are huge gaps in our scientific knowledge about self-help. Unlike electrons, humans are complex beings and very different from each other.

When considering a self-help goal, it may be helpful to at least start with methods that have been scientifically demonstrated to work on a large number of people. On the other hand, there are so many gaps in our knowledge that it's definitely worth just trying things to see what works for you. This point has been recently emphasized by atucker in Go Try Things, Don't Fear Failure, and Just Try It: Quantity Trumps Quality. Also see: Use the Try Harder, Luke and Break Your Habits: Be More Empirical.

Self-experimenters like Tim Ferriss and Seth Roberts are masters of the Just Try It method.

To cure his insomnia, Seth Roberts tried exercise, calcium supplements, and adjusting the lamps near his bed. In the end what worked was delaying his breakfast until 11am. Within a week, his insomnia was gone. Three months later he tried eating at 7am again, and the insomnia returned.

No controlled scientific study says that delaying breakfast until 11am will cure insomnia. For most insomniacs, it probably won't work. That's why it's important to Just Try It. In a way, you are a special snowflake, and the only way to figure out what works for you is to Just Try It. Controlled scientific studies are, pardon my language, a godsend - but you can't wait for busy scientists to decode your personal psychology. You're going to have to do that yourself.

Roberts did the same with dieting, trying an endless combination of things and weighing himself constantly. He found that drinking unflavored fructose water between meals did the trick, and he lost 35 pounds. Later, he discovered that a few teaspoonfuls of flavorless vegetable oil worked just as well.

See How to Run a Successful Self-Experiment and Quantified Self for ideas.

continue reading »

Details of Taskforces; or, Cooperate Now

15 paulfchristiano 05 April 2011 05:16PM

Recently I've spent a lot of time thinking about what exactly I should be doing with my life. I'm lucky enough to be in an environment where I can occasionally have productive conversations about the question with smart peers, but I suspect I would think much faster if I spent more of my time with a community grappling with the same issues. Moreover, I expect I could be more productive if I spent time with others trying to get similar things done, not to mention the benefits of explicit collaboration.

I would like to organize a nonstandard sort of meetup: regular gatherings with people who are dealing with the question "How do I do the most good in the world?" focused explicitly on answering the question and acting on the answer. If I could find a group with which I am socially compatible, I might spend a large part of my time working with them. I am going to use the term "taskforce" because I don't know of a better one. It is vaguely related to but quite different from the potential taskforces Eliezer discusses.

Starting such a taskforce requires making many decisions.

Size:

I believe that even two people who think through issues together and hold each other accountable are significantly more effective than two people working independently. At the other limit, eventually the addition of individuals doesn't increase the effectiveness of the group and increases coordination costs. Based on a purely intuitive feeling for group dynamics, I would feel most comfortable with a group of 5-6 until I knew of a better scheme for productively organizing large groups of rationalists (at which point I would want to grow as large as that scheme could support). I suspect in practice there will be huge constraints based on interest and commitment; I don't think this is a terminal problem, because there are probably significant gains even for 2-4 people, and I don't think it's a permanent one, because I am optimistic about our ability as a community to grow rapidly.

Frequency:

Where I am right now in life, I believe that thinking about this question and gathering relevant evidence is the most important thing for me to be doing. I would be comfortable spending several hours several times a week working with a group I got along with. In practice, scheduling issues and interest limitations will constrain this, so I would like to invest as much time as schedules and interests allow. I think the best plan is to allow and expect self-modification: make the choice of time-commitment an explicit decision controlled by the group. Meeting once a week seems like a fair default which can be supported by most schedules.

Concreteness:

There are three levels of concreteness I can imagine for the initial goals of a taskforce:

  • The taskforce is created with a particular project or a small collection of possible projects in mind. Although the possibility of abandoning a project is available (like all other changes), having a strong concrete focus may help a great deal with maintaining initial enthusiasm, attracting people, and fostering a sense of having a real effect on the world rather than empty theorizing. The risk is that, while I suspect many of us have many good ideas, deciding what projects are best is really an important part of why I care about interacting with other people. Just starting something may be the quickest way to get a sense of what is most important, but it may also slow progress down significantly.
  • The taskforce is created with the goal of converging to a practical project quickly. The discussion is of the form "How should we be doing the most good right now: what project are we equipped to solve given our current resources?" While not quite as focused as the first possibility, it does at least keep the conversation grounded.
  • The taskforce is created with the most open-ended possible goal. Helping its members decide how to spend their time in the coming years is just as important as coming up with a project to work on next week. A particular project is adopted only if the value of that project exceeds the value of further deliberation, or if working on a project is a good way to gather evidence or develop important skills.

I am inclined towards the most abstract level if it is possible to get enough support, since it is always capable of descending to either of the others. I think the most important question is how much confidence you have in a group of rationalists to understand the effectiveness of their own collective behavior and modify appropriately. I have a great deal, especially when the same group meets repeatedly and individuals have time to think carefully in between meetings.

Metaness:

A group may spend a long time discussing efficient structures for organizing, communicating, gathering information, making decisions, etc. Alternatively, a group may avoid these issues in favor of actually doing things--even if by doing things we only mean discussing the issues the group was created to discuss. Most groups I have been a part of have very much tried to do things instead of refining their own processes.

My best plan is to begin by working on non-meta issues. However, the ability of groups of rationalists to efficiently deliberate is an important one to develop, so it is worth paying a lot of attention to anything that reduces effectiveness. In particular, I would support very long digressions to deal with very minor problems as long as they are actually problems. Our experiences can be shared, any question answered definitively remains answered definitively, and any evidence gathered is there for anyone else who wants to see it. A procedural digression should end when it is no longer the best use of time--not because of a desire to keep getting things done for the sake of getting things done. Improving our rationality as individuals should be treated similarly; I am no longer interested in setting out to improve my rationality for the sake of becoming more rational, but I am interested in looking very carefully for failures of rationality that actually impact my effectiveness.

I can see how this approach might be dangerous; but it has the great advantage of being able to rescue itself from failure, by correctly noticing that entertaining procedural digressions is counter-productive. In some sense this is universally true: a system which does not limit self-examination can at least in principle recover from arbitrary failures. Moreover, it offers the prospect of refining the rationality of the group, which in turn improves the group's ability to select and implement efficient structures, which closes a feedback loop whose limit may be an unusually effective group.

Homogeneity:

A homogeneous taskforce is composed of members who face similar questions in their own lives, and who are more likely to agree about which issues require discussion and which projects they could work profitably on. An inhomogeneous taskforce is composed of members with a greater variety of perspectives, who are more likely to have complementary information and to avoid failures. In general, I believe that working for the common good involves enough questions of general importance (i.e., of importance to people in very different positions) that the benefits of inhomogeneity seem greater than the costs.

In practice, this issue is probably forced for now. Whoever is interested enough to participate will participate (and should be encouraged to participate), until there is enough interest that groups can form selectively.

Atmosphere:

In principle the atmosphere of a community is difficult to control. But the content of discussion and structure of expectations prior to the first meeting have a significant effect on the atmosphere. Intuitively, I expect there is a significant risk of a group falling apart immediately for a variety of reasons: social incompatibility, apparent uselessness, inability to maintain initial enthusiasm based on unrealistic expectations, etc. Forcing even a tiny community into existence is hard (though I suspect not impossible).

I think the most important part of the atmosphere of a community is its support for criticism, and willingness to submit beliefs to criticism. There is a sense (articulated by Orson Scott Card somewhere at some point) that you maintain status by never showing your full hand; by never admitting "That's it. That's all I have. Now you can help me decide whether I am right or wrong." This attitude is very dangerous when coupled with normal status-seeking, because it's not clear to me that it is possible to recover from it. I don't believe that having rational members is enough to avoid this failure.

I don't have any other observations, except that factors controlling atmosphere should be noted when trying to understand the effectiveness of particular efforts to start communities of any sort, even though such factors are difficult to measure or describe.

Finding People:

The internet is a good place to find people, but there is only a weak sense of personal responsibility throughout much of it, and committing to dealing with people you don't know well is hard/unwise. The real world is a much harder place to find people, but conversations in person quickly establish a sense of personal responsibility and can be used to easily estimate social compatibility. Most people are strangers, and the set of people who could possibly be convinced to work with a taskforce is extremely sparse. On the other hand, your chances of convincing an acquaintance to engage in an involved project with you seem to be way higher.

My hope is that LW is large enough, and unusual enough, that it may be possible to start something just by exchanging cheap talk here. At least, I think this is possible and therefore worth acting on, since alternative states of the world will require more time to get something like this rolling. Another approach is to use the internet to orchestrate low-key meetings, and then bootstrap up from modest personal engagement to something more involved. Another is to try and use the internet to develop a community which can better support/encourage the desired behavior. Of course there are approaches that don't go through the internet, but those approaches will be much more difficult and I would like to explore easy possibilities first.

Recovery from Failure:

I can basically guarantee that if anything comes of my desire, it will include at least one failure. The real cost of failure is extremely small. My fear, based on experience, is that every time an effort at social organization fails it significantly decreases enthusiasm for similar efforts in the future. My only response to this fear is: don't be too optimistic, and don't be too pessimistic. Don't stake too much of your hope on the next try, but don't assume the next try will fail just because the last one did. In short: be rational.


Conclusion:

There are more logistical issues, many reasons a taskforce might fail, and many reasons it might not be worth the effort. But I believe I can do much more good in the future than I have done in the past, and that part of that will involve more effectively exploiting the fact that I am not alone as a rationalist. Even if the only conclusion of a taskforce is to disband itself, I would like to give it a shot.

As groups succeed or fail, different answers to these questions can be tested. My initial impulse in favor of starting abstractly and self-modifying towards concreteness can be replaced by emulating the success of other groups. Of course, this is an optimistic vision: for now, I am focused on getting one group to work once.

I welcome thoughts on other high-level issues, criticism of my beliefs,  or (optimistically) discussions/prioritization of particular logistical issues. But right now I would mostly like to gauge interest. What arguments could convince you that such a taskforce would be useful / what uncertainties would have to be resolved? What arguments could convince you to participate? Under what conditions would you be likely to participate? Where do you live?

I am in Cambridge, am willing to travel anywhere in the Boston area, need no additional arguments to convince me that such a taskforce would be useful, and would participate in any group I thought had a reasonable chance of moderate success.

Don't Fear Failure

30 atucker 03 April 2011 10:52PM

Last post, I talked about how trying things out yourself is a good way to learn about them. This post, I'm going to talk about ideas that helped me overcome one of my major obstacles to trying something -- fear of failure.

Overestimation of Damages: "It's not that big a deal"

In most cases, failure really isn't that big of a deal. Really. The difference between failure and a null action is the attempt. If the attempt isn't damaging, failure isn't damaging.

A few cases:

  • Trying a new food/recipe? Maybe you don't like it and waste a few dollars. Maybe you mess it up. So you eat something else. Just don't do it for your first dinner with the in-laws and it should be fine. But a new dish might be totally delicious.
  • Total stranger you think might be a cool person? Maybe they get annoyed at you. Then you can just break off and never talk to/see them again. Chatting with people I run into has made college visits much more enjoyable.
  • Competition you might want to enter? Worst case scenario is that you lose. I learned a lot from Moody's Mega Math Challenge, and even though I don't think we did a particularly good job at modeling Lake Powell I still learned a lot about how mathematical modeling works.
  • Dance you want to try? The worst that's likely is that you look a bit silly. Laugh it off.

There are lots of things where an attempt is actually worse than doing nothing. Jumping halfway to the other side of the ledge, for instance. Or only removing most of the toxic part of a pufferfish. But for a lot of potentially high-value things, a failed attempt doesn't really do much, so you might as well try them.

Rationality and Failure: "Don't panic"

Some people I know basically buckle under failure. A common failure mode seems to be to do something badly, establish an ugh field around that area, and then continue in a downward spiral. Getting a B on a math test turns into "Ugh, math", turns into "well I was never really good at that anyway", turns into a complete lack of effort. Here a little failure becomes a huge problem. A failure isn't catastrophic on its own, but giving up is.

continue reading »
