Related to: Go forth and create the art!, A sense that more is possible.

If you talk to any skilled practitioner of an art, they have a sense of the depths beyond their present skill level.  This sense is important.  To create an art, or to learn one, one must have a sense of the goal.

By contrast, when I chat with many at Less Wrong meet-ups, I often hear a sense that mastering the sequences will take one most of the way to "rationality", and that the main thing to do, after reading the sequences, is to go and share the info with others.  I would therefore like to sketch the larger thing that I hope our rationality can become.  I have found this picture useful for improving my own rationality; I hope you may find it useful too.

To avoid semantic disputes, I tried to generate my picture of "rationality" by asking not "What should 'rationality' be?" but "What is the total set of simple, domain-general hacks that can help humans understand the most important things, and achieve our goals?"  or "What simple tricks can help turn humans -- haphazard evolutionary amalgams that we are -- into coherent agents?"

The branches

The larger "rationality" I have in mind would include some arts that are well-taught on Less Wrong, others that don't exist yet at all, and others that have been developed by outside communities from which we could profitably steal.

Specifically, a more complete art of rationality might teach the following arts:

1.  Having beliefs: the art of having one's near-mode anticipations and far-mode symbols work together, with the intent of predicting the outside world.  (The Sequences, especially Mysterious answers to mysterious questions, currently help tremendously with these skills.)

2.  Making your beliefs less buggy -- about distant or abstract subjects.  This art aims to let humans talk about abstract domains in which the data doesn’t hit you upside the head -- such as religion, politics, the course of the future, or the efficacy of cryonics -- without the conversation turning immediately into nonsense.  (The Sequences, and other discussions of common biases and of the mathematics of evidence, are helpful here as well.)

3. Making your beliefs less buggy -- about yourself.  Absent training, our models of ourselves are about as nonsense-prone as our models of distant or abstract subjects.  We often have confident, false models of what emotions we are experiencing, why we are taking a given action, how our skills and traits compare to those around us, how long a given project will take, what will and won’t make us happy, and what our goals are.  This holds even for many who've studied the Sequences and who are reasonably decent on abstract topics; other skills are needed.[1]

4.  Chasing the most important info: the art of noticing what knowledge would actually help you. A master of this art would continually ask themselves: "What do I most want to accomplish?  What do I need to know, in order to achieve that thing?". They would have large amounts of cached knowledge about how to make money, how to be happy, how to learn deeply, how to effectively improve the world, and how to achieve other common goals.  They would continually ask themselves where telling details could be found, and they would become interested in any domain that could help them.[2]

As with the art of self-knowledge, Less Wrong has barely started on this one.

5.  Benefiting from everyone else's knowledge.  This branch of rationality would teach us:

  • Which sorts of experts, and which sorts of published studies, are trustworthy in which ways; and
  • How to do an effective literature search, read effectively, interview experts effectively, or otherwise locate the info we need.

Less Wrong and Overcoming Bias have covered pieces of this, but I'd bet there's good knowledge to be found elsewhere.

6.  The art of problem-solving: how to brainstorm up a solution once you already know what the question is.  Eliezer has described parts of such an art for philosophy problems[3], and Luke Grecki summarized Polya's "How to Solve It" for math problems, but huge gaps remain.

7.  Having goals.  In our natural state, humans do not have goals in any very useful sense.  This art would change that, e.g. by such techniques as writing down and operationalizing one's goals, measuring progress, making plans, and working through one's emotional responses until one is able, as a whole person, to fully choose a particular course.

Much help with goal-achievement can be found in the self-help and business communities; it would be neat to see that knowledge fused with Less Wrong.[4]

8.  Making your goals less buggy.  Even insofar as we do act on coherent goals, our goals are often "buggy" in the sense of carrying us in directions we will predictably regret. Some skills that can help include:

  • Skill in noticing and naming your emotions and motivations (art #3 above);
  • Understanding what ethics is, and what you are.  Sorting out religion, free will, fake utility functions, social signaling patterns, and other topics that disorient many.
  • Being on the look-out for lost purposes, cached goals or values, defensiveness, wire-heading patterns, and other tricks your brain tends to play on you.
  • Being aware of, and accepting, as large a part of yourself as possible.

Parts of a single discipline

Geometry, algebra, and arithmetic are all “branches of mathematics”, rather than stand-alone arts: they build on a common set of thinking skills, and skill in each of these branches boosts one’s problem-solving ability in the others.

My impression is that the above arts are all branches of a single discipline ("rationality") in roughly the same sense in which arithmetic, algebra, etc. are branches of mathematics.  For one thing, all of these arts have a common foundation: they all involve noticing what one's brain is doing, and asking if those mental habits are serving one's purpose or if some other habits would work better.

For another thing, skill at many of the above arts can help with many of the others. For example, knowing your motivations can help you debug your reasoning, since you’re much more likely to find the truth when you want the truth.  Asking “what would I expect to see, if my theory was true? if it was false?” is useful for both modeling the future and modeling yourself.  Acquiring coherent goals makes it easier to wholeheartedly debug one’s beliefs, without needing to flinch away.  And so on.

It therefore seems plausible that jointly studying the entire above discipline (including whatever branches I left out) would give one a much larger cross-domain power boost, and higher performance in each of the above arts, than one gets from only learning the Less Wrong sequences.

 


[1] That is: Bayes' theorem and other rules of reasoning do work for inferring knowledge about oneself.  But Less Wrong hasn't walked us through the basics of applying them to self-modeling, such as noting that one must infer one's motives through a process of ordinary inference (“What actions would I expect to see if I was trying to cooperate?  What actions would I expect to see if I was instead trying to vent anger?”) and not by consulting one's verbal self-model. It also has said very little about how to gather data about oneself, how to reduce one's biases on the subject, etc. (although Alicorn's Luminosity sequence deserves mention).
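For concreteness, here is a minimal worked sketch of the inference the footnote describes, treating "what was I trying to do?" as an ordinary Bayesian update on observed behavior; the priors and likelihoods below are invented for illustration.

```python
# Toy numbers, for illustration only: inferring one's own motive from an
# observed action (a sharp reply in conversation), rather than by consulting
# one's verbal self-model.

prior = {"cooperate": 0.7, "vent_anger": 0.3}                       # assumed priors
likelihood_of_sharp_reply = {"cooperate": 0.1, "vent_anger": 0.6}   # assumed likelihoods

# Bayes: P(motive | sharp reply) is proportional to P(sharp reply | motive) * P(motive)
unnormalized = {m: prior[m] * likelihood_of_sharp_reply[m] for m in prior}
total = sum(unnormalized.values())
posterior = {m: round(p / total, 2) for m, p in unnormalized.items()}

print(posterior)  # {'cooperate': 0.28, 'vent_anger': 0.72}
```

The numbers don't matter; the point is that the posterior comes from observed actions and explicit likelihoods, not from asking the verbal self-model directly.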

[2] Michael Vassar calls this skill “lightness of curiosity”, by analogy to the skill “lightness of beliefs” from Eliezer’s 12 virtues of rationality.  The idea here is that a good rationalist should have a curiosity that moves immediately as they learn what information can help them, much as a good rationalist should have beliefs that move immediately as they learn which way the evidence points.  Just as a good rationalist should not call reality "surprising", so also a good rationalist should not call useful domains "boring".

[3] I.e., Eliezer's posts describe parts of an art for cases such as "free will" in which the initial question is confused, and must be dissolved rather than answered.  He also notes the virtue of sustained effort.

[4] My favorite exceptions are Eliezer's post Something to protect and Alicorn's City of Lights technique.  If you're looking for good reading offsite on how to have coherent goals, I'd second Patri's recommendation of Brian Tracy's books.

Comments

"What simple tricks can help turn humans -- haphazard evolutionary amalgams that we are -- into coherent agents?"

One trick that we can always apply: disassemble the human and use his atoms to build a paperclip maximizer. The point is, we don't just want to turn humans into coherent agents, we want to turn them into coherent agents who can be said to have the same preferences as the original humans. But given that we don't have a theory of preferences for incoherent agents, how do we know that any given trick intended to improve coherence is preference-preserving? Right now we have little to guide us except intuition.

To borrow an example from Robin Hanson, we have both preferences that are consciously held, and preferences that are unconsciously held, and many "rationality techniques" seem to emphasize the consciously held preferences at the expense of unconsciously held preferences. It's not clear this is kosher.

I think there are many important unsolved problems in the theoretical/philosophical parts of rationality, and this post seems to under-emphasize them.

I think there are many important unsolved problems in the theoretical/philosophical parts of rationality, and this post seems to under-emphasize them.

What would a picture of rationality’s goal that correctly emphasized them look like?

I think it should at least mention prominently that there is a field that might be called "theory of rationality" and perhaps subdivided into "theory of ideal agents" and "theory of flawed agents", and we still know very little about these subjects (the latter even less than the former), and as a result we have little theoretical guidance for the practical work.

I'm tempted to further say that work on theory should take precedence over work on practice at this point, but that's probably just my personal bias speaking. In any case people will mostly work on what they intuitively think is interesting or important, so I just want to make sure that people who are interested in "rationality" know that there are lots of theoretical problems that they might consider interesting or important.

The point is, we don't just want to turn humans into coherent agents, we want to turn them into coherent agents who can be said to have the same preferences as the original humans. But given that we don't have a theory of preferences for incoherent agents, how do we know that any given trick intended to improve coherence is preference-preserving? Right now we have little to guide us except intuition.

I absolutely agree. The actual question I had written on my sheet, as I tried to figure out what a more powerful “rationality” might include, was “... into coherent agents, with something like the goals ‘we’ wish to have?” Branch #8 above is exactly the art of not having the goals-one-acts-on be at odds with the goals-one-actually-cares-about (and includes much mention of the usefulness of theory).

My impression, though, is that some of the other branches of rationality in the post are very helpful for self-modifying in a manner you’re less likely to regret. Philosophy always holds dangers, but a person approaching the question of “What goals shall I choose?”, and encountering confusing information that may affect what he wants (e.g., encountering arguments in meta-ethics, or realizing his religion is false, or realizing he might be able to positively or negatively affect a disorienting number of lives) will be much better off if he already has good self-knowledge and has accepted that his current state is his current state (vs. if he wants desperately to maintain that, say, he doesn’t care about status and that only utilitarian expected-global-happiness-impacts affect his behavior -- a surprisingly common nerd failure mode).

I don’t know how to extrapolate the preferences of myself or other people either, but my guess is, while further theoretical work is critical, it’ll be easier to do this work in a non-insane fashion in the context of a larger, or more whole-personed, rationality. What are your thoughts here?

very helpful for self-modifying in a manner you’re less likely to regret.

I don't think regret is the concern here... Your future self might be perfectly happy making paperclips. I almost think "not wanting your preferences changed" deserves a new term...Hmm, "pre-gret"?

Useful concept, bad example.

Upvoted for 'pregret.'

I like to imagine another copy of my mind watching what I'm becoming, and being pleased. If I can do that, then I feel good about my direction.

You will find people who are willing to bite the "I won't care when I'm dead" bullet, or at least claim to - it's probably just the abstract rule-based part of them talking.

will be much better off if he already has good self-knowledge and has accepted that his current state is his current state

Everything here turns on the meaning of "accept". Does it mean "acknowledge as a possibly fixable truth" or does it mean "consciously endorse"? I think you're suggesting the latter but only defending the former, which is much more obviously true.

he wants desperately to maintain that, say, he doesn’t care about status and that only utilitarian expected-global-happiness-impacts affect his behavior -- a surprisingly common nerd failure mode

Is the disagreement here about what his brain does, or about what parts of his brain to label as himself? If the former, it's not obviously common, if the latter, it's not obviously a failure mode.

will be much better off if he already has good self-knowledge and has accepted that his current state is his current state

Everything here turns on the meaning of "accept". Does it mean "acknowledge as a possibly fixable truth" or does it mean "consciously endorse"?

Those both sound like basically verbal/deliberate activities, which is probably not what Anna meant. I would say "not be averse to the thought of".

I don't have much data here, but I guess none of us do. Personally, I haven't found it terribly helpful to learn that I'm probably driven in large part by status seeking, and not just pure intellectual curiosity. I'm curious what data points you have.

That is interesting to me because finding out I am largely a status maximizer (and that others are as well) has been one of the most valuable bits of information I've learned from OB/LW. This was especially true at work, where I realized I needed to be maximizing my status explicitly as a goal and not feel bad about it, which allowed me to do so far more efficiently.

You, upon learning that you're largely a status maximizer, decided to emphasize status seeking even more, by doing it on a conscious level. But that means other competing goals (I assume you must have some) have been de-emphasized, since the cognitive resources of your conscious mind are limited.

I, on the other hand, do not want to want to seek status. Knowing that I'm driven largely by status seeking makes me want to self-modify in a way that de-emphasizes status seeking as a goal (*). But I'm not really sure either of these responses are rational.

(*) Unfortunately I don't know how to do so effectively. Before, I'd just spend all of my time thinking about a problem on the object level. Now I can't help but periodically wonder if I believe or argue for some position because it's epistemically justified, or because it helps to maximize status. For me, this self doubt seems to sap energy and motivation without reducing bias enough to be worth the cost.

This is the simple version of the explicit model I have in my head at work now: I have two currencies, Dollars and Status. Every decision I make likely has some impact both in terms of our company's results (Dollars) and also in terms of how I and others will be perceived (Status). The cost in Status to make any given decision is a reducing function of current Status. My long term goal is to maximize Dollars. However, often the correct way to maximize Dollars in the long term is to sacrifice Dollars for Status, bank the Status and use it to make better decisions later.

I think this type of thing should be common. Status is a resource that is used to acquire what you want, so in my mind there's no shame in going after it.
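A minimal sketch of that two-currency bookkeeping, with invented numbers; the particular cost function (status cost shrinking as current status grows) is one assumed form of the "reducing function" mentioned above, not anything specified in the comment.

```python
# Toy model of the "Dollars and Status" bookkeeping described above.
# All numbers, and the exact cost function, are invented for illustration.

def status_cost(base_cost, current_status):
    # Assumed form of "a reducing function of current Status":
    # the more status you have, the less a given decision costs you.
    return base_cost / (1 + current_status)

dollars, status = 0.0, 1.0

decisions = [
    # (dollar_payoff, status_payoff, base_status_cost)
    (-10, +3, 1.0),   # sacrifice dollars now to bank status
    (+50, -1, 2.0),   # later, spend the banked status on a dollar-positive call
]

for dollar_payoff, status_payoff, base_cost in decisions:
    status -= status_cost(base_cost, status)
    status += status_payoff
    dollars += dollar_payoff

print(round(dollars, 2), round(status, 2))  # dollars are terminal; status is instrumental
```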

How do time constraints play into this model?

Do you ever find yourself in situations where you would predict different things if you thought you were a pure-intellectual-curiosity-satisfier than if you think you're in part a status-maximizer?

If so, is making more accurate predictions in such situations useful, or do accurate predictions not matter much?

I suspect that if I thought of myself as a pure-intellectual-curiosity-satisfier, I would be a lot more bewildered by my behavior and my choices than I am, and struggle with them a lot more than I do, and both of those would make me less happy.

If the way you seek status is ethical ("do good work" more than "market yourself as doing good work") then you may not want to change anything once you discover your "true motivation". And the alternative "don't care about anything" hardly entices.

I, the entity that is typing these words, do not approve of unconscious preferences when they conflict with conscious ones.

I think there are many important unsolved problems in the theoretical/philosophical parts of rationality, and this post seems to under-emphasize them.

Agreed to an extent, but most folk aren't out to become Friendliness philosophers. One branch that went unmentioned in the post and would be useful for both philosophers and pragmatists includes the ability to construct (largely cross-domain / overarching) ontologies out of experience and abstract knowledge, the ability to maintain such ontologies (propagating beliefs across domains, noticing implications of belief structures and patterns, noticing incoherence), and the disposition of staying non-attached to familiar ontologies (e.g. naturalism/reductionism) and non-averse to unfamiliar/enemy ontologies (e.g. spiritualism/phenomenology). This is largely what distinguishes exemplary rationalists from merely good rationalists, and it's barely talked about at all on Less Wrong.

It seems that one useful distinction that isn't made often enough is between "knowing how one ought to think" and "implementing the necessary changes".

"knowing how one ought to think" is covered in depth here, while "implementing the necessary changes" is glossed over for the most part.

To some extent, the fixing part happens on its own. If you call the grocery store and they're closed, you're not going to want to go there anymore to pick up food - even without doing anything explicitly to propagate that belief.

However, not everything happens automatically for all people. It looks like some aspects of religion refuse to leave even seven years after becoming atheist. These are also the types of things that most people wouldn't notice at all, let alone the connection to their former religion.

I think more effort is due on this front, but I don't know of any easy-to-steal-from source. There are a lot of related areas (CBT, NLP, salesmanship, self-help, etc.), but they tend to have a low signal-to-noise ratio and serve more as pointers towards ideas to explore than as definitive sources.

We are probably inadvertently selecting for people who have some self-taught/innate ability to self-modify, since people who aren't good at this don't get as much out of the sequences. I'm currently working with a couple of people that are bad at this to see if it's something simple and easily fixed, and will report back with lessons learned.

Yeah. I've been thinking a bit about this. For some people, realising that they can think about thinking is revelatory, and the notion that they can change how they think fills them with fearful trepidation.

What other extremely basic things like this are there?

Do you have anecdotes about this? If you do I would be interested to hear them.

This sprang to mind with a particularly scary one - when I was out trolling the Scientologists in my dissolute youth.

A word about the victim who was already in there. She was a good example of Scientology working on people only insofar as they actually notice their minds for the first time EVER - which will be a productive thing - and then attribute any gains to Scientology itself rather than their own efforts. She was utterly BLOWN AWAY by the meagre thrill of a chat about Dianetics, to the point of going out right there and then to get $100 out of the bank to buy a course.

This is what you can get from just getting someone to notice their mind for the first time, and taking the credit for such.

I'd be interested in hearing more about this. Are there a lot of people like this? Are there polite ways to introduce people to this notion (I imagine some people would be somewhat insulted if you assumed they didn't have this notion)?

That's the thing - I don't know! But it's such a HUGE WIN technique.

And a serious susceptibility, if you're introduced to it by the Renfields of a parasitic meme. And even if you already have it, if they get in through your awareness of the power of the idea of self-improvement.

I'm also wondering at other techniques - such as how to teach the notion of rationality by starting with small-scale instrumental rationality and expanding from there. I have no idea if that would work, but it sounds plausible. Of course, that can lead to the Litany of the Politician: "If believing this will get me what I want, then I want to believe this." I expect I'd need to come up with a pile of this stuff then actually test it on actual people.

We are probably inadvertently selecting for people who have some self-taught/innate ability to self-modify, since people who aren't good at this don't get as much out of the sequences. I'm currently working with a couple of people that are bad at this to see if it's something simple and easily fixed, and will report back with lessons learned.

I'm extremely glad to hear that. Do please report back, whether or not you succeed in teaching the skills.

Your overall point also seems very good.

I have a detailed progress report for you!

Not much progress (or time) with one person, due to a depression-related lack of motivation (still struggling to find a method to tackle the motivation part, and it's sorta a prerequisite for progress).

But good results with the other.

I wanted to instill a solid big picture base of "what are we doing, and why?", so I spent a few hours describing in depth the idea of identity/belief space, attractors, the eerily strong commitment and consistency effects, and other forces that push us around.

From there I went on to talk about the importance of separating the bad qualities from the rest of our identity and creating a strong sense of indignation towards the problem part.

The particular step by step instructions were to identify the problem thoughts/behavior and say "fuck that!" directed at the problem. For example, if you get upset from something that you don't want to upset you- say "Fuck that" with the attitude of "I don't want that to upset me, and I don't want to be the kind of person that gets upset by that".

It was extremely effective right off the bat. The person in question fixed about a handful of problems that day without so much as a word (problems that had been persisting for months because one sentence explanations weren't enough). From the outside at least, it looked as if the problems hadn't ever been there. With 30 seconds of effort, it was able to completely reverse a bad mood (i.e. tears to persisting genuine laughter and smiles).

It's worth noting that actually saying "fuck that" out loud was more effective than saying it mentally, and that saying it repeatedly, forcefully, and in some cases jokingly was sometimes necessary.

After a couple weeks, the technique described stopped being as effective and stopped being used. It looked like a case of lost purpose - the words were being recited without the necessary attitude - as if it were supposed to be magic.

I had been coasting, hoping that my job was done, but apparently she lacked the introspective skills necessary to notice attitude slipping. I guess more effort needs to be put into creating thought maintenance habits.

Another serious conversation about what the point of all this is and how, exactly, this was supposed to work got her back on track. All the positive signs of progress are back and exceeding my expectations in directions only tangentially related to what was explicitly discussed.

I think it will take more work to get her to the point of truly appreciating the sequences. I've got her to the level of easily knocking down trivially seen obstacles, but I think in order to get the most out of the sequences, you have to be able to notice the importance of things that are more subtle and take them seriously. With luck, this could be taught as easily, but for now my effort is on 'error correcting code' to keep her on track.

In addition to what you've covered here, I think there's a substantial and largely unexplored set of techniques necessary for groups to act rationally. #5 on your list, efficiently accessing others' knowledge, is one aspect of this; Eliezer's "Craft and the Community" sequence also establishes a goal and some good examples of what not to do. Aside from that, though, we don't seem to have collected much instrumental knowledge in that domain.

I actually had "Forming effective teams" on my original list, but then erased it and several others to avoid muddying the discussion with borderline cases of “rationality”. But collecting best practices for teamwork would be extremely cool.

Some other items from my original brainstorm list:

  1. Good brain and body health. Exercise, sufficient sleep, social connectedness, regular concrete accomplishments, and anything else that helps keep the brain in its zone of intended functioning. Maintaining high energy levels and a habit of rapidly implementing new ideas and gathering data, so as to avoid building up excuse mechanisms around tasks or inferences that require effort.

  2. Analogy and pattern-recognition. Much of your inferential power comes from automatic processes of pattern recognition. One could learn to train this process on good examples (that will help it correctly predict the problems you’re actually facing), and to notice what sort of an impression you have in a given instance, and, from track records and/or priors, how likely that impression is to be correct.

  3. Skills learning. Become skilled at learning non-verbal or implicit “doing” competencies, and at trading information back and forth between verbal and non-verbal systems. Example competencies include emotional self-regulation, posture and movement, driving, social perception and interaction, drawing, and martial arts.

Editing is a nontrivial skill and I think you used it well to cut the above topics, since team building and the above three topics seem like consequences of the 8 listed branches.

Team building is a critical skill for making multigenerational changes outside of the hard sciences. For this reason I think it deserves special attention, even if its importance can be derived from your 8 branches and humanity's current evolutionary state.

Related is another branch that has received relatively little attention (due to a sort of taboo, possibly not unjustified): how to spread one's beliefs (and/or goals) to other people.

A good point. I wonder who does have that knowledge?

A good point. I wonder who does have that knowledge?

John Boyd's concept of the OODA loop seems to be relevant here:

The OODA loop (for observe, orient, decide, and act) is a concept originally applied to the combat operations process, often at the strategic level in military operations. It is now also often applied to understand commercial operations and learning processes. The concept was developed by military strategist and USAF Colonel John Boyd.

OODA stands for:

  • Observation: the collection of data by means of the senses
  • Orientation: the analysis and synthesis of data to form one's current mental perspective
  • Decision: the determination of a course of action based on one's current mental perspective
  • Action: the physical playing-out of decisions

To put this into rationality-language:

  • Look at the territory;
  • Draw a correct map of the territory based on what you saw;
  • Plan the route through the map to where you want to be;
  • Hit the road.

The concept itself, as described on Wikipedia, doesn't mention groups per se, but Boyd's work on OODA seems to include specific thoughts on groups. Here's a related quote from Wikipedia:

[Boyd] stated that most effective organizations have a highly decentralized chain of command that utilizes objective-driven orders, or directive control, rather than method-driven orders in order to harness the mental capacity and creative abilities of individual commanders at each level.
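As a rough illustration of the cycle, here is a minimal OODA-style loop; the function bodies and the toy world state are placeholders, not part of Boyd's formulation.

```python
# Minimal sketch of an OODA-style control loop. The observe/orient/decide/act
# functions are placeholders; a real agent would supply its own.

def observe(world):
    return world["visible_facts"]            # look at the territory

def orient(observations, belief_map):
    belief_map.update(observations)          # redraw the map from what was seen
    return belief_map

def decide(belief_map, goal):
    # Plan the route through the map toward the goal (trivial rule; goal unused here).
    return "act_toward_goal" if belief_map.get("path_clear") else "gather_more_info"

def act(world, decision):
    world["last_action"] = decision          # hit the road

world = {"visible_facts": {"path_clear": True}, "last_action": None}
beliefs = {}

for _ in range(3):                           # the loop repeats as the world changes
    obs = observe(world)
    beliefs = orient(obs, beliefs)
    choice = decide(beliefs, goal="where you want to be")
    act(world, choice)

print(world["last_action"], beliefs)
```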

Thanks, rationality skill discovered!

Good question. I think the first place I'd look is business, followed by the military (probably at more the strategic than the tactical level). There's probably also some institutional wisdom floating around in academia, although I'm not sure what fields I'd look at first; most of the research I've personally been involved in didn't involve much distribution of decision-making between team members and thus was something of a degenerate case.

Somewhat related to this is the relatively recent field of social epistemology. Just a heads-up.

One question I've been chewing on is not "how to ask the right question" (there are lots of techniques to apply to this one) but "how to notice there's a question to ask." This feels like it's important but I have no idea how to attack it. Ideas welcomed.

Especially with vastly abstract topics -- economics, philosophy, etc. -- I find nothing substitutes for working through concrete examples. My brain's ability to just gloss over abstractions is hard to overestimate. So I've sort of trained myself to sound an alarm whenever my feet "don't touch bottom"... that is, when I can't think of concrete examples of the thing I'm talking about.

For example: I remember a few years ago suddenly realizing, in the course of a conversation about currency exchange rates, that I had no idea how such rates are set. I had been working with them, but had never asked myself where they come from... I hadn't really been thinking about them as information that moves from one place to another, but just vaguely taking them for granted as aspects of my environment. That was embarrassing.

Another useful approach in particular domains is to come up with a checklist of questions to ask about every new feature of that domain, and then ask those questions every time.

This is something I started doing a while ago as part of requirements analysis, and it works pretty well in stable domains, though I sheepishly admit that I dropped the discipline once I had internalized the questions. (This is bad practice and I don't endorse it; checklists are useful.)

It's not quite so useful as a general-analysis technique, admittedly, because the scale differences start to kill you. Still it's better than nothing.

Also, at the risk of repeating myself, I find that restating the thing-about-which-the-question-is-or-might-be is a good way to make myself notice gaps.

Could you post your checklist, or if it is domain specific, something that is more general but based on it?

Yeah, I knew someone was going to ask. Sadly, I can't, for proprietary reasons. But a general sense:

  • For each high-level action to be taken: is this a choicepoint (if so, what alternatives are there, and who chooses, and when is that choice made, and can it be changed later)? is this a potential endpoint, intentional or otherwise (if so, do we have to clean up, and how do we do that? what happens next?) is it optional (see choicepoint)? should we log the action? should we journal the action?

  • For each decision to be made: on what data structure does that decision depend? how does that data structure get populated, and by what process, and is that process reliable (and if not, how do we validate the data structure)? What happens if that data structure is changed later? Where does that decision get logged? Where does the data structure get exposed, and what processes care about it, and what do they need to do with it?

  • For each data structure to be instantiated and/or marshalled: what latency/throughput requirements are there? are they aggregate or individual, and do they need to be monitored? need they be guaranteed? how long do they need to persist for, and what happens then (e.g., archiving)? What's the estimated size and volume?

Etc., etc., etc.
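For illustration, here is one way such a checklist might be kept in a machine-readable form so that no category of question gets skipped; the categories and questions below are a cut-down paraphrase of the comment, not the actual (proprietary) checklist.

```python
# A cut-down, paraphrased version of the review checklist sketched above.

CHECKLIST = {
    "high-level action": [
        "Is this a choicepoint? Who chooses, when, and can the choice be changed later?",
        "Is this a potential endpoint? What cleanup is needed, and what happens next?",
        "Should this action be logged or journaled?",
    ],
    "decision": [
        "What data structure does it depend on, and how is that populated and validated?",
        "What happens if that data structure changes later?",
        "Where is the decision logged, and which downstream processes consume it?",
    ],
    "data structure": [
        "What latency/throughput requirements apply, and are they monitored?",
        "How long must it persist, and what happens then (e.g. archiving)?",
        "What are the estimated size and volume?",
    ],
}

def open_questions(feature_name, feature_kind):
    # Emit every checklist question for this kind of feature, so none is skipped.
    return [(feature_name, q) for q in CHECKLIST[feature_kind]]

for feature, question in open_questions("order-submission step", "decision"):
    print(f"[{feature}] {question}")
```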

I sometimes find myself noticing unasked questions when I try to rigorously prove (or just make a convincing argument for) something obvious. And I find myself looking for proofs and/or convincing arguments when I try to generalize something obvious to something not quite so obvious.

Good examples of this can be found in the famous thought experiments of theoretical physics - Maxwell's demons, Carnot's engines, and Einstein's moving observers seeking to synchronize clocks. Examples in the analysis of rationality can be found in the thinking leading to the famous axiomatic characterizations of rational decision making - von Neumann's rational utility maximizers, Nash's rational bargainers, Rubinstein's impatient variant bargainers, and Harsanyi's empathetic utilitarians.

But moving now to the meta-level: You mention two questions - how to ask the right question and how to notice there is a question to be asked. You suggest that the second of these is the true 'right question'. May I suggest that perhaps the best question is "how do you know that you are asking the right question once you have discovered the question?"

Which thinking just draws you in deeper. The 'right' question, to my mind, is the fruitful one - the one that allows you to make progress. But then we need to ask: what do we mean by progress? How do we recognize that we are making progress?

It took me nearly half a year to really grok how much farther I have to go. It's one thing to be reasonable in discussions, and quite another to notice opportunities to be rational in real life, and recognize how important it is to develop a healthy relationship with the truth, all the time.

What habits or tricks are helping you apply things in real life, that Less Wrong isn't explicitly teaching?

Well, for productivity, I've got a notebook where I quantify everything useful I do (and then run the stats using a website), and I've been using the Pomodoro method. Some introspection: I realized that I don't ask myself enough questions about what I'm learning, so I give myself "points" for questions posed. Everything I want to do more of (problems, concepts learned, exercise, errands) nets me positive points; everything I want to do less of (eating out, daytime sleeping, websurfing) gets me negative points. I also have an internet blocker that's been working out well for me. I've been reflecting regularly on what I want and what I'm doing it all for.

For having a better relationship with the truth -- being less afraid to find out unpleasant truths, being more eager to find out the state of reality -- I haven't got it all figured out yet, but one thing that helps is being in contact with supportive friends. It gives a sense that no matter what I discover, I can be honest with myself and others, and still be emotionally "safe." The other thing I'm trying to remember is to be more detail oriented, to recall that small or seemingly dull details often aren't superfluous, but essential. (Mathematical example: realizing that functors really have to be functorial or the results will be awful -- demonstrating functoriality isn't just a busy-work exercise.)

I have found that the best way to make myself ask questions about what I am learning is to force myself to take notes in my own words about what I have just read. It forces me to really think about what is important about the material.

(nods) Yes.

I discovered this when my job involved doing a lot of software design reviews; the only way I could make myself actually review the design (rather than read the design and nod my head) was to write my own summary document of what the design was.

This was particularly true for noticing important things that were missing (as opposed to noticing things that were there and false).

and then run the stats using a website

I'm looking for a good tracking / analysis site. Has yours earned a positive recommendation?

I've been using Joe's Goals. I like it because it's exactly what I was looking for. You can choose positive goals (things you want to do more of) and negative goals (things you want to do less of) and assign a weight to each. For instance, I get +4 for each homework problem I do. Then you can see a graph of total score or a spreadsheet of individual goals, over any time period you want.
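For concreteness, a minimal sketch of that kind of weighted scoring; apart from the +4 per homework problem mentioned above, the goal names and weights are invented.

```python
# Toy version of a weighted goal tracker like the one described above:
# positive goals add points, negative goals subtract, each with a weight.

from collections import defaultdict

weights = {
    "homework_problem": +4,   # weight taken from the comment above
    "exercise": +2,           # assumed
    "eating_out": -3,         # assumed
    "daytime_sleeping": -2,   # assumed
}

log = [
    ("2011-02-01", "homework_problem", 3),   # did 3 problems
    ("2011-02-01", "eating_out", 1),
    ("2011-02-02", "exercise", 1),
]

daily_totals = defaultdict(int)
for day, goal, count in log:
    daily_totals[day] += weights[goal] * count

print(dict(daily_totals))   # {'2011-02-01': 9, '2011-02-02': 2}
```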

If I succeed with what I'm aiming for (I'll know somewhere around May) I'll write a post about my experiences and what worked for me in terms of anti-akrasia techniques.

I often find, when I'm reading successful people's self-descriptions of what they do, that I think "Maybe that person's just naturally awesome and the same tactic would be impossible or useless for a mere mortal like me." So I'm saying this now. I am a mere mortal. If I succeed, it will be because of the changes I'm now making in my life, not because of my intrinsic superpowers. And that means you'll be able to follow my example.

Although 6 might contain it if interpreted broadly enough, I think that the practice of internalizing one's beliefs and using them to direct one's actions is an important focus that you didn't really address. By default, we do not tend to always update our behaviors according to what we believe even with high confidence.

Being on the look-out for lost purposes, cached goals or values, defensiveness, wire-heading patterns, and other tricks your brain tends to play on you.

Being aware of, and accepting, as large a part of yourself as possible.

These seem mutually contradictory.

I phrased them badly.

The following things, I think, are not mutually contradictory:

  • Noticing as much as possible about what your brain is doing. Taking special pains to notice defensiveness, wire-heading patterns, and other things like that that you might flinch away from seeing.
  • Accepting that your current state is in fact your current state. Getting used to the idea thoroughly, and to its details, so that you won't find yourself painfully flinching away to avoid seeing your current state.
  • Helping information get from one part of your brain to another. For example, I find I'm often defensive and mean when I don't want to notice something (e.g., when I don't want to notice that I'm stressed out about a task I haven't done). So, if I instead notice my discomfort about the unfinished task (which may have nothing to do with the poor person who just reminded me of the task), I can short-circuit my brain's defensive reaction.

I guess I did mean more than the above by "accepting as large a part of yourself as possible", but I'm not sure how to unpack it or whether I was right about the other parts.

For what it's worth, I deleted the grandparent comment (which said that 8.3 and 8.4 seemed mutually contradictory) shortly after posting, but would not have done so if I knew that someone was replying and that there would be a hole in the thread. Sorry about that.

Having goals: In our natural state, humans do not have goals in any very useful sense. This art would change that.

The link takes us to an earlier essay by Anna in which she wrote:

It also seems that having goals... is part of what “rational” should mean, will help us achieve what we care about ...

Anna seems to suggest that a person who goes through life with no goals more significant than a day-by-day maximization of utility is somehow 'wasting their potential' (my choice of words). But what arguments exist for the superiority of a life spent pursuing a few big goals over a life spent pursuing a multitude of smaller ones? Arguing that having goals will "help us to achieve what we care about" is pretty clearly circular.

It is easy to come up with arguments against being driven by largish goals and in favor of working for smallish ones. Less risk of having to scrap a work-in-progress inventory should circumstances change. Less danger of becoming a 'by whatever means necessary' megalomaniac who is a danger to the rest of mankind.

Is it equally easy to come up with non-question-begging arguments for setting significant goals for oneself?

Anna seems to suggest that a person who goes through life with no goals more significant than a day-by-day maximization of utility is somehow 'wasting their potential' (my choice of words).

That's not what I was trying to say at all. I was trying to note that we pursue even most of our day-to-day goals ineffectively, such that the result looks like a lack of coherent action rather than like choosing a satisfying small-scale life.

For example, I'll often find myself automatically blurting out a correction to something someone said in conversation, even in cases where such argument will decrease my enjoyment, the other person's enjoyment, and the relationship I'm trying to build. Or I'll find myself replaying worries as I walk home, instead of either enjoying the walk, thinking about something fun, or thinking about something useful. Or, as Eliezer notes, procrastinating with a process that is less fun as well as less productive. These automatic behaviors serve neither my day-to-day nor my larger scale goals; and automatic action-patterns like these seem to be more common than is coherent action toward (local or any other) purpose.

Correction accepted. Yes, the techniques that you advocate are effective at all scales. And a hunter-gatherer looking ahead no further than his next meal needs to think and act strategically just as much as does a philosopher looking ahead to the next singularity.

One possible line of argument:

If, as you imply, you find it compelling that being a danger to the rest of humankind is something to avoid, then presumably you should also find it compelling that reducing other such dangers is something to seek out.

And it's pretty clear that projects that require more than a day's effort to show results will never be undertaken by a system with no goals larger than day-to-day optimization.

It seems to follow from there that if there exist dangers to humankind that require more than a day's effort to measurably reduce, it's a good idea to have goals larger than day-to-day optimization.

The way I see it, in order to make good progress on these branches, we need... what's a good twist on the metaphor? A root, or a trunk, to branch from. That trunk would be epistemic rationality, how to be not wrong. LessWrong (the sequences, effectively) has made huge inroads on that goal.

I think we are at the point where LessWrongers should begin to focus on applying our rationality to these branches you identified, and producing top-level posts about these instrumental rationality concepts.

(I think the trunk/branch metaphor is a better way of understanding epistemic/instrumental rationality than "two interlinked fields". Epistemic rationality is the trunk, without which you can't do anything and with weak or rotten trunks, you can't do much without collapsing. But instrumental rationality is the branches; a trunk alone is not much of a tree at all. This also gets at the monolithic nature of epistemic rationality vs the many concepts and directions of instrumental rationality.)

Maybe?

While it's true that epistemic rationality, and accurate ideas are useful for everything, it's also true that really wanting something (having Something to protect, if you like) can give a huge boost to one's willingness to face unpleasant facts, and to one's epistemic rationality more generally.

Yup. I wasn't very clear (and I might be projecting my own particular mind here) but I feel like knowing about something to protect, knowing that really wanting something will boost your willingness to face facts, will let you make decisions so as to produce more situations where you're willing to face facts.

Hmm. I'm not explaining this very well. Okay. What we want is "doing things right". You could say, roughly, that epistemic rationality (what LW has focused on) is about "things right", and instrumental rationality (what your post after numbers 1 2 and 3 seems to be focused on) is "doing things". I was sort of saying "Look, LessWrong's done a good job on epistemic rationality, lets get the same thing happening for instrumental rationality" and that your post is a great launching pad.

... instrumental rationality (what your post after numbers 1 2 and 3 seems to be focused on) ...

5 and 6 are straight-forwardly about forming accurate beliefs. Much of 4 is also part of epistemic rationality, insofar as epistemic rationality is furthered by finding the few questions that can really improve one's total picture of the world, and e.g. understanding evolution, or what the multiverse is like, instead of learning more random details about fruit flies.

What about teaching other people these skills/helping other people become aware of their own incorrect beliefs? Is that completely separate?

While of course teaching others may be exactly the greatest utilitarian impact available, whenever someone's first and only move after coming to a new belief is to preach it, I suspect they're wireheading status or knowing-powerful-secrets (on top of my suspicion if they're seeking payment). If they were to actually test their ideas in practice, those ideas become more trustworthy - at the very least, bugs will be fixed, and their promotion will be more realistic and persuasive (if their ideas actually held up).

For example, some guru who sells a way of being happy and successful, may be happy and successful only because he gets the thrill of speaking in front of thousands of rapt listeners. If you take away his lecturing gig, what do his "highly effective habits" really earn him?

I think I have been neglecting 7 and 8 somewhat, so thanks for this.

Is the City of Lights link meant to correspond to this post? Currently it links to Alicorn's "shiny stories" post, which provides examples but not the technique itself.