
tDCS, Neuroscientists' Open Letter To DIY Brain Hackers

6 morganism 12 July 2016 07:37PM

"The evidence of harm would be the evidence that you can hurt some cognitive functions with the same stimulation protocols that help another cognitive function. But they're completely correct that we don't have any evidence saying you're definitely hurting yourself. We do have evidence that you're definitely changing your brain."

interview:

http://www.wbur.org/commonhealth/2016/07/11/caution-brain-hacking

 

Paper:

http://onlinelibrary.wiley.com/doi/10.1002/ana.24689/references

 

I was aware of the variability of responses to stimulation, but not of the finding that boosting one brain function can impair another. The paper was also written to give doctors some information to help them advise their patients.

edit

I'll also tuck this in here, as I posted it to the open thread.

Texting changes brain waves to new, previously unknown, pattern.

http://sciencebulletin.org/archives/2623.html

Makes me wonder whether they were using spell check, or the new, shortened texting speech. If the brain is working with constructed kernels, or images of words and concepts, it looks as if machine-learning-style retrieval or construction is already being practiced here.

[Stub] Ontological crisis = out of environment behaviour?

8 Stuart_Armstrong 13 January 2016 03:10PM

One problem with AI is the possibility of ontological crises - of AIs discovering that their fundamental model of reality is flawed, and being unable to cope safely with that change. Another problem is out-of-environment behaviour - an AI that has been trained to behave very well in a specific training environment messes up when introduced to a more general environment.

It suddenly occurred to me that these might in fact be the same problem in disguise. In both cases, the AI has developed certain ways of behaving in reaction to certain regular features of their environment. And suddenly they are placed in a situation where these regular features are absent - either because they realised that these features are actually very different from what they thought (ontological crisis) or because the environment is different and no longer supports the same regularities (out-of-environment behaviour).

In a sense, both these errors may be seen as imperfect extrapolation from partial training data.

Examples of growth mindset or practice in fiction

12 Swimmer963 28 September 2015 09:47PM

As people who care about rationality and winning, it's pretty important to care about training. Repeated practice is how humans acquire skills, and skills are what we use for winning.

Unfortunately, it's sometimes hard to get System 1 fully on board with the fact that repeated, difficult, sometimes tedious practice is how we become awesome. I find fiction to be one of the most useful ways of communicating things like this to my S1. It would be great to have a repository of fiction that shows characters practicing skills, mastering them, and becoming awesome, to help this really sink in.

However, in fiction the following tropes are a lot more common:

  1. hero is born to greatness and only needs to discover that greatness to win [I don't think I actually need to give examples of this?]
  2. like (1), only the author talks about the skill development or the work in passing… but in a way that leaves the reader's attention (and system 1 reinforcement?) on the "already be awesome" part, rather than the "practice to become awesome" part [HPMOR; the Dresden Files, where most of the implied practice takes place between books.]
  3. training montage, where again the reader's attention isn't on the training long enough to reinforce the "practice to become awesome" part, but skips to the "wouldn't it be great to already be awesome" part [TVtropes examples].
  4. The hero starts out ineffectual and becomes great over the course of the book, but this comes from personal revelations and insights, rather than sitting down and practicing [Nice Dragons Finish Last is an example of this].

Example of exactly the wrong thing:
The Hunger Games - Katniss is explicitly up against the Careers, who have trained their whole lives for this one thing, but she has … something special that causes her to win. Also, archery is her greatest skill, and she's already awesome at it from the beginning of the story and never spends time practicing.

Close-but-not-perfect examples of the right thing:
The Pillars of the Earth - Jack pretty explicitly has to travel around Europe to acquire the skills he needs to become great. Much of the practice is off-screen, but it's at least a pretty significant part of the journey.
The Honor Harrington series: the books depict Honor, as well as the people around her, rising through the ranks of the military and gradually levelling up, with emphasis on dedication to training, and that training is often depicted onscreen – but the skills she's training in herself and her subordinates aren't nearly as relevant as the "tactical genius" that she seems to have been born with.

I'd like to put out a request for fiction that has this quality. I'll also take examples of fiction that fails badly at this quality, to add to the list of examples, or of TVTropes keywords that would be useful to mine. Internet hivemind, help?

You Are A Brain - Intro to LW/Rationality Concepts [Video & Slides]

13 Liron 16 August 2015 05:51AM

Here's a 32-minute presentation I made to provide an introduction to some of the core LessWrong concepts for a general audience:

You Are A Brain [YouTube]

You Are a Brain [Google Slides] - public domain

I already posted this here in 2009 and some commenters asked for a video, so I immediately recorded one six years later. This time the audience isn't teens from my former youth group, it's employees who work at my software company where we have a seminar series on Thursday afternoons.

Speculative rationality skills and appropriable research or anecdote

3 Clarity 21 July 2015 04:02AM

Is rationality training in its infancy? I'd like to think so, given the paucity of novel, usable information produced by rationalists since the Sequence days. I like to model the rationalist body of knowledge as a superset of pertinent fields such as decision analysis, educational psychology and clinical psychology. This reductionist model enables rationalists to examine the validity of rationalist constructs while standing on the shoulders of giants.

CFAR's obscurantism (and subsequent price gouging) capitalises on our fear of missing out (https://en.wikipedia.org/wiki/Fear_of_missing_out). They brand established techniques like mindfulness as 'againstness', or reference class forecasting as 'hopping', as if these were of their own genesis, spiting academic tradition and cultivating an insular community. In short, Lesswrongers predictably flout cooperative principles (https://en.wikipedia.org/wiki/Cooperative_principle).

This thread is to encourage you to speculate on potential rationality techniques, underdetermined by existing research, which might be a useful area for rationalist individuals and organisations to explore. I feel this may be a better use of rationality training organisations' time than gatekeeping information.

To get this thread started, I've posted a speculative rationality skill I've been working on. I'd appreciate any comments about it or experiences with it. However, this thread is about working towards the generation of rationality skills more broadly.

CFAR-run MIRI Summer Fellows program: July 7-26

22 AnnaSalamon 28 April 2015 07:04PM

CFAR will be running a three week summer program this July for MIRI, designed to increase participants' ability to do technical research into the superintelligence alignment problem.

The intent of the program is to boost participants as far as possible in four skills:

  1. The CFAR “applied rationality” skillset, including both what is taught at our intro workshops, and more advanced material from our alumni workshops;
  2. “Epistemic rationality as applied to the foundations of AI, and other philosophically tricky problems” -- i.e., the skillset taught in the core LW Sequences.  (E.g.: reductionism; how to reason in contexts as confusing as anthropics without getting lost in words.)
  3. The long-term impacts of AI, and strategies for intervening (e.g., the content discussed in Nick Bostrom’s book Superintelligence).
  4. The basics of AI safety-relevant technical research.  (Decision theory, anthropics, and similar; with folks trying their hand at doing actual research, and reflecting also on the cognitive habits involved.)

The program will be offered free to invited participants, and partial or full scholarships for travel expenses will be offered to those with exceptional financial need.

If you're interested (or possibly-interested), sign up for an admissions interview ASAP at this link (takes 2 minutes): http://rationality.org/miri-summer-fellows-2015/

Also, please forward this post, or the page itself, to anyone you think should come; the skills and talent that humanity brings to bear on the superintelligence alignment problem may determine our skill at navigating it, and sharing this opportunity with good potential contributors may be a high-leverage way to increase that talent.

Upcoming CFAR events: Lower-cost bay area intro workshop; EU workshops; and others

17 AnnaSalamon 02 October 2014 12:08AM

For anyone who's interested:

CFAR is trying out an experimental, lower-cost, 1.5-day introductory workshop Oct 25-26 in the bay area.  It is meant to provide an easier point of entry into our rationality training.  If you've been thinking about coming to a CFAR workshop but have had trouble setting aside 4 days and $3900, you might consider trying this out.  (Or, if you have a friend or family member in that situation, you might suggest this to them.)  It's a beta test, so no guarantees as to the outcome -- but I suspect it'll be both useful, and a lot of fun.

We are also finally making it to Europe.  We'll be running two workshops in the UK this November, both of which have both space and financial aid still available.

We're also still running our standard workshops: Jan 16-19 in Berkeley, and April 23-26 in Boston, MA.  (We're experimenting, also, with using alumni "TAs" to increase the amount of 1-on-1 informal instruction while simultaneously increasing workshop size, in an effort to scale our impact.)

Finally, we're actually running a bunch of events lately for alumni of our 4-day workshops (a weekly rationality dojo; a bimonthly colloquium; a yearly alumni reunion; and various for-alumni workshops); which is perhaps less exciting if you aren't yet an alumnus, but which I'm very excited about because it suggests that we'll have a larger community of people doing serious practice, and thereby pushing the boundaries of the art of rationality.

If anyone wishes to discuss any of these events, or CFAR's strategy as a whole, I'd be glad to talk; you can book me here.

Cheers!

Skills and Antiskills

26 katydee 26 April 2014 06:54AM

One useful little concept that a friend and I have is that of the antiskill. Like a normal skill, an antiskill gives you both the ability and the affordance to do things that you wouldn't otherwise be able to do. The difference between a skill and an antiskill is that a skill gives you the ability and affordance to do things that are positive on net, while an antiskill gives you the ability and affordance to do things that are negative on net.

For instance, my friend believes that dancing is often an antiskill, because it gives you an affordance to dance rather than have interesting conversations while at parties, and he considers having interesting conversations to be much more valuable than dancing-- therefore, knowing how to dance serves primarily to enable choices that are bad on net.

I disagree with the specific point in this case, but I nevertheless think it's a good example because it illustrates another key principle of skills and antiskills-- whether something is a skill or an antiskill is context-dependent. If dancing will largely prevent you from having interesting conversations, it may well be an antiskill-- but if you go to a lot of nightclubs where loud music makes conversation difficult, knowing how to dance seems very useful indeed!

Another example is the skill of knowing how to fix computers. In many respects this is very useful, and can indeed lead to a profitable career in IT. But-- as I'm sure many of you may have experienced-- having your friends and family know that you know how to fix computers can be very negative on net!

Overall, I find the skill/antiskill framework quite useful when it comes to navigating what sorts of skills, abilities, and knowledge I should acquire. Before choosing my next priority, I often pause to think:

  • What affordances will learning this give me?
  • In what contexts will those affordances be most relevant?
  • Will this be positive or negative on net?

Using this framework has enabled me to discern strengths and weaknesses that I had previously not considered, and in some cases those strengths and weaknesses have proven decisive to my planning.

[Link] - No evidence of intelligence improvement after working memory training

4 pinyaka 23 September 2013 09:06PM

This article critically examines previous studies that showed a link between working memory training (specifically via n-back training) and fluid intelligence, finding that the results may not have been as positive as reported owing to a number of factors including the use of a no-contact rather than active control group, and difficulty selecting tests that isolate the impact of working memory on fluid intelligence. The authors also present findings from a new study that show no improvement in fluid intelligence from dual n-back training, visual search training (active placebo) and no training (no contact placebo).

 

PubMed

Journal Challenged

 

Optimizing Workouts for Intellectual Performance

9 Ritalin 06 July 2013 07:56PM

So this year I've stopped working out, and my grades have improved drastically, but at the cost of losing muscle mass and gaining fat, and becoming physically slower and lazier just as I became faster and more active intellectually. One effect I especially noticed was the disappearance of that perpetual state of happiness/satisfaction that comes from frequent physical exertion. I think that state tended to get in the way of a feeling of urgency about my studies: why bother with tiresome and frustrating intellectual exercise when physical exercise yielded results and pleasure/satisfaction much more easily and reliably?

Anyway, this got me thinking: I need to figure out a training regimen that is optimized for intellectual performance. Aspects that might be interesting to work on would be:

  • getting as much blood (oxygen, nutrients) as possible to the brain, whenever needed.
  • minimizing the amount of other tissue (including muscle in excess of what is strictly needed for a comfortable daily life, and digestive organs in excess of what is needed to get the nutrients from the food).
  • optimizing the diet in order to feed the brain according to its needs while avoiding dietary imbalances that would result in damage of some sort or another (too much sugar can damage the pancreas, too much protein and the kidneys can suffer, etc.)
  • something that is easy and quick to implement and follow, relatively inexpensive and straightforward; the idea is to save as much time, resources and energy as possible for the needs of studying/working.

These ideas I'm throwing around from a position of extreme ignorance. I've tried hiring nutritionists, but their diets were optimized for bodybuilding, not for intellectual efficacy, and were incredibly troublesome to follow. These involved about five to eight meals a day, large amounts of meat or meat substitutes, which is expensive to sustain, and me in a perpetual state of either hunger or digestive lethargy, plus permanent muscular soreness from the training regime that goes with it... and then there's the supplements.

So, yeah, I'm no gwern, but I'd love to figure out a diet that allows me to work at maximum efficacy. Other concerns, such as feeling strong or looking attractive or even dancing well, are quite far behind in priority. How should I go about this? How about you lads and ladies? What's your experience with dieting/working-out? More importantly, what does the research say?

P.S. I tried to read "Good Calories, Bad Calories", but I never managed to finish it: it spent so much time attacking the current paradigm that I grew tired of waiting for it to actually list and summarize its recommendations. If anyone here finished reading it and drew out the conclusions, I'd love to hear them.

P.P.S. The main post will update as the discussion advances; once enough proper information is gathered, a top level post might emerge.

 

Developmental Thinking Shout-out to CFAR

16 MarkL 03 May 2013 01:46AM

Preamble

Before I make my main point, I want to acknowledge that curriculum development is hard. It's even harder when you're trying to teach the unteachable. And it's even harder when you're in the process of bootstrapping. I am aware of the Kahneman inside/outside curriculum design story. And I myself have taught 200+ hours of my own computer science curricula to middle-school students. So this "open letter" is not some sort of criticism of CFAR's curriculum; it's a "Hey, check out this cool stuff eventually when you have time" letter. I just wanted to put all this out there, to possibly influence the next five years of CFAR.

Curriculum development is hard.

So, anyway, I don't personally know any of the people involved in CFAR, but I do know you're all great. 

 

A case for developmental thinking

The point of this post is to make a case for CFAR to become "developmentally aware." Massive amounts of quality research has gone into describing the differences between 1) children, 2) adults, and 3) expert or developmentally advanced adults. I haven't (yet?) seen any evidence of awareness of this research in CFAR's materials. (I haven't attended a CFAR workshop, but I've flipped through some of the more recent stuff.)

Developmental thinking is a different approach from, e.g., cataloguing biases, promoting real-time awareness of them, and having a toolbox of de-biasing strategies and algorithms. Developmental literature gives clues to the precise cognitive operations that are painstakingly acquired over an entire lifetime, in a more fine-grained way than is possible when studying, say, already-expert performers or the cognitive bias literature. I think developmental thinking goes deeper than "toolbox thinking" (straw!) and is an angle of approach for teaching the unteachable.

Below is an annotated bibliography of some of my personal touchstones in the developmental literature: books that are foundational, or that synthesize decades of research about the developmental aspects of entrepreneurial, executive, educational, and scientific thinking, as well as the developmental aspects of emotion and cognition. Note that this is a personal, idiosyncratic, non-exhaustive list.

And, to qualify, I have epistemological and ontological issues with plenty of the stuff below. But some of these authors are brilliant, and the rest are smart, meticulous, and values-driven. Lots of these authors deeply care about empirically identifying, targeting, accelerating, and stabilizing skills ahead of schedule or helping skills manifest when they wouldn't have otherwise appeared at all. Quibbles and double-takes aside, there is lots of signal, here, even if it's not seated in a modern framework (which would of course increase the value and accessibility of what's below).

There are clues or even neon signs, here, for isolating fine-grained, trainable stuff to be incorporated into curricula. Even if an intervention was designed for kids, a lot of adults still won't perform consistently prior to said intervention. And these researchers have spent thousands of collective hours thinking about how to structure assessments, interventions, and validations which may be extendable to more advanced scenarios.

So all the material below is not only useful for thinking about remedial or grade-school situations, and is not just for adding more tools to a cognitive toolbox, but could be useful for radically transforming a person's thinking style at a deep level.

Consider:

child:adult :: adult: ? 

This has everything to do with the "Outside the Box" Box. Really. One author below has been collecting data for decades to attempt to describe individuals that may represent far less than one percent of the population.

 

0. Protocol analysis

Everyone knows that people are poor reporters of what goes on in their heads. But this is a straw. A tremendous amount of research has gone into understanding what conditions, tasks, types of cognitive routines, and types of cognitive objects foster reliable introspective reporting. Introspective reporting can be reliable and useful. Grandaddy Herbert Simon (who coined the term "bounded rationality") devotes an entire book to it. The preface (I think) is a great overview. I wanted to mention this, first, because lots of the researchers below use verbal reports in their work.

http://www.amazon.com/Protocol-Analysis-Edition-Verbal-Reports/dp/0262550237/

 

1. Developmental aspects of scientific thinking

Deanna Kuhn and colleagues develop and test fine-grained interventions to promote transfer of various aspects of causal inquiry and reasoning in middle school students. In her words, she wants to "[develop] students' meta-level awareness and management of their intellectual processes." Kuhn believes that inquiry and argumentation skills, carefully defined and empirically backed, should be emphasized over specific content in public education. That sounds like vague and fluffy marketing-speak, but if you drill down to the specifics of what she's doing, her work is anything but. (That goes for all of these 50,000 foot summaries. These people are awesome.)

http://www.amazon.com/Education-Thinking-Deanna-Kuhn/dp/0674027450/

http://www.tc.columbia.edu/academics/index.htm?facid=dk100

http://www.educationforthinking.org/

 

David Klahr and colleagues emphasize how children and adults compare in coordinated searches of a hypothesis space and experiment space. He believes that scientific thinking is not different in kind from everyday thinking. Klahr gives an integrated account of all the current approaches to studying scientific thinking. Herbert Simon was Klahr's dissertation advisor.

http://www.amazon.com/Exploring-Science-Cognition-Development-Discovery/dp/0262611767

http://www.psy.cmu.edu/~klahr/

 

2. Developmental aspects of executive or instrumental thinking

Ok, I'll say it: Elliott Jaques was a psychoanalyst, among other things. And the guy makes weird analogies between thinking styles and truth tables. But his methods are rigorous. He has found possible discontinuities in how adults process information in order to achieve goals, and in how these differences relate to an individual's "time horizon," the maximum time span over which an individual can comfortably execute a goal. Additionally, he has explored how these factors predictably change over a lifespan.

http://www.amazon.com/Human-Capability-Individual-Potential-Application/dp/0962107077/

 

3. Developmental aspects of entrepreneurial thinking

Saras Sarasvathy and colleagues study the difference between novice entrepreneurs and expert entrepreneurs. Sarasvathy wants to know how people function under conditions of goal ambiguity ("We don't know the exact form of what we want"), environmental isotropy ("The levers to affect the world, in our concrete situation, are non-obvious"), and enaction ("When we act we change the world"). Herbert Simon was her advisor. Her thinking predates and goes beyond the lean startup movement.

http://www.amazon.com/Effectuation-Elements-Entrepreneurial-Expertise-Entrepreneurship/dp/1848445725/

"What effectuation is not" http://www.effectuation.org/sites/default/files/research_papers/not-effectuation.pdf

Related: http://lesswrong.com/r/discussion/lw/hcb/book_suggestion_diaminds_is_worth_reading/

4. General Cognitive Development

Jane Loevinger and colleagues' work has inspired scores of studies. Loevinger discovered potentially stepwise changes in "ego level" over a lifespan. Ego level is an archaic-sounding term that might be defined as one's ontological, epistemological, and metacognitive stance towards self and world. Loevinger's methods are rigorous, with good inter-rater reliability, Bayesian scoring rules incorporating base rates, and so forth.

http://www.amazon.com/Measuring-Ego-Development-Volume-Construction/dp/0875890598/

http://www.amazon.com/Measuring-Development-Scoring-Manual-Women/dp/0875890695/

Here is a woo-woo description of the ego levels, but note that these descriptions are based on decades of experience and have a repeatedly validated empirical core. The author of this document, Susanne Cook-Greuter, received her doctorate from Harvard by extending Loevinger's model, and it's well worth reading all the way through: 

http://www.cook-greuter.com/9%20levels%20of%20increasing%20embrace%20update%201%2007.pdf

Here is a recent look at the field:

http://www.amazon.com/The-Postconventional-Personality-Researching-Transpersonal/dp/1438434642/

By the way, having explicit cognitive goals predicts an increase in ego level, three years later, but not an increase in subjective well-being. (Only the highest ego levels are discontinuously associated with increased wellbeing.) Socio-emotional goals do predict an increase in subjective well-being, three years later. Great study:

Bauer, Jack J., and Dan P. McAdams. "Eudaimonic growth: Narrative growth goals predict increases in ego development and subjective well-being 3 years later." Developmental Psychology 46.4 (2010): 761.

 

5. Bridging symbolic and non-symbolic cognition

[Related: http://wiki.lesswrong.com/wiki/A_Human's_Guide_to_Words]

Eugene Gendlin and colleagues developed a "[...] theory of personality change [...] which involved a fundamental shift from looking at content [to] process [...]. From examining hundreds of transcripts and hours of taped psychotherapy interviews, Gendlin and Zimring formulated the Experiencing Level variable. [...]"

The "focusing" technique was designed as a trainable intervention to influence an individual's Experiencing Level.

Marion N. Hendricks reviews 89 studies, concluding that [I quote]:

  • Clients who process in a High Experiencing manner or focus do better in therapy according to client, therapist and objective outcome measures.
  • Clients and therapists judge sessions in which focusing takes place as more successful.
  • Successful short term therapy clients focus in every session.
  • Some clients focus immediately in therapy; others require training.
  • Clients who process in a Low Experiencing manner can be taught to focus and increase in Experiencing manner, either in therapy or in a separate training.
  • Therapist responses deepen or flatten client Experiencing. Therapists who focus effectively help their clients do so.
  • Successful training in focusing is best maintained by those clients who are the strongest focusers during training.

http://www.focusing.org/research_basis.html

http://www.amazon.com/Focusing-Eugene-T-Gendlin/dp/0553278339/

http://www.amazon.com/Focusing-Oriented-Psychotherapy-Manual-Experiential-Method/dp/157230376X/

http://www.amazon.com/Self-Therapy-Step-By-Step-Wholeness-Cutting-Edge-Psychotherapy/dp/0984392777/ [IFS is very similar to focusing]

http://www.amazon.com/Emotion-Focused-Therapy-Coaching-Clients-Feelings/dp/1557988811/ [more references, similar to focusing]

http://www.amazon.com/Experiencing-Creation-Meaning-Philosophical-Psychological/dp/0810114275/ [favorite book of all time, by the way]

 

6. Rigorous Instructional Design

Siegfried Engelmann (http://www.zigsite.com/) and colleagues are dedicated to dramatically accelerating cognitive skill acquisition in disadvantaged children. In addition to his peer-reviewed research, he specializes in unambiguously decomposing cognitive learning tasks and designing curricula. Engelmann's methods were validated as part of Project Follow Through, the "largest and most expensive experiment in education funded by the U.S. federal government that has ever been conducted," according to Wikipedia. Engelmann contends that the data show that Direct Instruction outperformed all other methods:

http://www.zigsite.com/prologue_NeedyKids_chapter_5.html

http://en.wikipedia.org/wiki/Project_Follow_Through

Here, he systematically eviscerates an example of educational material that doesn't meet his standards:

http://www.zigsite.com/RubricPro.htm

And this is his instructional design philosophy:

http://www.amazon.com/Theory-Instruction-Applications-Siegfried-Engelmann/dp/1880183803/

 

Conclusion

In conclusion, lots of scientists have cared for decades about describing the cognitive differences between children, adults, and expert or developmentally advanced adults. And lots of scientists care about making those differences happen ahead of schedule or happen when they wouldn't have otherwise happened at all. This is a valuable and complementary perspective to what seems to be CFAR's current approach. I hope CFAR will eventually consider digging into this line of thinking, though maybe they're already on top of it or up to something even better.

[LINK] How to calibrate your confidence intervals

11 Benjamin_Todd 25 April 2013 06:26AM

In the book "How to Measure Anything", Douglas Hubbard presents a step-by-step method for calibrating your confidence intervals, which he has tested on hundreds of people, showing that it can make 90% of people nearly perfect estimators within half a day of training.

I've been told that the Less Wrong and CFAR community is mostly not aware of this work, so given the importance of making good estimates to rationality, I thought it would be of interest.

(although note CFAR has developed its own games for training confidence interval calibration)

The main techniques to employ are:

 

Equivalent bet:

For each estimate, imagine that you are betting $1000 on the answer being within your 90% CI. Now compare this to betting $1000 on a spinner where 90% of the time you win and 10% of the time you lose. Would you prefer to take a spin? If so, your range is too small and you need to widen it. If you would instead prefer to bet on your own answer, your range is too large and you need to narrow it. If you are indifferent between the two bets, then the range really is your 90% CI.
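A quick Monte Carlo sketch of why the equivalent bet works (the `simulate` helper and all numbers are hypothetical, not from Hubbard's book): if your interval's true chance of containing the answer is below 90%, the spinner pays more on average, which is exactly the signal that the interval is too narrow.

```python
import random

def simulate(p_interval, trials=100_000, stake=1000):
    """Average payoff of betting `stake` that the answer falls in your
    interval (true containment probability p_interval) vs. a spinner
    that pays out 90% of the time."""
    interval_avg = sum(stake for _ in range(trials)
                       if random.random() < p_interval) / trials
    spinner_avg = sum(stake for _ in range(trials)
                      if random.random() < 0.9) / trials
    return interval_avg, spinner_avg

# If your real hit rate is only 70%, the spinner is the better bet:
iv, sp = simulate(p_interval=0.7)
print(iv < sp)  # True -> widen the interval
```

The symmetric case holds too: if your true containment probability exceeds 90%, betting on your own interval beats the spinner, telling you to narrow the range.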

Absurdity Test:

Start with an absurdly large range, maybe from minus infinity to plus infinity, and then begin reducing it based upon things you know to be highly unlikely or even impossible.

Avoid Anchoring:

Anchoring occurs when you think of a single answer to the question and then add an error around this answer; this often leads to ranges which are too narrow. Using the absurdity test is a good way to counter problems brought on by anchoring; another is to change how you look at your 90% CI. For a 90% CI there is a 10% chance that the answer lies outside your estimate, and if you split this there is a 5% chance that the answer is above your upper bound and a 5% chance that the answer is below your lower bound. By treating each bound separately, rephrase the question to read ‘is there a 95% chance that the answer is above my lower bound?’. If the answer is no, then you need to increase or decrease the bound as required. You can then repeat this process for the other bound.

Pros and cons:

Identify two pros and two cons for the range that you have given to help clarify your reasons for making this estimate.

Once you have used these techniques you can make another equivalent bet to check whether your new estimate is your 90% CI.

 

 

To train yourself, practice making estimates repeatedly while using these techniques until you are consistently calibrated, meaning that roughly 90% of the true values fall inside your stated 90% CIs.
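One way to track this practice is to log each interval together with the true answer and compute your hit rate. A minimal sketch (the helper and sample data are illustrative):

```python
def hit_rate(estimates):
    """estimates: list of (low, high, truth) tuples, one per 90% CI you gave."""
    hits = sum(1 for low, high, truth in estimates if low <= truth <= high)
    return hits / len(estimates)

# Five practice questions: your stated interval and the true answer.
log = [
    (1800, 1900, 1876),  # hit
    (300, 500, 432),     # hit
    (10, 20, 25),        # miss: interval too narrow
    (1000, 3000, 2500),  # hit
    (5, 15, 9),          # hit
]
print(f"hit rate: {hit_rate(log):.0%}")  # calibrated 90% CIs hit ~90% of the time
```

A hit rate well below 90% means your intervals are systematically too narrow (overconfidence, the common case); well above 90% means they are too wide.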

To read more and try sample questions, read the article we prepared on 80,000 Hours here.

 

 

 

Thoughts on the January CFAR workshop

37 Qiaochu_Yuan 31 January 2013 10:16AM

So, the Center for Applied Rationality just ran another workshop, which Anna kindly invited me to. Below I've written down some thoughts on it, both to organize those thoughts and because it seems other LWers might want to read them. I'll also invite other participants to write down their thoughts in the comments. Apologies if what follows isn't particularly well-organized. 

Feelings and other squishy things

The workshop was totally awesome. This is admittedly not strong evidence that it accomplished its goals (cf. Yvain's comment here), but being around people motivated to improve themselves and the world was totally awesome, and learning with and from them was also totally awesome, and that seems like a good thing. 

Also, the venue was fantastic. CFAR instructors reported that this workshop was more awesome than most, and while I don't want to discount improvements in CFAR's curriculum and its selection process for participants, I think the venue counted for a lot. It was uniformly beautiful and there were a lot of soft things to sit on or take naps on, and I think that helped everybody be more comfortable with and relaxed around each other. 

Main takeaways

Here are some general insights I took away from the workshop. Some of them I had already been aware of on some abstract intellectual level but hadn't fully processed and/or gotten drilled into my head and/or seen the implications of. 

  1. Epistemic rationality doesn't have to be about big things like scientific facts or the existence of God, but can be about much smaller things like the details of how your particular mind works. For example, it's quite valuable to understand what your actual motivations for doing things are. 
  2. Introspection is unreliable. Consequently, you don't have direct access to information like your actual motivations for doing things. However, it's possible to access this information through less direct means. For example, if you believe that your primary motivation for doing X is that it brings about Y, you can perform a thought experiment: imagine a world in which Y has already been brought about. In that world, would you still feel motivated to do X? If so, then there may be reasons other than Y that you do X. 
  3. The mind is embodied. If you consistently model your mind as separate from your body (I have in retrospect been doing this for a long time without explicitly realizing it), you're probably underestimating the powerful influence of your mind on your body and vice versa. For example, dominance of the sympathetic nervous system (which governs the fight-or-flight response) over the parasympathetic nervous system is unpleasant, unhealthy, and can prevent you from explicitly modeling other people. If you can notice and control it, you'll probably be happier, and if you get really good, you can develop aikido-related superpowers.
  4. You are a social animal. Just as your mind should be modeled as a part of your body, you should be modeled as a part of human society. For example, if you don't think you care about social approval, you are probably wrong, and thinking that will cause you to have incorrect beliefs about things like your actual motivations for doing things. 
  5. Emotions are data. Your emotional responses to stimuli give you information about what's going on in your mind that you can use. For example, if you learn that a certain stimulus reliably makes you angry and you don't want to be angry, you can remove that stimulus from your environment. (This point should be understood in combination with point 2 so that it doesn't sound trivial: you don't have direct access to information like what stimuli make you angry.) 
  6. Emotions are tools. You can trick your mind into having specific emotions, and you can trick your mind into having specific emotions in response to specific stimuli. This can be very useful; for example, tricking your mind into being more curious is a great way to motivate yourself to find stuff out, and tricking your mind into being happy in response to doing certain things is a great way to condition yourself to do certain things. Reward your inner pigeon.

Here are some specific actions I am going to take / have already taken because of what I learned at the workshop. 

  1. Write a lot more stuff down. What I can think about in my head is limited by the size of my working memory, but a piece of paper or a WorkFlowy document doesn't have this limitation. 
  2. Start using a better GTD system. I was previously using RTM, but badly. I was using it exclusively from my iPhone, where the due date of a new item defaults to "today"; when adding an item from a browser, the due date defaults to "never." Since I had never used the browser version, I didn't even realize that "never" was an option. As a result, RTM items that didn't actually have due dates ended up with due dates attached, and I became reluctant to add items that genuinely lacked due dates (e.g. "look at this interesting thing sometime"). That was bad both because RTM wasn't collecting a lot of things and because I stopped trusting my own due dates. 
  3. Start using Boomerang to send timed email reminders to future versions of myself. I think this might work better than using, say, calendar alerts because it should help me conceptualize past versions of myself as people I don't want to break commitments to. 

I'm also planning to take various actions that I'm not writing above but instead putting into my GTD system, such as practicing specific rationality techniques (the workshop included many useful worksheets for doing this) and investigating specific topics like speed-reading and meditation. 

The arc word (TVTropes warning) of this workshop was "agentiness." ("Agentiness" is more funtacular than "agency.") The CFAR curriculum as a whole could be summarized as teaching a collection of techniques to be more agenty. 

Miscellaneous

A distinguishing feature the people I met at the workshop seemed to have in common was the ability to go meta. This is not a skill which was explicitly mentioned or taught (although it was frequently implicit in the kind of jokes people told), but it strikes me as an important foundation for rationality: it seems hard to progress with rationality unless the thought of using your brain to improve how you use your brain, and also to improve how you improve how you use your brain, is both understandable and appealing to you. This probably eliminates most people as candidates for rationality training unless it's paired with or maybe preceded by meta training, whatever that looks like.

One problem with the workshop was lack of sleep, which seemed to wear out both participants and instructors by the last day (classes started early in the day and conversations often continued late into the night because they were unusually fun / high-value). Offering everyone modafinil or something at the beginning of future workshops might help with this.

Overall

Overall, while it's too soon to tell how big an impact the workshop will have on my life, I anticipate a big impact, and I strongly recommend that aspiring rationalists attend future workshops. 

[LINK] Get paid to train your rationality (update)

9 gwern 29 April 2012 03:01PM

Previous: http://lesswrong.com/lw/6ya/link_get_paid_to_train_your_rationality/

The IARPA-run forecasting contest remains ongoing. Season 1 has largely finished up, and groups are preparing for season 2. Season 1 participants like myself get first dibs, but http://goodjudgmentproject.com/ has announced in emails that it has spots open for first-time participants! I assume the other groups may have openings as well.

I personally found the tournament a source of predictions to stick on PB.com and I even did pretty well in GJP. (When I checked a few weeks ago, I was ranked 28 of 203 in my experimental group.) I haven't been paid my honorarium yet, though.

Which fields of learning have clarified your thinking? How and why?

12 [deleted] 11 November 2011 01:04AM

Did computer programming make you a clearer, more precise thinker? How about mathematics? If so, what kind? Set theory? Probability theory?

Microeconomics? Poker? English? Civil Engineering? Underwater Basket Weaving? (For adding... depth.)

Anything I missed?

Context: I have a palette of courses to dab onto my university schedule, and I don't know which ones to choose. This much is for certain: I want to come out of university as a problem solving beast. If there are fields of inquiry whose methods easily transfer to other fields, it is those fields that I want to learn in, at least initially.

Rip apart, Less Wrong!

[LINK] Get paid to train your rationality

27 XFrequentist 03 August 2011 03:01PM

A tournament is currently being initiated by the Intelligence Advanced Research Project Activity (IARPA) with the goal of improving forecasting methods for global events of national (US) interest. One of the teams (The Good Judgement Team) is recruiting volunteers to have their forecasts tracked. Volunteers will receive an annual honorarium ($150), and it appears there will be ongoing training to improve one's forecast accuracy (not sure exactly what form this will take).

I'm registered, and wondering if any other LessWrongers are participating/considering it. It could be interesting to compare methods and results.
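For comparing forecast accuracy, a standard metric for probabilistic predictions is the Brier score (0 is perfect, higher is worse). This sketch is illustrative and not necessarily the tournament's exact scoring rule:

```python
def brier_score(forecasts):
    """forecasts: list of (probability_assigned, outcome), outcome 1 if it happened."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Three forecasts: a confident hit, a confident correct "no", a moderate hit.
score = brier_score([(0.9, 1), (0.2, 0), (0.7, 1)])
print(f"Brier score: {score:.3f}")
```

Note that the score rewards both accuracy and honest confidence: hedging everything at 50% yields a mediocre 0.25 no matter what happens.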

Extensive quotes and links below the fold.

continue reading »