You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

[LINK] 23andme now approved by the FDA to deliver health reports

9 username2 22 October 2015 01:33AM

http://blog.23andme.com/23andme-and-you/a-new-23andme-experience/

Looks like they were finally able to work out something with the FDA, and are back up and running. On the one hand, I'm very excited about the return of personalized genetic testing, but on the other hand I'm disappointed that their price doubled to $199. I was going to get kits for my 4-member family for Christmas, but that won't be feasible now.

Another interesting release from 23andme that came out at the same time is their [transparency report](https://www.23andme.com/transparency-report/), which shows how many requests from law enforcement they have gotten for customer DNA access, and what percentage they have gone through with.

CFAR-run MIRI Summer Fellows program: July 7-26

22 AnnaSalamon 28 April 2015 07:04PM

CFAR will be running a three week summer program this July for MIRI, designed to increase participants' ability to do technical research into the superintelligence alignment problem.

The intent of the program is to boost participants as far as possible in four skills:

  1. The CFAR “applied rationality” skillset, including both what is taught at our intro workshops, and more advanced material from our alumni workshops;
  2. “Epistemic rationality as applied to the foundations of AI, and other philosophically tricky problems” -- i.e., the skillset taught in the core LW Sequences.  (E.g.: reductionism; how to reason in contexts as confusing as anthropics without getting lost in words.)
  3. The long-term impacts of AI, and strategies for intervening (e.g., the content discussed in Nick Bostrom’s book Superintelligence).
  4. The basics of AI safety-relevant technical research.  (Decision theory, anthropics, and similar; with folks trying their hand at doing actual research, and reflecting also on the cognitive habits involved.)

The program will be offered free to invited participants, and partial or full scholarships for travel expenses will be offered to those with exceptional financial need.

If you're interested (or possibly-interested), sign up for an admissions interview ASAP at this link (takes 2 minutes): http://rationality.org/miri-summer-fellows-2015/

Also, please forward this post, or the page itself, to anyone you think should come; the skills and talent that humanity brings to bear on the superintelligence alignment problem may determine our skill at navigating it, and sharing this opportunity with good potential contributors may be a high-leverage way to increase that talent.

Good movies for rationalists?

0 roland 09 November 2013 08:00AM

Hi,

what good movies can you suggest that give ideas or inspirations on how to be more rational?

I just watched [Memento](https://en.wikipedia.org/wiki/Memento_%28film%29) last night and I was very impressed.

(No spoilers in this post)

The main character is a guy who suffers from amnesia: he forgets everything after a couple of minutes, so he has developed a system to cope with it. He takes pictures and writes notes. E.g., when staying at a hotel he takes a picture of it and puts it in his pocket. So later, when he doesn't know where he is staying, he searches his pockets, finds the picture of the hotel, and then he knows.

What I learned

I identified with the character in the movie because, despite not having amnesia, my memory, like everyone else's, isn't perfect either, and I have all the quirks (biases) of a normal human brain. I can't remember exactly what I did last Thursday at 3 PM. Do I actually know why I am doing what I'm doing, or why I believe what I believe? I may have good rationalizations for both, of course, but that doesn't mean they are the real reasons.

I like to read LW, but I haven't developed much of a system to actually be more rational. If anyone has, I would be eager to read about it.

Practical Advice

What system could I develop to be more rational? One thing that a lot of management experts (e.g., Peter Drucker) have already pointed out is to write down how we actually spend our time, because how we spend it is often not how we think we spend it, and we end up spending much more time on unproductive activities than we are aware of. How much time went into random internet browsing last week?

I will start an activity log during work: how much time I'm spending on what. This will be a first step.
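In case it helps anyone, here is a minimal sketch of what such an activity log could look like in code (a hypothetical illustration; the file name and timestamp format are arbitrary choices of mine, not anything from Drucker):

```python
# Minimal command-line activity log: append one timestamped line per
# activity to a plain-text file, for reviewing at the end of the week.
import datetime

def log_entry(activity, log_file="activity_log.txt", now=None):
    """Append one timestamped activity line to the log and return it."""
    now = now or datetime.datetime.now()
    line = "%s\t%s" % (now.strftime("%Y-%m-%d %H:%M"), activity)
    with open(log_file, "a") as f:
        f.write(line + "\n")
    return line
```

Tab-separated lines keep the log trivially greppable and importable into a spreadsheet later.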


Health/Longevity Link List

3 Dorikka 05 May 2013 03:17AM

Dying or becoming severely physically/mentally ill is very likely going to significantly lower the output of your utility function, so it would probably be a very bad idea to ignore the low-hanging resources which can significantly extend the time for which you are alive and well. I have attempted to search LessWrong for a list of such resources, and haven't been able to find one.

Are there any books, websites, or posts that contain significantly low-hanging fruit in this area? If so, please list them in the comments below.

Solved Problems Repository

25 Qiaochu_Yuan 27 March 2013 04:51AM

Follow-up to: Boring Advice Repository

Many practical problems in instrumental rationality appear to be wide open. Two I've been annoyed by recently are "what should I eat?" and "how should I exercise?" However, some appear to be more or less solved. For example, various mnemonic techniques like memory palaces, along with spaced repetition, seem to more or less solve the problem of memorization.

I would like people to use this thread to post other examples of solved problems in instrumental rationality. I'm pretty sure you all collectively know good examples; there's a comment I can't find from a user who said something like "taking a flattering photograph of yourself is a solved problem," and it's likely that there are other useful examples like this that aren't common knowledge. Err on the side of posting solutions which may not be universal but are still likely to be helpful to many people. 

(This thread is allowed to not be boring! Go wild!) 

Boring Advice Repository

56 Qiaochu_Yuan 07 March 2013 04:33AM

This is an extension of a comment I made that I can't find and also a request for examples. It seems plausible that, when giving advice, many people optimize for deepness or punchiness of the advice rather than for actual practical value. There may be good reasons to do this - e.g. advice that sounds deep or punchy might be more likely to be listened to - but as a corollary, there could be valuable advice that people generally don't give because it doesn't sound deep or punchy. Let's call this boring advice.

An example that's been discussed on LW several times is "make checklists." Checklists are great. We should totally make checklists. But "make checklists" is not a deep or punchy thing to say. Other examples include "google things" and "exercise." 

I would like people to use this thread to post other examples of boring advice. If you can, provide evidence and/or a plausible argument that your boring advice actually is useful, but I would prefer that you err on the side of boring but not necessarily useful in the name of more thoroughly searching a plausibly under-searched part of advicespace. 

Upvotes on advice posted in this thread should be based on your estimate of the usefulness of the advice; in particular, please do not vote up advice just because it sounds deep or punchy. 

Thoughts on the January CFAR workshop

37 Qiaochu_Yuan 31 January 2013 10:16AM

So, the Center for Applied Rationality just ran another workshop, which Anna kindly invited me to. Below I've written down some thoughts on it, both to organize those thoughts and because it seems other LWers might want to read them. I'll also invite other participants to write down their thoughts in the comments. Apologies if what follows isn't particularly well-organized. 

Feelings and other squishy things

The workshop was totally awesome. This is admittedly not strong evidence that it accomplished its goals (cf. Yvain's comment here), but being around people motivated to improve themselves and the world was totally awesome, and learning with and from them was also totally awesome, and that seems like a good thing. 

Also, the venue was fantastic. CFAR instructors reported that this workshop was more awesome than most, and while I don't want to discount improvements in CFAR's curriculum and its selection process for participants, I think the venue counted for a lot. It was uniformly beautiful and there were a lot of soft things to sit down or take naps on, and I think that helped everybody be more comfortable with and relaxed around each other. 

Main takeaways

Here are some general insights I took away from the workshop. Some of them I had already been aware of on some abstract intellectual level but hadn't fully processed and/or gotten drilled into my head and/or seen the implications of. 

  1. Epistemic rationality doesn't have to be about big things like scientific facts or the existence of God, but can be about much smaller things like the details of how your particular mind works. For example, it's quite valuable to understand what your actual motivations for doing things are. 
  2. Introspection is unreliable. Consequently, you don't have direct access to information like your actual motivations for doing things. However, it's possible to access this information through less direct means. For example, if you believe that your primary motivation for doing X is that it brings about Y, you can perform a thought experiment: imagine a world in which Y has already been brought about. In that world, would you still feel motivated to do X? If so, then there may be reasons other than Y that you do X. 
  3. The mind is embodied. If you consistently model your mind as separate from your body (I have in retrospect been doing this for a long time without explicitly realizing it), you're probably underestimating the powerful influence of your mind on your body and vice versa. For example, dominance of the sympathetic nervous system (which governs the fight-or-flight response) over the parasympathetic nervous system is unpleasant, unhealthy, and can prevent you from explicitly modeling other people. If you can notice and control it, you'll probably be happier, and if you get really good, you can develop aikido-related superpowers.
  4. You are a social animal. Just as your mind should be modeled as a part of your body, you should be modeled as a part of human society. For example, if you don't think you care about social approval, you are probably wrong, and thinking that will cause you to have incorrect beliefs about things like your actual motivations for doing things. 
  5. Emotions are data. Your emotional responses to stimuli give you information about what's going on in your mind that you can use. For example, if you learn that a certain stimulus reliably makes you angry and you don't want to be angry, you can remove that stimulus from your environment. (This point should be understood in combination with point 2 so that it doesn't sound trivial: you don't have direct access to information like what stimuli make you angry.) 
  6. Emotions are tools. You can trick your mind into having specific emotions, and you can trick your mind into having specific emotions in response to specific stimuli. This can be very useful; for example, tricking your mind into being more curious is a great way to motivate yourself to find stuff out, and tricking your mind into being happy in response to doing certain things is a great way to condition yourself to do certain things. Reward your inner pigeon.

Here are some specific actions I am going to take / have already taken because of what I learned at the workshop. 

  1. Write a lot more stuff down. What I can think about in my head is limited by the size of my working memory, but a piece of paper or a WorkFlowy document doesn't have this limitation. 
  2. Start using a better GTD system. I was previously using RTM, but badly. I was using it exclusively from my iPhone, and when adding something to RTM from an iPhone the due date defaults to "today." When adding something to RTM from a browser the due date defaults to "never." Since I had only ever added things from the iPhone, I didn't even realize that "never" was an option. That resulted in due dates attached to RTM items that didn't actually have due dates, and it also made me reluctant to add items to RTM that really didn't look like they had due dates (e.g. "look at this interesting thing sometime"), which was bad: RTM wasn't collecting a lot of things, and I stopped trusting my own due dates. 
  3. Start using Boomerang to send timed email reminders to future versions of myself. I think this might work better than using, say, calendar alerts because it should help me conceptualize past versions of myself as people I don't want to break commitments to. 

I'm also planning to take various actions that I'm not writing above but instead putting into my GTD system, such as practicing specific rationality techniques (the workshop included many useful worksheets for doing this) and investigating specific topics like speed-reading and meditation. 

The arc word (TVTropes warning) of this workshop was "agentiness." ("Agentiness" is more funtacular than "agency.") The CFAR curriculum as a whole could be summarized as teaching a collection of techniques to be more agenty. 

Miscellaneous

A distinguishing feature the people I met at the workshop seemed to have in common was the ability to go meta. This is not a skill which was explicitly mentioned or taught (although it was frequently implicit in the kind of jokes people told), but it strikes me as an important foundation for rationality: it seems hard to progress with rationality unless the thought of using your brain to improve how you use your brain, and also to improve how you improve how you use your brain, is both understandable and appealing to you. This probably eliminates most people as candidates for rationality training unless it's paired with or maybe preceded by meta training, whatever that looks like.

One problem with the workshop was lack of sleep, which seemed to wear out both participants and instructors by the last day (classes started early in the day and conversations often continued late into the night because they were unusually fun / high-value). Offering everyone modafinil or something at the beginning of future workshops might help with this.

Overall

Overall, while it's too soon to tell how big an impact the workshop will have on my life, I anticipate a big impact, and I strongly recommend that aspiring rationalists attend future workshops. 

[LINK] 23andme is $99 now

5 Jabberslythe 12 December 2012 02:31AM

It's been reduced to $99, and it seems to be a permanent reduction. I was thinking of buying it at $299 because it had not been on sale for a while, so I'm very pleased this happened.

Their press release on it:

http://blog.23andme.com/news/one-million-strong-a-note-from-23andmes-anne-wojcicki/

 

Two Anki plugins to reinforce reviewing (updated)

11 D_Malik 03 December 2012 10:04PM

This post is about two Anki plugins I just wrote. I've been using them for a few months as monkey patches, but I thought it might help people here (or at least the 20% that are awesome enough to use SRSs) to have them as plugins. They're ugly and you may have to fiddle for a while to get them to work.

 

1. Music-Fiddler

To use this, play music while doing Anki revs. (I also recommend that you try playing music only while doing Anki, as a way of making Anki more pleasant.) While you're reviewing a card, the music volume will gradually decrease. As soon as you pass or fail the card, the volume will go back up, then start gradually decreasing again. So whenever you stop paying attention and instead start thinking about all the awesome things you could do if only you were able to sit down and work, the program punishes you by stopping the music. And whenever you concentrate fully on your work and so go through cards quickly, you have a personal soundtrack!

To use this plugin:

- If you do not have Linux, you'll need to modify the code somehow.

- Ensure that the "amixer" command works on your computer. If it doesn't, you're going to need to modify the code somehow.

- Make sure you have the new Anki 2.0.

- Download the plugin.

- Change all lines (in the plugin source) marked with "CHANGEME" according to your preferences.

- You might want to disable convenient ways of increasing the volume, like keyboard shortcuts.

This plugin provides psychological reinforcement, but is not proper intermittent reinforcement, because it is predictable and regular instead of intermittent. I'm not sure whether this should be fixed; I haven't yet gotten around to trying it with only intermittent volume increases.
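For the curious, the core decay logic can be sketched roughly like this (a hypothetical illustration, not the plugin's actual source; the constants and function names are invented, and `amixer` is assumed to work as described above):

```python
# Hypothetical sketch of the Music-Fiddler idea: volume falls linearly
# with time spent on the current card, and snaps back to maximum when
# the card is answered. Not the plugin's real code.
import subprocess

MAX_VOLUME = 80      # percent; a CHANGEME-style setting in the real plugin
DECAY_PER_SEC = 2    # percent lost per second spent on one card
MIN_VOLUME = 10      # floor, so the music never cuts out entirely

def volume_for(seconds_on_card):
    """Target volume after spending this long on the current card."""
    return max(MIN_VOLUME, MAX_VOLUME - DECAY_PER_SEC * seconds_on_card)

def set_volume(percent):
    """Apply the volume via ALSA's amixer (Linux only)."""
    subprocess.call(["amixer", "set", "Master", "%d%%" % percent])
```

On answering a card you would call `set_volume(MAX_VOLUME)` and restart the timer; a periodic hook would call `set_volume(volume_for(elapsed))`.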

 

2. Picture-Flasher

After answering a card, this plugin selects, with some probability, a random image from a folder and flashes it onto your screen briefly. This gives intermittent reinforcement.

To use this plugin:

- I haven't tested it on non-Linux operating systems, but I can't see any obvious places it'll fail.

- Make sure you have the new Anki 2.0.

- Get pictures from someplace; see below.

- Download the plugin.

- Change all lines (in the plugin source) marked with "CHANGEME" according to your preferences. Be sure especially to put in your picture directory and the number of pictures you have.

To get pictures, I downloaded high-scoring pictures off of reddit. This script can do that automatically. You can use pictures of cute animals, funny captioned pictures of cats, or more questionable things.

The plugin could be made a lot more awesome by having it automatically pull pictures from the internet so you're not reusing them. I'm not planning on doing this anytime soon (because I have no internet on my main computer for productivity reasons), but if somebody else does that and posts it, they are awesome and they should feel awesome.
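The intermittent-reinforcement step itself is simple. Roughly (a hypothetical sketch, not the plugin's actual source; the probability and helper names are invented):

```python
# Hypothetical sketch of the Picture-Flasher idea: after each answered
# card, show a random reward image with some probability, so the
# reinforcement is intermittent rather than constant.
import os
import random

FLASH_PROBABILITY = 0.3   # chance of a reward after any given card

def pick_reward(picture_dir, rng=random):
    """Return a random image path to flash, or None (no reward this time)."""
    if rng.random() >= FLASH_PROBABILITY:
        return None
    pictures = [f for f in os.listdir(picture_dir)
                if f.lower().endswith((".jpg", ".png", ".gif"))]
    return os.path.join(picture_dir, rng.choice(pictures)) if pictures else None
```

The actual plugin would then display the returned image briefly in the review window.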

Update 4 Dec: Emanuel Rylke has created a patch for this plugin which removes the requirement to rename the pictures. It also moves the configuration options to the top of the plugin, making them easier to find. The new version is at the same download link.


Update 16 June 2015: The plugins were deleted from the official list where they previously were, apparently because my AnkiWeb account was deleted due to disuse. So I've uploaded the two plugins on GitHub here: https://github.com/StephenBarnes/AnkiPlugins. I also re-uploaded the plugins to the official list. Links on this post have been updated.

Empirical claims, preference claims, and attitude claims

5 John_Maxwell_IV 15 November 2012 07:41PM

What do the following statements have in common?

  • "Atlas Shrugged is the best book ever written."
  • "You break it, you buy it."
  • "Earth is the most interesting planet in the solar system."

My answer: None of them are falsifiable claims about the nature of reality.  They're all closer to what one might call "opinions".  But what is an "opinion", exactly?

There's already been some discussion on Less Wrong about what exactly it means for a claim to be meaningful.  This post focuses on the negative definition of meaning: what sort of statements do people make where the primary content of the statement is non-empirical?  The idea here is similar to the idea behind anti-virus software: Even if you can't rigorously describe what programs are safe to run on your computer, there still may be utility in keeping a database of programs that are known to be unsafe.

Why is it useful to be able to flag non-empirical claims?  Well, for one thing, you can believe whatever you want about them!  And it seems likely that this pattern-matching approach works better for flagging them than a more constructive definition.


Analyzing FF.net reviews of 'Harry Potter and the Methods of Rationality'

25 gwern 03 November 2012 11:47PM

The unprecedented gap in Methods of Rationality updates prompts musing about whether readership is increasing enough & what statistics one would use; I write code to download FF.net reviews, clean & parse the data, load it into R, summarize it & depict it graphically, run linear regression on a subset & on all reviews, note the poor fit, develop a quadratic fit instead, and use it to predict future review quantities.

Then, I run a similar analysis on a competing fanfiction to find out when they will have equal total review-counts. A try at logarithmic fits fails; fitting a linear model to the previous 100 days of _MoR_ and the competitor works much better, and they predict a convergence in <5 years.

Master version: http://www.gwern.net/hpmor#analysis
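The actual analysis lives in R on real FF.net data (see the master version above). As a rough illustration of just the linear-extrapolation step, here is an ordinary-least-squares line fit on invented review counts, not the real ones:

```python
# Fit a straight line to cumulative review counts by ordinary least
# squares and extrapolate, as with the linear model over the last
# 100 days of MoR data. Invented numbers, stdlib only.
def linear_fit(xs, ys):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

def predict(xs, ys, future_x):
    """Extrapolate the fitted line to a future day index."""
    slope, intercept = linear_fit(xs, ys)
    return slope * future_x + intercept
```

Equating two such fitted lines (one per fanfiction) and solving for the day index gives the predicted convergence date.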

Admissions Essay Help?

5 OnTheOtherHandle 01 August 2012 07:19PM

I need help writing a college application essay that will maximize my chances of getting into a school that the world considers prestigious. (17 years old, preparing to enter 12th grade at a central California high school as of this writing.)

Throughout high school, I resisted being over-scheduled, and basically eschewed all extracurricular activities in favor of having time to think and read. Even when my parents pushed me into things like tennis, dance, or debate clubs (ugh), I was secure in the belief that I could forgo them and rely on my grades and test scores to get me into a college that was good enough to earn a useful engineering degree and find a few interesting friends. (I was right.)

However, my priorities have changed, and I’m starting to really value the extra leverage prestige can bring me. I plan to start a Less Wrong/80,000 Hours club at whatever university I end up attending. I would have access to more intelligent, interested people at Stanford than at, say, UC Irvine. Perhaps more importantly, the club itself would have a better standing in the outside world if it were founded in Stanford. (This in addition to the fact that Stanford already has a world-class Decisions and Ethics Center that may be able to help.)

This is not to say I now regret not being an officer in a dozen useless clubs or participating in endless extracurricular activities. I do, however, regret not doing at least one really impressive, externally-verifiable thing like writing a book. Nothing in my life would make someone say, “Wow, how the hell did she do that?” If admissions officers could scan my brain, they would find a lot that would make them say, “How the hell could she think that?” – but not much of it would be positive.

So my question is, how do I write a personal statement essay, 250-500 words, that will leave an impression in an admissions officer’s mind, without lying or plagiarizing, given that my adolescence was spent thinking and reading, not *doing*? Each university then has 2-4 follow-up prompts (<= 250 words), such as these from Stanford:

  1. Stanford students possess intellectual vitality. Reflect on an idea or experience that has been important to your intellectual development.
  2. Virtually all of Stanford’s undergraduates live on campus. What would you want your future roommate to know about you? Tell us something about you that will help your roommate—and us—know you better.
  3. What matters to you, and why?

The problem with answering these is that all of my *best* answers for these questions (“Newcomblike problems,” “Hey, do you want to join this rationality club I want to start?”, and “optimal philanthropy,” respectively) would take way more than 250 words to explain.

The focus on Stanford, by the way, is because my parents would be extremely unwilling to send me to a university on the East Coast, even if it were really prestigious. But feel free to give me general advice or advice specific to another university. :) If it actually happens, I'll be in a better position to convince them.

May Be Relevant:

I once tutored a girl in Algebra 1 over a period of three months, bringing her grades up from a D to a B. She stopped needing help and I didn’t go looking for another tutee.

I completed NaNoWriMo my freshman year – yeah, it was pretty bad.

I’ve been writing a daily essay on 750 words since December 2010, and have written over 518,000 words in 562 days – writing something 98% of the time, and completing my words 95% of the time. (Although a lot of the missed days were due to glitches in the early website eating my words.)

I entered the Science Fair with a couple friends, hated it because it crushed the spirit of curious inquiry under a predetermined experimental procedure with a predetermined result, and unsurprisingly didn’t win – although we got a certificate from the US Army.

I joined a community service club, hated it because we were just unpaid labor for rich people who didn’t need much help, but stayed anyway because my friends were in it.

General SAT: Reading and Writing scores slightly above the median for most prestigious universities, Math score slightly below. 800's on SAT Math II (Pre-calculus), SAT Biology Molecular, and SAT US History.

5's on AP Calculus AB, AP English Language, and other, less relevant AP's. Five AP classes so far taken, received A's, planning to take 6 more next year.

High probability of a good letter of recommendation from APUSH and Calculus teachers.

Thank you!

Edit: Fixed the hyperlink formatting.

Value of Information: 8 examples

48 gwern 18 May 2012 11:45PM

ciphergoth just asked what the actual value of Quantified Self/self-experimentation is. This finally tempted me into running value of information calculations on my own experiments. It took me all afternoon, because it turned out I didn't actually understand how to do it, and I had a hard time figuring out the right values for specific experiments. (I may not have gotten it right, still. Feel free to check my work!)  Then it turned out to be too long for a comment, and as usual the master versions will be on my website at some point. But without further ado!
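For readers who want the bare skeleton of such a calculation: the standard expected-value-of-perfect-information computation, with invented payoffs rather than any figures from the actual post, looks roughly like this:

```python
# Expected value of perfect information (EVPI) for a discrete decision.
# payoffs[action][state]: utility of each action in each possible world;
# probs[state]: probability of each world. Numbers here are invented.
def evpi(payoffs, probs):
    """Value of learning the true state before acting."""
    # Best you can do acting now, under uncertainty:
    best_now = max(sum(p * u for p, u in zip(probs, row)) for row in payoffs)
    # Best you could do if told the state first, averaged over states:
    with_info = sum(p * max(row[s] for row in payoffs)
                    for s, p in enumerate(probs))
    return with_info - best_now
```

An experiment (e.g. a month of self-tracking) is worth running only if its cost is below the value of the information it yields, which this kind of calculation upper-bounds.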


Lesswrong Community's How-Tos and Recommendations

25 EE43026F 07 May 2012 01:41PM

The Lesswrong community is often a dependable source of recommendations, network help, and advice. When I'm looking for a book or learning material on a topic I'll often try and search here to see what residents have found useful. Similarly, social advice, anecdotes and explanations as seen from the point of view of the community have regularly been insightful or eye-opening. The prototypical examples of such articles are, off the top of my head:


http://lesswrong.com/lw/3gu/the_best_textbooks_on_every_subject/

http://lesswrong.com/lw/453/procedural_knowledge_gaps/

the topics of which are neatly listed on

http://lesswrong.com/lw/a08/topics_from_procedural_knowledge_gaps/

 

And lately

http://lesswrong.com/r/discussion/lw/c6y/why_do_people/

 

The latter prompted me to write this article. As far as I know, we don't keep track of such resources. This probably belongs in the wiki as well.

 

Other potentially useful resources were:

 

http://lesswrong.com/lw/12d/recommended_reading_for_new_rationalists/

http://lesswrong.com/lw/2kk/book_recommendations/

http://lesswrong.com/lw/2ua/recommended_reading_for_friendly_ai_research/



math learning

http://lesswrong.com/lw/9qq/what_math_should_i_learn/


http://lesswrong.com/lw/8js/what_mathematics_to_learn/

http://lesswrong.com/lw/a54/seeking_education/


misc learning

http://lesswrong.com/lw/5me/scholarship_how_to_do_it_efficiently/

http://lesswrong.com/lw/4yv/i_want_to_learn_programming/

http://lesswrong.com/lw/3qr/i_want_to_learn_economics/

http://lesswrong.com/lw/3us/i_want_to_learn_about_education/

http://lesswrong.com/lw/8e3/which_fields_of_learning_have_clarified_your/


social

http://lesswrong.com/lw/6ey/learning_how_to_explain_things/

http://lesswrong.com/lw/818/how_to_understand_people_better/

http://lesswrong.com/lw/6tb/developing_empathy/


community

http://lesswrong.com/lw/929/less_wrong_mentoring_network/

http://lesswrong.com/lw/7hi/free_research_help_editing_and_article_downloads/


Employment

http://lesswrong.com/lw/43m/optimal_employment/

http://lesswrong.com/lw/2qp/virtual_employment_open_thread/


http://lesswrong.com/lw/38u/best_career_models_for_doing_research/

http://lesswrong.com/lw/4ad/optimal_employment_open_thread/

http://lesswrong.com/lw/626/job_search_advice/

http://lesswrong.com/lw/8cp/any_thoughts_on_how_to_locate_job_opportunities/

http://lesswrong.com/lw/7yl/more_shameless_ploys_for_job_advice/

http://lesswrong.com/lw/a93/existential_risk_reduction_career_network/

 

Entertainment

http://lesswrong.com/r/discussion/tag/recommendations/?sort=new

Case Study: Testing Confirmation Bias

32 gwern 02 May 2012 02:03PM

Master copy lives on gwern.net

Experiment: a good researcher is hard to find

29 gwern 30 April 2012 05:13PM

See previously “A good volunteer is hard to find”

Back in February 2012, lukeprog announced that SIAI was hiring more part-time remote researchers, and you could apply just by demonstrating your chops on a simple test: review the psychology literature on habit formation with an eye towards practical application. What factors strengthen new habits? How long do they take to harden? And so on. I was assigned to read through and rate the submissions, and Luke could then look at them individually to decide who to hire. We didn't get as many submissions as we were hoping for, so in April Luke posted again, this time with a quicker, easier application form. (I don't know how that has been working out.)

But in February, I remembered the linked post above from GiveWell, where they mentioned that many would-be volunteers did not even finish the test task. I did finish mine, and I didn't find it that bad; it was actually kind of an interesting exercise in critical thinking and being careful. People suggested that perhaps the attrition was due not to low volunteer quality, but to the feeling that the volunteers were not appreciated and were doing useless makework. (The same reason so many kids hate school…) But how to test this?


Tel Aviv Self-Improvement Meetup Group

3 Meni_Rosenfeld 16 February 2012 03:37PM

I have started the Tel Aviv Self-Improvement Meetup Group. It is not about rationality or LessWrong per se, but it is heavily influenced by rationality dojos and LW posts in the applied rationality, personal optimization and anti-akrasia cluster. As the description says, it is

A group of people helping each other apply rationality to our everyday lives, in order to improve our skills, make the best decisions, become productive and achieve our goals.

If you're interested and in the area, you're welcome to join. If you have any comments or suggestions, based perhaps on experience with similar groups, please share.

[LINK] "The nirvana would be if the questions raised by Oprah Winfrey would be answered by the faculty at Harvard."

2 [deleted] 31 January 2012 04:32PM

Alain de Botton:

I once very politely raised the thought that one reason philosophy departments have been cut is the fault of philosophers. The answer always comes back: 'The point of philosophy is to ask questions, not to give answers.' I can't help but think 'No. It can't be!' Imagine if you applied that question to other areas – is the purpose of rocket science to ask questions about rockets?

Sounds familiar.

Applied Rationality Practice

6 ksvanhorn 23 December 2011 04:43AM

It's one thing to read about a subject, but one gains a deeper understanding by seeing it applied to real problems, and an even deeper understanding by applying it yourself. This applies in particular to the closely related subjects of rationality, cognitive biases, and decision theory. With this in mind, I'd like to propose that we create one or more discussion topics each devoted to discussing and analyzing one decision problem of one person, and see how all this theory we've been discussing can help. The person could be either a Less Wrong member or just an acquaintance of one of us.

I'll commit to actively participating myself. Does anyone want to put forth a problem to discuss?

 

[Link] Walking Through Doors Causes Forgetting

5 khafra 21 November 2011 02:56PM

We investigated the ability of people to retrieve information about objects as they moved through rooms in a virtual space. People were probed with object names that were either associated with the person (i.e., carried) or dissociated from the person (i.e., just set down). Also, people either did or did not shift spatial regions (i.e., go to a new room). Information about objects was less accessible when the objects were dissociated from the person. Furthermore, information about an object was also less available when there was a spatial shift. However, the spatial shift had a larger effect on memory for the currently associated object. These data are interpreted as being more supportive of a situation model explanation, following on work using narratives and film. Simpler memory-based accounts that do not take into account the context in which a person is embedded cannot adequately account for the results.

http://www.springerlink.com/content/m6lq80675m22232h/ 

There's probably some deep implications to this I'm not qualified to plumb.  But next time I'm concentrating on something, and need to get up from the computer and walk around a bit, I'm going to try avoiding doorways.

Hard problem? Hack away at the edges.

45 lukeprog 26 September 2011 10:03AM

Wei Dai offered 7 tips on how to answer really hard questions:

  • Don't stop at the first good answer.
  • Explore multiple approaches simultaneously.
  • Trust your intuitions, but don't waste too much time arguing for them.
  • Go meta.
  • Dissolve the question.
  • Sleep on it.
  • Be ready to recognize a good answer when you see it. (This may require actually changing your mind.)

Some others from the audience include:

I'd like to offer one more technique for tackling hard questions: Hack away at the edges.

General history books compress time so much that they often give the impression that major intellectual breakthroughs result from sudden strokes of insight. But when you read a history of just one breakthrough, you realize how much "chance favors the prepared mind." You realize how much of the stage had been set by others, by previous advances, by previous mistakes, by a soup of ideas crowding in around the central insight made later.

It's this picture of the history of mathematics and science that makes me feel quite comfortable working on hard problems by hacking away at their edges.

I don't know how to build Friendly AI. Truth be told, I doubt humanity will figure it out before going extinct. The whole idea might be impossible or confused. But I'll tell you this: I doubt the problem will be solved by getting smart people to sit in silence and think real hard about decision theory and metaethics. If the problem can be solved, it will be solved by dozens or hundreds of people hacking away at the tractable edges of Friendly AI subproblems, drawing novel connections, inching toward new insights, drawing from others' knowledge and intuitions, and doing lots of tedious, boring work.

Here's what happened when I encountered the problem of Friendly AI and decided I should for the time being do research on the problem rather than, say, trying to start a few businesses and donate money. I realized that I didn't see a clear path toward solving the problem, but I did see tons of apparently relevant research that could be done around the edges of the problem, especially with regard to friendliness content (because metaethics is my background). Snippets of my thinking process look like this:

Friendliness content is about human values. Who studies human values, besides philosophers? Economists and neuroscientists. Let's look at what they know. Wow, neuroeconomics is far more advanced than I had realized, and almost none of it has been mentioned by anybody researching Friendly AI! Let me hack away at that for a bit, and see if anything turns up.

Some people approach metaethics/CEV with the idea that humans share a concept of 'ought', and figuring out what that is will help us figure out how human values are. Is that the right way to think about it? Lemme see if there's research on what concepts are, how much they're shared between human brains, etc. Ah, there is! I'll hack away at this next.

CEV involves the modeling of human preferences. Who studies that? Economists do it in choice modeling, and AI programmers do it in preference elicitation. They even have models for dealing with conflicting desires, for example. Let me find out what they know...

CEV also involves preference extrapolation. Who has studied that? Nobody but philosophers, unfortunately, but maybe they've found something. They call such approaches "ideal preference" or "full information" accounts of value. I can check into that.

You get the idea.

This isn't the only way to solve hard problems, but when problems are sufficiently hard, then hacking away at their edges may be just about all you can do. And as you do, you start to see where the problem is more and less tractable. Your intuitions about how to solve the problem become more and more informed by regular encounters with it from all angles. You learn things from one domain that end up helping in a different domain. And, inch by inch, you make progress.

Of course you want to be strategic about how you're tackling the problem. But you also don't want to end up thinking in circles because the problem is too hard to even think strategically about how to tackle it.

You also shouldn't do 3 months of thinking and never write any of it down because you know what you've thought isn't quite right. Hacking away at a tough problem involves lots of wrong solutions, wrong proposals, wrong intuitions, and wrong framings. Maybe somebody will know how to fix what you got wrong, or maybe your misguided intuitions will connect to something they know and you don't and spark a useful thought in their head.

Okay, that's all. Sorry for the rambling!

Bayesian Reasoning Applied to House Selling: Listing Price

1 byrnema 26 August 2011 11:43PM

Like Yvain's parents, I am planning on moving house. Selling a house and buying a house involve making a lot of decisions based on limited information, which I thought would make a set of good exercises for the application of Bayesian reasoning. I need to decide what price to list my house for, determine how much time and money to put into fixing it up, choose a new home and then there's the two poker games of the final negotiations of the sale.

(I logged onto Less Wrong having just made the decision to consider posting this article, so I was kind of weirded out at first by the title of Yvain's post; but then I was relieved that the topic was somewhat different. I am used to coincidences, but they also nudge me toward the paranoid end of my spectrum, and I'll feel less stable for a few hours. I already know Google tracks me, and who knows what algorithms could be running given a bunch of computer scientists...?)

 

House Story

tldr; We're listing at the appraised value +10%.

A few years ago, we purchased a beautiful house. 'We' is my husband and I and my parents. We purchased the house because it includes a guest house where my parents can retire. However, my mom continues to postpone retirement and in the meantime my husband and I decided we would a) like more light, b) like a shorter commute and c) could purchase two homes we prefer for the price of this one -- my parents would enjoy a house on the water. (Great post and spot on about the features that matter, Yvain!)

I would be happy to sell the house for +5%, covering real estate fees and new flooring we put in. However, three houses in the cul de sac have sold this year for +10% and so we listed it at that price too. Our house is bigger than theirs but not as nice (they have granite and impressive entrances and we don't). On the other hand, having the guest house makes us special.

Via agent and potential buyer feedback, we're coming to realize that we might be lucky to sell the house for +5%. At this price level, people prefer a house that is impressive and in perfect condition.

 

Primary Bayesian Question

My primary question is the following: how should we decide to modify our listing price as we get more information?

First, I've read that if a house is priced correctly you'll get an average of one offer every 10 showings. So far we've had 2 showings without an offer. After how many showings should we reduce the price?

Second, the other three houses sold in 6 or 7 months. After how many months should we reduce the price?

Keep in mind, we don't have to move and I estimate that I would be willing to stay in this house for about +3% per year. In other words, I would be willing to wait 2 years for a higher offer if I could sell it for +3% more by doing so.
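
The first question can be framed as a simple Bayesian update. A minimal sketch, with made-up numbers: compare two hypotheses, "priced right" (one offer per ~10 showings, so p = 0.10 per showing) versus "overpriced" (say p = 0.02), start from a 50/50 prior, and update after each showing that produces no offer.

```python
# Toy model: P(overpriced | n showings, no offers) via Bayes' theorem.
# The per-showing offer probabilities and the 50/50 prior are assumptions,
# not data -- the point is only the shape of the update.

def posterior_overpriced(n_showings, p_right=0.10, p_over=0.02, prior_over=0.5):
    """Posterior probability the house is overpriced after n offerless showings."""
    like_right = (1 - p_right) ** n_showings   # P(no offers | priced right)
    like_over = (1 - p_over) ** n_showings     # P(no offers | overpriced)
    num = like_over * prior_over
    return num / (num + like_right * (1 - prior_over))

for n in (2, 10, 20):
    print(n, round(posterior_overpriced(n), 3))
# 2 showings -> ~0.542; 10 -> ~0.701; 20 -> ~0.846
```

On these (assumed) numbers, 2 showings tell you almost nothing, but 10–20 offerless showings would shift the odds substantially toward "reduce the price."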

I anticipate that after posting this I will be embarrassed that it is so pecuniary. On the other hand, this makes it concrete and the problem in general doesn't have too many emotional factors. Any money we make over the first +5% can be used as a down payment for our next house after we pay our parents back. (I did feel embarrassed, so I took out the dollar values and replaced with relative percents.)

Why epidemiology will not correct itself

38 gwern 11 August 2011 12:54AM

We're generally familiar here with the appalling state of medical and dietary research, where most correlations turn out to be bogus. (And if we're not, I have collected a number of links on the topic in my DNB FAQ that one can read, see http://www.gwern.net/DNB%20FAQ#flaws-in-mainstream-science-and-psychology - probably the best first link to read would be Ioannidis's “Why Most Published Research Findings Are False”.)

I recently found a talk arguing that this problem was worse than one might assume, with false positives in the >80% range, and more interestingly, why the rate is so high and will remain high for the foreseeable future. Young asserts, pointing to papers and textbooks by epidemiologists, that they are perfectly aware of what the Bonferroni correction does (and why one would use it) and that they choose to not use it because they do not want to risk any false negatives. (Young also conducts some surveys showing less interest in public sharing of data and other good things like that, but that seems to me to be much less important than the statistical tradeoffs.)

There are three papers online that seem representative:

  1. Rothman (1990)
  2. Perneger (1998)
  3. Vandenbroucke, PLoS Med (2008)

Reading them is a little horrifying when one considers the costs of the false positives, all the people trying to stay healthy by following what is only random noise, and the general (and justified!) contempt for science by those aware of the false positive rate. (I enlarge on this vein of thought on Reddit. The recent kerfuffle about whether salt really is bad for you - medical advice that has stressed millions and will cost more millions due to New York City's war on salt - is a reminder of what is at stake.)
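
The tradeoff at stake can be illustrated with a toy simulation (simulated data, not from any of the cited papers): test 100 truly-null hypotheses at alpha = 0.05 with and without a Bonferroni correction, and count the spurious "findings."

```python
# Toy illustration of multiple-comparison inflation: under the null,
# p-values are Uniform(0,1), so at alpha = 0.05 we expect ~5 false
# positives per 100 tests unless we correct the threshold.

import random

random.seed(0)
m = 100        # number of hypotheses tested (all truly null here)
alpha = 0.05
p_values = [random.random() for _ in range(m)]  # null p-values

uncorrected = sum(p < alpha for p in p_values)       # expect ~5 spurious hits
bonferroni = sum(p < alpha / m for p in p_values)    # expect ~0
print(uncorrected, bonferroni)
```

This is exactly the choice Young describes: the uncorrected threshold buys sensitivity (fewer false negatives) at the price of a steady stream of false positives.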

The take-away, I think, is to resolutely ignore anything to do with diet & exercise that is not a randomized trial. Correlations may be worth paying attention to in other areas but not in health.

Bayesian justice

18 gwern 26 July 2011 12:58AM

"The mathematical mistakes that could be undermining justice"

They failed, though, to convince the jury of the value of the Bayesian approach, and Adams was convicted. He appealed twice unsuccessfully, with an appeal judge eventually ruling that the jury's job was "to evaluate evidence not by means of a formula... but by the joint application of their individual common sense."

But what if common sense runs counter to justice? For David Lucy, a mathematician at Lancaster University in the UK, the Adams judgment indicates a cultural tradition that needs changing. "In some cases, statistical analysis is the only way to evaluate evidence, because intuition can lead to outcomes based upon fallacies," he says.

Norman Fenton, a computer scientist at Queen Mary, University of London, who has worked for defence teams in criminal trials, has just come up with a possible solution. With his colleague Martin Neil, he has developed a system of step-by-step pictures and decision trees to help jurors grasp Bayesian reasoning (bit.ly/1c3tgj). Once a jury has been convinced that the method works, the duo argue, experts should be allowed to apply Bayes's theorem to the facts of the case as a kind of "black box" that calculates how the probability of innocence or guilt changes as each piece of evidence is presented. "You wouldn't question the steps of an electronic calculator, so why here?" Fenton asks.

It is a controversial suggestion. Taken to its logical conclusion, it might see the outcome of a trial balance on a single calculation. Working out Bayesian probabilities with DNA and blood matches is all very well, but quantifying incriminating factors such as appearance and behaviour is more difficult. "Different jurors will interpret different bits of evidence differently. It's not the job of a mathematician to do it for them," says Donnelly.

The linked paper is "Avoiding Probabilistic Reasoning Fallacies in Legal Practice using Bayesian Networks" by Norman Fenton and Martin Neil. The interesting parts, IMO, begin on page 9, where they argue for using the likelihood ratio as the key piece of information for evidence, rather than raw probabilities; page 17, where a DNA example is worked out; and pages 21-25, on the key piece of evidence in the Bellfield trial: no one claiming a lost possession (nearly worthless evidence).
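
The likelihood-ratio update they advocate is mechanically simple. A hedged sketch with illustrative numbers (not from any real case): posterior odds of guilt = prior odds × LR, where LR = P(evidence | guilty) / P(evidence | innocent).

```python
# Odds-form Bayes' theorem, as in the Fenton & Neil paper's DNA-style
# examples. All numbers below are invented for illustration.

def update_odds(prior_odds, likelihood_ratio):
    """Posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds):
    return odds / (1 + odds)

# Suspect drawn from a pool of 10,000 possible culprits: prior odds 1/9999.
# DNA match with P(match | guilty) = 1 and random-match probability 1e-6,
# so LR = 1e6.
posterior_odds = update_odds(1 / 9999, 1e6)
print(round(odds_to_prob(posterior_odds), 4))  # ~0.9901
```

Note the posterior is about 99%, not "one in a million certain" - conflating the random-match probability with P(innocent | match) is the prosecutor's fallacy the paper warns against.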

Related reading: Inherited Improbabilities: Transferring the Burden of Proof, on Amanda Knox.

"The True Rejection Challenge" - Thread 2

7 Armok_GoB 02 July 2011 11:49AM

The old thread (found here: http://lesswrong.com/lw/6dc/the_true_rejection_challenge/ ) was becoming very unwieldy and hard to check, so many people suggested we make a second one. I just realized that the only reason it didn't exist yet was something like the bystander effect, so I decided to just do this one.

From the original thread:

An exercise:

Name something that you do not do but should/wish you did/are told you ought, or that you do less than is normally recommended.  (For instance, "exercise" or "eat vegetables".)

Make an exhaustive list of your sufficient conditions for avoiding this thing.  (If you suspect that your list may be non-exhaustive, mention that in your comment.)

Precommit that: If someone comes up with a way to do the thing which doesn't have any of your listed problems, you will at least try it.  It counts if you come up with this response yourself upon making your list.

(Based on: Is That Your True Rejection?)

Edit to add: Kindly stick to the spirit of the exercise; if you have no advice in line with the exercise, this is not the place to offer it.  Do not drift into confrontational or abusive demands that people adjust their restrictions to suit your cached suggestion, and do not offer unsolicited other-optimizing.

LW Bipolar Support Group?

4 bipolar 29 June 2011 10:06PM

Related to: Intrapersonal negotiation

I'm writing to inquire about whether there's interest on LW in developing a bipolar support group.

There's a general issue with in-person support groups and designated online forums: the people there tend to be relatively uneducated, with little capacity for reflection, and the discussion at such places tends to degenerate into platitudes. I was touched by datadataeverywhere's posting Intrapersonal negotiation and would be interested in talking with similar people about similar topics.

I'm bipolar II and have been for at least a decade but only fully became aware of my condition over the past year. I've found my varying functionality/productivity corresponding to hypomanic/depressive oscillations very confusing and have little idea of how best to ride out the waves. I am seeing a psychiatrist and have read books such as The Bipolar Disorder Survival Guide, Second Edition: What You and Your Family Need to Know, and The Bipolar Workbook: Tools for Controlling Your Mood Swings. I tried to read the Goodwin/Jamison Manic-Depressive Illness but found it dull. I looked at Jamison's other books, but though she's a very poetic author, I found the accuracy and general applicability of her subjective narratives questionable.

Anyway, any LWers who are interested should comment below or PM me.

Personal Benefits from Rationality

5 Celer 12 May 2011 01:08AM

I saw this and realised something:

"Hey, wait, where have I seen other people talk about specific benefits from Rationality?"

And then I realised I hadn't. I look around the site some. Nothing there.

This is a place to fix that. The idea of this page is to post specific things that you personally have found helpful, that you learned from your studies of Bayescraft. This way we can find some that seem to work for a large number of people, so that when new people start to become interested in Rationality we can "make it rain" so that they see the benefits that come with being less wrong.

For commenters:

If someone posted something already that also worked for you, mention that. If every tactic is apparently used by only a single person, then it is harder for us as a community to figure out what we should recommend to tyros. 

List of N Things:

 

Understanding that my high school history class has more to do with real science than my chemistry class does let me understand how I should be approaching the problem. History lets you look at what happened and ask "Why did this happen?" when you view it the right way.

Reading up on cognitive neuroscience taught me that I could use the placebo effect on myself. I have missed one day of school due to illness in my life.

Learning to not propose solutions for a minimum of five minutes, by the clock, has honestly been the most effective thing I have yet learned for personal application at Less Wrong.

 

May we all share many useful things, for our own benefit and as a place to point tyros towards.

 

 

 

HELP! I want to do good

15 Giles 28 April 2011 05:29AM

There are people out there who want to do good in the world, but don't know how.

Maybe you are one of them.

Maybe you kind of feel that you should be into the "saving the world" stuff but aren't quite sure if it's for you. You'd have to be some kind of saint, right? That doesn't sound like you.

Maybe you really do feel it's you, but don't know where to start. You've read the "How to Save the World" guide and your reaction is, ok, I get it, now where do I start? A plan that starts "first, change your entire life" somehow doesn't sound like a very good plan.

All the guides on how to save the world, all the advice, all the essays on why cooperation is so hard, everything I've read so far, has missed one fundamental point.

If I could put it into words, it would be this:

AAAAAAAAAAAGGGHH WTF CRAP WHERE DO I START EEK BLURFBL

If that's your reaction then you're half way there. That's what you get when you finally grasp how much pointless pain, misery, risk, death there is in the world; just how much good could be done if everyone would get their act together; just how little anyone seems to care.

If you're still reading, then maybe this is you. A little bit.

And I want to help you.

How will I help you? That's the easy part. I'll start a community of aspiring rationalist do-gooders. If I can, I'll start it right here in the comments section of this post. If anything about this post speaks to you, let me know. At this point I just want to know whether there's anybody out there.

And what then? I'll listen to people's opinions, feelings and concerns. I'll post about my worldview and invite people to criticize, attack, tear it apart. Because it's not my worldview I care about. I care about making the world better. I have something to protect.

The posts will mainly be about what I don't see enough of on Less Wrong. About reconciling being rational with being human. Posts that encourage doing rather than thinking. I've had enough ideas that I can commit to writing 20 discussion posts over a reasonable timescale, although some might be quite short - just single ideas.

Someone mentioned there should be a "saving the world wiki". That sounds like a great idea and I'm sure that setting one up would be well within my power if someone else doesn't get around to it first.

But how I intend to help you is not the important part. The important part is why.

To answer that I'll need to take a couple of steps back.

Since basically forever, I've had vague, guilt-motivated feelings that I ought to be good. I ought to work towards making the world the place I wished it would be. I knew that others appeared to do good for greedy or selfish reasons; I wasn't like that. I wasn't going to do it for personal gain.

If everyone did their bit, then things would be great. So I wanted to do my bit.

I wanted to privately, secretively, give a hell of a lot of money to a good charity. So that I would be doing good and that I would know I wasn't doing it for status or glory.

I started small. I gave small amounts to some big-name charities, charities I could be fairly sure would be doing something right. That went on for about a year, with not much given in total - I was still building up confidence.

And then I heard about GiveWell. And I stopped giving. Entirely.

WHY??? I can't really give a reason. But something just didn't seem right to me. People who talked about GiveWell also tended to mention that the best policy was to give only to the charity listed at the top. And that didn't seem right either. I couldn't argue with the maths, but it went against what I'd been doing up until that point and something about that didn't seem right.

Also, I hadn't heard of GiveWell or any of the charities they listed. How could I trust any of them? And yet how could I give to anyone else if these charities were so much more effective? Big akrasia time.

It took a while to sink in. But when it did, I realised that my life so far had mostly been a waste of time. I'd earned some money, but I had no real goals or ambitions. And yet, why should I care if my life so far had been wasted? What I had done in the past was irrelevant to what I intended to do in the future. I knew what my goal was now and from that a whole lot became clear.

One thing mattered most of all. If I was to be truly virtuous, altruistic, world-changing then I shouldn't deny myself status or make financial sacrifices. I should be completely indifferent to those things. And from that the plan became clear: the best way to save the world would be to persuade other people to do it for me. I'm still not entirely sure why they're not already doing it, but I will use the typical mind prior and assume that for some at least, it's for the same reasons as me. They're confused. And that to carry out my plan I won't need to manipulate anyone into carrying out my wishes, but simply help them carry out their own.

I could say a lot more and I will, but for now I just want to know. Who will be my ally?

IA first steps (Berkeley, CA) (dead)

1 Cayenne 27 April 2011 01:25AM

Apologies for wasting your time with this post.  Please disregard it.

 

My temporary plan is to meet together daily at one of the BART stops (Ashby or Berkeley, probably Ashby at first), choose a random set of directions, walk for 30-45 minutes, and then find a place to sit and chat for a while about other activities.  Then walk back and split up.  Total time may be 2-2.5 hours, 1-1.5 of it spent walking.  I'm planning on doing this in the evening to avoid the midday heat, and to be done before it gets dark.

Anyone that wishes can come along.  If you would like to bring ideas for future format or training ideas, please do!  Together we can come up with things that enhance us all.

 

Edit - forgot to add location!  Thanks for reminding me, Randaly.

 

Insufficiently Awesome

28 Cayenne 19 April 2011 07:28PM

Apologies for the wasted time spent reading and replying to this post.  Please disregard it.

 

I've been feeling non-awesome for a long time.  I don't know if anyone else here feels the same way, but I'm going to assume that at least a few people do.  I want to correct this horrible deficiency.

We already have the LW meetups in a lot of places, monthly in some places and weekly in others.  I've gone to a few, and they're interesting and I get to meet a lot of very smart people (and get intimidated by them)... but mostly all we've done is talk and sometimes go and eat at a restaurant.  I want more than this!

 

We already talk, we need an action-based meetup.  I want to propose another kind of meetup, the Insufficiently Awesome meetup.  It should aim to make us good at baseline things like fitness, social skills, strategy, and reflexes, and to make us very good at specialized awesome things like master-level chess/go/shogi, public speaking, various sports, dancing, making music, making art.

I think this meetup should be daily, though not everyone would want to go every day.  Nonetheless, we should have something happening every day that we're not spending talking.  The goal shouldn't be just to be fit in different situations, but to instead become totally awesome.

Is there anyone else that feels the same?  If so, what things do you think we need to learn for the baseline, and what things should we get very good at?

 

Case study: Console Insurance

11 gwern 05 March 2011 02:11AM

I've sometimes seen people say that they need concrete simple examples of ideas like expected utility and Bayes' theorem. So, continuing in the same vein as An Abortion Dialogue and Case Study: Melatonin, I recently polished up my shorter-but-hopefully-still-interesting article on Console Insurance.

It's basically a short discussion of how back-of-the-envelope estimates show console insurance (and, by extension, most warranty extensions) to be a bad investment.
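
The flavor of the calculation can be sketched in a few lines (these numbers are my assumptions for illustration, not from the article): compare the premium to the expected cost of self-insuring.

```python
# Back-of-the-envelope expected-value comparison for an extended warranty.
# All three numbers are assumptions chosen for illustration.

premium = 50.0        # assumed cost of a 2-year console warranty
repair_cost = 150.0   # assumed out-of-pocket repair/replacement cost
p_failure = 0.10      # assumed probability of failure in the period

expected_loss_uninsured = p_failure * repair_cost
print(round(expected_loss_uninsured, 2))  # 15.0 -- far below the $50 premium
```

Unless the failure probability or repair cost is several times higher than these guesses, the expected loss stays well under the premium, which is the article's point: warranties are priced to profit the seller.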

When to scream "Error!"

10 Dorikka 26 February 2011 05:40PM

In Anna’s recent post, she talked about training your mind to notice when it wasn’t curious about something and scream “Error! Look for a different way to do this” in such cases. Johnicholas and TheOtherDave's list of what stupidity feels like also looks useful for this purpose. I'm creating this post to make a more comprehensive list of feelings which indicate that people should reanalyze different possible paths to make sure that the one which they're taking is the most effective one to their objective.

Please suggest additions to the list in your comments -- I'll move them up here (along with links to further explanation, if given.) Keep in mind that your description of the feeling should be as illustrative as possible. For example, "feeling stupid" is unhelpful, while "you feel like you've taken a wrong turn into a never-ending tunnel" is better. Of course, metaphors which are immediately understood by some people may not be so easily understood by others, so try to give a more detailed description of the feeling if other people express that you're probably saying more than they're hearing.

List: "Error! Look for a different way to do this" if you feel like:

  • being bored, being in pain, being distracted, wanting to do anything else than this
  • being unworthy of these divine (external) ideas
  • blind plodding obedience
  • being tired all the time, even if you're not [2]
  • not having enough fingers to hold all of my thoughts in place
  • merging onto the highway when I can't see all the oncoming traffic
  • someone's playing loud distracting music that I can't hear
  • riding on a train with square wheels

1. Sometimes tedious/boring tasks genuinely cannot be made easier or less boring, so your "Error!" message might not return anything useful. However, you should at least look.
2. This may also indicate that your stupidity has biological causes, such as nutrition/sleep deficiency. 20-30 minute naps are awesome, though longer ones might make you groggy.
3. Of course, if a goal-achieving action is also supported by authorities, that is a good thing.

Tool for combating undue hesitation

2 Dorikka 20 February 2011 05:19AM

I sometimes feel negative emotions at the thought that the course of action that I am taking isn't even close to the optimal course of action -- that a more effective mind could sort through whatever situation I'm currently having difficulty with and craft a plan that was much more likely to succeed than any of my own. In short, I feel inept in comparison to the better minds in the space-of-all-possible-minds, and so I experience undue hesitation while I'm trying to figure out the correct action to take. Of course, good planning often leads to better results, but this behavioral pattern has a significantly negative effect in situations (especially social ones) where I need to make a decision quickly.

I've found that it helps me think more rapidly and clearly if I think in terms of which of the possible actions that I've thought of will produce the greatest positive difference in net expected utility in comparison to doing nothing. Once I come up with a course of action, I no longer feel a sense of paralysis at how inept my decision-making skills must be compared to much better minds than my own, which saves a certain amount of mental processing power and emotional effort which can then be used for other things. Doing this also helps to prevent panic and the like from springing up because I'm not thinking in terms of whether I can succeed or not, but sorting through which actions maximize my chances of succeeding out of the set of actions that I've currently  thought up.

I feel like I've written less than I think that I've written -- that people may not get much out of this post because they haven't actually shared my brain with me, and I've done an inadequate job of deconstructing my thoughts when I've put them to paper. If this is correct, please tell me and I can try to elaborate.

Optimal Employment Open Thread

13 [deleted] 14 February 2011 10:49PM

Related to: Optimal Employment, Best career models for doing research?, (Virtual) Employment Open Thread

In Optimal Employment Louie discussed some biases that lead people away from optimal employment, and gave working in Australia as an option for such employment. What are some other options?

Your optimal employment will depend on how much you care about a variety of things (free time, money, etc.) so when discussing options it might be helpful to say what you're trying to optimize for. 

In addition to proposing options we could list resources that might be helpful for generating or implementing options.

Everyday Questions Wanting Rational Answers

5 Relsqui 05 October 2010 06:04AM

I'm working on a list of question types which come up frequently in day-to-day life but which I haven't yet found a reliable, rational way to answer. Here are some examples, including summaries of any progress made in the comments.

continue reading »