Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

[Link] General and Surprising

3 John_Maxwell_IV 15 September 2017 06:33AM

Heuristics for textbook selection

8 John_Maxwell_IV 06 September 2017 04:17AM

Back in 2011, lukeprog posted a textbook recommendation thread.  It's a nice thread, but not every topic has a textbook recommendation.  What are some other heuristics for selecting textbooks besides looking in that thread?

Amazon star rating is the obvious heuristic, but it occurred to me that Amazon sales rank might actually be more valuable: It's an indicator that profs are selecting the textbook for their classes.  And it's an indicator that the textbook has achieved mindshare, meaning you're more likely to learn the same terminology that others use.  (But there are also disadvantages of having the same set of mental models that everyone else is using.)

Somewhere I read that Elements of Statistical Learning was becoming the standard machine learning text partially because it's available for free online.  That creates a wrinkle in the sales rank heuristic, because people are less likely to buy a book if they can get it online for free.  (Though Elements of Statistical Learning appears to be a #1 bestseller on Amazon, in bioinformatics.)

Another heuristic is to read the biographies of the textbook authors and figure out who has the most credible claim to expertise, or who seems to be the most rigorous thinker (e.g. How Brands Grow is much more data-driven than a typical marketing book).  Or try to figure out what text the most expert professors are choosing for their classes.  (Oftentimes you can find the syllabi of their classes online.  I guess the naive path would probably look something like: go to US News to see what the top ranked universities are for the subject you're interested in.  Look at the university's course catalog until you find the course that covers the topic you want to learn.  Do site:youruniversity.edu course_id on Google in order to find the syllabus for the most recent time that course was taught.)
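The syllabus-hunting step at the end is easy to script. A minimal sketch (the university domain and course ID are made-up examples, not recommendations):

```python
from urllib.parse import quote_plus

def syllabus_search_url(university_domain, course_id):
    """Build a Google query like: site:stanford.edu CS229 syllabus"""
    query = f"site:{university_domain} {course_id} syllabus"
    return "https://www.google.com/search?q=" + quote_plus(query)

print(syllabus_search_url("stanford.edu", "CS229"))
# https://www.google.com/search?q=site%3Astanford.edu+CS229+syllabus
```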

Revitalizing Less Wrong seems like a lost purpose, but here are some other ideas

19 John_Maxwell_IV 12 June 2016 07:38AM

This is a response to ingres' recent post sharing Less Wrong survey results. If you haven't read & upvoted it, I strongly encourage you to--they've done a fabulous job of collecting and presenting data about the state of the community.

So, there's a bit of a contradiction in the survey results.  On the one hand, people say the community needs to do more scholarship, be more rigorous, be more practical, be more humble.  On the other hand, not much is getting posted, and it seems like raising the bar will only exacerbate that problem.

I did a query against the survey database to find the complaints of top Less Wrong contributors and figure out how best to serve their needs.  (Note: it's a bit hard to read the comments because some of them should start with "the community needs more" or "the community needs less", but adding that info would have meant constructing a much more complicated query.)  One user wrote:

[it's not so much that there are] overly high standards, just not a very civil or welcoming climate. Why write content for free and get trashed when I can go write a grant application or a manuscript instead?

ingres emphasizes that in order to revitalize the community, we would need more content.  Content is important, but incentives for producing content might be even more important.  Social status may be the incentive humans respond most strongly to.  Right now, from a social status perspective, the expected value of creating a new Less Wrong post doesn't feel very high.  Partially because many LW posts are getting downvotes and critical comments, so my System 1 predicts mine will get them too.  And partially because the Less Wrong brand is weak enough that I don't expect associating myself with it will boost my social status.

When Less Wrong was founded, the primary failure mode guarded against was Eternal September.  If Eternal September represents a sort of digital populism, Less Wrong was attempting a sort of digital elitism.  My perception is that elitism isn't working because the benefits of joining the elite are too small and the costs are too large.  Teddy Roosevelt talked about the man in the arena--I think Less Wrong experienced the reverse of the evaporative cooling EY feared, where people gradually left the arena as the proportion of critics in the stands grew ever larger.

Given where Less Wrong is at, however, I suspect the goal of revitalizing Less Wrong represents a lost purpose.

ingres' survey received a total of 3083 responses.  Not only is that about twice the number we got in the last survey in 2014, it's about twice the number we got in 2013, 2012, and 2011 (and much bigger than the first survey in 2009).  It's hard to know for sure, since previous surveys were only advertised on the LessWrong.com domain, but it doesn't seem like the diaspora thing has slowed the growth of the community a ton and it may have dramatically accelerated it.

Why has the community continued growing?  Here's one possibility.  Maybe Less Wrong has been replaced by superior alternatives.

  • CFAR - ingres writes: "If LessWrong is serious about its goal of 'advancing the art of human rationality' then it needs to figure out a way to do real investigation into the subject."  That's exactly what CFAR does.  CFAR is a superior alternative for people who want something like Less Wrong, but more practical.  (They have an alumni mailing list that's higher quality and more active than Less Wrong.)  Yes, CFAR costs money, because doing research costs money!
  • Effective Altruism - A superior alternative for people who want something that's more focused on results.
  • Facebook, Tumblr, Twitter - People are going to be wasting time on these sites anyway.  They might as well talk about rationality while they do it.  Like all those phpBB boards in the 00s, Less Wrong has been outcompeted by the hot new thing, and I think it's probably better to roll with it than fight it.  I also wouldn't be surprised if interacting with others through social media has been a cause of community growth.
  • SlateStarCodex - SSC already checks most of the boxes under ingres' "Future Improvement Wishlist Based On Survey Results".  In my opinion, the average SSC post has better scholarship, rigor, and humility than the average LW post, and the community seems less intimidating, less argumentative, more accessible, and more accepting of outside viewpoints.
  • The meatspace community - Meeting in person has lots of advantages.  Real-time discussion using Slack/IRC also has advantages.

Less Wrong had a great run, and the superior alternatives wouldn't exist in their current form without it.  (LW was easily the most common way people heard about EA in 2014, for instance, although sampling effects may have distorted that estimate.)  But that doesn't mean it's the best option going forward.

Therefore, here are some things I don't think we should do:

  • Try to be a second-rate version of any of the superior alternatives I mentioned above.  If someone's going to put something together, it should fulfill a real community need or be the best alternative available for whatever purpose it serves.
  • Try to get old contributors to return to Less Wrong for the sake of getting them to return.  If they've judged that other activities are a better use of time, we should probably trust their judgement.  It might be sensible to make an exception for old posters that never transferred to the in-person community, but they'd be harder to track down.
  • Try to solve the same sort of problems Arbital or Metaculus are optimizing for.  No reason to step on the toes of other projects in the community.

But that doesn't mean there's nothing to be done.  Here are some possible weaknesses I see with our current setup:

  • If you've got a great idea for a blog post, and you don't already have an online presence, it's a bit hard to reach lots of people, if that's what you want to do.
  • If we had a good system for incentivizing people to write great stuff (as opposed to merely tolerating great stuff the way LW culture historically has), we'd get more great stuff written.
  • It can be hard to find good content in the diaspora.  Possible solution: Weekly "diaspora roundup" posts to Less Wrong.  I'm too busy to do this, but anyone else is more than welcome to (assuming both people reading LW and people in the diaspora want it).
  • EDIT 11/27/16 - Recently people have been arguing that social media generates relatively superficial discussions.  This plausibly undermines my "lost purpose" thesis.

ingres mentions the possibility of Scott Alexander somehow opening up SlateStarCodex to other contributors.  This seems like a clearly superior alternative to revitalizing Less Wrong, if Scott is down for it:

  • As I mentioned, SSC already seems to have solved most of the culture & philosophy problems that people complained about with Less Wrong.
  • SSC has no shortage of content--Scott has increased the rate at which he creates open threads to deal with an excess of comments.
  • SSC has a stronger brand than Less Wrong.  It's been linked to by Ezra Klein, Ross Douthat, Bryan Caplan, etc.

But the most important reasons may be behavioral reasons.  SSC has more traffic--people are in the habit of visiting there, not here.  And the posting habits people have acquired there seem more conducive to community.  Changing habits is hard.

As ingres writes, revitalizing Less Wrong is probably about as difficult as creating a new site from scratch, and I think creating a new site from scratch for Scott is a superior alternative for the reasons I gave.

So if there's anyone who's interested in improving Less Wrong, here's my humble recommendation: Go tell Scott Alexander you'll build an online forum to his specification, with SSC community feedback, to provide a better solution for his overflowing open threads.  Once you've solved that problem, keep making improvements and subfora so your forum becomes the best available alternative for more and more use cases.

And here's my humble suggestion for what an SSC forum could look like:

As I mentioned above, Eternal September is analogous to a sort of digital populism.  The major social media sites often have a "mob rule" culture to them, and people are increasingly seeing the disadvantages of this model.  Less Wrong tried to achieve digital elitism and it didn't work well in the long run, but that doesn't mean it's impossible.  Edge.org has found a model for digital elitism that works.  There may be other workable models out there.  A workable model could even turn into a successful company.  Fight the hot new thing by becoming the hot new thing.

My proposal is based on the idea of eigendemocracy.  (Recommended that you read the link before continuing--eigendemocracy is cool.)  In eigendemocracy, your trust score is a composite rating of what trusted people think of you.  (It sounds like infinite recursion, but it can be resolved using linear algebra.)
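To make the "resolved using linear algebra" claim concrete, here's a toy sketch (the trust matrix and its values are invented for illustration, not from the post): each person's score is the trust-weighted sum of the scores of the people who trust them, and the fixed point of that recursion is the dominant eigenvector of the trust matrix, computable by power iteration, PageRank-style.

```python
import numpy as np

# Toy trust matrix: T[i][j] is how much person i trusts person j.
# (Values are made up for illustration.)
T = np.array([
    [0.0, 0.8, 0.2],
    [0.5, 0.0, 0.5],
    [0.9, 0.1, 0.0],
])

# Normalize so each person's outgoing trust sums to 1.
T = T / T.sum(axis=1, keepdims=True)

def trust_scores(T, iters=100):
    """Resolve the 'infinite recursion' by power iteration:
    repeatedly set each score to the trust-weighted sum of the
    scores of everyone who trusts you."""
    n = T.shape[0]
    scores = np.ones(n) / n
    for _ in range(iters):
        scores = T.T @ scores           # scores flow along trust edges
        scores = scores / scores.sum()  # keep it a probability vector
    return scores

print(trust_scores(T))  # approximates the dominant eigenvector of T.T
```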

Eigendemocracy is a complicated idea, but a simple way to get most of the way there would be to have a forum where having lots of karma gives you the ability to upvote multiple times.  How would this work?  Let's say Scott starts with 5 karma and everyone else starts with 0 karma.  Each point of karma gives you the ability to upvote once a day.  Let's say it takes 5 upvotes for a post to get featured on the sidebar of Scott's blog.  If Scott wants to feature a post on the sidebar of his blog, he upvotes it 5 times, netting the person who wrote it 1 karma.  As Scott features more and more posts, he gains a moderation team full of people who wrote posts that were good enough to feature.  As they feature posts in turn, they generate more co-moderators.
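A minimal sketch of that mechanism, using the numbers above (the pooling of upvotes toward the threshold and the exact data structures are my assumptions, not part of the proposal):

```python
# Sketch of the karma-multiplier forum: karma grants daily upvotes,
# and getting a post featured earns the author 1 karma.

FEATURE_THRESHOLD = 5  # upvotes needed for a post to be featured

class Forum:
    def __init__(self):
        self.karma = {"scott": 5}   # Scott seeds the system with 5 karma
        self.votes = {}             # post id -> upvotes received so far
        self.featured = set()

    def daily_upvote_budget(self, user):
        # Each point of karma grants one upvote per day.
        return self.karma.get(user, 0)

    def upvote(self, voter, post_id, author, times=1):
        times = min(times, self.daily_upvote_budget(voter))
        self.votes[post_id] = self.votes.get(post_id, 0) + times
        if self.votes[post_id] >= FEATURE_THRESHOLD and post_id not in self.featured:
            self.featured.add(post_id)
            # Featuring a post nets its author 1 karma (one daily upvote),
            # gradually growing the moderation team.
            self.karma[author] = self.karma.get(author, 0) + 1

forum = Forum()
forum.upvote("scott", "post-by-alice", "alice", times=5)
print(forum.featured)        # {'post-by-alice'}
print(forum.karma["alice"])  # 1
```

Each featured author can now cast one upvote a day of their own, so featuring compounds into co-moderation, exactly the dynamic described above.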

Why do I like this solution?

  • It acts as a cultural preservation mechanism.  On reddit and Twitter, sheer numbers rule when determining what gets visibility.  The reddit-like voting mechanisms of Less Wrong meant that the site deliberately kept a somewhat low profile in order to avoid getting overrun.  Even if SSC experienced a large influx of new users, those users would only gain power to affect the visibility of content if they proved themselves by making quality contributions first.
  • It takes the moderation burden off of Scott and distributes it across trusted community members.  As the community grows, the mod team grows with it.
  • The incentives seem well-aligned.  Writing stuff Scott likes or meta-likes gets you recognition, mod powers, and the ability to control the discussion--forms of social status.  Contrast with social media sites where hyperbole is a shortcut to attention, followers, upvotes.  Also, unlike Less Wrong, there'd be no punishment for writing a low quality post--it simply doesn't get featured and is one more click away from the SSC homepage.

TL;DR - Despite appearances, the Less Wrong community is actually doing great.  Any successor to Less Wrong should try to offer compelling advantages over options that are already available.

Zooming your mind in and out

8 John_Maxwell_IV 06 July 2015 12:30PM

I recently noticed I had two mental processes opposing one another in an interesting way.

The first mental process was instilled by reading Daniel Kahneman on the focusing illusion and Paul Graham on procrastination.  This process encourages me to "zoom out" when engaging in low-value activities so I can see they don't deliver much value in the grand scheme of things.

The second mental process was instilled by reading about the importance of just trying things.  (These articles could be seen as steelmanning Mark Friedenbach's recent Less Wrong critique.)  This mental process encourages me to "zoom in" and get my hands dirty through experimentation.

Both these processes seem useful.  Instead of spending long stretches of time in either the "zoomed in" or "zoomed out" state, I think I'd do better flip-flopping between them.  For example, if I'm wandering down internet rabbit holes, I'm spending too much time zoomed in.  Asking "why" repeatedly could help me realize I'm doing something low value.  If I'm daydreaming, or planning a lot while doing little, I'm spending too much time zoomed out.  Asking "how" repeatedly could help me identify a first step.

This fits in with construal level theory, aka "near/far theory" as discussed by Robin Hanson.  (I recommend the reviews Hanson links to; they gave me a different view of the concept than his standard presentation.)  To be more effective, maybe one should increase cross communication between the "near" and "far" modes, so the parts work together harmoniously instead of being at odds.

If Hanson's view is right, maybe the reason people become uncomfortable when they realize they are procrastinating (or not Just Trying It) is that this maps to getting caught red-handed in an act of hypocrisy in the ancestral environment.  You're pursuing near interests (watching Youtube videos) instead of working towards far ideals (doing your homework)?  For shame!

(Possible cure: Tell yourself that there's nothing to be ashamed of if you get stuck zoomed in; it happens to everyone.  Just zoom out.)

Part of me is reluctant to make this post, because I just had this idea and it feels like I should test it out more before writing about it.  So here are my excuses:

1. If I wait until I develop expertise in everything, it may be too late to pass it on.

2. In order to see if this idea is useful, I'll need to pay attention to it.  And writing about it publicly is a good way to help myself pay attention to it, since it will become part of my identity and I'll be interested to see how people respond.

There might be activities people already do on a regular basis that consist of repeated zooming in and out.  If so, engaging in them could be a good way to build this mental muscle.  Can anyone think of something like this?

Purchasing research effectively open thread

12 John_Maxwell_IV 21 January 2015 12:24PM

Many of the biggest historical success stories in philanthropy have come in the form of funding for academic research.  This suggests that the topic of how to purchase such research well should be of interest to effective altruists.  Less Wrong survey results indicate that a nontrivial fraction of LW has firsthand experience with the academic research environment.  Inspired by the recent Elon Musk donation announcement, this is a thread for discussion of effectively using money to enable important, useful research.  Feel free to brainstorm your own questions and ideas before reading what's written in the thread.

Productivity thoughts from Matt Fallshaw

13 John_Maxwell_IV 21 August 2014 05:05AM

At the 2014 Effective Altruism Summit in Berkeley a few weeks ago, I had the pleasure of talking to Matt Fallshaw about the things he does to be more effective.  Matt is a founder of Trike Apps (the consultancy that built Less Wrong), a founder of Bellroy, and a polyphasic sleeper.  Notes on our conversation follow.

Matt recommends having a system for acquiring habits.  He recommends separating collection from processing; that is, if you have an idea for a new habit you want to acquire, you should record the idea at the time you have it and then think about actually implementing it at some future time.  Matt recommends doing this through a weekly review.  He recommends vetting your collection to see what habits seem actually worth acquiring, then for those habits you actually want to acquire, coming up with a compassionate, reasonable plan for how you're going to acquire the habit.

(Previously on LW: How habits work and how you may control them; Common failure modes in habit formation.)

The most difficult kind of habit for me to acquire is that of random-access situation-response habits, e.g. "if I'm having a hard time focusing, read my notebook entry that lists techniques for improving focus".  So I asked Matt if he had any habit formation advice for this particular situation.  Matt recommended trying to actually execute the habit I wanted as many times as possible, even in an artificial context.  Steve Pavlina describes the technique here.  Matt recommends making your habit execution as emotionally salient as possible.  His example: Let's say you're trying to become less of a prick.  Someone starts a conversation with you and you notice yourself experiencing the kind of emotions you experience before you start acting like a prick.  So you spend several minutes explaining to them the episode of disagreeableness you felt coming on and how you're trying to become less of a prick before proceeding with the conversation.  If all else fails, Matt recommends setting a recurring alarm on your phone that reminds you of the habit you're trying to acquire, although he acknowledges that this can be expensive.

Part of your plan should include a check to make sure you actually stick with your new habit.  But you don't want a check that's overly intrusive.  Matt recommends keeping an Anki deck with a card for each of your habits.  Then during your weekly review session, you can review the cards Anki recommends for you.  For each card, you can rate the degree to which you've been sticking with the habit it refers to and do something to revitalize the habit if you haven't been executing it.  Matt recommends writing the cards in a form of a concrete question, e.g. for a speed reading habit, a question could be "Did you speed read the last 5 things you read?"  If you haven't been executing a particular habit, check to see if it has a clear, identifiable trigger.

Ideally your weekly review will come at a time you feel particularly "agenty" (see also: Reflective Control).  So you may wish to schedule it at a time during the week when you tend to feel especially effective and energetic.  Consuming caffeine before your weekly review is another idea.

When running into seemingly intractable problems related to your personal effectiveness, habits, etc., Matt recommends taking a step back to brainstorm and try to think of creative solutions.  He says that oftentimes people will write off a task as "impossible" if they aren't able to come up with a solution in 30 seconds.  He recommends setting a 5-minute timer.

In terms of habits worth acquiring, Matt is a fan of speed reading, Getting Things Done, and the Theory of Constraints (especially useful for larger projects).

Matt has found that through aggressive habit acquisition, he's been able to experience a sort of compound return on the habits he's acquired: by acquiring habits that give him additional time and mental energy, he's been able to reinvest some of that additional time and mental energy into the acquisition of even more useful habits.  Matt doesn't think he's especially smart or high-willpower relative to the average person in the Less Wrong community, and credits this compounding for the reputation he's acquired for being a badass.

Managing one's memory effectively

13 John_Maxwell_IV 06 June 2014 05:39PM

Note: this post leans heavily on metaphors and examples from computer programming, but I've tried to write it so it's accessible to a determined person with no programming background.

To summarize some info from computer processor design at very high density: There are a variety of ways to manufacture the memory that's used in modern computer processors.  There's a trend where the faster a kind of memory is to read from and write to, the more expensive it will be.  So modern computers have a hierarchical memory structure: a very small amount of memory that's very fast to do computation with ("the registers"), a larger amount of memory that's a bit slower to do computation with, an even larger amount of memory that's even slower to do computation with, and so on.  The two layers immediately below the registers (the L1 cache and the L2 cache) are typically abstracted away from even the assembly language programmer.  They store data that's been accessed recently from the level below them ("main memory").  The processor will do a lookup in the caches when accessing data; if the data is not already in the cache, that's called a "cache miss" and the data will get loaded into the cache before it's accessed.

(Please correct me in the comments if I got any of that wrong; it's based on years-old memories of an undergrad computer science course.)
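The hierarchy described above can be sketched as a toy stack of LRU cache levels (the sizes, and main memory as a trivial stand-in function, are invented for illustration):

```python
from collections import OrderedDict

class Cache:
    """A toy fixed-size LRU cache level sitting above a slower backing store."""
    def __init__(self, capacity, backing):
        self.capacity = capacity
        self.backing = backing      # the next (slower) level down
        self.data = OrderedDict()
        self.misses = 0

    def read(self, addr):
        if addr in self.data:
            self.data.move_to_end(addr)     # mark as recently used
            return self.data[addr]
        # Cache miss: fetch from the level below, then keep a copy here.
        self.misses += 1
        value = self.backing.read(addr)
        self.data[addr] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict the least recently used
        return value

class MainMemory:
    def read(self, addr):
        return addr * 2   # stand-in for "slow" data

# L1 on top of L2 on top of main memory, with made-up sizes.
l2 = Cache(capacity=8, backing=MainMemory())
l1 = Cache(capacity=2, backing=l2)

l1.read(1); l1.read(2); l1.read(1)   # addresses 1 and 2 miss once, then 1 hits
print(l1.misses, l2.misses)          # 2 2
```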

Lately I've found it useful to think of my memory in the same way.  I've got working memory (7±2 items?), consisting of things that I'm thinking about in this very moment.  I've got short term memory and long term memory.  And if I can't find something after trying to think of it for a while, I'll look it up (frequently on Google).  Cache miss for the lose.

What are some implications of thinking about memory this way?


Register limitations and chunking

When programming, I've noticed that sometimes I'll encounter a problem that's too big to fit in my working memory (WM) all at once.  In the spirit of getting stronger, I'm typically tempted to attack the problem head on, but I find that my brain just tends to flit around the details of the problem instead of actually making progress on it.  So lately I've been toying with the idea of trying to break off a piece of the problem that can be easily modularized and fits fully in my working memory and then solving it on its own.  (Feynman: "What's the smallest nontrivial example?")  You could turn this definition around and define a good software architecture as one that consists of modular components that can individually be made to fit completely into one's working memory when reading code.

As you write or read code modules, you'll come to understand them better and you'll be able to compress or "chunk" them so they take up less space in your working memory.  This is why top-down programming doesn't always work that well.  You're trying to fit the entire design in your working memory, but because you don't have a good understanding of the components yet (since you haven't written them), you aren't dealing with chunks but pseudochunks.  This is true for concepts in general: it takes all of a beginner's WM to comprehend a for loop, but in a master's WM a for loop can be but one piece in a larger puzzle.



One thing to observe: you don't get alerted when memory at the top of your mental hierarchy gets overwritten.  We've all had the experience of having some idea in the shower and having forgotten it by the time we get out.  Similarly, if you're working on a delicate mental task (programming, math, etc.) and you get interrupted, you'll lose mental state related to the problem you're working on.

If you're having difficulty focusing, this can easily make doing a delicate mental task, like a complicated math problem, much less fun and productive.  Instead of actually making progress on the task, your mind drifts away from it, and when you redirect your attention, you find that information related to the problem has swapped out of your working memory or short-term memory and must be re-loaded.  If you're getting distracted frequently enough or you're otherwise lacking mental stamina, you may find that you spend the majority of your time context switching instead of making progress on your problem.


Adding an additional external cache level

Anecdotally, adding an additional brain cache level between long-term memory and Google seems like a pretty big win for personal productivity.  My digital notebook (since writing that post, I've started using nvALT) has turned out to be one of my biggest wins where productivity is concerned; it's ballooned to over 700K words, and a decent portion of it consists of copy-pasted snippets that represent the best information from Google searches I've done.  A co-worker wrote a tool that allows him to quickly look up how to use software libraries and reports that he's continued to find it very useful years after making it.

Text is the most obvious example of an exobrain memory device, but here's a more interesting example: if you're cleaning a messy room, you probably don't develop a detailed plan in your head of where all of your stuff will be placed when you finish cleaning.  Instead, you incrementally organize things into related piles, then decide what to do with the piles, using the organization of the items in your room as a kind of external memory aid that allows you to do a mental task that you wouldn't be able to do entirely in your head.

Would it be accurate to say that you're "not intelligent enough" to organize your room in your head without the use of any external memory aids?  It doesn't really fit with the colloquial use of "intelligence", does it?  But in the same way computers are frequently RAM-limited, I suspect that humans are also frequently RAM-limited, even on mental tasks we associate with "intelligence".  For example, if you're reading a physics textbook and you notice that you're getting confused, you could write down a question that would resolve your confusion, then rewrite the question to be as precise as possible, then list hypotheses that would answer your question along with reasons to believe/disbelieve each hypothesis.  By writing things down, you'd be able to devote all of your working memory to the details of a particular aspect of your confusion without losing track of the rest of it.


OpenWorm and differential technological development

6 John_Maxwell_IV 19 May 2014 04:47AM

According to Robin Hanson's arguments in this blog post, we want to promote research into cell modeling technology (ideally at the expense of research into faster computer hardware).  That would mean funding this kickstarter, which is ending in 11 hours (it may still succeed; there are a few tricks for pushing borderline kickstarters through).  I already pledged $250; I'm not sure if I should pledge significantly more on the strength of one Hanson blog post.  Thoughts from anyone?  (I also encourage other folks to pledge!  Maybe we can name neurons after characters in HPMOR or something.  EDIT: Or maybe funding OpenWorm is a bad idea; see this link.)

I'm also curious on what people think about the efficiency of trying to pursue differential technological development directly this way vs funding MIRI/FHI.  I haven't read the entire conversation referenced here, but this bit from the blog post sounds correct-ish to me:

People doing philosophical work to try to reduce existential risk are largely wasting their time. Tyler doesn’t think it’s a serious effort, though it may be good publicity for something that will pay off later. A serious effort looks more like the parts of the US government that trained people to infiltrate the post-collapse Soviet Union and then locate and neutralize nuclear weapons. There was also a serious effort by the people who set up hotlines between leaders to be used to quickly communicate about nuclear attacks (e.g., to help quickly convince a leader in country A that a fishy object on their radar isn’t an incoming nuclear attack).

Edit: For some reason I forgot about this previous discussion on this topic, which makes the case for funding OpenWorm look less clear-cut.

System Administrator Appreciation Day - Thanks Trike!

70 John_Maxwell_IV 26 July 2013 05:57PM

In honor of System Administrator Appreciation Day, this is a post to thank Trike Apps for creating & maintaining Less Wrong.  A lot of the time when they are mentioned on Less Wrong, it is to complain about bugs or request new features.  So, for this one day of the year: thanks for everything that continues to silently go right!

Existential risks open thread

10 John_Maxwell_IV 31 March 2013 12:52AM

We talk about a wide variety of stuff on LW, but we don't spend much time trying to identify the very highest-utility stuff to discuss and promoting additional discussion of it.  This thread is a stab at that.  Since it's just comments, you can feel more comfortable bringing up ideas that might be wrong or unoriginal (but nevertheless have relatively high expected value, since existential risks are such an important topic).

View more: Next