Revitalizing Less Wrong seems like a lost purpose, but here are some other ideas

19 John_Maxwell_IV 12 June 2016 07:38AM

This is a response to ingres' recent post sharing Less Wrong survey results. If you haven't read & upvoted it, I strongly encourage you to--they've done a fabulous job of collecting and presenting data about the state of the community.

So, there's a bit of a contradiction in the survey results.  On the one hand, people say the community needs to do more scholarship, be more rigorous, be more practical, be more humble.  On the other hand, not much is getting posted, and it seems like raising the bar will only exacerbate that problem.

I did a query against the survey database to find the complaints of top Less Wrong contributors and figure out how best to serve their needs.  (Note: it's a bit hard to read the comments because some of them should start with "the community needs more" or "the community needs less", but adding that info would have meant constructing a much more complicated query.)  One user wrote:

[it's not so much that there are] overly high standards,  just not a very civil or welcoming climate . why write content for free and get trashed when I can go write a grant application or a manuscript instead?

ingres emphasizes that in order to revitalize the community, we would need more content.  Content is important, but incentives for producing content might be even more important.  Social status may be the incentive humans respond to most strongly.  Right now, from a social status perspective, the expected value of creating a new Less Wrong post doesn't feel very high.  Partially because many LW posts get downvotes and critical comments, so my System 1 expects mine will get the same treatment.  And partially because the Less Wrong brand is weak enough that I don't expect associating myself with it to boost my social status.

When Less Wrong was founded, the primary failure mode guarded against was Eternal September.  If Eternal September represents a sort of digital populism, Less Wrong was attempting a sort of digital elitism.  My perception is that elitism isn't working because the benefits of joining the elite are too small and the costs are too large.  Teddy Roosevelt talked about the man in the arena--I think Less Wrong experienced the reverse of the evaporative cooling EY feared, where people gradually left the arena as the proportional number of critics in the stands grew ever larger.

Given where Less Wrong is at, however, I suspect the goal of revitalizing Less Wrong represents a lost purpose.

ingres' survey received a total of 3083 responses.  Not only is that about twice the number we got in the last survey in 2014, it's about twice the number we got in 2013, 2012, and 2011 (though much bigger than the first survey in 2009).  It's hard to know for sure, since previous surveys were only advertised on the LessWrong.com domain, but it doesn't seem like the diaspora thing has slowed the growth of the community much, and it may have dramatically accelerated it.

Why has the community continued growing?  Here's one possibility.  Maybe Less Wrong has been replaced by superior alternatives.

  • CFAR - ingres writes: "If LessWrong is serious about it's goal of 'advancing the art of human rationality' then it needs to figure out a way to do real investigation into the subject."  That's exactly what CFAR does.  CFAR is a superior alternative for people who want something like Less Wrong, but more practical.  (They have an alumni mailing list that's higher quality and more active than Less Wrong.)  Yes, CFAR costs money, because doing research costs money!
  • Effective Altruism - A superior alternative for people who want something that's more focused on results.
  • Facebook, Tumblr, Twitter - People are going to be wasting time on these sites anyway.  They might as well talk about rationality while they do it.  Like all those phpBB boards in the 00s, Less Wrong has been outcompeted by the hot new thing, and I think it's probably better to roll with it than fight it.  I also wouldn't be surprised if interacting with others through social media has been a cause of community growth.
  • SlateStarCodex - SSC already checks most of the boxes under ingres' "Future Improvement Wishlist Based On Survey Results".  In my opinion, the average SSC post has better scholarship, rigor, and humility than the average LW post, and the community seems less intimidating, less argumentative, more accessible, and more accepting of outside viewpoints.
  • The meatspace community - Meeting in person has lots of advantages.  Real-time discussion using Slack/IRC also has advantages.

Less Wrong had a great run, and the superior alternatives wouldn't exist in their current form without it.  (LW was easily the most common way people heard about EA in 2014, for instance, although sampling effects may have distorted that estimate.)  But that doesn't mean it's the best option going forward.

Therefore, here are some things I don't think we should do:

  • Try to be a second-rate version of any of the superior alternatives I mentioned above.  If someone's going to put something together, it should fulfill a real community need or be the best alternative available for whatever purpose it serves.
  • Try to get old contributors to return to Less Wrong for the sake of getting them to return.  If they've judged that other activities are a better use of time, we should probably trust their judgement.  It might be sensible to make an exception for old posters that never transferred to the in-person community, but they'd be harder to track down.
  • Try to solve the same sort of problems Arbital or Metaculus is optimizing for.  No reason to step on the toes of other projects in the community.

But that doesn't mean there's nothing to be done.  Here are some possible weaknesses I see with our current setup:

  • If you've got a great idea for a blog post, and you don't already have an online presence, it's a bit hard to reach lots of people, if that's what you want to do.
  • If we had a good system for incentivizing people to write great stuff (as opposed to merely tolerating great stuff the way LW culture historically has), we'd get more great stuff written.
  • It can be hard to find good content in the diaspora.  Possible solution: Weekly "diaspora roundup" posts to Less Wrong.  I'm too busy to do this, but anyone else is more than welcome to (assuming both people reading LW and people in the diaspora want it).

ingres mentions the possibility of Scott Alexander somehow opening up SlateStarCodex to other contributors.  This seems like a clearly superior alternative to revitalizing Less Wrong, if Scott is down for it:

  • As I mentioned, SSC already seems to have solved most of the culture & philosophy problems that people complained about with Less Wrong.
  • SSC has no shortage of content--Scott has increased the rate at which he creates open threads to deal with an excess of comments.
  • SSC has a stronger brand than Less Wrong.  It's been linked to by Ezra Klein, Ross Douthat, Bryan Caplan, etc.

But the most important reasons may be behavioral reasons.  SSC has more traffic--people are in the habit of visiting there, not here.  And the posting habits people have acquired there seem more conducive to community.  Changing habits is hard.

As ingres writes, revitalizing Less Wrong is probably about as difficult as creating a new site from scratch, and I think creating a new site from scratch for Scott is a superior alternative for the reasons I gave.

So if there's anyone who's interested in improving Less Wrong, here's my humble recommendation: Go tell Scott Alexander you'll build an online forum to his specification, with SSC community feedback, to provide a better solution for his overflowing open threads.  Once you've solved that problem, keep making improvements and subfora so your forum becomes the best available alternative for more and more use cases.

And here's my humble suggestion for what an SSC forum could look like:

As I mentioned above, Eternal September is analogous to a sort of digital populism.  The major social media sites often have a "mob rule" culture to them, and people are increasingly seeing the disadvantages of this model.  Less Wrong tried to achieve digital elitism and it didn't work well in the long run, but that doesn't mean it's impossible.  Edge.org has found a model for digital elitism that works.  There may be other workable models out there.  A workable model could even turn into a successful company.  Fight the hot new thing by becoming the hot new thing.

My proposal is based on the idea of eigendemocracy.  (I recommend reading the link before continuing--eigendemocracy is cool.)  In eigendemocracy, your trust score is a composite rating of what trusted people think of you.  (It sounds like infinite recursion, but it can be resolved using linear algebra.)
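
To make the linear-algebra remark concrete, here's a minimal sketch in Python.  The trust matrix, damping factor, and four-user setup are all illustrative assumptions of mine; the point is just that the circular definition resolves into a fixed point found by power iteration, the same trick PageRank uses:

```python
import numpy as np

# Hypothetical trust matrix: trust[i, j] = how much user i trusts user j.
# Each row is normalized so every user hands out one unit of trust total.
trust = np.array([
    [0.0, 0.5, 0.5, 0.0],
    [0.3, 0.0, 0.3, 0.4],
    [0.5, 0.5, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
])

def trust_scores(T, damping=0.85, iters=100):
    """Power iteration: your score is a weighted sum of the scores of
    the people who trust you.  Iterating converges to the principal
    eigenvector of the trust matrix -- no infinite recursion required."""
    n = T.shape[0]
    scores = np.ones(n) / n              # start everyone out equal
    for _ in range(iters):
        scores = (1 - damping) / n + damping * (T.T @ scores)
        scores /= scores.sum()           # renormalize each round
    return scores

print(trust_scores(trust))  # user 1, trusted by all three others, ranks highest
```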

Eigendemocracy is a complicated idea, but a simple way to get most of the way there would be to have a forum where having lots of karma gives you the ability to upvote multiple times.  How would this work?  Let's say Scott starts with 5 karma and everyone else starts with 0 karma.  Each point of karma gives you the ability to upvote once a day.  Let's say it takes 5 upvotes for a post to get featured on the sidebar of Scott's blog.  If Scott wants to feature a post on the sidebar of his blog, he upvotes it 5 times, netting the person who wrote it 1 karma.  As Scott features more and more posts, he gains a moderation team full of people who wrote posts that were good enough to feature.  As they feature posts in turn, they generate more co-moderators.
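
Here's a minimal sketch of that mechanic in Python, using the illustrative numbers from the paragraph above (the data structures and function are my own stand-ins, not any real forum's code):

```python
from collections import defaultdict

FEATURE_THRESHOLD = 5        # upvotes needed to land on the sidebar

karma = defaultdict(int)     # karma[user]: 1 point = 1 upvote per day
karma["Scott"] = 5           # Scott seeds the system
featured = set()

def run_day(votes, authors):
    """votes: list of (voter, post_id) pairs cast today.
    authors: post_id -> author.  Each voter gets karma[voter] upvotes
    per day; featuring a post permanently nets its author 1 karma."""
    budget = dict(karma)                  # daily budget resets to karma
    tally = defaultdict(int)
    for voter, post_id in votes:
        if budget.get(voter, 0) > 0:      # can't spend more than your budget
            budget[voter] -= 1
            tally[post_id] += 1
    for post_id, n in tally.items():
        if n >= FEATURE_THRESHOLD and post_id not in featured:
            featured.add(post_id)
            karma[authors[post_id]] += 1  # author joins the moderation pool

# Day 1: Scott spends all 5 of his daily upvotes on alice's post.
run_day([("Scott", "post-1")] * 5, {"post-1": "alice"})
print(karma["alice"])  # 1 -- alice can now cast one upvote per day herself
```

Because featured-post karma is permanent while the upvote budget resets daily, moderation power compounds exactly as described: every featured author becomes a new daily source of upvotes.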

Why do I like this solution?

  • It acts as a cultural preservation mechanism.  On reddit and Twitter, sheer numbers rule when determining what gets visibility.  The reddit-like voting mechanisms of Less Wrong meant that the site deliberately kept a somewhat low profile in order to avoid getting overrun.  Even if SSC experienced a large influx of new users, those users would only gain power to affect the visibility of content if they proved themselves by making quality contributions first.
  • It takes the moderation burden off of Scott and distributes it across trusted community members.  As the community grows, the mod team grows with it.
  • The incentives seem well-aligned.  Writing stuff Scott likes or meta-likes gets you recognition, mod powers, and the ability to control the discussion--forms of social status.  Contrast with social media sites where hyperbole is a shortcut to attention, followers, upvotes.  Also, unlike Less Wrong, there'd be no punishment for writing a low quality post--it simply doesn't get featured and is one more click away from the SSC homepage.

TL;DR - Despite appearances, the Less Wrong community is actually doing great.  Any successor to Less Wrong should try to offer compelling advantages over options that are already available.

Comment author: Algon 03 June 2016 04:06:56PM *  0 points [-]

I've seen you mention trigger point therapy before. It's something I do, and it helps to a degree, but it has not made a large change in my quality of life.

The rest seems worthwhile. Thank you for that.

Comment author: John_Maxwell_IV 04 June 2016 01:38:10AM *  0 points [-]

I would guess then that you either

  • Suffer mainly from trigger points, but you're treating the wrong ones/haven't found effective treatment methods

  • Suffer from some other condition that's causing trigger points in your muscles as a downstream effect

One thing that might give you a clue is to figure out just how bad your trigger points are. You won't have a point of reference yourself, so I'd suggest visiting a few massage therapists and asking them after your massage whether you seem tighter than a typical client and where your worst tightness is. If your trigger points are very bad, or you have significant tightness/pain even in areas that aren't close to your head, I'd update some in the direction of them representing the core of your problem.

If trigger points are your primary issue, then keep in mind they can require quite a lot of creative investigation to treat effectively. For example, my current hypothesis is that the eyestrain issues I struggled with a few months ago were caused in part by the following chain: Morton's foot -> trigger points in my soleus -> trigger points in my jaw muscles -> trigger points in my upper sternocleidomastoid -> trigger points in my eye muscles. It sounds weird, but when I spend a day walking around with inserts in my shoes to correct for the Morton's foot, my eyes feel like they're loosening up when I lie down to sleep at the end of the day.

I recommend thoroughly reading the perpetuating-factors chapter in every trigger point book you can get your hands on. Part of the reason I recommend SAMe is that folate deficiency is one of the perpetuating factors that's been identified for trigger point problems, but some people (like me) have MTHFR mutations that interfere with folate metabolism, and SAMe helps get around that. (Getting 23andMe results can help you determine whether you're also an undermethylator.)

Make yourself the world's foremost expert on trigger points (and any other field of research that seems helpful for your pain). Then you'll have a great career if you do end up managing to fix yourself.

Comment author: Algon 31 May 2016 08:10:08PM 5 points [-]

To fellow victims of chronic pain: do you ever despair about the future, knowing your pain might never end? If so, how do you deal with it?

I've made it a Schelling point to never end it all. To leave open the possibility of suicide seems too dangerous to me, too alluring. But I'm still afraid that one day I might try. Do any of you ever feel like this?

I would like to know how others deal with this, as I'm only doing so-so.

Comment author: John_Maxwell_IV 03 June 2016 11:39:09AM *  4 points [-]

Did you look at https://www.painscience.com/? That site had info that cured nasty chronic pain of mine that lasted >1 year. This tutorial in particular was extremely helpful: https://www.painscience.com/tutorials/trigger-points.php

To answer your original question: when I was dealing with chronic pain, I had issues with deep despair similar to what you describe. My chronic pain left me unemployed, and I was constantly in fear of doing things that would aggravate my condition and set back the (very slow and variable) progress it was making in resolving itself. Definitely an extremely miserable period.

Thoughts I had that I found helpful, and that I'll pass on to you: I decided there were basically 2 strategies for dealing with the pain I had: cure and mitigation. Cure refers to finding a way to roll back the root cause of the problem and return to being my pain-free self. Mitigation refers to accepting the pain and finding ways to work around it (for me: finding a job that doesn't require me to use my hands at all, and probably doing a lot more meditation). I decided that it was best to focus on one strategy at a time, and that I should focus on "cure" for at least several years before switching to "mitigation". (What's a few years when I had decades left to live?)

I realized that any given "cure" had a pretty low probability of working out, and that being in a state of deep despair was extremely non-conducive to trying things that individually had a small probability of working. This observation was helpful for recalibrating my intuition, and I resolved to make the "list of things I had tried" as long as I could possibly make it. I also resolved to do more of a breadth-first search than a depth-first search, at least at first: I didn't want something that would gradually fix my pain over the course of many months in a way I would need careful journaling to observe. I wanted a technique that would help things noticeably, and that I could use at any time if the issues came up in the future.

Luckily I did manage to find such a technique: trigger point therapy (see the links above). I've since helped a few others make progress on their pain using it, and I think it's potentially useful for many, perhaps almost all, people who suffer chronic pain.

Some more specific recommendations:

  • If you're not already taking something, start taking SAMe. It's an over-the-counter supplement with antidepressant effects, and it has been shown to be quite useful for arthritis (so who knows, maybe it will end up helping your condition somehow--it probably hasn't been studied for your condition, and you may as well do an n=1 trial). Ideally it will improve your mood, which will give you the motivation to try low-probability treatments, and it might fix your issue on its own. Here's more info: http://www.lifeextension.com/Magazine/2007/4/report_same/Page-01

  • Read this book: http://smile.amazon.com/How-Fail-Almost-Everything-Still/dp/1591847745/ Not only is it a great book in and of itself, the author covers mental strategies that are ideal for people with chronic medical conditions. And he uses the story of his own chronic condition as a motivating example throughout the book, so it gives you something to relate to.

Comment author: John_Maxwell_IV 30 May 2016 05:22:55AM 1 point [-]

Negativity bias might be a better cite than loss aversion.

Comment author: username2 16 May 2016 02:46:34PM 4 points [-]

A repost from an earlier open thread.

I am looking for sources of semi-technical reviews and expository weblog posts to add to my RSS reader; preferably 4—20 screenfuls of text on topics including or related to evolutionary game theory, mathematical modelling in the social sciences, theoretical computer science applied to non-computer things, microeconomics applied to unusual things (e.g. Hanson's Age of Em), psychometrics, the theory of machine learning, and so on. What I do not want: pure mathematics, computer science trivia, coding trivia, machine learning tutorials, etc.

Some examples that mostly match what I want, in roughly descending order:

How do I go about finding more feeds like that? I have already tried the obvious, such as googling "allintext: egtheory jeremykun" and found a couple OPML files (including gwern's), but they didn't contain anything close. The obvious blogrolls weren't helpful either (most of them were endless lists of conference announcements and calls for papers). Also, I've grepped a few relevant subreddits for *.wordpress.*, *.blogspot.* and *.github.io submissions (only finding what I already have in my RSS feeds — I suspect the less established blogs just haven't gotten enough upvotes).

Comment author: John_Maxwell_IV 27 May 2016 08:54:23AM 1 point [-]

Would Andrew Gelman's blog count? (Author of recommended textbook on Bayesian statistics.)

Maybe it would be useful for you to share the entire blogroll you've accumulated thus far and just ask people to recommend more blogs like the ones you already have. For example, I'm guessing you found Gelman already, since he's in Robin Hanson's blogroll--but I can't think of a way you would have plausibly found lots of the other potential recommendations.

You could even create a "show us your blogroll" discussion post, in order to harvest OPMLs to mine through.

Comment author: John_Maxwell_IV 12 May 2016 08:06:59AM 0 points [-]

Related threads: 1, 2, 3.

Comment author: John_Maxwell_IV 31 March 2016 08:52:48AM 1 point [-]

Awesome, I ought to be there.

Comment author: Viliam 18 March 2016 09:33:38AM 1 point [-]

An interesting idea, but I can still imagine it failing in a few ways:

  • the AI kills you during the process of building the "incredibly rich world-model", for example because using the atoms of your body will help it achieve a better model;

  • the model is somehow misleading, or just your human-level intelligence will make a wrong conclusion when looking at the model.

Comment author: John_Maxwell_IV 18 March 2016 09:54:47PM 0 points [-]

the AI kills you during the process of building the "incredibly rich world-model", for example because using the atoms of your body will help it achieve a better model;

OK, I think this is a helpful objection, because it helps me further define the "tool"/"agent" distinction. In my mind, an "agent" works towards goals in a freeform way, whereas a "tool" executes some kind of defined process. Google Search is in no danger of killing me in the process of answering my search query (on the theory that using my atoms would help it get me better search results). Google Search is not an autonomous agent working towards the goal of getting me good search results; instead, it's executing a defined process to retrieve search results.

A tool is safer when I understand the defined process by which it works, the process behaves in a fairly predictable way, and I'm able to anticipate the consequences of following it. Tools are bad tools when they behave unpredictably and create unexpected consequences: for example, a gun is a bad tool if it shoots me in the foot without my having pulled the trigger. A piece of software is a bad tool if it has bugs, or if it doesn't ask for confirmation before taking an action I might not want it to take.

Based on this logic, the best prospects for "tool AIs" may be "speed superintelligences"/"collective superintelligences"--AIs that execute some kind of well-understood process, but much faster than a human could ever execute, or with a large degree of parallelism. My pocket calculator is a speed superintelligence in this sense. Google Search is more of a collective superintelligence insofar as its work is parallelized.

You can imagine using the tool AI to improve itself up to the point where it's still just simple enough for humans to understand, then doing the world-modeling step at that stage.

Also if humans can inspect and understand all the modifications that the tool AI makes to itself, so it continues to execute a well-understood defined process, that seems good. If necessary you could periodically put the code on some kind of external storage media, transfer it to a new air-gapped computer, and continue development on that computer to ensure that there wasn't any funny shit going on.

the model is somehow misleading, or just your human-level intelligence will make a wrong conclusion when looking at the model.

Sure, and there's also the "superintelligent, but with bugs" failure mode where the model is pretty good (enough for the AI to do a lot of damage) but not so good that the AI has an accurate representation of my values.

I imagine this has been suggested somewhere, but an obvious idea is to train many separate models of my values using many different approaches (e.g., in addition to what I initially described, use natural language processing to create a model of human values, use supervised learning to learn what human values look like from many manually entered training examples, etc.). Then a superintelligence could test a prospective action against all of these models, and if even one of them flagged the action as unethical, it could flag the action for review before proceeding.

And in order to make these redundant user preference models better, they could be tested against one another: the AI could generate prospective actions at random and test them against all the models; if the models disagreed about the appropriateness of a particular action, this could be flagged as a discrepancy that deserves examination.
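
Here's a minimal sketch of the screening and cross-testing logic from the two paragraphs above. Since no real value models exist to plug in, the "models" below are toy predicates over a numeric stand-in for an action, and the interface (a callable returning True for "ethical") is an assumption of mine:

```python
import random

def screen_action(action, value_models):
    """The conservative rule above: flag for review if ANY model objects."""
    verdicts = [model(action) for model in value_models]  # True = ethical
    return "proceed" if all(verdicts) else "flag for human review"

def find_discrepancies(value_models, action_generator, n=1000):
    """Generate prospective actions at random and collect the ones the
    models disagree on -- discrepancies that deserve examination."""
    disagreements = []
    for _ in range(n):
        action = action_generator()
        if len({model(action) for model in value_models}) > 1:
            disagreements.append(action)
    return disagreements

# Toy stand-ins: each "value model" is just a threshold on a number.
models = [lambda a: a < 0.9, lambda a: a < 0.8, lambda a: a < 0.85]
print(screen_action(0.50, models))  # proceed (all three models approve)
print(screen_action(0.82, models))  # flagged (one model objects)
print(len(find_discrepancies(models, random.random)))  # ~100 fall in [0.8, 0.9)
```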

My general sense is that with enough safeguards and checks, this "tool AI bootstrapping process" could probably be made arbitrarily safe. Example: the tool AI suggests an improvement to its own code, you review the improvement, you ask the AI why it did things in a particular way, the AI justifies itself, the justification is hard to understand, you make improvements to the justifications module... For each improvement the tool AI generates, it also generates a proof that the improvement does what it says it will do (checked by a separate theorem-proving module) and test coverage for the new improvement... Etc.

Comment author: John_Maxwell_IV 17 March 2016 02:01:11AM *  2 points [-]

In The genie knows, but it doesn't care, RobbBB argues that even if an AI is intelligent enough to understand its creator's wishes in perfect detail, that doesn't mean that its creator's wishes are the same as its own values. By analogy, even though humans were optimized by evolution to have as many descendants as possible, we can understand this without caring about it. Very smart humans may have lots of detailed knowledge of evolution & what it means to have many descendants, but then turn around and use condoms & birth control in order to stymie evolution's "wishes".

I thought of a potential way to get around this issue:

  1. Create a tool AI.

  2. Use the tool AI as a tool to improve itself, similar to the way I might use my new text editor to edit my new text editor's code.

  3. Use the tool AI to build an incredibly rich world-model, which includes, among other things, an incredibly rich model of what it means to be Friendly.

  4. Use the tool AI to build tools for browsing this incredibly rich world-model and getting explanations about what various items in the ontology correspond to.

  5. Browse this incredibly rich world-model. Find the item in the ontology that corresponds to universal flourishing and tell the tool AI "convert yourself into an agent and work on this".

There's a lot hanging on the "tool AI/agent AI" distinction in this narrative. So before actually working on this plan, one would want to think hard about the meaning of this distinction. What if the tool AI inadvertently self-modifies & becomes "enough of an agent" to deceive its operator?

The tool vs agent distinction probably has something to do with (a) the degree to which the thing acts autonomously and (b) the degree to which its human operator stays in the loop. A vacuum is a tool: I'm not going to vacuum over my prized rug and rip it up. A Roomba is more of an agent: if I let it run while I am out of the house, it's possible that it will rip up my prized rug as it autonomously moves about the house. But if I stay home and glance over at my Roomba every so often, it's possible that I'll notice that my rug is about to get shredded and turn off my Roomba first. I could also be kept in the loop if the thing gives me warnings about undesirable outcomes I might not want: for example, my Roomba could scan the house before it ran, giving me an inventory of all the items it might come in contact with.

An interesting proposition I'm tempted to argue for is the "autonomy orthogonality thesis". The original "orthogonality thesis" says that how intelligent an agent is and what values it has are, in principle, orthogonal. The autonomy orthogonality thesis says that how intelligent an agent is and the degree to which it has autonomy and can be described as an "agent" are also, in principle, orthogonal. My pocket calculator is vastly more intelligent than I am at doing arithmetic, but it's still vastly less autonomous than me. Google Search can instantly answer questions it would take me a lifetime to answer working independently, but Google Search is in no danger of "waking up" and displaying autonomy. So the question here is whether you could create something like Google Search that has the capacity for general intelligence while lacking autonomy.

I feel like the "autonomy orthogonality thesis" might be a good steelman of a lot of mainstream AI researchers who blow raspberries in the general direction of people concerned with AI safety. The thought is that if AI researchers have programmed something in detail to do one particular thing, it's not about to "wake up" and start acting autonomously.

Another thought: One might argue that if a Tool AI starts modifying itself into a superintelligence, the result will be too complicated for humans to ever verify. But there's an interesting tension here. A key disagreement in the Hanson/Yudkowsky AI-foom debate was the existence of important, undiscovered chunky insights about intelligence. Either these insights exist or they don't. If they do, then the amount of code one needs to write to create a superintelligence is relatively small, and it should be possible for humans to independently verify the superintelligence's code. If they don't, then we are more likely to have a soft takeoff anyway, because intelligence is about building lots of heterogeneous structures and getting lots of little things right, and that takes time.

Another thought: maybe it's valuable to try to advance natural language processing, differentially speaking, so AIs can better understand human concepts by reading about them?

Comment author: John_Maxwell_IV 05 February 2016 12:58:19AM *  0 points [-]

A simple substitute strategy for spaced repetition: Say fact usefulness has a power-law distribution: some facts you are going to look up tens or hundreds of times, others not that frequently. Say it's also hard to predict which facts will be the ones you look up hundreds of times. If that's true, then by using SR you're going to create a lot of wasted cards for facts that you thought you'd look up tens or hundreds of times but that turn out to be pretty useless. Instead, every time you want to look up a fact, try to recall it from memory before looking it up. Research shows that trying to recall facts solidifies their memories much better than looking them up does, so over time you will come to have the facts you most frequently need at your mental fingertips (a bit like microprocessor cache management).
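
As a toy illustration of the habit, here's a recall-first lookup wrapper in Python (the facts dict is a hypothetical stand-in for wherever your notes actually live):

```python
facts = {  # stand-in for your actual notes or reference file
    "speed of light": "299,792,458 m/s",
    "Planck's constant": "6.626e-34 J*s",
}

lookup_counts = {}  # the power law shows up here: a few keys dominate

def lookup(key):
    """Force an active recall attempt before revealing the answer;
    retrieval practice strengthens memory more than rereading does."""
    input(f"Try to recall: {key} ... (press Enter to check) ")
    lookup_counts[key] = lookup_counts.get(key, 0) + 1
    return facts[key]
```

The facts you look up most often automatically get the most recall practice, which is the cache-management analogy: the "hot" entries end up resident in your head without your having to predict in advance which ones they'll be.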
