

LW 2.0 Strategic Overview

43 Habryka 15 September 2017 03:00AM

Update: We're in open beta! You can now sign up for a new account or log in with your LW 1.0 account (if the latter, note that we did not copy over your password, so hit "forgot password" to receive a password-reset email).

Hey Everyone! 

This is the post for discussing the vision that I and the rest of the LessWrong 2.0 team have for the new version of LessWrong, and for generally bringing all of you up to speed with the plans for the site. This post has been overdue for a while, but I was busy coding on LessWrong 2.0, and I am not that great of a writer, which means posts like this take me quite a long time, so this ended up being delayed a few times. I apologize for that.

With Vaniver’s support, I’ve been the primary person working on LessWrong 2.0 for the last 4 months, spending most of my time coding while also talking to various authors in the community, doing dozens of user-interviews and generally trying to figure out how to make LessWrong 2.0 a success. Along the way I’ve had support from many people, including Vaniver himself who is providing part-time support from MIRI, Eric Rogstad who helped me get off the ground with the architecture and infrastructure for the website, Harmanas Chopra who helped build our Karma system and did a lot of user-interviews with me, Raemon who is doing part-time web-development work for the project, and Ben Pace who helped me write this post and is basically co-running the project with me (and will continue to do so for the foreseeable future).

We are running on charitable donations, with $80k in funding from CEA in the form of an EA grant and $10k in donations from Eric Rogstad, which will go to salaries and various maintenance costs. We are planning to continue running this whole project on donations for the foreseeable future, and legally this is a project of CFAR, which helps us a bunch with accounting and allows people to get tax benefits from giving us money. 

Now that the logistics are out of the way, let's get to the meat of this post. What is our plan for LessWrong 2.0, what were our key assumptions in designing the site, what does this mean for the current LessWrong site, and what should we as a community discuss more to make sure the new site is a success?

Here’s the rough structure of this post: 

  • My perspective on why LessWrong 2.0 is a project worth pursuing
  • A summary of the existing discussion around LessWrong 2.0 
  • The models that I’ve been using to make decisions for the design of the new site, and some of the resulting design decisions
  • A set of open questions to discuss in the comments where I expect community input/discussion to be particularly fruitful 

Why bother with LessWrong 2.0?  

I feel that, independently of how many things were and are wrong with the site and its culture, over the course of its history it has been one of the few places in the world I know of where a spark of real discussion has happened, and where some real intellectual progress was made on actually important problems. So let me begin with a summary of things I think the old LessWrong got right, which are essential to preserve in any new version of the site:

On LessWrong…


  • I can contribute to intellectual progress, even without formal credentials 
  • I can sometimes have discussions in which the participants focus on trying to convey their true reasons for believing something, as opposed to rhetorically using all the arguments that support their position independent of whether those have any bearing on their belief
  • I can talk about my mental experiences in a broad way, such that my personal observations, scientific evidence and reproducible experiments are all taken into account and given proper weighting. There is no narrow methodology I need to conform to in order to have my claims taken seriously.
  • I can have conversations about almost all aspects of reality, independently of what literary genre they are associated with or scientific discipline they fall into, as long as they seem relevant to the larger problems the community cares about
  • I am surrounded by people who are knowledgeable in a wide range of fields and disciplines, who take the virtue of scholarship seriously, and who are interested and curious about learning things that are outside of their current area of expertise
  • We have a set of non-political shared goals for which many of us are willing to make significant personal sacrifices
  • I can post long-form content that takes up as much space as it needs to, and can expect a reasonably high level of patience from my readers in trying to understand my beliefs and arguments
  • Content that I am posting on the site gets archived, is searchable and often gets referenced in other people's writing, and if my content is good enough, can even become common knowledge in the community at large
  • The average competence and intelligence on the site is high, which allows discussion to generally happen on a high level and allows people to make complicated arguments and get taken seriously
  • There is a body of writing that is generally assumed to have been read by most people participating in discussions that establishes philosophical, social and epistemic principles that serve as a foundation for future progress (currently that body of writing largely consists of the Sequences, but also includes some of Scott’s writing, some of Luke’s writing and some individual posts by other authors)


When making changes to LessWrong, I think it is very important to preserve all of the above features. I don’t think all of them are universally present on LessWrong, but all of them are there at least some of the time, and no other place that I know of comes even remotely close to having all of them as often as LessWrong has. Those features are what motivated me to make LessWrong 2.0 happen, and set the frame for thinking about the models and perspectives I will outline in the rest of the post. 

I also think Anna, in her post about the importance of a single conversational locus, says another, somewhat broader thing, that is very important to me, so I’ve copied it in here: 

1. The world is locked right now in a deadly puzzle, and needs something like a miracle of good thought if it is to have the survival odds one might wish the world to have.

2. Despite all priors and appearances, our little community (the "aspiring rationality" community; the "effective altruist" project; efforts to create an existential win; etc.) has a shot at seriously helping with this puzzle.  This sounds like hubris, but it is at this point at least partially a matter of track record.

3. To aid in solving this puzzle, we must probably find a way to think together, accumulatively. We need to think about technical problems in AI safety, but also about the full surrounding context -- everything to do with understanding what the heck kind of a place the world is, such that that kind of place may contain cheat codes and trap doors toward achieving an existential win. We probably also need to think about "ways of thinking" -- both the individual thinking skills, and the community conversational norms, that can cause our puzzle-solving to work better.

4. One feature that is pretty helpful here, is if we somehow maintain a single "conversation", rather than a bunch of people separately having thoughts and sometimes taking inspiration from one another.  By "a conversation", I mean a space where people can e.g. reply to one another; rely on shared jargon/shorthand/concepts; build on arguments that have been established in common as probably-valid; point out apparent errors and then have that pointing-out be actually taken into account or else replied-to).

5. One feature that really helps things be "a conversation" in this way, is if there is a single Schelling set of posts/etc. that people (in the relevant community/conversation) are supposed to read, and can be assumed to have read.  Less Wrong used to be such a place; right now there is no such place; it seems to me highly desirable to form a new such place if we can.

6. We have lately ceased to have a "single conversation" in this way.  Good content is still being produced across these communities, but there is no single locus of conversation, such that if you're in a gathering of e.g. five aspiring rationalists, you can take for granted that of course everyone has read posts such-and-such.  There is no one place you can post to, where, if enough people upvote your writing, people will reliably read and respond (rather than ignore), and where others will call them out if they later post reasoning that ignores your evidence.  Without such a locus, it is hard for conversation to build in the correct way.  (And hard for it to turn into arguments and replies, rather than a series of non sequiturs.)

The Existing Discussion Around LessWrong 2.0

Now that I’ve given a bit of context on why I think LessWrong 2.0 is an important project, it seems sensible to look at what has been said so far, so we don’t have to repeat the same discussions over and over again. There has already been a lot of discussion about the decline of LessWrong, the need for a new platform and the design of LessWrong 2.0, and I won’t be able to summarize it all here, but I can try my best to summarize the most important points, and give a bit of my own perspective on them.

Here is a comment by Alexandros, on Anna’s post I quoted above:

Please consider a few gremlins that are weighing down LW currently:

1. Eliezer's ghost -- He set the culture of the place, his posts are central material, has punctuated its existence with his explosions (and refusal to apologise), and then, upped and left the community, without actually acknowledging that his experiment (well kept gardens etc) has failed. As far as I know he is still the "owner" of this website, retains ultimate veto on a bunch of stuff, etc. If that has changed, there is no clarity on who the owner is (I see three logos on the top banner, is it them?), who the moderators are, who is working on it in general. I know tricycle are helping with development, but a part-time team is only marginally better than no-team, and at least no-team is an invitation for a team to step up.


...I consider Alexei's hints that Arbital is "working on something" to be a really bad idea, though I recognise the good intention. Efforts like this need critical mass and clarity, and diffusing yet another wave of people wanting to do something about LW with vague promises of something nice in the future... is exactly what I would do if I wanted to maintain the status quo for a few more years.

Any serious attempt at revitalising should focus on defining ownership and plan clearly. A post by EY himself recognising that his vision for lw 1.0 failed and passing the batton to a generally-accepted BDFL would be nice, but i'm not holding my breath. Further, I am fairly certain that LW as a community blog is bound to fail. Strong writers enjoy their independence. LW as an aggregator-first (with perhaps ability to host content if people wish to, like hn) is fine. HN may have degraded over time, but much less so than LW, and we should be able to improve on their pattern.

I think if you want to unify the community, what needs to be done is the creation of a hn-style aggregator, with a clear, accepted, willing, opinionated, involved BDFL, input from the prominent writers in the community (scott, robin, eliezer, nick bostrom, others), and for the current to be archived in favour of that new aggregator. But even if it's something else, it will not succeed without the three basic ingredients: clear ownership, dedicated leadership, and as broad support as possible to a simple, well-articulated vision. Lesswrong tried to be too many things with too little in the way of backing.

I think Alexandros hits a lot of good points here, and luckily these are actually some of the problems I am most confident we have solved. The biggest bottleneck – the thing that I think caused most other problems with LessWrong – is simply that there was nobody with the motivation, the mandate and the resources to fight against the inevitable decline into entropy. I feel that the correct response to the question of “why did LessWrong decline?” is to ask “why should it have succeeded?”. 

In the absence of anyone with the mandate trying to fix all the problems that naturally arise, we should expect any online platform to decline. Most of the problems that will be covered in the rest of this post are things that could have been fixed many years ago, but simply weren’t because nobody with the mandate put much resources into fixing them. I think the cause for this was a diffusion of responsibility, and a lot of vague promises of problems getting solved by vague projects in the future. I myself put off working on LessWrong for a few months because I had some vague sense that Arbital would solve the problems that I was hoping to solve, even though Arbital never really promised to solve them. Then Arbital’s plan ended up not working out, and I had wasted months of precious time. 

Since this comment was written, Vaniver has been somewhat unanimously declared benevolent dictator for life of LessWrong. He and I have gotten various stakeholders on board, received funding, have a vision, and have free time – and so we have the mandate, the resources and the motivation to not make the same mistakes. With our new codebase, link posts are now something I can build in an afternoon, rather than something that requires three weeks of getting permissions from various stakeholders, performing complicated open-source and confidentiality rituals, and hiring a new contractor who has to first understand the mysterious Reddit fork from 2008 that LessWrong is based on. This means at least the problem of diffusion of responsibility is solved. 

Scott Alexander also made a recent comment on Reddit on why he thinks LessWrong declined, and why he is somewhat skeptical of attempts to revive the website: 

1. Eliezer had a lot of weird and varying interests, but one of his talents was making them all come together so you felt like at the root they were all part of this same deep philosophy. This didn't work for other people, and so we ended up with some people being amateur decision theory mathematicians, and other people being wannabe self-help gurus, and still other people coming up with their own theories of ethics or metaphysics or something. And when Eliezer did any of those things, somehow it would be interesting to everyone and we would realize the deep connections between decision theory and metaphysics and self-help. And when other people did it, it was just "why am I reading this random bulletin board full of stuff I'm not interested in?"

2. Another of Eliezer's talents was carefully skirting the line between "so mainstream as to be boring" and "so wacky as to be an obvious crackpot". Most people couldn't skirt that line, and so ended up either boring, or obvious crackpots. This produced a lot of backlash, like "we need to be less boring!" or "we need fewer crackpots!", and even though both of these were true, it pretty much meant that whatever you posted, someone would be complaining that you were bad.

3. All the fields Eliezer wrote in are crackpot-bait and do ring a bunch of crackpot alarms. I'm not just talking about AI - I'm talking about self-help, about the problems with the academic establishment, et cetera. I think Eliezer really did have interesting things to say about them - but 90% of people who try to wade into those fields will just end up being actual crackpots, in the boring sense. And 90% of the people who aren't will be really bad at not seeming like crackpots. So there was enough kind of woo type stuff that it became sort of embarassing to be seen there, especially given the thing where half or a quarter of the people there or whatever just want to discuss weird branches of math or whatever.

4. Communities have an unfortunate tendency to become parodies of themselves, and LW ended up with a lot of people (realistically, probably 14 years old) who tended to post things like "Let's use Bayes to hack our utility functions to get superfuzzies in a group house!". Sometimes the stuff they were posting about made sense on its own, but it was still kind of awkward and the sort of stuff people felt embarassed being seen next to.

5. All of these problems were exacerbated by the community being an awkward combination of Google engineers with physics PhDs and three startups on one hand, and confused 140 IQ autistic 14 year olds who didn't fit in at school and decided that this was Their Tribe Now on the other. The lowest common denominator that appeals to both those groups is pretty low.

6. There was a norm against politics, but it wasn't a very well-spelled-out norm, and nobody enforced it very well. So we would get the occasional leftist who had just discovered social justice and wanted to explain to us how patriarchy was the real unfriendly AI, the occasional rightist who had just discovered HBD and wanted to go on a Galileo-style crusade against the deceptive establishment, and everyone else just wanting to discuss self-help or decision-theory or whatever without the entire community becoming a toxic outcast pariah hellhole. Also, this one proto-alt-right guy named Eugene Nier found ways to exploit the karma system to mess with anyone who didn't like the alt-right (ie 98% of the community) and the moderation system wasn't good enough to let anyone do anything about it.

7. There was an ill-defined difference between Discussion (low-effort random posts) and Main (high-effort important posts you wanted to show off). But because all these other problems made it confusing and controversial to post anything at all, nobody was confident enough to post in Main, and so everything ended up in a low-effort-random-post bin that wasn't really designed to matter. And sometimes the only people who did post in Main were people who were too clueless about community norms to care, and then their posts became the ones that got highlighted to the entire community.

8. Because of all of these things, Less Wrong got a reputation within the rationalist community as a bad place to post, and all of the cool people got their own blogs, or went to Tumblr, or went to Facebook, or did a whole bunch of things that relied on illegible local knowledge. Meanwhile, LW itself was still a big glowing beacon for clueless newbies. So we ended up with an accidental norm that only clueless newbies posted on LW, which just reinforced the "stay off LW" vibe.

I worry that all the existing "resurrect LW" projects, including some really high-effort ones, have been attempts to break coincidental vicious cycles - ie deal with 8 and the second half of 7. I think they're ignoring points 1 through 6, which is going to doom them.

At least judging from where my efforts went, I would agree that I have spent a pretty significant amount of resources on fixing the problems that Scott described in point 6 and 7, but I also spent about equal time thinking about how to fix 1-5. The broader perspective that I have on those latter points is I think best illustrated in an analogy: 

When I read Scott’s comments about how there was just a lot of embarrassing and weird writing on LessWrong, I am reminded of my experiences as a Computer Science undergraduate. When the median undergrad makes claims about the direction of research in their field, or some other big claim about their field that isn't explicitly taught in class, they will often give you quite naive-sounding answers; ask an undergraduate physics student how to do physics research, or what ideas they have for improving society, and you will hear everything from “I am going to build a webapp to permanently solve political corruption” to “here’s my idea of how we can transmit large amounts of energy wirelessly by using low-frequency Tesla coils”. I don’t think we should expect anything different on LessWrong. I actually think we should expect it to be worse here, since we are actively encouraging people to have opinions, as opposed to the more standard practice of academia, which seems to consist of treating undergraduates as slightly more intelligent dogs that need to be conditioned with the right mixture of calculus homework problems and mandatory class attendance, so that they might be given the right to have any opinion at all if they spend 6 more years getting their PhD.

So while I do think that Eliezer’s writing encouraged topics that were slightly more likely to attract crackpots, I think a large chunk of the weird writing is just a natural consequence of being an intellectual community that has a somewhat constant influx of new members. 

And having undergraduates go through the phase where they have bad ideas, and then have it explained to them why their ideas are bad, is important. I actually think it’s key to learning any topic more complicated than high-school mathematics. It takes a long time until someone can productively contribute to the intellectual progress of a community (in academia it’s at least 4 years, though usually more like 8), and during all that period they will say very naive and silly-sounding things (though less and less so as time progresses). I think LessWrong can do significantly better than 4 years, but we should still expect that it will take new members time to acclimate and get used to how things work (based on user-interviews with a lot of top commenters, it usually took something like 3-6 months until someone felt comfortable commenting frequently, and about 6-8 months until someone felt comfortable posting frequently; this strikes me as a fairly reasonable expectation for the future).

And I do think that we have many graduate students and tenured professors of the rationality community who are not Eliezer, who do not sound like crackpots, who can speak reasonably about the same topics Eliezer talked about, and who I feel are acting with a very similar focus to what Eliezer tried to achieve: Luke Muehlhauser, Carl Shulman, Anna Salamon, Sarah Constantin, Ben Hoffman, Scott himself and many more, most of whose writing would fit very well on LessWrong (and often still ends up there).

But all of this doesn’t mean what Scott describes isn’t a problem. It’s still a bad experience for everyone to constantly have to read through bad first year undergrad essays, but I think the solution can’t involve those essays not getting written at all. Instead it has to involve some kind of way of not forcing everyone to see those essays, while still allowing them to get promoted if someone shows up who does write something insightful from day one. I am currently planning to tackle this mostly with improvements to the karma system, as well as changes to the layout of the site, where users primarily post to their own profiles and can get content promoted to the frontpage by moderators and high-karma members. A feed consisting solely of content of the quality of the average Scott, Anna, Ben or Luke post would be an amazing read, and is exactly the kind of feed I am hoping to create with LessWrong, while still allowing users to engage with the rest of the content on the site (more on that later).

I would very roughly summarize what Scott says in the first 5 points as two major failures: first, a failure to separate the signal from the noise, and second, a failure to enforce moderation norms when people did turn out to be crackpots, or were simply unable to productively engage with the material on the site. Both are natural consequences of the abandonment of promoting things to Main, of discussions being ordered by default by recency rather than by some kind of scoring system, and of the moderation tools being completely insufficient (but more on the details of that in the next section).

My models of LessWrong 2.0

I think there are three major bottlenecks that LessWrong is facing (after the zeroth bottleneck, which is just that no single group had the mandate, resources and motivation to fix any of the problems): 

  1. We need to be able to build on each other’s intellectual contributions, archive important content and avoid primarily being news-driven
  2. We need to improve the signal-to-noise ratio for the average reader, and only broadcast the most important writing
  3. We need to actively moderate in a way that is both fun for the moderators, and helps people avoid future moderation policy violations


The first bottleneck for our community, and I think the biggest, is the ability to build common knowledge. On Facebook, I can read an excellent and insightful discussion, yet one week later I have forgotten it. Even if I remember it, I don’t link to the Facebook post (because linking to Facebook posts and comments is hard) and it doesn’t have a title, so I don’t casually refer to it in discussion with friends. On Facebook, ideas don’t get archived and built upon; they get discussed and forgotten. To put this another way: the reason we cannot build on the best ideas this community has had over the last five years is that we don’t know what they are. There are only fragments of memories of Facebook discussions, which maybe some other people remember. We have the Sequences, but there’s no way to build on them together as a community, and thus there is stagnation.

Contrast this with science. Modern science is plagued by many severe problems, but of humanity’s institutions it has perhaps the strongest record of being able to build successfully on its previous ideas. The physics community has this system where the new ideas get put into journals, and then eventually if they’re new, important, and true, they get turned into textbooks, which are then read by the upcoming generation of physicists, who then write new papers based on the findings in the textbooks. All good scientific fields have good textbooks, and your undergrad years are largely spent reading them. I think the rationality community has some textbooks, written by Eliezer (and we also compiled a collection of Scott’s best posts that I hope will become another textbook of the community), but there is no expectation that if you write a good enough post/paper that your content will be included in the next generation of those textbooks, and the existing books we have rarely get updated. This makes the current state of the rationality community analogous to a hypothetical state of physics, had physics no journals, no textbook publishers, and only one textbook that is about a decade old. 

This seems to me to be what Anna is talking about: the purpose of a single locus of conversation is the ability to have common knowledge and build on it. The goal is to have every interaction with the new LessWrong feel like it is either helping you grow as a rationalist or contributing to the lasting intellectual progress of the community. If you write something good enough, it should enter the canon of the community. If you make a strong enough case against some existing piece of canon, you should be able to replace or alter that canon. I want writing for the new LessWrong to feel timeless.

To achieve this, we’ve built the following things: 

  • We created a section for core canon on the site that is prominently featured on the frontpage and right now includes Rationality: A-Z, The Codex (a collection of Scott’s best writing, compiled by Scott and us), and HPMOR. Over time I expect these to change, and there is a good chance HPMOR will move to a different section of the site (I am considering adding an “art and fiction” section) and will be replaced by a new collection representing new core ideas in the community.
  • Sequences are now a core feature of the website. Any user can create sequences out of their own and other users' posts, and those sequences can themselves be voted and commented on. The goal is to help users compile the best writing on the site, so that good timeless writing gets read by users for a long time, as opposed to disappearing into the void. Separating creative and curatorial effort allows the sort of professional specialization that you see in serious scientific fields.
  • Of those sequences, the most upvoted and most important ones will be chosen to be prominently featured on other sections of the site, allowing users easy access to read the best content on the site and get up to speed with the current state of knowledge of the community.
  • For all posts and sequences the site keeps track of how much of them you’ve read (including importing view-tracking from the old LessWrong, so you will get to see how much of the original Sequences you’ve actually read). And if you’ve read all of a sequence you get a small badge that you can choose to display right next to your username, which helps people see how much of the site's content you are familiar with.
  • The design of the core content of the site (e.g. the Sequences, the Codex, etc.) tries to communicate a certain permanence of contributions. The aesthetic feels intentionally book-like, which I hope gives people a sense that their contributions will be archived, accessible and built-upon.
    One important issue with this is that there also needs to be a space for sketches on LessWrong. To quote Paul Graham: “What made oil paint so exciting, when it first became popular in the fifteenth century, was that you could actually make the finished work from the prototype. You could make a preliminary drawing if you wanted to, but you weren't held to it; you could work out all the details, and even make major changes, as you finished the painting.”
  • We do not want to discourage sketch-like contributions, and want to build functionality that helps people build a finished work from a prototype (this is one of the core competencies of Google Docs, for example).

And there are some more features the team is hoping to build in this direction, such as: 

  • Easier archiving of discussions, by allowing discussions to be turned into top-level posts (similar to what Ben Pace did with a recent Facebook discussion between Eliezer, Wei Dai, Stuart Armstrong, and some others, which he turned into a post on LessWrong 2.0)
  • The ability to continue reading the content you’ve started reading with a single click from the frontpage. Here's an example logged-in frontpage:




The second bottleneck is improving the signal-to-noise ratio. It needs to be possible for someone to subscribe to only the best posts on LessWrong, and only the most important content should be turned into common knowledge.

I think this is a lot of what Scott was pointing at in his summary of the decline of LessWrong. We need a way for people to learn from their mistakes without flooding the inboxes of everyone else, while giving people active feedback on how to improve their writing.

The site structure: 

To solve this bottleneck, here is the rough content structure that I am currently planning to implement on LessWrong: 

The writing experience: 

If you write a post, it first shows up nowhere but your personal user page, which you can basically think of as a Medium-style blog. If other users have subscribed to you, your post will then show up on their frontpages (or only show up after it hits a certain karma threshold, if the users who subscribed to you set a minimum karma threshold). If you have enough karma you can decide to promote your content to the main frontpage feed (where everyone will see it by default), or a moderator can decide to promote it (if you allowed promoting on that specific post). The frontpage itself is sorted by a scoring system based on the HN algorithm, which uses a combination of total karma and how much time has passed since the creation of the post.

If you write a good comment on a post, a moderator or a high-karma user can promote that comment to the frontpage as well, where we will also feature the best comments on recent discussions. 


Meta

Meta will just be a section of the site to discuss changes to moderation policies, issues and bugs with the site, discussion about site features, as well as general site-policy issues: basically the thing that all StackExchanges have. Karma earned here will not add to your total karma and will not give you more influence over the site. 

Featured posts

In addition to the main feed, there is a promoted post section that you can subscribe to via email and RSS. It will carry on average three posts a week, which for now will just be chosen by moderators and editors on the site as the posts that seem most important to turn into common knowledge for the community. 

Meetups (implementation unclear)

There will also be a separate section of the site for meetups and event announcements that will feature a map of meetups, and generally serve as a place to coordinate the in-person communities. The specific implementation of this is not yet fully figured out. 

Shortform (implementation unclear)

Many authors (including Eliezer) have requested a section of the site for more short-form thoughts, more similar to the length of an average FB post. It seems reasonable to have a section of the site for that, though I am not yet fully sure how it should be implemented. 


The goal of this structure is to allow users to post to LessWrong without their content being directly exposed to the whole community. Their content is first shown to the people who follow them, or to the people who actively seek out content from the broader community by scrolling through all new posts. Then, if a high-karma user among them finds the content worth posting to the frontpage, it gets promoted. The key to this is a larger userbase with the ability to promote content (i.e. many more users than can currently promote content to Main on LessWrong), plus the continued filtering of the frontpage based on the karma level of the posts. 

The goal of all of this is to let users see good content at various levels of engagement with the site, with some personalization so that people can follow the authors they are particularly interested in, while also ensuring that this does not sabotage the attempt at building common knowledge: the best posts from the whole ecosystem are still featured and promoted on the frontpage. 

The karma system:

Another thing I’ve been working on to fix the signal-to-noise ratio is improving the karma system. It’s important that the people having the most significant insights are able to shape a field more: if you’re someone who regularly produces real insights, you’re better able to notice and bring up other good ideas. To achieve this we’ve built a new karma system in which your upvotes and downvotes carry more weight if you already have a lot of karma. The current weighting is a very simple heuristic: your upvotes and downvotes count for log base 5 of your total karma. Ben and I will post another top-level post to discuss just the karma system at some point in the next few weeks, but feel free to ask any questions now, and we will include them in that post.
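A sketch of that heuristic (the rounding and the floor of 1 for new users below are my assumptions; the rule itself only specifies log base 5):

```python
import math

def vote_weight(voter_karma: int) -> int:
    """Weight of one up/downvote under the log-base-5 heuristic.

    A user with 25 karma casts votes worth 2 points, one with
    625 karma casts votes worth 4, and so on. Users below 5 karma
    still get a minimum weight of 1 (an assumption, so that new
    users' votes count at all).
    """
    if voter_karma < 5:
        return 1
    return max(1, round(math.log(voter_karma, 5)))
```

Note how slowly this compounds: multiplying your total karma by 5 adds only one point to your vote weight, so established users matter more without drowning everyone else out.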

(I am currently experimenting with a karma system based on the concept of eigendemocracy by Scott Aaronson, which you can read about here, but which basically boils down to applying Google’s PageRank algorithm to karma allocation. How trusted you are as a user (your karma) is based on how much trusted users upvote you, and the circularity of this definition is solved using linear algebra.)
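A minimal pure-Python sketch of that idea, using power iteration (the damping factor and the handling of users who never vote are illustrative assumptions, not an actual implementation of ours):

```python
def eigen_karma(upvotes, damping=0.85, iters=200):
    """PageRank-style karma: upvotes[i][j] is how many upvotes
    user i gave user j. A user's trust is the trust-weighted sum
    of the upvotes they received; the circular definition is
    resolved by iterating to the fixed point.
    """
    n = len(upvotes)
    # Row-normalize: each voter distributes one unit of trust
    # across everyone they upvoted (uniformly if they upvoted no one).
    transition = []
    for row in upvotes:
        total = sum(row)
        transition.append([v / total for v in row] if total else [1.0 / n] * n)
    trust = [1.0 / n] * n
    for _ in range(iters):
        trust = [
            (1 - damping) / n
            + damping * sum(trust[i] * transition[i][j] for i in range(n))
            for j in range(n)
        ]
    return trust
```

Under this scheme karma is no longer a raw vote count but a trust score: votes from highly trusted users move your karma more, which is the property the log-base-5 heuristic approximates more crudely.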

I am also interested in having some form of two-tiered voting, similarly to how Facebook has a primary vote interaction (the like) and a secondary interaction that you can access via a tap or a hover (angry, sad, heart, etc.). But the implementation of that is also currently undetermined. 


The third and last bottleneck is a moderation system that actually works: one that is pleasant for moderators to use, while also giving people whose content was moderated a sense of why, and of how they can improve. 

The most common, basic complaint on LessWrong currently pertains to trolls and sockpuppet accounts, which the reddit fork’s mod tools are vastly inadequate for dealing with (Scott's sixth point refers to this). Raymond Arnold and I are currently building more nuanced mod tools that include the ability for moderators to set a user's past/future votes to zero, to see who upvoted a post, and to know the IP address an account comes from (this will be ready by the open beta). 

Besides that, we are currently cultivating a moderation group we are calling the “Sunshine Regiment.” Its members will have the ability to take various smaller moderation actions around the site (such as temporarily suspending comment threads, making general moderating comments in a distinct font, and promoting content), and so will be able to shape the culture and content of the website to a larger degree.

The goal is moderation that goes far beyond dealing with trolls, and actively makes the epistemic norms a ubiquitous part of the website. Right now Ben Pace is thinking about moderation norms that encourage archiving and summarizing good discussion, as well as other patterns of conversation that will help the community make intellectual progress. He’ll be posting to the open beta to discuss what norms the site and moderators should have in the coming weeks. We're both in agreement that moderation can and should be improved, and that moderators need better tools, and would appreciate good ideas about what else to give them.

How you can help and issues to discuss:

The open beta of the site is starting in a week, and so you can see all of this for yourself. For the duration of the open beta, we’ll continue the discussion on the beta site. At the conclusion of the open beta, we plan to hold a vote, open to those who had a thousand karma or more on 9/13, to determine whether we should move forward with the new site design (which would move to the url from its temporary beta location) or leave LessWrong as it is now. (Since leaving it as-is would represent the failure of the plan to revive LW, that outcome would likely lead to the site being archived rather than staying open in an unmaintained state.) For now, this is an opportunity for the current LessWrong community to chime in here and object to anything in this plan.

During the open beta (and only during that time) the site will also have an Intercom button in the bottom right corner that allows you to chat directly with us. If you run into any problems, or notice any bugs, feel free to ping us directly on there and Ben and I will try to help you out as soon as possible.

Here are some issues where discussion would be particularly fruitful: 

  • What are your thoughts about the karma system? Does an eigendemocracy-based system seem reasonable to you? How would you implement the details? Ben and I will post our current thoughts on this in a separate post in the next two weeks, but we would be interested in people’s unprimed ideas.
  • What are your experiences with the site so far? Is anything glaringly missing, or are there any bugs you think I should definitely fix? 
  • Do you have any complaints or thoughts about how work on LessWrong 2.0 has been proceeding so far? Are there any worries or issues you have with the people working on it? 
  • What would make you personally use the new LessWrong? Is there any specific feature that would make you want to use it? For reference, here is our current feature roadmap for LW 2.0.
  • And most importantly, do you think that the LessWrong 2.0 project is doomed to failure for some reason? Is there anything important I missed, or something that I misunderstood about the existing critiques?
The closed beta can be found at

Ben, Vaniver, and I will be in the comments!

Against lone wolf self-improvement

27 cousin_it 07 July 2017 03:31PM

LW has a problem. Openly or covertly, many posts here promote the idea that a rational person ought to be able to self-improve on their own. Some of it comes from Eliezer's refusal to attend college (and Luke dropping out of his bachelor's, etc.). Some of it comes from our concept of rationality, that all agents can be approximated as perfect utility maximizers with a bunch of nonessential bugs. Some of it is due to our psychological makeup and introversion. Some of it comes from trying to tackle hard problems that aren't well understood anywhere else. And some of it is just the plain old meme of heroism and forging your own way.

I'm not saying all these things are 100% harmful. But the end result is a mindset of lone wolf self-improvement, which I believe has harmed LWers more than any other part of our belief system.

Any time you force yourself to do X alone in your room, or blame yourself for not doing X, or feel isolated while doing X, or surf the web to feel some human contact instead of doing X, or wonder if X might improve your life but can't bring yourself to start... your problem comes from believing that lone wolf self-improvement is fundamentally the right approach. That belief is comforting in many ways, but noticing it is enough to break the spell. The fault wasn't with the operator all along. Lone wolf self-improvement doesn't work.

Doesn't work compared to what? Joining a class. With a fixed schedule, a group of students, a teacher, and an exam at the end. Compared to any "anti-akrasia technique" ever proposed on LW or adjacent self-help blogs, joining a class works ridiculously well. You don't need constant willpower: just show up on time and you'll be carried along. You don't get lonely: other students are there and you can't help but interact. You don't wonder if you're doing it right: just ask the teacher.

Can't find a class? Find a club, a meetup, a group of people sharing your interest, any environment where social momentum will work in your favor. Even an online community for X that will reward your progress with upvotes is much better than going X completely alone. But any regular meeting you can attend in person, which doesn't depend on your enthusiasm to keep going, is exponentially more powerful.

Avoiding lone wolf self-improvement seems like embarrassingly obvious advice. But somehow I see people trying to learn X alone in their rooms all the time, swimming against the current for years, blaming themselves when their willpower isn't enough. My message to such people: give up. Your brain is right and what you're forcing it to do is wrong. Put down your X, open your laptop, find a class near you, send them a quick email, and spend the rest of the day surfing the web. It will be your most productive day in months.

LW 2.0 Open Beta starts 9/20

24 Vaniver 15 September 2017 02:57AM

Two years ago, I wrote Lesswrong 2.0. It’s been quite the adventure since then; I took up the mantle of organizing work to improve the site but was missing some of the core skills, and also never quite had the time to make it my top priority. Earlier this year, I talked with Oliver Habryka and he joined the project and has done the lion’s share of the work since then, with help along the way from Eric Rogstad, Harmanas Chopra, Ben Pace, Raymond Arnold, and myself. Dedicated staff has led to serious progress, and we can now see the light at the end of the tunnel.


So what’s next? We’ve been running the closed beta for some time at with an import of the old LW database, and are now happy enough with it to show it to you all. On 9/20, next Wednesday, we’ll turn on account creation, making it an open beta. (This will involve making a new password, as the passwords are stored hashed and we’ve changed the hashing function from the old site.) If you don't have an email address set for your account (see here), I recommend adding it by the end of the open beta so we can merge accounts. For the open beta, just use the Intercom button in the lower right corner if you have any trouble. 


Once the open beta concludes, we’ll have a vote of veteran users (over 1k karma as of yesterday) on whether to change the code at over to the new design or not. It seems important to look into the dark and have an escape valve in case this is the wrong direction for LW. If the vote goes through, we’ll import the new LW activity since the previous import to the new servers, merging the two, and point the url to the new servers. If it doesn’t, we’ll likely turn LW into an archive.


Oliver Habryka will be posting shortly with his views on LW and more details on our plans for how LW 2.0 will further intellectual progress in the community.

Bi-Weekly Rational Feed

21 deluks917 24 June 2017 12:07AM

===Highly Recommended Articles:

Introducing The Ea Involvement Guide by The Center for Effective Altruism (EA forum) - A huge list of concrete actions you can take to get involved. Every action has a brief description and a link to an article. Each article rates the action on time commitment, duration, familiarity and occupation. Very well put together.

Deep Reinforcement Learning from Human Preferences - An algorithm learns to backflip with 900 bits of feedback from the human evaluator. "One step towards building safe AI systems is to remove the need for humans to write goal functions, since using a simple proxy for a complex goal, or getting the complex goal a bit wrong, can lead to undesirable and even dangerous behavior. In collaboration with DeepMind’s safety team, we’ve developed an algorithm which can infer what humans want by being told which of two proposed behaviors is better."

Build Baby Build by Bryan Caplan - Quote from a paper estimating the high costs of housing restrictions. We should blame the government, especially local government. The top alternate theory is wrong. Which regulations are doing the damage? It's complicated. Functionalists are wrong. State government is our best hope.

The Use And Abuse Of Witchdoctors For Life by Lou (sam[]zdat) - Anti-bullet magic and collective self-defense. Cultural evolution. People don't directly believe in anti-bullet magic, they believe in elders and witch doctors. Seeing like a State. Individual psychology is the foundation. Many psychologically important customs couldn't adapt to the marketplace.

S-risks: Why They Are The Worst Existential Risks by Kaj Sotala (lesswrong) - “S-risk – One where an adverse outcome would bring about severe suffering on a cosmic scale, vastly exceeding all suffering that has existed on Earth so far.” Why we should focus on S-risk. Probability: Artificial sentience, Lack of communication, badly aligned AI and competitive pressures. Tractability: Relationship with x-risk. Going meta, cooperation. Neglectedness: little attention, people conflate x-risk = s-risk.

Projects Id Like To See by William MacAskill (EA forum) - CEA is giving out £100K grants. General types of applications. EA outreach and Community, Anti-Debates, Prediction Tournaments, Shark Tank Discussions, Research Groups, Specific Skill Building, New Organizations, Writing.

The Battle For Psychology by Jacob Falkovich (Put A Number On It!) - An explanation of 'power' in statistics and why its always good. Low power means that positive results are mostly due to chance. Extremely bad incentives and research practices in psychology. Studying imaginary effects. Several good images.

Identifying Sources Of Cost Disease by Kurt Spindler - Where is the money going: Administration, Increased Utilization, Decreased Risk Tolerance. What market failures are in effect: Unbounded Domains, Signaling and Competitive Pressure (ex: military spending), R&D doesn't cut costs it creates new ways to spend money, individuals don't pay. Some practical strategies to reduce cost disease.


To Understand Polarization Understand The Extent Of Republican Failure by Scott Alexander - Conservative voters voted for “smaller government”, “fewer regulations”, and “less welfare state”. Their reps control most branches of the government. They got more of all three (probably thanks to cost disease).

Against Murderism by Scott Alexander - Three definitions of racism. Why 'Racism as motivation' fits best. The futility of blaming the murder rate in the USA on 'murderism'. Why it's often best to focus on motivations other than racism.

Open Thread Comment by John Nerst (SSC) - Bi-weekly public open thread. I am linking to a very interesting comment. The author made a list of the most statistically over-represented words in the SSC comment section.

Some Unsong Guys by Scott Alexander (Scratchpad) - Pictures of Unsong Fan Art.

Silinks Is Golden by Scott Alexander - Standard SSC links post.

What is Depression Anyway: The Synapse Hypothesis - Six seemingly distinct treatments for depression. How at least six can be explained by considering synapse generation rates. Skepticism that this method can be used to explain anything since the body is so inter-connected. Six points that confuse Scott and deserve more research. Very technical.


Idea For Lesswrong Video Tutoring by adamzerner (lesswrong) - Community Video Tutoring. Sign up to either give or receive tutoring. Teaching others is a good way to learn and lots of people enjoy teaching. Hopefully enough people want to learn similar things. This could be a great community project and I recommend taking a look.

Regulatory Arbitrage For Medical Research What I Know So Far by Sarah Constantin (Otium) - Economics of avoiding the USA/FDA. Lots of research is already conducted in other countries. The USA is too large of a market not to sell to. Investors aren't interested in cheap preliminary trials. Other options: supplements, medical tourism, clinic ships, cryptocurrency.

Responses To Folk Ontologies by Ferocious Truth - Folk ontology: Concepts and categories held by ordinary people with regard to an idea. Especially pre-scientific or unreflective ones. Responses: Transform/Rescue, Deny or Restrict/Recognize. Rescuing free will and failing to rescue personal identity. Rejecting objective morality. Restricting personal identity and moral language. When to use each approach.

The Battle For Psychology by Jacob Falkovich (Put A Number On It!) - An explanation of 'power' in statistics and why its always good. Low power means that positive results are mostly due to chance. Extremely bad incentives and research practices in psychology. Studying imaginary effects. Several good images.

A Tangled Task Future by Robin Hanson - We need to untangle the economy to automate it. What tasks are heavily tangled and which are not. Ems and the human brain as a legacy system. Human brains are well-integrated and good at tangled tasks.

Epistemic Spot Check Update by Aceso Under Glass - Reviewing self-help books. Properties of a good self-help model: As simple as possible but not more so, explained well, testable on a reasonable timescale, seriously handles the fact the techniques might not work, useful. The author would appreciate feedback.

Skin In The Game by Elo (BearLamp) - Armchair activism and philosophy. Questions to ask yourself about your life. Actually do the five minute exercise at the end.

Momentum Reflectiveness Peace by Sarah Constantin (Otium) - Rationality requires a reflective mindset; a willingness to change course and consider how things could be very different. Momentum, keeping things as they are except more so, is the opposite of reflectivity. Cultivating reflectiveness: rest, contentment, considering ideas lightly and abstractly. “Turn — slowly.”

The Fallacy Fork Why Its Time To Get Rid Of by theFriendlyDoomer (r/SSC) - "The main thesis of our paper is that each and every fallacy in the traditional list runs afoul of the Fallacy Fork. Either you construe the fallacy in a clear-cut and deductive fashion, which means that your definition has normative bite, but also that you hardly find any instances in real life; or you relax your formal definition, making it defeasible and adding contextual qualifications, but then your definition loses its teeth. Your “fallacy” is no longer a fallacy."

Instrumental Rationality 1 Starting Advice by lifelonglerner (lesswrong) - "This is the first post in the Instrumental Rationality Sequence. This is a collection of four concepts that I think are central to instrumental rationality: caring about the obvious, looking for practical things, practicing in pieces, and realistic expectations."

Concrete Ways You Can Help Make The Community Better by deluks917 (lesswrong) - Write more comments on blog posts and non-controversial posts on lw and r/SSC. Especially consider commenting on posts you agree with. People are more likely to comment if other people are posting high quality comments. Projects: Gaming Server, aggregate tumblr effort-posts, improve lesswrong wiki, leadership in local rationalist group

Daring Greatly by Bayesian Investor - Fairly positive book review; some chapters were valuable and it was an easy read. How to overcome shame and how it differs from guilt. Perfectionism vs healthy striving. If you stop caring about what others think you lose your capacity for connection.

A Call To Adventure by Robin Hanson - Meaning in life can be found by joining or starting a grand project. Two possible adventures: Promoting and implementing futarchy (decision making via prediction markets). Getting a real understanding of human motivation.

Thought Experiment Coarsegrained Vr Utopia by cousin_it (lesswrong) - Assume an AI is running a Vr simulation that is hooked up to actual human brains. This means that the AI only has to simulate nature at a coarse grained level. How hard would it be to make that virtual reality a utopia?

The Rationalist-sphere and the Lesswrong Wiki - What's next for the Lesswrong wiki. A distillation of Lesswrong. Fully indexing the diaspora. A list of communities. Spreading rationalist ideas. Rationalist Research.

Deep Reinforcement Learning from Human Preferences - An algorithm learns to backflip with 900 bits of feedback from the human evaluator. "One step towards building safe AI systems is to remove the need for humans to write goal functions, since using a simple proxy for a complex goal, or getting the complex goal a bit wrong, can lead to undesirable and even dangerous behavior. In collaboration with DeepMind’s safety team, we’ve developed an algorithm which can infer what humans want by being told which of two proposed behaviors is better."

Where Do Hypotheses Come From by c0rw1n (lesswrong) - Link to a 25 page article. "Why are human inferences sometimes remarkably close to the Bayesian ideal and other times systematically biased? In particular, why do humans make near-rational inferences in some natural domains where the candidate hypotheses are explicitly available, whereas tasks in similar domains requiring the self-generation of hypotheses produce systematic deviations from rational inference. We propose that these deviations arise from algorithmic processes approximating Bayes’ rule."

The Precept Of Universalism by Hivewired - "Universality, the idea that all humans experience life in roughly the same way. Do not put things or ideas above people. Honor and protect all peoples." Eight points expanding on how to put people first and honor everyone.

We Are The Athenians Not The Spartans by wubbles (lesswrong) - "Our values should be Athenian: individualistic, open, trusting, enamored of beauty. When we build social technology, it should not aim to cultivate values that stand against these. High trust, open, societies are the societies where human lives are most improved."


Updating My Risk Estimate of Geomagnetic Big One by Open Philanthropy - Risk from magnetic storms caused by the sun. "I have raised my best estimate of the chance of a really big storm, like the storied one of 1859, from 0.33% to 0.70% per decade. And I have expanded my 95% confidence interval for this estimate from 0.0–4.0% to 0.0–11.6% per decade."

Links by GiveDirectly - Eight Media articles on Cash Transfers, Basic Income and Effective Altruism.

Are Givewells Top Charities The Best Option For Every Donor by The GiveWell Blog - Why GiveWell-recommended charities are a good option for most donors. Which donors have better options: Donors with lots of time, high trust in a particular institution, or values different from GiveWell's.

A New President of GWWC by Giving What We Can - Julia Wise is the New president of Giving What We Can.

Angst Ennui And Guilt In Effective Altruism by Gordon (Map and Territory) - Learning about existential risk can cause psychological harm. Guilt about being unable to help solve X-risk. Akrasia. Reasons to not be guilty: comparative advantage, ability is unequally distributed.

S-risks: Why They Are The Worst Existential Risks by Kaj Sotala (lesswrong) - “S-risk – One where an adverse outcome would bring about severe suffering on a cosmic scale, vastly exceeding all suffering that has existed on Earth so far.” Why we should focus on S-risk. Probability: Artificial sentience, Lack of communication, badly aligned AI and competitive pressures. Tractability: Relationship with x-risk. Going meta, cooperation. Neglectedness: little attention, people conflate x-risk = s-risk.

Update On Sepsis Donations Probably Unnecessary by Sarah Constantin (Otium) - Sarah C had asked people to crowdfund a sepsis RCT. The trial will probably get funded by charitable foundations. Diminishing returns. Finding good giving opportunities is hard and talking to people in the know is a good way to find things out.

What Is Valuable About Effective Altruism by Owen_Cotton-Barratt (EA forum) - Why should people join EA? The impersonal and personal perspectives. Tensions and synergies between the two perspectives. Bullet point conclusions for researchers, community leaders and normal members.

QALYs/$ Are More Intuitive Than $/QALYs by ThomasSittler (EA forum) - QALYs/$ are preferable to $/QALYs. Visual representations on graphs. Avoiding small numbers by re-normalizing to QALYs per $10K.

Introducing The Ea Involvement Guide by The Center for Effective Altruism (EA forum) - A huge list of concrete actions you can take to get involved. Every action has a brief description and a link to an article. Each article rates the action on time commitment, duration, familiarity and occupation. Very well put together.

Cash is King by GiveDirectly - Eight media articles about Effective Altruism and Cash transfers.

Separating GiveWell and the Open Philanthropy Project by The GiveWell Blog - The GiveWell perspective. Context for the sale. Effect on donors who rely on GiveWell. Organization changes at GiveWell. Steps taken to sell Open Phil assets. The new relationship between GiveWell and Open Phil.

Open Philanthropy Project is Now an Independent Organization by Open Philanthropy - The evolution of Open Phil. Why Open Phil should split from GiveWell. LLC structure.

Projects Id Like To See by William MacAskill (EA forum) - CEA is giving out £100K grants. General types of applications. EA outreach and Community, Anti-Debates, Prediction Tournaments, Shark Tank Discussions, Research Groups, Specific Skill Building, New Organizations, Writing.

===Politics and Economics:

No Us School Funding Is Actually Somewhat Progressive by Random Critical Analysis - Many people think that wealthy public school districts spend more per pupil. This information is outdated. Within most states spending is higher on disadvantaged students. This is despite the fact that school funding is mostly local. Extremely thorough with loads of graphs.

Build Baby Build by Bryan Caplan - Quote from a paper estimating the high costs of housing restrictions. We should blame the government, especially local government. The top alternate theory is wrong. Which regulations are doing the damage? It's complicated. Functionalists are wrong. State government is our best hope.

Identifying Sources Of Cost Disease by Kurt Spindler - Where is the money going: Administration, Increased Utilization, Decreased Risk Tolerance. What market failures are in effect: Unbounded Domains, Signaling and Competitive Pressure (ex: military spending), R&D doesn't cut costs it creates new ways to spend money, individuals don't pay. Some practical strategies to reduce cost disease.

The Use And Abuse Of Witchdoctors For Life by Lou (sam[]zdat) - Anti-bullet magic and collective self-defense. Cultural evolution. People don't directly believe in anti-bullet magic, they believe in elders and witch doctors. Seeing like a State. Individual psychology is the foundation. Many psychologically important customs couldn't adapt to the marketplace.

Greece Gdp Forecasting by João Eira (Lettuce be Cereal) - Transforming the Data. Evaluating the Model with Exponential Smoothing, Bagged ETS and ARIMA. The regression results and forecast.

Links 9 by Artir (Nintil) - Economics, Psychology, Artificial Intelligence, Philosophy and other links.

Amazon Buying Whole Foods by Tyler Cowen - Quotes from Matt Yglesias, Alex Tabarrock, Ross Douthat and Tyler. “Dow opens down 10 points. Amazon jumps 3% after deal to buy Whole Foods. Walmart slumps 7%, Kroger plunges 16%”

Historical Returns Market Portfolio by Tyler Cowen - From 1960 to 2015 the global market portfolio realized a compounded real return of 4.38% with a standard deviation of 11.6%. Investors beat savers by 3.24%. Link to the original paper.

Trust And Diversity by Bryan Caplan - Robert Putnam's work is often cited as showing the costs of diversity. However Putnam's work shows the negative effect of diversity on trust is rather modest. On the other hand Putnam found multiple variables that are much more correlated with trust (such as home ownership).

Why Optimism is More Rational than Pessimism by TheMoneyIllusion - Splitting 1900-2017 into Good and Bad periods. We learn something from our mistakes. Huge areas where things have improved long term. Top 25 movies of the 21st Century. Artforms in decline.

Is Economics Science by Noah Smith - No one knows what a Science is. Theories that work (4 examples). The empirical and credibility revolutions. Why we still need structural models. Ways economics could be more scientific. Data needs to kill bad theories. Slides from Noah's talk are included and worth viewing but assume familiarity with the economics profession.


Clojure Concurrency And Blocking With Coreasync by Eli Bendersky - Concurrent applications and blocking operations using core.async. Most of the article compares threads and go-blocks. Lots of code and well presented test results.

Optopt by Ben Kuhn - Startup options are surprisingly valuable once you factor in that you can quit if the startup does badly. A mathematical model of the value of startup options and the optimal time to quit. The ability to quit raised the option value by over 50%. The sensitivity of the analysis with respect to parameters (opportunity cost, volatility, etc).

Epistemic Spot Check: The Demon Under The Microscope by Aceso Under Glass - Biography of the man who invented sulfa drugs, the early anti-bacteria treatments which were replaced by penicillin. Interesting fact checks of various claims.

Sequential Conversion Rates by Chris Stucchio - Estimating success rates when you have noisy reporting. The article is a sketch of how the author handled such a problem in practice.

Set Theory Problem by protokol2020 - Bring down ZFC. Aleph-zero spheres and Aleph-one circles.

Connectome Specific Harmonic Waves On Lsd by Qualia Computing - Transcript and video of a talk on neuroimaging the brain on LSD. "Today thanks to the recent developments in structural neuroimaging techniques such as diffusion tensor imaging, we can trace the long-distance white matter connections in the brain. These long-distance white matter fibers (as you see in the image) connect distant parts of the brain, distant parts of the cortex."

Approval Maximizing Representations by Paul Christiano - Representing images. Manipulation representations. Iterative and compound encodings. Compressed representations. Putting it all together and bootstrapping reinforcement learning.

Travel by Ben Kuhn - Advice for traveling frequently. Sleeping on the plane and taking redeyes. Be robust. Bring extra clothes, medicine, backup chargers and things to read when delayed. Minimize stress. Buy good luggage and travel bags.

Learning To Cooperate, Compete And Communicate by Open Ai - Competitive multi-agent models are a step towards AGI. An algorithm for centralized learning and decentralized execution in multi-agent environments. Initial Research. Next Steps. Lots of visuals demonstrating the algorithm in practice.

Openai Baselines Dqn by Open Ai - "We’re open-sourcing OpenAI Baselines, our internal effort to reproduce reinforcement learning algorithms with performance on par with published results." Best practices we use for correct RL algorithm implementations. First release: DQN and three of its variants, algorithms developed by DeepMind.

Corrigibility by Paul Christiano - Paul defines the sort of AI he wants to build, he refers to such systems as "corrigible". Paul argues that a sufficiently corrigible agent will become more corrigible over time. This implies that friendly AI is not a narrow target but a broad basin of attraction. Corrigible agents prefer to build other agents that share the overseer's preferences, not their own. Predicting that the overseer wants me to turn off when he hits the off-button is not complicated relative to being deceitful. Comparison with Eliezer's views.

G Reliant Skills Seem Most Susceptible To Automation by Freddie deBoer - Computers already outperform humans in g-loaded domains such as Go and Chess. Many g-loaded jobs might get automated. Jobs involving soft or people skills are resilient to automation.

Persona 5: Spoiler Free Review - Persona games are long but deeply worthwhile if you enjoy the gameplay and the story. Persona 5 is much more polished but Persona 3 has a more meaningful story and more interesting decisions. Tips for Maximum Enjoyment of Persona 5. Very few spoilers.

Sea Problem by protokol2020 - A fun problem. Measuring sea level rise.


83 The Politics Of Emergency by Waking Up with Sam Harris - Fareed Zakaria. "His career as a journalist, Samuel Huntington's "clash of civilizations," political partisanship, Trump, the health of the news media, the connection between Islam and intolerance"

On Risk, Statistics, And Improving The Public Understanding Of Science by 80,000 Hours - A lifetime of communicating science. Early career advice. Getting people to intuitively understand hazards and their effect on life expectancy.

Ed Luce by Tyler Cowen - The Retreat of Western Liberalism "What a future liberalism will look like, to what extent current populism is an Anglo-American phenomenon, Modi’s India, whether Kubrick, Hitchcock, and John Lennon are overrated or underrated, and what it is like to be a speechwriter for Larry Summers."

Thomas Ricks by EconTalk - Thomas Ricks book Churchill and Orwell. Overlapping lives and the fight to preserve individual liberty.

The End Of The World According To Isis by Waking Up with Sam Harris - Graeme Wood. His experience reporting on ISIS, the myth of online recruitment, the theology of ISIS, the quality of their propaganda, the most important American recruit to the organization, the roles of Jesus and the Anti-Christ in Islamic prophecy, free speech and the ongoing threat of jihadism.

Jason Khalipa by Tim Ferriss - "8-time CrossFit Games competitor, a 3-time Team USA CrossFit member, and — among other athletic feats — he has deadlifted 550 pounds, squatted 450 pounds, and performed 64 pullups at a bodyweight of 210 pounds."

Dario Amodei, Paul Christiano & Alex Ray by 80,000 Hours - A detailed guide to careers in AI policy. "We discuss the main career paths; what to study; where to apply; how to get started; what topics are most in need of research; and what progress has been made in the field so far." Transcript included.

Don Boudreaux on Emergent Order by EconTalk - "Why is it that in large cities like Paris or New York City people sleep peacefully, unworried about whether there will be enough bread or other necessities available for purchase the next morning? No one is in charge--no bread czar. No flour czar."

Tania Lombrozo On Why We Evolved The Urge To Explain by Rational Speaking - "Research on what purpose explanation serves -- i.e., why it helps us more than our brains just running prediction algorithms. Tania and Julia also discuss whether simple explanations are more likely to be true, and why we're drawn to teleological explanations"

Bi-weekly Rational Feed

20 deluks917 08 August 2017 01:56PM

===Highly Recommended Articles:

Skills Most Employable by 80,000 Hours - Metrics: Satisfaction, risk of automation, and breadth of applicability. Leadership and social skills will gain the most in value. The least valuable skills involve manual labor. Tech skills may not be the most employable but they are straightforward to improve at. The most valuable skills are the hardest to automate and useful in the most situations. Data showing a large oversupply of some tech skills, though others are in high demand. A chart of which college majors add the most income.

Something Was Wrong by Zvi Moshowitz - Zvi visits a 'stepford pre-school'. He can't shake the feeling that something is wrong. He decides not to send his son to the place where kid's souls go to die.

Ems Evolve by Bayesian Investor - Will the future be dominated by entities that lack properties we consider important (such as 'having fun' or even 'being sentient')? Will agents lacking X-value outcompete other agents? What counter-measures could society take, and how effective would they be?

Housing Price Bubble Revisited by Tyler Cowen - "Over the entire 20th century real home prices averaged an index value of about 110 (and were quite close to this value over the entire 1950-1997 period). Over the entire 20th century, housing prices never once rose above 131, the 1989 peak. But beginning around 2000 house prices seemed to reach for an entirely new equilibrium. In fact, even given the financial crisis, prices since 2000 fell below the 20th century peak for only a few months in late 2011. Real prices today are now back to 2004 levels and rising. As I predicted in 2008, prices never returned to their long-run 20th century levels."

Tyler Cowen On Stubborn Attachments by EconTalk - "Cowen argues that economic growth--properly defined--is the moral key to maintaining civilization and promoting human well-being. Along the way, the conversation also deals with inequality, environmental issues, and education"


Contra Grant On Exaggerated Differences by Scott Alexander - "Hyde found moderate or large gender differences in aggressiveness, horniness, language abilities, mechanical abilities, visuospatial skills, mechanical ability, tendermindness, assertiveness, comfort with body, various physical abilities, and computer skills. Perhaps some people might think that finding moderate-to-large-differences in mechanical abilities, computer skills, etc supports the idea that gender differences might play a role in gender balance in the tech industry. But because Hyde’s meta-analysis drowns all of this out with stuff about smiling-when-not-observed, Grant is able to make it sound like this proves his point. It’s actually worse than this, because Grant misreports the study findings in various ways."

Links: On The Site Of The Angels by Scott Alexander - Standard SSC links post.

Mildly Condescending Advice by SlateStarScratchpad - Ten mildly condescending but useful pieces of advice Scott recommends.

Communism by SlateStarScratchpad - Scott thinks he would have been a communist in 1910.

What Are The Median Psychiatrists Scores On The by SlateStarScratchpad - Psychiatrists are very mentally well adjusted on average. "I think you get way more illness in the therapists, counselors, etc, especially the ones that are kind of low-status and don’t require a lot of training." Doctors' recovery rates from alcoholism are very good.

Why Not More Excitement About Prediction Aggregation by Scott Alexander - Prediction markets and aggregation methods work. Superforecasters proved some groups can consistently make good predictions. Why isn't there more interest? Wouldn't investors pay for predictions? Do theories about signaling and prestige explain the situation?

Where The Falling Einstein Meets The Rising Mouse by Scott Alexander - Eliezer/Scott's model of intelligence suggests that the gap between 'village idiot' and Einstein is tiny relative to the difference between 'village idiot' and a chimp. This suggests that once AI reaches human levels it will almost immediately pass the best human. This happened in Go. But in other fields progress was gradual throughout the approximately human level skill range. Scott looks at possible explanations.

Stem vs The Humanities by SlateStarScratchpad - A long and intelligent thread about "STEM" vs "The Humanities". What are the natural categories? Should we consider math part of the humanities? Should we groups careful humanities scholars with careful STEM scholars? So-called-autistics. Other topics.

Gender Imbalances Are Mostly Not Due To Offensive Attitudes by Scott Alexander - Men and women massively differ in terms of interest in things vs people. Libertarians are about 5% women. r/MRA and the gamergate subreddit have twice this percentage. Trump voters are close to gender parity and the Catholic Church has more women than men. Why this matters.

Is It Possible To Have Coherent Principles Around Free Speech Norms by Scott Alexander - The doctrine of the preferred first speaker. Separating having an opinion, signaling a propensity, and committing a speech act. Self selected communities. "Don’t try to destroy people in order to enforce social norms that only exist in your head"

Book Review Raise A Genius by Scott Alexander - Scott quote-mines Polgar's book on raising genius. Many of the quotes are concerned with the importance of instilling a love of learning in children. Polgar gives some detail on how to do this but not as much as Scott hoped. Summary: "Get those four things right – early start, single-subject focus, 1:1 home schooling, and a great parent/teacher – and the rest is just common-sense advice."

Against Signal Boosting As Doxxing by Scott Alexander - Free speech did not come into existence ex nihilo when the First Amendment was ratified. People need to be free from mobs as well as kings.

Open Djed by Scott Alexander - Bi-Weekly public open thread. Meetup Tab. Updates on rationalist houses in Berkeley. Selected comments on Griggs. A comment on Democratic strategy and Georgia.

Why Is Clozapine So Great by Scott Alexander - Clozapine is a very effective antipsychotic but it has large serious side effects. NMDA agonists and a proposed mechanism for Clozapine. Maybe you could just give patients a normal antipsychotic plus glycine.

Djoser Joseph Osiris by Scott Alexander - "The other day a few people (including Ben Hoffman of Compass Rose) tried to convince me that Pharaoh Djoser was the inspiration for the god Osiris and the Biblical Joseph. The short summary is that the connection between Djoser and Osiris is probably meaningless, but there’s a very small chance there might be some tiny distant scrap of a connection to Joseph."

Don't Blame Griggs by Scott Alexander - Griggs vs. Duke Power Co is commonly cited as making it prohibitively hard for companies to use intelligence tests in hiring. Scott argues this doesn't explain the rise in credentialism. You can still ask about SAT scores. Fields with easily available test scores (LSAT, MCAT) are still credentialist. Other countries lack equivalents of Griggs vs Duke.

Highlights From The Comment Thread On Meritocracy by Scott Alexander - Real merit vs credentials. Which merits do we reward? Meritocracy causes high ability people to concentrate into one class. Just get rid of rulers and structural divisions between people. Scott finds the latter idea utopian. "The most salient alternative to meritocracy isn’t perfect equality, it’s cronyism."


Why So Few Women In Cs: The Google Memo Is Right by Artir (Nintil) - Sampler: Lots of data and graphs covering multiple countries. "Occupational segregation by gender is stronger in egalitarian countries. This is a fatal blow to the sexism theory." In the 1980s demand for the CS major far outstripped capacity. This led to severe limits on who could major in CS. These limits occurred at the same time female enrollment percentage dropped.

Double Crux Web App by mindlevelup - Double Crux is a rationalist technique for resolving and understanding disagreements. It involves identifying facts or statements, called cruxes, such that changing your mind about a crux would change your mind about the overall disagreement. The author built software to facilitate double crux during the Google CSSI 3-week web dev camp. Links to the site and an explanation of Double Crux.

Compare Institutions To Institutions Not To Perfection by Robin Hanson - Hanson responds to criticisms of prediction markets. Short term accuracy is always easier to incentivize. Its always easier to find surface as opposed to deep connections.

Thank You For Listening by Ben Hoffman (Compass Rose) - Zvi's post above starts with a reference to a previous Benquo post. If you have hurt your child via school you aren't the enemy. Society taught you that you were helping. If you are still sending your child to a harmful system you aren't the enemy either, you are doing what you think is best.

Something Was Wrong by Zvi Moshowitz - Zvi visits a 'stepford pre-school'. He can't shake the feeling that something is wrong. He decides not to send his son to the place where kid's souls go to die.

Inscrutable Ideas by Gordon (Map and Territory) - The author describes 'holonic' thinking and why it's hard to explain. Postmodernism as a flawed holonic tradition. Buddhism as a better holonic tradition. Fundamental incompatibility with system-relationship epistemology.

Body Pleasure by Sarah Perry (ribbonfarm) - "As non-human intelligences get more sophisticated, it may be the case that human work remains extremely important; however, it may also be that humans are faced with increasing leisure. If that is the case, the critical problem facing humanity will be how to enjoy ourselves. If that seems silly, consider your favorite dystopian images of the future: only humans who understand how to enjoy themselves can demand living conditions in which they are able to do so."

Erisology Of Self And Will The Need And The Reasons by Everything Studies - "Here in part 6 I discuss the reasons why the traditional view persists when prescientific thinking on other topics often doesn’t."

Confidence And Patience Dont Feel Like Anything In Particular by Kaj Sotala - Being confident doesn't feel like anything. 'Feeling confident' is really just the lack of feeling unconfident.

Foom Justifies Ai Risk Efforts Now by Robin Hanson - Organizations and corporations are already much smarter and more powerful than individuals, yet they remain mostly under control. Despite setbacks (Wars, revolutions, famines) the organization ecosystem is mostly functional. The only reason to be preemptively worried about AI is if AI takeoff will be very fast.

Skills Most Employable by 80,000 Hours - Metrics: Satisfaction, risk of automation, and breadth of applicability. Leadership and social skills will gain the most in value. The least valuable skills involve manual labor. Tech skills may not be the most employable but they are straightforward to improve at. The most valuable skills are the hardest to automate and useful in the most situations. Data showing a large oversupply of some tech skills, though others are in high demand. A chart of which college majors add the most income.

A Tactics by protokol2020 - Why it's very hard to argue against the scientific consensus in fields such as Relativity or Quantum Mechanics. The Earth's temperature was hotter when it rotated faster, despite a fainter sun. Many physicists failed to grasp this fact. What does that imply?

Hedonic Model by Jeff Kaufman - "Happiness is having how things are going be closer to how you think things could be going." Some interesting implications including that both inequality and social mobility are bad.

Link Blog: Broadcom Broadpwn Gender Signal by Name and Nature - Links: History of Atheism. Evolution of Trust. Graphics depicting the Fast Fourier Transform. Remotely Compromising Android and iOS via a Bug in Broadcom’s Wi-Fi Chipsets.

Lying On The Ground by mindlevelup - "A rambling look at how rewards, distractions, and attention interact. Starts with the idea of lying on the ground as an interesting break-time activity and goes from there to talk about Saturation and Feeling, two concepts that I’ve been thinking about lately."

Ems Evolve by Bayesian Investor - Will the future be dominated by entities that lack properties we consider important (such as 'having fun' or even 'being sentient')? Will agents lacking X-value outcompete other agents? What counter-measures could society take, and how effective would they be?

Models Of Human Relationships Tools To Understand People by Elo (BearLamp) - Brief Model descriptions: Crucial Conversations. 4 Difficult Conversations, 4 Behaviors that kill relationships, How to Win Friends and Influence People (Detailed review), Non-Judgmental conversations, Emotional Intelligence. Circling, The Game (PUA), Apologies, Emotional Labor and others.

Inefficiencies In The Social Value Market by Julia Galef - Add liquidity where needed. Solve coordination problems. Pool risks. Provide resource allocation information. Make biases work for you. Remove rent-seeking. Reduce transaction costs.

Erisology Of Self And Will Campbellian Thinking In The Wild by Everything Studies - "In this section I’ll show some examples of casual conversation revealing Campbellian ideas. Comment threads attached to online newspaper articles are excellent sources of such casual conversation. Written down in a neat and accessible form, their existence makes it practical to do this kind of research for the first time."

The Future: Near Zero Growth Rates by The Foundational Research Institute - Moore's law cannot possibly go on for more than ~400 years, we will hit physical limits to computation. At 2.3% growth in energy use we would need to coat the Earth in solar panels to get enough solar energy in only 400 years. If we captured all the energy from the sun we would run out in 1350 years. The universe can only support so much economic activity. We live in a very unusual part of humanity's timeline in terms of growth rates.
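The growth arithmetic here is easy to check. Assuming roughly 18 TW of current world energy use, ~1.7e5 TW of sunlight intercepted by Earth, and ~3.85e26 W of total solar output (ballpark physical constants, not figures taken from the post), 2.3% compound growth hits those ceilings on roughly the timescales cited:

```python
import math

GROWTH = 1.023               # 2.3% annual growth in energy use
CURRENT_TW = 18.0            # ballpark current world energy use, terawatts
EARTH_SUNLIGHT_TW = 1.74e5   # approx. solar power intercepted by Earth
SUN_OUTPUT_TW = 3.85e14      # approx. total solar luminosity, in TW

def years_to_reach(target_tw, current_tw=CURRENT_TW, growth=GROWTH):
    """Years of compound growth until energy use reaches target_tw."""
    return math.log(target_tw / current_tw) / math.log(growth)

years_earth = years_to_reach(EARTH_SUNLIGHT_TW)  # ~400 years
years_sun = years_to_reach(SUN_OUTPUT_TW)        # ~1350 years
```

The exact answers shift with the assumed starting point, but the exponential makes the conclusion insensitive: an order of magnitude change in current use moves the deadline by only about a century.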

How I Found & Fixed The Root Problem Behind My Depression And Anxiety After 20 Years by Kaj Sotala - Finding the root cause: self-concept. How to cultivate lovable self-concepts (ex: bravery). Consider memories where you lived up to the concept of being brave. Also consider cases where you failed. Integrating the positives and negatives into a healthy whole. Positive benefits the author experienced: professional success, emotional landscape improvement, negative emotions disappeared. Expected relationship changes. Lots of personal history details throughout.

Taking Integrity Literally by Ben Hoffman (Compass Rose) - Defending Kant. Fight the murderer or shut the door but don't become the sort of person who considers lying. Honesty is optimal in healthy environments. Thoughts on unhealthy environments. How Ben started to become honest about how late he would be. Not lying to yourself or others.

People Dont Have Beliefs Anymore Than They Have by Bound_up (lesswrong) - Actions are not deduced from goals. Beliefs are not deduced from models of the world. Maybe nerds have real beliefs but most people do not. Less nerdy people will probably interpret your stated beliefs as social moves and will respond in turn.

Complexity Is Bad by Zvi Moshowitz - People can only think about ~3 things and store ~7 pieces of information in working memory. People will simplify in unexpected ways or fail to engage. Some concepts that help you manage complexity (ex: Resonance, Chunking). A link to the MtG head of R&D's podcast about why complexity is a cost.

Write Down Your Process by Zvi Moshowitz - Writing down your thought process helps you improve. Magic R&D's openness. Zvi's success as a MtG player and writer.


July 2017 Newsletter by The MIRI Blog - News and Links: Open AI, Deepmind, AI Impacts, EA global, 80K hours, etc

Yudkowsky And Miri by Jeff Kaufman - Eliezer once wrote an extremely embarrassing article called 'So you want to be a Seed AI Programmer'. A ML researcher showed it to Jeff Kaufman and said it implied Eliezer was a crank. Eliezer wrote it in 2003, when he was 24. What does this imply about MIRI?


Medical Research Cancer Is Hugely Overfunded by Sanjay (EA forum) - Chart of disease burden vs research share. Six reasons you might disagree with the conclusion including cause tractability and methodology.

Blood Donation Generally Not That Effective On by Grue_Slinky (EA forum) - Having a supply of blood is very important. However the marginal value of blood donation is too low to recommend it as an efficient intervention.

How We Banned Fur In Berkeley by jayquigly (EA forum) - Fur sales banned. Main strategies: cultivating relationships with sympathetic council members, utilizing a proven template for the bill. Background. Strategy details. Advice.

Links: Our Main Goal is to Learn by GiveDirectly - Eight media links on Give Directly, Basic Income, Cash Transfers and Development Aid.

Funding Constraints For Ea Orgs by Jeff Kaufman - Value of direct work vs donation. Jeff argues EA organizations could make use of more resources. For example EA-Global could hire non-EA professional conference organizers.

===Politics and Economics:

Rise And Fall Of Rawlsianism by Artir (Nintil) - "I will introduce street Rawlsianism, a simplified version of Rawls’s Theory of Justice to get an idea of what this is all about. Then, I will explain how that came to be, including some extra details about Rawls’s justification for his theory. This story itself, the development of Rawls’s own philosophical views, is a good enough criticism of his original theory, but I will add at the end what I think are the strongest critiques I know."

Hazlett's Political Spectrum by Robin Hanson - "Not only would everything have been much better without FCC regulation, it actually was much better before the FCC! Herbert Hoover, who was head of the US Commerce Department at the time, broke the spectrum in order to then “save” it, a move that probably helped him rise to the presidency."

Another Point Of View by Simon Penner (Status 451) - The author was raised working class in semi-rural Canada and moved to Silicon valley. He experienced a ton of culture shock and significant cultural discrimination. This causes him to have less sympathy for people who quit software because of relatively minor pressure saying they don't fit in. The author overcame this stuff and so should other people.

Rapid Onset Gender Dysphoria Is Bad Science by Ozy (Thing of Things) - Ozy cites two commonly misinterpreted but good studies about suicide rates among transgender individuals. Ozy then discusses a very shoddy study about people rapidly becoming trans after meeting trans friends. However the study got its information by asking the parents of trans teens and young adults. Ozy explains how and why young adults hide much of their feelings from their parents, especially if they are neurodiverse.

Housing Price Bubble Revisited by Tyler Cowen - "Over the entire 20th century real home prices averaged an index value of about 110 (and were quite close to this value over the entire 1950-1997 period). Over the entire 20th century, housing prices never once rose above 131, the 1989 peak. But beginning around 2000 house prices seemed to reach for an entirely new equilibrium. In fact, even given the financial crisis, prices since 2000 fell below the 20th century peak for only a few months in late 2011. Real prices today are now back to 2004 levels and rising. As I predicted in 2008, prices never returned to their long-run 20th century levels."

Reinventing The Wheel Of Fortune by sam[]zdat - Two definitions of democracy. A key idea: "Lasch is an external commentary using this rough model. At some point, the combined apparatus of American culture (the state, capital, media, political agitation) tried to make things “better”. To better its citizens required new social controls (paternalism). The taylorism employed makes things more focused on image, and this results in a more warlike society. Happened with “authenticity” last time and also [everything below]. To deal with this invasion, society turns to narcissistic defenses. Narcissism is self-centered, but it’s an expression of dependence on others, and specifically on the others’ validation of the narcissist’s image."


Prime Towers Problem by protokol2020 - Prime height towers. From which tower are the most tower tops visible?

The Unyoga Manifesto by SquirrelInHell - Yoga has a sort of 'competitive' ethos baked in. There is a lot of pressure to do the postures 'correctly'. Instead you should listen to your body and follow the natural incentive gradients that lead to maintaining one's body well. Four practical pieces of advice.

Clojure The Perfect Language To Expand Your Brain by Eli Bendersky - Clojure will almost certainly change how you think about programming. Clojure is a fully modern and useable Lisp. The designers of Clojure are extremely pragmatic, building upon decades of industry experience. Sequences and laziness for powerful in-language data processing. Right approach to OOP. Built-in support for concurrency and parallelism.

A Physics Problem Once Again by protokol2020 - Discussion of n-dimensional mating. Approximate the sum of all gravitational forces between pairs of atoms inside the earth.

Meta Contrarian Typography by Tom Bartleby - The author is a self-described meta-contrarian. Supporting two spaces after a period. The three reasons for single spaces and why they don't hold up. Double spaces make writing easier to skim, and periods are over-worked in English.

I Cant Be Your Hero Im Too Busy Being Super by Jim Stone (ribbonfarm) - "But people don’t generally take on the burdens of inauthenticity without good reason. Often it’s because they want to occupy social roles that allow them to get their physical and psychological needs met, and other people won’t let them play those roles unless they are the right kind of person. Sometimes people put on masks simply to secure the role of “community member” or “citizen” or “human being”."


Physical Training Dating Strategies And Stories From The Early Days by Tim Ferriss - Tim answers viewer questions. Physical training, interview prep, the art of networking, education reform, dream guests on the show.

Living With Violence by Waking Up with Sam Harris - "Gavin de Becker about the primacy of human intuition in the prediction and prevention of violence."

Amanda Askell On Pascals Wager And Other Low Risks Wi by Rational Speaking - "Pascal's Wager: It's rational to believe in God, because if you're wrong it's no big deal, but if you're right then the payoff is huge. Amanda Askell argues that it's much trickier to rebut Pascal's Wager than most people think. Handling low probability but very high impact possibilities: should you round them down to zero? Does it matter how measurable the risk is? And should you take into account the chance you're being scammed?"

Tyler Cowen On Stubborn Attachments by EconTalk - "Cowen argues that economic growth--properly defined--is the moral key to maintaining civilization and promoting human well-being. Along the way, the conversation also deals with inequality, environmental issues, and education"

40 Making Humans Legible by The Bayesian Conspiracy - Seeing like a State. Scott and Sam[]zdat's posts. Green Revolution. Age of Em. Chemtrails and invasive species. Friendship is Optimal.

Dave Rubin by Tyler Cowen - "Comedy and political correctness, which jokes should not be told, the economics of comedy, comedy in Israel and Saudi Arabia, comedy on campus, George Carlin, and the most underrated Star Wars installment"

Yascha Mounk by The Ezra Klein Show - Trump's illiberalism is catalyzed by his failures. Recently Trump has been more illiberal. Support for Trump remains at around 40 percent. What does this imply about the risk of an illiberal Trump successor with more political competence?

Alex Guarnaschelli by EconTalk - Food network star. "What it's like to run a restaurant, the challenges of a career in cooking, her favorite dishes, her least favorite dishes, and what she cooked to beat Bobby Flay."

On Becoming A Better Person by Waking Up with Sam Harris - "David Brooks. His book The Road to Character, the importance of words like “sin” and "virtue," self-esteem vs. self-overcoming, the significance of keeping promises, honesty, President Trump."

Julia Galef On How To Argue Better And Change Your Mind More by The Ezra Klein Show - Thinking more clearly and arguing better, Ezra's concerns about the traditional paths toward a better discourse. Signaling is turtles all the way down, motivated reasoning, probabilistic debating, which identities help us find truth, making online arguments less terrible. Julia heavily emphasizes the importance of good epistemic communities. Being too charitable can produce wrong predictions. Seeing like a State.

2017 LessWrong Survey

19 ingres 13 September 2017 06:26AM

The 2017 LessWrong Survey is here! This year we're interested in community response to the LessWrong 2.0 initiative. I've also gone through and fixed as many bugs as I could find reported on the last survey, and reintroduced items that were missing from the 2016 edition. Furthermore new items have been introduced in multiple sections and some cut in others to make room. You can now export your survey results after finishing by choosing the 'print my results' option on the page displayed after submission. The survey will run from today until the 15th of October.

You can take the survey below, thanks for your time. (It's back in single page format, please allow some seconds for it to load):

Click here to take the survey

Rescuing the Extropy Magazine archives

18 Deku-shrub 01 July 2017 02:25PM

Possibly of more interest to old school Extropians, you may be aware the defunct Extropy Institute's website is very slow and broken, and certainly inaccessible to newcomers.

Anyhow, I have recently pieced together most of the early publications (1988-1996) of 'Extropy: Vaccine For Future Shock', later 'Extropy: Journal of Transhumanist Thought', as part of mapping the history of Extropianism.

You'll find some really interesting very early articles on neural augmentation, transhumanism, libertarianism, AI (featuring Eliezer), radical economics (featuring Robin Hanson of course) and even decentralised payment systems.

Along with the ExI mailing list which is not yet wikified, it provides a great insight into early radical technological thinking, an era mostly known for the early hacker movement.

Let me know your thoughts/feedback!

Ten small life improvements

16 paulfchristiano 20 August 2017 07:09PM

I've accumulated a lot of small applications and items that make my life incrementally better. Most of these ultimately came from someone else's recommendation, so I thought I'd pay it forward by posting ten of my favorite small improvements.

(I've given credit where I remember who introduced the item into my life. Obviously the biggest part of the credit goes to the creator.)

Video speed

Video Speed Controller lets you speed up HTML5 video; it gives a nicer interface than the YouTube speed adjustment and works for most videos displayed in a browser (including e.g. Netflix/Amazon).

(Credit: Stephanie Zolayvar?)


Spectacle

Spectacle on OSX provides keyboard shortcuts to snap windows to any half or third of the screen (or full screen).

Pinned tabs + tab wrangler

I use tab wrangler to automatically close tabs (and save a bookmark) after 10m. I keep gmail and vimflowy pinned so that they don't close. For me, closing tabs after 10m is usually the right behavior.
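The behavior described above amounts to a simple policy: close (and bookmark) any tab idle longer than a cutoff, but never a pinned one. A rough sketch in Python, with a hypothetical data model (the actual extension is a browser add-on, not this code):

```python
import time

def wrangle_tabs(tabs, now=None, idle_limit=600):
    """Close (and bookmark) tabs idle longer than idle_limit seconds.

    `tabs` is a list of dicts like
    {"url": ..., "pinned": bool, "last_active": unix_time}.
    Pinned tabs are never closed. Returns (kept_tabs, bookmarked_urls).
    """
    now = time.time() if now is None else now
    kept, bookmarks = [], []
    for tab in tabs:
        if not tab["pinned"] and now - tab["last_active"] > idle_limit:
            bookmarks.append(tab["url"])  # save a bookmark before closing
        else:
            kept.append(tab)
    return kept, bookmarks
```

The field names and the polling approach are illustrative only; the point is just that "pinned" exempts a tab from the idle timeout.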

Aggressive AdBlock

I use AdBlock for anything that grabs attention, even if it isn't an ad. I usually block "related content," "next stories," the whole YouTube sidebar, everything on Medium other than the article, the gmail sidebar, most comment sections, etc. Similarly, I use kill news feed to block my Facebook feed.

Avoiding email inbox

I often need to write or look up emails during the day, which would sometimes lead me to read/respond to new emails and switch contexts. I've mostly fixed the problem by leaving gmail open to my list of starred emails rather than my inbox, ad-blocking the "Inbox (X)" notification, and pinning gmail so that I can't see the "Inbox (X)" title.

Christmas lights

I prefer the soft light from Christmas lights to white overhead lights or even softer lamps. My favorites are multicolored lights, though soft white lights also seem OK.

(Credit: Ben Hoffman)


Karabiner

Karabiner remaps keys in a very flexible way. (Unfortunately, it only works on OSX pre-Sierra. I would be very interested if there is any similarly flexible software that works on newer versions.)

Some changes have helped me a lot:

  • While holding s: hjkl move the cursor. (Turn on "Simple Vi Mode v2") I find this way more convenient than the arrow keys.
  • While holding d: hjkl move the mouse. (Turn on "Mouse Keys Mode v2") I find this slightly more convenient than a mouse most of the time, but the big win is that I can use my computer when a bluetooth mouse disconnects.
  • Other stuff while holding s: (add this gist to your private.xml):
    • While holding s: u/o move to the previous and next word, n is backspace. 
    • While holding s+f: key repeat is 10x faster.
    • While holding s+a: hold shift (so cursor selects whatever it moves over, e.g. I can quickly select last ten words by holding a+s+f and then holding u for 1 second).

I'd definitely pay > a minute a day for these changes.


Split keyboard

I find split+tented keyboards much nicer than usual keyboards. I use a Kinesis Freestyle 2 with this to prop it up. I put my touchpad on a raised platform between the keyboard halves. Alternatively, you might prefer the Wirecutter's recommendations.

(Credit: Emerald Yang)


Vimflowy

Vimflowy is similar to Workflowy, with a few changes: it lets you "clone" bullets so they appear in multiple places in your document, has marks that you can jump to easily, and has much more flexible motions / macros / etc. I find all of these very helpful. The biggest downside for most people is probably modal editing (keystrokes issue commands rather than inserting text).

The biggest value add for me is the time tracking plugin. I use vimflowy essentially constantly, so this gives me extremely fine-grained time tracking for free.

Running locally (download from github) lets you use vimflowy offline, and using the SQLite backend scales to very large documents (larger than workflowy can handle).

(Credit: Jeff Wu and Zachary Vance.)

ClipMenu [hard to get?]

Keeps a buffer of the last 20 things you've copied, so that you can paste any one of them. The source for OSX is on github here; I'm not sure if it can be easily compiled/installed (binaries used to be available). I'd be curious if anyone knows a good alternative or tries to compile it.

(Credit: Jeff Wu.)
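The core data structure behind a clipboard-history tool like this is just a bounded buffer of recent copies. A minimal sketch in Python (the `ClipboardHistory` class and its methods are illustrative names, not ClipMenu's actual implementation; real clipboard access is OS-specific and omitted):

```python
from collections import deque

class ClipboardHistory:
    """Keeps the last N copied items, most recent first."""

    def __init__(self, max_items=20):
        # deque with maxlen silently drops the oldest item when full
        self._items = deque(maxlen=max_items)

    def record(self, text):
        # Skip consecutive duplicates (copying the same thing twice)
        if self._items and self._items[0] == text:
            return
        self._items.appendleft(text)

    def items(self):
        # Snapshot of the history, newest first
        return list(self._items)

    def paste(self, index):
        # Return the index-th most recent item (0 = most recent)
        return self._items[index]

history = ClipboardHistory(max_items=20)
for snippet in ["alpha", "beta", "beta", "gamma"]:
    history.record(snippet)
# history.items() is newest-first, with the duplicate "beta" collapsed
```

A real tool would poll (or subscribe to) the system clipboard and call `record` on each change; the buffer itself is all the interesting state.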

[Link] How I [Kaj] found & fixed the root problem behind my depression and anxiety after 20+ years

16 Kaj_Sotala 26 July 2017 12:56PM

[Link] Game Theory & The Golden Rule (From Reddit)

15 Brillyant 28 July 2017 01:54PM

The dark arts: Examples from the Harris-Adams conversation

15 Stabilizer 20 July 2017 11:42PM

Recently, James_Miller posted a conversation between Sam Harris and Scott Adams about Donald Trump. James_Miller titled it "a model rationalist disagreement". While I agree that the tone in which the conversation was conducted was helpful, I think Scott Adams is a top practitioner of the Dark Arts. Indeed, he often prides himself on his persuasion ability. To me, he is very far from a model for a rationalist, and he is the kind of figure we rationalists should know how to fight against.


Here are some techniques that Adams uses:


  1. Changing the subject: (a) Harris says Trump is unethical and cites the example of Trump gate-crashing a charity event to falsely get credit for himself. Adams responds by saying that others are equally bad—that all politicians do morally dubious things. When Harris points out that Obama would never do such a thing, Adams says Trump is a very public figure and hence people have lots of dirt on him. (b) When Harris points out that almost all climate scientists agree that climate change is happening and that it is wrong for Trump to have called climate change a hoax, Adams changes the subject to how it is unclear what economic policies one ought to pursue if climate change is true.
  2. Motte-and-bailey: When Harris points out that the Trump University scandal and Trump's response to it mean Trump is unethical, Adams says that Trump was not responsible for the university because it was only a licensing deal. Then Harris points out that Trump is unethical because he shortchanged his contractors. Adams says that that's what happens with big construction projects. Harris tries to argue that it's the entirety of Trump's behavior that makes it clear that he is unethical—i.e., Trump University, his non-payment to contractors, his charity gate-crashing, and so on. At this point, Adams says we ought to stop expecting ethical behavior from our Presidents. This is a classic motte-and-bailey defense: defend an indefensible position (the bailey) for a while, but once it becomes untenable to defend, retreat to the motte (something much more defensible).
  3. Euphemisation: (a) When Harris tells Adams that Trump lies constantly and has a dangerous disregard for the truth, Adams says, I agree that Trump doesn't pass fact checks. Indeed, throughout the conversation Adams never refers to Trump as lying or as making false statements. Instead, Adams always says, Trump "doesn't pass the fact checks". This move essentially makes it sound as if there's some organization whose arbitrary and biased standards are what Trump doesn't pass, and so downplays the much more important fact that Trump lies. (b) When Harris calls Trump's actions morally wrong, Adams makes it seem as if he is agreeing with Harris but then rephrases it as: "he does things that you or I may not do in the same situation". Indeed, that's Adams's constant euphemism for a morally wrong action. This is a very different statement from saying that what Trump did was wrong, and it makes it seem as if Trump is just a normal person doing what normal people do.
  4. Diagnosis: Rather than debate the substance of Harris’s claims, Adams will often embark on a diagnosis of Harris’s beliefs or of someone else who has that belief. For example, when Harris says that Trump is not persuasive and does not seem to have any coherent views, Adams says that that's Harris's "tell" and that Harris is "triggered" by Trump's speeches. Adams constantly diagnoses Trump critics as seeing a different movie, or as being hypnotized by the mainstream media. By doing this, he moves away from the substance of the criticisms.
  5. Excusing: (a) When Harris says that it is wrong to not condemn, and wrong to support, the intervention of Russia in America's election, Adams says that the US would exact revenge via its intelligence agencies and we would never know about it. He provides no evidence for the claim that Trump is indeed exacting revenge via the CIA. He also says America interferes in other elections too. (b) When Harris says that Trump degraded democratic institutions by promising to lock up his political opponent after the election, Adams says that was just a joke. (c) When Harris says Trump is using the office of the President for personal gain, Adams tries to spin the narrative as Trump trying to give as much as possible late in his life for his country.
  6. Cherry-picking evidence: (a) When Harris points out that seventeen different intelligence agencies agreed that Russia’s government interfered in the US elections, Adams says that the intelligence agencies have been known to be wrong before. (b) When Harris points out that almost all climate scientists agree on climate change, Adams points to some point in the 1970s where (he claims) climate scientists got something wrong, and therefore we should be skeptical about the claims of climate scientists.

Overall, I think what Adams is doing is wrong. He is an ethical and epistemological relativist: he does not seem to believe in truth or in morality. At the very least, he does not care about what is true and false and what is right and wrong. He exploits his relativism to push his agenda, which is blindingly clear: support Trump.


(Note: I wanted to work on this essay more carefully, and find out all the different ways in which Adams subverts the truth and sound reasoning. I also wanted to cite more clearly the problematic passages from the conversations. But I don't have the time. So I relied on memory and highlighted the Dark Arts moves that struck me immediately. So please, contribute in the comments with your own observations about the Dark Arts involved here.)

Models of human relationships - tools to understand people

14 Elo 29 July 2017 03:31AM

This post will not teach you the models here.  This post is a summary of the models that I carry in my head.  I have written most of the descriptions without looking them up (see the Feynman notebook method).  If you have read a book on every one of these points, they will make sense, as if you were shaking hands with an old acquaintance.  If you are seeing them for the first time, they won't make very much sense, or they will feel like trivial surface truths.

I can't make you read all the books, but maybe I can offer you that the answer to social problems is surprisingly simple.  After reading enough books you start to see the overlap and realise they are often trying to talk about the same thing.  (e.g. NVC and Gottman go together well.)

In fact, if you were several independent dragon hunters trying to model an invisible beast, and everyone's various homemade sensors kept going "ping" at similar events, you would probably start to agree you were chasing the same monster.  Models should start to agree when they are talking about the same thing.  The variety of models should make it easier for different minds to connect to different parts of the answer.

All models are wrong, some models are useful.  Try to look at where the models converge.  That's where I find the truth.

1. The book Crucial Confrontations - Kerry Patterson
(without explaining how) If you can navigate to a place of safety in a conversation you can say pretty much anything.  Which is not to say "here is how to be a jerk" but if you know something is going to come across negative you can first make sure to be in a positive/agreeable/supportive conversation before raising the hard thing.

In the middle of a yelling match is maybe not the best time to bring up something that has bugged you for years.  However, a few sentences about growth mindset, about supporting people in becoming better, and about trying to help (plus getting a feel that the person is ready to hear the thing), and you could tell anyone they are a lazy bum who needs to shape up or ship out.

The conversation needs to be safe.  For example - "I want to help you as a person and I know how hard it can be to get feedback from other people and I want to make you into a better person.  I have an idea for how you might like to improve.  Before I tell you I want to reassure you that even though this might come across abrasive I want to help you grow and be better in the future..."

(some people will be easier than others to navigate a safe conversation and that's where there are no hard and fast rules for how to do this.  Go with your gut)

The crux of this model is "have a model of the other person" [15]

2. The partner book "Difficult conversations"

There are 4 types of difficult conversations around communicating a decision:
a. Consultation (Bob asks Alice for ideas for the decision he is going to make on his own)
b. Collaboration (Bob and Alice make a decision together)
c. Declaration (Bob tells Alice the decision he has made)
d. Delegation (Bob tells Alice to make the decision)

As someone's boss you may sometimes have to pass on bad news in the form of a declaration.  It's up to you which conversation this is going to be, but being clear about which one it is helps the other person understand their place in responding or interacting with you.  Things become difficult when there is a misunderstanding about what is going on.

It's also important when you are on the receiving end to be on the same page about what conversation this is.  (you don't want to be negotiating in a collaborative manner when they are trying to give you a declaration of their decision, and the same when you are leading the conversation).

The book covers many other details as well.

3. Getting the 3rd story.

linking back to -
(from one of those books [1] or [2])

Bob knows what happened from his perspective, and Alice knows her version of events.  Where there is a disagreement about what follows from the different versions of events, it is possible to construct a 3rd-person story.  This may be hard to do when you are involved; an actual 3rd person can help, but is not crucial to constructing the story.  If you can step outside of your own story and construct a 3rd version together, this can resolve misunderstandings.
Something like: "I thought you said we should meet here; even though I said I wanted ice-cream, you thought that meant we should meet at the ice-cream place next door, and we each waited 30 minutes for the other one to turn up to where we were."  By constructing a 3rd story it's possible to see that no one was at fault.  It's also possible that it becomes clear what went wrong, how to learn from that, and what can be done differently.

(cue business management After-Action-Review activities {what did we do well, what could we have done better, what would we do differently}, now SWOT)

4. The Gottman Institute research (and book)

The 4 horsemen of divorce (but just because that's what the research is about doesn't mean we can't apply it elsewhere).  (Yes, Gottman is limited in value because of bad use of statistics, so we can't be sure the models are accurate; I still find it a good model for explaining things.)

Don't do these things.  When you see these things, recognise them for what they are and don't engage with them.  If necessary acknowledge people are feeling certain angry feelings and let them get them out (not everyone can efficiently drop how they are feeling and get on with talking about it, especially not without practice).

Each one has an antidote, usually in the form of an attitude or strategy that can leave you thinking about the same thing differently and relating to it differently.

I. Criticism
I would rename this to "inherent criticism".  It comes in the form of an inherent descriptor, like "you are a lazy person", "you always run late", "you are the type of person who forgets my birthday" [see 5].  Try to replace inherent criticism with [6] concrete descriptions of actions.

To counter this - try descriptions like [6a]:  "I can see you are sitting on the couch right now and I would like you to offer help when you can see me cleaning".  "yesterday I saw you try to do a few extra tasks and that caused us to run late", "you forgot my birthday last year".

The important thing about the change here is that an inherent label comes in the form of an unchangeable belief.  It's equivalent to saying "you are a tall person": fixed in time, space and attitude.  You don't want to give someone a fixed negative trait.  Not in your head, and especially not out loud, whether to that person or to anyone else.  You set someone up for failure if you do.  As soon as someone is "the lazy one" you give them the ticket to "always be lazy", and if they are half smart they will probably take it.  Besides - you don't change people's actions by using criticism.  You maybe relieve some frustration, but then you have created some open frustration and the problem still exists.

II. Defensiveness
Probably easiest to understand by the description of reactive defensiveness.  It usually comes as a reaction to an accusation.  If two people are yelling, chances are neither is listening.  In response to "you are always making us run late", a defensive reaction would be, "I make us run late because you always stress me out".

It does two things:
1. claim to not be responsible
2. make a second accusation (can be irrelevant to the subject at hand).

First of all if you are bringing up several problems at once you are going to confuse matters.  Try to deal with one problem at a time.  It doesn't really matter which so long as you are not yelling about being late while they are yelling about you forgetting the laundry. (and so long as you deal with all the problems)

The second part is that you can't shift blame.  Absorbing some blame does not make you a bad person.  Nor does it make you inherently terrible.  You can have both done a wrong thing and not be a bad person.  After all you had your reasons for doing what you did.

The antidote to defensiveness is to acknowledge [6] what they have said and move forward without reacting.

III. Contempt
This is about an internal state as much as an external state.  Contempt is about the story we tell ourselves about the other person (see NVC) and is a state of negative intent.  I hold you contemptuously.  For example, "a good person would not run late", "if you were smarter you would just...", "I work so hard on this relationship and you just...", Some examples of displays of contempt include when a person uses sarcasm, cynicism, name-calling, eye-rolling, sneering, mockery, and hostile humour [see 7 - emotional intelligence about physiological events].  This overlaps with Inherent criticism and makes more sense with [6 NVC].
Contempt has two antidotes, Teacher mindset and curiosity.  Teacher mindset can change an attitude of, "He should know what he did wrong" to, "I need to explain to him how to do it right".  Curiosity [See NVC, also [3] the 3rd story] can take you to a place of trying to understand what is going on and take you away from the place of the stories we tell ourselves.[10]

IV. Stonewalling
This is a physiological state of going silent.  It is used when you are being lectured (for example) and you go silent, possibly start thinking about everything else while you wait for someone to finish.  It's like holding your breath when you go underwater, waiting for it to pass.  If you are doing this what you need to do is take a break from whatever is going on and do something different, for example go for a walk and calm down.
There's a classic joke: they asked a 110-year-old why he lived so long, and he said, "Every time I got into an argument with my wife, I went for a walk.  I went on a lot of walks in my life."
Because this is a physiological state, it's easy to fix, so long as you remember to pay attention to your internal state [see NVC on what is most alive in you, and 11 on what that looks like in practice].

5. How to win friends and influence people

I always recommend this book to people starting the journey because it's a great place to start.  These days I have better models but when I didn't know anything this was a place to begin.  Most of my models are now more complicated applications of the ideas initially presented.  You still need weak models before replacing them with more complicated ones which are more accurate.
The principles and (in brackets) what has superseded them for me:

1. Don't criticize, condemn or complain. (There are places and methods to do this.  Criticism can be done as in [1], from a place of safety, or as in [4], from a teacher/mentor/growth mindset.  Definitely don't do it from a place of criticism.  Condemnation is more about [10] and inherent traits; progress doesn't usually happen when we use inherent traits.  From Saul Alinsky's Rules for Radicals - don't complain unless you have the right answer: "I have a problem and you have to figure out how to fix it for me" is not a good way to get your problem solved.)
2. Give honest, sincere appreciation. (so long as you are doing this out of the goodness of your heart good.  If you are using it for manipulation you can just not bother.  NVC supersedes this.  By keeping track of what is most alive in you, you can do better than this)
3. Arouse in the other person an eager want. (Work out what people want, and work out how to get both your needs met - superseded by NVC.)
4. Become genuinely interested in other people. (depends what for.  Don't bother if you don't want to.  That would not be genuine.  You need to find the genuine interest inside yourself first.)
5. Smile. (um.  Hard to disagree with but a default smiling state is a good one to cultivate - from [7] physiological states are linked two ways.  Smiling will make you happy just as being happy will make you smile)
6. Remember that a person's name is to that person the most important sound in any language. (I don't know about most important, but I would say that anyone can remember names with practice.)

7. Be a good listener. Encourage others to talk about themselves. (NVC - pay attention to what is most alive in you when you do. Make sure you know about the spectrum of )
8. Talk in terms of the other person's interest. (Sure why not.  Sales are a lot easier when you are selling what people want. See [15] and NVC to supersede how and why this works)
9. Make the other person feel important - and do so sincerely. (I guess?  I don't do this actively.)
10 The only way to get the best of an argument is to avoid it. ([9] if you are in an argument something already went wrong)
11. Show respect for the other person's opinions. Never say, "You're wrong." (NVC, instead of saying no, say what gets in the way.  "here is evidence that says otherwise" can be better than "durr WRONGGG" but I have seen people use "you are wrong" perfectly fine.)
12. If you are wrong, admit it quickly and emphatically. (Hard to disagree with; holding onto grudges and guilty feelings is not useful.  [4] Gottman talks about defensiveness: avoid defensiveness and first acknowledge the fact that someone feels you are at fault.  It will satisfy the psychological need arising in an offended person [14].)
13. Begin in a friendly way. (as opposed to what?  Sure I guess.)
14. Get the other person saying, "Yes, yes" immediately. (Yes ladders are important and valuable.  You see bits of this creeping into Gottman [4], NVC [6], The game [13] and other practices but no one as yet explains it as well as I would like.  The game probably has the best commentary on it, short of business books that escape my memory right now)
15. Let the other person do a great deal of the talking. (not really important who talks so long as you are on the same page and in agreement.  If you want someone else to do the emotional labour [15] for you, then you can let them.  If you want to do it for them you can.  Implications of EL are not yet clear to me in full.  Some places it will be good to do EL for people, other places they need to do it for themselves to feel ownership of the problems and solutions)
16. Let the other person feel that the idea is his or hers. (Sure, I guess.  A good idea is its own champion.  Ideas that are obviously better will win out.  You can't make a turd beat a diamond, but you can employ tricks to polish certain diamonds over others.  This technique is battling over little bits.  Can be useful, but I would not rely on it alone.)
17. Try honestly to see things from the other person's point of view. (NVC [6] and EL [15] should help do that better.  Imagining that you are that person, in a way that is hard to impart in words because it's about having the experience of being that other person and not "just thinking about it", needs a longer description and is an effective technique.)
18. Be sympathetic with the other person's ideas and desires. (NVC supercedes.  Everyone has basic feelings and needs that you can understand, like the need for safety)
19. Appeal to the nobler motives. (giving people a reputation to live up to is a valuable technique that I would say only works for qualified people - but does not work so well if you put pressure on people who are less skilled.  Probably relates to the things going through our head at the time - see also book - the inner game of tennis, NVC, judgement model)
20. Dramatize your ideas. (I don't know?  Try it.  It could work.  will not work by virtue of it being a good model of things, might work by luck/breaking people out of their habits)
21. Throw down a challenge. (Can work if people are willing to rise to a challenge; can work against you and create cognitive dissonance if people are not willing.  Need more information to make it work.)
22. Begin with praise and honest appreciation. (Don't give people a shit sandwich - slices of compliments surrounding shit.  That's not respectful of them.  Instead using [1] navigate to a place of safety to talk about things)
23. Call attention to people's mistakes indirectly. (There are correct and incorrect ways to do this.  You can be passive-aggressive about it.  I don't see a problem with being blunt - in private, in safe conversations [1] - about what is going on.)
24. Talk about your own mistakes before criticizing the other person. (don't yammer on, but it can help to connect you and them and the problem.  NVC would be better than just this)
25. Ask questions instead of giving direct orders. (socratic method, can be a drain, need more advanced skills and [15] EL to know if this is appropriate )
26. Let the other person save face. (I agree with this, but [15] EL might describe it better.)
27. Praise the slightest and every improvement. Be "lavish in your praise." (NVC disagrees, praise only what is relevant, true and valid.  Be a teacher [4] but deliver praise when praise is due.)
28. Give the other person a fine reputation to live up to. (This is 19/26 again.  I agree with it.  I could use it more)
29. Use encouragement. Make the fault seem easy to correct. (agree, solve the "problem" for someone else, make it easy to move forward)
30. Make the other person happy about doing the thing you suggest. (NVC gives a better model of doing what other people want, "with the joy of a small child feeding a hungry duck")

* Giving people a positive reputation to live up to.  "I trust that you won't forget my birthday again".  Don't be silly with this, "I have confidence that you will give me a million dollars" will not actually yield you a million dollars unless you have reason to believe that will work.

6. NVC - Nonviolent Communication

I can't yet do justice to NVC but I am putting together the pieces.  Best to watch the youtube talk in the title link but here are some short points.  Also this helps -
a. Concrete descriptions -
In agreement with Gottman, be concrete and specific.  The objective test of whether a description is concrete is whether it can be followed by an anonymous person to produce the same experience.  "You are a lazy person" vs. "you are sitting on the couch".
b. Acknowledge feelings -
People have huge psychological needs to be heard and understood.  Anyone can fulfill that need.
c. Connect that to a need
See the NVC video.
d.  Making a request
See NVC video.
e. Saying no by passing your goals forward
Instead of saying no, consider what gets in the way of you saying yes, and say that instead.  Keep in mind vulnerability [16].  This also allows people to plan around your future intentions.  If someone asks you to buy a new car and you say, "no, I plan to save money towards buying a house", they can choose to be mindful of that in the future and act accordingly (not offering you a different car for sale next week).
f. Connect with what is most alive in you right now
See video for best description.

7. Emotional intelligence

There is a two way path between physiological states and emotional states.

Try these:
a. Hold a pencil/pen in your mouth and go back and read the joke about the old man [4]. (expect to find it funnier than you did the first time)
b. furrow your brow while reading the first paragraph of this page again (expect to either feel confused or the cognitive dissonance version if you know it very well - "I know this too well")
The two way path means that you can feel better about emotional pain by taking a paracetamol, but more specifically, if you take a break from a situation and come back to it the emotions might have improved.  This can include getting a glass of water, going for a walk, getting some fresh air.  And for more complicated decisions - sleeping on it (among other things).

Everyone can train emotional intelligence; it just takes practice.  This includes holding an understanding of your own states, as well as being able to notice emotional states in other people.

I had an ex who had particularly visible physiological states, it was a very valuable experience to me to see the state changes and it really trained my guessing mind to be able to notice changes.  These days I can usually see when things change but I can't always pick the emotion that has come up.  This is where NVC and curiosity become valuable (also circling).

EI is particularly important when it is particularly deficient.  In the book it talks about anger as a state that (to an untrained person) can cause a reaction before someone knows that they were angry.  Make sure to fix that first before moving to higher levels of emotional management.

8. model of arguments

(see also NVC)

If you view disagreements or misunderstandings as a Venn diagram of what you know and what the other person knows, you have full rights to comment on anything you know, but only limited rights to comment on what the other person knows.  Instead you can comment on the information they have given you: "you said 'X'; I know Y about what you said".  To say "X is wrong" is not going to yield progress.  Instead, acknowledge that they described 'X' and that their description does not make sense to you, or leaves you feeling confused [6].

9. The argument started earlier

From Gavin: "If I ever find myself in a position of saying - well officer, let me explain what happened...", Something already went wrong well and truly before now.
When you start the journey you will start having "aha" moments about where arguments start.  As you get more experience, you realise the argument started well and truly earlier than you first realised.  When you get really good at it, you can stop and say [6] "I am confused" well and truly before a yelling match.

10. The stories we tell ourselves

NVC-based; judgement model.  A lot of people think in stories.  Related -

Their entire existence is the story and narrative they tell about themselves (see also Jordan Peterson - maps of meaning).  The constant narrative about how "the world hates me" is going to give you a particular world experience compared to the constant narrative, "I am a lucky person".  You see this in gamblers who are searching for "the prevailing wind" or "winning streaks".

You also see this in social pressure: when people get fixated on "what will people think of me?", the social pressure does not even have to be there to cause the thoughts and actions that actual social pressure would.
Several models of thinking advocate removing the storytelling in your head to relieve the psychological pain.  See the books Search Inside Yourself, NVC, and Gateless Gatecrashers, and some information in the Persistent Non-Symbolic Experience article.

I am not sure what the best practice is, but mindfulness seems to help, since these thoughts are all theoretical; grounding yourself in the concrete [6a] and observing those thoughts seems to alleviate the anxiety they can cause.  This can explain a lot of people's actions (they are telling themselves a particular story in their head).

11. Polling your internal states
[related to 6, NVC]. Any time you are disconnected from what is going on, try asking yourself the internal question "what is going on?" to connect with what is most alive in you right now.  It might be a feeling of boredom; it could be anything.  But if you don't have a good, strong connection with what is presently happening, you have a chance to fix it.  (See also the book The Charisma Myth.)

12. circling (The circling handbook)

Circling [built on 6, NVC] is a practice of living in the current and present experience.  You can focus on another person or on yourself, perpetually answering the question "what is most alive in you right now?" and sharing the answer with other people.

Some examples include:
  • I am feeling nervous sharing this experience.
  • I just closed my eyes and put my head back trying to think of a good example.
  • I am distracted by the sound of birds behind me.
  • I can feel air going past my nostrils as I think about this question.

The creators of circling find it a very connecting experience to either share what is going on inside you, or to guess at what is going on inside someone else and ask whether the guess is accurate.  You can also alternate: each sharing in turn, or each guessing about the other in turn.

I find it valuable because everyone can understand present experience, and get a glimpse of your current experience in the process of sharing experience with you.  This method can also work as a form of [15] and [7].

13. The game

(From the book The Game) This concept receives equal part condemnation and praise from various parties.

The basic concept is that life is a game; specifically, social interactions are a game that you can try out, iterate on, and repeat until success.  The book follows the journey of a pickup artist as he generally disregards other people's agency and works out how to get what he wants (regularly bedding people) by practicing certain methods of interaction and iterating until he sees a lot of success.

I see a lot of this concept at Kegan stage 3 [18]: everything is about the social world, and the only thing that matters is social relationships.

Most of the condemnation comes from the failure of this model to treat other people as human, worthy of moral weight and thought, rather than as means to your own ends.  Even if you don't like dehumanising people, the book can still teach you a lot about social interaction and about practicing towards incremental improvement.

If you feel uncomfortable with pickup, you should examine that belief closely; it probably has to do with feeling uncomfortable with people using manipulation to pursue sex.  That's fine.  There is a lot to learn about social interaction and social systems before you turn into "literally the devil" for knowing about it.  There are also social goals other than sex that you can pursue.

If you are cautious about turning into a jerk, you are probably never going to get close to actions that paint you as one, because your filters will stop you.  It's the people who have no filter on their actions who might want to be careful; herein lie dark arts and being a jerk.  And while no one will stop you, no one will really enjoy your presence either if you are a jerk.

The biggest problem I have with game methodology is that we all play a one-shot version, with high stakes for failure.  That means some of the iteration, the having to fail while you learn how not to be terrible, will permanently damage your reputation.  There is no perfect "retry"; a reputation will follow you basically to the ends of the earth and back.  As much as game will teach you some things, the other models in this list have better information for you and will take you further.

14. What an apology must do (from Aaron Lazare, M.D., On Apology)

1. A valid acknowledgement of the offence that makes clear who the offender is and who is the offended. The offender must clearly and completely acknowledge the offence.
2. An effective explanation, which shows an offence was neither intentional nor personal, and is unlikely to recur.
3. Expressions of remorse, shame, and humility, which show that the offender recognises the suffering of the offended.
4. A reparation of some kind, in the form of a real or symbolic compensation for the offender’s transgression.
An effective apology must also satisfy at least one of seven psychological needs of an offended person.
1. The restoration of dignity in the offended person.
2. The affirmation that both parties have shared values and agree that the harm committed was wrong.
3. Validation that the victim was not responsible for the offense.
4. The assurance that the offended party is safe from a repeat offense.
5. Reparative justice, which occurs when the offended sees the offending party suffer through some type of punishment.
6. Reparation, when the victim receives some form of compensation for his pain.
7. A dialogue that allows the offended parties to express their feelings toward the offenders and even grieve over their losses.

These are not my notes from the book, but they are particularly valuable when trying to construct an understanding of apologising and making up for misdeeds.  I don't have them memorised, but I know that when I need to make a serious apology I can look them up.  They fit quite well with [6], but are specific to apology rather than to all interactions.

15. Emotional labour

A relatively new concept.  This is roughly the ability to:
I. Model someone else's emotional state
II. Get it right
III. Act on their emotional state

For example:
I. I notice my partner's eyes are droopy and they do not appear to be concentrating very well.  They are rubbing their eyes and checking their watch a lot.
II. I suspect they are sleepy.
III. I make them a coffee, or I offer to make them one.  (As a downgraded form, I mention they look tired and ask if that is the case.)

From Erratio:

Emotional labour is essentially a name for a managerial role in a relationship. This takes on a few different concrete forms.

The first is management of the household, appointments, shopping, and other assorted tasks that are generally shared across couples and/or housemates. Sweeping a floor or cooking dinner is not emotional labour, but being the person who makes sure that those things are accomplished is. It doesn't matter whether you get the floor swept by doing it yourself, asking your partner to do it, firing up a Roomba, or hiring a cleaning service; what matters is that you are taking on responsibility for making sure the task is done. This is why people who say that they would be happy to help with the housework if you would just tell them what needs doing are being a lot less helpful than they think. They're taking the physical labour component of the task but explicitly sticking the other person with the emotional labour component.

The second is taking responsibility for the likes, dislikes, feelings, wants and needs of other people who you are in a relationship with (and to be clear, it doesn't have to be a romantic relationship). Stereotypical scenarios that are covered by this kind of emotional labour include: the hysterical girlfriend who demands that her boyfriend drop everything he's doing to comfort her, the husband who comes home tense and moody after a long day at the office and expects to be asked how his day went and listened to and have validating noises made at him, noticing that the other person in a conversation is uncomfortable and steering the conversation to a more pleasant topic without having to be asked, helping a confused friend talk through their feelings about a potential or former partner, reminding your spouse that it's so-and-so's birthday and that so-and-so would appreciate being contacted, remembering birthdays and anniversaries and holidays and contacting people and saying or doing the right things on each of those dates.

This overlaps with [7].  Commentary on this concept suggests that it's a habit women get into more than men.  Mothers are good at paying attention to their kids and their needs (as the major caregiver from early on), and stemming from this, wives also take care of their husbands.  While it would not be fair to generalise about all wives, I would concede that these are habits people get into and that are sometimes directed by society.

I am not sure of the overall value of this model but it's clear that it has some implications about how people organise themselves - for better or worse.

16. Vulnerability - Brené Brown
In order to form close connections with people, a certain level of vulnerability is necessary.  You need to share about yourself in order to give people something to connect to, and in the other direction, people need to be vulnerable to some degree in order to connect with you.  If you make sure to be open and encouraging and not judgemental, you will enable people to open up to you and connect with you.
Sometimes being vulnerable will get you hurt; you need to be aware of that and not shut down future experiences (continue to be open with people).  I see this particularly in people who "take time" to get over relationships.  Being vulnerable is a skill that can be practiced.  Vulnerability replaced a lot of my ideas about [13 The Game], and, combined with [15] and [12], would have given me a lot of ideas about how to connect with people.  (I have not read her books, but I expect them to be useful.)

17. More Than Two (book)

This is commonly known as the polyamory bible.  It doesn't have to be read as a polyamory book, but in the world of polyamory, emotional intelligence and the ability to communicate are the bread and butter of everyday interactions.  If you are trying to juggle two or three relationships and you don't know how to talk about hard things, or how to handle difficult feelings and experiences, you might as well quit now.

Reading about these skills, and about the insights polyamorous people have learnt, is probably valuable to anyone.

18. Kegan stages of development

Other people have summarised this model better than I can.  I won't do it justice, but to be brief: there are a number of stages we pass through as we grow from very small to more mature.  They include the basic kid level, where we only notice inputs and outputs.  Shortly after, when we are sad "the whole world is sad", because we are the whole world.  Eventually we grow out of that and recognise other humans and that they have agency.  Around the teenage years we end up caring a lot about what other people think of us; classic teenagers are scared of social pressure and say things like "I would die if she saw me in this outfit" (hyperbolic, but with a bit of serious concern present).  Eventually we grow out of that and into systems thinking (Libertarian, Socialist, and other tribes), and later move above tribalism into more nuanced positions.

It's hard to describe, and you are better off reading the theory to get a better idea.  I find the model limited in application, but I admit I need to read more about it to get my head around it.

I have a lot more books on the topic to read, but I am publishing this list because I feel like I have a good handle on the whole "how people work" and "how relationships work" thing.  It's rare that anyone does anything that surprises me (socially) any more.  In fact, I am getting so good at it that I sometimes trust my intuition [11] more than what people say.

When something does not make sense, I know what question to ask [6] to get answers.  Often enough people won't answer the first time; this can mean they don't feel safe [1] enough to be vulnerable [16].  That's okay.  It means it's my job to get them to a comfortable enough place to open up, if I want to get to the answers.

I particularly like NVC, Gottman, EL, EI, and Vulnerability, and find myself using all of them fortnightly.  Most of these represent a book or more of educational material.  Don't assume you know them well enough to dismiss them if you have not read the books.  If you feel you know a model and already employ it, it's probably not necessary to look into it further; but if you are ready to dismiss any of these models because they "sound bad" or "don't work", I would encourage you to do your homework and understand them inside and out before you reject them.

The more models I find, the more I find them converging on describing reality.  Less and less often can I say "this is completely new to me", and more and more often, "oh, that's just like [6] and [7]".

Meta: this is around 6,000 words and took about a day (~12 hours) to write.  I did it in one sitting because everything was already in my head.  I am surprised I could sit still for this long.  (I took breaks for food and a nap, but most of the day was spent at my desk.)

Originally posted on my blog:

Cross posted to Medium:

Becoming stronger together

14 b4yes 11 July 2017 01:00PM

I want people to go forth, but also to return.  Or maybe even to go forth and stay simultaneously, because this is the Internet and we can get away with that sort of thing; I've learned some interesting things on Less Wrong, lately, and if continuing motivation over years is any sort of problem, talking to others (or even seeing that others are also trying) does often help.

But at any rate, if I have affected you at all, then I hope you will go forth and confront challenges, and achieve somewhere beyond your armchair, and create new Art; and then, remembering whence you came, radio back to tell others what you learned.

Eliezer Yudkowsky, Rationality: From AI to Zombies

If you want to go fast, go alone. If you want to go far, go together.

African proverb (possibly just made up)

About a year ago, a secret rationalist group was founded. This is a report of what the group did during that year.

The Purpose

“Rationality, once seen, cannot be unseen,” are words that resonate with all of us. Having glimpsed the general shape of the thing, we feel like we no longer have a choice. I mean, of course we still have the option to think and act in stupid ways, and we probably do it a lot more than we would be comfortable admitting! We just no longer have the option to do it knowingly without feeling stupid about it. We can stray from the way, but we cannot pretend anymore that it does not exist. And we strongly feel that more is possible, both in our private lives and for society in general.

Less Wrong is the website and the community that brought us together. Rationalist meetups are a great place to find smart, interesting, and nice people; awesome people to spend your time with. But feeling good was not enough for us; we also wanted to become stronger. We wanted to live awesome lives, not just to have an awesome afternoon once in a while. But many participants seemed to be there only to enjoy the debate. Or perhaps they were too busy doing important things in their lives. We wanted to achieve something together; not just as individual aspiring rationalists, but as a rationalist group. To make peer pressure a positive force in our lives; to overcome akrasia and become more productive, to provide each other feedback and to hold each other accountable, to support each other. To win, both individually and together.

The Group

We are not super secret really; some people may recognize us by reading this article. (If you are one of them, please keep it to yourself.) We just do not want to be unnecessarily public. We know who we are and what we do, and we are doing it to win at life; trying to impress random people online could easily become a distraction, a lost purpose. (This article, of course, is an exception.) This is not supposed to be about specific individuals, but an inspiration for you.

We started as a group of about ten members, but for various reasons some people soon stopped participating; seven members remained. We feel that the current number is probably optimal for our group dynamic (see Parkinson's law), and we are not recruiting new members. We have a rule “what happens in the group, stays in the group”, which allows us to be more open to each other. We seem to fit together quite well, personality-wise. We desire to protect the status quo, because it seems to work for us.

But we would be happy to see other groups like ours, and to cooperate with them. If you want to have a similar kind of experience, we suggest starting your own group. Being in contact with other rationalists, and holding each other accountable, seems to benefit people a lot. CFAR also tries to keep their alumni in regular contact after the rationality workshops, and some have reported this as a huge added value.

To paint a bit more specific picture of us, here is some summary data:

  • Our ages are between 20 and 40, mostly in the middle of the interval.
  • Most of us, but not all, are men.
  • Most of us, but not all, are childless.
  • All of us are of majority ethnicity.
  • Most of us speak the majority language as our first language.
  • All of us are atheists; most of us come from atheist families.
  • Most of us have middle-class family background.
  • Most of us are, or were at some moment, software developers.

I guess this is more or less what you could have expected, if you are already familiar with the rationalist community.

We share many core values, but have some different perspectives, which adds value and confronts groupthink. We have entrepreneurs, employees, students, and unemployed bums; the ratio changes quite often. It is probably the combination of all of us having a good sense of epistemology, but different upbringing, education and professions, that makes supporting each other and giving advice more effective (i.e. beyond the usual benefits of the outside view); there have been plenty of situations which were trivial for one, but not for the other.

Some of us knew each other for years before starting the group, even before the local Less Wrong meetups. Some of us met the others at the meetups. And finally, some of us talked to some other members for the first time after joining the group. It is surprising how well we fit, considering that we didn’t apply any membership filter (although we were prepared to); people probably filtered themselves by their own interest, or a lack thereof, to join this kind of a group, specifically with the productivity and accountability requirements.

We live in different cities. About once a month we meet in person, typically before or after the local Less Wrong meetup, and spend the weekend together. We walk around the city and debate random stuff in the evening. In the morning, we have a “round table” where each of us summarises what they did during the previous month and what they plan to do during the following month, about 20 minutes per person. That takes a lot of time, and you have to be careful not to go off-topic too often.

In between meetups, we have a Slack team that we use daily. Various channels for different topics; the most important one is a “daily log”, where members can write briefly what they did during the day, and optionally what they are planning to do. In addition to providing extra visibility and accountability, it helps us feel like we are together, despite the geographical distances.

Besides mutual accountability, we are also fans of various forms of self-tracking. We share tips about tools and techniques, and show each other our data. Journaling, time tracking, exercise logging, step counting, finance tracking...

Even before starting the group, most of us were interested in various productivity systems: Getting Things Done, PJ Eby; one of us even wrote and sold their own productivity software.

We do not share a specific plan or goal, besides “winning” in general. Everyone follows their own plan. Everything is voluntary; there are no obligations nor punishments. Still, some convergent goals have emerged.

Also, good habits seem to be contagious, at least in our group. If a single person was doing some useful thing consistently, eventually the majority of the group seems to pick it up, whether it is related to productivity, exercise, diet, or finance.


All of us exercise regularly. Now it seems like obviously the right thing to do. Exercise improves your health and stamina, including mental stamina. For example, the best chess players exercise a lot, because it helps them stay focused and keep thinking for a long time. Exercise increases your expected lifespan, which should be especially important for transhumanists, because it increases your chances to survive until the Singularity. Exercise also makes you more attractive, creating a halo effect that brings many other benefits.

If you don’t consider these benefits worth at least 2 hours of your time a week, we find it difficult to consider you a rational person who takes their ideas seriously. Yes, even if you are busy doing important things; the physical and mental stamina gained from exercising is a multiplier to whatever you are doing in the rest of your time.

Most of us lift weights (see e.g. StrongLifts 5×5, Alan Thrall); some of us even have a power rack and/or treadmill desk at home. Others exercise using their body weight (see Convict Conditioning). Exercising at home saves time, and in long term also money. Muscle mass correlates with longevity, in addition to the effect of exercise itself; and having more muscle allows you to eat more food. Speaking of which...


Most of us are, mostly or completely, vegetarian or vegan. Ignoring the ethical aspects and focusing only on health benefits, there is a lot of nutrition research summarized in the book How Not to Die and its accompanying website. The short version is that a whole-food vegan diet seems to work best, but you really should look into the details. (Not all vegan food is automatically healthy; there is also vegan junk food. It is important to eat a lot of unprocessed vegetables, fruit, nuts, flax seeds, broccoli, and beans. Read the book, seriously. Or download the Daily Dozen app.) We often share tasty recipes when we meet.

We also helped each other research food supplements, and actually find the best and cheapest sources. Most of us take extra B12 to supplement the vegan diet, creatine monohydrate, vitamin D3, and some of us also use Omega3, broccoli sprouts, and a couple of other things that are generally aimed at health and longevity.


We strategize and brainstorm career decisions, or just debug office politics. Most of us are software developers. This year, one member spent nine months learning how to program (using Codecademy, Codewars, and freeCodeCamp at the beginning, and reading tutorials and documentation later); as a result, their income more than doubled, and they got a job they can do fully remotely.

Recently we started researching cryptocurrencies and investing in them. Some of us started doing P2P lending.

Personal life

Many of us are polyamorous. We openly discuss sex and body image issues in the group. We generally feel comfortable sharing this information with each other; women say they do not feel the typical chilling effects.


Different members report different benefits from their membership in the group. Some quotes:

“During the first half of the year, my life was more or less the same. I was already very productive before the group, so I kept the same habits, but benefited from sharing research. Recently, my life changed more noticeably. I started training myself to think of more high-leverage moves (inspired by a talk on self-hypnosis). This changed my asset allocation, and my short-term career plans. I realize more and more that I am very much monkey see, monkey do.”

“Before stumbling over the local Less Wrong meetup, I had been longing (and looking) for people who shared, or even just understood, my interest and enthusiasm for global, long-term, and meta thinking (what I now know to be epistemic rationality). After the initial thrill of the discovery had worn off however, I soon felt another type of dissonance creeping up on me: "Wait, didn't we agree that this was ultimately about winning? Where is the second, instrumental half of rationality, that was supposedly part of the package?" Well, it turned out that the solution to erasing this lingering dissatisfaction was to be found in yet a smaller subgroup.

So, like receiving a signal free of interference for the first time, I finally feel like I'm in a "place" where I can truly belong, i.e. a tribe, or at least a precursor to one, because I believe that things hold the potential to be way more awesome still, and that just time alone may already be enough to take us there.

On a practical level, the speed of adoption of healthy habits is truly remarkable. I've always been able to generally stick to any goals and commitments I've settled on, however the process of convergence is just so much faster and easier when you can rely on the judgment of other epistemically trustworthy people. Going at full speed is orders of magnitudes easier when multiple people illuminate the path (i.e. figure out what is truly worth it), while simultaneously sharing the burdens (of research, efficient implementation, trial-and-error, etc.)”

“Now I'm on a whole-food vegan diet and I exercise 2 times a week, and I also improved in introspection and solving my life problems. But most importantly, the group provides me companionship and emotional support; for example, starting a new career is a lot easier in the presence of a group where reinventing yourself is the norm.”

“It usually takes grit and willpower to change if you do it alone; on the other hand, I think it's fairly effortless if you're simply aligning your behavior with a preexisting strong group norm. I used to eat garbage, smoke weed, and have no direction in life. Now I lift weights, eat ~healthy, and I learned programming well enough to land a great job.

The group provides existential mooring; it is a homebase out of which I can explore life. I don't think I'm completely un-lost, but instead of being alone in the middle of a jungle, I'm at a friendly village in the middle of a jungle.”

“I was already weightlifting and eating vegan, but got motivated to get more into raw and whole foods. I get confronted more with math, programming and finance, and can broaden my horizon. Sharing daily tasks in Slack helps me to reflect about my priorities. I already could discuss many current career and personal challenges with the whole group or individuals.”

“I started exercising regularly, and despite remaining an omnivore I eat much more fresh vegetables now than before. People keep telling me that my body shape improved a lot during this year. Other habits did not stick (yet).”

“Finding a tribe of sane people in an insane world was a big deal for me, now I feel more self-assured and less alone. Our tribe has helped me to improve my habits—some more than others (for example, it has inspired me to buy a power-rack for my living room and start weightlifting daily, instead of going to the gym). The friendly bragging we do among our group is our way of celebrating success and inspires me to keep going and growing.”


Despite having met each other thanks to Less Wrong, most of us do not read it anymore, because our impression is that “Less Wrong is dead”. We do read Slate Star Codex.

From other rationalist blogs, we really liked the article about Ra, and we discussed it a lot.

The proposal of a Dragon Army evoked mixed reactions. On one hand, we approve of rationalists living closer to each other, and we want to encourage fellow rationalists to try it. On the other hand, we don’t like the idea of living in a command hierarchy; we are adults, and we all have our own projects. Our preferred model would be living close to each other; optimally in the same apartment building with some shared communal space, but also with a completely self-contained unit for each of us. So far our shared living happened mostly by chance, but it always worked out very well.

Jordan Peterson and his Self-Authoring Suite is very popular with about half of the group.

What next?

Well, we are obviously going to continue doing what we are doing now, hopefully even better than before, because it works for us.

You, dear reader, if you feel serious about becoming stronger and winning at life, but are not yet a member of a productive rationalist group, are encouraged to join one or start one. Geographical distances are annoying, but Slack helps you overcome the intervals between meetups. Talking to other rationalists can be a lot of fun, but accountability can make the difference between productivity and mere talking. Remember: “If this is your first night at fight club, you have to fight!”

Even seemingly small things, such as exercising or adding some fiber to your diet, accumulate over time and can increase your quality of life a lot. The most important habit is the meta-habit of creating and maintaining good habits. And it is always easier when your tribe is doing the same thing.

Any questions? It may take some time for our hive mind to generate an answer, and in case of too many or too complex questions we may have to prioritize. Don’t feel shy, though. We care about helping others.


(This account was created for the purpose of making this post, and after a week or two it will stop being used. It may be resurrected after another year, or maybe not. Please do not send private messages; they will most likely be ignored.)

In praise of fake frameworks

14 Valentine 11 July 2017 02:12AM

Related to: Bucket errors, Categorizing Has Consequences, Fallacies of Compression

Followup to: Gears in Understanding

I use a lot of fake frameworks — that is, ways of seeing the world that are probably or obviously wrong in some important way.

I think this is an important skill. There are obvious pitfalls, but I think the advantages are more than worth it. In fact, I think the "pitfalls" can even sometimes be epistemically useful.

Here I want to share why. This is for two reasons:

  • I think fake framework use is a wonderful skill. I want it represented more in rationality in practice. Or, I want to know where I'm missing something, and Less Wrong is a great place for that.

  • I'm building toward something. This is actually a continuation of Gears in Understanding, although I imagine it won't be at all clear here how. I need a suite of tools in order to describe something. Talking about fake frameworks is a good way to demo tool #2.

With that, let's get started.

continue reading »

Prediction should be a sport

13 chaosmage 10 August 2017 07:55AM

So, I've been thinking about prediction markets and why they aren't really catching on as much as I think they should.

My suspicion is that (besides Robin Hanson's signaling explanation, and the amount of work it takes to get to the large numbers of predictors where the quality of results becomes interesting) the basic problem with prediction markets is that they look and feel like gambling. Or at best like the stock market, which for the vast majority of people is no less distasteful.

Only a small minority of people are neither disgusted by nor terrified of gambling. Prediction markets right now are restricted to this small minority.

Poker used to have the same problem.

But over the last few decades, Poker players have established that Poker is (also) a sport. They kept repeating that winning isn't purely a matter of luck, they acquired the various trappings of tournaments and leagues, and they developed a culture of admiration for the most skillful players that pays in prestige rather than only money and makes it customary for everyone involved to show their names and faces. For Poker, this has worked really well. There are many more Poker players, more really smart people are deciding to get into Poker, and I assume the art of the game has probably improved as well.

So we should consider re-framing prediction the same way.

The calibration game already does this to a degree, but sport needs competition, so results need to be comparable, so everyone needs to make predictions on the same events. You'd need something like standard cards of events that players place their predictions on.

Here's a fantasy of what it could look like.

  • Late in the year, a prediction tournament starts with the publication of a list of events in the coming year. Everybody is invited to enter the tournament (and maybe pay a small participation fee) by the end of the year, for a chance to be among the best predictors and win fame and prizes.
  • Everyone who enters plays the calibration game on the same list of events. All predictions are made public as soon as the submission period is over and the new year begins. Lots of discussion of each event's distribution of predictions.
  • Over the course of the year, events on the list happen or fail to happen. This allows for continually updated scores, a leaderboard and lots of blogging/journalistic opportunities.
  • Near the end of the year, as the leaderboard turns into a shortlist of potential winners, tension mounts. Conveniently, this is also when the next tournament starts.
  • At New Year's, the winner is crowned (and I'm open to having that happen literally) at a big celebration, which is also the end of the submission period for the next tournament and the revelation of what everyone is predicting for the next round. This is a big event that happens to be on a holiday, when more people have time for big events.
Of course this isn't intended to replace existing prediction markets. It is an addition to those, a fun and social thing with lots of PR potential and many opportunities to promote rationality. It should attract people to prediction who are not attracted to prediction markets. And it could be prototyped pretty cheaply, and developed further if it is as much fun as I think it would be.
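The scoring side of such a tournament really could be prototyped cheaply. Here is a minimal sketch using the Brier score, a standard proper scoring rule for probabilistic predictions (lower is better); the player names, events, and probabilities are invented for illustration, and a real tournament would of course score against its shared published event list.

```python
# Minimal sketch of calibration-tournament scoring via the Brier score.
# Players, events, and probabilities below are hypothetical examples.

def brier_score(prediction: float, outcome: bool) -> float:
    """Squared error between the stated probability and what happened (0 = perfect)."""
    return (prediction - (1.0 if outcome else 0.0)) ** 2

def leaderboard(predictions: dict, outcomes: dict) -> list:
    """Average Brier score per player over all resolved events, best first."""
    scores = {}
    for player, probs in predictions.items():
        resolved = [(probs[event], happened) for event, happened in outcomes.items()]
        scores[player] = sum(brier_score(p, o) for p, o in resolved) / len(resolved)
    return sorted(scores.items(), key=lambda item: item[1])

# Everyone predicts the same shared event list at the start of the year.
predictions = {
    "alice": {"event_a": 0.9, "event_b": 0.3},
    "bob":   {"event_a": 0.6, "event_b": 0.6},
}
# Events resolved so far this year.
outcomes = {"event_a": True, "event_b": False}

for player, score in leaderboard(predictions, outcomes):
    print(f"{player}: {score:.3f}")
```

Because each event updates every player's average as soon as it resolves, this kind of scoring naturally supports the continually updated mid-year leaderboard described above.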

Idea for LessWrong: Video Tutoring

13 adamzerner 23 June 2017 09:40PM

Update 7/9/17: I propose that Learners individually reach out to Teachers, and set up meetings. It seems like the most practical way of getting started, but I am not sure and am definitely open to other ideas. Other notes:

  • There seems to be agreement that the best way to do this is individualized guidance, rather than lectures and curriculums. E.g. the Teacher "debugging" the Learner. Assuming that approach, it is probably best for the number of Learners in a session to be small.
  • Consider that it may make sense for you to act as a Teacher, even if you don't have a super strong grasp of the topic. For example, I know a decent amount about computer science, but don't have a super strong grasp of it. Still, I believe it would be valuable for me to teach computer science to others. I can definitely offer value to people with no CS background. And for people who do have a CS background, there could be value in us taking turns teaching/learning, and debugging each other.
  • We may not be perfect at this in the beginning, but let's dive in and see what we can do! I think it'd be a good idea to comment on this post with what did/didn't work for you, so we as a group could learn and improve.
  • I pinned this to #productivity on the LessWrongers Slack group.

Update 6/28/17: With 14 people currently interested, it does seem that there's enough to get started. However, I'd like to give it a bit more time and see how much overall interest we get.

Idea: we coordinate to teach each other things via video chat.

  • We (mostly) all like learning. Whether it be for fun, curiosity, or as a stepping stone towards our goals.
  • My intuition is that there's a lot of us who also enjoy teaching. I do, personally.
  • Enjoyment aside, teaching is a good way of solidifying one's knowledge.
  • Perhaps there would be positive unintended consequences. Eg. socially.
  • Why video? a) I assume that medium is better for education than simply text. b) Social and motivational benefits, maybe. A downside to video is that some may find it intimidating.
  • It may be nice to evolve this into a group project where we iteratively figure out how to do a really good job teaching certain topics.
  • I see the main value in personalization, as opposed to passive lectures/seminars. Those already exist, and are plentiful for most topics. What isn't easily accessible is personalization. With that said, I figure it'd make sense to have about 5 learners per teacher.

So, this seems like something that would be mutually beneficial. To get started, we'd need:

  1. A place to do this. No problem: there are Hangouts, Skype, etc.
  2. To coordinate topics and times.

Personally, I'm not sure how much I can offer as far as doing the teaching. I worked as a web developer for 1.5 years and have been teaching myself computer science. I could be helpful to those unfamiliar with those fields, but probably not too much help for those already in the field and looking to grow. But I'm interested in learning about lots of things!

Perhaps a good place to start would be to record in a spreadsheet: a) people who want to teach, b) what topics they can teach, and c) who is interested in being a Learner. Getting more specific about who wants to learn what may be overkill, as we all seem to have roughly similar interests. Or maybe it isn't.

If you're interested in being a Learner or a Teacher, please add yourself to this spreadsheet.

Online discussion is better than pre-publication peer review

12 Wei_Dai 05 September 2017 01:25PM

Related: Why Academic Papers Are A Terrible Discussion Forum, Four Layers of Intellectual Conversation

During a recent discussion about (in part) academic peer review, some people defended peer review as necessary in academia, despite its flaws, for time management. Without it, they said, researchers would be overwhelmed by "cranks and incompetents and time-card-punchers" and "semi-serious people post ideas that have already been addressed or refuted in papers already". I replied that on online discussion forums, "it doesn't take a lot of effort to detect cranks and previously addressed ideas". I was prompted by Michael Arc and Stuart Armstrong to elaborate. Here's what I wrote in response:

My experience is with systems like LW. If an article is in my own specialty then I can judge it easily and make comments if it’s interesting, otherwise I look at its votes and other people’s comments to figure out whether it’s something I should pay more attention to. One advantage over peer review is that each specialist can see all the unfiltered work in their own field, and it only takes one person from all the specialists in a field to recognize that a work may be promising, then comment on it and draw others’ attentions. Another advantage is that nobody can make ill-considered comments without suffering personal consequences since everything is public. This seems like an obvious improvement over standard pre-publication peer review, for the purpose of filtering out bad work and focusing attention on promising work, and in practice works reasonably well on LW.

Apparently some people in academia have come to similar conclusions about how peer review is currently done and are trying to reform it in various ways, including switching to post-publication peer review (which seems very similar to what we do on forums like LW). However, it's troubling (in a "civilizational inadequacy" sense) that academia is moving so slowly in that direction, despite the necessary enabling technology having been invented a decade or more ago.

Rational Feed

11 deluks917 09 September 2017 07:48PM

=== Updates:

I have been a little more selective about which articles make it onto the feed. I have not been overly selective, and all of the obviously general-interest rationalist articles still make it.

Unless people object, I am going to try a "weekly feed". The bi-weekly feed is pretty long. I currently post on the SSC reddit and LessWrong. Weekly seems fine for the SSC reddit, but LessWrong is a lower-activity forum. I will see how it goes. Obviously on a weekly feed there will be about half as many recommended articles.

===Highly Recommended Articles:

Object, Subjects and Gender by The Baliocene Apocrypha - "Under modern post-industrial bureaucratized high-tech capitalism, it is less rewarding than ever before to be a subject. Under modern post-industrial bureaucratized high-tech capitalism, it is more rewarding than ever before to be an object. This alone accounts for a lot of the widespread weird stuff going on with gender these days."

Winning Is For Losers by Putanumonit (ribbonfarm) - Zero vs Positive Sum Games. The strong have room to cooperate. Rene Girard's theory of mimetics and competition. College Admissions. Tit for Tat. Spiked dicks in nature. Short and long term strategies in dating. Quirky dating profiles. Honesty on the first date. Beating Moloch with a transhuman God.

Premium Mediocre by Jacob Falkovich - Being 30% wrong is better than being 5% wrong. Consumption: Signaling vs genuine enjoyment. Dating other PM people. Venkat is wrong about impressing parents. He is more wrong, or joking, about cryptocurrencies. Fear of missing out.

Ten New 80000 Hours Articles Aimed At The by 80K Hours (EA forum) - Ten recent articles and descriptions from 80K hours. Over and underpaid jobs relative to their social impact, the most employable skills, learning ML, whether most social programs work and other topics.

Minimizing Motivated Beliefs by Entirely Useless - The tradeoffs between epistemic and instrumental rationality. Yudkowsky's argument that such tradeoffs are either very stupid or don't exist. Issues with Yudkowsky: denial that belief is voluntary, thinking that trading away the truth requires being blind to consequences. Horror victims and transcendent meaning. Interesting things are usually false.


How Do We Get Breasts Out Of Bayes Theorem by Scott Alexander - "But evolutionary psychologists make claims like 'Men have been evolutionarily programmed to like women with big breasts, because those are a sign of fertility.' Forget for a second whether this is politically correct, or cross-culturally replicable, or anything like that. From a neurological point of view, how could this possibly work?"

Predictive Processing And Perceptual Control by Scott Alexander - "predictive processing attributes movement to strong predictions about proprioceptive sensations. Because the brain tries to minimize predictive error, it moves the limbs into the positions needed to produce those sensations, fulfilling its own prophecy." Connections with William Powers' 'Behavior: The Control of Perception', which Scott already reviewed.

Book Review: Surfing Uncertainty by Scott Alexander - Scott finds a real theory of how the brain works. "The key insight: the brain is a multi-layer prediction machine. All neural processing consists of two streams: a bottom-up stream of sense data, and a top-down stream of predictions. These streams interface at each level of processing, comparing themselves to each other and adjusting themselves as necessary."

Links: Exsitement by Scott Alexander - Slatestarcodex links post. A Nootropics survey, gene editing, AI, social norms, Increasing profit margins, politics, and other topics.

Highlights From The Comments On My Irb Nightmare by Scott Alexander - Tons of hilarious IRB stories. A subreddit comment about getting around the IRB. Whether the headaches are largely institutional rather than dictated by government fiat. Comments argue in favor of the IRB and Scott responds.

My IRB Nightmare by Scott Alexander - Scott tries to run a study to test the Beck Depression Inventory. The institutional review board makes this impossible. They not only make tons of capricious demands, they also attempt to undermine the study's scientific validity.

Slippery Slopen Thread by Scott Alexander - Public open thread. The slippery slope to rationalist catgirl. Selected top comments. Update on Trump and crying wolf.

Contra Askell On Moral Offsets by Scott Alexander - Axiology is the study of what’s good. Morality is the study of what the right thing to do is. You can offset axiological effects but you can't offset moral transgressions.


MRE Futures To Not Starve by Robin Hanson - Emergency food sources as a way to mitigate catastrophic risk. The Army's 'Meals Ready to Eat'. Food insurance. Incentives for producers to deliver food in emergencies. Incentives for researchers to find new sources. Sharing information.

Book Reviews: Zoolitude And The Void by Jacob Falkovich - Seven Surrenders, the sequel to 'Too Like the Lightning', mercilessly cuts the bad parts and focuses on the politics, personalities, and philosophy that made TLTL great. The costs of adding too much magic to a setting: don't make the mundane irrelevant. One Hundred Years of Solitude: Shit just happens. Zoo City: Realistic Magic: "The Zoo part is the magic: some people who commit crimes mysteriously acquire an animal familiar and a low-key magical talent." The Mark and the Void: "Technically, there’s no magic in The Mark and the Void. But there’s investment banking, which takes the role of the mysterious force that decides the fate of individuals and nations but remains beyond the ken of mere mortals."

The World As If by Sarah Perry (ribbonfarm) - "This is an account of how magical thinking made us modern." Magical thinking as a confusing of subjective and objective. Useful fictions. Hypothetical thinking. Pre-modern concrete thinking and categorization schemes relative to modern abstract ones. As if thinking. Logic and magic.

To Save The World Make Sure To Go Beyond Academia by Kaj Sotala - Academic research often fails to achieve real change. Lots of economic research concerns the optimal size of a carbon tax but we currently lack any carbon tax. Academic research on x-risk from nuclear winter doesn't change the motivations of politicians very much.

Introducing Mindlevelup The Book by mindlevelup - MLU compiled and edited their work from 2017 into a 30K word, 150 page book. Most of the material appeared on the blog but some of it is new and the pre-existing posts have been edited for clarity.

Expanding Premium Mediocrity by Zvi Moshowitz - "This is (much of) what I think Rao is trying to say in the second section of his post, the part about Maya but before Molly and Max, translated into DWATV-speak. Proceed if and only if you want that."

Simple Affection And Deep Truth by Particular Virtue - "Simple Affection is treating someone like a child: they will forget about bad things, as long as you give them something good to think about instead. Deep Truth is treating someone like an elephant: they never forget, and they forgive only with deep deliberation."

Are People Innately Good by Sailor Vulcan - SV got into two arguments that went badly. One was on all lives matter. The other occurred when SV tried to defend Glen of Intentional Insights on the SSC discord. Terminal values aren't consistent. SV was abused as a child.

Metapost September 5th by sam[]zdat - Plans for the blog. Next series will be on epistemology and the 'internal' side of nihilism. Revised introduction. Sam will probably write fiction. Site reorganization. History section. Current reading list. Patreon.

Minimizing Motivated Beliefs by Entirely Useless - The tradeoffs between epistemic and instrumental rationality. Yudkowsky's argument that such tradeoffs are either very stupid or don't exist. Issues with Yudkowsky: denial that belief is voluntary, thinking that trading away the truth requires being blind to consequences. Horror victims and transcendent meaning. Interesting things are usually false.

Exploring Premium Mediocrity by Zvi Moshowitz - Defining premium mediocre. Easy and hard mode related to Rao's theories of losers, sociopaths and heroes. The Real Thing. A 2x2 ribbonfarm style graph. Restaurants.

Tegmarks Book Of Foom by Robin Hanson - Tegmark's recent book basically described Yudkowsky's intelligence explosion. Tegmark is worried the singularity might be soon and we need to have figured out big philosophical issues by then. Hanson thinks Tegmark overestimates the generality of intelligence. AI weapons and regulations.

The Doomsday Argument In Anthropic Decision Theory by Stuart Armstrong (lesswrong) - "In Anthropic Decision Theory (ADT), behaviors that resemble the Self Sampling Assumption (SSA) derive from average utilitarian preferences. However, SSA implies the doomsday argument. This post shows there is a natural doomsday-like behavior for average utilitarian agents within ADT."

Forager Vs Farmer Elaborated by Robin Hanson - Early humans collapsed Machiavellian dynamics down to a reverse-dominance-hierarchy. Group norm enforcement and its failure modes. Safety leads to collective play and art, threat leads to a return to Machiavellianism and suspicion. Individuals greatly differ as to what level of threat causes the switch, often for self-serving reasons. Left vs right. "The first and primary political question is how much to try to resolve issues via a big talky collective, or to let smaller groups decide for themselves."

Critiquing Other Peoples Plans Politely by Katja Grace - Three failure modes: The attack, The polite sidestep, The inadvertent personal question. A plan to avoid these issues: debate beliefs, not actions.

Gleanings From Double Crux On The Craft Is Not The Community by Sarah Constantin - Results from Sarah's public double crux. Sarah initially did not think the rationalist intellectual project was worth preserving. She wants to see results, even though she concedes that formal results can be very difficult to get. What is the value of introspection and 'navel gazing'?

Intrinsic Properties And Eliezers Metaethics by Tyrrell_McAllister (lesswrong) - Intuitions of intrinsicness. Is goodness intrinsic? Seeing intrinsicness in simulations. Back to goodness.

Winning Is For Losers by Putanumonit (ribbonfarm) - Zero vs Positive Sum Games. The strong have room to cooperate. Rene Girard's theory of mimetics and competition. College Admissions. Tit for Tat. Spiked dicks in nature. Short and long term strategies in dating. Quirky dating profiles. Honesty on the first date. Beating Moloch with a transhuman God.

Dangers At Dilettante Point by Everything Studies - It's relatively easy to know a little about a lot of topics. But it's dangerous to find yourself playing the social role of the knowledgeable person too often. The percentage of people with a given level of knowledge goes to zero quickly.

Entrenchment Happens by Robin Hanson - Many systems degrade, collapse and are replaced. However, other systems, even somewhat arbitrary ones, are very stable over time. Many current systems in programming, language and law are likely to remain in the future.

Premium Mediocre by Jacob Falkovich - Being 30% wrong is better than being 5% wrong. Consumption: Signaling vs genuine enjoyment. Dating other PM people. Venkat is wrong about impressing parents. He is more wrong, or joking, about cryptocurrencies. Fear of missing out.


Ideological Engineering And Social Control by Geoffrey Miller (EA forum) - China is trying hard to develop advanced AI. A major goal is to use AI to monitor both physical space and social media. Suppressing wrong-think doesn't require radically advanced AI.

Incorrigibility In Cirl by The MIRI Blog - Paper. Goal: incentivize a value learning system to follow shutdown instructions. Demonstration that some assumptions are not stable with respect to model mis-specification (e.g. programmer error). Weaker sets of assumptions: difficulties and simple strategies.

Nothing Wrong With Ai Weapons by kbog (EA forum) - Death by AI is no more intrinsically bad than death by conventional weapons. Some consequentialist issues the author addresses: civilian deaths, AI arms race, vulnerability to hacking.


Can Outsourcing Improve Liberias Schools Preliminary RCT Results by Innovations for Poverty - "Last summer, the Liberian government delegated management of 93 public elementary schools to eight different private contractors. After one year, public schools managed by private operators raised student learning by 60 percent compared to standard public schools. But costs were high, performance varied across operators, and contracts authorized the largest operator to push excess pupils and under-performing teachers into other government schools."

Ten New 80000 Hours Articles Aimed At The by 80K Hours (EA forum) - Ten recent articles and descriptions from 80K hours. Over and underpaid jobs relative to their social impact, the most employable skills, learning ML, whether most social programs work and other topics.

Is Ea Growing Some Ea Growth Metrics For 2017 by Peter Hurford (EA forum) - Activity metrics for EA website, donations data, additional Facebook data, commentary that EA seems to be growing but there is substantial uncertainty.

Ea Survey 2017 Series Cause Area Preferences by Tee (EA forum) - Top Cause Area, near-top areas, areas which should not have EA resources, cause area correlated with demographics, donations by cause area.

Looking At How Superforecasting Might Improve AI Predictions by Will Pearson (EA forum) - Good Judgement Project: What they did, results, relevance. Lessons: Focus on concrete issues, focus on AI with no intelligence augmentation, learn a diverse range of subjects, breakdown the open questions, publicly update.

Why Were Allocating Discretionary Funds To The Deworm The World Initiative by The GiveWell Blog - "Why Deworm the World has a pressing funding need. The benefits and risks of granting discretionary funds to Deworm the World today. Why we’re continuing to recommend that donors give 100% of their donation to AMF."

Ea Survey 2017 Series Community Demographics by Katie Gertsch (EA forum) - Some results: Mostly young and male, slight increase in female participation. Highest concentration cities. Atheism/Agnostic rate fell from 87% to 80%. Increase in the proportion of EA who see EA as a duty or opportunity as opposed to an obligation.

Effective Altruism Survey 2017 Distribution And by Ellen McGeoch and Peter Hurford (EA forum) - EA 2017 survey results are in. Details about distribution and data analysis techniques. Discussion of whether the sample is representative of EA and its subpopulations.

Six Tips Disaster Relief Giving by The GiveWell Blog - Practical advice for effective disaster relief charity. Give Cash, give to proven effective charities and allow charities significant freedom in how they use your donation.

===Politics and Economics:

Harvard Admit Legacy Students by Marginal Revolution - Demand for Ivy league admissions far outstrips supply. The main constraint is that the Ivy League depends on donations. One way to scale up, while maintaining high donation rates, is to increase legacy admissions. Teaching quality is unlikely to suffer, qualified students are easy to find.

Object, Subjects and Gender by The Baliocene Apocrypha - "Under modern post-industrial bureaucratized high-tech capitalism, it is less rewarding than ever before to be a subject. Under modern post-industrial bureaucratized high-tech capitalism, it is more rewarding than ever before to be an object. This alone accounts for a lot of the widespread weird stuff going on with gender these days."

Links 11 by Artir - Psychology, Politics, Economics, Philosophy, Other. Several links related to the Google memo.

Unpopular Ideas About Crime And Punishment by Julia Galef - Thirteen opinions on prison abolition, the death penalty, corporal punishment, rehabilitation, redistribution and more.

Intangible Investment and Monopoly Profits by Marginal Revolution - "Intangible capital used to be below 30 percent of the S&P 500 in the 70s; now it is about 84 percent." Seven implications about profit, monopoly, spillover, etc.

What You Cant Say To A Sympathetic Ear by Katja Grace - Sharing socially unacceptable views with your friends is putting them in a bad situation, regardless of whether they agree with those ideas. If they don't punish you, society will hold them complicit. Socially condemning views is worse than commonly thought: "To successfully condemn a view socially is to lock that view in place with a coordination problem."

Ai Bias Doesnt Mean What Journalists Want You To Think It Means by Chris Stucchio And Lisa Mahapatra (Jacobite) - What is data science and AI? What is bias? How do we identify bias? The fallout of the author's algorithm. Predicting Creditworthiness. Understanding Language. Predicting Criminal Behavior. Journalists and Wishful Thinking.

Four Decades of the Middle East by Bryan Caplan - "Almost all of the Middle East's disasters over the past four decades can be credibly traced back to a single highly specific major event: the Iranian Revolution. Let me chronicle the tragic trail of dominoes."

The Thresher by sam[]zdat - "Still, if what makes 'modernity' modernity is partially in technology, then the Uruk Machine will be updated and whirring at unfathomable speeds, the thresher to Gilgamesh’s sacred club."

The Uruk Machine by sam[]zdat - Sam's fundamental framework: Seeing like a State, The Great Transformation, The True Believer, The Culture of Narcissism.


Into The Gray Zone by Bayesian Investor - Book review. A modest fraction of people diagnosed as being in a persistent vegetative state have locked-in syndrome. People misjudge when they would want to die. Alzheimer's.


What You Need To Know About Climate Change by Waking Up with Sam Harris - "How the climate is changing and how we know that human behavior is the primary cause. They discuss why small changes in temperature matter so much, the threats of sea-level rise and desertification, the best and worst case scenarios, the Paris Climate Agreement, the politics surrounding climate science."

Dan Rather by The Ezra Klein Show - "Rather and I discuss the Trump presidency and what it means for the Republican Party's future, our fractured media landscape, and Rather's own evolving career in media."

Caplan Family by Bryan Caplan - "For the last two years, I homeschooled my elder sons, Aidan and Tristan, rather than send them to traditional middle school. Now they've been returned to traditional high school. We decided to mark our last day with a father-son/teacher-student podcast on how we homeschooled, why we homeschooled, and what we achieved in homeschool."

Rob Reich On Foundations by EconTalk - "The power and effectiveness of foundations--large collections of wealth typically created and funded by a wealthy donor. Is such a plutocratic institution consistent with democracy? Reich discusses the history of foundations in the United States and the costs and benefits of foundation expenditures in the present."

Jesse Singal On The Problems With Implicit Bias Tests by Rational Speaking - "The IAT has been massively overhyped, and that in fact there's little evidence that it's measuring real-life bias. Jesse and Julia discuss how to interpret the IAT, why it became so popular, and why it's still likely that implicit bias is real, even if the IAT isn't capturing it."

Emotionally Charged Discussion by The Bayesian Conspiracy - Conversations where one party thinks the other side's position is stupid/evil/etc. Debate vs truth seeking. Julia Galef's lists of unpopular ideas. Agenty Duck's thoughts on introspection. Double Crux.

The Future Of Intelligence by Waking Up with Sam Harris - "Max Tegmark. His new book Life 3.0: Being Human in the Age of Artificial Intelligence. They talk about the nature of intelligence, the risks of superhuman AI, a nonbiological definition of life, the substrate independence of minds, the relevance and irrelevance of consciousness for the future of AI, near-term breakthroughs in AI."

Benedict Evans by EconTalk - "Two important trends for the future of personal travel--the increasing number of electric cars and a world of autonomous vehicles. Evans talks about how these two trends are likely to continue and the implications for the economy, urban design, and how we live."

The Life Of A Quant Trader by 80,000 Hours - What quant traders do. Compensation. Is quant trading harmful? Who is a good fit and how to break into quant trading. Work environment and motivation. Variety of available positions.

Rational Feed

11 deluks917 27 August 2017 03:49AM

===Highly Recommended Articles:

What Is Rationalist Berkeleys Community Culture by Zvi Moshowitz - The original rationalist community mission was to save the world, not to be nice to each other. Sarah recently suggested the latter is currently the actual goal. Zvi reinterprets this as sounding an alarm. The rationalists should not become just another Berkeley community of bohemians and weirdos.

Cthugha The Living Flame by Exploring Egregores - Rationalists as worshippers of an Eldritch Star God. Valuing knowledge and ideas above all else. Bonobos and transhumanists. Yudkowsky's argument about distributed vs concentrated intellect. The AI box experiment. Nerds as the true extraverts. "What do you think the singularity will actually look like?" The site maps eight other Eldritch Gods to different philosophical dispositions.

Internet Explorers Not Exploiters by Nostalgebraist - Exploit vs explore tradeoffs. Attention spans. How long should you try a math problem before you give up? Exploring new options can be uncomfortable since it might lead nowhere. Addictive games and the internet. Academic research.

Diversity And Team Performance What The Research Says by Eukaryote - Opens with several links about diversity and inclusion in EA. The pros and cons of different types of diversity in terms of group cohesion and information processing. Practical ways to minimize the costs of diversity and magnify the benefits. Lots of references.

The Market Power Story by Noah Smith - Many issues in the American economy are blamed on the increasing market power of a small number of firms. Analysis: Monopolistic competition. Profits. Market Concentration. Output restriction. Three updates. Lots of citations and references to papers.

The Anti Slip Slope by samuelthefifth (Status 451) - An analogy between workplace noise and workplace sexism. How efforts to stamp out 'workplace noise' can get out of control.

Dota 2 by Open Ai - Open AI codes a 1v1 Dota 2 bot that defeated top players. The bot's actions per minute were comparable to many humans. The bot learned the game from scratch by self-play, and does not use imitation learning or tree search. The game involves hidden information and the bot's strategies were complicated.

Stop Caring So Much About Technical Problems by Particular Virtue - Links to an article describing what attributes actually get developers jobs (other than technical skill). Caring about making great products is much more desirable than caring about technical problems. Developer interviews are highly random. Experience matters a lot. Enterprise programmers are disliked. Practical advice.


Partial Credit by Scott Alexander - Blotting out the Sun. Short story.

Moral Reflective Equilibrium and the Absurdity Principle by SlateStarScratchpad - A long discussion about the nature of morality. The absurdity heuristic. Reflective equilibrium of moral values. The feedback loop between intuition and logic.

Advertising by SlateStarScratchpad - Nostalgebraist muses about advertising. Scott briefly explains how advertising works on SSC.

Fear And Loathing At Effective Altruism Global 2017 by Scott Alexander - EA Global was well run and impressive. The deep weirdness of EA. The fundamental goodness of effective altruists. The yoke is light and everyone is welcome.

Community History by Scott (r/SSC) - Scott answers: "What happened to Lesswrong? When (and more importantly why) did the spread out to other blogs happen?"

Threado Quia Absurdum by Scott Alexander - Bi-Weekly public open thread. Recommended comments on: how organizations change over time, self-driving car progress, gun laws in the Czech Republic, why comments are closed on some posts here. Scott may be choosing an SSC moderator.

Brief Cautionary Notes On Branded Combination Nootropics by Scott Alexander - Many 'Xbrain' pills contain ineffectively low doses of ingredients. Nootropics, like many drugs, affect people differently; you need to isolate which nootropics work for you. Drug interactions are very poorly understood, even for well-studied drugs.

The Lizard People Of Alpha Draconis 1 Decided To Build An Ansible by Scott Alexander - Faster than light communication via negative average preference utilitarianism.

Sparta by SlateStarScratchpad - A historian claims that Sparta's military renown was developed during a period when Sparta's actual military ability was declining. Scott disagrees and cites sources showing that the earliest records all claim Sparta was very powerful.


Internal Dialogue About End Of World by Sailor Vulcan - Short Story. Keep living, maybe we will win the lottery.

My Tedtedx Talks by Robin Hanson - Ted talks by Robin about his books "The Age of Em" and "The Elephant in the Brain". Talks are short (~12 minutes).

Paranoia Testing by Elo - Experiments to test if you have paranoia. Costs. Notes and some graphs.

Theres Always Subtext by Robin Hanson - Mostly a quote about subtext in film.

Play In Hard Mode by Zvi Moshowitz - "Hard mode is harder. The reason to Play in Hard Mode is because it is the only known way to become stronger, and to defend against Goodhart’s Law." Zvi revisits the eleven examples from 'easy mode' and shows how to approach them from a hard mode perspective.

Play In Easy Mode by Zvi Moshowitz - Eleven examples of 'selling out' and taking the path of least resistance. Interestingly in several examples taking the easy path is quite defensible.

Emotional Labour by Elo - "I wanted to save you the effort of thinking about the thing and so I decided not to tell/ask you before it was resolved." VS "I wanted to not have to withhold a thing from you so I told you as soon as it was bothering me so that I didn’t have to lie/cheat/withhold/deceive you even if I thought it was in your best interest"

Paths Forward On Berkeley Culture Discussion by Zvi Moshowitz - Follow up to Zvi's post on the Berkeley rationalist community. A long sketch of the arguments Zvi would make and the article he would write if he had time to respond in depth.

How Social Is Reason by Robin Hanson - Humans alone have a logical reasoning module. 'Logical Fallacies' evolved because they are adaptive for persuasion. Unschooled populations often cannot solve logical problems. Epistemic learned helplessness. Impressive complex arguments are preferred over simple ones.

Cthugha The Living Flame by Exploring Egregores - Rationalists as worshippers of an Eldritch Star God. Valuing knowledge and ideas above all else. Bonobos and transhumanists. Yudkowsky's argument about distributed vs concentrated intellect. The AI box experiment. Nerds as the true extraverts. "What do you think the singularity will actually look like?" The site maps eight other Eldritch Gods to different philosophical dispositions.

Self Fulfilling Prophecy by Entirely Useless - The author analyzes various edge cases about intention and choice. They discuss how to modify their theories and whether they are on the right track.

Decisions As Predictions by Entirely Useless - "Consider the hypothesis that both intention and choice consist basically in beliefs: intention would consist in the belief that one will in fact obtain a certain end, or at least that one will come as close to it as possible. Choice would consist in the belief that one will take, or that one is currently taking, a certain temporally immediate action for the sake of such an end."

Bathtime by The Unit of Caring - Bath time play with a baby. Things are compelling when they have the right balance of surprise and predictability.

Internet Explorers Not Exploiters by Nostalgebraist - Exploit vs explore tradeoffs. Attention spans. How long should you try a math problem before you give up? Exploring new options can be uncomfortable since it might lead nowhere. Addictive games and the internet. Academic research.

Embracing Metamodernism by Gordon (Map and Territory) - "Metamodernism believes in reconstructing things that have been deconstructed with a view toward reestablishing hope and optimism in the midst of a period (the postmodern period) marked by irony, cynicism, and despair."

Why Ethnicity Ideology by Robin Hanson - "The more life decisions a feature influences, the more those who share this feature may plausibly share desired policies, policies that their coalition could advocate. So you might expect political coalitions to be mostly based on individual features that are very useful for predicting individual behavior. But you’d be wrong."

A Village Is Better Than Group House by Particular Virtue - More private space. Non-shared legal ownership. More people means much more social space and stability.

A Flaw In The Way Smart People Think About Robots And Job Loss by Tom Bartleby - Considering jobs one at a time causes smart people to think no one will lose their job from automation. However small incremental advances reduce the number of needed workers. A history of secretaries. Personal experience of saving time via programming.

More Brain Lies by Aceso Under Glass - "But sometimes it helps to take the gap between is and ought as a sign of how high your standards are, rather than how bad you are at a thing."

Ems In Walkaway by Robin Hanson - A review of the science fiction book 'Walkaway' which features brain emulation. Robin describes what he finds realistic and unrealistic.

Take My Job by Jacob Falkovich - "I want to tell you about the job I’m leaving, why you should think about applying for it, and what it has taught me in the last four years about company culture, diversity, and the makings of a good workplace." Cool jobs have work environments. Keep company identity small if you want real diversity.

The Parliamentary Model As The Correct Ethical Model by Kaj Sotala - An explanation of how the 'parliamentary' model of morality resolves uncertainty around which model of morality is correct. Why the parliamentary model is itself the correct model.

The Problem With Prestige by Robin Hanson - Small fields such as academic disciplines often use prestige to reward people. A mathematical model of how effort is allocated to maximize prestige. Why prestige doesn't scale and what is under-incentivized by prestige.

How I Think About Free Speech Four Categories by Julia Galef - Descriptions of the following categories: No consequences, Individual social consequences, Official social consequences, Legal consequences. Disagreements about categories.

Choices Are Really Bad by Zvi Moshowitz - Exercising willpower is a cost in the short term. Decision fatigue. Reasons why people, including you, WILL choose wrong. People justify their choices. Choices create blame and responsibility. Choices cause paralysis. Choices are communication. Choices require justification. Choices let people defect and destroy cooperation.

What Is Rationalist Berkeley's Community Culture by Zvi Moshowitz - The original rationalist community mission was to save the world, not to be nice to each other. Sarah recently suggested the latter is currently the actual goal. Zvi reinterprets this as sounding an alarm. The rationalists should not become just another Berkeley community of bohemians and weirdos.

Repairing Anxiety Using Internal And External Locus Of Control Models by Elo - Two variable model. Locus of Control: Internal or External. Feeling: Good or bad. The four combinations. Moving diagonally, for example from internal-bad to external-good.

Social Insight When A Lie Is Not A Lie When A by Bound_Up (lesswrong) - If you merely speak the truth as you see it, then you will be misunderstood. Example of saying you are an atheist. Many people are incapable of understanding your real arguments.

Multiverse Wide Cooperation Via Correlated Decision Making by The Foundational Research Institute - "If we care about what happens in civilizations located elsewhere in the multiverse, we can superrationally cooperate with some of their inhabitants. That is, if we take their values into account, this makes it more likely that they do the same for us. In this paper, I attempt to assess the practical implications of this idea"

Questions Are Not Just For Asking by Malcolm Ocean (ribbonfarm) - Hazards of asking questions. Hold your questions. Reveal your questions. Un-ask your questions. Question your questions. Using questions to organize attention. Letting the question ask you; becoming the answer.

Happiness Is Not Coherent Concept by Particular Virtue - A social science concept is 'real' if and only if it represents reality well and you have ruled out alternatives. "If a thing can be measured several different ways, and a causal factor can push one in a direction but not the other, then you start to worry that the thing is not actually one thing, but several things." Why you should care that happiness isn't a single thing.

The Craft Is Not The Community by Sarah Constantin (Otium) - The Berkeley Rationalists are building a true community: Sharehouses, Plans for an unschooling center, etc. However many rationalist companies/projects have failed. Sarah doesn't think it makes sense to tackle 'external facing' projects as a community. Tesla Motors and MIT aren't run as community projects, they are run meritocratically. Lots of analysis on the meaning of community and what makes organizations effective. Personal.


More On Dota 2 by Open Ai - Timeline of the DOTA-bot's rapid improvement. Bot exploits. Physical infrastructure. What needs to be done to play 5v5.

Openai Bots Were Defeated At Least 50 Times - People could play against the OpenAI Dota bot. Several people found strategies to beat the bot. One of the human victors explains their strategy.

Dota 2 by Open Ai - Open AI codes a 1v1 Dota-2 bot that defeated top players. The bot's actions per minute were comparable to many humans. The bot learned the game from scratch by self-play, and does not use imitation learning or tree search. The game involves hidden information and the bot's strategies were complicated.


Things I Have Gotten Wrong by Aceso Under Glass - Mistaken evaluations: Animal Charity Evaluators, Raising for Effective Giving, Charity Science, Tostan.

We Have No Idea If There Are Cost Effective Interventions Into Wild Animal Suffering by Ozy - Many people are confident there are no effective ways to reduce wild animal suffering; Ozy disagrees. Ecosystems are complex but we aren't completely uncertain. Wild Animal Suffering is a tiny field staffed by non-experts working part time.

Altruism Is Incomplete by Zvi Moshowitz - "I worry many in EA are looking at life like a game where giving money to charity is how the world scores victory points." Controls in psychology are often motivated by researcher bias. Amazon is the world's most effective charity. Life is about getting things done, often for selfish reasons. Veganism. Zvi doesn't believe the official EA party line.

Let Them Decide by GiveDirectly - Eight media articles about Basic Income, Give Directly, Cash Transfer and Development Aid.

High Time For Drug Policy Reform Part 4 by MichaelPlant (EA forum) - "This is the fourth of four posts on DPR. In this part I provide some simplistic but illustrative cost-effectiveness estimates comparing an imaginary campaign for DPR against current interventions for poverty, physical health and mental health; I also consider what EAs should do next."

High Time For Drug Policy Reform Policy by MichaelPlant (EA forum) - "This is the third of four posts on DPR. In this part I look at what a better approach to drug policy might be and then discuss how neglected and tractable this problem is as cause area of EAs to work on."

Drug Policy Reform 1 by MichaelPlant (EA forum) - 9300 Words. Six Mechanisms for drug reform to do good: Fighting mental illness. Reducing pain. Improving public health. Reducing crime, violence, corruption and instability (including international scale). Raising revenue for governments. Recreational use. Five major objections and the Author's response.

===Politics and Economics:

Diversity And Team Performance What The Research Says by Eukaryote - Opens with several links about diversity and inclusion in EA. The pros and cons of different types of diversity in terms of group cohesion and information processing. Practical ways to minimize the costs of diversity and magnify the benefits. Lots of references.

Unpopular Ideas About Social Norms by Julia Galef - Twenty-four ideas, many with references explaining the ideas. As an example: "Overall it would be a good thing to have a totally transparent society with no privacy"

Unpopular Ideas About Political And Economic Systems by Julia Galef - Twenty-three ideas, many with references explaining the ideas. As an example "Many people have a moral duty not to vote".

The Market Power Story by Noah Smith - Many issues in the American economy are blamed on the increasing market power of a small number of firms. Analysis: Monopolistic competition. Profits. Market Concentration. Output restriction. Three updates. Lots of citations and references to papers.

The Courage To Stand Up And Do The Wrong Thing by Tom Bartleby - According to Supreme Court Justice Black, applying Brown vs Board of Education to DC schools was an unprincipled but correct decision. Have principles. Don't follow them over a cliff. Acknowledge deviations. Charlottesville. Cloudflare suspends service to the Daily Stormer.

Many Topics by Scott Aaronson - Misc Topics: HTTPS / Kurtz / eclipse / Charlottesville / Blum / P vs. NP

The Muted Signal Hypothesis Of Online Outrage by Kaj Sotala - "People want to feel respected, loved, appreciated, etc. When we interact physically, you can easily experience subtle forms of these feelings... Online, most of these messages are gone: a thousand people might read your message, but if nobody reacts to it, then you don’t get any signal indicating that you were seen... . So if you want to consistently feel anything, you may need to ramp up the intensity of the signals."

Marching Markups by Robin Hanson - "Holding real productivity constant, if firms move up their demand curves to sell less at higher prices, then total output, and measured GDP, get smaller. Their numerical estimates suggest that, correcting for this effect, there has been no decline in US productivity growth since 1965. That’s a pretty big deal."

Greater Gender Parity Economics Suggests Reform Tenure Systems by Marginal Revolution - Biological clocks conflict with the tenure system timeline. Tyler recommends a much more flexible system with a variety of roles. The leaders in the economics profession have been 'punching down' at an infamous anonymous economics forum.

Moral Precepts And Suicide Pacts by Perfecting Dated Visions - "To be trusted to remain peaceful, you must be the kind of person who remains peaceful. And to be a peaceful person and earn the trust placed in you, you must be peaceful even when you have every right to fight. It’s the same with tolerance. If you want to shut up your argumentative opponents and vigorously retaliate when your opponents show signs of intolerance, you will not be trusted to be tolerant to others who are tolerant, even those who basically agree with you." The constitution, World War 1, Nazis today.

The Anti Slip Slope by samuelthefifth (Status 451) - An analogy between workplace noise and workplace sexism. How efforts to stamp out 'workplace noise' can get out of control.

Seattle Minimum Wage Study Part 3 Tell Me Why Im Wrong Please by Zvi Moshowitz - Most writers thought the Seattle minimum wage study showed that low wage workers were hurt. Zvi found a fundamental flaw in their analysis. If you correct for rising wages in Seattle, then the study seems to show low wage workers weren't hurt or perhaps benefitted.

Theory Vs Data In Statistics by Noah Smith - Theory heavy vs minimal theory models in Economics. Machine learning as the extreme of a "no model required" paradigm.

Thats Amore by sam[]zdat - Epistocracy, democracy with limits on who can vote. Competency and incompetency and pizza. Politics is the strongest identity. Trading power for the image of power. Morlocks and Eloi. Replication crisis. Google guy. The Left's support for the powerful. Nihilism.

Contra Sadedin Varinsky: The Google Memo Is Still Right Again by Artir - Detailed refutation of two criticisms of the google memo. Lots of long quotations and citation of counter evidence.

Indian Feminism And The Role Of The Environment: Why The Google Memo Is Still Right by Artir - A very detailed cross-country look at female enrollment in CS and various technology fields. A focus on countries where women are well represented in tech (many in Asia). Lots of discussion.

Brief Thoughts On The Google Memo by Julia Galef - "So as far as I can see, there are only two intellectually honest ways to respond to the memo: 1. Acknowledge gender differences may play some role, but point out other flaws in his argument (my preference) 2. Say “This topic is harmful to people and we shouldn’t discuss it” (a little draconian maybe, but at least intellectually honest)"

The Kolmogorov Option by Scott Aaronson - Kolmogorov was a brilliant mathematician as well as a sensitive and kind man. However he cooperated with the Soviets. An option for living in a society where many falsehoods are 'official truth': Build a bubble of truth and wait for the right time to take down the Orthodoxy. Don't charge headfirst and get killed. There are no 'good heretics' in the eyes of the Inquisition.


Can Atheists be Jewish by Brute Reason - Reasons Miri can be an atheist Jew: Judaism is a religion, but being Jewish isn’t necessarily. Belief in god isn’t particularly central in most Jewish communities and practices. Because I fucking said so.

Ten Small Life Improvements by Paul Christiano (lesswrong) - Nine tech tips. Christmas lights all year round.

Extremely Easy Problem by protokol2020 - How much water per second do you need to raise the sea level 6 meters in 100 years.
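A back-of-the-envelope version of that calculation (the ocean surface area figure is my assumption, not from the post, and coastline spread is ignored):

```python
# Rough estimate: water inflow needed to raise sea level 6 m in 100 years.
# Assumes ocean surface area ~3.6e14 m^2 (about 361 million km^2).
OCEAN_AREA_M2 = 3.6e14
RISE_M = 6
SECONDS = 100 * 365.25 * 24 * 3600  # 100 years in seconds

volume_m3 = OCEAN_AREA_M2 * RISE_M       # total water volume required
rate_m3_per_s = volume_m3 / SECONDS      # inflow needed per second

print(f"{rate_m3_per_s:.2e} m^3/s")      # on the order of 7e5 m^3/s
```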

The Premium Mediocre Life Of Maya Millennial by venkat (ribbonfarm) - Venkat - "Yes, ribbonfarm is totally premium mediocre. We are a cut above the new media mediocrityfests that are Vox and Buzzfeed, and we eschew low-class memeing and listicles. But face it: actually enlightened elite blog readers read Tyler Cowen and Slatestarcodex."

Right And Left Folds Primitive Recursion Patterns In Python And Haskell by Eli Bendersky - "In this article I'll present how left and right folds work and how they map to some fundamental recursive patterns. The article starts with Python, which should be (or at least look) familiar to most programmers. It then switches to Haskell for a discussion of more advanced topics like the connection between folding and laziness, as well as monoids."
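As a minimal sketch of the two patterns the article discusses (my illustration, not code from the article), in Python:

```python
from functools import reduce

def foldr(f, acc, xs):
    """Right fold: f(x0, f(x1, ... f(xn, acc)))."""
    for x in reversed(xs):
        acc = f(x, acc)
    return acc

# Left fold via functools.reduce: ((((0 - 1) - 2) - 3) - 4)
left = reduce(lambda acc, x: acc - x, [1, 2, 3, 4], 0)   # -10
# Right fold: (1 - (2 - (3 - (4 - 0))))
right = foldr(lambda x, acc: x - acc, 0, [1, 2, 3, 4])   # -2
print(left, right)
```

Subtraction is non-associative, so the two folds disagree, which makes the difference in grouping visible.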

Meta Contrarian Typography Part 2 by Tom Bartleby - You should use two spaces after your sentences when drafting. Why to use a plaintext editor. Why to write a resume in plaintext. Flexibility is power. Two spaces is much more machine readable.

Stop Caring So Much About Technical Problems by Particular Virtue - Links to an article describing what attributes actually get developers jobs (other than technical skill). Caring about making great products is much more desirable than caring about technical problems. Developer interviews are highly random. Experience matters a lot. Enterprise programmers are disliked. Practical advice.

Trip Sitting Tips And Tricks by AellaGirl - Thirteen practical tips for trip sitting someone on a high dose of acid. Focuses on accepting their experiences, treating them similarly to a small child and keeping yourself safe.

Erisology Of Self And Will Closing Thoughts by Everything Studies - "Here in Part 7 I’ll end with a summary and some thoughts on how to deal with the problems described in the series."


We Are Not Worried Enough About The Next Pandemic by 80,000 Hours - "We spend the first 20 minutes covering his work as a foundation grant-maker, then discuss how bad the pandemic problem is, why it’s probably getting worse, and what can be done about it. In the second half of the interview we go through what you personally could study and where you could work to tackle one of the worst threats facing humanity."

Identity Terror by Waking Up with Sam Harris - "Douglas Murray. Identity politics, the rise of white nationalism, the events in Charlottesville, guilt by association, the sources of western values, the problem of finding meaning in a secular world."

Seth Stephens Davidowitz On What The Internet Can Tell Us by Rational Speaking - "New research gives us into which parts of the USA are more racist, what kinds of strategies reduce racism, whether the internet is making political polarization worse, and the sexual fetishes and insecurities people will only admit to their search engine."

John McWhorter on the Evolution of Language and Words on the Move by EconTalk - "The unplanned ways that English speakers create English, an example of emergent order. Topics discussed include how words get short (but not too short), the demand for vividness in language, and why Shakespeare is so hard to understand."

The Limits Of Persuasion by Waking Up with Sam Harris - "David Pizarro and Tamler Sommers. Free speech on campus, the Scott Adams podcast, the failings of the mainstream media, moral persuasion, moral certainty, the ethics of abortion, Buddhism, the illusion of the self."

Conversation: Comedian Dave Barry by Marginal Revolution - "What makes Florida special, why business writing is so terrible, Eddie Murphy, whether social conservatives can be funny (in public), the weirdness of Peter Pan, how he is so productive, playing guitar with Roger McGuinn, DT, the future of comedy."

Ritual And Spirituality by The Bayesian Conspiracy - Rationalist ritual. Witchcraft. Welcome to Nightvale. Concerts. What makes something ritual? Is rationalist ritual psychologically safe?

Chris Hayes by The Ezra Klein Show - Chris Hayes. Should Trump be removed from office? "Infighting between different factions of the Democratic Party, the signs that congressional Republicans are growing some backbone, and the reports that Trump’s closest aides are conspiring to keep him from doing too much damage to the country."

The Biology Of Good And Evil by Waking Up with Sam Harris - "Robert Sapolsky. His work with baboons, the opposition between reason and emotion, doubt, the evolution of the brain, the civilizing role of the frontal cortex, the illusion of free will, justice and vengeance, brain-machine interface, religion, drugs"

Senator Michael Bennet by The Ezra Klein Show - Senator Michael Bennet. "This is a conversation about why Congress is broken, and what broke it. We discuss money, partisanship, the media, the rules, the leadership, and much more. We talk about what Bennet thinks House of Cards gets right (hint: it’s the sociopathy) and whether President Trump’s antics are creating some hope of institutional renewal."

Book Review: Mathematics for Computer Science (Suggestion for MIRI Research Guide)

11 richard_reitz 22 July 2017 07:26PM

tl;dr: I read Mathematics for Computer Science (MCS) and found it excellent. I sampled Discrete Mathematics and Its Applications (Rosen)—currently recommended in MIRI's research guide—as well as Concrete Mathematics and Discrete Mathematics with Applications (Epp), which appear to be MCS's competition. Based on these partial readings, I found MCS to be the best overall text. I therefore recommend MIRI change the recommendation in its research guide.


MCS is used at MIT for their introductory discrete math course, 6.042, which appears to be taken primarily by second-semester freshmen and sophomores. You can find OpenCourseWare archives from 2010 and 2015, although the book is self-contained; I never had occasion to use them throughout my reading. 

If you liked Computability and Logic (review), currently in the MIRI research guide, you'll like MCS:

MCS is a wonderful book. It's well written. It's rigorous, but does a nice job of motivating the material. It efficiently proves a number of counterintuitive results and then helps you see them as intuitively obvious. Freed from the constraint of printing cost, it contains many diagrams which are generally useful. You can find the pdf here or, if that link breaks, by googling "Mathematics for Computer Science". (See section 21.2 for why this works.)

MCS is regularly updated during the semester. Based on the revision dates given on the cover, I suspect that the authors attempt to update it within a week of the last update during the semester. The current version is 87 pages longer than the 2015 version, suggesting ~40 pages of material is added a year. My favorite thing about the constant updates was that I never needed to double check statements about our current state of knowledge to see if anything had changed since publication.

MCS is licensed under a Creative Commons attribution share-alike license: it is free in the sense of both beer and freedom. I'm a big fan of such copyleft licenses, so I give MIT major props. I've tried to remain unbiased in my review, but halo effect suggests my views on the text might be affected by the text's license: salt accordingly.


The only prerequisite is single-variable calculus. In particular, I noted integration, differentiation, and convergence/infinite sums coming up. That said, I don't remember them appearing in sections that provided a lot of dependencies: with just a first course in algebra, I feel a smart 14-year-old could get through 80–90% of the book, albeit with some help, mostly in places where "do a bunch of algebra" steps are omitted. An extra 4–5 years of practice doing algebraic manipulations makes a difference.

MCS is also an introduction to proofwriting. In my experience, writing mathematical proofs is a skill complex enough to require human feedback to get all the nuances of why something works and why something else doesn't work and why one approach is better than another. If you've never written proofs before and would like a human to give you feedback, please pm me.

Comparison to Other Discrete Math Texts


Rosen

I randomly sampled section 4.3 of Rosen, on primes and greatest common divisors, and was very unimpressed. Rosen states the fundamental theorem of arithmetic without a proof. The next theorem had a proof twice as long and half as elegant as it could have been. The writing was correct but unmotivating and wordy. For instance, Rosen writes "If n is a composite integer", which is redundant, since all composite numbers are integers, so he could have just said "If n is composite".

In the original Course Recommendations for Friendliness Researchers, Louie responded to Rosen's negative reviews:

people taking my recommendations would be geniuses by-and-large and that the harder book would be better in the long-run for the brightest people who studied from it.

Based on the sample I read, Rosen is significantly dumbed-down relative to MCS. Rosen does not prove the fundamental theorem of arithmetic whereas MCS proves it in section 9.4. For the next theorem, Rosen gives an inelegant proof when a much sleeker—but reasonably evident!—proof exists, making it feel like Rosen expected the reader to not be able to follow the sleeker proof. Rosen's use of "composite integer" instead of "composite" seems like he assumes the reader doesn't understand that the only objects one describes as composite are integers; MCS does not contain the string "composite integer".

In the section I read, Rosen has worked examples for finding gcd(24, 36) and gcd(17, 22), something I remember doing when I was 12. It's almost like Rosen was spoon-feeding how to guess the teacher's password for the student to regurgitate on an exam instead of building insight.
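For scale, the computation Rosen devotes worked examples to is a few lines of Euclid's algorithm (my sketch, not code from either text):

```python
def gcd(a, b):
    """Euclid's algorithm: gcd(a, b) = gcd(b, a mod b), until b = 0."""
    while b:
        a, b = b, a % b
    return a

print(gcd(24, 36), gcd(17, 22))  # 12 1
```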

Concrete Mathematics

There are probably individuals who would prefer Concrete Mathematics to MCS. These people are probably into witchcraft.

I explain by way of example. In section 21.1.1, MCS presents a very sleek, but extremely nonobvious, proof of gambler's ruin using a clever argument courtesy of Pascal. In section 21.1.2, MCS gives a proof that doesn't require the reader to be "as ingenuious Pascal [sic]". As an individual who is decidedly not as ingenious as Pascal was, I appreciate this.
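For context, the result being proved: a gambler with stake k, betting fair coin flips until reaching 0 or n, reaches n with probability k/n. A quick Monte Carlo sanity check of that result (my sketch, not MCS's argument):

```python
import random

def ruin_win_prob(k, n, trials=20000):
    """Estimate P(reach n before 0) for a fair random walk starting at k."""
    wins = 0
    for _ in range(trials):
        x = k
        while 0 < x < n:
            x += random.choice((-1, 1))
        wins += (x == n)
    return wins / trials

print(ruin_win_prob(3, 10))  # theory (k/n) predicts ~0.3
```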

More generally, say we want to prove a theorem that looks something like "If A, then B has property C." You start at A and, appealing to the definition of C, show that B has it. There's probably some cleverness involved in doing so, but you start at the obvious place (A), end in the obvious place (B satisfies the definition of C), and don't rely on any crazy, seemingly-unrelated insights. Let's call this sort of proof mundane.

(Note that mundane is far from mechanical. Most of the proofs in Baby Rudin are mundane, but require significant cleverness and work to generate independently.)

There is a virtue in mundane proofs: a smart reader can usually generate them after they read the theorem but before they read its proof. Doing so is beneficial, since proof-generating makes the theorem more memorable. It also gives the reader practice building intuition by playing around with the mathematical objects and helps them improve their proofwriting by comparing their output to a maximally refined proof.

On the end of the spectrum opposing mundane is witchcraft. Proofs that use witchcraft typically have a step where you demonstrate you're as ingenious as Pascal by having a seemingly-unrelated insight that makes everything easier. Notice that, even if you are as ingenious as Pascal, you won't necessarily be able to generate these insights quickly enough to get through the text at any reasonable pace.

For the reasons listed above, I prefer mundane proofs. This isn't to say MCS is devoid of witchcraft: sometimes it's the best or only way of getting a proof. The difference is that MCS uses mundane proofs whenever possible whereas Concrete Mathematics invokes witchcraft left and right. This is why I don't recommend it.

Individuals who are readily as ingenious as Pascal, don't want the skill-building benefits of mundane proofs, or prefer the whimsy of witchcraft may prefer Concrete Mathematics.


Epp

I randomly sampled section 12.2 of Epp and found it somewhat dry but wholly unobjectionable. Unlike Rosen, I felt like Epp was writing for an intelligent human being (though I was reading much further along in the book, so maybe Rosen assumed the reader was still struggling with the idea of proof). Unlike Concrete Mathematics, I detected no witchcraft. However, I felt that Epp had inferior motivation and was written less engagingly. Epp is also not licensed under Creative Commons.


Epp, Rosen, and MCS are all ~1000 pages long, whereas Concrete Mathematics is ~675. To determine what these books covered that might not be in MCS, I looked through their tables of contents for things I didn't recognize. The former three have the same core coverage, although Epp and Rosen go into material you would find in Computability and Logic or Sipser (also part of the research guide), whereas MCS spends more time developing discrete probability. Based on the samples I read, Epp and MCS have about the same density, whereas Rosen spends little time building insight and a lot of time showing how to do really basic, obvious stuff. I would expect Epp and MCS to have roughly the same amount of content covering mostly (but not entirely) the same material, and Rosen to offer a mere shadow of the insight of the other two.

Concrete Mathematics seems to contain a subset of MCS's topics, but from the sections I read, I expect the presentation to be wildly different.


My only substantial complaint about MCS is that, to my knowledge, the source LaTeX is not available. Contrast this with SICP, which has the HTML available; that resulted in a proliferation of versions tailored for different use cases. It'd be nice, for instance, to have a print-friendly version of MCS (perhaps with fewer pages), plus a version that fit nicely onto the small screen of an ereader or mobile device, plus a version with the same aspect ratio as my monitor. This all would be extremely easy to generate given the source. It would also facilitate crowdsourced proofreading: there are more than a few typos, although they don't impede comprehension. At the very least, I wish there were somewhere to submit errata.

Some parts of MCS were notation-heavy. To quote what a professor once wrote on a problem set of mine:

I'm not sure all the notation actually serves the goal of clarifying the argument for the reader. Of course, such notation is sometimes needed. But when it is not needed, it can function as a tool with which to bludgeon the reader…

I found myself referring to Wikipedia's glossary of graph theory terms more than a few times when I was making definitions to put into Anki. Not sure if this is measuring a weak section or a really good glossary or something else.

A Note on Printing

A lot of people like printed copies of their books. One benefit of MCS I've put forward is that it's free (as in beer), so I investigated how much printing would cost.

I checked local print shops and Kinko's online, and was unable to find printing under $60; a typical price was around $70, with the option to burn $85 if I wanted nicer paper. This was more than I had expected, and between ⅓ and ½ (ish) the price of Rosen or Epp.

Personally, I think printing is counterproductive, since the PDF has clickable links.

Final Thoughts

Despite sharing first names, I am not Richard Stallman. I prefer the license on MCS to the license on its competitors, but I wouldn't recommend it unless I thought the text itself was superior. I would recommend baby Rudin (nonfree) over French's Introduction to Real Analysis; Hoffman and Kunze's Linear Algebra (nonfree) over Jim Hefferson's Linear Algebra; and Epp over 2010!MCS. The freer the better, but that consideration is trumped by the quality of the text. When you're spending >100 hours working out of a book that provides foundational knowledge for the rest of your life, ~$150 and a loss of freedom is a price many would pay for better quality.

Eliezer writes:

Tell a real educator about how Earth classes are taught in three-month-sized units, and they would’ve sputtered and asked how you can iterate fast enough to learn how to teach that.

Rosen is in its seventh edition. Epp is in its fourth edition and Concrete Mathematics its second. The earliest copy of MCS I've happened across comes from 2004. Near as I can tell, it is improved every time the authors go through the material with their students, which would put it in its 25th edition.

And you know what? It's just going to keep getting better faster than anything else.


Thank you to Gram Stone for reviewing drafts of this review.

LessWrong Is Not about Forum Software, LessWrong Is about Posts (Or: How to Immanentize the LW 2.0 Eschaton in 2.5 Easy Steps!)

11 enye-word 15 July 2017 09:35PM

[epistemic status: I was going to do a lot of research for this post, but I decided not to, as there are no sources on the internet (I'd have to interview people directly) and I'd rather have this post be imperfect than never exist.]

Many words have been written about how LessWrong is now shit. Opinions vary about how shit exactly it is. I refer you to and for more comments about LessWrong being shit and the LessWrong diaspora being suboptimal.

However, how to make LessWrong stop being shit seems remarkably simple to me. Here are the steps to resurrect it:

1. Get Eliezer: The lifeblood of LessWrong is Eliezer Yudkowsky's writing. If you don't have that, what's the point of being on this website? Currently Eliezer is posting his writings on Facebook, which I consider foolish, for the same reasons I would consider it foolish to house the Mona Lisa in a run-down motel.

2. Get Scott: Once you have Eliezer back, and you sound the alarm that LW is coming back, I'm fairly certain that Scott "Yvain" Alexander will begin posting on LessWrong again. As far as I can tell he's never wanted to have to moderate a comment section, and the growing pains are straining his website at the seams. He's even mused publicly about arbitrarily splitting the Slate Star Codex comment section in two, which is a crazy idea on its own but completely reasonable in the context of (cross)posting to LW. Once you have Yudkowsky and Yvain, you have about 80% of what made LessWrong not shit.

3. Get Gwern: I don't read many of Gwern's posts; I just like having him around. Luckily for us, he never left!

After this is done, everyone else should wander back in, more or less.

Possible objections, with replies:

Objection: Most SSC articles and Yudkowsky essays are not on the subject of rationality and thus for your plan to work LessWrong's focus would have to subtly shift.

Reply: Shift away, then! It's LessWrong 2! We no longer have to be a community dedicated to reading Rationality: From AI to Zombies as it's written in real time; we can now be a community that takes Rationality: From AI to Zombies as a starting point and discusses whatever we find interesting! Thus the demarcation between 1.0 and 2.0!

Objection: People on LessWrong are mean and I do not like them.

Reply: The influx of new readers from the Yudkowsky-Yvain in-migration should make the tone on this website more upbeat and positive. Failing that, I don't know, ban the problem children, I guess. I don't know if it's poor form to declare this but I'd rather have a LessWrong Principate than a LessWrong Ruins. See also:

Objection: I'd prefer, for various reasons, to just let LessWrong die.

Reply: Then kill it with your own hands! Don't let it lie here on the ground, bleeding out! Make a post called "The discussion thread at the end of the universe" that reads "LessWrong is over, piss off to r/SlateStarCodex", disallow new submissions, and be done with it! Let it end with dignity and bring a close to its history for good.

Bi-Weekly Rational Feed

11 deluks917 09 July 2017 07:11PM

===Highly Recommended Articles:

Just Saying What You Mean Is Impossible by Zvi Moshowitz - "Humans are automatically doing lightning fast implicit probabilistic analysis on social information in the background of every moment of their lives." This implies there is no way to divorce the content of your communication from its myriad probabilistic social implications. Different phrasings will just send different implications.

In Defense Of Individualist Culture by Sarah Constantin (Otium) - A description of individualist culture. Criticisms of individualist culture: lacking sympathy, few good defaults. Defenses: It's very hard to change people (psychology research review). A defense of naive personal identity. Traditional culture is fragile. Building a community project is hard in the modern world; prepare for the failure modes. Modernity has big upsides, and some people will make better choices than the traditional rules allow.

My Current Thoughts On Miris Highly Reliable by Daniel Dewey (EA forum) - Report by the Open Phil AI safety lead. A basic description of and case for the MIRI program. Conclusion: 10% credence in MIRI's work being highly useful. Reasons: Hard to apply to early agents, few researchers are excited, other approaches seem more promising.

Conversation With Dario Amodei by Jeff Kaufman - "The research that's most valuable from an AI safety perspective also has substantial value from the perspective of solving problems today". Prioritize work on goals. Transparency and adversarial examples are also important.

Cfar Week 1 by mindlevelup - What working at CFAR is actually like. Less rationality research than anticipated. Communication costs scale quadratically. Organization efficiency and group rationality.

The Ladder Of Interventions by mindlevelup - "This is a hierarchy of techniques to use for in-the-moment situations where you need to “convince” yourself to do something." The author uses these methods in practice.

On Dragon Army by Zvi Moshowitz - Long response to many quotes from "Dragon Army Barracks". Duncan's attitude to criticism. Tyler Durden shouldn't appeal to Duncan. Authoritarian group houses haven't been tried. Rationalists undervalue exploration. Loneliness and doing big things. The pendulum model of social progress. Sticking to commitments even when it's painful. Saving face when you screw up. True Reliability: the Bay is way too unreliable, but Duncan goes too far. Trust and power dynamics. Pragmatic criticism of the charter.

Without Belief In A God But Never Without Belief In A Devil by Lou (sam[]zdat) - The nature of mass movements. The beats and the John Birchers. The taxonomy of the frustrated. Horseshoe theory. The frustrated cannot derive satisfaction from action; something else has to fill the void. Poverty, work and meaning. Mass movements need to sow resentment. Hatred is the strongest unifier. Modernity inevitably causes justified resentment. Tocqueville, Polanyi, Hoffer and Scott's theories. Helpful and unhelpful responses.

On The Effects Of Inequality On Economic Growth by Artir (Nintil) - Most of the article tries to explain and analyze the economic consensus on whether inequality harms growth. A very large number of papers are cited and discussed. A conclusion that the effect is at most small.


Two Kinds Of Caution by Scott Alexander - Sometimes boring technologies (ex container ships) wind up being far more important than flashy tech. However Scott argues that often the flashy tech really is important. There is too much contrarianism and not enough meta-contrarianism. AI risk.

Open Road by Scott Alexander - Bi-weekly public open thread. Some messages from Scott Alexander.

To The Great City by Scott Alexander - Scott's karass is in San Francisco. He is going home.

Open Thread 78 75 by Scott Alexander - Bi-weekly public open thread.

Why Are Transgender People Immune To Optical Illusions by Scott Alexander - Scott's community survey showed, with a huge effect size, that transgender individuals are less susceptible to the spinning mask and dancer illusions. Trans people suffer from dissociative disorders at a high rate. Connections between the two phenomena and NMDA. Commentary on the study methodology.

Contra Otium On Individualism by Scott Alexander (Scratchpad) - Eight-point summary of Sarah's defense of individualism. Scott is terrified the marketplace of ideas doesn't work and his own values aren't memetically fit.

Conversation Deliberately Skirts The Border Of Incomprehensibility by Scott Alexander - Communication is often designed to be confusing so as to preserve plausible deniability.


Rethinking Reality And Rationality by mindlevelup - Productivity is almost a solved problem. Much current rationalist research is very esoteric. Finally grokking effective altruism. Getting people good enough at rationality that they are self correcting. Pedagogy and making research fields legible.

The Power Of Pettiness by Sarah Perry (ribbonfarm) - "These emotions – pettiness and shame – are the engines driving epistemic progress" Four virtues: Loneliness, ignorance, pettiness and overconfidence.

Irrationality is in the Eye of the Beholder by João Eira (Lettuce be Cereal) - Is eating a chocolate croissant on a diet always irrational? Context, hidden motivations and the curse of knowledge.

The Abyss Of Want by AellaGirl - The infinite regress of 'Asking why'. Taking acid and ego death. You can't imagine the experience of death. Coming back to life. Wanting to want things. Humility and fake enlightenment.

Epistemic Laws Of Motion by SquirrelInHell - Newton's three laws re-interpreted in terms of psychology and people's strategies. A worked example using 'physics' to determine if someone will change their mind. Short and clever.

Against Lone Wolf Selfimprovement by cousin_it (lesswrong) - Lone wolf improvement is hard. Too many rationalists attempt it for cultural and historical reasons. It's often better to take a class or find a group.

Fictional Body Language by Eukaryote - Body language in literature is often very extreme compared to real life. Emojis don't easily map to irl body language. A 'random' sample of how emotion is represented in American Gods, Earth and Lirael. Three strategies: explicitly describing feelings vs describing actions vs metaphors.

Bayesian Probability Theory As Extended Logic A by ksvanhorn (lesswrong) - Cox's theorem is often cited to support the claim that Bayesian probability is the only valid fundamental method of plausible reasoning. A simplified guide to Cox's theorem. The author summarizes their own paper, which uses weaker assumptions than Cox's theorem. The author's full paper and a more detailed exposition of Cox's theorem are linked.

Steelmanning The Chinese Room Argument by cousin_it (lesswrong) - A short thought experiment about consciousness and inferring knowledge from behavior.

Ideas On A Spectrum by Elo (BearLamp) - Putting ideas like 'selfishness' on a spectrum. Putting yourself and others on the spectrum. People who give you advice might disagree with you about where you fall on the spectrum. Where do you actually stand?

A Post Em Era Hint by Robin Hanson - In past ages there were pairs of innovations that enabled the emulation age without changing the growth rate. Forager: Reasoning and language. Farmer: Writing and math. Industrial: Computers and Digital Communication. What will the em-age equivalents be?

Zen Koans by Elo (BearLamp) - Connections between koans and rationalist ideas. A large number of koans are included at the end of the post. Audio of the associated meetup is included.

Fermi Paradox Resolved by Tyler Cowen - Link to a presentation. Don't just multiply point estimates. Which Drake parameters are uncertain. The Great filter is probably in the past. Lots of interesting graphs and statistics. Social norms and laws. Religion. Eusocial society.

Developmental Psychology In The Age Of Ems by Gordan (Map and Territory) - Brief intro to the Age of Em. Farmer values. Robin's approach to futurism. Psychological implications of most ems being middle aged. Em conservatism and maturity.

Call To Action by Elo (BearLamp) - Culmination of a 21 article series on life improvement and getting things done. A review of the series as a whole and thoughts on moving forward.

Cfar Week 1 by mindlevelup - What working at CFAR is actually like. Less rationality research than anticipated. Communication costs scale quadratically. Organization efficiency and group rationality.

Onemagisterium Bayes by tristanm (lesswrong) - Toolbox-ism is the dominant mode of thinking today. Downsides of toolbox-ism. Desiderata that imply Bayesianism. Major problems: assigning priors and encountering new hypotheses. Four minor problems. Why the author is still a strong Bayesian. Strong Bayesians can still use frequentist tools. AI Risk.

Selfconscious Ideology by casebash (lesswrong) - Lesswrong has a self-conscious ideology. Self-conscious ideologies have major advantages even if any given self-conscious ideology is flawed.

Intellectuals As Artists by Robin Hanson - Many norms function to show off individual impressiveness: conversations, modern songs, taking positions on diverse subjects. Much intellectualism is optimized for status gains, not finding truth.

Just Saying What You Mean Is Impossible by Zvi Moshowitz - "Humans are automatically doing lightning fast implicit probabilistic analysis on social information in the background of every moment of their lives." This implies there is no way to divorce the content of your communication from its myriad probabilistic social implications. Different phrasings will just send different implications.

In Defense Of Individualist Culture by Sarah Constantin (Otium) - A description of individualist culture. Criticisms of individualist culture: lacking sympathy, few good defaults. Defenses: It's very hard to change people (psychology research review). A defense of naive personal identity. Traditional culture is fragile. Building a community project is hard in the modern world; prepare for the failure modes. Modernity has big upsides, and some people will make better choices than the traditional rules allow.

Forget The Maine by Robin Hanson - Monuments are not optimized for reminding people to do better. Instead they largely serve as vehicles for simplistic ideology.

The Ladder Of Interventions by mindlevelup - "This is a hierarchy of techniques to use for in-the-moment situations where you need to “convince” yourself to do something." The author uses these methods in practice.

On Dragon Army by Zvi Moshowitz - Long response to many quotes from "Dragon Army Barracks". Duncan's attitude to criticism. Tyler Durden shouldn't appeal to Duncan. Authoritarian group houses haven't been tried. Rationalists undervalue exploration. Loneliness and doing big things. The pendulum model of social progress. Sticking to commitments even when it's painful. Saving face when you screw up. True Reliability: the Bay is way too unreliable, but Duncan goes too far. Trust and power dynamics. Pragmatic criticism of the charter.


Updates To The Research Team And A Major Donation by The MIRI Blog - MIRI received a 1 million dollar donation. Two new full-time researchers. Two researchers leaving. Medium term financial plans.

Conversation With Dario Amodei by Jeff Kaufman - "The research that's most valuable from an AI safety perspective also has substantial value from the perspective of solving problems today". Prioritize work on goals. Transparency and adversarial examples are also important.

Why Don't Ai Researchers Panic by Bayesian Investor - AI researchers predict a 5% chance of "extremely bad" (extinction level) events, why aren't they panicking? Answers: They are thinking of less bad worst cases, optimism about counter-measures, risks will be easy to deal with later, three "star theories" (MIRI, Paul Christiano, GOFAI). More answers: Fatal pessimism and resignation. It would be weird to openly worry. Benefits of AI-safety measures are less than the costs. Risks are distant.

Strategic Implications Of Ai Scenarios by Tobias Baumann (EA forum) - Questions and topics: Advanced AI timelines. Hard or soft takeoff? Goal alignment? Will advanced AI act as a single entity or a distributed system? Implications for estimating the EV of donating to AI-safety.

Tool Use Intelligence Conversation by The Foundational Research Institute - A dialogue. Comparisons between humans and chimps/lions. The value of intelligence depends on the available tools. Defining intelligence. An addendum on "general intelligence" and factors that make intelligence useful.

Self-modification As A Game Theory Problem by cousin_it (lesswrong) - "If I'm right, then any good theory for cooperation between AIs will also double as a theory of stable self-modification for a single AI, and vice versa." An article with mathematical details is linked.

Looking Into Ai Risk by Jeff Kaufman - Jeff is trying to decide if AI risk is a serious concern and whether he should consider working in the field. Jeff's AI-risk reading list. A large comment section with interesting arguments.


Ea Marketing And A Plea For Moral Inclusivity by MichaelPlant (EA forum) - EA markets itself as being about poverty reduction. Many EAs think other topics are more important (far future, AI, animal welfare, etc). The author suggests becoming both more inclusive and more openly honest.

My Current Thoughts On Miris Highly Reliable by Daniel Dewey (EA forum) - Report by the Open Phil AI safety lead. A basic description of and case for the MIRI program. Conclusion: 10% credence in MIRI's work being highly useful. Reasons: Hard to apply to early agents, few researchers are excited, other approaches seem more promising.

How Can We Best Coordinate As A Community by Ben Todd (EA forum) - 'Replaceability' is a bad reason not to do direct work; lots of positions are very hard to fill. Comparative advantage and division of labor. Concrete ways to boost productivity: 5-minute favours, operations roles, community infrastructure, sharing knowledge and specialization. EA Global video is included.

Deciding Whether to Recommend Fistula Management Charities by The GiveWell Blog - "An obstetric fistula, or gynecologic fistula, is an abnormal opening between the vagina and the bladder or rectum." Fistula management, including surgery. Open questions and uncertainty particularly around costs. Our plans to partner with IDinsight to answer these questions.

Allocating the Capital by GiveDirectly - Eight media links on Give Directly, Basic Income and Cash Transfers.

Testing An Ea Networkbuilding Strategy by remmelt (EA forum) - Pivot from supporting EA charities to cooperating with EA networks. Detailed goals, strategy, assumptions, metrics, collaborators and example actions.

How Long Does It Take To Research And Develop A Vaccine by (EA forum) - How long it takes to make a vaccine. Literature review. Historical data on how long a large number of vaccines took to develop. Conclusions.

Hi Im Luke Muehlhauser Ama About Open by Luke Muehlhauser (EA forum) - Animal and computer consciousness. Luke wrote a report for the Open Philanthropy Project on consciousness. Lots of high quality questions have been posted.

Hidden Cost Digital Convenience by Innovations for Poverty - Moving from in person to digital micro-finance can harm saving rates in developing countries. Reduction in group cohesion and visible transaction fees. Linked paper with details.

Projects People And Processes by Open Philanthropy - Three approaches used by donors and decision makers: choose from projects presented by experts, defer near-fully to trusted individuals, or establish systematic criteria. Pros and cons of each. Open Phil's current approach.

Effective Altruism An Idea Repository by Onemorenickname (lesswrong) - Effective altruism is less of a closed organization than the author thought. Building a better platform for effective altruist idea sharing.

Effective Altruism As Costly Signaling by Raemon (EA forum) - " 'a bunch of people saying that rich people should donate to X' is a less credible signal than 'a bunch of people saying X thing is important enough that they are willing to donate to it themselves.' "

The Person Affecting Philanthropists Paradox by MichaelPlant (EA forum) - Population ethics. The value of creating more happy people as opposed to making pre-existing people happy. Application to the question of whether to donate now or invest and donate later.

Oops Prize by Ben Hoffman (Compass Rose) - Positive norms around admitting you were wrong. Charity Science publicly admitted they were wrong about grant writing. Did any organization at EA Global admit they made a costly mistake? A 1K oops prize.

===Politics and Economics:

Scraps 3 Hoffer And Performance Art by Lou (sam[]zdat) - Growing out of radicalism. Either economic or family instability can cause mass movements. Why the left has adopted Freud. The Left's economic platform is popular; its cultural platform is not. Performance art: Marina Abramović's 'Rhythm 0'. Recognizing and denying your own power.

What Replaces Rights And Discourse by Freddie deBoer - Lots of current leftist discourse is dismissive of rights and open discussion. But what alternative is there? The Soviets had bad justifications and a terrible system, but at least they had an explicit philosophical alternative.

Why Do You Hate Elua by Hivewired - Scott's Elua as an Eldritch Abomination that threatens traditional culture. An extended sci-fi quote about Ra the great computer. "The forces of traditional values remembered an important fact: once you have access to the hardware, it’s over."

Why Did Europe Lose The Crusades by Noah Smith - Technological comparison between Europe and the Middle East. Political divisions on both sides. Geographic distance. Lack of motivation.

Econtalk On Generic Medications by Aceso Under Glass - A few egregious ways that big pharma games the patent system. Short.

Data On Campus Free Speech Cases by Ozy (Thing of Things) - Ozy classifies a sample of 77 cases handled by the Foundation for Individual Rights in Education as conservative, liberal or apolitical censorship. Conservative ideas were censored in 52% of cases, liberal in 26% and apolitical in 22%.

Beware The Moral Spotlight by Robin Hanson - The stated goals of government/business don't much matter compared to the selective pressures on their leadership; don't obsess over which sex has the worse deal overall; don't overrate the benefits of democracy while ignoring higher-impact changes to government.

Reply To Yudkowsky by Bryan Caplan - Caplan quotes and replies to many sections of Yudkowsky's response. Points: Yudkowsky's theory is a special case of Caplan's. The left has myriad complaints about markets. Empirically the market has consistently provided large benefits in many countries and times.

Without Belief In A God But Never Without Belief In A Devil by Lou (sam[]zdat) - The nature of mass movements. The beats and the John Birchers. The taxonomy of the frustrated. Horseshoe theory. The frustrated cannot derive satisfaction from action; something else has to fill the void. Poverty, work and meaning. Mass movements need to sow resentment. Hatred is the strongest unifier. Modernity inevitably causes justified resentment. Tocqueville, Polanyi, Hoffer and Scott's theories. Helpful and unhelpful responses.

Genetic Behaviorism Supports The Influence Of Chance On Life Outcomes by Freddie deBoer - Much of the variance in many traits is non-shared-environment. Much non-shared-environment can be thought of as luck. In addition no one chooses or deserves their genes.

Yudkowsky On My Simplistic Theory of Left and Right by Bryan Caplan - Yudkowsky claims the left holds the market to the same standards as human beings. The market as a ritual holding back a dangerous Alien God. Caplan doesn't respond; he just quotes Yudkowsky.

On The Effects Of Inequality On Economic Growth by Artir (Nintil) - Most of the article tries to explain and analyze the economic consensus on whether inequality harms growth. A very large number of papers are cited and discussed. A conclusion that the effect is at most small.


Erisology Of Self And Will Representative Campbell Speaks by Everything Studies - An exposition of the "mainstream" view of the self and free will.

What Is The Ein Sof The Meaning Of Perfection In by arisen (lesswrong) - "Kabbalah is based on the analogy of the soul as a cup and G-d as the light that fills the cup. Ein Sof, nothing ("Ein") can be grasped ("Sof"-limitation)."

Sexual Taboos by AellaGirl - A graph of sexual fetishes. The axes are "taboo-ness" and "reported interest". Taboo-ness correlated negatively with interest (p < 0.01). Lots of fetishes are included and the sample size is pretty large.

Huffman Codes Problem by protokol2020 - Find the possible Huffman Codes for all twenty-six English letters.

If You're In School Try The Curriculum by Freddie deBoer - Ironic detachment "leaves you with the burden of the work but without the emotional support of genuine resolve". Don't be the sort of person who tweets hundreds of thousands of times but pretends they don't care about online.

Media Recommendations by Sailor Vulcan (BYS) - Various Reviews including: Games, Animated TV shows, Rationalist Pokemon. A more detailed review of Harry Potter and the Methods of Rationality.

Sunday Assorted Links by Tyler Cowen - Variety of topics: Ethereum cryptocurrency, NYC diner decline, building Chinese airports, soccer images, drone wars, Harberger taxation, Douthat on healthcare.

Summary Of Reading April June 2017 by Eli Bendersky - Brief reviews. Various topics: Heavy on Economics. Some politics, literature and other topics.

Rescuing The Extropy Magazine Archives by deku_shrub (lesswrong) - "You'll find some really interesting very early articles on neural augmentation, transhumanism, libertarianism, AI (featuring Eliezer), radical economics (featuring Robin Hanson of course) and even decentralized payment systems."

Epistemic Spot Check A Guide To Better Movement Todd Hargrove by Aceso Under Glass - Flexibility and Chronic Pain. Early section on flexibility fails check badly. Section on psychosomatic pain does much better. Model: Simplicity (Good), Explanation (Fantastic), Explicit Predictions (Good), Useful Predictions (Poor), Acknowledge Limits (Poor), Measurability (Poor).

Book Review Barriers by Eukaryote - Even cell culturing is surprisingly hard if you don't know the details. There is not much institutional knowledge left in the field of bioweapons. Forcing labs underground makes bioterrorism even harder. However synthetic biology might make things much more dangerous.

Physics Problem 2 by protokol2020 - Can tidal forces rotate a metal wheel?

Poems by Scott Alexander (Scratchpad) - Violets aren't blue.

Evaluating Employers As Junior Software by Particular Virtue - You need to write a lot of code and get detailed feedback to improve as an engineer. Practical suggestions to ensure your first job fulfills both conditions.


Kyle Maynard Without Limits by Tim Ferriss - "Kyle Maynard is a motivational speaker, bestselling author, entrepreneur, and ESPY award-winning mixed martial arts athlete, known for becoming the first quadruple amputee to reach the summit of Mount Kilimanjaro and Mount Aconcagua without the aid of prosthetics."

85 Is This The End Of Europe by Waking Up with Sam Harris - Douglas Murray and his book 'The Strange Death of Europe: Immigration, Identity, Islam'.

Myers Briggs, Diet, Mistakes And Immortality by Tim Ferriss - Ask me anything podcast. Topics beyond the title: Questions to prompt introspection, being a Jack of All Trades, balancing future and present goals, don't follow your passion, 80/20 memory retention, advice to your past selves.

Interview Ro Khanna Regional Development by Tyler Cowen - Bloomberg Podcast. "Technology, jobs and economic lessons from his perspective as Silicon Valley’s congressman."

Avik Roy by The Ezra Klein Show - Better Care Reconciliation Act, broader health care philosophies that fracture the right. Roy's disagreements with the CBO's methodology. The many ways he thinks the Senate bill needs to improve. How the GOP has moved left on health care policy. Medicaid, welfare reform, and the needy who are hard to help. The American health care system subsidizes the rich, etc.

Chris Blattman 2 by EconTalk - "Whether it's better to give poor Africans cash or chickens and the role of experiments in helping us figure out the answer. Along the way he discusses the importance of growth vs. smaller interventions and the state of development economics."

Landscapes Of Mind by Waking Up with Sam Harris - "why it’s so hard to predict future technology, the nature of intelligence, the 'singularity', artificial consciousness."

Blake Mycoskie by Tim Ferriss - Early entrepreneurial ventures. The power of journaling. How “the stool analogy” changed Blake’s life. Lessons from Ben Franklin.

Ben Sasse by Tyler Cowen - "Kansas vs. Nebraska, famous Nebraskans, Chaucer and Luther, unicameral legislatures, the decline of small towns, Ben’s prize-winning Yale Ph.D. thesis on the origins of conservatism, what he learned as a university president, Stephen Curry, Chevy Chase, Margaret Chase Smith"

Danah Boyd on why Fake News is so Easy to Believe by The Ezra Klein Show - Fake news, digital white flight, how an anthropologist studies social media, machine learning algorithms reflect our prejudices rather than fixing them, what Netflix initially got wrong about their recommendations engine, the value of pretending your audience is only six people, the early utopian visions of the internet.

Robin Feldman by EconTalk - Ways pharmaceutical companies fight generics.

Jason Weeden On Do People Vote Based On Self Interest by Rational Speaking - Do people vote based on personality, their upbringing, blind loyalty, or do they follow their self-interest? What does self-interest even mean?

Reid Hoffman 2 by Tim Ferriss - The 10 Commandments of Startup Success according to the extremely successful investor Reid Hoffman.

[Link] Dissolving the Fermi Paradox (Applied Bayesianism)

11 shin_getter 03 July 2017 09:44AM

Self-modification as a game theory problem

11 cousin_it 26 June 2017 08:47PM

In this post I'll try to show a surprising link between two research topics on LW: game-theoretic cooperation between AIs (quining, Loebian cooperation, modal combat, etc) and stable self-modification of AIs (tiling agents, Loebian obstacle, etc).

When you're trying to cooperate with another AI, you need to ensure that its action will fulfill your utility function. And when doing self-modification, you also need to ensure that the successor AI will fulfill your utility function. In both cases, naive utility maximization doesn't work, because you can't fully understand another agent that's as powerful and complex as you. That's a familiar difficulty in game theory, and in self-modification it's known as the Loebian obstacle (fully understandable successors become weaker and weaker).

In general, any AI will be faced with two kinds of situations. In "single player" situations, you're faced with a choice like eating chocolate or not, where you can figure out the outcome of each action. (Most situations covered by UDT are also "single player", involving identical copies of yourself.) Whereas in "multiplayer" situations your action gets combined with the actions of other agents to determine the outcome. Both cooperation and self-modification are "multiplayer" situations, and are hard for the same reason. When someone proposes a self-modification to you, you might as well evaluate it with the same code that you use for game theory contests.
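The "evaluate a proposed self-modification with your game-theory code" idea can be made concrete with a toy sketch. The following CliqueBot-style example (the names and framework are mine, not from the post) represents agents as source strings that can read each other's code; the same `run` function that scores an opponent could score a proposed successor:

```python
# Each agent is a Python source string defining
# act(opponent_source, my_source) -> "C" (cooperate) or "D" (defect).

CLIQUE_BOT = '''\
def act(opponent_source, my_source):
    # Quining cooperation in its crudest form: cooperate only with
    # an exact copy of myself, defect against everything else.
    return "C" if opponent_source == my_source else "D"
'''

DEFECT_BOT = '''\
def act(opponent_source, my_source):
    return "D"
'''

def run(agent_source, opponent_source):
    """Execute an agent and ask for its move, passing in both the
    opponent's source and its own (so it can compare the two)."""
    env = {}
    exec(agent_source, env)
    return env["act"](opponent_source, agent_source)

def play(src_a, src_b):
    """One round of the source-sharing game."""
    return run(src_a, src_b), run(src_b, src_a)
```

A proposed successor is just another source string, so "should I self-modify into this program?" and "should I cooperate with this program?" go through the same evaluation, which is the point of the post. Real modal-combat agents replace the brittle exact-string comparison with proof search (e.g. FairBot cooperates if it can prove you cooperate with it).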

If I'm right, then any good theory for cooperation between AIs will also double as a theory of stable self-modification for a single AI. That means neither problem can be much easier than the other, and in particular self-modification won't be a special case of utility maximization, as some people seem to hope. But on the plus side, we need to solve one problem instead of two, so creating FAI becomes a little bit easier.

The idea came to me while working on this mathy post on IAFF, which translates some game theory ideas into the self-modification world. For example, Loebian cooperation (from the game theory world) might lead to a solution for the Loebian obstacle (from the self-modification world) - two LW ideas with the same name that people didn't think to combine before!

[Link] Daniel Dewey on MIRI's Highly Reliable Agent Design Work

10 lifelonglearner 09 July 2017 04:35AM

David C Denkenberger on Food Production after a Sun Obscuring Disaster

9 JenniferRM 17 September 2017 09:06PM

Having paid a moderate amount of attention to threats to the human species for over a decade, I've run across an unusually good thinker with expertise unusually suited to helping with many threats to the human species, that I didn't know about until quite recently.

I think he warrants more attention from people thinking seriously about X-risks.

David C Denkenberger's CV is online and presumably has a list of all his X-risks relevant material mixed into a larger career that seems to have been focused on energy engineering.

He has two technical patents (one for a microchannel heat exchanger and another for a compound parabolic concentrator) and interests that appear to span the gamut of energy technologies and uses.

Since about 2013 he has been working seriously on the problem of food production after a sun obscuring disaster, and he is in Lesswrong's orbit basically right now.

This article is about opportunities for intellectual cross-pollination!

continue reading »

AI Summer Fellows Program

9 abramdemski 06 August 2017 07:35AM

CFAR is running a free two-week program this September, aimed at increasing participants' ability to do technical research in AI alignment. Like the MIRI Summer Fellows Program which ran the past two years, this will include CFAR course material, plus content on AI alignment research and time to collaborate on research with other participants and researchers such as myself! It will be located in the SF Bay area, September 8-25. See more information and apply here.

People don't have beliefs any more than they have goals: Beliefs As Body Language

9 Bound_up 25 July 2017 05:41PM

Many people, anyway.


A common mistake in modeling humans is to think that they have actual goals, and that they deduce their actions from those goals. First there is a goal, then there is an action which is born from the goal. This is wrong.

More accurately, humans have a series of adaptations they execute. A series of behaviors which, under certain circumstances, kinda-sorta-maybe aim at the same-ish thing (like inclusive genetic fitness), but which will be executed regardless of whether or not they actually hit that thing.

Actions are not deduced from goals. The closest thing we have to "goals" are inferred from a messy conglomerate of actions, and are only an approximation of the reality that is there: just a group of behaviors.


I've come to see beliefs as very much the same way. Maybe some of us have real beliefs, real models. Some of us may in fact choose our statements about the world by deducing them from a foundational set of beliefs.

The mistake is repeated when we model most humans as having actual beliefs (nerds might be an exception). To suppose that their statements about reality, their propositions about the world, or their answers to questions are deduced from some foundational belief. First there is a belief, then there is a report on that belief, offered whenever anyone inquires about the belief they're carrying around. This is wrong.

More accurately, humans have a set of social moves/responses that they execute. Some of those moves APPEAR (to the naive nerd such as I) to be statements about how and what reality is. Each of these "statements" was probably vetted and accepted individually, without any consideration for the utterly foreign notion that the moves should be consistent or avoid contradiction. This sounds as tiresome to them as suggesting that their body language, or dance moves should be "consistent," for to them, the body language, dance moves, and "statements about reality" all belong to the same group of social moves, and thinking a social move is "consistent" is like thinking a certain posture/gesture is consistent or a color is tasty.

And missing the point like a nerd and taking things "literally" is exactly the kind of thing that reveals low social acuity.

Statements about individual beliefs are not deduced from a model of the world, just like actions are not deduced from goals. You can choose to interpret "I think we should help poor people" as a statement about the morality of helping poor people, if you want to miss the whole point, of course. We can suppose that "XYZ would be a good president" is a report on their model of someone's ability to fulfill a set of criteria. And if we interpret all their statements as though they were actual, REAL beliefs, we might be able to piece them together into something sort of like a model of the world.

All of which is pointless, missing the point, and counter-productive. Their statements don't add up to a model like ours might, any more than our behaviors really add up to a goal. The "model" that comes out of aggregating their socially learned behaviors will likely be inconsistent, but if you think that'll matter to them, you've fundamentally misunderstood what they're doing. You're trying to find their beliefs, but they don't HAVE any. There IS nothing more. It's just a set of cached responses. (Though you might find, if you interpret their propositions about reality as signals about tribal affiliation and personality traits, that they're quite consistent).

"What do you think about X" is re-interpreted and answered as though you had said "What do good, high-status groups (that you can plausibly be a part of) think about X?"

"I disagree" doesn't mean they think your model is wrong; they probably don't realize you have a model. Just as you interpret their social moves as propositional statements and misunderstand, so they interpret your propositional statements as social moves and misunderstand. If you ask how their model differs from yours, it'll be interpreted as a generic challenge to their tribe/status, and they'll respond like they do to such challenges. You might be confused by their hostility, or by how they change the subject. You think you're talking about X and they've switched to Y. While they'll think you've challenged them, and respond with a similar challenge, the "content" of the sentences need not be considered; the important thing is to parry the social attack and maybe counter-attack. Both perspectives make internal sense.

As far as they're concerned, the entire meaning of your statement was basically equivalent to a snarl, so they snarled back. Beliefs As Body Language.

Despite the obvious exceptions and caveats, this has been extremely useful for me in understanding less nerdy people. I try not to take what to them are just the verbal equivalent of gestures/postures or dance moves, and interpret them as propositional statements about the nature of reality (even though they REALLY sound like they're making propositional statements about the nature of reality), because that misunderstands what they're actually trying to communicate. The content of their sentences is not the point. There is no content. (None about reality, that is. All content is social). They do not HAVE beliefs. There's nothing to report on.

MILA gets a grant for AI safety research

9 Dr_Manhattan 21 July 2017 03:34PM

The really good news is that Yoshua Bengio is leading this (he is extremely credible in modern AI/deep learning world), and this is a pretty large change of mind for him. When I spoke to him at a conference 3 years ago he was pretty dismissive of the whole issue; this year's FLI conference seems to have changed his mind (kudos to them)

Of course huge props to OpenPhil for pursuing this

Bayesian probability theory as extended logic -- a new result

9 ksvanhorn 06 July 2017 07:14PM

I have a new paper that strengthens the case for strong Bayesianism, a.k.a. One Magisterium Bayes. The paper is entitled "From propositional logic to plausible reasoning: a uniqueness theorem." (The preceding link will be good for a few weeks, after which only the preprint version will be available for free. I couldn't come up with the $2500 that Elsevier makes you pay to make your paper open-access.)

Some background: E. T. Jaynes took the position that (Bayesian) probability theory is an extension of propositional logic to handle degrees of certainty -- and appealed to Cox's Theorem to argue that probability theory is the only viable such extension, "the unique consistent rules for conducting inference (i.e. plausible reasoning) of any kind." This position is sometimes called strong Bayesianism. In a nutshell, frequentist statistics is fine for reasoning about frequencies of repeated events, but that's a very narrow class of questions; most of the time when researchers appeal to statistics, they want to know what they can conclude with what degree of certainty, and that is an epistemic question for which Bayesian statistics is the right tool, according to Cox's Theorem.

You can find a "guided tour" of Cox's Theorem here (see "Constructing a logic of plausible inference"). Here's a very brief summary. We write A | X for "the reasonable credibility" (plausibility) of proposition A when X is known to be true. Here X represents whatever information we have available. We are not at this point assuming that A | X is any sort of probability. A system of plausible reasoning is a set of rules for evaluating A | X. Cox proposed a handful of intuitively-appealing, qualitative requirements for any system of plausible reasoning, and showed that these requirements imply that any such system is just probability theory in disguise. That is, there necessarily exists an order-preserving isomorphism between plausibilities and probabilities such that A | X, after mapping from plausibilities to probabilities, respects the laws of probability.

Here is one (simplified and not 100% accurate) version of the assumptions required to obtain Cox's result:


  1. A | X is a real number.
  2. (A | X) = (B | X) whenever A and B are logically equivalent; furthermore, (A | X) ≤ (B | X) if B is a tautology (an expression that is logically true, such as (a or not a)).
  3. We can obtain (not A | X) from A | X via some non-increasing function S. That is, (not A | X) = S(A | X).
  4. We can obtain (A and B | X) from (B | X) and (A | B and X) via some continuous function F that is strictly increasing in both arguments: (A and B | X) = F((A | B and X), (B | X)).
  5. The set of triples (x,y,z) such that x = A|X, y = (B | A and X), and z = (C | A and B and X) for some proposition A, proposition B, proposition C, and state of information X, is dense. Loosely speaking, this means that if you give me any (x',y',z') in the appropriate range, I can find an (x,y,z) of the above form that is arbitrarily close to (x',y',z').
The "guided tour" mentioned above gives detailed rationales for all of these requirements.

Not everyone agrees that these assumptions are reasonable. My paper proposes an alternative set of assumptions that are intended to be less disputable, as every one of them is simply a requirement that some property already true of propositional logic continue to be true in our extended logic for plausible reasoning. Here are the alternative requirements:
  1. If X and Y are logically equivalent, and A and B are logically equivalent assuming X, then (A | X) = (B | Y).
  2. We may define a new propositional symbol s without affecting the plausibility of any proposition that does not mention that symbol. Specifically, if s is a propositional symbol not appearing in A, X, or E, then (A | X) = (A | (s ↔ E) and X).
  3. Adding irrelevant background information does not alter plausibilities. Specifically, if Y is a satisfiable propositional formula that uses no propositional symbol occurring in A or X, then (A | X) = (A | Y and X).
  4. The implication ordering is preserved: if A → B is a logical consequence of X, but B → A is not, then A | X < B | X; that is, A is strictly less plausible than B, assuming X.
Note that I do not assume that A | X is a real number. Item 4 above assumes only that there is some partial ordering on plausibility values: in some cases we can say that one plausibility is greater than another.


I also explicitly take the state of information X to be a propositional formula: all the background knowledge to which we have access is expressed in the form of logical statements. So, for example, if your background information is that you are tossing a six-sided die, you could express this by letting s1 mean "the die comes up 1," s2 mean "the die comes up 2," and so on, and your background information X would be a logical formula stating that exactly one of s1, ..., s6 is true, that is,

(s1 or s2 or s3 or s4 or s5 or s6) and
not (s1 and s2) and not (s1 and s3) and not (s1 and s4) and
not (s1 and s5) and not (s1 and s6) and not (s2 and s3) and
not (s2 and s4) and not (s2 and s5) and not (s2 and s6) and
not (s3 and s4) and not (s3 and s5) and not (s3 and s6) and
not (s4 and s5) and not (s4 and s6) and not (s5 and s6).

Just like Cox, I then show that there is an order-preserving isomorphism between plausibilities and probabilities that respects the laws of probability.

My result goes further, however, in that it gives actual numeric values for the probabilities. Imagine creating a truth table containing one row for each possible combination of truth values assigned to each atomic proposition appearing in either A or X. Let n be the number of rows in this table for which X evaluates true. Let m be the number of rows in this table for which both A and X evaluate true. If P is the function that maps plausibilities to probabilities, then P(A | X) = m / n.

For example, suppose that a and b are atomic propositions (not decomposable in terms of more primitive propositions), and suppose that we only know that at least one of them is true; what then is the probability that a is true? Start by enumerating all possible combinations of truth values for a and b:
  1. a false, b false: (a or b) is false, a is false.
  2. a false, b true : (a or b) is true,  a is false.
  3. a true,  b false: (a or b) is true,  a is true.
  4. a true,  b true : (a or b) is true,  a is true.
There are 3 cases (2, 3, and 4) in which (a or b) is true, and in 2 of these cases (3 and 4) a is also true. Therefore,

    P(a | a or b) = 2/3.
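The counting rule P(A | X) = m / n can be checked mechanically. Here is a small sketch (the function name and representation are mine, not from the paper): atomic propositions are named, and A and X are predicates on truth assignments:

```python
from itertools import product

def truth_table_prob(atoms, A, X):
    """P(A | X) = m / n: n counts truth assignments satisfying X,
    m counts assignments satisfying both A and X (the rule above)."""
    n = m = 0
    for values in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, values))
        if X(env):
            n += 1
            if A(env):
                m += 1
    return m, n

# The worked example: P(a | a or b) from the four-row truth table.
m, n = truth_table_prob(
    ["a", "b"],
    A=lambda v: v["a"],
    X=lambda v: v["a"] or v["b"],
)
print(m, n)  # 2 3
```

The same function handles the die example: with X encoding "exactly one of s1..s6 is true", six of the 64 assignments satisfy X, one of which also satisfies s1, giving P(s1 | X) = 1/6.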

This concords with the classical definition of probability, which Laplace expressed as

The probability of an event is the ratio of the number of cases favorable to it, to the number of possible cases, when there is nothing to make us believe that one case should occur rather than any other, so that these cases are, for us, equally possible.

This definition fell out of favor, in part because of its apparent circularity. My result validates the classical definition and sharpens it. We can now say that a “possible case” is simply a truth assignment satisfying the premise X. We can simply drop the problematic phrase “these cases are, for us, equally possible.” The phrase “there is nothing to make us believe that one case should occur rather than any other” means that we possess no additional information that, if added to X, would expand by differing multiplicities the rows of the truth table for which X evaluates true.

For more details, see the paper linked above.

One-Magisterium Bayes

9 tristanm 29 June 2017 11:02PM

[Epistemic Status: Very partisan / opinionated. Kinda long, kinda rambling.]

In my conversations with members of the rationalist community as well as in my readings of various articles and blog posts produced by them (as well as outside), I’ve noticed a recent trend towards skepticism of Bayesian principles and philosophy (see Nostalgebraist’s recent post for an example), which I have regarded with both surprise and a little bit of dismay, because I think progress within a community tends to be indicated by moving forward to new subjects and problems rather than a return to old ones that have already been extensively argued for and discussed. So the intent of this post is to summarize a few of the claims I’ve seen being put forward and try to point out where I believe these have gone wrong.

It’s also somewhat an odd direction for discussion to be going in, because the academic statistics community has largely moved on from debates between Bayesian and Frequentist theory, and has largely come to accept both the Bayesian and the Frequentist / Fisherian viewpoints as valid. When E.T. Jaynes wrote his famous book, the debate was mostly still raging on, and many questions had yet to be answered. In the 21st century, statisticians have mostly come to accept a world in which both approaches exist and have their merits.

Because I will be defending the Bayesian side here, there is a risk that this post will come off as being dogmatic. We are a community devoted to free-thought after all, and any argument towards a form of orthodoxy might be perceived as an attempt to stifle dissenting viewpoints. That is not my intent here, and in fact I plan on arguing against Bayesian dogmatism as well. My goal is to argue that having a base framework with which to feel relatively high confidence in is useful to the goals of the community, and that if we feel high enough confidence in it, then spending extra effort trying to prove it false might be wasting brainpower that can potentially be used on more interesting or useful tasks. There could always be a point we reach where most of us strongly feel that unless we abandon Bayesianism, we can’t make any further progress. I highly doubt that we have reached such a point or that we ever will.

This is also a personal exercise to test my understanding of Bayesian theory and my ability to communicate it. My hope is that if my ideas here are well presented, it should be much easier for both myself and others to find flaws with it and allow me to update.

I will start with an outline of philosophical Bayesianism, also called “Strong Bayesianism”, or what I prefer to call it, “One Magisterium Bayes.” The reason for wanting to refer to it as being a single magisterium will hopefully become clear. The Sequences did argue for this point of view, however, I think the strength of the Sequences had more to do with why you should update your beliefs in the face of new evidence, rather than why Bayes' theorem was the correct way to do this. In contrast, I think the argument for using Bayesian principles as the correct set of reasoning principles was made more strongly by E.T. Jaynes. Unfortunately, I feel like his exposition of the subject tends to get ignored relative to the material presented in the Sequences. Not that the information in the Sequences isn’t highly relevant and important, just that Jaynes' arguments are much more technical, and their strength can be overlooked for this reason. 

The way to start an exposition on one-magisterium rationality is by contrast to multi-magisteria modes of thought. I would go as far as to argue that the multi-magisterium view, or what I sometimes prefer to call tool-boxism, is by far the most dominant way of thinking today. Tool-boxism can be summarized by “There is no one correct way to arrive at the truth. Every model we have today about how to arrive at the correct answer is just that – a model. And there are many, many models. The only way to get better at finding the correct answer is through experience and wisdom, with a lot of insight and luck, just as one would master a trade such as woodworking. There’s nothing that can replace or supersede the magic of human creativity. [Sometimes it will be added:] Also, don’t forget that the models you have about the world are heavily, if not completely, determined by your culture and upbringing, and there’s no reason to favor your culture over anyone else’s.”

As I hope to argue in this post, tool-boxism has many downsides that should push us further towards accepting the one-magisterium view. It also very dramatically differs in how it suggests we should approach the problem of intelligence and cognition, with many corollaries in both rationalism and artificial intelligence. Some of these corollaries are the following:

  • If there is no unified theory of intelligence, we are led towards the view that recursive self-improvement is not possible, since an increase in one type of intelligence does not necessarily lead to an improvement in a different type of intelligence.
  • With a diversification in different notions of correct reasoning within different domains, it heavily limits what can be done to reach agreement on different topics. In the end we are often forced to agree to disagree, which while preserving social cohesion in different contexts, can be quite unsatisfying from a philosophical standpoint.
  • Related to the previous corollary, it may lead to beliefs that are sacred, untouchable, or based on intuition, feeling, or difficult to articulate concepts. This produces a complex web of topics that have to be avoided or tread carefully around, or a heavy emphasis on difficult to articulate reasons for preferring one view over the other.
  • Developing AI around a tool-box / multi-magisteria approach, where systems are made up of a wide array of various components, limits generalizability and leads to brittleness. 

One very specific trend I’ve noticed lately in articles that aim to discredit the AGI intelligence explosion hypothesis, is that they tend to take the tool-box approach when discussing intelligence, and use that to argue that recursive self-improvement is likely impossible. So rationalists should be highly interested in this kind of reasoning. One of Eliezer’s primary motivations for writing the Sequences was to make the case for a unified approach to reasoning, because it lends credence to the view of intelligence in which intelligence can be replicated by machines, and where intelligence is potentially unbounded. And also that this was a subtle and tough enough subject that it required hundreds of blog posts to argue for it. So because of the subtle nature of the arguments I’m not particularly surprised by this drift, but I am concerned about it. I would prefer if we didn’t drift.

I’m trying not to sound No-True-Scotsman-y here, but I wonder what it is that could make one a rationalist if they take the tool-box perspective. After all, even if you have a multi-magisterium world-view, there still always is an underlying guiding principle directing the use of the proper tools. Often times, this guiding principle is based on intuition, which is a remarkably hard thing to pin down and describe well. I personally interpret the word ‘rationalism’ as meaning in the weakest and most general sense that there is an explanation for everything – so intelligence isn’t irreducibly based on hand-wavy concepts such as ingenuity and creativity. Rationalists believe that those things have explanations, and once we have those explanations, then there is no further use for tool-boxism.

I’ll repeat the distinction between tool-boxism and one-magisterium Bayes, because I believe it’s that important: Tool-boxism implies that there is no underlying theory that describes the mechanisms of intelligence. And this assumption basically implies that intelligence is either composed of irreducible components (where one component does not necessarily help you understand a different component) or some kind of essential property that cannot be replicated by algorithms or computation.

Why is tool-boxism the dominant paradigm then? Probably because it is the most pragmatically useful position to take in most circumstances when we don’t actually possess an underlying theory. But the fact that we sometimes don’t have an underlying theory or that the theory we do have isn’t developed to the point where it is empirically beating the tool box approach is sometimes taken as evidence that there isn't a unifying theory. This is, in my opinion, the incorrect conclusion to draw from these observations.

Nevertheless, it seems like a startlingly common conclusion to draw. I think the great mystery is why this is so. I don’t have very convincing answers to that question, but I suspect it has something to do with how heavily our priors are biased against a unified theory of reasoning. It may also be due to the subtlety and complexity of the arguments for a unified theory. For that reason, I highly recommend reviewing those arguments (and few people other than E.T. Jaynes and Yudkowsky have made them). So with that said, let’s review a few of those arguments, starting with one of the myths surrounding Bayes theorem I’d like to debunk:

Bayes Theorem is a trivial consequence of the Kolmogorov Axioms, and is therefore not powerful.

This claim is usually presented as part of a larger claim that “Bayesian” probability is just a small part of regular probability theory, and therefore does not give us any more useful information than you’d get from just studying probability theory. And as a consequence of that, if you insist that you’re a “Strong” Bayesian, that means you’re insisting on using only that small subset of probability theory and associated tools we call Bayesian.

And the part of the statement that says the theorem is a trivial consequence of the Kolmogorov axioms is technically true. It’s the implication typically drawn from this that is false. The reason it’s false has to do with Bayes theorem being a non-trivial consequence of a simpler set of axioms / desiderata. This consequence is usually formalized by Cox’s theorem, which is usually glossed over or not quite appreciated for how far-reaching it actually is.

Recall that the qualitative desiderata for a set of reasoning rules were:

  1. Degrees of plausibility are represented by real numbers.
  2. Qualitative correspondence with common sense.
  3. Consistency. 

You can read the first two chapters of Jaynes’ book, Probability Theory: The Logic of Science if you want more detail into what those desiderata mean. But the important thing to note from them is that they are merely desiderata, not axioms. This means we are not assuming those things are already true, we just want to devise a system that satisfies those properties. The beauty of Cox’s theorem is that it specifies exactly one set of rules that satisfy these properties, of which Bayes Theorem as well as the Kolmogorov Axioms are a consequence of those rules.

The other nice thing about this is that degrees of plausibility can be assigned to any proposition, or any statement that you could possibly assign a truth value to. It does not limit plausibility to “events” that take place in some kind of space of possible events like whether a coin flip comes up heads or tails. What’s typically considered the alternative to Bayesian reasoning is Classical probability, sometimes called Frequentist probability, which only deals with events drawn from a sample space, and is not able to provide methods for probabilistic inference of a set of hypotheses.
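To make that contrast concrete, here is a minimal sketch (my own toy example, not from the post) of Bayesian inference over an explicit set of hypotheses — exactly the thing the event-frequency framing doesn't directly provide:

```python
def posterior(hypotheses, priors, likelihood, data):
    """Bayes' rule over an explicit hypothesis set:
    P(H | D) is proportional to P(D | H) * P(H), normalized."""
    unnorm = [p * likelihood(h, data) for h, p in zip(hypotheses, priors)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Three rival hypotheses about a coin's bias toward heads.
biases = [0.2, 0.5, 0.8]
priors = [1 / 3, 1 / 3, 1 / 3]  # prior of indifference

def likelihood(bias, flips):
    p = 1.0
    for f in flips:
        p *= bias if f == "H" else (1 - bias)
    return p

# After seeing HHTH, the heads-biased hypothesis pulls ahead.
post = posterior(biases, priors, likelihood, "HHTH")
```

Each hypothesis here is a proposition ("the bias is 0.2") receiving a degree of plausibility, not an event drawn from a sample space.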

For axioms, Cox’s theorem merely requires you to accept Boolean algebra and Calculus to be true, and then you can derive probability theory as extended logic from that. So this should be mindblowing, right? One Magisterium Bayes? QED? Well apparently this set of arguments is not convincing to everyone, and it’s not because people find Boolean logic and calculus hard to accept.

Rather, there are two major and several somewhat minor difficulties encountered within the Bayesian paradigm. The two major ones are as follows:

  • The problem of hypothesis generation.
  • The problem of assigning priors. 

The list of minor problems are as follows, although like any list of minor issues, this is definitely not exhaustive:

  • Should you treat “evidence” for a hypothesis, or “data”, as having probability 1?
  • Bayesian methods are often computationally intractable.
  • How to update when you discover a “new” hypothesis.
  • Divergence in posterior beliefs for different individuals upon the acquisition of new data.

Most Bayesians never deny the existence of the first two problems. What some anti-Bayesians conclude from them, though, is that Bayesianism must be fatally flawed due to those problems, and that there is some other way of reasoning that would avoid or provide solutions to those problems. I’m skeptical about this, and the reason I’m skeptical is that if you really had a method for, say, hypothesis generation, this would actually imply logical omniscience, and would basically allow us to create full AGI, RIGHT NOW. If you really had the ability to produce a finite list containing the correct hypothesis for any problem, the existence of the other hypotheses in this list is practically a moot point – you have some way of generating the CORRECT hypothesis in a finite, computable algorithm. And that would make you a God.

As far as I know, being able to do this would imply that P = NP, and most computer scientists do not think that’s likely to be true (and even if it were, we might not get a constructive proof from it). But I would ask: Is this really a strike against Bayesianism? Is the inability of Bayesian theory to provide a method for generating the correct hypothesis evidence that we can’t use it to analyze and update our own beliefs?

I would add that there are plenty of ways to generate hypotheses by other methods. For example, you can try to make the hypothesis space gargantuan, and encode different hypotheses in a vector of parameters, and then use different optimization or search procedures like evolutionary algorithms or gradient descent to find the most likely set of parameters. Not all of these methods are considered “Bayesian” in the sense that you are summarizing a posterior distribution over the parameters (although stochastic gradient descent might be). It seems like a full theory of intelligence might include methods for generating possible hypotheses. I think this is probably true, but I don’t know of any arguments that it would contradict Bayesian theory.
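As a toy illustration of searching a hypothesis space, here is a minimal sketch (my own example, not code from any of the works discussed): the hypotheses are "the coin has bias b", encoded as a single parameter, and a brute-force grid search stands in for fancier optimizers like gradient descent or evolutionary algorithms:

```python
import math

# Encode a family of hypotheses as a single parameter (the bias of a
# coin) and search the parameter space for the value that best explains
# the observed data.

def log_likelihood(bias, heads, tails):
    if bias <= 0.0 or bias >= 1.0:
        return float("-inf")
    return heads * math.log(bias) + tails * math.log(1.0 - bias)

def best_hypothesis(heads, tails, grid_size=1000):
    # Exhaustive grid search stands in for fancier search procedures
    # (gradient descent, evolutionary algorithms, ...).
    candidates = [i / grid_size for i in range(1, grid_size)]
    return max(candidates, key=lambda b: log_likelihood(b, heads, tails))

print(best_hypothesis(7, 3))  # → 0.7, the best bias for 7 heads, 3 tails
```

The point is just that the search procedure producing the hypothesis is separate from the Bayesian machinery that evaluates it.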

The reason assigning prior probabilities is such a huge concern is that it forces Bayesians to hold “subjective” probabilities, where in most cases, if you’re not an expert in the domain of interest, you don’t really have a good argument for why you should hold one prior over another. Frequentists often contrast this with their methods which do not require priors, and thus hold some measure of objectivity.

E.T. Jaynes never considered this to be a flaw in Bayesian probability, per se. Rather, he considered hypothesis generation, as well as assigning priors, to be outside the scope of “plausible inference”, which is what he considered to be the domain of Bayesian probability. He himself argued for using the principle of maximum entropy to construct a prior distribution, and there are also more modern techniques such as Empirical Bayes.
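To make the maximum-entropy idea concrete, here is a small sketch of Jaynes's Brandeis dice example (my own implementation; the target mean of 4.5 is the number Jaynes used). Among all priors over the six faces with a fixed mean, the maximum-entropy one has the form p_i ∝ exp(λ·i), and the multiplier λ can be found by bisection, since the mean is monotonically increasing in it:

```python
import math

# Maximum-entropy prior over die faces {1..6} subject to a mean
# constraint. With no constraint the maxent prior is just uniform.

FACES = range(1, 7)

def maxent_dist(lam):
    weights = [math.exp(lam * i) for i in FACES]
    total = sum(weights)
    return [w / total for w in weights]

def mean(dist):
    return sum(i * p for i, p in zip(FACES, dist))

def maxent_prior(target_mean, lo=-10.0, hi=10.0):
    # The mean is monotonically increasing in lam, so bisection works.
    for _ in range(100):
        mid = (lo + hi) / 2
        if mean(maxent_dist(mid)) < target_mean:
            lo = mid
        else:
            hi = mid
    return maxent_dist((lo + hi) / 2)

prior = maxent_prior(4.5)  # skewed toward the high faces, mean 4.5
```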

In general, Frequentists often have the advantage that their methods are simpler and easier to compute, while also coming with strong guarantees about the results, as long as certain constraints are satisfied. Bayesians have the advantage that their methods are “ideal” in the sense that you’ll get the same answer each time you run an analysis. This is the most common form of example Bayesians use when they profess the superiority of their approach: they show how Frequentist methods can give both “significant” and “non-significant” labels to the same data depending on how you perform the analysis, whereas the Bayesian way just gives you the probability of the hypothesis, plain and simple.

I think that in general, one could say that Frequentist methods are a lot more “tool-boxy” and Bayesian methods are more “generally applicable” (if computational tractability weren’t an issue).  That gets me to the second myth I’d like to debunk:

Being a “Strong Bayesian” means avoiding all techniques not labeled with the stamp of approval from the Bayes Council.

Does this mean that Frequentist methods, because they are tool-box approaches, are wrong or somehow bad to use, as some argue that Strong Bayesians claim? Not at all. There’s no reason not to use a specific tool if it seems like the best way to get what you want, as long as you understand exactly what the results you’re getting mean. Sometimes I just want a prediction, and I don’t care how I get it – I know that a specific algorithm being labeled “Bayesian” doesn’t confer it any magical properties. Any Bayesian may want to know the frequentist properties of their model. It’s easy to forget that different communities of researchers, flying the flag of their tribe, developed some methods and then labeled them according to their tribal affiliation. That’s ok. The point is, if you really want to have a Strong Bayesian view, then you also have to assign probabilities to various properties of each tool in the toolbox.

Chances are, if you’re a statistics/data science practitioner with a few years of experience applying different techniques to different problems and different data sets, and you have some general intuitions about which techniques apply better to which domains, you’re probably doing this in a Bayesian way. That means, you hold some prior beliefs about whether Bayesian Logistic Regression or Random Forests is more likely to get what you want on this particular problem, you try one, and possibly update your beliefs once you get a result, according to what your models predicted.
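That informal practitioner's update can be written out explicitly. In the sketch below every number is made up for illustration; the hypothesis is "random forests suit this problem better", and the data is which method wins on a validation set:

```python
# A cartoon of the practitioner's update described above: prior beliefs
# about which of two methods will work better on this problem, updated
# after observing which one actually wins on validation data.

p_logreg_better = 0.6   # prior: logistic regression suits this problem
p_forest_better = 0.4   # prior: random forests suit this problem

# Suppose the random forest wins on validation. How likely was that win
# under each hypothesis? (Assumed numbers.)
p_win_if_logreg_better = 0.3
p_win_if_forest_better = 0.8

evidence = (p_win_if_logreg_better * p_logreg_better
            + p_win_if_forest_better * p_forest_better)
posterior_forest_better = p_win_if_forest_better * p_forest_better / evidence

print(posterior_forest_better)  # belief shifts from 0.4 toward 0.64
```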

Being a Bayesian often requires you to work with “black boxes”, or tools that you know give you a specific result, but you don’t have a full explanation of how it arrives at the result or how it fits in to the grand scheme of things. A Bayesian fundamentalist may refuse to work with any statistical tool like that, not realizing that in their everyday lives they often use tools, objects, or devices that aren’t fully transparent to them. But you can, and in fact do, have models about how those tools can be used and the results you’d get if you used them. The way you handle these models, even if they are held in intuition, probably looks pretty Bayesian upon deeper inspection.

I would suggest that instead of using the term “Fully Bayesian” we use the phrase “Infinitely Bayesian” to refer to using a Bayesian method for literally everything, because it more accurately shows that it would be impossible to actually model every single atom of knowledge probabilistically. It also makes it easier to see that even the Strongest Bayesian you know probably isn’t advocating this.

Let me return to the “minor problems” I mentioned earlier, because they are pretty interesting.  Some epistemologists have a problem with Bayesian updating because it requires you to assume that the “evidence” you receive at any given point is completely true with probability 1. I don’t really understand why it requires this. I’m easily able to handle the case where I’m uncertain about my data. Take the situation where my friend is rolling a six-sided die, and I want to know the probability of it coming up 6. I assume all sides are equally likely, so my prior probability for 6 is 1/6. Let’s say that he rolls it where I can’t see it, and then tells me the die came up even. What do I update p(6) to?

Let’s say that I take my data as saying “the die came up even.” Then p(6 | even) = p(even | 6) * p(6) / p(even) = 1 * (1/6) / (1 / 2) = 1/3. Ok, so I should update p(6) to 1/3 now right? Well, that’s only if I take the evidence of “the die came up even” as being completely true with probability one. But what actually happened is that my friend TOLD ME the die came up even. He could have been lying, maybe he forgot what “even” meant, maybe his glasses were really smudged, or maybe aliens took over his brain at that exact moment and made him say that. So let’s say I give a 90% chance to him telling the truth, or equivalently, a 90% chance that my data is true. What do I update p(6) to now?

It’s pretty simple. I just expand p(6) over “even” as p(6) = p(6 | even) p(even)  + p(6 | odd) p(odd). Before he said anything, p(even) = p(odd) and this formula evaluated to (1/3)(1/2) + (0)(1/2) = 1/6, my prior. After he told me the die came up even, I update p(even) to 0.9, and this formula becomes (1/3)(9/10) + (0)(1/10) = 9/30. A little less than 1/3. Makes sense.
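For completeness, here is that computation done numerically (softening the evidence this way is sometimes called Jeffrey conditioning):

```python
# The die example above: first the standard update that takes "the die
# came up even" at face value, then the softened update where the
# report is only 90% reliable.

p6_prior = 1 / 6
p_even = 1 / 2

# Certain evidence: P(6 | even) = P(even | 6) * P(6) / P(even)
p6_given_even = 1.0 * p6_prior / p_even   # ≈ 1/3
p6_given_odd = 0.0

# Uncertain evidence: shift P(even) from 1/2 to 0.9 and re-mix.
p_even_reported = 0.9
p6_updated = (p6_given_even * p_even_reported
              + p6_given_odd * (1 - p_even_reported))

print(p6_given_even)  # ≈ 1/3
print(p6_updated)     # ≈ 0.3, a little less than 1/3
```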

In general, I am able to model anything probabilistically in the Bayesian framework, including my data. So I’m not sure where the objection comes from. It’s true that from a modeling perspective, and a computational one, I have to stop somewhere, and just accept for the sake of pragmatism that probabilities very close to 1 should be treated as if they were 1, and not model those. Not doing that, and just going on forever, would mean being Infinitely Bayesian. But I don’t see why this counts as problem for Bayesianism. Again, I’m not trying to be omniscient. I just want a framework for working with any part of reality, not all of reality at once. The former is what I consider “One Magisterium” to mean, not the latter.

The rest of the minor issues are also related to limitations that any finite intelligence is going to have no matter what. They should all, though, get easier as access to data increases, models get better, and computational ability gets better.

Finally, I’d like to return to an issue that I think is most relevant to the ideas I’ve been discussing here. In AI risk, it is commonly argued that a sufficiently intelligent agent will be able to modify itself to become more intelligent. This premise assumes that an agent will have some theory of intelligence that allows it to understand which updates to itself are more likely to be improvements. Because of that, many who argue against “AI Alarmism” will argue against the premise that there is a unified theory of intelligence. In “Superintelligence: The Idea that Eats Smart People”, I think most of the arguments can be reduced to basically saying as much.

From what I can tell, most arguments against AI risk in general will take the form of anecdotes about how really really smart people like Albert Einstein were very bad at certain other tasks, and that this is proof that there is no theory of intelligence that can be used to create a self-improving AI. Well, more accurately, these arguments are worded as “There is no single axis on which to measure intelligence” but what they mean is the former, since even multiple axes of intelligence (such as measure of success on different tasks) would not actually imply that there isn’t one theory of reasoning. What multiple axes of measuring intelligence do imply is that within a given brain, the brain may have devoted more space to better modeling certain tasks than others, and that maybe the brain isn’t quite that elastic, and has a hard time picking up new tasks.

The other direction in which to argue against AI risk is to argue against the proposed theories of reasoning themselves, like Bayesianism. The alternative, it seems, is tool-boxism. I really want to avoid tool-boxism because it makes it difficult to be a rationalist. Even if Bayesianism turns out to be wrong, does this exclude other, possibly undiscovered theories of reasoning? I’ve never seen that touched upon by any of the AI risk deniers. As long as there is a theory of reasoning, then presumably a machine intelligence could come to understand that theory and all of its consequences, and use that to update itself.

I think the simplest summary of my post is this: A Bayesian need not be Bayesian in all things, for reasons of practicality. But a Bayesian can be Bayesian in any given thing, and this is what is meant by “One Magisterium”.

I didn’t get to cover every corollary of tool-boxing or every issue with Bayesian statistics, but this post is already really long, and for the sake of brevity I will probably end it here. Perhaps I can cover those issues more thoroughly in a future post. 

New business opportunities due to self-driving cars

8 chaosmage 06 September 2017 08:07PM

This is a slightly expanded version of a talk presented at the Less Wrong European Community Weekend 2017.

Predictions about self-driving cars in the popular press are pretty boring. Truck drivers are losing their jobs, self-driving cars will be more rented than owned, transport becomes cheaper, so what. The interesting thing is how these things change the culture and economy and what they make possible.

I have no idea about most of this. I don't know if self-driving cars accelerate or decelerate urbanization, I don't know how public transport responds, I don't even care which of the old companies survive. What I do think is somewhat predictable is which business opportunities become economical that previously weren't. I disregard retail, which will continue moving online at the expense of brick-and-mortar stores even if FedEx trucks continue to be driven by people.

Diversification of vehicle types

A family car that you own has to be somewhat good at many different jobs. It has to get you places fast. It has to be a thing that can transport lots of groceries. It has to take your kid to school.

With self-driving cars that you rent for each separate job, you want very different cars. A very fast one to take you places. A roomy one with easy access for your groceries. And a tiny, cute, unicorn-themed one that takes your kid to school.

At the same time, the price of autonomy is dropping faster than the price of batteries, so you want the lowest mass car that can do the job. So a car that is very fast and roomy and unicorn-themed at the same time isn't economical.

So if you're an engineer or a designer, consider going into vehicle design. There's an explosion of creativity about to happen in that field that will make it very different from the subtle iterations in car design of the past couple of decades.

Who wins: Those who design useful new types of autonomous vehicles for needs that are not, or badly, met by general purpose cars.

Who loses: Owners of general purpose cars, which lose value rapidly.

Services at home

If you have a job where customers come to visit you, say you're a doctor or a hairdresser or a tattoo artist, your field of work is about to change completely. This is because services that go visit the customer outcompete ones that the customer has to go visit. They're more convenient and they can also easily service less mobile customers. This already exists for rich people: If you have a lot of money, you pay for your doctor's cab and have her come to your mansion. But with transport prices dropping sharply, this reaches the mass market.

This creates an interesting dynamic. In this kind of job, you have some vague territory - your customers are mostly from your surrounding area and your number of competitors inside this area is relatively small. With services coming to the home, everyone's territories become larger, so more of them overlap, creating competition and discomfort. I believe the typical solution, which reinstates a more stable business situation and requires no explicit coordination, is increased specialization within your profession. So a doctor might be less of her district's general practitioner and more of her city's leading specialist in one particular illness within one particular demographic. A hairdresser might be the city's expert for one particular type of haircut for one particular type of hair. And so on.

Who wins: Those who adapt quickly and steal customers from stationary services.

Who loses: Stationary services and their landlords.

Rent anything

You will not just rent cars, you will rent anything that a car can bring to your home and take away again. You don't go to the gym, you have a mobile gym visit you twice a week. You don't own a drill that sits unused 99.9% of the time, you have a little drone bring you one for an hour for like two dollars. You don't buy a huge sound system for your occasional party, you rent one that's even huger and on wheels.

Best of all, you can suddenly have all sorts of absurd luxuries, stuff that previously only millionaires or billionaires could afford, provided you only need it for an hour and it fits in a truck. The possibilities for business here are dizzying.

Who wins: People who come up with clever business models and the vehicles to implement them.

Who loses: Owners and producers of infrequently used equipment.

Self-driving hotel rooms

This is a special case of the former but deserves its own category. Self-driving hotel rooms replace not just hotel rooms, but also tour guides and your holiday rental car. They drive you to all the tourist sites, they stop at affiliated restaurants, they occasionally stop at room service stations. And on the side, they do overnight trips from city to faraway city, competing with airlines.

Who wins: The first few companies who perfect this.

Who loses: Stationary hotels and motels.

Rise of alcoholism and drug abuse

Lots of people lack intrinsic motivation to be sober. They basically can't decide against taking something. Many of them currently make do with extrinsic motivation: They manage to at least not drink while driving. In other words, for a large number of people, driving is their only reason not to drink or do drugs. That reason is going away and consumption is sure to rise accordingly.

Hey I didn't say all the business opportunities were particularly ethical. But if you're a nurse or doctor, if you go into addiction treatment you're probably good.

Who wins: Suppliers of mind-altering substances and rehab clinics.

Who loses: The people who lack intrinsic motivation to be sober, and their family and friends.

Autonomous boats and yachts

While there's a big cost advantage to vehicle autonomy in cars, it is arguably even bigger in boats. You don't need a sailing license, you don't need to hire skilled sailors, you don't need to carry all the room and food those sailors require. So the cost of going by boat drops a lot, and there's probably a lot more traffic in (mostly coastal) waters. Again very diverse vehicles, from the little skiff that transports a few divers or anglers to the personal yacht that you rent for your honeymoon. This blends into the self-driving hotel room, just on water.

Who wins: Shipyards, especially the ones that adapt early.

Who loses: Cruise ships and marine wildlife.

Mobile storage

The only reason we put goods in warehouses is that it is too expensive to just leave them in the truck all the way from the factory to the buyer. That goes away as well, although with the huge amounts of moved mass involved this transition is probably slower than the others. Shipping containers on wheels already exist.

Who wins: Manufacturers, and logistics companies that can provide even better just in time delivery.

Who loses: Intermediate traders, warehouses and warehouse workers.

That's all I got for now. And I'm surely missing the most important innovation that self-driving vehicles will permit. But until that one becomes clear, maybe work with the above. All of these are original ideas that I haven't seen written down anywhere. So if you like one of these and would like to turn it into a business, you're a step ahead of nearly everybody right now and I hope it makes you rich. If it does, you can buy me a beer. :-)

Heuristics for textbook selection

8 John_Maxwell_IV 06 September 2017 04:17AM

Back in 2011, lukeprog posted a textbook recommendation thread.  It's a nice thread, but not every topic has a textbook recommendation.  What are some other heuristics for selecting textbooks besides looking in that thread?

Amazon star rating is the obvious heuristic, but it occurred to me that Amazon sales rank might actually be more valuable: It's an indicator that profs are selecting the textbook for their classes.  And it's an indicator that the textbook has achieved mindshare, meaning you're more likely to learn the same terminology that others use.  (But there are also disadvantages of having the same set of mental models that everyone else is using.)

Somewhere I read that Elements of Statistical Learning was becoming the standard machine learning text partially because it's available for free online.  That creates a wrinkle in the sales rank heuristic, because people are less likely to buy a book if they can get it online for free.  (Though Elements of Statistical Learning appears to be a #1 bestseller on Amazon, in bioinformatics.)

Another heuristic is to read the biographies of the textbook authors and figure out who has the most credible claim to expertise, or who seems to be the most rigorous thinker (e.g. How Brands Grow is much more data-driven than a typical marketing book).  Or try to figure out what text the most expert professors are choosing for their classes.  (Oftentimes you can find the syllabi of their classes online.  I guess the naive path would probably look something like: go to US News to see what the top ranked universities are for the subject you're interested in.  Look at the university's course catalog until you find the course that covers the topic you want to learn.  Search Google for the course ID in order to find the syllabus for the most recent time that course was taught.)

The Reality of Emergence

8 DragonGod 19 August 2017 09:58PM

Social Insight: When a Lie Is Not a Lie; When a Truth Is Not a Truth

8 Bound_up 11 August 2017 06:28PM

The point has already been made, that if you wish to truly be honest, it is not enough to speak the truth.

I generally don't tell people I'm an atheist (I describe my beliefs without using any common labels). Why? I know that if I say the words "I am an atheist," that they will hear the following concepts:

- I positively believe there is no God

- I cannot be persuaded by evidence any more than most believers can be persuaded by evidence, ie, I have a kind of faith in my atheism

- I wish to distance myself from members of religious tribes

As I said, the point has already been made; If I know that they will hear those false ideas when I say a certain phrase, how can I say I am honest in speaking it, knowing that I will cause them to have false beliefs? Hence the saying, if you wish to protect yourself, speak the truth. If you wish to be honest, speak so that truth will be heard.

Many a politician convincingly lies with truths by saying things that they know will be interpreted in a certain positive (and false) way, but which they can always defend as having been intended to convey some other meaning.


The New

There is a counterpart to this insight, come to me as I've begun to pay more attention to the flow of implicit social communication. If speaking the truth in a way you know will deceive is a lie, then perhaps telling a lie in a way that you know will communicate a true concept is not a lie.

I've relaxed my standards of truth-telling as I've come to understand this. "You're the best" and "You can do this" statements have been opened to me, no qualifiers needed. If I know that everyone in a group has to say "I have XYZ qualification," but I also know that no one actually believes anybody when they say it, I can comfortably recite those words, knowing that I'm not actually leading anybody to believe false things, and thus, am not being dishonest.

Politicians use this method, too, and I think I'm more or less okay with it. You see, we have a certain problem that arises from intellectual inequality. There are certain truths which literally cannot be spoken to some people. If someone has an IQ of 85, you literally cannot tell them the truth about a great number of things (or they cannot receive it). And there are a great many more people who have the raw potential to understand certain difficult truths, but whom you cannot reasonably tell these truths (they'd have to want to learn, put in effort, receive extensive teaching, etc).

What if some of these truths are pertinent to policy? What do you do, say a bunch of phrases that are "true" in a way you will interpret them, but which will only be heard as...

As what? What do people hear when you explain concepts they cannot understand? If I had to guess, very often they interpret this as an attack on their social standing, as an attempt by the speaker to establish themselves as a figure of superior ability, to whom they should defer. You sound uppity, cold, out-of-touch, maybe nerdy or socially inept.

So, then...if you're socially capable, you don't say those things. You give up. You can't speak the truth, you literally cannot make a great many people hear the real reasons why policy Z is a good idea; they have limited the vocabulary of the dialogue by their ability and willingness to engage. 

Your remaining moves are to limit yourself to their vocabulary, or say something outside of that vocabulary, all the nuance of which will evaporate en route to their ears, and which will be heard as a monochromatic "I think I'm better than you."

The details of this dynamic at play go on and on, but for now, I'll just say that this is the kind of thing Scott Adams is referring to when he says that what Trump has said is "emotionally true" even if it "doesn't pass the fact checks" (see dialogue with Sam Harris).

In a world of inequality, you pick your poison. Communicate what truths can be received by your audience, or be a nerd and stay out of elections.

Bi-Weekly Rational Feed

8 deluks917 24 July 2017 09:56PM

===Highly Recommended Articles:

Superintelligence Risk Project Update II by Jeff Kaufman - Jeff's thoughts and the sources he found most useful. The project is wrapping up in a few days. Topics: Technical Distance to AI. Most plausible scenarios of Superintelligence risk. OpenPhil's notes on how progress was potentially stalled in Cryonics and Nanotech.

Superintelligence Risk Project Update by Jeff Kaufman - Links to the three most informative readings on AI risk. Details on the large number of people Jeff has talked to. Three fundamental points of view on AI-Safety. Three Fundamental points of disagreement. An update on the original questions Jeff was trying to answer.

Podcast The World Needs Ai Researchers Heres How To Become One by 80,000 Hours - "OpenAI’s latest plans and research progress. Concrete Papers in AI Safety, which outlines five specific ways machine learning algorithms can act in dangerous ways their designers don’t intend - something OpenAI has to work to avoid. How listeners can best go about pursuing a career in machine learning and AI development themselves."

Radical Book Club The Decentralized Left by davidzhines (Status 451) - The nature of leftwing organizing and what righties can learn from it. An exposition of multiple books on radical left organization building. Major themes are "doing the work" and "decentralized leadership".

Study Of The Week To Remediate Or Not To Remediate by Freddie deBoer - Should low math proficiency students take remedial algebra or credit bearing statistics. The City University of New York ran an actual randomized study to test this. The study had pretty good controls. For example students were randomly assigned to three groups, participating professors taught one section of each group.

Kenneth Arrow On The Welfare Economics Of Medical Care A Critical Assessment by Artir (Nintil) - "Kenneth Arrow wrote a paper in 1963, Uncertainty and the Welfare Economics of Medical Care. This paper tends to appear in debates regarding whether healthcare can be left to the market (like bread), or if it should feature heavy state involvement. Here I explain what the paper says, and to what extent it is true."

Becoming Stronger Together by b4yes (lesswrong) - "About a year ago, a secret rationalist group was founded. This is a report of what the group did during that year."

The Destruction Of American Cuisine by Small Truths - America used to have a tremendous number of regional cuisines; most are dead. They were killed by supermarkets and frozen food. This has been costly both in terms of culture and health (antibiotic resistance, crop monoculture risk).


Targeting Meritocracy by Scott Alexander - Education and merit are different. Programming is one of the last meritocracies, this lets disadvantaged people get into the field. If a job is high impact we want to hire on merit. The original, literal meaning of meritocracy is important.

Classified Thread 2 Best In Classified by Scott Alexander - Scott is promoting a project to accelerate the trend of rationalists living near each other. There are four houses available for rent near Ward Street in Berkeley. Ward street is currently the rationalist hub in the Bay. Commenters can advertise other projects and services.

Url Of Sandwich by Scott Alexander - Standard links post, somewhat longer than usual.

Opec Thread by Scott Alexander - Bi-weekly open thread. Update on Scott and Katja's travels. Salt Lake City Meetup highlight. Topher Brennan is running for Senate.

Can We Link Perception And Cognition by Scott Alexander - SSC survey optical illusions. "So there seems to be a picture where high rates of perceptual ambiguity are linked to being weirder and (sometimes, in a very weak statistical way) lower-functioning." Speculation about fundamental connections between perception and cognitive style. Ideas for further research.

Change Minds Or Drive Turnout by Scott Alexander - Extreme candidates lower turnout among their own party. Is base turnout really the only thing that matters? Lots of quotes from studies.


Learning From Past Experiences by mindlevelup - "This is about finding ways to quickly learn from past experiences to inform future actions. We briefly touch upon different learning models." Model-based and Model-Free reinforcement learning. Practical advice and examples.

How Long Has Civilization Been Going by Elo (BearLamp) - Human agricultural society is only 342-1000 generations old. "Or when you are 24 years old you have lived one day for every year humans have had written records." Human civilization is only a few hundred lifetimes old.

Choices Are Bad by Zvi Moshowitz - Choices reduce perceived value. Choices require time and energy. Making someone choose is imposing a real cost.

Erisology Of Self And Will: The Gulf by Everything Studies - "Part 4 will discuss some scientific disciplines with bearing on the self, and how their results are interpreted differently by the traditional paradigm vs. the scientific."

Philosophy Vs Duck Tests by Robin Hanson - Focusing on deep structure vs adding up weak cues. If it looks like an x... More discussion of whether most people will consider ems people and/or conscious.

Knowing How To Define by AellaGirl - "These are three ways in which a word can be ‘defined’ – the role it plays in the world around it (the up-definition), synonyms (lateral-definition), and the parts which construct the thing (down-definition)." Applications to morality and free-will.

Change Is Bad by Zvi Moshowitz - "Change space, like mind space, is deep and wide. Friendly change space isn’t quite to change space what friendly mind space is to mind space, but before you apply any filters of common sense, it’s remarkably close." A long list of conditions that mean change has lower expected value. Why we still need to make changes. Keep your eyes open.

Meditation Insights Suffering And Pleasure Are Intrinsically Bound Together by Kaj Sotala - The concrete goal of meditation is to train your peripheral awareness. Much suffering comes from false promises of pleasure. Procrastinating to play a videogame won't actually make you feel better. Temptation loses its power once you truly see the temptations for what they are.

Be My Neighbor by Katja Grace - Katja lives in a rationalist house on ward street in Berkeley and its great. The next step up is a rationalist neighborhood. Katja is promoting the same four houses as Scott. Be her neighbor?

What Value Subagents by G Gordan (Map and Territory) - Splitting the mind into subagents is a common rationalist model (links to Alicorn, Brienne Yudkowsky, etc). However the author's preferred model is a single process with inconsistent preferences. Freud. System 1 and System 2. The rider and the Elephant become one. Subagents as masks. Subagents as epicycles.

The Order Of The Soul by Ben Hoffman (Compass Rose) - The philosophy of accepting things vs the impulse to reshape them. Many philosophical and psychological models split the soul into three. Internalized authority vs seeing the deep structure of moral reality. In some sense math is the easiest thing in the world to learn. School poisons the enjoyment of rational thought. Lockhart's lament. Feynman. Eichmann and thinking structurally.

Aliens Merely Sleeping by Tyler Cowen - The universe is currently too hot for artificial life to be productive. Advanced civilizations might be freezing themselves until the universe cools. "They could achieve up to 10^30 times more than if done today" [short]

Book Reviews by Torello (lesswrong) - Rationalist Adjacent. Each book has an interesting 'ideas per page' rating. Homo Deus, Sapiens, Super-intelligence, Surfaces and Essences, What Technology Wants, Inside Jokes, A Skeptic's Guide to the Mind.

Geometers Scribes Structure Intelligence by Ben Hoffman (Compass Rose) - "How does spatial reasoning lead to formal, logical reasoning?" Fluid and crystalized intelligence. Some history of Philosophy. How social dynamics lead to the evolution of reasoning. Talmudic and Western law, and their oddities. Universal Grammar and connecting with the divine. FizzBuzz.

High Dimensional Societies by Robin Hanson - In high-dimensional space the distance between points varies less. What implications does this have for 'spatial' social science models (e.g. analogues of 1D spectra and 2D graphs)?

Feelings In The Map by Elo (BearLamp) - Confusion is not a property of the external world. The same holds for many emotions. Non-violent communication and speaking from your own perspective.

Lesswrong Is Not About Forum Software by enye-word (lesswrong) - The best way to increase activity on lesswrong is to get back the top posters, especially Scott and Eliezer.

Explication by mindlevelup - "This essay is about explication, the notion of making things specific. I give some examples involving Next Actions and systematization. This might also just be obvious to many people. Part of it is also a rehash of Act Into Uncertainty. Ultimately, explication is about changing yourself."

Concrete Instructions by Elo (BearLamp) - "The objective test of whether the description is concrete is whether the description can be followed by an anonymous person to produce the same experience." Some examples including the 'paper folding game'.

Human Seems Low Dimensional by Robin Hanson - 'Humanness' seems to be a one dimensional variable. Hence people are likely to consider ems conscious and worthy of decent treatment since ems are human-like on many important factors. Some discussion of a study where people rated how human-like various entities were.

Erisology Of Self And Will: A Natural Offering by Everything Studies - A description of naturalism and its relation to science. Daniel Dennett. Many philosophers are still dualists about the self. The self as a composite. Freedom as emergent.

The Hungry Brain by Bayesian Investor - A short review that focuses on the basics of Guynet's ideas and meta-discussion of why Guynet included so much neuroscience. "Guyenet provides fairly convincing evidence that it’s simple to achieve a healthy weight while feeling full. (E.g. the 20 potatoes a day diet)."

Boost From The Best by Robin Hanson - [Age of Em] How many standard deviations above the mean will the best em be? How much better will they be than the second-best em? How much of a wage/leisure premium will the best em receive?

Becoming Stronger Together by b4yes (lesswrong) - "About a year ago, a secret rationalist group was founded. This is a report of what the group did during that year."

In Praise Of Fake Frameworks by Valentine (lesswrong) - "I use a lot of fake frameworks — that is, ways of seeing the world that are probably or obviously wrong in some important way. I think this is an important skill. There are obvious pitfalls, but I think the advantages are more than worth it. In fact, I think the "pitfalls" can even sometimes be epistemically useful."

Letter To Future Layperson by Sailor Vulcan (BYS) - A letter from someone in our age to someone post singularity. Description of the hardships and terrors of pre-singularity life. Emotional and poetic. ~5K words.


Conversation With An Ai Researcher by Jeff Kaufman - The anonymous researcher thinks AI progress is almost entirely driven by hardware and data. Back-propagation has existed for a long time. Go would have taken at least 10 more years if Go AI work had remained constrained by academic budgets.

Openai Baselines PPO by OpenAI - "We’re releasing a new class of reinforcement learning algorithms, Proximal Policy Optimization (PPO), which perform comparably or better than state-of-the-art approaches while being much simpler to implement and tune. PPO has become the default reinforcement learning algorithm at OpenAI because of its ease of use and good performance."

Superintelligence Risk Project Update II by Jeff Kaufman - Jeff's thoughts and the sources he found most useful. The project is wrapping up in a few days. Topics: technical distance to AI; most plausible scenarios of superintelligence risk; OpenPhil's notes on how progress was potentially stalled in cryonics and nanotech.

Real Debate Robots Education by Tyler Cowen - Robots are already becoming part of the classroom. K-12 is an artificial creation anyway. Robots can help autistic or disabled children. Children sometimes trust robots too much.

Robust Adversarial Inputs by OpenAI - "We’ve created images that reliably fool neural network classifiers when viewed from varied scales and perspectives. This challenges a claim from last week that self-driving cars would be hard to trick maliciously since they capture images from multiple scales, angles, perspectives, and the like."

What Is Overfitting Exactly by Andrew Gelman - "If your model is correct, “overfitting” is impossible. In its usual form, “overfitting” comes from using too weak of a prior distribution."

Conversation With Bryce Wiedenbeck by Jeff Kaufman - "AGI is possible, it could be a serious problem, but we can't productively work on it now." AGI will look very different from current technologies. Utility functions are a poor model of human behavior.

Examples Of Superintelligence Risk by Jeff Kaufman - A series of extended quotes describing ways AI with innocent-seeming goals can destroy the world. Authors: Nick Bostrom, Eliezer (and collaborators), Luke M, 80K Hours, Tim Urban. Jeff finds them unpersuasive and asks for better ones. Lots of interesting comments. Eliezer himself comments, describing how 'paperclip maximizers' might realistically occur.

Superintelligence Risk Project Update by Jeff Kaufman - Links to the three most informative readings on AI risk. Details on the large number of people Jeff has talked to. Three fundamental points of view on AI safety. Three fundamental points of disagreement. An update on the original questions Jeff was trying to answer.

Conversation With Michael Littman by Jeff Kaufman - The opinions of a CS professor at Brown: Deep learning is surprisingly brittle in his experience. General intelligence will require large fundamental advances. The AI risk community isn't testing their ideas, so they probably aren't making real progress.


EAGX Relaunch by Roxanne_Heston (EA forum) - The EA Global satellite EAGx conferences have seen low activity. Changes: more funding and flexibility, standardized formats, fewer groups approved, stipends for primary organizers.

Uncertainty Smoothes Out Differences In Impact by The Foundational Research Institute - Many inside view evaluations conclude that one intervention is orders of magnitude more effective than another. Uncertainty significantly reduces these ratios.

Autonomy: A Search For A Measure by Will Pearson (EA forum) - "I shall introduce a relatively formal measure of autonomy, based on the intuition that it is the ability to do things by yourself with what you have. The measure introduced allows you to move from less to more autonomy, without being black and white about it. Then I shall talk about how increasing autonomy fits in with the values of movements such as poverty reduction, ai risk reduction and the reduction of suffering."

A World Where 8 Men Own As Much Wealth As 3.6 Billion People by GiveDirectly - Eight media articles on GiveDirectly, cash transfers and basic income.

More Giving Vs Doing by Jeff Kaufman - EA is moving far more money than it used to and the ramp up will continue. This means direct work has become relatively more valuable. Nonetheless giving money is still useful, capacity isn't being filled. Jeff plans on earning to give based on his personal constraints.

Why I Think The Foundational Research Institute Should Rethink Its Approach by Mike Johnson (EA forum) - A description of FRI. Good things about FRI. FRI's research framework and why the author is worried. Eight long objections. TLDR: "functionalism ("consciousness is the sum-total of the functional properties of our brains") sounds a lot better than it actually turns out to be in practice. In particular, functionalism makes it impossible to define ethics & suffering in a way that can mediate disagreements."

Tranquilism by The Foundational Research Institute - A paper arguing that reducing suffering is more important than promoting happiness. Axiology. Non-consciousness. Common Objections. Conclusion.

An Argument For Why The Future May Be Good by Ben West (EA forum) - Factory farming shows that humans are deeply cruel. Technology enabled this cruelty, perhaps the future will be even darker. Counterargument: Humans are lazy, not evil. Humans as a group will spend at least small amounts altruistically. In the future the cost of reducing suffering will go down low enough that suffering will be rare or non-existent.

Arguments Moral Advocacy by The Foundational Research Institute - "What does moral advocacy look like in practice? Which values should we spread, and how? How effective is moral advocacy compared to other interventions such as directly influencing new technologies? What are the most important arguments for and against focusing on moral advocacy?"

An Argument For Broad And Inclusive by Kaj Sotala (EA forum) - "I argue for a very broad, inclusive EA, based on the premise that the culture of a region is more important than any specific group within that region... As a concrete strategy, I propose a division into low-level and high-level EA"

Not Everybody wants a Goat by GiveDirectly - Eight links on GiveDirectly, Cash Transfers, Effective Altruism and Basic Income.

Mid Year Update by The GiveWell Blog - Encouraging more charities to apply. More research of potential interventions. Short operations recap. GiveWell is focusing more on outreach.

===Politics and Economics:

College Tuition by Tom Bartleby - Sticker prices for college have gone up $15K in twenty years, but the average actual cost has only gone up $2.5K. High prices are almost fully compensated by high aid. Advantage: more equitable access to education. Disadvantages: not everyone knows about the aid, and financial aid is large enough that it can seriously distort family financial decisions.

War Of Wages Part 1 Apples And Walmarts by Jacob Falkovich (Put A Number On It!) - The author thinks the minimum wage hurts the poor. Walmart can't afford higher wages. Copenhagen Interpretation of Ethics: Walmart helps the poor and gets blamed; Apple does nothing for the poor but avoids blame.

Links 10 by Artir (Nintil) - Tons of links. Economics, Psychology, AI, Philosophy, Misc.

Pretend Ask Answer by Ben Hoffman (Compass Rose) - A short dialogue about patriarchy and the meaning of oppression. Defensive actions are often a response to bad faith from the other side. It's not OK to explicitly say you think your partner is arguing in bad faith.

Cultural Studies Ironically Is Something Of A Colonizer by Freddie deBoer - An origin story for Writing Studies. The field's initial methodological diversity. Cultural studies took over the field; empirical work has been pushed out. Evidence that some cultural studies professors really do believe it's fundamentally bigoted to do science and that empirical research endangers marginalized students. The field has become insular.

The Dark Arts Examples From The Harris Adams Debate by Stabilizer (lesswrong) - The author accuses Scott Adams of using various dark arts: changing the subject, motte-and-bailey, euphemisation, diagnosis, excusing, cherry-picking evidence.

Study Of The Week Modest But Real Benefits From Lead Exposure Interventions by Freddie deBoer - Freddie reviews a study he found via SSC. The study had very good controls. The methodology is explained and key graphs are posted and discussed. Scott and Freddie seem to agree on the facts but differ on how large to consider the effects.

Descriptive And Prescriptive Standards by Simon Penner (Status 451) - Leadership means winning the Keynesian Beauty Contest. Public opinion doesn't exist as a stable reality. Prescribing public opinion. Dangers of social reform and leaders twisting the facts to promote noble outcomes.

A Taylorism For All Seasons by Lou (sam[]zdat) - "Christopher Lasch – The Culture of Narcissism, part 1/X, current essay being more of an overview." A Masquerade where you must act out the mask you choose.

Mechanism Agnostic Low Plasticity Educational Realism by Freddie deBoer - Freddie's educational philosophy. People sort into persistent academic strata. Educational attainment is heavily determined by factors outside of schools' control. The mechanism behind differences in academic ability is unknown. Social and political implications.

Kin Aesthetics Excommunicate Me From The Church Of Social Justice by Frances Lee - An SJ-insider's critical opinion of SJ. Fear of being impure. Original sin. Reproducing colonial structures of power and domination within social justice. Everyday Feminism's belittling articles. More humility. Bringing humanity to everyone, even those who have been inhumane.

Study Of The Week To Remediate Or Not To Remediate by Freddie deBoer - Should low-math-proficiency students take remedial algebra or credit-bearing statistics? The City University of New York ran an actual randomized study to test this. The study had pretty good controls: for example, students were randomly assigned to three groups, and participating professors taught one section of each group.

Should We Build Lots More Housing In San Francisco: Three Reasons People Disagree by Julia Galef - For each of the three reasons Julia describes multiple sub-reasons. More housing might not lower prices much. More housing won't help the poor. NIMBY objections might be legitimate.

Kenneth Arrow On The Welfare Economics Of Medical Care A Critical Assessment by Artir (Nintil) - "Kenneth Arrow wrote a paper in 1963, Uncertainty and the Welfare Economics of Medical Care. This paper tends to appear in debates regarding whether healthcare can be left to the market (like bread), or if it should feature heavy state involvement. Here I explain what the paper says, and to what extent it is true."

Thoughts On Doxxing by Ozy (Thing of Things) - CNN found the identity of the guy who made the video of Trump beating up CNN. They implied they would dox him if he continued being racist. Is doxxing him ok? What about doxxing someone who runs r/jailbait? Ozy discusses the practical effect of doxxing and unleashing hate mobs.

On The Seattle Minimum Wage Study Part 2 by Zvi Moshowitz - Several relevant links are included. Seattle's economic boom and worker composition changes are important factors. Zvi dives deep into the numbers and tries to resolve an apparent contradiction.

Radical Book Club The Decentralized Left by davidzhines (Status 451) - The nature of leftwing organizing and what righties can learn from it. An exposition of multiple books on radical left organization building. Major themes are "doing the work" and "decentralized leadership".

On The Seattle Minimum Wage Study Part 1 by Zvi Moshowitz - The claimed effect sizes are huge. Zvi's priors about the minimum wage. Detailed description of some of the paper's methods and how it handles potential issues. Discussion of the raw data. More to come in part 2.


Childcare II by Jeff Kaufman - A timeline of childcare for Jeff's two children. Methods: Staying at home, Daycare, Au pair, Nanny.

Easier Chess Problem by protokol2020 - How many pieces do you need to capture a black queen?

Book Review Mathematics For Computer Science by richard_reitz (lesswrong) - Why the text should be in the MIRI research guide. Intro. Prereqs. Detailed comparisons to similar texts. Complaints.

Information is Physical by Scott Aaronson - Is 'information is physical' a contentful statement? Why 'physics is information' is tautological. A proposed definition. Double-slit experiment. Observation in quantum mechanics. Information takes up a minimum amount of space. Entropy. Information has nowhere to go.

Book Review Working Effectively With Legacy Code By Michael C Feathers by Eli Bendersky - To improve code we must refactor, to refactor we have to test, making code testable may take heroic efforts. "The techniques described by the author are as terrible as the code they're up against."

The Ominouslier Roar Of The Bitcoin Wave by Artem and Venkat (ribbonfarm) - A video visualizing and audiolizing the bitcoin blockchain. A related dialogue.

From Monkey Neurons To The Meta Brain by Hal Morris (ribbonfarm) - Neurons that only fire in response to Jennifer Aniston. Mirror neurons. Theory of mind. The path from copying movement to human-level empathy. Infant development. Dreams as social simulator. Communicating with our models of other people. The rapidly accelerating and dangerous future. We need to keep our minds open to possibilities.

Newtonism Question by protokol2020 - Balancing Forces. Gravity problem.

Short Interview Writing by Tyler Cowen - Tyler Cowen's writing habits. Many concrete details such as when he writes and what program he uses. Some more general thoughts on writing such as Tyler's surprising answer to which are his favorite books on writing.

Unexpected by protokol2020 - Discussion of gaps between primes. "Say, that you have just sailed across some recordly wide composite lake and you are on a prime island again. What can you expect, how much wider will the next record lake be?"

Interacting With A Long Running Child Process In Python by Eli Bendersky - Using the subprocess module to run an http server. Solutions and analysis of common use cases. Lots of code.

4d Mate Problem by protokol2020 - How many queens do you need to get a checkmate in 4D chess?

The Destruction Of American Cuisine by Small Truths - America used to have a tremendous number of regional cuisines, most are dead. They were killed by supermarkets and frozen food. This has been costly both in terms of culture and health (antibiotic resistance, crop monoculture risk)


Sally Satel On Organ Donation by EconTalk - "The challenges of increasing the supply of donated organs for transplantation and ways that public policy might increase the supply." Tax Credits. The ethics of donor compensation.

Podcast The World Needs Ai Researchers Heres How To Become One by 80,000 Hours - "OpenAI’s latest plans and research progress. Concrete Problems in AI Safety, which outlines five specific ways machine learning algorithms can act in dangerous ways their designers don’t intend – something OpenAI has to work to avoid. How listeners can best go about pursuing a career in machine learning and AI development themselves."

88 Must We Accept A Nuclear North Korea by Waking Up with Sam Harris - "Mark Bowden and the problem of a nuclear-armed North Korea."

Triggered by Waking Up with Sam Harris - "Sam Harris and Scott Adams debate the character and competence of President Trump."

Conversation Atul Gawande by Tyler Cowen - The marginal value of health care, AI progress in medicine, fear of genetic engineering, whether the checklist method applies to marriage, FDA regulation, surgical regulation, Michael Crichton and Stevie Wonder, wearables, what makes him weep, Knausgaard and Ferrante, why surgeons leave sponges in patients.

Nneka Jones Tapia by The Ezra Klein Show - The first psychologist to run a prison. 30% of inmates have diagnosed mental health problems. A mental health view of the penal system; balancing punishment and treatment; responsibility versus mental instability; the tension between what we use jail for and what we should use jail for.

Tamar Haspel by EconTalk - "Why technology helps make some foods inexpensive, how animals are treated, the health of the honey bee, and whether eggs from your backyard taste any better than eggs at the grocery."

From Cells To Cities by Waking Up with Sam Harris - "Biological and social systems scale, the significance of fractals, the prospects of radically extending human life, the concept of “emergence” in complex systems, the importance of cities, the necessity for continuous innovation"

Inside The World Of Supertraining: Mark Bell by Tim Feriss - "Mark’s most important lessons for building strength. How to avoid injury and breakdown. Lesser-known training techniques that nearly everyone overlooks. How Mark became a millionaire by offering his gym memberships for free."

Eddie Izzard by The Ezra Klein Show - 27 marathons in 27 days, his process for writing jokes, why he wants to run for parliament, inspiration from Al Franken, borrowing confidence from his future self. What he learned as a street performer, routines based on history and anthropology, World War I, 'cake or death?'. His gender identity, and how he integrated it into his act early on.

Martha Nussbaum by EconTalk - "The tension between acquiring power and living a life of virtue. Topics discussed include Hamilton's relationship with Aaron Burr, Burr's complicated historical legacy, and the role of the humanities in our lives."

Rs 188 Robert Kurzban On Being Strategically Wrong by Rationally Speaking - "Why Everyone (Else) is a Hypocrite." The "modular mind" hypothesis, and how it explains hypocrisy, self-deception, and other seemingly irrational features of human nature.

How long has civilisation been going?

7 Elo 22 July 2017 06:41AM

I didn't realise how short human history was.  Somewhere around 130,000 years ago we were standing upright as we are today.  Somewhere around 50,000 years ago we broadly arrived at:

the fully modern capacity for Culture *

That's roughly when we started "routine use of bone, ivory, and shell to produce formal (standardized) artifacts". Agriculture (humans staying still to grow plants) happened at about 10,000 BCE, or 12,000 years ago.

Writing started happening around 6600BCE* (8600 or so years ago).  

This year is 5777 in the Hebrew calendar.  So someone has been counting for roughly that long.

The pyramids are estimated to have been built around 2600 BCE (4,600 years ago).

Somewhere between then and zero by the Christian calendar we sorted out a lot of metals and how to use them.

And somewhere between then and now we made all the technological advances that led to the present day.

But it's hard to get a feel for that.  Those are just some numbers of years.  Instead I want to relate that to our lives.  And our generations.

12,000 years ago is a good enough point to start paying attention to.

A human generation is normally between 12* and 35* years. Further back, generations would have been closer to 12 years apart; today they are shifting towards 30 years apart (and up to 35). That means the bounds are:

12,000/35 = 342 and 12,000/12 = 1,000

342-1000 generations. That's all we have. In all of humanity. We are SO YOUNG!

(If you take the 8,600-year number as a starting point you get a range of 245-716.)

Let's make it personal

I know my grandparents, which means I have a non-negligible chance of also knowing my grandchildren, and maybe even more (depending on medical technology). I already have a living niece, so I have already experienced 4 generations. Without being unreasonable I can expect to see 5, and dream of seeing 6, 7 or more.

(5/1000) to (7/342) is between half a percent and two percent. That is, to date I will have lived through 0.5%-2% of human generations (ignoring longevity escape for a moment).

Compared to other life numbers:

365 days in a year × 100 years = 36,500 days in a 100-year lifespan.

52 weeks × 100 = 5,200 weeks. And 130,000 years of modern humans at roughly 25 years per generation is also about 5,200 generations - so one week of a 100-year lifespan is equivalent to one generation of humans.

12,000 years / 365 days-per-year = 32.9. Or by the time you are 33 years old, you have lived one day for every year since agriculture began.

8600 years/365 = 23.5 years.  Or when you are 24 years old you have lived one day for every year humans have had written records.

Discrete human lives

If you put an olden-day discrete human life at 25 years (maybe more) and a modern-day discrete life at 90 years, and compare those to the numbers above:

12,000/25 = 480 discrete human lifetimes

12,000/90=133 discrete human lifetimes

8600/25=344 discrete human lifetimes

8600/90=95 discrete human lifetimes

That is to say, the entirety of recorded history is only about 350 independent human lives stacked end to end.

Everything we know in history has been done in somewhere under 480 discrete lifetime run-throughs.
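
These counts are simple enough to sanity-check in a few lines of Python (a quick sketch using floor division, so the generation bounds can differ by one from the rounded figures in the text):

```python
# Back-of-envelope check of the generation and lifetime counts above.
SINCE_AGRICULTURE = 12_000  # years
SINCE_WRITING = 8_600       # years (the post's estimate)

def generation_bounds(span_years, gen_short=12, gen_long=35):
    """(fewest, most) whole generations that fit in a span of years."""
    return span_years // gen_long, span_years // gen_short

def lifetimes(span_years, life_years):
    """Whole human lifetimes stacked end to end in a span."""
    return span_years // life_years

print(generation_bounds(SINCE_AGRICULTURE))  # (342, 1000)
print(generation_bounds(SINCE_WRITING))      # (245, 716)
print(lifetimes(SINCE_AGRICULTURE, 25))      # 480
print(lifetimes(SINCE_AGRICULTURE, 90))      # 133
print(lifetimes(SINCE_WRITING, 25))          # 344
print(lifetimes(SINCE_WRITING, 90))          # 95
```

Swap in your own guesses for generation length or lifespan to see how sensitive the counts are.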

Humanity is so young.  And we forget so easily that 500 lifetimes ago we were nothing.

Meta: Thanks Billy for hanging out and thinking about the numbers with me. This idea came up on a whim, took a day of thinking about, and about an hour to write up.

Original post:

Machine Learning Group

7 Regex 16 July 2017 08:58PM

Following the sign-ups on this post, those of us who want to study machine learning have formed a team.

In an effort to actually get high returns on our time we won't delay, and will instead actually build the skills. First project: work through Python Machine Learning by Sebastian Raschka, with the mid-term goal of being able to implement the "recognizing handwritten digits" code near the end.

As a matter of short-term practicality, we currently don't have the hardware for GPU acceleration. This limits the things we can do, but at this stage of learning most of the time is spent on understanding and implementing the basic concepts anyway.
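
Those basic concepts really don't need a GPU. For instance, the perceptron learning rule that Raschka's book opens with fits in a few lines of stdlib-only Python - a toy sketch on logical OR, not the book's digit-recognition code:

```python
# Minimal perceptron learning rule, stdlib only.
# A sketch on a toy linearly separable problem (logical OR).

def train_perceptron(data, epochs=10, lr=0.1):
    """data: list of (features, label) pairs with label in {-1, +1}."""
    n = len(data[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in data:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            if pred != y:  # update weights only on mistakes
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Logical OR: only (0, 0) maps to the negative class.
data = [((0, 0), -1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # [-1, 1, 1, 1]
```

On linearly separable data like this the perceptron convergence theorem guarantees the loop stops making mistakes; the book's digit-recognition code builds on the same update idea with multi-layer networks.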

Here is our discord invite link if you're interested in joining in on the fun.


September 14th 2017 edit:

There wasn't enough activity to keep it going. After ~chapter 2, the book we chose also focuses on scikit-learn rather than teaching how the algorithms are programmed, which I think was part of the problem, but the main failing, I suspect, is that I personally did not push forward hard enough and got distracted by other projects.

So alas, this effort did not succeed. Feel free to check out the discord to find the people interested in it, and maybe look at the logs to see where we went wrong ;)

90% of problems are recommendation and adaption problems

7 casebash 12 July 2017 04:53AM

Want to improve your memory? Start a business? Fix your dating life?

The chances are that, out of the thousands upon thousands of books and blogs on each of these topics, several already tell you all that you need. I'm not saying this will immediately solve your problem - you will still need to put in the hard yards of experimentation and practice - just that lack of knowledge will no longer be the limiting factor.

This suggests that if we want to be winning at life (as any good rationalist should), what is most important isn't creating brilliant and completely unprecedented approaches to these problems, but rather taking up ideas that already exist.

The first problem is recommendation: finding which, out of all the thousands of books out there, are most helpful for a particular problem. Unfortunately, recommendation is not an easy problem at all. Two people may both be dealing with procrastination, but what works for one may not work for the other. Further, even for the same idea, what counts as a clear explanation is incredibly subjective: some people want more detail, others less; some people find a given example really compelling, others don't. Recommendations are generally either one person's individual picks or whatever received the most votes, but there are probably other methods of producing recommendations that should be looked into, such as asking people survey questions and matching on those, or asking people to rate a book on different factors.
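
One of those other methods - matching people to past readers via survey answers - can be sketched in a few lines. Everything here (the 1-5 answer scales, the respondents, which book helped whom) is hypothetical illustration, not real data:

```python
# Sketch of survey-answer matching: recommend the book that worked for
# the past respondent most similar to you. All data is hypothetical.
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two equal-length answer vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Each past respondent: (survey answers on a 1-5 scale, book that helped them)
respondents = [
    ((5, 1, 3), "Getting Things Done"),
    ((1, 5, 2), "Deep Work"),
    ((4, 2, 3), "The Procrastination Equation"),
]

def recommend(my_answers):
    """Return the book of the most similar past respondent."""
    return max(respondents, key=lambda r: cosine(my_answers, r[0]))[1]

print(recommend((5, 2, 3)))  # "The Procrastination Equation"
```

Cosine similarity is just one plausible matching rule; the hard part is designing a survey that captures the dimensions along which advice actually transfers between people.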

The second problem is adaption. Although you shouldn't need to create any new ideas, it is likely that certain elements will need more explanation and certain elements less. For example, when writing for the rationalist community, you may need to be more precise and clearer about when you are talking figuratively rather than literally. Alternatively, you can probably just link people to common ideas such as the map and the territory without having to explain them.

I'll finish with a rhetorical question: what percentage of solutions here are new ideas and what percentage are existing solutions? Are these in the right ratio?

UPDATE: (Please note: this article is not about time spent learning vs. time spent practising, but about existing ideas vs. new ideas. The reason this is the focus is that LW can potentially recommend or adapt resources, but it can't practise for you!)







Call to action

7 Elo 07 July 2017 09:10AM

Core knowledge: List of common human goals
Part 1: Exploration-Exploitation
Part 1a: The application of the secretary problem to real life dating
Part 1b: Adding and removing complexity from models
Part 2: Bargaining trade-offs to your brain
Part 2a.1: A strategy against the call of the void
Part 2a.2: The call of the void
Part 2b.1: Empirical time management
Part 2b.2: Memory and notepads
Part 3: The time that you have
Part 3a: A purpose finding exercise
Part 3b: Schelling points, trajectories and iteration cycles
Part 4: What does that look like in practice?
Part 4a: Lost purposes – Doing what’s easy or what’s important
Part 4b.1: In support of yak shaving
Part 4b.2: Yak shaving 2
Part 4c: Filter on the way in, filter on the way out…
Part 4d.1: Scientific method
Part 4d.2: Quantified self
Part 5: Skin in the game
Part 6: Call to action

A note about the contents list: the main parts form the core sequence; the a, b, c parts are linked from the main posts. If you understand them in the context where they are mentioned you can probably skip them, but if you need the explanation, click through.

If you understand exploration and exploitation, you realise that sometimes you need to stop exploring and take advantage of what you know, based on the value of the information that you have. At other times you will find your exploitations giving diminishing returns; you are stagnating and need to dive into the currents again and take some risks. If you are accurately calibrated, you will know what to do: whether to sharpen the saw, educate yourself more, or cut down the tree right now.

If you are not calibrated yet and want to start, you might want to empirically assess your time. In light of your time use all on one page, you might ask yourself - am I exploring and exploiting enough? Remember that you probably make the most measurable and ongoing returns in the exploitation phase; exploration might seem more fun (finding exciting and new knowledge) and is where you grow, but are you sure that's what you want to be doing, in light of the value returned by exploiting?

Why were you not already exploring and exploiting in the right ratio? Brains are tricky things. You might need to bargain trade-offs to your own brain. You might be dealing with a System2!understanding of what you want to do while trying to carry out a System1!motivated_action. The best thing to do is to ask the internal disagreeing parts: “How could I resolve this disagreement in my head?”, “How will I resolve my indecision at this time?“, “How do I go about gathering evidence to better make this decision?”. This all starts with noticing: noticing the disagreement, noticing the chance to resolve the stress in your head…

Sometimes we do things for bad, dumb, silly, irrational, frustrating, self-defeating, or irrelevant reasons. All you really have is the time you have. People take actions based on their desires and goals. You have 168 hours a week; as long as you are happy with how you spend them, that's fine. If you are not content, that's when you have the choice to do something else.

Look at all the things that you are doing, or not doing, that do not contribute to a specific goal (a process called the Immunity to Change).  This hits on a universal: what you are doing with your time is everything you are choosing not to do with your time.  Every choice you make carries an opportunity cost.  And that’s where we come to revealed preferences.

Revealed preferences are distinctly different from stated preferences.  I would argue that revealed preferences are the only real preferences, because they are made up of what actually happens, not just what you say you want to happen.  They are firmly grounded in reality: the reality of what you choose to do with your time (what you chose to do with your time yesterday).

On the one hand you can introspect, consider your existing revealed preferences, and let that inform your future judgements and actions.  As a person who has always watched every season of your favourite TV show, you might decide to be the type of person for whom TV shows matter more than <exercise|relationships|learning> or any number of things.  Good!  Make that decision with pride!  What you cared about can be what you want to care about in the future, but it also might not be.  That’s why you might want to take stock of what you are doing and align it with your desired goals.  Change what you reveal with your ongoing actions so that they reflect who you want to be as a person.

Do you have skin in the game?  Who do you want to be as a person?  It’s a hard problem.  You want to figure out your desired goals.  I don’t know exactly how to do that, but I have some ideas: you can look around you at how other people do it, and you can consider common human goals.  Even without explaining why, “knowing what your goals are” is important, even if it takes a while to work them out.

If you know what your goals are, you can compare them against the list of your empirical time use.  Realise that everything you do takes time.  If these were your revealed preferences, what do you reveal that you care about?  But wait, don’t stop there; consider your potential:

Potential To:

  • Discover/Define/Declare what you really care about.
  • Define what results you think you can aim for within what you really care about.
  • Define what actions you can take to yield a trajectory towards those results.
  • Stick to it because it’s what you really want to do.  What you care about.
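The goals-versus-time comparison a few paragraphs up can also be made concrete.  One sketch, with a hypothetical mapping of weekly activities to declared goals (all names and hours invented for illustration):

```python
# Hypothetical weekly activities (hours) and the goal each one serves
time_log = {"work": 40, "TV": 20, "exercise": 2, "learning": 3}
goal_of = {"work": "career", "exercise": "health", "learning": "growth"}
# Note: "TV" is deliberately missing from goal_of, so it maps to no goal.

# Aggregate hours per goal; activities serving no declared goal are flagged
goal_hours = {}
for activity, hours in time_log.items():
    goal = goal_of.get(activity, "(no goal)")
    goal_hours[goal] = goal_hours.get(goal, 0) + hours

total = sum(time_log.values())
for goal, hours in sorted(goal_hours.items(), key=lambda kv: -kv[1]):
    print(f"{goal:10s} {hours:3d}h  {hours / total:.0%}")
```

Reading the output as a revealed-preference table makes the mismatch visible: in this invented log, the "(no goal)" bucket outweighs health and growth combined.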

That’s what’s important, right?  Doing the work you value because it leads towards your goals (the things you care about).  If you are not doing that, then maybe your revealed preferences are showing that you are not a very strategic human.  There is a solution to that, and keeping yourself on track looks pretty easy once you think about it.

And if you find parts of your brain doing what they want to the detriment of your other goals, you need to reason with them.  As for this whole process of defining what you really care about and then heading towards it: know that it needs doing ASAP, or you are already making bad trade-offs with your time.

Consider this post a call to action as a chance to be the you that you really want to be!  Get to it! With passion and joy!

Core knowledge: List of common human goals
Part 1: Exploration-Exploitation
Part 1a: The application of the secretary problem to real life dating
Part 1b: Adding and removing complexity from models
Part 2: Bargaining trade-offs to your brain
Part 2a.1: A strategy against the call of the void
Part 2a.2: The call of the void
Part 2b.1: Empirical time management
Part 2b.2: Memory and notepads
Part 3: The time that you have
Part 3a: A purpose finding exercise
Part 3b: Schelling points, trajectories and iteration cycles
Part 4: What does that look like in practice?
Part 4a: Lost purposes – Doing what’s easy or what’s important
Part 4b.1: In support of yak shaving
Part 4b.2: Yak shaving 2
Part 4c: Filter on the way in, filter on the way out…
Part 4d.1: Scientific method
Part 4d.2: Quantified self
Part 5: Skin in the game
Part 6: Call to action

A note about the contents list: the main parts are listed here, and the a, b, c parts are linked to from the main posts.  If you understand them in the context in which they are mentioned you can probably skip them, but if you need the explanation, click through.

Meta: This took about 3 hours to write, and was held up by many distractions in my life.

I am not done.  Not by any means.  I feel like I left some unanswered questions along the way.  Things like:

  • “I don’t know what is good, am I somehow bound by a duty to go seeking out what is good or truly important to go do that?”
  • “So maybe I know what’s good, but I keep wondering if it is the best thing to do.  How can I be sure?”
  • “I am sure it is the best thing but I don’t seem to be doing it.  What’s up?”
  • “I am doing the things I think are right but other people keep trying to tell me I am not.  What now?”
  • “I have a track record of getting it wrong a lot.  How do I even trust myself this time?”
  • “I am doing the thing but I feel wrong, what should I do about that?”

And many more.  But I see other problems worth writing about first.
