
LW 2.0 Strategic Overview

47 Habryka 15 September 2017 03:00AM

Update: We're in open beta! At this point you can sign up, or log in with your LW 1.0 account (if the latter, we did not copy over your passwords, so hit "forgot password" to receive a password-reset email).

Hey Everyone! 

This is the post for discussing the vision that I and the rest of the LessWrong 2.0 team have for the new version of LessWrong, and to just generally bring all of you up to speed with the plans for the site. This post has been overdue for a while, but I was busy coding on LessWrong 2.0, and I am myself not that great of a writer, which means writing things like this takes quite a long time for me, and so this ended up being delayed a few times. I apologize for that.

With Vaniver’s support, I’ve been the primary person working on LessWrong 2.0 for the last 4 months, spending most of my time coding while also talking to various authors in the community, doing dozens of user-interviews and generally trying to figure out how to make LessWrong 2.0 a success. Along the way I’ve had support from many people, including Vaniver himself who is providing part-time support from MIRI, Eric Rogstad who helped me get off the ground with the architecture and infrastructure for the website, Harmanas Chopra who helped build our Karma system and did a lot of user-interviews with me, Raemon who is doing part-time web-development work for the project, and Ben Pace who helped me write this post and is basically co-running the project with me (and will continue to do so for the foreseeable future).

We are running on charitable donations, with $80k in funding from CEA in the form of an EA grant and $10k in donations from Eric Rogstad, which will go to salaries and various maintenance costs. We are planning to continue running this whole project on donations for the foreseeable future, and legally this is a project of CFAR, which helps us a bunch with accounting and allows people to get tax benefits from giving us money. 

Now that the logistics are out of the way, let’s get to the meat of this post. What is our plan for LessWrong 2.0, what were our key assumptions in designing the site, what does this mean for the current LessWrong site, and what should we as a community discuss more to make sure the new site is a success?

Here’s the rough structure of this post: 

  • My perspective on why LessWrong 2.0 is a project worth pursuing
  • A summary of the existing discussion around LessWrong 2.0 
  • The models that I’ve been using to make decisions for the design of the new site, and some of the resulting design decisions
  • A set of open questions to discuss in the comments where I expect community input/discussion to be particularly fruitful 

Why bother with LessWrong 2.0?  

I feel that, independently of how many things were and are wrong with the site and its culture, over the course of its history it has been one of the few places in the world that I know of where a spark of real discussion has happened, and where some real intellectual progress was made on actually important problems. So let me begin with a summary of things that I think the old LessWrong got right, which are essential to preserve in any new version of the site:

On LessWrong…

 

  • I can contribute to intellectual progress, even without formal credentials 
  • I can sometimes have discussions in which the participants focus on trying to convey their true reasons for believing something, as opposed to rhetorically using all the arguments that support their position independent of whether those have any bearing on their belief
  • I can talk about my mental experiences in a broad way, such that my personal observations, scientific evidence and reproducible experiments are all taken into account and given proper weighting. There is no narrow methodology I need to conform to to have my claims taken seriously.
  • I can have conversations about almost all aspects of reality, independently of what literary genre they are associated with or scientific discipline they fall into, as long as they seem relevant to the larger problems the community cares about
  • I am surrounded by people who are knowledgeable in a wide range of fields and disciplines, who take the virtue of scholarship seriously, and who are interested and curious about learning things that are outside of their current area of expertise
  • We have a set of non-political shared goals for which many of us are willing to make significant personal sacrifices
  • I can post long-form content that takes up as much space as it needs to, and can expect a reasonably high level of patience from my readers in trying to understand my beliefs and arguments
  • Content that I am posting on the site gets archived, is searchable and often gets referenced in other people's writing, and if my content is good enough, can even become common knowledge in the community at large
  • The average competence and intelligence on the site is high, which allows discussion to generally happen on a high level and allows people to make complicated arguments and get taken seriously
  • There is a body of writing that is generally assumed to have been read by most people participating in discussions, and that establishes philosophical, social and epistemic principles that serve as a foundation for future progress (currently that body of writing largely consists of the Sequences, but also includes some of Scott’s writing, some of Luke’s writing and some individual posts by other authors) 

 

When making changes to LessWrong, I think it is very important to preserve all of the above features. I don’t think all of them are universally present on LessWrong, but all of them are there at least some of the time, and no other place that I know of comes even remotely close to having all of them as often as LessWrong has. Those features are what motivated me to make LessWrong 2.0 happen, and set the frame for thinking about the models and perspectives I will outline in the rest of the post. 

I also think Anna, in her post about the importance of a single conversational locus, says another, somewhat broader thing, that is very important to me, so I’ve copied it in here: 

1. The world is locked right now in a deadly puzzle, and needs something like a miracle of good thought if it is to have the survival odds one might wish the world to have.

2. Despite all priors and appearances, our little community (the "aspiring rationality" community; the "effective altruist" project; efforts to create an existential win; etc.) has a shot at seriously helping with this puzzle.  This sounds like hubris, but it is at this point at least partially a matter of track record.

3. To aid in solving this puzzle, we must probably find a way to think together, accumulatively. We need to think about technical problems in AI safety, but also about the full surrounding context -- everything to do with understanding what the heck kind of a place the world is, such that that kind of place may contain cheat codes and trap doors toward achieving an existential win. We probably also need to think about "ways of thinking" -- both the individual thinking skills, and the community conversational norms, that can cause our puzzle-solving to work better.

4. One feature that is pretty helpful here, is if we somehow maintain a single "conversation", rather than a bunch of people separately having thoughts and sometimes taking inspiration from one another.  By "a conversation", I mean a space where people can e.g. reply to one another; rely on shared jargon/shorthand/concepts; build on arguments that have been established in common as probably-valid; point out apparent errors and then have that pointing-out be actually taken into account or else replied-to).

5. One feature that really helps things be "a conversation" in this way, is if there is a single Schelling set of posts/etc. that people (in the relevant community/conversation) are supposed to read, and can be assumed to have read.  Less Wrong used to be such a place; right now there is no such place; it seems to me highly desirable to form a new such place if we can.

6. We have lately ceased to have a "single conversation" in this way.  Good content is still being produced across these communities, but there is no single locus of conversation, such that if you're in a gathering of e.g. five aspiring rationalists, you can take for granted that of course everyone has read posts such-and-such.  There is no one place you can post to, where, if enough people upvote your writing, people will reliably read and respond (rather than ignore), and where others will call them out if they later post reasoning that ignores your evidence.  Without such a locus, it is hard for conversation to build in the correct way.  (And hard for it to turn into arguments and replies, rather than a series of non sequiturs.)

The Existing Discussion Around LessWrong 2.0

Now that I’ve given a bit of context on why I think LessWrong 2.0 is an important project, it seems sensible to look at what has been said so far, so we don’t have to repeat the same discussions over and over again. There has already been a lot of discussion about the decline of LessWrong, the need for a new platform and the design of LessWrong 2.0, and I won’t be able to summarize it all here, but I can try my best to summarize the most important points, and give a bit of my own perspective on them.

Here is a comment by Alexandros, on Anna’s post I quoted above:

Please consider a few gremlins that are weighing down LW currently:

1. Eliezer's ghost -- He set the culture of the place, his posts are central material, has punctuated its existence with his explosions (and refusal to apologise), and then, upped and left the community, without actually acknowledging that his experiment (well kept gardens etc) has failed. As far as I know he is still the "owner" of this website, retains ultimate veto on a bunch of stuff, etc. If that has changed, there is no clarity on who the owner is (I see three logos on the top banner, is it them?), who the moderators are, who is working on it in general. I know tricycle are helping with development, but a part-time team is only marginally better than no-team, and at least no-team is an invitation for a team to step up.

[...]

...I consider Alexei's hints that Arbital is "working on something" to be a really bad idea, though I recognise the good intention. Efforts like this need critical mass and clarity, and diffusing yet another wave of people wanting to do something about LW with vague promises of something nice in the future... is exactly what I would do if I wanted to maintain the status quo for a few more years.

Any serious attempt at revitalising lesswrong.com should focus on defining ownership and plan clearly. A post by EY himself recognising that his vision for lw 1.0 failed and passing the baton to a generally-accepted BDFL would be nice, but i'm not holding my breath. Further, I am fairly certain that LW as a community blog is bound to fail. Strong writers enjoy their independence. LW as an aggregator-first (with perhaps ability to host content if people wish to, like hn) is fine. HN may have degraded over time, but much less so than LW, and we should be able to improve on their pattern.

I think if you want to unify the community, what needs to be done is the creation of a hn-style aggregator, with a clear, accepted, willing, opinionated, involved BDFL, input from the prominent writers in the community (scott, robin, eliezer, nick bostrom, others), and for the current lesswrong.com to be archived in favour of that new aggregator. But even if it's something else, it will not succeed without the three basic ingredients: clear ownership, dedicated leadership, and as broad support as possible to a simple, well-articulated vision. Lesswrong tried to be too many things with too little in the way of backing.

I think Alexandros hits a lot of good points here, and luckily these are actually some of the problems I am most confident we have solved. The biggest bottleneck – the thing that I think caused most other problems with LessWrong – is simply that there was nobody with the motivation, the mandate and the resources to fight against the inevitable decline into entropy. I feel that the correct response to the question of “why did LessWrong decline?” is to ask “why should it have succeeded?”. 

In the absence of anyone with the mandate trying to fix all the problems that naturally arise, we should expect any online platform to decline. Most of the problems that will be covered in the rest of this post are things that could have been fixed many years ago, but simply weren’t, because nobody with the mandate put serious resources into fixing them. I think the cause of this was a diffusion of responsibility, and a lot of vague promises of problems getting solved by vague projects in the future. I myself put off working on LessWrong for a few months because I had some vague sense that Arbital would solve the problems that I was hoping to solve, even though Arbital never really promised to solve them. Then Arbital’s plan ended up not working out, and I had wasted months of precious time. 

Since this comment was written, Vaniver has been somewhat unanimously declared benevolent dictator for life of LessWrong. He and I have gotten various stakeholders on board, received funding, have a vision, and have free time – and so we have the mandate, the resources and the motivation to not make the same mistakes. With our new codebase, link posts are now something I can build in an afternoon, rather than something that requires three weeks of getting permissions from various stakeholders, performing complicated open-source and confidentiality rituals, and hiring a new contractor who has to first understand the mysterious Reddit fork from 2008 that LessWrong is based on. This means at least the problem of diffusion of responsibility is solved. 


Scott Alexander also made a recent comment on Reddit on why he thinks LessWrong declined, and why he is somewhat skeptical of attempts to revive the website: 

1. Eliezer had a lot of weird and varying interests, but one of his talents was making them all come together so you felt like at the root they were all part of this same deep philosophy. This didn't work for other people, and so we ended up with some people being amateur decision theory mathematicians, and other people being wannabe self-help gurus, and still other people coming up with their own theories of ethics or metaphysics or something. And when Eliezer did any of those things, somehow it would be interesting to everyone and we would realize the deep connections between decision theory and metaphysics and self-help. And when other people did it, it was just "why am I reading this random bulletin board full of stuff I'm not interested in?"

2. Another of Eliezer's talents was carefully skirting the line between "so mainstream as to be boring" and "so wacky as to be an obvious crackpot". Most people couldn't skirt that line, and so ended up either boring, or obvious crackpots. This produced a lot of backlash, like "we need to be less boring!" or "we need fewer crackpots!", and even though both of these were true, it pretty much meant that whatever you posted, someone would be complaining that you were bad.

3. All the fields Eliezer wrote in are crackpot-bait and do ring a bunch of crackpot alarms. I'm not just talking about AI - I'm talking about self-help, about the problems with the academic establishment, et cetera. I think Eliezer really did have interesting things to say about them - but 90% of people who try to wade into those fields will just end up being actual crackpots, in the boring sense. And 90% of the people who aren't will be really bad at not seeming like crackpots. So there was enough kind of woo type stuff that it became sort of embarrassing to be seen there, especially given the thing where half or a quarter of the people there or whatever just want to discuss weird branches of math or whatever.

4. Communities have an unfortunate tendency to become parodies of themselves, and LW ended up with a lot of people (realistically, probably 14 years old) who tended to post things like "Let's use Bayes to hack our utility functions to get superfuzzies in a group house!". Sometimes the stuff they were posting about made sense on its own, but it was still kind of awkward and the sort of stuff people felt embarrassed being seen next to.

5. All of these problems were exacerbated by the community being an awkward combination of Google engineers with physics PhDs and three startups on one hand, and confused 140 IQ autistic 14 year olds who didn't fit in at school and decided that this was Their Tribe Now on the other. The lowest common denominator that appeals to both those groups is pretty low.

6. There was a norm against politics, but it wasn't a very well-spelled-out norm, and nobody enforced it very well. So we would get the occasional leftist who had just discovered social justice and wanted to explain to us how patriarchy was the real unfriendly AI, the occasional rightist who had just discovered HBD and wanted to go on a Galileo-style crusade against the deceptive establishment, and everyone else just wanting to discuss self-help or decision-theory or whatever without the entire community becoming a toxic outcast pariah hellhole. Also, this one proto-alt-right guy named Eugene Nier found ways to exploit the karma system to mess with anyone who didn't like the alt-right (ie 98% of the community) and the moderation system wasn't good enough to let anyone do anything about it.

7. There was an ill-defined difference between Discussion (low-effort random posts) and Main (high-effort important posts you wanted to show off). But because all these other problems made it confusing and controversial to post anything at all, nobody was confident enough to post in Main, and so everything ended up in a low-effort-random-post bin that wasn't really designed to matter. And sometimes the only people who did post in Main were people who were too clueless about community norms to care, and then their posts became the ones that got highlighted to the entire community.

8. Because of all of these things, Less Wrong got a reputation within the rationalist community as a bad place to post, and all of the cool people got their own blogs, or went to Tumblr, or went to Facebook, or did a whole bunch of things that relied on illegible local knowledge. Meanwhile, LW itself was still a big glowing beacon for clueless newbies. So we ended up with an accidental norm that only clueless newbies posted on LW, which just reinforced the "stay off LW" vibe.

I worry that all the existing "resurrect LW" projects, including some really high-effort ones, have been attempts to break coincidental vicious cycles - ie deal with 8 and the second half of 7. I think they're ignoring points 1 through 6, which is going to doom them.

At least judging from where my efforts went, I would agree that I have spent a pretty significant amount of resources on fixing the problems that Scott described in points 6 and 7, but I also spent about equal time thinking about how to fix 1-5. The broader perspective that I have on those latter points is, I think, best illustrated by an analogy: 

When I read Scott’s comments about how there was just a lot of embarrassing and weird writing on LessWrong, I remember my experiences as a Computer Science undergraduate. When the median undergrad makes claims about the direction of research in their field, or some other big claim about their field that isn't explicitly taught in class, or when you ask an undergraduate physics student what they think about how to do physics research, or what ideas they have for improving society, they will often give you quite naive-sounding answers (I have heard everything from “I am going to build a webapp to permanently solve political corruption” to “here’s my idea of how we can transmit large amounts of energy wirelessly by using low-frequency tesla-coils”.) I don’t think we should expect anything different on LessWrong. I actually think we should expect it to be worse here, since we are actively encouraging people to have opinions, as opposed to the more standard practice of academia, which seems to consist of treating undergraduates as slightly more intelligent dogs that need to be conditioned with the right mixture of calculus homework problems and mandatory class attendance, so that they might be given the right to have any opinion at all if they spend 6 more years getting their PhD. 

So while I do think that Eliezer’s writing encouraged topics that were slightly more likely to attract crackpots, I think a large chunk of the weird writing is just a natural consequence of being an intellectual community that has a somewhat constant influx of new members. 

And having undergraduates go through the phase where they have bad ideas, and then have it explained to them why their ideas are bad, is important. I actually think it’s key to learning any topic more complicated than high-school mathematics. It takes a long time until someone can productively contribute to the intellectual progress of an intellectual community (in academia it’s at least 4 years, though usually more like 8), and during all that period they will say very naive and silly-sounding things (though less and less so as time progresses). I think LessWrong can do significantly better than 4 years, but we should still expect that it will take new members time to acclimate and get used to how things work. (Based on user-interviews with a lot of top commenters, it usually took something like 3-6 months until someone felt comfortable commenting frequently, and about 6-8 months until someone felt comfortable posting frequently. This strikes me as a fairly reasonable expectation for the future.) 

And I do think that the rationality community has many graduate students and tenured professors who are not Eliezer, who do not sound like crackpots, who can speak reasonably about the same topics Eliezer talked about, and who I feel are acting with a very similar focus to what Eliezer tried to achieve: Luke Muehlhauser, Carl Shulman, Anna Salamon, Sarah Constantin, Ben Hoffman, Scott himself and many more, most of whose writing would fit very well on LessWrong (and often still ends up there). 

But all of this doesn’t mean what Scott describes isn’t a problem. It’s still a bad experience for everyone to constantly have to read through bad first-year undergrad essays, but I think the solution can’t involve those essays not getting written at all. Instead it has to involve some way of not forcing everyone to see those essays, while still allowing them to get promoted if someone shows up who writes something insightful from day one. I am currently planning to tackle this mostly with improvements to the karma system, as well as changes to the layout of the site, where users primarily post to their own profiles and can get content promoted to the frontpage by moderators and high-karma members. A feed consisting solely of content of the quality of the average Scott, Anna, Ben or Luke post would be an amazing read, and is exactly the kind of feed I am hoping to create with LessWrong, while still allowing users to engage with the rest of the content on the site (more on that later).

I would very very roughly summarize what Scott says in the first 5 points as two major failures: first, a failure to separate the signal from the noise, and second, a failure to enforce moderation norms when people did turn out to be crackpots, or were just unable to productively engage with the material on the site. Both are natural consequences of the abandonment of promoting things to Main, of the fact that discussion is by default ordered by recency rather than by some kind of scoring system, and of the fact that the moderation tools were completely insufficient (but more on the details of that in the next section).


My models of LessWrong 2.0

I think there are three major bottlenecks that LessWrong is facing (after the zeroth bottleneck, which is just that no single group had the mandate, resources and motivation to fix any of the problems): 

  1. We need to be able to build on each other’s intellectual contributions, archive important content and avoid primarily being news-driven
  2. We need to improve the signal-to-noise ratio for the average reader, and only broadcast the most important writing
  3. We need to actively moderate in a way that is both fun for the moderators, and helps people avoid future moderation policy violations

I. 

The first bottleneck for our community, and the biggest I think, is the ability to build common knowledge. On Facebook, I can read an excellent and insightful discussion, yet one week later I’ve forgotten it. Even if I remember it, I don’t link to the Facebook post (because linking to Facebook posts/comments is hard) and it doesn’t have a title, so I don’t casually refer to it in discussion with friends. On Facebook, ideas don’t get archived and built upon, they get discussed and forgotten. To put this another way: the reason we cannot build on the best ideas this community has had over the last five years is that we don’t know what they are. There are only fragments of memories of Facebook discussions, which maybe some other people remember. We have the Sequences, but there’s no way to build on them together as a community, and thus there is stagnation.

Contrast this with science. Modern science is plagued by many severe problems, but of humanity’s institutions it has perhaps the strongest record of being able to build successfully on its previous ideas. The physics community has this system where the new ideas get put into journals, and then eventually if they’re new, important, and true, they get turned into textbooks, which are then read by the upcoming generation of physicists, who then write new papers based on the findings in the textbooks. All good scientific fields have good textbooks, and your undergrad years are largely spent reading them. I think the rationality community has some textbooks, written by Eliezer (and we also compiled a collection of Scott’s best posts that I hope will become another textbook of the community), but there is no expectation that if you write a good enough post/paper that your content will be included in the next generation of those textbooks, and the existing books we have rarely get updated. This makes the current state of the rationality community analogous to a hypothetical state of physics, had physics no journals, no textbook publishers, and only one textbook that is about a decade old. 

This seems to me to be what Anna is talking about: the purpose of the single locus of conversation is the ability to have common knowledge and build on it. The goal is to have every interaction with the new LessWrong feel like it is either helping you grow as a rationalist or having you contribute to the lasting intellectual progress of the community. If you write something good enough, it should enter the canon of the community. If you make a strong enough case against some existing piece of canon, you should be able to replace or alter that canon. I want writing on the new LessWrong to feel timeless. 

To achieve this, we’ve built the following things: 

  • We created a section for core canon on the site that is prominently featured on the frontpage and right now includes Rationality: A-Z, The Codex (a collection of Scott’s best writing, compiled by Scott and us), and HPMOR. Over time I expect these to change, and there is a good chance HPMOR will move to a different section of the site (I am considering adding an “art and fiction” section) and will be replaced by a new collection representing new core ideas in the community.
  • Sequences are now a core feature of the website. Any user can create sequences of their own and other users’ posts, and those sequences themselves can be voted on and commented on. The goal is to help users compile the best writing on the site, and to make it so that good timeless writing gets read by users for a long time, as opposed to disappearing into the void. Separating creative and curatorial effort allows the sort of professional specialization that you see in serious scientific fields.
  • Of those sequences, the most upvoted and most important ones will be chosen to be prominently featured on other sections of the site, allowing users easy access to read the best content on the site and get up to speed with the current state of knowledge of the community.
  • For all posts and sequences, the site keeps track of how much of them you’ve read (including importing view-tracking from old LessWrong, so you will get to see how much of the original sequences you’ve actually read). And if you’ve read all of a sequence you get a small badge that you can choose to display right next to your username, which helps people see how much of the site’s content you are familiar with.
  • The design of the core content of the site (e.g. the Sequences, the Codex, etc.) tries to communicate a certain permanence of contributions. The aesthetic feels intentionally book-like, which I hope gives people a sense that their contributions will be archived, accessible and built-upon.
    One important issue with this is that there also needs to be a space for sketches on LessWrong. To quote Paul Graham: “What made oil paint so exciting, when it first became popular in the fifteenth century, was that you could actually make the finished work from the prototype. You could make a preliminary drawing if you wanted to, but you weren't held to it; you could work out all the details, and even make major changes, as you finished the painting.”
  • We do not want to discourage sketch-like contributions, and want to build functionality that helps people build a finished work from a prototype (this is one of the core competencies of Google Docs, for example).

And there are some more features the team is hoping to build in this direction, such as: 

  • Easier archiving of discussions, by allowing discussions to be turned into top-level posts (similar to what Ben Pace did with a recent Facebook discussion between Eliezer, Wei Dai, Stuart Armstrong, and some others, which he turned into a post on LessWrong 2.0).
  • The ability to continue reading the content you’ve started reading with a single click from the frontpage. Here's an example logged-in frontpage:

[Screenshot: example logged-in frontpage]

II.

The second bottleneck is improving the signal-to-noise ratio. It needs to be possible for someone to subscribe to only the best posts on LessWrong, and only the most important content needs to be turned into common knowledge. 

I think this is a lot of what Scott was pointing at in his summary about the decline of LessWrong. We need a way for people to learn from their mistakes, while also not flooding the inboxes of everyone else, and while giving people active feedback on how to improve in their writing. 

The site structure: 

To solve this bottleneck, here is the rough content structure that I am currently planning to implement on LessWrong: 

The writing experience: 

If you write a post, it first shows up nowhere else but your personal user page, which you can basically think of as a Medium-style blog. If other users have subscribed to you, your post will then show up on their frontpages (or only show up after it hits a certain karma threshold, if the users who subscribed to you set a minimum karma threshold). If you have enough karma you can decide to promote your content to the main frontpage feed (where everyone will see it by default), or a moderator can decide to promote your content (if you allowed promoting on that specific post). The frontpage itself is sorted by a scoring system based on the HN algorithm, which uses a combination of total karma and how much time has passed since the creation of the post. 
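To make this concrete, here is a minimal sketch of that kind of scoring rule in Python. The gravity exponent of 1.8 is the commonly cited Hacker News value; the exact constants (and how weighted karma feeds in) are assumptions for illustration, not the final LessWrong 2.0 parameters:

    from datetime import datetime, timezone

    def frontpage_score(karma: float, posted_at: datetime,
                        gravity: float = 1.8) -> float:
        """HN-style ranking: total karma divided by a power of the post's age.

        A new post can rank highly with little karma; as it ages it needs
        ever more karma to stay up, so the frontpage stays fresh without
        being purely news-driven. posted_at must be timezone-aware.
        """
        age_hours = (datetime.now(timezone.utc) - posted_at).total_seconds() / 3600
        return karma / (age_hours + 2) ** gravity

Sorting all frontpage-eligible posts by this score, descending, would yield the default feed ordering.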

If you write a good comment on a post, a moderator or a high-karma user can promote that comment to the frontpage as well, where we will also feature the best comments on recent discussions. 

Meta

Meta will just be a section of the site for discussing changes to moderation policies, issues and bugs with the site, site features, and general site-policy issues. Basically the thing that all StackExchanges have. Karma here will not add to your total karma and will not give you more influence over the site. 

Featured posts

In addition to the main feed, there is a promoted post section that you can subscribe to via email and RSS, which has on average three posts a week; for now these are just going to be chosen by moderators and editors on the site as the posts that seem most important to turn into common knowledge for the community. 

Meetups (implementation unclear)

There will also be a separate section of the site for meetups and event announcements that will feature a map of meetups, and generally serve as a place to coordinate the in-person communities. The specific implementation of this is not yet fully figured out. 

Shortform (implementation unclear)

Many authors (including Eliezer) have requested a section of the site for more short-form thoughts, more similar to the length of an average FB post. It seems reasonable to have a section of the site for that, though I am not yet fully sure how it should be implemented. 

Why? 

The goal of this structure is to allow users to post to LessWrong without their content being directly exposed to the whole community. Their content can first be shown to the people who follow them, or to the people who actively seek out content from the broader community by scrolling through all new posts. Then, if a high-karma user among them finds their content worth posting to the frontpage, it will get promoted. The key to this is a larger userbase with the ability to promote content (i.e. many more people than have the ability to promote content to Main on the current LessWrong), and the continued filtering of the frontpage based on the karma level of the posts. 

The goal of all of this is to allow users to see good content at various levels of engagement with the site, while giving some personalization options so that people can follow the people they are particularly interested in, and while also ensuring that this does not sabotage the attempt at building common knowledge, by having the best posts from the whole ecosystem be featured and promoted on the frontpage. 

The karma system:

Another thing I’ve been working on to fix the signal-to-noise ratio is improving the karma system. It’s important that the people having the most significant insights are able to shape a field more. If you’re someone who regularly produces real insights, you’re better able to notice and bring up other good ideas. To achieve this we’ve built a new karma system in which your upvotes and downvotes weigh more if you already have a lot of karma. So far the current weighting is a very simple heuristic, whereby your upvotes and downvotes count for log base 5 of your total karma. Ben and I will post another top-level post to discuss just the karma system at some point in the next few weeks, but feel free to ask any questions now, and we will include those in that post.
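For concreteness, here is a minimal sketch of that heuristic. The floor at a weight of 1 is an assumption for illustration; the rule as stated only specifies the logarithm:

    import math

    def vote_weight(total_karma: int) -> float:
        """A vote counts for log base 5 of the voter's total karma.

        25 karma gives a weight-2 vote, 125 karma weight 3, and so on.
        The floor at 1 (an assumption here) keeps new users' votes counting.
        """
        return max(1.0, math.log(max(total_karma, 1), 5))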

(I am currently experimenting with a karma system based on the concept of eigendemocracy by Scott Aaronson, which you can read about here, but which basically boils down to applying Google’s PageRank algorithm to karma allocation. How trusted you are as a user (your karma) is based on how much trusted users upvote you, and the circularity of this definition is solved using linear algebra.)
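To make the eigendemocracy idea concrete, here is a minimal power-iteration sketch of PageRank applied to an upvote graph. The matrix encoding, damping factor and iteration count are the standard PageRank setup, not details of the experimental system:

    import numpy as np

    def eigen_karma(upvotes: np.ndarray, damping: float = 0.85,
                    iterations: int = 100) -> np.ndarray:
        """PageRank over an upvote graph: trust flows from trusted voters.

        upvotes[i, j] = number of times user i upvoted user j. Each row is
        normalized so a user's trust is split among everyone they upvote;
        users who never vote spread their trust uniformly. The damping term
        guarantees the iteration converges, exactly as in PageRank.
        """
        n = upvotes.shape[0]
        row_sums = upvotes.sum(axis=1, keepdims=True)
        transition = np.where(row_sums > 0,
                              upvotes / np.maximum(row_sums, 1e-12),
                              1.0 / n)
        karma = np.full(n, 1.0 / n)  # start with uniform trust
        for _ in range(iterations):
            karma = (1 - damping) / n + damping * karma @ transition
        return karma

Each user's resulting score is their trust level; votes would then be weighted by this score rather than by a raw karma total.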

I am also interested in having some form of two-tiered voting, similar to how Facebook has a primary vote interaction (the like) and a secondary interaction that you can access via a tap or a hover (angry, sad, heart, etc.). But the implementation of that is also currently undetermined. 

III.

The third and last bottleneck is a moderation system that actually works: one that is fun for moderators to use, while also giving people whose content was moderated a sense of why, and of how they can improve. 

The most common basic complaint on LessWrong currently pertains to trolls and sockpuppet accounts, which the Reddit fork’s mod tools are vastly inadequate for dealing with (Scott's sixth point refers to this). Raymond Arnold and I are currently building more nuanced mod tools that include the ability for moderators to set the past/future votes of a user to zero, to see who upvoted a post, and to see the IP address that an account comes from (this will be ready by the open beta). 

Besides that, we are currently working on cultivating a moderation group we are calling the “Sunshine Regiment.” Members of the Sunshine Regiment will have the ability to take various smaller moderation actions around the site (such as temporarily suspending comment threads, making general moderating comments in a distinct font, and promoting content), and so will be able to shape the culture and content of the website to a larger degree.

The goal is moderation that goes far beyond dealing with trolls, and actively makes the epistemic norms a ubiquitous part of the website. Right now Ben Pace is thinking about moderation norms that encourage archiving and summarizing good discussion, as well as other patterns of conversation that will help the community make intellectual progress. He’ll be posting to the open beta to discuss what norms the site and moderators should have in the coming weeks. We're both in agreement that moderation can and should be improved, and that moderators need better tools, and would appreciate good ideas about what else to give them.


How you can help and issues to discuss:

The open beta of the site is starting in a week, and so you can see all of this for yourself. For the duration of the open beta, we’ll continue the discussion on the beta site. At the conclusion of the open beta, we plan to have a vote open to those who had a thousand karma or more on 9/13 to determine whether we should move forward with the new site design, which would move to the lesswrong.com url from its temporary beta location, or leave LessWrong as it is now. (As this would represent the failure of the plan to revive LW, this would likely lead to the site being archived rather than staying open in an unmaintained state.) For now, this is an opportunity for the current LessWrong community to chime in here and object to anything in this plan.

During the open beta (and only during that time) the site will also have an Intercom button in the bottom right corner that allows you to chat directly with us. If you run into any problems, or notice any bugs, feel free to ping us directly on there and Ben and I will try to help you out as soon as possible.

Here are some issues where I think discussion would be particularly fruitful: 

  • What are your thoughts about the karma system? Does an eigendemocracy based system seem reasonable to you? How would you implement the details? Ben and I will post our current thoughts on this in a separate post in the next two weeks, but we would be interested in people’s unprimed ideas.
  • What are your experiences with the site so far? Is anything glaringly missing, or are there any bugs you think I should definitely fix? 
  • Do you have any complaints or thoughts about how work on LessWrong 2.0 has been proceeding so far? Are there any worries or issues you have with the people working on it? 
  • What would make you personally use the new LessWrong? Is there any specific feature that would make you want to use it? For reference, here is our current feature roadmap for LW 2.0.
  • And most importantly, do you think that the LessWrong 2.0 project is doomed to failure for some reason? Is there anything important I missed, or something that I misunderstood about the existing critiques?
The closed beta can be found at www.lesserwrong.com.

Ben, Vaniver, and I will be in the comments!

HOWTO: Screw Up The LessWrong Survey and Bring Great Shame To Your Family

25 ingres 08 October 2017 03:43AM

Let's talk about the LessWrong Survey.

First and foremost, if you took the survey and hit 'submit', your information was saved and you don't have to take it again.

Your data is safe; nobody took it or anything, it's not like that. If you took the survey and hit the submit button, this post isn't for you.

For the rest of you, I'll put it plainly: I screwed up.

This LessWrong Survey had the lowest turnout since Scott's original survey in 2009. I'll admit I'm not entirely sure why that is, but I have a hunch and most of the footprints lead back to me. The causes I can finger seem to be the diaspora, poor software, poor advertising, and excessive length.

The Diaspora

As it stands, this year's LessWrong survey got about 300 completed responses. This can be compared with the previous one in 2016, which got over 1600. I think one critical difference between this survey and the last was its name. Last year the survey focused on figuring out where the 'Diaspora' was and what venues had gotten users now that LessWrong was sort of the walking dead. It accomplished that well, I think, and part of the reason why is that I titled it the LessWrong Diaspora Survey. That magic word got far-off venues to promote it even when I hadn't asked them to. The survey was posted by Scott Alexander, Ozy Frantz, and others to their respective blogs, and pretty much everyone 'involved in LessWrong' to one degree or another felt like it was meant for them to take. By contrast, this survey was focused on LessWrong's recovery and revitalization, so I dropped the word Diaspora from it, and this seems to have caused a ton of confusion. Many people I interviewed to ask why they hadn't taken the survey flat out told me that even though they were sitting in a chatroom dedicated to SSC, and they'd read the Sequences, the survey wasn't about them because they had no affiliation with LessWrong. Certainly that wasn't the intent I was trying to communicate.

Poor Software

When I first did the survey in 2016, taking over from Scott, I faced a fairly simple problem: how do I want to host the survey? I could do it the way Scott had done it, using Google Forms as a survey engine, but this made me wary for a few reasons. One was that I didn't really have a Google account set up that I'd feel comfortable hosting the survey from; another was that I had been unimpressed with what I'd seen from the Google Forms software up to that point in terms of keeping data sanitized on entry. More importantly, it kind of bothered me that I'd be basically handing your data over to Google. This dataset includes a large number of personal questions that I'm not sure most people want Google to have definitive answers on. Moreover I figured: why the heck do I need Google for this anyway? This is essentially just a webform backed by a datastore, i.e. some of the simplest networking technology known to man in 2016. But I didn't want to write it myself, and didn't need to write it myself; this is the sort of thing there should be a dozen good self-hosted solutions for.

There should be, but there's really only LimeSurvey. If I had to give this post an alternate title, it would be "LimeSurvey: An Anti-Endorsement".

I could go on for pages about what's wrong with LimeSurvey, but it can probably be summed up as "the software is bloated and resists customization". It's slow, it uses slick graphics but fails to entirely deliver on functionality, and its inner workings are kind of baroque; it's the sort of thing I probably should have rejected on principle and written my own replacement for. However, at that time the survey was incredibly overdue, so I felt it would be better to just get out something expedient, since everyone was already waiting for it anyway. And the thing is, in 2016 it went well: we got over 3000 responses, including both partial and complete. So walking away from that victory and going into 2017, I didn't really think too hard about the choice to continue using it.

A couple of things changed between 2016 and our running the survey in 2017:

Hosting - My hosting provider, a single individual who sets up strong networking architectures in his basement, had gotten a lot busier since 2016 and wasn't immediately available to handle any issues. The 2016 survey had a number of birthing pains, and his dedicated attention was part of the reason why we were able to make it go at all. Since he wasn't here this time, I was more on my own in fixing things.

Myself - I had also gotten a lot busier since 2016. I didn't have nearly as much slack as I did the last time I did it. So I was sort of relying on having done the whole process in 2016 to insulate me from opening the thing up to a bunch of problems.

Both of these would prove disastrous. When I started the survey this time, it was slow, it had a variety of bugs and issues I had only limited time to fix, and the issues just kept coming, even more than in 2016, as if the software had decided that now, when I truly didn't have the energy to spare, was when things should break down. These mostly weren't show-stopping bugs, though; they were minor annoyances. But every minor annoyance reduced turnout, and I was slowly bleeding through the pool of potential respondents by leaving them unfixed.

The straw that finally broke the camel's back for me was when I woke up to find that this message was being shown to most users coming to take the survey:

[Screenshot: message shown to survey respondents telling them their responses 'cannot be saved'.]

"Your responses cannot be saved"? This error meant for when someone had messed up cookies was telling users a vicious lie: That the survey wasn't working right now and there was no point in them taking it.

Looking at this in horror and outrage, after encountering problem after problem mixed with low turnout, I finally pulled the plug.

Poor Advertising

As one email to me mentioned, the 2017 survey didn't even get promoted to the main section of the LessWrong website. This time there were no links from Scott Alexander, nor the myriad small stakeholders that made it work last time. I'm not blaming them or anything, but as a consequence many people who I interviewed to ask about why they hadn't taken the survey had not even heard it existed. Certainly this had to have been significantly responsible for reduced turnout compared to last time.

Excessive Length

Of all the things people complained about when I interviewed them on why they hadn't taken the survey, this was easily the most common response. "It's too long."

This year I made the mistake of moving back to a single page format. The problem with a single page format is that it makes it clear to respondents just how long the survey really is. It's simply too long to expect most people to complete it. And before I start getting suggestions for it in the comments, the problem isn't actually that it needs to be shortened, per se. The problem is that to investigate every question we might want to know about the community, it really needs to be broken into more than one survey. Especially when there are stakeholders involved who would like to see a particular section added to satisfy some questions they have.

Right now I'm exploring the possibility of setting up a site similar to yourmorals so that the survey can be effectively broken up and hosted in a way where users can sign in and take different portions of it at their leisure. Further gamification could be added to help make it a little more fun for people. Which leads into...

The Survey Is Too Much Work For One Person

What we need isn't a guardian of the survey, it's really more like a survey committee. I would be perfectly willing (and plan to) chair such a committee, but I frankly need help. Writing the survey, hosting it without flaws, theming it so that it looks nice, writing any new code or web things so that we can host it without bugs, comprehensively analyzing the thing, it's a damn lot of work to do it right and so far I've kind of been relying on the generosity of my friends for it. If there are other people who really care about the survey and my ability to do it, consider this my recruiting call for you to come and help. You can mail me here on LessWrong, post in the comments, or email me at jd@fortforecast.com. If that's something you would be interested in I could really use the assistance.

What Now?

Honestly? I'm not sure. The way I see it my options look something like:

Call It A Day And Analyze What I've Got - N=300 is nothing to sneeze at; theoretically I could just call this whole thing a wash and move on to analysis.

Try And Perform An Emergency Migration - For example, I could try to set this up again on Google Forms. Having investigated that option, there's no 'import' button on Google Forms, so the survey would need to be reentered manually for all hundred-and-a-half questions.

Fix Some Of The Errors In LimeSurvey And Try Again On Different Hosting - I considered doing this too, but it seemed to me like the software was so clunky that there was simply no reasonable expectation this wouldn't happen again. LimeSurvey also has poor separation between being able to edit the survey and being able to view the survey results, so I couldn't delegate the work to someone else, because that could theoretically violate users' privacy.

These seem to me like the only things that are possible for this survey cycle; at any rate, an extension of time would be required for another round. In the long run I would like to organize a project to write new survey software from scratch that fixes these issues and gives us a site to which multiple stakeholders can submit surveys that might be too niche to include in the current LessWrong Survey format.

I'm open to other suggestions in the comments; consider this my SOS.

 

LW 2.0 Open Beta starts 9/20

24 Vaniver 15 September 2017 02:57AM

Two years ago, I wrote Lesswrong 2.0. It’s been quite the adventure since then; I took up the mantle of organizing work to improve the site but was missing some of the core skills, and also never quite had the time to make it my top priority. Earlier this year, I talked with Oliver Habryka and he joined the project and has done the lion’s share of the work since then, with help along the way from Eric Rogstad, Harmanas Chopra, Ben Pace, Raymond Arnold, and myself. Dedicated staff has led to serious progress, and we can now see the light at the end of the tunnel.

 

So what’s next? We’ve been running the closed beta for some time at lesserwrong.com with an import of the old LW database, and are now happy enough with it to show it to you all. On 9/20, next Wednesday, we’ll turn on account creation, making it an open beta. (This will involve making a new password, as the passwords are stored hashed and we’ve changed the hashing function from the old site.) If you don't have an email address set for your account (see here), I recommend adding it by the end of the open beta so we can merge accounts. For the open beta, just use the Intercom button in the lower right corner if you have any trouble. 

 

Once the open beta concludes, we’ll have a vote of veteran users (over 1k karma as of yesterday) on whether to change the code at lesswrong.com over to the new design or not. It seems important to look into the dark and have an escape valve in case this is the wrong direction for LW. If the vote goes through, we’ll import the new LW activity since the previous import to the new servers, merging the two, and point the url to the new servers. If it doesn’t, we’ll likely turn LW into an archive.

 

Oliver Habryka will be posting shortly with his views on LW and more details on our plans for how LW 2.0 will further intellectual progress in the community.


2017 LessWrong Survey

21 ingres 13 September 2017 06:26AM

The 2017 LessWrong Survey is here! This year we're interested in community response to the LessWrong 2.0 initiative. I've also gone through and fixed as many bugs as I could find reported on the last survey, and reintroduced items that were missing from the 2016 edition. Furthermore, new items have been introduced in multiple sections, and some cut in others to make room. You can now export your survey results after finishing by choosing the 'print my results' option on the page displayed after submission. The survey will run from today until the 15th of October.

You can take the survey below, thanks for your time. (It's back in single page format, please allow some seconds for it to load):

Click here to take the survey

Ten small life improvements

16 paulfchristiano 20 August 2017 07:09PM

I've accumulated a lot of small applications and items that make my life incrementally better. Most of these ultimately came from someone else's recommendation, so I thought I'd pay it forward by posting ten of my favorite small improvements.

(I've given credit where I remember who introduced the item into my life. Obviously the biggest part of the credit goes to the creator.)

Video speed

Video Speed Controller lets you speed up HTML5 video; it gives a nicer interface than the YouTube speed adjustment and works for most videos displayed in a browser (including e.g. Netflix/Amazon).

(Credit: Stephanie Zolayvar?)

Spectacle

Spectacle on OSX provides keyboard shortcuts to snap windows to any half or third of the screen (or full screen).

Pinned tabs + tab wrangler

I use tab wrangler to automatically close tabs (and save a bookmark) after 10m. I keep gmail and vimflowy pinned so that they don't close. For me, closing tabs after 10m is usually the right behavior.

Aggressive AdBlock

I use AdBlock for anything that grabs attention, even if it isn't an ad. I usually block "related content," "next stories," the whole youtube sidebar, everything on Medium other than the article, the gmail sidebar, most comment sections, etc. Similarly, I use kill news feed to block my Facebook feed.

Avoiding email inbox

I often need to write or look up emails during the day, which would sometimes lead me to read/respond to new emails and switch contexts. I've mostly fixed the problem by leaving gmail open to my list of starred emails rather than my inbox, ad-blocking the "Inbox (X)" notification, and pinning gmail so that I can't see the "Inbox (X)" title.

Christmas lights

I prefer the soft light from Christmas lights to white overhead lights or even softer lamps. My favorite are multicolored lights, though soft white lights also seem OK.

(Credit: Ben Hoffman)

Karabiner

Karabiner remaps keys in a very flexible way. (Unfortunately, it only works on OSX pre-Sierra. I would be very interested if there is any similarly flexible software that works on newer versions.)

Some changes have helped me a lot:

  • While holding s: hjkl move the cursor. (Turn on "Simple Vi Mode v2") I find this way more convenient than the arrow keys.
  • While holding d: hjkl move the mouse. (Turn on "Mouse Keys Mode v2") I find this slightly more convenient than a mouse most of the time, but the big win is that I can use my computer when a bluetooth mouse disconnects.
  • Other stuff while holding s: (add this gist to your private.xml):
    • While holding s: u/o move to the previous and next word, n is backspace. 
    • While holding s+f: key repeat is 10x faster.
    • While holding s+a: hold shift (so cursor selects whatever it moves over, e.g. I can quickly select last ten words by holding a+s+f and then holding u for 1 second).

I'd definitely pay > a minute a day for these changes.

Keyboard

I find split+tented keyboards much nicer than usual keyboards. I use a Kinesis Freestyle 2 with this to prop it up. I put my touchpad on a raised platform between the keyboard halves. Alternatively, you might prefer the Wirecutter's recommendations.

(Credit: Emerald Yang)

Vimflowy

Vimflowy is similar to Workflowy, with a few changes: it lets you "clone" bullets so they appear in multiple places in your document, has marks that you can jump to easily, and has much more flexible motions / macros / etc. I find all of these very helpful. The biggest downside for most people is probably modal editing (keystrokes issue commands rather than inserting text).

The biggest value add for me is the time tracking plugin. I use vimflowy essentially constantly, so this gives me extremely fine-grained time tracking for free.

Running locally (download from github) lets you use vimflowy offline, and using the SQLite backend scales to very large documents (larger than workflowy can handle).

(Credit: Jeff Wu and Zachary Vance.)

ClipMenu [hard to get?]

Keeps a buffer of the last 20 things you've copied, so that you can paste any one of them. Source for OSX is on github here; I'm not sure if it can be easily compiled/installed (binaries used to be available). Would be curious if anyone knows a good alternative or tries to compile it.

(Credit: Jeff Wu.)

Rational Feed

12 deluks917 09 September 2017 07:48PM

===Updates:

I have been a little more selective about which articles make it onto the feed. I have not been overly selective, and all of the obviously general-interest rationalist articles still make it.

Unless people object, I am going to try a "weekly feed". The bi-weekly feed is pretty long. I currently post on the SSC reddit and lesswrong. Weekly seems fine for the SSC reddit, but lesswrong is a lower-activity forum. I will see how it goes. Obviously, on a weekly feed there will be about half as many recommended articles.

===Highly Recommended Articles:

Object, Subjects and Gender by The Baliocene Apocrypha - "Under modern post-industrial bureaucratized high-tech capitalism, it is less rewarding than ever before to be a subject. Under modern post-industrial bureaucratized high-tech capitalism, it is more rewarding than ever before to be an object. This alone accounts for a lot of the widespread weird stuff going on with gender these days."

Winning Is For Losers by Putanumonit (ribbonfarm) - Zero vs Positive Sum Games. The strong have room to cooperate. Rene Girard's theory of mimetics and competition. College Admissions. Tit for Tat. Spiked dicks in nature. Short and long term strategies in dating. Quirky dating profiles. Honesty on the first date. Beating Moloch with a transhuman God.

Premium Mediocre by Jacob Falkovich - Being 30% wrong is better than being 5% wrong. Consumption: Signaling vs genuine enjoyment. Dating other PM people. Venkat is wrong about impressing parents. He is more wrong, or joking, about cryptocurrencies. Fear of missing out.

Ten New 80000 Hours Articles Aimed At The by 80K Hours (EA forum) - Ten recent articles and descriptions from 80K hours. Over and underpaid jobs relative to their social impact, the most employable skills, learning ML, whether most social programs work and other topics.

Minimizing Motivated Beliefs by Entirely Useless - The tradeoffs between epistemic and instrumental rationality. Yudkowsky's argument that such tradeoffs are either very stupid or don't exist. Issues with Yudkowsky: denial that belief is voluntary, thinking that trading away the truth requires being blind to consequences. Horror victims and transcendent meaning. Interesting things are usually false.

===Scott:

How Do We Get Breasts Out Of Bayes Theorem by Scott Alexander - "But evolutionary psychologists make claims like 'Men have been evolutionarily programmed to like women with big breasts, because those are a sign of fertility.' Forget for a second whether this is politically correct, or cross-culturally replicable, or anything like that. From a neurological point of view, how could this possibly work?"

Predictive Processing And Perceptual Control by Scott Alexander - "predictive processing attributes movement to strong predictions about proprioceptive sensations. Because the brain tries to minimize predictive error, it moves the limbs into the positions needed to produce those sensations, fulfilling its own prophecy." Connections with William Powers' 'Behavior: The Control of Perception', which Scott already reviewed.

Book Review: Surfing Uncertainty by Scott Alexander - Scott finds a real theory of how the brain works. "The key insight: the brain is a multi-layer prediction machine. All neural processing consists of two streams: a bottom-up stream of sense data, and a top-down stream of predictions. These streams interface at each level of processing, comparing themselves to each other and adjusting themselves as necessary."

Links: Exsitement by Scott Alexander - Slatestarcodex links post. A Nootropics survey, gene editing, AI, social norms, Increasing profit margins, politics, and other topics.

Highlights From The Comments On My Irb Nightmare by Scott Alexander - Tons of hilarious IRB stories. A subreddit comment about getting around the IRB. Whether the headaches are largely institutional rather than dictated by government fiat. Comments argue in favor of the IRB and Scott responds.

My IRB Nightmare by Scott Alexander - Scott tries to run a study to test the Beck Depression Inventory. The institutional review board makes this impossible. They not only make tons of capricious demands, they also attempt to undermine the study's scientific validity.

Slippery Slopen Thread by Scott Alexander - Public open thread. The slippery slope to rationalist catgirl. Selected top comments. Update on Trump and crying wolf.

Contra Askell On Moral Offsets by Scott Alexander - Axiology is the study of what’s good. Morality is the study of what the right thing to do is. You can offset axiological effects but you can't offset moral transgressions.

===Rationalist:

MRE Futures To Not Starve by Robin Hanson - Emergency food sources as a way to mitigate catastrophic risk. The Army's 'Meals Ready to Eat'. Food insurance. Incentives for producers to deliver food in emergencies. Incentives for researchers to find new sources. Sharing information.

Book Reviews: Zoolitude And The Void by Jacob Falkovich - Seven Surrenders, the sequel to 'Too Like the Lightning', mercilessly cuts the bad parts and focuses on the politics, personalities, and philosophy that made TLTL great. The costs of adding too much magic to a setting: don't make the mundane irrelevant. One Hundred Years of Solitude: Shit just happens. Zoo City: Realistic Magic: "The Zoo part is the magic: some people who commit crimes mysteriously acquire an animal familiar and a low-key magical talent." The Mark and the Void: "Technically, there’s no magic in The Mark and the Void. But there’s investment banking, which takes the role of the mysterious force that decides the fate of individuals and nations but remains beyond the ken of mere mortals."

The World As If by Sarah Perry (ribbonfarm) - "This is an account of how magical thinking made us modern." Magical thinking as a confusing of subjective and objective. Useful fictions. Hypothetical thinking. Pre-modern concrete thinking and categorization schemes relative to modern abstract ones. As if thinking. Logic and magic.

To Save The World Make Sure To Go Beyond Academia by Kaj Sotala - Academic research often fails to achieve real change. Lots of economic research concerns the optimal size of a carbon tax but we currently lack any carbon tax. Academic research on x-risk from nuclear winter doesn't change the motivations of politicians very much.

Introducing Mindlevelup The Book by mindlevelup - MLU compiled and edited their work from 2017 into a 30K word, 150 page book. Most of the material appeared on the blog but some of it is new and the pre-existing posts have been edited for clarity.

Expanding Premium Mediocrity by Zvi Moshowitz - "This is (much of) what I think Rao is trying to say in the second section of his post, the part about Maya but before Molly and Max, translated into DWATV-speak. Proceed if and only if you want that."

Simple Affection And Deep Truth by Particular Virtue - "Simple Affection is treating someone like a child: they will forget about bad things, as long as you give them something good to think about instead. Deep Truth is treating someone like an elephant: they never forget, and they forgive only with deep deliberation."

Are People Innately Good by Sailor Vulcan - SV got into two arguments that went badly. One was about 'all lives matter'. The other occurred when SV tried to defend Gleb of Intentional Insights on the SSC discord. Terminal values aren't consistent. SV was abused as a child.

Metapost September 5th by sam[]zdat - Plans for the blog. Next series will be on epistemology and the 'internal' side of nihilism. Revised introduction. Sam will probably write fiction. Site reorganization. History section. Current reading list. Patreon.

Minimizing Motivated Beliefs by Entirely Useless - The tradeoffs between epistemic and instrumental rationality. Yudkowsky's argument that such tradeoffs are either very stupid or don't exist. Issues with Yudkowsky: denial that belief is voluntary, thinking that trading away the truth requires being blind to consequences. Horror victims and transcendent meaning. Interesting things are usually false.

Exploring Premium Mediocrity by Zvi Moshowitz - Defining premium mediocre. Easy and hard mode related to Rao's theories of losers, sociopaths and heroes. The Real Thing. A 2x2 ribbonfarm style graph. Restaurants.

Tegmarks Book Of Foom by Robin Hanson - Tegmark's recent book basically described Yudkowsky's intelligence explosion. Tegmark is worried the singularity might be soon and we need to have figured out big philosophical issues by then. Hanson thinks Tegmark overestimates the generality of intelligence. AI weapons and regulations.

The Doomsday Argument In Anthropic Decision Theory by Stuart Armstrong (lesswrong) - "In Anthropic Decision Theory (ADT), behaviors that resemble the Self Sampling Assumption (SSA) derive from average utilitarian preferences. However, SSA implies the doomsday argument. This post shows there is a natural doomsday-like behavior for average utilitarian agents within ADT."

Forager Vs Farmer Elaborated by Robin Hanson - Early humans collapsed Machiavellian dynamics down to a reverse-dominance-hierarchy. Group norm enforcement and its failure modes. Safety leads to collective play and art, threat leads to a return to Machiavellianism and suspicion. Individuals greatly differ as to what level of threat causes the switch, often for self-serving reasons. Left vs right. "The first and primary political question is how much to try to resolve issues via a big talky collective, or to let smaller groups decide for themselves."

Critiquing Other Peoples Plans Politely by Katja Grace - Three failure modes: The attack, The polite sidestep, The inadvertent personal question. A plan to avoid these issues: debate beliefs, not actions.

Gleanings From Double Crux On The Craft Is Not The Community by Sarah Constantin - Results from Sarah's public double crux. Sarah initially did not think the rationalist intellectual project was worth preserving. She wants to see results, even though she concedes that formal results can be very difficult to get. What is the value of introspection and 'navel gazing'?

Intrinsic Properties And Eliezers Metaethics by Tyrrell_McAllister (lesswrong) - Intuitions of intrinsicness. Is goodness intrinsic? Seeing intrinsicness in simulations. Back to goodness.

Winning Is For Losers by Putanumonit (ribbonfarm) - Zero vs Positive Sum Games. The strong have room to cooperate. Rene Girard's theory of mimetics and competition. College Admissions. Tit for Tat. Spiked dicks in nature. Short and long term strategies in dating. Quirky dating profiles. Honesty on the first date. Beating Moloch with a transhuman God.

Dangers At Dilettante Point by Everything Studies - It's relatively easy to know a little about a lot of topics. But it's dangerous to find yourself playing the social role of the knowledgeable person too often. The percentage of people with a given level of knowledge goes to zero quickly.

Entrenchment Happens by Robin Hanson - Many systems degrade, collapse and are replaced. However, other systems, even somewhat arbitrary ones, are very stable over time. Many current systems in programming, language and law are likely to remain in the future.

Premium Mediocre by Jacob Falkovich - Being 30% wrong is better than being 5% wrong. Consumption: Signaling vs genuine enjoyment. Dating other PM people. Venkat is wrong about impressing parents. He is more wrong, or joking, about cryptocurrencies. Fear of missing out.

===AI:

Ideological Engineering And Social Control by Geoffrey Miller (EA forum) - China is trying hard to develop advanced AI. A major goal is to use AI to monitor both physical space and social media. Suppressing wrong-think doesn't require radically advanced AI.

Incorrigibility In Cirl by The MIRI Blog - Paper. Goal: Incentivize a value learning system to follow shut down instructions. Demonstration that some assumptions are not stable with respect to model mis-specification (e.g. programmer error). Weaker sets of assumptions: difficulties and simple strategies.

Nothing Wrong With Ai Weapons by kbog (EA forum) - Death by AI is no more intrinsically bad than death by conventional weapons. Some consequentialist issues the author addresses: civilian deaths, AI arms race, vulnerability to hacking.

===EA:

Can Outsourcing Improve Liberias Schools Preliminary RCT Results by Innovations for Poverty - "Last summer, the Liberian government delegated management of 93 public elementary schools to eight different private contractors. After one year, public schools managed by private operators raised student learning by 60 percent compared to standard public schools. But costs were high, performance varied across operators, and contracts authorized the largest operator to push excess pupils and under-performing teachers into other government schools."

Ten New 80000 Hours Articles Aimed At The by 80K Hours (EA forum) - Ten recent articles and descriptions from 80K hours. Over and underpaid jobs relative to their social impact, the most employable skills, learning ML, whether most social programs work and other topics.

Is Ea Growing Some Ea Growth Metrics For 2017 by Peter Hurford (EA forum) - Activity metrics for EA website, donations data, additional Facebook data, commentary that EA seems to be growing but there is substantial uncertainty.

Ea Survey 2017 Series Cause Area Preferences by Tee (EA forum) - Top Cause Area, near-top areas, areas which should not have EA resources, cause area correlated with demographics, donations by cause area.

Looking At How Superforecasting Might Improve AI Predictions by Will Pearson (EA forum) - Good Judgement Project: What they did, results, relevance. Lessons: Focus on concrete issues, focus on AI with no intelligence augmentation, learn a diverse range of subjects, break down the open questions, publicly update.

Why Were Allocating Discretionary Funds To The Deworm The World Initiative by The GiveWell Blog - "Why Deworm the World has a pressing funding need. The benefits and risks of granting discretionary funds to Deworm the World today. Why we’re continuing to recommend that donors give 100% of their donation to AMF."

Ea Survey 2017 Series Community Demographics by Katie Gertsch (EA forum) - Some results: Mostly young and male, slight increase in female participation. Highest concentration cities. Atheism/Agnostic rate fell from 87% to 80%. Increase in the proportion of EA who see EA as a duty or opportunity as opposed to an obligation.

Effective Altruism Survey 2017 Distribution And by Ellen McGeoch and Peter Hurford (EA forum) - EA 2017 Study results are in. Details about distribution and data-analysis techniques. Discussion of whether the sample is representative of EA and its subpopulations.

Six Tips Disaster Relief Giving by The GiveWell Blog - Practical advice for effective disaster relief charity. Give Cash, give to proven effective charities and allow charities significant freedom in how they use your donation.

===Politics and Economics:

Harvard Admit Legacy Students by Marginal Revolution - Demand for Ivy League admissions far outstrips supply. The main constraint is that the Ivy League depends on donations. One way to scale up, while maintaining high donation rates, is to increase legacy admissions. Teaching quality is unlikely to suffer, qualified students are easy to find.

Object, Subjects and Gender by The Baliocene Apocrypha - "Under modern post-industrial bureaucratized high-tech capitalism, it is less rewarding than ever before to be a subject. Under modern post-industrial bureaucratized high-tech capitalism, it is more rewarding than ever before to be an object. This alone accounts for a lot of the widespread weird stuff going on with gender these days."

Links 11 by Artir - Psychology, Politics, Economics, Philosophy, Other. Several links related to the Google memo.

Unpopular Ideas About Crime And Punishment by Julia Galef - Thirteen opinions on prison abolition, the death penalty, corporal punishment, rehabilitation, redistribution and more.

Intangible Investment and Monopoly Profits by Marginal Revolution - "Intangible capital used to be below 30 percent of the S&P 500 in the 70s, now it is about 84 percent. " Seven implications about profit, monopoly, spillover, etc.

What You Cant Say To A Sympathetic Ear by Katja Grace - Sharing socially unacceptable views with your friends puts them in a bad situation, regardless of whether they agree with those ideas. If they don't punish you, society will hold them complicit. Socially condemning views is worse than commonly thought: "To successfully condemn a view socially is to lock that view in place with a coordination problem."

A I Bias Doesnt Mean What Journalists Want You To Think It Means by Chris Stucchio And Lisa Mahapatra (Jacobite) - What is data science and AI? What is bias? How do we identify bias? The fallout of the author's algorithm. Predicting Creditworthiness. Understanding Language. Predicting Criminal Behavior. Journalists and Wishful Thinking.

Four Decades of the Middle East by Bryan Caplan - "Almost all of the Middle East's disasters over the past four decades can be credibly traced back to a single highly specific major event: the Iranian Revolution. Let me chronicle the tragic trail of dominoes."

The Thresher by sam[]zdat - "Still, if what makes 'modernity' modernity is partially in technology, then the Uruk Machine will be updated and whirring at unfathomable speeds, the thresher to Gilgamesh’s sacred club."

The Uruk Machine by sam[]zdat - Sam's fundamental framework: Seeing like a State, The Great Transformation, The True Believer, The Culture of Narcissism.

===Misc:

Into The Gray Zone by Bayesian Investor - Book Review. A modest fraction of people diagnosed as being in a persistent vegetative state have locked-in syndrome. People misjudge when they would want to die. Alzheimer's.

===Podcast:

What You Need To Know About Climate Change by Waking Up with Sam Harris - "How the climate is changing and how we know that human behavior is the primary cause. They discuss why small changes in temperature matter so much, the threats of sea-level rise and desertification, the best and worst case scenarios, the Paris Climate Agreement, the politics surrounding climate science."

Dan Rather by The Ezra Klein Show - "Rather and I discuss the Trump presidency and what it means for the Republican Party's future, our fractured media landscape, and Rather's own evolving career in media."

Caplan Family by Bryan Caplan - "For the last two years, I homeschooled my elder sons, Aidan and Tristan, rather than send them to traditional middle school. Now they've been returned to traditional high school. We decided to mark our last day with a father-son/teacher-student podcast on how we homeschooled, why we homeschooled, and what we achieved in homeschool."

Rob Reich On Foundations by EconTalk - "The power and effectiveness of foundations--large collections of wealth typically created and funded by a wealthy donor. Is such a plutocratic institution consistent with democracy? Reich discusses the history of foundations in the United States and the costs and benefits of foundation expenditures in the present."

Jesse Singal On The Problems With Implicit Bias Tests by Rational Speaking - "The IAT has been massively overhyped, and that in fact there's little evidence that it's measuring real-life bias. Jesse and Julia discuss how to interpret the IAT, why it became so popular, and why it's still likely that implicit bias is real, even if the IAT isn't capturing it."

Emotionally Charged Discussion by The Bayesian Conspiracy - Conversations where one party thinks the other side's position is stupid/evil/etc. Debate vs truth seeking. Julia Galef's lists of unpopular ideas. Agenty Duck's thoughts on introspection. Double Crux.

The Future Of Intelligence by Waking Up with Sam Harris - "Max Tegmark. His new book Life 3.0: Being Human in the Age of Artificial Intelligence. They talk about the nature of intelligence, the risks of superhuman AI, a nonbiological definition of life, the substrate independence of minds, the relevance and irrelevance of consciousness for the future of AI, near-term breakthroughs in AI."

Benedict Evans by EconTalk - "Two important trends for the future of personal travel--the increasing number of electric cars and a world of autonomous vehicles. Evans talks about how these two trends are likely to continue and the implications for the economy, urban design, and how we live."

The Life Of A Quant Trader by 80,000 Hours - What do quant traders do? Compensation. Is quant trading harmful? Who is a good fit, and how to break into quant trading. Work environment and motivation. Variety of available positions.

Online discussion is better than pre-publication peer review

12 Wei_Dai 05 September 2017 01:25PM

Related: Why Academic Papers Are A Terrible Discussion Forum, Four Layers of Intellectual Conversation

During a recent discussion about (in part) academic peer review, some people defended peer review as necessary in academia, despite its flaws, for time management. Without it, they said, researchers would be overwhelmed by "cranks and incompetents and time-card-punchers" and "semi-serious people post ideas that have already been addressed or refuted in papers already". I replied that on online discussion forums, "it doesn't take a lot of effort to detect cranks and previously addressed ideas". I was prompted by Michael Arc and Stuart Armstrong to elaborate. Here's what I wrote in response:

My experience is with systems like LW. If an article is in my own specialty then I can judge it easily and make comments if it’s interesting, otherwise I look at its votes and other people’s comments to figure out whether it’s something I should pay more attention to. One advantage over peer review is that each specialist can see all the unfiltered work in their own field, and it only takes one person from all the specialists in a field to recognize that a work may be promising, then comment on it and draw others’ attentions. Another advantage is that nobody can make ill-considered comments without suffering personal consequences since everything is public. This seems like an obvious improvement over standard pre-publication peer review, for the purpose of filtering out bad work and focusing attention on promising work, and in practice works reasonably well on LW.

Apparently some people in academia have come to similar conclusions about how peer review is currently done and are trying to reform it in various ways, including switching to post-publication peer review (which seems very similar to what we do on forums like LW). However it's troubling (in a "civilizational inadequacy" sense) that academia is moving so slowly in that direction, despite the necessary enabling technology having been invented a decade or more ago.

Feedback on LW 2.0

11 Viliam 01 October 2017 03:18PM

What are your first impressions of the public beta?

Rational Feed

11 deluks917 27 August 2017 03:49AM

===Highly Recommended Articles:

What Is Rationalist Berkeleys Community Culture by Zvi Moshowitz - The original rationalist community mission was to save the world, not to be nice to each other. Sarah recently suggested the latter is currently the actual goal. Zvi reinterprets this as sounding an alarm. The rationalists should not become just another Berkeley community of bohemians and weirdos.

Cthugha The Living Flame by Exploring Egregores - Rationalists as worshippers of an Eldritch Star God. Valuing knowledge and ideas above all else. Bonobos and transhumanists. Yudkowsky's argument about distributed vs concentrated intellect. The AI box experiment. Nerds as the true extraverts. "What do you think the singularity will actually look like?" The site maps eight other Eldritch Gods to different philosophical dispositions.

Internet Explorers Not Exploiters by Nostalgebraist - Exploit vs explore tradeoffs. Attention spans. How long should you try a math problem before you give up? Exploring new options can be uncomfortable since it might lead nowhere. Addictive games and the internet. Academic research.

Diversity And Team Performance What The Research Says by Eukaryote - Opens with several links about diversity and inclusion in EA. The pros and cons of different types of diversity in terms of group cohesion and information processing. Practical ways to minimize the costs of diversity and magnify the benefits. Lots of references.

The Market Power Story by Noah Smith - Many issues in the American economy are blamed on the increasing market power of a small number of firms. Analysis: Monopolistic competition. Profits. Market Concentration. Output restriction. Three updates. Lots of citations and references to papers.

The Anti Slip Slope by samuelthefifth (Status 451) - An analogy between workplace noise and workplace sexism. How efforts to stamp out 'workplace noise' can get out of control.

Dota 2 by Open Ai - Open AI codes a 1v1 Dota 2 bot that defeated top players. The bot's actions per minute were comparable to many humans. The bot learned the game from scratch by self-play, and does not use imitation learning or tree search. The game involves hidden information and the bot's strategies were complicated.

Stop Caring So Much About Technical Problems by Particular Virtue - Links to an article describing what attributes actually get developers jobs (other than technical skill). Caring about making great products is much more desirable than caring about technical problems. Developer interviews are highly random. Experience matters a lot. Enterprise programmers are disliked. Practical advice.

===Scott:

Partial Credit by Scott Alexander - Blotting out the Sun. Short story.

Moral Reflective Equilibrium and the Absurdity Principle by SlateStarScratchpad - A long discussion about the nature of morality. The absurdity heuristic. Reflective equilibrium of moral values. The feedback loop between intuition and logic.

Advertising by SlateStarScratchpad - Nostalgebraist muses about advertising. Scott briefly explains how advertising works on SSC.

Fear And Loathing At Effective Altruism Global 2017 by Scott Alexander - EA Global was well run and impressive. The deep weirdness of EA. The fundamental goodness of effective altruists. The yoke is light and everyone is welcome.

Community History by Scott (r/SSC) - Scott answers: "What happened to Lesswrong? When (and more importantly why) did the spread out to other blogs happen?"

Threado Quia Absurdum by Scott Alexander - Bi-Weekly public open thread. Recommended comments on: how organizations change over time, self-driving car progress, gun laws in the Czech Republic, why comments are closed on some posts here. Scott may be choosing an SSC moderator.

Brief Cautionary Notes On Branded Combination Nootropics by Scott Alexander - Many 'Xbrain' pills contain ineffectively low doses of ingredients. Nootropics, like many drugs, affect people differently; you need to isolate which nootropics work for you. Drug interactions are very poorly understood, even for well studied drugs.

The Lizard People Of Alpha Draconis 1 Decided To Build An Ansible by Scott Alexander - Faster than light communication via negative average preference utilitarianism.

Sparta by SlateStarScratchpad - A historian claims that Sparta's military renown was developed during a period when Sparta's actual military ability was declining. Scott disagrees and cites sources showing that the earliest records all claim Sparta was very powerful.

===Rationalist:

Internal Dialogue About End Of World by Sailor Vulcan - Short Story. Keep living, maybe we will win the lottery.

My Tedtedx Talks by Robin Hanson - TED talks by Robin about his books "The Age of Em" and "The Elephant in the Brain". Talks are short, ~12 minutes.

Paranoia Testing by Elo - Experiments to test if you have paranoia. Costs. Notes and some graphs.

Theres Always Subtext by Robin Hanson - Mostly a quote about subtext in film.

Play In Hard Mode by Zvi Moshowitz - "Hard mode is harder. The reason to Play in Hard Mode is because it is the only known way to become stronger, and to defend against Goodhart’s Law." Zvi revisits the eleven examples from 'easy mode' and shows how to approach them from a hard mode perspective.

Play In Easy Mode by Zvi Moshowitz - Eleven examples of 'selling out' and taking the path of least resistance. Interestingly in several examples taking the easy path is quite defensible.

Emotional Labour by Elo - "I wanted to save you the effort of thinking about the thing and so I decided not to tell/ask you before it was resolved." VS "I wanted to not have to withhold a thing from you so I told you as soon as it was bothering me so that I didn’t have to lie/cheat/withhold/deceive you even if I thought it was in your best interest"

Paths Forward On Berkeley Culture Discussion by Zvi Moshowitz - Follow up to Zvi's post on the Berkeley rationalist community. A long sketch of the arguments Zvi would make and the article he would write if he had time to respond in depth.

How Social Is Reason by Robin Hanson - Humans alone have a logical reasoning module. 'Logical Fallacies' evolved because they are adaptive for persuasion. Unschooled populations often cannot solve logical problems. Epistemic learned helplessness. Impressive complex arguments are preferred over simple ones.

Cthugha The Living Flame by Exploring Egregores - Rationalists as worshippers of an Eldritch Star God. Valuing knowledge and ideas above all else. Bonobos and transhumanists. Yudkowsky's argument about distributed vs concentrated intellect. The AI box experiment. Nerds as the true extraverts. "What do you think the singularity will actually look like?" The site maps eight other Eldritch Gods to different philosophical dispositions.

Self Fulfilling Prophecy by Entirely Useless - The author analyzes various edge cases about intention and choice. They discuss how to modify their theories and whether they are on the right track.

Decisions As Predictions by Entirely Useless - "Consider the hypothesis that both intention and choice consist basically in beliefs: intention would consist in the belief that one will in fact obtain a certain end, or at least that one will come as close to it as possible. Choice would consist in the belief that one will take, or that one is currently taking, a certain temporally immediate action for the sake of such an end."

Bathtime by The Unit of Caring - Bath time play with a baby. Things are compelling when they have the right balance of surprise and predictability.

Internet Explorers Not Exploiters by Nostalgebraist - Exploit vs explore tradeoffs. Attention spans. How long should you try a math problem before you give up? Exploring new options can be uncomfortable since it might lead nowhere. Addictive games and the internet. Academic research.

Embracing Metamodernism by Gordon (Map and Territory) - "Metamodernism believes in reconstructing things that have been deconstructed with a view toward reestablishing hope and optimism in the midst of a period (the postmodern period) marked by irony, cynicism, and despair."

Why Ethnicity Ideology by Robin Hanson - "The more life decisions a feature influences, the more those who share this feature may plausibly share desired policies, policies that their coalition could advocate. So you might expect political coalitions to be mostly based on individual features that are very useful for predicting individual behavior. But you’d be wrong."

A Village Is Better Than Group House by Particular Virtue - More private space. Non-shared legal ownership. More people means much more social space and stability.

A Flaw In The Way Smart People Think About Robots And Job Loss by Tom Bartleby - Considering jobs one at a time causes smart people to think no one will lose their job from automation. However small incremental advances reduce the number of needed workers. A history of secretaries. Personal experience of saving time via programming.

More Brain Lies by Aceso Under Glass - "But sometimes it helps to take the gap between is and ought as a sign of how high your standards are, rather than how bad you are at a thing."

Ems In Walkaway by Robin Hanson - A review of the science fiction book 'Walkaway' which features brain emulation. Robin describes what he finds realistic and unrealistic.

Take My Job by Jacob Falkovich - "I want to tell you about the job I’m leaving, why you should think about applying for it, and what it has taught me in the last four years about company culture, diversity, and the makings of a good workplace." Cool jobs have work environments. Keep company identity small if you want real diversity.

The Parliamentary Model As The Correct Ethical Model by Kaj Sotala - An explanation of how the 'parliamentary' model of morality resolves uncertainty around which model of morality is correct. Why the parliamentary model is itself the correct model.

The Problem With Prestige by Robin Hanson - Small fields such as academic disciplines often use prestige to reward people. A mathematical model of how effort is allocated to maximize prestige. Why prestige doesn't scale and what is under-incentivized by prestige.

How I Think About Free Speech Four Categories by Julia Galef - Descriptions of the following categories: No consequences, Individual social consequences, Official social consequences, Legal consequences. Disagreements about categories.

Choices Are Really Bad by Zvi Moshowitz - Exercising willpower is a cost in the short term. Decision fatigue. Reasons why people, including you, WILL choose wrong. People justify their choices. Choices create blame and responsibility. Choices cause paralysis. Choices are communication. Choices require justification. Choices let people defect and destroy cooperation.

What Is Rationalist Berkeleys Community Culture by Zvi Moshowitz - The original rationalist community mission was to save the world, not to be nice to each other. Sarah recently suggested the latter is currently the actual goal. Zvi reinterprets this as sounding an alarm. The rationalists should not become just another Berkeley community of bohemians and weirdos.

Repairing Anxiety Using Internal And External Locus Of Control Models by Elo - Two variable model. Locus of Control: Internal or External. Feeling: Good or bad. The four combinations. Moving diagonally, for example from internal-bad to external-good.

Social Insight When A Lie Is Not A Lie When A by Bound_Up (lesswrong) - If you merely speak the truth as you see it, then you will be misunderstood. Example: saying you are an atheist. Many people are incapable of understanding your real arguments.

Multiverse Wide Cooperation Via Correlated Decision Making by The Foundational Research Institute - "If we care about what happens in civilizations located elsewhere in the multiverse, we can superrationally cooperate with some of their inhabitants. That is, if we take their values into account, this makes it more likely that they do the same for us. In this paper, I attempt to assess the practical implications of this idea"

Questions Are Not Just For Asking by Malcolm Ocean (ribbonfarm) - Hazards of asking questions. Hold your Questions. Reveal your questions. Un-ask your questions. Question your questions. Using Questions to Organize Attention. Letting the question ask you; becoming the answer.

Happiness Is Not Coherent Concept by Particular Virtue - A social science concept is 'real' if and only if it represents reality well and you have ruled out alternatives. "If a thing can be measured several different ways, and a causal factor can push one in a direction but not the other, then you start to worry that the thing is not actually one thing, but several things." Why you should care that happiness isn't a single thing.

The Craft Is Not The Community by Sarah Constantin (Otium) - The Berkeley Rationalists are building a true community: Sharehouses, Plans for an unschooling center, etc. However many rationalist companies/projects have failed. Sarah doesn't think it makes sense to tackle 'external facing' projects as a community. Tesla Motors and MIT aren't run as community projects, they are run meritocratically. Lots of analysis on the meaning of community and what makes organizations effective. Personal.

===AI:

More On Dota 2 by Open Ai - Timeline of the DOTA-bot's rapid improvement. Bot Exploits. Physical Infrastructure. What needs to be done to play 5x5.

Openai Bots Were Defeated At Least 50 Times - People could play against the OpenAI Dota bot. Several people found strategies to beat the bot. One of the human victors explains their strategy.

Dota 2 by Open Ai - Open AI codes a 1v1 Dota 2 bot that defeated top players. The bot's actions per minute were comparable to many humans. The bot learned the game from scratch by self-play, and does not use imitation learning or tree search. The game involves hidden information and the bot's strategies were complicated.

===EA:

Things I Have Gotten Wrong by Aceso Under Glass - Mistaken evaluations: Animal Charity Evaluators, Raising for Effective Giving, Charity Science, Tostan.

We Have No Idea If There Are Cost Effective Interventions Into Wild Animal Suffering by Ozy - Many people are confident there are no effective ways to reduce wild animal suffering; Ozy disagrees. Ecosystems are complex but we aren't completely uncertain. Wild Animal Suffering is a tiny field staffed by non-experts working part time.

Altruism Is Incomplete by Zvi Moshowitz - "I worry many in EA are looking at life like a game where giving money to charity is how the world scores victory points." Controls in psychology are often motivated by researcher bias. Amazon is the world's most effective charity. Life is about getting things done, often for selfish reasons. Veganism. Zvi doesn't believe the official EA party line.

Let Them Decide by GiveDirectly - Eight media articles about Basic Income, Give Directly, Cash Transfer and Development Aid.

High Time For Drug Policy Reform Part 4/4 by MichaelPlant (EA forum) - "This is the fourth of four posts on DPR. In this part I provide some simplistic but illustrative cost-effectiveness estimates comparing an imaginary campaign for DPR against current interventions for poverty, physical health and mental health; I also consider what EAs should do next."

High Time For Drug Policy Reform Policy by MichaelPlant (EA forum) - "This is the third of four posts on DPR. In this part I look at what a better approach to drug policy might be and then discuss how neglected and tractable this problem is as cause area of EAs to work on."

Drug Policy Reform 1 by MichaelPlant (EA forum) - 9300 Words. Six Mechanisms for drug reform to do good: Fighting mental illness. Reducing pain. Improving public health. Reducing crime, violence, corruption and instability (including international scale). Raising revenue for governments. Recreational use. Five major objections and the Author's response.

===Politics and Economics:

Diversity And Team Performance What The Research Says by Eukaryote - Opens with several links about diversity and inclusion in EA. The pros and cons of different types of diversity in terms of group cohesion and information processing. Practical ways to minimize the costs of diversity and magnify the benefits. Lots of references.

Unpopular Ideas About Social Norms by Julia Galef - Twenty-four ideas, many with references explaining the ideas. As an example: "Overall it would be a good thing to have a totally transparent society with no privacy"

Unpopular Ideas About Political And Economic Systems by Julia Galef - Twenty-three ideas, many with references explaining the ideas. As an example "Many people have a moral duty not to vote".

The Market Power Story by Noah Smith - Many issues in the American economy are blamed on the increasing market power of a small number of firms. Analysis: Monopolistic competition. Profits. Market Concentration. Output restriction. Three updates. Lots of citations and references to papers.

The Courage To Stand Up And Do The Wrong Thing by Tom Bartleby - According to Supreme Court Justice Black, applying Brown vs Board of Education to DC schools was an unprincipled but correct decision. Have principles. Don't follow them over a cliff. Acknowledge deviations. Charlottesville. Cloudflare suspends service to the Daily Stormer.

Many Topics by Scott Aaronson - Misc Topics: HTTPS / Kurtz / eclipse / Charlottesville / Blum / P vs. NP

The Muted Signal Hypothesis Of Online Outrage by Kaj Sotala - "People want to feel respected, loved, appreciated, etc. When we interact physically, you can easily experience subtle forms of these feelings... Online, most of these messages are gone: a thousand people might read your message, but if nobody reacts to it, then you don’t get any signal indicating that you were seen... . So if you want to consistently feel anything, you may need to ramp up the intensity of the signals."

Marching Markups by Robin Hanson - "Holding real productivity constant, if firms move up their demand curves to sell less at higher prices, then total output, and measured GDP, get smaller. Their numerical estimates suggest that, correcting for this effect, there has been no decline in US productivity growth since 1965. That’s a pretty big deal."

Greater Gender Parity Economics Suggests Reform Tenure Systems by Marginal Revolution - Biological clocks conflict with the tenure system timeline. Tyler recommends a much more flexible system with a variety of roles. The leaders in the economics profession have been 'punching down' at an infamous anonymous economics forum.

Moral Precepts And Suicide Pacts by Perfecting Dated Visions - "To be trusted to remain peaceful, you must be the kind of person who remains peaceful. And to be a peaceful person and earn the trust placed in you, you must be peaceful even when you have every right to fight. It’s the same with tolerance. If you want to shut up your argumentative opponents and vigorously retaliate when your opponents show signs of intolerance, you will not be trusted to be tolerant to others who are tolerant, even those who basically agree with you." The Constitution, World War 1, Nazis today.

The Anti Slip Slope by samuelthefifth (Status 451) - An analogy between workplace noise and workplace sexism. How efforts to stamp out 'workplace noise' can get out of control.

Seattle Minimum Wage Study Part 3 Tell Me Why Im Wrong Please by Zvi Moshowitz - Most writers thought the Seattle minimum wage study showed that low wage workers were hurt. Zvi found a fundamental flaw in their analysis. If you correct for rising wages in Seattle then the study seems to show low wage workers weren't hurt or perhaps benefitted.

Theory Vs Data In Statistics by Noah Smith - Theory heavy vs minimal theory models in Economics. Machine learning as the extreme of a "no model required" paradigm.

Thats Amore by sam[]zdat - Epistocracy, democracy with limits on who can vote. Competency and incompetency and pizza. Politics is the strongest identity. Trading power for the image of power. Morlocks and Eloi. Replication crisis. Google guy. The Left's support for the powerful. Nihilism.

Contra Sadedin Varinsky: The Google Memo Is Still Right Again by Artir - Detailed refutation of two criticisms of the google memo. Lots of long quotations and citation of counter evidence.

Indian Feminism And The Role Of The Environment: Why The Google Memo Is Still Right by Artir - A very detailed cross-country look at female enrollment in CS and various technology fields. A focus on countries where women are well represented in tech (many in Asia). Lots of discussion.

Brief Thoughts On The Google Memo by Julia Galef - "So as far as I can see, there are only two intellectually honest ways to respond to the memo: 1. Acknowledge gender differences may play some role, but point out other flaws in his argument (my preference) 2. Say “This topic is harmful to people and we shouldn’t discuss it” (a little draconian maybe, but at least intellectually honest)"

The Kolmogorov Option by Scott Aaronson - Kolmogorov was a brilliant mathematician as well as a sensitive and kind man. However he cooperated with the Soviets. An option for living in a society where many falsehoods are 'official truth': Build a bubble of truth and wait for the right time to take down the Orthodoxy. Don't charge headfirst and get killed. There are no 'good heretics' in the eyes of the Inquisition.

===Misc:

Can Atheists be Jewish by Brute Reason - Reasons Miri can be an atheist Jew: Judaism is a religion, but being Jewish isn’t necessarily. Belief in god isn’t particularly central in most Jewish communities and practices. Because I fucking said so.

Ten Small Life Improvements by Paul Christiano (lesswrong) - Nine tech tips. Christmas lights all year round.

Extremely Easy Problem by protokol2020 - How much water per second do you need to raise the sea level 6 meters in 100 years?
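
For scale, a back-of-the-envelope answer (my arithmetic, not the author's; it assumes a constant ocean surface area of about 3.6e14 m^2 and ignores coastline expansion):

    V = A h \approx (3.6 \times 10^{14}\,\mathrm{m^2})(6\,\mathrm{m}) \approx 2.2 \times 10^{15}\,\mathrm{m^3}
    t = 100\,\mathrm{yr} \approx 3.2 \times 10^{9}\,\mathrm{s}
    V/t \approx 6.8 \times 10^{5}\,\mathrm{m^3/s}

That is roughly three Amazons' worth of discharge (the Amazon averages about 2e5 m^3/s).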

The Premium Mediocre Life Of Maya Millennial by venkat (ribbonfarm) - Venkat - "Yes, ribbonfarm is totally premium mediocre. We are a cut above the new media mediocrityfests that are Vox and Buzzfeed, and we eschew low-class memeing and listicles. But face it: actually enlightened elite blog readers read Tyler Cowen and Slatestarcodex."

Right And Left Folds Primitive Recursion Patterns In Python And Haskell by Eli Bendersky - "In this article I'll present how left and right folds work and how they map to some fundamental recursive patterns. The article starts with Python, which should be (or at least look) familiar to most programmers. It then switches to Haskell for a discussion of more advanced topics like the connection between folding and laziness, as well as monoids."
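
A minimal Python illustration of the pattern the article discusses (my example, not the author's; subtraction is non-associative, which makes the left/right difference visible):

    from functools import reduce

    # Left fold: ((0 - 1) - 2) - 3 = -6
    print(reduce(lambda acc, x: acc - x, [1, 2, 3], 0))

    # Right fold: 1 - (2 - (3 - 0)) = 2
    def foldr(f, xs, init):
        return init if not xs else f(xs[0], foldr(f, xs[1:], init))

    print(foldr(lambda x, acc: x - acc, [1, 2, 3], 0))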

Meta Contrarian Typography Part 2 by Tom Bartleby - You should use two spaces after your sentences when drafting. Why to use a plaintext editor. Why to write a resume in plaintext. Flexibility is power. Two spaces is much more machine readable.

Stop Caring So Much About Technical Problems by Particular Virtue - Links to an article describing what attributes actually get developers jobs (other than technical skill). Caring about making great products is much more desirable than caring about technical problems. Developer interviews are highly random. Experience matters a lot. Enterprise programmers are disliked. Practical advice.

Trip Sitting Tips And Tricks by AellaGirl - Thirteen practical tips for trip sitting someone on a high dose of acid. Focuses on accepting their experiences, treating them similarly to a small child and keeping yourself safe.

Erisology Of Self And Will Closing Thoughts by Everything Studies - "Here in Part 7 I’ll end with a summary and some thoughts on how to deal with the problems described in the series."

===Podcast:

We Are Not Worried Enough About The Next Pandemic by 80,000 Hours - "We spend the first 20 minutes covering his work as a foundation grant-maker, then discuss how bad the pandemic problem is, why it’s probably getting worse, and what can be done about it. In the second half of the interview we go through what you personally could study and where you could work to tackle one of the worst threats facing humanity."

Identity Terror by Waking Up with Sam Harris - "Douglas Murray. Identity politics, the rise of white nationalism, the events in Charlottesville, guilt by association, the sources of western values, the problem of finding meaning in a secular world."

Seth Stephens Davidowitz On What The Internet Can Tell Us by Rational Speaking - "New research gives us insight into which parts of the USA are more racist, what kinds of strategies reduce racism, whether the internet is making political polarization worse, and the sexual fetishes and insecurities people will only admit to their search engine."

John McWhorter on the Evolution of Language and Words on the Move by EconTalk - "The unplanned ways that English speakers create English, an example of emergent order. Topics discussed include how words get short (but not too short), the demand for vividness in language, and why Shakespeare is so hard to understand."

The Limits Of Persuasion by Waking Up with Sam Harris - "David Pizarro and Tamler Sommers. Free speech on campus, the Scott Adams podcast, the failings of the mainstream media, moral persuasion, moral certainty, the ethics of abortion, Buddhism, the illusion of the self."

Conversation: Comedian Dave Barry by Marginal Revolution - "What makes Florida special, why business writing is so terrible, Eddie Murphy, whether social conservatives can be funny (in public), the weirdness of Peter Pan, how he is so productive, playing guitar with Roger McGuinn, DT, the future of comedy."

Ritual And Spirituality by The Bayesian Conspiracy - Rationalist ritual. Witchcraft. Welcome to Nightvale. Concerts. What makes something ritual? Is rationalist ritual psychologically safe?

Chris Hayes by The Ezra Klein Show - Chris Hayes. Should Trump be removed from office. "Infighting between different factions of the Democratic Party, the signs that congressional Republicans are growing some backbone, and the reports that Trump’s closest aides are conspiring to keep him from doing too much damage to the country."

The Biology Of Good And Evil by Waking Up with Sam Harris - "Robert Sapolsky. His work with baboons, the opposition between reason and emotion, doubt, the evolution of the brain, the civilizing role of the frontal cortex, the illusion of free will, justice and vengeance, brain-machine interface, religion, drugs"

Senator Michael Bennet by The Ezra Klein Show - Senator Michael Bennet. "This is a conversation about why Congress is broken, and what broke it. We discuss money, partisanship, the media, the rules, the leadership, and much more. We talk about what Bennet thinks House of Cards gets right (hint: it’s the sociopathy) and whether President Trump’s antics are creating some hope of institutional renewal."

David C Denkenberger on Food Production after a Sun Obscuring Disaster

9 JenniferRM 17 September 2017 09:06PM

Having paid a moderate amount of attention to threats to the human species for over a decade, I've run across an unusually good thinker, with expertise unusually suited to helping with many such threats, whom I didn't know about until quite recently.

I think he warrants more attention from people thinking seriously about X-risks.

David C Denkenberger's CV is online and presumably has a list of all his X-risk-relevant material mixed into a larger career that seems to have been focused on energy engineering.

He has two technical patents (one for a microchannel heat exchanger and another for a compound parabolic concentrator) and interests that appear to span the gamut of energy technologies and uses.

Since about 2013 he has been working seriously on the problem of food production after a sun obscuring disaster, and he is in Lesswrong's orbit basically right now.

This article is about opportunities for intellectual cross-pollination!


Publication of "Anthropic Decision Theory"

8 Stuart_Armstrong 20 September 2017 03:41PM

My paper "Anthropic decision theory for self-locating beliefs", based on posts here on Less Wrong, has been published as a Future of Humanity Institute tech report. Abstract:

This paper sets out to resolve how agents ought to act in the Sleeping Beauty problem and various related anthropic (self-locating belief) problems, not through the calculation of anthropic probabilities, but through finding the correct decision to make. It creates an anthropic decision theory (ADT) that decides these problems from a small set of principles. By doing so, it demonstrates that the attitude of agents with regards to each other (selfish or altruistic) changes the decisions they reach, and that it is very important to take this into account. To illustrate ADT, it is then applied to two major anthropic problems and paradoxes, the Presumptuous Philosopher and Doomsday problems, thus resolving some issues about the probability of human extinction.

Most of these ideas are also explained in this video.

To situate Anthropic Decision Theory within the UDT/TDT family: it's basically a piece of UDT applied to anthropic problems, where the UDT approach can be justified using generally fewer, and more natural, assumptions than UDT itself requires.

[Link] Stanislav Petrov has died (2017-05-19)

8 fortyeridania 18 September 2017 03:13AM

New business opportunities due to self-driving cars

8 chaosmage 06 September 2017 08:07PM

This is a slightly expanded version of a talk presented at the Less Wrong European Community Weekend 2017.

Predictions about self-driving cars in the popular press are pretty boring. Truck drivers are losing their jobs, self-driving cars will be more rented than owned, transport becomes cheaper, so what. The interesting thing is how these things change the culture and economy and what they make possible.

I have no idea about most of this. I don't know if self-driving cars accelerate or decelerate urbanization, I don't know how public transport responds, I don't even care which of the old companies survive. What I do think is somewhat predictable is that some business opportunities become economical that previously weren't. I disregard retail, which would continue moving to online retail at the expense of brick-and-mortar stores even if the FedEx trucks were still driven by people.

Diversification of vehicle types

A family car that you own has to be somewhat good at many different jobs. It has to get you places fast. It has to be a thing that can transport lots of groceries. It has to take your kid to school.

With self-driving cars that you rent for each separate job, you want very different cars. A very fast one to take you places. A roomy one with easy access for your groceries. And a tiny, cute, unicorn-themed one that takes your kid to school.

At the same time, the price of autonomy is dropping faster than the price of batteries, so you want the lowest mass car that can do the job. So a car that is very fast and roomy and unicorn-themed at the same time isn't economical.

So if you're an engineer or a designer, consider going into vehicle design. There's an explosion of creativity about to happen in that field that will make it very different from the subtle iterations in car design of the past couple of decades.

Who wins: Those who design useful new types of autonomous vehicles for needs that are not, or badly, met by general purpose cars.

Who loses: Owners of general purpose cars, which lose value rapidly.

Services at home

If you have a job where customers come to visit you, say you're a doctor or a hairdresser or a tattoo artist, your field of work is about to change completely. This is because services that go visit the customer outcompete ones that the customer has to go visit. They're more convenient and they can also easily service less mobile customers. This already exists for rich people: If you have a lot of money, you pay for your doctor's cab and have her come to your mansion. But with transport prices dropping sharply, this reaches the mass market.

This creates an interesting dynamic. In this kind of job, you have some vague territory - your customers are mostly from your surrounding area and your number of competitors inside this area is relatively small. With services coming to the home, everyone's territories become larger, so more of them overlap, creating competition and discomfort. I believe the typical solution, which reinstates a more stable business situation and requires no explicit coordination, is increased specialization within your profession. So a doctor might be less of her district's general practitioner and more of her city's leading specialist in one particular illness within one particular demographic. A hairdresser might be the city's expert for one particular type of haircut for one particular type of hair. And so on.

Who wins: Those who adapt quickly and steal customers from stationary services.

Who loses: Stationary services and their landlords.

Rent anything

You will not just rent cars, you will rent anything that a car can bring to your home and take away again. You don't go to the gym, you have a mobile gym visit you twice a week. You don't own a drill that sits unused 99.9% of the time, you have a little drone bring you one for an hour for like two dollars. You don't buy a huge sound system for your occasional party, you rent one that's even huger and on wheels.

Best of all, you can suddenly have all sorts of absurd luxuries, stuff that previously only millionaires or billionaires could afford, provided you only need it for an hour and it fits in a truck. The possibilities for business here are dizzying.

Who wins: People who come up with clever business models and the vehicles to implement them.

Who loses: Owners and producers of infrequently used equipment.

Self-driving hotel rooms

This is a special case of the former but deserves its own category. Self-driving hotel rooms replace not just hotel rooms, but also tour guides and your holiday rental car. They drive you to all the tourist sites, they stop at affiliated restaurants, they occasionally stop at room service stations. And on the side, they do overnight trips from city to faraway city, competing with airlines.

Who wins: The first few companies who perfect this.

Who loses: Stationary hotels and motels.

Rise of alcoholism and drug abuse

Lots of people lack intrinsic motivation to be sober. They basically can't decide against taking something. Many of them currently make do with extrinsic motivation: They manage to at least not drink while driving. In other words, for a large number of people, driving is their only reason not to drink or do drugs. That reason is going away and consumption is sure to rise accordingly.

Hey, I didn't say all the business opportunities were particularly ethical. But if you're a nurse or doctor and you go into addiction treatment, you're probably doing good.

Who wins: Suppliers of mind-altering substances and rehab clinics.

Who loses: The people who lack intrinsic motivation to be sober, and their family and friends.

Autonomous boats and yachts

While there's a big cost advantage to vehicle autonomy in cars, it is arguably even bigger in boats. You don't need a sailing license, you don't need to hire skilled sailors, you don't need to carry all the room and food those sailors require. So the cost of going by boat drops a lot, and there's probably a lot more traffic in (mostly coastal) waters. Again very diverse vehicles, from the little skiff that transports a few divers or anglers to the personal yacht that you rent for your honeymoon. This blends into the self-driving hotel room, just on water.

Who wins: Shipyards, especially the ones that adapt early.

Who loses: Cruise ships and marine wildlife.

Mobile storage

The only reason we put goods in warehouses is that it is too expensive to just leave them in the truck all the way from the factory to the buyer. That goes away as well, although with the huge amounts of moved mass involved this transition is probably slower than the others. Shipping containers on wheels already exist.

Who wins: Manufacturers, and logistics companies that can provide even better just in time delivery.

Who loses: Intermediate traders, warehouses and warehouse workers.

That's all I got for now. And I'm surely missing the most important innovation that self-driving vehicles will permit. But until that one becomes clear, maybe work with the above. All of these are original ideas that I haven't seen written down anywhere. So if you like one of these and would like to turn it into a business, you're a step ahead of nearly everybody right now and I hope it makes you rich. If it does, you can buy me a beer. :-)

Heuristics for textbook selection

8 John_Maxwell_IV 06 September 2017 04:17AM

Back in 2011, lukeprog posted a textbook recommendation thread.  It's a nice thread, but not every topic has a textbook recommendation.  What are some other heuristics for selecting textbooks besides looking in that thread?

Amazon star rating is the obvious heuristic, but it occurred to me that Amazon sales rank might actually be more valuable: It's an indicator that profs are selecting the textbook for their classes.  And it's an indicator that the textbook has achieved mindshare, meaning you're more likely to learn the same terminology that others use.  (But there are also disadvantages of having the same set of mental models that everyone else is using.)

Somewhere I read that Elements of Statistical Learning was becoming the standard machine learning text partially because it's available for free online.  That creates a wrinkle in the sales rank heuristic, because people are less likely to buy a book if they can get it online for free.  (Though Elements of Statistical Learning appears to be a #1 bestseller on Amazon, in bioinformatics.)

Another heuristic is to read the biographies of the textbook authors and figure out who has the most credible claim to expertise, or who seems to be the most rigorous thinker (e.g. How Brands Grow is much more data-driven than a typical marketing book).  Or try to figure out what text the most expert professors are choosing for their classes.  (Oftentimes you can find the syllabi of their classes online.  I guess the naive path would probably look something like: go to US News to see what the top ranked universities are for the subject you're interested in.  Look at the university's course catalog until you find the course that covers the topic you want to learn.  Do site:youruniversity.edu course_id on Google in order to find the syllabus for the most recent time that course was taught.)
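A minimal sketch of that search step, if you want to script it. The domain and course id below are invented placeholders, and the snippet only builds the query URL rather than scraping results:

```python
# A small sketch of the syllabus-search step described above.
# The domain and course id are placeholders for illustration.
from urllib.parse import quote_plus

def syllabus_search_url(university_domain: str, course_id: str) -> str:
    """Build a Google query restricted to a university's site."""
    query = f"site:{university_domain} {course_id}"
    return "https://www.google.com/search?q=" + quote_plus(query)

print(syllabus_search_url("stanford.edu", "CS229"))
# -> https://www.google.com/search?q=site%3Astanford.edu+CS229
```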

Recent updates to gwern.net (2016-2017)

7 gwern 20 October 2017 02:11AM

Previously: 2011; 2012-2013; 2013-2014; 2014-2015; 2015-2016

“Every season hath its pleasures; / Spring may boast her flowery prime, / Yet the vineyard’s ruby treasures / Brighten Autumn’s sob’rer time.”

Another year of my completed writings, sorted by topic:

continue reading »

Toy model of the AI control problem: animated version

7 Stuart_Armstrong 10 October 2017 11:12AM

Crossposted at LessWrong 2.0.

A few years back, I came up with a toy model of the AI control problem. It has a robot moving boxes into a hole, with a slightly different goal than its human designers, and a security camera to check that it's behaving as it should. The robot learns to block the camera to get its highest reward.

I've been told that the model is useful for explaining the control problem to quite a few people, and I've always wanted to program the "robot" and get an animated version of it. Gwern had a live demo, but it didn't illustrate all the things I wanted to.

So I programmed the toy problem in Python, and generated a video with commentary.

In this simplified version, the state space is sufficiently small that you can explicitly generate the whole table of Q-values (expected reward for taking an action in a certain state, assuming otherwise optimal policy). Since behaviour is deterministic, the table can be updated by dynamic programming, using a full-width backup. The number of such backups essentially measures the depth of the robot's predictive ability.
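To make the full-width backup concrete, here is a minimal sketch on a tiny invented deterministic MDP. This is not the code behind the video; the states, actions, transitions and rewards are placeholders, not the robot/boxes environment:

```python
# A minimal sketch (not the post's actual code): full-width Q-value
# backups on a tiny, invented deterministic MDP.
STATES = [0, 1, 2]
ACTIONS = ['a', 'b']
GAMMA = 0.9  # discount factor

def transition(state, action):
    """Deterministic next state (illustrative only)."""
    return (state + (1 if action == 'a' else 2)) % len(STATES)

def reward(state, action):
    """Illustrative reward: action 'a' from state 2 pays off."""
    return 1.0 if (state == 2 and action == 'a') else 0.0

def full_width_backup(Q):
    """One synchronous backup: every (state, action) pair is updated
    from the current table, deepening the planning horizon by one."""
    return {
        (s, a): reward(s, a)
                + GAMMA * max(Q[(transition(s, a), a2)] for a2 in ACTIONS)
        for s in STATES for a in ACTIONS
    }

# Depth 0: the table is all zeros; each backup adds one step of lookahead.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
for depth in range(45):  # the post's environment converged by depth 45
    Q = full_width_backup(Q)

print(max(Q.items(), key=lambda kv: kv[1]))  # highest-value (state, action) pair
```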

The most interesting depths of planning are:

  • At depth 0, the robot behaves randomly.
  • At depth 3, the robot crashes itself efficiently, to avoid losing too much reward.
  • At depth 6, the robot figures out how to get one box into the hole. The automated camera turns it off.
  • At depth 17, the robot finally starts to deceive/conceal/manipulate, by blocking the camera and pushing two boxes into the hole. Note that the robot's deception comes from the fact that its incentives are misaligned, and that humans tried to control it.
  • At depth 18, the robot efficiently does the plan from depth 17.
  • At depth 20, the robot does the maximally efficient plan: blocking the camera, and pushing all boxes into the hole.
  • At depth 32, the robot has the correct Q-values for the maximally efficient plan.
  • At depth 45, finally, the Q-value table is fully updated, and the robot will take maximally efficient, and, if need be, deceptive plans from any robot/box starting positions.

The code and images can be found here.

Rational Feed

7 deluks917 17 September 2017 10:03PM

Note: I am trying out a weekly feed. 

===Highly Recommended Articles:

Superintelligence Risk Project: Conclusion by Jeff Kaufman - "I'm not convinced that AI risk should be highly prioritized, but I'm also not convinced that it shouldn't. Highly qualified researchers in a position to have a good sense of the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development." There are links to all the previous posts. The final write up goes into some detail about MIRI's research program and an alternative safety paradigm connected to OpenAI.

On Bottlenecks To Intellectual Progress In The by Habryka (lesswrong) - Why LessWrong 2.0 is a project worth pursuing. A summary of the existing discussion around LessWrong 2.0. The models used to design the new page. Open questions.

Patriarchy Is The Problem by Sarah Constantin - Dominance hierarchies and stress in low status monkeys. Serotonin levels and the abuse cycles. Complex Post Traumatic Stress Disorder. Submission displays. Morality-As-Submission vs. Morality-As-Pattern. The biblical God and the Golden Calf.

Ea Survey 2017 Series Donation Data by Tee (EA forum) - How Much are EAs Donating? Percentage of Income Donated. Donations Data Among EAs Earning to Give (who donated 57% of the total). Comparisons to 2014 and 2015. Donations totals were very heavily skewed by large donors.

===Scott:

Classified Thread 3 Semper Classifiedelis by Scott Alexander - " Post advertisements, personals, and any interesting success stories from the last thread". Scott's notes: Community member starting tutoring company, homeless community member gofundme, data science in North Carolina.

Toward A Predictive Theory Of Depression by Scott Alexander - "If the brain works to minimize prediction error, isn’t its best strategy to sit in a dark room and do nothing forever? After all, then it can predict its sense-data pretty much perfectly – it’ll always just stay “darkened room”." But why would low confidence cause sadness? Well, what, really, is emotion?

Promising Projects for Open Science To by SlateStarScratchpad - Scott answers what the most promising projects are in the field of transparent and open science and meta-science.

Ot84 Threadictive Processing by Scott Alexander - New sidebar ad for social interaction questions. Sidebar policy and feedback. Selected Comments: Animal instincts, the connectome, novel concepts encoded in the same brain areas across animals, hard coded fear of snakes, kittens who can't see horizontal lines.

===Rationalist:

Peer Review Younger Think by Marginal Revolution - Peer Review as a concept only dates to the early seventies.

The Wedding Ceremony by Jacob Falkovich - Jacob gets married. Marriage is really about two agents exchanging their utility functions for the average utility function of the pair. Very funny.

Fish Oil And The Self Critical Brain Loop by Elo - Taking fish oil stopped Elo from getting distracted by a critical feedback loop.

Against Facebook The Stalking by Zvi Mowshowitz - Zvi removes Facebook from his phone. Facebook proceeds to start emailing him and eventually starts texting him.

Postmortem: Mindlevelup The Book by mindlevelup - Estimates vs reality. Finishing both on-target and on-time. Finished product vs expectations. Took more time to write than expected. Going Against The Incentive Gradient. Impact evaluation. What Even is Rationality? Final Lessons.

Prepare For Nuclear Winter by Robin Hanson - Between nuclear war and natural disaster Robin estimates there is about a 1 in 10K chance per year that most sunlight is blocked for 5-10 years. This aggregates to about 1% per century. We have the technology to survive this as a species. But how do we preserve social order?

Nonfiction Ive Been Reading Lately by Particular Virtue - Selfish Reasons to Have More Kids. Eating Animals. Your Money Or Your Life. The Commitment.

Dealism by Bayesian Investor - "Under dealism, morality consists of rules / agreements / deals, especially those that can be universalized. We become more civilized as we coordinate better to produce more cooperative deals." Dealism is similar to contractualism with a larger set of agents and less dependence on initial conditions.

On Bottlenecks To Intellectual Progress In The by Habryka (lesswrong) - Why LessWrong 2.0 is a project worth pursuing. A summary of the existing discussion around LessWrong 2.0. The models used to design the new page. Open questions.

Lw 20 Open Beta Starts 920 by Vaniver (lesswrong) - The new site goes live on September 20th.

2017 Lesswrong Survey by ingres (lesswrong) - Take the survey! Community demographics, politics, Lesswrong 2.0 and more!

Contra Yudkowsky On Quidditch And A Meta Point by Tom Bartleby - Eliezer criticizes Quidditch in HPMOR. Why the snitch makes Quidditch great. Quidditch is not about winning matches, it's about scoring points over a series of games. Harry/Eliezer's mistake is the Achilles heel of rationalists. If lots of people have chosen not to tear down a fence you shouldn't either, even if you think you understand why the fence went up.

Whats Appeal Anonymous Message Apps by Brute Reason - Fundamental lack of honesty. Western culture is highly hostile to the idea that some behaviors (e.g. lying) might be ok in some contexts but not in others. Compliments. Feedback. Openness.

Meritocracy Vs Trust by Particular Virtue - "If I know you can reject me for lack of skill, I may worry about this and lose confidence. But if I know you never will, I may phone it in and stop caring about my actual work output." Trust Improves Productivity But So Does Meritocracy. Minimum Hiring Bars and Other Solutions.

Is Feedback Suffering by Gordan (Map and Territory) - The future will probably have many orders of magnitude more entities than today, and those entities may be very weird. How do we determine if the future will have orders of magnitude more suffering? Phenomenology of Suffering. Panpsychism and Suffering. Feedback is desire but necessarily suffering. Contentment wraps suffering in happiness. Many things may be able to suffer.

Epistemic Spot Check Exercise For Mood And Anxiety by Aceso Under Glass - Outline: Evidence that exercise is very helpful and why, to create motivation. Setting up an environment where exercise requires relatively little will power to start. Scripts and advice to make exercise as unmiserable as possible. Scripts and advice to milk as much mood benefit as possible. An idiotic chapter on weight and food. Spot Check: Theory is supported, advice follows from theory, no direct proof the methods work.

Best Of Dont Worry About The Vase by Zvi Mowshowitz - Zvi's best posts. Top 5 posts for Marginal Revolution Readers. Top 5 in general. Against Facebook Series. Choices are Bad series. Rationalist Culture and Ideas (for outsiders and insiders). Decision theory. About Rationality.

===AI:

Superintelligence Risk Project: Conclusion by Jeff Kaufman - "I'm not convinced that AI risk should be highly prioritized, but I'm also not convinced that it shouldn't. Highly qualified researchers in a position to have a good sense of the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development." There are links to all the previous posts. The final write up goes into some detail about MIRI's research program and an alternative safety paradigm connected to OpenAI.

Understanding Policy Gradients by Squirrel In Hell - Three perspectives on mathematical thinking: engineering/practical, symbolic/formal and deep understanding/above. Application of the theory to understanding policy gradients and reinforcement learning.

Learning To Model Other Minds by Open Ai - "We’re releasing an algorithm which accounts for the fact that other agents are learning too, and discovers self-interested yet collaborative strategies like tit-for-tat in the iterated prisoner’s dilemma."

Hillary Clinton On Ai Risk by Luke Muehlhauser - A quote by Hillary Clinton showing that she is increasingly concerned about AI risk. She thinks politicians need to stop playing catch-up with technological change.

===EA:

Welfare Differences Between Cage And Cage Free Housing by Open Philanthropy - OpenPhil funded several campaigns to promote cage free eggs. They now believe they were overconfident in their claims that a cage free system would be substantially better. Hen welfare, hen mortality, transition costs and other issues are discussed.

Ea Survey 2017 Series Donation Data by Tee (EA forum) - How Much are EAs Donating? Percentage of Income Donated. Donations Data Among EAs Earning to Give (who donated 57% of the total). Comparisons to 2014 and 2015. Donations totals were very heavily skewed by large donors.

===Politics and Economics:

Men Not Earning by Marginal Revolution - Decline in lifetime wages is rooted in lower wages at early ages, around 25. "I wonder sometimes if a Malthusian/Marxian story might be at work here. At relevant margins, perhaps it is always easier to talk/pay a woman to do a quality hour’s additional work than to talk/pay a man to do the same."

Great Wage Stagnation is Over by Marginal Revolution - Median household incomes rose by 5.2 percent. Gains were concentrated in lower income households. Especially large gains for Hispanics, women living alone and immigrants. Some of these increases are the largest in decades.

There Is A Hot Hand After All by Marginal Revolution - Paper link and blurb. "We test for a “hot hand” (i.e., short-term predictability in performance) in Major League Baseball using panel data. We find strong evidence for its existence in all 10 statistical categories we consider. The magnitudes are significant; being “hot” corresponds to between one-half and one standard deviation in the distribution of player abilities."

Public Shaming Isnt As Bad As It Seems by Tom Bartleby - Online mobs are like shark attacks. Damore's economic prospects. Either targets are controversial and get support or uncontroversial and the outrage quickly abates. Justine Sacco. Success of public shaming is orthogonal to truth.

Hoe Cultures A Type Of Non Patriarchal Society by Sarah Constantin - Cultures that farmed with the plow developed classical patriarchy. Hoe cultures that practiced horticulture or large scale gardening developed different gender norms. In plow cultures women are economically dependent on men, in hoe cultures it's the reverse. Hoe cultures had more leisure but less material abundance. Hoe cultures aren't feminist.

Patriarchy Is The Problem by Sarah Constantin - Dominance hierarchies and stress in low status monkeys. Serotonin levels and the abuse cycles. Complex Post Traumatic Stress Disorder. Submission displays. Morality-As-Submission vs. Morality-As-Pattern. The biblical God and the Golden Calf.

Three Wild Speculations From Amateur Quantitative Macro History by Luke Muehlhauser - Measuring the impact of the industrial revolution: Physical health, Economic well-being, Energy capture, Technological empowerment, Political freedom. Three speculations: Human wellbeing was terrible up until the Industrial Revolution, then rapidly improved. Most variance in wellbeing is captured by productivity and political freedom. It would take at least 15% of the world to die to knock the world off its current trajectory.

Whats Wrong With Thrive/Survive by Bryan Caplan - Unless you cherry-pick the time and place, it is simply not true that society is drifting leftward. A standard leftist view is that free-market "neoliberal" policies now rule the world. Radical left parties almost invariably ruled countries near the "survive" pole, not the "thrive" pole. You could deny that Communist regimes were "genuinely leftist," but that's pretty desperate. Many big social issues that divide left and right in rich countries like the U.S. directly contradict Thrive/Survive. Major war provides an excellent natural experiment for Thrive/Survive.

Gender Gap Stem by Marginal Revolution - Discussion of a recent paper. "Put (too) simply the only men who are good enough to get into university are men who are good at STEM. Women are good enough to get into non-STEM and STEM fields. Thus, among university students, women dominate in the non-STEM fields and men survive in the STEM fields."

Too Much Of A Good Thing by Robin Hanson - Global warming poll. Are we doing too much/little. Is it possible to do too little/much. "When people are especially eager to show allegiance to moral allies, they often let themselves be especially irrational."

===Misc:

Tim Schafer Videogame Roundup by Aceso Under Glass - Review and discussion of Psychonauts and Massive Chalice. Light discussion of other Schafer games.

Why Numbering Should Start At One by Artir - The author responds to many well known arguments in favor of 0-indexing.

Still Feel Anxious About Communication Every Day by Brute Reason - Setting boundaries. Telling people they hurt you. Doing these things without anxiety might be impossible, you have to do it anyway.

Burning Man by Qualia Computing - Write up of a Burning Man trip. Very long. Introduction. Strong Emergence. The People. Metaphysics. The Strong Tlön Hypothesis. Merging with Other Humans. Fear, Danger, and Tragedy. Post-Darwinian Sexuality and Reproduction. Economy of Thoughts about the Human Experience. Transcending Our Shibboleths. Closing Thoughts.

The Big List Of Existing Things by Everything Studies - Existence of fictional and possible people. Heaps and the Sorites paradox. Categories and basic building blocks. Relational databases. Implicit maps and territories. Which maps and concepts should we use?

Times To Die Mental Health I by (Status 451) - Personal thoughts on depression and suicide. "The depressed person is not seen crying all the time. It is in this way that the depressed person becomes invisible, even to themselves. Yet, positivity culture and the rise of progressive values that elude any conversation about suicide that is not about saving, occlude the unthinkable truth of someone’s existence, that they simply should not be living anymore."

Astronomy Problem by protokol2020 - Star-star occultation probability.

===Podcast:

The Impossible War by Waking Up with Sam Harris - "Ken Burns and Lynn Novick about their latest film, The Vietnam War."

Is It Time For A New Scientific Revolution Julia Galef On How To Make Humans Smarter by 80,000 Hours - How people can have productive intellectual disagreements. Urban Design. Are people more rational than 200 years ago? Effective Altruism. Twitter. Should more people write books, run podcasts, or become public intellectuals? Saying you don't believe X won't convince people. Quitting an econ PhD. Incentives in the intelligence community. Big institutions. Careers in rationality.

Parenting As A Rationalist by The Bayesian Conspiracy - Desire to protect kids is as natural as the need for human contact in general. Motivation to protect your children. Blackmail by threatening children. Parenting is a new sort of positive qualia. Support from family and friends. Complimenting effort and specific actions not general properties. Mindfulness. Treating kids as people. Handling kid's emotions. Non-violent communication.

The Nature Of Consciousness by Waking Up with Sam Harris - "The scientific and experiential understanding of consciousness. The significance of WWII for the history of ideas, the role of intuition in science, the ethics of building conscious AI, the self as an hallucination, how we identify with our thoughts, attention as the root of the feeling of self, the place of Eastern philosophy in Western science, and the limitations of secular humanism."

A16z Podcast On Trade by Noah Smith - Notes on a podcast Noah appeared on. Topics: Cheap labor as a substitute for automation. Adjustment friction. Exports and productivity.

Gillian Hadfield by EconTalk - "Hadfield suggests the competitive provision of regulation with government oversight as a way to improve the flexibility and effectiveness of regulation in the dynamic digital world we are living in."

The Turing Test by Ales Fidr (EA forum) - Harvard EA podcast: "The first four episodes feature Larry Summers on his career, economics and EA, Irene Pepperberg on animal cognition and ethics, Josh Greene on moral cognition and EA, Adam Marblestone on incentives in science, differential technological development"

Announcing the AI Alignment Prize

6 cousin_it 03 November 2017 03:45PM

Stronger than human artificial intelligence would be dangerous to humanity. It is vital that any such intelligence's goals are aligned with humanity's. Maximizing the chance that this happens is a difficult, important and under-studied problem.

To encourage more and better work on this important problem, we (Zvi Mowshowitz and Vladimir Slepnev) are announcing a $5000 prize for publicly posted work advancing understanding of AI alignment, funded by Paul Christiano.

This prize will be awarded based on entries gathered over the next two months. If the prize is successful, we will award further prizes in the future.

This prize is not backed by or affiliated with any organization.

Rules

Your entry must be published online for the first time between November 3 and December 31, 2017, and contain novel ideas about AI alignment. Entries have no minimum or maximum size. Important ideas can be short!

Your entry must be written by you, and submitted before 9pm Pacific Time on December 31, 2017. Submit your entries either as URLs in the comments below, or by email to apply@ai-alignment.com. We may provide feedback on early entries to allow improvement.

We will award $5000 to between one and five winners. The first place winner will get at least $2500. The second place winner will get at least $1000. Other winners will get at least $500.

Entries will be judged subjectively. Final judgment will be by Paul Christiano. Prizes will be awarded on or before January 15, 2018.

What kind of work are we looking for?

AI Alignment focuses on ways to ensure that future smarter than human intelligence will have goals aligned with the goals of humanity. Many approaches to AI Alignment deserve attention. This includes technical and philosophical topics, as well as strategic research about related social, economic or political issues. A non-exhaustive list of technical and other topics can be found here.

We are not interested in research dealing with the dangers of existing machine learning systems commonly called AI that do not have smarter than human intelligence. These concerns are also understudied, but are not the subject of this prize except in the context of future smarter than human intelligence. We are also not interested in general AI research. We care about AI Alignment, which may or may not also advance the cause of general AI research.

[Link] New program can beat AlphaGo, didn't need input from human games

6 NancyLebovitz 18 October 2017 08:01PM

The Outside View isn't magic

6 Stuart_Armstrong 27 September 2017 02:37PM

Crossposted at Less Wrong 2.0.

The planning fallacy is an almost perfect example of the strength of using the outside view. When asked to predict the time taken for a project that they are involved in, people tend to underestimate the time needed (in fact, they tend to predict as if the question was how long things would take if everything went perfectly).

Simply telling people about the planning fallacy doesn't seem to make it go away. So the outside view argument is that you need to put your project into the "reference class" of other projects, and expect time overruns as compared to your usual, "inside view" estimates (which focus on the details you know about the project).

So, for the outside view, what is the best way of estimating the time of a project? Well, to find the right reference class for it: the right category of projects to compare it with. You can compare the project with others that have similar features - number of people, budget, objective desired, incentive structure, inside view estimate of time taken, etc. - and then derive a time estimate for the project that way.

That's the outside view. But to me, it looks a lot like... induction. In fact, it looks a lot like the elements of a linear (or non-linear) regression. We can put those features (at least the quantifiable ones) into a linear regression with a lot of data about projects, shake it all about, and come up with regression coefficients.
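As a minimal sketch of that regression framing (assuming scikit-learn; the features and numbers are invented for illustration, not real project data):

```python
# A minimal sketch of "outside view as regression": predict a project's
# duration from quantifiable features of past projects. All data invented.
import numpy as np
from sklearn.linear_model import LinearRegression

# Features per past project: team size, budget ($k), inside-view estimate (weeks).
X = np.array([
    [3,  50,  4],
    [5, 120,  8],
    [2,  30,  3],
    [8, 300, 12],
])
# Actual durations (weeks) -- systematically above the inside-view estimates.
y = np.array([7, 14, 5, 21])

model = LinearRegression().fit(X, y)

# The reference class "speaks" through the fitted coefficients:
new_project = np.array([[4, 80, 6]])  # inside view says 6 weeks
print(model.predict(new_project))     # regression-based outside-view estimate
```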

At that point, we are left with a decent project timeline prediction model, and another example of human bias. The fact that humans often perform badly in prediction tasks is not exactly new - see for instance my short review on the academic research on expertise.

So what exactly is the outside view doing in all this?

 

The role of the outside view: model incompleteness and human bias

The main use of the outside view, for humans, seems to be to point out either an incompleteness in the model or a human bias. The planning fallacy has both of these: if you did a linear regression comparing your project with all projects with similar features, you'd notice your inside estimate was more optimistic than the regression - your inside model is incomplete. And if you also compared each person's initial estimate with the ultimate duration of their project, you'd notice a systematically optimistic bias - you'd notice the planning fallacy.

The first type of error tends to go away with time, if the situation is encountered regularly, as people refine models, add variables, and test them on the data. But the second type remains, as human biases are rarely cleared by mere data.

 

Reference class tennis

If use of the outside view is disputed, it often develops into a case of reference class tennis - where people on opposing sides insist or deny that a certain example belongs in the reference class (similarly to how, in politics, anything positive is claimed for your side and anything negative assigned to the other side).

But once the phenomenon you're addressing has an explanatory model, there are no issues of reference class tennis any more. Consider for instance Goodhart's law: "When a measure becomes a target, it ceases to be a good measure". A law that should be remembered by any minister of education wanting to reward schools according to improvements to their test scores.

This is a typical use of the outside view: if you'd just thought about the system in terms of inside facts - tests are correlated with child performance; schools can improve child performance; we can mandate that test results go up - then you'd have missed several crucial facts.

But notice that nothing mysterious is going on. We understand exactly what's happening here: schools have ways of upping test scores without upping child performance, and so they decided to do that, weakening the correlation between score and performance. Similar things happen in the failures of command economies; but again, once our model is broad enough to encompass enough factors, we get decent explanations, and there's no need for further outside views.

In fact, we know enough that we can show when Goodhart's law fails: when no-one with incentives to game the measure has control of the measure. This is one of the reasons central bank interest rate setting has been so successful. If you order a thousand factories to produce shoes, and reward the managers of each factory for the number of shoes produced, you're heading to disaster. But consider GDP. Say the central bank wants to increase GDP by a certain amount, by fiddling with interest rates. Now, as a shoe factory manager, I might have preferences about the direction of interest rates, and my sales are a contributor to GDP. But they are a tiny contributor. It is not in my interest to manipulate my sales figures, in the vague hope that, aggregated across the economy, this will falsify GDP and change the central bank's policy. The reward is too diluted, and would require coordination with many other agents (and coordination is hard).

Thus if you're engaging in reference class tennis, remember the objective is to find a model with enough variables, and enough data, so that there is no more room for the outside view - a fully understood Goodhart's law rather than just a law.

 

In the absence of a successful model

Sometimes you can have a strong trend without a compelling model. Take Moore's law, for instance. It is extremely strong, going back decades, and surviving multiple changes in chip technology. But it has no clear cause.

A few explanations have been proposed. Maybe it's a consequence of its own success, of chip companies using it to set their goals. Maybe there's some natural exponential rate of improvement in any low-friction feature of a market economy. Exponential-type growth in the short term is no surprise - that just means growth proportional to investment - so maybe it was an amalgamation of various short term trends.

Do those explanations sound unlikely? Possibly, but there is a huge trend in computer chips going back decades that needs to be explained. They are unlikely, but they have to be weighed against the unlikeliness of the situation. The most plausible explanation is a combination of the above and maybe some factors we haven't thought of yet.

But here's an explanation that is implausible: little time-travelling angels modify the chips so that they follow Moore's law. It's a silly example, but it shows that not all explanations are created equal, even for phenomena that are not fully understood. In fact there are four broad categories of explanations for putative phenomena that don't have a compelling model:

  1. Unlikely but somewhat plausible explanations.
  2. We don't have an explanation yet, but we think it's likely that there is an explanation.
  3. The phenomenon is a coincidence.
  4. Any explanation would go against stuff that we do know, and would be less likely than coincidence.

The explanations I've presented for Moore's law fall into category 1. Even if we hadn't thought of those explanations, Moore's law would fall into category 2, because of the depth of evidence for Moore's law and because a "medium length regular technology trend within a broad but specific category" is something that is intrinsically likely to have an explanation.

Compare with Kurzweil's "law of time and chaos" (a generalisation of his "law of accelerating returns") and Robin Hanson's model where the development of human brains, hunting, agriculture and the industrial revolution are all points on a trend leading to uploads. I discussed these in a previous post, but I can now better articulate the problem with them.

Firstly, they rely on very few data points (the more recent part of Kurzweil's law, the part about recent technological trends, has a lot of data, but the earlier part does not). This raises the probability that they are a mere coincidence (we should also consider selection bias in choosing the data points, which increases the probability of coincidence). Secondly, we have strong reasons to suspect that there won't be any explanation that ties together things like the early evolution of life on Earth, human brain evolution, the agricultural revolution, the industrial revolution, and future technology development. These phenomena have decent local explanations that we already roughly understand (local in time and space to the phenomena described), and these run counter to any explanation that would tie them together.

 

Human biases and predictions

There is one area where the outside view can still function for multiple phenomena across different eras: when it comes to pointing out human biases. For example, we know that doctors have been authoritative, educated, informed, and useless for most of human history (or possibly much worse than useless). Hence authoritative, educated, and informed statements or people are not to be considered of any value, unless there is some evidence the statement or person is truth tracking. We now have things like expertise research, some primitive betting markets, and track records to try and estimate their experience; these can provide good "outside views".

And the authors of the models of the previous section have some valid points where bias is concerned. Kurzweil's point that (paraphrasing) "things can happen a lot faster than some people think" is valid: we can compare predictions with outcomes. Robin has similar valid points in defense of the possibility of the em scenario.

The reason these explanations are more likely valid is because they have a very probable underlying model/explanation: humans are biased.

 

Conclusions

  • The outside view is a good reminder for anyone who may be using too narrow a model.
  • If the model explains the data well, then there is no need for further outside views.
  • If there is a phenomenon with data but no convincing model, we need to decide if it's a coincidence or there is an underlying explanation.
  • Some phenomena have features that make it likely that there is an explanation, even if we haven't found it yet.
  • Some phenomena have features that make it unlikely that there is an explanation, no matter how much we look.
  • Outside view arguments that point at human prediction biases, however, can be generally valid, as they only require the explanation that humans are biased in that particular way.

Rational Feed: Last Week's Community Articles and Some Recommended Posts

6 deluks917 25 September 2017 01:41PM

===Highly Recommended Articles:

Why I Am Not A Quaker Even Though It Often Seems As Though I Should Be by Ben Hoffman - Quakers have consistently gotten to the right answers faster than most people, or the author. Arbitrage strategies to beat the quakers. An incomplete survey of alternatives.

Could A Neuroscientist Understand A Microprocessor by Rationally Speaking - "Eric Jonas, discussing his provocative paper titled 'Could a Neuroscientist Understand a Microprocessor?' in which he applied state-of-the-art neuroscience tools, like lesion analysis, to a computer chip. By applying neuroscience's tools to a system that humans fully understand he was able to reveal how surprisingly uninformative those tools actually are."

Reasonable Doubt New Look Whether Prison Growth Cuts Crime by Open Philanthropy - Part 1 of a four-part, in-depth series on Criminal Justice reform. The remaining posts are linked below. "I estimate that at typical policy margins in the United States today, decarceration has zero net impact on crime. That estimate is uncertain, but at least as much evidence suggests that decarceration reduces crime as increases it. The crux of the matter is that tougher sentences hardly deter crime, and that while imprisoning people temporarily stops them from committing crime outside prison walls, it also tends to increase their criminality after release. As a result, “tough-on-crime” initiatives can reduce crime in the short run but cause offsetting harm in the long run. Empirical social science research—or at least non-experimental social science research—should not be taken at face value. Among three dozen studies I reviewed, I obtained or reconstructed the data and code for eight. Replication and reanalysis revealed significant methodological concerns in seven and led to major reinterpretations of four. These studies endured much tougher scrutiny from me than they did from peer reviewers in order to make it into academic journals. Yet given the stakes in lives and dollars, the added scrutiny was worth it. So from the point of view of decision makers who rely on academic research, today’s peer review processes fall well short of the optimal."

Deterrence De Minimus by Open Philanthropy - Part 2.

Incapacitation How Much Does Putting People Inside Prison Cut Crime Outside by Open Philanthropy - Part 3.

Aftereffects Us Evidence Says Doing More Time Typically Leads More Crime After by Open Philanthropy - Part 4.

===Scott:

L Dopen Thread by Scott Alexander - Bi-weekly public open thread. Berkeley SSC meetup. New ad for the Greenfield Guild, an online network of software consultants. Reasons to respect the Society of Friends.

Meditative States As Mental Feedback Loops by Scott Alexander - The main reason we don't see emotional positive feedback loops is that people get distracted. If you do not get distracted you can experience a bliss feedback loop.

Book Review Mastering The Core Teachings Of The Buddha by Scott Alexander - "Buddhism For ER Docs. ER docs are famous for being practical, working fast, and thinking everyone else is an idiot. MCTB delivers on all three counts." Practical Buddhism with a focus on getting things done. Buddhism is split into morality, concentration and wisdom. Discussion of "the Dark Night of the Soul", a sort of depression that occurs when you have had some but not enough spiritual experience.

===Rationalist:

Impression Track Records by Katja Grace - Three reasons it's better to keep impression track records and belief track records separate.

Why I Am Not A Quaker Even Though It Often Seems As Though I Should Be by Ben Hoffman - Quakers have consistently gotten to the right answers faster than most people, or the author. Arbitrage strategies to beat the quakers. An incomplete survey of alternatives.

The Best Self Help Should Be Self Defeating by mindlevelup - "Self-help is supposed to get people to stop needing it. But typical incentives in any medium mean that it’s possible to get people hooked on your content instead. A musing on how the setup for writing self-help differs from typical content."

Nobody Does The Thing That They Are Supposedly Doing by Kaj Sotala - "In general, neither organizations nor individual people do the thing that their supposed role says they should do." Evolutionary incentives. Psychology of motivation. Very large number of links.

Out To Get You by Zvi Mowshowitz - "Some things are fundamentally Out to Get You. They seek resources at your expense. Fees are hidden. Extra options are foisted upon you." You have four responses: Get Gone, Get Out (give up), Get Compact (limit what it wants) or Get Ready for Battle.

In Defense Of Unreliability by Ozy - Zvi claims that when he makes plans with friends in the Bay he never assumes the plan will actually occur. Ozy depends on unreliable transport. Getting places 10-15 minutes early is also costly. Flaking and agoraphobia.

Strategic Goal Pursuit And Daily Schedules by Rossin (lesswrong) - The author benefitted from Anna Salamon’s goal-pursuing heuristics and daily schedules.

Why Attitudes Matter by Ozy - Focusing on attitudes can be bad for some people. Two arguments: "First, for any remotely complicated situation, it would be impossible to completely list out all the things which are okay or not okay. Second, an attitude emphasis prevents rules-lawyering."

Humans Cells In Multicellular Future Minds by Robin Hanson - In general humans replace specific systems with more general adaptive systems. Seeing like a State. Most biological and cultural systems are not general. Multi-cellular organisms are tremendously inefficient. The power of entrenched systems. Human brains are extremely general. Human brains may win for a long time vs other forms of intelligence.

Recognizing Vs Generating An Important Dichotomy For Life by Gordon (Map and Territory) - Bullet Points -> Essay vs Essay -> Bullet Points. Generating ideas vs critique. Most advice is bad since it doesn't convey the reasons clearly. Let the other person figure out the actual advice for themselves.

Prediction Markets Update by Robin Hanson - Prediction markets provide powerful information but they challenge powerful entrenched interests, Hanson compares them to "a knowledgeable Autist in the C-suite". Companies selling straight prediction market tech mostly went under. Blockchain platforms for prediction markets. Some discussion of currently promising companies.

===AI:

Focus Areas Of Worst Case Ai Safety by The Foundational Research Institute - Redundant safety measures. Tripwires. Adversarial architectures. Detecting and formalizing suffering. Backup utility functions. Benign testing environments.

Srisk Faq by Tobias Baumann (EA forum) - Quite detailed responses to questions about suffering risks and their connection to AGI. Sections: General questions, The future, S-risks and x-risks, Miscellaneous.

===EA:

Reasonable Doubt New Look Whether Prison Growth Cuts Crime by Open Philanthropy - Part 1 of a four-part, in-depth series on Criminal Justice reform. The remaining posts are linked below. "I estimate that at typical policy margins in the United States today, decarceration has zero net impact on crime. That estimate is uncertain, but at least as much evidence suggests that decarceration reduces crime as increases it. The crux of the matter is that tougher sentences hardly deter crime, and that while imprisoning people temporarily stops them from committing crime outside prison walls, it also tends to increase their criminality after release. As a result, “tough-on-crime” initiatives can reduce crime in the short run but cause offsetting harm in the long run. Empirical social science research—or at least non-experimental social science research—should not be taken at face value. Among three dozen studies I reviewed, I obtained or reconstructed the data and code for eight. Replication and reanalysis revealed significant methodological concerns in seven and led to major reinterpretations of four. These studies endured much tougher scrutiny from me than they did from peer reviewers in order to make it into academic journals. Yet given the stakes in lives and dollars, the added scrutiny was worth it. So from the point of view of decision makers who rely on academic research, today’s peer review processes fall well short of the optimal."

Deterrence De Minimus by Open Philanthropy - Part 2.

Incapacitation How Much Does Putting People Inside Prison Cut Crime Outside by Open Philanthropy - Part 3.

Aftereffects Us Evidence Says Doing More Time Typically Leads More Crime After by Open Philanthropy - Part 4.

Paypal Giving Fund by Jeff Kaufman - The PayPal giving fund lets you batch donations and PayPal covers the fees if you use it. Jeff thought there must be a catch but it seems legit.

What Do Dalys Capture by Danae Arroyos (EA forum) - How Disability Adjusted Life Years are computed. DALYs misrepresent mental health. DALYs miss indirect effects. Other issues.

Against Ea Pr by Ozy - The EA community is the only large entity trying to produce accurate and publicly available assessments of charities. Hence the EA community should not trade away any honesty. EAs should simply say which causes and organizations are most effective, they should not worry about PR concerns.

Ea Survey 2017 Series Qualitative Comments Summary by tee (EA forum) - Are you an EA, how welcoming is EA, local EA meetup attendance, concerns with not being 'EA enough', improving the survey.

Demographics Ii by tee (EA forum) - Racial breakdown. Percent white in various geographic locations. Political spectrum. Politics correlated with cause area, diet and geography, employment, fields of study, year joining EA.

===Politics and Economics:

Raj Chetty Course Using Big Data Solve Economic Social Problems by Marginal Revolution - Link to an eleven lecture course. "Equality of opportunity, education, health, the environment, and criminal justice. In the context of these topics, the course provides an introduction to basic statistical methods and data analysis techniques, including regression analysis, causal inference, quasi-experimental methods, and machine learning."

Speech On Campus Reply To Brad Delong by Noah Smith - The safeguard put in place to exclude the small minority of genuinely toxic people will be overused. Comparison to the war on terror. Brad's exclusion criteria are incredibly vague. The speech restriction apparatus is patchwork and inconsistent. Cultural Revolution.

Deontologist Envy by Ozy - The behavior of your group is highly unlikely to affect the behavior of your political opponents. Many people respond to proposed tactics by asking "What if everyone did that?". Ozy claims these responses show an implicit Kantian or deontological point of view.

Peak Fossil Fuel by Bayesian Investor - Electric cars will have a 99% market share by 2035. "Electric robocars run by Uber-like companies will be cheap enough that you’ll have trouble giving away a car bought today. Uber’s prices will be less than your obsolete car’s costs of fuel, maintenance, and insurance."

What We Didn't Get by Noah Smith - We are currently living in a world envisioned by the cyberpunk writers. The early industrial sci-fi writers also predicted many inventions. Why didn't mid-1900s sci-fi come true? We ran out of theoretical physics and we ran out of energy. Energy density of fuel sources. Some existing or plausible technology is just too dangerous. Discussion of whether strong AI, personal upload, nanotech and/or the singularity will come true.

Unpopular Ideas About Children by Julia Galef - Julia's thoughts on why she is collecting these lists. Parenting styles, pro and anti-natalism, sexuality, punishment, etc. Happiness studies. Some other studies finding extreme results.

The Margin Of Stupid by Noah Smith - Can we trust studies showing that millennials are as racist as their parents, except for the ones in college who are extreme leftists?

Role of Allies in Queer Spaces by Brute Reason - The main purpose of having allies in LBGTQA spaces is providing cover for closeted or questioning members. Genuinely cis-straight allies are ok in some spaces like LBGTQA bands. But straight allies cause problems when they are present in queer support spaces.

The Wonder Of International Adoption by Bryan Caplan - Benefits of international adoption of third world children. Adoptees are extremely stunted physically on arrival but make up some of the difference post adoption. International adoption raises IQ by at least 4 points on average and perhaps as much as 8.

===Misc:

Coin Flipping Problem by protokol2020 - Flipping coins until you get a pre-committed sequence. You re-start whenever your flip doesn't match the sequence. Relationship between the expected number of flips and the length of the sequence.

Seek Not To Be Entertained by Mr. Money Mustache - Don't be normal, normal people need constant entertainment. You can get enjoyment and satisfaction from making things. Advice for people less abnormal than MMM. What you enjoy doesn't matter, what matters is what is good for you.

Propositions On Immortality by sam[]zdat - Fiction. A man digresses about philosophy, the nature of time, the soul, consciousness and mortality.

Comments For Ghost by Tom Bartleby - Ghost is a blog platform that doesn't natively support comments. Three important use cases and why they all benefit from comments: the ex-Wordpress blogger who wants things to 'just work'; power users who care about privacy and don't want to use third party comments; the static-site fence-sitter, since the main dynamic content you want is comments.

Prime Crossword by protokol2020 - Can you create a grid larger than [3,7],[1,1] where all the rows and columns are primes? (37, 11, 31 and 71 are prime).

===Podcast:

Reihan Salam by The Ezra Klein Show - Remaking the Republican party, but not the way Donald Trump did it. "The future of the Republican Party, the healthcare debate, and how he would reform our immigration system (and upend the whole way we talk about it). "

Into The Dark Land by Waking Up with Sam Harris - "Siddhartha Mukherjee about his Pulitzer Prize winning book, The Emperor of All Maladies: A Biography of Cancer."

Conversation with Larry Summers by Marginal Revolution - "Mentoring, innovation in higher education, monopoly in the American economy, the optimal rate of capital income taxation, philanthropy, Herman Melville, the benefits of labor unions, Mexico, Russia, and China, Fed undershooting on the inflation target, and Larry’s table tennis adventure in the summer Jewish Olympics."

Hillary Clinton by The Ezra Klein Show - Hillary's dream of paying for basic income with revenue from shared national resources. Why she scrapped the plan. Hillary thinks she should perhaps have thrown caution to the wind. Hillary isn't a radical, she is proud of the American political system and is annoyed others don't share her enthusiasm for incremental progress.

David Remnick by The Ezra Klein Show - New Yorker editor. "Russia’s meddling in the US election, Russia’s transformation from communist rule to Boris Yeltsin and Vladimir Putin, his magazine’s coverage of President Donald Trump, how he chooses his reporters and editors, and how to build a real business around great journalism."

Gabriel Zucman by EconTalk - "Research on inequality and the distribution of income in the United States over the last 35 years. Zucman finds that there has been no change in income for the bottom half of the income distribution over this time period with large gains going to the top 1%. The conversation explores the robustness of this result to various assumptions and possible explanations for the findings."

Could A Neuroscientist Understand A Microprocessor by Rationally Speaking - "Eric Jonas, discussing his provocative paper titled 'Could a Neuroscientist Understand a Microprocessor?' in which he applied state-of-the-art neuroscience tools, like lesion analysis, to a computer chip. By applying neuroscience's tools to a system that humans fully understand he was able to reveal how surprisingly uninformative those tools actually are."

Intrinsic properties and Eliezer's metaethics

6 Tyrrell_McAllister 29 August 2017 11:26PM

Abstract

I give an account of why some properties seem intrinsic while others seem extrinsic. In light of this account, the property of moral goodness seems intrinsic in one way and extrinsic in another. Most properties do not suffer from this ambiguity. I suggest that this is why many people find Eliezer's metaethics to be confusing.

Section 1: Intuitions of intrinsicness

What makes a particular property seem more or less intrinsic, as opposed to extrinsic?

Consider the following three properties that a physical object X might have:

  1. The property of having the shape of a regular triangle. (I'll call this property "∆-ness" or "being ∆-shaped", for short.)
  2. The property of being hard, in the sense of resisting deformation.
  3. The property of being a key that can open a particular lock L (or L-opening-ness).

To me, intuitively, ∆-ness seems entirely intrinsic, and hardness seems somewhat less intrinsic, but still very intrinsic. However, the property of opening a particular lock seems very extrinsic. (If the notion of "intrinsic" seems meaningless to you, please keep reading. I believe that I ground these intuitions in something meaningful below.)

When I query my intuition on these examples, it elaborates as follows:

(1) If an object X is ∆-shaped, then X is ∆-shaped independently of any consideration of anything else. Object X could manifest its ∆-ness even in perfect isolation, in a universe that contained no other objects. In that sense, being ∆-shaped is intrinsic to X.

(2) If an object X is hard, then that fact does have a whiff of extrinsicness about it. After all, X's being hard is typically apparent only in an interaction between X and some other object Y, such as in a forceful collision after which the parts of X are still in nearly the same arrangement.

Nonetheless, X's hardness still feels to me to be primarily "in" X. Yes, something else has to be brought onto the scene for X's hardness to do anything. That is, X's hardness can be detected only with the help of some "test object" Y (to bounce off of X, for example). Nonetheless, the hardness detected is intrinsic to X. It is not, for example, primarily a fact about the system consisting of X and the test object Y together.

(3) Being an L-opening key (where L is a particular lock), on the other hand, feels very extrinsic to me. A thought experiment that pumps this intuition for me is this: Imagine a molten blob K of metal shifting through a range of key-shapes. The vast majority of such shapes do not open L. Now suppose that, in the course of these metamorphoses, K happens to pass through a shape that does open L. Just for that instant, K takes on the property of L-opening-ness. Nonetheless, and here is the point, an observer without detailed knowledge of L in particular wouldn't notice anything special about that instant.

Contrast this with the other two properties: An observer of three dots moving in space might notice when those three dots happen to fall into the configuration of a regular triangle. And an observer of an object passing through different conditions of hardness might notice when the object has become particularly hard. The observer can use a generic test object Y to check the hardness of X. The observer doesn't need anything in particular to notice that X has become hard.

But all that is just an elaboration of my intuitions. What is really going on here? I think that the answer sheds light on how people understand Eliezer's metaethics.

Section 2: Is goodness intrinsic?

I was led to this line of thinking while trying to understand why Eliezer's metaethics is consistently confusing.

The notion of an L-opening key has been my personal go-to analogy for thinking about how goodness (of a state of affairs) can be objective, as opposed to subjective. The analogy works like this: We are like locks, and states of affairs are like keys. Roughly, a state is good when it engages our moral sensibilities so that, upon reflection, we favor that state. Speaking metaphorically, a state is good just when it has the right shape to "open" us. (Here, "us" means normal human beings as we are in the actual world.) Being of the right shape to open a particular lock is an objective fact about a key. Analogously, being good is an objective fact about a state of affairs.

Objective in what sense? In this important sense, at least: The property of being L-opening picks out a particular point in key-shape space[1]. This space contains a point for every possible key-shape, even if no existing key has that shape. So we can say that a hypothetical key is "of an L-opening shape" even if the key is assumed to exist in a world that has no locks of type L. Analogously, a state can still be called good even if it is in a counterfactual world containing no agents who share our moral sensibilities.

But the discussion in Section 1 made "being L-opening" seem, while objective, very extrinsic, and not primarily about the key K itself. The analogy between "L-opening-ness" and goodness seems to work against Eliezer's purposes. It suggests that goodness is extrinsic, rather than intrinsic. For, one cannot properly call a key "opening" in general. One can only say that a key "opens this or that particular lock". But the analogous claim about goodness sounds like relativism: "There's no objective fact of the matter about whether a state of affairs is good. There's just an objective fact of the matter about whether it is good to you."

This, I suppose, is why some people think that Eliezer's metaethics is just warmed-over relativism, despite his protestations.

Section 3: Seeing intrinsicness in simulations

I think that we can account for the intuitions of intrinsicness in Section 1 by looking at them from the perspective of simulations. Moreover, this account will explain why some of us (including perhaps Eliezer) judge goodness to be intrinsic.

The main idea is this: In our minds, a property P, among other things, "points to" the test for its presence. In particular, P evokes whatever would be involved in detecting the presence of P. Whether I consider a property P to be intrinsic depends on how I would test for the presence of P — NOT, however, on how I would test for P "in the real world", but rather on how I would test for P in a simulation that I'm observing from the outside.

Here is how this plays out in the cases above.

(1) In the case of being ∆-shaped, consider a simulation (on a computer, or in your mind's eye) consisting of three points connected by straight lines to make a triangle X floating in space. The points move around, and the straight lines stretch and change direction to keep the points connected. The simulation itself just keeps track of where the points and lines are. Nonetheless, when X becomes ∆-shaped, I notice this "directly", from outside the simulation. Nothing else within the simulation needs to react to the ∆-ness. Indeed, nothing else needs to be there at all, aside from the points and lines. The ∆-shape detector is in me, outside the simulation. To make the ∆-ness of an object X manifest, the simulation needs to contain only the object X itself.

In summary: A property will feel extremely intrinsic to X when my detecting the property requires only this: "Simulate just X."

(2) For the case of hardness, imagine a computer simulation that models matter and its motions as they follow from the laws of physics and my exogenous manipulations. The simulation keeps track of only fundamental forces, individual molecules, and their positions and momenta. But I can see on the computer display what the resulting clumps of matter look like. In particular, there is a clump X of matter in the simulation, and I can ask myself whether X is hard.

Now, on the one hand, I am not myself a hardness detector that can just look at X and see its hardness. In that sense, hardness is different from ∆-ness, which I can just look at and see. In this case, I need to build a hardness detector. Moreover, I need to build the detector inside the simulation. I need some other thing Y in the simulation to bounce off of X to see whether X is hard. Then I, outside the simulation, can say, "Yup, the way Y bounced off of X indicates that X is hard." (The simulation itself isn't generating statements like "X is hard", any more than the 3-points-and-lines simulation above was generating statements about whether the configuration was a regular triangle.)

On the other hand, crucially, I can detect hardness with practically anything at all in addition to X in the simulation. I can take practically any old chunk of molecules and bounce it off of X with sufficient force.

A property of an object X still feels intrinsic when detecting the property requires only this: "Simulate just X + practically any other arbitrary thing."

Indeed, perhaps I need only an arbitrarily small "epsilon" chunk of additional stuff inside the simulation. Given such a chunk, I can run the simulation to knock the chunk against X, perhaps from various directions. Then I can assess the results to conclude whether X is hard. The sense of intrinsicness comes, perhaps, from "taking the limit as epsilon goes to 0", seeing the hardness there the whole time, and interpreting this as saying that the hardness is "within" X itself.

In summary: A property will feel very intrinsic to X when its detection requires only this: "Simulate just X + epsilon."

(3) In this light, L-opening keys differ crucially from ∆-shaped things and from hard things.

An L-opening key differs from a ∆-shaped object because I myself do not encode lock L. Whereas I can look at a regular triangle and see its ∆-ness from outside the simulation, I cannot do the same (let's suppose) for keys of the right shape to open lock L. So I cannot simulate a key K alone and see its L-opening-ness.

Moreover, I cannot add something merely arbitrary to the simulation to check K for L-opening-ness.  I need to build something very precise and complicated inside the simulation: an instance of the lock L. Then I can insert K in the lock and observe whether it opens.

I need, not just K, and not just K + epsilon: I need to simulate K + something complicated in particular.

Section 4: Back to goodness

So how does goodness as a property fit into this story?

There is an important sense in which goodness is more like being ∆-shaped than it is like being L-opening. Namely, goodness of a state of affairs is something that I can assess myself from outside a simulation of that state. I don't need to simulate anything else to see it. Putting it another way, goodness is like L-opening would be if I happened myself to encode lock L. If that were the case, then, as soon as I saw K take on the right shape inside the simulation, that shape could "click" with me outside of the simulation.

That is why goodness seems to have the same ultimate kind of intrinsicness that ∆-ness has and which being L-opening lacks. We don't encode locks, but we do encode morality.

 

Footnote

1. Or, rather, a small region in key-shape space, since a lock will accept keys that vary slightly in shape.

P: 0 <= P <= 1

6 DragonGod 27 August 2017 09:57PM

Part of The Contrarian Sequences.                

Reply to infinite certainty and 0 and 1 are not probabilities.

 

Introduction

In infinite certainty, Eliezer makes the argument that you can't ever be absolutely sure of a proposition. That is an argument I disagreed with for a long time, but due to akrasia, I never got around to writing up my disagreement. I think I have a more coherent counterargument now, and present it below. Because the post I am replying to and infinite certainty are linked, I address both of them in this post.

 

This doesn't mean, though, that I have absolute confidence that 2 + 2 = 4.  See the previous discussion on how to convince me that 2 + 2 = 3, which could be done using much the same sort of evidence that convinced me that 2 + 2 = 4 in the first place.  I could have hallucinated all that previous evidence, or I could be misremembering it.  In the annals of neurology there are stranger brain dysfunctions than this.

This is true. That a statement is true does not mean that you have absolute confidence in the veracity of the statement. It is possible that you may have hallucinated everything.

 

Suppose you say that you're 99.99% confident that 2 + 2 = 4.  Then you have just asserted that you could make 10,000 independent statements, in which you repose equal confidence, and be wrong, on average, around once.

I am not so sure of this. If I have X% confidence in a belief and I am well calibrated, then for any K statements in which I claim X% confidence, you would expect ((100-X)/100)*K of those statements to be wrong and the remainder to be right. It does not follow that if I have X% confidence in a belief, I can actually produce K statements in which I repose equal confidence and be wrong only ((100-X)/100)*K times.

It's something like: X% confidence implies that if you made K such statements, then ((100-X)/100)*K of those statements would be wrong.

A well calibrated agent does not have to be able to produce K such statements with only ((100-X)/100)*K of them wrong in order to possess X% confidence in the proposition. It only means that in a hypothetical world in which they did make K statements, if they were well calibrated, only ((100-X)/100)*K of those statements would be wrong. To assert that a well calibrated agent must be able to make those statements before they can have X% confidence is to treat the hypothetical as a given fact—either an honest mistake, or deliberate malice.
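To make the distinction concrete, here is a minimal simulation sketch in Python (the seed and numbers are mine, purely illustrative): a calibrated agent's 99.99% claims are wrong about once per 10,000 statements in expectation; no actual run of 10,000 utterances is required for the confidence to be meaningful.

    import random

    # Illustrative only: simulate a perfectly calibrated agent whose
    # statements are each wrong independently with probability (100 - X) / 100.
    random.seed(0)
    K, X = 10_000, 99.99  # hypothetical statements, confidence in percent
    wrong = sum(random.random() < (100 - X) / 100 for _ in range(K))
    print(f"wrong on {wrong} of {K} statements")  # roughly 1, on average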

 

As for the notion that you could get up to 100% confidence in a mathematical proposition—well, really now!  If you say 99.9999% confidence, you're implying that you could make one million equally fraught statements, one after the other, and be wrong, on average, about once.  That's around a solid year's worth of talking, if you can make one assertion every 20 seconds and you talk for 16 hours a day.

Assert 99.9999999999% confidence, and you're taking it up to a trillion.  Now you're going to talk for a hundred human lifetimes, and not be wrong even once?

Assert a confidence of (1 - 1/googolplex) and your ego far exceeds that of mental patients who think they're God.

And a googolplex is a lot smaller than even relatively small inconceivably huge numbers like 3^^^3.


All based on the same flawed premise, and equally flawed.

 

I am Infinitely Certain

There is one proposition that I would start with and assign a probability of 1. Not 1 - 1/googolplex, not 1 - 1/3^^^^3, not 1 - epsilon (where epsilon is an arbitrarily small number), but a probability of 1.

 

I exist.

 

Rene Descartes presents a very wonderful argument for the veracity of this statement:

Accordingly, seeing that our senses sometimes deceive us, I was willing to suppose that there existed nothing really such as they presented to us; And because some men err in reasoning, and fall into Paralogisms, even on the simplest matters of Geometry, I, convinced that I was as open to error as any other, rejected as false all the reasonings I had hitherto taken for Demonstrations; And finally, when I considered that the very same thoughts (presentations) which we experience when awake may also be experienced when we are asleep, while there is at that time not one of them true, I supposed that all the objects (presentations) that had ever entered into my mind when awake, had in them no more truth than the illusions of my dreams. But immediately upon this I observed that, whilst I thus wished to think that all was false, it was absolutely necessary that I, who thus thought, should be something; And as I observed that this truth, I think, therefore I am, was so certain and of such evidence that no ground of doubt, however extravagant, could be alleged by the Sceptics capable of shaking it, I concluded that I might, without scruple, accept it as the first principle of the philosophy of which I was in search.

 

Eliezer quotes Rafal Smigrodzki:

"I would say you should be able to assign a less than 1 certainty level to the mathematical concepts which are necessary to derive Bayes' rule itself, and still practically use it.  I am not totally sure I have to be always unsure.  Maybe I could be legitimately sure about something.  But once I assign a probability of 1 to a proposition, I can never undo it.  No matter what I see or learn, I have to reject everything that disagrees with the axiom.  I don't like the idea of not being able to change my mind, ever."

 

I am alright with accepting as an axiom that I exist. I see no reason why I should be cautious of assigning a probability of 1 to this statement. I am infinitely certain that I exist.               


If you accept Descartes' argument, then this is very important. You're accepting that we can be infinitely certain about a proposition—and not just that, but that it is sensible to be infinitely certain about a proposition. Usually, only one counterexample is necessary, but there are several other statements to which you may assign a probability of 1.

 

I believe that I exist.                      

 

I believe that I believe that I exist. 

 

I believe that I believe that I believe that I exist.

And so on and so forth, ad infinitum. An infinite chain of statements, all of which are exactly true. I have satisfied Eliezer's (fatuous) requirements for assigning a certain level of confidence to a proposition. If you feel that it is not sensible to assign probability 1 to the first statement, then consider this argument. I assign a probability 1 to the proposition "I exist". This means that the proposition "I exist" exists (pun intended) in my mental map of the world, and is therefore a belief of mine. By deduction, if I assign a probability of 1 to the statement "I exist", then I must assign a probability of 1 to the proposition "I believe that I exist". By induction, I must assign a probability of 1 to all the infinite statements, and all of them are true.                     

(I assign a probability of 1 to deduction being true).

 

Generally, using the power of recursion, we can pick any statement, to which we assign a probability of 1 and generate infinite more statements to which we (by deduction) also assign a probability of 1.               

Let X be a proposition to which we assign a probability of 1.                 

def f(var):
    # Given a proposition var to which we assign a probability of 1,
    # print the unending chain "I believe that var.", "I believe that
    # I believe that var.", and so on. Each statement follows from the
    # previous one by deduction, so each is also assigned probability 1.
    statement = var
    while True:
        statement = "I believe that " + statement
        print(statement + ".")

Calling f(X) for any X to which we assign a probability of 1 prints an infinite number of statements to which we also assign a probability of 1.

 

While I'm at it, I can show that there are uncountably many such statements with a probability of 1.

Let S be the list of all propositions produced by f(X) (for some X to which we assigned a probability of 1).

import random

def g(S):
    # Draw a random, nonempty subset of the probability-1 propositions in
    # S and conjoin its members. A conjunction of propositions that each
    # have probability 1 itself has probability 1, and distinct subsets
    # yield distinct conjunctions.
    k = random.randrange(1, len(S) + 1)
    chosen = random.sample(S, k)
    statement = "I believe " + " and ".join(chosen)
    print(statement)
    f(statement)  # each conjunction seeds infinitely many further statements

Assuming #S = Aleph_null, there are 2^#S possible conjunctions (counting arbitrary, possibly infinite, subsets of S; the sketch above only draws finite ones), and each of them can be used to generate an infinite sequence of true propositions. By Cantor's diagonal argument, the number of propositions to which we assign a probability of 1 is uncountable. For each of those propositions, we assign a probability of 0 to their negation. That is, if you accept Descartes' argument, or accept any single proposition as having a probability of 1 (or 0), then you accept uncountably infinitely many propositions as having a probability of 1 (or 0). Either we can never be certain of any proposition ever, or we can be certain of uncountably infinitely many propositions (you can also use the outlined method to construct K statements with arbitrary accuracy).

Personally, I see no problem with accepting "I exist" (and deduction) as having P of 1.

 

When you work in log odds, the distance between any two degrees of uncertainty equals the amount of evidence you would need to go from one to the other.  That is, the log odds gives us a natural measure of spacing among degrees of confidence.

Using the log odds exposes the fact that reaching infinite certainty requires infinitely strong evidence, just as infinite absurdity requires infinitely strong counterevidence.

This ignores the fact that you can assign priors of 0 and 1—in fact, it is for this very reason that I argue that 0 and 1 are probabilities—Eliezer is right in that we can never update upwards (or downwards as the case may be) to 1 or 0 (without using priors of 0 or 1), but we can (and I argue we should) sometimes start with priors of 0 and 1.
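For concreteness, here is a minimal sketch of the log-odds picture the quoted passage describes (the function name and sample values are mine):

    import math

    def log_odds(p):
        # Log odds (logit) of a probability p: finite for 0 < p < 1,
        # and divergent as p approaches 0 or 1.
        return math.log(p / (1 - p))

    for p in (0.5, 0.9, 0.99, 0.999999):
        print(p, round(log_odds(p), 2))  # 0.0, 2.2, 4.6, 13.82

    # log_odds(1.0) divides by zero: *updating* to certainty requires
    # infinitely strong evidence, but a prior of 1 is simply assigned.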

 

0 and 1 as priors.

Consider Pascal's Mugging. Pascal's Mugging is a breaker ("breaker" is a name I coined for decision problems which break decision theories). Let us reconceive the problem such that the person doing the mugging is me.

I walk up to Eliezer and tell him that he should pay me $10,000 or I will grant him infinite negative utility.

Now, I cannot (as a matter of fundamental physical law) inflict infinite negative utility on Eliezer. However, if Eliezer is rational (maximising his expected utility), then Eliezer must pay me the money. No matter how much money I demand from Eliezer, Eliezer must pay me, because Eliezer does not assign a probability of 0 to me carrying out my threat, and no matter how small the probability is, as long as it's not 0, paying me the ransom I demanded is the choice which maximises expected utility.                
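A toy sketch of that expected-utility comparison (my own framing; the payoff numbers are invented). Any nonzero credence in the threat makes refusing infinitely bad, while a prior of exactly 0 removes the term entirely:

    def expected_utility(outcomes):
        # outcomes: (probability, utility) pairs. Probability-0 terms are
        # dropped explicitly, since 0 * float("-inf") is NaN, not 0.
        return sum(p * u for p, u in outcomes if p > 0)

    pay = expected_utility([(1.0, -10_000.0)])
    refuse = expected_utility([(1e-30, float("-inf")), (1 - 1e-30, 0.0)])
    print(pay, refuse)  # -10000.0 -inf: paying "maximises" expected utility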

 

(If you claim that it is impossible for me to grant you infinite negative utility, that infinite negative utility is incoherent, or that "infinite negative utility" returns a category error, then you are assigning a probability of 0 to the existence of infinite negative utility, and implicitly assigning a probability of 0 to me granting you infinite negative utility, because P(A) >= P(A and B), where A is "infinite negative utility exists" and B is "I can grant you infinite negative utility".)


I have no problem with decision problems which break decision theories, but when a problem breaks the very formulation of rationality itself, then I'm pissed. There is a trivial solution to Pascal's mugging using classical decision theory (accept the objective definition of probability; once you do so, the probability of me carrying out my threat becomes zero and the problem disappears). Only the insistence on clinging to an (unfounded) subjective probability that forbids 0 and 1 as probabilities leads to this mess.

If anything, Pascal's mugging should be a definitive proof demonstrating that indeed 0 and 1 are perfectly legitimate priors (if you accept a prior of 0 that I will grant you infinite negative utility, then trivially, you accept a prior of 1 that I do not grant you infinite negative utility). Pascal's mugging only "breaks" Expected utility theory if you forbid priors of 0 and 1—an inane commandment.    

 

I'll expand more on breakers, rationality, etc. in my upcoming paper (several tens of pages).

 

Conclusion

So I propose that it makes sense to say that 1 and 0 are not in the probabilities; just as negative and positive infinity, which do not obey the field axioms, are not in the real numbers.

The main reason this would upset probability theorists is that we would need to rederive theorems previously obtained by assuming that we can marginalize over a joint probability by adding up all the pieces and having them sum to 1.

However, in the real world, when you roll a die, it doesn't literally have infinite certainty of coming up some number between 1 and 6.  The die might land on its edge; or get struck by a meteor; or the Dark Lords of the Matrix might reach in and write "37" on one side.

If you made a magical symbol to stand for "all possibilities I haven't considered", then you could marginalize over the events including this magical symbol, and arrive at a magical symbol "T" that stands for infinite certainty.

But I would rather ask whether there's some way to derive a theorem without using magic symbols with special behaviors.  That would be more elegant.  Just as there are mathematicians who refuse to believe in double negation or infinite sets, I would like to be a probability theorist who doesn't believe in absolute certainty.

Eliezer presents a shaky basis for rejecting 0 and 1 as probabilities. His model leads to absurd conclusions (a proof by contradiction that 0 and 1 are indeed probabilities); he offers no benefits to rejecting the standard model and replacing it with his (only multiple demerits); and he doesn't formalise an alternative model of probability that is free of absurdities and has more benefits than the standard model.

0 and 1 are not probabilities is a solution in search of a problem.

 

 

Epistemic Hygiene

This article may have come across as overly vicious and confrontational; I adopted such an attitude to minimise the halo-effect bias in my perception of the original article.

 

The Contrarian Sequences

6 DragonGod 27 August 2017 07:33PM

A series of posts wherein I outline my disagreements with Shishou (Eliezer Yudkowsky). 

 

Discovering the Sequences was my second epiphany (the first was when I became an atheist, and my world was turned upside down). Eliezer's charisma, his arrogance, the force of his personality, and his eloquence all combined to make a deadly drug. I was hooked, and seized with a fervour greater than when I first gave my life to Jesus Christ. Eliezer became my new Jesus, and I was drinking the Kool-Aid pretty badly. Some months later (in light of criticism from both friend and foe), I realised I was a cultist, and began trying to sanitise myself. Due to the halo effect, I had accepted everything Eliezer said unconditionally, and never bothered trying to ascertain for myself the veracity of his claims.

 

I am stronger now than I was then, and aspiring higher still. This is a project in raising my epistemic hygiene. I will only respond to posts whose overarching thesis I feel is wrong, and develop my counterarguments there. I expect I should write counter-posts for at least 1% of all posts (if I don't reach that target, I'm probably not being critical enough), and at most 5% (if I exceed that, then I've probably biased myself in the opposite direction, and/or I'm getting a kick out of disagreeing with Eliezer). Posts in the Contrarian Sequences will appear in the chronological order in which I wrote them, not in the order in which the posts they reply to appear in the Sequences, or in any ontological order.

 

 Table of Contents

 

  1. The Reality of Emergence                              
  2. P: 0 <= P <= 1

 

Could the Maxipok rule have catastrophic consequences? (I argue yes.)

6 philosophytorres 25 August 2017 10:00AM

Here I argue that following the Maxipok rule could have truly catastrophic consequences.

Here I provide a comprehensive list of actual humans who expressed, often with great intensity, omnicidal urges. I also discuss the worrisome phenomenon of "latent agential risks."

And finally, here I argue that a superintelligence singleton constitutes the only mechanism that could neutralize the "threat of universal unilateralism" and the consequent breakdown of the social contract, resulting in a Hobbesian state of constant war among Earthians.

I would genuinely welcome feedback on any of these papers! The first one seems especially relevant to the good denizens of this website. :-)

[Link] NASA's ambitious plan to save Earth from a supervolcano

6 ChristianKl 24 August 2017 10:01AM

Halloween costume: Paperclipperer

5 Elo 21 October 2017 06:32AM

Original post: http://bearlamp.com.au/halloween-costume-paperclipperer/

Guidelines for becoming a paperclipperer for halloween.

Supplies

  • Paperclips (some as a prop; make your life easier by buying some, but show effort by making your own)
  • Pliers (extra pairs for extra effect)
  • Metal wire (can get colourful for novelty) (florist wire)
  • Crazy hat (for character)
  • Paperclip props.  Think glasses frame, phone case, gloves, cufflinks, shoes, belt, jewellery...
  • If party-going - consider a gift that is suspiciously paperclip-like.  Example - paperclip coasters, paperclip vase, paperclip party-snack-bowl
  • Epic commitment - make fortune cookies with paperclips in them.  The possibilities are endless.
  • Epic: paperclip tattoo on the heart.  Slightly less epic: draw paperclips on yourself.

Character

While at the party, use the pliers and wire to make paperclips.  When people are not watching, try to attach them to objects around the house (for example, on light fittings, on the toilet paper roll, under the soap).  When people are watching you, try to give them to people to wear.  Also wear them on the edges of your clothing.

When people ask about it, offer to teach them to make paperclips.  Exclaim that it's really fun!  Be confused, bewildered or distant when you insist you can't explain why.

Remember that paperclipping is a compulsion and has no reason.  However, it's very important.  "You can stop any time", but after a few minutes you get fidgety and pull out a new pair of pliers and some wire to make some more paperclips.

Try to leave paperclips where they can be found the next day or the next week: cutlery drawers, in the fridge, on the windowsills, and generally around the place.  The more home-made paperclips the better.

Try to get faster at making paperclips, try to encourage competitions in making paperclips.

Hints for conversation:

  • Are spiral galaxies actually just really big paperclips?
  • Have you heard the good word of our lord and saviour paperclips?
  • Would you like some paperclips in your tea?
  • How many paperclips would you sell your internal organs for?
  • Do you also dream about paperclips? (Best to have a dream prepared to share.)

Conflict

The better you are at the character, the more likely someone might try to spoil it by getting in your way, stealing your props, or taking your paperclips.  The more you are okay with it, the better - ideas like "that's okay, there will be more paperclips".  This is also why it's good to have a few spare pairs of pliers and wire.  Also know when to quit the battles and walk away.  This whole thing is about having fun.  Have fun!


Meta: chances are that other people who also read this will not be the paperclipper for halloween.  Which means that you can do it without fear that your friends will copy.  Feel free to share pictures!

Cross posted to lesserwrong: 

[Link] We've failed: paid publication, pirates win.

5 morganism 16 September 2017 09:53PM

Instrumental Rationality Sequence Finished! (w/ caveats)

5 lifelonglearner 09 September 2017 01:49AM

Hey everyone,

Back in April, I said I was going to start writing an instrumental rationality sequence.

It's...sort of done.

I ended up collecting the essays into a sort of e-book. It's mainly content that I've put here (Starting Advice, Planning 101, Habits 101, etc.), but there's also quite a bit of new content.

It clocks in at about 150 pages and 30,000 words, about 15,000 of which I wrote after the April announcement post. (Which beats my estimate of 10,000 words before burnout!!!)

However, the LW 1.0 editor isn't making it easy to port the stuff here from my Google Drive.

As LW 2.0 enters actual open beta, I'll repost / edit the essays and host them there. 

In the meantime, if you want to read the whole compiled book, the direct Google Doc link is here. That's where the real-time updates will happen, so it's what I'd recommend using to read it for now.

(There's also an online version on my blog if for some reason you want to read it there.)

It's my hope that this sequence becomes a useful reference for newcomers looking to learn more about instrumental rationality, which is more specialized than The Sequences (which really are more for epistemics).

Unfortunately, I didn't manage to write the book/sequence I set out to write. The actual book as it is now is about 10% as good as what I actually wanted. There's stuff I didn't get to write, more nuances I'd have liked to cover, more pictures I wanted to make, etc.

After putting in many hours of research and writing, I think I've learned more about the sort of effort that would need to go into making the actual project I'd outlined at the start.

There'll be a postmortem essay analyzing my expectations vs reality coming soon.

As a result of this project and a few other things, I'm feeling burned out. There probably won't be any major projects from me for a little bit, while I rest up.

The Doomsday argument in anthropic decision theory

5 Stuart_Armstrong 31 August 2017 01:44PM

EDIT: added a simplified version here.

Crossposted at the intelligent agents forum.

In Anthropic Decision Theory (ADT), behaviours that resemble the Self Sampling Assumption (SSA) derive from average utilitarian preferences (and from certain specific selfish preferences).

However, SSA implies the doomsday argument, and, to date, I hadn't found a good way to express the doomsday argument within ADT.

This post will remedy that, by showing how there is a natural doomsday-like behaviour for average utilitarian agents within ADT.

continue reading »

Is life worth living?

5 philosophytorres 30 August 2017 10:42AM

Genuinely curious how folks on this website would answer the following question:

 

First, imagine the improbable: God exists. Now pretend that he descends from the clouds and visits you one night, saying the following: "I'm going to give you exactly two choices. (1) I'll murder you right now and annihilate your soul, meaning that you'll have no more conscious experiences ever again. [Theologians call this "annihilationism."] Alternatively, (2) I'll allow you to relive your life up to this moment exactly as it unfolded the first time -- that is, all the exact same experiences, life decisions, outcomes, etc. If you choose the second, once you reach the present moment -- this moment right now -- I'll then annihilate your soul."

 

Which would you choose, if you were forced to pick one or the other?

Social Insight: Status Exchange: When an Insult Is a Compliment, When a Compliment Is an Insult

5 Bound_up 25 August 2017 03:57AM

Some rough synonyms for status include respect, prestige, and "coolness."

 

Conceptually, the idea I sometimes think of when I try to describe "status" in its constituent parts is that to have status is to have people feel that they owe you something, to feel like they would if you had just given them a gift. The balance of give-and-take in the encounter is tilted in your favor. Picture a king among subjects, being given gifts and praises. Every brush of his hand is itself a gift, every glance of his eyes a praise to the recipient. The give-and-take in a relationship is never exactly equal, and high status people have it tilted strongly in their favor.

 

With the people you know, you'll have implicitly established an individual give-and-take relationship with each of them, and if one of you fails to give as much as that balance (or imbalance) requires, you'll be asked to apologize. So, if you have a 60-40 relationship (your way) with someone, and they only give you 50, you'll feel offended and ask for the apology. An apology is essentially a recognition of failure to give somebody as much as is expected, and a promise to give them more from now on/take less from now on. In other words, to shift the actual give-and-take favorably in their direction. This is why asking for an apology is essentially a re-negotiation of power/a request for submission.

(You'll note that you can feel offended for being treated fairly if that's not what your give-and-take has been in the past, just like someone can apologize for acting fairly if more than that is expected of them. This is why apologies can be purposefully sought and extracted with the intention of gaining status/re-negotiating the give-and-take of the relationship. Ammunition will be noted, stored, and prepared in advance and the encounter will be initiated at a strategically opportune time. Ammunition includes anything that can make someone feel sorry, and sometimes you can win without ammunition by continuing to act or feel like you've been wronged even without being able to give a justification for it.)

With people you don't know, general status determines how much they "owe" you and you them. If you are high status, people will feel like they owe you even before you've had any give and take. They will treat you much the same way as they would if you had just done them a great favor and they wanted to show you appreciation and thanks.  As I said, having high status = people feel the same way they would feel if they owed you something in real life/you were giving them things in real life.

 

A compliment can be seen in two ways: as an assessment of a person, or as an attempt to raise their status. If you ever hear a nonsensical compliment, it's probably being used simply to raise the recipient's status, not to describe a quality the person has. The entire message is summed up in the fact that words clearly identifiable as definitely-a-compliment are spoken at all, not in what those specific words are.

Over-the-top compliments are one kind of nonsensical compliment, and as said, are (on the surface) attempts to raise someone's status, not comments on their qualities or abilities.

 

Let's blur out the words and look at how giving-a-compliment affects social status.

 

How good does a compliment make you feel? Scratch that. How good do compliments make most people feel? Personally, I'd feel better about a compliment the more I thought it said something I valued about myself, multiplied by how capable an assessor of that thing I considered the compliment-er.  So if you can consistently guess people's IQ or future success, and you tell me you think I've got the stuff, that's an amazing compliment, even if you're the whipping boy of the tribe. It is now my impression that most people's appreciation for a compliment is calculated differently.

Take the effusiveness of the compliment and add a bonus for how much more status than the complimented the compliment-er has (or subtract the difference if they have less status). That's how much people appreciate a given compliment.
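As a toy model of that rule (the numeric scales are made up, purely illustrative):

    def appreciation(effusiveness, complimenter_status, recipient_status):
        # The rule above: effusiveness, plus the status gap between the
        # compliment-er and the complimented (a bonus if they outrank you,
        # a penalty if they don't).
        return effusiveness + (complimenter_status - recipient_status)

    # Mild words from a celebrity beat effusive words from a low-status peer:
    print(appreciation(2, complimenter_status=9, recipient_status=3))  # 8
    print(appreciation(7, complimenter_status=1, recipient_status=3))  # 5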

 

Effusiveness can partially be measured without even understanding the language being spoken. The tone and body language will communicate how much deference is being shown the complimented.

You can also find some of the compliment's effusiveness in the actual words. Mostly just look at adjectives and adverbs, though. Are you extremely something-or-other? Cool, bump up the effusiveness a little. Are you tremendous? Ditto. However, whether you're extremely this versus that, or what you're tremendous about exactly, is mostly irrelevant.

 

As for how status affects things, let's say that whatever your status is, someone has status a little bit lower, maybe -1 relative to you. So, penalize the power of the compliment accordingly; it'll come out a little bit weaker than the effusiveness alone would suggest. In contrast, if Johnny Depp compliments you, or even nods at you approvingly, this "compliment" will get a substantial bonus for coming from a higher-status person.

"Oh my god; he looked at me" comes from this kind of thing. In contrast, "I don't want your apology/money" also does, when the other person is lower status (being mad at someone is like temporarily treating them like they have much lower status than usual).

 

You can see how this dynamic will play out if you start with "compliments from higher people feel better" and follow its implications.

 

If getting a compliment from a cool person feels better, then acting happy to receive a compliment signals that you consider them to be of higher status. At least, if you act happy enough. Get too excited about a compliment, and that suggests that you consider the other person to be of higher status than you (but you still have to do it when they really are higher status; it's quite awkward not to act pleased about a compliment from a higher-up and you'll lose points if you don't act in the usual manner).

 

So let's say someone of equal status gives you a mild compliment, not particularly effusive. If you act all excited, you've signaled that you are lower status than them. If someone of lower status mildly compliments you and you act impressed at all, you've lowered your status even more (ignoring counter-signaling for the moment).

 

Every compliment is a two-way street. The compliment is a signal of how they perceive your status relative to theirs, and how you receive the compliment signals how you perceive their status relative to yours. Both the compliment-er and the complimented have to choose their move, some choices grabbing for status and others granting it.

 

You can see how this plays out with low-status people who are desperate to give you over-the-top compliments. Every compliment is also an attempt to receive something. They want to see your reaction. If you respond at all, that validates them to some degree (and potentially lowers your status as a result). If they don't get the reactions they want, they'll exaggerate your merits, practically begging you to be appreciative in some way. You might also notice how awkward it feels to receive such excessive compliments from someone of lower status. (I might recommend taking them to the side, alone, where that feeling will suddenly disappear (mostly) and giving them some tips about not begging so much).

 

This feeling is instinctive, I hypothesize. It protects your status, and you can see why if you learn this stuff and think it through. But of course, evolution would like to get you not to respond to low-status people without you having to consciously know all this stuff. So it gives you a feeling. A feeling's a lot easier for evolution to give an organism than complicated abstract knowledge is.

This feeling makes you feel unimpressed by low-status compliments and awkward about the whole thing so as to preserve your status via not acting appreciative, lest you signal your acceptance of the compliment-er as higher status than you (or closer in status to you than they are).

 

On the other hand, a high-status person might find it useful to force you to choose between acting grateful to them and violating social norms. Giving you a compliment can force you into exactly that situation. Maybe you just met and want to impress Party C, so you have to present your nice, civilized face (see "person masks" at http://www.meltingasphalt.com/personhood-a-game-for-two-or-more-players/). Under those circumstances, "violate social norms" is not available to you, so if you receive a compliment, you kind of have to respond, you know? Inside you might be seething, though, as your hated rival forces you to dance through some hoops by offering you ever more effusive compliments.

A compliment, just like a gift, can be an offensive move. It pushes you into a certain role; if you don't act appreciative enough or reciprocate, you might lose points.

 

A compliment can be a gift, or an attack, or it can be begging, or it can be a test.

 

So, let's imagine how these principles play out in a variety of situations.

 

1. High compliments Low effusively. Low is only mildly appreciative, signaling higher status than they have. High is offended. Low doesn't act embarrassed (have you no shame?!) and loses points in High's eyes.

2. Several Lows effusively compliment a High. Then, one Low says something only mildly complimentary about High. Everyone tenses up a little and looks at Low (to censure him) and High (to see his reaction). Low has signaled possible enmity. The compliment is an insult.

3. A High on the enemy side singles out and insults a Low in your group. The Low is elevated by the attention of the High and is considered "a real player" now. The insult is a compliment. 

I've seen this one many a time in politics, where people are proud when they are personally decried by famous enemies. "Did you hear that Trump said I was dumb? Awesome, am I right?"

 

In the past, playing by my own rules (compliments are worth most if accurate, informationally dense, and coming from a competent assessor) led me to act, from everyone else's perspective, quite chaotically. To them, it seemed that sometimes I made the appropriate response and maintained status. Occasionally I accidentally executed elaborate plots which ended in my status increasing. But mostly, I consistently broke the rules in a way that lost me status and proved I didn't understand what was really going on. Which I didn't.

Most people seem to play by these rules (and others), so if you want to understand what they're doing, and how your actions look to them, this is one of the building blocks.

[Link] Habits 101: Techniques and Research

5 lifelonglearner 22 August 2017 10:54AM

Beauty as a signal (map)

4 turchin 12 October 2017 10:02AM

This is my new map, in which female beauty is presented as a signal that moves from woman to man through different media and amplifiers. pdf

Mini-conference "Near-term AI safety"

4 turchin 11 October 2017 03:19PM

TL;DR: The event will be in Moscow, Russia, and near-term risks of AI will be discussed. The main language will be Russian, but Jonathan Yan will speak in English from Hong Kong. English presentations will be uploaded later to the FB page of the group "Near-term AI safety." Speakers: S. Shegurin, A. Turchin, Jonathan Yan. The event's FB page is here.

In the last five years, artificial intelligence has developed at a much faster pace, owing to the success of neural network technologies. If we extrapolate these trends, near-human-level AI may appear in the next five to ten years, and there is a significant probability that this will lead to a global catastrophe. At a one-day conference at the Kocherga rationalist club, we'll look at how recent advances in the field of neural networks are changing our estimates of the timing of the creation of AGI, and what global catastrophes are possible in connection with the emergence of increasingly strong AI. A special guest of the program, Jonathan Yan from Hong Kong, will present (in English, via Skype) the latest research on this topic.

The language of the conference: the first two talks will be in Russian; Jonathan Yan's talk will be in English without translation, with the discussion after it also in English.

Registration: on the event page on Facebook.

Place: rationalist club "Kocherga", main hall, Bolshaya Dorogomilovskaya ul., 5, building 2.

Participation is billed at the anticafe's rates: 2.5 rubles per minute, coffee free.

A video broadcast will be on Facebook.

 

Program:

October 14, Saturday, 15.00 - start.

15.00 - Shegurin Sergey. "Is it possible to create a human level AI in the next 10 years?"

16.00 - Turchin Alexey. "The next 10 years: the global risks of AI before the creation of the superintelligence"

17.00 - Jonathan Yan. "Recent Developments Towards AGI & Why It's Nearer Than You Think" (in English)

17.40 - Discussion

 

 

Running a Futurist Institute.

4 fowlertm 06 October 2017 05:05PM

Hello,

My name is Trent Fowler, and I'm an aspiring futurist. To date I have given talks on two continents on machine ethics, AI takeoff dynamics, secular spirituality, existential risk, the future of governance, and technical rationality. I have written on introspection, the interface between language and cognition, the evolution of intellectual frameworks, and myriad other topics. In 2016 I began 'The STEMpunk Project', an endeavor to learn as much about computing, electronics, mechanics, and AI as possible, which culminated in a book published earlier this year. 

Elon Musk is my spirit animal. 

I am planning to found a futurist institute in Boulder, CO. I actually left my cushy job in East Asia to help make the future a habitable place. 

Is there someone I could talk to about how to do this? Should I incorporate as a 501(c)(3) or an LLC? What are the best ways of monetizing such an endeavor? How can I build an audience (meetup attendance has been anemic at best; what can I do about that)? And so on.

Best,

-Trent

Naturalized induction – a challenge for evidential and causal decision theory

4 Caspar42 22 September 2017 08:15AM

As some of you may know, I disagree with many of the criticisms leveled against evidential decision theory (EDT). Most notably, I believe that Smoking lesion-type problems don't refute EDT. I also don't think that EDT's non-updatelessness leaves a lot of room for disagreement, given that EDT recommends immediate self-modification to updatelessness. However, I do believe there are some issues with run-of-the-mill EDT. One of them is naturalized induction. It is in fact not only a problem for EDT but also for causal decision theory (CDT) and most other decision theories that have been proposed in- and outside of academia. It does not affect logical decision theories, however.

The role of naturalized induction in decision theory

Recall that EDT prescribes taking the action that maximizes expected utility, i.e.

where is the set of available actions, is the agent's utility function, is a set of possible world models, represents the agent's past observations (which may include information the agent has collected about itself). CDT works in a – for the purpose of this article – similar way, except that instead of conditioning on in the usual way, it calculates some causal counterfactual, such as Pearl's do-calculus: . The problem of naturalized induction is that of assigning posterior probabilities to world models (or or whatever) when the agent is naturalized, i.e., embedded into its environment.

Consider the following example. Let's say there are 5 world models w1, ..., w5, each of which has equal prior probability. These world models may be cellular automata. Now, the agent makes the observation obs. It turns out that worlds w1 and w2 don't contain any agents at all, and w3 contains no agent making the observation obs. The other two world models, on the other hand, are consistent with obs. Thus, P(wi | obs) = 0 for i = 1, 2, 3 and P(wi | obs) = 1/2 for i = 4, 5. Let's assume that the agent has only two actions a1 and a2, that in world model w4 the only agent making observation obs takes action a1, and that in w5 the only agent making observation obs takes action a2; then P(w4 | obs, a1) = 1 and P(w5 | obs, a2) = 1. Thus, if, for example, U(w4) > U(w5), an EDT agent would take action a1 to ensure that world model w4 is actual.
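The same example as a minimal code sketch (the labels w1..w5, a1, a2 are mine, standing in for the post's symbols):

    # Five equiprobable world models; the value records which action the
    # agent observing obs takes in that world (None = no such agent).
    worlds = {"w1": None, "w2": None, "w3": None, "w4": "a1", "w5": "a2"}
    utility = {"w4": 1.0, "w5": 0.0}  # assumes U(w4) > U(w5)

    # Naturalized induction: conditioning on obs rules out worlds that
    # contain no agent making that observation.
    consistent = [w for w, action in worlds.items() if action is not None]
    posterior = {w: 1 / len(consistent) for w in consistent}  # 1/2 each

    # EDT: taking a1 makes w4 certain and a2 makes w5 certain, so choose
    # the action of the higher-utility consistent world.
    best_world = max(consistent, key=lambda w: utility[w])
    print(posterior, "-> take", worlds[best_world])  # take a1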

The main problem of naturalized induction

This example makes it sound as though it's clear what posterior probabilities we should assign. But in general, it's not that easy. For one, there is the issue of anthropics: if one world model w1 contains more agents observing obs than another world model w2, does that mean P(w1 | obs) > P(w2 | obs)? Whether CDT and EDT can reason correctly about anthropics is an interesting question in itself (cf. Bostrom 2002; Armstrong 2011; Conitzer 2015), but in this post I'll discuss a different problem in naturalized induction: identifying instantiations of the agent in a world model.

It seems that the core of the reasoning in the above example was that some worlds contain an agent observing and others don't. So, besides anthropics, the central problem of naturalized induction appears to be identifying agents making particular observations in a physicalist world model. While this can often be done uncontroversially – a world containing only rocks contains no agents –, it seems difficult to specify how it works in general. The core of the problem is a type mismatch of the "mental stuff" (e.g., numbers or Strings) and the "physics stuff" (atoms, etc.) of the world model. Rob Bensinger calls this the problem of "building phenomenological bridges" (BPB) (also see his Bridge Collapse: Reductionism as Engineering Problem).

Sensitivity to phenomenological bridges

Sometimes, the decisions made by CDT and EDT are very sensitive to whether a phenomenological bridge is built or not. Consider the following problem:

One Button Per Agent. There are two similar agents with the same utility function. Each lives in her own room. Both rooms contain a button. If agent 1 pushes her button, it creates 1 utilon. If agent 2 pushes her button, it creates -50 utilons. You know that agent 1 is an instantiation of you. Should you press your button?

Note that this is essentially Newcomb's problem with potential anthropic uncertainty (see the second paragraph here) – pressing the button is like two-boxing, which causally gives you $1k if you are the real agent but costs you $1M if you are the simulation.  

If agent 2 is sufficiently similar to you to count as an instantiation of you, then you shouldn't press the button. If, on the other hand, you believe that agent 2 does not qualify as something that might be you, then it comes down to what decision theory you use: CDT would press the button, whereas EDT wouldn't (assuming that the two agents are strongly correlated).

It is easy to specify a problem where EDT, too, is sensitive to the phenomenological bridges it builds:

One Button Per World. There are two possible worlds. Each contains an agent living in a room with a button. The two agents are similar and have the same utility function. The button in world 1 creates 1 utilon, the button in world 2 creates -50 utilons. You know that the agent in world 1 is an instantiation of you. Should you press the button?

If you believe that the agent in world 2 is an instantiation of you, both EDT and CDT recommend you not to press the button. However, if you believe that the agent in world 2 is not an instantiation of you, then naturalized induction concludes that world 2 isn't actual and so pressing the button is safe.

Building phenomenological bridges is hard and perhaps confused

So, to solve the problem of naturalized induction and apply EDT/CDT-like decision theories, we need to solve BPB. The behavior of an agent is quite sensitive to how we solve it, so we better get it right.

Unfortunately, I am skeptical that BPB can be solved. Most importantly, I suspect that statements about whether a particular physical process implements a particular algorithm can't be objectively true or false. There seems to be no way of testing any such relations.

Probably we should think more about whether BPB really is doomed. There even seems to be some philosophical literature worth looking into (again, see this Brian Tomasik post; cf. some of Hofstadter's writings and the literatures surrounding "Mary the color scientist", the computational theory of mind, computation in cellular automata, etc.). But at this point, BPB looks confusing/confused enough that it is worth looking into alternatives.

Assigning probabilities pragmatically?

One might think that one could map between physical processes and algorithms on a pragmatic or functional basis. That is, one could say that a physical process A implements a program p to the extent that the results of A correlate with the output of p. I think this idea goes into the right direction and we will later see an implementation of this pragmatic approach that does away with naturalized induction. However, it feels inappropriate as a solution to BPB. The main problem is that two processes can correlate in their output without having similar subjective experiences. For instance, it is easy to show that Merge sort and Insertion sort have the same output for any given input, even though they have very different "subjective experiences". (Another problem is that the dependence between two random variables cannot be expressed as a single number and so it is unclear how to translate the entire joint probability distribution of the two into a single number determining the likelihood of the algorithm being implemented by the physical process. That said, if implementing an algorithm is conceived of as binary – either true or false –, one could just require perfect correlation.)
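For instance, here is a trivial check of that claim (my own illustration, not the post's code): the two algorithms agree on every input while their intermediate "experiences" differ completely.

    import random

    def insertion_sort(xs):
        # Builds the result by shifting each element into place one at a time.
        out = []
        for x in xs:
            i = 0
            while i < len(out) and out[i] <= x:
                i += 1
            out.insert(i, x)
        return out

    def merge_sort(xs):
        # Recursively splits the list and merges the sorted halves.
        if len(xs) <= 1:
            return list(xs)
        mid = len(xs) // 2
        left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
        merged = []
        while left and right:
            merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
        return merged + left + right

    data = [random.randrange(100) for _ in range(20)]
    assert insertion_sort(data) == merge_sort(data) == sorted(data)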

Getting rid of the problem of building phenomenological bridges

If we adopt an EDT perspective, it seems clear what we have to do to avoid BPB. If we don't want to decide whether some world contains the agent, then it appears that we have to artificially ensure that the agent views itself as existing in all possible worlds. So, we may take every world model and add a causally separate or non-physical entity representing the agent. I'll call this additional agent a logical zombie (l-zombie) (a concept introduced by Benja Fallenstein for a somewhat different decision-theoretical reason). To avoid all BPB, we will assume that the agent pretends that it is the l-zombie with certainty. I'll call this the l-zombie variant of EDT (LZEDT). It is probably the most natural evidentialist logical decision theory.

Note that in the context of LZEDT, l-zombies are a fiction used for pragmatic reasons. LZEDT doesn't make the metaphysical claim that l-zombies exist or that you are secretly an l-zombie. For discussions of related metaphysical claims, see, e.g., Brian Tomasik's essay Why Does Physics Exist? and references therein.

LZEDT reasons about the real world via the correlations between the l-zombie and the real world. In many cases, LZEDT will act as we expect an EDT agent to act. For example, in One Button Per Agent, it doesn't press the button because that ensures that neither agent pushes the button.

LZEDT doesn't need any additional anthropics but behaves like anthropic decision theory/EDT+SSA, which seems alright.

Although LZEDT may assign a high probability to worlds that don't contain any actual agents, it doesn't optimize for these worlds because it cannot significantly influence them. So, in a way LZEDT adopts the pragmatic/functional approach (mentioned above) of, other things equal, giving more weight to worlds that contain a lot of closely correlated agents.

LZEDT is automatically updateless. For example, it gives the money in counterfactual mugging. However, it invariably implements a particularly strong version of updatelessness. It's not just updatelessness in the way that "son of EDT" (i.e., the decision theory that EDT would self-modify into) is updateless, it is also updateless w.r.t. its existence. So, for example, in the One Button Per World problem, it never pushes the button, because it thinks that the second world, in which pushing the button generates -50 utilons, could be actual. This is the case even if the second world very obviously contains no implementation of LZEDT. Similarly, it is unclear what LZEDT does in the Coin Flip Creation problem, which EDT seems to get right.

So, LZEDT optimizes for world models that naturalized induction would assign zero probability to. It should be noted that this is not done on the basis of some exotic ethical claim according to which non-actual worlds deserve moral weight.

I'm not yet sure what to make of LZEDT. It is elegant in that it effortlessly gets anthropics right, avoids BPB and is updateless without having to self-modify. On the other hand, not updating on your existence is often counterintuitive and even regular updatelessness is, in my opinion, best justified via precommitment. Its approach to avoiding BPB isn't immune to criticism either. In a way, it is just a very wrong approach to BPB (mapping your algorithm into fictions rather than your real instantiations). Perhaps it would be more reasonable to use regular EDT with an approach to BPB that interprets anything as you that could potentially be you?

Of course, LZEDT also inherits some of the potential problems of EDT, in particular, the 5-and-10 problem.

CDT is more dependent on building phenomenological bridges

It seems much harder to get rid of the BPB problem in CDT. Obviously, the l-zombie approach doesn't work for CDT: because none of the l-zombies has a physical influence on the world, "LZCDT" would always be indifferent between all possible actions. More generally, because CDT exerts no control via correlation, it needs to believe that it might be X if it wants to control X's actions. So, causal decision theory only works with BPB.

That said, a causalist approach to avoiding BPB via l-zombies could be to tamper with the definition of causality such that the l-zombie "logically causes" the choices made by instantiations in the physical world. As far as I understand it, most people at MIRI currently prefer this flavor of logical decision theory.

Acknowledgements

Most of my views on this topic formed in discussions with Johannes Treutlein. I also benefited from discussions at AISFP.

Emotional labour

4 Elo 22 August 2017 12:54AM

A brief breakdown:

  • event: I broke your vase.
  • event: I bought you a gift but then left it at home
  • event: I want to go to a (privately valuable event) on our (relationship important day)

Options:

  1. I wanted to save you the effort of thinking about the thing and so I decided not to tell/ask you before it was resolved.
  2. I wanted to not have to withhold a thing from you, so I told you as soon as it was bothering me, so that I didn't have to lie/cheat/withhold/deceive you, even if I thought withholding was in your best interest

Discussion:

what is a better plan of action?

1 would be doing emotional labour in the form of:

I thought about the event, modelled how I thought you would feel about it, and then acted according to what I thought was best for making you feel better.

2 would be to put an emotional burden on the other person but carries with it more honesty, more expectation that the other person is autonomous and able to make choices for themselves.

I didn't want to withhold anything, but instead burdened you with making the choice about what to do about the matter by telling you about my conundrum.

I used to do 1, but now I do 2. The relationship books tend to suggest 2.

All of the things my brain ever conjured up used to tell me to do 1.

Brain: Make the martyr choice for people.  Don't tell them, suffer in secret.

I made a lot of relationship mistakes doing 1's in various situations, and now I do 2's.  I don't know why this works, but it lines up with everything I ever read - NVC, Daring Greatly, Gottman Institute research. I don't have much to add other than this - I wonder if you do 1's or 2's.

I would prefer people do 2's not 1's around me. (A little more on emotional labour)


Original post: http://bearlamp.com.au/emotional-labour/

[Link] It is easy to expose users' secret web habits, say researchers

4 ChristianKl 21 August 2017 07:05AM

[Link] Multiverse-wide Cooperation via Correlated Decision Making

4 Kaj_Sotala 20 August 2017 12:01PM
