
Leaving beta: Voting on moving to LessWrong.com

6 Vaniver 11 March 2018 11:40PM

It took longer than we hoped, but LessWrong 2.0 is finally ready to come out of beta. As discussed in the original announcement, we’re going to have a vote on whether or not to migrate the new site to the lesswrong.com URL. The vote will be open to people who had 1,000 or more LW karma at the time we announced the vote back in September, and they’ll receive a link by email or private message on the current LessWrong.com. If you had 1,000 or more karma in September and did not receive an email or PM, send an email to habryka@lesserwrong.com and we will send you the form link.

We take rationalist virtues seriously, and I think it’s important that the community actually be able to look at the new implementation and vision and be able to say “no thanks.” If over half of the votes are to not migrate, the migration will not happen and we’ll figure out how we want to move forward with the website we’ve built.

Unfortunately, the alternative option for what will happen with the lesswrong.com URL is not great. Before I got involved, the dominant plan was to replace it with a static HTML site, which would require minimal maintenance while preserving the value of old Sequences articles. So in the absence of another team putting forward heroic effort and coordinating with Trike, MIRI, etc., that is the world we would be moving towards.

Why not just keep things as they are? At the time, it was the consensus among old regulars that LW felt like an abandoned ghost town. A major concern about keeping it alive for the people still using it was that newcomers would read Sequences articles linked from elsewhere, check out the recent discussion and find it disappointing, and then bounce off of LW. This reduced its value for bringing people into the community.

More recently, various security concerns have made it a worse option to just keep old websites running – Trike has run into some issues where updating the server and antiquated codebase to handle security patches proved difficult, and they would prefer to no longer be responsible for maintaining the old website.

In case you’re just tuning in now, some basic details: I’ve been posting on LW for a long time, and about two years ago thought I was the person who cared most about making sure LW stayed alive, so decided to put effort into making sure that happened. But while I have some skills as a writer and a programmer, I’m not a webdev and not great at project management, and so things have been rather slow. My current role is mostly in being something like the ‘senior rationalist’ on the team, and supporting the team with my models of what should happen and why. The actual work is being done by a combination of Oliver Habryka, Raymond Arnold, and Ben Pace, and their contributions are why we finally have a site that’s ready to come out of beta.

You can read more about our vision for the new LessWrong here.

Comment author: PhilGoetz 16 December 2017 02:44:20PM *  1 point [-]

I don't think that what you need has any bearing on what reality has actually given you. Nor can we talk about different decision theories here--as long as we are talking about maximizing expected utility, we have our decision theory; that is enough specification to answer the Newcomb one-shot question. We can only arrive at a different outcome by stating the problem differently, or by sneaking in different metaphysics, or by just doing bad logic (in this case, usually allowing contradictory beliefs about free will in different parts of the analysis).

Your comment implies you're talking about policy, which must be modelled as an iterated game. I don't deny that one-boxing is good in the iterated game.

My concern in this post is that there's been a lack of distinction in the community between "one-boxing is the best policy" and "one-boxing is the best decision at one point in time in a decision-theoretic analysis, which assumes complete freedom of choice at that moment." This lack of distinction has led many people into wishful or magical rather than rational thinking.

Comment author: Vaniver 22 December 2017 07:01:06PM 0 points [-]

I don't think that what you need has any bearing on what reality has actually given you.

As far as I can tell, I would pay Parfit's Hitchhiker because of intuitions that were rewarded by natural selection. It would be nice to have a formalization that agrees with those intuitions.

or by sneaking in different metaphysics

This seems wrong to me, if you're explicitly declaring different metaphysics (if you mean the thing by metaphysics that I think you mean). If I view myself as a function that generates an output based on inputs, and my decision-making procedure as the search for the best such function (for maximizing utility), then this could be considered different metaphysics from trying to cause the most increase in utility for myself by making decisions, but it's not obvious that the latter leads to better decisions.

Comment author: PhilGoetz 16 December 2017 01:03:00AM *  0 points [-]

I can believe that it would make sense to commit ahead of time to one-box at such an event. Doing so would affect your behavior in a way that the predictor might pick up on.

Hmm. Thinking about this convinces me that there's a big problem here in how we talk about the problem, because if we allow people who already knew about Newcomb's Problem to play, there are really 4 possible actions, not 2:

  • intended to one-box, one-boxed
  • intended to one-box, two-boxed
  • intended to two-box, one-boxed
  • intended to two-box, two-boxed

I don't know if the usual statement of Newcomb's problem specifies whether the subject learns the rules of the game before or after the predictor makes a prediction. It seems to me that's a critical factor. If the subject is told the rules of the game before the predictor observes the subject and makes a prediction, then we're just saying Omega is a very good lie detector, and the problem is not even about decision theory, but about psychology: Do you have a good enough poker face to lie to Omega? If not, pre-commit to one-box.

We shouldn't ask, "Should you two-box?", but, "Should you two-box now, given how you would have acted earlier?" The various probabilities in the present depend on what you thought in the past. Under the proposition that Omega is perfect at predicting, the person inclined to 2-box should still 2-box, 'coz that $1M probably ain't there.

So Newcomb's problem isn't a paradox. If we're talking just about the final decision, the one made by a subject after Omega's prediction, then the subject should probably two-box (as argued in the post). If we're talking about two decisions, one before and one after the box-opening, then all we're asking is whether you can convince Omega that you're going to one-box when you aren't. Then it would not be terribly hard to say that a predictor might be so good (say, an Amazing Kreskin-level cold-reader of humans, or any predictor at all if you are an AI whose code it can inspect) that your only hope is to precommit to one-box.
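
A minimal sketch of the payoff structure behind those four intention/action pairs, assuming the standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box) and a predictor that keys entirely off your earlier intention; the figures and the intention-tracking assumption are illustrative, not part of the problem statement above.

    # Illustrative sketch: payoffs for the four intention/action pairs, assuming
    # the standard $1,000 / $1,000,000 payoffs and a predictor that bases its
    # prediction entirely on your earlier intention.
    SMALL = 1_000        # transparent box, always yours if you take it
    BIG = 1_000_000      # opaque box, filled iff one-boxing was predicted

    def payoff(intended_one_box, one_boxed):
        big = BIG if intended_one_box else 0       # prediction tracks intention
        return big if one_boxed else big + SMALL   # two-boxing adds the small box

    for intent in (True, False):
        for action in (True, False):
            print(f"intended one-box={intent}, one-boxed={action}: ${payoff(intent, action):,}")

On these assumptions the best cell is "intended to one-box, two-boxed", which is why the interesting question becomes whether you can actually produce that combination against a good predictor, rather than anything in decision theory proper.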

Comment author: Vaniver 16 December 2017 03:00:14AM 0 points [-]

I don't think this gets Parfit's Hitchhiker right. You need a decision theory that, once you are safely returned to the city, has you pay the rescuer even though you have no external obligation to do so. Otherwise they won't have rescued you in the first place.

Comment author: Vaniver 15 December 2017 10:18:36PM 1 point [-]

The argument for one-boxing is that you aren't entirely sure you understand physics, but you know Omega has a really good track record--so good that it is more likely that your understanding of physics is false than that you can falsify Omega's prediction. This is a strict reliance on empirical observations as opposed to abstract reason: count up how often Omega has been right and compute a prior.

Isn't it that you aren't entirely sure that you understand psychology, or that you do understand psychology well enough to think that you're predictable? My understanding is that many people have run Newcomb's Problem-style experiments at philosophy departments (or other places) and achieved sufficiently high accuracy that it makes sense to one-box at such events, even against fallible human predictors.
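
To make that concrete, here is a minimal sketch of the expected-value comparison, assuming the standard $1,000 / $1,000,000 payoffs and a predictor that is correct with probability p regardless of which action you take; both assumptions are illustrative rather than taken from the comment above.

    # Illustrative sketch: expected value of one-boxing vs. two-boxing given a
    # predictor with symmetric accuracy p, under the standard Newcomb payoffs.
    SMALL = 1_000        # transparent box
    BIG = 1_000_000      # opaque box, filled iff one-boxing was predicted

    def expected_values(p):
        one_box = p * BIG                    # big box is full iff the prediction was right
        two_box = (1 - p) * BIG + SMALL      # big box is full only if the predictor erred
        return one_box, two_box

    # One-boxing wins once p > (BIG + SMALL) / (2 * BIG) ≈ 0.5005, so even a
    # mediocre predictor is accurate enough to make one-boxing pay.
    for p in (0.5, 0.6, 0.9, 0.99):
        print(p, expected_values(p))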

Comment author: Vaniver 31 October 2017 03:02:11PM 0 points [-]

When I got the idea a long time ago, it was a single person's position and was called something like "Minister of Dissent." The idea was that a lot of useful criticism comes in bothersome packages, and having someone with a dual role of enforcing discourse standards and improving the relevant skills of people not meeting those standards would do more to lead to good discussion than just enforcing the standards. I was quickly convinced that this would be an especially draining job, and that it was better to have a team of people, such that they could do it only sometimes / not feel like they're always on the hook to steelman or help someone write a better comment.

I haven't come up with a better name yet than 'Sunshine Regiment' for pointing at the dual functionality of the moderation team, and am open to suggestions.

Comment author: wallowinmaya 01 October 2017 11:24:17AM 2 points [-]

The open beta will end with a vote of users with over a thousand karma on whether we should switch the lesswrong.com URL to point to the new code and database

How will you alert these users? (I'm asking because I have over 1000 karma but I don't know where I should vote.)

Comment author: Vaniver 05 October 2017 05:55:43PM 2 points [-]

Our current plan is to send an email with a vote link to everyone over the threshold; we're going to decide when to have the vote later in the open beta period.

Comment author: kgalias 24 September 2017 07:25:44PM 1 point [-]

When was the last data migration from LW 1.0? I'm getting an "Invalid email" message, even though I have a linked email here.

Comment author: Vaniver 27 September 2017 09:55:25PM *  2 points [-]

Late May, if I recall correctly. We'll be able to merge accounts if you made yours more recently or there was some trouble with the import.

Comment author: Rain 22 September 2017 02:27:13PM 3 points [-]

Any RSS feeds?

In response to comment by Rain on LW 2.0 Open Beta Live
Comment author: Vaniver 22 September 2017 06:36:40PM 1 point [-]

https://www.lesserwrong.com/feed.xml is the primary one; more customization is coming soon.

LW 2.0 Open Beta Live

23 Vaniver 21 September 2017 01:15AM

The LW 2.0 Open Beta is now live; this means you can create an account, start reading and posting, and tell us what you think.

Four points:

1) In case you're just tuning in, I took up the mantle of revitalizing LW through improving its codebase some time ago, and only made small amounts of progress until Oliver Habryka joined the project and put full-time engineering effort into it. He deserves the credit for the new design, and you can read about his strategic approach here.

2) If you want to use your current LW account on LW2.0, we didn't import the old passwords, and so you'll have to use the reset password functionality. If your LW account isn't tied to a current email, send a PM to habryka on lesswrong and he'll update the user account details on lesserwrong. He's also working on improving the site and sleeping and things like that, so don't expect an immediate response.

3) During the open beta there will be a green message in the bottom right hand corner of the screen. This is called Intercom, and is how you can tell us about issues with the site and ask other questions.

4) The open beta will end with a vote of users with over a thousand karma on whether we should switch the lesswrong.com URL to point to the new code and database. If this succeeds, all the activity from the open beta and the live site will be merged together. If the vote fails, we expect to archive LW until another team comes along to revive it. We currently don't have a date set, but this will be announced a week in advance.

Comment author: IlyaShpitser 17 September 2017 02:05:36AM *  0 points [-]

Vaniver, I sympathize with the desire to automate figuring out who experts are via point systems, but consider that even in academia (with a built-in citation pagerank), people still rely on names. That's evidence about pagerank systems not being great on their own. People game the hell out of citations.

Probably should weigh my opinion of rationality stuff quite low; I am neither a practitioner nor a historian of rationality. I have gotten gradually more pessimistic about the whole project.

Comment author: Vaniver 19 September 2017 06:34:40PM 0 points [-]

Vaniver, I sympathize with the desire to automate figuring out who experts are via point systems

To be clear, in this scheme whether or not someone had access to the expert votes would be set by hand.
