Comment author: turchin 19 October 2016 11:02:27AM 3 points [-]

The page http://lesswrong.com/r/discussion/new/ has been returning an error for me for 12 hours, but other pages are fine. Is this glitch only on my end?

error text: "You have encountered an error in the code that runs Less Wrong. The site maintainers have been informed and will get to it as soon as they can. In the unlikely event that you've bumped into this error before and think that no-one is paying attention, please report the error and how to reproduce it on http://code.google.com/p/lesswrong/issues/list

If the error is localised you might still find awesome Less Wrong content in the Main article area or in the Discussion area."

Comment author: Vaniver 22 October 2016 05:55:19PM 0 points [-]

There was an issue with how the linkposts handled unicode URLs, fixed by wezm here and here.

Comment author: SquirrelInHell 01 October 2016 08:34:52PM -1 points [-]

I do not argue that my idea is sane; however, I think your critique doesn't do it justice. So let me briefly point out that:

measuring probabilities of world destruction is very hard; being able to measure them at the 1e-12 level seems very, very hard

It's enough to use upper bounds. If we have, e.g., an additional module that checks our AI source code for errors, and each such module decreases the probability of one of the bits being flipped, we can use our risk budget to calculate the minimum number of modules we need. Etc.
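A minimal sketch of that module-count calculation, under assumed (hypothetical) numbers: each independent checker module multiplies the residual error probability by a fixed factor, and we add modules until the residual risk fits within the budget. Fractions keep the arithmetic exact at these tiny scales, where floating point rounding could give an off-by-one answer.

```python
from fractions import Fraction

def min_modules(base_risk, reduction_per_module, budget):
    """Smallest n such that base_risk * reduction_per_module**n <= budget."""
    n = 0
    risk = base_risk
    while risk > budget:
        risk *= reduction_per_module
        n += 1
    return n

# Hypothetical numbers: base bit-flip risk 1e-6, each checker module
# cuts residual risk by 100x, and the allotted risk budget is 1e-12.
print(min_modules(Fraction(1, 10**6), Fraction(1, 100), Fraction(1, 10**12)))  # 3
```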

How should it decide what budget level to give itself?

It doesn't. You don't build any intelligent system without a risk budget. Initial budgets are distributed to humans, e.g. 10^-15 to each human alive in 2016.
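For a sense of scale, the total initial budget implied by this allocation is a one-line calculation (the 2016 population figure is my rough assumption, not from the comment):

```python
population_2016 = 7.4e9  # rough world population in 2016 (assumed figure)
per_person = 1e-15       # per-person budget, as proposed above
total_budget = population_2016 * per_person
print(total_budget)      # ≈ 7.4e-06 total initial existential-risk budget
```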

looks like simple utility maximization (go to the movie if the benefits outweigh the costs) gives the right answer

If utility is dominated by survival of humanity, then simple utility maximization is exactly the same as reducing total "existential risk emissions" in the sense I want to use them above.

Whether or not your utility is dominated by survival of humanity is an individual question.

the budget replenishes

Not at all. A risk budget is decreased by your best estimate of your total risk "emission", which is what fraction of the future multiverse (weighted by probability) you spoiled.

So I think budgets are the wrong way to think about this--they rely too heavily on subjective perceptions of risk, they encourage being too cautious (or too risky) instead of seeing tail risks as linear in probability, and they don't update on survival when they should.

Quite likely they are - but probably not for these reasons.

Comment author: Vaniver 03 October 2016 02:43:03AM *  0 points [-]

You don't build any intelligent system without a risk budget. Initial budgets are distributed to humans, e.g. 10^-15 to each human alive in 2016.

But where did that number come from? At some point, an intelligent system that was not handed a budget selects a budget for itself. Presumably the number is set according to some cost-benefit criterion, rather than chosen because it's three hands' worth of fingers on a log scale based on two hands' worth of fingers.

Whether or not your utility is dominated by survival of humanity is an individual question.

If it isn't, how do you expect the agent to actually stick to such a budget?

Not at all. A risk budget is decreased by your best estimate of your total risk "emission", which is what fraction of the future multiverse (weighted by probability) you spoiled.

I understood your proposal. My point is that it doesn't carve reality at the joints: if you play six-chambered Russian Roulette once, then one sixth of your future vanishes, but given that it came up empty, then you still have 100% of your future, because conditioning on the past in the branch where you survive eliminates the branch where you fail to survive.

What you're proposing is a rule where, if your budget starts off at 1, you only play it six times over your life. But if it makes sense to play it once, it might make sense to play it many times--playing it seven times, for example, still gives you a 28% chance of survival (assuming the chambers are randomized after every trigger pull).

Which suggests a better way to point out what I want to point out--you're subtracting probabilities when it makes sense to multiply probabilities. You're penalizing later risks as if they were the first risk to occur, which leads to double-counting, and means the system is vulnerable to redefinitions. If I view the seven pulls as independent events, it depletes my budget by 7/6, but if I treat them as one event, it depletes my budget by only 1-(5/6)^7, which is about 72%.
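The subtracting-versus-multiplying point above can be checked numerically. This sketch compares the budget rule's additive accounting (each pull charged as if it were the first) against the true probability of at least one of n independent pulls firing:

```python
def additive_depletion(p, n):
    # The budget rule: each pull is charged as if it were the first.
    return n * p

def true_risk(p, n):
    # Actual probability that at least one of n independent pulls fires.
    return 1 - (1 - p) ** n

p = 1 / 6  # one pull of six-chambered Russian roulette
print(additive_depletion(p, 7))  # ≈ 1.167 -- the budget is "overdrawn"
print(true_risk(p, 7))           # ≈ 0.721 -- the actual chance of death
```

The gap between the two numbers is exactly the double-counting described above, and it is what makes the accounting sensitive to whether the seven pulls are filed as one event or seven.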

Comment author: Vaniver 30 September 2016 06:50:13PM *  2 points [-]

CO2 emissions have two virtues: they are easy to measure, and their effects are roughly linear.* I don't see a similar thing being true for perceived risk, and I think conserved budgets are probably worse than overall preferences.

First: measuring probabilities of world destruction is very hard; being able to measure them at the 1e-12 level seems very, very hard, especially if most probabilities of world destruction are based around conflict. ("Will threatening my opponent here increase or decrease the probability of the world ending?")

Second: suppose we grant that the system has the ability to measure the probability of the world being destroyed, to arbitrary precision. How should it decide what budget level to give itself? (Suppose it's the original agent, instead of one handed a budget by its creator.)

To make it easier to think about, you can reformulate the question in terms of your own life. You can take actions that increase the chance that you die sooner rather than later, and gain some benefit from doing so. (Perhaps you decide to drive to a movie theater to see a new movie instead of something on Netflix.)

But now a few interesting things pop up. One, it looks like simple utility maximization (go to the movie if the benefits outweigh the costs) gives the right answer, and being more or less cautious than that suggests is a mistake (at least, of how the utility is measured).
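The movie decision can be written as a one-line expected-utility test; all the numbers below are illustrative assumptions, not figures from the comment:

```python
def should_go(benefit, extra_death_risk, value_of_remaining_life):
    # Take the action iff its benefit exceeds the expected cost
    # of the added mortality risk.
    return benefit > extra_death_risk * value_of_remaining_life

# Illustrative (made-up) numbers: the trip is worth $20 of enjoyment,
# adds a 1e-7 chance of dying, and remaining life is valued at $10M,
# so the expected cost of the trip is $1.
print(should_go(20, 1e-7, 10_000_000))  # True
```

Being more cautious than this rule amounts to valuing remaining life at more than the stated figure, which is a change to the utility function rather than a separate "budget" mechanism.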

Two, the budget replenishes. If I go to the theater on Friday and come back unharmed, then from the perspective of Thursday!me I took on some risk, but from the perspective of Saturday!me that risk turned out to not cost anything. That is, Thursday!me thinks I'm picking up 1e-7 in additional risk but Saturday!me knows that I survived, and still has '100%' of risk to allocate anew.

So I think budgets are the wrong way to think about this--they rely too heavily on subjective perceptions of risk, they encourage being too cautious (or too risky) instead of seeing tail risks as linear in probability, and they don't update on survival when they should.


*I don't mean that the overall effect of CO2 emissions is linear, which seems false, but instead that participants are small enough relative to overall CO2 production that they don't expect their choices to affect the overall CO2 price, and thus the price is linear for them individually.

In response to Linkposts now live!
Comment author: casebash 29 September 2016 08:42:31AM 2 points [-]

I am worried that this change may reduce self-posts even further. After all, they will now have to compete with a host of other low effort links. I think that there should be separate sections for links and self-posts.

Comment author: Vaniver 29 September 2016 01:36:18PM 3 points [-]

My impression is that activity begets more activity--if there were 0 posts today, having your self-post be the post for the day is more bothersome than if there were 10 posts today. But we can look at this in a month and see how it turned out.

In response to Linkposts now live!
Comment author: VipulNaik 28 September 2016 10:56:22PM *  4 points [-]

I'm unable to edit past posts of mine; it seems that this broke very recently and I'm wondering if it's related to the changes you made.

Specifically, when I click the Submit or the "Save and Continue" buttons after making an edit, it goes to lesswrong.com/submit with a blank screen. When I look at the HTTP error code it says it's a 404.

I also checked the post after that to see if the edit still went through, and it didn't. In other words, my edit did not get saved.

Do you know what's going on? There were a few corrections/expansions on past posts that I need to push live soon.

Comment author: Vaniver 29 September 2016 03:04:09AM 2 points [-]

I think this is probably due to the change; in my tests it was limited to posts in Main, but I haven't tested editing old posts in Discussion. If you've got experiences to report, please post them here.

In response to Linkposts now live!
Comment author: ike 28 September 2016 03:54:41PM 3 points [-]

In feedly, I need to click once to get to the post and a second time to get to the link. Can you include a link within the body of the RSS so I can click to it directly?

In response to comment by ike on Linkposts now live!
Comment author: Vaniver 28 September 2016 05:40:15PM 2 points [-]

Made a github issue.

In response to Linkposts now live!
Comment author: Houshalter 28 September 2016 04:24:57PM 7 points [-]

This is really awesome and could change the fate of lesswrong. I really think this will bring people back (at least, more than any other easy-to-implement change). I personally expect to spend more time here now, at least.

One thing to take note of is that lesswrong, by default, sorts by /new. As the volume of posts increases, it may be necessary to change the default sort to /hot or /top/?t=week. Especially if you want it to be presentable to newcomers, or even old-timers coming back to the site, you want them to see the best links first.

Comment author: Vaniver 28 September 2016 05:36:45PM 2 points [-]

Agreed that it makes sense to change the default. I think it also shouldn't be too hard to have an 'unread' feed, which works off whether you've clicked through before or the post has attracted enough new comments since you last saw it.

In response to Linkposts now live!
Comment author: WhySpace 28 September 2016 05:06:25PM 9 points [-]

Awesome! This strikes me as a very good thing, especially with your suggested social norms. I have 3 additional suggestions, though:

  1. Add a social norm where commenters make short summaries, or quote a couple of sentences of new info, without the fluff. The title of the link serves much the same purpose, and gives readers enough info to decide whether or not to click through. This is standard practice on the more intellectual subreddits, since readers there already have the background context and knowledge that 90% of the article is spent explaining.

  2. Add a social norm where the best comments get linked to. I enjoy Yvain's SSC posts, and the comments section often contains some gems, but digging through all of them to find the gems is tedious. I intend to quote or rephrase gems when I find them, and link to them in comments here.

  3. Maybe we should have subreddits on LW. I'm not sure about this one. Tags serve some of the same purposes, so perhaps what would be ideal would be to subscribe and unsubscribe from tags you're interested in. However, just copying the Reddit code for subreddits would be simpler. It would divide up the community though, so probably not desirable while we're still small.

Comment author: Vaniver 28 September 2016 05:35:18PM 1 point [-]

Add a social norm where commenters make short summaries, or quote a couple sentences of new info, without the fluff.

Posting links should be low-friction, and so it should be fine to post links without comment. That said, writing summaries in comments is very useful, and you should feel willing to do that even on links you didn't post.

Maybe we should have subreddits on LW. I'm not sure about this one. Tags serve some of the same purposes, so perhaps what would be ideal would be to subscribe and unsubscribe from tags you're interested in. However, just copying the Reddit code for subreddits would be simpler. It would divide up the community though, so probably not desirable while we're still small.

Different subreddits seem best when used to separate norms / rules of discussion rather than topics. (Topics are often overlapping, and thus best dealt with using tags.) I think something like 'cold' and 'warm' subreddits, where the first has a more academic style and the second has a more friendly / improvisational style, might be sensible, but this remains to be seen.

Linkposts now live!

26 Vaniver 28 September 2016 03:13PM

 

You can now submit links to LW! As the rationality community has grown up, more and more content has moved off LW to other places, and so rather than trying to generate more content here we'll instead try to collect more content here. My hope is that Less Wrong becomes something like "the Rationalist RSS," where people can discover what's new and interesting without necessarily being plugged in to the various diaspora communities.

Some general norms, subject to change:

 

  1. It's okay to link someone else's work, unless they specifically ask you not to. It's also okay to link your own work; if you want to get LW karma for things you make off-site, drop a link here as soon as you publish it.
  2. It's okay to link old stuff, but let's try to keep it to less than 5 old posts a day. The first link that I made is to Yudkowsky's Guide to Writing Intelligent Characters.
  3. It's okay to link to something that you think rationalists will be interested in, even if it's not directly related to rationality. If it's political, think long and hard before deciding to submit that link.
  4. It's not okay to post duplicates.

As before, everything will go into discussion. Tag your links, please. As we see what sort of things people are linking, we'll figure out how we need to divide things up, be it separate subreddits or using tags to promote or demote the attention level of links and posts.

(Thanks to James Lamine for doing the coding, and to Trike (and myself) for supporting the work.)

[Link] Yudkowsky's Guide to Writing Intelligent Characters

4 Vaniver 28 September 2016 02:36PM
