
Lesswrong Potential Changes

17 Elo 19 March 2016 12:24PM

I have compiled many suggestions about the future of lesswrong into a document here:

https://docs.google.com/document/d/1hH9mBkpg2g1rJc3E3YV5Qk-b-QeT2hHZSzgbH9dvQNE/edit?usp=sharing

It's long and best formatted there.

In case you hate leaving this website, here's the summary:

Summary

There are 3 main areas that are going to change.

  1. Technical/Direct Site Changes

    1. New home page

    2. New forum style with subdivisions

      1. New sub for “friends of lesswrong” (rationality in the diaspora)

    3. New tagging system

    4. New karma system

    5. Better RSS

  2. Social and cultural changes

    1. Positive culture; a good place to be.

    2. Welcoming process

    3. Pillars of good behaviour (the ones we want to encourage)

    4. Demonstrate by example

    5. 3 levels of social strategies (new users, advanced users, and long-timers)

  3. Content (emphasis on producing more rationality material)

    1. For up-and-coming people to write more

      1. For the community to improve their contributions, creating a stronger collection of rationality material.

    2. For known existing writers

      1. To encourage them to keep contributing

      2. To encourage them to work together with each other to contribute

Less Wrong Potential Changes

Summary

Why change LW?

How will we know we have done well (the feel of things)

How will we know we have done well (KPI - technical)

Technical/Direct Site Changes

Homepage

Subs

Tagging

Karma system

Moderation

Users

RSS magic

Not breaking things

Funding support

Logistical changes

Other

Done (or Don’t do it):

Social/cultural

General initiatives

Welcoming initiatives

Initiatives for moderates

Initiatives for long-time users

Rationality Content

Target: a good 3 times a week for a year.

Approach formerly prominent writers

Explicitly invite

Place to talk with other rationalists

Pillars of purpose (with certain sub-reddits for different ideas)

Encourage a declaration of intent to post

Specific posts

Other notes


Why change LW?

 

Lesswrong has gone through great times of growth and seen a lot of people share a lot of positive and brilliant ideas.  It was hailed as a launchpad for MIRI, and in that purpose it was a success.  At this point it's no longer needed as a launchpad.  While in the process of becoming a launchpad it also became a nice garden to hang out in on the internet: a place for reasonably intelligent people to discuss reasonable ideas and challenge each other to update their beliefs in light of new evidence.  Since it retired from its “launchpad” purpose, various people have felt the garden has wilted and decayed and weeds have grown over.  In light of this, and having enough personal motivation, I have decided that I really like the garden and I can bring it back.  I just need a little help, a little magic, and some little changes.  If possible I hope to make the garden what we all want it to be: a great place for amazing ideas and life-changing discussions to happen.


How will we know we have done well (the feel of things)

 

Success is going to have to be estimated by changes to the feel of the site.  Unfortunately that is hard to do.  As we know, outrage generates more volume than positive growth, which is going to work against us when we try to quantify progress with measurable metrics.  Even assuming the technical changes are made, there is still going to be progress needed on the task of socially improving things.  There are many “seasoned active users” - as well as “seasoned lurkers” - who have strong opinions on the state of lesswrong and its discussion.  Some would say that we risk dying of niceness; others would say that the weeds that need pulling are the rudeness.


Honestly, we risk over-policing and under-policing at the same time.  There will be some not-niceness that goes unchecked and discourages the growth of future posters (potentially our future bloggers), and at the same time some niceness that motivates trolling behaviour and fails to weed out bad content, which would leave us as fluffy as the next forum.  There is no easy solution to tempering both sides of this challenge.  I welcome all suggestions (it looks like a karma system is our best bet).


In the meantime I believe we should err on the side of general niceness and steelmanning.  I hope to enlist some members as, essentially, coaches in healthy forum-growth behaviour: good steelmanning, positive encouragement, critical feedback alongside that encouragement, a welcoming committee, and an environment of content improvement and growth.


At the same time, I want everyone to keep up the heavy debate; I also want to see the best versions of ourselves coming out onto the publishing pages (and sometimes that means second-draft versions).


So how will we know?  By reducing the ugh fields around participating in LW, by seeing more content that enough people care about, and by making lesswrong awesome.


The full document is just over 11 pages long.  Please go read it; this is a chance to comment on potential changes before they happen.


Meta: This post took a very long time to pull together.  I read over 1000 comments and considered the ideas contained in them.  I don't have an accurate account of how long this took to write, but I would estimate over 65 hours of work has gone into putting it together.  It's been literally weeks in the making; I really can't stress how long I have been trying to put this together.

If you want to help, please speak up so we can help you help us.  If you want to complain; keep it to yourself.

Thanks to the slack for keeping up with my progress, and to Vanvier, Mack, Leif, matt and others for reviewing this document.

As usual - My table of contents

An extended class of utility functions

3 Stuart_Armstrong 17 June 2014 04:36PM

This is a technical result that I wanted to check before writing up a major piece on value loading.

The purpose of a utility function is to give an agent criteria with which to make a decision. If two utility functions always give the same decisions, they're generally considered the same utility function. So, for instance, the utility function u always gives the same decisions as u+C for any constant C, or Du for any positive constant D. Thus we can say that utility functions are equivalent if they are related by a positive affine transformation.
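
As a quick illustration (not from the original post - the actions and dollar amounts below are invented for this sketch), here is a minimal Python check that a positive affine transformation Du + C picks the same action as u:

```python
def expected(utility, outcomes):
    """Expected utility over (probability, outcome) pairs."""
    return sum(p * utility(x) for p, x in outcomes)

def u(x):
    return x                 # the original utility: money itself

def v(x):
    return 3 * u(x) + 7      # a positive affine transform of u (D = 3, C = 7)

actions = {
    "a": [(0.5, 0), (0.5, 100)],   # a 50-50 bet on $0 or $100
    "b": [(1.0, 49)],              # a sure $49
}

best_u = max(actions, key=lambda act: expected(u, actions[act]))
best_v = max(actions, key=lambda act: expected(v, actions[act]))
assert best_u == best_v            # the same action is chosen under u and under Du + C
```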

For specific utility functions, and specific agents, the class of functions that give the same decisions is quite a bit larger. For instance, imagine that v is a utility function with the property v("any universe which contains humans") = constant. Then any human who attempts to follow u could equivalently follow u+v (neglecting acausal trade) - it makes no difference. In general, if no action the agent could ever take would change the value of v, then u and u+v give the same decisions.

More subtly, if the agent can change v but cannot change the expectation of v, then u and u+v still give the same decisions. This is because for any actions a and b the agent could take:

E(u+v | a) = E(u | a) + E(v | a) = E(u | a) + E(v | b).

Hence E(u+v | a) > E(u+v | b) if and only if E(u | a) > E(u | b), and so the decision hasn't changed.

Note that E(v | a) need not be constant for all actions: simply that for every pair of actions a and b that the agent could take at a particular decision point, E(v | a) = E(v | b). It's perfectly possible for the expectation of v to be different at different moments, or conditional on different decisions made at different times.
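
To make this concrete, here is a minimal sketch with invented outcomes (the "weather" component stands in for something the agent cannot influence in expectation): v varies with the outcome, but its expectation is the same under every available action, so u and u + v pick the same action.

```python
def expected(f, outcomes):
    """Expectation of f over (probability, outcome) pairs."""
    return sum(p * f(x) for p, x in outcomes)

def u(x):
    return x["money"]        # what the agent is trying to maximise

def v(x):
    return x["weather"]      # varies with the outcome, not with the agent's choice in expectation

actions = {
    "a": [(0.5, {"money": 0,   "weather": 0}),
          (0.5, {"money": 100, "weather": 20})],
    "b": [(0.5, {"money": 49,  "weather": 20}),
          (0.5, {"money": 49,  "weather": 0})],
}

# E(v | a) = E(v | b) = 10, so adding v cannot change the comparison between a and b.
assert expected(v, actions["a"]) == expected(v, actions["b"]) == 10

best_u  = max(actions, key=lambda act: expected(u, actions[act]))
best_uv = max(actions, key=lambda act: expected(lambda x: u(x) + v(x), actions[act]))
assert best_u == best_uv     # u and u + v give the same decision
```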

Finally, as long as v obeys the above properties, there is no reason for it to be a utility function in the classical sense - it could be constructed any way we want.

 

An example: suffer not from probability, nor benefit from it

The preceding seems rather abstract, but here is the motivating example. It's a correction term T that adds or subtracts utility as external evidence comes in (it's important that the evidence is external - the agent gets no correction from knowing what its own actions are/were). If the AI knows evidence e, and new (external) evidence f comes in, then its utility gets adjusted by T(e,f), which is defined as

T(e,f) = E(u | e) - E(u | e, f)

In other words, the agent's utility gets adjusted by the difference between the old expected utility and the new - and hence the agent's expected utility is unchanged by new external evidence.

Consider for instance an agent with a utility u linear in money. It must choose between a bet that goes 50-50 on $0 (heads) or $100 (tails), versus a sure $49. It correctly chooses the bet, having an expected utility of $50 - in other words, E(u | bet)=$50. But now imagine that the coin comes out heads. The utility u plunges to $0 (in other words, E(u | bet, heads)=$0). But the correction term cancels that out:

u(bet, heads) + T(bet, heads) = $0 + E(u | bet) - E(u | bet, heads) = $0 + $50 - $0 = $50.

A similar effect leaves the utility unchanged if the coin is tails, cancelling the increase. In other words, adding the T correction term removes the impact of stochastic effects on utility.
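
Here is a minimal sketch of the example above, using the same dollar figures. It checks that u + T comes out at $50 however the coin lands, that T has expectation zero before the flip, and that the choice of the bet over the sure $49 is therefore unchanged (anticipating the argument below):

```python
bet  = {"heads": 0.0, "tails": 100.0}   # the 50-50 bet
sure = 49.0                             # the alternative: a sure $49

e_u_bet = 0.5 * bet["heads"] + 0.5 * bet["tails"]    # E(u | bet) = $50

for flip, payoff in bet.items():
    t = e_u_bet - payoff                # T(bet, flip) = E(u | bet) - E(u | bet, flip)
    assert payoff + t == e_u_bet        # u + T is $50 whichever way the coin lands

# Before the flip, T has expectation zero (conservation of expected evidence),
# so the agent still compares the bet at $50 against the sure $49 and takes the bet.
expected_t = sum(0.5 * (e_u_bet - payoff) for payoff in bet.values())
assert expected_t == 0.0
assert e_u_bet > sure
```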

But the agent will still make the same decisions. This is because, before seeing evidence f, it cannot predict the impact of f on E(u). In other words, summing over all possible evidence f:

E(u | e) = Σ p(f)E(u | e, f),

which is another way of phrasing "conservation of expected evidence". This implies that

E(T(e,-)) = Σ p(f) T(e,f)

= Σ p(f) (E(u | e) - E(u | e, f))

= E(u | e) - Σ p(f)E(u | e, f)

= 0,

and hence that adding the T term does not change the agent's decisions. All the various corrections add on to the utility as the agent continues making decisions, but none of them make the agent change what it does.

The relevance of this will be explained in a subsequent post (unless someone finds an error here).

Anyone want a LW Enhancement Suite?

15 MBlume 15 February 2012 08:48AM

Reddit Enhancement Suite

If anyone cares, I could probably port this to work on LW without too much trouble. Optimistically it'd just involve opening up the source and replacing reddit.com with lesswrong.com. More realistically, there'd probably be a lot of baked-in assumptions about DOM structure that'd need to be updated to have the UI enhancements make sense.
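
For what it's worth, the "optimistic" version might look something like the Python sketch below. The checkout directory name and the assumption that the domain only appears in .js files are guesses on my part, not taken from the actual RES layout:

```python
import pathlib

# Walk a local checkout of the RES source and swap the hard-coded domain.
# (Hypothetical directory name; point it at wherever the source actually lives.)
for path in pathlib.Path("reddit-enhancement-suite").rglob("*.js"):
    text = path.read_text(encoding="utf-8")
    if "reddit.com" in text:
        path.write_text(text.replace("reddit.com", "lesswrong.com"),
                        encoding="utf-8")
```

The realistic version, as noted above, would still mean going through the baked-in DOM-structure assumptions by hand.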

Anyway, this is mostly just a straw poll to see how many others would be interested in such a thing.