
2017 LessWrong Survey

18 ingres 13 September 2017 06:26AM

The 2017 LessWrong Survey is here! This year we're interested in community response to the LessWrong 2.0 initiative. I've also gone through and fixed as many of the bugs reported on the last survey as I could, and reintroduced items that were missing from the 2016 edition. Furthermore, new items have been introduced in multiple sections, and some cut in others to make room. You can now export your survey results after finishing by choosing the 'print my results' option on the page displayed after submission. The survey will run from today until the 15th of October.

You can take the survey below; thanks for your time. (It's back in single-page format, so please allow a few seconds for it to load):

Click here to take the survey

Requesting Questions For A 2017 LessWrong Survey

6 ingres 09 April 2017 12:48AM

It's been twelve months since the last LessWrong Survey, which means we're due for a new one. But before I can put out a new survey in earnest, I feel obligated to solicit questions from community members and check in on any ideas that might be floating around for what we should ask.

The basic format of the thread isn't too complex: just pitch questions. For the best chance of inclusion, however, it's best to include:

  • A short cost/benefit analysis of including the question. Keep in mind that some questions are too invasive or embarrassing to be reasonably included. Other questions might leak too many bits of identifying information. There is limited survey space, and some items might be too marginal to include at the cost of others.
  • An example of a useful analysis that could be done with the question(s), especially an interesting analysis in concert with other questions. E.g., it's best to start with a larger question like "how does parental religious denomination affect the cohort's current religion?" and then translate that into concrete questions about religion.
  • Some idea of how the question can be asked without using write-ins. Unfortunately, write-in questions add massive amounts of man-hours to the total analysis time for a survey and make it harder to get out a final product when all is said and done.

The last survey included 148 questions; some sections will not be repeated in the 2017 survey, which gives us a rough question budget. I would prefer not to go over 150 questions, and if at all possible to come in at many fewer than that. The removed sections are:

  • The Basilisk section on the last survey provided adequate information on the phenomenon it was surveying, and I do not currently plan to include it again on the 2017 survey. This frees up six questions.
  • The LessWrong Feedback portion of the last survey also provided adequate information, and I would prefer to replace it on the 2017 survey with a section measuring the site's recovery, if any. This frees up 19 questions.

I also plan to significantly reform multiple portions of the survey. I'm particularly interested in making changes to:

  • The politics section. In particular I would like to update the questions about feelings on political issues with new entries and overhaul some of the options on various questions.
  • I handled the calibration section poorly last year, and would like to replace it this year with an easily scored set of questions. To be more specific, a good calibration section should:
    • Good calibration questions should be Fermi-estimable with no more than a standard 5th grade education. They should not rely on particular hidden knowledge or overly specific information. E.g., "Who wrote the Foundation novels?" is a terrible calibration question, while "What is the height of the Eiffel Tower in meters, within a multiple of 1.5?" is decent.
    • Good calibration questions should have a measurable distance component, so that even if an answer is wrong (as the vast majority of answers will be) it can still be scored.
    • A measure of distance should get proportionately smaller the closer an answer is to being correct and proportionately larger the further it is from being correct.
    • It should be easily (or at least sanely) calculable by programmatic methods (see the sketch just after this list).
  • The probabilities section is probably due for some revision; in previous years I haven't even answered it because I found the wording of some questions too confusing to even consider.
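To make the scoring criteria concrete, here is a minimal sketch of one way a calibration answer could be scored. The log-ratio distance and the Eiffel Tower figure are just my illustration, not a settled design:

```python
import math

def calibration_score(estimate: float, truth: float) -> float:
    """Distance between an estimate and the true value, as an absolute log-ratio.

    Being off by a factor of 2 costs the same whether the guess is too high
    or too low, the penalty shrinks smoothly toward 0 as the estimate
    approaches the truth, and it grows the further away the estimate is.
    """
    if estimate <= 0 or truth <= 0:
        raise ValueError("log-ratio scoring needs positive quantities")
    return abs(math.log(estimate / truth))

# Example: the Eiffel Tower is roughly 330 m tall. A guess of 250 m scores
# abs(log(250/330)) ~ 0.28, which is inside a factor of 1.5 (log(1.5) ~ 0.41).
print(calibration_score(250, 330))
```

Averaging these distances across a respondent's answers, and comparing them with the respondent's stated confidence, would then give a single, sanely computable calibration score per person.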

So for the best chance of inclusion, it would be best to keep these proposed reforms in mind when making your suggestions.

(Note: If you have suggestions on questions to eliminate, I'd be glad to hear those too.)

[Link] arguman.org, an argument analysis platform

1 dyokomizo 19 October 2015 03:46PM

I recently found out about arguman.org. It's an online tool to dissect arguments and structure agreement and refutation.

It seems like something that's been discussed on LW a few times in the past.

Is my theory on why censorship is wrong correct?

-24 hoofwall 12 April 2015 04:03AM

So, I have next to no academic knowledge. I have literally not read or perhaps even picked up any book since eighth grade, which is where my formal education ended, and I turn 20 this year, but I am sitting on some theories pertaining to my understanding of rationality, and procrastinating about expressing them has gotten me here. I'd like to just propose my theory on why censorship is wrong, here. Please tell me whether or not you agree or disagree, and feel free to express anything else you feel you would like to in this thread. I miss bona fide argument, but this community seems way less hostile than the one community I was involved in elsewhere....

 

Also, I feel I should affirm again that my academic knowledge is almost entirely just not there... I know the LessWrong community has a ton of resources they turn to and indulge in, which is more or less a bible of rationality by which you all abide, but I have read or heard of none of it. I don't mean to offend you with my willful ignorance. Sorry. Also, sorry for possibly incorporating similes and stuff into my expression... I know many out there are on the autistic spectrum and can't comprehend it so I'll try to stop doing that unless I'm making a point.

 

Okay, so, since the following has been bothering me a lot since I joined this site yesterday and even made me think against titling this what I want, consider the written and spoken word. Humans literally decided as a species to sequence scribbles and mouth noises in an entirely arbitrary way, ascribe emotion to their arbitrary scribbles and mouth noises, and then claim, as a species, that very specific arbitrary scribbles and mouth noises are inherent evil and not to be expressed by any human. Isn't that fucking retarded?

 

I know what you may be thinking. You might be thinking, "wow, this hoofwall character just fucking wrote a fucking arbitrary scribble that my species has arbitrarily claimed to be inherent evil without first formally affirming, absolutely, that the arbitrary scribble he uttered could never be inherent evil and that writing it could never in itself do any harm. This dude obviously has no interest in successfully defending himself in argument". But fuck that. This is not the same as murdering a human and trying to conceive an excuse defending the act later. This is not the same as affecting the world in any way that has been established to be detrimental and then trying to defend the act later. This is literally sequencing the very letters of the very language the human has decided they are okay with and will use to express themselves in such a way that it reminds the indoctrinated and conditioned human of emotion they irrationally ascribe to the sequence of letters I wrote. This is possibly the purest argument conceivable for demonstrating superfluity in the human world, and the human psyche. There could never be an inherent correlation between one's emotionality and an arbitrary sequence of mouth noises or scribbles or whatever have you that exists entirely independent of the human. If one were to erase an arbitrary scribble that the human irrationally ascribes emotion to, the human would still have the capacity to feel the emotion the arbitrary scribble roused within them. The scribble is not literally the embodiment of emotionality. This is why censorship is retarded.

 

Mind you, I do not discriminate against literal retards, or blacks, or gays, or anything. I do, however, incorporate the words "retard", "nigger", and "faggot" into my vocabulary literally exclusively because it triggers humans and demonstrates the fact that the validity of one's argument and one's ability to defend themselves in argument does not matter to the human. I have at times proposed my entire argument, actually going so far to quantify the breadth of this universe as I perceive it, the human existence, emotionality, and right and wrong before even uttering a fuckdamn swear, but it didn't matter. Humans think plugging their ears and chanting a mantra of "lalala" somehow gives themselves a valid argument for their bullshit, but whatever. Affirming how irrational the human is is a waste of time. There are other forms of censorship I shout address, as well, but I suppose not before proposing what I perceive the breadth of everything less fundamental than the human to be.

 

It's probably very easy to deduce the following, but nothing can be proven to exist. Also, please do bear with what are probably argument-by-assertion fallacies on my part for the moment... I plan on defending myself before this post ends.

 

Any opinion any human conceives is just a consequence of their own perception, the likes of which appears to be a consequence of their physical form, the likes of which is a consequence of properties in this universe as we perceive it. We cannot prove our universe's existence beyond what we have access to in our universe as we perceive it, therefore we cannot prove that we exist. We can't prove that our understanding of existence is true existence; we can only prove, within our universe, that certain things appear to be in concurrence with the laws of this universe as we perceive it. We can propose for example that an apple we can see occupies space in this universe, but we can't prove that our universe actually exists beyond our understanding of what existence is. We can't go more fundamental than what composes our universe... We can't go up if we are mutually exclusive with the very idea of "up", or are an inferior consequence of "up" which is superior to us.

 

I really don't remember what else I would say after this but, I guess, without divulging how much I obsess about breaking emotionality into a science, I believe nudity can't be inherent evil either because it is literally the cause of us, the human, and we are necessary to be able to perceive good and evil in the first place. If humans were not extant to dominate the world and force it to tend to the end they wanted it to, anything living would just live, breed, and die, and nothing would be inherently "good" or "evil". It would just be. Until something evolved, if it ever would, to gain the capacity to force distinctions between "good" and "evil", there would be no such constructs. We have no reason to believe there would be. I don't know how I can affirm that further. If nudity - and exclusively human nudity, mind you - were to be considered inherent evil, that would mean that the human is inherent evil, that everything the human perceives is inherent evil, and that the human's understanding of "rationality" is just a poor, grossly-misled attempt at coping with the evil properties that they retain and is inherently worthless. Which I actually believe, but an opinion that contrary is literally satanism and fuck me if I think I'm going to be expounding all of that here. But fundamentally, human nudity cannot be inherent evil if the human's opinions are to be considered worth anything at all, and if you want to go less fundamental than that and approach it from a "but nudity makes me feel bad" standpoint, you can simply warp your perception of the world to force seeing or otherwise being reminded of things to be correlated to certain emotion within you. I'm autistic, it seems, so I obsess about breaking emotionality down to a science every day, but this isn't the post to be talking about shit like that. In any case, you can't prove that the act of you seeing another human naked is literal evil, so fuck you and your worthless opinions.

 

Yeah... I don't know what else I could say here, or if censorship exists in forms other than preventing humans from being exposed to human nudity, or human-conceived words. I should probably assert as well that I believe the human's thinking that the inherent evil of human nudity somehow becomes okay to see when a human reaches the age of 18, or 21, or 16, or 12 depending on which subset of human you ask is retarded. Also, by "retarded" I do not literally mean "retarded". I use the word as a trigger word that's meant to embody and convey bad emotion the human decides they want to feel when they're exposed to it. This entire post is dripping with the grossest misanthropy but I'm interested in seeing what the responses to this are... By the way, if you just downvote me without expressing to me what you think I'm doing wrong, as far as I can tell you are just satisfied with vaguely masturbating your dissenting opinion you care not for even defining in my direction, so, whatever makes you sleep at night, if you do that... but you're wrong though, and I would argue that to the death.

Politics Discussion Thread February 2013

1 OrphanWilde 06 February 2013 09:33PM

 

  1. Top-level comments should introduce arguments; responses should be responses to those arguments. 
  2. Upvote and downvote based on whether or not you find an argument convincing in the context in which it was raised. This means judging whether it's a good argument against the argument it is responding to, not whether or not there's a good/obvious counterargument to it; if you have a good counterargument, raise it. If it's a convincing argument, and the counterargument is also convincing, upvote both. If both arguments are unconvincing, downvote both.
  3. A single argument per comment would be ideal; as MixedNuts points out here, it's otherwise hard to distinguish between one good and one bad argument, which makes the upvoting/downvoting difficult to evaluate.
  4. In general try to avoid color politics; try to discuss political issues, rather than political parties, wherever possible.

As Multiheaded added, "Personal is Political" stuff like gender relations, etc. may also belong here.

 

NKCDT: The Big Bang Theory

-12 [deleted] 10 November 2012 01:15PM

Hi, welcome to the first Non-Karmic Casual Discussion Thread.

This is a place for [purpose of thread goes here].

In order to create a casual, non-karmic environment for everyone, we ask that you:

-Do not upvote or downvote any zero-karma posts

-If you see a post with positive karma, downvote it towards zero, even if it’s a good post

-If you see a post with negative karma, upvote it towards zero, even if it’s a weak post

-Please be polite and respectful to other users

-Have fun!

 

 

This is my first attempt at starting a casual conversation on LW where people don't have to worry about winning or losing points, and can just relax and have social fun together.

 

So, Big Bang Theory. That series got me wondering. It seems to be about "geeks", and not the basement-dwelling variety either; they're highly successful and accomplished professionals, each in their own field. One of them has been an astronaut, even. And yet, everything they ever accomplish amounts to absolutely nothing in terms of social recognition or even in terms of personal happiness. And the thing is, it doesn't even get better for their "normal" counterparts, who are just as miserable and petty.

 

Consider, then: how would being rationalists affect the characters on this show? The writing of the show relies a lot on laughing at people rather than with them; would rationalist characters subvert that? And how would that rationalist outlook express itself given their personalities? (After all, notice how amazingly different from each other Yudkowsky, Hanson, and Alicorn are, just to name a few; they emphasize rather different things, and take different approaches to both truth-testing and problem-solving.)

Note: this discussion does not need to be about rationalism. It can be a casual, normal discussion about the series. Relax and enjoy yourselves.

 

But the reason I brought up that series is that its characters are excellent examples of high intelligence hampered by immense irrationality. The apex of this is represented by Dr. Sheldon Cooper, who is, essentially, a complete fundamentalist over every single thing in his life; he applies this attitude to everything, right down to people's favorite flavor of pudding: Raj is "axiomatically wrong" to prefer tapioca, because the best pudding is chocolate. Period. This attitude makes him a far, far worse scientist than he thinks, as he refuses to even consider any criticism of his methods or results. 

 

Let's talk about politics

-14 WingedViper 19 September 2012 05:25PM

Hello fellow LWs,

As I have read repeatedly on LW (http://lesswrong.com/lw/gw/politics_is_the_mindkiller/), you don't like discussing politics because it produces biased thinking/arguing, which I agree is true for the general populace. What I find curious is that you don't seem to even try it here, where people would be very likely to keep their identities small (www.paulgraham.com/identity.html). It should be the perfect (or close enough) environment to talk politics, because you can have reasonable discussions here.

I do understand that you don't like to bring politics into discussions about rationality, but I don't understand why there shouldn't be dedicated political threads here. (Maybe you could flag them?)

all the best

Viper

 

Preventing discussion from being watered down by an "endless September" user influx.

14 Epiphany 02 September 2012 03:46AM

  In the thread "LessWrong could grow a lot, but we're doing it wrong.", I explained why LessWrong has the potential to grow quite a lot faster in my opinion, and volunteered to help LessWrong grow.  Of course, a lot of people were concerned about the fact that a large quantity of new members will not directly translate to higher quality contributions or beneficial learning and social experiences in discussions, so I realized it would be better to help protect LessWrong first.  I do not assume that fast growth has to cause a lowering of standards.  I think fast growth can be good if the right people are joining and all goes well (specifics herein).  However, if LessWrong grows carelessly, we could be inviting an "Endless September", a term used to describe a never-ending deluge of newbies that "degraded standards of discourse and behavior on Usenet and the wider Internet" (named after a phenomenon caused by an influx of college freshmen).  My perspective on this is that it could happen at any time, regardless of whether any of us does anything.  Why do I think that?  LessWrong is growing very fast and could snowball on its own.  I've seen that happen; I saw it ruin a forum.  That site wasn't even doing anything special to advertise the forum that I am aware of.  The forum was just popular and growth went exponential.  For this reason, I asked for a complete list of LessWrong registration dates in order to make a growth chart.  I received it on 08-23-2012.  The data shows that LessWrong has 13,727 total users, not including spammers and accounts that were deleted.  From these, I have created a LessWrong growth bar graph:

 

 

 

  Each bar represents a one month long total of registration dates (the last bar is a little short, being that it only goes up until the 23rd).  The number of pixels in each bar is equal to the number of registrations each month.  The first (leftmost) bar that hits the top of the picture (it actually goes waaaay off the page) mostly represents the transfer of over 2000 accounts from Overcoming Bias.  The right bar that goes off the page is so far unexplained for me - 921 users joined in September 2011, more than three times the number in the months before and after it.  If you happen to know what caused that, I would be very interested in finding out. (No, September 2010 does not stand out, if you were wondering the same thing).  If anyone wants to do different kinds of analysis, I can generate more numbers and graphs fairly easily.

  As you can see, LessWrong has experienced pretty rapid growth.

  Growth is in a downward trend at the moment, but as you can see from the wild spikes everywhere, this could change at any time.  In addition to LessWrong growing on its own, other events that could trigger an "endless September" effect are:

  LessWrong could be linked to by somebody really big (see: Slashdot effect on Wikipedia).

  LessWrong could end up on the news after somebody does something newsworthy, or because a reporter discovers LessWrong culture and finds it interesting or weird.

  (A more detailed explanation is located here.)

  For these reasons, I feel it is a good idea to begin constructing endless September protection, so I have volunteered some of my professional web services to get it done.  This has to be done carefully because if it is not done right, various unwanted things may happen.  I am asking for any ideas, or links to ideas, that you think are good, and am laying out my solutions and the pitfalls I have planned for below in order to seek your critiques and suggestions.

 

Cliff Notes Version:

  I really thought this out quite a bit because I think it's going to be tricky and because it's important.  So I wrote a cliff notes version of the solution ideas below, with pros and cons for each, which is about a tenth the size.

 

The most difficult challenge and my solution:

  People want the site to be enriching for those who want to learn better reasoning but haven't gotten very far yet.

  People also want an environment where they can get a good challenge, where they are encouraged to grow, where they can get exposed to new ideas and viewpoints, and where they can get useful, constructive criticism. 

  The problem is that a basic desire all humans seem to share is a desire to avoid boredom.  There is possibly a survival reason for this:  There is no way to know everything, but missing even one piece of information can spell disaster.  This may be why the brain appears to have evolved built-in motivators to prod you to learn constantly.  From the mild ecstasy of flow state (see Flow: The Psychology of Optimal Experience) to tedium, we are constantly being punished and rewarded based on whether we're receiving the optimal challenge for our level of ability.

  This means that those who are here for a challenge aren't going to spend their time being teachers for everybody who wants to learn.  Not everyone has a teacher's personality and skill set to begin with, and some people who teach do it as writers, explaining to many thousands, rather than by explaining it one-to-one.  If everyone feels expected to teach by hand-holding, most will be punished by their brains for not learning more themselves, and will be forced to seek a new learning environment.  If beginners are locked out, we'll fail at spreading rationality.  The ideal is to create an environment where everyone gets to experience flow, and no one has to sacrifice optimal challenge.

  To make this challenge a bit more complicated, American culture (yes, a majority of the visits, 51.12%, are coming from the USA - I have access to the Google Analytics) can get pretty touchy about elitism and anti-intellectualism.  Even though the spirit of LessWrong - wanting to promote rational thought - is not elitist but actually inherently opposite to that (to increase good decision making in the world "spreads the wealth" rather than hoarding it or demanding privileges for being capable of good decisions), there is a risk that people will see this place as elitist.  And even though self-improvement is inherently non-pretentious (by choosing to do self-improvement, you're admitting that you've got flaws), undoubtedly there will be a large number of people who might really benefit from learning here but instead insta-judge the place as "pretentious".  Interpreting everything intellectual as pretentious and elitist is an unfortunate habit in our culture.  I think, with the right wording on the most prominent pages (about us, register, home page, etc.) LessWrong might be presented as a unique non-elitist, non-pretentious place.

  For these reasons, I am suggesting multiple discussion areas that are separated by difficulty levels.  Presenting them as "Easy and Hard" will do three things:

  1. Serve as a reminder to those who attend that it's a place of learning where the objective is to get an optimal challenge and improve as far as possible.  This would help keep it from coming across as pretentious or elitist.

  2. Create a learning environment that's open to all levels, rather than a closed, elitist environment or one that's too daunting.  The LessWrong discussion area is a bit daunting to users, so it might be really desirable for people to have an "easy" discussion area where they can learn in an environment that is not intimidating.

  3. Give us an opportunity to experiment with approaches that help willing people learn faster.

 

Endless September protection should be designed to avoid causing these side-effects:

 

  Creating an imbalance in the proportion of thick-skinned individuals to normal individuals.

  Anything that annoys, alienates or discourages users is going to deter a lot of people while retaining thick-skinned individuals.  Some thick-skinned individuals are leaders, but many are trolls, and thick-skinned individuals may be more likely to resist acculturation or try to change the culture (though it could be argued the other way - that their thick skin allows them to take more honest feedback).  For example: anonymous, unexplained down votes create a gauntlet for new users to endure which selects for a high tolerance to negative feedback.  This may be the reason it has been reported that there are a lot of "annoying debater types".

 

  People that we do want fail to join because the method of protection puts them off.

  There are two pitfalls that I think are going to be particularly attractive, but we should really avoid them:

  1.) Filtering into hard/easy based on anything other than knowledge about rational thinking.  There are various ways that could go very wrong.

    - Filtering in any other way will keep out advanced folks who may have a lot to teach.

    If a person has already learned good reasoning skills in some other way, do we want them at the site?  There might be logic professors, Zen masters, debate competition champs, geniuses, self-improvement professionals, hard-core bookworms and other people who are already advanced and are interested in teaching others to improve their skills, or interested in finding a good challenge, or are interested in contributing articles, but have already learned much of the material the sequences cover.  Imagine that a retired logic professor comes by hoping to get a challenge from similarly advanced minds and perhaps do a little volunteer work teaching about logic as a pastime.  Now imagine requiring them to read 2,000 pages of "how to think rationally" in order to gain access to all the discussion areas.  This will almost guarantee that they go elsewhere.

    - Filtering based on the sequences or other cultural similarities would promote conformity and repel the true thinkers.

    If true rationalists think for themselves, some of them will think differently, and some of them will disagree.  Eliezer explained in "undiscriminating skeptics": "I do propose that before you give anyone credit for being a smart, rational skeptic, that you ask them to defend some non-mainstream belief."  He defines this as "It has to be something that most of their social circle doesn't believe, or something that most of their social circle does believe which they think is wrong."  If we want people in the "hard" social group who are likely to hold and defend non-mainstream beliefs, we have to filter out people unable to defend beliefs without scaring off those who have beliefs different from the group.

  2.) Discouraging people with unusually flawed English from participating at all levels.  Doing that would stop two important sources of new perspectives from flowing in:

    - People with cultural differences, who may bring in fresh perspectives.

    If you're from China, you may want to share perspectives that could be new and important to a Westerner, but may be less likely to meet the technical standards of a perfectionist when it comes to writing in English.

    - People with learning differences, whose brains work differently and may offer unique insight.

    A lot of gifted people have learning disorders, and even gifted people who don't tend to have large gaps between skill levels.  It is not uncommon to find a gifted person whose abilities with one skill are up to 40% behind (or better than) their abilities in other areas.  This phenomenon is called "asynchronous development".  We associate spelling and grammar with intelligence, but the truth is that those who have a high verbal IQ may not have equally intelligent things to say, and people who word things crudely due to asynchronous development (engineers, for instance, are not known for their communication skills but can be brilliant at engineering) may be ignored even though they could have important things to say.  Dyslexics, who have all kinds of trouble from spelling to vocabulary to arranging sentences oddly, may be ignored despite the fact that "children and adults who are dyslexic usually excel at problem solving, reasoning, seeing the big picture, and thinking out of the box" (Yale).

   Everyone understands the importance of making sure all the serious articles get published with good English, but frequently in intellectual circles, the attitude is that if you aren't a perfectionist about spelling and grammar, you're not worth listening to at all.  The problem of getting articles polished when they are written by dyslexics or people for whom English is a second language should be pretty easy - people with English problems can simply seek a volunteer editor.  The ratio of articles being published by these folks versus the number of users at the site encourages me to believe that these guys will be able to find someone to polish their work.  Since it would be so easy to accommodate for these disabilities, taking an attitude that puts form over function as a filter would not serve you well.  If dyslexics and people with cultures different from the majority feel that we're snobby about technicalities, they could be put off.  This could already be happening and we could be missing out on the most creative and most different perspectives this way.

 

People who qualify under the "letter" of the standards do not meet the spirit of the standards.

  For instance:  They claim to be rationalists because they agree with a list of things that rationalists agree with, but don't think for themselves, as Eliezer cautions about in undiscriminating skeptics.  Asking them questions like "Are you an atheist?" and "Do you think signing up for cryo makes sense?" would only draw large numbers of people who agree but do not think for themselves.  Worse, that would send a strong message saying: "If you don't agree with us about everything, you aren't welcome here."

 

The right people join, but acculturate slowly or for some reason do not acculturate. 

  - Large numbers of users, even desirable ones, will be frustrating if newbie materials are not prominently posted.

  I was very confused and disoriented as a new user.  I think that there's a need for an orientation page.  I wrote about my experiences as a new user here which I think might make a good starting point for such a new user orientation page.  I think LessWrong also needs a written list of guidelines and rules that's positioned to be "in your face" like the rest of the internet does (because if users don't see it where they expect to find it, then they will assume there isn't one).  If new users adjust quickly, both old users and new users will be less annoyed if/when lots of new users join at once.

 

The filtering mechanism gives LessWrong a bad name.

  For instance, if we were to use an IQ test to filter users, the world may feel that LessWrong is an elitist organization.  Sparking an anti-intellectual backlash would do nothing to further the cause of promoting rationality, and it doesn't truly reflect the spirit of bringing everyone up, which is what this is supposed to do.  Similarly, asking questions that may trigger racial, political or religious feelings could be a bad idea - not because they aren't sources of bias, but because they'll scare away people who may have been open to questioning and growing but are not open to being forced to choose a different option immediately.  The filters should be a test about reasoning, not a test about beliefs.

 

Proposed Filtering Mechanisms:

 

  Principle One:  A small number of questions can deter a lot of activity.

  As a web pro, I have observed a 10-question registration form slash the number of files sent through a file upload input that used to be public.  The ten questions were not that hard - just name, location, password, etc.  Asking questions deters people from signing up.  Period.  That may be why, if you've observed this trend as well, a lot of big websites have begun asking for minimal registration info: email address and password only.  Years ago, that was not common; it seemed that everyone wanted to give you ten or twenty questions.  For this reason, I think it would be best if the registration form stays simple, but if we create extra hoops to jump through to use the hard discussion area, only those who are seriously interested will join in there.  Specific examples of questions that meet the other criteria are located in the proposed acculturation methods section under: A test won't deter ignorant cheaters, but it can force them to educate themselves.

 

  Principle Two:  A rigorous environment will deter those who are not serious about doing it right.

  The ideal is to fill the hard discussion area with the sort of rationalists who want to keep improving, who are not afraid to disagree with each other, who think for themselves.  How do you guarantee they're interested in improving?  Require them to sacrifice for improvement.  Getting honest feedback is necessary to improve, but it's not pleasant.  That's the perfect sacrifice requirement:

  Add a check box that they have to click where it says "By entering the hard discussion area, I'm inviting everyone's honest criticisms of my ideas.  I agree to take responsibility for my own emotional reactions to feedback and to treat feedback as valuable.  In return for their valuable feedback, which is a privilege and service to me, I will state my honest criticisms of their ideas as well, regardless of whether the truth could upset them."

  I think it's common to assume that in order to give honest feedback one has to throw manners out the window.  I disagree with that.  I think there's a difference between pointing out a brutal reality, and making the statement of reality itself brutal.  Sticking to certain guidelines like attacking the idea, not the person and being objective instead of ridiculing makes a big difference.  

  There are other ways, also, for less bold people, like the one that I use in IRL environments: Hint first (sensitive people get it, and you spare their dignity) then be clear (most people get it) then be brutally honest (slightly dense people get it). If I have to resort to the 2x4, then I really have to decide whether enlightening this person is going to be one of those battles I choose or one of those battles I do not choose.  (I usually choose against those battles.)

  How do you guarantee they're capable of disagreeing with others?  Making it clear that they're going to experience disagreements by requiring them to invite disagreements will not appeal to conformists.  Those who are not yet thinking for themselves will find it impossible to defend their ideas if they do join, so most of them will become frustrated and go back to the easy discussion area.  People who don't want intellectual rigor will be put off and leave.

  It's important that the wording for the check box has some actual bite to it, and that the same message about the hard discussion area is echoed in any pages that advise on the rules, guidelines, etiquette, etc.  To explain why, I'll tell a little story about an anonymous friend:

  I have a friend who worked at Microsoft.  He said the culture there was not open to new ideas and that management was not open to hearing criticism.  He interviewed with various companies and chose Amazon.  According to this friend, Amazon actually does a good job of fulfilling values like inviting honest feedback and creating an environment conducive to innovation.  He showed me the written values for each.  I didn't think much of this at first because most of them are boring and read like empty marketing copy.  Amazon.com has the most incredible written values page I've ever seen - it does more than sit there like a static piece of text.  It gives you permission.  Instead of saying something fluffy like "We value integrity and honesty and our managers are happy to hear your criticisms.", it first creates expectations for management: "Leaders are sincerely open-minded, genuinely listen, and are willing to examine their strongest convictions with humility.", and then gives employees permission to give honest feedback to decision-makers: "Leaders (all employees are referred to as "leaders") are obligated to respectfully challenge decisions when they disagree, even when doing so is uncomfortable or exhausting.  Leaders have conviction and are tenacious. They do not compromise for the sake of social cohesion."  The Amazon values page gives their employees permission to innovate as well: "As we do new things, we accept that we may be misunderstood for long periods of time."  If you look at Microsoft's written values, there's no bite to them.  What do I mean by bite?

  Imagine you're an employee at Amazon.  Your boss does something stupid.  The cultural expectation is that you're not supposed to say anything - offending the boss is bad news, right?  So you're inhibited.  But the thing they've done is stupid.  So you remember back to the values page and go bring it up on your computer.  It says explicitly that your boss is expected to be humble and that you are expected to sacrifice social cohesion in this case and disagree.  Now, if your boss gets irritated with you for disagreeing, you can point back to that page and say "Look, it's in writing, I have permission to tell you."

  Similarly, there is, what I consider to be, a very unfortunate social skills requirement that more or less says if you don't have something nice to say, don't say anything at all.  Many people feel obligated to keep constructive criticism to themselves.  A lot of us are intentionally trained to be non-confrontational.  If people are going to overcome this lifetime of training to squelch constructive criticism, they need an excuse to ignore that social training.  Not just any excuse.  It needs to be worded to require them to do that and it needs to be worded to require them to do it explicitly despite the consequences.

 

  Principle Three:  If we want innovation, we have to make innovators feel welcome.

  That brings me to another point.  If you want innovation, you can't deter the sort of person who will bring it to you: the "people who will be misunderstood for long periods of time", as Amazon puts it.  If you give specific constructive criticism to a misunderstood person, this will help them figure out how to communicate - how else will they navigate the jungle of perception and context differences between themselves and others?  If you simply vote them down, silently and anonymously, they have no opportunity to learn how to communicate with you and what's worse is that they'll be censored after three votes.  This ability for three people to censor somebody with no accountability, and without even needing a reason, encourages posters to keep quiet instead of taking the sort of risk an innovator needs to take in presenting new ideas, and it robs misunderstood innovators of those opportunities for important feedback - which is required for them to explain their ideas.  Here is an example of how feedback can transform an innovator's description of a new idea from something that seems incomprehensible into something that shows obvious value:

  On the "Let's start an important start-up" thread, KrisC posts a description of an innovative phone app idea.  I read it and I cannot even figure out what it's about.  My instinct is to write it off as "gibberish" and go do something else.  Instead, I provide feedback, constructive criticism and questions.  It turns out that the idea KrisC has is actually pretty awesome.  All it took was for KrisC to be listened to and to get some feedback, and the next description that KrisC wrote made pretty good sense.  It's hard to explain new ideas but with detailed feedback, innovation may start to show through.  Link to KrisC and I discussing the phone app idea.

 

Proposed Acculturation Methods:

 

   Send them to Center for Modern Rationality

   Now that I have discovered the post on the Center for Modern Rationality, and have seen that they're targeting the general population and beginners with material for local meetups, high schools and colleges, and that they're planning some web apps to help with rationality training, I see that referring people over to them might be a great suggestion.  Saturn suggested sending them to appliedrationality.org before I found this, but I'm not sure if that would be adequate since I don't see a lot of stuff for people to do on their website.

 

    Highlight the culture.

    A database of cultural glossary terms can be created and used to highlight those terms on the forum.  The terms are already on the page, so what good would this do?  Well, first they can be automatically linked to the relevant sequence or wiki page.  If old users do not have to look for the link, this speeds up the process of mentioning them to new users quite a lot.  Secondly, it would make the core cultural items stand out from all of the other information, which will likely cause new users to prioritize it.  Thirdly, there will be a visual effect on the page.  You'll be able to see that this place has its own vocabulary, its own personality, its own memes.  It's one thing to say "LessWrong has been influenced by the sequences" to a new user who hasn't seen all those references on all of those pages, and who, even if they do see them, won't know where they're from; it's another to make it immediately obvious by giving them a visual that illustrates the point.
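A minimal sketch of how the term highlighting could work, assuming a simple term-to-URL table; the terms and URLs below are placeholders rather than an actual glossary database:

```python
import re

# Hypothetical glossary; both the terms and the URLs are placeholders.
GLOSSARY = {
    "Endless September": "https://wiki.example.org/Endless_September",
    "undiscriminating skeptics": "https://wiki.example.org/Undiscriminating_skepticism",
}

def highlight_terms(comment_html: str) -> str:
    """Wrap known glossary terms in links to the relevant wiki/sequence page."""
    for term, url in GLOSSARY.items():
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        comment_html = pattern.sub(
            lambda match: f'<a class="glossary" href="{url}">{match.group(0)}</a>',
            comment_html,
        )
    # A real implementation would also need to skip text that is already
    # inside markup, so links don't get nested.
    return comment_html

print(highlight_terms("Beware undiscriminating skeptics."))
```

Running this when a comment is rendered would keep the glossary in one place, so every page picks up new terms automatically.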

 

    Provide new users with real feedback instead of mysterious anonymous down votes:

    We have karma vote buttons, but this is not providing useful feedback for new users.  Without a specific reason, I have no way to tell if I'm being down voted by trolls and I may see ten different possible reasons for being voted down and not know which one to choose.  This annoyance selects for thick-skinned individuals like trolls and fails to avoid the "imbalance in the proportion of thick-skinned individuals to normal individuals" side-effect.

    If good new users are to be preserved, and the normal-people-to-troll ratio is to be maintained, we need to add a "vote to ban" button that's used only for blatant misbehavior, and if an anonymous feedback system is to be used for voting down, it needs to prompt you for more detailed feedback - either allowing you to select from categories, or to give at least one or two words as an explanation.  Also, the comments should show both upvotes and downvotes.  If you don't know when you've said something controversial and are being encouraged to view everything you say as black-and-white good-or-bad, this promotes conformity.

 

     A test won't deter ignorant cheaters, but it can force them to educate themselves.

    Questions can be worded in such a way that they serve as a crash course in reasoning in the event that someone posts a cheat sheet or registrants look up all the answers on the internet.  Assuming that the answer options are randomly ordered, so that you have to actually read them, the test should, at the very least, familiarize them with the various biases and logical fallacies, etc.  Examples:

    --------------

    Person A in a debate explains a belief but it's not well-supported.  Their opponent, person B, says they're an idiot.  What is this an example of?

    A. Attacking the person, a great way to really nail a debate.

    B. Attacking the person, a great way to totally fail in debate because you're not even attacking their ideas.

    --------------

    You are with person X and person Y.  Person Y says they have been considering some interesting new evidence of what might be an alien space craft and aren't sure what to think yet.  You both see person Y's evidence, and neither of you has seen it before.  Person X says to you that they don't believe in UFOs and don't care about person Y's silly evidence.  Who is the better skeptic?

    A. Person X, because they have the correct belief about UFOs.

    B. Person Y, because they are actually thinking about it, avoiding undiscriminating skepticism.

    --------------

    Note:  These questions are intentionally knowledge-based.  If the purpose is to avoid requiring an IQ test, and to create an obstacle that requires you to learn about reasoning before posting in "hard", that's the only way that these can be done.

 

    Encouraging users to lurk more. 

   Vaniver contributed this: Another way to cut down on new-new interaction is to limit the number of comments someone can make in a time period - if people can only comment once a day until their karma hits 20, and then once an hour until their karma hits 100, and then they're unrestricted, that will explicitly encourage lurking / paying close attention to karma among new members. (It would be gameable, unless you did something like prevent new members from upvoting the comments of other new members, or algorithmically keeping an eye out for people gaming the system and then cracking down on them.) [edit] The delay being a near-continuous function of the karma - say, 24 hours*exp(-b karma) - might make the incentives better, and not require partitioning users explicitly. No idea if that would be more or less effort on the coding side.
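A minimal sketch of that near-continuous delay function; the decay constant b is a made-up tuning parameter for illustration, not a value anyone proposed:

```python
import math

def comment_delay_hours(karma: int, b: float = 0.05) -> float:
    """Wait time between comments: 24 hours * exp(-b * karma).

    New users (karma 0) wait about a day between comments; the wait decays
    smoothly toward zero as karma grows, with no explicit user partitioning.
    """
    return 24.0 * math.exp(-b * max(karma, 0))

# Example delays at a few karma levels.
for k in (0, 20, 100):
    print(k, round(comment_delay_hours(k), 2))  # 24.0, 8.83, 0.16 hours
```

With b = 0.05, a brand-new user waits a day between comments, a 20-karma user waits about nine hours, and by 100 karma the limit is negligible (around ten minutes).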

    Cons:  This would deter some new users from becoming active users by causing them to lose steam on their initial motivation to join.  It might be something that would deter the right people.  It might also filter users, selecting for the most persistent ones, or for some other trait that might change the personality of the user base.  This would exacerbate the filtering effect that the current karma system is exerting, which, I theorize, is causing there to be a disproportionate number of thick-skinned individuals like trolls and debate-oriented newbies.  My theory about how the karma system is having a bad influence

 

    Give older users more voting power. 

    Luke suggested "Maybe this mathematical approach would work. (h/t matt)" on the "Call for Agreement" thread. 

    I question, though, whether changing the karma numbers on the comments and posts in any way would have a significant influence on behavior or a significant influence on who joins and stays. Firstly, votes may reward and punish but they don't instruct very well - unless people are very similar, they won't have accurate assumptions about what they did wrong. I also question whether having a significant influence on behavior would prevent a new majority from forming because these are different problems. The current users who are the right type may be both motivated and able to change, but future users of the wrong type may not care or may be incapable of changing. They may set a new precedent where there are a lot of people doing unpopular things so new people are more likely to ignore popularity. The technique uses math and the author claims that "the tweaks work" but I didn't see anything specific about what the author means by that nor evidence that this is true. So this looks good because it is mathematical, but it's less direct than other options so I'm questioning whether it would work.

  Vladimir_Nesov posted a variation here.

 

  Make a different discussion area for users with over 1000 karma.

  Posted by Konkvistador here.

 

  Make a Multi Generation Culture.

  Limit the number of new users that join the forum to a certain percentage per month, sending the rest to a new forum.  If that forum grows too fast, create additional forums.  This would be like having different generations.  New people would be able to join an older generation if there is space.  Nobody would be labeled a "beginner".

 

  Temporarily turn off registration or limit the number of users that can join.

  (See the cliff notes version for more.)

 

Should easy discussion participants be able to post articles?

  I think the answer to this is yes, because no filtering mechanism is perfect and the last thing you want to do is filter out people with a different and important point of view.  Unless the site is currently having issues with trolls posting new articles, or with the quality of the articles going down, leaving that freedom intact is best.  I definitely think, though, that written guidelines for posting an article need to be put in "in your face" expected places.  If a lot of new users join at once, well-meaning but confused people will be posting the wrong sorts of things there - making sure they've got the guidelines right there is all that's probably needed to deter them.

 

Testing / measuring results:

  How do we tell if this worked?  Tracking something subjective, like whether we're feeling challenged or inundated with newbies, is not going to be a straightforward matter of looking at numbers.  (Methods to help willing people learn faster deserve their own post.)  Just because it's subjective doesn't mean tracking is impossible or that working out whether it's made a difference cannot be done.  I suspect that a big difference will be noticed in the hard discussion area right away.  Here are some figures that are relevant and can be tracked, that may give us insight and ways to check our perceptions:

  1.  How many people are joining the hard forum versus the easy forum?  If we've got a percentage, we know how *much* we've filtered, though we won't know exactly *who* we've filtered.

  2.  Survey the users to ask whether the conversations they're reading have increased in quality.

  3.  Survey the users to ask whether they've been learning more since the change.

  4.  See which area has the largest ratio of users with lots of vote downs. 

  (This could be tricky because people who frequently state disagreements might be doing a great service to the group, but might be unpopular because of it, and people who are innovative may be getting voted down due to being misunderstood.  One would think, though, that people who are unpopular due to disagreeing, or being innovative, assuming they're serious about good reasoning, would end up in the hard forum.) 

 

Request for honest feedback:

  Your honest criticisms of this idea and your suggestions will be appreciated, and I will update this idea or write a new one to reflect any good criticisms or ideas you contribute.

 

This is in the public domain:

  This idea is hereby released into the public domain, with acknowledgement from Luke Muehlhauser that those were my terms prior to posting.  My intent is to share this idea to make it impossible to patent and my hope is that it will be free for the whole world to use.

  Preventing discussion from being watered down by an "endless September" user influx. by Epiphany is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

 

Less Wrong: The podcast

7 mapnoterritory 01 June 2012 08:33PM

Would it be possible to have a monthly podcast on Less Wrong topics? A possible format could be roughly four panelists (maybe half core and half rotating members) discussing theoretical and practical aspects of rationality, AI/singularity, cognitive science, etc.

Episodes could also easily be framed by assigning some reading from the sequences or recent LW articles and then discussing them in podcast form. This format seems to work great for www.partiallyexaminedlife.com (a quite entertaining and informative podcast, albeit on the diseased discipline of philosophy).

To keep things interesting, occasional episodes could be done in the form of discussions with guests (via Skype), e.g. the usual suspects from the SIAI and fellow AI scientists, people like Robin Hanson, Aubrey de Grey, other rationalist/skeptical bloggers/podcasters, but also AI skeptics and so on.

The level of the podcast should still be accessible to newcomers, but the discussion could wade into somewhat deeper waters. I would also love to hear discussions on more technical topics (like CEV, Solomonoff induction, AIXI, etc.). Just imagine how exciting discussions on the more controversial decision-theoretic paradoxes could be!

A further possibility is to plan ahead from time to time and cover in more depth the topic from that week's Harry Potter and the Methods of Rationality podcast, which would potentially help to increase the audience (it would be like a post-grad HPMoR).

What do you think? Could this be done? With the depth and breadth of material we have here and all the interesting people to talk to I don't think there would be a shortage of topics.

 

Edit: Changed the (bi-)weekly timescale to monthly.

 

 

[META] Recent Posts for Discussion and Main

9 Oscar_Cunningham 13 May 2012 10:42AM

This link

http://lesswrong.com/r/all/recentposts

gives a page which lists all the recent posts in both the Main and Discussion sections. I've posted it in the comments section before, but I decided to put it in a discussion post because it's a really handy way of accessing the site. I found it by guessing the URL.

[Link] Study on Group Intelligence

9 atucker 15 August 2011 08:56AM

Full disclosure: This has already been discussed here, but I see utility in bringing it up again. Mostly because I only heard about it offline.

The Paper:

Some researchers were interested in whether, in the same way that there's a general intelligence g that seems to predict competence in a wide variety of tasks, there is a group intelligence c that could do the same. You can read their paper here.

Their abstract:

Psychologists have repeatedly shown that a single statistical factor—often called “general intelligence”—emerges from the correlations among people’s performance on a wide variety of cognitive tasks. But no one has systematically examined whether a similar kind of “collective intelligence” exists for groups of people. In two studies with 699 people, working in groups of two to five, we find converging evidence of a general collective intelligence factor that explains a group’s performance on a wide variety of tasks. This “c factor” is not strongly correlated with the average or maximum individual intelligence of group members but is correlated with the average social sensitivity of group members, the equality in distribution of conversational turn-taking, and the proportion of females in the group.

Basically, groups with higher social sensitivity, equality in conversational turn-taking, and proportion of females are collectively more intelligent. On top of that, those effects trump things like average IQ or even max IQ.

I theorize that proportion of females mostly works as a proxy for social sensitivity and turn-taking, and the authors speculate the same.

Some thoughts:

What does this mean for Less Wrong?

The most important part of the study, IMO, is that "social sensitivity" (measured by a test where you try to discern emotional states from someone's eyes) is such a strong predictor of group intelligence. It probably helps people to gauge other people's comprehension, but based on the fact that people sharing talking time more equally also helps, I would speculate that another chunk of its usefulness comes from being able to tell if other people want to talk, or think that there's something relevant to be said.

One thing that I find interesting in the meatspace meetups is how, in new groups, conversation tends to be dominated by the people who talk the loudest and most insistently. Often, those people are also fairly interesting. However, I prefer the current, older DC group to the newer one, where there's much more equal speaking time, even though this means that I don't talk as much. Most other people seem to share similar sentiments, to the point that at one early meetup it was explicitly voted to be true that most people would rather talk more.

Solutions/Proposals:

Anything we should try doing about this? I will hold off on proposing solutions for now, but this section will get filled in sometime.

Topics to discuss CEV

6 diegocaleiro 06 July 2011 02:19PM

     CEV is our current proposal for what ought to be done once you have AGI flourishing around. Many people have had bad feelings about this. While at the Singularity Institute, I decided to write a text to discuss CEV, from what it is for, to how likely it is to achieve its goals, and how much fine-grained detail needs to be added before it is an actual theory.

Here you will find a draft of the topics I'll be discussing in that text. The purpose of showing this is that you take a look at the topics, spot something that is missing, and write a comment saying: "Hey, you forgot this problem, which, summarised, is bla bla bla bla" or "be sure to mention paper X when discussing topic 2.a.i."

Please take a few minutes to help me add better discussions.

Do not worry about pointing to previous Less Wrong posts about it; I have them all.

 

  1. Summary of CEV
  2. Troubles with CEV
    1. Troubles with the overall suggestion
      1. Concepts on which CEV relies that may not be well shaped enough
    2. Troubles with coherence
      1. The volitions of the same person when in two different emotional states might be different - it’s as if they are two different people. Are there any good criteria by which a person’s “ultimate” volition may be determined? If not, is it certain that even the volitions of one person’s multiple selves will be convergent?
      2. But when you start dissecting most human goals and preferences, you find they contain deeper layers of belief and expectation. If you keep stripping those away, you eventually reach raw biological drives which are not a human belief or expectation. (Though even those are, in a sense, beliefs and expectations of evolution; let’s ignore that for the moment.)
      3. Once you strip away human beliefs and expectations, nothing remains but biological drives, which even the animals have. Yes, an animal, by virtue of its biological drives and ability to act, is more than a predicting rock, but that doesn’t address the issue at hand.
    3. Troubles with extrapolation
      1. Are small accretions of intelligence analogous to small accretions of time in terms of identity? Is extrapolated person X still a reasonable political representative of person X?
    4. Problems with the concept of Volition
      1. Blue eliminating robots (Yvain post)
      2. Error minimizer
      3. Goals x Volitions
    5. Problems of implementation
      1. Undesirable solutions for hardware shortage, or time shortage (the machine decides to only CV, but not E)
      2. Sample bias
      3. Solving apparent non-coherence by meaning shift
  3. Praise of CEV
    1. Bringing the issue to practical level
    2. Ethical strength of egalitarianism

 

  4. Alternatives to CEV
    1. (                     )
    2. (                     )
    3. Normative approach
    4. Extrapolation of written desires

 

  5. Solvability of remaining problems
    1. Historical perspectives on problems
    2. Likelihood of solving problems before 2050
    3. How humans have dealt with unsolvable problems in the past

 

Q: What has Rationality Done for You?

11 atucker 02 April 2011 04:13AM

So after reading SarahC's latest post I noticed that she's gotten a lot out of rationality.

More importantly, she got different things out of it than I have.

Off the top of my head, I've learned...

On top of becoming a little bit more effective at a lot of things, and with many fewer problems.
(I could post more on the consequences of this, but I'm going for a different point)

Where she got...

  • a habit of learning new skills
  • better time-management habits
  • an awesome community
  • more initiative
  • the idea that she can change the world

I've only recently started making a habit out of trying new things, and that's been going really well for me. Is there other low-hanging fruit that I'm missing?

What cool/important/useful things has rationality gotten you?