
Comment author: ImmortalRationalist 29 June 2017 08:29:35PM 0 points [-]

Are there any 2017 LessWrong surveys planned?

Comment author: ingres 13 August 2017 05:00:13AM 1 point [-]

Sorry for the late response but yes, I was just working on finishing one up now.

Comment author: ingres 20 June 2017 12:52:41AM 1 point [-]

This topic was actually covered a bit in the 2016 LessWrong Survey, in which respondents were asked if they'd ever had any 'moral anxiety' over Effective Altruism:


Comment author: Unnamed 14 April 2017 09:01:13PM 0 points [-]

I'd like to see some changes to the CFAR-related questions; I've sent a PM with details.

Comment author: ingres 15 April 2017 07:15:10AM 0 points [-]

I have not received one. Please re-send.

Comment author: btrettel 12 April 2017 09:45:41PM *  2 points [-]

I'd be interested in a question about aerobic fitness. My impression is that most rationalists severely underrate aerobic physical activity compared with anaerobic, which is surprising because anaerobic exercise does little for cardiovascular capacity. Presumably, given the interest in cryonics and whatnot here, rationalists are interested in living longer. Cardiovascular capacity (typically measured as VO2max) is strongly correlated with longevity, and the direction of causation is easy to see.

Possible question: "Over the past month, have you typically met the US federal guidelines for aerobic physical activity? This means at least 150 minutes (2 hours and 30 minutes) a week of moderate-intensity, or 75 minutes (1 hour and 15 minutes) a week of vigorous-intensity aerobic physical activity, or an equivalent combination of moderate- and vigorous-intensity aerobic activity. See health.gov for more information."

There's a lot of data on this question, so it will be easy to compare LessWrongers against other groups.
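The "equivalent combination" clause in the proposed question reduces to a simple check, since the federal guidelines count one vigorous minute as two moderate minutes. A minimal sketch (the function name is my own, not part of the guidelines):

```python
def meets_aerobic_guideline(moderate_min_per_week: float,
                            vigorous_min_per_week: float) -> bool:
    """US federal aerobic guideline: at least 150 min/week of
    moderate-intensity activity, or 75 min/week of vigorous-intensity
    activity, or an equivalent combination, where one vigorous minute
    counts as two moderate minutes."""
    equivalent_moderate = moderate_min_per_week + 2 * vigorous_min_per_week
    return equivalent_moderate >= 150

# e.g. 60 moderate + 50 vigorous minutes = 160 equivalent minutes,
# which meets the guideline.
```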

Comment author: ingres 12 April 2017 10:42:09PM 2 points [-]

I don't know yet whether I'll include this, but I just wanted to thank you for your awesome presentation of this question: a great sample question, excellent assurance that the data will in fact be comparable with other studies afterward, and a good connection to 'rationalist'-type interests.

I'm impressed, and definitely considering it.

Comment author: Raemon 09 April 2017 04:29:37PM 2 points [-]

I like the framework through which you're approaching this (i.e. managing complexity)

Something I notice I wish I had is access to my answers to past years' surveys, to track how my answers have evolved over time. The effort for facilitating that is probably nontrivial, but to start enabling backwards compatibility you could include some kind of identifying tag that respondents are likely to remember in the future but that is untraceable (or that leaves it to individuals to decide how to trade off "I will easily remember this next year" against "this can't be used to identify me").

"CFAR followup"-type questions might be good, like "Have you tried a new thing in the past week/month?" or "When's the last time you made a major career change on purpose?"

Comment author: ingres 09 April 2017 06:09:32PM *  1 point [-]

Previous surveys have included this; I just overlooked it.

The effort for facilitating that is probably nontrivial

No, the effort to facilitate at least the weak form you're talking about is fairly trivial. In terms of looking at your personal results, it might be possible to coax the software into sending you a copy of your answers; if I remember correctly, the software used last time supports this, but I turned it off as an extraneous feature.

Requesting Questions For A 2017 LessWrong Survey

6 ingres 09 April 2017 12:48AM

It's been twelve months since the last LessWrong Survey, which means we're due for a new one. But before I can put out a new survey in earnest, I feel obligated to solicit questions from community members and check in on any ideas that might be floating around for what we should ask.

The basic format of the thread isn't too complex: just pitch questions. For the best chance of inclusion, however, include:

  • A short cost/benefit analysis of including the question. Keep in mind that some questions are too invasive or embarrassing to be reasonably included. Other questions might leak too many bits. There is limited survey space and some items might be too marginal to include at the cost of others.
  • An example of a useful analysis that could be done with the question(s), especially an interesting analysis in concert with other questions. E.g., it's best to start with a larger question like "how does parental religious denomination affect the cohort's current religion?" and then translate that into concrete questions about religion.
  • Some idea of how the question can be done without using write-ins. Unfortunately, write-in questions add massive amounts of man-hours to the total analysis time for a survey and make it harder to get out a final product when all is said and done.

The last survey included 148 questions; some sections will not be repeated in the 2017 survey, which gives us an estimate of our question budget. I would prefer not to go over 150 questions and, if at all possible, to come in at well under that. The removed sections are:

  • The Basilisk section on the last survey provided adequate information on the phenomena it was surveying, and I do not currently plan to include it again in the 2017 survey. This frees up six questions.
  • The LessWrong Feedback portion of the last survey likewise provided adequate information, and I would prefer to replace it in the 2017 survey with a section measuring the site's recovery, if any. This frees up 19 questions.

I also plan to significantly reform multiple portions of the survey. I'm particularly interested in making changes to:

  • The politics section. In particular, I would like to update the questions about feelings on political issues with new entries and overhaul some of the options on various questions.
  • The calibration section, which I handled poorly last year and would like to replace this year with an easily scored set of questions. To be more specific, good calibration questions should:
    • Be Fermi-estimable with no more than a standard 5th-grade education. They should not rely on particular hidden knowledge or overly specific information. E.g., "Who wrote the Foundation novels?" is a terrible calibration question, while "What is the height of the Eiffel Tower in meters, within a multiple of 1.5?" is decent.
    • Have a measurable distance component, so that even if an answer is wrong (as the vast majority of answers will be) it can still be scored.
    • Use a measure of distance that gets proportionately smaller the closer an answer is to being correct and proportionately larger the further it is from being correct.
    • Be easily (or at least sanely) calculable by programmatic methods.
  • The probabilities section, which is probably due for some revision; in previous years I haven't even answered it because I found the wording of some questions too confusing.
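A scoring rule meeting those criteria can be sketched in a few lines. This is a hypothetical illustration, not a committed design; the function name and the choice of log-ratio distance are my own. Log-ratio distance is zero for a perfect answer, grows smoothly with error, treats being 2x too high and 2x too low the same, and is trivially computable in bulk:

```python
import math

def calibration_score(guess: float, true_value: float) -> float:
    """Score a numeric estimate by its absolute log-ratio distance
    from the true value. 0.0 means a perfect answer; larger is worse.
    Being off by the same multiple in either direction scores equally.
    """
    if guess <= 0 or true_value <= 0:
        raise ValueError("log-ratio scoring requires positive values")
    return abs(math.log(guess / true_value))

# Example with the Eiffel Tower question (true height ~330 m):
# a guess of 300 m scores lower (better) than a guess of 100 m,
# and "within a multiple of 1.5" corresponds to a score <= log(1.5).
```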

So, for the best chance of inclusion, keep these proposed reforms in mind when making your suggestions.

(Note: If you have suggestions on questions to eliminate, I'd be glad to hear those too.)

Comment author: Viliam 03 April 2017 09:45:04AM *  26 points [-]

Everyone, could we please stop using the word "sociopath" to mean things other than... you know... sociopathy?

I also like the linked article, and I believe it does a great job of describing the social dynamics of subcultures. I have shared that article many times. But while it is funny to use exaggerations for shock value, making the exaggerated word the new normal is... I guess in obvious conflict with the goal of rationality and clear communication. Sometimes I don't even know how many people are actually aware that "trying to make a profit from things you don't deeply care about" and "being diagnosed as a sociopath" are actually two different things.

To explain why I care about this, imagine a group that decides it is cool to refer to "kissing someone for social reasons, not because you actually desire to" as "rape". Because, you know, there are some similarities; both are a kind of intimate contact, etc. Okay, if you write an article describing the analogies, that's great, and you have a good point. It just becomes idiotic when the whole community decides to use "rape" in this sense, and then they keep talking like this: "Yesterday we visited Grandma. When we entered the house, she raped us, and then we raped her back. I really don't like it when old people keep raping me like this, but I don't want to create conflicts in the family. But maybe I am just making a mountain out of a molehill, and being raped is actually not a big deal." Followed by a dozen replies using the same vocabulary.

First, this completely unnecessarily burns your weirdness points. Weird jargon makes communication with outsiders more difficult, and makes it harder for outsiders to join the group, even if they would otherwise agree with the group's values. After this point, the absurdity heuristic works against anything you say. Sometimes there is a good reason for using jargon (it can compress difficult concepts), but I believe in this case the benefits are not proportional to the costs.

More importantly, imagine how difficult it would be to have a serious discussion about actual rape if talking like this became the group norm. Anytime someone mentioned being actually raped by a grandparent as a child, there would be a guaranteed reaction from someone: "yeah, yeah, happens to me when we visit Grandma every weekend, not a big deal". Or someone would express concern about possible rape at a community weekend, and people would respond by making stickers saying "kisses okay" and "don't like kissing", believing they had addressed the issue properly.

I believe it would be really bad if the rationalist community lost the ability to talk about actual sociopathy rationally. Because one day this topic may become an important one, and we may be too busy calling everyone who sells Bayes T-shirts without having read the Sequences a "sociopath". But even if you disagree with me on the importance of this, I hope you can agree that using words like this is stupid. How about just calling it "exploiting"? As in: "some people are only exploiting the rationalist community to get money for their causes, or to get free work from us, without providing anything to our causes in return -- we seriously need to put a stop to this". Could words like this get the message across, too?

Also, if you want to publicly address these people ("hey guys, we suspect you are just using us for free resources; how about demonstrating some commitment to our causes first?"), it will probably help keep the discussion friendly if you don't call them "sociopaths". Similarly, imagine LessWrong having an article saying (a) "vegans as a group benefit from the rationalist community, but don't contribute anything to the art of Bayes in return", or (b) "vegans are sociopaths". Regardless of whether you personally happen to be a vegan or not, the second is obviously harmful.

tl;dr -- we are in the rationality business here, not in the clickbait business; talk accordingly

(EDIT: Just to be explicit about this, ignoring the terminology issue, I completely agree with the parent comment.)

Comment author: ingres 04 April 2017 04:06:18PM *  2 points [-]

Thank you. This was really bothering me, but it didn't occur to me that I should say anything about it.

Comment author: John_Maxwell_IV 28 November 2016 03:26:11AM *  2 points [-]

Reputations seem to be very fragile on the Internet. I wonder if there's anything we could do about that? The one crazy idea I had was (rot13'd so you'll try to come up with your own idea first): znxr n fvgr jurer nyy qvfphffvba vf cevingr, naq gb znxr vg vzcbffvoyr gb funer perqvoyr fperrafubgf bs gur qvfphffvba, perngr n gbby gung nyybjf nalbar gb znxr n snxr fperrafubg bs nalbar fnlvat nalguvat.

Comment author: ingres 28 November 2016 09:22:05PM 2 points [-]

Ooh, your idea is interesting. Mine was to perngr n jro bs gehfg sbe erchgngvba fb gung lbh pna ng n tynapr xabj jung snpgvbaf guvax bs fvgrf/pbzzhavgvrf/rgp, gung jnl lbh'yy xabj jung gur crbcyr lbh pner nobhg guvax nf bccbfrq gb univat gb rinyhngr gur perqvovyvgl bs enaqbz crbcyr jvgu n zrtncubar.

Comment author: AnnaSalamon 27 November 2016 10:29:20PM *  35 points [-]

Re: 1, I vote for Vaniver as LW's BDFL, with authority to decree community norms (re: politics or anything else), decide on changes for the site; conduct fundraisers on behalf of the site; etc. (He already has the technical admin powers, and has been playing some of this role in a low key way; but I suspect he's been deferring a lot to other parties who spend little time on LW, and that an authorized sole dictatorship might be better.)

Anyone want to join me in this, or else make a counterproposal?

Comment author: ingres 28 November 2016 09:10:59PM *  7 points [-]

I'm concerned that we're only voting for Vaniver because he's well known, but I'll throw in a tentative vote for him.

Who are our other options?

Comment author: RobinHanson 27 November 2016 06:31:46PM 20 points [-]

I have serious doubts about the basic claim that "the rationalist community" is so smart and wise and on to good stuff compared to everyone else that it should focus on reading and talking to each other at the expense of reading others and participating in other conversations. There are obviously cultish in-group favoring biases pushing this way, and I'd want strong evidence before I attributed this push to anything else.

Comment author: ingres 28 November 2016 08:38:10PM 4 points [-]

Spot on, in my opinion, and one of the many points I was trying to get at with the 2016 LW Survey. For example, this community seems to have basically ignored Tetlock's latest research, relegating it to the status of a "good book" that SSC reviewed. I wish I'd included a 'never heard of it' button on the communities question, because I suspect the vast majority of LessWrongers have never heard of the Good Judgement Project.

I've long felt that Eliezer Yudkowsky's sequences could use somebody going over them with a highlighter and filling in the citations for all the books and papers he borrowed from.
