
2016 LessWrong Diaspora Survey Analysis: Part Three (Mental Health, Basilisk, Blogs and Media)

9 ingres 25 June 2016 03:40AM

Mental Health

We decided to move the Mental Health section up closer in the survey this year so that the data could inform accessibility decisions.

LessWrong Mental Health As Compared To Base Rates In The General Population
| Condition | Base Rate | LessWrong Rate | LessWrong Self-dx Rate | Combined LW Rate | Base/LW Rate Spread | Relative Risk |
|---|---|---|---|---|---|---|
| Depression | 17% | 25.37% | 27.04% | 52.41% | +8.37 | 1.492 |
| Obsessive Compulsive Disorder | 2.3% | 2.7% | 5.6% | 8.3% | +0.4 | 1.173 |
| Autism Spectrum Disorder | 1.47% | 8.2% | 12.9% | 21.1% | +6.73 | 5.578 |
| Attention Deficit Disorder | 5% | 13.6% | 10.4% | 24% | +8.6 | 2.719 |
| Bipolar Disorder | 3% | 2.2% | 2.8% | 5% | -0.8 | 0.733 |
| Anxiety Disorder(s) | 29% | 13.7% | 17.4% | 31.1% | -15.3 | 0.472 |
| Borderline Personality Disorder | 5.9% | 0.6% | 1.2% | 1.8% | -5.3 | 0.101 |
| Schizophrenia | 1.1% | 0.8% | 0.4% | 1.2% | -0.3 | 0.727 |
| Substance Use Disorder | 10.6% | 1.3% | 3.6% | 4.9% | -9.3 | 0.122 |

Base rates are taken from Wikipedia; US rates were favored over global rates where immediately available.
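For concreteness, the table's two derived columns are simple arithmetic on the base rate and the clinically diagnosed LessWrong rate. A minimal sketch of the calculation (not the actual analysis code; rates are given as percentages):

```python
# Sketch of the table's two derived columns (not the actual analysis code).
# Rates are given as percentages, e.g. 17 for 17%.
def rate_spread(base_rate, lw_rate):
    """Base/LW Rate Spread: percentage-point difference between the two rates."""
    return lw_rate - base_rate

def relative_risk(base_rate, lw_rate):
    """How much more (or less) likely a LessWronger is to have the condition."""
    return lw_rate / base_rate

print(rate_spread(17, 25.37))              # +8.37 for depression
print(round(relative_risk(17, 25.37), 3))  # 1.492 for depression
```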

Accessibility Suggestions

So of the conditions we asked about, LessWrongers are at significantly elevated risk for three: autism, ADHD, and depression.

LessWrong probably doesn't need to concern itself with being more accessible to those with autism as it likely already is. Depression is a complicated disorder with no clear interventions that can be easily implemented as site or community policy. It might be helpful to encourage looking more at positive trends in addition to negative ones, but the community already seems to do a fairly good job of this. (We could definitely use some more of it though.)

Attention Deficit Disorder - Public Service Announcement

That leaves ADHD, which we might be able to do something about, starting with this:

A lot of LessWrong stuff ends up falling into the same genre as productivity advice or 'self help'. If you have trouble getting yourself to work, and find yourself reading these things but completely unable to implement them, it's entirely possible that you have a mental health condition which impacts your executive function.

The best overview I've been able to find on ADD is this talk from Russell Barkley.

30 Essential Ideas For Parents

Ironically enough, this is a long talk, over four hours in total. Barkley is an entertaining speaker and the talk is absolutely fascinating. If you're even mildly interested in the subject I wholeheartedly recommend it. Many people who have ADHD just assume that they're lazy, or not trying hard enough, or just haven't found the 'magic bullet' yet. It never even occurs to them that they might have it, because they assume that adult ADHD looks like childhood ADHD, or that ADHD is a thing psychiatrists made up so they can give children powerful stimulants.

ADD is real; if you're in the demographic that takes this survey, there's a decent chance you have it.

Attention Deficit Disorder - Accessibility

So with that in mind, is there anything else we can do?

Yes, write better.

Scott Alexander has written a blog post with writing advice for non-fiction, and the interesting thing about it is just how much of the advice is what I would tell you to do if your audience has ADD.

  • Reward the reader quickly and often. If your prose isn't rewarding to read, it won't be read.

  • Make sure the overall article has good sectioning and indexing; people might only be looking for a particular thing, and they won't want to wade through everything else to get it. Sectioning also gives the impression of progress and reduces eye strain.

  • Use good data visualization to compress information and take away mental effort where possible. Take for example the condition table above: it saves space and provides additional context. Instead of a long vertical wall of text with sections for each condition, it removes:

    • The extraneous information of how many people said they did not have a condition.

    • The space that would be used by creating a section for each condition. In fact the specific improvement of the table is that it takes extra advantage of space in the horizontal plane as well as the vertical plane.

    And instead of just presenting the raw data, it also adds:

    • The normal rate of incidence for each condition, so that the reader understands the extent to which rates are abnormal or unexpected.

    • Easy comparison between the clinically diagnosed, self diagnosed, and combined rates of the condition in the LW demographic. This preserves the value of the original raw data presentation while also easing the mental arithmetic of how many people claim to have a condition.

    • Percentage spread between the clinically diagnosed and the base rate, which saves the effort of figuring out the difference between the two values.

    • Relative risk between the clinically diagnosed and the base rate, which saves the effort of figuring out how much more or less likely a LessWronger is to have a given condition.

    Add all that together and you've created a compelling presentation that significantly improves on the 'naive' raw data presentation.

  • Use visuals in general, they help draw and maintain interest.

None of these are solely for the benefit of people with ADD. ADD is an exaggerated profile of normal human behavior. Following this kind of advice makes your article more accessible to everybody, which should be more than enough incentive if you intend to have an audience.1

Roko's Basilisk

This year we finally added a Basilisk question! In fact, it kind of turned into a whole Basilisk section. A fairly common question about this year's survey is why the Basilisk section is so large. The basic reason is that asking only one or two questions about it would leave the results open to rampant speculation in one direction or another. By making the section comprehensive and covering every base, we've gotten about as complete a picture of the Basilisk phenomenon as we'd want.

Basilisk Knowledge
Do you know what Roko's Basilisk thought experiment is?

Yes: 1521 73.2%
No but I've heard of it: 158 7.6%
No: 398 19.2%

Basilisk Etiology
Where did you read Roko's argument for the Basilisk?

Roko's post on LessWrong: 323 20.2%
Reddit: 171 10.7%
XKCD: 61 3.8%
LessWrong Wiki: 234 14.6%
A news article: 71 4.4%
Word of mouth: 222 13.9%
RationalWiki: 314 19.6%
Other: 194 12.1%

Basilisk Correctness
Do you think Roko's argument for the Basilisk is correct?

Yes: 75 5.1%
Yes but I don't think its logical conclusions apply for other reasons: 339 23.1%
No: 1055 71.8%

Basilisks And Lizardmen

One of the biggest mistakes I made with this year's survey was not including "Do you believe Barack Obama is a hippopotamus?" as a control question in this section.2 Five percent is just outside of the infamous lizardman constant. This was the biggest survey surprise for me; I thought there was no way that 'yes' could go above a couple of percentage points. As far as I can tell this result is not caused by brigading, but I've by no means investigated the matter so thoroughly that I would rule it out.

Higher?

Of course, we also shouldn't forget to investigate the hypothesis that the number might be higher than 5%. After all, somebody who thinks the Basilisk is correct could skip the questions entirely so they don't face potential stigma. So how many people skipped the questions but filled out the rest of the survey?

Eight people refused to answer whether they'd heard of Roko's Basilisk but went on to answer the depression question immediately after the Basilisk section. This gives us a decent proxy for how many people skipped the section and took the rest of the survey. So if we're pessimistic the number is a little higher, but it pays to keep in mind that there are other reasons to want to skip this section. (It is also possible that people took the survey up until they got to the Basilisk section and then quit so they didn't have to answer it, but this seems unlikely.)

Of course this assumes people are being strictly truthful with their survey answers. It's also plausible that people who think the Basilisk is correct said they'd never heard of it and then went on with the rest of the survey. So the number could in theory be quite large. My hunch is that it's not. I personally know quite a few LessWrongers and I'm fairly sure none of them would tell me that the Basilisk is 'correct'. (In fact I'm fairly sure they'd all be offended at me even asking the question.) Since 5% is one in twenty I'd think I'd know at least one or two people who thought the Basilisk was correct by now.

Lower?

One partial explanation for the surprisingly high rate here is that ten percent of the people who said yes, by their own admission, didn't know what they were saying yes to: eight people said both that they've heard of the Basilisk but don't know what it is, and that it's correct. The lizardman constant also plausibly explains a significant portion of the yes responses, but that explanation relies on you already having a prior belief that the rate should be low.


Basilisk-Like Danger
Do you think Basilisk-like thought experiments are dangerous?

Yes, I think they're dangerous for decision theory reasons: 63 4.2%
Yes, I think they're dangerous for social reasons (e.g. a cult might use them): 194 12.8%
Yes, I think they're dangerous for decision theory and social reasons: 136 9%
Yes, I think they're socially dangerous because they make everybody involved look foolish: 253 16.7%
Yes, I think they're dangerous for other reasons: 54 3.6%
No: 809 53.4%

Most people don't think Basilisk-like thought experiments are dangerous at all. Of those who think they are, most think they're socially dangerous as opposed to a raw decision theory threat. The 4.2% number for a pure decision theory threat is interesting because it lines up with the 5% 'yes' rate on the previous Basilisk Correctness question.

P(Decision Theory Danger | Basilisk Belief) = 26.6%
P(Decision Theory And Social Danger | Basilisk Belief) = 21.3%

So of the people who say the Basilisk is correct, only half of them believe it is a decision theory based danger at all. (In theory this could be because they believe the Basilisk is a good thing and therefore not dangerous, but I refuse to lose that much faith in humanity.3)
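For clarity, here's a minimal sketch of how a conditional probability like this falls out of the response data (not the actual analysis code; the field names and toy rows below are hypothetical stand-ins):

```python
# Sketch of the conditional probability calculation above (not the actual
# analysis code). Field names and the toy rows are hypothetical stand-ins.
respondents = [
    {"correct": "Yes", "danger": "decision theory"},
    {"correct": "Yes", "danger": "decision theory and social"},
    {"correct": "Yes", "danger": "no"},
    {"correct": "No",  "danger": "social"},
]

believers = [r for r in respondents if r["correct"] == "Yes"]

def p_danger_given_belief(danger_answer):
    """P(danger answer | respondent said the Basilisk is correct)."""
    return sum(r["danger"] == danger_answer for r in believers) / len(believers)

# On the real survey data these come out to 26.6% and 21.3% respectively.
print(p_danger_given_belief("decision theory"))
print(p_danger_given_belief("decision theory and social"))
```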

Basilisk Anxiety
Have you ever felt any sort of anxiety about the Basilisk?

Yes: 142 8.8%
Yes but only because I worry about everything: 189 11.8%
No: 1275 79.4%

20.6% of respondents have felt some kind of Basilisk Anxiety. It should be noted that the exact wording of the question permits any anxiety, even for a second. And as we'll see in the next question, that nuance is very important.

Degree Of Basilisk Worry
What is the longest span of time you've spent worrying about the Basilisk?

I haven't: 714 47%
A few seconds: 237 15.6%
A minute: 298 19.6%
An hour: 176 11.6%
A day: 40 2.6%
Two days: 16 1.05%
Three days: 12 0.79%
A week: 12 0.79%
A month: 5 0.32%
One to three months: 2 0.13%
Three to six months: 0 0.0%
Six to nine months: 0 0.0%
Nine months to a year: 1 0.06%
Over a year: 1 0.06%
Years: 4 0.26%

These numbers provide some pretty sobering context for the previous ones. Of all the people who worried about the Basilisk, 93.8% didn't worry about it for more than an hour. The next 3.65% didn't worry about it for more than a day or two. The next 1.9% didn't worry about it for more than a month and the last .7% or so have worried about it for longer.

Current Basilisk Worry
Are you currently worrying about the Basilisk?

Yes: 29 1.8%
Yes but only because I worry about everything: 60 3.7%
No: 1522 94.5%

Also encouraging. We should expect a small number of people to be worried at this question just because the section is basically the words "Basilisk" and "worry" repeated over and over, so it's probably a bit scary to some people. But these numbers are much lower than the "have you ever worried" ones, and they back up the previous inference that Basilisk anxiety is mostly a transitory phenomenon.

One article on the Basilisk asked whether it was just a "referendum on autism". It's a good question, and now I have an answer for you, per the table below:

Mental Health Conditions Versus Basilisk Worry
| Condition | Worried | Worried But They Worry About Everything | Combined Worry |
|---|---|---|---|
| Baseline (in the respondent population) | 8.8% | 11.8% | 20.6% |
| ASD | 7.3% | 17.3% | 24.7% |
| OCD | 10.0% | 32.5% | 42.5% |
| Anxiety Disorder | 6.9% | 20.3% | 27.3% |
| Schizophrenia | 0.0% | 16.7% | 16.7% |

 

The short answer: autism raises your chances of Basilisk anxiety, but anxiety disorders, and OCD especially, raise them much more. Interestingly enough, schizophrenia seems to bring the chances down. This might just be an effect of small sample size, but my expectation was the opposite. (People who are really obsessed with Roko's Basilisk seem to present with schizophrenic symptoms, at any rate.)

Before we move on, there's one last elephant in the room to contend with. The philosophical theory underlying the Basilisk is the CEV conception of friendly AI primarily espoused by Eliezer Yudkowsky, which has led many critics to speculate on all kinds of relationships between Eliezer Yudkowsky and the Basilisk. That speculation naturally extends to Eliezer Yudkowsky's Machine Intelligence Research Institute, a project to develop 'Friendly Artificial Intelligence' which does not implement a naive goal function that eats everything else humans actually care about once it's given sufficient optimization power.

The general thrust of these accusations is that MIRI, intentionally or not, profits from belief in the Basilisk. I think MIRI gets picked on enough, so I'm not thrilled about adding another log to the hefty pile of criticism they deal with. However, this is a serious accusation, and plausible enough that examining it is in the public interest.

 

Percentage Of People Who Donate To MIRI Versus Basilisk Belief
| Belief | Percentage Donating |
|---|---|
| Believe It's Incorrect | 5.2% |
| Believe It's Structurally Correct | 5.6% |
| Believe It's Correct | 12.0% |

Basilisk belief does appear to make you twice as likely to donate to MIRI. It's important to note, in light of the earlier investigation, that thinking the argument is "structurally correct" makes you about as likely to donate as thinking it's incorrect, implying that both of these options mean about the same thing to respondents.

 

Sum Money Donated To MIRI Versus Basilisk Belief
| Belief | Mean ($) | Median ($) | Mode ($) | Stdev ($) | Total Donated ($) |
|---|---|---|---|---|---|
| Believe It's Incorrect | 1365.590 | 100.0 | 100.0 | 4825.293 | 75107.5 |
| Believe It's Structurally Correct | 2644.736 | 110.0 | 20.0 | 9147.299 | 50250.0 |
| Believe It's Correct | 740.555 | 300.0 | 300.0 | 1152.541 | 6665.0 |

Take these numbers with a grain of salt; it only takes one troll plausibly lying about their donations to ruin it for everybody else.

Interestingly enough, if you sum all three Total Donated figures, five percent of that total ($6,601, to be exact) is about what was donated by the Basilisk-believer group. So even though the modal and median donations of Basilisk believers are higher, they donate about as much as would be naively expected by assuming donations among groups are equal.4
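A quick check of that arithmetic, using the Total Donated figures from the table above:

```python
# Quick check of the five-percent claim, using the Total Donated column above.
totals = {"incorrect": 75107.5, "structurally correct": 50250.0, "correct": 6665.0}
pot = sum(totals.values())  # 132022.5
print(0.05 * pot)           # 6601.125, close to the 6665.0 the believers donated
```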

 

Percentage Of People Who Donate To MIRI Versus Basilisk Worry
| Anxiety | Percentage Donating |
|---|---|
| Never Worried | 4.3% |
| Worried But They Worry About Everything | 11.1% |
| Worried | 11.3% |

In contrast to the correctness question, merely having worried about the Basilisk at any point in time more than doubles your chances of donating to MIRI. My suspicion is that these people are not, as a general rule, donating because of the Basilisk per se. If you're the sort of person who is even capable of worrying about the Basilisk in principle, you're probably the kind of person who is likely to worry about AI risk in general and donate to MIRI on that basis. This hypothesis is probably unfalsifiable with the survey information I have, because Basilisk-risk is a subset of AI risk: anytime somebody indicates on the survey that they're worried about AI risk, it could be because they're worried about the Basilisk or because they're worried about more general AI risk.

 

Sum Money Donated To MIRI Versus Basilisk Worry
| Anxiety | Mean ($) | Median ($) | Mode ($) | Stdev ($) | Total Donated ($) |
|---|---|---|---|---|---|
| Never Worried | 1033.936 | 100.0 | 100.0 | 3493.373 | 56866.5 |
| Worried But They Worry About Everything | 227.047 | 75.0 | 300.0 | 438.861 | 4768.0 |
| Worried | 4539.25 | 90.0 | 10.0 | 11442.675 | 72628.0 |
| Combined Worry | | | | | 77396.0 |

Take these numbers with a grain of salt; it only takes one troll plausibly lying about their donations to ruin it for everybody else.

This particular analysis is probably the strongest evidence in the set for the hypothesis that MIRI profits (though not necessarily through any involvement on their part) from the Basilisk. People who worried from an unendorsed perspective donate less on average than everybody else. The modal donation among people who've worried about the Basilisk is ten dollars, which seems like a surefire way to get yourself tortured if we go with the hypothesis that these are people who believe the Basilisk is real and are concerned about it. So this implies that they don't believe it, which supports my earlier hypothesis that people who are capable of feeling anxiety about the Basilisk are the core demographic to donate to MIRI anyway.

Of course, donors don't need to believe in the Basilisk for MIRI to profit from it. If exposing people to the concept of the Basilisk makes them twice as likely to donate but they don't end up actually believing the argument, that would arguably be the ideal outcome for MIRI from an Evil Plot perspective. (After all, pursuing a strategy which involves Basilisk belief would actually incentivize torture under the acausal game theories MIRI bases its FAI on, which would be bad.)

But frankly, this is veering into very speculative territory. I don't think there's an evil plot, nor am I convinced that MIRI is profiting from Basilisk belief in a way that outweighs the resulting lost donations and damage to their cause.5 If anybody would like to assert otherwise, I invite them to 'put up or shut up' with hard evidence. The world has enough criticism based on idle speculation, and you're peeing in the pool.

Blogs and Media

Since this was the LessWrong diaspora survey, I felt it would be in order to reach out a bit and ask not just where the community is, but what it's reading. I went around to various people I knew and asked them about blogs for this section, though the picks were largely based on my mental 'map' of the blogs that are commonly read and linked in the community, with a handful of suggestions thrown in. The same method was used for stories.

Blogs Read

LessWrong
Regular Reader: 239 13.4%
Sometimes: 642 36.1%
Rarely: 537 30.2%
Almost Never: 272 15.3%
Never: 70 3.9%
Never Heard Of It: 14 0.7%

SlateStarCodex (Scott Alexander)
Regular Reader: 1137 63.7%
Sometimes: 264 14.7%
Rarely: 90 5%
Almost Never: 61 3.4%
Never: 51 2.8%
Never Heard Of It: 181 10.1%

[These two results together pretty much confirm the results I talked about in part two of the survey analysis. A supermajority of respondents are 'regular readers' of SlateStarCodex. By contrast, LessWrong itself doesn't even have a quarter of SlateStarCodex's readership.]

Overcoming Bias (Robin Hanson)
Regular Reader: 206 11.751%
Sometimes: 365 20.821%
Rarely: 391 22.305%
Almost Never: 385 21.962%
Never: 239 13.634%
Never Heard Of It: 167 9.527%

Minding Our Way (Nate Soares)
Regular Reader: 151 8.718%
Sometimes: 134 7.737%
Rarely: 139 8.025%
Almost Never: 175 10.104%
Never: 214 12.356%
Never Heard Of It: 919 53.06%

Agenty Duck (Brienne Yudkowsky)
Regular Reader: 55 3.181%
Sometimes: 132 7.634%
Rarely: 144 8.329%
Almost Never: 213 12.319%
Never: 254 14.691%
Never Heard Of It: 931 53.846%

Eliezer Yudkowsky's Facebook Page
Regular Reader: 325 18.561%
Sometimes: 316 18.047%
Rarely: 231 13.192%
Almost Never: 267 15.248%
Never: 361 20.617%
Never Heard Of It: 251 14.335%

Luke Muehlhauser (Eponymous)
Regular Reader: 59 3.426%
Sometimes: 106 6.156%
Rarely: 179 10.395%
Almost Never: 231 13.415%
Never: 312 18.118%
Never Heard Of It: 835 48.49%

Gwern.net (Gwern Branwen)
Regular Reader: 118 6.782%
Sometimes: 281 16.149%
Rarely: 292 16.782%
Almost Never: 224 12.874%
Never: 230 13.218%
Never Heard Of It: 595 34.195%

Siderea (Sibylla Bostoniensis)
Regular Reader: 29 1.682%
Sometimes: 49 2.842%
Rarely: 59 3.422%
Almost Never: 104 6.032%
Never: 183 10.615%
Never Heard Of It: 1300 75.406%

Ribbon Farm (Venkatesh Rao)
Regular Reader: 64 3.734%
Sometimes: 123 7.176%
Rarely: 111 6.476%
Almost Never: 150 8.751%
Never: 150 8.751%
Never Heard Of It: 1116 65.111%

Bayesed And Confused (Michael Rupert)
Regular Reader: 2 0.117%
Sometimes: 10 0.587%
Rarely: 24 1.408%
Almost Never: 68 3.988%
Never: 167 9.795%
Never Heard Of It: 1434 84.106%

[This was the 'troll' answer to catch out people who claim to read everything.]

The Unit Of Caring (Anonymous)
Regular Reader: 281 16.452%
Sometimes: 132 7.728%
Rarely: 126 7.377%
Almost Never: 178 10.422%
Never: 216 12.646%
Never Heard Of It: 775 45.375%

GiveWell Blog (Multiple Authors)
Regular Reader: 75 4.438%
Sometimes: 197 11.657%
Rarely: 243 14.379%
Almost Never: 280 16.568%
Never: 412 24.379%
Never Heard Of It: 482 28.521%

Thing Of Things (Ozy Frantz)
Regular Reader: 363 21.166%
Sometimes: 201 11.72%
Rarely: 143 8.338%
Almost Never: 171 9.971%
Never: 176 10.262%
Never Heard Of It: 661 38.542%

The Last Psychiatrist (Anonymous)
Regular Reader: 103 6.023%
Sometimes: 94 5.497%
Rarely: 164 9.591%
Almost Never: 221 12.924%
Never: 302 17.661%
Never Heard Of It: 826 48.304%

Hotel Concierge (Anonymous)
Regular Reader: 29 1.711%
Sometimes: 35 2.065%
Rarely: 49 2.891%
Almost Never: 88 5.192%
Never: 179 10.56%
Never Heard Of It: 1315 77.581%

The View From Hell (Sister Y)
Regular Reader: 34 1.998%
Sometimes: 39 2.291%
Rarely: 75 4.407%
Almost Never: 137 8.049%
Never: 250 14.689%
Never Heard Of It: 1167 68.566%

Xenosystems (Nick Land)
Regular Reader: 51 3.012%
Sometimes: 32 1.89%
Rarely: 64 3.78%
Almost Never: 175 10.337%
Never: 364 21.5%
Never Heard Of It: 1007 59.48%

I tried my best to have representation from multiple sections of the diaspora; if you look at the different blogs you can probably guess which blogs represent which section.

Stories Read

Harry Potter And The Methods Of Rationality (Eliezer Yudkowsky)
Whole Thing: 1103 61.931%
Partially And Intend To Finish: 145 8.141%
Partially And Abandoned: 231 12.97%
Never: 221 12.409%
Never Heard Of It: 81 4.548%

Significant Digits (Alexander D)
Whole Thing: 123 7.114%
Partially And Intend To Finish: 105 6.073%
Partially And Abandoned: 91 5.263%
Never: 333 19.26%
Never Heard Of It: 1077 62.29%

Three Worlds Collide (Eliezer Yudkowsky)
Whole Thing: 889 51.239%
Partially And Intend To Finish: 35 2.017%
Partially And Abandoned: 36 2.075%
Never: 286 16.484%
Never Heard Of It: 489 28.184%

The Fable of the Dragon-Tyrant (Nick Bostrom)
Whole Thing: 728 41.935%
Partially And Intend To Finish: 31 1.786%
Partially And Abandoned: 15 0.864%
Never: 205 11.809%
Never Heard Of It: 757 43.606%

The World of Null-A (A. E. van Vogt)
Whole Thing: 92 5.34%
Partially And Intend To Finish: 18 1.045%
Partially And Abandoned: 25 1.451%
Never: 429 24.898%
Never Heard Of It: 1159 67.266%

[Wow, I never would have expected this many people to have read this. I mostly included it on a lark because of its historical significance.]

Synthesis (Sharon Mitchell)
Whole Thing: 6 0.353%
Partially And Intend To Finish: 2 0.118%
Partially And Abandoned: 8 0.47%
Never: 217 12.75%
Never Heard Of It: 1469 86.31%

[This was the 'troll' option to catch people who just say they've read everything.]

Worm (Wildbow)
Whole Thing: 501 28.843%
Partially And Intend To Finish: 168 9.672%
Partially And Abandoned: 184 10.593%
Never: 430 24.755%
Never Heard Of It: 454 26.137%

Pact (Wildbow)
Whole Thing: 138 7.991%
Partially And Intend To Finish: 59 3.416%
Partially And Abandoned: 148 8.57%
Never: 501 29.01%
Never Heard Of It: 881 51.013%

Twig (Wildbow)
Whole Thing: 55 3.192%
Partially And Intend To Finish: 132 7.661%
Partially And Abandoned: 65 3.772%
Never: 560 32.501%
Never Heard Of It: 911 52.873%

Ra (Sam Hughes)
Whole Thing: 269 15.558%
Partially And Intend To Finish: 80 4.627%
Partially And Abandoned: 95 5.495%
Never: 314 18.161%
Never Heard Of It: 971 56.16%

My Little Pony: Friendship Is Optimal (Iceman)
Whole Thing: 424 24.495%
Partially And Intend To Finish: 16 0.924%
Partially And Abandoned: 65 3.755%
Never: 559 32.293%
Never Heard Of It: 667 38.533%

Friendship Is Optimal: Caelum Est Conterrens (Chatoyance)
Whole Thing: 217 12.705%
Partially And Intend To Finish: 16 0.937%
Partially And Abandoned: 24 1.405%
Never: 411 24.063%
Never Heard Of It: 1040 60.89%

Ender's Game (Orson Scott Card)
Whole Thing: 1177 67.219%
Partially And Intend To Finish: 22 1.256%
Partially And Abandoned: 43 2.456%
Never: 395 22.559%
Never Heard Of It: 114 6.511%

[This is the most read story according to survey respondents, beating HPMOR by 5%.]

The Diamond Age (Neal Stephenson)
Whole Thing: 440 25.346%
Partially And Intend To Finish: 37 2.131%
Partially And Abandoned: 55 3.168%
Never: 577 33.237%
Never Heard Of It: 627 36.118%

Consider Phlebas (Iain Banks)
Whole Thing: 302 17.507%
Partially And Intend To Finish: 52 3.014%
Partially And Abandoned: 47 2.725%
Never: 439 25.449%
Never Heard Of It: 885 51.304%

The Metamorphosis Of Prime Intellect (Roger Williams)
Whole Thing: 226 13.232%
Partially And Intend To Finish: 10 0.585%
Partially And Abandoned: 24 1.405%
Never: 322 18.852%
Never Heard Of It: 1126 65.925%

Accelerando (Charles Stross)
Whole Thing: 293 17.045%
Partially And Intend To Finish: 46 2.676%
Partially And Abandoned: 66 3.839%
Never: 425 24.724%
Never Heard Of It: 889 51.716%

A Fire Upon The Deep (Vernor Vinge)
Whole Thing: 343 19.769%
Partially And Intend To Finish: 31 1.787%
Partially And Abandoned: 41 2.363%
Never: 508 29.28%
Never Heard Of It: 812 46.801%

I also did a k-means cluster analysis of the data to try and determine demographics, and the ultimate conclusion I drew from it is that I need to do more analysis. I would do that, except the initial analysis was a whole bunch of work, and jumping further down the rabbit hole in the hope of reaching an oasis probably isn't in the best interests of myself or my readers.
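For anyone who wants to pick up that thread, here's a minimal sketch of this kind of k-means clustering using scikit-learn; the stand-in data and feature encoding below are hypothetical, not the survey's actual fields:

```python
# Minimal k-means sketch of the kind of analysis described above (not the
# actual analysis code; the stand-in data and features are hypothetical).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# One row per respondent: e.g. 0/1 indicators for "reads blog X" / "read story Y".
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1500, 20)).astype(float)  # stand-in data

X_scaled = StandardScaler().fit_transform(X)
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X_scaled)

print(np.bincount(km.labels_))  # cluster sizes, one candidate "demographic" each
```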

Footnotes


  1. This is a general trend I notice with accessibility. Not always, but very often measures taken to help a specific group end up having positive effects for others as well. Many of the accessibility suggestions of the W3C are things you wish every website did.

  2. I hadn't read this particular SSC post at the time I compiled the survey, but I was already familiar with the concept of a lizardman constant and should have accounted for it.

  3. I've been informed by a member of the freenode #lesswrong IRC channel that this is in fact Roko's opinion, because you can 'timelessly trade with the future superintelligence for rewards, not just punishment' according to a conversation they had with him last summer. Remember kids: Don't do drugs, including Max Tegmark.

  4. You might think that this conflicts with the hypothesis that the true rate of Basilisk belief is lower than 5%. It does a bit, but you also need to remember that these people are in the LessWrong demographic, which means regardless of what the Basilisk belief question means we should naively expect them to donate five percent of the MIRI donation pot.

  5. That is to say, it does seem plausible that MIRI 'profits' from Basilisk belief based on this data, but I'm fairly sure any profit is outweighed by the significant opportunity cost associated with it. I should also take this moment to remind the reader that the original Basilisk argument was supposed to prove that CEV is a flawed concept from the perspective of not having deleterious outcomes for people, so MIRI using it as a way to justify donating to them would be weird.

Powering Through vs Working Around

1 lifelonglearner 24 June 2016 07:42PM

Lately, I’ve been musing on the nature of self-improvement in general.  When I notice that something I’ve been doing--be it mental or physical--is suboptimal, the next immediate chain of thought is “Okay, how do I improve my life now, knowing this phenomenon exists?”  In doing so, I’ve recently realized that this framing misses a crucial distinction, one that can lead to more confusion later down the road.

 

This important divide is the question of optimizing around versus powering through.  So before figuring out what actions I should be taking, it seems important to ask myself, “What am I trying to optimize for?” If the negative biases and habits I manage to identify are rocks, then the question is whether the best plan of action is to plan around these rocks or to crush them entirely.  This is far from a clear-cut division, however. It appears that breaking bad habits--powering through--is going to be more costly in terms of resources spent.  Additionally, a successful plan for overcoming these errors will probably mix both approaches, especially if ridding oneself of the tendency entirely is the goal.

 

For an example of how these two are often blurred, take the planning fallacy:

 

One strategy may be to overestimate times when planning, pushing through the “it feels wrong” feeling to develop a better sense of how long things take.  To augment this, there are also planning techniques, like Murphyjitsu, designed to get you considering “hidden factors”.  It’s far from clear how much actions that compensate for a bias by countering its effects actually reduce the bias itself, especially if the helpful action also becomes second nature.


But overall, I think this is an important distinction to keep in mind, because I’ll often be stuck asking myself “Should I work around X, or should I actively try to defeat X?”  

Does anyone have experience trying to go specifically in one way or the other to counter their biases?

New LW Meetup: Bay City, MI

1 FrankAdamek 24 June 2016 03:58PM

New meetups (or meetups with a hiatus of more than a year) are happening in:

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.


Diaspora roundup thread, 23rd June 2016

5 philh 23 June 2016 02:03PM

Guidelines: Top-level comments here should be links to things written by members of the rationalist community, preferably that have some particular interest to this community. Self-promotion is totally fine. Including a brief summary or excerpt is great, but not required. Generally stick to one link per top-level comment, so they can be voted on individually. Recent links are preferred.

Rule: Do not link to anyone who does not want to be linked to. In particular, Scott Alexander has asked people to get his permission before linking to specific posts on his tumblr or in other out-of-the-way places.

Are smart contracts AI-complete?

8 Stuart_Armstrong 22 June 2016 02:08PM

Many people are probably aware of the hack of the DAO, which used a bug in their smart contract system to steal millions of dollars worth of the cryptocurrency Ethereum.

There are various arguments as to whether this theft was technically allowed or not, what should be done about it, and so on. Many people are arguing that the code is the contract, and that therefore no-one should be allowed to interfere with it - the DAO just made a coding mistake, and are now being (deservedly?) punished for it.

That got me wondering whether it's ever possible to make a smart contract without a full AI of some sort. For instance, if the contract is triggered by the delivery of physical goods - how can you define what the goods are, what constitutes delivery, what constitutes possession of them, and so on? You could have a human confirm delivery - but that's precisely the kind of judgement call you want to avoid. You could have an automated delivery confirmation system - but what happens if someone hacks or triggers that? You could connect it automatically with scanning headlines of media reports, but again, this is relying on aggregated human judgement, which could be hacked or influenced.

Digital goods seem more secure, as you can automate confirmation of delivery or services rendered, and so on. But, again, this leaves the confirmation process open to hacking - which would be illegal, if you're going to profit from the hack. Hmm...

This seems the most promising avenue for smart contracts that doesn't involve full AI: clear out the bugs in the code, then ground the confirmation procedure in such a way that it can only be hacked in a way that's already illegal. Sort of use the standard legal system as a backstop, fixing the basic assumptions, and then setting up the smart contracts on top of them (which is not the same as using the standard legal system within the contract).

Open thread, June 20 - June 26, 2016

4 Elo 21 June 2016 02:45AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Two kinds of Expectations, *one* of which is helpful for rational thinking

2 malcolmocean 20 June 2016 04:04PM

Expectation is often used to refer to two totally distinct things: entitlement and anticipation. My basic opinion is that entitlement is a rather counterproductive mental stance to have, while anticipations are really helpful for improving your model of the world.

Here are some quick examples to whet your appetite…

1. Consider a parent who says to their teenager: “I expect you to be home by midnight.” The parent may or may not anticipate the teen being home on time (even after this remark). Instead, they’re staking out a right to be annoyed if the teen isn’t back on time.

Contrast this with someone telling the person they’re meeting for lunch “I expect I’ll be there by 12:10” as a way to let them know that they’re running a little late, so that the recipient of the message knows not to worry that maybe they’re not in the correct meeting spot, or that the other person has forgotten.

2. A slightly more involved example: I have a particular kind of chocolate bar that I buy every week at the grocery store. Or at least I used to, until a few weeks ago when they stopped stocking it. They still stock the Dark version, but not the Extra Dark version I’ve been buying for 3 years. So the last few weeks I’ve been disappointed when I go to look. (Eventually I’ll conclude that it’s gone forever, but for now I remain hopeful.)

There’s a temptation to feel indignant at the absence of this chocolate bar. I had an expectation that it would be there, and it wasn’t! How dare they not stock it? I’m a loyal customer, who shops there every week, and who even tells others about their points card program! I deserve to have my favorite chocolate bar in stock!

…says this voice. This is the voice of entitlement.

The entitlement also wants to not just politely ask a shelf stocker if they have any out back, but to do things like walk up to the customer service desk and demand that they give me a discount on the Dark ones because they’ve been out of the Extra Dark ones for three weeks now. To make a fuss.

Entitlement is the feeling that you have a right to something. That you deserve it. That it’s owed to you.

(Relevant aside: the word “ought” used to be a synonym for “owed”, i.e. the past tense of “to owe”.)

A brief history of entitlement

That’s not what the term “entitlement” used to mean though. It used to refer not to the feeling but simply to the fact: that you were owed something. Everyone deserved different things, according to their titles: kings and queens an enormous amount, lords and landowners a lesser though still large amount, and so on down the line. In some cases, people at the bottom of the hierarchy may have in fact been considered deserving of scarcity and suffering.

What changed?

Western culture shifted from exalting rule by one (monarchy) or few (oligarchy) or the rich (plutocracy) to being broadly more democratic, meritocratic, and then ultimately relatively egalitarian, in terms of ideals. What this means is that in modern times, it may be the case that being rich or white does in fact grant someone certain privileges, in the sense that they may in fact be less likely to get arrested, or more likely to get promoted…

…but broadly speaking, mainstream culture will no longer agree that they deserve these privileges. They are no longer entitled to them.

More broadly, nobody is really considered to be entitled to much of anything anymore—oh, except for a bunch of very basic, universal rights. The U.S. Bill of Rights lays out the rights the state grants Americans. The U.N. Declaration of Human Rights lays out the rights that U.N. countries grant everyone. In theory, anyway.

And since we no longer think that people deserve special privileges, anyone who acts like they do is called “entitled”. But now we’re talking about the feeling of entitlement, not actually having the right to some benefit.

Also, note that this isn’t just about class anymore: given the meritocratic context and a few other factors, people sometimes find themselves feeling like they deserve something because they worked hard for it. This isn’t a totally unreasonable way to feel, but the world doesn’t automagically reward people who work hard.

This principle is at play when older generations criticize millennials as being entitled, and then the millennials retort “well you said that if we just got a degree, then we’d have decent careers.” What the millennials are saying is that they had an expectation that they’d have prosperity, if they did a thing.

But are they actually feeling entitled to that thing? Are they relating to it in an entitled way? It’s hard to say, and probably depends on the individual. Let’s take an easier example.

Meet James Altucher

In his article How To Break All The Rules And Get Everything You Want, Altucher describes a multipart story in which he breaks some rules to get what he wants.

We arrived at the “Boy Meets Girl” fashion show and the woman with the clipboard said, “You are not on the list.”

WHAT!?

I had been telling my daughter Mollie all week we would go to this show.

Mollie was very excited.

“Don’t worry,” Nathan had told me earlier in the day, “you will be on the list.” I am extremely grateful he got us invited to the show.

Two more times in the article, James has that “WHAT!?” reaction.

This reaction seems to me to be practically the epitome of an entitlement response: outrage. Particularly when he’s like: WHAT!? You let us in even though we weren’t on the list, but we’re at the back!? Note that the feeling of entitlement is usually not so obvious, even internally.

But note also that it’s possible to act entitled, even if you don’t feel entitled. I posit that we might call this something like “entitled to ask” or “entitled to try”.

To illustrate this, let’s look at a response to James’ article, When “Life Hacking” Is Really White Privilege, in which Jen Dziura writes:

I have often had encounters with men who take something that’s not theirs, and when they encounter no outright resistance — there’s no loud talking, no playground-style tussle — they assume everything is fine.

It is not fine.

Sometimes, you take the best desk for yourself in the new office. Sometimes, you take credit for someone else’s work or ideas. Sometimes, you’re on a team, and someone from the client company assumes that you — the tallest, whitest member — are in charge, and you do not correct them. Sometimes, it’s just that someone baked cookies to congratulate their team on a job well-done, and you’re not on that team but you wanted a cookie, and no one seemed to mind.

I have been the cookie guy. Probably with literal cookies, although probably a different situation—not that I would know, since I was just paying attention to the cookies.

And if someone had refused me the cookies, I wouldn’t have been like “WHAT!?”. I would have said something polite and moved on. But if someone had suggested I was rude for asking, I might have been a bit indignant: “I was just asking…”

But in order to be “just asking”, I also had to be assuming that the person would feel comfortable saying no if my request didn’t make sense. Assuming that giving me a “no” isn’t a costly action. Which is often not a safe assumption, for a myriad of reasons that are outside the scope of this post. But the effect is that even without having a subjective feeling of entitlement to anything in particular, I can be relating to a situation in an entitled way.

But I’m a Nice Guy!

There’s a concept that’s been around for a while, known as the Nice Guy phenomenon. The basic notion is of a person (canonically male, though not always) becoming frustrated when their attempts to transform a platonic friendship into a romantic and/or sexual relationship fall through, leading to rejection. Feminist circles have sometimes criticized these men as objectifying women, but as Dan Fincke points out, in many cases the men are trying to relate to them deeply.

Still, Dan writes:

They want to earn love with their moral virtues, with their genuine friendship, and with their woman-honoring priorities that put knowing women as people over trying to just bed them.

Uh oh. Trying to earn love is a recipe for the meritocratic flavour of entitlement. Dan again, a little further down:

So at this point we come to the actual entitlement issue. It’s not that they feel entitled to sex—it’s much deeper and less superficial than that and these men deserve the respect of having that acknowledged. What they really feel entitled to is love.

At any rate, there usually is a sense of entitlement here, and it makes for unpleasant interactions when the guy finally shares his feelings for his friend. He has his hopes all up and expects her to reciprocate. (Here we probably have both kinds of expectation going on—entitlement and anticipation.)

Miri at Brute Reason clarifies that the problem isn’t feeling sad when you’re rejected. That’s natural and can make lots of sense. Same with:

  • Wishing the person would change their mind
  • Thinking that you would’ve made a good partner for this person
  • Thinking that you would’ve made a better partner for this person than whoever they’re interested in
  • Feeling embarrassed that you were rejected
  • Feeling like you don’t want to see them or talk to them anymore

Miri distinguishes these from the feeling “I deserve sex/romance from this person because I was their friend,” and goes on to name some actions which follow from this feeling of entitlement. These include:

  • Pressuring the person to change their mind (which isn’t the same as saying “Well, let me know if you ever change your mind” and then stepping back)
  • Guilt-tripping them for rejecting you (which isn’t the same as being honest about your feelings about the rejection)
  • Becoming cruel to the person to get back at them (i.e. “Whatever, I never liked you anyway, you [gendered slur]”)

I think that what Miri has highlighted here is a really solid application of the two channels model: the idea that you can have multiple interpretations of something at the same time, that can be alike in valence (in this case, both negative/hurting) but different in structure and implication—and potentially leading to different actions.

The difference in action can be stark—”Whatever, I never liked you anyway” vs “I still think you’re cool, even if I feel pretty burned.”—or quite subtle… what, you might ask, is the difference between “guilt-tripping someone for rejecting you”, and “being honest about your feelings about the rejection”?

Without the two channels model, we might say that the former is when you’re entitled, and the latter is when you’re not. But the two channels model suggests that it’s more like, guilt-tripping is what happens when your entitlements own you, instead of you owning them.

So you feel entitled? Okay, accept that. Not in the sense of endorsing it, but in the sense of accepting reality as it is. The reality is that you feel entitled. One way to do this while staying outside of the frame is to say something like “so it seems that a bunch of what I’m feeling right now is entitlement”. Either to yourself, or if it makes sense, to share that with the person you’re talking with.

If the guy in this situation talks honestly about his feelings of rejection and loneliness, that could be experienced as guilt-tripping or as making the person take care of him:

I feel really rejected now. It’s so frustrating, like, I’m so unlovable. Forever alone, right here.

But maybe if he’s able to get outside of just being the feelings, and talk about the overarching structure of what’s going on:

“It seems I’m feeling both a sense of rejection, but also like I’ve been setting myself up to feel entitled to your love and affection… and I guess that doesn’t make sense. I’m feeling frustrated and lonely, and at the same time… wanting to not relate to you from there.”

If I try, I can imagine that that phrasing might sound over-the-top to some people, but it’s actually how me and many of my friends talk… and it allows us to navigate tense situations while remaining on the “same side”. We stay on the same side by putting the feelings in the center where they can be talked about, and being clear that the relating doesn’t need to be run by those feelings. I go into more detail about the value of this kind of language here.

I realize that it might not be possible to talk at this level in a given relationship. First of all, it requires the capacity to think thoughts like that when you’re in an emotional state (hint: practice when you’re calm!) Even more challengingly, it requires a certain kind of trust and shared assumptions in the relationship, which may not be available.

With those shared assumptions, much less verbose expressions can still have that same page feeling. Without them, even the most clear articulation can nonetheless be experienced as an attempt at manipulation.

Without a good segue, we now turn to the final section: expectations, entitlements, anticipations, and desire.

Anticipations and Desire

When I was maybe 15, a friend and I had a principle we used for navigating relationships with our romantic interests. We would go into a situation with “no intentions and no expectations”. One framing of this is that it was to protect against disappointment, but I think it could also be understood as a defense against the whole entitlement debacle: if I had an “expectation” that me and my crush were going to kiss, but she didn’t want to, well… then what? I wouldn’t kiss her without her consent, but… was it okay to even expect that, if I didn’t know what she wanted?

And so we come back to the breakdown I introduced at the start: expectations as including both anticipations and entitlements. I seriously salute my 15-year-old self for managing to avoid the entitlement-related issues (well, at least in the situations when I remembered to use this principle).

The problem was, in turning off expectations, I had shut off not only entitlements but anticipations as well. And anticipations are important!

First of all, denotationally: from an epistemic perspective, you want to be able to predict what’s going to happen. Not just so that you could remember to bring condoms, but also to have a sense of being prepared psychologically for what sort of situation you might be navigating. Projecting what will happen in the future is important.

Then there’s the second, more connotational part of the term “anticipation”, which is the emotional quality: the pleasure of considering a longed-for event. The book Rekindling Desire contains quotations like:

Anticipation is the central ingredient in sexual desire.
[…] sex has a major cognitive component — the most important element for desire is positive anticipation.

What this means is that if you try to avoid having anticipations, you can end up with a reduced sense of desire. Hormones and curiosity being what they were, this wasn’t an issue for my teenage self on a physical level, but even now I notice a subtle effect that I think has the same roots…

I’ve sometimes found it hard to tap into my sense of what it is that I want in relationships or in physically intimate contexts. I know what feels good in the moment—pleasure gradients aren’t hard—but it’s been challenging to cultivate a sense of taste for the kinds of intimacy I want, and I think that a large part of that is the resistance I have for letting myself cultivate desire through anticipation.

An article published just a few days ago (but after I’d drafted this whole post) touches on how this may be a common phenomenon:

“I want more men to get to know their own bodies and desires. […]

“Feminist men often fall into the trap of thinking that the opposite of male sexual entitlement–the opposite of men using other people’s bodies to get themselves off without any concern for that person’s consent or desire–is to focus entirely on their partner’s pleasure and deny any preferences of their own. No. The opposite of male sexual entitlement is two (or more) people working together–playing together, rather–to create the experiences they want.”

So one conclusion I’m making as part of breaking down expectations into entitlements and anticipations is that I can start doing more anticipating of things, as long as I don’t let myself get trapped in having entitlements as well. As long as I don’t hinge my sense of self-worth on having my expectations fulfilled and on never experiencing rejection. As long as I can remember that having no preferences unsatisfied by way of having no preferences isn’t actually satisfying.

“The gap between vision and current reality is also a source of energy. If there were no gap, there would be no need for any action to move towards the vision. We call this gap creative tension.”
— Peter Senge, The Fifth Discipline

The Two Kinds of Expectations + Rationality

I’ve spent a lot of time talking about how this affects interpersonal dynamics, but I want to briefly note that this distinction matters a lot for thinking quality as well:

Having entitlement-based relationships to people or systems is kind of like writing the bottom line before you know what the argument will be. It’s assuming you know what makes sense or know what will work, even though you don’t have all of the information, and then precommitting to be reluctant to change your mind.

Having anticipations, on the contrary, is fundamental to making your beliefs pay rent: in order for your beliefs to be entangled with the real world, they necessarily must suggest which events to anticipate—and importantly, which events to not anticipate.

There’s a question, too, of how expectations show up when trying to coordinate a team (or a vague network of people with a shared goal). I think a sports analogy is actually valuable here: if we’re on a soccer team, it’s critical that I can expect that if I pass you the ball in a certain way, you’ll be able to kick it directly at the goal. I need to know this so that I know when to do it, because it’s an effective technique when performed well. But if that expectation is about entitlement rather than anticipation, then it will cause me to be less focused on whether my pass made sense in this situation and more focused on whether I can blame you for missing the shot.

My money’s on the team with anticipation, not the one with entitlement.

This article crossposted from malcolmocean.com.

Skills training for dating anxiety

1 Clarity 19 June 2016 07:30PM

A half-baked literature review: Skills training for dating anxiety


In order to infer whether sociosexual skills training is a useful adjunct to standard treatment of anxiety, the first page of Google Scholar was systematically reviewed for unique interventional studies that include any measure of anxiety as an outcome; studies commenting on methodological issues or otherwise theorising with implications for the interpretation of the empirical evidence were also collected. The search terms used were: (1) social skills training for anxiety, (2) heterosexual social skills, (3) dating anxiety, (4) behavioural replication training, and (5) sensitivity training. Ten studies were found, each very dated. The search space was expanded from (1) to searches (2) through (5) due to the keywords found in potentially relevant studies.


Studies that did not contextualise anxiety in terms of sexual motivations (e.g. dating) were excluded (namely: Social skills training augments the effectiveness of cognitive behavioral group therapy for social anxiety disorder: www.sciencedirect.com/science/article/pii/S0005789405800619).


The studies found were (struck-out entries were excluded):


 

  • Social skills training and systematic desensitization in reducing dating anxiety: www.sciencedirect.com/science/article/pii/0005796775900546
  • Treatment strategies for dating anxiety in college men based on real-life practice.: psycnet.apa.org/psycinfo/1979-31475-001
  • Evaluation of three dating-specific treatment approaches for heterosexual dating anxiety.: psycnet.apa.org/journals/ccp/43/2/259/
  • A comparison between behavioral replication training and sensitivity training approaches to heterosexual dating anxiety.: psycnet.apa.org/journals/cou/23/3/190/
  • Social skills training augments the effectiveness of cognitive behavioral group therapy for social anxiety disorder : www.sciencedirect.com/science/article/pii/S0005789405800619
  • Skills training as an approach to the treatment of heterosexual-social anxiety: A review.: psycnet.apa.org/journals/bul/84/1/140/
  • Self-ratings and judges' ratings of heterosexual social anxiety and skill: A generalizability study.: psycnet.apa.org/journals/ccp/47/1/164/
  • Heterosexual social skills in a population of rapists and child molesters.: psycnet.apa.org/journals/ccp/53/1/55/
  • The importance of behavioral and cognitive factors in heterosexual-social anxiety1: onlinelibrary.wiley.com/doi/10.1111/j.1467-6494.1980.tb00834.x/abstract

 


The search was halted prematurely due to the discovery of a systematic review (see: Skills training as an approach to the treatment of heterosexual-social anxiety: A review.: psycnet.apa.org/journals/bul/84/1/140/). However, other studies emerged after the review anyway. In any case, the review’s conclusions are likely to hold true, and they do suggest that there is promise to sociosexual skills training, but methodological issues will hold back good empirical research. Therefore, it is not expected to be productive to continue this review.


It is hypothesised that the evidence is so dated due to changes in terminology. The literature approximates exposure treatments for social phobia or social anxiety. However, searches of the first page of Google Scholar (exposure therapy and social anxiety; exposure therapy and social phobia) yield no results (except where pharmacotherapies are an adjunct to the therapy), which are inappropriate for our purposes.

 

Tl;dr. See: Skills training as an approach to the treatment of heterosexual-social anxiety: A review.: psycnet.apa.org/journals/bul/84/1/140/

 

Research translation idea

 

I have an idea for teaching certain vulnerable young people the skills needed to socialise without intoxication. I was wondering if you have any feedback for my proposal so that I can revise it. Many students report they drink or get high for the disinhibiting effects that help them socialise with the other sex. It is hypothesised that this is because of latent anxieties and improper self-medication. Due to the unresponsiveness of the target population at universities to demand reduction programs and health promotion, the inflexibility of the university’s institutions in delivering supply reduction campaigns, and the relative resource intensity of harm minimisation programs, alternative, innovative interventions are sought. One innovative strategy is to treat the underlying anxiety that motivates substance use in young people. The purpose of this social skills training program is to train groups of young people to socialise romantically and sexually with the opposite sex, replacing substance-assisted romantic and sexual initiatory behaviour. Initial steps will be surveying the evidence base, followed by the design, implementation and evaluation of a pilot program. This will be disseminated for critique by the broader scientific and clinical community before scaling if and as appropriate. The success of the program will be evaluated by structured interview eliciting psychological distress.


Background reading

 

Gender differences in social anxiety disorder: results from the national epidemiologic sample on alcohol and related conditions. - www.ncbi.nlm.nih.gov/pubmed/21903358

 

Examining Sex and Gender Differences in Anxiety Disorders - www.intechopen.com/books/a-fresh-look-at-anxiety-disorders/examining-sex-and-gender-differences-in-anxiety-disorders

 

not academic but interesting: https://www.youtube.com/watch?v=YSZky8dk7OE

General-Purpose Questions Thread

3 Sable 19 June 2016 07:29AM

Similar to the Crazy Ideas Thread and Diaspora Roundup Thread, I thought I'd try making a General-Purpose Questions Thread.

 

The purpose is to provide a forum for asking the community questions (appealing to the wisdom of this particular crowd) about things that don't really merit their own thread.

Writing Collaboratively

6 richard_reitz 18 June 2016 07:47PM

This is a summary of the customs for collaborative writing the team on the fanfiction In Fire Forged came to, after a fair amount of time and effort figuring things out. The purpose of this piece is to share our results, thereby saving anyone who wants to write collaboratively the cost of experimentation. Obviously, different writing projects will accomplish different things with different people, and will therefore be best served by different practices. Take this as a first approximation, to be revised by experience.

Google Docs

We tried a bunch of platforms for collaboration, and found Google Docs to best fit our needs.

  1. Create a Google Doc. For multi-installment affairs, consider creating a folder and making one doc per installment.
  2. Enable editing. Collaborators are not very helpful if they can't provide feedback.

    Google Docs allows authors to restrict the changes other people can make to "suggestions" and "comments" by switching to "suggesting" mode.



    In general, the author restricts collaborator permissions to comments and suggestions. How to control these permissions should be described in the "enable editing" link above.
  3. Distribute link to collaborators.

Once the collaborators have the link, they read through it, making the comments and suggestions they think of. Google Docs does a good job facilitating discussion of this feedback; utilize this!

Micro and Macro

We found it useful to distinguish between what we were saying and how we were saying it. We termed the former "macro" and the latter "micro". This allows authors to say things like "I'm mostly looking for micro suggestions, although I'd be interested in any glaring macro errors (anything untrue or major omissions)." This succinctly communicates that collaborators should mostly restrict themselves to suggesting changes to how the author is communicating, which usually consists of small edits concerning things like technical issues (typos, omitted words, grammar) and smoother communication (word choice, resolving ambiguities, sectioning).

This contrasts with macro suggestions, which (in nonfiction) include things like making sure factual claims are true, including all relevant information, and bringing in the perspective of a different field. (In fiction, macro suggestions include things such as plot, characterization, chapter structure and consistency of the universe.)

In general, you want to address macro issues before micro issues, since micro improvements are lost to changes on the macro level.

Team Makeup

On the macro level, you want as many people as can bring novel, relevant viewpoints to the writing. Essentially, you're looking to exploit Linus's Law by having at least one collaborator who will naturally see every improvement that could be made.

I favor erring on the side of larger teams for a few reasons. The coordination cost of adding a member isn't very high. Improving things on the micro level really benefits from having lots of eyeballs scrutinizing for improvements: it's entirely plausible that the tenth reader of some passage notices a way to reword it that the first nine missed.

My favorite reason for having more collaborators, however, is that it opens up the possibility of partial editing. One collaborator flags something they notice could be improved, even if they can't think of how. Then, another collaborator, who may not have noticed that something sounded awkward, may figure out how to rewrite it better. (It may sound implausible that someone who can figure out the improvement wouldn't notice something improvable in the first place, but it happened reasonably often.)

Spreading the micro over a lot of people also helps avoid illusions of transparency. If you only have one or two people revising, it's easy for them to spend so much time on the text that they miss statements that don't mean what they think they mean, or that are ambiguous, because they're so familiar with what they mean to mean. Spreading out the editing keeps everyone from becoming overfamiliar with the work. It also allows for holding editors in reserve, who give the work one last pass and read it as naively as the target audience will.

Collaborator Benefits

Helping someone else write their piece is the single most effective technique I've used to powerlevel my writing. From SICP:

The ability to visualize the consequences of the actions under consideration is crucial to becoming an expert programmer, just as it is in any synthetic, creative activity. In becoming an expert photographer, for example, one must learn how to look at a scene and know how dark each region will appear on a print for each possible choice of exposure and development conditions. Only then can one reason backward, planning framing, lighting, exposure, and development to obtain the desired effects. So it is with programming...

...and so it is with writing. There's an awkward period when you're first starting to write, where you've read enough that you have some idea of what better and worse writing looks like, but you haven't written enough to visualize the consequences of your writing. The author of In Fire Forged got there by writing and scrapping 140k words. I got there with a fraction of the effort by helping out on a team that allowed me to see the consequences of various actions without needing to write entire pieces. I also got to see and analyze and discuss the feedback from the other collaborators, which taught me things about better writing I didn't already know. Plus, gaining this experience had positive externalities, since the suggestions I made wound up in a final product, instead of going into the trash.

Collaborating also helps you learn about the topic of the piece more effectively than just reading it, via levels of processing. Merely reading about something is fairly shallow, leading to nondurable memory, whereas collaborating on something forces deeper processing, and thus more durable understanding. You can force yourself to process something on a deeper level as you read it to get the same effect, but collaborating, again, produces positive externalities.

(You should be processing deeply anyway. One collaborator on this piece, for instance, puts comments in the margins of pieces she reads. That said, collaborating has positive externalities.)

It's also fun and social; writing collaboratively has caused me to meet some of my favorite people and strengthened many personal relationships. As such, I suggest that, should you come across some piece that you take a liking to, but see how you could improve it, you offer to collaborate with them. Worst case, they're flattered and turn you down politely.

Crazy Ideas Thread

4 James_Miller 18 June 2016 12:30AM

This thread is intended to provide a space for 'crazy' ideas: ideas that spontaneously come to mind (and feel great), ideas you've long wanted to share but never found the place and time for, and ideas you think should be obvious and simple - but that nobody ever mentions.

Rules for this thread:

  1. Each crazy idea goes into its own top level comment and may be commented there.
  2. Voting should be based primarily on how original the idea is.
  3. Meta discussion of the thread should go to the top level comment intended for that purpose.

Weekly LW Meetups

0 FrankAdamek 17 June 2016 04:22PM

This summary was posted to LW Main on June 17th. The following week's summary is here.

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.


Avoiding strawmen

0 casebash 17 June 2016 08:20AM

George Bernard Shaw wrote that "the single biggest problem in communication is the illusion that it has taken place". Much strawmanning is unconscious. One person says that it is important to be positive; the other interprets this as a claim that it is important to be positive in *all* circumstances, when the first was merely making a general statement.

I would suggest that a technique to avoid accidentally strawmanning someone would be to begin by intentionally strawmanning them and then try to back off to something more moderate from there. 

Take for example:

"Just be yourself"

A strawman would be: "Even if you are a serial killer, you should focus on being yourself rather than changing how you behave."

Since this is a rather extreme strawman, backing off to something more moderate from here would be too easy; we might very well just back off to another strawman. Instead, we should back off to a more reasonable strawman first, then back off to the moderate version of their view.

The more moderate strawman: "You should never change how you act in order to better fit in."

When we back off to something more moderate, we then get: "Changing how you act in order to better fit in is generally not worth it."

You can then respond to the more moderate view. If you had responded to the original, you might have pointed out a single case where the principle doesn't hold, such as a change that doesn't affect one's individuality (e.g. showering regularly), and used it to attack the more general principle. With the more moderate principle in hand, you can see that such a single example only negates the strict reading, not the moderate one. You can then either accept the moderate reading or add arguments for why you disagree with it too. Had you skipped this process, you might have made a specific critique without realising that it didn't completely negate the other person's argument.

Secret Rationality Base in Europe

2 SquirrelInHell 17 June 2016 02:50AM

In short, I'm wondering what place/group/organisation/activity could do for rationality in Europe what Berkeley does for rationality in the US?

 

Soon, we'll have LWCW in Berlin, which I hope will be an occasion to do some networking among people who think seriously about developing rationality communities. But in the meantime, let's do some brainstorming.

 

Important note: in comments to this post, please use only consequentialist language. For example, say "If we decided for the base to be on Malta, then X would happen" instead of "I think it should be in Malta, because..."

 

  • What would happen if the rationality base was located in [insert specific city/country]?

  • What could such a place offer to you now, that would make you consider a temporary/permanent move?

  • What would happen if the European rationality community efforts were centered around some particular research topic (e.g. AI)?

  • Is there something you can think of that would speed up community-building in Europe?

Of course, share anything else that you think is relevant to the topic.

Also, see you all in Berlin :)

Diaspora roundup thread, 15th June 2016

22 philh 15 June 2016 09:36AM

This is a new experimental weekly thread.

Guidelines: Top-level comments here should be links to things written by members of the rationalist community, preferably that would be interesting specifically to this community. Self-promotion is totally fine. Including a very brief summary or excerpt is great, but not required. Generally stick to one link per top-level comment. Recent links are preferred.

Rule: Do not link to anyone who does not want to be linked to. In particular, Scott Alexander has asked people not to link to specific posts on his tumblr. As far as I know he's never rescinded that. Do not link to posts on his tumblr.

Revitalising Less Wrong is not a lost purpose

3 casebash 15 June 2016 08:10AM

John_Maxwell_IV argued that revitalising Less Wrong is a lost purpose. I'm also very skeptical about Less Wrong 2.0 - but I wouldn't agree that it's a lost purpose. It's just that we are currently not on a track to anywhere. The #LW_code_renovation channel resulted in a couple of minor code changes, but there hasn't been any discussion for at least a month. All this means, however, is that if we want a better Less Wrong, we have to do something other than what we have been doing so far. Here are some suggestions.

Systematic changes, not content production

The key problem currently is the lack of content, so the most immediate solution is to produce more content. However, not many people are an Eliezer or a Scott. Think about what percentage of blogs are actually successful - now add the extra constraint of having to be on-topic for Less Wrong. Note that many of Scott's most popular posts would be too political to be posted on Less Wrong. Trying to get a group of people together to post content on Less Wrong wouldn't work. Say 10 people agreed to join such a group: 5 would end up doing nothing, 3 would write 2-3 posts, and it'd fall on the last 2 to drive the site. The odds would be strongly against them. Most people can't consistently pump out high quality content.

The plan to get people to return to Less Wrong and post here won't work either unless combined with changes. Presumably, people have moved to their own blogs for a reason. Why would they come back to posting on Less Wrong, unless something was changed? We might be able to convince some people to make a few posts here, but we aren't going to return the community to its glory days without consistent content.

Why not try to change how the system is set up instead to encourage more content?

Decide on a direction

We now have a huge list of potential changes, but we don't have a direction. Some of those changes would help bring in more content and solve the key issue; others wouldn't. The problem is that there is currently no consensus on what needs to be done. This makes it much less likely that anything will actually get done, particularly since it isn't clear whether a given change would be approved if someone did implement it. At the moment, people come onto the site and suggest features, and there is discussion, but there isn't anyone or any group in charge to say "if you implement this, we will use it." So these projects often never get started.

Before we can even tackle the problem of getting things done, we need to tackle the problem of deciding what needs to be done. The current system of people simply making posts in Discussion is broken - we never even get to the consensus stage, let alone implementation. I'm still thinking about the best way to resolve this and will post more about it in future. Regardless, practically *any* system would be better than what we have now, where *no* decision is ever made.

Below I'll suggest what I think our direction should be:

Positions

Less Wrong is the website for a global movement and has a high number of programmers, yet some societies at my university are more capable of getting things done than we are. Part of the reason is that university societies have positions - people run for a position, which grants them status but also creates responsibilities. At the moment, we have *no-one* working on adding features to the website. We'd actually be better off if we held an election for the position of webmaster and had *only* that person working on the website. I'm not saying we should restrict code contributions to a single person; I'm saying that *right now* implementing even this stupid policy would improve things. Given that half the people here seem to be programmers, I imagine there would be at least *one* decent programmer for whom the status would be worth the work.

Links

If we want more content, then an easy way would be to have a links section, because posting a link is about 1% of the effort of trying to write a Less Wrong post. In order to avoid diluting discussion, these links would have to be posted in their own section. Given that this system is based upon Reddit, this should be super easy.

Sections

The other easy way to generate more content would be to change the rules about what content is on or off topic. This comes with risks - many people like the discussion section how it is. However, if a separate section was created, then people would be able to have these additional discussions without impacting how discussion works at the moment. Many people have argued for a tag system, but whether we simply create additional categories or use tags would be mostly irrelevant. If we have someone who is willing to build this system, then we can do it, if not, then we should just use another category. Given that there is already Main and Discussion I can't imagine that it would be that hard to add in another category of posts. There have been many, many suggestions of what categories we could have. If we just want to get something done, then the simplest thing is to add a single new category, Open, which has the same rules as the Open Threads that we are already running.

Halve downvotes

John_Maxwell_IV points out that too many posts are getting downvotes and critical comments. We could try to change the culture of Less Wrong, perhaps asking a high status individual like Scott or Eliezer to request that people be less critical. That might even work for a week or a month, before people forget about it. Or we could just halve downvotes. While not completely trivial, this change would be about as simple as they come. We might want to halve downvotes only on articles, not comments, because we seem to get enough comments already, just not enough content. I don't think it'll lower the quality of content too much - quite often there are more people who would downvote a post, but they don't bother because the score is already below zero. I think this might be worth a go - I see a high potential upside, but not much in the way of downside.
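To make the proposal concrete, here is a minimal sketch of what the change amounts to (hypothetical Python for illustration only, not the actual LessWrong/Reddit codebase; the function and weighting are my own stand-ins):

    # Hypothetical scoring rule: downvotes on articles count at half weight,
    # while comments are scored as before.
    def score(ups, downs, is_article):
        down_weight = 0.5 if is_article else 1.0
        return ups - down_weight * downs

    # An article at 3 up / 4 down moves from -1 to +1, softening drive-by
    # downvotes without removing negative feedback entirely.
    print(score(3, 4, is_article=True))   # 1.0
    print(score(3, 4, is_article=False))  # -1.0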

Crowdsourcing

If we could determine that a particular set of features would have a reasonable chance of improving LessWrong, then we could crowd-source putting a bounty on someone implementing these features. I suspect that there are many people who'd be happy to donate some money and if we chose simple, well defined features, then it actually wouldn't be that expensive.

[EA relevant] Announcing "Everyday Heroes of Effective Giving" Series

-2 Gleb_Tsipursky 14 June 2016 04:21PM

We have so many great people involved in the EA movement, people who think hard and well about which cause to prioritize and who dedicate a significant portion of their money and time to advancing global flourishing in the most cost-effective manner. However, articles about EA participants typically feature the most dedicated folks, which makes those who don't reach such levels reluctant to call themselves EA members.

 

So to advance the cause of celebrating all in the EA movement and recognizing the value of all movement members appropriately, we at Intentional Insights are launching the "Everyday Heroes of Effective Giving" video series. This series of brief videos, around 10 minutes each, will showcase folks from across the movement, including from around the world, as we do the filming through videoconference (Google hangouts). We ask participants four questions:

1) How would you define effective giving and what makes you passionate about it?
2) What is your story of getting involved in effective giving?
3) What are you doing now in the area of effective giving?
4) What do you plan to do in the future and what do you envision as the mark you want to leave on the world?

 

Why do we frame the title and questions in terms of "effective giving?" Well, these videos are meant to be available as a resource to be shared with anyone and create a sense of narrative, identification, and emotional appeal, which are part of broader effective outreach strategies. Thus, using the terminology of effective giving decreases the likelihood of non-value aligned people trying to join the EA movement, while encouraging such people to give to more effective charities. Note that participants usually mention the EA movement in their comments, which provides a potential trail to the EA movement for those who would be interested in thinking hard about doing the most good.

 

We already released three videos. The first features Boris Yakubchik, who was involved in the EA movement before it was a movement as such. The second one is with Scott Weathers, an EA health policy expert who is currently interning at the WHO and is going on for a PhD in public health this Fall. The third features Alfredo Parra, the main organizer of EA Munich. Since the videos are short, I will post future videos on the EA Forum when we have finished doing a set of three.

 

FYI, the fact that the first three happened to be with men is a fluke. I extended an invitation for videotaping to three men and three women, and the women simply were not available until later. We already did two with the women, and are currently processing them.

 

For future developments with this series, we are planning to improve the backdrop situation for the interviewer by getting a black screen. We have also secured the domain http://www.givingeffectively.org/, and we plan to put these videos and other content there after we decide how to structure the website - we want to make it a key part of the EA Marketing Resource Bank as a venue for content about effective giving. If anyone wants to support these endeavors (the website or video series) with their programming/visual design/video skills, or with donations, please shoot me an email at gleb@intentionalinsights.org

 

I welcome your feedback about this project, in private emails to me or in comments here. My hope is that these videos will show the broad range of diversity across the EA movement, and help people understand that, even if they are not the most dedicated EA participants, they are making a welcome and valuable contribution to the cause of doing the most good effectively.

Attempts to Debias Hindsight Backfire!

7 Gram_Stone 13 June 2016 04:13PM

(Content note: A common suggestion for debiasing hindsight: try to think of many alternative historical outcomes. But thinking of too many examples can actually make hindsight bias worse.)

Followup to: Availability Heuristic Considered Ambiguous

Related to: Hindsight Bias

I.

Hindsight bias is when people who know the answer vastly overestimate its predictability or obviousness, compared to the estimates of subjects who must guess without advance knowledge.  Hindsight bias is sometimes called the I-knew-it-all-along effect.

The way that this bias is usually explained is via the availability of outcome-related knowledge. The outcome is very salient, but the possible alternatives are not, so the probability that people claim they would have assigned to an event that has already happened gets jacked up. It's also known that knowing about hindsight bias and trying to adjust for it consciously doesn't eliminate it.

This means that most attempts at debiasing focus on making alternative outcomes more salient. One is encouraged to recall other ways that things could have happened. Even this merely attenuates the hindsight bias, and does not eliminate it (Koriat, Lichtenstein, & Fischhoff, 1980; Slovic & Fischhoff, 1977).

II.

Remember what happened with the availability heuristic when we varied the number of examples that subjects had to recall? Crazy things happened because of the phenomenal experience of difficulty that recalling more examples caused within the subjects.

You might imagine that, if you recalled too many examples, you could actually make the hindsight bias worse, because if subjects experience alternative outcomes as difficult to generate, then they'll consider the alternatives less likely, and not more.

Relatedly, Sanna, Schwarz, and Stocker (2002, Experiment 2) presented participants with a description of the British–Gurkha War (taken from Fischhoff, 1975; you should remember this one). Depending on conditions, subjects were told either that the British or the Gurkha had won the war, or were given no outcome information. Afterwards, they were asked, “If we hadn’t already told you who had won, what would you have thought the probability of the British (Gurkhas, respectively) winning would be?”, and asked to give a probability in the form of a percentage.

Like in the original hindsight bias studies, subjects with outcome knowledge assigned a higher probability to the known outcome than subjects in the group with no outcome knowledge. (Median probability of 58.2% in the group with outcome knowledge, and 48.3% in the group without outcome knowledge.)

Some subjects, however, were asked to generate either 2 or 10 thoughts about how the outcome could have been different. Thinking of 2 alternative outcomes slightly attenuated hindsight bias (median down to 54.3%), but asking subjects to think of 10 alternative outcomes went horribly, horribly awry, increasing the subjects' median probability for the 'known' outcome all the way up to 68.0%!

It looks like we should be extremely careful when we try to retrieve counterexamples to claims that we believe. If we're too hard on ourselves and fail to take this effect into account, then we can make ourselves even more biased than we would have been if we had done nothing at all.

III.

But it doesn't end there.

As in the availability experiments before this, subjects can be led to discount the informational value of the experience of difficulty when generating examples of alternative historical outcomes. They would then base their judgment on the number of thoughts instead of on the experience of difficulty.

Just before the 2000 U.S. presidential elections, Sanna et al. (2002, Experiment 4) asked subjects to predict the percentage of the popular vote the major candidates would receive. (They had to wait a little longer than they expected for the results.)

Later, they were asked to recall what their predictions were.

Control group subjects who listed no alternative thoughts replicated previous results on the hindsight bias.

Experimental group subjects who listed 12 alternative thoughts experienced difficulty and their hindsight bias wasn't made any better, but it didn't get worse either.

(It seems the reason it didn't get worse is because everyone thought Gore was going to win before the election, and for the hindsight bias to get worse, the subjects would have to incorrectly recall that they predicted a Bush victory.)

Other experimental group subjects listed 12 alternative thoughts and were also made to attribute their phenomenal experience of difficulty to lack of domain knowledge, via the question: "We realize that this was an extremely difficult task that only people with a good knowledge of politics may be able to complete. As background information, may we therefore ask you how knowledgeable you are about politics?" They were then made to provide a rating of their political expertise and to recall their predictions.

Because they discounted the relevance of the difficulty of recalling 12 alternative thoughts, attributing it to their lack of political domain knowledge, thinking of 12 ways that Gore could have won introduced a bias in the opposite direction! They recalled their original predictions for a Gore victory as even more confident than they actually, originally were.

We really are doomed.


Fischhoff, B. (1975). Hindsight is not equal to foresight: the effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1, 288–299.

Koriat, A., Lichtenstein, S., & Fischhoff, B. (1980). Reasons for confidence. Journal of Experimental Psychology: Human Learning and Memory, 6, 107–118.

Sanna, L. J., Schwarz, N., & Stocker, S. L. (2002). When debiasing backfires: Accessible content and accessibility experiences in debiasing hindsight through mental simulations. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28, 497–502.

Slovic, P., & Fischhoff, B. (1977). On the psychology of experimental surprises. Journal of Experimental Psychology: Human Perception and Performance, 3, 544–551.

Open thread, Jun. 13 - Jun. 19, 2016

2 MrMind 13 June 2016 06:57AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Revitalizing Less Wrong seems like a lost purpose, but here are some other ideas

16 John_Maxwell_IV 12 June 2016 07:38AM

This is a response to ingres' recent post sharing Less Wrong survey results. If you haven't read & upvoted it, I strongly encourage you to--they've done a fabulous job of collecting and presenting data about the state of the community.

So, there's a bit of a contradiction in the survey results.  On the one hand, people say the community needs to do more scholarship, be more rigorous, be more practical, be more humble.  On the other hand, not much is getting posted, and it seems like raising the bar will only exacerbate that problem.

I did a query against the survey database to find the complaints of top Less Wrong contributors and figure out how best to serve their needs.  (Note: it's a bit hard to read the comments because some of them should start with "the community needs more" or "the community needs less", but adding that info would have meant constructing a much more complicated query.)  One user wrote:

[it's not so much that there are] overly high standards, just not a very civil or welcoming climate. why write content for free and get trashed when I can go write a grant application or a manuscript instead?

ingres emphasizes that in order to revitalize the community, we would need more content.  Content is important, but incentives for producing content might be even more important.  Social status may be the incentive humans respond most strongly to.  Right now, from a social status perspective, the expected value of creating a new Less Wrong post doesn't feel very high.  Partially because many LW posts are getting downvotes and critical comments, so my System 1 says my posts might as well.  And partially because the Less Wrong brand is weak enough that I don't expect associating myself with it will boost my social status.

When Less Wrong was founded, the primary failure mode guarded against was Eternal September.  If Eternal September represents a sort of digital populism, Less Wrong was attempting a sort of digital elitism.  My perception is that elitism isn't working because the benefits of joining the elite are too small and the costs are too large.  Teddy Roosevelt talked about the man in the arena--I think Less Wrong experienced the reverse of the evaporative cooling EY feared, where people gradually left the arena as the proportional number of critics in the stands grew ever larger.

Given where Less Wrong is at, however, I suspect the goal of revitalizing Less Wrong represents a lost purpose.

ingres' survey received a total of 3083 responses.  Not only is that about twice the number we got in the last survey in 2014, it's about twice the number we got in 2013, 2012, and 2011 (though much bigger than the first survey in 2009).  It's hard to know for sure, since previous surveys were only advertised on the LessWrong.com domain, but it doesn't seem like the diaspora thing has slowed the growth of the community a ton and it may have dramatically accelerated it.

Why has the community continued growing?  Here's one possibility.  Maybe Less Wrong has been replaced by superior alternatives.

  • CFAR - ingres writes: "If LessWrong is serious about it's goal of 'advancing the art of human rationality' then it needs to figure out a way to do real investigation into the subject."  That's exactly what CFAR does.  CFAR is a superior alternative for people who want something like Less Wrong, but more practical.  (They have an alumni mailing list that's higher quality and more active than Less Wrong.)  Yes, CFAR costs money, because doing research costs money!
  • Effective Altruism - A superior alternative for people who want something that's more focused on results.
  • Facebook, Tumblr, Twitter - People are going to be wasting time on these sites anyway.  They might as well talk about rationality while they do it.  Like all those phpBB boards in the 00s, Less Wrong has been outcompeted by the hot new thing, and I think it's probably better to roll with it than fight it.  I also wouldn't be surprised if interacting with others through social media has been a cause of community growth.
  • SlateStarCodex - SSC already checks most of the boxes under ingres' "Future Improvement Wishlist Based On Survey Results".  In my opinion, the average SSC post has better scholarship, rigor, and humility than the average LW post, and the community seems less intimidating, less argumentative, more accessible, and more accepting of outside viewpoints.
  • The meatspace community - Meeting in person has lots of advantages.  Real-time discussion using Slack/IRC also has advantages.

Less Wrong had a great run, and the superior alternatives wouldn't exist in their current form without it.  (LW was easily the most common way people heard about EA in 2014, for instance, although sampling effects may have distorted that estimate.)  But that doesn't mean it's the best option going forward.

Therefore, here are some things I don't think we should do:

  • Try to be a second-rate version of any of the superior alternatives I mentioned above.  If someone's going to put something together, it should fulfill a real community need or be the best alternative available for whatever purpose it serves.
  • Try to get old contributors to return to Less Wrong for the sake of getting them to return.  If they've judged that other activities are a better use of time, we should probably trust their judgement.  It might be sensible to make an exception for old posters that never transferred to the in-person community, but they'd be harder to track down.
  • Try to solve the same sort of problems Arbital or Metaculus is optimizing for.  No reason to step on the toes of other projects in the community.

But that doesn't mean there's nothing to be done.  Here are some possible weaknesses I see with our current setup:

  • If you've got a great idea for a blog post, and you don't already have an online presence, it's a bit hard to reach lots of people, if that's what you want to do.
  • If we had a good system for incentivizing people to write great stuff (as opposed to merely tolerating great stuff the way LW culture historically has), we'd get more great stuff written.
  • It can be hard to find good content in the diaspora.  Possible solution: Weekly "diaspora roundup" posts to Less Wrong.  I'm too busy to do this, but anyone else is more than welcome to (assuming both people reading LW and people in the diaspora want it).

ingres mentions the possibility of Scott Alexander somehow opening up SlateStarCodex to other contributors.  This seems like a clearly superior alternative to revitalizing Less Wrong, if Scott is down for it:

  • As I mentioned, SSC already seems to have solved most of the culture & philosophy problems that people complained about with Less Wrong.
  • SSC has no shortage of content--Scott has increased the rate at which he creates open threads to deal with an excess of comments.
  • SSC has a stronger brand than Less Wrong.  It's been linked to by Ezra Klein, Ross Douthat, Bryan Caplan, etc.

But the most important reasons may be behavioral reasons.  SSC has more traffic--people are in the habit of visiting there, not here.  And the posting habits people have acquired there seem more conducive to community.  Changing habits is hard.

As ingres writes, revitalizing Less Wrong is probably about as difficult as creating a new site from scratch, and I think creating a new site from scratch for Scott is a superior alternative for the reasons I gave.

So if there's anyone who's interested in improving Less Wrong, here's my humble recommendation: Go tell Scott Alexander you'll build an online forum to his specification, with SSC community feedback, to provide a better solution for his overflowing open threads.  Once you've solved that problem, keep making improvements and subfora so your forum becomes the best available alternative for more and more use cases.

And here's my humble suggestion for what an SSC forum could look like:

As I mentioned above, Eternal September is analogous to a sort of digital populism.  The major social media sites often have a "mob rule" culture to them, and people are increasingly seeing the disadvantages of this model.  Less Wrong tried to achieve digital elitism and it didn't work well in the long run, but that doesn't mean it's impossible.  Edge.org has found a model for digital elitism that works.  There may be other workable models out there.  A workable model could even turn in to a successful company.  Fight the hot new thing by becoming the hot new thing.

My proposal is based on the idea of eigendemocracy.  (Recommended that you read the link before continuing--eigendemocracy is cool.)  In eigendemocracy, your trust score is a composite rating of what trusted people think of you.  (It sounds like infinite recursion, but it can be resolved using linear algebra.)
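To unpack the "resolved using linear algebra" remark: the apparent recursion bottoms out as an eigenvector problem, which power iteration solves, PageRank-style. Below is a minimal sketch of my own (the endorsement matrix is made up for illustration; this is not code from the eigendemocracy essay):

    import numpy as np

    # E[i][j] = 1 if user i endorses user j (an upvote, a vouch, etc.).
    E = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

    W = E / E.sum(axis=1, keepdims=True)  # each user splits their endorsement

    trust = np.ones(4) / 4                # start everyone with equal trust
    for _ in range(100):
        trust = W.T @ trust               # trust flows along endorsements
        trust /= trust.sum()              # keep it a probability distribution

    print(trust)  # converges to the principal eigenvector: stable trust scores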

Eigendemocracy is a complicated idea, but a simple way to get most of the way there would be to have a forum where having lots of karma gives you the ability to upvote multiple times.  How would this work?  Let's say Scott starts with 5 karma and everyone else starts with 0 karma.  Each point of karma gives you the ability to upvote once a day.  Let's say it takes 5 upvotes for a post to get featured on the sidebar of Scott's blog.  If Scott wants to feature a post on the sidebar of his blog, he upvotes it 5 times, netting the person who wrote it 1 karma.  As Scott features more and more posts, he gains a moderation team full of people who wrote posts that were good enough to feature.  As they feature posts in turn, they generate more co-moderators.
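Here is a toy sketch of those mechanics, using the numbers from the example above (the class names and the daily-budget bookkeeping are my own illustrative choices):

    FEATURE_THRESHOLD = 5  # upvotes needed before a post is featured

    class Member:
        def __init__(self, name, karma=0):
            self.name = name
            self.karma = karma  # each point = one upvote per day

    class Post:
        def __init__(self, author):
            self.author = author
            self.upvotes = 0
            self.featured = False

    def upvote(post, voter, times):
        spent = min(times, voter.karma)   # can't exceed the daily budget
        post.upvotes += spent
        if not post.featured and post.upvotes >= FEATURE_THRESHOLD:
            post.featured = True
            post.author.karma += 1        # a featured author joins the mod team

    scott = Member("Scott", karma=5)
    newcomer = Member("newcomer")
    post = Post(author=newcomer)
    upvote(post, scott, times=5)          # Scott spends his 5 daily upvotes
    assert post.featured and newcomer.karma == 1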

Why do I like this solution?

  • It acts as a cultural preservation mechanism.  On reddit and Twitter, sheer numbers rule when determining what gets visibility.  The reddit-like voting mechanisms of Less Wrong meant that the site deliberately kept a somewhat low profile in order to avoid getting overrun.  Even if SSC experienced a large influx of new users, those users would only gain power to affect the visibility of content if they proved themselves by making quality contributions first.
  • It takes the moderation burden off of Scott and distributes it across trusted community members.  As the community grows, the mod team grows with it.
  • The incentives seem well-aligned.  Writing stuff Scott likes or meta-likes gets you recognition, mod powers, and the ability to control the discussion--forms of social status.  Contrast with social media sites where hyperbole is a shortcut to attention, followers, upvotes.  Also, unlike Less Wrong, there'd be no punishment for writing a low quality post--it simply doesn't get featured and is one more click away from the SSC homepage.

TL;DR - Despite appearances, the Less Wrong community is actually doing great.  Any successor to Less Wrong should try to offer compelling advantages over options that are already available.

Availability Heuristic Considered Ambiguous

7 Gram_Stone 10 June 2016 10:40PM

(Content note: The experimental results on the availability bias, one of the biases described in Tversky and Kahneman's original work, have been overdetermined, which has led to at least two separate interpretations of the heuristic in the cognitive science literature. These interpretations also result in different experimental predictions. The audience probably wants to know about this. This post is also intended to measure audience interest in a tradition of cognitive scientific research that I've been considering describing here for a while. Finally, I steal from Scott Alexander the section numbering technique that he stole from someone else: I expect it to be helpful because there are several inferential steps to take in this particular article, and it makes it look less monolithic.)

Related to: Availability

I.

The availability heuristic is judging the frequency or probability of an event, by the ease with which examples of the event come to mind.

This statement is actually slightly ambiguous. I notice at least two possible interpretations with regards to what the cognitive scientists infer is happening inside of the human mind:

  1. Humans think things like, “I found a lot of examples, thus the frequency or probability of the event is high,” or, “I didn’t find many examples, thus the frequency or probability of the event is low.”
  2. Humans think things like, “Looking for examples felt easy, thus the frequency or probability of the event is high,” or, “Looking for examples felt hard, thus the frequency or probability of the event is low.”

I think the second interpretation is the one more similar to Kahneman and Tversky’s original description, as quoted above.

And it doesn’t seem that I would be building up a strawman by claiming that some adhere to the first interpretation, intentionally or not. From Medin and Ross (1996, p. 522):

The availability heuristic refers to a tendency to form a judgment on the basis of what is readily brought to mind. For example, a person who is asked whether there are more English words that begin with the letter ‘t’ or the letter ‘k’ might try to think of words that begin with each of these letters. Since a person can probably think of more words beginning with ‘t’, he or she would (correctly) conclude that ‘t’ is more frequent than ‘k’ as the first letter of English words.

And even that sounds at least slightly ambiguous to me, although it falls on the other side of the continuum between pure mental-content-ism and pure phenomenal-experience-ism that includes the original description.

II.

You can’t really tease out this ambiguity with the older studies on availability, because these two interpretations generate the same prediction. There is a strong correlation between the number of examples recalled and the ease with which those examples come to mind.

For example, consider a piece of the setup in Experiment 3 from the original paper on the availability heuristic. The subjects in this experiment were asked to estimate the frequency of two types of words in the English language: words with ‘k’ as their first letter, and words with ‘k’ as their third letter. There are twice as many words with ‘k’ as their third letter, but there was bias towards estimating that there are more words with ‘k’ as their first letter.

How, in experiments like these, are you supposed to figure out whether the subjects are relying on mental content or phenomenal experience? Both mechanisms predict the outcome, "Humans will be biased towards estimating that there are more words with 'k' as their first letter." And a lot of the later studies just replicate this result in other domains, and thus suffer from the same ambiguity.

III.

If you wanted to design a better experiment, where would you begin?

Well, if we think of feelings as sources of information in the way that we regard thoughts as sources of information, then we should find that we have some (perhaps low, perhaps high) confidence in the informational value of those feelings, as we have some level of confidence in the informational value of our thoughts.

This is useful because it suggests a method for detecting the use of feelings as sources of information: if we are led to believe that a source of information has low value, then its relevance will be discounted; and if we are led to believe that it has high value, then its relevance will be augmented. Detecting this phenomenon in the first place is probably a good place to start before trying to determine whether the classic availability studies demonstrate a reliance on phenomenal experience, mental content, or both. 

Fortunately, Wänke et al. (1995) conducted a modified replication of the experiment described above with exactly the properties that we’re looking for! Let’s start with the control condition.

In the control condition, subjects were given a blank sheet of paper and asked to write down 10 words that have ‘t’ as the third letter, and then to write down 10 words that begin with the letter ‘t’. After this listing task, they rated the extent to which words beginning with a ‘t’ are more or less frequent than words that have ‘t’ as the third letter. As in the original availability experiments, subjects estimated that words that begin with a ‘t’ are much more frequent than words with a ‘t’ in the third position.

Like before, this isn’t enough to answer the questions that we want to answer, but it can’t hurt to replicate the original result. It doesn’t really get interesting until you do things that affect the perceived value of the subjects’ feelings.

Wänke et al. got creative and, instead of blank paper, they gave subjects in two experimental conditions sheets of paper imprinted with pale, blue rows of ‘t’s, and told them to write 10 words beginning with a ‘t’. One condition was told that the paper would make it easier for them to recall words beginning with a ‘t’, and the other was told that the paper would make it harder for them to recall words beginning with a ‘t’.

Subjects made to think that the magic paper made it easier to think of examples gave lower estimates of the frequency of words beginning with a ‘t’ in the English language. It felt easy to think of examples, but the experimenter made them expect that by means of the magic paper, so they discounted the value of the feeling of ease. Their estimates of the frequency of words beginning with 't' went down relative to the control condition.

Subjects made to think that the magic paper made it harder to think of examples gave higher estimates of the frequency of words beginning with a ‘t’ in the English language. It felt easy to recall examples, but the experimenter made them think it would feel hard, so they augmented the value of the feeling of ease. Their estimates of the frequency of words beginning with 't' went up relative to the control condition.

(Also, here's a second explanation by Nate Soares if you want one.)

So, at least in this sort of experiment, it looks like the subjects weren’t counting the number of examples they came up with; it looks like they really were using their phenomenal experiences of ease and difficulty to estimate the frequency of certain classes of words. This is some evidence for the validity of the second interpretation mentioned at the beginning.

IV.

So we know that there is at least one circumstance in which the second interpretation seems valid. This was a step towards figuring out whether the availability heuristic first described by Kahneman and Tversky is an inference from amount of mental content, or an inference from the phenomenal experience of ease of recall, or something else, or some combination thereof.

As I said before, the two interpretations have identical predictions in the earlier studies. The solution to this is to design an experiment where inferences from mental content and inferences from phenomenal experience cause different judgments.

Schwarz et al. (1991, Experiment 1) asked subjects to list either 6 or 12 situations in which they behaved either assertively or unassertively. Pretests had shown that recalling 6 examples was experienced as easy, whereas recalling 12 examples was experienced as difficult. After listing examples, subjects had to evaluate their own assertiveness.

As one would expect, subjects rated themselves as more assertive when recalling 6 examples of assertive behavior than when recalling 6 examples of unassertive behavior.

But the difference in assertiveness ratings didn’t increase with the number of examples. Subjects who had to recall examples of assertive behavior rated themselves as less assertive after reporting 12 examples rather than 6 examples, and subjects who had to recall examples of unassertive behavior rated themselves as more assertive after reporting 12 examples rather than 6 examples.

If they were relying on the number of examples, then we should expect their ratings for the recalled quality to increase with the number of examples. Instead, they decreased.

It could be that it got harder to come up with good examples near the end of the task, and that later examples were lower quality than earlier examples, and the increased availability of the later examples biased the ratings in the way that we see. Schwarz acknowledged this, checked the written reports manually, and claimed that no such quality difference was evident.

V.

It would still be nice if we could do better than taking Schwarz’s word on that though. One thing you could try is seeing what happens when you combine the methods we used in the last two experiments: vary the number of examples generated and manipulate the perceived relevance of the experiences of ease and difficulty at the same time. (Last experiment, I promise.)

Schwarz et al. (1991, Experiment 3) manipulated the perceived value of the experienced ease or difficulty of recall by having subjects listen to ‘new-age music’ played at half-speed while they worked on the recall task. Some subjects were told that this music would make it easier to recall situations in which they behaved assertively and felt at ease, whereas others were told that it would make it easier to recall situations in which they behaved unassertively and felt insecure. These manipulations make subjects perceive recall experiences as uninformative whenever the experience matches the alleged impact of the music; after all, it may simply be easy or difficult because of the music. On the other hand, experiences that are opposite to the alleged impact of the music are considered very informative.

When the alleged effects of the music were the opposite of the phenomenal experience of generating examples, the previous experimental results were replicated.

When the alleged effects of the music match the phenomenal experience of generating examples, then the experience is called into question, since you can’t tell if it’s caused by the recall task or the music.

When this is done, the pattern that we expect from the first interpretation of the availability heuristic holds. Thinking of 12 examples of assertive behavior makes subjects rate themselves as more assertive than thinking of 6 examples of assertive behavior; mutatis mutandis for unassertive examples. When people can’t rely on their experience, they fall back to using mental content, and instead of relying on how hard or easy things feel, they count.

Under different circumstances, both interpretations are useful, but of course, it’s important to recognize that a distinction exists in the first place.


Medin, D. L., & Ross, B. H. (1996). Cognitive psychology (2nd ed.). Fort Worth: Harcourt Brace.

Schwarz, N., Bless, H., Strack, F., Klumpp, G., Rittenauer-Schatka, H., & Simons, A. (1991). Ease of retrieval as information: Another look at the availability heuristic. Journal of Personality and Social Psychology, 61, 195–202.

Tversky, A., & Kahneman, D. (1973). Availability: A heuristic for judging frequency and probability. Cognitive Psychology, 5, 207–232.

Wänke, M., Schwarz, N. & Bless, H. (1995). The availability heuristic revisited: Experienced ease of retrieval in mundane frequency estimates. Acta Psychologica, 89, 83-90.

2016 LessWrong Diaspora Survey Analysis: Part Two (LessWrong Use, Successorship, Diaspora)

24 ingres 10 June 2016 07:40PM

2016 LessWrong Diaspora Survey Analysis

Overview

  • Results and Dataset
  • Meta
  • Demographics
  • LessWrong Usage and Experience
  • LessWrong Criticism and Successorship
  • Diaspora Community Analysis (You are here)
  • Mental Health Section
  • Basilisk Section/Analysis
  • Blogs and Media analysis
  • Politics
  • Calibration Question And Probability Question Analysis
  • Charity And Effective Altruism Analysis

Introduction

Before it was the LessWrong survey, the 2016 survey was a small project I was working on as market research for a website I'm creating called FortForecast. As I was discussing the idea with others, particularly Eliot, he suggested that since he's doing LW 2.0 and I'm doing a site that targets the LessWrong demographic, why not go ahead and do the LessWrong Survey? Because of that, this year's survey had a lot of questions oriented around what you would want to see in a successor to LessWrong and what you think is wrong with the site.

LessWrong Usage and Experience

How Did You Find LessWrong?

Been here since it was started in the Overcoming Bias days: 171 8.3%
Referred by a link: 275 13.4%
HPMOR: 542 26.4%
Overcoming Bias: 80 3.9%
Referred by a friend: 265 12.9%
Referred by a search engine: 131 6.4%
Referred by other fiction: 14 0.7%
Slate Star Codex: 241 11.7%
Reddit: 55 2.7%
Common Sense Atheism: 19 0.9%
Hacker News: 47 2.3%
Gwern: 22 1.1%
Other: 191 9.308%

How do you use Less Wrong?

I lurk, but never registered an account: 1120 54.4%
I've registered an account, but never posted: 270 13.1%
I've posted a comment, but never a top-level post: 417 20.3%
I've posted in Discussion, but not Main: 179 8.7%
I've posted in Main: 72 3.5%

[54.4% lurkers.]

How often do you comment on LessWrong?

I have commented more than once a week for the past year.: 24 1.2%
I have commented more than once a month for the past year but less than once a week.: 63 3.1%
I have commented but less than once a month for the past year.: 225 11.1%
I have not commented this year.: 1718 84.6%

[You could probably snarkily title this one "LW usage in one statistic". It's a pretty damning portrait of the site's vitality: a whopping 84.6% of people have not commented a single time this year.]

How Long Since You Last Posted On LessWrong?

I wrote one today.: 12 0.637%
Within the last three days.: 13 0.69%
Within the last week.: 22 1.168%
Within the last month.: 58 3.079%
Within the last three months.: 75 3.981%
Within the last six months.: 68 3.609%
Within the last year.: 84 4.459%
Within the last five years.: 295 15.658%
Longer than five years.: 15 0.796%
I've never posted on LW.: 1242 65.924%

[A supermajority of people have never posted on LW; 5.574% have posted within the last month.]

About how much of the Sequences have you read?

Never knew they existed until this moment: 215 10.3%
Knew they existed, but never looked at them: 101 4.8%
Some, but less than 25% : 442 21.2%
About 25%: 260 12.5%
About 50%: 283 13.6%
About 75%: 298 14.3%
All or almost all: 487 23.3%

[10.3% of people taking the survey have never heard of the sequences. 36.3% have not read a quarter of them.]

Do you attend Less Wrong meetups?

Yes, regularly: 157 7.5%
Yes, once or a few times: 406 19.5%
No: 1518 72.9%

[However the in-person community seems to be non-dead.]

Is physical interaction with the Less Wrong community otherwise a part of your everyday life, for example do you live with other Less Wrongers, or you are close friends and frequently go out with them?

Yes, all the time: 158 7.6%
Yes, sometimes: 258 12.5%
No: 1652 79.9%

About the same number say they hang out with LWers 'all the time' as say they go to meetups. I wonder if people just double counted themselves here. Or they may go to meetups and have other interactions with LWers outside of that. Or it could be a coincidence and these are different demographics. Let's find out.

P(Community part of daily life | Meetups) = 40%

Significant overlap, but definitely not exclusive overlap. I'll go ahead and chalk this one up to coincidence.
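For the curious, a check like that is just a filtered proportion over the raw responses. Here's a minimal sketch of the computation in Python with pandas; the file name and column labels are hypothetical stand-ins, not the dataset's real ones:

    import pandas as pd

    # Hypothetical file name and column labels, for illustration only.
    df = pd.read_csv("2016_lw_survey.csv")

    attends = df["meetups"].isin(["Yes, regularly", "Yes, once or a few times"])
    daily = df["physical_interaction"].isin(["Yes, all the time", "Yes, sometimes"])

    # P(community part of daily life | attends meetups)
    p = (attends & daily).sum() / attends.sum()
    print(f"P(daily-life interaction | meetups) = {p:.0%}")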

Have you ever been in a romantic relationship with someone you met through the Less Wrong community?

Yes: 129 6.2%
I didn't meet them through the community but they're part of the community now: 102 4.9%
No: 1851 88.9%

LessWrong Usage Differences Between 2016 and 2014 Surveys

How do you use Less Wrong?

I lurk, but never registered an account: +19.300% 1125 54.400%
I've registered an account, but never posted: -1.600% 271 13.100%
I've posted a comment, but never a top-level post: -7.600% 419 20.300%
I've posted in Discussion, but not Main: -5.100% 179 8.700%
I've posted in Main: -3.300% 73 3.500%

About how much of the sequences have you read?

Never knew they existed until this moment: +3.300% 217 10.400%
Knew they existed, but never looked at them: +2.100% 103 4.900%
Some, but less than 25%: +3.100% 442 21.100%
About 25%: +0.400% 260 12.400%
About 50%: -0.400% 284 13.500%
About 75%: -1.800% 299 14.300%
All or almost all: -5.000% 491 23.400%

Do you attend Less Wrong meetups?

Yes, regularly: -2.500% 160 7.700%
Yes, once or a few times: -2.100% 407 19.500%
No: +7.100% 1524 72.900%

Is physical interaction with the Less Wrong community otherwise a part of your everyday life, for example do you live with other Less Wrongers, or you are close friends and frequently go out with them?

Yes, all the time: +0.200% 161 7.700%
Yes, sometimes: -0.300% 258 12.400%
No: +2.400% 1659 79.800%

Have you ever been in a romantic relationship with someone you met through the Less Wrong community?

Yes: +0.800% 132 6.300%
I didn't meet them through the community but they're part of the community now: -0.400% 102 4.900%
No: +1.600% 1858 88.800%

Write Ins

In a bit of a silly oversight I forgot to ask survey participants what was good about the community, so the following is going to be a pretty one-sided picture. Below are the complete write-ins respondents submitted.

Issues With LessWrong At Its Peak

Philosophical Issues With LessWrong At Its Peak [Part One]
Philosophical Issues With LessWrong At Its Peak [Part Two]
Community Issues With LessWrong At Its Peak [Part One]
Community Issues With LessWrong At Its Peak [Part Two]

Issues With LessWrong Now

Philosophical Issues With LessWrong Now [Part One]
Philosophical Issues With LessWrong Now [Part Two]
Community Issues With LessWrong Now [Part One]
Community Issues With LessWrong Now [Part Two]

Peak Philosophy Issue Tallies

Philosophy Issues (Sample Size: 233)
Label Code Tally
Arrogance A 16
Bad Aesthetics BA 3
Bad Norms BN 3
Bad Politics BP 5
Bad Tech Platform BTP 1
Cultish C 5
Cargo Cult CC 3
Doesn't Accept Criticism DAC 3
Don't Know Where to Start DKWS 5
Damaged Me Mentally DMM 1
Esoteric E 3
Eliezer Yudkowsky EY 6
Improperly Indexed II 7
Impossible Mission IM 4
Insufficient Social Support ISS 1
Jargon  
Literal Cult LC 1
Lack of Rigor LR 14
Misfocused M 13
Mixed Bag MB 3
Nothing N 13
Not Enough Jargon NEJ 1
Not Enough Roko's Basilisk NERB 1
Not Enough Theory NET 1
No Intuition NI 6
Not Progressive Enough NPE 7
Narrow Scholarship NS 20
Other O 3
Personality Cult PC 10
None of the Above  
Quantum Mechanics Sequence QMS 2
Reinvention R 10
Rejects Expertise RE 5
Spoiled S 7
Small Competent Authorship SCA 6
Suggestion For Improvement SFI 1
Socially Incompetent SI 9
Stupid Philosophy SP 4
Too Contrarian TC 2
Typical Mind TM 1
Too Much Roko's Basilisk TMRB 1
Too Much Theory TMT 14
Too Progressive TP 2
Too Serious TS 2
Unwelcoming U 8

Well, those are certainly some results. Top answers are:

Narrow Scholarship: 20
Arrogance: 16
Too Much Theory: 14
Lack of Rigor: 14
Misfocused: 13
Nothing: 13
Reinvention (reinvents the wheel too much): 10
Personality Cult: 10

So condensing a bit: Pay more attention to mainstream scholarship and ideas, try to do better about intellectual rigor, be more practical and focus on results, be more humble. (Labeled Dataset)

Peak Community Issue Tallies

Community Issues (Sample Size: 227)
Label Code Tally
Arrogance A 7
Assumes Reader Is Male ARIM 1
Bad Aesthetics BA 1
Bad At PR BAP 5
Bad Norms BN 5
Bad Politics BP 2
Cultish C 9
Cliqueish Tendencies CT 1
Diaspora D 1
Defensive Attitude DA 1
Doesn't Accept Criticism DAC 3
Dunning Kruger DK 1
Elitism E 3
Eliezer Yudkowsky EY 2
Groupthink G 11
Insufficiently Indexed II 9
Impossible Mission IM 1
Imposter Syndrome IS 1
Jargon J 2
Lack of Rigor LR 1
Mixed Bag MB 1
Nothing N 5
??? NA 1
Not Big Enough NBE 3
Not Enough of A Cult NEAC 1
Not Enough Content NEC 7
Not Enough Community Infrastructure NECI 10
Not Enough Meetups NEM 5
No Goals NG 2
Not Nerdy Enough NNE 3
None Of the Above NOA 1
Not Progressive Enough NPE 3
Not Rational NR 3
NRx (Neoreaction) NRx 1
Narrow Scholarship NS 4
Not Stringent Enough NSE 3
Parochialism P 1
Pickup Artistry PA 2
Personality Cult PC 7
Reinvention R 1
Recurring Arguments RA 3
Rejects Expertise RE 2
Sequences S 2
Small Competent Authorship SCA 5
Suggestion For Improvement SFI 1
Spoiled Issue SI 9
Socially INCOMpetent SINCOM 2
Too Boring TB 1
Too Contrarian TC 10
Too COMbative TCOM 4
Too Cis/Straight/Male TCSM 5
Too Intolerant of Cranks TIC 1
Too Intolerant of Politics TIP 2
Too Long Winded TLW 2
Too Many Idiots TMI 3
Too Much Math TMM 1
Too Much Theory TMT 12
Too Nerdy TN 6
Too Rigorous TR 1
Too Serious TS 1
Too Tolerant of Cranks TTC 1
Too Tolerant of Politics TTP 3
Too Tolerant of POSers TTPOS 2
Too Tolerant of PROGressivism TTPROG 2
Too Weird TW 2
Unwelcoming U 12
UTILitarianism UTIL 1

Top Answers:

Unwelcoming: 12
Too Much Theory: 12
Groupthink: 11
Not Enough Community Infrastructure: 10
Too Contrarian: 10
Insufficiently Indexed: 9
Cultish: 9

Again condensing a bit: Work on being less intimidating/aggressive/etc to newcomers, spend less time on navel gazing and more time on actually doing things and collecting data, work on getting the structures in place that will onboard people into the community, stop being so nitpicky and argumentative, spend more time on getting content indexed in a form where people can actually find it, be more accepting of outside viewpoints and remember that you're probably more likely to be wrong than you think. (Labeled Dataset)

One last note before we finish up: these tallies are a very rough executive summary. The tagging process basically involves trying to fit points into clusters, and is prone to inaccuracy through laziness, reluctance to add another category, square-peg-into-round-hole fitting, and my personal political biases. So take these with a grain of salt; if you really want to know what people wrote in, my advice would be to read through the write-in sets above in HTML format. If you want to evaluate for yourself how well I tagged things, you can see the labeled datasets above.

I won't bother tallying the "issues now" sections; all you really need to know is that they're basically the same as the earlier sections, except with lots more "It's dead." comments and, from eyeballing it, a higher proportion of people arguing that LessWrong has been taken over by the left/social justice, along with complaints about effective altruism. (I infer that the complaints about being taken over by the left are mostly referring to effective altruism.)

Traits Respondents Would Like To See In A Successor Community

Philosophically

Attention Paid To Outside Sources
More: 1042 70.933%
Same: 414 28.182%
Less: 13 0.885%

Self Improvement Focus
More: 754 50.706%
Same: 598 40.215%
Less: 135 9.079%

AI Focus
More: 184 12.611%
Same: 821 56.271%
Less: 454 31.117%

Political
More: 330 22.837%
Same: 770 53.287%
Less: 345 23.875%

Academic/Formal
More: 455 31.885%
Same: 803 56.272%
Less: 169 11.843%

In summary, people want a site that will engage with outside ideas, acknowledge where it borrows from, focus more on practical self-improvement and less on AI and AI risk, and tighten its academic rigor. They could go either way on politics, but the epistemic direction is clear.

Community

Intense Environment
More: 254 19.644%
Same: 830 64.192%
Less: 209 16.164%

Focused On 'Real World' Action
More: 739 53.824%
Same: 563 41.005%
Less: 71 5.171%

Experts
More: 749 55.605%
Same: 575 42.687%
Less: 23 1.707%

Data Driven/Testing Of Ideas
More: 1107 78.344%
Same: 291 20.594%
Less: 15 1.062%

Social
More: 583 43.507%
Same: 682 50.896%
Less: 75 5.597%

This largely backs up what I said about the previous results. People want a more practical, more active, more social and more empirical LessWrong with outside expertise and ideas brought into the fold. They could go either way on it being more intense but the epistemic trend is still clear.

Write Ins

Diaspora Communities

So where did the party go? We got twice as many respondents this year as last when we opened up the survey to the diaspora, which means that the LW community is alive and kicking; it's just not on LessWrong.

LessWrong
Yes: 353 11.498%
No: 1597 52.02%

LessWrong Meetups
Yes: 215 7.003%
No: 1735 56.515%

LessWrong Facebook Group
Yes: 171 5.57%
No: 1779 57.948%

LessWrong Slack
Yes: 55 1.792%
No: 1895 61.726%

SlateStarCodex
Yes: 832 27.101%
No: 1118 36.417%

[SlateStarCodex by far has the highest proportion of active LessWrong users, over twice that of LessWrong itself, and more than LessWrong and Tumblr combined.]

Rationalist Tumblr
Yes: 350 11.401%
No: 1600 52.117%

[I'm actually surprised that Tumblr doesn't just beat LessWrong itself outright. They're only a tenth of a percentage point behind, though, and if current trends continue I suspect that by 2017 Tumblr will have a large lead over the main LW site.]

Rationalist Facebook
Yes: 150 4.886%
No: 1800 58.632%

[Eliezer Yudkowsky currently resides here.]

Rationalist Twitter
Yes: 59 1.922%
No: 1891 61.596%

Effective Altruism Hub
Yes: 98 3.192%
No: 1852 60.326%

FortForecast
Yes: 4 0.13%
No: 1946 63.388%

[I included this as a 'troll' option to catch people who just check every box. Relatively few people seem to have done that, but having the option here lets me know one way or the other.]

Good Judgement(TM) Open
Yes: 29 0.945%
No: 1921 62.573%

PredictionBook
Yes: 59 1.922%
No: 1891 61.596%

Omnilibrium
Yes: 8 0.261%
No: 1942 63.257%

Hacker News
Yes: 252 8.208%
No: 1698 55.309%

#lesswrong on freenode
Yes: 76 2.476%
No: 1874 61.042%

#slatestarcodex on freenode
Yes: 36 1.173%
No: 1914 62.345%

#hplusroadmap on freenode
Yes: 4 0.13%
No: 1946 63.388%

#chapelperilous on freenode
Yes: 10 0.326%
No: 1940 63.192%

[Since people keep asking me, this is a postrational channel.]

/r/rational
Yes: 274 8.925%
No: 1676 54.593%

/r/HPMOR
Yes: 230 7.492%
No: 1720 56.026%

[Given that the story is long over, this is pretty impressive. I'd have expected it to be dead by now.]

/r/SlateStarCodex
Yes: 244 7.948%
No: 1706 55.57%

One or more private 'rationalist' groups
Yes: 192 6.254%
No: 1758 57.264%

[I almost wish I hadn't included this option; it'd have been fascinating to learn more about these groups through write-ins.]

Of all the parties who seem like plausible candidates at the moment, Scott Alexander seems the most capable of undiasporaing the community. In practice he's very busy, so he would need a dedicated team of relatively autonomous people to help him. Scott could court guest posts and start to scale up under the SSC brand, and I think he would fairly easily end up with the lion's share of the free-floating LWers that way.

Before I call a hearse for LessWrong, there is a glimmer of hope left:

Would you consider rejoining LessWrong?

I never left: 668 40.6%
Yes: 557 33.8%
Yes, but only under certain conditions: 205 12.5%
No: 216 13.1%

A significant fraction of people say they'd be interested in an improved version of the site. And of course there were write-ins for conditions to rejoin. What did people say they'd need to rejoin the site?

Rejoin Condition Write Ins [Part One]
Rejoin Condition Write Ins [Part Two]
Rejoin Condition Write Ins [Part Three]
Rejoin Condition Write Ins [Part Four]
Rejoin Condition Write Ins [Part Five]

Feel free to read these yourselves (they're not long), but I'll go ahead and summarize: It's all about the content. Content, content, content. No amount of usability improvements, A/B testing or clever trickery will let you get around content. People are overwhelmingly clear about this; they need a reason to come to the site and right now they don't feel like they have one. That means priority number one for somebody trying to revitalize LessWrong is how you deal with this.

Let's recap.

Future Improvement Wishlist Based On Survey Results

Philosophical

  • Pay more attention to mainstream scholarship and ideas.
  • Improved intellectual rigor.
  • Acknowledge sources borrowed from.
  • Be more practical and focus on results.
  • Be more humble.

Community

  • Be less intimidating/aggressive/etc. to newcomers.
  • Structures that will onboard people into the community.
  • Stop being so nitpicky and argumentative.
  • Spend more time on getting content indexed in a form where people can actually find it.
  • More accepting of outside viewpoints.

While that list seems reasonable, it's quite hard to put into practice. Rigor, as the name implies, requires high effort from participants. Frankly, it's not fun. And getting people to do un-fun things without paying them is difficult. If LessWrong is serious about its goal of 'advancing the art of human rationality' then it needs to figure out a way to do real investigation into the subject. Not just have people 'discuss', as though the potential for Rationality is within all of us just waiting to be brought out by the right conversation.

I personally haven't been a LW regular in a long time. Assuming the points about pedantry, sniping, "well actually"-ism and the like are true, then they need to stop for the site to move forward. Personally, I'm a huge fan of Scott Alexander's comment policy: all comments must be at least two of true, kind, or necessary.

  • True and kind - Probably won't drown out the discussion signal, will help significantly decrease the hostility of the atmosphere.

  • True and necessary - Sometimes what you have to say isn't nice, but it needs to be said. This is the common core of free speech arguments for saying mean things, and they're not wrong. However, something being true isn't necessarily enough to make it something you should say. In fact, attacking people in ways entirely unrelated to their arguments is known as the ad hominem fallacy.

  • Kind and necessary - The infamous 'hugbox' is essentially a place where people go to hear things which are kind but not necessarily true. I don't think anybody wants a hugbox, but occasionally it can be important to say things that might not be true but are needed for the sake of tact, reconciliation, or to prevent greater harm.

If people took that seriously and really gave it some thought before they used their keyboard, I think the on-site LessWrong community would be a significant part of the way to not driving people off as soon as they arrive.

More importantly, in places like the LessWrong Slack I see a sort of happy-go-lucky attitude about site improvement: "Oh, that sounds nice, we should do that," without the accompanying mountain of work to actually make 'that' happen. I'm not sure people really understand the dynamics of what it means to 'revive' a website in severe decay. When you decide to 'revive' a dying site, what you're really doing once you're past a certain point is refounding the site. So the question you should be asking yourself isn't "Can I fix the site up a bit so it isn't quite so stale?" It's "Could I have founded this site?" and if the answer is no, you should seriously question whether to make the time investment.

Whether or not LessWrong lives to see another day basically depends on the level of ground game its last users and administrators can muster up. And if it's not enough, it won't.

Virtus junxit mors non separabit! ("Whom virtue unites, death shall not separate.")

Weekly LW Meetups

1 FrankAdamek 10 June 2016 03:16PM

This summary was posted to LW Main on June 10th. The following week's summary is here.

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.

continue reading »

Google Deepmind and FHI collaborate to present research at UAI 2016

23 Stuart_Armstrong 09 June 2016 06:08PM

Safely Interruptible Agents

Oxford academics are teaming up with Google DeepMind to make artificial intelligence safer. Laurent Orseau, of Google DeepMind, and Stuart Armstrong, the Alexander Tamas Fellow in Artificial Intelligence and Machine Learning at the Future of Humanity Institute at the University of Oxford, will be presenting their research on reinforcement learning agent interruptibility at UAI 2016. The conference, one of the most prestigious in the field of machine learning, will be held in New York City from June 25-29. The paper which resulted from this collaborative research will be published in the Proceedings of the 32nd Conference on Uncertainty in Artificial Intelligence (UAI).

Orseau and Armstrong’s research explores a method to ensure that reinforcement learning agents can be repeatedly and safely interrupted by human or automatic overseers. This ensures that the agents do not “learn” about these interruptions, and do not take steps to avoid or manipulate the interruptions. When there are control procedures during the training of the agent, we do not want the agent to learn about these procedures, as they will not exist once the agent is on its own. This is useful for agents that have a substantially different training and testing environment (for instance, when training a Martian rover on Earth, shutting it down, replacing it at its initial location and turning it on again when it goes out of bounds—something that may be impossible once it is alone and unsupervised on Mars), for agents not known to be fully trustworthy (such as an automated delivery vehicle, which we do not want to learn to behave differently when watched), or simply for agents that need continual adjustments to their learnt behaviour. In all cases where it makes sense to include an emergency “off” mechanism, it also makes sense to ensure the agent doesn’t learn to plan around that mechanism.

Interruptibility has several advantages as an approach over previous methods of control. As Dr. Armstrong explains, “Interruptibility has applications for many current agents, especially when we need the agent to not learn from specific experiences during training. Many of the naive ideas for accomplishing this—such as deleting certain histories from the training set—change the behaviour of the agent in unfortunate ways.”

In the paper, the researchers provide a formal definition of safe interruptibility, show that some types of agents already have this property, and show that others can be easily modified to gain it. They also demonstrate that even an ideal agent that tends to the optimal behaviour in any computable environment can be made safely interruptible.
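The formal definitions are in the paper, but one of its results (that standard Q-learning is already safely interruptible, given enough exploration) has a short intuition: Q-learning is off-policy, so its value updates don't depend on which action the possibly-interrupted behaviour policy actually takes next. Below is a toy sketch of that intuition in Python; the two-state world, the interruption rate, and all names are invented for illustration and are not the paper's construction:

    import random

    ACTIONS = ["left", "right"]

    def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
        # Off-policy target: bootstrap from the best action in s_next,
        # regardless of what the (possibly interrupted) policy does there.
        target = r + gamma * max(Q[(s_next, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])

    def step(s, a):
        # Toy two-state world: moving "right" from state 0 pays off.
        if s == 0 and a == "right":
            return 1, 1.0
        return 0, 0.0

    Q = {(s, a): 0.0 for s in (0, 1) for a in ACTIONS}
    s = 0
    for _ in range(10000):
        a = random.choice(ACTIONS)          # exploratory behaviour policy
        if random.random() < 0.3:           # overseer interrupts...
            a = "left"                      # ...and forces a "safe" action
        s_next, r = step(s, a)
        q_update(Q, s, a, r, s_next)
        s = s_next

    # Interruptions changed which states were visited, but not the values
    # the estimates converge toward: "right" still wins in state 0.
    print(Q[(0, "right")] > Q[(0, "left")])  # True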

These results will have implications in future research directions in AI safety. As the paper says, “Safe interruptibility can be useful to take control of a robot that is misbehaving… take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform….” As Armstrong explains, “Machine learning is one of the most powerful tools for building AI that has ever existed. But applying it to questions of AI motivations is problematic: just as we humans would not willingly change to an alien system of values, any agent has a natural tendency to avoid changing its current values, even if we want to change or tune them. Interruptibility and the related general idea of corrigibility, allow such changes to happen without the agent trying to resist them or force them. The newness of the field of AI safety means that there is relatively little awareness of these problems in the wider machine learning community.  As with other areas of AI research, DeepMind remains at the cutting edge of this important subfield.”

On the prospect of continuing collaboration in this field with DeepMind, Stuart said, “I personally had a really illuminating time writing this paper—Laurent is a brilliant researcher… I sincerely look forward to productive collaboration with him and other researchers at DeepMind into the future.” The same sentiment is echoed by Laurent, who said, “It was a real pleasure to work with Stuart on this. His creativity and critical thinking as well as his technical skills were essential components to the success of this work. This collaboration is one of the first steps toward AI Safety research, and there’s no doubt FHI and Google DeepMind will work again together to make AI safer.”

For more information, or to schedule an interview, please contact Kyle Scott at fhipa@philosophy.ox.ac.uk

Morality of Doing Simulations Is Not Coherent [SOLVED, INVALID]

3 SquirrelInHell 07 June 2016 02:34AM

Edit:


I consider this solved and no longer stand behind this or similar arguments. The answer given by hairyfigment (thank you!) is simple: combine the information-theoretic approach (as advocated by Eliezer) with admitting degrees of consciousness to be real values (not just is or isn't).

Thank you again hairyfigment for dispelling my confusion.


Original post:

 

Consider the following scenario:

1. I have technical capabilities that allow me to simulate you and your surroundings with high enough accuracy that the "you" inside my simulation behaves as the real "you" would (at least for some time).

1b. I take an accurate snapshot "S" of your current state t=0 (with enough surroundings to permit simulation).

2. I want to simulate your behavior in various situations as a part of my decision process. Some of my possible actions involve very unpleasant consequences for you (such as torturing you for 1 hour), but I'm extremely unlikely to choose them.

3. For each action A from the set of my possible actions I do the following:

3a. Update S with my action A at t=0. Let's call this data A(S).

3b. Simulate physics in the world represented by A(S), until I reach a time t=+1 hour. Denote this by A(S)_1.

3c. Evaluate result of the simulation by computing value(A(S)_1), which is a single 32-bit floating point number. Discard all the other data.

A large portion of transhumanists might stand behind the following statement:


What I do in step 1 is acceptable, but step 3 (or in particular, step 3b) is "immoral" or "wrong". You feel that the simulated you's suffering matters, and you'd act to stop me from doing simulations of torture etc.


If you disagree with the above statement, what I write below doesn't apply to you. Congratulations.

However, if you are in the group (probably a majority) that agrees with the "wrongness" of my doing the simulation, consider the changed version of my actions described below.

(Note that for simplicity of presentation, I assume that the operator calculating future-time snapshots is linear, and therefore that I can use addition to combine two snapshots, and later subtract to get one of the components back. If you think the operation of directly adding snapshots is not plausible, feel free to substitute another one - attacking this particular detail does not weaken the reasoning. The same could be done with addition of complex probability amplitudes, which is more exactly true in the sense that we are sure it is properly linear, but then we couldn't avoid a much more sophisticated mechanism that ensures that the two initial snapshots are sufficiently entangled to make the computation on the sum not be trivially splittable along some dimension.)

(Edit: the operation used in the argument can be improved in a number of other ways, including clever ideas like guessing the result and verifying it is correct only in small parts, or with tests such that each of them only gives a correct answer with a small probability p, etc. The point being, we can make the computation seem arbitrarily innocuous and still have it arrive at a correct answer.)

1, 1b & 2. Same as before.

2b. I take an accurate snapshot "R" of a section of the world (of the same size as the one with you) that contains only inanimate objects (such as rocks).

3. For each action A from the set of my possible actions I do the following:

3a. Compute A(S): update S with my action A at t=0.

3b. Compute X = A(S) + R. This is component addition, and the state X no longer represents anything even remotely possible to construct in the physical world. From the point of view of physics, it contains garbage, and no recognizable version of you that could feel or think anything.

3c. Simulate physics in the world represented by R (the snapshot of some rocks), until I reach a time t=+1 hour. Denote the result by R_1.

3d. Run the physics simulation on X as if moving the time forward by 1 hour, obtaining X_1.

3e. Compute value(X_1 - R_1). Discard all the other data.

Note that with the assumption of linear physics, value(X_1 - R_1) is exactly equal to value(A(S)_1).
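Since linearity is doing all the work in that claim, it's easy to check numerically. Here is a minimal sketch in Python, where "physics" is literally a fixed linear map (a random matrix) and value() just reads one coordinate as a 32-bit float; all names are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 8
    physics = rng.normal(size=(n, n))        # stand-in linear time evolution

    def value(state):
        return np.float32(state[0])          # a single 32-bit summary number

    A_of_S = rng.normal(size=n)              # A(S): snapshot updated with action A
    R = rng.normal(size=n)                   # snapshot of some rocks

    direct = value(physics @ A_of_S)         # simulate A(S) for "1 hour"
    X_1 = physics @ (A_of_S + R)             # simulate the garbage state X
    R_1 = physics @ R                        # simulate the rocks alone
    indirect = value(X_1 - R_1)

    print(np.isclose(direct, indirect))      # True: linearity makes them equal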

However, at no point did I do anything that could be described as "simulating you".

I'll leave you to ruminate on this.

Review and Thoughts on Current Version of CFAR Workshop

10 Gleb_Tsipursky 06 June 2016 01:44PM

Outline: I will discuss my background and how I prepared for the workshop, and then how I would have prepared differently if I could go back and have the chance to do it again; I will then discuss my experience at the CFAR workshop, and what I would have done differently if I had the chance to do it again; I will then discuss what my take-aways were from the workshop, and what I am doing to integrate CFAR strategies into my life; finally, I will give my assessment of its benefits and what other folks who attend the workshop might expect to get.


 

Acknowledgments: Thanks to fellow CFAR alumni and CFAR staff for feedback on earlier versions of this post


 

Introduction

 

Many aspiring rationalists have heard about the Center for Applied Rationality, an organization devoted to teaching applied rationality skills to help people improve their thinking, feeling, and behavior patterns. This nonprofit does so primarily through its intense workshops, and is funded by donations and revenue from those workshops. It fulfills its social mission through conducting rationality research and through giving discounted or free workshops to people its staff judge as likely to help make the world a better place, mainly those associated with various Effective Altruist cause areas, especially existential risk.

 

To be fully transparent: even before attending the workshop, I already had a strong belief that CFAR is a great organization and have been a monthly donor to CFAR for years. So keep that in mind as you read my description of my experience (you can become a donor here).


Preparation

 

First, some background about myself, so you know where I’m coming from in attending the workshop. I’m a professor specializing in the intersection of history, psychology, behavioral economics, sociology, and cognitive neuroscience. I discovered the rationality movement several years ago through a combination of my research and attending a LessWrong meetup in Columbus, OH, and so come from a background of both academic and LW-style rationality. Since discovering the movement, I have become an activist in the movement as the President of Intentional Insights, a nonprofit devoted to popularizing rationality and effective altruism (see here for our EA work). So I came to the workshop with some training and knowledge of rationality, including some CFAR techniques.

 

To help myself prepare for the workshop, I reviewed existing posts about CFAR materials, with an eye toward being careful not to assume that the actual techniques match their descriptions in the posts.

 

I also delayed a number of tasks for after the workshop, tying up loose ends. In retrospect, I wish I had not left myself some ongoing tasks to do during the workshop. As part of my leadership of InIn, I coordinate about 50ish volunteers, and I wish I had placed those responsibilities on someone else during the workshop.

 

Before the workshop, I worked intensely on finishing up some projects. In retrospect, it would have been better to get some rest and come to the workshop as fresh as possible.

 

There were some communication snafus with logistics details before the workshop. It all worked out in the end, but I would have told myself in retrospect to get the logistics hammered out in advance to not experience anxiety before the workshop about how to get there.


Experience

 

The classes were well put together, had interesting examples, and provided useful techniques. FYI, my experience at the workshop was that reading these techniques in advance was not harmful, but the techniques in the CFAR classes were quite a bit better than the existing posts about them, so don't assume you can get the same benefits from reading posts as from attending the workshop. So while I was aware of the techniques, the classes definitely had optimized versions of them - maybe because of the “broken telephone” effect, or maybe because CFAR optimized them from previous workshops; not sure. I was glad to learn that CFAR considers the workshop they gave us in May satisfactory enough to scale up their workshops, while still improving their content over time.

 

Just as useful as the classes were the conversations held in between and after the official classes ended. Talking about them with fellow aspiring rationalists and seeing how they were thinking about applying these to their lives was helpful for sparking ideas about how to apply them to my life. The latter half of the CFAR workshop was especially great, as it focused on pairing off people and helping others figure out how to apply CFAR techniques to themselves and how to address various problems in their lives. It was especially helpful to have conversations with CFAR staff and trained volunteers, of whom there were plenty - probably about 20 volunteers/staff for the 50ish workshop attendees.

 

Another super-helpful aspect of the conversations was networking and community building. Now, this may have been more useful to some participants than others, so YMMV. As an activist in the movement, I talked to many folks at the CFAR workshop about promoting EA and rationality to a broad audience. I was happy to introduce some people to EA, with my most positive conversation there being encouraging someone to switch his x-risk efforts from nuclear disarmament to AI safety research as a means of addressing long/medium-term risk, and promoting rationality as a means of addressing short/medium-term risk. Others who were already familiar with EA were interested in ways of promoting it broadly, while some aspiring rationalists expressed enthusiasm over becoming rationality communicators.

 

Looking back at my experience, I wish I was more aware of the benefits of these conversations. I went to sleep early the first couple of nights, and I would have taken supplements to enable myself to stay awake and have conversations instead.


Take-Aways and Integration

 

The aspects of the workshop that I think will help me most are what CFAR staff called “5-second” strategies - brief tactics and techniques that can be executed in 5 seconds or less and address various problems. The stuff we learned at the workshop that I was already familiar with requires some time to learn and practice, such as Trigger Action Plans, Goal Factoring, Murphyjitsu, and Pre-Hindsight, often with pen and paper as part of the work. However, with sufficient practice, one can develop brief techniques that mimic various aspects of the more thorough techniques, and apply them quickly to in-the-moment decision-making.

 

Now, this doesn’t mean that the longer techniques are not helpful. They are very important, but they are things I was already generally familiar with, and already practice. The 5-second versions were more of a revelation for me, and I anticipate will be more helpful for me as I did not know about them previously.

 

Now, CFAR does a very nice job of helping people integrate the techniques into daily life, as this is a common failure mode of CFAR attendees, with them going home and not practicing the techniques. So they have 6 Google Hangouts with CFAR staff and all attendees who want to participate, 4 one-on-one sessions with CFAR trained volunteers or staff, and they also pair you with one attendee for post-workshop conversations. I plan to take advantage of all these, although my pairing did not work out.

 

For integration of CFAR techniques into my life, I found the CFAR strategy of “Overlearning” especially helpful. Overlearning refers to trying to apply a single technique intensely for a while to all aspects of one’s activities, so that it gets internalized thoroughly. I will first focus on overlearning Trigger Action Plans, following the advice of CFAR.

 

I also plan to teach CFAR techniques in my local rationality dojo, as teaching is a great way to learn, naturally.

 

Finally, I plan to integrate some CFAR techniques into Intentional Insights content, at least the more simple techniques that are a good fit for the broad audience with which InIn is communicating.


Benefits

 

I have a strong probabilistic belief that having attended the workshop will improve my capacity to be a person who achieves my goals for doing good in the world. I anticipate I will be able to figure out better whether the projects I am taking on are the best uses of my time and energy. I will be more capable of avoiding procrastination and other forms of akrasia. I believe I will be more capable of making better plans, and acting on them well. I will also be more in touch with my emotions and intuitions, and be able to trust them more, as I will have more alignment among different components of my mind.

 

Another benefit is meeting the many other people at CFAR who have similar mindsets. Here in Columbus, we have a flourishing rationality community, but it’s still relatively small. Getting to know 70ish people, attendees and staff/volunteers, passionate about rationality was a blast. It was especially great to see people who were involved in creating new rationality strategies, something that I am engaged in myself in addition to popularizing rationality - it’s really heartening to envision how the rationality movement is growing.

 

These benefits should resonate strongly with those who are aspiring rationalists, but they are really important for EA participants as well. I think one of the best things that EA movement members can do is study rationality, and it’s something we promote to the EA movement as part of InIn’s work. What we offer is articles and videos, but coming to a CFAR workshop is a much more intense and cohesive way of getting these benefits. Imagine all the good you can do for the world if you are better at planning, organizing, and enacting EA-related tasks. Rationality is what has helped me and other InIn participants make the major impact that we have been able to make, and there are a number of EA movement members who have rationality training and who reported similar benefits. Remember, as an EA participant, you can likely get a scholarship with partial or full coverage of the regular $3900 price of the workshop, as I did myself when attending it, and you are highly likely to be able to save more lives as a result of attending the workshop over time, even if you have to pay some costs upfront.

 

Hope these thoughts prove helpful to you all, and please contact me at gleb@intentionalinsights.org if you want to chat with me about my experience.

 

Counterfactual Mugging Alternative

-1 wafflepudding 06 June 2016 06:53AM

Edit as of June 13th, 2016: I no longer believe this to be easier to understand than traditional CM, but stand by the rest of it. Minor aesthetic edits made.

First post on the LW discussion board. Not sure if something like this has already been written, need your feedback to let me know if I’m doing something wrong or breaking useful conventions.

An alternative to the counterfactual mugging, since people often require it explained a few times before they understand it -- this one I think will be faster for most to comprehend because it arose organically, not seeming specifically contrived to create a dilemma between decision theories:

Pretend you live in a world where time travel exists and Time can create realities with acausal loops, ordinary linear chronology, or another structure, so long as there is no paradox -- only self-consistent timelines can be generated.

In your timeline, there are prophets. A prophet (known to you to be honest and truly prophetic) tells you that you will commit an act which seems horrendously imprudent or problematic. It is an act whose effect will be on the scale of losing $10,000; an act you never would have taken ordinarily. But fight the prophecy all you want, it is self-fulfilling and you definitely live in a timeline where the act gets committed. However, if it weren't for the prophecy being immutably correct, you could have spent $100 and, even having heard the prophecy (even having believed it would be immutable), the probability of you taking that action would be reduced by, say, 50%. So fighting the prophecy by spending $100 would mean that there were 50% fewer self-consistent (possible) worlds where you lost the $10,000, because it's just much less likely for you to end up taking that action if you fight it rather than succumbing to it.

You may feel that there would be no reason to spend $100 averting a decision that you know you're going to make, and see no reason to care about counterfactual worlds where you don't lose the $10,000. But the fact of the matter is that if you could have precommitted to fight the choice, you would have, because in the worlds where that prophecy could have been presented to you, you'd be decreasing the average disutility by (($10,000)(.5 probability) - ($100) = $4,900). Not following a precommitment that you would have made to prevent the exact situation you're now in, because you wouldn't have followed the precommitment, seems an obvious failure mode, but UDT successfully does the calculation shown above and tells you to fight the prophecy. The simple fact that should tell causal decision theorists that converting to UDT is the causally optimal decision is that Updateless Decision Theorists actually do better on average than CDT proponents.
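For concreteness, here is the expected-value arithmetic behind that $4,900 figure, spelled out as a short script using the numbers from the thought experiment:

    # Fighting the prophecy costs $100 and halves the probability
    # of the $10,000-scale loss.
    loss, cost = 10_000, 100

    ev_succumb = -1.0 * loss                # -10000: the loss is certain
    ev_fight = -(0.5 * loss) - cost         # -5100: half the loss risk, plus the fee

    print(ev_fight - ev_succumb)            # 4900: average gain from fighting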

 

(You may assume also that your timeline is the only timeline that exists, so as not to further complicate the problem by your degree of empathy with your selves from other existing timelines.)

Open Thread June 6 - June 12, 2016

2 Elo 06 June 2016 04:21AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Weekly LW Meetups

1 FrankAdamek 03 June 2016 07:07PM

Rationality Quotes June 2016

2 bbleeker 03 June 2016 07:51AM

Another month, another rationality quotes thread. The rules are:

  • Provide sufficient information (URL, title, date, page number, etc.) to enable a reader to find the place where you read the quote, or its original source if available. Do not quote with only a name.
  • Post all quotes separately, so that they can be upvoted or downvoted separately. (If they are strongly related, reply to your own comments. If strongly ordered, then go ahead and post them together.)
  • Do not quote yourself.
  • Do not quote from Less Wrong itself, HPMoR, Eliezer Yudkowsky, or Robin Hanson. If you'd like to revive an old quote from one of those sources, please do so here.
  • No more than 5 quotes per person per monthly thread, please.

Thoughts on hacking aromanticism?

9 hg00 02 June 2016 11:52AM

Several years ago, Alicorn wrote an article about how she hacked herself to be polyamorous.  I'm interested in methods for hacking myself to be aromantic.  I've had some success with this, so I'll share what's worked for me, but I'm really hoping you all will chime in with your ideas in the comments.

Motivation

Why would someone want to be aromantic?  There's the obvious time commitment involved in romance, which can be considerable.  This is an especially large drain if you're in a situation where finding suitable partners is difficult, which means most of this time is spent enduring disappointment (e.g. if you're heterosexual and the balance of singles in your community is unfavorable).

But I think an even better way to motivate aromanticism is by referring you to this Paul Graham essay, The Top Idea in Your Mind.  To be effective at accomplishing your goals, you'd like to have your goals be the most interesting thing you have to think about.  I find it's far too easy for my love life to become the most interesting thing I have to think about, for obvious reasons.

Subproblems

After thinking some, I came up with a list of 4 goals people try to achieve through engaging in romance:

  1. Companionship.
  2. Sexual pleasure.
  3. Infatuation (also known as new relationship energy).
  4. Validation.  This one is trickier than the previous three, but I think it's arguably the most important.  Many unhappy singles have friends they are close to, and know how to masturbate, but they still feel lousy in a way people in post-infatuation relationships do not.  What's going on?  I think it's best described as a sort of romantic insecurity.  To test this out, imagine a time when someone you were interested in was smiling at you, and contrast that with the feeling of someone you were interested in turning you down.  You don't have to experience companionship or sexual pleasure from these interactions for them to have a major impact on your "romantic self-esteem".  And in a culture where singlehood is considered a failure, it's natural for your "romantic self-esteem" to take a hit if you're single.

To remove the need for romance, it makes sense to find quicker and less distracting ways to achieve each of these 4 goals.  So I'll treat each goal as a subproblem and brainstorm ideas for solving it.  Subproblems 1 through 3 all seem pretty easy to solve:

  1. Companionship: Make deep friendships with people you're not interested in romantically.  I recommend paying special attention to your coworkers and housemates, since you spend so much time with them.
  2. Sexual pleasure: Hopefully you already have some ideas on pleasuring yourself.
  3. Infatuation: I see this as more of a bonus than a need to be met.  There are lots of ways to find inspiration, excitement, and meaning in life outside of romance.

Subproblem 4 seems trickiest.

Hacking Romantic Self-Esteem

I'll note that what I'm describing as "validation" or "romantic self-esteem" seems closely related to abundance mindset.  But I think it's useful to keep them conceptually distinct.  Although alieving that there are many people you could date is one way to boost your romantic self-esteem, it's not necessarily the only strategy.

The most important thing to keep in mind about your romantic self-esteem is that it's heavily affected by the availability heuristic.  If I was encouraged by someone in 2015, that won't do much to assuage the sting of discouragement in 2016, except maybe if it happens to come to mind.

Another clue is the idea of a sexual "dry spell".  Dry spells are supposed to get worse the longer they go on... which simply means that if your mind doesn't have a recent (available!) incident of success to latch on, you're more likely to feel down.

So to increase your romantic self-esteem, keep a cherished list of thoughts suggesting your desirability is high, and don't worry too much about thoughts suggesting your desirability is low.  Here's a freebie: If you're reading this post, it's likely that you are (or will be) quite rich by global standards.  I hear rich people are considered attractive.  Put it on your list!

Other ideas for raising your romantic self-esteem:

  • Take steps to maintain your physical appearance, so you will appear marginally more desirable to yourself when you see yourself in the mirror.
  • Remind yourself that you're not a victim if you're making a conscious choice to prioritize other aspects of your life.  Point out to yourself things you could be doing to find partners that you're choosing not to do.

I think this is a situation where prevention works better than cure--it's best to work pre-emptively to keep your romantic self-esteem high.  In my experience, low romantic self-esteem leads to unproductive coping mechanisms like distracting myself from dark thoughts by wasting time on the Internet.

The other side of the coin is avoiding hits to your romantic self-esteem.  Here's an interesting snippet from a Quora answer I found:

In general specialized contemplative monastic organisations that tend to separate from the society tend to be celibate while ritual specialists within the society (priests) even if expected to follow a higher standard of ethical and ritual purity tend not to be.

So, it seems like it's easier for heterosexual male monks to stay celibate if they are isolated on a monastery away from women.  Without any possible partners around, there's no one to reject (or distract) them.  Participating in a monastic culture in which long-term singlehood is considered normal & desirable also removes a romantic self-esteem hit.

Retreating to a monastery probably isn't practical, but there may be simpler things you can do.  I recently switched from lifting weights to running in order to get exercise, and I found that running is better for my concentration because I'm not distracted by attractive people at the gym.

It's not supposed to be easy

I shared a bunch of ideas in this post.  But my overall impression is that instilling aromanticism is a very hard problem.  Based on my research, even monks and priests have a difficult time of things.  That's why I'm curious to hear what the Less Wrong community can come up with.  Side note: when possible, please try to make your suggestions gender-neutral so we can avoid gender-related flame wars.  Thanks!

June 2016 Media Thread

2 ArisKatsaris 01 June 2016 10:29AM

This is the monthly thread for posting media of various types that you've found that you enjoy. Post what you're reading, listening to, watching, and your opinion of it. Post recommendations to blogs. Post whatever media you feel like discussing! To see previous recommendations, check out the older threads.

Rules:

  • Please avoid downvoting recommendations just because you don't personally like the recommended material; remember that liking is a two-place word. If you can point out a specific flaw in a person's recommendation, consider posting a comment to that effect.
  • If you want to post something that (you know) has been recommended before, but have another recommendation to add, please link to the original, so that the reader has both recommendations.
  • Please post only under one of the already created subthreads, and never directly under the parent media thread.
  • Use the "Other Media" thread if you believe the piece of media you want to discuss doesn't fit under any of the established categories.
  • Use the "Meta" thread if you want to discuss about the monthly media thread itself (e.g. to propose adding/removing/splitting/merging subthreads, or to discuss the type of content properly belonging to each subthread) or for any other question or issue you may have about the thread or the rules.

rationalfiction.io - publish, discover, and discuss rational fiction

6 rayalez 31 May 2016 12:02PM

Hey, everyone! I want to share with you a project I've been working on for a while - http://rationalfiction.io.

I want it to become the perfect place to publish, discover, and discuss rational fiction.

We already have a lot of awesome stories, and I invite you to join and post more! =)

Open Thread May 30 - June 5, 2016

2 Elo 30 May 2016 04:51AM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Cognitive Biases Affecting Self-Perception of Beauty

0 Bound_up 29 May 2016 06:32PM

I wrote an article for mass consumption on the biases which are at play in a hot-button social issue, namely, how people feel about their beauty.

 

skepticexaminer.com/2016/05/dont-think-youre-beautiful/

 

and

 

intentionalinsights.org/why-you-dont-think-youre-beautiful

 

It's supposed to be interesting to people who wouldn't normally care a whit for correcting their biases for the sake of epistemology.

 

EDIT: Text included below

 

 

Long-time friends Amy, Bailey, and Casey are having their weekly lunch together when Amy says “I don’t think I’m very beautiful.”


Have you ever seen something like this? Regardless, before moving on, try to guess what will happen next. What kind of future would you predict?


I’ve often seen such a scene. My experience would lead me to predict... 


“Of course you’re beautiful!” they reassure her. Granted, people sometimes say that just to be nice, but I’ll be talking about those times when they are sincere.


How can Bailey and Casey see Amy as beautiful when Amy doesn’t? Some great insight into beauty, perhaps?


Not at all! Consider what typically happens next.


“I only wish I was as beautiful as you, Amy,” Bailey reassures her.


The usual continuation of the scene reveals that Bailey is just as self-conscious as Amy is, and Casey’s probably the same. All people have this natural tendency, to judge their own appearance more harshly than they do others’.


So what’s going on?


If you were present, I’d ask you to guess what causes us to judge ourselves this way. Indeed, I have so asked from time to time, and found most people blame the same thing.


Think about it; what does everybody blame when people are self-conscious about their beauty?


We blame…


The media! The blasted media and the narrow standard of beauty it imposes.
There are two effects; the media is responsible for only one, and not the one we’re talking about.


Research suggests that the media negatively affects how we judge both ourselves and others. We tend to focus on how it affects our perception of ourselves, but the media affects how we judge others, too. More to the point, that’s not the effect we were talking about!


We were talking about a separate effect, where people tend to judge themselves one way and everyone else another. Is it proper to blame the media for this also? 


Picture what would happen if the media were to blame.


First, everyone assimilates the media’s standard of beauty. They judge beauty by that standard. That’s the theory. So far so good.


What does this cause? They look themselves over in the mirror. They see that they don’t fit the standard. Eventually they sigh, and give up. “I’m not beautiful,” they think.


Check. The theory fits.


But what happens when they look at other people?


Bailey looks at Amy. Amy doesn’t (as hardly anybody does) fit the standard of beauty. So…Bailey concludes that Amy isn’t beautiful?


That’s not what happens! Amy looks fine to Bailey, and vice versa! The media effect doesn’t look like this one. We might get our standard of beauty from the media, but the question remains, why do we hold ourselves to it more than we do everyone else?


We need something that more fully explains why Amy judges herself one way and everyone else another, something mapping the territory of reality.


The Explanation


A combination of two things.


1. Amy’s beauty is very important to her.
2. She knows her looks better than others do.


Amy’s beauty affects her own life. Other people’s beauty doesn’t affect her life nearly as much.


Consider how Amy looks at other people. She sees their features and figure, whatever good and bad parts stand out, a balanced assessment of their beauty. She has no special reason to pay extra attention to their good or bad parts, no special reason to judge them any particular way at all. At the end of the day, it just doesn’t much matter to her how other people look.


Contrast that to how much her appearance matters to her. How we look affects how people perceive us, how we perceive ourselves, how we feel walking down the street. Indeed, researchers have found that the more beautiful we are, the more we get paid, and the more we are perceived as honest and intelligent.


Like for most people, Amy’s beauty is a big deal to her. So which does she pay attention to, the potential gains of highlighting her good points, or the potential losses of highlighting her bad points? Research suggests that she will focus on losses. It’s called loss aversion.


Reason 1: Loss Aversion


We hate losing even more than we love winning. Loss aversion is when we value the same thing more or less based on whether we're going to gain it or risk losing it.


Say someone gives you $1000. They say you can either lose $400 of it now, or try to hold on to it all, 50-50 odds to keep it all or lose it all. What would you do?


Well, studies show about 61% of people in this situation choose to gamble on keeping everything over a sure loss.


Then suppose you get a second deal. You can either keep $600 of your $1000 now, or you can risk losing it all, 50-50 odds again. What would you do?


People tend to like keeping the $600 more in this deal, only 43% tend to gamble.


Do you see the trick?


Losing $400 out of $1000 is the same thing as keeping $600 out of $1000! So why do people like the “keeping” option over the “losing” option? We just tend to focus on avoiding losses, even if it doesn’t make sense.
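For anyone who wants the trick spelled out, here is a toy sketch writing the two offers as outcome distributions, which shows they are the same deal in different clothes:

    endowment = 1000

    # Deal 1: "lose $400 of it now" vs. the gamble.
    # Deal 2: "keep $600 now" vs. the same gamble.
    sure_after_losing_400 = {endowment - 400: 1.0}
    sure_after_keeping_600 = {600: 1.0}
    gamble = {endowment: 0.5, 0: 0.5}       # identical gamble in both deals

    print(sure_after_losing_400 == sure_after_keeping_600)   # True
    print(sum(x * p for x, p in gamble.items()))             # 500.0, the gamble's average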


Result for Amy? Given the choice to pay attention to what could make her look good, or to what could make her look bad…


Amy carefully checks on all her flaws each time she looks in the mirror. The balanced beauty assessment that Amy graciously grants others is lost when she views herself. She sees herself as less beautiful than everyone else sees her. 


Plus, whatever has your attention seems more important than what you're not paying attention to. It's called attentional bias. It's a natural fact that if you spend most of the time carefully examining your flaws, and only very little time appreciating your good points, the flaws will tend to weigh heaviest in your mind.


Now, the second reason Amy judges her own beauty under a harsher gaze.


Reason 2: Familiarity


Amy doesn’t just have more cause to look at her flaws, she has more ability to do so.
Who knows you like you? If you paid someone to examine flaw after flaw in you, they wouldn’t know where to look! They’d find one, and then hunt for the next one while all the beautiful parts of you kept getting in the way. There’s that balanced assessment we have when we judge each other’s beauty; there’s a limit to how judgmental we can be even if we’re trying!


Indeed, it takes years, a lifetime, even, to build up the blind spots to beauty, and the checklist of flaws Amy knows by heart. She can jump from one flaw to the next and to the next with an impressive speed and efficiency that would be fantastic if it wasn’t all aimed at tearing down the beauty before her.


Your intimate knowledge of your appearance could just as easily let you appreciate your subtle beauties as your subtle flaws, but thanks to loss aversion, your attention is dialed up to ten and stuck on ruthless judgment.


Review


And so it is. Amy’s loss aversion focuses her attention on flaws. This attentional bias makes her misjudge her beauty for the worse, the handiwork of her emotional self. Then her unique intimacy with her appearance lets her unforgiving judgments strike more overwhelmingly and more piercingly than could her worst enemy. Indeed, in this, she is her own worst enemy.


Since others lack our ability to criticize us, and have no reason to pay special attention to our faults, their attention toward us is more balanced: they see only the clearest good and bad things.


The Fix


How can Amy achieve a more natural, balanced view of her beauty? It’s a question which has troubled me at times, as even the most beautiful people I know are so often so down about their looks. How can it be? I’ve often been in that scene offering my assurances, and know well the feeling when my assurances are rejected, and my view of another’s beauty is knocked away and replaced with a gloomier picture. A sense of listless hopelessness advances as I search for a way to show them what I see. How can I say it any better than I already have? How can I make them see...?


If we can avoid the attentional bias on flaws, then we can make up for our loss aversion. We’ll always see ourselves more deeply than most, but we can focus on the good as well as the bad. For every subtle flaw we endure, there is a subtle loveliness we can turn to.


The next time she examines her form and features in the mirror, Amy intentionally switches her attention to appreciating what she likes about herself. She spends as much time on her good points as on her bad. She is beginning to see herself with the balance others naturally see her with.


Anyone can do the same. Balanced attention counters our natural loss aversion, and lets us see ourselves as others already do.


As you practice seeing with new eyes, let the perspective of others remind you what you’re looking for. Allow yourself to accept their perspective of you as valid, and probably more balanced than your own. Your goal to have a balanced perspective may take time, but take comfort in each of the little improvements along the way.


Questions to consider
• What would happen if only the effects of the media were in play without the effects of loss aversion? Or vice versa?
• How can you remember to balance your attention when you look in the mirror?
• What other mistakes might our loss aversion lead us to?
• How else might you achieve a more balanced perspective of yourself?
• Whom do you know that might benefit from understanding these ideas?

How my something to protect just coalesced into being

5 Romashka 28 May 2016 06:21PM

Tl;dr Different people will probably have different answers to the question of how to find the goal & nurture the 'something to protect' feeling, but mine is: your specific working experience is already doing it for you.

What values do other people expect of you?

I think that for many people, their jobs are the most meaningful ways of changing the world (including being a housewife). When you just enter a profession and start sharing your space and time with people who have been in it for a while, you let them shape you, for better or for worse. If the overwhelming majority of bankers are not EA (from the beneficiaries' point of view), it will be hard to be an EA banker. If the overwhelming majority of teachers view the lessons as basically slam dunks (from the students' point of view), it will be hard to be a teacher who revisits past insights with any purpose other than cramming.

So basically, if I want Something to protect, I find a compatible job, observe the people, like something good and hate something bad, and then try to give others like me the chance to do more of the first and less of the second.

I am generalizing from one example... or two...

I've been in a PhD program. I liked being expected to think, being given free advice about some of the possible failures, and knowing other people who don't consider solo expeditions too dangerous. I hated being expected to fail, being denied the chance to change my research topic, and spending half a day home with a cranky kid and then running to meet someone who wasn't going to show up.

Then I became a lab technician & botany teacher in an out-of-school educational facility. I liked being able to show up later on some days, being treated kindly by a dozen unfamiliar people (even if they speak at classroom volume level), being someone who steps in for a chemistry instructor, finds umbrellas, and gives out books from her own library. I hated the condescending treatment of my subject by other teachers, sudden appointments, keys going missing, questions being recycled in high school contests, and the feeling of intrusion upon others' well-structured lessons when I just had to add something (everyone took it in stride).

(...I am going to leave the job, because it doesn't pay well enough & I do want to see my kid on weekdays. It did let me identify my StP, though - a vision of what I want from botany education.)

Background and resolution.

When kids here in Ukraine start studying biology (6th-7th Form), they haven't had any physics or chemistry classes yet, and are at the very start of the algebra and geometry curriculum. (Which makes this a good place to introduce the notion of a phenomenon for the first time.) The main thing one can get out of a botany course is, I think, the notion of ordered, sequential, mathematically describable change. The kids have already observed seasonal changes in weather and vegetation, and they have words to describe their personal experiences - but this goes unused. Instead, they begin with the history of botany (!), proceed to cell structure (!!) and then to bacteria etc. The life cycle of mosses? Try asking them how long any particular stage takes! It all happens on one page, doesn't it?

There are almost no numbers.

There is, frankly, no need for numbers. Understanding the difference between the flowering and the non-flowering plants doesn't require any. There is almost no use for direct observation, either - even of the simplest things, like what will grow in the infusions of different vegetables after a week on the windowsill. There is no science.

And I don't like this.

I want there to be a book of simple, imperfectly posed problems containing as few words and as many pictures as possible. As in, 'compare the areas of the leaves on Day 1 - Day 15. How do they change? What processes underlie it?' etc. And there should be 10 or more leaves per day, so that the child would see that they don't grow equally fast, and that maybe sometimes you can't really tell Day 7 from Day 10.
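
For concreteness, here is a minimal sketch (with made-up growth rates, not real measurements) of the kind of dataset such a problem implies:

```python
# A minimal sketch: 10 leaves measured over 15 days, each growing at its
# own (made-up) rate -- the kind of dataset the proposed exercise implies.
import random

random.seed(0)
rates = [random.uniform(0.05, 0.20) for _ in range(10)]  # per-day growth rate
areas = {day: [(1 + r) ** day for r in rates] for day in range(1, 16)}

for day in (1, 7, 10, 15):
    leaves = areas[day]
    print(f"Day {day:2d}: smallest={min(leaves):.2f}  largest={max(leaves):.2f}")

# The fastest growers on Day 7 already overlap the slowest growers on
# Day 10 -- exactly the ambiguity the children are meant to notice.
```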

And there would be questions like 'given such gradient of densities of stomata on the poplar's leaves from Height 1 to Height 2, will there be any change in the densities of stomata of the mistletoe plants attached at Height 1 and Height 2? Explain your reasoning.' (Actually, I am unsure about this one. Leaf conductance depends on more than stomatal density...)

Conclusion

...Sorry for so many words. One day, my brain just told me [in the voice of Sponge Bob] that this was what I wanted. Subjectively, it didn't use virtue ethics or conscious decisions or anything, just saw a hole in the world order and squashed plugs into it until one kinda fit.

Has it been like this for you?

Anti-reductionism as complementary, rather than contradictory

-2 ImNotAsSmartAsIThinK 27 May 2016 11:17PM

Epistemic Status: confused & unlikely

Author's note: I now believe the central claim of this article is confused, and mostly inaccurate. More precisely (in response to a comment by ChristianKl):

>Whose idea of reductionism are you criticising? I think your post could get more useful by being more clear about the idea you want to challenge.

I think this is the closest I get to having a "Definition 3.4.1" in my post:

"...the other reductionism I mentioned, the 'big thing = small thing + small thing' one..."

Essentially, the claim is that non-reductionist explanations of reality aren't always *wrong*.

The confusion, which I realized elsewhere in the thread, is that I conflated 'historical explanation' with 'predictive explanation'. A good predictive explanation will almost always be reductionist because, as it says on the tin, big things are made of smaller things. Good historical explanations, though, will be contra-reductionist: they explain a phenomenon in terms of its relation to its environment. Consider evolution: genes seem to be explained non-reductionistically because their presence or absence is determined by their effect on the environment, i.e. whether they are fit, so the explanation for how they got there necessarily includes complex things, because those complex things cause it.

>Apart from that I don't know what you mean with theory in "Reductionism is a philosophy, not a theory." As a result on using a bunch of terms where I don't know exactly what you mean it's hard to follow your argument.

Artifact of confusion: if contra-reductionism is a valid platform for explanation, then the value of reductionism isn't constative -- that is, it isn't about whether it's true or false, but about something at the meta-level rather than the object level. The antecedent is no longer believed, so now I do not believe the consequent.

The conceit I had in calling it a philosophy, or more accurately a perspective, is essentially that you have a dataset, and you can apply a 'reductionist' filter to it to get reductionist explanations and a 'contra-reductionist' filter to get contra explanations. This was a confusion, and it only seemed reasonable because I was treating the two types of explanation -- historical and predictive -- as somehow equivalent, which I now know to be mistaken.

 

Reductionism is usually thought of as the assertion that the sum of the parts equals the whole. Or, a bit more polemically, that reductionist explanations are more meaningful, proper, or [insert descriptor laced with positive affect]. It's certainly appealing; you could even say it seems reality prefers these types of explanation. The facts of biology can be attributed to the effects of chemistry, the reactions of chemistry can be attributed to the interplay of atoms, and so on.

But this is conflating what is seen with the perspective itself; "I see a jelly donut, therefore I am a jelly donut" is not a valid inference. Reductionism is a way of thinking about facts, but it is not the facts themselves. Reductionism is a philosophy, not a theory. The closest thing to a testable prediction it makes is what could be termed an anti-prediction.

Another confusion concerns the alternatives to reductionism. The salient instance of anti-reduction tends to be some holist quantum spirituality woo, but I contend this is more of a weak man than anything. To alleviate any confusion, I'll just refer to my proposed notion as 'contra-reductionism'.

Earlier, I mentioned that reductionism makes no meaningful predictions. To clarify this, I'll distinguish a kind of diminutive motte of reductionism which may or may not actually exist outside my own mind (and which truly is just a species of causality, broadly construed). In broad strokes, this reductionism 'reduces' a phenomenon to the sum of its causes, as opposed to its parts. This is the kind of reductionism that treats evolution as a reductionist explanation; indeed it treats almost any model which isn't strictly random as 'reductionist'. The other referent would be reductionism as the belief that "big things are made of smaller things, and complex things are made of simpler things".

It is the former kind of reductionism that makes what I labeled an anti-prediction. The core of this argument is simply that this reductionism is about causality; specifically, it qualifies what types of causes should even be considered meaningful, or well-founded, or simply worth thinking about. If you broaden the net sufficiently, causality is a concept which even makes sense to apply to mathematical abstractions completely unrooted in any kind of time. That is, the interventionist account of causality essentially boils it down to 'what levers could we have pulled to make something not happen', which translates perfectly to maths; see, for instance, reductio ad absurdum arguments.

But I digress. This diminutive reductionism is simply the belief that things can be reduced to their causes, which is on par with defining transhumanism as 'simplified humanism' in the category of useless philosophical mottes. In short, it is quite literally an assertion of no substance, and isn't even worth giving a name.

Now that I've finished attacking straw men, the other reductionism I mentioned, the 'big thing = small thing + small thing' one, is also flawed, albeit useful nonetheless.

This can be illustrated by the example of evolution I mentioned. An evolutionary explanation is actually anti-reductionist: it explains the placement of nucleotides in terms of mathematics like inclusive genetic fitness and complexities like population ecology. Put bluntly, there is little object-level difference between explaining gene sequences with evolution and explaining weather with pantheons of gods (there is a meta-level difference, i.e. one is accurate). Put less controversially, this is explicitly non-reductionistic: relatively simple things (the genetic sequence of a creature) are explained in the language of things far more complex (population and environment dynamics over the course of billions of years). If this is your reductionism, all it does is encapsulate the ontology of universe-space, or more evocatively, it's a logic that doesn't -- couldn't -- tell you where you live, because it doesn't change wherever you may go.

Another situation where reductionism and contra-reductionism give different answers is an example cribbed from David Deutsch. It's possible to set up dominoes so that they compute an algorithm which decides the primality of 631. How would you explain a positive result?

The reductionist explanation is approximately "this domino remains standing because the one behind it didn't fall over", and so on, with variations such as "that domino didn't fall over because the one behind it was knocked over sideways". The contra-reductionist explanation is "that domino didn't fall over because 631 is prime". Each one is 'useful' depending on whether you are concerned with the mechanics of the domino computer or with the theory.
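
For concreteness, here is a minimal sketch (mine, not Deutsch's) of the same computation as a trial-division loop:

```python
# A minimal sketch (mine, not Deutsch's): the computation the dominoes
# perform, written as trial division.

def is_prime(n: int) -> bool:
    """Trial division -- the 'mechanics of the domino computer'."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        # Reductionist level: each iteration is one domino falling (or not).
        if n % d == 0:
            return False
        d += 1
    return True

# Contra-reductionist level: the last domino stands because 631 is prime.
print(is_prime(631))  # True
```

Stepping through the loop gives you the reductionist story; summarizing the return value gives you the contra-reductionist one.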

You might detect something in these passages -- that while I slough off any pretense of reductionism, glorious (philosophical) materialism remains a kind of true north in my analysis. This is my thesis. My contra-reductionism isn't non-materialistic; it's merely a perspective inversion of the sort highlighted by a figure/ground illusion. Reductionism defines -- reduces -- objects by pointing to their constituents. A mechanism functions because its components function. A big thing made of small things. Contra-reductionism does the opposite: it defines objects by their impact on other objects, "[A] tree is only a tree in the shade it gives to the ground below, to the relationship of wind to branch and air to leaf." I don't mean this in a spiritual way, naturally (no pun intended). I am merely defining objects externally rather than internally. At the core, the rose is still a rose, the sum is still normality.

If I had to give a short, pithy summation of this post, the core is simply that, like all systematized notions of truth or meaningfulness, reductionism collapses in degenerate cases where it fails to be useful or to give the right answer. Contra-reductionism isn't an improvement or a replacement, but an alternative formulation in a conceptual monoculture, one which happens to give the right answer sometimes.

Weekly LW Meetups

1 FrankAdamek 27 May 2016 03:39PM

This summary was posted to LW Main on May 27th. The following week's summary is here.

Irregularly scheduled Less Wrong meetups are taking place in:

The remaining meetups take place in cities with regular scheduling, but involve a change in time or location, special meeting content, or simply a helpful reminder about the meetup:

Locations with regularly scheduled meetups: Austin, Berlin, Boston, Brussels, Buffalo, Canberra, Columbus, Denver, Kraków, London, Madison WI, Melbourne, Moscow, New Hampshire, New York, Philadelphia, Research Triangle NC, San Francisco Bay Area, Seattle, Sydney, Tel Aviv, Toronto, Vienna, Washington DC, and West Los Angeles. There's also a 24/7 online study hall for coworking LWers and a Slack channel for daily discussion and online meetups on Sunday night US time.


LINK: Performing a Failure Autopsy

1 fowlertm 27 May 2016 02:21PM

In which I discuss the beginnings of a technique for learning from certain kinds of failures more effectively:

"What follows is an edited version of an exercise I performed about a month ago following an embarrassing error cascade. I call it a ‘failure autopsy’, and on one level it’s basically the same thing as an NFL player taping his games and analyzing them later, looking for places to improve.

But the aspiring rationalist wishing to do something similar faces a more difficult problem, for a couple of reasons:

First, the movements of a mind can’t be seen in the same way the movements of a body can, meaning a different approach must be taken when doing granular analysis of mistaken cognition.

Second, learning to control the mind is simply much harder than learning to control the body.

And third, to my knowledge, nobody has really even tried to develop a framework for doing with rationality what an NFL player does with football, so someone like me has to pretty much invent the technique from scratch on the fly.  

I took a stab at doing that, and I think the result provides some tantalizing hints at what a more mature, more powerful version of this technique might look like. Further, I think it illustrates the need for what I’ve been calling a “Dictionary of Internal Events”, or a better vocabulary for describing what happens between your ears."

LINK: Quora brainstorms strategies for containing AI risk

5 Mass_Driver 26 May 2016 04:32PM

In case you haven't seen it yet, Quora hosted an interesting discussion of different strategies for containing / mitigating AI risk, boosted by a $500 prize for the best answer. It attracted sci-fi author David Brin, U. Michigan professor Igor Markov, and several people with PhDs in machine learning, neuroscience, or artificial intelligence. Most people from LessWrong will disagree with most of the answers, but I think the article is useful as a quick overview of the variety of opinions that ordinary smart people have about AI risk.

https://www.quora.com/What-constraints-to-AI-and-machine-learning-algorithms-are-needed-to-prevent-AI-from-becoming-a-dystopian-threat-to-humanity
