Irrationality Quotes August 2016

5 PhilGoetz 01 August 2016 07:12PM

Rationality quotes are self-explanatory.  Irrationality quotes often need some context and explication, so they would break the flow in Rationality Quotes.

2016 LessWrong Diaspora Survey Analysis: Part Three (Mental Health, Basilisk, Blogs and Media)

15 ingres 25 June 2016 03:40AM

2016 LessWrong Diaspora Survey Analysis

Overview


Mental Health

We decided to move the Mental Health section earlier in the survey this year so that the data could inform accessibility decisions.

LessWrong Mental Health As Compared To Base Rates In The General Population
Condition | Base Rate | LessWrong Rate | LessWrong Self dx Rate | Combined LW Rate | Base/LW Rate Spread | Relative Risk
Depression | 17% | 25.37% | 27.04% | 52.41% | +8.37 | 1.492
Obsessive Compulsive Disorder | 2.3% | 2.7% | 5.6% | 8.3% | +0.4 | 1.173
Autism Spectrum Disorder | 1.47% | 8.2% | 12.9% | 21.1% | +6.73 | 5.578
Attention Deficit Disorder | 5% | 13.6% | 10.4% | 24% | +8.6 | 2.719
Bipolar Disorder | 3% | 2.2% | 2.8% | 5% | -0.8 | 0.733
Anxiety Disorder(s) | 29% | 13.7% | 17.4% | 31.1% | -15.3 | 0.472
Borderline Personality Disorder | 5.9% | 0.6% | 1.2% | 1.8% | -5.3 | 0.101
Schizophrenia | 1.1% | 0.8% | 0.4% | 1.2% | -0.3 | 0.727
Substance Use Disorder | 10.6% | 1.3% | 3.6% | 4.9% | -9.3 | 0.122

Base rates are taken from Wikipedia; US rates were favored over global rates where immediately available.
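
As a sanity check on the table, the derived columns follow from the raw rates by simple arithmetic. Here is a minimal sketch (numbers copied from the table above; this is illustrative, not the script used to generate it):

```python
# A minimal sketch of how the table's derived columns follow from the raw rates.
# Values are (base rate, LW diagnosed rate, LW self-diagnosed rate), all in percent.
rates = {
    "Depression":                 (17.0, 25.37, 27.04),
    "Autism Spectrum Disorder":   (1.47, 8.2, 12.9),
    "Attention Deficit Disorder": (5.0, 13.6, 10.4),
}

for condition, (base, diagnosed, self_dx) in rates.items():
    combined = diagnosed + self_dx    # Combined LW Rate
    spread = diagnosed - base         # Base/LW Rate Spread (clinical dx only)
    relative_risk = diagnosed / base  # Relative Risk (clinical dx only)
    print(f"{condition}: combined={combined:.2f}% "
          f"spread={spread:+.2f} RR={relative_risk:.3f}")
```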

Accessibility Suggestions

So of the conditions we asked about, LessWrongers are at significantly elevated risk for three of them: Autism, ADHD, and Depression.

LessWrong probably doesn't need to concern itself with being more accessible to those with autism as it likely already is. Depression is a complicated disorder with no clear interventions that can be easily implemented as site or community policy. It might be helpful to encourage looking more at positive trends in addition to negative ones, but the community already seems to do a fairly good job of this. (We could definitely use some more of it though.)

Attention Deficit Disorder - Public Service Announcement

That leaves ADHD, which we might be able to do something about, starting with this:

A lot of LessWrong stuff ends up falling into the same genre as productivity advice or 'self help'. If you have trouble getting yourself to work, and find yourself reading these things but completely unable to implement them, it's entirely possible that you have a mental health condition which impacts your executive function.

The best overview I've been able to find on ADD is this talk from Russell Barkley.

30 Essential Ideas For Parents

Ironically enough, this is a long talk, over four hours in total. Barkley is an entertaining speaker and the talk is absolutely fascinating. If you're even mildly interested in the subject I wholeheartedly recommend it. Many people who have ADHD just assume that they're lazy, or not trying hard enough, or just haven't found the 'magic bullet' yet. It never even occurs to them that they might have it because they assume that adult ADHD looks like childhood ADHD, or that ADHD is a thing that psychiatrists made up so they can give children powerful stimulants.

ADD is real, and if you're in the demographic that takes this survey there's a decent chance you have it.

Attention Deficit Disorder - Accessibility

So with that in mind, is there anything else we can do?

Yes, write better.

Scott Alexander has written a blog post with writing advice for non-fiction, and the interesting thing about it is just how much of the advice is what I would tell you to do if your audience has ADD.

  • Reward the reader quickly and often. If your prose isn't rewarding to read it won't be read.

  • Make sure the overall article has good sectioning and indexing, people might be only looking for a particular thing and they won't want to wade through everything else to get it. Sectioning also gives the impression of progress and reduces eye strain.

  • Use good data visualization to compress information and take away mental effort where possible. Take for example the condition table above: it saves space and provides additional context. Compared to a long vertical wall of text with a section for each condition, it removes:

    • The extraneous information of how many people said they did not have a condition.

    • The space that would be used by creating a section for each condition. In fact the specific improvement of the table is that it takes extra advantage of space in the horizontal plane as well as the vertical plane.

    And instead of just presenting the raw data, it also adds:

    • The normal rate of incidence for each condition, so that the reader understands the extent to which rates are abnormal or unexpected.

    • Easy comparison between the clinically diagnosed, self diagnosed, and combined rates of the condition in the LW demographic. This preserves the value of the original raw data presentation while also easing the mental arithmetic of how many people claim to have a condition.

    • Percentage spread between the clinically diagnosed and the base rate, which saves the effort of figuring out the difference between the two values.

    • Relative risk between the clinically diagnosed and the base rate, which saves the effort of figuring out how much more or less likely a LessWronger is to have a given condition.

    Add all that together and you've created a compelling presentation that significantly improves on the 'naive' raw data presentation.

  • Use visuals in general, they help draw and maintain interest.

None of these are solely for the benefit of people with ADD. ADD is an exaggerated profile of normal human behavior. Following this kind of advice makes your article more accessible to everybody, which should be more than enough incentive if you intend to have an audience.1

Roko's Basilisk

This year we finally added a Basilisk question! In fact, it kind of turned into a whole Basilisk section. A fairly common question about this year's survey is why the Basilisk section is so large. The basic reason is that asking only one or two questions about it would leave the results open to rampant speculation in one direction or another. By making the section comprehensive and covering every base, we've gotten about as complete a picture as we'd want of the Basilisk phenomenon.

Basilisk Knowledge
Do you know what Roko's Basilisk thought experiment is?

Yes: 1521 73.2%
No but I've heard of it: 158 7.6%
No: 398 19.2%

Basilisk Etiology
Where did you read Roko's argument for the Basilisk?

Roko's post on LessWrong: 323 20.2%
Reddit: 171 10.7%
XKCD: 61 3.8%
LessWrong Wiki: 234 14.6%
A news article: 71 4.4%
Word of mouth: 222 13.9%
RationalWiki: 314 19.6%
Other: 194 12.1%

Basilisk Correctness
Do you think Roko's argument for the Basilisk is correct?

Yes: 75 5.1%
Yes but I don't think its logical conclusions apply for other reasons: 339 23.1%
No: 1055 71.8%

Basilisks And Lizardmen

One of the biggest mistakes I made with this year's survey was not including "Do you believe Barack Obama is a hippopotamus?" as a control question in this section.2 Five percent is just outside of the infamous lizardman constant. This was the biggest survey surprise for me. I thought there was no way that 'yes' could go above a couple of percentage points. As far as I can tell this result is not caused by brigading, but I've by no means investigated the matter so thoroughly that I would rule it out.

Higher?

Of course, we also shouldn't forget to investigate the hypothesis that the number might be higher than 5%. After all, somebody who thinks the Basilisk is correct could skip the questions entirely so they don't face potential stigma. So how many people skipped the questions but filled out the rest of the survey?

Eight people refused to answer whether they'd heard of Roko's Basilisk but went on to answer the depression question immediately after the Basilisk section. This gives us a decent proxy for how many people skipped the section and took the rest of the survey. So if we're pessimistic the number is a little higher, but it pays to keep in mind that there are other reasons to want to skip this section. (It is also possible that people took the survey up until they got to the Basilisk section and then quit so they didn't have to answer it, but this seems unlikely.)

Of course this assumes people are being strictly truthful with their survey answers. It's also plausible that people who think the Basilisk is correct said they'd never heard of it and then went on with the rest of the survey. So the number could in theory be quite large. My hunch is that it's not. I personally know quite a few LessWrongers and I'm fairly sure none of them would tell me that the Basilisk is 'correct'. (In fact I'm fairly sure they'd all be offended at me even asking the question.) Since 5% is one in twenty I'd think I'd know at least one or two people who thought the Basilisk was correct by now.

Lower?

One partial explanation for the surprisingly high rate here is that roughly ten percent of the people who said yes didn't, by their own admission, know what they were saying yes to: eight people said they've heard of the Basilisk but don't know what it is, and that it's correct. The lizardman constant also plausibly explains a significant portion of the yes responses, but that explanation relies on you already having a prior belief that the rate should be low.


Basilisk-Like Danger
Do you think Basilisk-like thought experiments are dangerous?

Yes, I think they're dangerous for decision theory reasons: 63 4.2%
Yes I think they're dangerous for social reasons (eg. A cult might use them): 194 12.8%
Yes I think they're dangerous for decision theory and social reasons: 136 9%
Yes I think they're socially dangerous because they make everybody involved look foolish: 253 16.7%
Yes I think they're dangerous for other reasons: 54 3.6%
No: 809 53.4%

Most people don't think Basilisk-like thought experiments are dangerous at all. Of those who think they are, most think they're socially dangerous as opposed to a raw decision theory threat. The 4.2% figure for a pure decision theory threat is interesting because it lines up with the 5% figure for Basilisk Correctness in the previous question.

P(Decision Theory Danger | Basilisk Belief) = 26.6%
P(Decision Theory And Social Danger | Basilisk Belief) = 21.3%

So of the people who say the Basilisk is correct, only half of them believe it is a decision theory based danger at all. (In theory this could be because they believe the Basilisk is a good thing and therefore not dangerous, but I refuse to lose that much faith in humanity.3)
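
(For the curious, figures like these are plain conditional proportions over paired answers. Below is a minimal sketch of the computation, assuming a per-respondent CSV export; the file name, column names, and answer strings are hypothetical stand-ins, not the survey's actual field names.)

```python
# Sketch: P(danger answer | Basilisk belief) from a per-respondent survey export.
# "survey.csv" and the column/answer names below are hypothetical stand-ins.
import csv

def conditional_rate(rows, given_col, given_val, outcome_col, outcome_val):
    """Proportion of rows matching the outcome among rows matching the condition."""
    matching = [r for r in rows if r[given_col] == given_val]
    if not matching:
        return float("nan")
    hits = sum(1 for r in matching if r[outcome_col] == outcome_val)
    return hits / len(matching)

with open("survey.csv") as f:
    rows = list(csv.DictReader(f))

p = conditional_rate(
    rows, "BasiliskCorrectness", "Yes",
    "BasiliskDanger", "Yes, I think they're dangerous for decision theory reasons")
print(f"P(decision theory danger | Basilisk belief) = {p:.1%}")
```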

Basilisk Anxiety
Have you ever felt any sort of anxiety about the Basilisk?

Yes: 142 8.8%
Yes but only because I worry about everything: 189 11.8%
No: 1275 79.4%

20.6% of respondents have felt some kind of Basilisk Anxiety. It should be noted that the exact wording of the question permits any anxiety, even for a second. And as we'll see in the next question that nuance is very important.

Degree Of Basilisk Worry
What is the longest span of time you've spent worrying about the Basilisk?

I haven't: 714 47%
A few seconds: 237 15.6%
A minute: 298 19.6%
An hour: 176 11.6%
A day: 40 2.6%
Two days: 16 1.05%
Three days: 12 0.79%
A week: 12 0.79%
A month: 5 0.32%
One to three months: 2 0.13%
Three to six months: 0 0.0%
Six to nine months: 0 0.0%
Nine months to a year: 1 0.06%
Over a year: 1 0.06%
Years: 4 0.26%

These numbers provide some pretty sobering context for the previous ones. Of all the people who worried about the Basilisk, 93.8% didn't worry about it for more than an hour. The next 3.65% didn't worry about it for more than a day or two. The next 1.9% didn't worry about it for more than a month and the last .7% or so have worried about it for longer.

Current Basilisk Worry
Are you currently worrying about the Basilisk?

Yes: 29 1.8%
Yes but only because I worry about everything: 60 3.7%
No: 1522 94.5%

Also encouraging. We should expect a small number of people to be worried at this question just because the section is basically the words "Basilisk" and "worry" repeated over and over, so it's probably a bit scary to some people. But these numbers are much lower than the "Have you ever worried" ones and back up the previous inference that Basilisk anxiety is mostly a transitory phenomenon.

One article on the Basilisk asked the question of whether or not it was just a "referendum on autism". It's a good question and now I have an answer for you, as per the table below:

Mental Health Conditions Versus Basilisk Worry
Condition | Worried | Worried But They Worry About Everything | Combined Worry
Baseline (in the respondent population) | 8.8% | 11.8% | 20.6%
ASD | 7.3% | 17.3% | 24.7%
OCD | 10.0% | 32.5% | 42.5%
Anxiety Disorder | 6.9% | 20.3% | 27.3%
Schizophrenia | 0.0% | 16.7% | 16.7%

 

The short answer: Autism raises your chances of Basilisk anxiety, but anxiety disorders and OCD especially raise them much more. Interestingly enough, schizophrenia seems to bring the chances down. This might just be an effect of small sample size, but my expectation was the opposite. (People who are really obsessed with Roko's Basilisk seem to present with schizophrenic symptoms at any rate.)
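
A table like the one above can be produced with a short cross-tabulation. The sketch below assumes a per-respondent CSV export with hypothetical column names; it is illustrative rather than the exact script used for this analysis.

```python
# Sketch: condition-versus-worry rates from a per-respondent export.
# "survey.csv" and all column/answer names are hypothetical stand-ins.
import pandas as pd

df = pd.read_csv("survey.csv")
conditions = ["ASD", "OCD", "AnxietyDisorder", "Schizophrenia"]
worry_col = "BasiliskAnxiety"

table = {}
for cond in conditions:
    have = df[df[cond] == "Yes"]  # respondents reporting this condition
    worried = (have[worry_col] == "Yes").mean()
    worries_about_everything = (
        have[worry_col] == "Yes but only because I worry about everything").mean()
    table[cond] = {"Worried": worried,
                   "Worries About Everything": worries_about_everything,
                   "Combined Worry": worried + worries_about_everything}

print((pd.DataFrame(table).T * 100).round(1))
```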

Before we move on, there's one last elephant in the room to contend with. The philosophical theory underlying the Basilisk is the CEV conception of friendly AI primarily espoused by Eliezer Yudkowsky, which has led many critics to speculate on all kinds of relationships between Eliezer Yudkowsky and the Basilisk. That speculation naturally extends to Eliezer Yudkowsky's Machine Intelligence Research Institute, a project to develop 'Friendly Artificial Intelligence' which does not implement a naive goal function that eats everything else humans actually care about once it's given sufficient optimization power.

The general thrust of these accusations is that MIRI, intentionally or not, profits from belief in the Basilisk. I think MIRI gets picked on enough, so I'm not thrilled about adding another log to the hefty pile of criticism they deal with. However, this is a serious accusation, and it's plausible enough that looking into it is in the public interest.

 

Percentage Of People Who Donate To MIRI Versus Basilisk Belief
Belief | Percentage
Believe It's Incorrect | 5.2%
Believe It's Structurally Correct | 5.6%
Believe It's Correct | 12.0%

Basilisk belief does appear to make you about twice as likely to donate to MIRI. It's important to note, in light of the earlier investigation, that thinking the argument is "structurally correct" makes you about as likely to donate as thinking it's incorrect, implying that those two options mean about the same thing.

 

Sum Money Donated To MIRI Versus Basilisk Belief
Belief | Mean | Median | Mode | Stdev | Total Donated
Believe It's Incorrect | 1365.590 | 100.0 | 100.0 | 4825.293 | 75107.5
Believe It's Structurally Correct | 2644.736 | 110.0 | 20.0 | 9147.299 | 50250.0
Believe It's Correct | 740.555 | 300.0 | 300.0 | 1152.541 | 6665.0

Take these numbers with a grain of salt; it only takes one troll plausibly lying about their income to ruin it for everybody else.

Interestingly enough, if you sum the 'total donated' figures for all three groups, five percent of that sum is about what was donated by the Basilisk-belief group ($6,601 to be exact). So even though the modal and median donations of Basilisk believers are higher, they donate about as much in total as would be naively expected by assuming donations are spread equally across respondents.4
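
For concreteness, the check described above is just this arithmetic (figures copied from the table; a minimal sketch, not the original analysis script):

```python
# The five-percent check from the totals in the table above.
totals = {
    "Believe It's Incorrect": 75107.5,
    "Believe It's Structurally Correct": 50250.0,
    "Believe It's Correct": 6665.0,
}
pot = sum(totals.values())              # 132022.5 dollars donated in total
naive_share = 0.05 * pot                # what 5% of the pot would be
believers_total = totals["Believe It's Correct"]

print(f"5% of the total pot:           ${naive_share:,.2f}")     # ~ $6,601
print(f"Donated by Basilisk believers: ${believers_total:,.2f}")
```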

 

Percentage Of People Who Donate To MIRI Versus Basilisk Worry
Anxiety | Percentage
Never Worried | 4.3%
Worried But They Worry About Everything | 11.1%
Worried | 11.3%

In contrast to the correctness question, merely having worried about the Basilisk at any point in time more than doubles your chances of donating to MIRI. My suspicion is that these people are not, as a general rule, donating because of the Basilisk per se. If you're the sort of person who is even capable of worrying about the Basilisk in principle, you're probably the kind of person who is likely to worry about AI risk in general and donate to MIRI on that basis. This hypothesis is probably unfalsifiable with the survey information I have, because Basilisk-risk is a subset of AI risk: anytime somebody indicates on the survey that they're worried about AI risk, this could be because they're worried about the Basilisk or because they're worried about more general AI risk.

 

Sum Money Donated To MIRI Versus Basilisk Worry
Anxiety | Mean | Median | Mode | Stdev | Total Donated
Never Worried | 1033.936 | 100.0 | 100.0 | 3493.373 | 56866.5
Worried But They Worry About Everything | 227.047 | 75.0 | 300.0 | 438.861 | 4768.0
Worried | 4539.25 | 90.0 | 10.0 | 11442.675 | 72628.0
Combined Worry | | | | | 77396.0

Take these numbers with a grain of salt; it only takes one troll plausibly lying about their income to ruin it for everybody else.

This particular analysis is probably the strongest evidence in the set for the hypothesis that MIRI profits (though not necessarily through any involvement on their part) from the Basilisk. People who worried from an unendorsed perspective donate less on average than everybody else. The modal donation among people who've worried about the Basilisk is ten dollars, which seems like a surefire way to get tortured if we go with the hypothesis that these are people who believe the Basilisk is a real thing and are concerned about it. So this implies that they don't, which supports my earlier hypothesis that people who are capable of feeling anxiety about the Basilisk are also the core demographic that donates to MIRI anyway.

Of course, donors don't need to believe in the Basilisk for MIRI to profit from it. If exposing people to the concept of the Basilisk makes them twice as likely to donate but they don't end up actually believing the argument that would arguably be the ideal outcome for MIRI from an Evil Plot perspective. (Since after all, pursuing a strategy which involves Basilisk belief would actually incentivize torture from the perspective of the acausal game theories MIRI bases its FAI on, which would be bad.)

But frankly this is veering into very speculative territory. I don't think there's an evil plot, nor am I convinced that MIRI is profiting from Basilisk belief in a way that outweighs the resulting lost donations and damage to their cause.5 If anybody would like to assert otherwise I invite them to 'put up or shut up' with hard evidence. The world has enough criticism based on idle speculation and you're peeing in the pool.

Blogs and Media

Since this was the LessWrong diaspora survey, I felt it would be in order to reach out a bit to ask not just where the community is at but what it's reading. I went around to various people I knew and asked them about blogs for this section. However the picks were largely based on my mental 'map' of the blogs that are commonly read/linked in the community with a handful of suggestions thrown in. The same method was used for stories.

Blogs Read

LessWrong
Regular Reader: 239 13.4%
Sometimes: 642 36.1%
Rarely: 537 30.2%
Almost Never: 272 15.3%
Never: 70 3.9%
Never Heard Of It: 14 0.7%

SlateStarCodex (Scott Alexander)
Regular Reader: 1137 63.7%
Sometimes: 264 14.7%
Rarely: 90 5%
Almost Never: 61 3.4%
Never: 51 2.8%
Never Heard Of It: 181 10.1%

[These two results together pretty much confirm the results I talked about in part two of the survey analysis. A supermajority of respondents are 'regular readers' of SlateStarCodex. By contrast LessWrong itself doesn't even have a quarter of SlateStarCodex's readership.]

Overcoming Bias (Robin Hanson)
Regular Reader: 206 11.751%
Sometimes: 365 20.821%
Rarely: 391 22.305%
Almost Never: 385 21.962%
Never: 239 13.634%
Never Heard Of It: 167 9.527%

Minding Our Way (Nate Soares)
Regular Reader: 151 8.718%
Sometimes: 134 7.737%
Rarely: 139 8.025%
Almost Never: 175 10.104%
Never: 214 12.356%
Never Heard Of It: 919 53.06%

Agenty Duck (Brienne Yudkowsky)
Regular Reader: 55 3.181%
Sometimes: 132 7.634%
Rarely: 144 8.329%
Almost Never: 213 12.319%
Never: 254 14.691%
Never Heard Of It: 931 53.846%

Eliezer Yudkowsky's Facebook Page
Regular Reader: 325 18.561%
Sometimes: 316 18.047%
Rarely: 231 13.192%
Almost Never: 267 15.248%
Never: 361 20.617%
Never Heard Of It: 251 14.335%

Luke Muehlhauser (Eponymous)
Regular Reader: 59 3.426%
Sometimes: 106 6.156%
Rarely: 179 10.395%
Almost Never: 231 13.415%
Never: 312 18.118%
Never Heard Of It: 835 48.49%

Gwern.net (Gwern Branwen)
Regular Reader: 118 6.782%
Sometimes: 281 16.149%
Rarely: 292 16.782%
Almost Never: 224 12.874%
Never: 230 13.218%
Never Heard Of It: 595 34.195%

Siderea (Sibylla Bostoniensis)
Regular Reader: 29 1.682%
Sometimes: 49 2.842%
Rarely: 59 3.422%
Almost Never: 104 6.032%
Never: 183 10.615%
Never Heard Of It: 1300 75.406%

Ribbon Farm (Venkatesh Rao)
Regular Reader: 64 3.734%
Sometimes: 123 7.176%
Rarely: 111 6.476%
Almost Never: 150 8.751%
Never: 150 8.751%
Never Heard Of It: 1116 65.111%

Bayesed And Confused (Michael Rupert)
Regular Reader: 2 0.117%
Sometimes: 10 0.587%
Rarely: 24 1.408%
Almost Never: 68 3.988%
Never: 167 9.795%
Never Heard Of It: 1434 84.106%

[This was the 'troll' answer to catch out people who claim to read everything.]

The Unit Of Caring (Anonymous)
Regular Reader: 281 16.452%
Sometimes: 132 7.728%
Rarely: 126 7.377%
Almost Never: 178 10.422%
Never: 216 12.646%
Never Heard Of It: 775 45.375%

GiveWell Blog (Multiple Authors)
Regular Reader: 75 4.438%
Sometimes: 197 11.657%
Rarely: 243 14.379%
Almost Never: 280 16.568%
Never: 412 24.379%
Never Heard Of It: 482 28.521%

Thing Of Things (Ozy Frantz)
Regular Reader: 363 21.166%
Sometimes: 201 11.72%
Rarely: 143 8.338%
Almost Never: 171 9.971%
Never: 176 10.262%
Never Heard Of It: 661 38.542%

The Last Psychiatrist (Anonymous)
Regular Reader: 103 6.023%
Sometimes: 94 5.497%
Rarely: 164 9.591%
Almost Never: 221 12.924%
Never: 302 17.661%
Never Heard Of It: 826 48.304%

Hotel Concierge (Anonymous)
Regular Reader: 29 1.711%
Sometimes: 35 2.065%
Rarely: 49 2.891%
Almost Never: 88 5.192%
Never: 179 10.56%
Never Heard Of It: 1315 77.581%

The View From Hell (Sister Y)
Regular Reader: 34 1.998%
Sometimes: 39 2.291%
Rarely: 75 4.407%
Almost Never: 137 8.049%
Never: 250 14.689%
Never Heard Of It: 1167 68.566%

Xenosystems (Nick Land)
Regular Reader: 51 3.012%
Sometimes: 32 1.89%
Rarely: 64 3.78%
Almost Never: 175 10.337%
Never: 364 21.5%
Never Heard Of It: 1007 59.48%

I tried my best to have representation from multiple sections of the diaspora; if you look at the different blogs you can probably guess which blogs represent which section.

Stories Read

Harry Potter And The Methods Of Rationality (Eliezer Yudkowsky)
Whole Thing: 1103 61.931%
Partially And Intend To Finish: 145 8.141%
Partially And Abandoned: 231 12.97%
Never: 221 12.409%
Never Heard Of It: 81 4.548%

Significant Digits (Alexander D)
Whole Thing: 123 7.114%
Partially And Intend To Finish: 105 6.073%
Partially And Abandoned: 91 5.263%
Never: 333 19.26%
Never Heard Of It: 1077 62.29%

Three Worlds Collide (Eliezer Yudkowsky)
Whole Thing: 889 51.239%
Partially And Intend To Finish: 35 2.017%
Partially And Abandoned: 36 2.075%
Never: 286 16.484%
Never Heard Of It: 489 28.184%

The Fable of the Dragon-Tyrant (Nick Bostrom)
Whole Thing: 728 41.935%
Partially And Intend To Finish: 31 1.786%
Partially And Abandoned: 15 0.864%
Never: 205 11.809%
Never Heard Of It: 757 43.606%

The World of Null-A (A. E. van Vogt)
Whole Thing: 92 5.34%
Partially And Intend To Finish: 18 1.045%
Partially And Abandoned: 25 1.451%
Never: 429 24.898%
Never Heard Of It: 1159 67.266%

[Wow, I never would have expected this many people to have read this. I mostly included it on a lark because of its historical significance.]

Synthesis (Sharon Mitchell)
Whole Thing: 6 0.353%
Partially And Intend To Finish: 2 0.118%
Partially And Abandoned: 8 0.47%
Never: 217 12.75%
Never Heard Of It: 1469 86.31%

[This was the 'troll' option to catch people who just say they've read everything.]

Worm (Wildbow)
Whole Thing: 501 28.843%
Partially And Intend To Finish: 168 9.672%
Partially And Abandoned: 184 10.593%
Never: 430 24.755%
Never Heard Of It: 454 26.137%

Pact (Wildbow)
Whole Thing: 138 7.991%
Partially And Intend To Finish: 59 3.416%
Partially And Abandoned: 148 8.57%
Never: 501 29.01%
Never Heard Of It: 881 51.013%

Twig (Wildbow)
Whole Thing: 55 3.192%
Partially And Intend To Finish: 132 7.661%
Partially And Abandoned: 65 3.772%
Never: 560 32.501%
Never Heard Of It: 911 52.873%

Ra (Sam Hughes)
Whole Thing: 269 15.558%
Partially And Intend To Finish: 80 4.627%
Partially And Abandoned: 95 5.495%
Never: 314 18.161%
Never Heard Of It: 971 56.16%

My Little Pony: Friendship Is Optimal (Iceman)
Whole Thing: 424 24.495%
Partially And Intend To Finish: 16 0.924%
Partially And Abandoned: 65 3.755%
Never: 559 32.293%
Never Heard Of It: 667 38.533%

Friendship Is Optimal: Caelum Est Conterrens (Chatoyance)
Whole Thing: 217 12.705%
Partially And Intend To Finish: 16 0.937%
Partially And Abandoned: 24 1.405%
Never: 411 24.063%
Never Heard Of It: 1040 60.89%

Ender's Game (Orson Scott Card)
Whole Thing: 1177 67.219%
Partially And Intend To Finish: 22 1.256%
Partially And Abandoned: 43 2.456%
Never: 395 22.559%
Never Heard Of It: 114 6.511%

[This is the most read story according to survey respondents, beating HPMOR by 5%.]

The Diamond Age (Neal Stephenson)
Whole Thing: 440 25.346%
Partially And Intend To Finish: 37 2.131%
Partially And Abandoned: 55 3.168%
Never: 577 33.237%
Never Heard Of It: 627 36.118%

Consider Phlebas (Iain Banks)
Whole Thing: 302 17.507%
Partially And Intend To Finish: 52 3.014%
Partially And Abandoned: 47 2.725%
Never: 439 25.449%
Never Heard Of It: 885 51.304%

The Metamorphosis Of Prime Intellect (Roger Williams)
Whole Thing: 226 13.232%
Partially And Intend To Finish: 10 0.585%
Partially And Abandoned: 24 1.405%
Never: 322 18.852%
Never Heard Of It: 1126 65.925%

Accelerando (Charles Stross)
Whole Thing: 293 17.045%
Partially And Intend To Finish: 46 2.676%
Partially And Abandoned: 66 3.839%
Never: 425 24.724%
Never Heard Of It: 889 51.716%

A Fire Upon The Deep (Vernor Vinge)
Whole Thing: 343 19.769%
Partially And Intend To Finish: 31 1.787%
Partially And Abandoned: 41 2.363%
Never: 508 29.28%
Never Heard Of It: 812 46.801%

I also did a k-means cluster analysis of the data to try to determine demographics, and the ultimate conclusion I drew from it is that I need to do more analysis. Which I would do, except that the initial analysis was a whole bunch of work, and jumping further down the rabbit hole in the hopes I reach an oasis probably isn't in the best interests of myself or my readers.
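
For readers who want to jump down that rabbit hole themselves, a k-means pass over the readership answers might look roughly like the sketch below. This is not the pipeline actually used; the file name, column names, and choice of k are illustrative assumptions.

```python
# Sketch: k-means over ordinal readership answers (illustrative only).
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import OrdinalEncoder

df = pd.read_csv("survey.csv")  # hypothetical per-respondent export
readership_cols = ["LessWrong", "SlateStarCodex", "OvercomingBias"]  # etc.

# Map the ordered answers onto integers so distances are roughly meaningful.
order = ["Never Heard Of It", "Never", "Almost Never",
         "Rarely", "Sometimes", "Regular Reader"]
encoder = OrdinalEncoder(categories=[order] * len(readership_cols))
X = encoder.fit_transform(df[readership_cols].fillna("Never Heard Of It"))

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
df["cluster"] = kmeans.labels_

# Typical (modal) answer per blog within each cluster.
print(df.groupby("cluster")[readership_cols].agg(lambda s: s.mode().iat[0]))
```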

Footnotes


  1. This is a general trend I notice with accessibility. Not always, but very often measures taken to help a specific group end up having positive effects for others as well. Many of the accessibility suggestions of the W3C are things you wish every website did.

  2. I hadn't read this particular SSC post at the time I compiled the survey, but I was already familiar with the concept of a lizardman constant and should have accounted for it.

  3. I've been informed by a member of the freenode #lesswrong IRC channel that this is in fact Roko's opinion, because you can 'timelessly trade with the future superintelligence for rewards, not just punishment' according to a conversation they had with him last summer. Remember kids: Don't do drugs, including Max Tegmark.

  4. You might think that this conflicts with the hypothesis that the true rate of Basilisk belief is lower than 5%. It does a bit, but you also need to remember that these people are in the LessWrong demographic, which means regardless of what the Basilisk belief question means we should naively expect them to donate five percent of the MIRI donation pot.

  5. That is to say, it does seem plausible that MIRI 'profits' from Basilisk belief based on this data, but I'm fairly sure any profit is outweighed by the significant opportunity cost associated with it. I should also take this moment to remind the reader that the original Basilisk argument was supposed to prove that CEV is a flawed concept from the perspective of not having deleterious outcomes for people, so MIRI using it as a way to justify donating to them would be weird.

Room For More Funding In AI Safety Is Highly Uncertain

12 Evan_Gaensbauer 12 May 2016 01:57PM

(Crossposted to the Effective Altruism Forum)


Introduction

In effective altruism, people talk about the room for more funding (RFMF) of various organizations. RFMF is simply the maximum amount of money which can be donated to an organization, and be put to good use, right now. In most cases, "right now" refers to the next (fiscal) year.  Most of the time when I see the phrase invoked, it's to talk about individual charities, for example, one of GiveWell's top-recommended charities. If a charity has run out of room for more funding, it may be typical for effective donors to seek the next best option to donate to.
Last year, the Future of Life Institute (FLI) made the first of its grants from the pool of money it's received as donations from Elon Musk and the Open Philanthropy Project (Open Phil). Since then, I've heard a few people speculating about how much RFMF the whole AI safety community has in general. I don't think that's a sensible question to ask before we have a sense of what the 'AI safety' field is. Before, people were commenting on only the RFMF of individual charities, and now they're commenting on entire fields as though they're well-defined. AI safety hasn't necessarily reached peak RFMF just because MIRI has a runway for one more year to operate at their current capacity, or because FLI made a limited number of grants this year.

Overview of Current Funding For Some Projects


The starting point I used to think about this issue came from Topher Hallquist, from his post explaining his 2015 donations:

I’m feeling pretty cautious right now about donating to organizations focused on existential risk, especially after Elon Musk’s $10 million donation to the Future of Life Institute. Musk’s donation don’t necessarily mean there’s no room for more funding, but it certainly does mean that room for more funding is harder to find than it used to be. Furthermore, it’s difficult to evaluate the effectiveness of efforts in this space, so I think there’s a strong case for waiting to see what comes of this infusion of cash before committing more money.


My friend Andrew and I were discussing this last week. In past years, the Machine Intelligence Research Institute (MIRI) has raised about $1 million (USD) in funds, and received more than that  for their annual operations last year. Going into 2016, Nate Soares, Executive Director of MIRI, wrote the following:

Our successful summer fundraiser has helped determine how ambitious we’re making our plans; although we may still slow down or accelerate our growth based on our fundraising performance, our current plans assume a budget of roughly $1,825,000 per year [emphasis not added].


This seems sensible to me as it's not too much more than what they raised last year, and it seems more and not less money will be flowing into AI safety in the near future. However, Nate also had plans for how MIRI could've productively spent up to $6 million last year, to grow the organization. So, far from MIRI believing it had all the funding it could use, it was seeking more. Of course, others might argue MIRI or other AI safety organizations already receive enough funding relative to other priorities, but that is an argument for a different time.

Andrew and I also talked about how, had FLI had enough funding to grant money to all the promising applicants for its 2015 grants in AI safety research, that would have been millions more flowing into AI safety. It’s true what Topher wrote: that, being outside of FLI, and not otherwise being a major donor, it may be exceedingly difficult for individuals to evaluate funding gaps in AI safety. While FLI has only received $11 million to grant in 2015-16 ($6 million already granted in 2015, with $5 million more to be granted in the coming year), they could easily have granted more than twice that much, had they received the money.

To speak to other organizations, Niel Bowerman, Assistant Director at the Future of Humanity Institute (FHI), recently spoke about how FHI receives most of its funding exclusively for research, and how bottlenecks like the operations he runs depend more on private donations, which FHI could use more of. Sean O HEigeartaigh, Executive Director at the Centre for the Study of Existential Risk (CSER) at Cambridge University, recently stated in discussion that CSER and the Leverhulme Centre for the Future of Intelligence (CFI), which CSER is currently helping launch, face the same problem with their operations. Nick Bostrom, author of Superintelligence and Director of FHI, is in the course of launching the Strategic Artificial Intelligence Research Centre (SAIRC), which received $1.5 million (USD) in funding from FLI. SAIRC seems well funded for at least the rest of 2016.

 


The Big Picture
Above are the funding summaries for several organizations listed in Andrew Critch's 2015 map of the existential risk reduction ecosystem. There are organizations working on existential risks other than those from AI, but they aren't explicitly organized in a network the same way AI safety organizations are. So, in practice, the 'x-risk ecosystem' is mappable almost exclusively in terms of AI safety.

It seems to me the 'AI safety field', if defined just as the organizations and projects listed in Dr. Critch's ecosystem map, and perhaps others closely related (e.g., AI Impacts), could have productively absorbed between $10 million and $25 million in 2016 alone. Of course, there are caveats rendering this a conservative estimate. First of all, the above is a contrived version of the AI safety "field", as there is plenty of research outside of this network popping up all the time. Second, I think the organizations and projects I listed above could've themselves thought of more uses for funding. Seeing as they're working on what is (presumably) the most important problem in the world, there is much that millions more could do for foundational research on the AGI containment/control problem, quite apart from safety research into narrow systems.


Too Much Variance in Estimates for RFMF in AI Safety

I've also heard people set the benchmark for truly appropriate funding for AI safety in the ballpark of a trillion dollars. While in theory that may be true, on its face it currently seems absurd. I'm not saying there won't be a time, even in the next several years, when $1 trillion/year could be used effectively. I'm saying that if there isn't a roadmap for increasing the productive use of funding from ~$10 million/year for AI safety up to $100 million or $1 billion per year, talking about $1 trillion/year isn't practical. I don't even think there will be more than $1 billion on the table per year for the near future.

This argument can be used to justify continued earning to give on the part of effective altruists. That is, there is so much money that, e.g., MIRI could use, that it makes sense for everyone who isn't an AI researcher to earn to give. This might make sense if governments and universities give major funding to what they think is AI safety, give 99% of it to robotic unemployment alone or something, miss the boat on the control problem, and MIRI gets a pittance of the money that will flow into the field. Even so, the idea that there is effectively something like a multi-trillion dollar ceiling for effective funding for AI safety is still unsound.

When estimates of RFMF for AI safety range from $5-10 million (the amount of funding AI safety received in 2015) to $1 trillion, I feel like anyone not already well within the AI safety community cannot reasonably estimate how much money the field can productively use in one year.
On the other hand, there are also people who think that AI safety doesn’t need to be a big priority, or is currently as big a priority as it needs to be, so money spent funding AI safety research and strategy would be better spent elsewhere.

All this stated, I myself don’t have a precise estimate of how much capacity for funding the whole AI safety field will have in, say, 2017.

Reasonable Assumptions Going Forward

What I'm confident saying right now is:

  1. The amount of money AI safety could've productively used in 2016 alone is within an order of magnitude of $10 million, and probably less than $25 million, based on what I currently know.
  2. The amount of total funding available will likely increase year over year for the next several years. There could be quite dramatic rises. The Open Philanthropy Project, worth $10+ billion (USD), recently announced AI safety will be their top priority next year, although this may not necessarily translate into more major grants in the next 12 months. The White House recently announced they’ll be hosting workshops on the Future of Artificial Intelligence, including concerns over risk. Also, to quote Stuart Russell (HT Luke Muehlhauser): "Industry [has probably invested] more in the last 5 years than governments have invested since the beginning of the field [in the 1950s]." This includes companies like Facebook, Baidu, and Google each investing tons of money into AI research, including Google’s purchase of DeepMind for $500 million in 2014. With an increasing number of universities and corporations investing money and talent into AI research, including AI safety, and now with major philanthropic foundations and governments paying attention to AI safety as well, it seems plausible the amount of funding for AI safety worldwide might balloon up to $100+ million in 2017 or 2018. However, this could just as easily not happen, and there's much uncertainty in projecting this.
  3. The field of AI safety will also grow year over year for the next several years. I doubt projects needing funding will grow as fast as the amount of funding available. This is because the rate at which institutions are willing to invest in growth will not only depend on how much money they're receiving now, but how much they can expect to receive in the future. Since how much those expectations can reasonably vary is so uncertain, organizations are smart to be conservative and hold their cards close to their chest. While OpenAI has pledged $1 billion for funding AI research in general, and not just safety, over the next couple decades, nobody knows if such funding will be available to organizations out of Oxford or Berkeley like AI Impacts, MIRI, FHI, or CFI. However,

 

  • i) increased awareness and concern over AI safety will draw in more researchers.
  • ii) the promise or expectation of more money to come may draw in more researchers seeking funding.
  • iii) the expanding field and the increased funding available will create a feedback loop in which institutions in AI safety, such as MIRI, make contingency plans to expand faster, if able to or need be.

Why This Matters

I don't mean to use the amount of funding AI safety received in 2015 or 2016 as an anchor which will bias how much RFMF I think the field has. However, it seems the more extreme lower and upper estimates I’ve encountered are baseless, and either vastly underestimate or overestimate how much the field of AI safety can productively grow each year. This is actually important to figure out.

80,000 Hours rates AI safety as perhaps the most important and neglected cause currently prioritized by the effective altruism movement. Consequently, 80,000 Hours recommends how similarly concerned people can work on the issue. Some talented computer scientists who could do their best work in AI safety might opt to earn to give in software engineering or data science, if they conclude the bottleneck on AI safety isn’t talent but funding. Alternatively, a small but critical organization which requires funding from value-aligned and consistent donors might fall through the cracks if too many people conclude all AI safety work in general is receiving sufficient funding, and choose to forgo donating to AI safety. Many of us could make individual decisions going either way, but it also seems many of us could end up making the wrong choice. Assessments of these issues will practically inform decisions many of us make over the next few years, determining how much of our time and potential we use fruitfully, or waste.

Everything above just lays out how estimating room for more funding in AI safety overall may be harder than anticipated, and shows how high the variance might be. I invite you to contribute to this discussion, as it is only just starting. Please use the above info as a starting point to look into this more, or ask questions that will usefully clarify what we’re thinking about. The best fora for further discussion seem to be the Effective Altruism Forum, LessWrong, or the AI Safety Discussion group on Facebook, where I initiated the conversation leading to this post.

Using humility to counteract shame

9 Vika 15 April 2016 06:32PM

"Pride is not the opposite of shame, but its source. True humility is the only antidote to shame."

Uncle Iroh, "Avatar: The Last Airbender"

Shame is one of the trickiest emotions to deal with. It is difficult to think about, not to mention discuss with others, and gives rise to insidious ugh fields and negative spirals. Shame often underlies other negative emotions without making itself apparent - anxiety or anger at yourself can be caused by unacknowledged shame about the possibility of failure. It can stack on top of other emotions - e.g. you start out feeling upset with someone, and end up being ashamed of yourself for feeling upset, and maybe even ashamed of feeling ashamed if meta-shame is your cup of tea. The most useful approach I have found against shame is invoking humility.

What is humility, anyway? It is often defined as a low view of your own importance, and tends to be conflated with modesty. Another common definition that I find more useful is acceptance of your own flaws and shortcomings. This is more compatible with confidence, and helpful irrespective of your level of importance or comparison to other people. What humility feels like to me on a system 1 level is a sense of compassion and warmth towards yourself while fully aware of your imperfections (while focusing on imperfections without compassion can lead to beating yourself up). According to LessWrong, "to be humble is to take specific actions in anticipation of your own errors", which seems more like a possible consequence of being humble than a definition.

Humility is a powerful tool for psychological well-being and instrumental rationality that is more broadly applicable than just the ability to anticipate errors by seeing your limitations more clearly. I can summon humility when I feel anxious about too many upcoming deadlines, or angry at myself for being stuck on a rock climbing route, or embarrassed about forgetting some basic fact in my field that I am surely expected to know by the 5th year of grad school. While humility comes naturally to some people, others might find it useful to explicitly build an identity as a humble person. How can you invoke this mindset?

One way is through negative visualization or pre-hindsight, considering how your plans could fail, which can be time-consuming and usually requires system 2. A faster and less effortful way is to imagine a person, real or fictional, who you consider to be humble. I often bring to mind my grandfather, or Uncle Iroh from the Avatar series, sometimes literally repeating the above quote in my head, sort of like an affirmation. I don't actually agree that humility is the only antidote to shame, but it does seem to be one of the most effective.

(Cross-posted from my blog. Thanks to Janos Kramar for his feedback on this post.)

Positivity Thread :)

24 Viliam 08 April 2016 09:34PM

Hi everyone! This is an experimental thread to relax and enjoy the company of other aspiring rationalists. Special rules for communication and voting apply here. Please play along!

(If for whatever reason you cannot or don't want to follow the rules, please don't post in this thread. However, feel free to voice your opinion in the corresponding meta thread.)

Here is the spirit of the rules:

  • be nice
  • be cheerful
  • don't go meta

 

And here are the details:

 

On the scale from negative (hostility, complaints, passive aggression) through neutral (bare facts) to positive (happiness, fun, love), please only post comments from the "neutral to positive" half. Preferably at least slightly positive; but don't push yourself too far if you don't feel so. The goal is to make both yourself and your audience feel comfortable.

If you disagree with someone, please consider whether the issue is important enough to disagree openly. If it isn't, you also have an option to simply skip the comment. You can send the author a private message. Or you can post your disagreement in the meta thread (and then send them the link in a private message). If you still believe it is better to disagree here, please do it politely and friendly.

Avoid inherently controversial topics, such as politics, religion, or interpretations of quantum physics.

Feel free to post stuff that normally doesn't get posted on LessWrong. Feel free to be silly, as long as it harms no one. Emoticons are allowed. Note: This website supports Unicode. ◕‿◕

 

Upvote the stuff you like. :)

Downvote only the stuff that breaks the rules. :( In this thread, the proper reaction to a comment that you don't like, but doesn't break the rules, is to ignore it.

Please don't downvote a comment below zero, unless you believe that the breaking of rules was intentional.

(Note: There is one user permanently banned from this website. Any comment posted from any of this user's new accounts is considered an intentional breaking of the rules, regardless of its content.)

 

Don't go meta in this thread. If you want to discuss whether the rules here should be different, or whether a specific comment did or didn't break the rules, or something like that, please use the meta thread.

Don't abuse the rules. I already know that you are clever, and that you could easily break the spirit of the rules while following the letter. Just don't, please.

Even if you notice or suspect that other people are breaking some of the rules, please continue following all the rules. Don't let one uncooperative person start an avalanche of defection. That includes if you notice that people are not voting according to the rules. If necessary, complain in the meta thread.

 

Okay, that's enough rules for today. Have fun! I love you! ❤ ❤ ❤ ٩(⁎❛ᴗ❛⁎)۶

 

EDIT: Oops, I forgot the most important part. LOL! The topic is "anything that makes you happy" (basically Open Thread / Bragging Thread / etc., but only the positive things).

"3 Reasons It’s Irrational to Demand ‘Rationalism’ in Social Justice Activism"

8 PhilGoetz 29 March 2016 03:16PM

The lead article on everydayfeminism.com on March 25:

3 Reasons It’s Irrational to Demand ‘Rationalism’ in Social Justice Activism

The scenario is always the same: I say we should abolish prisons, police, and the American settler state — someone tells me I’m irrational. I say we need decolonization of the land — someone tells me I’m not being realistic.... When those who are the loudest, the most disruptive — the ones who want to destroy America and all of the oppression it has brought into the world — are being silenced even by others in social justice groups, that is unacceptable.

(The link from "decolonization" is to "Decolonization is not a metaphor", to make it clear s/he means actually giving the land back to the Native Americans.)

I regularly see people who describe how social justice activists act get accused of setting up a straw man.  This article shows that the bias of some SJWs against reason is impossible to strawman.  The author argues at length that rationality is bad, and that justice arguments shouldn't be rational or be defended rationally.  Ze is, or was, confused about what "rationality" means, but clearly now means it to include reason-based argumentation.

This isn't just some wacko's blog; it was chosen as the headline article for the website.  I had to click around to a few other articles to make sure it wasn't a parody site.

But it isn't just a sign of how irrational the social justice movement is—it has clues to how it got that way.


Lesswrong Potential Changes

17 Elo 19 March 2016 12:24PM

I have compiled many suggestions about the future of lesswrong into a document here:

https://docs.google.com/document/d/1hH9mBkpg2g1rJc3E3YV5Qk-b-QeT2hHZSzgbH9dvQNE/edit?usp=sharing

It's long and best formatted there.

In case you hate leaving this website here's the summary:

Summary

There are 3 main areas that are going to change.

  1. Technical/Direct Site Changes

 

    1. new home page

    2. new forum style with subdivisions

      1. new sub for “friends of lesswrong” (rationality in the diaspora)

    3. New tagging system

    4. New karma system

    5. Better RSS

  2. Social and cultural changes

    1. Positive culture; a good place to be.

    2. Welcoming process

    3. Pillars of good behaviours (the ones we want to encourage)

    4. Demonstrate by example

    5. 3 levels of social strategies (new, advanced and longtimers)

  3. Content (emphasis on producing more rationality material)

    1. For up-and-coming people to write more

      1. for the community to improve their contributions to create a stronger collection of rationality.

    2. For known existing writers

      1. To encourage them to keep contributing

      2. To encourage them to work together with each other to contribute

Less Wrong Potential Changes

Summary

Why change LW?

How will we know we have done well (the feel of things)

How will we know we have done well (KPI - technical)

Technical/Direct Site Changes

Homepage

Subs

Tagging

Karma system

Moderation

Users

RSS magic

Not breaking things

Funding support

Logistical changes

Other

Done (or Don’t do it):

Social/cultural

General initiatives

Welcoming initiatives

Initiatives for moderates

Initiatives for long-time users

Rationality Content

Target: a good 3 times a week for a year.

Approach formerly prominent writers

Explicitly invite

Place to talk with other rationalists

Pillars of purpose
(with certain sub-reddits for different ideas)

Encourage a declaration of intent to post

Specific posts

Other notes


Why change LW?

 

Lesswrong has gone through great times of growth and seen a lot of people share a lot of positive and brilliant ideas.  It was hailed as a launchpad for MIRI, and in that purpose it was a success.  At this point it’s not needed as a launchpad any longer.  While in the process of becoming a launchpad it became a nice garden to hang out in on the internet.  A place for reasonably intelligent people to discuss reasonable ideas and challenge each other to update their beliefs in light of new evidence.  In retiring from its “launchpad” purpose, various people have felt the garden has wilted and decayed and weeds have grown over.  In light of this, and having enough personal motivation, I have decided I really like the garden, and I can bring it back!  I just need a little help, a little magic, and some little changes.  If possible I hope to make the garden into what we all want it to be: a great place for amazing ideas and life-changing discussions to happen.


How will we know we have done well (the feel of things)

 

Success is going to have to be estimated by changes to the feel of the site.  Unfortunately that is hard to do.  As we know, outrage generates more volume than positive growth, which is going to work against us when we try to quantify by measurable metrics.  Assuming the technical changes are made, there is still going to be progress needed on the task of socially improving things.  There are many “seasoned active users” - as well as “seasoned lurkers” - who have strong opinions on the state of lesswrong and the discussion.  Some would say that we risk dying of niceness; others would say that the weeds that need pulling are the rudeness.


Honestly we risk over-policing and under-policing at the same time.  There will be some not-niceness that goes unchecked and discourages the growth of future posters (potentially our future bloggers), and at the same time some niceness that motivates trolling behaviour and fails to weed out bad content, which would leave us as fluffy as the next forum.  There is no easy solution to tempering both sides of this challenge.  I welcome all suggestions (it looks like a karma system is our best bet).


In the meantime I believe being on the general niceness, steelman side should be the motivated direction of movement.  I hope to enlist some members as essentially coaches in healthy forum growth behaviour.  Good steelmanning, positive encouragement, critical feedback as well as encouragement, a welcoming committee and an environment of content improvement and growth.


While at the same time I want everyone to keep up the heavy debate; I also want to see the best versions of ourselves coming out onto the publishing pages (and sometimes that can be the second draft versions).


So how will we know?  By trying to reduce the ugh fields around participating in LW, by seeing more content that enough people care about, by making lesswrong awesome.


The full document is just over 11 pages long.  Please go read it, this is a chance to comment on potential changes before they happen.


Meta: This post took a very long time to pull together.  I read over 1000 comments and considered the ideas contained there.  I don't have an accurate account of how long this took to write, but I would estimate over 65 hours of work has gone into putting it together.  It's been literally weeks in the making; I really can't stress how long I have been trying to put this together.

If you want to help, please speak up so we can help you help us.  If you want to complain, keep it to yourself.

Thanks to the slack for keeping up with my progress and Vanvier, Mack, Leif, matt and others for reviewing this document.

As usual - My table of contents