

Turning the Technical Crank

43 Error 05 April 2016 05:36AM

A few months ago, Vaniver wrote a really long post speculating about potential futures for Less Wrong, with a focus on the idea that the spread of the Less Wrong diaspora has left the site weak and fragmented. I wasn't here for our high water mark, so I don't really have an informed opinion on what has socially changed since then. But a number of complaints are technical, and as an IT person, I thought I had some useful things to say.

I argued at the time that many of the technical challenges of the diaspora were solved problems, and that the solution was NNTP -- an ancient, yet still extant, discussion protocol. I am something of a crank on the subject and didn't expect much of a reception. I was pleasantly surprised by the 18 karma it generated, and tried to write up a full post arguing the point.

I failed. I was trying to write a manifesto, didn't really know how to do it right, and kept running into a vast inferential distance I couldn't seem to cross. I'm a product of a prior age of the Internet, from before the http prefix assumed its imperial crown; I kept wanting to say things that I knew would make no sense to anyone who came of age this millennium. I got bogged down in irrelevant technical minutiae about how to implement features X, Y, and Z. Eventually I decided I was attacking the wrong problem: I was thinking about 'how do I promote NNTP', when really I should have been going after 'what would an ideal discussion platform look like, and how does NNTP get us there, if it does?'

So I'm going to go after that first, and work on the inferential distance problem, and then I'm going to talk about NNTP, and see where that goes and what could be done better. I still believe it's the closest thing to a good, available technological Schelling point, but it's going to take a lot of words to get there from here, and I might change my mind under persuasive argument. We'll see.

Fortunately, this is Less Wrong, and sequences are a thing here. This is the first post in an intended sequence on mechanisms of discussion. I know it's a bit off the beaten track of Less Wrong subject matter. I posit that it's both relevant to our difficulties and probably more useful and/or interesting than most of what comes through these days. I just took the 2016 survey and it has a couple of sections on the effects of the diaspora, so I'm guessing it's on topic for meta purposes if not for site-subject purposes.

Less Than Ideal Discussion

To solve a problem you must first define it. Looking at the LessWrong 2.0 post, I see the following technical problems, at a minimum; I'll edit this with suggestions from comments.

  1. Aggregation of posts. Our best authors have formed their own fiefdoms and their work is not terribly visible here. We currently have limited support for this via the sidebar, but that's it.
  2. Aggregation of comments. You can see diaspora authors in the sidebar, but you can't comment from here.
  3. Aggregation of community. This sounds like a social problem but it isn't. You can start a new blog, but unless you plan on also going out of your way to market it then your chances of starting a discussion boil down to "hope it catches the attention of Yvain or someone else similarly prominent in the community." Non-prominent individuals can theoretically post here; yet this is the place we are decrying as moribund.
  4. Incomplete and poor curation. We currently do this via Promoted, badly, and via the diaspora sidebar, also badly.
  5. Pitiful interface feature set. This is not so much a Less Wrong-specific problem as a 2010s-internet problem; people who inhabit SSC have probably seen me respond to feature complaints with "they had something that did that in the 90s, but nobody uses it." (my own bugbear is searching for comments by author-plus-content).
  6. Changes are hamstrung by the existing architecture, which gets you volunteer reactions like this one.

I see these meta-technical problems:

  1. Expertise is scarce. Few people are in a position to technically improve the site, and those who are have other demands on their time.
  2. The Trivial Inconvenience Problem limits the scope of proposed changes to those that are not inconvenient to commenters or authors.
  3. Getting cooperation from diaspora authors is a coordination problem. Are we better than average at handling those? I don't know.

Slightly Less Horrible Discussion

"Solving" community maintenance is a hard problem, but to the extent that pieces of it can be solved technologically, the solution might include these ultra-high-level elements:

  1. Centralized from the user perspective. A reader should be able to interact with the entire community in one place, and it should be recognizable as a community.
  2. Decentralized from the author perspective. Diaspora authors seem to like having their own fiefdoms, and the social problem of "all the best posters went elsewhere" can't be solved without their cooperation. Therefore any technical solution must allow for it.
  3. Proper division of labor. Scott Alexander probably should not have to concern himself with user feature requests; that's not his comparative advantage and I'd rather he spend his time inventing moral cosmologies. I suspect he would prefer the same. The same goes for Eliezer Yudkowsky or any of our still-writing-elsewhere folks.
  4. Really good moderation tools.
  5. Easy entrance. New users should be able to join the discussion without a lot of hassle. Old authors that want to return should be able to do so and, preferably, bring their existing content with them.
  6. Easy exit. Authors who don't like the way the community is heading should be able to jump ship -- and, crucially, bring their content with them to their new ship. Conveniently. This is essentially what has happened, except old content is hostage here.
  7. Separate policy and mechanism within the site architecture. Let this one pass for now if you don't know what it means; it's the first big inferential hurdle I need to cross and I'll be starting soon enough.

As with the previous, I'll update this from the comments if necessary.

Getting There From Here

As I said at the start, I feel on firmer ground talking about technical issues than social ones. But I have to acknowledge one strong social opinion: I believe the greatest factor in Less Wrong's decline is the departure of our best authors for personal blogs. Any plan for revitalization has to provide an improved substitute for a personal blog, because that's where everyone seems to end up going. You need something that looks and behaves like a blog to the author or casual readers, but integrates seamlessly into a community discussion gateway.

I argue that this can be achieved. I argue that the technical challenges are solvable and the inherent coordination problem is also solvable, provided the people involved still have an interest in solving it.

And I argue that it can be done -- and done better than what we have now -- using technology that has existed since the '90s.

I don't argue that this actually will be achieved in anything like the way I think it ought to be. As mentioned up top, I am a crank, and I have no access whatsoever to anybody with any community pull. My odds of pushing through this agenda are basically nil. But we're all about crazy thought experiments, right?

This topic is something I've wanted to write about for a long time. Since it's not typical Less Wrong fare, I'll take the karma on this post as a referendum on whether the community would like to see it here.

Assuming there's interest, the sequence will look something like this (subject to reorganization as I go along, since I'm pulling this from some lengthy but horribly disorganized notes; in particular I might swap subsequences 2 and 3):

  1. Technical Architecture
    1. Your Web Browser Is Not Your Client
    2. Specialized Protocols: or, NNTP and its Bastard Children
    3. Moderation, Personal Gardens, and Public Parks
    4. Content, Presentation, and the Division of Labor
    5. The Proper Placement of User Features
    6. Hard Things that are Suddenly Easy: or, what does client control gain us?
    7. Your Web Browser Is Still Not Your Client (but you don't need to know that)
  2. Meta-Technical Conflicts (or, obstacles to adoption)
    1. Never Bet Against Convenience
    2. Conflicting Commenter, Author, and Admin Preferences
    3. Lipstick on the Configuration Pig
    4. Incremental Implementation and the Coordination Problem
    5. Lowering Barriers to Entry and Exit
  3. Technical and Social Interoperability
    1. Benefits and Drawbacks of Standards
    2. Input Formats and Quoting Conventions
    3. Faking Functionality
    4. Why Reddit Makes Me Cry
    5. What NNTP Can't Do
  4. Implementation of Nonstandard Features
    1. Some desirable feature #1
    2. Some desirable feature #2
    3. ...etc. This subsequence is only necessary if someone actually wants to try and do what I'm arguing for, which I think unlikely.

(Meta-meta: This post was written in Markdown, converted to HTML for posting using Pandoc, and took around four hours to write. I can often be found lurking on #lesswrong or #slatestarcodex on workday afternoons if anyone wants to discuss it, but I don't promise to answer quickly because, well, workday)

[Edited to add: At +10/92% karma I figure continuing is probably worth it. After reading comments I'm going to try to slim it down a lot from the outline above, though. I still want to hit all those points but they probably don't all need a full post's space. Note that I'm not Scott or Eliezer, I write like I bleed, so what I do post will likely be spaced out]

A Second Year of Spaced Repetition Software in the Classroom

28 tanagrabeast 01 May 2016 10:14PM

This is a follow-up to last year's report. Here, I will talk about my successes and failures using Spaced Repetition Software (SRS) in the classroom for a second year. The year's not over yet, but I have reasons for reporting early that should become clear in a subsequent post. A third post will then follow, and together these will constitute a small sequence exploring classroom SRS and the adjacent ideas that bubble up when I think deeply about teaching.

Summary

I experienced net negative progress this year in my efforts to improve classroom instruction via spaced repetition software. While this is mostly attributable to shifts in my personal priorities, I have also identified a number of additional failure modes for classroom SRS, as well as additional shortcomings of Anki for this use case. My experiences also showcase some fundamental challenges to teaching-in-general that SRS depressingly spotlights without being any less susceptible to. Regardless, I am more bullish than ever about the potential for classroom SRS, and will lay out a detailed vision for what it can be in the next post.


The Sally-Anne fallacy

27 philh 11 April 2016 01:06PM

Cross-posted from my blog

I'd like to coin a term. The Sally-Anne fallacy is the mistake of assuming that someone believes something, simply because that thing is true.1

The name comes from the Sally-Anne test, used in developmental psychology to detect theory of mind. In the test, Sally hides a marble and leaves the room; Anne moves the marble while she is gone; the child being tested is asked where Sally will look for it. Someone who lacks theory of mind will fail the Sally-Anne test, thinking that Sally knows where the marble is. The Sally-Anne fallacy is also a failure of theory of mind.

In internet arguments, this will often come up as part of a chain of reasoning, such as: you think X; X implies Y; therefore you think Y. Or: you support X; X leads to Y; therefore you support Y.2

So for example, we have this complaint about the words "African dialect" used in Age of Ultron. The argument goes: a dialect is a variation on a language, therefore Marvel thinks "African" is a language.

You think "African" has dialects; "has dialects" implies "is a language"; therefore you think "African" is a language.

Or maybe Marvel just doesn't know what a "dialect" is.

This is also a mistake I was pointing at in Fascists and Rakes. You think it's okay to eat tic-tacs; tic-tacs are sentient; therefore you think it's okay to eat sentient things. Versus: you think I should be forbidden from eating tic-tacs; tic-tacs are nonsentient; therefore you think I should be forbidden from eating nonsentient things. No, in both cases the defendant is just wrong about whether tic-tacs are sentient.

Many political conflicts include arguments that look like this. You fight our cause; our cause is the cause of [good thing]; therefore you oppose [good thing]. Sometimes people disagree about what's good, but sometimes they just disagree about how to get there, and think that a cause is harmful to its stated goals. Thus, liberals and libertarians symmetrically accuse each other of not caring about the poor.3

If you want to convince someone to change their mind, it's important to know what they're wrong about. The Sally-Anne fallacy causes us to mistarget our counterarguments, and to mistake potential allies for inevitable enemies.


  1. From the outside, this looks like "simply because you believe that thing".

  2. Another possible misunderstanding here, is if you agree that X leads to Y and Y is bad, but still think X is worth it.

  3. Of course, sometimes people will pretend not to believe the obvious truth so that they can further their dastardly ends. But sometimes they're just wrong. And sometimes they'll be right, and the obvious truth will be untrue.

Positivity Thread :)

24 Viliam 08 April 2016 09:34PM

Hi everyone! This is an experimental thread to relax and enjoy the company of other aspiring rationalists. Special rules for communication and voting apply here. Please play along!

(If for whatever reason you cannot or don't want to follow the rules, please don't post in this thread. However, feel free to voice your opinion in the corresponding meta thread.)

Here is the spirit of the rules:

  • be nice
  • be cheerful
  • don't go meta

 

And here are the details:

 

On the scale from negative (hostility, complaints, passive aggression) through neutral (bare facts) to positive (happiness, fun, love), please only post comments from the "neutral to positive" half. Preferably at least slightly positive; but don't push yourself too far if you don't feel so. The goal is to make both yourself and your audience feel comfortable.

If you disagree with someone, please consider whether the issue is important enough to disagree openly. If it isn't, you also have the option to simply skip the comment. You can send the author a private message. Or you can post your disagreement in the meta thread (and then send them the link in a private message). If you still believe it is better to disagree here, please do it politely and in a friendly way.

Avoid inherently controversial topics, such as politics, religion, or interpretations of quantum physics.

Feel free to post stuff that normally doesn't get posted on LessWrong. Feel free to be silly, as long as it harms no one. Emoticons are allowed. Note: This website supports Unicode. ◕‿◕

 

Upvote the stuff you like. :)

Downvote only the stuff that breaks the rules. :( In this thread, the proper reaction to a comment that you don't like, but doesn't break the rules, is to ignore it.

Please don't downvote a comment below zero, unless you believe that the breaking of rules was intentional.

(Note: There is one user permanently banned from this website. Any comment posted from any of this user's new accounts is considered an intentional breaking of the rules, regardless of its content.)

 

Don't go meta in this thread. If you want to discuss whether the rules here should be different, or whether a specific comment did or didn't break the rules, or something like that, please use the meta thread.

Don't abuse the rules. I already know that you are clever, and that you could easily break the spirit of the rules while following the letter. Just don't, please.

Even if you notice or suspect that other people are breaking some of the rules, please continue following all the rules. Don't let one uncooperative person start an avalanche of defection. That includes if you notice that people are not voting according to the rules. If necessary, complain in the meta thread.

 

Okay, that's enough rules for today. Have fun! I love you! ❤ ❤ ❤ ٩(⁎❛ᴗ❛⁎)۶

 

EDIT: Oops, I forgot the most important part. LOL! The topic is "anything that makes you happy" (basically Open Thread / Bragging Thread / etc., but only the positive things).

Astrobiology, Astronomy, and the Fermi Paradox II: Space & Time Revisited

23 CellBioGuy 10 March 2016 05:19AM

After a 6+ month hiatus driven by grad school and personal projects, I am finally able to continue my sequence on astrobiology.  I was flabbergasted by the positive response my last post got, and despite my status as a biologist with a hobby rather than an astronomer I decided to take a more rigorously mathematical approach to figuring out our biosphere's position in space and time rather than talking in generalizations and impressions.

Post is here:  http://thegreatatuin.blogspot.com/2016/03/space-and-time-revisited.html.  Seeing as this post is an elaboration on the last one, I am posting a link rather than reproducing the text.

To summarize, I found some actual rigorous observational fits to the star formation rate in the universe over time and projected them into the future.  These fits show the Sun as forming after 79% of all stars that will ever exist, and that 90% of all stars that will ever exist already exist.  This makes sense in the light of recent work on 'galaxy quenching' - a process by which galaxies more or less completely shut off star formation through a number of processes - indicating that the majority of gas in the universe probably won't form stars if trends that have held for most of the history of the universe continue to hold.  It relies heavily on analysis I began in comments on this site a few months ago.

I then lift two distinct metallicity normalizations from a paper that was making the rounds here a while back ("On The History and Future of Cosmic Planet Formation"), in an attempt to deal with the fact that that is a measurement of STAR formation, not terrestrial-planet-with-a-biosphere formation.  Depending on which metallicity normalization you use (and how willing you are to take a couple of naive assumptions I make in order to slot math that is too complicated for me to comment on atop my star formation numbers), the Earth shows up as forming after either 72% or 51% of all terrestrial planets.

These numbers are remarkable in how boring they are.  We find ourselves in an utterly typical position in planet-order, even if I am wrong by quite a bit.  We are not early.  Of interest to many here, explanations of the so-called Fermi paradox must go elsewhere: into the genesis of intelligent systems being exceedingly rare, or into the genesis of intelligent systems not implying interstellar spread.

Now that I seem to have a life again, I will be getting back to my original plan next, talking about our own solar system.

What is up with carbon dioxide and cognition? An offer

21 paulfchristiano 23 April 2016 05:47PM

One or two research groups have published work on carbon dioxide and cognition. The state of the published literature is confusing.

Here is one paper on the topic. The authors investigate a proprietary cognitive benchmark, and experimentally manipulate carbon dioxide levels (without affecting other measures of air quality). They find implausibly large effects from increased carbon dioxide concentrations.

If the reported effects are real and the suggested interpretation is correct, I think it would be a big deal. To put this in perspective, carbon dioxide concentrations in my room vary between 500 and 1500 ppm depending on whether I open the windows. The experiment reports cognitive effects for moving from 600 to 1000 ppm, and finds significant effects compared to interindividual differences.

I haven't spent much time looking into this (maybe 30 minutes, and another 30 minutes to write this post). I expect that if we spent some time looking into indoor CO2 we could have a much better sense of what was going on, by some combination of better literature review, discussion with experts, looking into the benchmark they used, and just generally thinking about it.

So, here's a proposal:

  • If someone looks into this and writes a post that improves our collective understanding of the issue, I will be willing to buy part of an associated certificate of impact, at a price of around $100*N, where N is my own totally made up estimate of how many hours of my own time it would take to produce a similarly useful writeup. I'd buy up to 50% of the certificate at that price.
  • Whether or not they want to sell me some of the certificate, on May 1 I'll give a $500 prize to the author of the best publicly-available analysis of the issue. If the best analysis draws heavily on someone else's work, I'll use my discretion: I may split the prize arbitrarily, and may give it to the earlier post even if it is not quite as excellent.

Some clarifications:

  • The metric for quality is "how useful it is to Paul." I hope that's a useful proxy for how useful it is in general, but no guarantees. I am generally a pretty skeptical person. I would care a lot about even a modest but well-established effect on performance. 
  • These don't need to be new analyses, either for the prize or the purchase.
  • I reserve the right to resolve all ambiguities arbitrarily, and in the end to do whatever I feel like. But I promise I am generally a nice guy.
  • I posted this 2 weeks ago on the EA forum and haven't had serious takers yet.

(Thanks to Andrew Critch for mentioning these results to me and Jessica Taylor for lending me a CO2 monitor so that I could see variability in indoor CO2 levels. I apologize for deliberately not doing my homework on this post.)

The Web Browser is Not Your Client (But You Don't Need To Know That)

21 Error 22 April 2016 12:12AM

(Part of a sequence on discussion technology and NNTP. As last time, I should probably emphasize that I am a crank on this subject and do not actually expect anything I recommend to be implemented. Add whatever salt you feel is necessary)1


If there is one thing I hope readers get out of this sequence, it is this: The Web Browser is Not Your Client.

It looks like you have three or four viable clients -- IE, Firefox, Chrome, et al. You don't. You have one. It has a subforum listing with two items at the top of the display; some widgets on the right hand side for user details, RSS feed, meetups; the top-level post display; and below that, replies nested in the usual way.

Changing your browser has the exact same effect on your Less Wrong experience as changing your operating system, i.e. next to none.

For comparison, consider the Less Wrong IRC, where you can tune your experience with a wide range of different software. If you don't like your UX, there are other clients that give a different UX to the same content and community.

That is how the mechanism of discussion used to work, and does not now. Today, your user experience (UX) in a given community is dictated mostly by the admins of that community, and software development is often neither their forte nor something they have time for. I'll often find myself snarkily responding to feature requests with "you know, someone wrote something that does that 20 years ago, but no one uses it."

Semantic Collapse

What defines a client? More specifically, what defines a discussion client, a Less Wrong client?

The toolchain by which you read LW probably looks something like this; anyone who's read the source please correct me if I'm off:

Browser -> HTTP server -> LW UI application -> Reddit API -> Backend database.

The database stores all the information about users, posts, etc. The API presents subsets of that information in a way that's convenient for a web application to consume (probably JSON objects, though I haven't checked). The UI layer generates a web page layout and content using that information, which is then presented -- in the form of (mostly) HTML -- by the HTTP server layer to your browser. Your browser figures out what color pixels go where.
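
To make the API layer concrete, here is a guess at the shape of a comment object as the API might serve it. This is a sketch only: I haven't checked the actual Reddit/LW schema, and every field name here is hypothetical.

# Hypothetical shape of a comment as served by the API. The point is
# that at this layer the semantic metadata -- author, parent, post --
# is still explicit, in a way the final HTML page no longer preserves.
comment = {
    "id": "t1_abc123",           # unique identifier for this comment
    "author": "somebody",        # an author, not just a text string
    "parent_id": "t3_post456",   # position in the discussion tree
    "created_utc": 1459834560,   # timestamp
    "score": 42,                 # karma
    "body": "lorem ipsum nonsensical statement involving plankton....",
}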

All of this is a gross oversimplification, obviously.

In some sense, the browser is self-evidently a client: It talks to an http server, receives hypertext, renders it, etc. It's a UI for an HTTP server.

But consider the following problem: Find and display all comments by me that are children of this post, and only those comments, using only browser UI elements, i.e. not the LW-specific page widgets. You cannot -- and I'd be pretty surprised if you could make a browser extension that could do it without resorting to the API, skipping the previous elements in the chain above. For that matter, if you can do it with the existing page widgets, I'd love to know how.

That isn't because the browser is poorly designed; it's because the browser lacks the semantic information to figure out what elements of the page constitute a comment, a post, an author. That information was lost in translation somewhere along the way.

Your browser isn't actually interacting with the discussion. Its role is more akin to an operating system than a client. It doesn't define a UX. It provides a shell, a set of system primitives, and a widget collection that can be used to build a UX. Similarly, HTTP is not the successor to NNTP; the successor is the plethora of APIs, for which HTTP is merely a substrate.

The Discussion Client is the point where semantic metadata is translated into display metadata; where you go from 'I have post A from user B with content C' to 'I have a text string H positioned above visual container P containing text string S.' Or, more concretely, when you go from this:

Author: somebody
Subject: I am right, you are mistaken, he is mindkilled.
Date: timestamp
Content: lorem ipsum nonsensical statement involving plankton....

to this:

<h1>I am right, you are mistaken, he is mindkilled.</h1>
<div><span align=left>somebody</span><span align=right>timestamp</span></div>
<div><p>lorem ipsum nonsensical statement involving plankton....</p></div>

That happens at the web application layer. That's the part that generates the subforum headings, the interface widgets, the display format of the comment tree. That's the part that defines your Less Wrong experience, as a reader, commenter, or writer.

That is your client, not your web browser. If it doesn't suit your needs, if it's missing features you'd like to have, well, you probably take for granted that you're stuck with it.

But it doesn't have to be that way.

Mechanism and Policy

One of the difficulties forming an argument about clients is that the proportion of people who have ever had a choice of clients available for any given service keeps shrinking. I have this mental image of the Average Internet User as having no real concept for this.

Then I think about email. Most people have probably used at least two different clients for email, even if it's just Gmail and their phone's built-in mail app. Or perhaps Outlook, if they're using a company system. And they (I think?) mostly take for granted that if they don't like Outlook they can use something else, or if they don't like their phone's mail app they can install a different one. They assume, correctly, that the content and function of their mail account is not tied to the client application they use to work with it.

(They may make the same assumption about web-based services, on the reasoning that if they don't like IE they can switch to Firefox, or if they don't like Firefox they can switch to Chrome. They are incorrect, because The Web Browser is Not Their Client)

Email does a good job of separating mechanism from policy. Its format is defined in RFC 2822 and its transmission protocol is defined in RFC 5321. Neither defines any conventions for user interfaces. There are good reasons for that from a software-design standpoint, but more relevant to our discussion is that interface conventions change more rapidly than the objects they interface with. Forum features change with the times; but the concepts of a Post, an Author, or a Reply are forever.

The benefit of this separation: If someone sends you mail from Outlook, you don't need to use Outlook to read it. You can use something else -- something that may look and behave entirely differently, in a manner more to your liking.

The comparison: If there is a discussion on Less Wrong, you do need to use the Less Wrong UI to read it. The same goes for, say, Facebook.

I object to this.

Standards as Schelling Points

One could argue that the lack of choice is for lack of interest. Less Wrong, and Reddit on which it is based, has an API. One could write a native client. Reddit does have them.

Let's take a tangent and talk about Reddit. Seems like they might have done something right. They have (I think?) the largest contiguous discussion community on the net today. And they have a published API for talking to it. It's even in use.

The problem with this method is that Reddit's API applies only to Reddit. I say problem, singular, but it's really problem, plural, because it hits users and developers in different ways.

On the user end, it means you can't have a unified user interface across different web forums; other forum servers have entirely different APIs, or none at all.2 It also makes life difficult when you want to move from one forum to another.

On the developer end, something very ugly happens when a content provider defines its own provision mechanism. Yes, you can write a competing client. But your client exists only at the provider's sufferance, subject to their decision not to make incompatible API changes or just pull the plug on you and your users outright. That isn't paranoia; in at least one case, it actually happened. Using an agreed-upon standard limits this sort of misbehavior, although it can still happen in other ways.

NNTP is a standard for discussion, like SMTP is for email. It is defined in RFC 3977 and its data format is defined in RFC 5536. The point of a standard is to ensure lasting interoperability; because it is a standard, it serves as a deliberately-constructed Schelling point, a place where unrelated developers can converge without further coordination.
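
For a taste of what that standard specifies, here is roughly what a post and a reply carry in their headers on the wire. The header names are real RFC 5536 headers; the values are invented for illustration, and real articles carry more headers than shown.

Message-ID: <post1@lesswrong.example>
From: somebody <somebody@example.org>
Newsgroups: lesswrong.discussion
Subject: I am right, you are mistaken, he is mindkilled.
Date: Fri, 22 Apr 2016 00:12:00 +0000

Message-ID: <reply1@example.org>
References: <post1@lesswrong.example>
Subject: Re: I am right, you are mistaken, he is mindkilled.

The Author, the Post, and the Reply are explicit in the format itself: any conforming client can rebuild the discussion tree from the References headers alone, with no scraping and no site-specific API.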

Expertise is a Bottleneck

If you're trying to build a high-quality community, you want a closed system. Well kept gardens die by pacifism, and it's impossible to fully moderate an open system. But if you're building a communication infrastructure, you want an open system.

In the early Usenet days, this was exactly what existed; NNTP was standardized and open, but Usenet was a de-facto closed community, accessible mostly to academics. Then AOL hooked its customers into the system. The closed community became open, and the Eternal September began.3 I suspect, but can't prove, that this was a partial cause of the flight of discussion from Usenet to closed web forums.

I don't think that was the appropriate response. I think the appropriate response was private NNTP networks or even single servers, not connected to Usenet at large.

Modern web forums throw the open-infrastructure baby out with the open-community bathwater. The result, in our specific case, is that if we want something not provided by the default Less Wrong interface, it must be implemented by Less Wrongers.

I don't think UI implementation is our comparative advantage. In fact I know it isn't, or the Less Wrong UI wouldn't suck so hard. We're pretty big by web-forum standards, but we still contain only a tiny fraction of the Internet's technical expertise.

The situation is even worse among the diaspora; for example, at SSC, if Scott's readers want something new out of the interface, it must be implemented either by Scott himself or his agents. That doesn't scale.

One of the major benefits of a standardized, open infrastructure is that your developer base is no longer limited to a single community. Any software written by any member of any community backed by the same communication standard is yours for the using. Additionally, the developers are competing for the attention of readers, not admins; you can expect the reader-facing feature set to improve accordingly. If readers want different UI functionality, the community admins don't need to be involved at all.

A Real Web Client

When I wrote the intro to this sequence, the most common thing people insisted on was this: Any system that actually gets used must allow links from the web, and those links must reach a web page.

I completely, if grudgingly, agree. No matter how insightful a post is, if people can't link to it, it will not spread. No matter how interesting a post is, if Google doesn't index it, it doesn't exist.

One way to achieve a common interface to an otherwise-nonstandard forum is to write a gateway program, something that answers NNTP requests and does magic to translate them to whatever the forum understands. This can work and is better than nothing, but I don't like it -- I'll explain why in another post.

Assuming I can suppress my gag reflex for the next few moments, allow me to propose: a web client.

(No, I don't mean write a new browser. The Browser Is Not Your Client.4)

Real NNTP clients use the OS's widget set to build their UI and talk to the discussion board using NNTP. There is no fundamental reason the same cannot be done using the browser's widget set. Google did it. Before them, Deja News did it. Both of them suck, but they suck on the UI level. They are still proof that the concept can work.

I imagine an NNTP-backed site where casual visitors never need to know that's what they're dealing with. They see something very similar to a web forum or a blog, but whatever software today talks to a database on the back end, instead talks to NNTP, which is the canonical source of posts and post metadata. For example, it gets the results of a link to http://lesswrong.com/posts/message_id.html by sending ARTICLE message_id to its upstream NNTP server (which may be hosted on the same system), just as a native client would.
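
As a sketch of how thin that translation layer could be, here is the core of such a web client using Python's standard nntplib. The server name and render_as_html are hypothetical placeholders; a real implementation would add caching and error handling.

# Minimal sketch of the proposed web client's core: map an HTTP request
# for a post onto an NNTP ARTICLE command. news.lesswrong.example and
# render_as_html are placeholders, not real infrastructure.
from nntplib import NNTP

def fetch_post(message_id):
    """Return the raw article (headers and body) for one post."""
    with NNTP('news.lesswrong.example') as server:
        # ARTICLE <message-id> asks the server for the canonical post
        resp, info = server.article('<' + message_id + '>')
        return b'\r\n'.join(info.lines).decode('utf-8', errors='replace')

# Serving http://lesswrong.com/posts/message_id.html then reduces to
# something like: html = render_as_html(fetch_post('message_id'))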

To the drive-by reader, nothing has changed. Except, maybe, one thing. When a regular reader, someone who's been around long enough to care about such things, says "Hey, I want feature X," and our hypothetical web client doesn't have it, I can now answer:

Someone wrote something that does that twenty years ago.

Here is how to get it.



  1. Meta-meta: This post took about eight hours to research and write, plus two weeks procrastinating. If anyone wants to discuss it in realtime, you can find me on #lesswrong or, if you insist, the LW Slack.

  2. The possibility of "universal clients" that understand multiple APIs is an interesting case, as with Pidgin for IM services. I might talk about those later.

  3. Ironically, despite my nostalgia for Usenet, I was a part of said September; or at least its aftermath.

  4. Okay, that was a little shoehorned in. The important thing is this: What I tell you three times is true.

Hedge drift and advanced motte-and-bailey

20 Stefan_Schubert 01 May 2016 02:45PM

Motte and bailey is a technique by which one protects an interesting but hard-to-defend view by making it similar to a less interesting but more defensible position. Whenever the more interesting position (the bailey) is attacked, one retreats to the more defensible one (the motte); when the attackers are gone, one expands again to the bailey.

In that case, one and the same person switches between two interpretations of the original claim. Here, I rather want to focus on situations where different people make different interpretations of the original claim. The originator of the claim adds a number of caveats and hedges to their claim, which makes it more defensible, but less striking and sometimes also less interesting.* When others refer to the same claim, the caveats and hedges gradually disappear, however, making it more and more motte-like.

A salient example of this is that scientific claims (particularly in messy fields like psychology and economics) often come with a number of caveats and hedges, which tend to get lost when re-told. This is especially so when media writes about these claims, but even other scientists often fail to properly transmit all the hedges and caveats that come with them.

Since this happens over and over again, people probably do expect their hedges to drift to some extent. Indeed, it would not surprise me if some people actually want hedge drift to occur. Such a strategy effectively amounts to a more effective, because less observable, version of the motte-and-bailey-strategy. Rather than switching back and forth between the motte and the bailey - something which is at least moderately observable, and also usually relies on some amount of vagueness, which is undesirable - you let others spread the bailey version of your claim, whilst you sit safe in the motte. This way, you get what you want - the spread of the bailey version - in a much safer way.

Even when people don't use this strategy intentionally, you could argue that they should expect hedge drift, and that omitting to take action against it is, if not outright intellectually dishonest, then at least approaching that. This argument would rest on the consequentialist notion that if you have strong reasons to believe that some negative event will occur, and you could prevent it from happening by fairly simple means, then you have an obligation to do so. I certainly do think that scientists should do more to prevent their views from being garbled via hedge drift.

Another way of expressing all this is by saying that when including hedging or caveats, scientists often seem to seek plausible deniability ("I included these hedges; it's not my fault if they were misinterpreted"). They don't actually try to prevent their claims from being misunderstood. 

What concrete steps could one then take to prevent hedge-drift? Here are some suggestions. I am sure there are many more.

  1. Many authors use eye-catching, hedge-free titles and/or abstracts, and then only include hedges in the paper itself. This is a recipe for hedge-drift and should be avoided.
  2. Make abundantly clear, preferably in the abstract, just how dependent the conclusions are on key assumptions. Say this not in a way that enables you to claim plausible deniability in case someone misinterprets you, but in a way that actually reduces the risk of hedge drift as much as possible.
  3. Explicitly caution against hedge drift, using that term or a similar one, in the abstract of the paper.

* Edited 2/5 2016. By hedges and caveats I mean terms like "somewhat" ("x reduces y somewhat"), "slightly", etc., as well as modelling assumptions without which the conclusions don't follow, and qualifications regarding domains in which the thesis doesn't hold.

Look for Lone Correct Contrarians

20 Gram_Stone 13 March 2016 04:11PM

Related to: The Correct Contrarian Cluster, The General Factor of Correctness

(Content note: Explicitly about spreading rationalist memes, increasing the size of the rationalist movement, and proselytizing. I also regularly use the word 'we' to refer to the rationalist community/subculture. You might prefer not to read this if you don't like that sort of thing and/or you don't think I'm qualified to write about that sort of thing and/or you're not interested in providing constructive criticism.)

I've tried to introduce a number of people to this culture and the ideas within it, but it takes some finesse to get a random individual from the world population to keep thinking about these things and apply them. My personal efforts have been very hit-or-miss. Others have told me that they've been more successful. But I think there are many people that share my experience. This is unfortunate: we want people to be more rational and we want more rational people.

At any rate, this is not about the art of raising the sanity waterline, but the more general task of spreading rationalist memes. Some people naturally arrive at these ideas, but they usually have to find them through other people first. This is really about all of the people in the world who are like you probably were before you found this culture; the people who would care about it, and invest in it, as it is right now, if only they knew it existed.

I'm going to be vague for the sake of anonymity, but here it goes:

I was reading a book review on Amazon, and I really liked it. The writer felt like a kindred spirit. I immediately saw that they were capable of coming to non-obvious conclusions, so I kept reading. Then I checked their review history in the hope that I would find other good books and reviews. And it was very strange.

They did a bunch of stuff that very few humans do. They realized that nuclear power has risks but that the benefits heavily outweigh the risks given the appropriate alternative, and they realized that humans overestimate the risks of nuclear power for silly reasons. They noticed when people were getting confused about labels and pointed out the general mistake, as well as pointing out what everyone should really be talking about. They acknowledged individual and average IQ differences and realized the correct policy implications. They really understood evolution, they took evolutionary psychology seriously, and they didn't care if it was labeled as sociobiology. They used the word 'numerate.'

And the reviews ranged over more than a decade of time. These were persistent interests.

I don't know what other people do when they discover that a stranger like this exists, but the first thing that I try to do is talk to them. It's not like I'm going to run into them on the sidewalk.

Amazon had no messaging feature that I could find, so I looked for a website, and I found one. I found even more evidence, and that's certainly what it was. They were interested in altruism, including how it goes wrong; computer science; statistics; psychology; ethics; coordination failures; failures of academic and scientific institutions; educational reform; cryptocurrency; etc. At this point I considered it more likely than not that they already knew everything that I wanted to tell them, and that they already self-identified as a rationalist, or that they had a contrarian reason for not identifying as such.

So I found their email address. I told them that they were a great reviewer, that I was surprised that they had come to so many correct contrarian conclusions, and that, if they didn't already know, there was a whole culture of people like them.

They replied in ten minutes. They were busy, but they liked what I had to say, and as a matter of fact, a friend had already convinced them to buy Rationality: From AI to Zombies. They said they hadn't read much relative to the size of the book because it's so large, but they loved it so far and they wanted to keep reading.

(You might postulate that I only found this user's review because Amazon recommended the book to me, both of us being interested in Rationality: From AI to Zombies. However, the first review I read by this user was of a book on unusual gardening methods, which I found in a search for books about gardening methods. For the sake of anonymity, however, my unusual gardening methods must remain a secret. It is reasonable to postulate some sort of sampling bias like the one I have described, but given what I know, it is likely that this is not that. You certainly could still postulate a correlation by way of books about unusual gardening methods, however.)

Maybe that extra push made the difference. Maybe if there hadn't been a friend, I would've made the difference.

Who knew that's how my morning would turn out?

As I've said in some of my other posts, but not in so many words, maybe we should start doing this accidentally effective thing deliberately!

I know there's probably controversy about whether or not rationalists should proselytize, but I've been in favor of it for a while. And if you're like me, then I don't think this is a very special effort to make. I'm sure sometimes you see a little thread, and you think, "Wow, they're a lot like me; they're a lot like us, in fact; I wonder if there are other things too. I wonder if they would care about this."

Don't just move on! That's Bayesian evidence!

I dare you to follow that path to its destination. I dare you to reach out. It doesn't cost much.

And obviously there are ways to make yourself look creepy or weird or crazy. But I said to reach out, not to reach out badly. If you could figure out how to do it right, it could have a large impact. And these people are likely to be pretty reasonable. You should keep a look out in the future.

Speaking of the future, it's worth noting that I ended up reading the first review because of an automated Amazon book recommendation and subsequent curiosity. You know we're in the data. We are out there and there are ways to find us. In a sense, we aren't exactly low-hanging fruit. But in another sense, we are.

I've never read a word of the Methods of Rationality, but I have to shoehorn this in: we need to write the program that sends a Hogwarts acceptance letter to witches and wizards on their eleventh birthday.

JFK was not assassinated: prior probability zero events

19 Stuart_Armstrong 27 April 2016 11:47AM

A lot of my work involves tweaking the utility or probability of an agent to make it believe - or act as if it believed - impossible or almost impossible events. But we have to be careful about this; an agent that believes the impossible may not be so different from one that doesn't.

Consider for instance an agent that assigns a prior probability of zero to JFK ever having been assassinated. No matter what evidence you present to it, it will go on disbelieving the "non-zero gunmen theory".
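
To see mechanically why no evidence can move it: Bayes' rule multiplies the likelihood by the prior, so a prior of zero is absorbing. A toy illustration of my own, with made-up likelihood numbers:

# Why a zero prior never updates: the posterior's numerator carries a
# factor of the prior, so p = 0 survives any amount of evidence.
def bayes_update(p, lik_if_true, lik_if_false):
    return (p * lik_if_true) / (p * lik_if_true + (1 - p) * lik_if_false)

p = 0.0  # prior probability that JFK was assassinated
for _ in range(100):               # a hundred strong pieces of evidence
    p = bayes_update(p, 0.99, 0.01)
print(p)  # still 0.0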

Initially, the agent will behave very unusually. If it was in charge of JFK's security in Dallas before the shooting, it would have sent all secret service agents home, because no assassination could happen. Immediately after the assassination, it would have disbelieved everything. The films would have been faked or misinterpreted; the witnesses, deluded; the dead body of the president, that of a twin or an actor. It would have had huge problems with the aftermath, trying to reject all the evidence of death, seeing a vast conspiracy to hide the truth of JFK's non-death, including the many other conspiracy theories that must be false flags, because they all agree with the wrong statement that the president was actually assassinated.

But as time went on, the agent's behaviour would start to become more and more normal. It would realise the conspiracy was incredibly thorough in its faking of the evidence. All avenues it pursued to expose them would come to naught. It would stop expecting people to come forward and confess the joke; it would stop expecting to find radical new evidence overturning the accepted narrative. After a while, it would start to expect the next new piece of evidence to be in favour of the assassination idea - because if a conspiracy has been faking things this well so far, then it should continue to do so in the future. Though it cannot change its view of the assassination, its expectations for observations converge towards the norm.

If it does a really thorough investigation, it might stop believing in a conspiracy at all. At some point, the probability of a miracle will start to become more likely than a perfect but undetectable conspiracy. It is very unlikely that Lee Harvey Oswald shot at JFK, missed, and the president's head exploded simultaneously for unrelated natural causes. But after a while, such a miraculous explanation will start to become more likely than anything else the agent can consider. This explanation opens the possibility of miracles; but again, if the agent is very thorough, it will fail to find evidence of other miracles, and will probably settle on "an unrepeatable miracle caused JFK's death in a way that is physically undetectable".

But then note that such an agent will have a probability distribution over future events that is almost indistinguishable from a normal agent that just believes the standard story of JFK being assassinated. The zero-prior has been negated, not in theory but in practice.

 

How to do proper probability manipulation

This section is still somewhat a work in progress.

So the agent believes one false fact about the world, but its expectation is otherwise normal. This can be both desirable and undesirable. The negative is if we try and control the agent forever by giving it a false fact.

To see the positive, ask why would we want an agent to believe impossible things in the first place? Well, one example was an Oracle design where the Oracle didn't believe its output message would ever be read. Here we wanted the Oracle to believe the message wouldn't be read, but not believe anything else too weird about the world.

In terms of causality, if X designates the message being read at time t, and B and A are events before and after t, respectively, we want P(B|X)≈P(B) (probabilities about current facts in the world shouldn't change much) while P(A|X)≠P(A) is fine and often expected (the future should be different if the message is read or not).

In the JFK example, the agent eventually concluded "a miracle happened". I'll call this miracle a scrambling point. It's kind of a breakdown in causality: two futures are merged into one, given two different pasts. The two pasts are "JFK was assassinated" and "JFK wasn't assassinated", and their common scrambled future is "everything appears as if JFK was assassinated". The non-assassination belief has shifted the past but not the future.

For the Oracle, we want to do the reverse: we want the non-reading belief to shift the future but not the past. However, unlike the JFK assassination, we can try and build the scrambling point. That's why I always talk about messages going down noisy wires, or specific quantum events, or chaotic processes. If the past goes through a truly stochastic event (it doesn't matter whether there is true randomness or just that the agent can't figure out the consequences), we can get what we want.
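
Here is a toy simulation of such a scrambling point, my own illustration with made-up numbers. Because the wire eats the message independently of the past, conditioning on "message never read" leaves beliefs about the past untouched:

import random

# Toy model of the noisy-wire scrambling point. X = "message never
# read" is caused by independent channel noise, so observing X tells
# the agent nothing about the past fact B.
random.seed(0)
b_count, x_count = 0, 0
for _ in range(100000):
    b = random.random() < 0.3    # some past fact B, with P(B) = 0.3
    x = random.random() < 0.01   # the wire eats the message, independently
    if x:                        # condition on the message never being read
        x_count += 1
        b_count += b

print(b_count / x_count)  # roughly 0.3, i.e. P(B|X) = P(B)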

The Oracle idea will go wrong if the Oracle concludes that non-reading must imply something is different about the past (maybe it can see through chaos in ways we thought it couldn't), just as the JFK assassination denier will remain crazy if it can't find a route to reach "everything appears as if JFK was assassinated".

But there is a break in the symmetry: the JFK assassination denier will eventually reach that point as long as the world is complex and stochastic enough. While the Oracle requires that the future probabilities be the same in all (realistic) past universes.

Now, once the Oracle's message has been read, the Oracle will find itself in the same situation as the other agent: believing an impossible thing. For Oracles, we can simply reset them. Other agents might have to behave more like the JFK assassination disbeliever. Though if we're careful, we can quantify things more precisely, as I attempted to do here.

The increasing uselessness of Promoted

19 PhilGoetz 19 March 2016 06:23PM

For some time now, "Promoted" has been reserved for articles written by MIRI staff, mostly about MIRI activities.  Which, I suppose, would be reasonable, if this were MIRI's blog.  But it isn't.  MIRI has its own blog.  It seems to me inconvenient both to readers of LessWrong, and to readers of MIRI's blog, to split MIRI's material up between the two.

People visiting LessWrong land on "Promoted", see a bunch of MIRI blog posts, mostly written by people who don't read LessWrong themselves much anymore, and get a mistaken impression of what people talk about on LessWrong.  Also, LessWrong looks like a dying site, since often months pass between new posts.

I suggest the default landing page be "New", not "Promoted".

2016 LessWrong Diaspora Survey Analysis: Part One (Meta and Demographics)

17 ingres 14 May 2016 06:09AM

2016 LessWrong Diaspora Survey Analysis

Overview

  • Results and Dataset
  • Meta
  • Demographics (You are here)
  • LessWrong Usage and Experience
  • LessWrong Criticism and Successorship
  • Diaspora Community Analysis
  • What it all means for LW 2.0
  • Mental Health Section
  • Basilisk Section/Analysis
  • Blogs and Media analysis
  • Politics
  • Calibration Question And Probability Question Analysis
  • Charity And Effective Altruism Analysis

Survey Meta

Introduction

Hello everybody, this is part one in a series of posts analyzing the 2016 LessWrong Diaspora Survey. The survey ran from March 24th to May 1st and had 3083 respondents.

Almost two thousand eight hundred and fifty hours were spent surveying this year and you've all waited nearly two months from the first survey response to the results writeup. While the results have been available for over a week, they haven't seen widespread dissemination in large part because they lacked a succinct summary of their contents.

When we started the survey in March I posted this graph showing the dropoff in question responses over time:

So it seems only reasonable to post the same graph with this year's survey data:

(I should note that this analysis counts certain things as questions that the other chart does not, so it says there are many more questions than the previous survey when in reality there are about as many as last year.)

2016 Diaspora Survey Stats

Survey hours spent in total: 2849.82

Average number of minutes spent on survey: 102.14

Median number of minutes spent on survey: 39.78

Mode minutes spent on survey: 20.27

The takeaway here seems to be that some people take a long time with the survey, raising the average. However, most people's survey time is somewhere below the forty-five minute mark. LessWrong does a very long survey, and I wanted to make sure that investment was rewarded with a deep, detailed analysis. Weighing in at over four thousand lines of Python code, I hope the analysis I've put together is worth the wait.

Credits

I'd like to thank people who contributed to the analysis effort:

Bartosz Wroblewski

Kuudes on #lesswrong

Obormot on #lesswrong

Two anonymous contributors

And anybody else who I may have forgotten. Thanks again to Scott Alexander, who wrote the majority of the survey and ran it in 2014, and who has also been generous enough to license his part of the survey under a Creative Commons license along with mine.


Demographics

Age

The 2014 survey gave these numbers for age:

Age: 27.67 ± 8.679 (22, 26, 31) [1490]

In 2016 the numbers were:

Mean: 28.11
Median: 26.0
Mode: 23.0

Most LWers are in their early to mid twenties, with some older LWers bringing up the average. The average is close enough to the former figure that we can probably say the LW demographic is in their 20s or 30s as a general rule.

Sex and Gender

In 2014 our gender ratio looked like this:

Female: 179, 11.9%
Male: 1311, 87.2%

In 2016 the proportion of women in the community went up by over four percentage points:

Male: 2021 83.5%
Female: 393 16.2%

One hypothesis on why this happened is that the 2016 survey focused on the diaspora rather than just LW. Diaspora communities plausibly have marginally higher rates of female membership. If I had more time I would write an analysis investigating the demographics of each diaspora community, but to answer this particular question I think a couple of SQL queries are illustrative:

(Note: ActiveMemberships one and two are 'LessWrong' and 'LessWrong Meetups' respectively.)
sqlite> select count(birthsex) from data where (ActiveMemberships_1 = "Yes" OR ActiveMemberships_2 = "Yes") AND birthsex="Male";
425
sqlite> select count(birthsex) from data where (ActiveMemberships_1 = "Yes" OR ActiveMemberships_2 = "Yes") AND birthsex="Female";
66
>>> 66 / (425 + 66)  # fraction assigned female at birth among active LW members
0.13441955193482688

Well, maybe. Of course, before we wring our hands too much on this question it pays to remember that assigned sex at birth isn't the whole story. The gender question in 2014 had these results:

F (cisgender): 150, 10.0%
F (transgender MtF): 24, 1.6%
M (cisgender): 1245, 82.8%
M (transgender FtM): 5, 0.3%
Other: 64, 4.3%

In 2016:

F (cisgender): 321 13.3%
F (transgender MtF): 65 2.7%
M (cisgender): 1829 76%
M (transgender FtM): 23 1%
Other: 156 6.48%

Some things to note here. 16.2% of respondents were assigned female at birth but only 13.3% still identify as women. 1% are transmen, but where did the other 1.9% go? Presumably into the 'Other' field. Let's find out.

sqlite> select count(birthsex) from data where birthsex = "Female" AND gender = "Other";
57
sqlite> select count(*) from data;
3083
>>> 57 / 3083
0.018488485241647746

Seems to be the case. In general the proportion of men is down 6.1 percentage points from 2014, while transwomen are up 1.1 points and transmen up 0.7. Moving away from binary genders, this survey's nonbinary gender count gained nearly 2.2 points in proportion. This means that over one in twenty LWers identified as a nonbinary gender, making it a larger demographic than binary transgender LWers! As exciting as that may sound to some ears, the numbers tell one story and the write-ins tell quite another.

It pays to keep in mind that nonbinary genders are a common troll option for people who want to write in criticism of the question. A quick look at the write-ins accompanying the other option indicates that this is what many people used it for, but by no means all. At 156 responses, the set is small enough to be worth a quick manual tally.

Key = Agender, Esoteric, Female, Male, Male-to-Female, Nonbinary, Objection on Basis Gender Doesn't Exist, Objection on Basis Gender Is Binary, in Process of Transitioning, Refusal, Undecided
Sample Size: 156
A 35
E 6
F 6
M 21
MTF 1
NB 55
OBGDE 6
OBGIB 7
PT 2
R 7
U 10
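
(Tallies like this are also quick to reproduce programmatically once the write-ins have been hand-labeled; a minimal sketch, with a short hypothetical list standing in for the 156 labeled responses:)

from collections import Counter

# Hypothetical hand-assigned codes using the key above; the real list
# would have 156 entries, one per write-in.
labels = ["A", "NB", "M", "A", "U", "NB", "R", "E", "OBGIB"]

for code, count in sorted(Counter(labels).items()):
    print(code, count)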

So depending on your comfort zone as to what constitutes a countable gender, there are 90 to 96 valid 'other' answers in the survey dataset. (Labeled dataset)

>>> 90 / 3083
0.029192345118391177

With some cleanup the number trails the binary transgender one by the greater part of a percentage point, but only just. I bet that if you went through and did the same sort of tally on the 2014 survey results you'd find that the proportion of valid nonbinary gender write-ins has gone up between then and now.

Some interesting 'esoteric' answers: Attack Helocopter, Blackstar, Elizer, spiderman, Agenderfluid

For the rest of this section I'm going to just focus on differences between the 2016 and 2014 surveys.

2014 Demographics Versus 2016 Demographics

Country

(Each line shows the change since 2014, then the 2016 count and the 2016 percentage; the same format applies to the demographic tables that follow.)

United States: -1.000% 1298 53.700%
United Kingdom: -0.100% 183 7.600%
Canada: +0.100% 144 6.000%
Australia: +0.300% 141 5.800%
Germany: -0.600% 85 3.500%
Russia: +0.700% 57 2.400%
Finland: -0.300% 25 1.000%
New Zealand: -0.200% 26 1.100%
India: -0.100% 24 1.000%
Brazil: -0.300% 16 0.700%
France: +0.400% 34 1.400%
Israel: +0.200% 29 1.200%
Other: 354 14.646%

[Summing all the changes shows that nearly 1% of change is unaccounted for. My hypothesis is that this 1% went to countries not on the list; this can't easily be confirmed because the 2014 analysis does not list the other-country percentage.]
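
(For anyone wanting to check a row, the delta column is just the 2016 percentage minus the 2014 one. A minimal sketch using the United States row; both the 2014 percentage and the 2016 denominator here are back-derived from the published tables rather than taken from the raw data, so small rounding discrepancies are expected:)

us_2014_percent = 54.7      # back-derived: 53.7% observed minus the -1.0% delta
us_2016_count = 1298
respondents_2016 = 2417     # back-derived: roughly 1298 / 0.537

us_2016_percent = 100 * us_2016_count / respondents_2016
delta = us_2016_percent - us_2014_percent
print("United States: %+.3f%% %d %.3f%%" % (delta, us_2016_count, us_2016_percent))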

Race

Asian (East Asian): -0.600% 80 3.300%
Asian (Indian subcontinent): +0.300% 60 2.500%
Middle Eastern: 0.000% 14 0.600%
Black: -0.300% 12 0.500%
White (non-Hispanic): -0.300% 2059 85.800%
Hispanic: +0.300% 57 2.400%
Other: +1.200% 108 4.500%

Sexual Orientation

Heterosexual: -5.000% 1640 70.400%
Homosexual: +1.300% 103 4.400%
Bisexual: +4.000% 428 18.400%
Other: +3.880% 144 6.180%

[LessWrong got 5.3% more gay, or 9.1% if you're looser with the definition. Before we start any wild speculation: the 2014 question included asexuality as an option and it got 3.9% of the responses; we spun this off into a separate question on the 2016 survey, which should explain a significant portion of the change.]

Are you asexual?

Yes: 171 7.4%
No: 2129 92.6%

[Scott said in 2014 that he'd probably 'vastly undercounted' our asexual readers, a near doubling in our count would seem to support this.]

Relationship Style

Prefer monogamous: -0.900% 1190 50.900%
Prefer polyamorous: +3.100% 426 18.200%
Uncertain/no preference: -2.100% 673 28.800%
Other: +0.426% 45 1.926%

[Polyamorous gained three points, presumably the drop in uncertain people went into that bin.]

Number of Partners

0: -2.300% 1094 46.800%
1: -0.400% 1039 44.400%
2: +1.200% 107 4.600%
3: +0.900% 46 2.000%
4: +0.100% 15 0.600%
5: +0.200% 8 0.300%
Lots and lots: +1.000% 29 1.200%

Relationship Goals

...and seeking more relationship partners: +0.200% 577 24.800%
...and possibly open to more relationship partners: -0.300% 716 30.800%
...and currently not looking for more relationship partners: +1.300% 1034 44.400%

Are you married?

Yes: 443 19.0%
No: 1885 81.0%

[This question appeared in a different form on the previous survey. Marriage went up by 0.8% since 2014.]

Who do you currently live with most of the time?

Alone: -2.200% 487 20.800%
With parents and/or guardians: +0.100% 476 20.300%
With partner and/or children: +2.100% 687 29.400%
With roommates: -2.000% 619 26.500%

[This would seem to line up with the result that single LWers went down by 2.3%.]

How many children do you have?

Sum: 598 or greater

0: +5.400% 2042 87.000%
1: +0.500% 115 4.900%
2: +0.100% 124 5.300%
3: +0.900% 48 2.000%
4: -0.100% 7 0.300%
5: +0.100% 6 0.300%
6: 0.000% 2 0.100%
Lots and lots: 0.000% 3 0.100%

[Interestingly enough, childless LWers went up by 5.4%. This would seem incongruous with the previous results; I'm not sure how to investigate it, though.]

Are you planning on having more children?

Yes: -5.400% 720 30.700%
Uncertain: +3.900% 755 32.200%
No: +2.800% 869 37.100%

[This is an interesting result: either nearly 4% of LWers are suddenly less enthusiastic about having kids, or new entrants to the survey are less likely to want them and less sure whether they do. Possibly both.]

Work Status

Student: -5.402% 968 31.398%
Academics: +0.949% 205 6.649%
Self-employed: +4.223% 309 10.023%
Independently wealthy: +0.762% 42 1.362%
Non-profit work: +1.030% 152 4.930%
For-profit work: -1.756% 954 30.944%
Government work: +0.479% 135 4.379%
Homemaker: +1.024% 47 1.524%
Unemployed: +0.495% 228 7.395%

[The most interesting result here is that the student proportion dropped by 5.4%; either LWers are no longer students, or new survey entrants aren't.]

Profession

Art: +0.800% 51 2.300%
Biology: +0.300% 49 2.200%
Business: -0.800% 72 3.200%
Computers (AI): +0.700% 79 3.500%
Computers (other academic, computer science): -0.100% 156 7.000%
Computers (practical): -1.200% 681 30.500%
Engineering: +0.600% 150 6.700%
Finance / Economics: +0.500% 116 5.200%
Law: -0.300% 50 2.200%
Mathematics: -1.500% 147 6.600%
Medicine: +0.100% 49 2.200%
Neuroscience: +0.100% 28 1.300%
Philosophy: 0.000% 54 2.400%
Physics: -0.200% 91 4.100%
Psychology: 0.000% 48 2.100%
Other: +2.199% 277 12.399%
Other "hard science": -0.500% 26 1.200%
Other "social science": -0.200% 48 2.100%

[The largest profession growth for LWers in 2016 was art; that, or this is a consequence of new survey entrants.]

What is your highest education credential earned?

None: -0.700% 96 4.200%
High School: +3.600% 617 26.700%
2 year degree: +0.200% 105 4.500%
Bachelor's: -1.600% 815 35.300%
Master's: -0.500% 415 18.000%
JD/MD/other professional degree: 0.000% 66 2.900%
PhD: -0.700% 145 6.300%
Other: +0.288% 39 1.688%

[Hm, the academic credentials of LWers seem to have gone down some since the last survey. As usual this may also be the result of new survey entrants.]


Footnotes

  1. The 2850-hour estimate of survey hours is very naive. It measures the time between starting and turning in the survey; a person didn't necessarily sit there during all that time. For example, it could easily include people who spent multiple days doing other things before finally finishing their survey. (See the sketch after these footnotes for one way to trim such outliers.)

  2. The Apache helicopter image is licensed under the Open Government License, which requires attribution. That particular edit was done by Wubbles on the LW Slack.

  3. The first published draft of this post made a basic stats error when calculating the proportion of women in active memberships one and two, dividing the number of women by the number of men rather than by the combined number of women and men.
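
(Regarding footnote 1: a less naive estimate might drop durations over some cutoff before summing, on the theory that those respondents walked away mid-survey rather than spending days answering questions. A minimal sketch with hypothetical numbers and an arbitrary one-day cutoff:)

# Hypothetical durations in minutes; the last respondent "took" two days.
durations = [20.3, 39.8, 102.1, 2880.0]

cutoff = 24 * 60  # arbitrary cutoff: ignore anything over 24 hours
trimmed = [d for d in durations if d <= cutoff]
print(sum(trimmed) / 60, "hours, after dropping", len(durations) - len(trimmed), "outliers")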

Lesswrong Potential Changes

17 Elo 19 March 2016 12:24PM

I have compiled many suggestions about the future of lesswrong into a document here:

https://docs.google.com/document/d/1hH9mBkpg2g1rJc3E3YV5Qk-b-QeT2hHZSzgbH9dvQNE/edit?usp=sharing

It's long and best formatted there.

In case you hate leaving this website here's the summary:

Summary

There are 3 main areas that are going to change.

  1. Technical/Direct Site Changes

    1. new home page

    2. new forum style with subdivisions

      1. new sub for “friends of lesswrong” (rationality in the diaspora)

    3. New tagging system

    4. New karma system

    5. Better RSS

  2. Social and cultural changes

    1. Positive culture; a good place to be.

    2. Welcoming process

    3. Pillars of good behaviours (the ones we want to encourage)

    4. Demonstrate by example

    5. 3 levels of social strategies (new, advanced and longtimers)

  3. Content (emphasis on producing more rationality material)

    1. For up-and-coming people to write more

      1. for the community to improve their contributions to create a stronger collection of rationality.

    2. For known existing writers

      1. To encourage them to keep contributing

      2. To encourage them to work together with each other to contribute

Less Wrong Potential Changes

Summary

Why change LW?

How will we know we have done well (the feel of things)

How will we know we have done well (KPI - technical)

Technical/Direct Site Changes

Homepage

Subs

Tagging

Karma system

Moderation

Users

RSS magic

Not breaking things

Funding support

Logistical changes

Other

Done (or Don’t do it):

Social/cultural

General initiatives

Welcoming initiatives

Initiatives for moderates

Initiatives for long-time users

Rationality Content

Target: a good 3 times a week for a year.

Approach formerly prominent writers

Explicitly invite

Place to talk with other rationalists

Pillars of purpose
(with certain sub-reddits for different ideas)

Encourage a declaration of intent to post

Specific posts

Other notes


Why change LW?

 

Lesswrong has gone through great times of growth and seen a lot of people share a lot of positive and brilliant ideas.  It was hailed as a launchpad for MIRI, and in that purpose it was a success.  At this point it’s not needed as a launchpad any longer.  While in the process of becoming a launchpad it became a nice garden to hang out in on the internet; a place for reasonably intelligent people to discuss reasonable ideas and challenge each other to update their beliefs in light of new evidence.  In retiring from its “launchpad” purpose, various people have felt the garden has wilted and decayed and weeds have grown over.  In light of this, and having enough personal motivation, I have decided I really like the garden and I can bring it back.  I just need a little help, a little magic, and some little changes.  If possible I hope to make the garden what we all want it to be: a great place for amazing ideas and life-changing discussions to happen.


How will we know we have done well (the feel of things)

 

Success is going to have to be estimated by changes to the feel of the site.  Unfortunately that is hard to do.  As we know, outrage generates more volume than positive growth, which is going to work against us when we try to quantify progress by measurable metrics.  Assuming the technical changes are made, there is still going to be progress needed on the task of socially improving things.  There are many “seasoned active users” - as well as “seasoned lurkers” - who have strong opinions on the state of lesswrong and the discussion.  Some would say that we risk dying of niceness; others would say that the weeds that need pulling are the rudeness.


Honestly we risk over-policing and under-policing at the same time.  There will be some not-niceness that goes unchecked and discourages the growth of future posters (potentially our future bloggers), and at the same time some niceness that tolerates trolling behaviour and fails to weed out bad content, which would leave us as fluffy as the next forum.  There is no easy solution to tempering both sides of this challenge.  I welcome all suggestions (it looks like a karma system is our best bet).


In the meantime I believe we should err on the side of general niceness and steelmanning.  I hope to enlist some members as coaches in healthy forum growth behaviour: good steelmanning, positive encouragement, critical feedback as well as encouragement, a welcoming committee, and an environment of content improvement and growth.


While I want everyone to keep up the heavy debate, I also want to see the best versions of ourselves coming out onto the publishing pages (and sometimes that can be the second-draft versions).


So how will we know?  By reducing the ugh fields around participating in LW, by seeing more content that enough people care about, and by making lesswrong awesome.


The full document is just over 11 pages long.  Please go read it; this is a chance to comment on potential changes before they happen.


Meta: This post took a very long time to pull together.  I read over 1000 comments and considered the ideas contained there.  I don't have an accurate account of how long this took to write, but I would estimate over 65 hours of work have gone into putting it together.  It's been literally weeks in the making; I really can't stress how long I have been trying to put this together.

If you want to help, please speak up so we can help you help us.  If you want to complain, keep it to yourself.

Thanks to the Slack for keeping up with my progress, and to Vaniver, Mack, Leif, matt and others for reviewing this document.

As usual - My table of contents

Link: Evidence-Based Medicine Has Been Hijacked

17 Anders_H 16 March 2016 07:57PM

John Ioannidis has written a very insightful and entertaining article about the current state of the movement which calls itself "Evidence-Based Medicine".  The paper is available ahead of print at http://www.jclinepi.com/article/S0895-4356(16)00147-5/pdf.

As far as I can tell there is currently no paywall; that may change later. Send me an e-mail if you are unable to access it.

Retraction Watch interviews John about the paper here: http://retractionwatch.com/2016/03/16/evidence-based-medicine-has-been-hijacked-a-confession-from-john-ioannidis/

(Full disclosure: John Ioannidis is a co-director of the Meta-Research Innovation Center at Stanford (METRICS), where I am an employee. I am posting this not in an effort to promote METRICS, but because I believe the links will be of interest to the community)

AlphaGo versus Lee Sedol

17 gjm 09 March 2016 12:22PM

There have been a couple of brief discussions of this in the Open Thread, but it seems likely to generate more, so here's a place for it.

The original paper in Nature about AlphaGo.

Google Asia Pacific blog, where results will be posted. DeepMind's YouTube channel, where the games are being live-streamed.

Discussion on Hacker News after AlphaGo's win of the first game.

My new rationality/futurism podcast

15 James_Miller 06 April 2016 05:36PM

I've started a podcast called Future Strategist which will focus on decision making and futurism. I have created seven shows so far: interviews with computer scientist Roman Yampolskiy, LW contributor Gleb Tsipursky, and artist/free speech activist Rachel Haywire, and monologues on game theory and Greek mythology, the Prisoners' Dilemma, the sunk cost fallacy, and the Map and Territory.

 

If you enjoy the show and use iTunes I would be grateful if you left a positive review on iTunes. I would also be grateful for any feedback you might have, including suggestions for future shows. I'm not used to interviewing people and I know that I need to work on being more articulate in my interviews.

 

Abuse of Productivity Systems

15 SquirrelInHell 27 March 2016 05:32AM

Note

I've recorded a short video that explains roughly the same idea, but uses different (and simpler) examples.

(On the premise that I need to practise my spoken English, which has always been lagging behind my writing, as well as the non-trivial skill of talking to dead objects as if they could listen.)

 

Example 1.

Bob's dream had always been to learn French, and to live in France after he retires early from his high-paying management job.

Recently, he used the flashcard program Anki to help him with learning French, and had considerable success with it.

In fact, he has learned French to complete fluency in around a year and a half, and he attributes much of this result to using Anki effectively.

His habit of doing his Anki reviews every day is very strong, and he always does them first thing in the morning without fail.

Now he thinks, "if I could have done it with French, what stops me from learning, like, 10 languages in the next 15 years? It'd be so cool!".

And so, after his daily French workload has dropped significantly, he downloads and imports a huge database of German flashcards.

Pretty soon, he notices that he is losing his motivation to learn every morning.

"What is wrong with me? Am I becoming lazy?", he thinks, and pushes himself to work hard.

Learning gradually becomes more and more unpleasant.

Bob's resentment builds, and soon is too large for him to overcome.

When he finally gives up on Anki altogether, it comes as a huge relief.

 

Example 2.

Sally is very satisfied with how the pomodoro technique helps her with productivity.

She has several projects on which she wants to work, and using pomodoros gives her a well defined framework for time-sharing those projects.

Having a more tangible measure of progress (the number of pomodoros done) provides pleasant reinforcement, and she has reduced her procrastination to negligible levels.

In the meantime, she is considering a move to another city, and wants to look for a new job.

With dismay, she discovers that when it comes to looking for jobs, she is not procrastination-free.

It doesn't fit with her new image of herself as a procrastination-free person.

Sally thinks about the problem, and comes up with a great idea: she is going to use pomodoros to search for jobs!

She decides to spend one of her pomodoros every day to browse job offers on the Internet.

The next day, when she remembers about the plan, she feels slight displeasure and annoyance, but pushes those feelings away quickly.

She sets the pomodoro timer and opens her web browser.

25 minutes later, the timer rings and she realizes that she has procrastinated away most of the pomodoro.

This is the first time it has ever happened to her.

But she keeps up the positive attitude, and tries the second time.

She is able to do a little bit more, but it's still nothing like the concentrated work she had been getting out of her pomodoros before.

 

Questions

What mistakes are Bob and Sally making?

What would you change to turn those mistakes into successes?

(Note: the definition of "success" is broad here. If Bob can decide to not learn German with zero wasted motion, it's a success.)

Is there something in your life that has failed in a similar manner?

To what other domains does this generalize?

[LINK] Why Cryonics Makes Sense - Wait But Why

15 Paul 25 March 2016 11:41AM

Wait But Why published an article on cryonics:

http://waitbutwhy.com/2016/03/cryonics.html

How I infiltrated the Raëlians (and was hugged by their leader)

15 SquirrelInHell 16 March 2016 05:45AM

I was invited by a stranger I met on a plane and actually went to a meeting of Raëlians (known in some LW circles as "the flying saucer cult") in Okinawa (沖縄), Japan. It was right next to Claude Vorilhon's home, and he came himself for the "ceremony" (?) dressed in a theatrical space-y white uniform, complete with a Jewish-style white cap on his head. When giving his "sermon" (?) he spoke in English and his words were translated into Japanese for the benefit of those who didn't understand. And yes, it's true he talked with me briefly and then hugged me (I understand he does this with all newcomers, and it felt 100% fake to me). I then went on to eat lunch in an izakaya (居酒屋, a Japanese pub) with a group of around 15 members, who were all really friendly and pleasant people. I was actually treated to lunch by them, and afterwards someone gave me a ~20 minute ride to the town I wanted to be in, despite knowing they would never see me again.

If you have ever wondered how it is possible that a flying saucer cult has more members than EA, now it's time to learn something.

Note: I hope it's clear that I do not endorse creating cults, nor do I proclaim the EA community's inferiority. It didn't even cross my mind when I wrote the above line that any LWer would take it as a stab they need to defend against. I'm merely pointing to the fact that we can learn from anything, whether it's good or bad, and encouraging a fresh discussion on this now that I've gathered some new data.

Let's do this as a Q&A session (I'm at work now so I can't write a long post).

Please ask questions in comments.

Newsjacking for Rationality and Effective Altruism

15 Gleb_Tsipursky 15 March 2016 09:58PM

Summary: This post describes the steps I took to newsjack a breaking story to promote Rationality and Effective Altruism ideas in an op-ed piece, so that anyone can take similar steps to newsjack a relevant story.

 

Introduction

Newsjacking is the art and science of injecting your ideas into a breaking news story. It should be done as early as possible in the life cycle of a news story for maximum impact for drawing people's attention to your ideas.

 

Some of you may have heard about the Wounded Warrior Project scandal that came to light five days ago or so. This nonprofit that helps wounded veterans had fired its top staff for excessively lavish spending and building Potemkin village-style programs that were showpieces for marketing but did little to help wounded veterans.

 

I scan the news regularly, and was lucky enough to see the story as it was just breaking, on the evening of March 10th. I decided to try to newsjack this story for the sake of Rationality and Effective Altruist ideas. With the help of some timely editing by EA and Rationality enthusiasts other than myself - props to Agnes Vishnevkin, Max Harms, Chase Roycraft, Rhema Hokama, Jacob Bryan, and Yaacov Tarko - TIME just published my piece. This is a big deal, as one of the first news stories people now see when they type "wounded warrior" into Google is a story promoting Rationality and EA-themed ideas. Regarding Rationality proper, I talk about the horns effect and scope neglect, citing Eliezer's piece on it in the post itself, probably the first link to Less Wrong from TIME. Regarding EA, I talked about effective giving, and also EA organizations such as GiveWell, The Life You Can Save, Animal Charity Evaluators, and effective direct-action charities such as Against Malaria Foundation and GiveDirectly. Many people are searching for "wounded warrior" now that the scandal is emerging, and are getting exposure to Rationality and EA ideas.

 

 

Newsjacking a story like this and getting published in TIME may seem difficult, but it's doable. I hope that the story of how I did it and the steps I lay out, as well as the template of the actual article I wrote, will encourage you to try to do so yourself.

 

Specific Steps

 

1) The first step is to be mentally prepared to newsjack a story and be vigilant about scanning the headlines for any story that is relevant to Rationality or EA causes. The story I newsjacked was about a scandal in the nonprofit sector, a kind of breaking news story that occurs at regular intervals. But a news story about mad cow disease spreading from factory farms might be a good opportunity to write about Animal Charity Evaluators, or a news story about the Zika virus might be a good opportunity to write about how we still haven't killed off malaria (hint hint for any potential authors). While those are specifically EA-related, you can inject Rationality into almost any news story by pointing out biases, etc.

 

2) Once you find a story, decide what kind of angle you want to write about, write a great first draft, and get it edited. You are welcome to use my TIME piece as an inspiration and template. I can't stress getting it edited strongly enough; the first draft is always going to be only the first draft. You can get friends to help out, but also tap EA resources such as the EA Editing and Review FB group and the .impact Writing Help Slack channel. You can also get feedback on the LW Open Thread. Get multiple sets of eyes on it, and quickly. Ask more people than you anticipate you need, as some may drop out. For this piece, for example, I wrote it on the morning and early afternoon of Friday, March 11th, and was lucky enough to have 6 people review it by the evening - but 10 people committed to actually reviewing it, so don't rely on everyone to come through.

 

3) Decide what venues you will submit to, and send out the piece to as many appropriate venues as you think are reasonable. Here is an incomplete but pretty good list of places that accept op-eds. When you decide on the venues, write up a pitch which you will use to introduce the article to editors. Your pitch should start by stating that you think the readers of the specific venue will be interested in the piece, so that the editor knows this is not a copy-pasted email but something you customized for them. Then continue with 3-5 sentences summarizing the article's main points and any unique angle you're bringing to it. Your second paragraph should describe your credentials for writing the piece. Here's my successful pitch to TIME:

_______________________________________________________________________________________________

 

Good day, 

 

I think TIME readers will be interested in my timely piece, “Why The Wounded Warrior Fiasco Hurts Everyone (And How To Prevent It).” It analyzes the problems in the nonprofit sector that lead systematically to the kind of situation seen with Wounded Warrior. Unlike other writings on this topic, the article provides a unique angle by relying on neuroscience to clarify these challenges. The piece then gives clear suggestions for how your readers as individual donors can address these kinds of problems and avoid suffering the same kind of grief that Wounded Warrior supporters are dealing with. Finally, it talks about a nascent movement to reform and improve  the nonprofit sector, Effective Altruism. 

 

My expertise for writing the piece comes from my leadership of a nonprofit dedicated to educating people in effective giving,  Intentional Insights. I also serve as a professor at Ohio State, working at the intersection of history, psychology, neuroscience, and altruism, enabling me to have credibility as a scholar of these issues. I have written for many popular venues, such as The Huffington Post, Salon, The Plain Dealer, Alternet, and others, which leads me to believe your readership will enjoy my writing style.


Hope you can use this piece!

 

____________________________________________________________________________________________________


4) I bet I know what at least some of you are thinking: my credentials make it much easier for me to publish in TIME than someone without them. Well, trust me, you can get published somewhere :-) Your hometown paper or university paper is desperately looking for good content about breaking stories, and if you can be the one who provides that content, you can get EA and Rationality ideas out there. Then you can slowly build up a base of publications that will take you to the next level.

Do you think I started with publishing in The Huffington Post? No, I started with my own blog, then guest blogging for other people, then writing op-eds for smaller local venues which I don't even list anymore, and slowly over time got the kind of prominence that leads to being considered for TIME. And it's still a crapshoot even for me: I sent out more than 30 pitches to editors at different prominent venues, and a number turned down the piece, before TIME accepted it. Once it's accepted, you have to let editors at places that prefer original content - which is most op-ed venues - who get back to you and express interest know that your piece has already been published; they may still run it, but likely not. So the fourth step is to be confident in yourself and to try and keep trying, if you feel that this type of writing is a skill through which you can contribute to spreading Rationality/EA.

 

5) There's a fifth step: repurpose your content at venues that allow republication. For instance, I wrote a version of this piece for The Life You Can Save blog, for the Intentional Insights blog, and for The Huffington Post, all of which allow republication of content. Don't let your efforts go to waste :-)

 

Conclusion

 

I hope this step-by-step guide to newsjacking a breaking story for Rationality or EA will encourage you to try it. It's not as hard as it seems, though it requires effort and dedication. It helps to know how to write well for a broad public audience when promoting Rationality and EA ideas, which is what we do at Intentional Insights, so email me at gleb@intentionalinsights.org if you want training in that or to discuss any other aspect of marketing such ideas broadly. You're also welcome to get in touch with me if you'd like editing help on such a newsjacking effort. Good luck spreading these ideas broadly!

 

P.S. To amplify the signal and get more people into EA and Rationality modes of thinking, you are welcome to share the story I wrote for TIME.


Tonic Judo

14 Gram_Stone 02 April 2016 09:19PM

(Content note: This is a story about one of the times that I've applied my understanding of rationality to reduce the severity of an affect-laden situation. This may remind you of Bayesian Judo, because it involves the mental availability and use of basic rationality techniques to perform feats that, although simple to perform in hindsight, leave an impression of surprising effectiveness on those who don't know what is generating the ability to perform the feats. However, I always felt dissatisfied with Bayesian Judo because it seemed dishonest and ultimately unproductive. Rationalists should exude not only auras of formidability, but of compassion. Rest assured that the participants in this story leave mutually satisfied. I haven't read much about cognitive behavioral therapy or nonviolent communication, but this will probably look like that. Consider moving on to something else if what I've described doesn't seem like the sort of thing that would interest you.)

My friend lost his comb, and it was awful. He was in a frenzy for half an hour, searching the entire house, slamming drawers and doors as he went along. He made two phone calls to see if other people had taken his comb without asking. Every once in a while I would hear a curse or a drawn-out grunt of frustration. I kind of couldn't believe it.

It makes more sense if you know him. He has a very big thing about people taking his possessions without asking, and the thing is insensitive to monetary value.

I just hid for a while, but eventually he knocked on my door and said that he 'needed to rant because that was the headspace he was in right now'. So he ranted about some non-comb stuff, and then eventually we got to the point where we mutually acknowledged that he was basically talking at me right now, and not with me, and that he was seriously pissed about that comb. So we started talking for real.

I said, "I can hardly imagine losing any one of my possessions and being as angry as you are right now. I mean, in particular, I never comb or brush my hair, so I can't imagine it in the most concrete possible sense, but even then, I can't imagine anything that I could lose that would make me react that way, except maybe my cellphone or my computer. The only way I can imagine reacting that way is if it was a consistent thing, and someone was consistently overstepping my boundaries by taking my things without asking, however cheap they were. I can't relate to this comb thing."

He said, "It's not about the comb, it's that I hate it when people take my stuff without asking. It really pisses me off. It would be different if I had just lost it, I wouldn't care. It's just like, "Why?" Why would you ever assume anything? Either you're right, and it's fine. Or you're wrong and you seriously messed up. Why would you ever not just ask?"

"Yeah, why?" I said. He didn't say anything.

I asked again, "Why?"

"What do you mean?"

"I mean if you were to really ask the question, non-rhetorically, "Why do people take things without asking?", what would the answer be?"

"Because they're just fundamentally inconsiderate. Maybe they were raised wrong or something."

I kind of smiled because I've tried to get him to notice black boxes in the past. He gets what I'm talking about when I bring it up, so I asked,

"Do you really think that that's what's going on in their heads? 'I'm going to be inconsiderate now.'? Do you really think there's a little 'evilness' node in their brains and that its value is jacked way up?"

"No, they probably don't even notice. They're not thinking they're gonna screw me over, they just never think about me at all. They're gathering things they need, and then they think 'Oh, I need a comb, better take it.' But it's my comb. That might be even worse than them being evil. I wouldn't have used the word 'inconsiderate' if I was talking about them being deliberate, I would have used a different word."

I replied, "Okay, that's an important distinction to make, because I thought of 'inconsiderateness' as purposeful. But I'm still confused, because when I imagine having my things taken because someone is evil, as opposed to having my things taken because someone made a mistake, I imagine being a lot more upset that my things were taken by evil than by chance. It's weird to me because you're experiencing the opposite. Why?"

He said, "It's not about why they took it, it's about the comb. Do you have any idea how much of an inconvenience that is? And if they had just thought about it, it wouldn't have happened. It just really pisses me off that people like that exist in the world. I specifically don't take other people's things. If someone takes your arm, through accident or evil, and they say "I took your arm because I'm a sadistic bastard who wanted to take your arm", or they just take your arm by being reckless and causing a car accident, then it doesn't matter. You'd still be like, "Yeah, and I don't have an arm right now. What do I do with that?""

I looked kind of amused, and said, "But I feel like the arm thing is a bad analogy, because it doesn't really fit the situation with the comb. Imagine if you could also misplace an arm, as you would any other object. That's...hard to imagine concretely. So, I'm still confused because you said before that you wouldn't have been as mad if you had just lost the comb. But now you're saying that you're mostly mad because of the inconvenience of not having the comb. So I don't really get it."

He thought for a minute and said, "Okay, yeah, that doesn't really make sense. I guess...maybe I was trying to look for reasons to get more pissed off about the whole thing and brought up the inconvenience of not having a comb? That was kind of stupid, I guess."

I said, "I really am curious. Please tell me, how much did the comb cost?"

"I got it for free with my shears!" He started laughing half-way through the sentence.

I laughed, and then I got serious again after a beat, and I continued, "And that's my main point. That something that costs so little and that wouldn't have riled you up if it wasn't so likely that it had been taken rather than misplaced, stresses both of us out on a Friday night, a time during which we've historically enjoyed ourselves. When the world randomly strikes at us and it's over before we can do anything, I feel like the only thing left to control is our reaction. It's not that people should never feel or express anger, or even that they shouldn't yell or slam things every once in a while, but that to keep it up for a long time or on a regular basis just seems like a cost with no benefit. And I don't want to sit in here suffering because I know one of my friends is suffering, unable to forget that all of this began with a missing comb, something that I would literally be willing to pay to replace. But that wouldn't have worked. And once again, this is not the same as someone stealing something extremely valuable or consistently violating your personal boundaries."

He sighed. And then he said somberly,

"I just wish...that I lived in a world where my cup runneth over with comb." And we both laughed. And the tension was gone.

Happy Notice Your Surprise Day!

14 Vaniver 01 April 2016 01:02PM

One of the most powerful rationalist techniques is noticing your surprise.

It ties in to several deep issues. One of them relates to one of my favorite LW comments  (the second highest upvoted one in Main):

One of the things that I've noticed about this is that most people do not expect to understand things. For most people, the universe is a mysterious place filled with random events beyond their ability to comprehend or control. Think "guessing the teacher's password", but not just in school or knowledge, but about everything.

Such people have no problem with the idea of magic, because everything is magic to them, even science.

--pjeby

For the universe to make sense to you, you have to have a model; for that model to be useful, you have to notice what it says, and then you need to act on it. I've done many things the wrong way in my life, but the ones I remember as mistakes are the ones where some part of me *knew* it was a problem, and instead of having a discussion with that part of me, I just ignored it and marched on.

It is good to notice your surprise. But that's only the first step.

--Douglas_Knight

 

So any stories, of tricks you noticed, didn't notice, or successfully pulled?

Room For More Funding In AI Safety Is Highly Uncertain

12 Evan_Gaensbauer 12 May 2016 01:57PM

(Crossposted to the Effective Altruism Forum)


Introduction

In effective altruism, people talk about the room for more funding (RFMF) of various organizations. RFMF is simply the maximum amount of money which can be donated to an organization, and be put to good use, right now. In most cases, “right now” refers to the next (fiscal) year.  Most of the time when I see the phrase invoked, it’s to talk about individual charities, for example, one of GiveWell’s top-recommended charities. If a charity has run out of room for more funding, it may be typical for effective donors to seek the next best option to donate to.
Last year, the Future of Life Institute (FLI) made the first of its grants from the pool of money it’s received as donations from Elon Musk and the Open Philanthropy Project (Open Phil). Since then, I've heard a few people speculating about how much RFMF the whole AI safety community has in general. I don't think that's a sensible question to ask before we have a sense of what the 'AI safety' field is. Before, people were commenting only on the RFMF of individual charities, and now they’re commenting on entire fields as though they’re well-defined. AI safety hasn’t necessarily reached peak RFMF just because MIRI has a runway for one more year to operate at their current capacity, or because FLI made a limited number of grants this year.

Overview of Current Funding For Some Projects


The starting point I used to think about this issue came from Topher Hallquist, from his post explaining his 2015 donations:

I’m feeling pretty cautious right now about donating to organizations focused on existential risk, especially after Elon Musk’s $10 million donation to the Future of Life Institute. Musk’s donation don’t necessarily mean there’s no room for more funding, but it certainly does mean that room for more funding is harder to find than it used to be. Furthermore, it’s difficult to evaluate the effectiveness of efforts in this space, so I think there’s a strong case for waiting to see what comes of this infusion of cash before committing more money.


My friend Andrew and I were discussing this last week. In past years, the Machine Intelligence Research Institute (MIRI) has raised about $1 million (USD) in funds, and it received more than that for its annual operations last year. Going into 2016, Nate Soares, Executive Director of MIRI, wrote the following:

Our successful summer fundraiser has helped determine how ambitious we’re making our plans; although we may still slow down or accelerate our growth based on our fundraising performance, our current plans assume a budget of roughly $1,825,000 per year [emphasis not added].


This seems sensible to me as it's not too much more than what they raised last year, and it seems more and not less money will be flowing into AI safety in the near future. However, Nate also had plans for how MIRI could've productively spent up to $6 million last year, to grow the organization. So, far from MIRI believing it had all the funding it could use, it was seeking more. Of course, others might argue MIRI or other AI safety organizations already receive enough funding relative to other priorities, but that is an argument for a different time.

Andrew and I also talked about how, had FLI had enough funding to grant money to all the promising applicants for its 2015 grants in AI safety research, that would have been millions more flowing into AI safety. It’s true what Topher wrote: that, being outside of FLI, and not otherwise being a major donor, it may be exceedingly difficult for individuals to evaluate funding gaps in AI safety. While FLI has only received $11 million to grant in 2015-16 ($6 million already granted in 2015, with $5 million more to be granted in the coming year), they could easily have granted more than twice that much, had they received the money.

To speak to other organizations: Niel Bowerman, Assistant Director at the Future of Humanity Institute (FHI), recently spoke about how FHI receives most of its funding exclusively for research, and bottlenecks like the operations he runs depend more on private donations, of which FHI could use more. Seán Ó hÉigeartaigh, Executive Director at the Centre for the Study of Existential Risk (CSER) at Cambridge University, recently stated in discussion that CSER and the Leverhulme Centre for the Future of Intelligence (CFI), which CSER is currently helping launch, face the same problem with their operations. Nick Bostrom, author of Superintelligence and Director of FHI, is in the course of launching the Strategic Artificial Intelligence Research Centre (SAIRC), which received $1.5 million (USD) in funding from FLI. SAIRC seems good for funding for at least the rest of 2016.

 


The Big Picture
Above are the funding summaries for several organizations listed in Andrew Critch’s 2015 map of the existential risk reduction ecosystem. There are organizations working on existential risks other than those from AI, but they aren’t explicitly organized in a network the same way AI safety organizations are. So, in practice, the ‘x-risk ecosystem’ is mappable almost exclusively in terms of AI safety.

It seems to me the 'AI safety field', if defined just as the organizations and projects listed in Dr. Critch’s ecosystem map, and perhaps others closely related (e.g., AI Impacts), could have productively absorbed between $10 million and $25 million in 2016 alone. Of course, there are caveats rendering this a conservative estimate. First of all, the above is a contrived version of the AI safety "field", as there is plenty of research outside of this network popping up all the time. Second, I think the organizations and projects I listed above could've themselves thought of more uses for funding. Seeing as they're working on what is (presumably) the most important problem in the world, there is much millions more could do for foundational research on the AGI containment/control problem, safety research into narrow systems aside.


Too Much Variance in Estimates for RFMF in AI Safety

I've also heard people setting the benchmark for truly appropriate funding for AI safety in the ballpark of a trillion dollars. While in theory that may be true, on its face it currently seems absurd. I'm not saying there won't be a time, even in the next several years, when $1 trillion/year could be used effectively. I'm saying that if there isn't a roadmap for how to scale the productive use of funding from ~$10 million/year up through $100 million or $1 billion per year, talking about $1 trillion/year isn't practical. I don't even think there will be more than $1 billion on the table per year for the near future.

This argument can be used to justify continued earning to give on the part of effective altruists. That is, there is so much money that, e.g., MIRI could use, it makes sense for everyone who isn't an AI researcher to earn to give. This might make sense if governments and universities gave major funding to what they think is AI safety, gave 99% of it to robotic unemployment or something, missed the boat on the control problem, and MIRI got a pittance of the money flowing into the field. Even so, the idea that there is effectively something like a multi-trillion dollar ceiling for effective funding for AI safety is still unsound.

When estimates of RFMF for AI safety range between $5-10 million (the amount of funding AI safety received in 2015) and $1 trillion, I feel like anyone not already well within the AI safety community cannot reasonably estimate how much money the field can productively use in one year.
On the other hand, there are also people who think that AI safety doesn’t need to be a big priority, or is currently as big a priority as it needs to be, so money spent funding AI safety research and strategy would be better spent elsewhere.

All this stated, I myself don’t have a precise estimate of how much capacity for funding the whole AI safety field will have in, say, 2017.

Reasonable Assumptions Going Forward

What I'm confident saying right now is:

  1. The amount of money AI safety could've productively used in 2016 alone is within an order of magnitude of $10 million, and probably less than $25 million, based on what I currently know.
  2. The amount of total funding available will likely increase year over year for the next several years. There could be quite dramatic rises. The Open Philanthropy Project, worth $10+ billion (USD), recently announced AI safety will be their top priority next year, although this may not necessarily translate into more major grants in the next 12 months. The White House recently announced they’ll be hosting workshops on the Future of Artificial Intelligence, including concerns over risk. Also, to quote Stuart Russell (HT Luke Muehlhauser): "Industry [has probably invested] more in the last 5 years than governments have invested since the beginning of the field [in the 1950s]." This includes companies like Facebook, Baidu, and Google each investing tons of money into AI research, including Google’s purchase of DeepMind for $500 million in 2014. With an increasing number of universities and corporations investing money and talent into AI research, including AI safety, and now with major philanthropic foundations and governments paying attention to AI safety as well, it seems plausible the amount of funding for AI safety worldwide might balloon up to $100+ million in 2017 or 2018. However, this could just as easily not happen, and there's much uncertainty in projecting this.
  3. The field of AI safety will also grow year over year for the next several years. I doubt projects needing funding will grow as fast as the amount of funding available. This is because the rate at which institutions are willing to invest in growth will depend not only on how much money they're receiving now, but on how much they can expect to receive in the future. Since those expectations are so uncertain, organizations are smart to be conservative and hold their cards close to their chest. While OpenAI has pledged $1 billion for funding AI research in general, and not just safety, over the next couple decades, nobody knows if such funding will be available to organizations out of Oxford or Berkeley like AI Impacts, MIRI, FHI, or CFI. However:

 

  • i) increased awareness and concern over AI safety will draw in more researchers.
  • ii) the promise or expectation of more money to come may draw in more researchers seeking funding.
  • iii) the expanding field and the increased funding available will create a feedback loop in which institutions in AI safety, such as MIRI, make contingency plans to expand faster, if able to or need be.

Why This Matters

I don't mean to use the amount of funding AI safety has received in 2015 or 2016 as an anchor which will bias how much RFMF I think the field has. However, the more extreme lower and upper estimates I’ve encountered seem baseless, and either vastly underestimate or vastly overestimate how much the field of AI safety can productively grow each year. This is actually important to figure out.

80,000 Hours rates AI safety as perhaps the most important and neglected cause currently prioritized by the effective altruism movement. Consequently, 80,000 Hours recommends how similarly concerned people can work on the issue. Some talented computer scientists who could do their best work in AI safety might opt to earn to give in software engineering or data science if they conclude the bottleneck on AI safety isn’t talent but funding. Alternatively, a small but critical organization which requires funding from value-aligned and consistent donors might fall through the cracks if too many people conclude that AI safety work in general is receiving sufficient funding and choose to forgo donating to it. Many of us could make individual decisions going either way, but it also seems many of us could end up making the wrong choice. Assessments of these issues will practically inform decisions many of us make over the next few years, determining how much of our time and potential we use fruitfully, or waste.

Everything above just lays out how estimating room for more funding in AI safety overall may be harder than anticipated, and shows how high the variance might be. I invite you to contribute to this discussion, as it is only just starting. Please use the above info as a starting point to look into this more, or ask questions that will usefully clarify what we’re thinking about. The best fora for further discussion seem to be the Effective Altruism Forum, LessWrong, or the AI Safety Discussion group on Facebook, where I initiated the conversation leading to this post.

Common Misconceptions about Dual Process Theories of Human Reasoning

12 Gram_Stone 19 March 2016 09:50PM

(This is mostly a summary of Evans (2012); the fifth misconception mentioned is original research, although I have high confidence in it.)

It seems that dual process theories of reasoning are often underspecified within the rationalist community, so I will review some common misconceptions about these theories in order to ensure that everyone's beliefs about them are compatible. Briefly, the key distinction (and it seems, the distinction that implies the fewest assumptions) is the amount of demand that a given process places on working memory.

(And if you imagine what you actually use working memory for, then a consequence of this is that Type 2 processing always has a quality of 'cognitive decoupling' or 'counterfactual reasoning' or 'imagining of ways that things could be different', dynamically changing representations that remain static in Type 1 processing; the difference between a cached and non-cached thought, if you will. When you are transforming a Rubik's cube in working memory so that you don't have to transform it physically, this is an example of the kind of thing that I'm talking about from the outside.)

The first common confusion is that Type 1 and Type 2 refer to specific algorithms or systems within the human brain. It is a much stronger proposition, and not a widely accepted one, to assert that the two types of cognition refer to particular systems or algorithms within the human brain, as opposed to particular properties of information processing that we may identify with many different algorithms in the brain, characterized by the degree to which they place a demand on working memory.

The second and third common confusions, and perhaps the most widespread, are the assumptions that Type 1 processes and Type 2 processes can be reliably distinguished, if not defined, by their speed and/or accuracy. The easiest way to reject this is to say that the mistake of entering a quickly retrieved, unreliable input into a deliberative, reliable algorithm is not the same mistake as entering a quickly retrieved, reliable input into a deliberative, unreliable algorithm. To make a deliberative judgment based on a mere unreliable feeling is a different mistake from experiencing a reliable feeling and arriving at an incorrect conclusion through an error in deliberative judgment. It also seems easier to argue about the semantics of the 'inputs', 'outputs', and 'accuracy' of algorithms running on wetware, than it is to argue about the semantics of their demand on working memory and the life outcomes of the brains that execute them.

The fourth common confusion is that Type 1 processes involve 'intuitions' or 'naivety' and Type 2 processes involve thought about abstract concepts. You might describe a fast-and-loose rule that you made up as a 'heuristic' and naively think that it is thus a 'System 1 process', but it would still be the case that you invented that rule by deliberative means, and thus by means of a Type 2 process. When you applied the rule in the future it would be by means of a deliberative process that placed a demand on working memory, not by some behavior that is based on association or procedural memory, as if by habit. (Which is also not the same as making an association or performing a procedure that entails you choosing to use the deliberative rule, or finding a way to produce the same behavior that the deliberative rule originally produced by developing some sort of habit or procedural skill.) When facing novel situations, it is often the case that one must forego association and procedure and thus use Type 2 processes, and this can make it appear as though the key distinction is abstractness, but this is only because there are often no clear associations to be made or procedures to be performed in novel situations. Abstractness is not a necessary condition for Type 2 processes.

The fifth common confusion is the idea that Type 2 processing is defined by the use of language. Although language is often involved in Type 2 processing, this is likely a mere correlate of the processes by which we store and manipulate information in working memory, and not the defining characteristic per se. To elaborate, we are widely believed to store and manipulate auditory information in working memory by means of a 'phonological store' and an 'articulatory loop', and to store and manipulate visual information by means of a 'visuospatial sketchpad', so we may also consider the storage and processing in working memory of non-linguistic information in auditory or visuospatial form, such as musical tones, or mathematical symbols, or the possible transformations of a Rubik's cube, for example. The linguistic quality of much of the information that we store and manipulate in working memory is probably noncentral to a general account of the nature of Type 2 processes. Conversely, it is obvious that the production and comprehension of language is often an associative or procedural process, not a deliberative one. Otherwise you might still be parsing the first sentence of this article.

Collaborative Truth-Seeking

11 Gleb_Tsipursky 04 May 2016 11:28PM

Summary: We frequently use debates to resolve different opinions about the truth. However, debates are not always the best course for figuring out the truth. In some situations, the technique of collaborative truth-seeking may be more optimal.

 

Acknowledgments: Thanks to Pete Michaud, Michael Dickens, Denis Drescher, Claire Zabel, Boris Yakubchik, Szun S. Tay, Alfredo Parra, Michael Estes, Aaron Thoma, Alex Weissenfels, Peter Livingstone, Jacob Bryan, Roy Wallace, and other readers who prefer to remain anonymous for providing feedback on this post. The author takes full responsibility for all opinions expressed here and any mistakes or oversights.

 

The Problem with Debates

 

Aspiring rationalists generally aim to figure out the truth, and often disagree about it. The usual method of hashing out such disagreements in order to discover the truth is through debates, in person or online.

 

Yet more often than not, people on opposing sides of a debate end up seeking to persuade rather than prioritizing truth discovery. Indeed, research suggests that debates have a specific evolutionary function – not to discover the truth, but to ensure that our perspective prevails within a tribal social context. No wonder debates are so often compared to wars.

 

We may hope that, as aspiring rationalists, we would strive to discover the truth during debates. Yet given that we are not always fully rational and strategic in our social engagements, it is easy to slip into debate mode and orient toward winning instead of uncovering the truth. Heck, I know that I sometimes forget in the midst of a heated debate that I may be the one who is wrong – I'd be surprised if this didn't happen to you. So while we should certainly continue to engage in debates, we should also use additional strategies – less natural and intuitive ones. These strategies could put us in a better mindset for updating our beliefs and improving our perspective on the truth. One such solution is a mode of engagement called collaborative truth-seeking.


Collaborative Truth-Seeking

 

Collaborative truth-seeking is one way of describing a more intentional approach in which two or more people with different opinions engage in a process that focuses on finding out the truth. Collaborative truth-seeking is a modality that should be used among people with shared goals and a shared sense of trust.

 

Some important features of collaborative truth-seeking, which are often not present in debates, are: focusing on a desire to change one's own mind toward the truth; a curious attitude; being sensitive to others' emotions; striving to avoid arousing emotions that will hinder updating beliefs and truth discovery; and a trust that all other participants are doing the same. These can contribute to increased social sensitivity, which, together with other attributes, correlates with higher group performance on a variety of activities.

 

The process of collaborative truth-seeking starts with establishing trust, which will help increase social sensitivity, lower barriers to updating beliefs, increase willingness to be vulnerable, and calm emotional arousal. The following techniques are helpful for establishing trust in collaborative truth-seeking:

  • Share weaknesses and uncertainties in your own position

  • Share your biases about your position

  • Share your social context and background as relevant to the discussion

    • For instance, I grew up poor after my family immigrated to the US when I was 10, and this naturally influences me to care about poverty more than some other issues, and to have some biases around it - this is one reason I prioritize poverty in my Effective Altruism engagement

  • Vocalize curiosity and the desire to learn

  • Ask the other person to call you out if they think you're getting emotional or engaging in emotive debate instead of collaborative truth-seeking, and consider using a safe word



Here are additional techniques that can help you stay in collaborative truth-seeking mode after establishing trust:

  • Self-signal: signal to yourself that you want to engage in collaborative truth-seeking, instead of debating

  • Empathize: try to empathize with the perspective that you do not hold, by considering where the other person's viewpoint came from, why they think what they do, and recognizing that they feel their viewpoint is correct

  • Keep calm: be prepared to manage your own emotions, and to calm those of the people you engage with, when a desire for debate arises

    • watch out for defensiveness and aggressiveness in particular

  • Go slow: take the time to listen fully and think fully

  • Consider pausing: if complex thoughts or emotions come up that you can't deal with in the moment, give yourself an escape route by pausing and picking up the discussion later

    • say “I will take some time to think about this,” and/or write things down

  • Echo: paraphrase the other person’s position to indicate and check whether you’ve fully understood their thoughts

  • Be open: orient toward improving the other person’s points to argue against their strongest form

  • Stay the course: be passionate about wanting to update your beliefs, maintain the most truthful perspective, and adopt the best evidence and arguments, no matter whether they are yours or those of others

  • Be diplomatic: when you think the other person is wrong, strive to avoid saying "you're wrong because of X" but instead to use questions, such as "what do you think X implies about your argument?"

  • Be specific and concrete: go down levels of abstraction

  • Be clear: make sure the semantics are clear to all by defining terms

  • Be probabilistic: use probabilistic thinking and probabilistic language, to help get at the extent of disagreement and be as specific and concrete as possible

    • For instance, avoid saying that X is absolutely true, but say that you think there's an 80% chance it's the true position

    • Consider adding what evidence and reasoning led you to believe so, for both you and the other participants to examine this chain of thought

  • When people whose perspective you respect fail to update their beliefs in response to your clear chain of reasoning and evidence, update somewhat toward their position, since that is evidence that your position is not very convincing

  • Confirm your sources: look up information when it's possible to do so (Google is your friend)

  • Charity mode: strive to be more charitable to others and their expertise than seems intuitive to you

  • Use the reversal test to check for status quo bias

    • If you are discussing whether to change some specific numeric parameter (say, increasing by 50% the money donated to charity X), state the reverse of your position, for example decreasing the amount of money donated to charity X by 50%, and see how that impacts your perspective

  • Use CFAR’s double crux technique

    • In this technique, two parties who hold different positions on an argument each write down the fundamental reason for their position (the crux of their position). This reason has to be the key one, so that if it were proven incorrect, each party would change their perspective. Then look for experiments that can test the crux. Repeat as needed. If a person identifies more than one reason as crucial, you can go through each in turn. More details are here. (A toy sketch of this loop appears just after this list.)
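For the procedurally minded, here is a toy sketch of the double crux loop in Python. Everything in it is my illustrative assumption, not CFAR's specification: the function names, the hard-coded verdicts, and the example claims are invented, and in a real session a 'test' is an observation or experiment, not a function call.

def double_crux(cruxes, test_claim):
    """cruxes maps each participant to the claim they would change their
    mind about if it were shown false. test_claim returns True/False
    evidence for a claim (a stand-in here; in reality, an experiment)."""
    for person, claim in cruxes.items():
        if test_claim(claim):
            print(f"{person}'s crux survived: {claim}")
        else:
            print(f"{person}'s crux failed; {person} should update.")
        # Repeat with the next-most-fundamental crux as needed.

# Stand-in usage with hard-coded verdicts, purely for illustration:
verdicts = {
    "Increasing donations to charity X by 50% increases impact": True,
    "Charity X cannot absorb more funding effectively": False,
}
double_crux(
    {"Alice": "Increasing donations to charity X by 50% increases impact",
     "Bob": "Charity X cannot absorb more funding effectively"},
    test_claim=lambda claim: verdicts[claim],
)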


Of course, not all of these techniques are necessary for high-quality collaborative truth-seeking. Some are easier than others, and different techniques apply better to different kinds of truth-seeking discussions. You can apply some of these techniques during debates as well, such as double crux and the reversal test. Try some out and see how they work for you.


Conclusion

 

Engaging in collaborative truth-seeking goes against our natural impulse to win in a debate, and is thus more cognitively costly. It also tends to take more time and effort than just debating, and it is easy to slip back into debate mode even while using collaborative truth-seeking, because debate mode is the more intuitive one.

 

Moreover, collaborative truth-seeking need not replace debates at all times. This non-intuitive mode of engagement can be chosen when discussing issues that relate to deeply-held beliefs and/or ones that risk emotional triggering for the people involved. Because of my own background, I would prefer to discuss poverty in collaborative truth-seeking mode rather than debate mode, for example. On such issues, collaborative truth-seeking can provide a shortcut to resolution, in comparison to protracted, tiring, and emotionally challenging debates. On the other hand, using collaborative truth-seeking to resolve differing opinions on all issues holds the danger of creating a community oriented excessively toward sensitivity to the perspectives of others, which might result in important issues not being discussed candidly. After all, research shows the importance of having disagreement in order to make wise decisions and to figure out the truth. Of course, collaborative truth-seeking is well suited to expressing disagreements in a sensitive way, so if used appropriately, it might permit even people with triggers around certain topics to express their opinions.

 

Taking these caveats into consideration, collaborative truth-seeking is a great tool to use to discover the truth and to update our beliefs, as it can get past the high emotional barriers to altering our perspectives that have been put up by evolution. Rationality venues are natural places to try out collaborative truth-seeking.

 

 

 

[Link] White House announces a series of workshops on AI, expresses interest in safety

11 AspiringRationalist 04 May 2016 02:50AM

Geometric Bayesian Update

11 SquirrelInHell 09 April 2016 07:24AM

 

Today, I present to you Bayes' theorem like you have never seen it before.

Take a moment to think: how would you calculate a Bayesian update using only basic geometry? I.e., you are given (as line segments) a prior P(H), and also P(E | H) and P(E | ~H) (or their ratio). How do you get P(H | E) only by drawing straight lines on paper?

Can you think of a way that would be possible to implement using a simple mechanical instrument?
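A quick numerical reference before the geometry: here is the arithmetic form of the update as a minimal Python sketch, for checking a construction against. The function name is my own invention for illustration; the puzzle, of course, is to get the same number using straight lines alone.

def bayes_update(p_h, p_e_given_h, p_e_given_not_h):
    """Return P(H|E) given the prior P(H) and the two likelihoods."""
    joint_h = p_e_given_h * p_h                 # P(E and H)
    joint_not_h = p_e_given_not_h * (1 - p_h)   # P(E and ~H)
    return joint_h / (joint_h + joint_not_h)

# Example: a prior of 0.5 with a 3:1 likelihood ratio in favour of H.
print(bayes_update(0.5, 0.75, 0.25))  # prints 0.75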

It just so happens that today I noticed a very neat way to do this.

Have fun with this GeoGebra worksheet.

And here's a static image version if the live demo doesn't work for you:

 


Your math homework is to find a proof that this is indeed correct.

Hint: Vg'f cbffvoyr gb qb guvf ryrtnagyl naq jvgubhg nal pnyphyngvbaf, whfg ol ybbxvat ng engvbf bs nernf bs inevbhf gevnatyrf.

Please post answers in rot13, so that you don't spoil the fun for others who want to try.


Edit: For reference, here's a pictograph version of the diagram that came up later as a follow-up to this comment.

Consider having sparse insides

11 AnnaSalamon 01 April 2016 12:07AM

It's easier to seek true beliefs if you keep your (epistemic) identity small. (E.g., if you avoid beliefs like "I am a democrat", and say only "I am a seeker of accurate world-models, whatever those turn out to be".)

It seems analogously easier to seek effective internal architectures if you also keep non-epistemic parts of your identity small -- not "I am a person who enjoys nature", nor "I am someone who values mathematics" nor "I am a person who aims to become good at email" but only "I am a person who aims to be effective, whatever that turns out to entail (and who is willing to let much of my identity burn in the process)".

There are obviously hazards as well as upsides that come with this; still, the upsides seem worth putting out there.

The two biggest exceptions I would personally make, which seem to mitigate the downsides: "I am a person who keeps promises" and "I am a person who is loyal to [small set of people] and who can be relied upon to cooperate more broadly -- whatever that turns out to entail".

 

Thoughts welcome.

On Making Things

11 Gram_Stone 05 March 2016 03:26AM

(Content note: This is basically just a story about how I accidentally briefly made something that I find very unfun into something very fun, for the sake of illustrating how surprising it was and how cool it would be if everyone could do things like this more often and deliberately. You also might get a kick out of this story in the way that you might get a kick out of How It's Made, or many of Swimmer963's posts on swimming and nursing, or Elo's post on wearing magnetic rings. If none of that interests you, then you might consider backing out now.)

I'm learning math under the tutelage of a friend, and I go through a lot of paper. I write a lot of proofs, so there are plenty of false starts. I could fill a whole sheet of paper, decide that I only need one result to continue on my way, and switch to a blank sheet. Since this is how I go about it, I thought that a whiteboard would be a really good idea. The solution is greater surface area and practical erasure.

I checked Amazon; whiteboards are one of those products with polarized reviews. I secretly wondered whether ten percent of all whiteboards manufactured just immediately and permanently stain. Maybe I was being a little risk-averse, but I decided to hold off on buying one.

Then I remembered that I make signs for a living, and I realized that I could probably just make a whiteboard myself.

I had a good rapport with my supervisor. I had breaks and lunch time, and the boundaries were kind of fuzzy, so the time wouldn't be an issue. I didn't have to print anything, so I wouldn't be taking up time on the printers or using ink.

Maybe everyone knows what 'vinyl' is and I don't need to explain this, but the stuff that 'PVC pipes' (PVC stands for polyvinyl chloride) are made out of can be formed into thin elastic sheets. Manufacturers apply adhesive and paper backing to these sheets and sell them to people so they can pull off the paper and stick the vinyl to stuff. You can print on some of it too. It comes on long rolls, typically 54 in. or 60 in., sort of like tape or paper towels. If you ever see a vehicle that belongs to a business with all sorts of art all over it, then it's probably printed on vinyl.

It's kind of hard to print on a really short roll without everything going horribly awry, so we have tons of rolls with like 10 ft. by 54 in. sheets on them that just get thrown away.

If you scratch a vinyl print, the ink will come right off. So we laminate the vinyl before we apply it. Most of our products are laminated with a laminate by the enigmatic name of '8518', but today we happened to be using a very particular and rarely used dry erase laminate. So naturally I ran one of those extra sheets of vinyl through the laminator after I finished the job that I was really supposed to be doing.

And we keep these things called 'drops', which are just sheets of substrate material, stuff that you might apply vinyl to or print on, that were cut off from other things that were made into signs, and then never touched again. Sometimes you can make a sign out of one. People forget about them and don't like to use them because they're usually dirtier and more damaged than stock substrate, so we have a ton of them. It might be corrugated plastic (like cardboard, but plastic), or foamboard (two pieces of paper glued to a sheet of foam), or much thicker, non-elastic PVC.

And this is when I started to think that this was becoming a kind of important experience.

I looked at the drops lined up on the shelf. I definitely didn't want to use foamboard; it's extremely fragile, you can't pull the vinyl off if you mess up, it would dent when I pressed too hard with the marker, and it most generally sucks in every way possible except cost. Corrugated plastic is also quite fragile, and it has linear indentations between the flutes that vinyl would conform to; I wanted the board to be flat. PVC is a better alternative than both, but drops can sit for a long time, and large sheets of PVC warp under their own weight; I wanted a relatively large board and I didn't want it to be warped. So I went for a product that we refer to as 'MaxMetal'; two sheets of aluminum sandwiched around a thicker sheet of plastic. It's much harder to warp, and I could be confident that it would be a solid writing surface. PVC is solid, but it's not metal.

I was looking through the MaxMetal drops, trying to find the right one, realizing that I hadn't decided what dimensions I wanted the board to be, and I felt a little jump in my chest. That was me finally noticing how much fun I was having. And immediately after that, I realized that even though I had implicitly expected to do everything that I had done, I was surprised at how much fun I was having. I had failed to predict how much fun I would have doing those things. It seemed like something worth fixing.

I finally chose a precisely cut piece that was approximately 30 in. wide by 24 in. high. And then I made the board. I separated some of the vinyl from the backing, and I cut off a strip of backing, and I applied part of the vinyl sheet to one edge of the board. I put the end of the sheet with the strip of stuck vinyl between two mechanical rollers, left the substrate flat, flipped the vinyl sheet over the top of the machine and past the top of the substrate sheet, pulled up more of the backing, and rolled it through to press the two sheets together while I pulled the backing off of the vinyl. I put the product on a table, turned it upside down, cut off the excess vinyl with my trusty utility knife, and rounded the corners off by half an inch for safety and aesthetics. I took an orange Expo marker to it, and made a giant signature, and it worked. A microfiber rag erased it just fine even after letting it sit for half an hour. I cut off some super heavy duty, I-promise-this-is-safe double-sided tape, rolled it up, and took it home, so I could mount the board to my bedroom wall. I made a pretty snazzy whiteboard for myself. It was cool.

There probably aren't a lot of signmakers on LessWrong, but there are a lot of programmers. I don't see them talk about this experience a lot, but I figure it's pretty similar: what it feels like to use something that you made, or to watch it work. And I'm sure there are other people with other things.

But it seems worth saying explicitly, "Maybe you should make stuff because it's fun."

That was my main explanation for how fun it was, for a while. But there were a lot of other things when I thought about it more.

I technically had to solve problems, but they were relatively simple and rewarding to solve.

It felt a little forbidden, doing something creative for yourself at work when you're really only there to stay alive. Even a lame taboo is usually a nice kick.

And my time was taken up by responsibility; I was doing real work between all of those steps, so I could look forward to the next step in the creation process while doing something that I normally drag myself through. The day flew by when I started making that thing. When could I fit in some time for my whiteboard?

And it was fun because the meta-event was interesting; I never thought that I could do exactly the same work activity, and a small context change would change it from boring, old work to fun. I was laminating vinyl and fetching drops and rounding corners, but it wasn't for a vehicle wrap, or a sign, or a magnet; it was for my whiteboard, and that changed everything. I was glad that I noticed that, and hopeful that I could find a way to deliberately apply it in the future.

And I was using non-universal, in-demand skills that many people could acquire, but not instantly. It was cool to feel like I was being resourceful in a very particular way that most people never would.

And there weren't too many choices, and the choices weren't ambiguous. The dimensions of the board, including thickness, were limited to the dimensions of the drops, and I'd have to make very precise cuts through a hard material if I wanted a board that wasn't the size of an existing one. A whiteboard is mostly a plain white surface, there isn't much design to be done. I only had quarter-inch and half-inch corner rounders; it's one of those or square corners. What if I had more choices, either about the design of the board, or in a different domain with way more choices by default? I might be a human and regret every choice that I actually make because all of those other foregone choices combined are so much more salient.

And it seems helpful that the whiteboard was being made for a noble purpose: so that I could conserve paper and continue to study mathematics at the same time, and do so much more conveniently. I think it would have been less fun if I was making a whiteboard so that I could see what it's like to snap a whiteboard in half with cinder blocks and a bowling ball, or if I was making one because I just thought it would be cool to have one.

And instead of paying $30-$50, I paid nothing. It felt like I won.

I've thought for quite a while, but not on this level, that there should be an applied fun theory; it seems a bit strange not to go further with the idea that you could find deliberate ways to make your world more fun, and try to make the present more fun, as opposed to just the distant future. And not in the way where you critically examine the suggestions that people usually generate when you ask for a list of activities that are popularly considered fun, but in the way where you predict that things are fun because you understand how fun works, and your predictions come true. Hopefully I offered up something interesting with respect to that line of inquiry.

But of course, fun seems like just the sort of thing that you could easily overthink. At the very least it's not the sort of domain where you want deep theories that don't generate practical advice for too long. But I still think it seems worth thinking about.

AIFoom Debate - conclusion?

11 Bound_up 04 March 2016 08:33PM

I've been going through the AIFoom debate, and both sides make sense to me. I intend to continue, but I'm wondering if there are already insights in LW culture I can get if I just ask for them.

 

My understanding is as follows:

 

The difference between a chimp and a human is only 5 million years of evolution. That's not time enough for many changes.

 

Eliezer takes this as proof that the difference between the two in the brain architecture can't be much. Thus, you can have a chimp-intelligent AI that doesn't do much, and then with some very small changes, suddenly get a human-intelligent AI and FOOM!

 

Robin takes the 5-million year gap as proof that the significant difference between chimps and humans is only partly in the brain architecture. Evolution simply can't be responsible for most of the relevant difference; the difference must be elsewhere.

So he concludes that when our ancestors got smart enough for language, culture became a thing. Our species stumbled across various little insights into life, and these got passed on. An increasingly massive base of cultural content, made of very many small improvements, is largely responsible for the difference between chimps and humans.

Culture assimilated new information into humans much faster than evolution could.

So he concludes that you can get a chimp-level AI, and to get up to human-level will take, not a very few insights, but a very great many, each one slowly improving the computer's intelligence. So no Foom, it'll be a gradual thing.

 

So I think I've figured out the question. Is there a commonly known answer, or are there insights towards the same?

Improving long-run civilisational robustness

10 RyanCarey 10 May 2016 11:15AM

People trying to guard civilisation against catastrophe usually focus on one specific kind of catastrophe at a time. This can be useful for building concrete knowledge with some certainty, so that others can build on it. However, there are disadvantages to this catastrophe-specific approach:

1. Catastrophe researchers (including Anders Sandberg and Nick Bostrom) think that there are substantial risks from catastrophes that have not yet been anticipated. Resilience-boosting measures may mitigate risks that have not yet been investigated.

2. Thinking about resilience measures in general may suggest new mitigation ideas that were missed by the catastrophe-specific approach.

One analogy for this is that an intrusion (or hack) into a software system can arise from a combination of many minor security failures, each of which might appear innocuous in isolation. You can decrease the chance of an intrusion by adding extra security measures, even without a specific idea of what kind of hacking would be performed. Things like being able to power down and reboot a system, storing a backup and being able to run it in a "safe" offline mode are all standard resilience measures for software systems. These measures aren't necessarily the first thing that would come to mind if you were trying to model a specific risk like a password getting stolen, or a hacker subverting administrative privileges, although they would be very useful in those cases. So mitigating risk doesn't necessarily require a precise idea of the risk to be mitigated. Sometimes it can be done instead by thinking about the principles required for proper operation of a system - in the case of software, preservation of its clean code - and the avenues through which it is vulnerable - such as the internet.

So what would be good robustness measures for human civilisation? I have a bunch of proposals:

 

Disaster forecasting

Disaster research

* Build research labs to survey and study catastrophic risks (like the Future of Humanity Institute, the Open Philanthropy Project and others)

Disaster prediction

* Prediction contests (like IARPA's Aggregative Contingent Estimation "ACE" program)

* Expert aggregation and elicitation

 

Disaster prevention

General prevention measures

* Build a culture of prudence in groups that run risky scientific experiments

* Lobby for these mitigation measures

* Improve the foresight and clear thinking of policymakers and other relevant decision-makers

* Build research labs to plan more risk-mitigation measures (including the Centre for Study of Existential Risk)

Preventing intentional violence

* Improve focused surveillance of people who might commit large-scale terrorism (this is controversial because excessive surveillance itself poses some risk)

* Improve cooperation between nations and large institutions

Preventing catastrophic errors

* Legislate to hold individuals more accountable for large-scale catastrophic errors that they may make (including by requiring insurance premiums for any risky activities)

 

Disaster response

* Improve political systems to respond to new risks

* Improve vaccine development, quarantine, and other pandemic response measures

* Build systems for disaster notification


Disaster recovery

Shelters

* Build underground bomb shelters

* Provide a sheltered place for people to live with air and water

* Provide (or store) food and farming technologies (cf. Dave Denkenberger's *Feeding Everyone No Matter What*)

* Store energy and energy-generators

* Store reproductive technologies (which could include IVF, artificial wombs or measures for increasing genetic diversity)

* Store information about building the above

* Store information about building a stable political system, and about mitigating future catastrophes

* Store other useful information about science and technology (e.g. reading and writing)

* Store some of the above in submarines

* (maybe) store biodiversity

 

Space Travel

* Grow (or replicate) the International Space Station

* Improve humanity's capacity to travel to the Moon and Mars

* Build sustainable settlements on the Moon and Mars

 

Of course, some caveats are in order. 

To begin with, one could argue that surveilling terrorists is a measure specifically designed to reduce the risk from terrorism. But there are a number of different scenarios and methods through which a malicious actor could try to inflict major damage on civilisation, so I still regard this as a general robustness measure, granted that there is some subjectivity to all of this. If you knew absolutely nothing about the risks that you might face, and the structures in society that are to be preserved, then the exercise would be futile. So some of the measures on this list will mitigate a smaller subset of risks than others, and that's just how it is. Still, I think the list is pretty different from the one people come up with by using a risk-specific paradigm, which is the reason for the exercise.

Additionally, I'll disclaim that some of these measures already receive substantial investment, and yet others cannot be implemented cheaply or effectively. But many seem to me to be worth thinking more about.

Additional suggestions for this list are welcome in the comments, as are proposals for their implementation.

 

Related readings

https://www.academia.edu/7266845/Existential_Risks_Exploring_a_Robust_Risk_Reduction_Strategy

http://www.nickbostrom.com/existential/risks.pdf

http://users.physics.harvard.edu/~wilson/pmpmta/Mahoney_extinction.pdf

http://gcrinstitute.org/aftermath

http://sethbaum.com/ac/2015_Food.html

http://the-knowledge.org

http://lesswrong.com/lw/ma8/roadmap_plan_of_action_to_prevent_human/

[link] Disjunctive AI Risk Scenarios

10 Kaj_Sotala 05 April 2016 12:51PM

Arguments for risks from general AI are sometimes criticized on the grounds that they rely on a series of linear events, each of which has to occur for the proposed scenario to go through. For example, that a sufficiently intelligent AI could escape from containment, that it could then go on to become powerful enough to take over the world, that it could do this quickly enough without being detected, etc.

The intent of my following series of posts is to briefly demonstrate that AI risk scenarios are in fact disjunctive: composed of multiple possible pathways, each of which could be sufficient by itself. To successfully control AI systems, it is not enough to block just one of the pathways: they all need to be dealt with.

I've got two posts in this series up so far:

AIs gaining a decisive advantage discusses four different ways by which AIs could achieve a decisive advantage over humanity. The one-picture version is:

AIs gaining the power to act autonomously discusses ways by which AIs might come to act as active agents in the world, despite possible confinement efforts or technology. The one-picture version (which you may wish to click to enlarge) is:

These posts draw heavily on my old paper, Responses to Catastrophic AGI Risk, as well as some recent conversations here on LW. Upcoming posts will try to cover more new ground.

The Thyroid Madness: Core Argument, Evidence, Probabilities and Predictions

10 johnlawrenceaspden 14 March 2016 01:41AM

I've made a couple of recent posts about hypothyroidism:

http://lesswrong.com/lw/nbm/thyroid_hormones_chronic_fatigue_and_fibromyalgia/
http://lesswrong.com/lw/n8u/a_medical_mystery_thyroid_hormones_chronic/

It appears that many of those who read them were unable to extract the core argument, and few people seem to have found them interesting.


They seem extremely important to me. Somewhere between a possible palliative for some cases of Chronic Fatigue Syndrome, and a panacea for most of the remaining unexplained diseases of the world.


So here I've made the core argument as plain as I can. But obviously it misses out many details. Please read the original posts to see what I'm really saying. They were written as I thought, and the idea has crystallised somewhat in the process of arguing about it with friends and contributors to Less Wrong. In particular I am indebted to the late Broda Barnes for the connection with diabetes, which I found in his book 'Hypothyroidism: The Unsuspected Illness', and which makes the whole thing look rather more plausible.



CORE ARGUMENT


(1.1) Hypothyroidism is a disease with very variable symptoms, which can present in many different ways.

It is an endocrine hormone disease, which causes the metabolism to run slow. A sort of general systems failure. Which parts fail first seems random.

It is extraordinarily difficult to diagnose by clinical symptoms.


(1.2) Chronic Fatigue Syndrome and Fibromyalgia look very like possible presentations of Hypothyroidism


(1.3) The most commonly used blood test (TSH) for Hypothyroidism is negative in CFS/FMS


=>


EITHER


(2.1) CFS/FMS/Hypothyroidism are extremely similar diseases which are nevertheless differently caused.


OR


(2.2) The blood test is failing to detect many cases of Hypothyroidism.



It seems that one is either forced to accept (2.1), or to believe that blood hormone levels can be normal in the presence of Hypothyroidism.


There is precedent for this:


Diabetes, another endocrine disorder (this time the hormone is insulin), comes in two forms:


Type I: the hormone-producing gland is damaged, and the blood hormone levels go wrong (Classical Diabetes).

Type II: the blood hormone levels are normal, but for some reason the hormone does not act (Insulin Resistance).


I therefore hypothesize:


(3) That there is at least one mechanism interfering with the action of the thyroid hormones on the cells.


and


(4) The same, or similar mechanisms can interfere with the actions of other hormones.


A priori, I'd give these hypotheses a starting chance of 1%. They do not seem unreasonable. In fact they are obvious.

The strongest evidence against them is that they are so very obvious, and yet not believed by those whose job it is to decide.

 

 




CURRENT STATUS (Estimated probability)


(1.1) Uncontroversial, believed by everyone involved (~100%)


(1.2) Similarly uncontroversial (~100%)


(1.3) By definition. With abnormal TSH, you'd have hypothyroidism (~100%)


(2.1) Universal belief of conventional medicine and medical science, some alternative medicine disagrees (~90%)


(2.2) The idea that the TSH test is inaccurate is widely believed in alternative medicine, and by thyroid patient groups, but largely rejected by conventional medicine (~10%)


(3) There is some evidence from alternative medicine that this might be true (~10%)


(4) My own idea. A wild stab in the dark. But if it happens twice, you bet it happens thrice [1] (~0.000001%)



Some Details


(1.1) Clinical diagnosis of Hypothyroidism is very out of fashion, considered hopelessly unreliable, doctors are actually trained to ignore the symptoms. There is a famous medical sin of 'Overdiagnosing Hypothyroidism', and doctors who fall into sin are regularly struck off.


(1.2) I don't think you'll find anyone who knows about both diseases to dispute this.


(1.3) True by definition. CFS/FMS symptoms plus abnormal TSH would be Hypothyroidism proper, almost no-one would disagree.


(2.1) This is the belief of conventional medicine. But the cause of CFS/FMS is unknown.

Generally the symptoms are blamed on 'stress', but 'stress' seems to be 'that which causes disease'. This 'explanation' seems to be doing little explanatory work. In fact it looks like magical thinking to me.

Medical Scientists know much more about all this than I do, and they believe it.

On the other hand, scientific ideas without verified causal chains often turn out to be wrong.


(2.2) (The important bit: If the TSH test is not solid, there are a number of interesting consequences.)


I've been looking for a few months through the endocrinological literature for evidence that the sensitivity of the TSH test was properly checked before its introduction or since, and I can't find any. It seems to have been an unjustified assumption. At the very least, my medical literature search skillz are not up to it. I appeal for help to those with better skillz.


It is beyond doubt that atrophy or removal of the thyroid gland causes the TSH value to go extremely high, and such cases are uncontroversial.


The actual interpretation of the TSH test is curiously woolly.

It has proved very difficult to pin down the 'normal range' for TSH; the question has been argued over for nearly forty years, during which the 'normal range' has been repeatedly narrowed.

The AACB report of 2012 concluded that the normal range was so narrow that huge numbers of people with no symptoms would fall outside it, and this range is not widely accepted, for obvious reasons.


There are many other possible blood hormone tests for hypothyroidism. All are considered to be less accurate or less sensitive than the TSH test. It does seem to be the best available blood test. It does not correlate particularly well with clinical symptoms.


(3) Broda Barnes, a conventional endocrinologist working before the introduction of reliable blood tests, was convinced that the most accurate test was the peripheral basal body temperature on waking.

He considered measuring the basal metabolic rate, and rejected it for good reasons. He considered that desiccated thyroid was a good treatment for the disease, and thought the disease very common. He estimated its prevalence at 40% in the American population. His work is nowadays considered obsolete, and ignored. But he seems to have been a careful, thoughtful scientist, and the best arguments against his conclusions are placebo-effect and confirmation bias. He treated thousands of patients, his treatments were not controversial at the time, and he reported great success. He wrote a popular book 'Hypothyroidism: The Unsuspected Illness', and his conclusions have fathered a large and popular alternative medicine tradition.


John Lowe, a chiropractor who claimed that fibromyalgia could be cured with desiccated thyroid, found that many (25%) of his patients did not respond to the treatment. He hypothesised peripheral resistance, thought it genetic, and used very high doses of the thyroid hormone T3 on many of his patients, which should have killed them. I have read many of his writings, they seem thoughtful and sane. I am not aware of any case in which John Lowe is thought to have done harm. There must be some, even if he was right. But if he was wrong he should have killed many of his patients, including himself. He was either a liar, or a serial murderer, or he was right. He was likely seeing an extremely biased sample of patients, those who could not be helped by conventional approaches.


(4) I just made it up by analogy.

There is the curious concept of 'adrenal fatigue', widespread in alternative medicine but dismissed as fantasy outside it, where the adrenal glands (more endocrine things) are supposed to be 'tired out' by 'excessive stress'. That could conceivably be explained by peripheral resistance to adrenal hormones.



CONSEQUENCES


If (3) is true but (4) is not:


There are a number of mysterious 'somatoform' disorders, collectively known as the central sensitivity syndromes, with many symptoms in common, which could be explained as type 2 hypothyroidism. Obvious cases are Chronic Fatigue Syndrome, Fibromyalgia Syndrome, Major Depressive Disorder and Irritable Bowel Syndrome, but there are many others. Taken together they would explain Broda Barnes' estimate of 40% of Americans.


If (4) is true:


Then we can probably explain most of the remaining unexplained human diseases as endocrine resistance disorders.

 

 




HOW CAN THIS BE TRUE, BUT HAVE BEEN MISSED?


This is the million-dollar question!


My favourite explanation is that in order to overwhelm 'peripheral resistance to thyroid hormones', one needs to give the patient both T4 and T3 in exactly the right proportions and dose.


Supplementation with T4 alone will not increase the levels of T3 in the system, since the conversion is under the body's normal control, and the body defends T3 levels.


But T3 is the 'active hormone'. Without significantly increasing the circulating levels of T3, the resistance cannot be overwhelmed.


On the other hand, any significant overdosing of T3 will massively overstimulate the body, causing the extremely unpleasant symptoms of hyperthyroidism.


This seems to me to be sufficient explanation for why various trials of T4 supplementation on the central sensitivity disorders have all failed. In almost all cases, the patients will either have seen no improvement, or have experienced the symptoms of over-treatment. Only in very few cases will any improvement have occurred, and standard trials are not designed to detect such effects.


It is actually just luck that the T4/T3 proportion in desiccated thyroid is about right for some people.


Alternatively, there may just be some component in desiccated thyroid whose action we don't understand.



PERSONAL EXPERIENCE


I displayed symptoms of mild-to-moderate Chronic Fatigue Syndrome, and my wonderful NHS GP checked everything it could possibly be. All my blood tests normal, TSH=2.51. I was heading for a diagnosis of CFS.


After four months I mysteriously partially recovered after trying the iron/vitamin B supplement Floradix, even though I wasn't anaemic.


I started researching on the basis that things that go away on their own tend to come back on their own.


I noticed that I had recorded, in records kept at the time of the illness, thirty out of a list of forty possible symptoms of Hypothyroidism, drew the obvious conclusions as so many others have, and purchased a supply of desiccated thyroid in case it came back.


It did come back, and after one month, I began to self-treat with desiccated thyroid, very carefully titrating small doses against symptoms, and quickly noted immediate huge improvement in all symptoms. In fact I'd say they were gone.


My basal temperature rose over a few weeks from 36.1 to ~36.6 (as an average; the rise was slow, over several weeks, with day-to-day noise of about +/-0.3).


One week, holding the dose steady in anticipation of more blood tests, I overdid it by the truly minute amount of 3mg/day of desiccated thyroid, which caused all of the symptoms of the manic phase of bipolar disorder (whose down phase is indistinguishable from CFS, and whose up phase looks terribly like the onset of hyperthyroidism). The manic symptoms disappeared within twelve hours of ceasing thyroid supplementation, to be replaced by overwhelming tiredness.


I resumed thyroid supplementation at a slightly lower dose, and feel as well as I have done for ten years. It's now been ten weeks and I am becoming reasonably confident that it is having some effect.



POSSIBLE CAUSATION


Such catastrophic failures of the body's central control system CANNOT be evolutionarily stable unless they are extremely rare or have compensating advantages.


I am thus drawn to the idea of either:


(a) recent environmental change (which seems to be the alternative medicine explanation)


(b) immune defence (which would explain why e.g. CFS often presents as extended version of the normal post-viral fatigue)

If the alternative is being eaten alive, it seems all too plausible that an immune mechanism might be to 'wall off' cells in some way until the emergency is past, even if catastrophic damage is a side effect.




STRONG PREDICTIONS

Low Body Temperature


It is a very strong prediction of this theory that low basal metabolic rates, and thus low basal peripheral temperatures, will be found in many sufferers of Chronic Fatigue Syndrome and Fibromyalgia.

If this is not true, then the idea is refuted unambiguously.

Thyroid Hormone Supplementation as Palliative

It is a less strong prediction, but still fairly strong, that supplementation of the hormones T4 and T3 in carefully titrated doses and proportions will relieve some of the symptoms of CFS/FMS.


Note that T4 supplementation alone is unlikely to work. And that unless the doses and proportions are carefully adjusted to relieve symptoms, the treatment is likely to either not work, or be worse than the disease!


SOME SELECTED POSSIBLE IMPLICATIONS / PREDICTIONS

I've been very reluctant to draw my wilder speculative conclusions in public, since they have the potential to do great harm whether or not the idea is true, but here are some of the less frightening ones that I feel safe stating:


I state them only to encourage people to believe that this problem is worth thinking about.


Endocrinology appears not to be too interested, and my crank emails to endocrinologists have gone unanswered.


One of the reasons that I feel safe stating these four in public is that Broda Barnes thought them obvious and published popular books about them, so they are unlikely to come as a surprise to anyone outside endocrinology:


Dieting/Exercise/Weight Loss


Dieting and exercise don't work long term as ways to lose weight. The function of the thyroid system is to adapt metabolism to available resources. Starvation will cause mild transient hypothyroidism as the body attempts to survive the famine it infers. This may be the explanation for Anorexia Nervosa.


Diabetes


A diagnosis of diabetes was once a death sentence. With the discovery of insulin, allowing diabetics to control their blood sugar levels, it became survivable.

However it still has terrible complications, a lot of which look like the complications of hypothyroidism.


If a hormone-resistance mechanism interferes with both insulin and thyroid hormones, the reason for this is obvious. Diabetics with well-controlled blood sugar are dying in their millions from a treatable condition.


Heart Disease


One of the very old tests for hypothyroidism was blood cholesterol. High cholesterol was thought to be a reliable indicator of hypothyroidism when present, but it was not always present.


A known symptom of hypothyroidism is atherosclerosis and weakness of the heart.


I would imagine that hypothyroidism initially presents as low blood pressure, due to the weakness of the heart. As the arteries clog, the weakened heart is forced to work harder and harder. Blood pressure goes higher and higher, and eventually the heart collapses under the strain.


Blood pressure reducing medications may actually be doing harm. A promising treatment might be to correct the underlying hypothyroidism.


Smoking


Cigarettes are full of poisons, and smoking is correlated with very many diseases.


It could be that smoking causes, amongst its other effects, peripheral resistance, which causes clinical hypothyroidism, which then causes everything it usually causes. And that would be my bet!


It could be that hypothyroidism causes a very great number of bad things, including depression, which then causes smoking.


Smoking may not actually be that dangerous, and it might be possible to mitigate its bad effects.

 

[1] Madonna, "Pretender", Like A Virgin, Power Station Studios, New York, New York (1984)




I'm going to stop there. There are quite a lot of similar conclusions to be drawn. Read Barnes.


I also have some novel ones of my own which I am not telling anyone about just yet.


What the hell do I, or any of the quacks who have been screaming about this for forty years, have to say in order that someone with real expertise in this area takes this idea seriously enough to have a go at refuting it?

 

 


EDIT: This keeps confusing people (including me): Low Basal Metabolic Rates. The amount of oxygen you use once you have been asleep for a while. That's what the thyroid apparently controls in adult animals. Daytime won't do, that's probably under the control of something else. And peripheral temperatures. Not core. We're interested in the amount of heat flowing out of the body. Which is not quite the same thing as temperature....

 

 


 

EDIT: WHY THIS IS WORTH A CLOSE LOOK, EVEN THOUGH IT IS LIKELY WRONG!

Thanks to HungryHobo for making me make this point explicitly:

This is a very simple and obvious explanation of an awful lot of otherwise confusing data, anecdotes, quackery, expert opinion and medical research.

And it is obviously false! Of course medicine has tried using thyroid supplementation to fix 'tired all the time'. It doesn't work!

But there really is an awful lot unexplained about all this T4/T3 business, and why different people think it works differently. I refer you to the internet for all the unexplained things.

In just the endocrinological literature there is a long fight going on about T4/T3 ratios in thyroid supplementation, and about the question of whether or not to treat 'subclinical hypothyroidism'. Some people show symptoms with very low TSH values. Some people have extremely high TSH values and show no symptoms at all.

I've been trying various ways of explaining it all for nearly four months now. And I've found lots of magical thinking in conventional medicine, and lots of waving away of the reports of honest-sounding empiricists, who have made no obvious errors of reasoning, most of whom are taking terrible risks with their own careers in order to, as they see it, help their patients.

I've read lots of people saying 'we tried this, and it works', and no people saying 'we tried this, and it makes no difference'. The explanation favoured by conventional medicine strongly predicts 'we tried this, and it makes no difference'. But they've never tried it! It's really confusing. A lot of people are very confused.

I think that simple explanations are extra-worth looking at because they are simple.

Of course that doesn't mean they're right. Consequence and experiment are the only judge of that.

I do not think I am right! There is no way I can have got the whole picture. I can't explain, for instance 'euthyroid sick syndrome'. But I don't predict that it doesn't exist either.

But you should look very carefully at the simple beautiful ideas that seem to explain everything, but that look untrue.

Firstly, because Solomonoff induction looks like a good way to think about the world. Or call it Occam's Razor if you prefer. It is straightforward Bayesianism, as David MacKay points out in Information Theory, Inference, and Learning Algorithms.

Secondly, because all the good ideas have turned out to be simple, and could have been spotted (and often were) by the Ancient Greeks, and could have been demonstrated by them, if only they'd really thought about it.

Thirdly, because experiments not done with the hypothesis in mind have likely neglected important aspects of the problem. (In this case: T3 homeostasis, possible peripheral resistance, the difference between basal metabolic rate and waking rate, the difference between core and peripheral temperature, and the possibility of a common DIO2 mutation causing people's systems to react differently to T4 monotherapy.)

So that even if there are things you can't explain (I can't explain hot daytime fibro-turks...), you should keep plugging away, to see if you can explain them, if you think hard enough.

Good ideas should be given extra benefit of the doubt, not ignored because they prove (slightly) too much!

 

 

 

 




 

I reckon that we should be able to refute or strongly support the general idea from reports in the published literature. Here is some stuff that I have found recently. There is a comment that looks like this. Add anything you find to it, and I'll move it up here.

ADD EVIDENCE FOR OR AGAINST HERE

Found this for "Wilson's syndrome", but can only see the abstract:

http://www.ncbi.nlm.nih.gov/pubmed/16883675

It looks like it might be supportive, but it also looks crap. No mention of blinding, randomising, or placebo in the abstract.

Can anyone see the actual paper and link to it here? And can anyone work out whether these guys are allies of Wilson, or trying to break him? Because that matters.


This, on the other hand:

http://www.ncbi.nlm.nih.gov/pubmed/9513740

Looks solid, and looks like refutation. They claim normal average core temperatures in CFS. I have quibbles, of course:

I'd expect the core temperature to be well defended. So I'm not worried by that per se, but they do talk about relation to oral temperature, and they do talk about metabolic rate, so they've obviously thought about it, and I can't quite work out what they did there.

Also, the reason that they're measuring this is that their CFS patients have all been complaining about low oral temperatures and the fact that even when they've got a fever, they're not hot. So errr?? Do all the CFS patients believe this theory and are (un)consciously faking? I mean, I can believe that, but is it true that all CFS patients think this theory is true? Who is telling CFS patients to take their temperatures and why?

On the other hand, their actual graphs do look funny. There's a strange shape to the CBT vs time graph in CFS, but n=7, I think, so maybe that's just noise.


These guys:

http://www.sciencedirect.com/science/article/pii/S0024320515301223

Are actually claiming HIGHER peripheral temperatures in Fibromyalgia. But I think they're measuring during the day. I've no idea how to explain that, or what it might mean.


Barnes claimed: Measure axillary temperature on waking. Should be 98.6+/-0.2F (so 37C+/-0.1), lower is bad. Treat with lots of thyroid (1/2-2 grains).

I claim (from just me, and I am perfectly capable of fooling myself): measure oral temperature on waking. Was low (~36.1), has gone higher (36.6-7-8-9) under influence of small amounts of thyroid (1/3 grain). Feel fine now.

Can anyone find: Large numbers of CFS/FMS patients have normal metabolic rate while sleeping or just after waking, no exercise allowed, or normal axillary or oral temperature on waking, again no exercise allowed?

Because that's what I'm looking for at the moment, and it is refutation. I will have to pull off some clever moves indeed to get round that.


Oh, yes, and there's a paper by Lowe himself, finding exactly what I expect him to find:

http://www.ncbi.nlm.nih.gov/pubmed/16810133

Can anyone dig up quibbles with this that can make me discount it?


Oh Jesus:

Clinical Response to Thyroxine Sodium in Clinically Hypothyroid but Biochemically Euthyroid Patients
G. R. B. Skinner MD DSc FRCPath FRCOG, D. Holmes, A. Ahmad PhD, J. A. Davies BSc and J. Benitez MSc
Vaccine Research Trust, 22 Alcester Road, Moseley, Birmingham B13 8BE, UK

This I can't explain at all! He treated CFS people with tiny amounts of T4, and worked up the dose until they were all better. Worked a treat, apparently. Can anyone break it?

It simultaneously breaks me and proves that CFS is a thyroid problem. I think. Help! Again, no placebos, but a large clinical trial that seems to have worked, by a careful man.

I wouldn't dream of suggesting that anyone steal this using sci-hub.io by typing the title into the search box and then solving the easy CAPTCHA which is in English even though the instructions are all in Russian. You should write to the authors and request a copy instead.

 


Four 2003 Studies of Thyroid Hormone Replacement Therapies: Logical Analysis and Ethical Implications
Dr. John C. Lowe

Lowe again, my rationalist hero, publishing in his own journal, referencing his own papers and books. This time I think he's made maths mistakes. But that's my department, so I'm going to go away and think about it. I mention the paper here to avoid the obvious mistake of deciding whether to mention it after I've had a proper look.

 


Effective Treatment of Chronic Fatigue Syndrome and Fibromyalgia—A Randomized, Double-Blind, Placebo-Controlled, Intent-To-Treat Study

Jacob E. Teitelbaum*, Barbara Bird, Robert M. Greenfield, Alan Weiss, Larry Muenz & Laurie Gould

DOI:10.1300/J092v08n02_02

ABSTRACT
Background: Hypothalamic dysfunction has been suggested in fibromyalgia (FMS) and chronic fatigue syndrome (CFS). This dysfunction may result in disordered sleep, subclinical hormonal deficiencies, and immunologic changes. Our previously published open trial showed that patients usually improve by using a protocol which treats all the above processes simultaneously. The current study examines this protocol using a randomized, double-blind design with an intent-to-treat analysis. Methods: Seventy-two FMS patients (38 active:34 placebo; 69 also met CFS criteria) received all active or all placebo therapies as a unified intervention. Patients were treated, as indicated by symptoms and/or lab testing, for: (1) subclinical thyroid, gonadal, and/or adrenal insufficiency, (2) disordered sleep, (3) suspected neurally mediated hypotension (NMH), (4) opportunistic infections, and (5) suspected nutritional deficiencies. Results: At the final visit, 16 active patients were “much better,” 14 “better”, 2 “same,” 0 “worse,” and 1 “much worse” vs. 3, 9, 11, 6, and 4 in the placebo group (p < .0001, Cochran-Mantel-Haenszel trend test). Significant improvement in the FMS Impact Questionnaire (FIQ) scores (decreasing from 54.8 to 33.2 vs. 51.4 to 47.7) and Analog scores (improving from 176.1 to 310.3 vs. 177.1 to 211.9) (both with p < .0001 by random effects regression), and Tender Point Index (TPI) (31.7 to 15.5 vs. 35.0 to 32.3, p < .0001 by baseline adjusted linear model) were seen. Long term follow-up (mean 1.9 years) of the active group showed continuing and increasing improvement over time, despite patients being able to discontinue most treatments. Conclusions: Significantly greater benefits were seen in the active group than in the placebo group for all primary outcomes. An integrated treatment approach appears effective in the treatment of FMS/CFS.

OK, how do we discount this one? I haven't even read it yet. Can anyone see a flaw?
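
Meanwhile, the headline numbers at least hang together. Here's a quick sanity check on the outcome table from the abstract; a sketch only, using scipy's plain chi-square test of independence rather than the Cochran-Mantel-Haenszel trend test the authors ran:

# Rough check on the abstract's outcome counts (not the authors' CMH trend
# test, just a chi-square test of independence on the 2x5 table).
from scipy.stats import chi2_contingency

# Rows: active, placebo.
# Columns: much better, better, same, worse, much worse.
table = [[16, 14,  2, 0, 1],
         [ 3,  9, 11, 6, 4]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.1e}")
# chi2 comes out around 24 on 4 degrees of freedom, p on the order of 1e-4.

So the arithmetic isn't fishy; if there's something wrong here, it's in the design (blinding, dropouts, outcome measures), not the numbers.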




Thyroid Insufficiency. Is Thyroxine the Only Valuable Drug?

DOI:10.1080/13590840120083376

W. V. Baisier, J. Hertoghe & W. Eeckhaut

ABSTRACT
Purpose: To evaluate the efficacy of a drug containing both liothyronine and thyroxine (T3 + T4) in hypothyroid patients who were treated, but not cured, with thyroxine (T4 alone). Design: Practice-based retrospective study of patients' records. Materials and Methods: The records of 89 hypothyroid patients, treated elsewhere with thyroxine but still with hypothyroidism, seen in a private practice in Antwerp, Belgium, were compared with those of 832 untreated hypothyroid patients, over the same period of time (May 1984-July 1997). Results: The same criteria were applied to both groups: a score of eight main symptoms of hypothyroidism and the 24 h urine free T3 dosage. The group of 89 patients, treated elsewhere with T4, but still complaining of symptoms of hypothyroidism, did not really differ from the group of untreated hypothyroid patients as far as symptoms and 24 h urine free T3 were concerned. A number of these patients were followed up during treatment with natural desiccated thyroid (NDT): 40 T4 treated patients and 278 untreated patients. Both groups responded equally favourably to NDT. Conclusions: Combined T3 + T4 treatment seems to be more effective than treatment with T4 alone in hypothyroid patients.

Even mighty sci-hub.io can't provide me a copy of this. Any reason to bin it?


Bored now. Anyone find me anything that says this doesn't work?


I've even heard rumours that Lowe himself did placebo-controlled randomised trials (PCRTs) of his treatments. And probably published them in some chiropractic house mag. I can't even find those.

A rich seam of thyroid vs depression papers, all found through: http://psycheducation.org/

Since he's got a cause, I expect to find them all in favour. I'm going to list them here before reading them in order to avoid the obvious mistake of cherry picking from the cherry basket, and then add comments once I've read them / their abstracts.

Further evidence pointing in the opposite direction is very welcome!

I also tried:
https://www.ncbi.nlm.nih.gov/pubmed/?term=thyroxine+major+depression

and some of those are also here. I can't remember which ones I found through psycheducation and which ones through pubmed. Bloody browser tabs, sorry, I should have been more careful.




J Affect Disord. 2014 Sep;166:353-8. doi: 10.1016/j.jad.2014.04.022. Epub 2014 May 2.
A favorable risk-benefit analysis of high dose thyroid for treatment of bipolar disorders with regard to osteoporosis.
Kelly T.


ABSTRACT

High dose thyroid hormone has been in use since the 1930s for the treatment of affective disorders. Despite numerous papers showing benefit, the lack of negative trials and its inclusion in multiple treatment guidelines, high dose thyroid has yet to find widespread use. The major objection to the use of high dose thyroid is the myth that it causes osteoporosis. This paper reviews the literature surrounding the use of high dose thyroid, both in endocrinology and in psychiatry. High dose thyroid does not appear to be a significant risk factor for osteoporosis while other widely employed psychiatric medications do pose a risk. Psychiatrists are uniquely qualified to do the risk-benefit analyses of high dose thyroid for the treatment of bipolar I, bipolar II and bipolar NOS. Other specialties do not have the requisite knowledge of the risks of alternative medications or of the mortality and morbidity of the bipolar disorders to do a full risk-benefit analysis.


J Clin Endocrinol Metab. 2010 Aug;95(8):3623-32. doi: 10.1210/jc.2009-2571. Epub 2010 May 25.
A randomized controlled trial of the effect of thyroxine replacement on cognitive function in community-living elderly subjects with subclinical hypothyroidism: the Birmingham Elderly Thyroid study.
Parle J, Roberts L, Wilson S, Pattison H, Roalfe A, Haque MS, Heath C, Sheppard M, Franklyn J, Hobbs FD.

Conclusions: This RCT provides no evidence for treating elderly subjects with SCH with T4 replacement therapy to improve cognitive function.


J Affect Disord. 2002 Apr;68(2-3):285-94.
Effects of supraphysiological thyroxine administration in healthy controls and patients with depressive disorders.
Bauer M, Baur H, Berghöfer A, Ströhle A, Hellweg R, Müller-Oerlinghausen B, Baumgartner A.

J Affect Disord. 2009 Aug;116(3):222-6. doi: 10.1016/j.jad.2008.12.010. Epub 2009 Feb 11.
The use of triiodothyronine as an augmentation agent in treatment-resistant bipolar II and bipolar disorder NOS.
Kelly T, Lieberman DZ.

Am J Psychiatry. 2006 Sep;163(9):1519-30; quiz 1665.
A comparison of lithium and T(3) augmentation following two failed medication treatments for depression: a STAR*D report.
Nierenberg AA, Fava M, Trivedi MH, Wisniewski SR, Thase ME, McGrath PJ, Alpert JE, Warden D, Luther JF, Niederehe G, Lebowitz B, Shores-Wilson K, Rush AJ.

Nord J Psychiatry. 2015 Jan;69(1):73-8. doi: 10.3109/08039488.2014.929741. Epub 2014 Jul 1.
Well-being and depression in individuals with subclinical hypothyroidism and thyroid autoimmunity - a general population study.
Fjaellegaard K, Kvetny J, Allerup PN, Bech P, Ellervik C.

Mol Biol Rep. 2014;41(4):2419-25. doi: 10.1007/s11033-014-3097-6. Epub 2014 Jan 18.
Thyroid hormones association with depression severity and clinical outcome in patients with major depressive disorder.
Berent D, Zboralski K, Orzechowska A, Gałecki P.

Mol Psychiatry. 2016 Feb;21(2):229-36. doi: 10.1038/mp.2014.186. Epub 2015 Jan 20.
Levothyroxine effects on depressive symptoms and limbic glucose metabolism in bipolar disorder: a randomized, placebo-controlled positron emission tomography study.
Bauer M, Berman S, Stamm T, Plotkin M, Adli M, Pilhatsch M, London ED, Hellemann GS, Whybrow PC, Schlagenhauf F.

Mol Psychiatry. 2005 May;10(5):456-69.
Supraphysiological doses of levothyroxine alter regional cerebral metabolism and improve mood in bipolar depression.
Bauer M1, London ED, Rasgon N, Berman SM, Frye MA, Altshuler LL, Mandelkern MA, Bramen J, Voytek B, Woods R, Mazziotta JC, Whybrow PC.

Minerva Endocrinol. 2013 Dec;38(4):365-77.
Hypothyroidism and depression: salient aspects of pathogenesis and management.
Duntas LH1, Maillis A.

J Psychiatr Res. 2012 Nov;46(11):1406-13. doi: 10.1016/j.jpsychires.2012.08.009. Epub 2012 Sep 7.
The combination of triiodothyronine (T3) and sertraline is not superior to sertraline monotherapy in the treatment of major depressive disorder.
Garlow SJ1, Dunlop BW, Ninan PT, Nemeroff CB.



Using humility to counteract shame

9 Vika 15 April 2016 06:32PM

"Pride is not the opposite of shame, but its source. True humility is the only antidote to shame."

Uncle Iroh, "Avatar: The Last Airbender"

Shame is one of the trickiest emotions to deal with. It is difficult to think about, not to mention discuss with others, and gives rise to insidious ugh fields and negative spirals. Shame often underlies other negative emotions without making itself apparent - anxiety or anger at yourself can be caused by unacknowledged shame about the possibility of failure. It can stack on top of other emotions - e.g. you start out feeling upset with someone, and end up being ashamed of yourself for feeling upset, and maybe even ashamed of feeling ashamed if meta-shame is your cup of tea. The most useful approach I have found against shame is invoking humility.

What is humility, anyway? It is often defined as a low view of your own importance, and tends to be conflated with modesty. Another common definition that I find more useful is acceptance of your own flaws and shortcomings. This is more compatible with confidence, and helpful irrespective of your level of importance or comparison to other people. What humility feels like to me on a system 1 level is a sense of compassion and warmth towards myself while fully aware of my imperfections (focusing on imperfections without compassion can lead to beating yourself up). According to LessWrong, "to be humble is to take specific actions in anticipation of your own errors", which seems more like a possible consequence of being humble than a definition.

Humility is a powerful tool for psychological well-being and instrumental rationality that is more broadly applicable than just the ability to anticipate errors by seeing your limitations more clearly. I can summon humility when I feel anxious about too many upcoming deadlines, or angry at myself for being stuck on a rock climbing route, or embarrassed about forgetting some basic fact in my field that I am surely expected to know by the 5th year of grad school. While humility comes naturally to some people, others might find it useful to explicitly build an identity as a humble person. How can you invoke this mindset?

One way is through negative visualization or pre-hindsight, considering how your plans could fail, which can be time-consuming and usually requires system 2. A faster and less effortful way is to imagine a person, real or fictional, whom you consider humble. I often bring to mind my grandfather, or Uncle Iroh from the Avatar series, sometimes literally repeating the above quote in my head, sort of like an affirmation. I don't actually agree that humility is the only antidote to shame, but it does seem to be one of the most effective.

(Cross-posted from my blog. Thanks to Janos Kramar for his feedback on this post.)

Updating towards the simulation hypothesis because you think about AI

9 SoerenMind 05 March 2016 10:23PM

(This post is written up in a rush and is very speculative, so it's not as rigorous and full of links as a good post on this site should be, but I'd rather get the idea out there than never get around to it.)


Here’s a simple argument that could make us update towards the hypothesis that we live in a simulation. This is the basic structure:


1) P(involved in AI* | ¬sim) = very low

2) P(involved in AI | sim) = high


Ergo, assuming that we fully accept the argument and its premises (ignoring e.g. model uncertainty), we should strongly update in favour of the simulation hypothesis.


Premise 1


Suppose you are a soul who will randomly awaken in one of at least 100 billion beings (the number of homo sapiens that have lived so far), probably many more. What you know about the world of these beings is that at some point there will be a chain of events that leads to the creation of superintelligent AI. This AI will then go on to colonize the whole universe, making its creation the most impactful event the world will see, by an extremely large margin.


Waking up, you see that you’re in the body of one of the first 1000 beings trying to affect this momentous event. Would you be surprised? Given that you were randomly assigned a body, you probably would be.


(To make the point even stronger and slightly more complicated: Bostrom suggests to use observer moments, e.g. an observer-second, rather than beings as the fundamental unit of anthropics. You should be even more surprised to find yourself as an observer-second thinking about or even working on AI since most of the observer seconds in people's lives don’t do so. You reading this sentence may be such a second.)


Therefore, P(involved in AI* | ¬sim) = very low.


Premise 2


Given that we’re in a simulation, we’re probably in a simulation created by a powerful AI which wants to investigate something.


Why would a superintelligent AI simulate the people (and even more so, the 'moments’) involved in its creation? I have an intuition that there would be many reasons to do so. If I gave it more thought I could probably name some concrete ones, but for now this part of the argument remains shaky.


Another and probably more important motive would be to learn about (potential) other AIs. It may be trying to find out who its enemies are or to figure out ways of acausal trade. An AI created with the 'Hail Mary’ approach would need information about other AIs very urgently. In any case, there are many possible reasons to want to know who else there is in the universe.


Since you can’t visit them, the best way to find out is by simulating how they may have come into being. And since this process is inherently uncertain you’ll want to run MANY simulations in a Monte Carlo way with slightly changing conditions. Crucially, to run these simulations efficiently, you’ll run observer-moments (read: computations in your brain) more often the more causally important they are for the final outcome.


Therefore, the thoughts of people who are more causally connected to the properties of the final AI will be run many times, especially the thoughts of those who got involved first, as they may cause path-changes. AI capabilities researchers would not be so interesting to simulate because their work has less effect on the eventual properties of an AI.


If figuring out what other AIs are like is an important convergent instrumental goal for AIs, then a lot of minds created in simulations may be created for this purpose. Under SSA, the assumption that “all other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers [or observer moments] (past, present and future) in their reference class”, it would seem rather plausible that,

P(involved in AI | sim) = high
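
To make the structure concrete, here is a toy version of the update; a sketch only, every number invented for illustration, with premise 1's figure taken as the 1000-in-100-billion from above:

# Toy Bayes update for the simulation argument. All numbers are invented.
p_sim = 0.1                      # illustrative prior on being in a simulation
p_involved_given_sim = 0.01      # premise 2: "high", relatively speaking
p_involved_given_not_sim = 1e-8  # premise 1: ~1000 out of 100 billion beings

posterior = (p_involved_given_sim * p_sim) / (
    p_involved_given_sim * p_sim
    + p_involved_given_not_sim * (1 - p_sim))
print(posterior)  # ~0.99999: the likelihood ratio swamps the prior

Even from a modest prior, the millions-to-one likelihood ratio does nearly all the work, so the conclusion is insensitive to the exact numbers as long as both premises hold.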


The closer the causal chain to (capabilities research etc)


If you read this, you're probably one of those people who could have some influence over the eventual properties of a superintelligent AI, and as a result you should update towards living in a simulation that's meant to figure out the creation of an AI.


Why could this be wrong?


I can think of four general ways in which this argument could go wrong:


1) Our position in the history of the universe is not that unlikely

2) We would expect to see something else if we were in one of the aforementioned simulations.

3) There are other, more likely, situations we should expect to find ourselves in if we were in a simulation created by an AI

4) My anthropics are flawed


I’m most confused about the first one. Everyone has some things in their life that are very exceptional by pure chance. I’m sure there’s some way to deal with this in statistics but I don’t know it. In the interest of my own time I’m not going to go elaborate further on these failure modes and leave that to the commentators.


Conclusion

Is this argument flawed? Or has it been discussed elsewhere? Please point me to it. Does it make sense? Then what are the implications for those most intimately involved with the creation of superhuman AI?


Appendix


My friend Matiss Apinis (othercenterism) put the first premise like this:


“[…] it's impossible to grasp that in some corner of the Universe there could be this one tiny planet that just happens to spawn replicators that over billions of painful years of natural selection happen to create vast amounts of both increasingly intelligent and sentient beings, some of which happen to become just intelligent enough to soon have one shot at creating this final invention of god-like machines that could turn the whole Universe into either a likely hell or unlikely utopia. And here we are, a tiny fraction of those almost "just intelligent enough" beings, contemplating this thing that's likely to happen within our lifetimes and realizing that the chance of either scenario coming true may hinge on what we do. What are the odds?!"

Education as Entertainment and the Downfall of LessWrong

9 SquirrelInHell 04 March 2016 02:06PM

Note 1: I'm not very serious about the second part of the title, I just thought it sounded catchier. I'm a long-time lurker writing here for the first time, and it's not my intention to alienate anyone. Also, hi, nice to meet you. Please leave a comment to achieve a result of making me happy about you having left a comment. But let's get to the point.

I think you might be familiar with TED Talks. Recall the last time you watched one, and how you felt while doing it.

[BZRT BZRT sound of imagination working]

In my case, I often got the feeling that I was learning something valuable while watching most TED Talks. The speakers are (mostly) obviously passionate and intelligent people, speaking about important matters they care about a lot. (Granted, I probably haven't watched more than a dozen TED Talks in all my life, so my sample is quite small, but I don't think it's very unrepresentative.)

But at some point, I started asking myself afterwards:

So, what have I actually learned?

Which translates in my internal dialect to:

For each major point, give a one-sentence summary and at least one example of how I could apply it.

(Note 2: don't treat this "one sentence summary" thing too strictly - of course it's only a reflex/shorthand that is useful in many situations, but not all. I like it because it's simple enough that it's installable as a subconscious trigger-action.)

And I could not state afterwards anything actually useful that I had learned from those "fascinating" videos (with at most one or two small exceptions).

This is exactly what I mean by "Education as Entertainment".

It's getting the enjoyable *feeling* of learning without any real progress.

[DUM DUM DUM sound of increasing dramatism]

And now, what if you use this concept to look at rationality materials?

For me, reading Eliezer's core braindump (basically the content of "From AI to Zombies"), as well as the braindumps (in the form of blogs) of several other people from the LW community, had definite learning value.

I take notes when I read those, and I have an accountability system in place that enables me to make sure I follow up on all the advice I give to myself, test the new ideas, and improve/drop/replace/implement as needed.

However, when I read (a significant part of) the content produced by the "modern" community-powered-LessWrong, I classify its actual learning value at around the same level as TED Talks.

Or YouTube videos with cats, only those don't give me the *impression* that I'm learning something.

THE END

Please let me know what you think.

Final Note: Please take my remarks with a grain of salt. What I write is meant to inspire thoughts in you, not to represent my best factual knowledge about the LW community.

The Science of Effective Fundraising: Four Common Mistakes to Avoid

8 Gleb_Tsipursky 11 April 2016 03:19PM

This article will be of interest primarily to Effective Altruists. It's also cross-posted to the EA Forum.


Summary/TL;DR: Charities that have the biggest social impact often get significantly less financial support than rivals that tell better stories but have a smaller social impact. Drawing on academic research across different fields, this article highlights four common mistakes that fundraisers for effective charities should avoid and suggests potential solutions to these mistakes. 1) Focus on individual victims as well as statistics; 2) Present problems that are solvable by individual donors; 3) Avoid relying excessively on matching donations and focus on learning about your donors; 4) Empower your donors and help them feel good.


Co-written by Gleb Tsipursky and Peter Slattery



Acknowledgments: Thanks to Stefan Schubert, Scott Weathers, Peter Hurford, David Moss, Alfredo Parra, Owen Shen, Gina Stuessy, Sheannal Anthony Obeyesekere and other readers who prefer to remain anonymous for providing feedback on this post. The authors take full responsibility for all opinions expressed here and any mistakes or oversights. Versions of this piece will be published on The Life You Can Save blog and the Intentional Insights blog.


Intro

Charities that use their funds effectively to make a social impact frequently struggle to fundraise effectively. Indeed, while these charities receive plaudits from those committed to measuring and comparing the impact of donations across sectors, many effective charities have not successfully fundraised large sums beyond highly impact-focused donors.


In many cases, this situation results from the beliefs of key stakeholders at effective charities. Some think that persuasive fundraising tactics are “not for them” and instead assume that presenting hard data and statistics will be optimal, believing that their nonprofit’s effectiveness can speak for itself.

The belief that a nonprofit’s effectiveness can speak for itself can be very harmful to fundraising efforts as it overlooks the fact that donors do not always optimise their giving for social impact. Instead, studies suggest that donors’ choices are influenced by many other considerations, such as a desire for a warm glow, social prestige, or being captured by engrossing stories. Indeed, charities that have the biggest social impact often get significantly less financial support than rivals that tell better stories but have a smaller social impact. For example, while one fundraiser collected over $700,000 to remove a young girl from a well and save a single life, most charities struggle to raise anything proportionate for causes that could save many more lives or lift thousands out of poverty.


Given these issues, the aim of this article is to use available science on fundraising and social impact to address some of the common misconceptions that charities may have about fundraising and, hopefully, make it easier for effective charities to also become more effective at fundraising. To do this it draws on academic research across different fields to highlight four common mistakes that those who raise funds for effective charities should avoid and suggest potential solutions to these mistakes.


Don’t forget individual victims


Many fundraisers focus on using statistics and facts to convey the severity of the social issues they tackle. However, while facts and statistics are often an effective way to convince potential donors, it is important to recognise that different people are persuaded by different things. While some individuals are best persuaded to do good deeds through statistics and facts, others are most influenced by the closeness and vividness of the suffering. Indeed, it has been found that people often prefer to help a single identifiable victim, rather than many faceless victims; the so-called identifiable victim effect.


One way in which charities can cover all bases is to complement their statistics by telling stories about one or more of the most compelling victims. Stories have been shown to be excellent ways of tapping emotions, and stories told using video and audio are likely to be particularly good at creating vivid depictions of victims that compel others to want to help them.


Don’t overemphasise the problem


Focusing on the size of the problem has been shown to be ineffective for at least two reasons. First, most people prefer to give to causes where they can save the greatest proportion of those affected. This means that rather than save 100 out of 1,000 victims of malaria, the majority of people would rather use the same or even more resources to save all five out of five people stranded on a boat, or one girl stranded in a well, even though saving 100 people is clearly the more rational choice. People being reluctant to help where they feel their impact is not going to be significant is often called the drop in the bucket effect.


Second, humans have a tendency to neglect the scope of the problem when dealing with social issues. This is called scope insensitivity: people do not scale up their efforts in proportion to a problem’s true size. For example, a donor willing to give $100 to help one person might only be willing to give $200 to help 100 people, instead of the proportional amount of $10,000.


Of course charities often need to deal with big problems. In such cases one solution is to break these big problems into smaller pieces (e.g., individuals, families or villages) and present situations on a scale that the donor can relate to and realistically address through their donation.


Don’t assume that matching donations is always a good way to spend funds


Charitable fundraisers frequently put a lot of emphasis on arranging for big donors to offer to match any contributions from smaller donors. Intuitively, donation matching seems to be a good incentive for givers as they will generate twice (sometimes three times) the social impact for donating the same amount. However, research provides insufficient evidence to support or discourage donation matching: after reviewing the evidence, Ben Kuhn argues that its positive effects on donations are relatively small (and highly uncertain), and that sometimes the effects can be negative.


Given the lack of strong supporting research, charities should make sure to check that donation matching works for them and should also consider other ways to use their funding from large donors. One option is to use some of this money to cover experiments and other forms of prospect research to better understand their donors’ reasons for giving. Another is to pay various non-program costs so that a charity may claim that more of the smaller donors’ donations will go to program costs, or to use big donations as seed money for a fundraising campaign.


Don't forget to empower donors and help them feel good


Charities frequently focus on showing tragic situations to motivate donors to help.  However, charities can sometimes go too far in focusing on the negatives as too much negative communication can overwhelm and upset potential donors, which can deter them from giving. Additionally, while people often help due to feeling sadness for others, they also give for the warm glow and feeling of accomplishment that they expect to get from helping.


Overall, charities need to remember that most donors want to feel good for doing good and ensure that they achieve this. One reason why the ALS Ice Bucket Challenge was such an incredibly effective approach to fundraising was that it gave donors the opportunity to have a good time, while also doing good. Even when it isn’t possible to think of a clever new way to make donors feel good while donating, it is possible to make donors look good by publicly thanking and praising them for their donations. Likewise it is possible to make them feel important and satisfied by explaining how their donations have been key to resolving tragic situations and helping address suffering.


Conclusion


Remember four key strategies suggested by the research:

1) Focus on individual victims as well as statistics

2) Present problems that are solvable by individual donors

3) Avoid relying excessively on matching donations and focus on learning about your donors

4) Empower your donors and help them feel good.

By following these strategies and avoiding the mistakes outlined above, you will not only provide high-impact services, but will also be effective at raising funds.


Fake Amnesia

8 Gram_Stone 03 April 2016 09:23PM

Followup to: Tonic Judo

Related to: Correspondence Bias

Imagine that someone you know has a reaction that you consider disproportionate to the severity of the event that caused it. If your friend loses their comb, and they get weirdly angry about it, and you persuade them into calming down with rational argument, and then it happens again, say, many months later, and they get just as angry as they did the first time, is that person unteachable? Is it a waste of your time to try to persuade them using rationality?

I think a lot of people would have an expectation that the friend would not have another outburst, and that when the friend had another outburst, that expectation would be violated.

And for some reason, at this turn, it seems like a lot of people think, "I tried to teach this person once, and it didn't work. They're the kind of person who can't be persuaded. I should direct my efforts elsewhere." Maybe you even make it look more 'rational' by name-dropping expected utility.

Or maybe it doesn't feel like stubbornness; maybe it feels like they just forgot. Like they were only pretending to listen to your arguments, and were really just waiting for you to finish talking.

That does happen sometimes, if you fail to emotionally engage someone or if you're hanging out with all the wrong kinds of people.

But most of the time, when you're dealing with the majority of the human race, with all of the people who care about how they behave, the right way to go is to realize that a violation of expectations is a sign that your model is wrong.

You made your first rational argument with the implicit expectation that it would prevent all future outbursts over combs. But it happens again. You shouldn't stop at your first attempt. It may be that circumstances are different this time and an outburst is warranted, or it may be that your friend is not in a state where your previous arguments can reach their attention. Or maybe they feel righteous anger, and you need to get them to have less self-confidence and more confidence in you, and to encourage them to control that impulse in the future, instead of only the previous object-level one.

The point is, you expected your first argument to generalize more than it actually did. People often respond to situations like this as though the failure of their first attempt to instill a very general behavior in another person were strong evidence that the person can never learn that general behavior. It's only strong evidence that your first attempt to instill a general behavior was less successful than you expected it to be.
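
To put rough numbers on that; a toy sketch, all figures invented:

# How much should one failed attempt lower your estimate that your friend
# can eventually learn the general lesson? All numbers are invented.
p_teachable = 0.7              # prior: they can learn it, given enough feedback
p_outburst_if_teachable = 0.5  # one conversation rarely generalizes fully
p_outburst_if_not = 0.9        # the unteachable blow up almost every time

posterior = (p_outburst_if_teachable * p_teachable) / (
    p_outburst_if_teachable * p_teachable
    + p_outburst_if_not * (1 - p_teachable))
print(round(posterior, 2))  # 0.56

A second outburst moves you from 70% to about 56%: a reason to adjust your approach, not to write the person off.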

The idea is to keep up your rational arguments, to give them enough feedback to actually learn the complicated thing that you're trying to teach them. From the fact that you see that your arguments generalize in certain situations, it does not follow that you have successfully given others the ability to see the generalizations that you can see.

(Content note: Inspired by this comment by user:jimmy. Highly recommended reading.)

In Defence of Simple Ideas That Explain Everything But Are Wrong

8 johnlawrenceaspden 22 March 2016 03:46PM

I've been thinking, and writing, about The Impossible Question of the Thyroid for some while now.

I came up with what I thought was a good stab at an answer to its majestic mystery:

http://lesswrong.com/r/discussion/lw/nef/the_thyroid_madness_core_argument_evidence/

This is a very simple and obvious explanation of an awful lot of otherwise confusing data, anecdotes, quackery, expert opinion and medical research.

People seem to hate it because it is so simple, and makes so many predictions, most of which are terrifying.

And it is obviously false! Of course medicine has tried using thyroid supplementation to fix 'tired all the time'. It doesn't work!

EDIT: Apparently I spoke too soon. GRB Skinner tried it in 2000, and it works a treat. See comments.

But there really is an awful lot unexplained about all this T4/T3 business, and why different people think it works differently. I refer you to the internet for all the unexplained things.

In just the endocrinological literature there is a long fight going on about T4/T3 ratios in thyroid supplementation, and about the question of whether or not to treat 'subclinical hypothyroidism'. Some people show symptoms with very low TSH values. Some people have extremely high TSH values and show no symptoms at all.

I've been trying various ways of explaining it all for nearly four months now. And I've found lots of magical thinking in conventional medicine, and lots of waving away of the reports of honest-sounding empiricists, real doctors, who have made no obvious errors of reasoning, most of whom are taking terrible risks with their own careers in order to, as they see it, help their patients.

I've read lots of people saying 'we tried this, and it works', and no people saying 'we tried this, and it makes no difference'. The explanation favoured by conventional medicine strongly predicts 'we tried this, and it makes no difference'. But they've never tried it!

It's really confusing. A lot of people are very confused.

I think that simple explanations are worth extra attention precisely because they are simple.

Of course that doesn't mean they're right. Consequences and experiment are the only judge of that.

I do not think I am right! There is no way I can have got the whole picture. I can't explain, for instance: 'euthyroid sick syndrome'. But I don't predict that it doesn't exist either.

But you should look very carefully at the simple beautiful ideas that seem to explain everything, but that look untrue.

Firstly because Solomonoff induction looks like a good way to think about the world. Or call it Occam's Razor if you prefer. It is straightforward Bayesianism, as David MacKay points out in Information Theory, Inference, and Learning Algorithms.
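
Roughly, in its standard form (a sketch, nothing thyroid-specific): if K(h) is the description length of hypothesis h, then

    P(h) \propto 2^{-K(h)}
    P(h \mid D) \propto 2^{-K(h)} \, P(D \mid h)

so a simple hypothesis starts with a big head start, and the data D then decide whether it keeps it.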

Secondly because all the good ideas have turned out to be simple, and could have been spotted (and often were) by the Ancient Greeks, and could have been demonstrated by them, if only they'd really thought about it.

Thirdly because experiments not done with the hypothesis in mind have likely neglected important aspects of the problem. (In this case T3 homeostasis, and possible peripheral resistance, and the difference between basal metabolic rate and waking rate, and the difference between core and peripheral temperature, and the possibility of a common DIO2 mutation causing people's systems to react differently to T4 monotherapy, and in general the hideous complexity of the thyroid system and its function in vertebrates in general).

Fourthly because the reason for the 'unreasonable effectiveness of mathematics' is that the simplest ideas tend to come up everywhere!

And so when a mathematician plays with a toy problem for fun, and reasons carefully about it, two thousand years later it can end up winning a major war in a way no one ever expected. 

So even if there are things you can't explain (I can't explain hot daytime fibro-turks...), you should keep plugging away, to see if you can explain them if you think hard enough.

Good ideas should be given extra benefit of the doubt. Not ignored because they prove (slightly) too much!

Do not believe them. Do not ever ever believe them. You will end up worse than Hitler. You will end up worse than Marx.

But give them the benefit of the doubt. Keep them in mind. Try safe experiments, ready to abort when they go wrong.

And if they're easy to refute (mine is), then if you're going to call yourself a scientist, damned well take the trouble to refute the things. You might learn something!

Prediction challenge: Zika and birth rates

8 NancyLebovitz 12 March 2016 04:55PM

I've been wondering about good new topics for LW, and prediction might be one of them.

The effect of the Zika virus-- and human reactions to it-- on birth rates has the combination of being hard enough to be interesting, not being heavily plowed over by partisans, and having a quantitative outcome.
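
If we do run prediction challenges, scoring them is straightforward; here's a minimal Brier-score sketch with made-up forecasts (lower is better; always answering 50% scores 0.25):

# Mean squared difference between forecast probabilities and outcomes.
def brier(predictions, outcomes):
    return sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(predictions)

# Three made-up yes/no forecasts; outcomes: 1 = happened, 0 = didn't.
print(brier([0.8, 0.3, 0.9], [1, 0, 1]))  # ~0.047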

There's a lot of evidence that Zika causes microcephaly, but this isn't confirmed. There's also some reason to think it increases the rate of miscarriages.

Human reactions cover a wide range, including trying to wipe out the mosquitoes, increasing access to birth control, abortions, asking people to put off having children, creating a less-mosquito-friendly environment....

My assumption is that Zika will cease to be a serious problem in not too many years, as more women get the disease and acquire immunity before their child-bearing years; but admittedly, this assumes that Zika (or some other disease with a similar infection pattern) is the problem.

Any other good prediction questions?
