I was recently thinking about the possibility that someone with a lot of influence might at some point try to damage LessWrong and the SIAI, and about what preemptive measures one could take to counter such an attack.

If you believe that the SIAI does the most important work in the universe, and that LessWrong serves the purpose of educating people to become more rational and subsequently understand the importance of trying to mitigate risks from AI, then you should care about public relations: you should try to communicate your honesty and well-intentioned motives as effectively as possible.

Public relations are very important because a good reputation is necessary for the following:

  • Making people read the Sequences.
  • Raising money for the SIAI.
  • Convincing people to take risks from AI seriously.
  • Allowing the SIAI to influence other AGI researchers.
  • Mitigating future opposition from politicians and other interest groups.
  • Not being an easy target for criticism.

An attack scenario

First one has to identify characteristics that could potentially be used to cast a damaging light on this community. Here the most obvious possibility seems to be to portray the SIAI, together with LessWrong, as a cult.

After some superficial examination an outsider might conclude the following about this community:

Most of this might sound wrong to the well-read LessWrong reader. But how would those points be received by mediocre rationalists who don't know what you know, especially if eloquently summarized by a famous and respected person?

Preemptive measures

How one might counter such conclusions:

  • Create an introductory guide to LessWrong.
  • Explain why the context of the Sequences is important.
  • Explain why LessWrong differs from mainstream skepticism. 
  • Enable and encourage outsiders to challenge and question the community before turning against it.
  • Discourage the downvoting of people who have not yet read the Sequences.
  • Don't expect people to read hundreds of posts without supporting evidence that doing so is worth it.
  • Avoid jargon when talking to outsiders.
  • Detach LessWrong from the SIAI by creating an additional platform to talk about related issues.
  • Ask or pay independent experts to peer-review the SIAI's work.
  • Make the finances of the SIAI easily accessible.
  • Openly explain why and for what the SIAI currently needs more money.

So what do you think needs improvement and what would you do about it?

Comments

(1) Write a short, introductory, thoroughly cited guide on each major concept employed by SIAI / LW.

As an example, this is what I'm currently doing for the point about why standard, simple designs for machine ethics will result in disaster if implemented in a superintelligent machine. Right now, you have to read hundreds of pages of dense material that references unusual terms described in hundreds of other pages all across Less Wrong and SIAI's website. That is unnecessary, and doesn't help public perception of SIAI / LW. It looks like we're being purposely obscurantist and cult-like.

Why an intelligence explosion is probable is another good example of this.

(2) Engage the professional community. Somebody goes to SIAI's page and looks for accomplishments and they see not a single article in a peer-reviewed journal. Compare this to, um... the accomplishments page of every other 10-year research institute or university research program on the planet.

EDIT: I should note that in the course of not publishing papers in journals and engaging the mainstream community, SIAI has managed to be almost a decade ahead of everyone else. Having just read quite nearly the entirety of extant literature in the field of machine ethics, I can say with some confidence that the machine ethics field still isn't caught up to where Eliezer was circa 2001.

So of course SIAI can work much more quickly if it doesn't bother to absorb the entirety of the (mostly useless) machine ethics literature and then write papers that use the same language and style as the mainstream community and cite all the same papers.

The problem is that if you don't write all those papers, then people keep asking you dumb questions like "Why can't we just tell it to maximize human happiness?" You have to keep answering that question because there is no readable, thoroughly-cited, mainstream-language guide that answers those types of questions. (Except the one I'm writing now.)

Also, not publishing those papers in mainstream journals leaves you with less credibility in the eyes of those savvy enough to know there is a difference between conference papers and papers accepted to mainstream journals.

So I think it's worth all that effort, though probably not for somebody like Yudkowsky. He should be working on TDT and CEV, I imagine. Not reading papers about Kantian solutions to machine ethics.

Perhaps it would be useful to change the framing?

For example... if I join a book discussion group:

  • I understand that much of the discussion will not make any sense to me if I haven't read the book, and that there's a rapidly reached limit to how usefully I can participate in the discussion without having read the book.

  • I don't expect anyone to expend a lot of effort justifying the reading of the book to people skeptical about the benefits of doing so.

  • I don't expect anyone to summarize just the interesting parts of the book for me so I can participate in the discussion without actually reading the book.

All of this remains true even if the group welcomes new members who haven't read the book yet, but who hang around because the community's discussions seem interesting.

So, perhaps encouraging a similar attitude with respect to the Sequences would help manage some of the PR issues you identify surrounding them.

Of course, none of that would address the SIAI-related issues. Then again, from my perspective LW is already fairly separate from SIAI... at least, I participate in the former and not in the latter and nobody seems to mind... so I don't see a problem that needs solving there.

But I would not object to further separation, if a consensus emerged in favor of that.

A useful device here might be the word "about". LW is framed as being about rationality, so everyone who thinks they know anything about rationality thinks they can participate. In practice, however, it is about a specific type of rationality (that it happens to be the type that can be considered the only one is, for the moment, irrelevant), one that requires having read the Sequences. From an outside view one might even argue that LW is "about" the Sequences rather than about rationality.

ciphergoth considers LW "a fan site for the sequences" (quote from Sunday). But this only becomes clear from the way people talk about them.

That's not unreasonable... certainly it's what got me to stick around.

And like any fan site, it's as much about enjoying the company of the sorts of people who find this sort of thing engaging as it is about the thing itself.

First one has to identify characteristics that could potentially be used to cast a damaging light on this community. Here the most obvious possibility seems to be to portray the SIAI, together with LessWrong, as a cult.

Probably the other main possibilities that spring to my mind are:

  • That it is a luddite organisation;

  • That it is an unscrupulous machine intelligence outfit masquerading as a luddite organisation for marketing reasons;

  • That it has fallen too far behind to have much chance of meeting its goals;

  • That it is too perfectionist to have much chance of meeting its goals;

  • That its lack of experience and secretive culture are a bad combination.

Less Wrong has a FAQ that anyone can edit. I think your first four Measures could be best addressed by getting an account on the Wiki and writing what you think we need.

Your worries about the structure of SIAI sound like the sort of thing worth talking about, but posting them on Less Wrong might not be the best way to address them, due to the bystander effect and the fact that many important SIAI folk are not readers here. If you are really interested in this side of things, consider emailing someone on the organizational side of SIAI (Eliezer is not primarily on the organizational side and is usually busy; Michael Vassar might be good, or at least know who to forward it to) and seeing what they have to say. Justin Shovelain also has a history of being good at explaining this side of things; he has a sequence somewhere in the pipeline that I think will get some of this across.


Using scope insensitivity and high-risk to justify action, outweigh low probabilities and disregard opposing evidence.

This is true, but I think the problem goes even further than this: many people are unwilling to make what Less Wrong readers would consider the "obvious" utilitarian choice, e.g. in a scenario like Torture vs. Dust Specks. Outsiders probably consider these unintuitive moral decisions weird at best and scary at worst.
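
To make the aggregation behind that "obvious" utilitarian choice concrete, here is a toy sketch in Python. All of the numbers are arbitrary stand-ins (the actual thought experiment uses 3^^^3 people, which no ordinary number type can represent); nothing here comes from the original post.

    # Toy illustration with made-up numbers: a vanishingly small harm,
    # multiplied over an astronomically large number of people, can
    # outweigh one enormous harm to a single person.
    dust_speck_disutility = 1e-9       # assumed harm per person from one dust speck
    people_with_dust_specks = 1e30     # illustrative stand-in for "very many people"
    torture_disutility = 1e12          # assumed harm of 50 years of torture

    total_speck_harm = dust_speck_disutility * people_with_dust_specks   # 1e21
    print(total_speck_harm > torture_disutility)   # True: the specks dominate in aggregate

The inequality only holds under the assumption that disutility adds linearly across people, which is exactly the premise outsiders tend to find weird or scary.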


I really love the idea of a commercial going "And now I'm showing you a picture of a man in glasses with a labcoat to prey on your learned respect for authority! Quick! flash to happy family to associate the product with happy families. nowthedisclaimerisreallyfastandintinytextbecauseiknowthiswillleavelessofanimpressiononyou"


Advertising for LessWrong is plausible. I just got a £75 Google AdWords voucher in the post ...

(£75 is approximately nothing - a decent taster campaign is at least £300. But I have no use for it and LW is welcome to it ...)

You talk about "making people read the sequences". I suggest that "making" people do anything doesn't work. You have to pull them. This means you need them to think there's something good there that they want to have.

(You want to herd cats, you need to work out the local value of tuna.)

How about some advertising taglines? The current tagline is excellent, for example. But why would people want that? What can they get here they don't have now?

People want to WIN.

Most people don't feel like winners.

  • "Win in the world with clear, rational thinking"
  • "If you know why you do things, you can WIN in the world."

etc. Any others? Other ideas of things that will pull people towards LW?

Edit: And why has HP:MoR lured people in? What kept them here? How many came here from HP:MoR and did not stay? Why not? Etc.

"Win in the world with clear, rational thinking"

"If you know why you do things, you can WIN in the world."

These both sound like "The Secret"-esque crank taglines, which will drive off the intended audience.

It is unfortunate that this method has been applied to things with no substance at all, like The Secret. However, using it here would not on any level be deceptive, nor would it promise anything that couldn't be delivered.

People want to win. That's what LW rationality is for. That is, in point of fact, what we promise. You seem to be objecting to saying so upfront.

Is it worth introducing simple rationality tools to people who would otherwise think The Secret was a good idea? Or is that something you think should be avoided in general?

Using it as the only hook possibly wouldn't be good and might lead to the effect you describe. However, brainstorming is cheap. What I'm saying is "ideas, ideas, please come up with lots."

The public mind now associates "WINNING" with Sheentology.

Does it? That connection didn't even occur to me until you pointed it out here. (I might be less in tune with pop culture than others though.)

Yes, this is an unfortunate turn of events. I'll also note that 'winning' is a term that Scientologists use constantly, in the same way that Sheen uses it.

(splutter) It'll pass :-)

I think part of what attracted people about HPMoR is that it showed Harry being successful for distinct, comprehensible, imitable reasons, which people wanted to learn more about, but more of it was a feeling that "this Eliezer guy writes some funny, interesting stuff, I want to check out more of what he's written."

Which works :-) But I'm quite interested to know about the experience of those who read MoR, looked at LessWrong and went away never to return. I don't know if they can even be estimated, let alone counted, surveyed and analysed, but I suspect they're important - look at the evidence that would refute your hypothesis (in this case, that MoR is good for LW), not just that which confirms it.

I haven't hypothesized that MoR is good for LW. I haven't bothered to track the contributions of the people who arrived from MoR, so I don't have much of a sense of what they're bringing to the community. I'm just aware that there seem to be a considerable number of members who've come here through MoR.

I would be very surprised though, if more karma-positive members are leaving Less Wrong due to MoR than are arriving because of it.

I didn't say you did, but many others have.

Edit: I did say "you". I meant a general "you" (one's hypothesis), not anything you in particular said. Sorry!

Discourage the downvoting of people who have not yet read the Sequences.

I don't downvote based on whether people have read the sequences. I vote based on merit and obnoxiousness.

I don't downvote based on whether people have read the sequences

It doesn't matter why you do it; what matters is what newbies and outsiders think, since they are not aware of your superior and rational use of the reputation system. This post is about public relations, so you have to take the outside view.

Sure.

But it's not unreasonable for me to treat the beliefs of people who actually pay attention to what I do as a different set from the beliefs of people who don't, and to devote different levels of effort to attempting to manipulate the former and the latter.

For example, I might decide that the beliefs of people who won't pay attention to what I actually do before deciding that I'm behaving badly simply aren't worth considering at all.

This might not be wise -- that is, I might not like the consequences of that decision -- but it's perfectly coherent, and entirely on-topic.

That would undermine whatever value the whole karma system may have at this point. Not punishing, or perhaps even rewarding, mediocre posts seems likely to encourage complacency on the part of users.

A race to the bottom would likely ensue as well, since new negative achievements would become possible: who can get away with the most trolling? Who can get the most karma with the least effort?

In fact, I think the system, and most people, are far too lenient already, on the whole.

I wonder if posts shouldn't start out with a slight negative value, to reflect their high potential for introducing arbitrary complexity (noise) into the established information pool (mostly signal... though that may be up for debate) of the site.

Another idea: the more posts a user makes, the greater that initial negative value should be, to reflect the higher standard that is expected of them as time goes by. :-)

Yeah, that would require pretty complex algorithms.
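
As a concrete illustration, here is a minimal sketch in Python of what such an initial-score rule could look like. The function name, the constants, and the cap are all invented for illustration; none of this reflects LessWrong's actual karma system.

    # Hypothetical initial karma score for a new post: starts slightly
    # negative, and the penalty grows with the author's post count,
    # capped so prolific users aren't punished without bound.
    def initial_score(author_post_count, base_penalty=0.5,
                      per_post_penalty=0.01, max_penalty=5.0):
        penalty = base_penalty + per_post_penalty * author_post_count
        return -min(penalty, max_penalty)

    print(initial_score(0))     # -0.5  (newcomer's first post)
    print(initial_score(300))   # -3.5  (regular with 300 prior posts)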

Occasionally people take LW seriously enough to criticise it, e.g. this, FWIW. I suggest this is a useful sort of notice and that LW could do with a lot more of it.

Detach LessWrong from the SIAI by creating an additional platform to talk about related issues.

I think LessWrong is sufficiently separated from SIAI - most LessWrong members are not involved at all with SIAI, and many SIAI members/advisors/whatchamacallits don't post on LessWrong (I think).

I think of LessWrong more as "a place for people who read the Sequences to discuss related topics", whereas SIAI is much more focused on a specific goal. SIAI people may make announcements on LessWrong because a lot of members are interested, but I don't expect SIAI people to pay that much attention to LessWrong in general.

So in that light, these:

Ask or pay independent experts to peer-review the SIAI's work. Make the finances of the SIAI easily accessible. Openly explain why and for what the SIAI currently needs more money.

... are for SIAI, and have little to do with LessWrong.

I think LessWrong is sufficiently separated from SIAI - most LessWrong members are not involved at all with SIAI, and many SIAI members/advisors/whatchamacallits don't post on LessWrong (I think).

It's not detached yet - it's still to a large degree about SIAI-related interests that are not related to the (excellent) tagline "a community blog devoted to the art of refining human rationality".

There are plenty of front-page promoted posts that are essentially advertising for SIAI, reasons why you should donate all you can spare to SIAI or how-tos on ways for readers to make money to donate to SIAI. Which I can live with - it's no more annoying than the banners on Wikipedia at the end of each year, and it takes money to keep the lights on - but it's not obviously on-mission (taking the tagline at face value).

I think that one day LW should be more independent of SIAI, but it's not a problem that it isn't yet and it can happen at its own pace.

I think LessWrong is sufficiently separated from SIAI...

Why I think this is not the case:

  • The Sequences were written with the goal of convincing people of the importance of taking risks from AI seriously, and therefore of donating to the SIAI (Reference: An interview with Eliezer Yudkowsky).
  • LessWrong is used to ask for donations.
  • You can find a logo with a link to the SIAI in the header and a logo and a link to LessWrong on the SIAI's frontpage.
  • LessWrong is mentioned as an achievement of the SIAI (Quote: "Less Wrong is important to the Singularity Institute's work towards a beneficial Singularity").
  • A quote from the official SIAI homepage: "Less Wrong is [...] a key venue for SIAI recruitment".

LessWrong is the mouthpiece of the SIAI and its main advertisement platform. I don't think one can reasonably disagree about that.

I do disagree. LessWrong isn't the mouthpiece of SIAI, that would be the SIAI blog. I don't think it's reasonable to expect top-level posts on LessWrong to represent the SIAI's views, and even less to expect that of discussion posts, comments and voting patterns.

There may be a fair amount of SIAI-oriented posts by Eliezer or others on LessWrong, but I don't see that as using LessWrong as a platform, but rather "the SIAI talking to LessWrong people".

LessWrong may be the SIAI's most popular advertisement platform, but that's because the quality of Eliezer's writings and the community attract a larger audience than the SIAI website does.

Eliezer needs nerds for the SIAI; instead of going through the effort of hunting nerds in the wild, he created LessWrong in the hope of having a self-sustained place where nerds like to hang out and are already familiar with his ideas. But LessWrong isn't supposed to represent the SIAI, apart from the fact that it was shaped with the features that make it a good hunting ground for the kind of nerds Eliezer needs. A lot of features required for having a functional internet community (moderation, karma, openness) have nothing to do with the SIAI's goals themselves.

I'm rambling a bit, but I still think that LessWrong is the wrong place to come to complain about things you don't like about SIAI. Information flow is mostly SIAI -> LessWrong. And the issue of "what the SIAI should do to reach its goals" is very different from "what features should LessWrong have to be a valuable community".

I still think that LessWrong is the wrong place to come to complain about things you don't like about SIAI.

I don't necessarily agree, but I will do you all a favor and from now on send any criticism directly to the SIAI, via e-mail or otherwise, unless someone else starts a discussion about the SIAI here, in which case I might post a comment.