WARNING: Memetic hazard.

http://www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.html?wpisrc=obnetwork

Is there anything we should do?


Is there anything we should do?

  • Meet 10 new people (over a moderately challenging, personally specific timeframe).
  • Express gratitude or appreciation.
  • Work close to where we live.
  • Have new experiences.
  • Get regular exercise.

i.e. No. Nothing about this article comes remotely close to changing the highest-expected-value actions for the majority of the class 'we'. If it happens that there is a person in that class for whom this opens an opportunity to create (more expected) value, then it is comparatively unlikely that that person is the kind who would benefit from "we shoulding" exhortations.

I want this list posted in response to every "is there anything we should do" ever. Just all over the internet. I would give you more than one upvote just for that list if I could.

"So? What do you think I should do?"

"Hm. I think you should start with all computable universes weighted by simplicity, disregard the ones inconsistent with your experiences, and maximize expected utility over the rest."

"That's your answer to everything!"

(source)
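The quip compresses a real recipe (Solomonoff-style simplicity weighting over hypotheses, conditioning on experience, then expected-utility maximization), which can at least be illustrated on a toy finite case. A minimal sketch, with made-up hypotheses, complexities, and utilities that are purely illustrative and not from the thread:

    # Toy stand-in for "all computable universes weighted by simplicity":
    # hypotheses are labeled world-models with a complexity in bits, weighted 2^-bits.
    # Everything here is illustrative.

    hypotheses = [
        # (name, complexity_bits, consistent_with_experience, utility_of_action)
        ("simple world", 3, True,  {"go outside": 10, "stay in": 4}),
        ("weird world", 10, True,  {"go outside": -5, "stay in": 6}),
        ("ruled out",    2, False, {"go outside": 99, "stay in": 99}),
    ]

    def best_action(hypotheses, actions=("go outside", "stay in")):
        # Discard hypotheses inconsistent with experience, weight the rest by
        # simplicity, and pick the action that maximizes expected utility.
        live = [(2.0 ** -bits, utils)
                for _, bits, consistent, utils in hypotheses if consistent]
        total = sum(w for w, _ in live)
        def expected_utility(a):
            return sum(w * utils[a] for w, utils in live) / total
        return max(actions, key=expected_utility)

    print(best_action(hypotheses))  # "go outside" wins under these toy numbers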

1ChristianKl10y
Writing a bot to do this isn't that hard and you can always rethink the list to optimize it for a more general audience.
[-][anonymous]10y440

Is there anything we should do?

Laugh, as the entire concept (and especially the entire reaction to it by Eliezer and people who take the 'memetic hazard' thing seriously) is and always has been laughable. It's certainly given my ab muscles a workout every now and then over the last three years... maybe with more people getting to see it and getting that exercise it'll be a net good! God, the effort I had to go through to dig through comment threads and find that google cache...

This is also such a delicious example of the Streisand effect...

This is also such a delicious example of the Streisand effect...

Yes, Eliezer's Streisanding is almost suspiciously delicious. One begins to wonder if he is in thrall to... well, perhaps it is best not to speculate here, lest we feed the Adversary.

Hanlon's Beard.

4Dahlen10y
There.

I think this is also a delicious example of how easy it is to troll LessWrong readers. Do you want to have an LW article and a debate about you? Post an article about how LW is a cult or about Roko's basilisk. Success 100% guaranteed.

Think about the incentives this gives to people who make their money by displaying ads on their websites. The only way we could motivate them more would be to pay them directly for posting that shit.

This isn't a particularly noteworthy attribute of Less Wrong discussion; most groups below a certain size will find it interesting when a major media outlet talks about them. I'm sure that the excellent people over at http://www.clown-forum.com/ would be just as chatty if they got an article.

I suppose you could say that it gives journalists an incentive to write about the groups below that population threshold that are likely to generate general interest among the larger set of readers. But that's just the trivial result in which we have invented the human interest story.

6polymathwannabe10y
I was going to say precisely that. In the end, banning Roko's post was pointless and ineffectual: anyone with internet access can learn about the "dangerous" idea. Furthermore, it's still being debated here, in a LessWrong thread. Are any of you having nightmares yet?

On the other hand, does refraining from banning these debates give us fewer of them? It doesn't seem so. We already had a dozen of them, and we are going to have more, and more, and more...

I can't know what happened in the parallel Everett branch where Eliezer didn't delete that comment... but I wouldn't be too surprised if the exposure of the basilisk was pretty much the same -- without the complaining about censorship, but perhaps with more of "this is what people on LW actually believe, here is the link to prove it".

I think this topic is debated mostly because it's the clever contrarian thing to do. You have a website dedicated to rationality and artificial intelligence where people claim to care about humanity? Then you get contrarian points for inventing clever scenarios of how using rationality will actually make things go horribly wrong. It's too much fun to resist. (Please note that this motivated inventing of clever horror scenarios is different from predicting actual risks. Finding actual risks is a useful thing to do. Inventing dangers that exist only because you invented and popularized them, not very useful.)

4Jiro10y
The debates are not technically banned, but there are still strict limits on what we're allowed to say. We cannot, for instance, have an actual discussion about why the basilisk wouldn't work. Furthermore, there are aspects other than the ban that make LW look bad. Just the fact that people fall for the basilisk makes LW look bad all by itself. You could argue that the people who fall for the basilisk are mentally unstable, but having too many mentally unstable people or being too willing to limit normal people for the sake of mentally unstable people makes us look bad too. Ultimately, the problem is that "looking bad" happens because there are aspects of LW that people consider to be bad. It's not just a public relations problem--the basilisk demonstrates a lack of rationality on LW and the only way to fix the bad perception is to fix the lack of rationality.
6David_Gerard10y
One of the problems is that the basilisk is very weird, but the prerequisites - which are mostly straight out of the Sequences - are also individually weird. So explaining the basilisk to people who haven't read the Sequences through a few times and haven't been reading LessWrong for years is ... a bit of work.
0Jiro10y
Presumably, you don't believe the basilisk would work. If you don't believe the basilisk would work, then it really doesn't matter all that much that people don't understand the prerequisites. After all, even understanding the prerequisites won't change their opinion of whether the basilisk is correct. (I suppose that understanding the sequences may change the degree of incorrectness--going from crazy and illogical to just normally illogical--but I've yet to see anyone argue this.)
4David_Gerard10y
Are you saying it's meaningless to tell someone about the prerequisites - which, as I note, are pretty much straight out of the Sequences - unless they think the basilisk would work?
2Jiro10y
It's not meaningless in general, but it's meaningless for the purpose of deciding that they shouldn't see the basilisk because they'd misunderstand it. They don't misunderstand it--they know that it's false, and if they read the sequences they'd still know that it's false. As I pointed out, you could still argue that they'd misunderstand the degree to which the basilisk is false, but I've yet to see anyone argue that.
4roystgnr10y
I've said precisely this in the past, but now I'm starting to second-guess myself. Sure, if we're worried about basilisk-as-memetic-hazard then deletion was an obvious mistake... but are any of you having nightmares yet? I'm guessing "no", in which case we're left with basilisk-as-publicity-stunt, which might actually be beneficial. I wouldn't personally have advocated recruiting rationalists via a tactic like "Get a bunch of places to report on how crazy you are, then anyone who doesn't believe everything they read will be pleasantly surprised", but I'm probably the last person to ask about publicity strategy. I also would have disapproved of "Write Harry Potter fan fiction and see who wants to dig through the footnotes", without benefit of hindsight.

Since LW is going to get a lot of visitors, someone should put an old post that would make an excellent first impression in a prominent position. I nominate How to Be Happy.

9solipsist10y
If an AI article is used, I nominate sorting pebbles into correct heaps.

Or perhaps a publicity boost would be better utilized by directing traffic to effective altruism information, e.g. at givewell or 80000 hours.

5Viliam_Bur10y
Maybe all of them together... because we have more than one topic here, and different people are interested in different things. Something about AI math. Something about values. Something about improving your life. Something about helping others. Something about spreading rationality.
3Costanza10y
Let's check featured articles on the main page on 19 July 2014... and... there we go.
6Roxolan10y
"Eliezer Yudkowsky Facts" as a featured article. Wow, that's certainly one way to react to this kind of criticism. (I approve.)

Re-reading that post, I came upon this entry, which seems particularly relevant:

We're all living in a figment of Eliezer Yudkowsky's imagination, which came into existence as he started contemplating the potential consequences of deleting a certain Less Wrong post.

Assuming we can trust the veracity of this "fact", I think we have to begin to doubt Eliezer's rationality. I mean, sure, the Streisand effect is a real thing, but causing Roko's obscure thought experiment to become the subject of the #1 most-read recent article on Slate, just by censoring it? Is that really realistic?

...

Seriously, did anyone actually see something like this coming?

3Error10y
I like this idea. Of course, right now the top thing on discussion is this thread, so here is probably as good a place as any to link the good stuff. One of my own personal favorites: Yvain's Diseased Thinking. Or, more or less the same point in much more concise form: Eliezer's Disguised Queries. Also, the top-ranked posts page, which I'm not sure is linked anywhere obvious.
2XiXiDu10y
The problem isn't that easy to solve. Consider that MIRI, then SIAI, already had a bad name before Roko's post, and before I ever voiced any criticism. Consider this video from an actual AI conference, from March 2010, a few months before Roko's post. Someone in the audience makes the following statement: Or consider the following comment by Ben Goertzel from 2004: And this is Yudkowsky's reply: LessWrong would have to somehow distance itself from MIRI and Eliezer Yudkowsky.

LessWrong would have to somehow distance itself from MIRI and Eliezer Yudkowsky.

And become just another procrastination website.

Okay, there is still CFAR here. Oh wait, they also have Eliezer on the team! And they believe they can teach the rest of the world to become more rational. How profoundly un-humble or, may I say, cultish? Scratch CFAR, too.

While we are at it, let's remove the articles "Tsuyoku Naritai!", "Tsuyoku vs. the Egalitarian Instinct" and "A Sense That More Is Possible". They contain the same arrogant ideas, and encourage the readers to think likewise. We don't want more people trying to become awesome, or even worse, succeeding at that.

Actually, we should remove the Sequences entirely. I mean, how could we credibly distance ourselves from Eliezer, if we leave hundreds of his articles as the core of this website? No one reads the Sequences anyway. Hell, these days no one even dares to say "Read the Sequences" anymore. Which is good, because telling people to read the Sequences has been criticized as cultish.

There are also some poisonous memes that make people think bad of us, so we should remove them from the website. Spe... (read more)

And become just another procrastination website.

Become?

What I meant by distancing LessWrong from Eliezer Yudkowsky is to become more focused on actually getting things done rather than rehashing Yudkowsky's cached thoughts.

LessWrong should finally start focusing on trying to solve concrete and specific technical problems collaboratively. Not unlike what the Polymath Project is doing.

To do so, LessWrong has to squelch all the noise by no longer caring about getting more members and by strongly moderating non-technical off-topic posts.

I am not talking about censorship here. I am talking about something unproblematic. Once the aim of LessWrong is clear, namely to tackle technical problems, moderation becomes an understandable necessity. And I'd be surprised if much moderation were necessary once only highly technical problems are discussed.

Doing this will make people hold LessWrong in high esteem. Because nothing is as effective at proving that you are smart and rational as getting things done.

ETA: How about trying to solve the Pascal's mugging problem? It's highly specific, technical, and does pertain to rationality.
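For anyone unfamiliar with it, the core of Pascal's mugging is just naive expected-value arithmetic: a tiny probability multiplied by a sufficiently huge promised payoff swamps everything else. A minimal sketch with purely illustrative numbers:

    # Naive expected-value comparison behind Pascal's mugging.
    # The probability and payoff below are purely illustrative.

    p_mugger_honest = 1e-20      # credence that the mugger can really deliver
    promised_utility = 3 ** 100  # stand-in for an astronomically large payoff
    cost_of_paying = 5           # five dollars' worth of utility

    ev_pay = p_mugger_honest * promised_utility - cost_of_paying
    ev_refuse = 0.0

    print(ev_pay > ev_refuse)    # True: the naive calculation says "pay up",
                                 # which is exactly the problem to be solved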

3Viliam_Bur10y
I guess I agree with you on some more meta level. LessWrong as it is now is not optimal. (Yeah, it is very cheap to say this; the problem is coming to a solution and an agreement about how specifically the optimal version would look.) LessWrong as it is now is a result of a historical process, and of technical limitations given by the near-unmaintainability of the Reddit code. If we tried to design it from scratch, we would certainly invent something different, with the experience we have now.

But I guess a part of the problem is general for web discussions, and seems to me somewhat analogous to Gresham's law: "lower-quality content drives out higher-quality content". Specifically, people say they prefer higher-quality content, but they also want quantity on demand. However high the quality on a website, if people come a week later and find no new content, they will complain. But if there is new content every week, people will learn to visit the site more often, and then they will complain about not having new content every day. There will never be enough. And the supply of high-quality content is limited. If the choice is given to readers, at some point they will express a preference for more content, even if it means somewhat lower quality. And then again, and again, until the quality drops dramatically, but each single step felt like a reasonable trade-off.

There is also a systematic bias: people who spend more time procrastinating online have more voice in online debates, for the obvious reasons. So the community consensus on "how much new content per day or per week do we actually need?" will be mostly set by the greatest online procrastinators, which means the answer will pretty much always be "more!" So it would seem the solution for keeping the quality level is to remain very selective in accepting new content, even when that is met with the disapproval of a majority of the community. Which will provide not just anger, but also hund
4Sarunas10y
It seems to me that what XiXiDu wants are not just any high-quality posts, and that the classification of LessWrong posts into high- and low-quality buckets fails to capture what he/she tried to convey. It seems to me that XiXiDu talked about the lack of problem-solving posts, the typical titles of which could be something like "Problem 123: Let's brainstorm for possible angles how to attack it" or "Problem 456: Let's try an unexpected approach 789 and see if it leads somewhere" (not unlike the aforementioned Polymath Project or maybe even MathOverflow). Currently neither "Main" (which is mostly about presenting arguments that are already polished) nor "Discussion" (which is a mishmash of mostly links, open threads and posts that are considered too short to be posted in Main) contains many posts of this type.
2NancyLebovitz10y
One solution might be a reputation net-- people who liked this also liked that. With luck, there'd be a cluster of people who want the same sort of thing you do.
0Capla9y
Why don't we?
-1Bruno_Coelho10y
The Useful Idea of Truth

According to the Slate article,

Yudkowsky and Peter Thiel have enthused about cryonics, the perennial favorite of rich dudes who want to live forever.

Uh, no. Surprisingly few "rich dudes" have shown an interest in cryonics. I know quite a few cryonicists and I have helped to organize cryonics-themed conferences, and to the best of my knowledge no one on the Forbes 500 list has signed up.

Moreover ordinary people can afford cryonics arrangements by using life insurance as the funding mechanism.

We can see that rich people have avoided cryonics from the fact that the things rich people really care about tend to become status signals and attract adventuresses in search of rich husbands. In reality, cryonics lacks this status; it acts like "female Kryptonite." Just google the phrase "hostile wife phenomenon" to see what I mean. In other words, I tell straight men not to sign up for cryonics for the dating prospects.

2ChristianKl10y
Peter Thiel seems to be on the Forbes 500 list. Are you arguing that he isn't signed up for cryonics? Are you saying he simply isn't attending those conferences?
2John_Maxwell10y
Can you give examples?
1TheAncientGeek10y
Ordinary people seem to be in the habit of frittering their life insurance away on their descendants. Perhaps cryonics is for the moderately well off and single.
1lsparrish10y
To be fair, the article got lots of things wrong.
[-][anonymous]10y290

This really is not a friendly civilization, is it.

7 ideas that might cause you eternal torture, click now

If Langford basilisks actually existed, Gawker would be the first site to use them.

-11polymathwannabe10y

I thought the article was quite good.

Yes it pokes fun at lesswrong. That's to be expected. But it's well written and clearly conveys all the concepts in an easy to understand manner. The author understands lesswrong and our goals and ideas on a technical level, even if he doesn't agree with them. I was particularly impressed in how the author explained why TDT solves Newcomb's problem. I could give that explanation to my grandma and she'd understand it.

I don't generally believe that "any publicity is good publicity." However, this publicity is good publicity. Most people who read the article will forget it and only remember lesswrong as that kinda weird place that's really technical about decision stuff (which is frankly accurate). Those people who do want to learn more are exactly the people lesswrong wants to attract.

I'm not sure what people's expectations are for free publicity, but this is, IMO, the best-case scenario.

From a technical standpoint, this bit:

Even if the alien jeers at you, saying, “The computer said you’d take both boxes, so I left Box B empty! Nyah nyah!” and then opens Box B and shows you that it’s empty, you should still only take Box B and get bupkis. ... The rationale for this eludes easy summary, but the simplest argument is that you might be in the computer’s simulation. In order to make its prediction, the computer would have to simulate the universe itself.

Seems wrong. Omega wouldn't necessarily have to simulate the universe, although that's one option. If it did simulate the universe, showing sim-you an empty box B doesn't tell it much about whether real-you will take box B when you haven't seen that it's empty.

(Not an expert, and I haven't read Good and Real which this is supposedly from, but I do expect to understand this better than a Slate columnist.)
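For concreteness, the reason prediction-by-simulation rewards one-boxing in the standard (non-transparent) version can be shown with a toy model in which the predictor simply runs the agent's own decision procedure. A minimal sketch, with hypothetical function names and a perfectly reliable predictor assumed:

    # Toy Newcomb setup: the predictor fills box B by simulating the agent's
    # policy, so the policy fixes both the prediction and the actual choice.
    # The names and the perfect-predictor assumption are illustrative only.

    def predictor_fills_box_b(policy):
        return 1_000_000 if policy() == "one-box" else 0

    def payoff(policy):
        box_a = 1_000                         # transparent box, always $1,000
        box_b = predictor_fills_box_b(policy)
        return box_b if policy() == "one-box" else box_a + box_b

    print(payoff(lambda: "one-box"))  # 1000000
    print(payoff(lambda: "two-box"))  # 1000 -- the guaranteed $1,000 never
                                      # makes up for the empty box B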

And I think the final two paragraphs go beyond "pokes fun at lesswrong".

7wedrifid10y
It is wrong in about the same way that high-school chemistry is wrong. Not one of the statements is true, but the error seems to be one of not quite understanding the details rather than any overt misrepresentation. I.e. I'd cringe and say "more or less", since that's closer to getting Transparent Newcomb right than I could reasonably expect from most people.
3FeepingCreature10y
The other options work out the same as simulating the universe for the purpose of telling you how you should decide to behave, but "simulating the universe" makes it visceral and easy to imagine.
2David_Gerard10y
Yes, they're a caution about reason as memetic immune disorder. The money quote for the whole article is:

Of course, mentioning the articles on ethical injunctions would be too boring.

Here comes the Straw Vulcan's younger brother, the Straw LessWrongian. (Brought to you by RationalWiki.)

Of course, mentioning the articles on ethical injunctions would be too boring.

It's troublesome how ambiguous the signals are that LessWrong is sending on some issues.

On the one hand LessWrong says that you should "shut up and multiply, to trust the math even when it feels wrong". On the other hand Yudkowsky writes that he would sooner question his grasp of "rationality" than give five dollars to a Pascal's Mugger because he thought it was "rational".

On the one hand LessWrong says that whoever knowingly chooses to save one life, when they could have saved two - to say nothing of a thousand lives, or a world - they have damned themselves as thoroughly as any murderer. On the other hand Yudkowsky writes that ends don't justify the means for humans.

On the one hand LessWrong stresses the importance of acknowledging a fundamental problem and saying "Oops". On the other hand Yudkowsky tries to patch a framework that is obviously broken.

Anyway, I worry that the overall message LessWrong sends is that of naive consequentialism based on back-of-the-envelope calculations, rather than the meta-level consequentialism that contains itself when faced wit... (read more)

3Viliam_Bur10y
Wow, these are very interesting examples! Okay, for me the whole paradox breaks down to this: I have limited brainpower and my hardware is corrupted. I am not able to solve all problems, and even where I believe I have a solution, I can't trust myself. On the other hand, I should use all the intelligence I have, simply because there is no convincing argument why doing anything else would be better.

Using my reasoning to study my reasoning itself, and the biases thereof, here are some typical failure modes: those are the things I probably shouldn't do even if they seem rational. Now I'm kinda meta-reasoning about where I should follow my reasoning and where not. And things are getting confusing, probably because I am getting closer to the limits of my rationality. Still, there is no better way for me to act.

From the outside, this may seem like having a dozen random excuses. But there are no better solutions. So the socially savvy solution is to shut up and pretend the whole topic doesn't even exist. It doesn't help to solve the problem, but it helps to save face. Sweeping human irrationality under the rug instead of exposing it and then admitting that you, too, are only human.
3David_Gerard10y
Expounding at length on dust specks vs torture, shut up and multiply and "taking ideas seriously" is likely to make people look askance at you, even if you also add "... but don't do anything weird, OK?" on the end.

Looks like a fairly standard parable about how we should laugh at academic theorists and eggheads because of all those wacky things they think. If only Less Wrong members had the common sense of the average Salon reader, then they would instantly see through such silly dilemmas.

Giving people the chance to show up and explain that this community is Obviously Wrong And Here's Why is a pretty good way to start conversations, human nature being what it is. An opportunity to have some interesting dialogues about the broader corpus.

That said, I am in the camp that finds the referenced 'memetic hazard' to be silly. If you are the sort of person who takes it seriously, this precise form of publicity might be more troubling for the obvious 'hazard' reasons. Out of curiosity, what is the fraction of LW posters that believes this is a genuine risk?

Out of curiosity, what is the fraction of LW posters that believes this is a genuine risk?

Vanishingly small - the post was deleted by Eliezer (was that what, a year ago? two?) because it gave some people he knew nightmares, but I don't remember anybody actually complaining about it. Most of the ensuing drama was about whether Eliezer was right in deleting it. The whole thing has been a waste of everybody's time and attention (as community drama over moderation almost always is).

Most of the ensuing drama was about whether Eliezer was right in deleting it. The whole thing has been a waste of everybody's time and attention (as community drama over moderation almost always is).

'Moderation' was precisely the opposite of the response that occurred. Hysterical verbal abuse is not the same thing as deleting a post and mere censorship would not have created such a lasting negative impact. While 'moderator censorship' was technically involved the incident is a decidedly non-central member of that class.

the post was deleted by Eliezer (was that what, a year ago? two?)

Nearly four years ago to the day, going by RationalWiki's chronology.

Talking about it presumably makes it feel like a newer, fresher issue than it is.

2MrMind10y
Eliezer specifically denied the possibility of a basilisk, although no theory of acausal blackmail in reflective equilibrium exists yet. Roko's post was deleted because of how people reacted to it, not because it was a real memetic hazard. ETA: on a second review, that's the reason Yudkowsky gave after the fact. I'm not convinced it was his initial motivation.
3Algernoq10y
Surely there's some non-zero possibility of acausal blackmail?
3MrMind10y
Well, I guess the standard caveat applies here: there's nothing that really has a 0 chance of happening. I don't know about that, but if it turned out acausal blackmail was logically impossible, that would deserve a probability as small as we can allow ourselves.
-1[anonymous]10y
Isn't this precisely what TDT solves?
5MrMind10y
I sincerely have no idea. I don't even know if TDT stands on its own as a completed theory.
-1Algernoq10y
I'd say it's about as much of a risk as a self-loathing basilisk who punishes only people who supported its creation. It's wrong in the same way Pascal's Wager is wrong, with some extra creepiness added.
-5V_V10y

If only Less Wrong members had the common sense of the average Salon reader, then they would instantly see through such silly dilemmas.

Not sure I agree with your point. There's a standard LW idea that smart people can believe in crazy things due to their environment. For "environment" you can substitute "non-LW" or "LW" as you wish.

8Toggle10y
That's a valid point. To the extent that the article is narrowly targeted at this website, it could be read as an 'expose' on groupthink or the dangers of epistemic closure. That's a more charitable reading. But consider sentences like: "What you are about to read may sound strange and even crazy, but some very influential and wealthy scientists and techies believe it." Which is to say, the author seems to be using LW as a centered example of the social category "intellectuals with a focus on science and technology", rather than using LW as a test case of a community with unusual conventions.
6Tenoke10y
From what I've seen, it seems like very few people who know the basilisk believe it (<10 maybe?), but there are some people (still not a lot, but significantly more than 10), who avoid the basilisk just in case it is dangerous, because of EY's reaction.

It sounds like the actual unusual paradigm on LW is not so much "worried about the basilisk" as it is "unusually accommodating of people who worry about the basilisk".

5Algernoq10y
Modest proposal: practice "ontological rejection therapy" to decrease worry about basilisks, etc.:

  • Shout or type statements intended to draw punishment from every conceivable supernatural or post-Singularity entity.
  • Negative result: Nothing happens. Gain increased sanity and epistemological confidence.
  • Positive result: Devoured by Mind Flayers or equivalent. Surviving peers gain experimental data of immense value.
8David_Gerard10y
I was getting email from LW readers obsessing and worried by the basilisk, even though they knew intellectually it was a silly idea, and unable to talk about it on LW. That's why I started the RW article (which, btw, this Slate article neither mentions nor links to), because individual email doesn't scale. None since that.

obsessing and worried by the basilisk, even though they knew intellectually it was a silly idea

I had a similar but much lesser reaction (mildly disquieting) to the portrait of Hell given in A Portrait of the Artist as a Young Man. I found the portrait had strong immediate emotional impact. Good writing.

More strangely, even though I always had considered the probability that Hell exists as ludicrously tiny, it felt like that probability increased from the "evidence" of a fictional story.

Likely all sorts of biases involved, but is there one for strong emotions increasing assigned probability?

I found Iain M. Banks' Surface Detail to be fairly disturbing (and I'm in the Roko's-basilisk-is-ridiculous camp); even though the simulated-hell technology doesn't currently exist (AFAWK), having the salience of the possibility raised is unpleasant.

5Nornagest10y
Surface Detail's portrayal of Hell struck me as ugly and vulgar but not very disturbing. Some of the gratuitous nastiness in Consider Phlebas was worse, for example (the Eaters scene in particular), and so were some of Vatueil's simulated battle scenes; I think they came across as more salient because they didn't map onto cultural tropes I'd already rejected, and because they didn't come across as being scripted for a quasi-political morality play.
2Algernoq10y
I also was disgusted by "Player of Games", the only Banks novel I read. Is all of Banks' writing like this?
5David_Gerard10y
My first Banks was The Wasp Factory, which is pretty much a tour de force of tastelessness; all further examples are much less severe.
2TheAncientGeek10y
He tends to have one scene of severe nastiness in every book.
0[anonymous]10y
I don't recall anything disgusting in Player of Games?
4gwern10y
What about when the protagonist visits the brothel?
0Algernoq10y
The alien culture in that one is all about brutality and domination. I didn't see a point to reading it, unless you like reading about fantasy violence.
0Nornagest10y
In the context of the Culture novels, the polite way of putting it would be that Banks had a penchant for using scenes of extreme horror and depravity as contrast to the utopian aspects of his writing, not to mention the SF spy games and gun porn. I've never read a novel of his that didn't have at least some of the same, though, and I've read some of his non-SF work.
7David_Gerard10y
I think people totally privilege hypotheses they've read about in compelling fiction. It's taking on board fictional evidence. I find it helps to keep in mind that a plausible story has too many details to be probable - "plausible" and "probable" are somewhat opposites - though it's harder to remember for a compelling story.
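The "plausible vs. probable" point is just the conjunction rule: every added detail can only multiply a story's probability downward, even while making it feel more vivid. A minimal sketch with purely illustrative numbers:

    # Conjunction rule: each (independent) extra detail multiplies the story's
    # probability down. The per-detail probabilities are purely illustrative.
    details = [0.5, 0.5, 0.5, 0.5, 0.5]
    p_story = 1.0
    for p in details:
        p_story *= p
    print(p_story)  # 0.03125 -- far less probable than any single detail alone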

Is there anything we should do?

When reporters interviewed me about Bitcoin, I tried to point to LW as a potential source of stories and described it in a positive way. Several of them showed interest, but no stories came out. I wonder why it's so hard to get positive coverage for LW and so easy to get negative coverage, when in contrast Wired magazine gave Cypherpunks a highly positive cover story in 1993, when Cypherpunks just got started and hadn't done much yet except publish a few manifestos.

[-][anonymous]10y230

I wonder why it's so hard to get positive coverage for LW and so easy to get negative coverage, when in contrast Wired magazine gave Cypherpunks a highly positive cover story in 1993, when Cypherpunks just got started and hadn't done much yet except publish a few manifestos.

There's an easy answer and a hard answer.

The easy answer is that, for whatever reason, the media today is far more likely to run a negative story about the tech industry or associated demographics than to run a positive story about it. LW is close enough to the tech industry, and its assumed/stereotyped demographic pattern is close enough to that of the tech industry, that attacking it is a way to attack the tech industry.

Observe:

"highly analytical sorts interested in optimizing their thinking, their lives, and the world through mathematics and rationality ... techno-futurism ... high-profile techies like Peter Thiel ... some very influential and wealthy scientists and techies believe it ... computing power ... computers ... computer ... mathematical geniuses Stanislaw Ulam and John von Neumann ... The ever accelerating progress of technology ... Futurists like science-fiction writer Vernor Vinge and eng... (read more)

9ChristianKl10y
Personal contacts between the people employed at Wired magazine and a lot of hackers are quite strong. Wired actually had an intention of pushing projects like Cypherpunks or, in the last years, the Quantified Self movement, which they essentially founded (Kevin Kelly and Gary Wolf are both Wired editors). I don't think that LW is really the place that needs positive PR. I can't really think of a story about LW that I want to tell a reporter. I can think of stories about MIRI or about CFAR, but LW itself doesn't need PR.
7Viliam_Bur10y
That's a great point. LW is not MIRI. LW comments are not MIRI research. LW moderation policy is not FAI source code. Etc. The proper response to the basilisk would probably be: "So, tell me about the most controversial comment ever in your web discussions. You know, just so I can popularize it as the stuff your website is really about."
3Jiro10y
I don't think the idea is that LW is about the basilisk, but rather that the nature of the basilisk exposes flaws of LW. Whether it does that depends on circumstances; while it's trivially true that any website has a most controversial comment, not every website has a most controversial comment that happened like the basilisk did.

The basilisk seems to pretty much be the first thing outsiders know to associate with LW these days.

Well, for Charlie Stross it's practically professional interest :P

Anyhow: anecdote. Met an engineer on the train the other day. He asked me what I was reading on my computer, I said LW, he said he'd heard some vaguely positive things, I sent him a link to one of Yvain's posts, he liked it.

Eliezer Yudkowsky's reasons for banning Roko's post have always been somewhat vague. But I don't think he did it solely because it could cause some people nightmares.

(1) In one of his original replies to Roko’s post (please read the full comment, it is highly ambiguous) he states his reasons for banning Roko’s post, and for writing his comment (emphasis mine):

I’m banning this post so that it doesn’t (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I’m not sure I know the sufficient detail.)

…and further…

For those who have no idea why I’m using capital letters for something that just sounds like a random crazy idea, and worry that it means I’m as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us. It is the sort of thing you want to be EXTREMELY CONSERVATIVE about NOT DOING.

His comment indicates that he doesn’t believe that this... (read more)

[-][anonymous]10y170

"Doesn't work against a perfectly rational, informed agent" does not preclude "works quite well against naïve, stupid newbie LW'ers that haven't properly digested the sequences."

Memetic hazard is not a fancy word for cover-up. It means that the average person accessing the information is likely to reach dangerous conclusions. That says more about the average of humanity than about the information itself.

Good point. To build on that here's something I thought of when trying (but most likely not succeeding) to model/steelman Eliezer's thoughts at the time of his decision:

This basilisk is clearly bullshit, but there's a small (and maybe not vanishingly small) chance that with enough discussion people can come up with a sequence of "improved" basilisks that suffer from less and less obvious flaws until we end up with one worth taking seriously. It's probably better to just nip this one in the bud. Also, creating and debunking all these basilisks would be a huge waste of time.

At least Eliezer's move has focused all attention on the current (and easily debunked) basilisk, and it has made it sufficiently low-status to try and think of a better one. So in this sense it could even be called a success.

I would not call it a success. Sufficiently small silver linings are not worth focusing on with large-enough clouds.

6Emile10y
There were several possible fairly-good reasons for deleting that post, and also fairly good reasons for giving Eliezer some discretion as to what kind of stuff he can ban. Going over those reasons (again) is probably a waste of everybody's time. Who cares about whether a decision taken years ago was sensible, or slightly-wrong-but-within-reason, or wrong-but-only-in-hindsight, etc. ?

Who cares about whether a decision taken years ago was sensible, or slightly-wrong-but-within-reason, or wrong-but-only-in-hindsight, etc. ?

XiXiDu cares about every Eliezer potential-mistake.

5Jiro10y
We're discussing an article that judges LW for believing in the basilisk. Whether the founder believes in the basilisk is a lot more pertinent to judging LW than whether some randomly chosen person on LW believes in it, so there's a good reason to discuss Eliezer's belief specifically.

Both the article and the comments give me hope. My impression is that they treat these things more seriously than we would have seen in, e.g., 2006. This is despite the author's declared snarkiness.

Less Wrong is getting mainstream!

This really doesn't deserve all that much attention (it's blatant fear-mongering; if you're going to write about the Basilisk, you should also explain Pascal's mugging as a basic courtesy), but there's one thing that this article makes me wonder:

I occasionally see people saying that working on Friendly AI is a waste of time. Yet at the same time it seems very hard to ignore the importance of existential risk prevention. I haven't seen a lot of good arguments for why an AGI wouldn't be potentially dangerous. So why wouldn't we want some people working on F... (read more)

6Viliam_Bur10y
Also, in Newcomb's problem, the goal is to walk away with as much money as possible. So it's obvious what to optimize for. What exactly is the goal with the Basilisk? To give as much money as possible, just to build an evil machine which would torture you unless you gave it as much money as possible, but luckily you did, so you kinda... "win"? You and your five friends are the selected ones who will get the enjoyment of watching the rest of humanity tortured forever? (Sounds like how some early Christians imagined Heaven. Only the few most virtuous ones will get saved, and watching the suffering of the damned in Hell will increase their joy in their own salvation.)

Completely ignoring the problem that just throwing a lot of money around doesn't solve the problem of creating a safe recursively self-improving superhuman AI. (Quoting the Sequences: "There's a fellow currently on the AI list who goes around saying that AI will cost a quadrillion dollars—we can't get AI without spending a quadrillion dollars, but we could get AI at any time by spending a quadrillion dollars.") So these guys working on this evil machine... hungry, living in horrible conditions, never having a vacation or going on a date, never seeing a doctor, probably having mental breakdowns all the time, because they are writing the code that would torture them if they did any of that... is this the team we could trust to make sane and good decisions, and to get all the math right? If not, then we are pretty much fucked regardless of whether we donated to the Basilisk or not, because soon we are all getting transformed into paperclips anyway; the only difference is that 99.9999999% of us will get tortured before that.

How about, you know, just not building the whole monster in the first place? Uhm... could the solution to this horrible problem really be so easy?
9wedrifid10y
Yes.
5ChristianKl10y
No. All people who never heard of the Basilisk argument would also live in heaven. Even all people who heard of it in a way where it was clear that they wouldn't take it seriously would live in heaven.
3wedrifid10y
That isn't necessarily true. The kind of reasoning assumed in the Basilisk uFAI would also use the 'innocents' as hostages if it would help to extort compliance from the believers. It depends entirely on the (economic power weighted aggregate) insanity of the 'suckers' the uFAI is exploiting.
1ChristianKl10y
The basilisk gets more compliance from the believers when it puts the innocents into heaven than when it puts them into hell. Also, the debate is not about a uFAI but about an FAI that optimizes the utility function of general welfare with TDT. This is also the point where you might think about how Eliezer's censorship had an effect. His censoring led you and Viliam_Bur to have an understanding of the issue where you think it's about a uFAI.
8wedrifid10y
This is at best not clear. It depends on the specific nature of the insanity in the compliant. Note that brutally disincentivizing evangelism has... instrumental downsides.

Don't be misled by the loose relationship with Pascal's Wager. This isn't about belief, it is about decisions (and counterfactual decisions). The use of the term uFAI is deliberate, and correct. We don't need to define a torture-terrorist as Friendly just because of some sloppy utilitarian reasoning. Moreover, any actual risk from the scenario comes from AGI creators (or influencers) that make this assumption. That's the only thing that can cause the torture to happen.

You are overconfident in your mind reading skills. I was one of the few people who were familiar enough with the subject matter at the time when Roko was writing his (typically fascinating) posts that I categorised the agent as a plausible not-friendly AGI immediately, the scenario as an interesting twist on acausal extortion, then went straight to thinking about the actual content of the post, which was about a new means of cooperation.
5XiXiDu10y
Roko's post explicitly mentioned trading with unfriendly AIs.
1drethelin10y
yeah, the horror lies in the idea that it might be morally CORRECT for an FAI to engage in eternal torture of some people.
7Viliam_Bur10y
There is this problem with human psychology that threatening someone with torture doesn't contribute to their better judgement. If threatening someone with eternal torture would magically raise their intelligence over 9000, give them the ability to develop a correct theory of Friendliness, and reliably make them build a Friendly AI in five years... then yes, under these assumptions, threatening people with eternal torture could be the morally correct thing to do.

But human psychology doesn't work this way. If you start threatening people with torture, they are more likely to make mistakes in their reasoning. See: motivated reasoning, "ugh" fields, etc. Therefore, the hypothetical AI threatening people with torture for... well, pretty much for not being perfectly epistemically and instrumentally rational... would decrease the probability of Friendly AI being built correctly. Therefore, I don't consider this hypothetical AI to be Friendly.
2[anonymous]10y
[removed]
1Richard_Kennaway10y
This question is equivalent to: "How about, you know, just building a Friendly AI? Uhm... could the solution to the safe AI problem really be so easy?"
-3roystgnr10y
These questions are equivalent in the same sense as "how about just not setting X equal to pi" and "how about just setting X equal to e" are equivalent. Assuming you can do the latter is a prediction; assuming you can do the former is an antiprediction. To the contrary, "just building the [very specific sort of] whole monster" is what's more equivalent to "just building a [very specific definition of] Friendly AI", an a priori improbable task.

Worse for the basilisk: at least in the case of Friendly AI you might end up stuck with nothing better to do but throw a dart and hope for a bulls-eye. But in the case of the basilisk, the acausal trade is only rational if you expect a high likelihood of the trade being carried out. And if that likelihood is low then you're just being nutty, which means it's unlikely for the other side of the trade to be upheld in any case (acausally trying to influence Omega's prediction of you may work if Omega is omniscient, but not so well if Omega is irrational). This lowers the likelihood still further... until the only remaining question is simply "what's the fixed point of x_{n+1} = x_n/2?"
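Worked out explicitly (a sketch, not part of the comment), the recursion it ends on has a unique fixed point at zero, and iterating it from any starting likelihood converges there:

    \[ x^{*} = \frac{x^{*}}{2} \implies x^{*} = 0, \qquad x_{n} = \frac{x_{0}}{2^{n}} \to 0 \text{ as } n \to \infty. \]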
-1Richard_Kennaway10y
Consider my parallel changed to "How about, you know, just not building an Unfriendly AI? Uhm... could the solution to the safe AI problem really be so easy?"
7Viliam_Bur10y
There are many possible Unfriendly AIs, and most of them don't base their decision of torturing you on whether you gave them all your money. Therefore, you can use your reason to try building a Friendly AI... and either succeed or fail, depending on the complexity of the problem and your ability to solve it. But not depending on a blackmail.

This is the difference between "you should be very careful to avoid building any Unfriendly AI, which may be a task beyond your skills", and "you should build this specific Unfriendly AI, because if you don't, but someone else does, then it will torture you for an eternity". In the former case, your intelligence is used to generate a good outcome, and yes, you may fail. In the latter case, your intelligence is used to fight against itself; you are forcing yourself to work towards an outcome that you actually don't want. That's not the same thing.

Building a Friendly AI is insanely difficult. Building a Torture AI is insane and difficult.
4Luke_A_Somers10y
Yes, the Basilisk does address how one should act in real life. It says: 'Don't build a basilisk, dummy!'. Problem solved.
-5[anonymous]10y

Dat irony tho.

Is there anything we should do?

Stylistic complaint: "we"? I don't think me reading your post means you and I are a "we". This is a public-facing website; your audience isn't your club.

As to the actual question, CellBioGuy's answer is spot-on.
