WARNING: Memetic hazard.
http://www.slate.com/articles/technology/bitwise/2014/07/roko_s_basilisk_the_most_terrifying_thought_experiment_of_all_time.html?wpisrc=obnetwork
Is there anything we should do?
Is there anything we should do?
I.e., no. Nothing about this article comes remotely close to changing the highest-expected-value actions for the majority of the class 'we'. If it happens that there is a person in that class for whom this opens an opportunity to create (more expected) value, then it is comparatively unlikely that that person is the kind who would benefit from "we shoulding" exhortations.
I want this list posted in response to every "is there anything we should do" ever. Just all over the internet. I would give you more than one upvote just for that list if I could.
Is there anything we should do?
Laugh, as the entire concept (and especially the entire reaction to it by Eliezer and people who take the 'memetic hazard' thing seriously) is and always has been laughable. It's certainly given my ab muscles a workout every now and then over the last three years... maybe with more people getting to see it and getting that exercise it'll be a net good! God, the effort I had to go through to dig through comment threads and find that google cache...
This is also such a delicious example of the Streisand effect...
I think this is also a delicious example of how easy it is to troll LessWrong readers. Do you want to have an LW article and a debate about you? Post an article about how LW is a cult or about Roko's basilisk. Success 100% guaranteed.
Think about the incentives this gives to people who make their money by displaying ads on their websites. The only way we could motivate them more would be to pay them directly for posting that shit.
This isn't a particularly noteworthy attribute of Less Wrong discussion; most groups below a certain size will find it interesting when a major media outlet talks about them. I'm sure that the excellent people over at http://www.clown-forum.com/ would be just as chatty if they got an article.
I suppose you could say that it gives journalists an incentive to write about the groups below that population threshold that are likely to generate general interest among the larger set of readers. But that's just the trivial result in which we have invented the human interest story.
On the other hand, does refraining from banning these debates lead to fewer of them? It doesn't seem so. We already had a dozen of them, and we are going to have more, and more, and more...
I can't know what happened in the parallel Everett branch where Eliezer didn't delete that comment... but I wouldn't be too surprised if the exposure of the basilisk was pretty much the same -- without the complaining about censorship, but perhaps with more of "this is what people on LW actually believe, here is the link to prove it".
I think this topic is debated mostly because it's the clever contrarian thing to do. You have a website dedicated to rationality and artificial intelligence where people claim to care about humanity? Then you get contrarian points for inventing clever scenarios of how using rationality will actually make things go horribly wrong. It's too much fun to resist. (Please note that this motivated inventing of clever horror scenarios is different from predicting actual risks. Finding actual risks is a useful thing to do. Inventing dangers that exist only because you invented and popularized them, not very useful.)
Since LW is going to get a lot of visitors someone should put an old post that would make an excellent first impression in a prominent position. I nominate How to Be Happy.
Or perhaps a publicity boost would be better utilized by directing traffic to effective altruism information, e.g. at GiveWell or 80,000 Hours.
Re-reading that post, I came upon this entry, which seems particularly relevant:
We're all living in a figment of Eliezer Yudkowsky's imagination, which came into existence as he started contemplating the potential consequences of deleting a certain Less Wrong post.
Assuming we can trust the veracity of this "fact", I think we have to begin to doubt Eliezer's rationality. I mean, sure, the Streisand effect is a real thing, but causing Roko's obscure thought experiment to become the subject of the #1 recently most read article on Slate, just by censoring it? Is that really realistic?
...
Seriously, did anyone actually see something like this coming?
LessWrong would have to somehow distance itself from MIRI and Eliezer Yudkowsky.
And become just another procrastination website.
Okay, there is still CFAR here. Oh wait, they also have Eliezer on the team! And they believe they can teach the rest of the world to become more rational. How profoundly un-humble or, may I say, cultish? Scratch CFAR, too.
While we are at it, let's remove the articles "Tsuyoku Naritai!", "Tsuyoku vs. the Egalitarian Instinct" and "A Sense That More Is Possible". They contain the same arrogant ideas, and encourage the readers to think likewise. We don't want more people trying to become awesome, or even worse, succeeding at that.
Actually, we should remove the whole Sequences. I mean, how could we credibly distance ourselves from Eliezer, if we leave hundreds of his articles as the core of this website? No one reads the Sequences anyway. Hell, these days no one even dares to say "Read the Sequences" anymore. Which is good, because telling people to read the Sequences has been criticized as cultish.
There are also some poisonous memes that make people think badly of us, so we should remove them from the website. Spe...
What I meant by distancing LessWrong from Eliezer Yudkowsky is to become more focused on actually getting things done rather than rehashing Yudkowsky's cached thoughts.
LessWrong should finally start focusing on trying to solve concrete and specific technical problems collaboratively. Not unlike what the Polymath Project is doing.
To do so, LessWrong has to squelch all the noise by no longer caring about getting more members and by strongly moderating non-technical, off-topic posts.
I am not talking about censorship here. I am talking about something unproblematic. Once the aim of LessWrong is clear, namely to tackle technical problems, moderation becomes an understandable necessity. And I'd be surprised if any moderation were necessary once only highly technical problems are discussed.
Doing this will make people hold LessWrong in high esteem, because nothing is as effective at proving that you are smart and rational as getting things done.
ETA: How about trying to solve the Pascal's mugging problem? It's highly specific, technical, and does pertain to rationality.
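For readers who haven't met it, here is a toy sketch of the structure of the problem (illustrative numbers only, not anyone's endorsed decision theory): if the mugger's claimed payoff can grow faster than your credence in the claim shrinks, a naive expected-value maximizer always pays.

```python
# Toy illustration of Pascal's mugging with made-up numbers: the mugger's claimed
# payoff grows faster than this agent's credence in the claim shrinks, so the
# naive expected value of paying keeps increasing.

def naive_expected_value(credence: float, claimed_lives: float, cost: float) -> float:
    """Expected lives saved by paying the mugger, net of the (tiny) cost of paying."""
    return credence * claimed_lives - cost

for n in (10, 40, 80):
    claimed = 10.0 ** n          # mugger claims to save 10^n lives
    credence = 10.0 ** -(n / 2)  # credence falls with the size of the claim, but too slowly
    ev = naive_expected_value(credence, claimed, cost=5.0)
    print(f"claim 10^{n} lives, credence {credence:.0e}, naive EV of paying = {ev:.3g}")
```

The technical question is exactly how to make the credence (or the utility) fall at least as fast as the claim grows, without breaking ordinary expected-value reasoning in mundane cases.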
According to the Slate article,
Yudkowsky and Peter Thiel have enthused about cryonics, the perennial favorite of rich dudes who want to live forever.
Uh, no. Surprisingly few "rich dudes" have shown an interest in cryonics. I know quite a few cryonicists and I have helped to organize cryonics-themed conferences, and to the best of my knowledge no one on the Forbes 400 list has signed up.
Moreover, ordinary people can afford cryonics arrangements by using life insurance as the funding mechanism.
We can see that rich people have avoided cryonics from the fact that the things rich people really care about tend to become status signals and attract adventuresses in search of rich husbands. Cryonics lacks this status; it acts like "female Kryptonite." Just google the phrase "hostile wife phenomenon" to see what I mean. In other words, I tell straight men not to sign up for cryonics expecting it to improve their dating prospects.
I thought the article was quite good.
Yes, it pokes fun at LessWrong. That's to be expected. But it's well written and clearly conveys all the concepts in an easy-to-understand manner. The author understands LessWrong and our goals and ideas on a technical level, even if he doesn't agree with them. I was particularly impressed by how the author explained why TDT solves Newcomb's problem. I could give that explanation to my grandma and she'd understand it.
I don't generally believe that "any publicity is good publicity." However, this publicity is good publicity. Most people who read the article will forget it and only remember LessWrong as that kinda weird place that's really technical about decision stuff (which is frankly accurate). Those people who do want to learn more are exactly the people LessWrong wants to attract.
I'm not sure what people's expectations are for free publicity but this is, IMO, best case scenario.
From a technical standpoint, this bit:
Even if the alien jeers at you, saying, “The computer said you’d take both boxes, so I left Box B empty! Nyah nyah!” and then opens Box B and shows you that it’s empty, you should still only take Box B and get bupkis. ... The rationale for this eludes easy summary, but the simplest argument is that you might be in the computer’s simulation. In order to make its prediction, the computer would have to simulate the universe itself.
Seems wrong. Omega wouldn't necessarily have to simulate the universe, although that's one option. If it did simulate the universe, showing sim-you an empty box B doesn't tell it much about whether real-you will take box B when you haven't seen that it's empty.
(Not an expert, and I haven't read Good and Real which this is supposedly from, but I do expect to understand this better than a Slate columnist.)
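For context, the arithmetic that makes the problem interesting at all is simple. Here's a minimal sketch using the conventional payoffs ($1,000 in the transparent box, $1,000,000 in box B iff one-boxing was predicted); the accuracy figures are illustrative, not taken from the article or from Good and Real.

```python
# Standard expected-value comparison for Newcomb's problem with the conventional
# payoffs. Predictor accuracies are illustrative only.

def ev_one_box(accuracy: float) -> float:
    # You get $1,000,000 only when the predictor correctly foresaw one-boxing.
    return accuracy * 1_000_000

def ev_two_box(accuracy: float) -> float:
    # You always keep the $1,000, plus $1,000,000 when the predictor wrongly expected one-boxing.
    return 1_000 + (1 - accuracy) * 1_000_000

for accuracy in (0.5, 0.9, 0.99):
    print(f"accuracy {accuracy:.2f}: one-box EV ${ev_one_box(accuracy):,.0f}, "
          f"two-box EV ${ev_two_box(accuracy):,.0f}")
```

Once the predictor is right more than about 50.05% of the time, the naive expected value favors one-boxing; the disagreement between decision theories is over whether that is the right calculation to run, not over the arithmetic.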
And I think the final two paragraphs go beyond "pokes fun at lesswrong".
Of course, mentioning the articles on ethical injunctions would be too boring.
Here comes the Straw Vulcan's younger brother, the Straw LessWrongian. (Brought to you by RationalWiki.)
Of course, mentioning the articles on ethical injunctions would be too boring.
It's troublesome how ambiguous the signals are that LessWrong is sending on some issues.
On the one hand LessWrong says that you should "shut up and multiply, to trust the math even when it feels wrong". On the other hand Yudkowsky writes that he would sooner question his grasp of "rationality" than give five dollars to a Pascal's Mugger because he thought it was "rational".
On the one hand LessWrong says that whoever knowingly chooses to save one life, when they could have saved two - to say nothing of a thousand lives, or a world - they have damned themselves as thoroughly as any murderer. On the other hand Yudkowsky writes that ends don't justify the means for humans.
On the one hand LessWrong stresses the importance of acknowledging a fundamental problem and saying "Oops". On the other hand Yudkowsky tries to patch a framework that is obviously broken.
Anyway, I worry that the overall message LessWrong sends is that of naive consequentialism based on back-of-the-envelope calculations, rather than the meta-level consequentialism that contains itself when faced wit...
Looks like a fairly standard parable about how we should laugh at academic theorists and eggheads because of all those wacky things they think. If only Less Wrong members had the common sense of the average Slate reader, then they would instantly see through such silly dilemmas.
Giving people the chance to show up and explain that this community is Obviously Wrong And Here's Why is a pretty good way to start conversations, human nature being what it is. An opportunity to have some interesting dialogues about the broader corpus.
That said, I am in the camp that finds the referenced 'memetic hazard' to be silly. If you are the sort of person who takes it seriously, this precise form of publicity might be more troubling for the obvious 'hazard' reasons. Out of curiosity, what is the fraction of LW posters that believes this is a genuine risk?
Out of curiosity, what is the fraction of LW posters that believes this is a genuine risk?
Vanishingly small - the post was deleted by Eliezer (was that what, a year ago? two?) because it gave some people he knew nightmares, but I don't remember anybody actually complaining about it. Most of the ensuing drama was about whether Eliezer was right in deleting it. The whole thing has been a waste of everybody's time and attention (as community drama over moderation almost always is).
Most of the ensuing drama was about whether Eliezer was right in deleting it. The whole thing has been a waste of everybody's time and attention (as community drama over moderation almost always is).
'Moderation' was precisely the opposite of the response that occurred. Hysterical verbal abuse is not the same thing as deleting a post, and mere censorship would not have created such a lasting negative impact. While 'moderator censorship' was technically involved, the incident is a decidedly non-central member of that class.
the post was deleted by Eliezer (was that what, a year ago? two?)
Nearly four years ago to the day, going by RationalWiki's chronology.
Talking about it presumably makes it feel like a newer, fresher issue than it is.
If only Less Wrong members had the common sense of the average Slate reader, then they would instantly see through such silly dilemmas.
Not sure I agree with your point. There's a standard LW idea that smart people can believe in crazy things due to their environment. For "environment" you can substitute "non-LW" or "LW" as you wish.
It sounds like the actual unusual paradigm on LW is not so much "worried about the basilisk" as it is "unusually accommodating of people who worry about the basilisk".
obsessing and worried by the basilisk, even though they knew intellectually it was a silly idea
I had a similar but much lesser reaction (mildly disquieting) to the portrait of Hell given in A Portrait of the Artist as a Young Man. I found the portrait had strong immediate emotional impact. Good writing.
More strangely, even though I always had considered the probability that Hell exists as ludicrously tiny, it felt like that probability increased from the "evidence" of a fictional story.
Likely all sorts of biases involved, but is there one for strong emotions increasing assigned probability?
I found Iain M. Banks' Surface Detail to be fairly disturbing (and I'm in the Roko's-basilisk-is-ridiculous camp); even though the simulated-hell technology doesn't currently exist (AFAWK), having the salience of the possibility raised is unpleasant.
Is there anything we should do?
When reporters interviewed me about Bitcoin, I tried to point to LW as a potential source of stories and described it in a positive way. Several of them showed interest, but no stories came out. I wonder why it's so hard to get positive coverage for LW and so easy to get negative coverage, when in contrast Wired magazine gave Cypherpunks a highly positive cover story in 1993, when Cypherpunks just got started and hadn't done much yet except publish a few manifestos.
I wonder why it's so hard to get positive coverage for LW and so easy to get negative coverage, when in contrast Wired magazine gave Cypherpunks a highly positive cover story in 1993, when Cypherpunks just got started and hadn't done much yet except publish a few manifestos.
There's an easy answer and a hard answer.
The easy answer is that, for whatever reason, the media today is far more likely to run a negative story about the tech industry or associated demographics than to run a positive story about it. LW is close enough to the tech industry, and its assumed/stereotyped demographic pattern is close enough to that of the tech industry, that attacking it is a way to attack the tech industry.
Observe:
"highly analytical sorts interested in optimizing their thinking, their lives, and the world through mathematics and rationality ... techno-futurism ... high-profile techies like Peter Thiel ... some very influential and wealthy scientists and techies believe it ... computing power ... computers ... computer ... mathematical geniuses Stanislaw Ulam and John von Neumann ... The ever accelerating progress of technology ... Futurists like science-fiction writer Vernor Vinge and eng...
Well, for Charlie Stross it's practically professional interest :P
Anyhow: anecdote. Met an engineer on the train the other day. He asked me what I was reading on my computer, I said LW, he said he'd heard some vaguely positive things, I sent him a link to one of Yvain's posts, he liked it.
Eliezer Yudkowsky's reasons for banning Roko's post have always been somewhat vague. But I don't think he did it solely because it could cause some people nightmares.
(1) In one of his original replies to Roko’s post (please read the full comment, it is highly ambiguous) he states his reasons for banning Roko’s post, and for writing his comment (emphasis mine):
I’m banning this post so that it doesn’t (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about them in sufficient detail, though, thankfully, I doubt anyone dumb enough to do this knows the sufficient detail. (I’m not sure I know the sufficient detail.)
…and further…
For those who have no idea why I’m using capital letters for something that just sounds like a random crazy idea, and worry that it means I’m as crazy as Roko, the gist of it was that he just did something that potentially gives superintelligences an increased motive to do extremely evil things in an attempt to blackmail us. It is the sort of thing you want to be EXTREMELY CONSERVATIVE about NOT DOING.
His comment indicates that he doesn't believe that this could currently work. Yet he also does not seem to dismiss some current and future danger. Why didn't he clearly state that there is nothing to worry about?
(2) The following comment by Mitchell Porter, to which Yudkowsky replies "This part is all correct AFAICT.":
If Yudkowsky really thought it was irrational to worry about any part of it, why didn't he allow people to discuss it on LessWrong, where he and others could debunk it?
"Doesn't work against a perfectly rational, informed agent" does not preclude "works quite well against naïve, stupid newbie LW'ers that haven't properly digested the sequences."
Memetic hazard is not a fancy word for coverup. It means that the average person accessing the information is likely to reach dangerous conclusions. That says more about the average of humanity than the information itself.
Good point. To build on that, here's something I thought of when trying (but most likely not succeeding) to model/steelman Eliezer's thoughts at the time of his decision:
This basilisk is clearly bullshit, but there's a small (and maybe not vanishingly small) chance that with enough discussion people can come up with a sequence of "improved" basilisks that suffer from less and less obvious flaws until we end up with one worth taking seriously. It's probably better to just nip this one in the bud. Also, creating and debunking all these basilisks would be a huge waste of time.
At least Eliezer's move has focused all attention on the current (and easily debunked) basilisk, and it has made it sufficiently low-status to try and think of a better one. So in this sense it could even be called a success.
I would not call it a success. Sufficiently small silver linings are not worth focusing on with large-enough clouds.
Who cares about whether a decision taken years ago was sensible, or slightly-wrong-but-within-reason, or wrong-but-only-in-hindsight, etc.?
XiXiDu cares about every Eliezer potential-mistake.
Both the article and the comments give me hope. My impression is that they treat these things more seriously than we would have seen in, e.g., 2006. This is despite the author's declared snarkiness.
This really doesn't deserve all that much attention (It's blatant fear-mongering. If you're going to write about the Basilisk, you should also explain Pascal's mugging as a basic courtesy.), but there's one thing that this article makes me wonder:
I occasionally see people saying that working on Friendly AI is a waste of time. Yet at the same time it seems very hard to ignore the importance of existential risk prevention. I haven't seen a lot of good arguments for why an AGI wouldn't be potentially dangerous. So why wouldn't we want some people working on F...
Dat irony tho.
Is there anything we should do?
Stylistic complaint: "we"? I don't think me reading your post means you and I are a "we". This is a public-facing website, your audience isn't your club.
As to the actual question, CellBioGuy's answer is spot-on.