Comment author: Anatoly_Vorobey 24 November 2014 07:04:43AM * 9 points

There's no "fight". You've been a very aggressive and mean-spirited critic of LW/MIRI/EY for a few years. Doesn't mean that there's a fight. Doesn't mean anyone "wins" if, say, you shut up and go away.

Your suggestion is not constructive, because coming up with retorts to mean-spirited past posts and endorsing them would be a poor use of MIRI's time, and would only add to drama rather than reduce it. Here's what you should do instead:

First, consider just going away. It may be best for your physical and mental health to stay away from LW and LW-related topics. Delete your old posts, forget you ever cared about this stuff, take up some other hobbies, etc. If you feel you can't, presumably because you think these issues are really important, read on.

  1. Come up with a generously-sized kindly-worded update that negates the meanness and stick it on top of your relevant past posts. E.g. if I were in your position I would write something like "I wrote the post below during years in which, I now recognize, I was locked in a venom-filled flamewar against a community which I actually like and appreciate, despite what I perceive as its faults. I do not automatically repudiate my arguments and factual points, but if you read the below, please note that I regret the venom and the personal attacks and that I may well have quote-mined and misrepresented persons and communities. I now wish I wrote it all in a kinder spirit".

  2. Continue participating on LW as you desire, trying your best to be kind and not get into drama. Plenty of people manage to be skeptical of MIRI/EY and criticize them here without being you. (If you're not sure you can do this well, ask some regular(s) to help you out. Precommit that if people you asked PM you about your future LW comment or blog post saying you're being an asshole, you'll believe them and mend it.)

  3. Accept that some people will continue to hate/dislike/hold a grudge against you. Issue private apologies to them if you feel you should, but don't do it publicly (because drama). If that doesn't help, accept and move on.

Comment author: XiXiDu 24 November 2014 01:32:33PM * 17 points

I wrote the post below during years in which, I now recognize, I was locked in a venom-filled flamewar against a community which I actually like and appreciate, despite what I perceive as its faults. I do not automatically repudiate my arguments and factual points, but if you read the below, please note that I regret the venom and the personal attacks and that I may well have quote-mined and misrepresented persons and communities. I now wish I wrote it all in a kinder spirit.

Sounds good. Thanks.

Plenty of people manage to be skeptical of MIRI/EY and criticize them here without being you.

Hmm... the state of criticism here currently leaves a lot to be desired. The amount of criticism does not reflect the number of extraordinary statements being made here. I think there are a lot of people who shy away from open criticism.

Some stuff that is claimed here is just very very (self-censored).

I have recently attempted to voice a minimum of criticism when I said something along the lines of "an intelligence explosion happening within minutes is an extraordinary claim". I actually believe that the whole concept is...I can't say this in a way that doesn't offend people here. It's hard to recount what happened then, but my perception was that even that was already too much criticism. In a diverse community with healthy disagreement I would expect a different reaction.

Please take the above as my general perception and not a precise account of the situation.

Comment author: Halfwitz 23 November 2014 10:24:07PM * 31 points

To be honest, I had you pegged as being stuck in a partisan spiral. The fact that you are willing to do this is pretty cool. Have some utils on the house. I don’t know if officially responding to your blog is worth MIRI’s time; it would imply some sort of status equivalence.

Also, you published some very embarrassing quotes from Yudkowsky. I’m guessing you caused him quite a bit of distress, so he’s probably not inclined to do you any favors. Mining someone’s juvenilia for outrageous statements is not productive – I mean he was 16 when he wrote some of the stuff you quote. I would remove those pages. Same with the usenet stuff – I know it was posted publicly but it feels like furtively-recorded conversations to me all these years later. Stick to arguments against positions MIRI and Yudkowsky currently hold. Personally I’ve moved from highly-skeptical of MIRI to moderately approving. I made this comment a year ago:

The fact that MIRI is finally publishing technical research has impressed me. A year ago it seemed, to put it bluntly, that your organization was stalling, spending its funds on the full-time development of Harry Potter fanfiction and popular science books. Perhaps my intuition there was uncharitable, perhaps not. I don't know how much of your lead researcher's time was spent on said publications, but it certainly seemed, from the outside, that it was the majority. Regardless, I'm very glad MIRI is focusing on technical research. I don't know how much farther you have to walk, but it's clear you're headed in the right direction.

And MIRI has stayed on course and is becoming a productive think tank with three full-time researchers and, it seems to me, a highly competent CEO. It is a very different organization now than the one you started out criticizing.

Comment author: XiXiDu 24 November 2014 12:17:06PM 1 point

Also, you published some very embarrassing quotes from Yudkowsky. I’m guessing you caused him quite a bit of distress, so he’s probably not inclined to do you any favors.

If I post an embarrassing quote by Sarah Palin, then I am not some kind of school bully who likes causing people distress. Instead I highlight an important shortcoming of an influential person. I have posted quotes of various people other than Yudkowsky. I admire all of them for their achievements and wish them all the best. But as influential people they have to expect that someone might highlight something they said. This is not a smear campaign.

Comment author: ArisKatsaris 24 November 2014 01:37:32AM * 16 points

I'm not MIRI affiliated, but as a member of the LessWrong forum, and talking for myself alone, I'll just repeat what I've said before: There's only so many times someone can call me a brainwashed cultist, before I stop forgiving them.

You've spent the past few years insulting and mocking people for having different opinions than you. That's it. That's the entirety of LessWrong/MIRI's "crime": you've not produced a hint of unethical or dishonest behavior in any of MIRI's or LessWrong's doings, but you bash them viciously for having different opinions.

LessWrongers have always treated you (and RationalWiki too), and still treat you and your differing opinions, much more civilly than you (or RationalWiki) ever treated us and ours. So you getting health-related issues as a result of the viciousness you perpetrate -- okay, that's like repeatedly punching someone and then complaining that your fist has started to hurt.

We don't have, nor ever had, a "Why Alexander Kruel/Xixidu sucks" page that we can take down. You are the one with the bazillion "Why LessWrong/MIRI sucks" pages. Unlike you have done with EY, I haven't even screenshotted the comments by you that you've later chosen to take down because you found them embarrassing to yourself. Gee, it must be nice NOT having someone devoted to mocking you.

I wish you good health, as a general moral principle of my humanism. But I also care about the problems you caused on the targets of your viciousness.

Comment author: XiXiDu 24 November 2014 12:02:03PM * 7 points

We don't have, nor ever had, a "Why Alexander Kruel/Xixidu sucks" page that we can take down.

That's implying a false equivalence. If I make a quotes page of a public person, a person with far-reaching goals, in order to highlight problematic beliefs this person holds, beliefs that would otherwise be lost in a vast amount of other statements, then this is not the same as making a "random stranger X sucks" page.

So you getting health-related issues as a result of the viciousness you perpetrate...

Stressful fights adversely affect an existing condition.

Unlike you have done with EY, I haven't even screenshotted the comments by you that you've later chosen to take down because you found them embarrassing to yourself.

I have maybe deleted 5 comments and edited another 5. If I detect other mistakes I will fix them. You make it sound like doing so is somehow bad.

LessWrongers have always treated you (and RationalWiki too), and still treat you and your differing opinions, much more civilly than you (or RationalWiki) ever treated us and ours.

You are one of the people who have been spouting comments such as this one for a long time. I reckon you might not see that such comments are a cause of what I wrote in the past.

Comment author: Dias 24 November 2014 01:25:41AM 7 points

I don't think MIRI has any reason to take you up on this offer, as responding in this way would elevate the status of your writings. High-status entities do not need to respond specifically to low-status entities, and when they do, it will be obliquely and non-specifically addressed to the broader class which contains the specific low-status entity. Additionally, it would look mean-spirited to 'kick someone while they're down', especially as this post in some ways resembles a call for a truce. As such, it would be a mistake for MIRI to accept your offer, even before taking into account the resources that would be required. If I were MIRI, I would totally ignore this.

Given this, either you have failed to understand what apologizing actually consists in, or are still (perhaps subconsciously) trying to undermine MIRI. At the moment all you offer is the implication that you would continue your disruption were it not for the toll it has taken on your health. Contrition would demand at least a genuine apology - something like "I am sorry for acting badly" - if not actively working to undo the harm you have done.

Fortunately, I think you overestimate the impact you had. Probably your biggest effect was wasting everyone's time.

Comment author: XiXiDu 24 November 2014 11:46:29AM 9 points

I don't think MIRI has any reason to take you up on this offer, as responding in this way would elevate the status of your writings.

Yudkowsky has recently found it necessary, a number of times, to openly attack RationalWiki rather than ignoring it and politely clarifying the problem on LessWrong or his own website. He has also voiced his displeasure over the increasingly contrarian attitude on LessWrong. This made me think that there is a small chance they might desire to mitigate one of only a handful of sources who perceive MIRI to be important enough to criticize.

Given this, either you have failed to understand what apologizing actually consists in, or are still (perhaps subconsciously) trying to undermine MIRI.

I will apologize for mistakes I make and try to fix them. The above post was the confession that there very well could be mistakes, and a clarification that the reasons are not malicious.

Comment author: Dallas 23 November 2014 08:25:31PM 7 points

I've had to deal with the stress you are contributing to the broader perception of transhumanism all weekend, and that is on top of preexisting mental problems. (Whether MIRI/LW is actually representative of this is entirely orthogonal to the point; public perception has been, and is, shifting towards viewing the broader context of futurism as run by neoreactionaries and beige-os with parareligious delusions.)

Of course, that's no reason to stop anything. People are going to be stressed by things independent of their content.

But you are expecting an entity you have devoted most of your blog to criticizing to care enough about your psychological state to take time out to write header statements for each of your posts?

If you want to stop accusations of lying and bad faith, stop spreading the "LW believes in Roko's Basilisk" meme, and do something less directly reputation-warfare escalatory, and more productive-- like hunting down Nazis and creating alternatives to the current decision-theoretic paradigm. (I don't think anybody's going to get that upset over abstract discussions of Newcomb's Problem. At least, I hope.)

Comment author: XiXiDu 24 November 2014 11:13:49AM 4 points

If you want to stop accusations of lying and bad faith, stop spreading the "LW believes in Roko's Basilisk" meme...

How often and for how long did I spread this, and what do you mean by "spread"?

Imagine yourself in my situation back in 2010: After the leader of a community completely freaked out over a crazy post (calling the author an idiot in all bold and caps, etc.), he went on to massively nuke any thread mentioning the topic. In addition, there were mentions of people having horrible nightmares over it, while others actively tried to dissuade you from mentioning a thought experiment they believed to be dangerous, in private messages and emails, by referring to the leader's superior insight.

This made a lot of alarm bells ring for me.

But you are expecting an entity you have devoted most of your blog to criticizing to care enough about your psychological state to take time out to write header statements for each of your posts?

No. I made a unilateral offer.

Comment author: XiXiDu 24 November 2014 10:13:51AM * 5 points

If you believe that I am, or was, a troll then check out this screenshot from 2009 (this was a year before my first criticism). And also check out this capture of my homepage from 2005, on which I link to MIRI's and Bostrom's homepage (I have been a fan).

If you believe that I am now doing this because of my health, then check out this screenshot of a very similar offer I made in 2011.

In summary: (a) none of my criticisms were ever made with the intent of giving MIRI or LW a bad name; they were instead meant to highlight or clarify problematic issues. (b) I believe that my health issues allow me to quit caring about the problems I see, but they are not the crucial reason for wanting to quit. The main reason is that I hate fights and want people to be happy rather than constantly engaged in emotional battles.

That said, many of the replies to this post perfectly illustrate the reason why I kept going for so long: lots of misunderstandings combined with smug personal attacks against me. Anyway, I made the above offer expecting that this would continue, so it still stands. And if this isn't worthwhile for MIRI, fine. But because of people like ArisKatsaris, paper-machine, wedrifid and others with a history of vicious personal attacks against me, I am unable to just delete everything, because that would only leave their misrepresentations of my motives and actions behind. Yes, you understand that correctly. I believe myself to be the one who has been constantly mistreated and forced to strike back (if you constantly call someone a troll and a liar, you shouldn't be surprised if they call you brainwashed). And yet I offer you the chance to leave this battle as the winner by posting counterstatements to my blog.

Breaking the vicious cycle

43 XiXiDu 23 November 2014 06:25PM

You may know me as the guy who posts a lot of controversial stuff about LW and MIRI. I don't enjoy doing this and do not want to continue with it. One reason is that the debate is turning into a flame war. Another is that I have noticed it affects my health negatively (e.g. my high blood pressure; I actually suffered single-sided hearing loss over this xkcd comic on Friday).

This all started in 2010 when I encountered something I perceived to be wrong. But the specifics are irrelevant for this post. The problem is that ever since that time there have been various reasons that made me feel forced to continue the controversy. Sometimes it was the urge to clarify what I wrote, other times I thought it was necessary to respond to a reply I got. What matters is that I couldn't stop. But I believe that this is now possible, given my health concerns.

One problem is that I don't want to leave possible misrepresentations behind. And there very likely exist misrepresentations. There are many reasons for this, but I can assure you that I never deliberately lied and never deliberately tried to misrepresent anyone. The main reason might be that I feel very easily overwhelmed and have never been able to force myself to invest the time necessary to do something correctly if I don't really enjoy doing it (for the same reason I probably failed at school). Which means that most comments and posts were written in a tearing hurry, akin to a reflexive retraction from a painful stimulus.

<tldr>

I hate this fight and want to end it once and for all. I don't expect you to take my word for it. So instead, here is an offer:

I am willing to post counterstatements, endorsed by MIRI, of any length and content[1] at the top of any of my blog posts. You can either post them in the comments below or send me an email (da [at] kruel.co).

</tldr>

I have no idea whether MIRI believes this to be worthwhile, but I couldn't think of a better way to resolve this dilemma in a way that everyone can live with. I am open to suggestions that don't stress me too much (including suggestions about how to prove that I am trying to be honest).

You obviously don't need to read all my posts. It can also be a general statement.

I am also aware that LW and MIRI are bothered by RationalWiki. As you can easily check from the fossil record, I have at points tried to correct specific problems. But, for the reasons given above, I have trouble investing the time to go through every sentence to find possible errors and correct them in such a way that the edits are not reverted and that people who feel offended are satisfied.

[1] There are obviously some caveats regarding the content, such as no nude photos of Yudkowsky ;-)

Comment author: XiXiDu 21 November 2014 12:05:59PM * 6 points

Regarding Yudkowsky's accusations against RationalWiki. Yudkowsky writes:

First false statement that seems either malicious or willfully ignorant:

In LessWrong's Timeless Decision Theory (TDT),[3] punishment of a copy or simulation of oneself is taken to be punishment of your own actual self

TDT is a decision theory and is completely agnostic about anthropics, simulation arguments, pattern identity of consciousness, or utility.

Calling this malicious is a huge exaggeration. Here is a quote from the LessWrong Wiki entry on Timeless Decision Theory:

When Omega predicts your behavior, it carries out the same abstract computation as you do when you decide whether to one-box or two-box. To make this point clear, we can imagine that Omega makes this prediction by creating a simulation of you and observing its behavior in Newcomb's problem. [...] TDT says to act as if deciding the output of this computation...

RationalWiki explains this as saying that you should act as if you are the one being simulated and possibly facing punishment. This is very close to what the LessWrong Wiki says, phrased in a language that people at a larger inferential distance can understand.
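The simulation framing in the quoted wiki passage can be sketched as a toy model. This is an illustration only, with invented dollar payoffs and a perfectly accurate Omega; it is not MIRI's or the wiki's actual formalism:

```python
# Toy Newcomb's problem: Omega predicts your choice by running the same
# decision procedure you will run, as in the simulation framing quoted above.

def payoff(decision_procedure) -> int:
    """Omega fills the opaque box based on a simulation of the agent,
    then the agent makes its actual choice."""
    prediction = decision_procedure()   # Omega's simulation of you
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    choice = decision_procedure()       # your actual decision (same computation)
    if choice == "one-box":
        return opaque_box
    return opaque_box + 1_000           # two-boxing adds the visible $1000

print(payoff(lambda: "one-box"))   # 1000000
print(payoff(lambda: "two-box"))   # 1000
```

Because Omega's prediction and your choice are the same computation, "deciding the output of this computation" to one-box yields the larger payoff, which is the point the wiki entry is making.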

Yudkowsky further writes:

The first malicious lie is here:

an argument used to try and suggest people should subscribe to particular singularitarian ideas, or even donate money to them, by weighing up the prospect of punishment versus reward

Neither Roko, nor anyone else I know about, ever tried to use this as an argument to persuade anyone that they should donate money.

This is not a malicious lie. Here is a quote from Roko's original post (emphasis mine):

...there is the ominous possibility that if a positive singularity does occur, the resultant singleton may have precommitted to punish all potential donors who knew about existential risks but who didn't give 100% of their disposable incomes to x-risk motivation. This would act as an incentive to get people to donate more to reducing existential risk, and thereby increase the chances of a positive singularity. This seems to be what CEV (coherent extrapolated volition of humanity) might do if it were an acausal decision-maker.1 So a post-singularity world may be a world of fun and plenty for the people who are currently ignoring the problem, whilst being a living hell for a significant fraction of current existential risk reducers (say, the least generous half). You could take this possibility into account and give even more to x-risk in an effort to avoid being punished.

This is like a robber walking up to you and explaining that you could take into account that he could shoot you if you don't give him your money.

Also notice that Roko talks about trading with uFAIs as well.

Comment author: Sysice 21 November 2014 08:36:40AM * 19 points

It might be useful to feature a page containing what we, you know, actually think about the basilisk idea. Although the rationalwiki page seems to be pretty solidly on top of google search, we might catch a couple people looking for the source.

If any XKCD readers are here: Welcome! I assume you've already googled what "Roko's Basilisk" is. For a better idea of what's going on with this idea, see Eliezer's comment on the xkcd thread (linked in Emile's comment), or his earlier response here.

Comment author: XiXiDu 21 November 2014 11:08:07AM 6 points

For a better idea of what's going on with this idea, see Eliezer's comment on the xkcd thread (linked in Emile's comment), or his earlier response here.

For a better idea of what's going on you should read all of his comments on the topic in chronological order.

What do you mean by Pascal's mugging?

4 XiXiDu 20 November 2014 04:38PM

Some people[1] are now using the term Pascal's mugging as a label for any scenario with a large associated payoff and a small or unstable probability estimate, a combination that can trigger the absurdity heuristic.
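That combination -- a huge payoff multiplied by a tiny probability -- can be made concrete with a toy expected-value calculation. The probabilities and payoffs below are invented purely for illustration:

```python
# Toy illustration of the Pascal's mugging structure: a payoff large
# enough that even a vanishingly small probability yields an expected
# value that swamps a mundane alternative.

def expected_value(probability: float, payoff: float) -> float:
    """Naive expected utility: probability times payoff."""
    return probability * payoff

# Mundane bet: 50% chance of gaining 100 utils.
mundane = expected_value(0.5, 100)

# Mugging-style bet: one-in-a-trillion chance of an astronomical payoff.
mugging = expected_value(1e-12, 1e30)

print(mundane)   # 50.0
print(mugging)   # 1e+18 -- the tiny probability is swamped by the payoff
```

A naive expected-value maximizer would take the mugging-style bet, which is why unstable or made-up probability estimates attached to astronomical payoffs trigger the absurdity heuristic.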

Consider the scenarios listed below: (a) Do these scenarios have something in common? (b) Are any of these scenarios cases of Pascal's mugging?

(1) Fundamental physical operations -- atomic movements, electron orbits, photon collisions, etc. -- could collectively deserve significant moral weight. The total number of atoms or particles is huge: even assigning a tiny fraction of human moral consideration to them or a tiny probability of them mattering morally will create a large expected moral value. [Source]

(2) Cooling something to a temperature close to absolute zero might be an existential risk. Given our ignorance we cannot rationally give zero probability to this possibility, and probably not even give it less than 1% (since that is about the natural lowest error rate of humans on anything). Anybody saying it is less likely than one in a million is likely very overconfident. [Source]

(3) GMOs might introduce “systemic risk” to the environment. The chance of ecocide, or the destruction of the environment and potentially humans, increases incrementally with each additional transgenic trait introduced into the environment. The downside risks are so hard to predict -- and so potentially bad -- that it is better to be safe than sorry. The benefits, no matter how great, do not merit even a tiny chance of an irreversible, catastrophic outcome. [Source]

(4) Each time you say abracadabra, 3^^^^3 simulations of humanity experience a positive singularity.

If you read up on any of the first three scenarios, by clicking on the provided links, you will notice that there are a bunch of arguments in support of these conjectures. And yet I feel that all three have something important in common with scenario four, which I would call a clear case of Pascal's mugging.

I offer three possibilities of what these and similar scenarios have in common:

  • Probability estimates of the scenario are highly unstable and highly divergent between informed people who spent a similar amount of resources researching it.
  • The scenario demands that skeptics either falsify it or accept its decision-relevant consequences. The scenario is, however, either unfalsifiable by definition, too vague, or almost impossibly difficult to falsify.
  • There is no or very little direct empirical evidence in support of the scenario.[2]

In any case, I admit that it is possible that I just wanted to bring the first three scenarios to your attention. I stumbled upon each very recently and found them to be highly..."amusing".

 

[1] I am also guilty of doing this. But what exactly is wrong with using the term in that way? What's the highest probability for which the term is still applicable? Can you offer a better term?

[2] One would have to define what exactly counts as "direct empirical evidence". But I think that it is pretty intuitive that there exists a meaningful difference between the risk of an asteroid that has been spotted with telescopes and a risk that is solely supported by a priori arguments.
