Since you have not yet replied to my other comment, here is what I have done so far:
(1) I removed many more posts and edited others in such a way that no mention of you, MIRI, or LW can be found anymore (except an occasional link to a LW post).[1]
(2) I slightly changed your given disclaimer and added it to my about page:
...Note that I wrote some posts, posts that could previously be found on this blog, during a dark period of my life. Eliezer Yudkowsky is a decent and honest person with no ill intent, and anybody can be made to look terrible by selectively c
I don't have time to evaluate what you did, so I'll take this as a possible earnest of a good-faith attempt at something, and not speak ill of you until I get some other piece of positive evidence that something has gone wrong. A header statement only on relevant posts seems fine by me, if you have the time to add it to items individually.
I very strongly advise you, on a personal level, not to talk about these things online at all. No, not even posting links without discussion, especially if your old audience is commenting on them. The probability I estimate of your brain helplessly dragging you back in is very high.
I apologize for any possible misunderstanding in this comment. My reading comprehension is often bad.
I know that in the original post I offered to add a statement of your choice to any of my posts. I stand by this, although I would have phrased this differently now. I would like to ask you to consider that there are also personal posts which are completely unrelated to you, MIRI, or LW. Such as photography posts and math posts. It would be really weird and confusing to readers to add your suggested header to those posts. If that is what you want, I will do...
I already deleted the 'mockery index' (which had for some months included a disclaimer stating that I distance myself from those outsourced posts). I also deleted the second post you mentioned.
I changed the brainwash post to 'The Singularity Institute: How They Convince You' and added the following disclaimer suggested by user Anatoly Vorobey:
...I wrote the post below during years in which, I now recognize, I was locked in a venom-filled flamewar against a community which I actually like and appreciate, despite what I perceive as its faults. I do not autom
Thank you. I'll likewise keep my promise.
Yes, it was a huge overreaction on my side and I shouldn't have written such a comment in the first place. It was meant as an explanation of how that post came about, not as an excuse. It was still wrong. The point I want to communicate is that I didn't do it out of some general desire to cause MIRI distress.
I apologize for offending people and for overreacting to something I perceived in the way I described, but which was, as you wrote, not that way. I already deleted that post yesterday.
OK!
To take the first step and show that this is not some kind of evil ploy, I have now deleted (1) the Yudkowsky quotes page and (2) the post on his personality (the explanation of how that post came about).
I realize that they were unnecessarily offensive and apologize for that. If I could turn back the clock I would do a lot differently and probably stay completely silent about MIRI and LW.
Also, the page where you try to diagnose him with narcissism just seems mean.
I can clarify this. I never intended to write that post but was forced to do so out of self-defense.
I replied to this comment, whose author was wondering why Yudkowsky is using Facebook more than LessWrong these days, with an on-topic speculation based on evidence.
Then people started viciously attacking me, to which I had to respond. In one of those replies I unfortunately used the term "narcissistic tendencies". I was then again attacked for using t...
So let me get this straight - you did a psychiatric diagnosis over the internet, and instead of saying, 'obviously I'm using the term colloquially' you provided evidence.
...
and then you are surprised when you get attacked, and even now characterize these attacks as coming from a mindless horde...
when the horde was actually 4 people, only one post was against you personally as opposed to being against that one thing you said, and there were roughly 2 others on your side. And your comments there are upvoted.
I think it is more like you went through all the copies of Palin's school newspaper, and picked up some notes she passed around in class, and then published the most outrageous things she said in such a way that you implied they were written recently.
This is exactly the kind of misrepresentation that makes me avoid deleting my posts. Most of the most outrageous things he said were written within the past ten years.
I suppose you are partly referring to the quotes page? Please take a look, there are only two quotes that are older than 2004, for one of whi...
Those two quotes that are dated before 2004 are the least outrageous.
This is the most outrageous one to me:
I must warn my reader that my first allegiance is to the Singularity, not humanity. I don’t know what the Singularity will do with us. I don’t know whether Singularities upgrade mortal races, or disassemble us for spare atoms. While possible, I will balance the interests of mortality and Singularity. But if it comes down to Us or Them, I’m with Them. You have been warned.
And it's clearly the exact opposite of what present Eliezer believes.
The stuff that bothers me are Usenet and mailing list quotes (they are equivalent to passing notes and should be considered off the record) and anything written when he was a teenager. The rest, I suppose, should at least be labeled with the date they were written. And if he has explicitly disclaimed the statement, perhaps that should be mentioned, too.
Young Eliezer was a little crankish and has pretty much grown out of it. I feel like you're criticising someone who no longer exists.
Also, the page where you try to diagnose him with narcissism just seems mean.
If you feel there was something wrong about your articles, why can't you write it there, using your own words?
I have had bad experiences with admitting something like that. I once wrote on Facebook that I am not a high-IQ individual and got responses suggesting that now everyone can completely ignore me and that everything I say is garbage. If I look at the comments to this post, my perception is that many people understood it as some kind of confession that everything I ever wrote is just wrong and that they can subsequently ignore everything else I might ever write.
If it helps, I believe your criticism is a mix of good and bad parts, but the bad parts make it really difficult for the reader to focus on the good parts, so in the end even the good parts are kind of wasted. It would be better if you could separate them, but the problem is probably what you describe as being "easily overw...
You don't need to delete any of your posts or comments. What I mainly fear is that if I was to delete posts, without linking to archived versions, then you would forever go around implying that all kinds of horrible things could have been found on those pages, and that me deleting them is evidence of this.
If you promise not to do anything like that, and stop portraying me as somehow being the worst person on Earth, then I'll delete the comments, passages or posts that you deem offending.
But if there is nothing reasonable I could do to ever improve your opi...
I wouldn't want you to delete the interview series anyway. The thing that most offended me was this: the title of "http://kruel.co/2013/01/10/the-singularity-institute-how-they-brainwash-you/" is absurdly offensive and inappropriate if you don't believe in the deliberate ill intent of MIRI. If you don't want to delete the post altogether, at least rename it to "How they convince you". When you use 'brainwash' or 'trick' or 'con', you're accusing them of being criminals. Only say such words if you really believe it.
I'd also like the del...
I wrote the post below during years in which, I now recognize, I was locked in a venom-filled flamewar against a community which I actually like and appreciate, despite what I perceive as its faults. I do not automatically repudiate my arguments and factual points, but if you read the below, please note that I regret the venom and the personal attacks and that I may well have quote-mined and misrepresented persons and communities. I now wish I had written it all in a kinder spirit.
Sounds good. Thanks.
...Plenty of people manage to be skeptical of MIRI/EY and c
Also, you published some very embarrassing quotes from Yudkowsky. I’m guessing you caused him quite a bit of distress, so he’s probably not inclined to do you any favors.
If I post an embarrassing quote by Sarah Palin, then I am not some kind of school bully who likes causing people distress. Instead I am highlighting an important shortcoming of an influential person. I have posted quotes of various people other than Yudkowsky. I admire all of them for their achievements and wish them all the best. But as influential people they have to expect that someone mig...
As far as I can tell, Yudkowsky basically grew up on the internet. I think it is more like you went through all the copies of Palin's school newspaper, and picked up some notes she passed around in class, and then published the most outrageous things she said in such a way that you implied they were written recently. I think this goes against some notion of journalistic tact.
We don't have, nor ever had, a "Why Alexander Kruel/Xixidu sucks" page that we can take down.
That's implying a false equivalence. If I make a quotes page of a public person, a person with far-reaching goals, in order to highlight problematic beliefs this person holds, beliefs that would otherwise be lost in a vast amount of other statements, then this is not the same as making a "random stranger X sucks" page.
So you getting health related issues as a result of the viciousness you perpetrate...
Stressful fights adversely affect an ...
That's implying a false equivalence. If I make a quotes page of a public person, a person with far-reaching goals, in order to highlight problematic beliefs this person holds, beliefs that would otherwise be lost in a vast amount of other statements, then this is not the same as making a "random stranger X sucks" page.
Then again, LW does not have a "Why Anything Sucks" page as far as I'm aware. There are plenty of people/organizations out there with whom LW/MIRI disagree, and who are more visible than you, but I don't think LW has ev...
You are one of the people who have been spouting comments such as this one for a long time.
Yes, my first encounter with you was when I bashed you for your unfair criticism of RationalWiki and your unfair support of Eliezer Yudkowsky, yet somehow you failed to call me a brainwashed cultist of RationalWiki, and you failed to launch a website devoted to how much your bashing of RationalWiki is justified because they're horrible cultist people out to brainwash you.
I reckon you might not see that such comments are a cause of what I wrote in the past.
Oh, I've actually w...
I don't think MIRI has any reason to take you up on this offer, as responding in this way would elevate the status of your writings.
Yudkowsky has a number of times recently found it necessary to openly attack RationalWiki, rather than ignoring it and clarifying the problem on LessWrong or his website in a polite manner. He also voiced his displeasure over the increasing contrarian attitude on LessWrong. This made me think that there is a small chance that they might desire to mitigate one of only a handful of sources who perceive MIRI to be important enoug...
If you want to stop accusations of lying and bad faith, stop spreading the "LW believes in Roko's Basilisk" meme...
How often and for how long did I spread this, and what do you mean by "spread"?
Imagine yourself in my situation back in 2010: After the leader of a community completely freaked out over a crazy post (calling the author an idiot in all bold and caps etc.) he went on to massively nuke any thread mentioning the topic. In addition there are mentions of people having horrible nightmares over it while others are actively tryin...
If you believe that I am, or was, a troll then check out this screenshot from 2009 (this was a year before my first criticism). And also check out this capture of my homepage from 2005, on which I link to MIRI's and Bostrom's homepage (I have been a fan).
If you believe that I am now doing this because of my health, then check out this screenshot of a very similar offer I made in 2011.
In summary: (a) None of my criticisms were ever made with the intent of giving MIRI or LW a bad name, but were instead meant to highlight or clarify problematic issues (b) I b...
This comment ruined my (initially very high) impression of your article. I appreciate that you are trying, and I believe in your good intentions; it's just... you are doing it somewhat wrong. Not sure if I can explain it or provide better advice.
Probably the essence is that you were strongly emotionally driven in your critique, but you seem to be also strongly emotionally driven in negotiating peace, and your offers are not well calibrated. You want to stop an unproductive debate, but your offer to MIRI to publish something on your blog seems like anot...
Note XiXiDu preserves every potential negative aspect of the MIRI and LW community and is a biased source lacking context and positive examples.
I have been a member for more than 5 years now, so I am probably as much a part of LW as most people. I have repeatedly said that LessWrong is the most intelligent and rational community I know of.
To quote one of my posts:
I estimate that the vast majority of all statements that can be found in the sequences are true, or definitively less wrong. Which generally makes them worth reading.
I even defended LessWrong again...
Note XiXiDu preserves every potential negative aspect of the MIRI and LW community and is a biased source lacking context and positive examples.
I have been a member for more than 5 years now, so I am probably as much a part of LW as most people. I have repeatedly said that LessWrong is the most intelligent and rational community I know of.
To quote one of my posts:
I estimate that the vast majority of all statements that can be found in the sequences are true, or definitively less wrong. Which generally makes them worth reading.
I even defended LessWrong again...
Seriously, you bring up a post titled "The Singularity Institute: How They Brainwash You" as supposed evidence of you supporting LessWrong, MIRI, whatever?
Yes, when you talk to LessWrongers you occasionally mention the old line about how you consider it the "most intelligent and rational community I know of". But that evaluation isn't what you constantly repeat to people outside LessWrong. If you ask people "What does Alexander Kruel think of LessWrong?", nobody will say "He endorses it as the most intelligent and...
Regarding Yudkowsky's accusations against RationalWiki. Yudkowsky writes:
First false statement that seems either malicious or willfully ignorant:
In LessWrong's Timeless Decision Theory (TDT),[3] punishment of a copy or simulation of oneself is taken to be punishment of your own actual self
TDT is a decision theory and is completely agnostic about anthropics, simulation arguments, pattern identity of consciousness, or utility.
Calling this malicious is a huge exaggeration. Here is a quote from the LessWrong Wiki entry on Timeless Decision Theory:
...Whe
For a better idea of what's going on with this idea, see Eliezer's comment on the xkcd thread (linked in Emile's comment), or his earlier response here.
For a better idea of what's going on, you should read all of his comments on the topic in chronological order.
So what exactly is this 'witch hunt' composed of? What evil thing has Musk done other than disagree with you on how dangerous AI is?
What I meant is that he and others will cause the general public to adopt a perception of the field of AI that is comparable to the public perception of GMOs, vaccination, nuclear power etc.: a non-evidence-backed fear of something that is generally benign and positive.
He could have used his influence and reputation to directly contact AI researchers or e.g. hold a quarterly conference about risks from AI. He could have talke...
The mainstream press has now picked up on Musk's recent statement. See e.g. this Daily Mail article: 'Elon Musk claims robots could kill us all in FIVE YEARS in his latest internet post…'
Is this a case of multiple discovery?[1] And might something similar happen with AGI? Here are four projects that have concurrently developed very similar-looking models:
(1) University of Toronto: Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models
(2) Baidu/UCLA: Explain Images with Multimodal Recurrent Neural Networks
(3) Google: A Neural Image Caption Generator
(4) Stanford: Deep Visual-Semantic Alignments for Generating Image Descriptions
[1] The concept of multiple discovery is the hypothesis that most scientific discoveries and inventi...
How meaningful is the "independent" criterion given the heavy overlaps in works cited and what I imagine must be a fairly recent academic MRCA among all the researchers involved?
What are you worried he might do?
Start a witch hunt against the field of AI? Oh wait...he's kind of doing this already.
If he believes what he's said, he should really throw lots of money at FHI and MIRI.
Seriously? How much money do they need to solve "friendly AI" within 5-10 years? Or else, what are their plans? If what MIRI imagines will happen within at most 10 years, then I strongly doubt that throwing money at MIRI will make a difference. You'll need people like Musk who can directly contact and convince politicians, or summon up the fears of the general public in order to force politicians to notice and take action.
I wonder what would have been Musk's reaction had he witnessed Eurisko winning the United States Traveller TCS national championship in 1981 and 1982. Or if he had witnessed Schmidhuber's universal search algorithm solving Towers of Hanoi on a desktop computer in 2005.
A chiropractor?
Am I delusional, or am I correct in thinking chiropractors are practitioners of something a little above bloodletting and way below actual modern medicine?
...
However, I haven't done any real research on this subject. The idea that chiropractors are practicing sham medicine is just background knowledge, and I'm not really sure where I picked it up.
Same for me. I was a little bit shocked to read that someone on LessWrong goes to a chiropractor. But for me this attitude is also based on something I considered to be common knowledge, suc...
Do "all those who have recently voiced their worries about AI risks" actually believe we live in a simulation in a mathematical universe? ("Or something along these lines..."?)
Although I don't know enough about Stuart Russell to be sure, he seems rather down to earth. Shane Legg also seems reasonable. So does Laurent Orseau. With the caveat that these people also seem much less extreme in their views on AI risks.
I certainly do not want to discourage researchers from being cautious about AI. But what currently happens seems to be the ...
Have you read Basic AI Drives? I remember reading it when it got posted on boingboing.net, way before I had even heard of MIRI. Like Malthus's arguments, it just struck me as starkly true.
I don't know what you are trying to communicate here. Do you think that mere arguments, pertaining to something that not even the relevant experts understand at all, entitle someone to demonize a whole field?
The problem is that armchair theorizing can at best yield very weak decision relevant evidence. You don't just tell the general public that certain vaccines cause ...
Musk's accomplishments don't necessarily make him an expert on the demonology of AI's. But his track record suggests that he has a better informed and organized way of thinking about the potentials of technology than Carrico's.
Would I, epistemically speaking, be better off adopting the beliefs held by all those who have recently voiced their worries about AI risks? If I did that then I would end up believing that I was living in a simulation, in a mathematical universe, and that within my lifetime, thanks to radical life extension, I could hope to rent ...
Could you provide examples of advanced math that you were unable to learn? Why do you think you failed?
I appreciate having Khan Academy for looking up math concepts on which I need a refresher, but I've heard (or maybe just assumed?) that the higher-level teaching was a bit mediocre. You disagree?
Comparing Khan Academy's linear algebra course to the free book that I recommended, I believe that Khan Academy will be more difficult to understand if you don't already have some background knowledge of linear algebra. This is not true for the calculus course though. Comparing both calculus and linear algebra to the books I recommend, I believe that Khan Aca...
I am not sure about the prerequisites you need for "rationality" but take a look at the following courses:
(1) Schaum's Outline of Probability, Random Variables, and Random Processes:
The background required to study the book is one year calculus, elementary differential equations, matrix analysis...
(2) udacity's Intro to Artificial Intelligence:
Some of the topics in Introduction to Artificial Intelligence will build on probability theory and linear algebra.
(3) udacity's Machine Learning: Supervised Learning :
...A strong familiarity with Pr
So, this "Connection Theory" looks like run-of-the-mill crackpottery. Why are people paying attention to it?
From the post:
“I don’t feel confident assigning less than a 1% chance that it’s correct — and if it works, it would be super valuable. Therefore it’s very high EV!”
What I meant by distancing LessWrong from Eliezer Yudkowsky is to become more focused on actually getting things done rather than rehashing Yudkowsky's cached thoughts.
LessWrong should finally start focusing on trying to solve concrete and specific technical problems collaboratively. Not unlike what the Polymath Project is doing.
To do so LessWrong has to squelch all the noise by no longer caring about attracting more members and by strongly moderating non-technical off-topic posts.
I am not talking about censorship here. I am talking about something unpr...
Of course, mentioning the articles on ethical injunctions would be too boring.
It's troublesome how ambiguous the signals are that LessWrong is sending on some issues.
On the one hand LessWrong says that you should "shut up and multiply, to trust the math even when it feels wrong". On the other hand Yudkowsky writes that he would sooner question his grasp of "rationality" than give five dollars to a Pascal's Mugger because he thought it was "rational".
On the one hand LessWrong says that whoever knowingly chooses to save one l...
Since LW is going to get a lot of visitors someone should put an old post that would make an excellent first impression in a prominent position. I nominate How to Be Happy.
The problem isn't that easy to solve. Consider that MIRI, then SIAI, already had a bad name before Roko's post, and before I ever voiced any criticism. Consider this video from an actual AI conference, from March 2010, a few months before Roko's post. Someone in the audience makes the following statement:
...Whenever I hear the Singularity Institute talk I feel like they are a bunch of
LessWrong would have to somehow distance itself from MIRI and Eliezer Yudkowsky.
And become just another procrastination website.
Okay, there is still CFAR. Oh wait, they also have Eliezer on the team! And they believe they can teach the rest of the world to become more rational. How profoundly un-humble or, may I say, cultish? Scratch CFAR, too.
While we are at it, let's remove the articles "Tsuyoku Naritai!", "Tsuyoku vs. the Egalitarian Instinct" and "A Sense That More Is Possible". They contain the same arrogant i...
Also, the debate is not about a UFAI but an FAI that optimizes the utility function of general welfare with TDT.
Roko's post explicitly mentioned trading with unfriendly AIs.
Eliezer Yudkowsky's reasons for banning Roko's post have always been somewhat vague. But I don't think he did it solely because it could cause some people nightmares.
(1) In one of his original replies to Roko’s post (please read the full comment, it is highly ambiguous) he states his reasons for banning Roko’s post, and for writing his comment (emphasis mine):
...I’m banning this post so that it doesn’t (a) give people horrible nightmares and (b) give distant superintelligences a motive to follow through on blackmail against people dumb enough to think about
"Doesn't work against a perfectly rational, informed agent" does not preclude "works quite well against naïve, stupid newbie LW'ers that haven't properly digested the sequences."
Memetic hazard is not a fancy word for coverup. It means that the average person accessing the information is likely to reach dangerous conclusions. That says more about the average of humanity than the information itself.
It seems to me that as long as something is dressed in a sufficiently "sciency" language and endorsed by high status members of the community, a sizable number (though not necessarily a majority) of lesswrongers will buy into it.
I use the term "new rationalism".
...With proper preparation, yes. To reuse my example: it doesn't take long to register an Amazon account, offer a high-paying HIT with a binary download which opens up a port on the computer, and within minutes multiple people across the world will have run your trojan (well-paying HITs go very fast & Turkers are geographically diverse, especially if the requester doesn't set requirements on country*); and then one can begin doing all sorts of other things like fuzzing, SMT solvers to automatically extract vulnerabilities from released patches, building
This is not magic, I am not a layman, and your beliefs about computer security are wildly misinformed. Putting trojans on large fractions of the computers on the internet is currently within the reach of, and is actually done by, petty criminals acting alone.
Within moments? I don't take your word for this, sorry. The only possibility that comes to my mind is somehow hacking the Windows update servers and then forcibly installing new "updates" without user permission.
...While this does involve a fair amount of thinking time, all of t
Right, and I'm saying: the "moments later" part of what Luke said is not something that should be surprising or controversial, given the premises.
The premise was a superhuman intelligence? I don't see how it could create a large enough botnet, or find enough exploits, to be everywhere moments later. Sounds like magic to me (mind you, I am a complete layman).
If I approximate "superintelligence" as NSA, then I don't see how the NSA could have a trojan everywhere moments after the POTUS asked them to take over the Internet. Now I co...
...you could say that experts disagreed about one of the 5 theses (intelligence explosion), as only 10% thought a human level AI could reach a strongly superhuman level within 2 years
Hit the brakes on that line of reasoning! That's not what the question asked. It asked WILL it, not COULD it.
If I have a statement "X will happen", and ask people to assign a probability to it, then if the probability is <=50% I believe it isn't too much of a stretch to paraphrase "X will happen with a probability <=50%" as "It could be tha...
It might be developed in a server cluster somewhere, but as soon as you plug a superhuman machine into the internet it will be everywhere moments later.
Even if you disagree with this line of reasoning, I don't think it's fair to paint it as "*very extreme".
With "very extreme" I was referring to the part where he claims that this will happen "moments later".
The two quotes you gave say two pretty different things. What Yudkowsky said about the time-scale of self-improvement being weeks or hours is controversial.
My problem with Luke's quote was the "moments later" part.
That's not extreme at all, and also not the same as the EY quote. Have you read any computer security papers? You can literally get people to run programs on their computer as root by offering them pennies!
He wrote it will be moments later everywhere. Do you claim that it could take over the Internet within moments?
...to hear that 10% - of fairly general populations which aren't selected for Singulitarian or even transhumanist views - would endorse a takeoff as fast as 'within 2 years' is pretty surprising to me.
In the paper human-level AI was defined as follows:
“Define a ‘high–level machine intelligence’ (HLMI) as one that can carry out most human professions at least as well as a typical human.”
Given that definition it doesn't seem too surprising to me. I guess I have been less skeptical about this than you...
...Fast takeoff / intelligence explosion has alway
The two quotes you gave say two pretty different things. What Yudkowsky said about the time-scale of self-improvement being weeks or hours is controversial. FWIW, I think he's probably right, but I wouldn't be shocked if it turned out otherwise.
What Luke said was about what happens when an already-superhuman AI gets an Internet connection. This should not be controversial at all. This is merely claiming that a "superhuman machine" is capable of doing something that regular humans already do on a fairly routine basis. The opposite claim - that th...
Given that definition it doesn't seem too surprising to me. I guess I have been less skeptical about this than you...
I don't think much of typical humans.
These kind of very extreme views are what I have a real problem with.
I see.
And just to substantiate "extreme views", here is Luke Muehlhauser:
It might be developed in a server cluster somewhere, but as soon as you plug a superhuman machine into the internet it will be everywhere moments later.
That's not extreme at all, and also not the same as the EY quote. Have you read any comput...
I read the 22 pages yesterday and haven't seen anything about specific risks. Here is question 4:
...4 Assume for the purpose of this question that such HLMI will at some point exist. How positive or negative would be the overall impact on humanity, in the long run?
Please indicate a probability for each option. (The sum should be equal to 100%.)”
Respondents had to select a probability for each option (in 1% increments). The sum of the selections was displayed: in green if the sum was 100%, otherwise in red.
The five options were: “Extremely good – On b
This will be my last comment and I am going to log out after it. If you or MIRI change your mind, or discover any evidence "that something has gone wrong", please let me know by email or via a private message on e.g. Facebook or some other social network that's available at that point in time.