Comment author: Eliezer_Yudkowsky 24 November 2014 09:28:40PM 21 points

I respect both updates and hostile ceasefires.

  • You can update by posting a header to all of your blog posts saying, "I wrote this blog during a dark period of my life. I now realize that Eliezer Yudkowsky is a decent and honest person with no ill intent, and that anybody can be made to look terrible by selectively collecting all of his quotes one-sidedly as I did. I regret this page, and leave it here as an archive to that regret." If that is how you feel and that is what you do, I will treat with you starting from scratch in any future endeavors. I've been stupid too, in my life. (If you then revert to pattern, you do not get a second second chance.)

  • I have not found it important to say very much at all about you so far, unless you show up to a thread in which I am participating. If carrying on your one-sided vendetta is affecting your health and you want to declare a one-sided ceasefire for instrumental reasons, and you feel afraid that your brain will helplessly drag you back in if anyone mentions your name, then I state that: if you delete your site, withdraw entirely from all related online discussions, and do not say anything about MIRI or Eliezer Yudkowsky in the future, I will not say anything about Xixidu or Alexander Kruel in the future. I will urge others to do the same. I do not control anyone except myself. I remark that you cannot possibly expect anything except hostility given your past conduct and that feeding your past addiction by posting one little comment anywhere, only to react with shock as people don't give you the respect to which you consider yourself entitled, is likely to drag you back in and destroy your health again.

Failing either of these actions:

I am probably going to put up a page about Roko's Basilisk soon. I am not about to mention you just to make your health problems worse, nor avoid mentioning you if I find that a net positive while I happen to be writing; your conduct has placed you outside of my circle of concern. If the name Alexander Kruel happens to arise in some other online discussion or someone links to your site, I will explain that you have been carrying on a one-sided vendetta against MIRI for unknown psychological reasons. If for some reason I am talking about the hazards of my existence, I might bring up the name of Alexander Kruel as that guy who follows me around the 'Net looking for sentences that can be taken out of context to add to his hateblog, and mention with some bemusement that you didn't stop even after you posted that all the one-sided hate was causing you health problems. Either a ceasefire or an update will prevent me from saying any such thing.

I urge you to see a competent cognitive-behavioral therapist and talk to them about the reason why your brain is making you do this even as it destroys your health.

I have written this note according to the principles of Tell Culture to describe my own future actions conditional on yours. Reacting to it in a way I deem inappropriate, such as taking a sentence out of context and putting it on your hateblog, will result in no future such communications with you.

Comment author: Mader_Levap 25 July 2016 08:15:50PM -3 points

"You can update by posting a header to all of your blog posts saying, "I wrote this blog during a dark period of my life. I now realize that Eliezer Yudkowsky is a decent and honest person with no ill intent, and that anybody can be made to look terrible by selectively collecting all of his quotes one-sidedly as I did. I regret this page, and leave it here as an archive to that regret.""

Wow, just wow. A cult leader demands a Stalin-style self-criticism on every page (no sane person would consider that reasonable) and the censoring of all posts related to Less Wrong, after a campaign of harassment.

In response to That Alien Message
Comment author: Mader_Levap 25 July 2016 07:32:57PM * -1 points

"I don't trust my ability to set limits on the abilities of Bayesian superintelligences."

Limits? I can think up a few on the spot.

Environment: CPU power, RAM capacity, etc. I don't think even you guys claim something as blatant as "an AI can break the laws of physics when convenient".

Feats:

  • Win this kind of situation in chess. Sure, an AI would not allow that situation to occur in the first place during a game, but that's not my point.

  • Make a human understand the AI. Note: uplifting does not count, since the human then ceases to be human. For practice, try teaching your cat Kant's philosophy.

  • Make an AI understand itself fully and correctly. This one actually works on all levels. Can YOU understand yourself? Are you even theoretically capable of that? Hint: no.

  • Related: survive actual self-modification, especially without any external help. Transhumanist fantasy says AIs will do it all the time. The reality is that any self-preserving AI will be as eager to perform self-modification as you would be to get a randomized, extreme form of lobotomy (a transhumanist version of Russian roulette, except with bullets in every chamber of every gun except one in a gazillion).

I guess some people are so used to thinking about AIs as magic, omnipotent technogods that they don't even notice it. Sad.

Comment author: Mader_Levap 25 July 2016 06:16:55PM -2 points

I have never seen anyone brag about defeating strawmen so much. Hell, in one place he explicitly mentions a "Soul Swap World" that he made up on the spot just to happily destroy.

And I still do not know what I am supposed to think about personal identity. I happen to think that the ME is generated by the brain, a brain that works so well it can generate a mind despite all of those changes in atoms meticulously described by Yudkowsky.

In response to comment by dlrlw on Hard Takeoff
Comment author: Lumifer 17 March 2015 03:20:08PM 1 point

FOOM, in the context of LW, is extremely rapid take-off of artificial intelligence.

If an AI can improve itself, and the rate at which it improves is itself a function of how good (= smart) it is, the growth of its capabilities will resemble an exponential and at some point will rapidly escalate into the superhuman realm. That is FOOM.
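That feedback loop can be sketched as a toy recurrence (the linear-feedback assumption and all constants here are made up purely for illustration; this is not a model of any real system):

```python
# Toy model of recursive self-improvement: at each step the AI's rate of
# improvement is proportional to its current capability, so capability
# compounds into exponential-looking growth. Illustrative only.
def capability_after(steps, c0=1.0, feedback=0.5):
    """Return the capability trajectory over `steps` improvement cycles."""
    c = c0
    history = [c]
    for _ in range(steps):
        c += feedback * c  # smarter system -> faster self-improvement
        history.append(c)
    return history

print(capability_after(10)[-1])  # compound growth: 1.5**10 ~ 57.7
```

Whether real systems would exhibit anything like a constant positive feedback term is, of course, exactly what the FOOM debate is about.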

In response to comment by Lumifer on Hard Takeoff
Comment author: Mader_Levap 11 January 2016 04:13:34PM -3 points

Except it is not possible, so Yudkowsky's entire house of cards falls apart.

Comment author: AlexM 16 July 2011 08:12:20PM * 15 points

"Why can't modern Nazis disavow ancient Nazi practice in favor of some true essence that makes sense in modern terms?"

One can argue that Holocaust denial is an attempt to bring Nazism closer to modern ethical values. Real, authentic Nazis were proud of their achievements and would be outraged by the thought that their successors would call them a lie.

"Why not start your search for the true essence in Lord of the Rings"

Some people do :-P

Comment author: Mader_Levap 18 September 2015 11:37:47AM -1 points

"Real, authentic Nazis were proud of their achievement"

Not publicly. Holocaust denial exists because it (the mass murder of certain groups of humans) makes them look bad. Of course, this is Insane Troll Logic, but I do not think anyone expects sane logic from Nazis.

Comment author: capybaralet 26 January 2015 06:02:25AM 4 points

Which transhumanist ideas are "not even wrong"?

And do you mean simply 'not well specified enough'? Or more like 'unfalsifiable'?

You also seem to be implying that scientists cannot discuss topics outside of their field, or even outside its current reach.

My philosophy on language is that people can generally discuss anything. For any words that we have heard (and indeed, many we haven't), we have some clues as to their meaning, e.g. based on the context in which they've been used and similarity to other words.

Also, would you consider being cautious an inherently good thing?

Finally, from my experience as a Masters student in AI, many people are happy to give opinions on transhumanism, it's just that many of those opinions are negative.

Comment author: Mader_Levap 18 September 2015 10:22:53AM 0 points

"Which transhumanist ideas are "not even wrong"?"

The Technological Singularity, for example (as defined on Wikipedia). In my view, it is just an atheistic version of the Rapture or The End Of The World As We Know It endemic in various cults, and equally likely.

The reason is that recursive self-improvement is not possible, since it requires perfect self-knowledge and self-understanding. In reality, an AI will be a black box to itself, just as our brains are black boxes to ourselves.

More precisely, my claim is that any mind, at any level of complexity, is insufficient to understand itself. It is possible for a more advanced mind to understand a simpler mind, but that obviously does not help very much in the context of direct self-improvement.

An AI with any self-preservation instinct would be as likely to willingly perform direct modification of its own mind as you would be to get stabbed with an icepick through the eye socket.

So any AI improvement would have to be done the old way. The slow way. No fast takeoff. No intelligence explosion. No Singularity.

Comment author: Jiro 10 May 2013 08:27:04PM 3 points

When you say that player 2 "is obviously not going to pay out", that's an approximation. You don't know that he's not going to pay out. You know that he's very, very, very unlikely to pay out. (For instance, there's a very slim chance that he subscribes to a kind of honesty which leads him to do the things he says he'll do, and therefore doesn't follow minimax.) But in Pascal's Mugging, "very, very, very unlikely" works differently from "no chance at all".

Comment author: Mader_Levap 21 August 2015 07:28:58PM -3 points

That does not matter. If you think it is a scam, then the size of the promised reward does not matter. 100? A googol? A googolplex? 3^^^3? Infinity? It just does not enter the calculation in the first place, since it is made up anyway.

Determining "is this a scam?" would probably have to rely on things other than the size of the reward. That avoids the whole "but but there is no 1-in-3^^^3 probability because I say so" BS.
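One hedged way to picture the disagreement (my own framing, not a canonical treatment of Pascal's Mugging): if your "this is a scam" credence is fixed regardless of the promised reward, the expected value grows with the claim, so a big enough number dominates; if your credence shrinks at least as fast as the claim grows, the reward size never dominates at all. All numbers below are made up:

```python
# Illustrative sketch of how the claimed reward interacts with the
# probability you assign to actually being paid out.
def expected_value(reward, p_payout):
    return reward * p_payout

scam_prior = 1e-9                 # fixed, reward-independent credence
rewards = [100, 10**20, 10**50]   # ever more extravagant claims

# Fixed prior: expected value grows with the claimed reward, so a
# sufficiently absurd claim dominates the decision (the mugging).
fixed = [expected_value(r, scam_prior) for r in rewards]

# Reward-penalized prior: credence shrinks in proportion to the
# claim, so the expected value stays bounded no matter the number.
penalized = [expected_value(r, scam_prior / r) for r in rewards]

print(fixed)      # grows without bound as claims grow
print(penalized)  # stays near scam_prior for every claim
```

Which of these two priors is the "right" one is precisely what the thread above is arguing about.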