"I don't trust my ability to set limits on the abilities of Bayesian superintelligences."
Limits? I can think of a few on the spot already.
Environment: CPU power, RAM capacity, etc. I don't think even you guys claim something as blatant as "AI can break the laws of physics when convenient".
Feats:
Win this kind of situation in chess. Sure, an AI would not allow that situation to arise in the first place during a game, but that's not my point.
Make a human understand an AI. Note: uplifting does not count, since the human then ceases to be human. For practice, try teaching your cat Kant's philosophy.
Make an AI understand itself fully and correctly. This one actually works on all levels. Can YOU understand yourself? Are you even theoretically capable of that? Hint: no.
Related: survive actual self-modification, especially without any external help. Transhumanist fantasy says AIs will do it all the time. The reality is that any self-preserving AI will be about as eager to perform self-modification as you would be to get a randomized, extreme form of lobotomy (the transhumanist version of Russian roulette, except with bullets in every chamber of every gun but one in a gazillion).
I guess some people are so used to thinking about AIs as magic omnipotent technogods that they don't even notice it. Sad.
I respect both updates and hostile ceasefires.
You can update by posting a header to all of your blog posts saying, "I wrote this blog during a dark period of my life. I now realize that Eliezer Yudkowsky is a decent and honest person with no ill intent, and that anybody can be made to look terrible by selectively collecting all of his quotes one-sidedly as I did. I regret this page, and leave it here as an archive to that regret." If that is how you feel and that is what you do, I will treat with you starting from scratch in any future endeavors. I've been stupid too, in my life. (If you then revert to pattern, you do not get a second second chance.)
I have not found it important to say very much at all about you so far, unless you show up to a thread in which I am participating. If carrying on your one-sided vendetta is affecting your health and you want to declare a one-sided ceasefire for instrumental reasons, and you feel afraid that your brain will helplessly drag you back in if anyone mentions your name, then I state that: if you delete your site, withdraw entirely from all related online discussions, and do not say anything about MIRI or Eliezer Yudkowsky in the future, I will not say anything about Xixidu or Alexander Kruel in the future. I will urge others to do the same. I do not control anyone except myself. I remark that you cannot possibly expect anything except hostility given your past conduct and that feeding your past addiction by posting one little comment anywhere, only to react with shock as people don't give you the respect to which you consider yourself entitled, is likely to drag you back in and destroy your health again.
Failing either of these actions:
I am probably going to put up a page about Roko's Basilisk soon. I am not about to mention you just to make your health problems worse, nor avoid mentioning you if I find that a net positive while I happen to be writing; your conduct has placed you outside of my circle of concern. If the name Alexander Kruel happens to arise in some other online discussion or someone links to your site, I will explain that you have been carrying on a one-sided vendetta against MIRI for unknown psychological reasons. If for some reason I am talking about the hazards of my existence, I might bring up the name of Alexander Kruel as that guy who follows me around the 'Net looking for sentences that can be taken out of context to add to his hateblog, and mention with some bemusement that you didn't stop even after you posted that all the one-sided hate was causing you health problems. Either a ceasefire or an update will prevent me from saying any such thing.
I urge you to see a competent cognitive-behavioral therapist and talk to them about the reason why your brain is making you do this even as it destroys your health.
I have written this note according to the principles of Tell Culture to describe my own future actions conditional on yours. Reacting to it in a way I deem inappropriate, such as taking a sentence out of context and putting it on your hateblog, will result in no future such communications with you.
"You can update by posting a header to all of your blog posts saying, "I wrote this blog during a dark period of my life. I now realize that Eliezer Yudkowsky is a decent and honest person with no ill intent, and that anybody can be made to look terrible by selectively collecting all of his quotes one-sidedly as I did. I regret this page, and leave it here as an archive to that regret.""
Wow, just wow. A cult leader demands a Stalin-style self-critique on every page (no sane person would consider that reasonable) and the censoring of all posts related to Less Wrong, after a campaign of harassment.