"You can update by posting a header to all of your blog posts saying, "I wrote this blog during a dark period of my life. I now realize that Eliezer Yudkowsky is a decent and honest person with no ill intent, and that anybody can be made to look terrible by selectively collecting all of his quotes one-sidedly as I did. I regret this page, and leave it here as an archive to that regret.""

Wow, just wow. The cult leader demands a Stalin-style self-critique on every page (no sane person would consider that reasonable) and the censoring of all posts related to Less Wrong, after a campaign of harassment.

"I don't trust my ability to set limits on the abilities of Bayesian superintelligences."

Limits? I can think up a few on the spot already.

Environment: CPU power, RAM capacity, etc. I don't think even you guys claim something as blatant as "an AI can break the laws of physics when convenient".

Feats:

  • Win this kind of situation in chess. Sure, an AI would not allow such a situation to arise in the first place during a game, but that's not my point.

  • Make a human understand the AI. Note: uplifting does not count, since the human then ceases to be human. For practice, try teaching your cat Kant's philosophy.

  • Make the AI understand itself fully and correctly. This one actually works on all levels. Can YOU understand yourself? Are you even theoretically capable of that? Hint: no.

  • Related: survive actual self-modification, especially without any external help. Transhumanist fantasy says AIs will do it all the time. The reality is that any self-preserving AI will be as eager to perform self-modification as you would be to get a randomized, extreme form of lobotomy (a transhumanist version of Russian roulette, except every gun is loaded save one in a gazillion).

I guess some people are so used to thinking of AIs as magic omnipotent technogods that they don't even notice it. Sad.

I have never seen anyone brag so much about defeating strawmen. Hell, in one place he explicitly says that the "Soul Swap World" is something he made up on the spot just so he could happily demolish it.

And I still do not know what I am supposed to think about personal identity. I happen to think that ME is generated by the brain: a brain that works so well it can generate a mind despite all of those changes in atoms meticulously described by Yudkowsky.

Except that is not possible, so Yudkowsky's entire house of cards falls apart.

"Real, authentic Nazis were proud of their achievement"

Not publicly. Holocaust denial exists because it (the mass murder of certain groups of humans) makes them look bad. Of course, that is Insane Troll Logic, but I do not think anyone expects sane logic from Nazis.

"Which transhumanist ideas are "not even wrong"?"

The Technological Singularity, for example (as defined on Wikipedia). In my view, it is just an atheistic version of the Rapture, or of The End Of The World As We Know It endemic in various cults, and equally likely.

The reason is that recursive self-improvement is not possible, since it requires perfect self-knowledge and self-understanding. In reality, an AI will be a black box to itself, just as our brains are black boxes to ourselves.

More precisely, my claim is that any mind at any level of complexity is insufficient to understand itself. It is possible for a more advanced mind to understand a simpler mind, but that obviously does not help very much in the context of direct self-improvement.

An AI with any self-preservation instinct would be as likely to willingly perform direct self-modification on its own mind as you would be to get stabbed through the eye socket with an icepick.

So any AI improvement would have to be done the old way. The slow way. No fast takeoff. No intelligence explosion. No Singularity.

That does not matter. If you think it is a scam, then the size of the promised reward does not matter. 100? A googol? A googolplex? 3^^^3? Infinite? It simply does not enter the calculation in the first place, since it is made up anyway.

Determining "is this a scam?" would probably have to rely on things other than the size of the reward. That avoids the whole "but but there is no 1-in-3^^^3 probability because I say so" BS.
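To make that decision rule concrete, here is a minimal sketch in Python, purely illustrative: the helper looks_like_scam, its feature list, and the claim fields are all hypothetical, not anything from the original exchange. The point it shows is "decide scam-or-not first, from features other than the reward size"; the promised reward only ever enters the arithmetic for claims that pass that filter.

```python
# Toy illustration: judge "scam or not" from features other than the
# promised reward, and only then (if at all) do any expected-value math.

def looks_like_scam(claim):
    """Hypothetical scam check: uses everything EXCEPT the reward size."""
    suspicious = (
        claim["unverifiable"],        # no way to check the promise
        claim["payment_up_front"],    # demands something from you now
        claim["source_unknown"],      # random stranger, anonymous letter, etc.
    )
    return any(suspicious)

def decision(claim):
    # If it is judged a scam, the promised reward never enters the
    # calculation at all: 100, a googol, or 3^^^3 makes no difference.
    if looks_like_scam(claim):
        return "refuse"
    # Only claims that pass the scam filter get an expected-value check.
    expected_value = claim["probability"] * claim["reward"] - claim["cost"]
    return "accept" if expected_value > 0 else "refuse"

mugging = {
    "reward": 10**100,        # a googol; stand-in for however large the promise is
    "cost": 5,
    "probability": 1e-9,
    "unverifiable": True,
    "payment_up_front": True,
    "source_unknown": True,
}

print(decision(mugging))  # -> "refuse", regardless of how big "reward" is
```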