Mader_Levap comments on That Alien Message

Post author: Eliezer_Yudkowsky 22 May 2008 05:55AM


You are viewing a single comment's thread.

Comment author: Mader_Levap 25 July 2016 07:32:57PM  -1 points

"I don't trust my ability to set limits on the abilities of Bayesian superintelligences."

Limits? I can think up a few on the spot already.

Environment: CPU power, RAM capacity, etc. I don't think even you guys claim something as blatant as "an AI can break the laws of physics when convenient".
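
To make the environment point concrete, here is a back-of-the-envelope sketch (my own illustration, not part of the original comment; the machine speed and the 256-bit brute-force task are assumptions chosen for the arithmetic): however clever the software, lawful physics bounds what bounded hardware can compute.

```python
# Illustrative arithmetic (assumed numbers, not from the thread): even a
# machine sustaining 10**18 operations per second (roughly an exascale
# supercomputer) cannot brute-force a 256-bit key in any realistic time.
ops_per_second = 10**18
seconds_per_year = 60 * 60 * 24 * 365

# Exhaustive search of a 256-bit keyspace needs ~2**256 operations worst case.
operations_needed = 2**256

years_needed = operations_needed / (ops_per_second * seconds_per_year)
print(f"{years_needed:.3e} years")  # ~3.7e51 years, vastly longer than the age of the universe
```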

Feats:

  • Win this kind of situation in chess. Sure, an AI would not allow that situation to arise in the first place during a game, but that's not my point.

  • Make a human understand an AI. Note: uplifting does not count, since the human then ceases to be human. As practice, try teaching your cat Kant's philosophy.

  • Make an AI understand itself fully and correctly. This one actually works on all levels. Can YOU understand yourself? Are you even theoretically capable of that? Hint: no.

  • Related: survive actual self-modification, especially without any external help. Transhumanist fantasy says AIs will do it all the time. In reality, any self-preserving AI would be about as eager to perform self-modification as you would be to undergo a randomized, extreme form of lobotomy (a transhumanist version of Russian roulette, except with all bullets loaded in every gun but one in a gazillion). (A toy illustration follows after this list.)
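
To ground the self-modification point, here is a toy sketch (my own illustration, not from the thread; the example program and trial count are arbitrary): blindly change one character of a working program's source and check whether it still compiles and computes the same answer. The overwhelming majority of random edits break it, which is the Russian-roulette intuition above.

```python
# Toy experiment (assumed setup): mutate one character of a tiny program's
# source at random and test whether it still produces the original output.
import random
import string

SOURCE = "def f(x):\n    return x * x + 1\n"
TRIALS = 10_000

def still_works(src: str) -> bool:
    env = {}
    try:
        exec(compile(src, "<mutant>", "exec"), env)
        return env["f"](7) == 50  # same output as the unmodified program
    except Exception:
        return False  # syntax error, name error, wrong function name, etc.

random.seed(0)
survivors = 0
for _ in range(TRIALS):
    i = random.randrange(len(SOURCE))
    mutant = SOURCE[:i] + random.choice(string.printable) + SOURCE[i + 1:]
    if still_works(mutant):
        survivors += 1

print(f"{survivors}/{TRIALS} random edits left the program working")
```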

I guess some people are so used to thinking about AI as magic omnipotent technogods that they don't even notice it. Sad.

Comment author: hairyfigment 26 July 2016 05:42:03AM 2 points

As far as environment goes, the context says exactly the opposite of what you suggest it does.

Among your bullet points, only the first seems well-defined. I could try to discuss them anyway, but I suggest you just read up on the subject and come back. Eliezer's organization has a great deal of research on self-understanding and theoretical limits; it's the middle icon at the top right of the page.