Less Wrong is a community blog devoted to refining the art of human rationality.

kilobug comments on Engelbart: Insufficiently Recursive - Less Wrong

11 Post author: Eliezer_Yudkowsky 26 November 2008 08:31AM



Comment author: kilobug 22 September 2011 08:05:47PM 1 point

"You would very probably have to reach into the brain and rewire neural circuitry directly; I don't think any sense input or motor interaction would accomplish such a thing." Are you really sure it's not possible? To me it's just a harder version of the AI Box problem: I'm pretty sure (>95% probability) a sufficiently smart AI could convince anyone (who actually listens to it, at least) to let it out of the box. I'm not as sure about an AI being able to rewire my brain enough to make me an Einstein or an Engelbart using only sensory inputs, but I would give a significant probability (higher than 10%) that a Bayesian superintelligence could reverse-engineer the way my brain learns from stimuli well enough to rewire large parts of it without using any nanotechnology. Just from other humans, by reading things (like Less Wrong or many books), we can improve a lot. A superintelligence could probably do much, much more to help us improve ourselves through normal communication alone.