redding comments on Open Thread - Aug 24 - Aug 30 - Less Wrong Discussion

7 Post author: Elo 24 August 2015 08:14AM

Comment author: redding 24 August 2015 04:37:41PM 1 point [-]

Not sure if this is obvious or just wrong, but isn't it possible (even likely?) that there is no way of representing a complex mind that is useful enough to allow an AI to usefully modify itself? For instance, if you gave me complete access to my own source code, I don't think I could use it to achieve any goals, as such code would be billions of lines long. Presumably there is a logical limit on how far one can usefully compress one's own mind in order to reason about it, and it seems reasonably likely that such compression will be too limited to allow a singularity.

Comment author: [deleted] 24 August 2015 08:59:58PM *  3 points [-]

The ability to reason about large amounts of code seems to be more a memory and computation-speed problem than a logic problem. Computers already seem to be better than humans on those counts, so they may well be better at understanding large bodies of code, once we have the whole "understanding" thing solved.

Comment author: DanielLC 25 August 2015 04:10:57AM 0 points [-]

There are certainly ways you can usefully modify yourself. For example, giving yourself a heads-up display. However, I'm not sure how much that would end up increasing your intelligence. You could get runaway super-intelligence if every improvement increases the best mind current!you can build by at least as much as the last one did; but if each improvement yields less than that, the process won't run away.
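The condition above can be sketched as a toy recurrence (the gain functions below are purely illustrative assumptions, not anyone's actual model): intelligence runs away when each step's gain is at least as large as the last, and plateaus into slow growth when gains diminish.

```python
def run(gain, steps=30, start=1.0):
    """Iterate self-improvement: at each step, a mind of intelligence i
    builds the best successor it can, of intelligence i + gain(i)."""
    i = start
    for _ in range(steps):
        i = i + gain(i)
    return i

# Supercritical (runaway): smarter minds find proportionally larger
# improvements, so gains compound geometrically.
supercritical = run(lambda i: 0.1 * i)

# Subcritical (no runaway): returns diminish sharply as intelligence
# rises, so successive gains shrink and growth slows to a crawl.
subcritical = run(lambda i: 0.5 ** i)
```

After 30 steps the supercritical run has grown by more than an order of magnitude while the subcritical one has barely moved, which is the distinction the comment is drawing.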