Bugmaster comments on Welcome to Less Wrong! (2012) - Less Wrong

25 Post author: orthonormal 26 December 2011 10:57PM




Comment author: Bugmaster 25 April 2012 12:29:35AM 0 points [-]

The SIAI folks would say that your reasoning is exactly the kind of reasoning that leads to all of us being converted into computronium one day. More specifically, they would claim that, if you program an AI to improve itself recursively -- i.e., to rewrite its own code, and possibly rebuild its own hardware, in order to become smarter and smarter -- then its intelligence will grow exponentially, until it becomes smart enough to easily outsmart everyone on the planet. It would go from "monkey" to "quasi-godlike" very quickly, potentially so quickly that you won't even notice it happening.

FWIW, I personally am not convinced that this scenario is even possible, and I think that SIAI's worries are way overblown, but that's just my personal opinion.

Comment author: wedrifid 25 April 2012 12:37:09AM 2 points [-]

i.e., to rewrite its own code, and possibly rebuild its own hardware, in order to become smarter and smarter -- then its intelligence will grow exponentially, until it becomes smart enough to easily outsmart everyone on the planet.

Recursively, not necessarily exponentially. It may exploit the low-hanging fruit early and improve somewhat more slowly once those are gone. The same conclusion applies: the threat is that it improves rapidly, not that it improves exponentially.

Comment author: Bugmaster 25 April 2012 12:42:48AM 0 points [-]

Good point, though if the AI's intelligence grew linearly or as O(log T) or something, I doubt that it would be able to achieve the kind of speed that we'd need to worry about. But you're right, the speed is what ultimately matters, not the growth curve as such.
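The difference between these growth regimes can be sketched numerically. Here is a minimal, purely illustrative simulation; the update rules and numbers are invented for this sketch and are not anyone's actual model of self-improvement:

```python
# Purely illustrative: compare final "capability" after 100 improvement
# steps under three hypothetical growth regimes (update rules are made up).
steps = 100
exp_x = lin_x = log_x = 1.0
for t in range(steps):
    exp_x *= 1.1              # exponential: gain proportional to capability
    lin_x += 0.1              # linear: constant gain per step
    log_x += 1.0 / (t + 1)    # diminishing returns: ~O(log T) total growth

print(f"exponential: {exp_x:.0f}, linear: {lin_x:.1f}, log-like: {log_x:.2f}")
```

After the same number of steps the exponential regime ends up thousands of times more capable than the other two, which is the intuition behind "the growth curve matters less than the resulting speed."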

Comment author: olalonde 25 April 2012 12:41:07AM *  0 points [-]

Human-level intelligence is unable to improve itself at the moment (it's not even able to recreate itself, if we exclude reproduction). I don't think monkey-level intelligence would be any more able to do so. I agree that the SIAI scenario is way overblown, at least until we have created an intelligence vastly superior to our own.

Comment author: Vulture 25 April 2012 02:22:54AM 2 points [-]

Uh... I think the fact that humans aren't cognitively self-modifying (yet!) has less to do with our intelligence level than with the fact that we were not designed explicitly to be self-modifying, as SIAI assumes any AGI would be. I don't really know enough about AI to know whether or not this is strictly necessary for a decent AGI, but I get the impression that most (or all) serious would-be-AGI-builders are aiming for self-modification.

Comment author: olalonde 25 April 2012 11:21:47AM 0 points [-]

Isn't it implied that sub-human intelligence is not designed to be self-modifying given that monkeys don't know how to program? What exactly do you mean by "we were not designed explicitly to be self-modifying"?

Comment author: Vulture 26 April 2012 01:09:18AM 0 points [-]

My understanding was that in your comment you basically said that our current inability to modify ourselves is evidence that an AGI of human-level intelligence would likewise be unable to self-modify.

Comment author: adamisom 25 April 2012 03:13:22AM 0 points [-]

This is a really stupid question, but I don't grok the distinction between 'learning' and 'self-modification' - do you get it?

Comment author: Vulture 25 April 2012 04:16:56AM 2 points [-]

By my understanding, learning is basically when a program collects the data it uses itself through interaction with some external system. Self-modification, on the other hand, is when the program has direct read/write access to its own source code, so it can modify its own decision-making algorithm directly, not just the data set its algorithm uses.
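The distinction can be made concrete with a toy sketch. The class names and structure here are invented purely for illustration:

```python
# Toy illustration of "learning" vs "self-modification" (names invented).

class Learner:
    """Fixed decision algorithm; only its data changes."""
    def __init__(self):
        self.observations = []            # data gathered from the environment

    def learn(self, observation):
        self.observations.append(observation)   # updates data, not code

    def decide(self):
        # the algorithm itself never changes: act on the latest observation
        return self.observations[-1] if self.observations else None


class SelfModifier:
    """Can replace its own decision procedure outright."""
    def __init__(self):
        self.decide = lambda: "default action"

    def rewrite(self, new_decision_fn):
        self.decide = new_decision_fn     # swaps out the algorithm itself


learner = Learner()
learner.learn("saw food")

agent = SelfModifier()
agent.rewrite(lambda: "improved action")

print(learner.decide())   # the data changed, the rule did not
print(agent.decide())     # the rule itself was replaced
```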

Comment author: TheOtherDave 25 April 2012 04:48:00AM 1 point [-]

This seems to presume a crisp distinction between code and data, yes?
That distinction is not always so crisp. Code fragments can serve as data, for example.
But, sure, it's reasonable to say a system is learning but not self-modifying if the system does preserve such a crisp distinction and its code hasn't changed.
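The point that code fragments can serve as data is easy to demonstrate in a language like Python, where a program can hold and edit pieces of its own logic as ordinary values. A contrived sketch:

```python
# A rule stored as a plain string is just data to the program...
rule_source = "lambda x: x + 1"

# ...until it is evaluated, at which point it becomes executable code.
rule = eval(rule_source)
print(rule(41))    # 42

# The program can also edit the fragment as text before running it,
# which blurs the line between "changing data" and "changing code".
modified_rule = eval(rule_source.replace("+ 1", "* 2"))
print(modified_rule(21))    # 42
```

A system like this preserves no crisp code/data boundary, which is exactly the case where the learning/self-modification distinction gets murky.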