Bugmaster comments on Welcome to Less Wrong! (2012) - Less Wrong

25 Post author: orthonormal 26 December 2011 10:57PM


Comment author: Bugmaster 25 April 2012 01:18:08AM 1 point

That's a good point, but, from reading what Eliezer and Luke are writing, I formed the impression that my interpretation is correct. In addition, the SIAI FAQ seems to be saying that an intelligence explosion is a natural consequence of Moore's Law; thus, if Moore's Law continues to hold, an intelligence explosion is inevitable.

FWIW, I personally disagree with both statements, but that's probably a separate topic.

Comment author: TheOtherDave 25 April 2012 03:31:38AM 0 points

Huh. The FAQ you cite doesn't seem to be positing inevitability to me. (shrug)

Comment author: Bugmaster 25 April 2012 08:31:00PM 0 points

You're right; I just re-read it, and it doesn't mention Moore's Law. Either it did at some point and then changed, or I saw that argument somewhere else. Still, the FAQ does seem to suggest that the only thing that can stop the Singularity is total human extinction (well, that, or the existence of souls, which IMO we can safely discount); that's pretty close to inevitability as far as I'm concerned.

Comment author: TheOtherDave 25 April 2012 09:01:42PM 0 points

Note that the section you're quoting is no longer talking about the inevitable ascension of any given AGI, but rather the inevitability of some AGI ascending.

Comment author: Bugmaster 26 April 2012 08:36:34PM 0 points

I thought they were talking specifically about an AGI that is capable of recursive self-improvement. This does not encompass all possible AGIs, but the non-self-improving ones are not likely to be very smart, as far as I understand, and thus aren't a concern.

Comment author: TheOtherDave 26 April 2012 09:10:27PM 2 points

OK, now I am confused.

This whole thread started because you said:

[SIAI] are assuming that any AGI will inevitably (plus or minus epsilon) self-improve itself to transhuman levels.

and I asked why you believed that, as distinct from "...any AGI has a non-negligible chance of self-improving itself to transhuman levels, and the cost of that happening is so vast that it's worth devoting effort to avoid even if the chance is relatively low"?

Now you seem to be saying that SI doesn't believe that any AGI will inevitably (plus or minus epsilon) self-improve itself to transhuman levels, but rather that it is primarily concerned with those that do.

I agree with that entirely; it was my point in the first place.

Were we in agreement all along, have you changed your mind in the course of this exchange, or am I really really confused about what's going on?

Comment author: Bugmaster 26 April 2012 09:22:32PM 2 points

Sorry, I think I am guilty of misusing terminology. I have been using AI and AGI interchangeably, but that's obviously not right. As far as I understand, "AGI" refers to a general intelligence that can solve (or, at least, attempt to solve) any problem, whereas "AI" refers to any kind of artificial intelligence, including the specialized kind. There are many AIs that already exist in the world -- for example, Google's AdSense algorithm -- but SIAI is not concerned about them (as far as I know), because they lack the capacity to self-improve.

My own hidden assumption, which I should've recognized and voiced earlier, is that an AGI (as contrasted with non-general AI) would most likely be produced through a process of recursive self-improvement; it is highly unlikely that an AGI could be created from scratch by humans writing lines of code. As far as I understand, the SIAI agrees with this statement, but again, I could be wrong.

Thus, it is unlikely that a non-general AI will ever be smart enough to warrant concern. It could still do some damage, of course, but then, so could a busted water main. On the other hand, an AGI will most likely arise as the result of recursive self-improvement, and will thus be capable of further self-improvement, boosting itself to transhuman levels very quickly unless its self-improvement is arrested by some mechanism.

Comment author: TheOtherDave 26 April 2012 09:57:01PM 1 point

OK, I think I understand better now.

Yeah, I've been talking throughout about what you're labeling "AI" here. We agree that these won't necessarily self-improve. Awesome.

With respect to what you're labeling "AGI" here, you're saying the following:
1) given that X is an AGI developed by humans, the probability that X has thus far been capable of recursive self-improvement is very high;
2) given that X has thus far been capable of recursive self-improvement, the probability that X will continue to be capable of recursive self-improvement in the future is very high; and
3) SIAI believes 1) and 2).

Yes? Have I understood you?

Comment author: Bugmaster 26 April 2012 11:42:38PM 0 points

Yes, with the caveats that (a) as far as I know, no such X currently exists, and (b) my confidence in (3) is much lower than my confidence in (1) and (2).