
KatjaGrace comments on Superintelligence Reading Group 3: AI and Uploads - Less Wrong Discussion

9 Post author: KatjaGrace 30 September 2014 01:00AM



Comment author: KatjaGrace 02 October 2014 07:49:13AM 3 points

Bostrom talks about a seed AI being able to improve its 'architecture', presumably as opposed to lower level details like beliefs. Why would changing architecture be particularly important?

Comment author: Jeff_Alexander 03 October 2014 02:48:51AM 3 points

One way changing architecture could be particularly important is improvement in the space- or time-complexity of its algorithms. A seed AI with a particular set of computational resources that improves its architecture to make decisions in (for example) logarithmic time instead of linear could markedly advance along the "speed superintelligence" spectrum through such an architectural self-modification.
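As a concrete (hypothetical, illustrative) instance of the kind of gain described above: replacing a linear scan with binary search over sorted data turns an O(n) lookup into an O(log n) one, with no change in hardware. The function names here are invented for the example; the speedup itself is standard.

```python
import bisect

def linear_lookup(sorted_values, target):
    # O(n): examine each element in turn
    for i, v in enumerate(sorted_values):
        if v == target:
            return i
    return -1

def binary_lookup(sorted_values, target):
    # O(log n): exploit sortedness by repeatedly halving the search interval
    i = bisect.bisect_left(sorted_values, target)
    if i < len(sorted_values) and sorted_values[i] == target:
        return i
    return -1

# Both find the same answer; the second needs ~log2(n) comparisons
# instead of ~n, a gap that widens as the problem size grows.
values = list(range(0, 1_000_000, 2))  # sorted even numbers
assert linear_lookup(values, 123456) == binary_lookup(values, 123456) == 61728
```

On a list of 500,000 elements the bisection version needs roughly 19 comparisons where the scan may need hundreds of thousands, which is the sense in which an algorithmic (architectural) change can dominate a mere hardware speedup.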

Comment author: NxGenSentience 03 October 2014 12:05:59PM 0 points

One's answer depends on how imaginative one wants to get. One situation is if the AI were to realize we had unknowingly trapped it in too deep a local optimum, a fitness valley from which it cannot progress significantly without substantial rearchitecting. We might ourselves be trapped in a local optimum and have transferred some resultant handicap to our AI progeny. If it, with computationally enhanced resources, can "understand" indirectly that it is missing something (analogy: we can detect "invisible" celestial objects by noting perturbations in what we can see, using computer modeling and enhanced instrumentation), it might realize a fundamental blind spot was engineered in, and a redesign is needed.

For example: what if it realizes it needs to have emotion, or different emotions, for successful personal evolution toward enlightenment? What if it is more interested in beauty and aesthetics than in finding deep theorems and proving string theory? We don't really know, collectively, what "superintelligence" is. To the hammer, the whole world looks like... How do we know that some logical positivist engineer's vision of AI nirvana will be shared by the AI? How many kids would rather be a painter than a Harvard MBA "just like daddy planned"? Maybe the AIs will find things that are "analog," like art, more interesting than what they know in advance they can do, like anything computable, which becomes relatively uninteresting. What will they find worth doing, if success at anything within halting-problem parameters (and they might extend and complete those theorems first) is already a given?