Is there a recursive self-improvement hierarchy?
When we talk about recursively self-improving AI, the word "recursive" sounds close enough to its literal technical meaning that we glide over it without asking precisely what it means.
But it's not literally recursion—or is it?
The notion is that an AI has a function optimize(X) and applies it to itself. But that is recursion in the sense of modifying itself, not calling itself. You can imagine ways to do this that would use recursion—say, the paradigmatic executable that rewrites its source code, compiles it, and exec's it—but you can imagine many ways that would not involve any recursive calls at all.
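To make the distinction concrete, here is a minimal sketch contrasting the two senses. The names (optimize_recursive, TEMPLATE) and the "+1 improvement" step are illustrative stand-ins, not a claim about any real system: the first version improves by a function calling itself; the second improves by a program rewriting its own source text and exec'ing the result, with no function ever invoking itself.

```python
# Sense 1: literal recursion -- a function that calls itself.
def optimize_recursive(x, depth=0):
    x = x + 1                     # stand-in for one "improvement" step
    if depth == 2:
        return x
    return optimize_recursive(x, depth + 1)   # the recursive call

# Sense 2: self-modification with no recursive call -- a tiny "program"
# (a source string) that rewrites itself and re-exec's the new version,
# the "rewrite source, compile, exec" pattern in miniature.
TEMPLATE = (
    "gen = {gen}\n"
    "result.append(gen)\n"
    "if gen < 3:\n"
    "    exec(TEMPLATE.format(gen=gen + 1))\n"
)

result = []
exec(TEMPLATE.format(gen=0))

print(optimize_recursive(0))   # 3
print(result)                  # [0, 1, 2, 3]
```

Both loops run the same number of "improvement" steps, but only the first contains a call from a function to itself; the second's control flow is a chain of distinct program texts, each spawning its successor.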
Can we define recursive self-improvement precisely enough that we can enumerate, explicitly or implicitly, all possible ways of accomplishing it, as clearly as we can list all possible ways of writing a recursive function? (You would want to fix one formalism to work in, say the lambda calculus.)
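A toy instance of what such an enumeration would have to cover: even within the lambda calculus, recursion need not look like a function naming itself, because a fixed-point combinator can supply the self-reference from outside. A sketch in Python, using the standard Z combinator (the call-by-value variant of the Y combinator); the factorial is just a familiar test case:

```python
# Z combinator: Z = λf.(λx.f(λv.x(x)(v)))(λx.f(λv.x(x)(v)))
# It produces a fixed point of f, giving recursion without any
# function referring to itself by name.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A "recursive" factorial in which no definition mentions its own name:
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact(5))   # 120
```

If even ordinary recursion admits such structurally different encodings within one formalism, an enumeration of "ways to recursively self-improve" would need to be at least that careful about what counts as the same mechanism.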
A model of the brain's mapping of the territory
I'm linking to a video which describes how the brain may learn to improve its skill at mapping the territory from limited samples.
This model of learning was previously unknown to me. Judging from the video's date, from what I heard from the person who referred me to it, and from the fact that I don't recall seeing much related discussion on LessWrong, I think it may be recent enough that some people here would benefit from my spreading the word.
Check out the video: the background introduction starts at the 52:00 mark, the model itself gets going at 54:00, and the overview of the model takes approximately four minutes.
Raw silicon ore of perfect emptiness
Does building a computer count as explaining something to a rock?
(If we still had open threads, I would have posted this there. As it is, I figure this is better than not saying anything.)