Certainly it's possible, and in some ways it's actually being done. But recursive self-improvement isn't always a recipe for FOOMing; diminishing returns can lead to an asymptotic bound, Zeno-style, even if technical refinements remain theoretically possible. In particular, if there are diminishing returns which can't be overcome without radically changing the nature of the problem (for example, with the help of a brain-computer interface or a full-blown mind upload), then self-improvement of the human brain has to wait on those advances to make significant progress.
I'd argue that this is in fact the case. The human brain is a piece of special-purpose hardware that's already quite well optimized for its niche, albeit subject to any number of mind-killing barbarities of the ancestral environment which might not apply in the contemporary world (perennial famine being a cheap example). We're probably not at the peak of the fitness landscape for mammalian intelligence, but I'd be surprised if we weren't reasonably close to a local maximum; if that's true, we're likely to run into early limits on how much we can improve performance by tweaking parameters of a working brain. Generational improvements (genetic or environmental) probably have higher limits, but they would take so long that, under even modest assumptions about the progress of AGI, machine intelligence is the way to bet in the medium term.
Improving human thought is a slightly different topic, but it's pretty much synonymous with rationality as this site uses the term; its various subfields are also highly interdependent, so I don't think attempting to recursively improve any single one of them is going to fly. That said, I'm pretty sure there's a lot of potential latent in the prospect, but with the caveat that it's bounded above by human intelligence; a perfectly optimized rational thinker would probably look exceptionally smart to the likes of us, but even Socrates was proverbially mortal.
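To spell out the "Zeno-style" bound above: a toy model in which each self-improvement cycle yields a fixed fraction r of the previous cycle's gain (r is purely an illustrative assumption, not a claim about actual brains) gives a finite total even over infinitely many cycles:

```latex
% Toy model: each self-improvement cycle yields r times the previous cycle's
% gain, with 0 < r < 1 (diminishing returns).
\Delta_{\text{total}} = \sum_{n=0}^{\infty} \Delta_0 \, r^{n} = \frac{\Delta_0}{1 - r}
% Example: a first-cycle gain of 10 units with r = 0.5 caps out at 20 units,
% even though every later cycle still adds a genuine (ever smaller) improvement.
```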
> But recursive self-improvement isn't always a recipe for FOOMing; diminishing returns can lead to an asymptotic bound, Zeno-style, even if technical refinements remain theoretically possible.
Yes. Early wins are very cheering and promising, but pretty much all growth curves follow a sigmoid. See the start? That's exponential. See the end? That's diminishing returns. This is one of those curves you see over and over.
So the trick is to find a new sigmoid to climb :-D
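A minimal sketch of that curve, with made-up parameters (the ceiling, rate, and midpoint are all assumptions chosen only to make the shape visible), showing both regimes in one function:

```python
# Sketch of a logistic (sigmoid) growth curve: early steps look roughly
# exponential, later steps show diminishing returns toward a ceiling.
import math

def logistic(t, ceiling=100.0, rate=1.0, midpoint=10.0):
    """Logistic growth: approaches `ceiling` asymptotically."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

if __name__ == "__main__":
    prev = logistic(0)
    for t in range(1, 21):
        cur = logistic(t)
        print(f"t={t:2d}  value={cur:7.2f}  gain this step={cur - prev:6.2f}")
        prev = cur
    # Per-step gains rise at first (looks like a takeoff), peak near the
    # midpoint, then shrink toward zero as the curve flattens out.
```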
> We're probably not at the peak of the fitness landscape for mammalian intelligence, but I'd be surprised if we weren't reasonably close to a local maximum;
This makes me wonder: if we create an FAI and there are alien uFAIs out there, will they be more intelligent because they had more time, or is there an overall limit on general intelligence? I suppose that if there is a total limit to self-improvement for any kind of general intelligence, then all that matters is the acquisition of resources? So any alien uFAI that was able to acquire more raw resources by the time our FAI reaches the upper bound on intelligence could subdue our FAI by brute force?
> So any alien uFAI that was able to acquire more raw resources by the time our FAI reaches the upper bound on intelligence could subdue our FAI by brute force?
No. Even assuming an overwhelming intelligence superiority it would not be possible to subdue a competing superintelligence within any physics remotely like that which we know. Except, of course, if you catch it before it is aware of your existence.
Given the ability to travel at a high fraction of the speed of light and to consume most of a star system's resources for further expansion, the speed of light puts a hard lower bound on how much of the cosmic commons you can consume before the smarter AI can catch you.
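A rough back-of-the-envelope sketch of that bound. The speeds and head start are invented for illustration; the only real constraint in the model is that neither expansion front can exceed c:

```python
# Toy model: an earlier, slower expansion front versus a later, faster pursuer.
# All numbers (speeds, head start) are made-up assumptions for illustration.
import math

C = 1.0  # units where the speed of light is 1 light-year per year

def catch_up_radius(v_early, v_late, head_start_years):
    """Radius (light-years) the earlier, slower expansion has reached
    when the later, faster expansion front overtakes it."""
    assert v_early < v_late <= C
    t_catch = v_late * head_start_years / (v_late - v_early)
    return v_early * t_catch

if __name__ == "__main__":
    r = catch_up_radius(v_early=0.5, v_late=0.99, head_start_years=1000)
    volume = (4.0 / 3.0) * math.pi * r ** 3
    print(f"Overtaken at radius ~{r:,.0f} ly, enclosing ~{volume:,.0f} cubic ly")
    # Because v_late can never exceed c, the overtake radius is bounded below
    # by v_early * c * head_start / (c - v_early): a guaranteed minimum share
    # of the cosmic commons, however much smarter the pursuer is.
```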
The problem then is that having more than one superintelligence - without the ability to cooperate - will guarantee the squandering of a lot of the resources that could otherwise have been spent on fun.
I'm using "intelligence" here to mean the part of your cognitive architecture that doesn't rely on learned skills, and "thought" to refer to whatever cognitive algorithms you're consciously executing. Neuroplasticity means the distinction's a little muddled, but it only goes so far; neurons only update so fast.
OK, clear enough terms.
I'm familiar with the idea of cognitive predispositions, i.e., it is 'easier' to learn to fear a snake than to learn to fear a butterfly, and easier to learn to recognize a face than to recognize a word, so we're not blank slates in any absolute sense. Still, I have trouble connecting this kind of biological predisposition with an upper bound on abstract intelligence -- my naive intuition is that once you leave the realm of things that humans are pre-programmed to learn, all learning tasks of a given complexity are more or less equally difficult.
I'm sure there are biological limits as far as, e.g., how many thoughts I can keep in my head at once, or how many data points I can memorize in an hour, but I'm not sure what evidence there is that anyone has ever come close to bumping up against even one of these limits, let alone all of them. I like Socrates, but he didn't have access to, e.g., modern mnemonics, or timed academic competitions, and so on.
Also, it seems likely that if you or I started to improve our thought now, and worked at it diligently for, say, 20 hours a week, we would start to benefit from cyborg-style advances before we ran into hard limits on biological intelligence. E.g., Google already allows you to outsource a fair amount of vocab-style memorization; Yelp has a Monocle feature that lets you superimpose an overlay of nearby restaurants on your ordinary vision; Wolfram Alpha solves general systems of complex equations expressed in more or less arbitrary fashion; so, barring a general collapse, it should only be a few years before we get practical technology for doing a whole lot of 'thinking' with our silicon accessories, even without any breakthroughs in terms of a mind-machine interface.
It seems the more math I learn the faster I am able to understand and acquire new math skills. I'm still at a very basic level, so that might quickly hit diminishing returns, but so far it seems that improving my abilities allows me to improve my abilities even more and faster.
No one I'm aware of on this forum has taken over a major corporation yet, but the night is young; perhaps some people have ideas.
I currently tend to believe that the site works better when most comments are no longer than 3 paragraphs, so I put my reply to this post into a new post.
Mind hacks, yes. Rationality techniques definitely count. Probability, scientific method, etc.
One I keep failing to get around to: some serious training of my memory.
There's plenty of discussion about recursive self-improvement for AGIs, and about self-improvement for ourselves, but I haven't come across (in memory) anything that specifically combines the two concepts. Arguably, increasing rationality skills would be recursive, but can anyone think of specific ways in which we can improve ourselves via iterative cycles? Is there a limit to how far we can currently improve our abilities by improving our abilities to improve our abilities? Or are these not the right questions, and the concept a mere semantic illusion?
A place to start might be base-level anti-akrasia abilities. Are there physical steps we can take to force ourselves to self-improve when we would otherwise lose the will, forget, or move on to another interesting subject? Could we construct some sort of externally self-propelling regime that pulls us along a path to improvement? Could we "Incept" ourselves with the seed of a recursively self-improving paradigm?
-
Forgive the overbearingly inquisitive tone. This is my first post after lurking a couple years, and I guess even in the discussion section I subconsciously avoid making any potentially low-status claims of my own!