"FOOM that takes two years"
In addition to the comments by Robin and Aron, I would also point out the possibility that the longer the FOOM takes, the larger the chance it is not local, regardless of security: somewhere else, there might be another FOOMing AI.
Now, as I understand it, some consider this situation even more dangerous, but it might also create a defence against "take over".
Another comment on the FOOM scenario, and this is a sort of addition to Tim's post:
"As machines get smarter, they will gradually become able to improve more and more of themselves. Yes, eventually machines will be able to cut humans out of the loop - but before that there will have been much automated improvement of machines by machines - and after that there may still be human code reviews."
Eliezer seems to spend a lot of time explaining what happens when "k > 1" - when AI intelligence surpasses human intelligence and starts self-improving. But I suspect that the phase where 0.3 < k < 1 might be pretty long, maybe decades.
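To make the criticality analogy concrete (a toy model of my own, not anything from Eliezer's writing): treat k as the multiplier between successive rounds of self-improvement. The total improvement then behaves like a geometric series - it converges to a finite ceiling for k < 1 and grows without bound for k > 1:

```python
# Toy model of the criticality parameter k (hypothetical illustration):
# each round of self-improvement yields k times the capability gain of
# the previous round, so the cumulative gain is a geometric series.

def total_gain(k, rounds, initial_gain=1.0):
    """Cumulative capability gain after `rounds` improvement cycles."""
    total = 0.0
    gain = initial_gain
    for _ in range(rounds):
        total += gain
        gain *= k
    return total

# Subcritical (k = 0.3): gains approach a finite ceiling of 1/(1-k).
print(total_gain(0.3, 100))  # ~1.43: improvement stalls out

# Supercritical (k = 1.5): gains compound without bound - FOOM.
print(total_gain(1.5, 20))
```

The point of the long 0.3 < k < 1 phase, in this picture, is that each cycle of machine-assisted improvement still pays off, but the process needs continued outside (human) input to keep going.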
Moreover, by the time of FOOM, we should be able to use vast numbers of fast "subcritical" AIs (plus weak AIs) as guardians of the process. In fact, by that time, k < 1 AIs might play a pretty important role in the world economy and security, and it does not take too much pattern-recognition power to keep things at bay. (In fact, I believe Eliezer proposes something similar in his thesis, except for the locality issue.)
Eliezer:
"Tim probably read my analysis using the self-optimizing compiler as an example, then forgot that I had analyzed it and thought that he was inventing a crushing objection on his own."
Why do you think it is a crushing objection? I believe Tim is just repeating his favorite theme (which, in fact, I tend to agree with), in which machine-augmented humans build better machines. If you can use automated refactoring to improve the way a compiler works (and today, you often can), that is in fact a pretty cool augmentation of human capabilities. It is recursive FOOM. The only difference between your vision and his is that as long as k < 1 (and perhaps for some time after that point), humans are important FOOM agents. Also, humans are becoming much more capable in the process. For example, a machine-augmented human (think weak AI + direct neural interface and all that cyborging whistles + mind drugs) might be quite likely to follow the FOOM.