private_messaging comments on Thoughts on the Singularity Institute (SI) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Reading Holden's transcript with Jaan Tallinn (trying to go over the whole thing before writing a response, due to having done Julia's Combat Reflexes unit at Minicamp and realizing that the counter-mantra 'If you respond too fast you may lose useful information' was highly applicable to Holden's opinions about charities), I came across the following paragraph:
I've been previously asked to evaluate this possibility a few times, but I think the last time I did was several years ago, and when I re-evaluated it today I noticed that my evaluation had substantially changed in the interim due to further belief shifts in the direction of "Intelligence is not as computationally expensive as it looks" - constructing a non-self-modifying predictive super-human intelligence might be possible on the grounds that human brains are just that weak. It would still require a great feat of cleanly designed, strong-understanding-math-based AI - Holden seems to think this sort of development would happen naturally with the sort of AGI researchers we have nowadays, and I wish he'd spent a few years arguing with some of them to get a better picture of how unlikely this is. Even if you write and run algorithms and they're not self-modifying, you're still applying optimization criteria to things like "have the humans understand you", and doing inductive learning has a certain inherent degree of program-creation to it. You would need to have done a lot of "the sort of thinking you do for Friendly AI" to set out to create such an Oracle and not have it kill your planet.
Nonetheless, I think after further consideration I would end up substantially increasing my expectation that if you have some moderately competent Friendly AI researchers, they would apply their skills to create a (non-self-modifying) (but still cleanly designed) Oracle AI first - that this would be permitted by the true values of "required computing power" and "inherent difficulty of solving problem directly", and desirable for reasons I haven't yet thought through in much detail - and so by Conservation of Expected Evidence I am executing that update now.
Flagging and posting now so that the issue doesn't drop off my radar.
"Intelligence is not as computationally expensive as it looks"
How sure are you that your intuitions do not arise from the typical mind fallacy, i.e. from attributing the great discoveries and inventions of mankind to the same processes that you feel running in your own skull, processes which have not yet resulted in any great novel discoveries or inventions that I know of?
I know this sounds like an ad hominem, but since your intuitions are significantly influenced by your internal understanding of your own thought process, your self-esteem stands hostage to be shot through by many of the possible counter-arguments and corrections. (Self-esteem is one hell of a bulletproof hostage, though, and tends to act more as a shield for bad beliefs.)
There are a lot of engineers working on software for solving engineering problems, including software that generates and tests possible designs and looks for ways to make better computers. Your philosophy-based, natural-language-defined, in-imagination-running Oracle AI may have to be very carefully specified so that it does not kill imaginary mankind, and it may well be very difficult to build such a specification. Just don't confuse it with software written to solve definable problems.
Ultimately, figuring out how to make a better microchip involves a lot of testing of various designs; that's how humans do it, and that's how tools do it. I don't know how you think it is done. The performance is the result of a very complex function of the design. To build a design that performs well, you need to invert this ultra-complicated function, which is done by a mixture of analytical methods and iteration over possible input values, and unless P=NP we have very little reason to expect any fundamentally better solutions (and even if P=NP, there may still not be any). This means the AGI won't have any edge over practical software, and won't out-foom it.
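The "mixture of analytical methods and iteration" described above can be sketched as generate-and-test search: propose a small change to the design, evaluate it, keep it only if it scores better. A minimal, purely illustrative sketch (the `performance` function here is a hypothetical two-parameter toy objective; a real chip-design objective would be vastly more complex and expensive to evaluate):

```python
import random

random.seed(0)  # deterministic run for illustration

def performance(design):
    """Hypothetical stand-in for a complex design-quality function.
    Peaks at design = (3.0, -1.0) with a best score of 0."""
    x, y = design
    return -(x - 3.0) ** 2 - (y + 1.0) ** 2

def hill_climb(start, steps=5000, step_size=0.1):
    """Generate-and-test iteration: mutate the current best design
    slightly and keep the mutation only if it tests better."""
    best = start
    best_score = performance(best)
    for _ in range(steps):
        candidate = tuple(v + random.uniform(-step_size, step_size)
                          for v in best)
        score = performance(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

best, score = hill_climb((0.0, 0.0))
```

The point of the sketch is that the search makes no use of any special insight into the objective: it just iterates candidate evaluations, which is why (on this view) a smarter searcher gains little over ordinary optimization software when the function is genuinely hard to invert.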