Kaj_Sotala comments on Risks from AI and Charitable Giving - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Good post.
You seem to focus excessively on recursive self-improvement to the exclusion of other hard takeoff scenarios, however. As Eliezer noted:
That post mentions several other hard takeoff scenarios, e.g.:
(Also a couple more, but I found those a little vague and couldn't come up with a good way to summarize them in a few sentences.)
Thanks. I will review those scenarios. Just some quick thoughts:
At first sight this sounds suspicious. The genetic difference between a chimp and a human amounts to roughly 40–45 million bases that are present in humans and missing from chimps - and that figure doesn't even account for the differences in gene expression between the two species. So it's not as if you add a tiny bit of code and get a super-apish intelligence.
The argument from the gap between chimpanzees and humans is interesting, but it cannot be used to extrapolate onwards from human general intelligence. It is pure speculation that humans are not Turing complete and that there are levels above our own. That chimpanzees exist, and humans exist, is no proof of the existence of anything that bears, in any relevant respect, the same relationship to a human that a human bears to a chimpanzee.
Humans can process long chains of inferences with the help of tools. The important question is whether incorporating those tools into some sort of self-perception, some sort of guiding agency, is vastly superior to humans using a combination of tools and expert systems.
In other words, it is not clear that there does exist a class of problems that is solvable by Turing machines in general, but not by a combination of humans and expert systems.
If an AI that we invented can hold a complex model in its mind, then we can also simulate such a model by making use of expert systems. Being consciously aware of the model makes no great difference, in principle, to what you can do with it.
Here is what Greg Egan has to say about this in particular:
The quote from Egan would seem to imply that for (literate) humans, too, working memory differences are insignificant: anyone can just use pen and paper to increase their effective working memory. But human intelligence differences do seem to have a major impact on e.g. job performance and life outcomes (e.g. Gottfredson 1997), and human intelligence seems to be very closely linked to - though admittedly not identical with - working memory measures (e.g. Oberauer et al. 2005, Oberauer et al. 2008).
I believe that what he is suggesting is that once you reach a certain plateau, intelligence hits diminishing returns. Would Marilyn vos Savant be proportionally more likely to take over the world, if she tried to, than a 115 IQ individual?
Some anecdotal evidence:
Is there evidence that a higher IQ is useful beyond a certain level? The question is not just whether it is useful, but whether it would be worth the effort it would take to amplify your intelligence to that point, given that your goal was to overpower lower-IQ agents. Would a change in personality, more data, a new pair of sensors, or some weapons maybe be more useful? If so, would an expected utility maximizer pursue intelligence amplification?
(As a marginal note: bigger is not necessarily better.)
Sure. She's demonstrated that she can communicate successfully with millions and handle her own affairs quite successfully, generally winning at life. This is comparable to, say, Ronald Reagan's qualifications. I'd be quite unworried in asserting she'd be more likely to take over the world than a baseline 115 IQ person.
I upvoted for the anecdote, but remember that you're referring to von Neumann, who invented both the basic architecture of computers and the self-replicating machine. I am not qualified to judge whether or not those are as original as relativity, but they are certainly big.
Surely humans are Turing complete. I don't think anybody disputes that.
We know that capabilities extend above our own in all the realms where machines already outstrip us - and we have a pretty good idea of what greater speed, better memory and more memory would do.
Agree with your basic point, but a nit-pick: limited memory and speed (the heat death of the universe, etc.) put many neat Turing-machine computations out of reach of humans (or other systems in our world), barring new physics.
Sure: I meant in the sense of the "colloquial usage" here:
Ah, thanks for making this point - I notice I've recently been treating "recursive self-improvement" and "hard takeoff" as more or less interchangeable concepts. I don't think I need to update on this, but I'll try to use my language more carefully at least.