Mark_Friedenbach comments on Leaving LessWrong for a more rational life - Less Wrong Discussion
I generally agree with your position on the Sequences, but it seems to me that it is possible to hang around this website and have meaningful discussions without worshiping the Sequences or Eliezer Yudkowsky. At least it works for me.
As for being a highly involved/high status member of the community, especially the offline one, I don't know.
Anyway, regarding the point about super-intelligence that you raised, I charitably interpret the position of the AI-risk advocates not as the claim that super-intelligence would be in principle outside the scope of human scientific inquiry, but as the claim that a super-intelligent agent would be more efficient at understanding humans than humans would be at understanding it, giving the super-intelligent agent an edge over humans.
I think that the AI-risk advocates tend to exaggerate various elements of their analysis: they probably underestimate the time to human-level AI and the time to super-human AI, and they may overestimate the speed and upper bounds of recursive self-improvement (their core arguments based on exponential growth seem, at best, unsupported).
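To make that concrete, here is a toy simulation of the growth argument. This is entirely my own sketch, not anything the AI-risk literature actually specifies: the `trajectory` function, the step `rate`, and the returns exponent `r` are illustrative assumptions. The "explosion" appears or disappears depending on a returns-to-improvement parameter that nobody has measured:

```python
# Toy model of capability growth under recursive self-improvement.
# NOTE: my own illustrative construction; r, rate, and the step count
# are assumptions, not parameters from any published analysis.

def trajectory(r, steps=50, c=1.0, rate=0.1):
    """Capability after repeated self-improvement steps: c += rate * c**r."""
    history = []
    for _ in range(steps):
        c += rate * c ** r
        history.append(c)
    return history

explosive = trajectory(r=1.0)  # compounding returns -> exponential growth
modest = trajectory(r=0.5)     # diminishing returns -> roughly quadratic growth

print(f"after 50 steps: r=1.0 gives {explosive[-1]:.1f}x, r=0.5 gives {modest[-1]:.1f}x")
```

With r = 1.0 capability grows about 117x in fifty steps; with r = 0.5 it grows about 13x. The exponential outcome is just the r = 1 assumption echoed back, which is why I say the argument is, at best, unsupported.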
Moreover, it seems that they tend to conflate super-intelligence with a sort of near-omniscience:
They seem to assume that a super-intelligent agent will be a near-optimal Bayesian reasoner with an extremely strong prior that will allow it to gain a very accurate model of the world, including all the nuances of human psychology, from a very small amount of observational evidence and few or no interventional experiments. Recent discussion here.
Maybe this is the community bias that you were talking about, the over-reliance on abstract thought rather than evidence, projected onto a hypothetical future AI.
It seems dubious to me that this kind of extreme inference is even physically possible, and if it is, we are certainly not anywhere close to implementing it. All the recent advances in machine learning, for instance, rely on processing very large datasets.
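To give a sense of scale: even in the simplest possible setting, an ideal Bayesian estimating a single coin-bias parameter, the posterior only gets sharp from either lots of data or a prior that was nearly certain to begin with. A back-of-the-envelope sketch (the Beta-Bernoulli setup and the `posterior_sd` helper are my own illustrative choices, not anyone's actual model of machine inference):

```python
from math import sqrt

# Back-of-the-envelope Beta-Bernoulli update: an ideal Bayesian
# estimates a coin's bias from a handful of flips. My own illustrative
# setup, not a model anyone has proposed for AI.

def posterior_sd(prior_a, prior_b, heads, tails):
    """Std. dev. of the Beta posterior after observing Bernoulli data."""
    a, b = prior_a + heads, prior_b + tails
    return sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

# Uniform prior + 10 observations: the posterior is still wide.
print(posterior_sd(1, 1, 7, 3))      # ~0.13 -- far from a sharp estimate

# Same 10 observations with an extremely confident prior: sharp, but
# the sharpness came almost entirely from the prior, not the data.
print(posterior_sd(500, 500, 7, 3))  # ~0.016
```

Scaling from one binary parameter up to "all the nuances of human psychology" from a very small amount of observational evidence is exactly the step that the near-omniscience picture glosses over.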
Anyway, as much as they exaggerate the magnitude and urgency of the issue, I think that the AI-risk advocates have a point when they claim that keeping a system much more intelligent than ourselves under control would be a non-trivial problem.
You nailed it. (Your other points too.)
The problem here is that intelligence is not some linear scale, even general intelligence. We human beings are insanely optimized for social intelligence in a way that is not easy for a machine to learn to replicate, especially without detection. It is possible for a general AI to be powerful enough to provide meaningful acceleration of molecular nanotechnology and medical science research whilst being utterly befuddled by social conventions and generally how humans think, simply because it was not programmed for social intelligence.
There is however a substantial difference between a non-trivial problem and an impossible problem. Non-trivial we can work with. I solve non-trivial problems for a living. You solve a non-trivial problem by hacking at it repeatedly until it breaks into components that are themselves well enough understood to be trivial problems. It takes a lot of work, and the solution is simply to do a lot of work.
But in my experience the AI-risk advocates claim that safe / controlled UFAI is an impossibility. You can't solve an impossibility! What's more, in that frame of mind any work done towards making AGI is risk-increasing. Thus people are actively persuaded NOT to work on artificial intelligence, and instead to work in fields of basic mathematics which are at this time too basic or speculative to say for certain whether they would have a part in making a safe or controllable AGI.
So smart people who could be contributing to an AGI project are now off fiddling with basic mathematics research on chalkboards instead. That is, in the view of someone who believes safe / controllable UFAI is a non-trivial but possible mechanism to accelerate the arrival of life-saving anti-aging technologies, a humanitarian disaster.
Agree.
I think that since many AI-risk advocates have little or no experience in computer science, and specifically in AI research, they tend to anthropomorphize AI to some extent. They get that an AI could have goals different from human goals, but they seem to think that its intelligence would be more or less like human intelligence, only faster and with more memory. In particular, they assume that an AI will easily develop a theory of mind and social intelligence from little human interaction.
I think they used to claim that safe AGI was pretty much an impossibility unless they were the ones who built it, so gib monies plox!
Anyway, it seems that in recent times they have taken a somewhat less heavy-handed approach.