XiXiDu comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Thanks to Kevin for the original pointer.
Key points, some of which I already mentioned in the post Should I believe what the SIAI claims?:
[...]
My comment from the discussion post:
Should I believe what the SIAI claims? I'm still not sure, although I have learnt some things since that post. What I do know is how seriously people here take this stuff. Also read the comments on this post for how people associated with LW overreact to completely harmless AI research.
The issues with potential risks posed by unfriendly AI are numerous. The only organisation that takes those issues seriously is the SIAI, as its name already implies. But I believe most people simply don't see a difference between the SIAI and one or a few highly intelligent people telling them that a particle collider could destroy the world while all experts working directly on it claim there's no risk. I understand the argument that if the whole world is at stake, the stakes outweigh the low probability of the event. But do they? I think it is completely justified to have at least one organisation working on FAI, but is the risk as serious as portrayed and perceived within the SIAI?

Right now, if I had to hazard a guess, I'd say that it will probably be a gradual development of many exponential growth phases. That is, we'll have a conceptual revolution and optimize it very rapidly; then the next revolution will be necessary. Sure, I might be wrong there, as the plateau argument about recursive self-improvement might hold. But even if that is true, I think we'll need at least two paradigm-shattering conceptual revolutions before we get there. What does that mean, though? How quickly can such revolutions happen? I'm guessing this could take a long time, if it isn't completely impossible.

That is, if we are not the equivalent of a Universal Turing Machine of abstract reasoning. Just imagine we are merely better chimps: maybe it doesn't matter if a billion humans do science for a million years, we still won't come up with the AI equivalent of Shakespeare's plays. That would mean we are doomed to evolve slowly, to tweak ourselves incrementally into a posthuman state. Yet there are also other possibilities: AGI might, for example, be a gradual development over many centuries, and human intelligence might turn out to be close to the maximum.
There is so much we do not know yet (http://bit.ly/ckeQo6). Take, for example, a constrained, well-understood domain like Go: AI still performs awfully at it. Or take P vs. NP.
But that is just my highly uneducated guess, one I never seriously contemplated. I believe that for most academics the problem here is mainly the missing proof of concept. Missing evidence. They are not the kind of people who would wait before testing the first nuke because it might ignite the atmosphere. If there's no good evidence, a position supported by years' worth of disjunctive lines of reasoning won't convince them either.
The paperclip maximizer (http://wiki.lesswrong.com/wiki/Paperclip_maximizer) scenario needs serious consideration. But given what needs to be done, and what insights may be necessary to create something creative that is effective in the real world, it's hard to believe that this is a serious risk. It's similar to the grey goo scenario from nanotechnology: it will likely be a gradual development, and once it becomes sophisticated enough to pose a serious risk, it will also be understood and controlled by countermeasures.
I also wonder why we don't see any alien paperclip maximizers out there. If there are any in the observable universe, our FAI will lose anyway, since it is far behind in its development.
I suppose the actual risk could be taking a mere idea too seriously.
Indeed. Companies illustrate this. They are huge, superhuman powerful entities too.
A major upvote for this. The SIAI should create a sister organization to publicize the logical (and exceptionally dangerous) conclusion of the course that corporations are currently on. We have created powerful, superhuman entities with the sole top-level goal (required by law for for-profit corporations) of "optimize money acquisition and retention". My personal and professional opinion is that this is a far more immediate (and greater) risk than unfriendly AI.
Companies are probably the number 1 bet for the type of organisation most likely to produce machine intelligence - with number 2 being governments. So, there's a good chance that early machine intelligences will be embedded into the infrastructure of companies. So, these issues are probably linked.
Money is the nearest global equivalent of "utility". Law-abiding maximisation of it does not seem unreasonable. There are some problems where it is difficult to measure and price things, though.
On the other hand, maximization of money, including accurate terms for the expected financial costs of legal penalties, can cause remarkably unreasonable behavior. As was repeated recently, "It's hard for the idea of an agent with different terminal values to really sink in", in particular "something that could result in powerful minds that actually don't care about morality". A business that actually behaved as a pure profit maximizer would be such an entity.
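A toy sketch of that point, with purely hypothetical numbers: an agent that prices legal penalties as just another expected cost will happily choose a harmful action whenever the expected fine is smaller than the expected gain, because morality never appears as a term in its objective.

```python
# Toy profit maximizer (illustrative numbers, not a model of any real firm).
# Legal penalties enter only as expected costs; there is no moral term.

def expected_profit(gain, penalty, p_caught):
    """Expected value of an action for an agent that treats fines as costs."""
    return gain - penalty * p_caught

# Harmful option: $10M gain, $50M fine, but only a 10% chance of being caught.
harmful = expected_profit(10_000_000, 50_000_000, 0.10)

# Compliant option: $3M gain, no penalty exposure.
compliant = expected_profit(3_000_000, 0, 0.0)

# A pure maximizer picks whichever number is larger -- here, the harmful one.
choice = "harmful" if harmful > compliant else "compliant"
print(choice, harmful, compliant)
```

The "unreasonable behavior" is just arithmetic: as long as `penalty * p_caught < gain`, the fine functions as a cost of doing business rather than a deterrent.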
Morality is represented by legal constraints. That results in a "negative" morality, and, arguably, not a very good one.
Fortunately companies are also subject to many of the same forces that produce cooperation and niceness in the rest of biology - including reputations, reciprocal altruism and kin selection.
Algorithmic trading is indeed an example of the kind of risks posed by complicated (unmanageable) systems, but it also shows that we evolve our security measures with each small-scale catastrophe. There is no example yet of an existential risk from truly runaway technological development, although many people believe such risks exist, e.g. nuclear weapons. Unstoppable recursive self-improvement is just a hypothesis that you shouldn't take as a foundation for a whole lot of further inductions.
Dispelling Stupid Myths About Nuclear War
Apparently I don't understand what you mean by "serious risk". (Before I pick this apart, by the way, I agree that we should try not to Godwin people -- because I think it doesn't work.)
I consider it likely that AGI will take a long time to develop. A rational species would likely figure out the flaw and take corrective steps by then. But look around you. Nearly all of us seem to agree, judging by our stated preferences, that we should try to prevent an asteroid strike that might destroy humanity, yet as far as I can tell we haven't started. No doubt you can think of other examples: the evidence says that if we put off FAI theory 'until we need it', we could easily put it off longer than that.