timtyler comments on Ben Goertzel: The Singularity Institute's Scary Idea (and Why I Don't Buy It) - Less Wrong

32 Post author: ciphergoth 30 October 2010 09:31AM




Comment author: timtyler 30 October 2010 10:53:50AM 5 points [-]

It will likely be a gradual development: by the time it becomes sophisticated enough to pose a serious risk, it will also be understood and controlled by countermeasures.

Indeed. Companies illustrate this. They are huge, superhuman powerful entities too.

Comment author: mwaser 30 October 2010 01:55:18PM 4 points [-]

A major upvote for this. The SIAI should create a sister organization to publicize the logical (and exceptionally dangerous) conclusion of the course that corporations are currently on. We have created powerful, superhuman entities with the sole top-level goal (required by LAW in for-profit corporations) of "Optimize money acquisition and retention". My personal and professional opinion is that this is a far more immediate (and greater) risk than UnFriendly AI.

Comment author: timtyler 30 October 2010 02:24:48PM *  5 points [-]

Companies are probably the number 1 bet for the type of organisation most likely to produce machine intelligence, with governments at number 2. So there's a good chance that early machine intelligences will be embedded in the infrastructure of companies, and these issues are probably linked.

Money is the nearest global equivalent of "utility". Law-abiding maximisation of it does not seem unreasonable. There are some problems where it is difficult to measure and price things, though.

Comment author: soreff 31 October 2010 04:49:39AM *  4 points [-]

Money is the nearest global equivalent of "utility". Law-abiding maximisation of it does not seem unreasonable.

On the other hand, maximization of money, including accurate terms for the expected financial costs of legal penalties, can cause remarkably unreasonable behavior. As was repeated recently, "It's hard for the idea of an agent with different terminal values to really sink in", in particular "something that could result in powerful minds that actually don't care about morality". A business that actually behaved as a pure profit maximizer would be such an entity.
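The point can be made concrete with a toy sketch (hypothetical and illustrative only; the action names and numbers are invented): a pure profit maximizer that folds legal penalties into its objective as just another expected cost, with no term for harm done to third parties.

```python
def expected_profit(action):
    """Expected profit = revenue - costs - P(caught) * fine.
    Note: harm to third parties appears nowhere in this objective."""
    return (action["revenue"]
            - action["cost"]
            - action["p_caught"] * action["fine"])

# Two hypothetical actions; "pollute" causes large external harm,
# which the maximizer simply never looks at.
actions = [
    {"name": "comply",  "revenue": 100, "cost": 60,
     "p_caught": 0.0, "fine": 0,   "harm": 0},
    {"name": "pollute", "revenue": 100, "cost": 20,
     "p_caught": 0.1, "fine": 300, "harm": 500},
]

best = max(actions, key=expected_profit)
# Expected profits: comply = 40, pollute = 50, so the maximizer
# chooses "pollute" - to it, the penalty is merely a price.
```

Under these made-up numbers the agent cheerfully selects the harmful action, because the fine discounted by the chance of being caught is smaller than the cost saving; nothing in its terminal values registers the harm itself.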

Comment author: timtyler 31 October 2010 07:44:55AM *  0 points [-]

Morality is represented by legal constraints. That results in a "negative" morality, and - arguably - not a very good one.

Fortunately companies are also subject to many of the same forces that produce cooperation and niceness in the rest of biology - including reputations, reciprocal altruism and kin selection.

Comment author: XiXiDu 30 October 2010 11:58:04AM 0 points [-]

Algorithmic trading is indeed an example of the kind of risks posed by complicated (unmanageable) systems, but it also shows that we evolve our security measures with each small-scale catastrophe. There is no example yet of an existential risk from true runaway technological development, although many people believe there are such risks, e.g. nuclear weapons. Unstoppable recursive self-improvement is just a hypothesis that you shouldn't take as a foundation for a whole lot of further inductions.

Dispelling Stupid Myths About Nuclear War

An all-out nuclear war between Russia and the United States would be the worst catastrophe in history, a tragedy so huge it is difficult to comprehend. Even so, it would be far from the end of human life on earth. The dangers from nuclear weapons have been distorted and exaggerated, for varied reasons. These exaggerations have become demoralizing myths, believed by millions of Americans.