jimrandomh comments on Should I believe what the SIAI claims? - Less Wrong

23 points · Post author: XiXiDu · 12 August 2010 02:33PM


Comment author: jimrandomh 30 December 2010 06:10:11PM 4 points

That would surely be a very good argument if I were able to judge it. But can intelligence be captured by a discrete algorithm, or is it modular and therefore not subject to overall improvements that would affect intelligence itself as a meta-solution?

This seems backwards - if intelligence is modular, that makes it more likely to be subject to overall improvements, since we can upgrade the modules one at a time. I'd also like to point out that we currently have two meta-algorithms, bagging and boosting, which can improve the performance of any other machine learning algorithm at the cost of using more CPU time.
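To make the bagging meta-algorithm concrete, here is a minimal self-contained sketch (not from the comment above; the toy one-dimensional "stump" base learner and all names are illustrative assumptions). Any base learner could be dropped in; the point is that bagging wraps it without knowing its internals, trading extra CPU time for a more stable vote.

```python
import random
import statistics

def train_stump(points):
    """Fit a deliberately weak 1-D classifier: predict 1 if x >= threshold,
    where the threshold is the midpoint between the two class means."""
    xs0 = [x for x, y in points if y == 0]
    xs1 = [x for x, y in points if y == 1]
    threshold = (sum(xs0) / len(xs0) + sum(xs1) / len(xs1)) / 2
    return lambda x: 1 if x >= threshold else 0

def bagged(points, n_learners=25, seed=0):
    """Bagging: train each copy of the base learner on a bootstrap resample
    of the data, then predict by majority vote over all copies."""
    rng = random.Random(seed)
    learners = []
    for _ in range(n_learners):
        sample = [rng.choice(points) for _ in points]
        if len({y for _, y in sample}) < 2:
            continue  # skip degenerate resamples that lost a whole class
        learners.append(train_stump(sample))
    return lambda x: statistics.mode(learner(x) for learner in learners)
```

Boosting works similarly as a wrapper, except that successive copies of the base learner are trained with more weight on the examples the earlier copies got wrong, rather than on independent resamples.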

It seems to me that, if we reach a point where we can't improve an intelligence any further, it won't be because it's fundamentally impossible to improve, but because we've hit diminishing returns. And there's really no way to know in advance where the point of diminishing returns will be. Maybe there's one breakthrough point, after which it's easy until you get to the intelligence of an average human, then it's hard again. Maybe it doesn't become difficult until after the AI's smart enough to remake the world. Maybe the improvement is gradual the whole way up.

But we do know one thing. If an AI is at least as smart as an average human programmer, then if it chooses to do so, it can clone itself onto a large fraction of the computer hardware in the world, in weeks at the slowest, but more likely in hours. We know it can do this because human-written computer viruses do it routinely, despite our best efforts to stop them. And being cloned millions or billions of times will probably make it smarter, and definitely make it powerful.

In other words, on a fundamental level problems are not solved; solutions are discovered by an evolutionary process. In all discussions I have taken part in so far, 'intelligence' has had a somewhat proactive aftertaste. But nothing genuinely new is ever created deliberately.

In a sense, all thoughts are just the same words and symbols rearranged in different ways. But that is not the type of newness that matters. New software algorithms, concepts, frameworks, and programming languages are created all the time. And one new algorithm might be enough to birth an artificial general intelligence.

Comment author: NancyLebovitz 30 December 2010 06:11:41PM 2 points

But we do know one thing. If an AI is at least as smart as an average human programmer, then if it chooses to do so, it can clone itself onto a large fraction of the computer hardware in the world, in weeks at the slowest, but more likely in hours. We know it can do this because human-written computer viruses do it routinely, despite our best efforts to stop them. And being cloned millions or billions of times will probably make it smarter, and definitely make it powerful.

The AI will be much bigger than a virus. I assume this will make propagation much harder.

Comment author: jimrandomh 30 December 2010 07:01:23PM 2 points

Harder, yes. Much harder, probably not, unless it's on the order of tens of gigabytes; most Internet connections are quite fast.
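A back-of-the-envelope calculation makes the point; the payload sizes and link speeds below are my illustrative assumptions, not figures from the thread.

```python
def transfer_seconds(size_gb, link_mbps):
    """Seconds to move `size_gb` gigabytes over a `link_mbps` megabit/s link."""
    bits = size_gb * 8e9  # 1 GB = 8e9 bits (decimal gigabytes)
    return bits / (link_mbps * 1e6)

# E.g. even a 10 GB image crosses a 100 Mbit/s consumer link in ~13 minutes,
# and a gigabit link in ~80 seconds.
for size_gb in (1, 10, 100):
    for mbps in (10, 100, 1000):
        hours = transfer_seconds(size_gb, mbps) / 3600
        print(f"{size_gb:>4} GB at {mbps:>4} Mbit/s: {hours:.2f} h")
```

Only at tens or hundreds of gigabytes per copy, over slow links, does transfer time itself become a serious obstacle.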

Comment author: timtyler 30 December 2010 07:24:25PM 0 points

And one new algorithm might be enough to birth an artificial general intelligence.

Anything could be possible - though the last 60 years of the machine intelligence field are far more evocative of the "blood-out-of-a-stone" model of progress.

Comment author: timtyler 30 December 2010 07:11:21PM -1 points

If an AI is at least as smart as an average human programmer, then if it chooses to do so, it can clone itself onto a large fraction of the computer hardware in the world, in weeks at the slowest, but more likely in hours. We know it can do this because human-written computer viruses do it routinely, despite our best efforts to stop them. And being cloned millions or billions of times will probably make it smarter, and definitely make it powerful.

Smart human programmers can make dark nets too. Relatively few of them want to trash their own reputations and appear in the cross-hairs of the world's security services and law-enforcement agencies, though.

Comment author: jimrandomh 30 December 2010 07:49:47PM 1 point

Reputation and law enforcement are only a deterrent to the mass-copies-on-the-Internet play if the copies are needed long-term (i.e., for more than a few months), because in the short term, with a little more effort, the fact that an AI was involved at all could be kept hidden.

Rather than copy itself immediately, the AI would first create a botnet that does nothing but spread itself and accept commands, like any other human-made botnet. This part is inherently anonymous; on the occasions where botnet owners do get caught, it's because they try to sell use of them for money, which is harder to hide. Then it can pick and choose which computers to use for computation, and exclude those that security researchers might be watching. For added deniability, it could let a security researcher catch it using compromised hosts for password cracking, to explain the CPU usage.

Maybe the state of computer security will be better in 20 years, and this won't be as much of a risk anymore. I certainly hope so. But we can't count on it.