By my estimation, all it needs to do is figure out how to hack a bank. If it can't hack one bank, it can try any other bank it has access to, since almost all banks hold more than 100 USD. It could even find and spread a keylogger to capture someone's credit card information.
Such techniques (which are repeatable within a very short timespan, faster than humans can react) seem much more reliable than using nanotech or starting a nuclear war. I don't think that distracting humans would really improve its chances of success, because it's incredibly doubtful that humans could react quickly enough to so many simultaneous cyber-attacks.
Possible, true, but the chances of this happening seem uber-low.
At the recent London meet-up, someone (I'm afraid I can't remember who) suggested that one might be able to solve the Friendly AI problem by building an AI whose concerns are limited to some small geographical area, and which doesn't give two hoots about what happens outside that area. Ciphergoth pointed out that this would probably result in the AI converting the rest of the universe into a factory to make its small area more awesome. In the process, he mentioned that you can make a "fun game" out of figuring out ways in which proposed utility functions for Friendly AIs can go horribly wrong. I propose that we play.
Here's the game: reply to this post with proposed utility functions, stated as formally, or at least as precisely, as you can manage; follow-up comments then explain why a superhuman intelligence built with that particular utility function would do things that turn out to be hideously undesirable.
There are three reasons I suggest playing this game. In descending order of importance, they are: