Last night I found myself thinking, "Well, suppose there's no Singularity coming any time soon. The FAI project will still have gotten a bunch of nerds working together on a project aimed at the benefit of all humanity — including formalizing a lot of ethics — who might otherwise have been working on weapons, wireheading, or something else awful. That's gotta be a good thing, right?"
Then I realized this sounds like rationalization.
Which got me to thinking about what my concerns are about this stuff.
My biggest AI risk worries right now are more immediate than paperclip optimizers. They're wealth optimizers, profit optimizers; probably extrapolations of current HFT systems. The goal of such a system isn't even to make its owners happy — just to make them rich — and it certainly doesn't care about anyone else. It may not even have beliefs about humans, just about flows of capital and information.
Even assuming that such systems believe that crashing the economy would be bad for their owners, I expect that for the vast majority of living and potential humans, world dominance by such systems would constitute a Bad Ending.
It does not seem to me that bringing about such a Bad Ending would require self-modifying emergent AI, nor any exotic technology such as computronium; the continuation of current trends would suffice.
I contend that the systems that already exist are a problem.