Well, intelligence in general can be much more alien than this.
Consider an AI that, given any mathematical model of a system and some 'value' metric, finds optimal parameters for an object in that system. E.g. the system could be the Navier-Stokes equations plus a wing, the wing shape could be the parameter, and some combined metric of the wing's drag and lift could be the value to maximize; the AI would do everything necessary, including figuring out how to simulate those equations efficiently.
Or the system could be general relativity and quantum mechanics, the parameter could be a theory-of-everything equation, and some metric of inelegance would be minimized.
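The pattern above, an optimizer handed a model and a value metric, can be sketched in a few lines. This is a toy stand-in, not a real fluid solver: the "wing" is a hypothetical quadratic lift/drag model I made up for illustration, and the optimizer is plain random search. The point is that the optimizer knows nothing about wings, only the value function and the parameter bounds.

```python
import random

def wing_value(params):
    """Hypothetical stand-in model: value = lift - drag for a wing
    parameterized by (camber, thickness). Not Navier-Stokes."""
    camber, thickness = params
    lift = 10 * camber - 4 * camber ** 2            # diminishing returns
    drag = thickness ** 2 + 2 * camber * thickness  # thicker wing, more drag
    return lift - drag

def random_search(value, bounds, iters=5000, seed=0):
    """Generic narrow optimizer: maximizes any value function over a
    box of parameter bounds by sampling uniformly at random."""
    rng = random.Random(seed)
    best_params, best_value = None, float("-inf")
    for _ in range(iters):
        params = [rng.uniform(lo, hi) for lo, hi in bounds]
        v = value(params)
        if v > best_value:
            best_params, best_value = params, v
    return best_params, best_value

params, value = random_search(wing_value, bounds=[(0, 2), (0, 1)])
```

For this toy model the analytic optimum is camber = 1.25, thickness = 0 (value 6.25), and the search converges near it; a real system would swap `wing_value` for an expensive simulation, which is exactly where "figuring out how to simulate efficiently" matters.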
That's the sort of thing that scientists tend to see as 'intelligent'.
The term 'AI', however, has acquired plenty of connotations from science fiction, where it is very anthropomorphic.
Those are narrow AIs. Their behavior doesn't involve acquiring resources from the outside world and autonomously developing better ways to do that. That's the part that might lead to psychopath-like behavior.
Faith, Hope, and Singularity: Entering the Matrix with New York’s Futurist Set
To my knowledge, LessWrong hasn't received a great deal of media coverage, so I was surprised when I came across an article, via a Facebook friend, that also appeared on the cover of the New York Observer today. I was disappointed upon reading it, however, as I don't think it is an accurate reflection of the community. It certainly doesn't reflect my experience with the LW communities in Toronto and Waterloo.
I thought it would be interesting to see what the broader LessWrong community makes of this article; I think it would make for a good discussion.
Possible conversation topics:
Edit 1: Added some clarification about my view on the article.
Edit 2: Re-added link using “nofollow” attribute.