Here are some arguments against AI x-risk positions from expert sources rather than the popular media:
http://www.kurzweilai.net/superintelligence-fears-promises-and-potentials
http://time.com/3641921/dont-fear-artificial-intelligence/
In any case, I think you have unnecessarily limited yourself to viewpoints expressed in media outlets that tend to act as echo chambers. What a bunch of talking heads say about a technical question is neither very interesting nor very relevant.
The Time article doesn't say anything interesting.
Goertzel's article (the first link you posted) is worth reading, although about half of it doesn't actually argue against AI risk, and the part that does seems obviously flawed to me. Even so, if more LessWrongers take the time to read it, I would enjoy discussing the details, particularly his conception of AI architectures that aren't goal-driven.
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check whether there is an active Open Thread before posting a new one. (Check immediately before; refresh the list-of-threads page right before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.