I think it would actually be helpful if researchers ran more experiments with AGI agents, showing what could go wrong and how to deal with such error conditions. I don't think the "social sciences" approach to that works.
I didn't read it, but I heard that Elon Musk has been badly influenced by it. I know his papers prior to the book, and I've taken a look at its contents, so I know the material being discussed. I think he is vastly exaggerating the risks of AI technology. AI technology will be as pervasive as the internet. It is a very spook/military-like mindset to believe that it will be owned only by a few powerful entities who will wield it to dominate the world, or that the developers will be so extremely ignorant that they will have AI agents escaping their labs and start kil...
Confidential stuff; it could be an army of 1000 hamsters. :) To be honest, I don't think teams larger than 5-6 people are good for this kind of work. But please note that we are doing absolutely nothing that is dangerous in the slightest. It is a tool, not even an agent. That said, I will be working on AGI agent code as soon as we finish the next version of the "kernel", to demonstrate how well our code can be applied to robotics problems. Demo or die.
Well, achieving better-than-human performance on a sufficiently wide benchmark. Preparing that benchmark is almost as hard as writing the code, it seems. Of course, any such estimates must be taken with a grain of salt, but I think that conceptually solid AGI projects (including OpenCog) have a significant chance by that time, although I have previously argued that neuromorphic approaches are likely to succeed by 2030 at the latest.
We believe we can achieve trans-sapient performance by 2018, so he is not that far off the mark. But the dangers as such are highly overblown, exaggerated, pseudo-scientific fears, as always.
If human-level is defined as "able to solve the same set of problems that a human can within the same time", I don't think there would be the problem that you mention. The whole purpose of the "human-level" adjective, as far as I can tell, is to avoid the condition that the AI architecture in question is similar to the human brain in any way whatsoever.
Consequently, the set of human-level AIs is much larger than the set of human-level, human-like AIs.
Thanks for the nice interview, Alexander. I'm Eray Ozkural, by the way; if you have any further questions, I would love to answer them.
I actually think that SIAI is serving a useful purpose when they are highlighting the importance of ethics in AI research and computer science in general.
Most engineers (whether because we are slightly autistic or not, I do not know) have little or no interest in the ethical consequences of their work. I have met many good engineers who work for military firms (firms that thrive on easy money from the military), and not once ...
Subjective experience isn't limited to sensory experience; a headache, or any feeling such as happiness arising without any sensory cause, would also count. The idea is that you can trace most of those to electrical/biochemical states. That might be why some drugs can make you feel happy, and how anesthetics work!
There are in fact some plausible scientific hypotheses that try to isolate particular physical states associated with "qualia". Even without giving references to those, obviously, as I'm sure you'll all agree, there is no reason to debate the truth of physicalism here.
The mentioned approach is probably bogus, and seems to be a rip-off of Marvin Minsky's older A-B brain ideas in "The Society of Mind". I wish I were a "cognitive scientist"; it would be so much easier to publish!
However, needless to say, any such hypothesis must be founded...
It's also like posting an article about creationism on an evolution forum. Or an article about how we need to (raise/lower) the minimum wage to a (libertarian/social-democrat) forum. Et cetera, et cetera, et cetera.