Comment author: Artaxerxes 17 November 2014 04:07:29AM 5 points [-]

Well, his comment was deleted, possibly by him, so we should take that into account - maybe he thought he was being a bit overly Cassandra-like too.

The other thing to remember is that Musk's comments reach a slightly different audience from the usual one where AI risk is concerned. So it's at least somewhat relevant to see the perspective of the person doing the communicating to those people.

Comment author: examachine 18 November 2014 04:30:15AM 0 points [-]

I think it would actually be helpful if researchers ran more experiments with AGI agents showing what could go wrong and how to deal with such error conditions. I don't think the "social sciences" approach to that works.

Comment author: Artaxerxes 17 November 2014 02:54:36PM 1 point [-]

What did you think of Bostrom's recent book?

Comment author: examachine 18 November 2014 04:05:42AM *  -1 points [-]

I didn't read it, but I heard that Elon Musk has been badly influenced by it. I know of Bostrom's papers prior to the book, I've taken a look at its contents, and I know the material being discussed. I think he is vastly exaggerating the risks from AI technology. AI technology will be as pervasive as the internet; it takes a very spook/military-like mindset to believe that it will be owned only by a few powerful entities who will wield it to dominate the world, or that its developers will be so extremely ignorant that AI agents will escape their labs and start killing people. Those are merely bad science fiction scenarios, like the ones in Hollywood movies; it's not even good science fiction, because he is talking about very improbable events. An engineer who can build an AI smarter than himself probably isn't that stupid or reckless. Terminator/Matrix scenarios won't happen; they will remain in the movies.

Moreover, as a startup person, I think he doesn't understand the computer industry well and fails to see the realistic (not comic-book) applications of AI technology. AGI researchers certainly must do a better job of revealing the future applications. That will help them find better funding, attract public attention, and, of course, obtain public approval.

So let me state it plainly. AI really is the next big thing (after wearables/VR/3D printing, stuff that's already taking off, I would predict). Right now it's like a few years before the Mosaic browser showed up. I think that in AI there will be something for everybody, just like the internet. And Bostrom's fears seem to me completely irrational and unfounded. People should cheer up if they think they can have the first true AI in just 5 years.

Comment author: CarlShulman 17 November 2014 05:37:49PM *  4 points [-]

By "we" do you mean Gök Us Sibernetik Ar & Ge in Turkey? How many people work there?

Comment author: examachine 18 November 2014 03:54:27AM -1 points [-]

That's confidential; it could be an army of 1,000 hamsters. :) To be honest, I don't think teams larger than 5-6 people are good for this kind of work. But please note that we are doing absolutely nothing that is dangerous in the slightest. It is a tool, not even an agent. That said, I will be working on AGI agent code as soon as we finish the next version of the "kernel", to demonstrate how well our code can be applied to robotics problems. Demo or die.

Comment author: Lumifer 17 November 2014 04:20:06PM 6 points [-]

We believe we can achieve trans-sapient performance by 2018

What does "trans-sapient performance" mean?

Comment author: examachine 18 November 2014 03:50:32AM 0 points [-]

Well, achieving better-than-human performance on a sufficiently wide benchmark. Preparing that benchmark is almost as hard as writing the code, it seems. Of course, any such estimate must be taken with a grain of salt, but I think conceptually solid AGI projects (including OpenCog) have a significant chance by that time, although I have previously argued that neuromorphic approaches are likely to succeed by 2030 at the latest.

Comment author: examachine 17 November 2014 02:23:48PM 0 points [-]

We believe we can achieve trans-sapient performance by 2018, so he is not that far off the mark. But the dangers as such are highly overblown, exaggerated, pseudo-scientific fears, as always.

Comment author: Vladimir_Nesov 19 January 2012 06:02:24PM *  4 points [-]

Phrasing "human-level" as "roughly as good as humans at science etc." (in the questions) is incorrect, because it requires the AI to be human-like in their ability. Instead, it should be something like "roughly as good as humans (or better, perhaps unevenly) at science etc.". That parenthetical is important, as it distinguishes the magical matching of multiple abilities to human level, which respondents rightly object to, from a more well-defined lower bound where you require that it's at least as capable.

Comment author: examachine 19 January 2012 09:35:01PM 3 points [-]

If human-level is defined as "able to solve the same set of problems that a human can within the same time", I don't think there would be the problem that you mention. The whole purpose of the "human-level" adjective, as far as I can tell, is to avoid the condition that the AI architecture in question is similar to the human brain in any way whatsoever.

Consequently, the set of human-level AIs is much larger than the set of human-level human-like AIs.
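
To make the set relation explicit, here is a minimal formal sketch of the claim above; the symbols \mathcal{A}, L, and B are introduced purely for illustration and are not part of the original comment.

    % Illustrative only: \mathcal{A} is the set of all possible AI designs,
    % L the human-level ones, B the human-like (brain-like) ones.
    \[ L = \{\, a \in \mathcal{A} : a \text{ solves the same set of problems a human can, within the same time} \,\} \]
    \[ B = \{\, a \in \mathcal{A} : a \text{ has a human-like (brain-like) architecture} \,\} \]
    % Every human-level human-like AI is a human-level AI, but not conversely;
    % the claim is that the inclusion is strict and the gap is large.
    \[ L \cap B \subsetneq L \]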

Comment author: examachine 19 January 2012 09:24:21PM 11 points [-]

Thanks for the nice interview, Alexander. I'm Eray Ozkural, by the way; if you have any further questions, I would love to answer them.

I actually think that SIAI is serving a useful purpose when they are highlighting the importance of ethics in AI research and computer science in general.

Most engineers (whether because we are slightly autistic or not, I do not know) have little or no interest in the ethical consequences of their work. I have met many good engineers who work for military firms (firms that thrive on easy money from the military), and not once have they raised a point about ethics. Nor do data mining or information retrieval researchers seem to have such qualms (except when they are pretending to, at academic conferences). At companies like Facebook, they think they have a right to exploit the data they have collected and use it for all sorts of commercial and police-state purposes. Likewise in AI and robotics, I see people cheering whenever military drones or robots are mentioned, as if the automation of warfare were more civil or somehow better because it is higher technology.

I think that AGI researchers, at least, must understand that they should have no dealings with the military or the government; by such dealings, they may be putting themselves and all of us at risk. Maybe fear tactics will work, I don't know.

On the other hand, I don't think "friendly" AI is such a big concern, for reasons I mention above: artificial persons simply aren't needed. I have heard the argument that "someone will build it sooner or later", but there is no reason that person is going to listen to you. The way I see it, it's better to focus on the technology right now, so we can have a better sense of the applications first.

People seem to think that we should equip robots with fully autonomous AGI. Why is that? People have mentioned to me robotic bartenders, robotic geishas, fire rescuers, and cleaners. Well, is that serious? Do you really want a bartender that can solve general relativity problems while cleaning glasses? It's just nonsense. Or does a fire rescuer really need to think about whether it wants to go on to exterminate the human race after extinguishing the fire? The simple answer is that the people who give these examples are not focusing on the engineering requirements of the applications they have in mind.

Another example: military robots. People think that military robots must have a sense of morality. I ask you, why is it important to have moral individuals in an enterprise that is fundamentally immoral? All war is murder, and I suggest you stay away from the professional murder business. That is, if you have any true sense of morality.

Instead of "friendly", a sense of "benevolence" may instead be thought, and that might make sense from an ethical theory viewpoint. It is possible to formalize some theories of ethics and implement them on an autonomous AI, however, for all the capabilities that autonomous trans-sapient AI's may possess, I think it is not a good idea to let such machines develop into distinctive personalities of their own, or meddle in human affairs. I think there are already too many people on earth, I don't think we need artificial persons. We might need robots, we might need AI's, but not artificial persons, or AI's that will decide instead of us. I prefer that as humans we remain at the helm. That I say with respect to some totalitarian sounding proposals like CEV. In general, I do not think that we need to replace critical decision making with AI's. Give AI's to us scientists and engineers and that shall be enough. For the rest, like replacing corrupt and ineffective politicians, a broken economic system, social injustice, etc., we need human solutions, because ultimately we must replace some sub-standard human models with harmful motivations like greed and superstitious ideas, with better human models that have the intellectual capacity to understand the human condition, science, philosophy, etc., regardless of any progress in AI. :) In the end, there is a pandemic of stupidity and ignorance that we must cure for those social problems, and I doubt we can cure it with an AI vaccine.

Comment author: [deleted] 12 September 2011 03:10:25PM 0 points [-]

There are in fact some plausible scientific hypotheses that try to isolate particular physical states associated with "qualia". Without giving references to those, obviously, as I'm sure you'll all agree, there is no reason to debate the truth of physicalism.

I was looking some things up after you mentioned this, and after reading a bit about it, qualia appear to be extremely similar to sensory memory.

(http://en.wikipedia.org/wiki/Qualia) (http://en.wikipedia.org/wiki/Sensory_memory)

These quotes about them from Wikipedia (with the names removed) seem to do a good job of describing the similarity:

'The information represented in ### is the "raw data" which provides a snapshot of a person's overall sensory experience.'

'Another way of defining ### is as "raw feels." A raw feel is a perception in and of itself, considered entirely in isolation from any effect it might have on behavior and behavioral disposition.'

If you think about this in P-zombie terms, and someone attempts to say "a P-zombie is a person who has sensory memory but not qualia," I'm not sure what the difference between that and a regular person would even be. Either one can call on their sensory memory to say "I am experiencing redness right now, and now I am experiencing my experience of redness," and they would seem to be correct if that is what is in their sensory memory. There doesn't appear to be anything left for qualia to explain, and it feels a lot like the question is dissolved at that point.

Is this approximately correct, or is there something else that qualia attempts to explain that sensory memory doesn't that I'm not perceiving?

Comment author: examachine 14 January 2012 04:34:48PM 0 points [-]

Subjective experience isn't limited to sensory experience; a headache, or any feeling like happiness without any sensory cause, would also count. The idea is that you can trace most of those to electrical/biochemical states. That might be why some drugs can make you feel happy, and how anesthetics work!

Comment author: examachine 09 September 2011 03:09:30PM 2 points [-]

There are in fact some plausible scientific hypotheses that try to isolate particular physical states associated with "qualia". Without giving references to those, obviously, as I'm sure you'll all agree, there is no reason to debate the truth of physicalism.

The mentioned approach is probably bogus, and seems to be a rip-off of Marvin Minsky's older A-B brain ideas in "The Society of Mind". I wish I were a "cognitive scientist"; it would be so much easier to publish!

However, needless to say, any such hypothesis must be founded on the correct philosophical explanation, which is pretty much neurophysiological identity theory. I don't see a need to debate that, either. Debates about dualism and the like are for the weak-minded.

Furthermore, awareness is not quite the same thing as phenomenal consciousness, either. Awareness itself is quite a high-level cognitive function, but a system could have phenomenal consciousness without any discernible perceptual awareness. I suspect that these theories are not sufficiently informed by neuroscience and philosophy, but neither am I going to offer free clues about the solution to that :) For now, let us just say that it is entirely plausible that small nervous systems (like that of an insect), with no possibility of higher-order representations, may still have subjective experience. There is also a hint of anthropocentrism in the cited approach (we're conscious because we can make those higher-order representations...), which I usually think points to the falsehood of a theory of mind (similar errors are often seen on this site as well).

Is Dennett to blame here? I hope not :/ Dennett has many excellent ideas, but his approach to consciousness may push people the wrong way (as it has some flavor of behaviorism, which is not the most advanced view).