All of examachine's Comments + Replies

Manfred110

It's also like posting an article about creationism on an evolution forum. Or an article about how we need to (raise/lower) the minimum wage to a (libertarian/social-democrat) forum. Et cetera, et cetera, et cetera.

4JoshuaZ
I think gjm responded pretty effectively, so I'm just going to note that it really isn't helpful, if you want to have a dialogue with other humans, to spend your time insulting them. It makes them less likely to listen, it makes one less likely to listen oneself (since one sets up a mental block where it is cognitively unpleasant to admit one was wrong), and it makes bystanders who are reading less likely to take your ideas seriously. By the way, Eray, you claimed back last November here that 2018 was a reasonable target for "trans-sapient" entities. Do you still stand by that?
6gjm
For sure. But fully autonomous agents are a goal a lot of people will surely be working towards, no? I don't think anyone is claiming "every AI project is dangerous". They are claiming something more like "AI with the ability to do pretty much all the things human minds do is dangerous", with the background presumption that as AI advances it becomes more and more likely that someone will produce an AI with all those abilities.

Again: for sure, but that isn't the point at issue. One exciting but potentially scary scenario involving AI is this: we make AI systems that are better than us at making AI systems, let them get to work designing their successors, let those successors design their own successors, and so on. End result (hopefully): a dramatically better AI than we could hope to make on our own. Another, closely related: we make AI systems that have the ability to reconfigure themselves by improving their own software and maybe even adjusting their hardware.

In any of these cases, you may be confident that the AI you initially built doesn't want to get out of whatever box you put it in. But how sure are you that after 20 iterations of self-modification, or of replacing an AI by the successor it designed, you still have something that doesn't want to get out of the box?

There are ways to avoid having to worry about that. We can just make AI systems that neither self-modify nor design new AI systems, for instance. But if we are ever to make AIs smarter than us, the temptation to use that smartness to make better AIs will be very strong, and it only requires one team to try it to expose us to any risks that might ensue.

(One further observation: telling people they're stupid and you're laughing at them is not usually effective in making them take your arguments more seriously. To some observers it may suggest that you are aware there's a weakness in your own arguments. ("Argument weak; shout louder."))
4JoshuaZ
This seems to be closer to an argument from ridicule than an argument with content. No one has said anything about "super scientists". I am, however, mildly curious whether you are familiar with the AI Box experiment. Are you claiming that AIs aren't going to become effectively powerful, or are you claiming that you inherently trust that safeguards will be sufficient? Note that these are not the same thing.
5artemium
Do you have any serious counterarguments to the ideas presented in Bostrom's book? A majority of top AI experts agree that we will have human-level AI by the end of this century, and people like Musk, Bostrom and the MIRI guys are just trying to think about the possible negative impacts this development may have on humans. The problem is that the fate of humanity may come to depend on the actions of non-human actors, who will likely have utility functions incompatible with human survival, and it is perfectly rational to be worried about that. Those ideas are definitely not above criticism, but they also should not be dismissed based on a perceived lack of expertise. Someone like Elon Musk actually has direct contact with the people working on some of the most advanced AI projects on Earth (Vicarious, DeepMind), so he certainly knows what he is talking about.
2Salemicus
Humans can be extremely dangerous. Why wouldn't a human-level AI be?

I think it would actually be helpful if researchers ran more experiments with AGI agents showing what could go wrong and how to deal with such error conditions. I don't think that the "social sciences" approach to that works.

4JoshuaZ
This misses the basic problem: most of the ways things can go seriously wrong would occur after the AGI is already an AGI, and once they have happened one cannot recover. More concretely, what experiment, in your view, should they be doing?

I didn't read it, but I heard that Elon Musk is badly influenced by it. I know of his papers prior to the book, I've taken a look at the content, and I know the material being discussed. I think he is vastly exaggerating the risks from AI technology. AI technology will be as pervasive as the internet; it is a very spook/military-like mindset to believe that it will only be owned by a few powerful entities who will wield it to dominate the world, or that the developers will be so extremely ignorant that they will have AI agents escaping their labs and start killing...

-3The_Jaded_One
+1 for entertainment value. EDIT: I am not agreeing with examachine's comment; I just think it's hilariously bad.

Confidential stuff; it could be an army of 1000 hamsters. :) To be honest, I don't think teams larger than 5-6 people are good for this kind of work. But please note that we are doing absolutely nothing that is dangerous in the slightest. It is a tool, not even an agent. Although I will be working on AGI agent code as soon as we finish the next version of the "kernel", to demonstrate how well our code can be applied to robotics problems. Demo or die.

Well, achieving better-than-human performance on a sufficiently wide benchmark. Preparing that benchmark is almost as hard as writing the code, it seems. Of course, any such estimates must be taken with a grain of salt, but I think that conceptually solid AGI projects (including OpenCog) have a significant chance by that time, although previously I have argued that neuromorphic approaches are likely to succeed by 2030 at the latest.

2Lumifer
You understand that you just replaced some words with others without clarifying anything, right? "Sufficiently wide" doesn't mean anything.

We believe we can achieve trans-sapient performance by 2018, so he is not that far off the mark. But the dangers as such are highly overblown, exaggerated, pseudo-scientific fears, as always.

0JoshuaZ
What would you be willing to bet that nothing remotely resembling that happens before 2020? 2025?
5CarlShulman
By "we" do you mean Gök Us Sibernetik Ar & Ge in Turkey? How many people work there?
7Lumifer
What does "trans-sapient performance" mean?
1Artaxerxes
What did you think of Bostrom's recent book?

If "human-level" is defined as "able to solve the same set of problems that a human can within the same time", I don't think there would be the problem that you mention. The whole purpose of the "human-level" adjective, as far as I can tell, is to avoid any requirement that the AI architecture in question be similar to the human brain in any way whatsoever.

Consequently, the set of human-level AIs is much larger than the set of human-level human-like AIs.

Thanks for the nice interview, Alexander. I'm Eray Ozkural, by the way; if you have any further questions, I would love to answer them.

I actually think that SIAI is serving a useful purpose when it highlights the importance of ethics in AI research and computer science in general.

Most engineers (whether because we are slightly autistic or not, I do not know) have little or no interest in the ethical consequences of their work. I have met many good engineers who work for military firms (firms that thrive on easy money from the military), and not once ...

6timtyler
You can't have a human in the loop all the time - it's too slow. So: many machines in the future will be at least semi-autonomous - as many of them are today.

Probably a somewhat more interesting question is whether machines will be given rights as "people". It's a complex political question, but I expect that eventually they will. Thus things like The Campaign for Robot Rights. The era of machine slavery will someday be looked back on as a kind of moral dark ages - much as the era of human slavery is looked back on today.

Right - but what is a human? No doubt the first mammals also wished to "remain at the helm". In a sense they did - though many of their modern descendants don't look much like the mouse-like creatures they all descended from. It seems likely to be much the same with us.

That isn't the SIAI proposal, FWIW. See: http://lesswrong.com/lw/x5/nonsentient_optimizers/

Subjective experience isn't limited to sensory experience; a headache, or any feeling (like happiness) without any sensory reason, would also count. The idea is that you can trace most of those to electrical/biochemical states. That might be why some drugs can make you feel happy, and how anesthetics work!

There are in fact some plausible scientific hypotheses that try to isolate particular physical states associated with "qualia". Without giving references to those, obviously, as I'm sure you'll all agree, there is no reason to debate the truth of physicalism.

The mentioned approach is probably bogus, and seems to be a rip-off of Marvin Minsky's older A-B brain ideas in "The Society of Mind". I wish I were a "cognitive scientist"; it would be so much easier to publish!

However, needless to say, any such hypothesis must be founded...

0[anonymous]
I was looking some things up after you mentioned this, and after reading a bit about it, qualia appears to be extremely similar to sensory memory. (http://en.wikipedia.org/wiki/Qualia) (http://en.wikipedia.org/wiki/Sensory_memory)

These quotes about them from Wikipedia (with the names removed) seem to do a good job of describing the similarity: 'The information represented in ### is the "raw data" which provides a snapshot of a person's overall sensory experience.' 'Another way of defining ### is as "raw feels." A raw feel is a perception in and of itself, considered entirely in isolation from any effect it might have on behavior and behavioral disposition.'

If you think about this in P-zombie terms and someone attempts to say "A P-zombie is a person who has sensory memory, but not qualia," I'm not sure what the difference would even be between that and a regular person. Either one can call on their sensory memory to say "I am experiencing redness right now, and now I am experiencing my experiences of redness," and it would seem to be correct if that is what is in their sensory memory. There doesn't appear to be anything left for qualia to explain, and it feels a lot like the question is dissolved at that point.

Is this approximately correct, or is there something else that qualia attempts to explain that sensory memory doesn't that I'm not perceiving?
0[anonymous]
I don't know what phenomenal consciousness or subjective experience means. Could you please give a reference or explanation for these terms?