Comment author: examachine 06 May 2015 11:16:03PM -5 points [-]

This is like posting an article about genetic evolution on a creationist forum. They will pretend to "understand" what you are saying and then dig down deeper into their irrational dogma.

Comment author: examachine 02 May 2015 01:49:24AM *  -7 points [-]

Bostrom is a crypto-creationist "philosopher" with farcical arguments in favor of Abrahamic mythology and neo-Luddism. People are giving too much credit to lunatics who promote AI eschatology. Please do not listen to schizophrenics like Bostrom. The whole "academic career" of Bostrom may be summarized as "non-solutions to non-problems". I have never seen a less useful thinker. He could not be more wrong! I sometimes think that philosophy departments should be shut down if they are going to breed this kind of ignorance.

It's quite ironic that Bostrom is talking about superintelligence, BTW. How will he imagine what intelligent entities think?

Comment author: JoshuaZ 03 February 2015 07:59:39PM 2 points [-]

Because life isn't a third grade science fiction movie, where the super scientists who program AI agents are at the same time so incompetent that their experiments break out of the lab and kill everyone.

This seems to be closer to an argument from ridicule than an argument with content. No one has said anything about "super scientists" - I am, however, mildly curious whether you are familiar with the AI Box experiment. Are you claiming that AIs aren't going to become effectively powerful, or are you claiming that you inherently trust that safeguards will be sufficient? Note that these are not the same thing.

Comment author: examachine 22 April 2015 12:33:50PM -5 points [-]

Wow, that's clearly foolish. Sorry. :) I mean I can't stop laughing so I won't be able to answer. Are you people retarded or something? Read my lips: AI DOES NOT MEAN FULLY AUTONOMOUS AGENT.

And the AI Box experiment is more bullshit. I can PROGRAM an agent so that it never walks out of a box. It never wants to. Period. Imbeciles. You don't have to "imprison" any AI agent.

So, no, because it doesn't have to be fully autonomous.
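(The claim above - that an AI system need not be a fully autonomous agent, because its interface can simply omit any action channel - can be sketched minimally. Everything below is a hypothetical illustration, not anyone's actual design.)

```python
# Minimal sketch of an "oracle"-style system: its only interface is
# answering queries. There is no action channel, so there is nothing
# to "break out" with - autonomy here is an architectural choice,
# not an inherent property of AI software.

class OracleAI:
    """A question-answering system with no actuators and no self-directed goals."""

    def __init__(self, knowledge):
        # A static lookup table stands in for any inference engine.
        self.knowledge = knowledge

    def answer(self, question):
        # The system only ever maps a query to a reply; it cannot
        # initiate actions, spawn processes, or modify its environment.
        return self.knowledge.get(question, "unknown")

oracle = OracleAI({"2+2": "4"})
print(oracle.answer("2+2"))  # prints 4
```

Whether such an interface restriction is sufficient in practice is, of course, exactly what the AI Box debate is about.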

Comment author: JoshuaZ 17 December 2014 03:44:00AM *  2 points [-]

This misses the basic problem: Most of the ways things can go seriously wrong are things that would occur after the AGI is already an AGI and are things where once they've happened one cannot recover.

More concretely, what experiment in your view should they be doing?

Comment author: examachine 03 February 2015 07:52:09PM *  -4 points [-]

Because life isn't a third grade science fiction movie, where the super scientists who program AI agents are at the same time so incompetent that their experiments break out of the lab and kill everyone. :) Not going to happen. Sorry!

Comment author: jessicat 11 December 2014 10:37:03PM *  38 points [-]

Transcript:

Question: Are you as afraid of artificial intelligence as your Paypal colleague Elon Musk?

Thiel: I'm super pro-technology in all its forms. I do think that if AI happened, it would be a very strange thing. Generalized artificial intelligence. People always frame it as an economic question, it'll take people's jobs, it'll replace people's jobs, but I think it's much more of a political question. It would be like aliens landing on this planet, and the first question we ask wouldn't be what does this mean for the economy, it would be are they friendly, are they unfriendly? And so I do think the development of AI would be very strange. For a whole set of reasons, I think it's unlikely to happen any time soon, so I don't worry about it as much, but it's one of these tail risk things, and it's probably the one area of technology that I think would be worrisome, because I don't think we have a clue as to how to make it friendly or not.

Comment author: examachine 12 December 2014 07:10:15PM -9 points [-]

I'm sorry to say that even a chatbot might refute this line of reasoning. Of course, the economic impact is more important than such unfounded concerns. That might be the greatest danger of AI software: it might end up refuting a lot of pseudo-science about ethics.

Countries are starting wars over oil. High technology is a good thing; it might make us wealthier, more capable, more peaceful - if employed wisely, of course. What we must concern ourselves with is how wise, how ethical we ourselves are in our own actions and plans.

Comment author: artemium 26 November 2014 07:09:04PM *  3 points [-]

Do you have any serious counter-arguments to the ideas presented in Bostrom's book? A majority of top AI experts agree that we will have human-level AI by the end of this century, and people like Musk, Bostrom, and the MIRI guys are just trying to think about possible negative impacts that this development may have on humans. The problem is that the fate of humanity may depend on the actions of non-human actors, who will likely have a utility function incompatible with human survival, and it is perfectly rational to be worried about that.

Those ideas are definitely not above criticism, but they also should not be dismissed based on a perceived lack of expertise. Someone like Elon Musk actually has direct contact with people who are working on some of the most advanced AI projects on earth (Vicarious, DeepMind), so he certainly knows what he is talking about.

Comment author: examachine 28 November 2014 03:41:56AM *  -9 points [-]

I do. Nick Bostrom is a creationist idiot (the simulation "argument" is creationism), with absolutely no expertise in AI, who thinks the doomsday argument is true. Funnily enough, on his book cover he does claim to be an expert in several extremely difficult fields, including AI and computational neuroscience, despite a lack of any serious technical publications. That's usually a red flag indicating a charlatan. Despite whatever you might think, a "social scientist" is ill-equipped to say anything about AI. That's enough for now. For a more detailed exposition, I am afraid you will have to wait a while longer. You will know it when you see it - stay tuned!

Comment author: The_Jaded_One 20 November 2014 09:15:43PM *  -1 points [-]

+1 for entertainment value.

EDIT: I am not agreeing with examachine's comment, I just think it's hilariously bad.

Comment author: examachine 26 November 2014 05:14:08PM -5 points [-]

It is entertaining indeed that a non-computer-scientist entrepreneur (Elon Musk) is emotionally influenced by the incredibly fallacious pseudo-scientific bullshit of Nick Bostrom, another non-computer scientist, and that people are talking about it.

So let's see: a clown writes a book, and an investor treats it as credible when it is not. What makes this hilarious is people's reactions to it. A ship of fools.

Comment author: Lumifer 18 November 2014 05:29:42AM 2 points [-]

achieving better than human performance on a sufficiently wide benchmark

You understand that you just replaced some words with others without clarifying anything, right? "Sufficiently wide" doesn't mean anything.

Comment author: examachine 19 November 2014 01:10:53AM -7 points [-]

I cannot possibly disclose confidential research here, so you will have to be content with that.

At any rate, believing that human-level AI is an extremely dangerous technology is pseudo-scientific.

Comment author: examachine 18 November 2014 05:03:34AM -26 points [-]

Racists. Why does anyone even care about such people? Just ignore them.

Comment author: Artaxerxes 17 November 2014 04:07:29AM 5 points [-]

Well, his comment was deleted, possibly by him, so we should take that into account - maybe he thought he was being a bit overly Cassandra-like too.

The other thing to remember is that Musk's comments reach a slightly different audience than the usual one with regard to AI risk. So it's at least somewhat relevant to see the perspective of someone who is communicating to these people.

Comment author: examachine 18 November 2014 04:30:15AM 0 points [-]

I think it would actually be helpful if researchers ran more experiments with AGI agents showing what could go wrong and how to deal with such error conditions. I don't think the "social sciences" approach to that works.
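(The kind of experiment suggested above - run an agent in a sandbox, detect a specified failure mode, and record how the harness handles it - could look something like this minimal sketch. The agent, its actions, and the "tripwire" condition are all hypothetical placeholders.)

```python
# Hypothetical experiment harness: a sandbox that halts an agent on
# the first occurrence of a forbidden action and reports when and
# what the violation was.

import random

FORBIDDEN = "network"  # illustrative "tripwire" condition

def random_agent(rng):
    # Stand-in for any agent policy: picks actions blindly.
    return rng.choice(["compute", "read_file", "network", "idle"])

def run_sandboxed(steps=100, seed=0):
    """Run the agent; halt and report on the first forbidden action."""
    rng = random.Random(seed)
    for t in range(steps):
        action = random_agent(rng)
        if action == FORBIDDEN:
            return {"halted_at": t, "violation": action}
    return {"halted_at": None, "violation": None}

result = run_sandboxed()
```

The point of such a harness would be producing concrete, repeatable error conditions rather than armchair arguments.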
