Punoxysm comments on Musk on AGI Timeframes - Less Wrong

Post author: Artaxerxes 17 November 2014 01:36AM

Comment author: Punoxysm 17 November 2014 03:58:37AM 13 points

This is not Musk's field of expertise. I do not give his words special weight.

The fact that he can sit in on some cutting-edge tech demos, or even chat with CEOs, still doesn't make him an expert.

I have a technical background in AI; there are still massive hurdles to overcome, and they are not 5-10 year hurdles. Nothing from DeepMind will "escape onto the internet" any time soon. It is very much grounded in "narrow AI" technologies like machine learning.

I feel pretty confident calling him a Cassandra.

Comment author: Baughn 17 November 2014 12:12:24PM 12 points

I feel pretty confident calling him a Cassandra.

I agree with the rest of your comment, but calling him a "Cassandra" means "He's right, but no-one will believe him," and I hope that isn't what you meant!

An applicable morality tale here would be the boy who cried wolf, if Musk hadn't retracted his post. I don't remember if the boy had a name. (Elon Musk: Inverse Cassandra.)

Comment author: hairyfigment 17 November 2014 09:04:03PM 0 points

Stöffler might have the best name among those who failed to update properly.

Comment author: chaosmage 17 November 2014 05:52:22PM 5 points

there are still massive hurdles to overcome

Are you talking about what he's talking about - "risk of something seriously dangerous happening" - or are you talking about AGI?

Because I can easily imagine how a narrow AI technology could do a lot of damage, particularly if humans intend it to.

Comment author: Punoxysm 17 November 2014 06:05:49PM 1 point

Well, in terms of out-of-control software produced by an AI company, I feel the two risks, 'something dangerous' and AGI, are pretty closely linked.

Could more limited AI tech make a more damaging computer virus or cause an unexpected confidential data leak? Sure, but that's not the issue at hand.

The most advanced AI today takes input and creates output. It is strictly Oracle AI, with nothing present in its architecture that could circumvent that. I don't see that changing anytime soon.
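
(To make the architectural claim concrete, here is a minimal Python sketch of the oracle/agent distinction; the `model` and `environment` objects are hypothetical stand-ins, not any real system's API.)

    # Toy illustration of the architectural point: an "oracle" system is a
    # pure mapping from input to output, while an agent has an action
    # channel back into the world. All names here are hypothetical.

    def oracle_step(model, query):
        """Strictly input -> output: the system returns an answer and
        nothing else; there is no code path by which it acts on the world."""
        return model.predict(query)

    def agent_step(model, environment):
        """An agent architecture adds exactly the piece an oracle lacks:
        its output is fed back into the world as an action."""
        observation = environment.observe()
        action = model.predict(observation)
        environment.act(action)  # the side channel an oracle does not have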

Comment author: chaosmage 17 November 2014 07:04:36PM 0 points

Could more limited AI tech make a more damaging computer virus or cause an unexpected confidential data leak? Sure, but that's not the issue at hand.

You're free to disregard those, but I'm not sure Elon Musk is doing that.

The more damaging computer virus or data leak are only two of the possible worries. If a narrow AI simply helps black-market chemists find more novel psychoactives than regulation can ever hope to handle, or if bots eliminate just 10% of jobs (say in transportation and retail, to name just the most obvious), leading to massive societal unrest, or if they get better at solving captchas than humans are (which would lead to a massive crisis in anonymous communication and everything that depends on it)... all of these would make Musk's prediction true in my book.

Comment author: Punoxysm 17 November 2014 07:25:43PM 2 points

But these are just technological issues comparable to other mundane ones, just as 3D printing could make it easy to create weapons, or as the rise of the automobile created an enormous new cause of death and injury. There's no reason to think it would be outside the scope of ordinary policy-making methods to handle them.

Also, solving captchas is already pretty damn easy. A combination of algorithmic methods and crowdsourcing makes it quite cheap, especially for sites using older/easier captcha versions. Captcha is not a security plan; it's a speed bump that's getting easier to pass all the time (but still, no crisis will result from this).
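
(A rough back-of-envelope sketch of what "quite cheap" means here, in Python; both the solver hit rate and the per-thousand price are assumptions for illustration, not quoted figures.)

    # Back-of-envelope cost of a hybrid captcha-solving pipeline.
    # Both numbers are assumptions for illustration, not measured figures.
    ALGO_SOLVE_RATE = 0.30      # assumed fraction the ML/OCR solver cracks alone
    CROWD_COST_PER_1000 = 1.50  # assumed USD per 1000 captchas via a solving service

    def cost_per_captcha(algo_rate=ALGO_SOLVE_RATE, crowd_cost=CROWD_COST_PER_1000):
        """Captchas the algorithm solves are ~free; the rest are outsourced."""
        return (1 - algo_rate) * (crowd_cost / 1000)

    print(f"~${cost_per_captcha():.5f} per captcha")  # ~$0.00105 each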

Comment author: Lumifer 17 November 2014 07:11:50PM 0 points

if they get better at solving Capchas than humans are leading to a massive crisis in anonymous communication and everything that deprends on it

You seem to be much confused :-)

Comment author: chaosmage 17 November 2014 07:18:55PM 0 points

Cleared up my grammar - was that the symptom of the perceived confusion, or do you doubt that much depends on anonymous communication?

Comment author: Lumifer 17 November 2014 07:31:51PM 4 points

How would breaking captchas break anonymous communications?

Comment author: chaosmage 21 November 2014 03:46:11PM 0 points

Some powerful agents (say, secret services, or the government of... let's say China) would benefit greatly from disrupting anonymous electronic communication as a whole, because that would force electronic communication to occur non-anonymously. People could still encrypt, but it would at least be known who talked to whom, and that's the kind of information that's apparently worth billions of dollars and a couple of civil rights. Correct?

But how could you do that? Thoroughly anonymized peer-to-peer networks built to defy surveillance (such as Freenet) appear to make de-anonymizing communication very, very hard. If you kill or severely impede less-than-perfect anonymization services such as Tor, anonymity-minded people can just migrate to services such as Freenet, and your plan to disrupt anonymous electronic communication has backfired. Correct?

But what you can do is attack not the anonymity but the communication inside it. All you need to do is flood the anonymous medium with disruptive pseudo-communication. Spam is the obvious example. You can't make your bots too easy to identify (especially if there are web-of-trust-like structures between the anonymization layer and the actual communication), but as long as identification stays imperfect, you can simply throw in more and more bots.

How do you identify bots as such? You do Turing tests, of course. How do you identify lots and lots of bots as such? You do completely automated Turing tests, i.e. captchas. Not necessarily the ones we have, which are apparently somewhat solvable with the current state of machine learning, but better ones. Captchas have already improved, because they had to. Surely there can be better ones, or sites can start to require perfect performance on ten different captchas at once for acceptance as a non-bot, or charge (even anonymously, using something like Bitcoin) for the privilege of getting to take the captcha. But once you get to the level where narrow AIs can solve captchas as successfully as humans do, the floodgates are open.
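
(The ten-captchas idea can be made quantitative. A minimal sketch, assuming independent attempts and made-up solve rates; it also shows why the scheme collapses once bot and human solve rates converge.)

    # If a human solves one captcha with probability p_human and a bot with
    # p_bot, requiring n independent captchas passes each side with p**n.
    # The solve rates below are made up for illustration.
    p_human, p_bot = 0.95, 0.30

    for n in (1, 5, 10):
        print(f"n={n:2d}  human pass: {p_human**n:.3f}  bot pass: {p_bot**n:.2e}")

    # At n=10 the bot passes ~6 times in a million tries while humans still
    # pass ~60% of the time. But if p_bot rises to p_human, the exponent
    # cuts both ways and the stacked test no longer separates them.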

And then anyone who benefits from disrupting all anonymous electronic communication can - and will - do so. Non-anonymity will be promoted as "a small price to pay" to get rid of the bot plague, and everyone will live happily ever after - except those in the vast majority of countries that have no First Amendment, who are scared of their governments for very good reasons. They'll retreat into non-electronic communication, of course, but that can't be the way forward, can it?

Comment author: Lumifer 21 November 2014 04:41:39PM 2 points

Your argument is basically that anonymous networks can be spammed into uselessness. That looks theoretically possible but practically difficult; still, that's not the main problem with your argument. The biggest hole, from my point of view, is that you think captchas are a good (or even the only) anti-spam measure. They are not.

And, of course, email is a pseudonymous P2P network which used to have a large spam problem and which, by now, has largely solved it.
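
(For a concrete sense of the non-captcha measures in question: content-based filtering, of which a naive-Bayes word scorer is the textbook example. A minimal sketch with toy data; real filters layer many more signals on top.)

    # Minimal naive-Bayes-style spam scorer, the textbook content-based
    # filter. Toy data; real filters add IP reputation, rate limits,
    # header analysis, user feedback, and more.
    import math
    from collections import Counter

    def train(spam_docs, ham_docs):
        spam, ham = Counter(), Counter()
        for d in spam_docs: spam.update(d.lower().split())
        for d in ham_docs:  ham.update(d.lower().split())
        return spam, ham

    def spam_log_odds(text, spam, ham):
        """Sum of per-word log odds with crude add-one smoothing; > 0 leans spam."""
        s_total, h_total = sum(spam.values()), sum(ham.values())
        score = 0.0
        for w in text.lower().split():
            p_s = (spam[w] + 1) / (s_total + 2)
            p_h = (ham[w] + 1) / (h_total + 2)
            score += math.log(p_s / p_h)
        return score

    spam, ham = train(["buy cheap pills now", "cheap pills online"],
                      ["lunch at noon", "see you at the meeting"])
    print(spam_log_odds("cheap pills", spam, ham))  # positive, i.e. spammy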

Here is a good write-up of how spam wars work in real life.

Comment author: chaosmage 21 November 2014 05:31:47PM 3 points

Spam wars in real life use mechanisms that don't work in fully anonymous networks like Freenet. You can't filter by IP in a network without IPs.

Captchas are obviously not a good (or even the only) anti-spam measure. But inside anonymous networks, they're one of the few things that work. Webs of Trust, which I explicitly mentioned, are another - they just don't scale well.
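
(A toy sketch of what a web of trust does and why it scales poorly: a stranger is accepted only if a vouching path from someone you already trust reaches them, so every new member needs endorsement paths through the graph. Names and structure are hypothetical.)

    # Toy web-of-trust membership check: accept a stranger only if some
    # endorsement path from you reaches them within a few hops.
    # Graph, names, and hop limit are all made up for illustration.
    from collections import deque

    def is_trusted(graph, me, target, max_hops=3):
        """BFS over endorsement edges. The cost of establishing and querying
        these vouching paths for every new member is the scaling problem."""
        seen, queue = {me}, deque([(me, 0)])
        while queue:
            node, depth = queue.popleft()
            if node == target:
                return True
            if depth < max_hops:
                for peer in graph.get(node, ()):
                    if peer not in seen:
                        seen.add(peer)
                        queue.append((peer, depth + 1))
        return False

    graph = {"alice": ["bob"], "bob": ["carol"], "carol": ["dave"]}
    print(is_trusted(graph, "alice", "dave"))     # True: vouched via 3 hops
    print(is_trusted(graph, "alice", "mallory"))  # False: no vouching path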

Comment author: Artaxerxes 17 November 2014 04:07:29AM 5 points

Well, his comment was deleted, possibly by him, so we should take that into account - maybe he thought he was being a bit overly Cassandra-like too.

The other thing to remember is that Musk's comments on AI risk reach a slightly different audience than the usual one. So the perspective of the person communicating with that audience is at least somewhat relevant.

Comment author: examachine 18 November 2014 04:30:15AM 0 points

I think it would actually be helpful if researchers ran more experiments with AGI agents showing what could go wrong and how to deal with such error conditions. I don't think the "social sciences" approach to that works.

Comment author: JoshuaZ 17 December 2014 03:44:00AM 2 points

This misses the basic problem: most of the ways things can go seriously wrong would occur only after the AGI is already an AGI, and once they have happened, one cannot recover.

More concretely, what experiment in your view should they be doing?

Comment author: [deleted] 18 November 2014 11:06:13AM 3 points

DeepMind is very definitely AGI in the sense of the domain of problems its learners can learn and its agents can solve. If DeepMind is easily controlled and not very dangerous, that's not evidence for AGI being further away than we thought before we looked at DeepMind; it's evidence for AGI being more easily controlled than we thought before we looked at DeepMind.

Real AGI was never going to look like a magic genie, so we should never fault real-life AI work for failing to be one.