
Google may be trying to take over the world

22 [deleted] 27 January 2014 09:33AM

So I know we've already seen them buying a bunch of ML and robotics companies, but now they're purchasing Shane Legg's AGI startup.  This is after they've acquired Boston Dynamics, several smaller robotics and ML firms, and started their own life-extension firm.

 

Is it just me, or are they trying to make Accelerando or something closely related actually happen?  Given that they're buying up real experts and not just "AI is inevitable" prediction geeks (who shall remain politely unnamed out of respect for their real, original expertise in machine learning), has someone had a polite word with them about not killing all humans by sheer accident?

Comments (133)

Comment author: XiXiDu 27 January 2014 11:52:47AM *  21 points [-]

...has someone had a polite word with them about not killing all humans by sheer accident?

Shane Legg is familiar with AI risks. So is Jaan Tallinn, a top donor of MIRI, who is also associated with DeepMind. I suppose they will talk about their fears with Google.

Comment author: [deleted] 27 January 2014 12:41:43PM 12 points [-]

Actually, there does seem to have been a very quiet press release about this acquisition resulting in a DeepMind ethics board.

So that's a relief.

Comment author: curiousepic 27 January 2014 04:30:06PM *  1 point [-]

Is there any more information beyond mentions like this?

Comment author: Emile 27 January 2014 12:28:24PM 3 points [-]

Not to mention of course the Google employees that post on LW.

Comment author: XiXiDu 27 January 2014 12:32:41PM *  8 points [-]

Not to mention of course the Google employees that post on LW.

I didn't know there were any. My guess is that you have to be pretty high in the hierarchy to actually steer Google in a direction that would suit MIRI (under the assumption that people who agree with MIRI are in the minority).

Comment author: jkaufman 27 January 2014 09:48:24PM 11 points [-]

I didn't know there were any.

Hi!

Comment author: Dr_Manhattan 28 January 2014 01:18:17AM *  5 points [-]

plus cousin_it and at least 2-3 others. Plus Ctrl+F for Google here http://intelligence.org/team/. Moshe Looks might be one of Google's AGI people I think.

Comment author: Baughn 28 January 2014 01:23:27AM *  9 points [-]

I didn't know there were any.

Greetings from Dublin! You're right that the average employee is unlikely to matter, though.

Comment author: [deleted] 27 January 2014 12:13:57PM *  13 points [-]

Eliezer specifically mentioned Google in his Intelligence Explosion Microeconomics paper as the only named organization that could potentially start an intelligence explosion.

Larry Page has publicly said that he is specifically interested in “real AI” (Artificial General Intelligence), and some of the researchers in the field are funded by Google. So far as I know, this is still at the level of blue-sky work on basic algorithms and not an attempt to birth The Google in the next five years, but it still seems worth mentioning Google specifically.

In these interviews, which Larry Page gave years ago, he repeatedly said that he wanted Google to become "the ultimate search engine", one that would be able to understand all the information in the world. And to do that, Larry Page said, it would need to be 'true' artificial intelligence (he didn't use the word 'true', but it becomes clear from the context that this is what he means).

Here's a quote by Larry Page from the year 2007:

We have some people at Google who are really trying to build artificial intelligence and to do it on a large scale and so on, and in fact, to make search better, to do the perfect job of search you could ask any query and it would give you the perfect answer and that would be artificial intelligence based on everything being on the web, which is a pretty close approximation. We're lucky enough to be working incrementally closer to that, but again, very, very few people are working on this, and I don't think it's as far off as people think.

I doubt it would be very Friendly by MIRI's definition, but it doesn't seem like they have anything 'evil' in mind. Peter Norvig is the co-author of AI: A Modern Approach, which is currently the dominant textbook in the field. The 3rd edition includes several mentions of AGI and Friendly AI. So at least some people at Google have heard about this Friendliness thing and paid attention to it. But the projects run by Google X are quite secretive, so it's hard to know exactly how seriously they take the dangers of AGI and how much effort they put into these matters. It could be, like lukeprog said in October 2012, that Google doesn't even have "an AGI team".

Comment author: lukeprog 27 January 2014 08:23:16PM *  11 points [-]

It could be, like lukeprog said in October 2012, that Google doesn't even have "an AGI team".

Not that I know of, anyway. Kurzweil's team is probably part of Page's long-term AGI ambitions, but right now they're focusing on NLP (last I heard). And Deep Mind, which also has long-term AGI ambitions, has been working on game AI as an intermediate step. But then again, that kind of work is probably more relevant progress toward AGI than, say, OpenCog.

IIRC the Deep Mind folks were considering setting up an ethics board before Google acquired them, so the Google ethics board may be a carryover from that. FHI spoke to Deep Mind about safety standards a while back, so they're not totally closed to taking Friendliness seriously. I haven't spoken to the ethics board, so I don't know how serious they are.

Comment author: lukeprog 27 January 2014 09:23:05PM 14 points [-]

Update: "DeepMind reportedly insisted on the board’s establishment before reaching a deal."

Comment author: lukeprog 28 January 2014 06:18:35PM *  6 points [-]

Update: DeepMind will work under Jeff Dean at Google's search team.

And, predictably:

“Things like the ethics board smack of the kind of self-aggrandizement that we are so worried about,” one machine learning researcher told Re/code. “We’re a hell of a long way from needing to worry about the ethics of AI.”

...despite the fact that AI systems already fly planes, drive trains, and pilot Hellfire-carrying aerial drones.

Comment author: XiXiDu 28 January 2014 07:42:56PM *  9 points [-]

NYTimes also links to LessWrong.

Quote:

Mr. Legg noted in a 2011 Q&A with the LessWrong blog that technology and artificial intelligence could have negative consequences for humanity.

Comment author: shminux 28 January 2014 08:30:50PM *  3 points [-]

despite the fact that AI systems already fly planes, drive trains, and pilot Hellfire-carrying aerial drones.

It would be quite a reach to insist that we need to worry about the ethics of the control boards that calculate how to move elevons or how far to open a throttle in order to maintain a certain course or speed. Autonomous UAVs able to open fire without a human in the loop are much more worrying.

I imagine that some of the issues the ethics board might eventually have to deal with would be related to self-agentizing tools, in Karnofsky-style terminology. For example, if a future search engine receives queries whose answers depend on other simultaneous queries, it may have to solve game-theoretic problems, like optimizing traffic flows. These may some day include life-critical decisions, like whether to direct drivers to a more congested route in order to let emergency vehicles pass unimpeded.

Comment author: XiXiDu 28 January 2014 06:55:53PM 2 points [-]

They actually link to LessWrong in the article, namely to my post here.

Comment author: CellBioGuy 30 January 2014 04:15:52AM *  0 points [-]

I personally suspect the ethics board exists for more prosaic reasons. Think "don't bias the results of people's medical advice searches to favor the products of pharmaceutical companies that pay you money" rather than "don't eat the world".

EDIT: just saw other posts including quotes from the head people of the place that got bought. I still think that these are the sorts of actual issues they will deal with, as opposed to the theoretical justifications.

Comment author: [deleted] 27 January 2014 12:57:59PM 11 points [-]

So, to summarize, Google wants to build a potentially dangerous AI, but they believe they can keep it as an Oracle AI which will answer questions but not act independently. They also apparently believe (not without some grounding) that true AI is so computationally expensive in terms of both speed and training data that we will probably maintain an advantage of sheer physical violence over a potentially threatening unboxed oracle for a long time.

Except that they are also blatant ideological Singularitarians, so they're working to close that gap.

Comment author: XiXiDu 27 January 2014 03:43:39PM 8 points [-]

Comment by Juergen Schmidhuber:

Our former PhD student Shane Legg is co-founder of deepmind (with Demis Hassabis and Mustafa Suleyman), just acquired by Google for ~$500m. Several additional ex-members of the Swiss AI Lab IDSIA have joined deepmind, including Daan Wierstra, Tom Schaul, Alex Graves.

Comment author: [deleted] 27 January 2014 03:55:44PM 9 points [-]

Yes, or in other words, these are the competent AGI researchers.

Comment author: ChristianKl 27 January 2014 10:52:10AM 22 points [-]

Buying up AGI startups and then setting the relevant programmers to work on smart cars seems to me a quite good move to stall UFAI.

Comment author: Brillyant 27 January 2014 03:09:45PM 2 points [-]

Can you elaborate?

Comment author: Luke_A_Somers 27 January 2014 04:04:17PM 3 points [-]

It sets AGI-minded programmers (under circumstances expected to yield UFAI) onto tasks that would not be expected to result in AGI of any sort (driving).

Comment author: Brillyant 27 January 2014 05:05:14PM 3 points [-]

I get that part. Is there some reason I'm missing as to why Google wouldn't utilize the talent at DeepMind to pursue AGI-relevant projects?

I mean, Google has great resources (much more than MIRI or anyone else) and a proven record of success at being instrumentally rational in the technical/programming arena (i.e. winning on a grand scale for a length of time). They are adding folks who, from what I read on LW, actually understand AGI's complexity, implications, etc.

Comment author: Luke_A_Somers 28 January 2014 03:59:37PM 0 points [-]

Just nervousness about UFAI.

Comment author: ThisSpaceAvailable 28 January 2014 02:34:39AM 2 points [-]

This analysis seems to be based on AGI-mindedness being an inherent property of programmers, and not a response to market forces.

Comment author: Luke_A_Somers 28 January 2014 03:57:49PM 2 points [-]

No... not at all! Quite the opposite, in fact. If it were inherent, then moving them away from it would be ineffective.

Comment author: XiXiDu 28 January 2014 06:47:26PM 4 points [-]

Microsoft seems to focus on AI as well:

Q: You are in charge of more than 1000 research labs around the world. What kind of thing are you focusing on?

Microsoft: A big focus right now, really on point for this segment, is artificial intelligence. We have been very focused. It is our largest investment area right now.

Comment author: gjm 27 January 2014 12:05:57PM 6 points [-]

Peter Norvig is at least in principle aware of some of the issues; see e.g. this article about the current edition of Norvig & Russell's AIMA (which mentions a few distinct ways in which AI could have very bad consequences and cites Yudkowsky and Omohundro).

I don't know what Google's attitude is to these things, but if it's bad then either they aren't listening to Peter Norvig or they have what they think are strong counterarguments, and in either case an outsider having a polite word is unlikely to make a big difference.

Comment author: jamesf 28 January 2014 11:34:30PM 5 points [-]

Peter Norvig was a resident at Hacker School while I was there, and we had a brief discussion about existential risks from AI. He basically told me that he predicts AI won't surpass humans in intelligence by so much that we won't be able to coerce it into not ruining everything. It was pretty surprising, if that is what he actually believes.

Comment author: XiXiDu 27 January 2014 01:15:21PM 5 points [-]

I don't know what Google's attitude is to these things, but if it's bad then either they aren't listening to Peter Norvig or they have what they think are strong counterargument...

My guess is that most people at Google, who are working on AI, take those risks somewhat seriously (i.e. less seriously than MIRI, but still acknowledge them) but think that the best way to mitigate risks associated with AGI is to research AGI itself, because the problems are intertwined.

Comment author: private_messaging 27 January 2014 02:21:11PM 10 points [-]

has someone had a polite word with them about not killing all humans by sheer accident?

Why do you think you have a better idea of the risks and solutions involved than they do, anyway? Superior AI expertise? Some superior expert-choosing talent of yours?

Comment author: XiXiDu 27 January 2014 03:21:12PM 10 points [-]

My suggestion to Google is to free up their brightest minds and tell them to talk to MIRI for 2 weeks, full-time. After the two weeks are over, let each of them write a report on whether Google should e.g. give them more time to talk to MIRI, accept MIRI's position and possibly hire them, or ignore them. MIRI should be able to comment on a draft of each of the reports.

I think this could finally settle the issue, if not for MIRI itself then at least for outsiders like me.

Comment author: private_messaging 27 January 2014 04:02:35PM 9 points [-]

Well, that's sort of like having the brightest minds at CERN spend two weeks full-time talking to some random "autodidact" who's claiming that the LHC is going to create a black hole that will devour the Earth. Society can't work this way.

Does that mean there is a terrible ignored risk? No: when there is a real risk, the brightest people of extreme and diverse intellectual accomplishment are the ones most likely to be concerned about it (and various "autodidacts" are the most likely to fail to notice the risk).

Comment author: XiXiDu 27 January 2014 04:19:39PM *  11 points [-]

Well, that's sort of like having the brightest minds at CERN spend two weeks full time talking to some random "autodidact" who's claiming that LHC is going to create a blackhole that will devour the Earth.

This is an unusual situation though. We have a lot of smart people who believe MIRI (they are not idiots, you have to grant them that). And you and I are not going to change their minds, ever, and they are hardly going to convince us. But if a bunch of independent top-notch people were to accept MIRI's position, then that would certainly make me assign a high probability to the possibility that I simply don't get it and that they are right after all.

Society can't work this way.

In the case of the LHC, independent safety reviews have been conducted. I wish this was the case for the kinds of AI risk scenarios imagined by MIRI.

Comment author: private_messaging 27 January 2014 04:34:51PM *  4 points [-]

We have a lot of smart people who believe MIRI (they are not idiots, you have to grant them that).

If you pitch something stupid to a large enough number of smart people, some small fraction will believe.

In the case of the LHC, independent safety reviews have been conducted.

Not for every crackpot claim. edit: and since they now have an ethics review board, that's the equivalent of what was conducted...

I wish this was the case for the kinds of AI risk scenarios imagined by MIRI.

There's a threshold. Some successful trading software, a popular programming language, or some AI project that does something world-level notable (plays some game really well, for example): that puts one above the threshold. Convincing some small fraction of smart people does not. Shane Legg's startup evidently is above the threshold.

As for the risks, why would you think that Google's research is a greater risk to mankind than, say, MIRI's? (assuming that the latter is not irrelevant, for the sake of the argument)

Comment author: XiXiDu 27 January 2014 04:57:36PM *  2 points [-]

As for the risks, why would you think that Google's research is a greater risk to mankind than, say, MIRI's? (assuming that the latter is not irrelevant, for the sake of the argument)

If MIRI was right then, as far as I understand it, a not-quite-friendly AI (a broken Friendly AI) could lead to a worse outcome than a general AI designed without humans in mind: in the former case you would end up with something that keeps humans alive but e.g. gets a detail like boredom wrong, while in the latter case you would be transformed into e.g. paperclips. So from this perspective, if MIRI was right, it could be the greater risk.

Comment author: private_messaging 27 January 2014 07:14:57PM 8 points [-]

Well, the other issue is also that people's opinions tend to be more informative of their own general plans than about the field in general.

Imagine that there's a bunch of nuclear power plant engineering teams - before nuclear power plants - working on different approaches.

One of the teams - not a particularly impressive one either - claimed that any nuclear plant is going to blow up like a hundred kiloton nuclear bomb, unless fitted with a very reliable and fast acting control system. This is actually how nuclear power plants were portrayed in early science fiction ("Blowups Happen", by Heinlein).

So you look at the blueprints, and you see that everyone's reactor is designed for a negative temperature coefficient of reactivity, in the high temperature range, and can't blow up like a nuke. Except for one team whose reactor is not designed to make use of a negative temperature coefficient of reactivity. The mysterious disagreement is explained, albeit in a very boring way.

Comment author: V_V 27 January 2014 08:55:14PM 13 points [-]

Except for one team whose reactor is not designed to make use of a negative temperature coefficient of reactivity.

Except that this contrarian team, made up of high-school dropouts, former theologians, philosophers, mathematicians and coal power station technicians, never produces an actual design; instead they spend all their time investigating arcane theoretical questions about renormalization in quantum field theory and publish their possibly interesting results outside the scientific peer review system, relying on hype to disseminate them.

Comment author: private_messaging 28 January 2014 09:49:55AM *  2 points [-]

Well, they still have some plan, however fuzzy it is. The plan involves a reactor which, according to its proponents, would just blow up like a 100-kiloton nuke if not for some awesome control system they plan to someday work on. Or, in the case of AI, a general architecture that is going to self-improve and literally kill everyone unless a correct goal is set for it. (Or even torture everyone if there's a minus sign in the wrong place; the reactor analogy would be a much worse explosion still if the control rods get wired backwards. Which happens.)

My feeling is that there may be risks for some potential designs, but they are not of the form "the brightest minds that build the first AI failed to understand some argument that even former theologians can follow". (In fiction this happens because said theologian is very special; in reality it happens because the argument is flawed or irrelevant.)

Comment author: XiXiDu 28 January 2014 11:10:33AM 7 points [-]

"the brightest minds that build the first AI failed to understand some argument that even former theologians can follow"

This is related to something that I am quite confused about. There are basically 3 possibilities:

(1) You have to be really lucky to stumble across MIRI's argument. Just being really smart is insufficient. So we should not expect whoever ends up creating the first AGI to think about it.

(2) You have to be exceptionally intelligent to come up with MIRI's argument. And you have to be nowhere as intelligent in order to build an AGI that can take over the world.

(3) MIRI's argument is very complex. Only someone who deliberately thinks about risks associated with AGI could come up with all the necessary details of the argument. The first people to build an AGI won't arrive at the correct insights in time.

Maybe there is another possibility on how MIRI could end up being right that I have not thought about, let me know.

It seems to me that what all of these possibilities have in common is that they are improbable. Either you have to be (1) lucky or (2) exceptionally bright or (3) be right about a highly conjunctive hypothesis.

Comment author: fortyeridania 27 January 2014 05:43:49PM 1 point [-]

when there is a real risk, the brightest people of extreme and diverse intellectual accomplishment are the ones most likely to be concerned about it (and various "autodidacts" are most likely to fail to notice the risk)

Can you cite some evidence for this?

Comment author: David_Gerard 27 January 2014 07:35:06PM *  4 points [-]

Um, surely if you take (a) people with a track record of successful achievement in an area (b) people without a track record of success but who think they know a lot about the area, the presumption that (a) is more likely to know what they're talking about should be the default presumption. It may of course not work out that way, but that would surely be the way to bet.

Comment author: fortyeridania 27 January 2014 07:44:20PM -2 points [-]

Yes, I agree, but that is only part of the story, right?

What if autodidacts, in their untutored excitability, are excessively concerned about a real risk? Or if a real risk has nearly all autodidacts significantly worried, but only 20% of actual experts significantly worried? Wouldn't that falsify /u/private_messaging's assertion? And what's so implausible about that scenario? Shouldn't we expect autodidacts' concerns to be out of step with real risks?

Comment author: V_V 27 January 2014 08:34:19PM *  4 points [-]

What if autodidacts, in their untutored excitability, are excessively concerned about a real risk?

If autodidacts are excessively concerned, then why would it be worthwhile for experts to listen to them?

Comment author: fortyeridania 27 January 2014 10:42:03PM 0 points [-]

It may not be. I was not taking issue with the claim "Experts need not listen to autodidacts." I was taking issue with the claim "Given a real risk, experts are more likely to be concerned than autodidacts are."

Comment author: V_V 27 January 2014 11:50:29PM *  5 points [-]

I would assume that experts are likely to be concerned to an extent more appropriate to the severity of the risk than autodidacts are.

There can be exceptions, of course, but when non-experts make wildly more extreme claims than experts do on some issue, especially a strongly emotively charged issue (e.g. the End of the World), unless they can present really compelling evidence and arguments, the Dunning–Kruger effect seems to be the most likely explanation.

Comment author: fortyeridania 28 January 2014 12:24:13AM 0 points [-]

I would assume that experts are likely to be concerned to an extent more appropriate to the severity of the risk than autodidacts are.

That is exactly what I would assume too. Autodidacts' risk estimates should be worse than experts'. It does not follow that autodidacts' risk estimates should be milder than experts', though. The latter claim is what I meant to contest.

Comment author: private_messaging 27 January 2014 11:46:18PM *  5 points [-]

"Autodidacts" was in quotes for a reason.

Let's talk about some woo that you're not interested in, e.g. the health risks of thimerosal and vaccines in general. Who's more likely to notice it, some self-proclaimed "autodidacts", or normal biochemistry experts? Who noticed the possibility of a nuke, the conspiracy theorists of the day or the scientists? Was Semmelweis some weird outsider, or was he a regular medical doctor with medical training? And so on and so forth.

Right now, experts are concerned with things like nuclear war, runaway methane releases, epidemics, and so on, while various self-proclaimed existential risk people (mostly philosophers) seem to be, to a greater or lesser extent, neglecting said risks in favor of movie-plot dangers such as a runaway self-improving AI or perhaps a totalitarian world government. (Of course, if you listen to said x-risk folks, they're going to tell you that it's because the real experts are wrong.)

Comment author: fortyeridania 28 January 2014 12:29:26AM -1 points [-]

Who's more likely to notice it, some self proclaimed "autodidacts", or normal biochemistry experts? Who noticed the possibility of a nuke, back-then conspiracy theorists or scientists? Was Semmelweis some weird outsider, or was he a regular medical doctor with medical training?

All are good and relevant examples, and they all support the claim in question. Thanks!

But your second paragraph supports the opposite claim. (Again, the claim in question is: Experts are more likely to be concerned over risks than autodidacts are.) In the second paragraph, you give a couple "movie plot" risks, and note that autodidacts are more concerned about them than experts are. Those would therefore be cases of autodidacts being more concerned about risks than experts, right?

If the claim were "Experts have more realistic risk estimates than autodidacts do," then I would readily agree. But you seem to have claimed that autodidacts' risk estimates aren't just wrong--they are biased downward. Is that indeed what you meant to claim, or have I misunderstood you?

Comment author: private_messaging 27 January 2014 11:28:02PM *  5 points [-]

To clarify, I don't have anything against self-educated persons. Some do great things. The "autodidacts" was specifically in quotes.

What is implausible is this whole narrative where you have a risk obvious enough that people without any relevant training can see it (by way of that paperclipping argument), yet the relevant experts are ignoring it. Especially when the idea of an intelligence turning against its creator is incredibly common in fiction, to the point that nobody has to form that idea on their own.

Comment author: [deleted] 28 January 2014 03:54:03PM 3 points [-]

In general, current AGI architectures work via reinforcement learning: reward and punishment. Relevant experts are worried about what will happen when an AGI with the value-architecture of a pet dog finds that it can steal all the biscuits from the kitchen counter without having to do any tricks.

They are less worried about their current creations FOOMing into god-level superintelligences, because current AI architectures are not FOOMable, and it seems quite unlikely that you can create a self-improving ultraintelligence by accident. Except when that's exactly what they plan for them to do (i.e. Shane Legg).

Juergen Schmidhuber gave an interview on this very website where he basically said that he expects his Goedel Machines to undergo a hard takeoff at some point, with right and wrong being decided retrospectively by the victors of the resulting Artilect War. He may have been trolling, but it's a bit hard to tell.

Comment author: private_messaging 28 January 2014 04:18:03PM 1 point [-]

I'd need to have links and to read it by myself.

With regards to reinforcement learning, one thing to note is that the learning process is in general not the same thing as the intelligence that is being built by the learning process. E.g. if you were to evolve some ecosystem of programs by using "rewards" and "punishments", the resulting code ends up with distinct goals (just as humans are capable of inventing and using birth control). Not understanding this, local geniuses of AI risk have been going on about "omg he's so stupid, it's going to convert the solar system to smiley faces" with regards to at least one actual AI researcher.

Comment author: [deleted] 28 January 2014 04:31:05PM 1 point [-]

I'd need to have links and to read it by myself.

Here is his interview. It's very, very hard to tell if he's got his tongue firmly in cheek (he refers to minds of human-level intelligence and our problems as being "small"), or if he's enjoying an opportunity to troll the hell out of some organization with a low opinion of his work.

With regards to reinforcement learning, one thing to note is that the learning process is in general not the same thing as the intelligence that is being built by the learning process.

With respect to genetic algorithms, you are correct. With respect to something like neural networks (real-world stuff) or AIXI (pure theory), you are incorrect. This is actually why machine-learning experts differentiate between evolutionary algorithms ("use an evolutionary process to create an agent that scores well on X") and direct learning approaches ("the agent learns to score well on X").
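To make the distinction concrete, here is a minimal tabular Q-learning sketch of the "direct learning" case: a single agent updating its own value estimates from reward on a toy gridworld. The environment and all names here are illustrative inventions for this sketch, not anything from DeepMind's or anyone else's actual systems:

```python
import random

# Toy 1-D gridworld: states 0..4, reward only on reaching state 4.
# Tabular Q-learning is "direct learning": the agent itself refines its
# value estimates from reward, rather than a population of candidate
# agents being selected on fitness (the evolutionary approach).
N_STATES = 5
ACTIONS = (-1, +1)  # move left, move right

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0)

def train(episodes=2000, alpha=0.5, gamma=0.9, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(20):
            a = rng.choice(ACTIONS)  # random exploration (off-policy)
            nxt, r = step(s, a)
            best_next = max(q[(nxt, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            if r > 0:
                break  # treat the reward state as terminal
            s = nxt
    return q

q = train()
# Greedy policy for each non-terminal state; after training it should
# prefer moving right (+1) everywhere, since reward is at state 4.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

The point of the sketch is only where the learning lives: here the reward signal directly shapes one agent's value table, whereas in an evolutionary setup the reward would shape which agents survive, and the surviving agents' internal goals need not match the reward at all.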

Not understanding this, local genuises of the AI risk been going on about "omg he's so stupid it's going to convert the solar system to smiley faces" with regards to at least one actual AI researcher.

What, really? I mean, while I do get worried about things like Google trying to take over the world, that's because they're ideological Singularitarians. They know the danger line is there, and intend to step over it. I do not believe that most competent Really Broad Machine Learning (let's use that nickname for AGI) researchers are deliberately, suicidally evil, but then again, I don't believe you can accidentally make a dangerous-level AGI (i.e. a program that acts as a VNM-rational agent in pursuit of an inhumane goal).

Accidental and evolved programs are usually just plain not rational agents, and therefore pose rather more limited dangers (crashing your car, as opposed to killing everyone everywhere).

Comment author: atorm 27 January 2014 12:12:13PM 6 points [-]

Upvoted for writing style.

Comment author: [deleted] 27 January 2014 04:16:36PM 4 points [-]

I'm quite happy to hear that, but it's not very useful advice. I'm not an AIXI agent, so I can't deduce what's being praised solely from the fact that it is praised.

Comment author: wedrifid 03 February 2014 08:55:35AM -1 points [-]

I'm quite happy to hear that, but it's not very useful advice. I'm not an AIXI agent, so I can't deduce what's being praised solely from the fact that it is praised.

You can, however, glean information about how to write, particularly given that the reasoning was made explicit. That probably has more actual practical value for just about all readers.

Comment author: Tenoke 27 January 2014 11:36:00AM 2 points [-]

..has someone had a polite word with them about not killing all humans by sheer accident?

If you believe this, DeepMind had to push for an ethics board, which suggests that people are mentioning it to Google and that Google is not taking the issue too seriously.

Comment author: yohui 27 January 2014 11:48:22AM 8 points [-]

That interpretation seems tenuous. The sentence in which "pushed" is used:

The DeepMind-Google ethics board, which DeepMind pushed for, will devise rules for how Google can and can't use the technology.

suggests nothing more than that the proposal originated with DeepMind. I may as well imagine that Google's apparent amenability to the arrangement augurs well.

(Unless the article goes on to explain further? Not a subscriber.)

Comment author: Khoth 27 January 2014 11:45:44AM 3 points [-]

I'd expect their ethics board will in any case not be about the risk of killing all humans, but about things like privacy issues and the more mundane safety issues that you get when you connect a machine learning thing to a robot (or a car).

Comment author: David_Gerard 27 January 2014 11:56:38AM 3 points [-]

but about things like privacy issues

I'm sure Google will be right on that.

Comment author: Eneasz 27 January 2014 10:12:04PM 1 point [-]

Well, someone's gotta do it.

Comment author: CellBioGuy 29 January 2014 02:53:32AM *  1 point [-]

A Weyland-Yutani style outcome is a far bigger risk. EDIT: Does this mean anti-trust laws probably should've hit them a long time ago?

Comment author: Vulture 30 January 2014 09:45:48PM 0 points [-]

Should've, sure. Didn't. And won't, in all likelihood. Google is very, very rich, influential, and popular with the public, so the chances of them getting taken down a notch legally (or in pretty much any other way) are low.

Comment author: knb 28 January 2014 07:44:47AM *  1 point [-]

I was somewhat concerned when Google hired Kurzweil because he comes across as very Pollyanna-ish in his popular writings.

Now they're buying a company founded by the guy who created this game.

Comment author: [deleted] 28 January 2014 01:48:21PM 2 points [-]

/sigh

Yet another game I could play in my Copious Free Time. I really need to figure out how to make my morning routine more efficient so I don't end up distracted by the internet when I'm lacking a hard deadline, thus recovering a few hours a day of spare time.

Comment author: Chatham 26 February 2014 04:46:01AM 0 points [-]

The predictions in his popular writings have been pretty off base. More unsettling is the way he twists the words around to pretend they're accurate.

Comment author: knb 26 February 2014 10:27:56AM 1 point [-]

I'm most worried about the fact that Kurzweil argued that AGI would be no threat to humans because we would "merge with the machines". He always left it vague how he knew that would happen, and how he knew that would stop AI from being a threat.

Comment author: Chatham 26 February 2014 04:08:03PM 0 points [-]

Agreed, especially since, from what I've seen, Kurzweil's reason for being so sanguine about global warming is exponential growth. He doesn't seem to reflect on the problems that global warming is causing right now, or on the fact that the growth in renewables has come in large part because of people who are concerned.

And the idea that we shouldn't worry isn't reassuring when it comes from someone whose predictions of the future have mostly been incorrect. This is a man who stands by his predictions that by 2009, human musicians and cybernetic musicians would routinely play music together and that most text would come from voice recognition software, not keyboards. Anyone who takes him seriously should re-read that chapter of predictions for 2009 (which talks about 3D entertainment rooms, the growing popularity of computer authors, 3D art from computer artists displayed on screens hung up in people's houses, nanobots that think for themselves, the growing industry of creating personalities for the artificial personas we routinely communicate with, etc.) and keep in mind that Kurzweil says his predictions were mostly accurate.