http://www.huffingtonpost.com/stephen-hawking/artificial-intelligence_b_5174265.html

Very surprised no one has linked to this yet:

TL;DR: AI is a very underfunded existential risk.

Nothing new here, but it's the biggest endorsement the cause has gotten so far. I'm greatly pleased they got Stuart Russell, though not Peter Norvig, who seems to remain lukewarm to the cause. It's also too bad this ran in the Huffington Post rather than somewhere more respectable. With some more thought I think the list of authors could have been more inclusive and a better publication found; still, I think this is pretty huge.

 


Hawking/Russell/Tegmark/Wilczek:

If a superior alien civilization sent us a text message saying, "We'll arrive in a few decades," would we just reply, "OK, call us when you get here -- we'll leave the lights on"? Probably not -- but this is more or less what is happening with AI.

Nice.

Actually, in the alien civilization scenario we would already be screwed: there wouldn't be much that could be done. This is not the case with AI.

If a few decades is enough to make an FAI, we could build one and either have it deal with the aliens, or have it upload everyone, put them in static storage, and send a few von Neumann probes to galaxies that will soon be outside the aliens' cosmological horizon, travelling faster than it would be economical for the aliens to chase us (assuming they are interested in maximum spread rather than maximum speed).

It is unlikely that the FAI would be able to deal with the aliens. The aliens would have (or be) their own "FAIs" much older and therefore more powerful.

Regarding probes to extremely far galaxies: that might theoretically work, depending on the economics of space colonization. We would survive at the cost of losing most of the potential colonization space. Neat.

It is unlikely that the FAI would be able to deal with the aliens. The aliens would have (or be) their own "FAIs" much older and therefore more powerful.

This needs unpacking of "deal with". A FAI is still capable of optimizing a "hopeless" situation better than humans, so if you focus on optimizing and not satisficing, it doesn't matter if the absolute value of the outcome is much less than without the aliens. Considering this comparison (value with aliens vs. without) is misleading, because it's a part of the problem statement, not of a consequentialist argument that informs some decision within that problem statement. FAI would be preferable simply as long as it delivers more expected value than alternative plans that would use the same resources to do something else.

Apart from that general point, it might turn out to be easy (for an AGI) to quickly develop significant control over a local area of the physical world that is expensive to take away (or to take away without hurting its value), even if the opponent is a superintelligence that has spent aeons working on this problem (by analogy with modern cryptography, where defense wins against much stronger offense). In that case a FAI would have something to bargain with.

A FAI is still capable of optimizing a "hopeless" situation better than humans...

This argument is not terribly convincing by itself. For example, a Neanderthal is a much better optimizer than a fruit fly but both are almost equally powerless against an H-bomb.

...it might turn out to be easy (for an AGI) to quickly develop significant control over a local area of the physical world that's expensive to take away...

Hmm, what about the following idea. The FAI could threaten to consume a large portion of the free energy in the solar system unless the aliens leave us a share of it. Assuming the 2nd law of thermodynamics is watertight, it would then be profitable for them to leave us a significant fraction (1/2?) of that portion rather than see all of it burned. Essentially it's the Ultimatum game. The negotiation can be done acausally, assuming each side has sufficient information about the other.

Thus we remain a small civilization but survive for a long time.
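A minimal payoff sketch of that Ultimatum-style threat (the demanded share and the fraction of free energy the FAI could actually burn are made-up assumptions, not anything from the thread): the threat is credible whenever the share the FAI demands is smaller than the fraction it could destroy in a fight.

```python
# Toy payoff model for the threat described above; all numbers are illustrative.

def alien_payoff(share_left_to_humans, fraction_fai_can_burn, free_energy=1.0):
    """Aliens' take, in units of the system's free energy, under each option."""
    accept = (1 - share_left_to_humans) * free_energy   # honor the split
    reject = (1 - fraction_fai_can_burn) * free_energy  # fight; the FAI burns what it can
    return accept, reject

# The demand is credible whenever accepting beats rejecting, i.e. the requested
# share is smaller than the fraction the FAI could destroy before losing.
accept, reject = alien_payoff(share_left_to_humans=0.5, fraction_fai_can_burn=0.6)
print(accept > reject)  # True: leaving humans half beats losing 60% to scorched earth
```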

For example, a Neanderthal is a much better optimizer than a fruit fly but both are almost equally powerless against an H-bomb.

There is no reason to expect exact equality, only close similarity. If you optimize, you still prefer something that's a tiny bit better to something that's a tiny bit worse. I'm not claiming that there is a significant difference. I'm claiming that there is some expected difference, all else equal, however tiny, which is all it takes to prefer one decision over another. In this case, a FAI gains you as much difference as available, minus the opportunity cost of FAI's development (if we set aside the difficulty in predicting success of a FAI development project).

(There are other illustrations I didn't give for how the difference may not be "tiny" in some senses of "tiny". For example, one possible effect is a few years of strongly optimized world, which might outweigh all of the moral value of the past human history. This is large compared to the value of millions of human lives, tiny compared to the value of uncontested future light cone.)

(I wouldn't give a Neanderthal as a relevant example of an optimizer, as the abstract argument about FAI's value is scrambled by the analogy beyond recognition. The Neanderthal in the example would have to be better than the fly at optimizing fly values (which may be impossible to usefully define for flies), and have enough optimization power to render the difference in bodies relatively morally irrelevant, compared to the consequences. Otherwise, the moral difference between their bodies is a confounder that renders the point of the difference in their optimization power, all else equal, moot, because all else is now significantly not equal.)

...a FAI gains you as much difference as available, minus the opportunity cost of FAI's development...

Exactly. So for building FAI to be a good idea we need to expect its benefits to outweigh the opportunity cost (we can spend the remaining time "partying" rather than developing FAI).

For example, one possible effect is a few years of strongly optimized world, which might outweigh all of the moral value of the past human history.

Neat. One way it might work is the FAI running much-faster-than-realtime WBEs so that we gain a huge number of subjective years of life. This works for any inevitable impending disaster.
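As a rough illustration of how much subjective time that buys (the speedup factor, time window, and population here are made-up assumptions):

```python
# Toy calculation: subjective time gained by faster-than-realtime WBEs.

wall_clock_years = 30        # hypothetical window before the disaster arrives
speedup = 10_000             # emulation speed relative to realtime
population = 10**10          # number of uploaded minds

subjective_years_per_person = wall_clock_years * speedup
total_subjective_person_years = subjective_years_per_person * population

print(subjective_years_per_person)     # 300,000 subjective years each
print(total_subjective_person_years)   # 3e15 subjective person-years in total
```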

Thus we remain a small civilization but survive for a long time.

It's not obvious that having a long time is preferable. For example, optimizing a large amount of resources in a short time might be better than optimizing a small amount of resources for a long time. Whatever's preferable, that's the trade that a FAI might be in a position to facilitate.

Just FYI, that analogy is originally due to Russell specifically, according to an interview I saw with Norvig.


Eeeehhhhh... it's not that surprising when you consider that billions of people really, truly believe in a form of divine-command moral realism that implies universally compelling arguments.

It is, however, worrying.

I'm not sure that being on HuffPo is a win. Unless you want to mingle in the company of SHOCKING celebrity secrets and 17 Nightmare-Inducing Easter Photos You Can't Unsee, that is...

Hard to find a major news site these days that isn't paywalled and hasn't hopped on the clickbait train, unfortunately. I'm hoping the fad will pass once these people realize that they're spending credibility faster than they get it back, but I'm not expecting it to happen anytime soon.

What This Superintelligent Computer Will Do To Your Species Will Completely Blow Your Mind! Literally.


You know, I'm fairly sure being paper-clipped will merely destroy my mind, rather than blow it. Oh well, got anything Friendly?

[This comment is no longer endorsed by its author]

Maybe that's because the whole concept of a "major news site" isn't looking good nowadays.

The Independent ran pretty much the same article

This has been circulating among the tumblrers for a little bit, and I wrote a quick infodump on where it comes from.

TL;DR: The article comes from (is co-authored by) a brand-new x-risk organization founded by Max Tegmark and four others, with all of the authors from its scientific advisory board.

I decided to post this with a catchy title (edit: in retrospect, that title doesn't put nearly enough emphasis on the danger aspect) to a bunch of subreddits to get it more recognition. Asking for upvotes is not allowed, so do with this information as you wish.

Submission on /r/technology

Submission on /r/TrueReddit

Submission on /r/Futurology

It's a big deal. In particular, I was startled to see Russell signing it. I don't put much weight on the physicists, who are well outside their area of expertise. But Russell is a totally respectable guy and this is exactly his nominal area. I interacted with him a few times as a student and he impressed me as a smart thoughtful guy who wasn't given to pet theories.

Has he ever stopped by MIRI to chat? The Berkeley CS faculty are famously busy, but I'd think if he bothers to name-check MIRI in a prominent article, he'd be willing to come by and have a real technical discussion.

I don't know, but I found his omission of MIRI in this interview (found via lukeprog's FB) surprising: http://is.gd/Dx0lw0

It's not surprising to me at all, I think you might have an overly inflated opinion of MIRI. MIRI has no mainstream academic status, and isn't getting more any time soon.

Not sure if you're saying I have an inflated opinion of MIRI or of MIRI's status. If it's the former, my own opinion, FWIW, is that what MIRI lacks in terms of academic status it makes up for by (initially) being the only org doing reasonably productive work in the area of safety research.

More specifically, AIMA mentions Friendly AI, and Yudkowsky by name, which is why I found the omission somewhat surprising.

If the quotation from their placeholder website on somertva's tumblr is to be believed, it's a "sister site" of fqxi.org. This worries me a little -- FQXI is funded by the John Templeton Foundation, which has its own agenda and one I don't much care for. Is FLI also Templeton-funded? I'm not aware that Templeton has had any particular malign influence on FQXI, though, and the people at this new organization don't seem like they've been cherry-picked for (e.g.) religion-friendliness, so maybe it's OK.

No, FLI has nothing to do with the Templeton Foundation. The website was a "sister site" of the FQXi site, because both organizations are run by Max, and he wanted to keep the same web platform for simplicity.

That's encouraging. Thanks for the information!

This article was pretty lacking in actual argument. I feel like if I hadn't already been concerned about AI risk, reading that wouldn't have changed my mind. I guess the fact that the authors are pretty high-powered authority figures makes it still somewhat significant, though.

The argument is simply an argument from authority. What more could you reasonably expect, given six paragraphs and the mega-sized inferential distance between physicists/computer scientists and the readers of the Huffington Post?