
Comment author: HungryHobo 27 July 2015 05:08:14PM *  1 point

I may not have been clear: by UFAI I didn't mean an AI that merely trashes the home planet of the civilization that creates it and then stops, but rather one that goes on to convert the remainder of the universe into computronium to store the integer for its fitness function, or some similar doomsday scenario.

It doesn't matter how careful you are about AI if there are a million other civilizations in the universe and some non-trivial portion of them aren't being as careful as they should be, assuming it's a non-trivial risk.

Which argues either for AI risk not being so risky, or for an early filter.

Comment author: CarlShulman 28 July 2015 10:50:12PM *  1 point

Let's consider a few propositions:

  1. There is enough cumulative early filtration that very few civilizations develop, with less than 1 in expectation in a region like our past light-cone.
  2. Interstellar travel is impossible.
  3. Some civilizations have expanded but not engaged in mega-scale engineering that we could see or colonization that would have pre-empted our existence, and enforce their rules on dissenters.
  4. Civilizations very reliably wipe themselves out before they can colonize.
  5. Civilizations very reliably choose not to expand at all.

Propositions 1-3 account for the Great Filter directly, and whether biological beings make AI they are happy with is irrelevant. For #4 and #5, what difference does it make whether biological beings make 'FAI' that helps them or 'UFAI' that kills them before going about its business? Either way the civilization (biological, machine, or both) could still wipe itself out or not (AIs could nuke each other out of existence too), and send out colonizers or not.

Unless there is some argument that 'UFAI' is much less likely to wipe out civilization (including itself), or much more likely to send out colonizers, how do the odds of alien 'FAI' vs 'UFAI' matter for explaining the Great Filter any more than whether aliens have scales or feathers? Either way they could produce visible signs or colonize Earth.

Comment author: HungryHobo 27 July 2015 01:13:00PM 0 points

There's also the UFAI-Fermi-paradox: if AI is a significant risk and intelligent life is common, why aren't we all paperclips already? AI doesn't work as a filter, because it's the kind of disaster likely to keep spreading; we'd expect to see large parts of the sky going dark as the stars get turned into pictures of smiling faces or computronium.

There's also the anthropic principle. We probably wouldn't be alive to ask the question in a universe where the earth has been turned into a strip-mall for aliens.

Though combining the anthropic principle with the Drake equation gives us another possibility.

Compute the Drake equation for a cone 60 light-years thick, extending back through time outwards from Earth: x billion planets, with y civilisations, each with z probability of producing a UFAI. That gives you a rough estimate of the chances of someone else's UFAI killing us all within your lifetime.
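
A back-of-the-envelope version of that calculation might look like the Python sketch below. Every number in it (the horizon radius, stellar density, fraction of stars that ever host a civilisation, probability of a UFAI launch, and the time window over which launches are spread) is a placeholder assumption for illustration, not an estimate anyone has defended:

  import math

  # --- placeholder assumptions, purely illustrative ---
  R = 5.0e4          # horizon considered, in light-years (roughly galactic scale)
  L = 60.0           # observer lifetime in years (the "60 light-years thick" cone)
  n_stars = 4.0e-3   # stars per cubic light-year (solar-neighbourhood density)
  f_civ = 1.0e-8     # fraction of stars that ever host a technological civilisation
  p_ufai = 0.1       # probability such a civilisation launches an expanding UFAI
  T = 5.0e9          # years over which those launches are spread

  # Expected UFAI launches per cubic light-year per year.
  rate_density = n_stars * f_civ * p_ufai / T

  # A UFAI expanding at ~light speed, launched t years ago at distance d
  # light-years, reaches Earth (d - t) years from now.  Launches that hit us
  # within the next L years therefore occupy a shell of temporal thickness L
  # just inside our past light cone, with spacetime volume L * (4/3)*pi*R^3.
  expected_hits = rate_density * L * (4.0 / 3.0) * math.pi * R**3

  print(f"Expected UFAI arrivals within the next {L:.0f} years: {expected_hits:.3g}")

With these made-up inputs the expected count comes out vanishingly small, but the point of the exercise is that the answer is driven entirely by the assumed values of x, y, and z.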

Comment author: CarlShulman 27 July 2015 04:36:25PM 1 point

There's also the UFAI-Fermi-paradox:

This is just the regular Fermi paradox/Great Filter. If AI has any impact, it's that it may make space colonization easier. But what matters for that is that industrial civilizations will eventually develop AI (say, within a million years). Whether the ancient aliens would be happy with the civilization that does the colonizing (i.e. UFAI vs. FAI) is irrelevant to the Filter.

You could also have the endotherm-Fermi-paradox, or the hexapodal-Fermi-paradox, or the Klingon-Great-Filter, but there is little to be gained by slicing up the Filter in that way.

Comment author: CarlShulman 26 July 2015 05:04:42PM *  6 points

Furthermore, although smaller stars are much more common than larger stars (the Sun is actually larger than over 80% of stars in the universe) stars smaller than about 0.5 solar masses (and thus 0.08 solar luminosities) are usually ‘flare stars’ – possessing very strong convoluted magnetic fields and periodically putting out flares and X-ray bursts that would frequently strip away the ozone and possibly even the atmosphere of an earthlike planet.

I have been wanting better stats on this for a while. Basically, what percentage of the eventual sum of potential-for-life-weighted habitable windows (undisturbed by technology) comes from small red dwarfs that can last far longer than our Sun, weighing their long stellar lifetimes against the various (nasty-looking) problems? ETA: wikipedia article. And how robust is the evidence?
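
For what it's worth, a minimal sketch of the kind of estimate being asked about might look like the following. It assumes a Kroupa-style broken power-law IMF, treats main-sequence lifetime (roughly 10 Gyr * M^-2.5) as the habitable window, and deliberately ignores the flare and atmosphere-stripping problems that are the whole point of the question; all of those choices are stand-ins rather than claims:

  import numpy as np

  def imf(m):
      # Kroupa-style IMF (unnormalized): dN/dM ~ M^-1.3 below 0.5 Msun and
      # ~ M^-2.3 above; the 0.5 factor keeps the two pieces continuous.
      return np.where(m < 0.5, m**-1.3, 0.5 * m**-2.3)

  def window_gyr(m):
      # Crude habitable-window proxy: main-sequence lifetime ~ 10 Gyr * M^-2.5,
      # uncapped, since the question is about the eventual sum over each
      # star's full lifetime.
      return 10.0 * m**-2.5

  masses = np.linspace(0.08, 2.0, 100_000)    # solar masses; heavier stars ignored
  weights = imf(masses) * window_gyr(masses)  # habitable star-years per unit mass

  small = masses < 0.5
  frac_small = np.trapz(weights[small], masses[small]) / np.trapz(weights, masses)
  print(f"Share of habitable star-years from stars below 0.5 Msun: {frac_small:.0%}")

Under those naive assumptions essentially all of the habitable star-years come from the small red dwarfs, which is exactly why the flare-star caveats matter so much for the overall answer.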

Comment author: Kaj_Sotala 06 March 2015 01:54:53PM 11 points

His article commentary on G+ seems to get more into the "dissing" territory:

Enough thoughtful AI researchers (including Yoshua Bengio, Yann LeCun) have criticized the hype about evil killer robots or "superintelligence," that I hope we can finally lay that argument to rest. This article summarizes why I don't currently spend my time working on preventing AI from turning evil.

Comment author: CarlShulman 06 March 2015 05:27:41PM *  8 points

See this video at 39:30 for Yann LeCun giving some comments. He said:

  • Human-level AI is not near
  • He agrees with Musk that there will be important issues when it becomes near
  • He thinks people should be talking about it but not acting, because a) there is some risk, and b) the public thinks there is more risk than there is

Also here is an IEEE interview:

Spectrum: You’ve already expressed your disagreement with many of the ideas associated with the Singularity movement. I’m interested in your thoughts about its sociology. How do you account for its popularity in Silicon Valley?

LeCun: It’s difficult to say. I’m kind of puzzled by that phenomenon. As Neil Gershenfeld has noted, the first part of a sigmoid looks a lot like an exponential. It’s another way of saying that what currently looks like exponential progress is very likely to hit some limit—physical, economical, societal—then go through an inflection point, and then saturate. I’m an optimist, but I’m also a realist.

There are people that you’d expect to hype the Singularity, like Ray Kurzweil. He’s a futurist. He likes to have this positivist view of the future. He sells a lot of books this way. But he has not contributed anything to the science of AI, as far as I can tell. He’s sold products based on technology, some of which were somewhat innovative, but nothing conceptually new. And certainly he has never written papers that taught the world anything on how to make progress in AI.

Spectrum: What do you think he is going to accomplish in his job at Google?

LeCun: Not much has come out so far.

Spectrum: I often notice when I talk to researchers about the Singularity that while privately they are extremely dismissive of it, in public, they’re much more temperate in their remarks. Is that because so many powerful people in Silicon Valley believe it?

LeCun: AI researchers, down in the trenches, have to strike a delicate balance: be optimistic about what you can achieve, but don’t oversell what you can do. Point out how difficult your job is, but don’t make it sound hopeless. You need to be honest with your funders, sponsors, and employers, with your peers and colleagues, with the public, and with yourself. It is difficult when there is a lot of uncertainty about future progress, and when less honest or more self-deluded people make wild claims of future success. That’s why we don’t like hype: it is made by people who are either dishonest or self-deluded, and makes the life of serious and honest scientists considerably more difficult.

When you are in the kind of position as Larry Page and Sergey Brin and Elon Musk and Mark Zuckerberg, you have to prepare for where technology is going in the long run. And you have a huge amount of resources to make the future happen in a way that you think will be good. So inevitably you have to ask yourself those questions: what will technology be like 10, 20, 30 years from now. It leads you to think about questions like the progress of AI, the Singularity, and questions of ethics.

Spectrum: Right. But you yourself have a very clear notion of where computers are going to go, and I don’t think you believe we will be downloading our consciousness into them in 30 years.

LeCun: Not anytime soon.

Spectrum: Or ever.

LeCun: No, you can’t say never; technology is advancing very quickly, at an accelerating pace. But there are things that are worth worrying about today, and there are things that are so far out that we can write science fiction about it, but there’s no reason to worry about it just now.

Comment author: John_Maxwell_IV 23 January 2015 03:41:33AM 2 points

That was in reference to the labor issue, right?

Comment author: CarlShulman 23 January 2015 05:37:17AM 6 points

AI that can't compete in the job market probably isn't a global catastrophic risk.

Comment author: JoshuaZ 15 January 2015 11:26:00PM 5 points

This is good news. In general, since existential risk as a whole seems underfunded, more funding for any one of these risks is a good thing. But a donation of this size for AI specifically makes me start to wonder whether people should identify other existential risks that are now more underfunded. In general, it takes a very large amount of money to change what has the highest marginal return, but this is a pretty large donation.

Comment author: CarlShulman 17 January 2015 12:11:10AM *  7 points

GiveWell is on the case, and has said it is looking at bio threats (as well as nukes, solar storms, and interruptions of agriculture). See their blog post on potential focus areas for global catastrophic risks.

The open letter is an indication that GiveWell should take AI risk more seriously, while the Musk donation is an indication that near-term room for more funding will be lower. That could go either way.

On the room for more funding question, it's worth noting that GiveWell and Good Ventures are now moving tens of millions of dollars per year, and have been talking about moving quite a bit more than Musk's donation to the areas the Open Philanthropy Project winds up prioritizing.

However, even if the amount of money does not exhaust the field, limits on how fast it can be digested, and on the efficient growth path, would favor gradually increasing activity.

Comment author: Pablo_Stafforini 15 January 2015 05:50:46AM *  1 point

Why should we consider possible rather than actual experiences in this context? It seems that cryonics patients who are successfully revived will retain their original reward circuitry, so I don't see why we should expect their best possible experiences to be as good as their worst possible experiences are bad, given that this is not the case for current humans.

Comment author: CarlShulman 16 January 2015 02:28:17AM *  2 points

For some of the same reasons depressed people take drugs to elevate their mood.

Comment author: CarlShulman 26 December 2014 09:58:02PM 0 points

Typo, "amplified" vs "amplify":

"on its motherboard as a makeshift radio to amplified oscillating signals from nearby computers"

Comment author: Brian_Tomasik 15 December 2014 05:03:27AM 2 points

Thanks for the correction! I changed "endorsed" to "discussed" in the OP. What I meant to convey was that these authors endorsed the logic of the argument given the premises (ignoring sim scenarios), rather than that they agreed with the argument all things considered.

Comment author: CarlShulman 15 December 2014 05:16:13AM 2 points

Thanks Brian.

Comment author: CarlShulman 15 December 2014 04:59:07AM *  2 points

It has been endorsed by Robin Hanson, Carl Shulman, and Nick Bostrom.

The article you cite for Shulman and Bostrom does not endorse the SIA-doomsday argument. It describes it, but:

  • Doesn't take a stance on SIA itself; it analyzes several alternatives, including SIA
  • Argues that the interaction with the Simulation Argument changes the conclusion of the Fermi Paradox SIA Doomsday argument, given the assumption of SIA.
