
Comment author: Stuart_Armstrong 18 September 2015 12:01:22PM 0 points [-]

This is supposed to be a toy model of excessive simplicity. Do you have suggestions for improving it (for purposes of presenting to others)?

Comment author: CarlShulman 18 September 2015 03:31:48PM 1 point [-]

Maybe explain how it works while it is being configured, and then stops working once B gets a better model of the situation or runs more trial-and-error attempts?

Comment author: CarlShulman 17 September 2015 07:15:31PM *  5 points [-]

An illustration with a game-playing AI: see 15:50 and after in the video. The system has a reward function based on bytes in memory, which leads it to pause the game forever when it is about to lose.

Comment author: Stuart_Armstrong 17 September 2015 06:34:54AM *  4 points [-]

Maybe the easiest way of generalising this is programming B to put 1 block in the hole, but, because B was trained in a noisy environment, it assigns only a 99.9% probability to a block being in the hole even when it observes it there. Then six blocks in the hole gives higher expected utility, and we get the same behaviour.
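
A minimal sketch of that expected-utility comparison, assuming B's utility is 1 when at least one block is really in the hole and that each observed block independently has a 99.9% chance of really being there (the independence assumption is mine):

    # Sketch of the expected-utility comparison above (assumptions noted in the lead-in).
    P_OBSERVED_BLOCK_REALLY_IN_HOLE = 0.999

    def expected_utility(n_blocks_observed: int) -> float:
        # Probability that at least one of the observed blocks is really in the hole.
        return 1 - (1 - P_OBSERVED_BLOCK_REALLY_IN_HOLE) ** n_blocks_observed

    print(expected_utility(1))  # 0.999
    print(expected_utility(6))  # ~1.0, strictly greater than 0.999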

Comment author: CarlShulman 17 September 2015 06:02:50PM *  1 point [-]

That still involves training it with no negative feedback error term for excess blocks (which would overwhelm a mere 0.1% uncertainty).

Comment author: CarlShulman 17 September 2015 03:02:48AM 2 points [-]

Of course, with this model it's a bit of a mystery why A gave B a reward function that gives 1 per block, instead of one that gives 1 for the first block and a penalty for additional blocks. Basically, why program B with a utility function so seriously out of whack with what you want when programming one perfectly aligned would have been easy?
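
For concreteness, a sketch of the two reward functions being contrasted; the exact shape of the penalty term is an assumption, since the comment only says "a penalty for additional blocks":

    def reward_per_block(blocks_in_hole: int) -> int:
        # The reward function A actually gave B: 1 point per block.
        return blocks_in_hole

    def reward_first_block_only(blocks_in_hole: int) -> int:
        # The suggested alternative: 1 for the first block, minus a penalty
        # for each additional block (the penalty size here is assumed).
        return min(blocks_in_hole, 1) - max(blocks_in_hole - 1, 0)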

Comment author: HungryHobo 29 July 2015 10:11:44AM 0 points [-]

Yes, #1 is equivalent to an early filter.

#2 would be somewhat surprising, since there's no physical law that disallows it.

#3 comes close to theology and would imply low AI risk, since such entities would probably not allow a potentially dangerous AI to exist within any area they control.

#4 is sort of a re-phrasing of #1.

#5 is possible, but implies some strong reason why many civilizations would all reliably choose the same option.

> For #4 and #5 what difference does it make whether biological beings make 'FAI' that helps them or 'UFAI' that kills them before going about its business?

Do you mean that an alien FAI may look very much like an UFAI to us? If so, I agree.

Comment author: CarlShulman 30 July 2015 12:05:13AM *  0 points [-]

#1 is an early filter, meaning one that acts before our current state; #4 would act around or after our current state.

> Do you mean that an alien FAI may look very much like an UFAI to us? If so, I agree.

Not in the sense of harming us. For the Fermi paradox, visible benevolent aliens are as inconsistent with our observations as murderous Berserkers.

I'm trying to get you to explain why you think a belief that "AI is a significant risk" would change our credence in any of #1-5, compared to not believing that.

Comment author: HungryHobo 27 July 2015 05:08:14PM *  1 point [-]

I may not have been clear: by UFAI I didn't mean just an AI which trashes the home planet of the civilization that creates it and then stops, but rather one which goes on to convert the remainder of the universe into computronium to store the integer for its fitness function, or some similar doomsday scenario.

It doesn't matter how safe you are about AI if there are a million other civilizations in the universe and some non-trivial portion of them aren't being as careful as they should be, assuming it's a non-trivial risk.

Which argues either for AI risk not being so risky, or for an early filter.

Comment author: CarlShulman 28 July 2015 10:50:12PM *  1 point [-]

Let's consider a few propositions:

  1. There is enough cumulative early filtration that very few civilizations develop, with less than 1 in expectation in a region like our past light-cone.
  2. Interstellar travel is impossible.
  3. Some civilizations have expanded but not engaged in mega-scale engineering that we could see or colonization that would have pre-empted our existence, and enforce their rules on dissenters.
  4. Civilizations very reliably wipe themselves out before they can colonize.
  5. Civilizations very reliably choose not to expand at all.

1-3 account for the Great Filter directly, and whether biological beings make AI they are happy with is irrelevant. For #4 and #5 what difference does it make whether biological beings make 'FAI' that helps them or 'UFAI' that kills them before going about its business? Either way the civilization (biological, machine, or both) could still wipe itself out or not (AIs could nuke each other out of existence too), and send out colonizers or not.

Unless there is some argument that 'UFAI' is much less likely to wipe out civilization (including itself), or much more likely to send out colonizers, how do the odds of alien 'FAI' vs 'UFAI' matter for explaining the Great Filter any more than whether aliens have scales or feathers? Either way they could produce visible signs or colonize Earth.

Comment author: HungryHobo 27 July 2015 01:13:00PM 0 points [-]

There's also the UFAI-Fermi-paradox: if AI is a significant risk and intelligent life is common, why aren't we all paperclips already? AI doesn't work as a filter, because it's the kind of disaster likely to keep spreading; we'd expect to see large parts of the sky going dark as the stars get turned into pictures of smiling faces or computronium.

There's also the anthropic principle. We probably wouldn't be alive to ask the question in a universe where the earth has been turned into a strip-mall for aliens.

Though combining the anthropic principle with the Drake equation gives us another possibility.

Compute the Drake equation for a cone 60 light years thick, extending back through time outwards from Earth: x billion planets, with y civilisations, each with z probability of producing a UFAI. That gives you a rough estimate of the chances of someone else's UFAI killing us all within your lifetime.
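
A minimal sketch of that back-of-the-envelope calculation; every numerical value below is an illustrative placeholder, not an estimate from the comment:

    import math

    planets_in_cone = 20e9   # habitable planets in the ~60-light-year-thick past cone (assumed)
    p_civilisation = 1e-9    # probability a given planet hosts a technological civilisation (assumed)
    p_ufai = 0.1             # probability such a civilisation produces an expansionist UFAI (assumed)

    expected_ufais = planets_in_cone * p_civilisation * p_ufai

    # Treating UFAI origins as independent rare events (Poisson approximation),
    # the chance that at least one reaches us within a lifetime:
    p_at_least_one = 1 - math.exp(-expected_ufais)

    print(f"Expected UFAIs in the cone: {expected_ufais:.3g}")
    print(f"P(at least one): {p_at_least_one:.3g}")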

Comment author: CarlShulman 27 July 2015 04:36:25PM 2 points [-]

> There's also the UFAI-Fermi-paradox:

This is just the regular Fermi paradox/Great Filter. If AI has any impact, it's that it may make space colonization easier. But what matters for that is only that industrial civilizations eventually develop AI (say, within a million years). Whether the ancient aliens would be happy with the civilization that does the colonizing (i.e. FAI vs. UFAI) is irrelevant to the Filter.

You could also have the endotherm-Fermi-paradox, or the hexapodal-Fermi-paradox, or the Klingon-Great-Filter, but there is little to be gained by slicing up the Filter in that way.

Comment author: CarlShulman 26 July 2015 05:04:42PM *  6 points [-]

> Furthermore, although smaller stars are much more common than larger stars (the Sun is actually larger than over 80% of stars in the universe) stars smaller than about 0.5 solar masses (and thus 0.08 solar luminosities) are usually ‘flare stars’ – possessing very strong convoluted magnetic fields and periodically putting out flares and X-ray bursts that would frequently strip away the ozone and possibly even the atmosphere of an earthlike planet.

I have been wanting better stats on this for a while. Basically, what percentage of the eventual sum of potential-for-life-weighted habitable windows (undisturbed by technology) comes from small red dwarfs that can exist far longer than our sun, offsetting their long stellar lifetimes against the various (nasty-looking) problems? ETA: wikipedia article. And how robust is the evidence?
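
As a very rough illustration of the kind of estimate being asked for, here is a back-of-the-envelope sketch; the IMF slope, lifetime scaling, mass range, and flare-star discount are all assumptions, not answers to the question:

    import numpy as np

    # All ingredients are assumptions for illustration: a Salpeter-like IMF slope,
    # main-sequence lifetime scaling as M^-2.5 (relative to the Sun's ~10 Gyr),
    # and a crude habitability discount for flare activity below 0.5 solar masses.
    masses = np.linspace(0.08, 2.0, 2000)              # stellar mass in solar units (assumed range)
    imf = masses ** -2.35                               # Salpeter initial mass function (assumed)
    lifetime_gyr = 10.0 * masses ** -2.5                # rough main-sequence lifetime in Gyr
    flare_penalty = np.where(masses < 0.5, 0.5, 1.0)    # assumed discount for flare stars

    weighted_windows = imf * lifetime_gyr * flare_penalty
    frac_from_red_dwarfs = weighted_windows[masses < 0.5].sum() / weighted_windows.sum()
    print(f"Fraction of weighted habitable time from M < 0.5 M_sun: {frac_from_red_dwarfs:.2f}")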

Comment author: Kaj_Sotala 06 March 2015 01:54:53PM 11 points [-]

His article commentary on G+ seems to get more into the "dissing" territory:

Enough thoughtful AI researchers (including Yoshua Bengio​, Yann LeCun) have criticized the hype about evil killer robots or "superintelligence," that I hope we can finally lay that argument to rest. This article summarizes why I don't currently spend my time working on preventing AI from turning evil.

Comment author: CarlShulman 06 March 2015 05:27:41PM *  8 points [-]

See this video at 39:30 for Yann LeCun giving some comments. He said:

  • Human-level AI is not near
  • He agrees with Musk that there will be important issues when it becomes near
  • He thinks people should be talking about it but not acting because a) there is some risk b) the public thinks there is more risk than there is

Also here is an IEEE interview:

Spectrum: You’ve already expressed your disagreement with many of the ideas associated with the Singularity movement. I’m interested in your thoughts about its sociology. How do you account for its popularity in Silicon Valley?

LeCun: It’s difficult to say. I’m kind of puzzled by that phenomenon. As Neil Gershenfeld has noted, the first part of a sigmoid looks a lot like an exponential. It’s another way of saying that what currently looks like exponential progress is very likely to hit some limit—physical, economical, societal—then go through an inflection point, and then saturate. I’m an optimist, but I’m also a realist.

There are people that you’d expect to hype the Singularity, like Ray Kurzweil. He’s a futurist. He likes to have this positivist view of the future. He sells a lot of books this way. But he has not contributed anything to the science of AI, as far as I can tell. He’s sold products based on technology, some of which were somewhat innovative, but nothing conceptually new. And certainly he has never written papers that taught the world anything on how to make progress in AI.

Spectrum: What do you think he is going to accomplish in his job at Google?

LeCun: Not much has come out so far.

Spectrum: I often notice when I talk to researchers about the Singularity that while privately they are extremely dismissive of it, in public, they’re much more temperate in their remarks. Is that because so many powerful people in Silicon Valley believe it?

LeCun: AI researchers, down in the trenches, have to strike a delicate balance: be optimistic about what you can achieve, but don’t oversell what you can do. Point out how difficult your job is, but don’t make it sound hopeless. You need to be honest with your funders, sponsors, and employers, with your peers and colleagues, with the public, and with yourself. It is difficult when there is a lot of uncertainty about future progress, and when less honest or more self-deluded people make wild claims of future success. That’s why we don’t like hype: it is made by people who are either dishonest or self-deluded, and makes the life of serious and honest scientists considerably more difficult.

When you are in the kind of position as Larry Page and Sergey Brin and Elon Musk and Mark Zuckerberg, you have to prepare for where technology is going in the long run. And you have a huge amount of resources to make the future happen in a way that you think will be good. So inevitably you have to ask yourself those questions: what will technology be like 10, 20, 30 years from now. It leads you to think about questions like the progress of AI, the Singularity, and questions of ethics.

Spectrum: Right. But you yourself have a very clear notion of where computers are going to go, and I don’t think you believe we will be downloading our consciousness into them in 30 years.

LeCun: Not anytime soon.

Spectrum: Or ever.

LeCun: No, you can’t say never; technology is advancing very quickly, at an accelerating pace. But there are things that are worth worrying about today, and there are things that are so far out that we can write science fiction about it, but there’s no reason to worry about it just now.

Comment author: John_Maxwell_IV 23 January 2015 03:41:33AM 2 points [-]

That was in reference to the labor issue, right?

Comment author: CarlShulman 23 January 2015 05:37:17AM 6 points [-]

AI that can't compete in the job market probably isn't a global catastrophic risk.
