
Comment author: Kaj_Sotala 06 March 2015 01:54:53PM 11 points [-]

His article commentary on G+ seems to get more into the "dissing" territory:

Enough thoughtful AI researchers (including Yoshua Bengio, Yann LeCun) have criticized the hype about evil killer robots or "superintelligence," that I hope we can finally lay that argument to rest. This article summarizes why I don't currently spend my time working on preventing AI from turning evil.

Comment author: CarlShulman 06 March 2015 05:27:41PM *  8 points [-]

See this video at 39:30 for Yann LeCun giving some comments. He said:

  • Human-level AI is not near
  • He agrees with Musk that there will be important issues when it becomes near
  • He thinks people should be talking about it but not acting, because (a) there is some risk, and (b) the public thinks there is more risk than there is

Also, here is an IEEE Spectrum interview:

Spectrum: You’ve already expressed your disagreement with many of the ideas associated with the Singularity movement. I’m interested in your thoughts about its sociology. How do you account for its popularity in Silicon Valley?

LeCun: It’s difficult to say. I’m kind of puzzled by that phenomenon. As Neil Gershenfeld has noted, the first part of a sigmoid looks a lot like an exponential. It’s another way of saying that what currently looks like exponential progress is very likely to hit some limit—physical, economical, societal—then go through an inflection point, and then saturate. I’m an optimist, but I’m also a realist.

There are people that you’d expect to hype the Singularity, like Ray Kurzweil. He’s a futurist. He likes to have this positivist view of the future. He sells a lot of books this way. But he has not contributed anything to the science of AI, as far as I can tell. He’s sold products based on technology, some of which were somewhat innovative, but nothing conceptually new. And certainly he has never written papers that taught the world anything on how to make progress in AI.

Spectrum: What do you think he is going to accomplish in his job at Google?

LeCun: Not much has come out so far.

Spectrum: I often notice when I talk to researchers about the Singularity that while privately they are extremely dismissive of it, in public, they’re much more temperate in their remarks. Is that because so many powerful people in Silicon Valley believe it?

LeCun: AI researchers, down in the trenches, have to strike a delicate balance: be optimistic about what you can achieve, but don’t oversell what you can do. Point out how difficult your job is, but don’t make it sound hopeless. You need to be honest with your funders, sponsors, and employers, with your peers and colleagues, with the public, and with yourself. It is difficult when there is a lot of uncertainty about future progress, and when less honest or more self-deluded people make wild claims of future success. That’s why we don’t like hype: it is made by people who are either dishonest or self-deluded, and makes the life of serious and honest scientists considerably more difficult.

When you are in the kind of position as Larry Page and Sergey Brin and Elon Musk and Mark Zuckerberg, you have to prepare for where technology is going in the long run. And you have a huge amount of resources to make the future happen in a way that you think will be good. So inevitably you have to ask yourself those questions: what will technology be like 10, 20, 30 years from now. It leads you to think about questions like the progress of AI, the Singularity, and questions of ethics.

Spectrum: Right. But you yourself have a very clear notion of where computers are going to go, and I don’t think you believe we will be downloading our consciousness into them in 30 years.

LeCun: Not anytime soon.

Spectrum: Or ever.

LeCun: No, you can’t say never; technology is advancing very quickly, at an accelerating pace. But there are things that are worth worrying about today, and there are things that are so far out that we can write science fiction about it, but there’s no reason to worry about it just now.

Comment author: John_Maxwell_IV 23 January 2015 03:41:33AM 2 points [-]

That was in reference to the labor issue, right?

Comment author: CarlShulman 23 January 2015 05:37:17AM 6 points [-]

AI that can't compete in the job market probably isn't a global catastrophic risk.

Comment author: JoshuaZ 15 January 2015 11:26:00PM 5 points [-]

This is good news. Since all forms of existential risk seem underfunded as a whole, more funding for any one of them is a good thing. But a donation of this size for AI specifically makes me start to wonder whether people should identify other existential risks that are now more underfunded. It normally takes a very large amount of money to change which area has the highest marginal return, but this is a pretty large donation.

Comment author: CarlShulman 17 January 2015 12:11:10AM *  7 points [-]

GiveWell is on the case, and has said it is looking at bio threats (as well as nukes, solar storms, interruptions of agriculture). See their blog post on global catastrophic risks potential focus areas.

The open letter is an indication that GiveWell should take AI risk more seriously, while the Musk donation is an indication that near-term room for more funding will be lower. That could go either way.

On the room for more funding question, it's worth noting that GiveWell and Good Ventures are now moving tens of millions of dollars per year, and have been talking about moving quite a bit more than Musk's donation to the areas the Open Philanthropy Project winds up prioritizing.

However, even if the amount of money does not exhaust the field, there may be limits on how quickly it can be digested, and an efficient growth path would favor gradually increasing activity.

Comment author: Pablo_Stafforini 15 January 2015 05:50:46AM *  1 point [-]

Why should we consider possible rather than actual experiences in this context? It seems that cryonics patients who are successfully revived will retain their original reward circuitry, so I don't see why we should expect their best possible experiences to be as good as their worst possible experiences are bad, given that this is not the case for current humans.

Comment author: CarlShulman 16 January 2015 02:28:17AM *  2 points [-]

For some of the same reasons depressed people take drugs to elevate their mood.

Comment author: CarlShulman 26 December 2014 09:58:02PM 0 points [-]

Typo, "amplified" vs "amplify":

"on its motherboard as a makeshift radio to amplified oscillating signals from nearby computers"

Comment author: Brian_Tomasik 15 December 2014 05:03:27AM 2 points [-]

Thanks for the correction! I changed "endorsed" to "discussed" in the OP. What I meant to convey was that these authors endorsed the logic of the argument given the premises (ignoring sim scenarios), rather than that they agreed with the argument all things considered.

Comment author: CarlShulman 15 December 2014 05:16:13AM 2 points [-]

Thanks Brian.

Comment author: CarlShulman 15 December 2014 04:59:07AM *  2 points [-]

It has been endorsed by Robin Hanson, Carl Shulman, and Nick Bostrom.

The article you cite for Shulman and Bostrom does not endorse the SIA-doomsday argument. It describes it, but:

  • Doesn't take a stance on the SIA; it does an analysis of alternatives including SIA
  • Argues that the interaction with the Simulation Argument changes the conclusion of the Fermi Paradox SIA Doomsday argument given the assumption of SIA.

Comment author: examachine 17 November 2014 02:23:48PM 0 points [-]

We believe we can achieve trans-sapient performance by 2018, so he is not that far off the mark. But the dangers as such are highly overblown, exaggerated, pseudo-scientific fears, as always.

Comment author: CarlShulman 17 November 2014 05:37:49PM *  4 points [-]

By "we" do you mean Gök Us Sibernetik Ar & Ge in Turkey? How many people work there?

Comment author: PhilGoetz 07 October 2014 11:05:27PM 1 point [-]

It would be better than nothing. I am grinding one of my favorite axes more than I probably should. But those numbers make my case. My intuition says it would be hard to mine a few million SNPs, pick the most strongly associated 9500, and have them account for less than 0.29 of the variance, even if there were no relationship at all. And height is probably a very simple property, which may depend mainly on the intensity and duration of expression of a single growth program, minus interference from deficiencies or programs competing for resources.

Comment author: CarlShulman 07 October 2014 11:41:54PM 3 points [-]

"My intuition says it would be hard to mine a few million SNPs, pick the most strongly associated 9500, and have them account for less than .29 of the variance, even if there were no relationship at all."

With sample sizes of thousands or low tens of thousands you'd get almost nothing. Going from 130k to 250k subjects took it from 0.13 to 0.29 (where the total contribution of all common additive effects is around 0.5).

Most of the top 9500 are false positives (the top 697 are genome-wide significant and contribute most of the variance explained). Larger sample sizes let you overcome noise and correctly weight the alleles with actual effects. The approach looks set to explain everything you can get from common variants (the bulk of the heritability of height and IQ), without whole-genome sequencing for rare variants, just by scaling up another order of magnitude.
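
Here is a minimal simulation sketch of that point (my own toy, not from the study; the SNP counts, effect sizes, and sample sizes are illustrative made-up values): select the most strongly associated SNPs in a training sample of varying size, then check how much phenotypic variance the resulting score explains out of sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scale: real GWAS scan millions of SNPs; all numbers here are illustrative.
n_snps = 2_000      # SNPs scanned
n_causal = 500      # SNPs with genuine small additive effects
h2 = 0.5            # variance explained by all causal SNPs combined
top_k = 100         # how many top-associated SNPs go into the score

true_beta = np.zeros(n_snps)
true_beta[:n_causal] = rng.normal(0.0, np.sqrt(h2 / n_causal), n_causal)

def simulate(n):
    """Standardized 0/1/2 genotypes plus an additive phenotype with noise."""
    g = rng.binomial(2, 0.5, size=(n, n_snps)).astype(float)
    g = (g - g.mean(axis=0)) / g.std(axis=0)
    y = g @ true_beta + rng.normal(0.0, np.sqrt(1.0 - h2), n)
    return g, y

g_test, y_test = simulate(10_000)

for n_train in (1_000, 5_000, 20_000):
    g_tr, y_tr = simulate(n_train)
    beta_hat = g_tr.T @ y_tr / n_train           # marginal one-SNP-at-a-time associations
    top = np.argsort(np.abs(beta_hat))[-top_k:]  # keep the most strongly associated SNPs
    score = g_test[:, top] @ beta_hat[top]       # polygenic score from those hits
    r2 = np.corrcoef(score, y_test)[0, 1] ** 2
    print(f"n_train={n_train:>6}: out-of-sample R^2 = {r2:.3f}")
```

With small training samples the top hits are mostly noise and the score explains almost nothing out of sample; as the sample grows, the same selection procedure starts picking up, and correctly weighting, the real effects.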

Comment author: PhilGoetz 07 October 2014 11:56:19AM *  1 point [-]

One problem is that for that approach, you would need, say, standardized IQ tests and genomes for a large number of people, and then to identify genome properties correlated with high IQ.

First, all biologists everywhere are still obsessed with "one gene" answers. Even when they use big-data tools, they use them to come up with lists of genes, each of which they say has a measurable independent contribution to whatever it is they're studying. This is looking for your keys under the lamppost. The effect of one gene allele depends on what alleles of other genes are present. But try to find anything in the literature acknowledging that. (Admittedly we have probably evolved for high independence of genes, so that we can reproduce through sex.)

Second, as soon as you start identifying genome properties associated with IQ, you'll get accused of racism.

Comment author: CarlShulman 07 October 2014 11:32:24PM *  3 points [-]

You can deal with epistasis using the techniques Hsu discusses and big datasets, and in any case additive variance terms account for most of the heritability even without doing that. There is much more about epistasis (and why it is of secondary importance for characterizing the variation) in the linked preprint.
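
As a toy illustration of why additive models capture so much even when gene action is not additive (my own sketch, not from the preprint): a phenotype driven purely by the interaction of two loci still yields a large additive variance component at intermediate allele frequencies.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000                                   # individuals
g1 = rng.binomial(2, 0.5, n).astype(float)    # two independent SNPs, allele frequency 0.5
g2 = rng.binomial(2, 0.5, n).astype(float)
y = g1 * g2                                   # purely epistatic (product) gene action

# Fit a purely additive model y ~ 1 + g1 + g2 by least squares.
X = np.column_stack([np.ones(n), g1, g2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
additive_r2 = 1.0 - np.var(y - X @ beta) / np.var(y)
print(f"additive R^2 = {additive_r2:.2f}")    # roughly 0.8: most of the variance shows up as additive
```

The additive share typically grows further as allele frequencies become more extreme, which is the standard reason additive variance dominates even when the underlying biology involves interactions.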
