gjm comments on 2013 Survey Results - Less Wrong

74 Post author: Yvain 19 January 2014 02:51AM

Comments (558)

Comment author: XiXiDu 19 January 2014 01:33:48PM *  5 points

Unfriendly AI: 233, 14.2%

Nanotech/grey goo: 57, 3.5%

Could someone who voted for unfriendly AI explain how nanotech or biotech isn't much more of a risk than unfriendly AI (I'll assume MIRI's definition here)?

I ask this question because it seems to me that even given a technological singularity there should be enough time for "unfriendly humans" to use precursors to fully-fledged artificial general intelligence (e.g. advanced tool AI) in order to solve nanotechnology or advanced biotech, technologies which will themselves enable unfriendly humans to cause a number of catastrophic risks (e.g. pandemics, nanotech wars, perfect global surveillance (an eternal tyranny), etc.).

Unfriendly AI, as imagined by MIRI, seems to be the end product of a developmental process that provides humans ample opportunity to wreak havoc.

I just don't see any good reason to believe that the tools and precursors to artificial general intelligence are not themselves disruptive technologies.

And in case you believe advanced nanotechnology to be infeasible, but unfriendly AI to be an existential risk, what concrete scenarios do you imagine by which such an AI could cause human extinction without nanotech?

Comment author: gjm 19 January 2014 02:07:29PM 3 points

Presumably many people fear a very rapid "hard takeoff" where the time from "interesting slightly-smarter-than-human AI experiment" to "full-blown technological singularity underway" is measured in days (or less) rather than months or years.

Comment author: XiXiDu 19 January 2014 03:45:45PM *  1 point

The AI risk scenario that Eliezer Yudkowsky uses relatively often is that of the AI solving the protein folding problem.

If you believe a "hard takeoff" to be probable, what reason is there to believe that the gap between a.) an AI capable of cracking that specific problem and b.) an AI triggering an intelligence explosion is too short for humans to do something similarly catastrophic with the resulting technological breakthrough?

In other words, does the protein folding problem require AI to reach a level of sophistication that would allow humans, or the AI itself, within days or months, to reach the stage at which it undergoes an intelligence explosion? How so?

Comment author: NancyLebovitz 26 January 2014 01:03:13AM 2 points

My assumption is that the protein-folding problem is unimaginably easier than an AI doing recursive self-improvement without breaking itself.

Admittedly, Eliezer is describing something harder than the usual interpretation of the protein-folding problem, but it still seems a lot less general than a program making itself more intelligent.

Comment author: TheOtherDave 19 January 2014 04:55:43PM 1 point

Is this question equivalent to "Is the protein-folding problem equivalently hard to the build-a-smarter-intelligence-than-I-am problem?" ? It seems like it ought to be, but I'm genuinely unsure, as the wording of your question kind of confuses me.

If so, my answer would be that it depends on how intelligent I am, since I expect the second problem to get more difficult as I get more intelligent. If we're talking about the actual me... yeah, I don't have higher confidence either way.

Comment author: XiXiDu 19 January 2014 06:17:46PM *  1 point

Is this question equivalent to "Is the protein-folding problem equivalently hard to the build-a-smarter-intelligence-than-I-am problem?" ?

It is mostly equivalent. Is it easier to design an AI that can solve one specific hard problem than an AI that can solve all hard problems?

Expecting that only a fully-fledged artificial general intelligence is able to solve the protein-folding problem seems equivalent to believing the conjunction "a universal problem solver can solve the protein-folding problem" AND "a universal problem solver is easier to build than the protein-folding problem is to solve". Are there good reasons to believe this?

ETA: My perception is that people who believe unfriendly AI to come sooner than nanotechnology believe that it is easier to devise a computer algorithm to devise a computer algorithm to predict protein structures from their sequences rather than to directly devise a computer algorithm to predict protein structures from their sequences. This seems counter-intuitive.

Comment author: TheOtherDave 19 January 2014 08:23:40PM 1 point

it is easier to devise a computer algorithm to devise a computer algorithm to predict protein structures from their sequences rather than to directly devise a computer algorithm to predict protein structures from their sequences. This seems counter-intuitive.

Ah, this helps, thanks.

For my own part, the idea that we might build tools better at algorithm-development than our own brains are doesn't seem counterintuitive at all... we build a lot of tools that are better than our own brains at a lot of things. Neither does it seem implausible that there exist problems that are solvable by algorithm-development, but whose solutions require algorithms that our brains aren't good enough algorithm-developers to produce.

So it seems reasonable enough that there are problems which we'll solve faster by developing algorithm-developers to solve them for us, than by trying to solve the problem itself.

Whether protein-folding is one of those problems, I have absolutely no idea. But it sounds like your position isn't unique to protein-folding.
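To make the "algorithm-developer" idea concrete (the primitives and function names below are my own toy assumptions, not anything proposed in the thread), here is a minimal sketch: a brute-force search over compositions of primitive operations that "develops" an algorithm matching input/output examples, an algorithm its author never wrote down directly.

```python
from itertools import product

# Primitive operations the search composes into candidate algorithms.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dbl": lambda x: x * 2,
    "sqr": lambda x: x * x,
}

def develop(examples, max_depth=3):
    """Toy 'algorithm-developer': search compositions of primitives
    for one consistent with every (input, output) example."""
    for depth in range(1, max_depth + 1):
        for names in product(PRIMITIVES, repeat=depth):
            def candidate(x, names=names):
                for n in names:
                    x = PRIMITIVES[n](x)
                return x
            if all(candidate(i) == o for i, o in examples):
                return names, candidate
    return None

# "Develop" an algorithm computing f(x) = 2x + 1 from examples alone.
result = develop([(1, 3), (2, 5), (5, 11)])
```

The search itself knows nothing about doubling or incrementing; it finds the composition ("dbl", "inc") by enumeration. The open question in the thread is whether anything like this scales to spaces where the sought algorithm is beyond what human brains produce directly.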

Comment author: XiXiDu 20 January 2014 10:18:53AM *  -1 points

For my own part, the idea that we might build tools better at algorithm-development than our own brains are doesn't seem counterintuitive at all...

So you believe that many mathematical problems are too hard for humans to solve but that humans can solve all of mathematics?

I already asked Timothy Gowers a similar question and I really don't understand how people can believe this.

In order to create an artificial mathematician it is first necessary to discover, prove and encode the mathematics of discovering and proving non-arbitrary mathematics (i.e. to encode a formalization of the natural language goal "be as good as humans at mathematics"). This seems much more difficult than solving any single problem. And that's just mathematics...

Neither does it seem implausible that there exist problems that are solvable by algorithm-development, but whose solutions require algorithms that our brains aren't good enough algorithm-developers to produce.

I do not disagree with this in theory. After all, evolution is an example of this. But it was not computationally simple for evolution to do so and it did do so by a bottom-up approach, piece by piece.

So it seems reasonable enough that there are problems which we'll solve faster by developing algorithm-developers to solve them for us, than by trying to solve the problem itself.

To paraphrase your sentence: It seems reasonable that we can design an algorithm that can design algorithms that we are unable to design.

This can only be true in the sense that this algorithm-design-algorithm would run faster on other computational substrates than human brains. I agree that this is possible. But are relevant algorithms in a class for which a speed advantage would be substantial?

Again, in theory, all of this is fine. But how do you know that general algorithm design can be captured by an algorithm that a.) is simpler than most specific algorithms b.) whose execution is faster than that of evolution c.) which can locate useful algorithms within the infinite space of programs and d.) that humans will discover this algorithm?

Some people here seem to be highly confident about this. How?

ETA: Maybe this post better highlights the problems I see.

Comment author: [deleted] 21 January 2014 06:41:41PM 0 points

I already asked Timothy Gowers a similar question and I really don't understand how people can believe this.

Why did you interview Gowers anyway? It's not like he has any domain knowledge in artificial intelligence.

Comment author: XiXiDu 21 January 2014 07:35:27PM *  2 points

Why did you interview Gowers anyway?

He works on automatic theorem proving. In addition, I was simply curious what a top-notch mathematician thinks about the whole subject.

Comment author: TheOtherDave 20 January 2014 02:50:30PM 0 points

So you believe that many mathematical problems are too hard for humans to solve but that humans can solve all of mathematics?

All of mathematics? Dunno. I'm not even sure what that phrase refers to. But sure, there exist mathematical problems that humans can't solve unaided, but which can be solved by tools we create.

I really don't understand how people can believe this. In order to create an artificial mathematician it is first necessary to discover, prove and encode the mathematics of discovering and proving non-arbitrary mathematics (i.e. to encode a formalization of the natural language goal "be as good as humans at mathematics"). This seems much more difficult than solving any single problem.

In other words: you believe that if we take all possible mathematical problems and sort them by difficulty-to-humans, that one will turn out to be the most difficult?

I don't mean to put words in your mouth here, I just want to make sure I understood you.

If so... why do you believe that?

To paraphrase your sentence: It seems reasonable that we can design an algorithm that can design algorithms that we are unable to design.

Yes, that's a fair paraphrase.

This can only be true in the sense that this algorithm-design-algorithm would run faster on other computational substrates than human brains. I agree that this is possible. But are relevant algorithms in a class for which a speed advantage would be substantial?

Nah, I'm not talking about speed.

But how do you know that general algorithm design can be captured by an algorithm that a.) is simpler than most specific algorithms

Can you clarify what you mean by "simpler" here? If you mean in some objective sense, like how many bits would be required to specify it in a maximally compressed form or some such thing, I don't claim that. If you mean easier for humans to develop... well, of course I don't know that, but it seems more plausible to me than the idea that human brains happen to be the optimal machine for developing algorithms.

b.) whose execution is faster than that of evolution

We have thus far done pretty well at this; evolution is slow. I don't expect that to change.

c.) which can locate useful algorithms within the infinite space of programs

Well, this is part of the problem specification. A tool for generating useless algorithms would be much easier to build.

d.) that humans will discover this algorithm?

(shrug) Perhaps we won't. Perhaps we won't solve protein-folding, either.

Some people here seem to be highly confident about this. How?

Can you quantify "highly confident" here?

For example, what confidence do you consider appropriate for the idea that there exists at least one useful algorithm A, and at least one artificial algorithm-developer AD, such that it's easier for humans to develop AD than to develop A, and it's easier for AD to develop A than it is for humans to develop A?

Comment author: XiXiDu 20 January 2014 04:35:38PM 1 point

In other words: you believe that if we take all possible mathematical problems and sort them by difficulty-to-humans, that one will turn out to be the most difficult?

If you want an artificial agent to solve problems for you then you need to somehow constrain it, since there are an infinite number of problems. In this sense it is easier to specify an AI to solve a single problem, such as the protein-folding problem, than all problems (whatever that means, supposedly "general intelligence").

The problem here is that goals and capabilities are not orthogonal. It is more difficult to design an AI that can play all possible games, and then tell it to play a certain game, than designing an AI to play a certain game in the first place.

Can you clarify what you mean by "simpler" here?

The information-theoretic complexity of the code of a general problem solver constrained to solve a specific problem should be larger than that of the constraint itself. I assume here that the constraint is most of the work in getting an algorithm to do useful work, which I like to exemplify by the difference between playing chess and doing mathematics. Both are rigorously defined activities, one of which has a clear and simple terminal goal, the other being infinite and thus hard to constrain.
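One toy way to read this "constraint" point (my own sketch, not from the thread): a fully general search is parameterized only by a goal predicate, and for problems with a crisp terminal goal the constraint really is a one-liner. Sorting stands in here for chess as the simply-constrained case.

```python
from itertools import permutations

def generic_solver(candidates, is_goal):
    """A fully general search: all problem-specific work lives in
    the goal predicate supplied as the constraint."""
    for c in candidates:
        if is_goal(c):
            return c
    return None

# Sorting has a crisp, one-line terminal goal, so constraining the
# general solver to it is cheap...
data = [3, 1, 2]
sorted_list = generic_solver(
    permutations(data),
    lambda p: all(p[i] <= p[i + 1] for i in range(len(p) - 1)),
)
# ...whereas a predicate for "interesting, well-motivated theorem"
# has no comparably short formalization.
```

The asymmetry being claimed is that writing `is_goal` for open-ended domains like mathematics is where the real difficulty lives, not in the search scaffolding around it.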

For example, what confidence do you consider appropriate for the idea that there exists at least one useful algorithm A, and at least one artificial algorithm-developer AD, such that it's easier for humans to develop AD than to develop A, and it's easier for AD to develop A than it is for humans to develop A?

The more general the artificial algorithm-developer is, the less confident I am that it is easier to create than the specific algorithm itself.

Comment author: TheOtherDave 20 January 2014 08:48:02PM 1 point

I agree that specialized tools to perform particular tasks are easier to design than general-purpose tools. It follows that if I understand a problem well enough to know what tasks must be performed in order to solve that problem, it should be easier to solve that problem by designing specialized tools to perform those tasks, than by designing a general-purpose problem solver.

I agree that the complexity of a general problem solver should be larger than that of whatever constrains it to work on a specific task.

I agree that for a randomly selected algorithm A2, and a randomly selected artificial algorithm-developer AD2, the more general AD2 is the more likely it is that A2 is easier to develop than AD2.

Comment author: gjm 19 January 2014 05:13:47PM -1 points

I have no strong opinion on whether a "hard takeoff" is probable. (Because I haven't thought about it a lot, not because I think the evidence is exquisitely balanced.) I don't see any particular reason to think that protein folding is the only possible route to a "hard takeoff".

What is alleged to make for an intelligence explosion is having a somewhat-superhuman AI that's able to modify itself or make new AIs reasonably quickly. A solution to the protein folding problem might offer one way to make new AIs much more capable than oneself, I suppose, but it's hardly the only way one can envisage.