XiXiDu comments on 2013 Survey Results - Less Wrong

74 Post author: Yvain 19 January 2014 02:51AM


Comments (558)


Comment author: XiXiDu 19 January 2014 01:33:48PM *  5 points [-]

Unfriendly AI: 233, 14.2%

Nanotech/grey goo: 57, 3.5%

Could someone who voted for unfriendly AI explain how nanotech or biotech isn't much more of a risk than unfriendly AI (I'll assume MIRI's definition here)?

I ask this question because it seems to me that even given a technological singularity there should be enough time for "unfriendly humans" to use precursors to fully fledged artificial general intelligence (e.g. advanced tool AI) in order to solve nanotechnology or advanced biotech. Technologies which themselves will enable unfriendly humans to cause a number of catastrophic risks (e.g. pandemics, nanotech wars, perfect global surveillance (an eternal tyranny) etc.).

Unfriendly AI, as imagined by MIRI, seems to be the end product of a developmental process that provides humans ample opportunity to wreak havoc.

I just don't see any good reason to believe that the tools and precursors to artificial general intelligence are not themselves disruptive technologies.

And in case you believe advanced nanotechnology to be infeasible, but unfriendly AI to be an existential risk, what concrete scenarios do you imagine on how such an AI could cause human extinction without nanotech?

Comment author: dspeyer 20 January 2014 05:03:30AM 5 points [-]

Two reasons: uFAI is deadlier than nano/biotech and easier to cause by accident.

If you build an AGI and botch friendliness, the world is in big trouble. If you build a nanite and botch friendliness, you have a worthless nanite. If you botch growth-control, it's still probably not going to eat more than your lab before it runs into micronutrient deficiencies. And if you somehow do build grey goo, people have a chance to call ahead of it and somehow block its spread. What makes uFAI so dangerous is that it can outthink any responders. Grey goo doesn't do that.

Comment author: XiXiDu 20 January 2014 09:37:30AM *  1 point [-]

This seems like a consistent answer to my original question. Thank you.

If you botch growth-control, it's still probably not going to eat more than your lab before it runs into micronutrient deficiencies.

You on the one hand believe that grey goo is not going to eat more than your lab before running out of steam and on the other hand believe that AI in conjunction with nanotechnology will not run out of steam, or only after humanity's demise.

And if you somehow do build grey goo, people have a chance to call ahead of it and somehow block its spread.

You further believe that AI can't be stopped but grey goo can.

Comment author: dspeyer 23 January 2014 01:05:02AM 7 points [-]

Accidental grey goo is unlikely to get out of the lab. If I design a nanite to self-replicate and spread through a living brain to report useful data to me, and I have an integer overflow bug in the "stop reproducing" code so that it never stops, I will probably kill the patient but that's it. Because the nanites are probably using glucose+O2 as their energy source. I never bothered to design them for anything else. Similarly if I sent solar-powered nanites to clean up Chernobyl I probably never gave them copper-refining capability -- plenty of copper wiring to eat there -- but if I botch the growth code they'll still stop when there's no more pre-refined copper to eat. Designing truly dangerous grey goo is hard and would have to be a deliberate effort.
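The "integer overflow bug in the stop-reproducing code" failure mode can be sketched in a few lines. Everything below -- the counter width, the limit, the function names -- is invented for illustration, not taken from any real design:

```python
# Hypothetical sketch: a nanite tracks its replication generation in a
# fixed-width counter and is meant to halt at a limit. If the limit does
# not fit in the counter's width, the counter wraps around before the
# halt check can ever fire, so replication never stops.

COUNTER_BITS = 8      # assumed width of the on-board generation counter
STOP_AT = 300         # intended stopping generation -- not representable in 8 bits

def next_generation(count):
    """Increment with 8-bit wraparound, as a narrow hardware counter would."""
    return (count + 1) % (1 << COUNTER_BITS)

def should_stop(count):
    """Buggy halt check: count cycles through 0..255, so it never reaches 300."""
    return count >= STOP_AT

count, halted = 0, False
for _ in range(1000):
    if should_stop(count):
        halted = True
        break
    count = next_generation(count)

print(halted)   # False: the halt condition is never met
```

In this sketch the halting logic itself never gives out; the replicators stop only for the external reason dspeyer names, exhaustion of their narrow energy or feedstock supply.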

As for stopping grey goo, why not? There'll be something that destroys it. Extreme heat, maybe. And however fast it spreads, radio goes faster. So someone about to get eaten radios a far-off military base saying "help! grey goo!" and the bomber planes full of incendiaries come forth to meet it.

Contrast uFAI, which has thought of this before it surfaces, and has already radioed forged orders to take all the bomber planes apart for maintenance or something.

Comment author: Eugine_Nier 23 January 2014 02:20:04AM 0 points [-]

Also, the larger the difference between the metabolisms of the nanites and the biosphere, the easier it is to find something toxic to one but not the other.

Comment author: KnaveOfAllTrades 19 January 2014 02:15:31PM *  4 points [-]

I think a large part of that may simply be LW'ers being more familiar with UFAI and therefore knowing more details that make it seem like a credible threat / availability heuristic. So for example I would expect e.g. Eliezer's estimate of the gap between the two to be less than the LW average. (Edit: Actually, I don't mean that his estimate of the gap would be lower, but something more like it would seem like less of a non-question to him and he would take nanotech a lot more seriously, even if he did still come down firmly on the side of UFAI being a bigger concern.)

Comment author: RobbBB 20 January 2014 11:24:42AM 3 points [-]

If I understand Eliezer's view, it's that we can't be extremely confident of whether artificial superintelligence or perilously advanced nanotechnology will come first, but (a) there aren't many obvious research projects likely to improve our chances against grey goo, whereas (b) there are numerous obvious research projects likely to improve our chances against unFriendly AI, and (c) inventing Friendly AI would solve both the grey goo problem and the uFAI problem.

Cheer up, the main threat from nanotech may be from brute-forced AI going FOOM and killing everyone long before nanotech is sophisticated enough to reproduce in open-air environments.

The question is what to do about nanotech disaster. As near as I can figure out, the main path into [safety] would be a sufficiently fast upload of humans followed by running them at a high enough speed to solve FAI before everything goes blooey.

But that's already assuming pretty sophisticated nanotech. I'm not sure what to do about moderately strong nanotech. I've never really heard of anything good to do about nanotech. It's one reason I'm not sending attention there.

Comment author: Kawoomba 20 January 2014 12:05:36PM *  2 points [-]

Considering ... please wait ... tttrrrrrr ... prima facie, Grey Goo scenarios may seem more likely simply because they make better "Great Filter" candidates; whereas a near-arbitrary Foomy would spread out in all directions at relativistic speeds, with self-replicators no overarching agenty will would accelerate them out across space (the insulation layer with the sparse materials).

So if we approached x-risks through the prism of their consequences (extinction, hence no discernible aliens) and then reasoned our way back to our present predicament, we would note that within AI-power-hierarchies (AGI and up) there are few distinct long-term dan-ranks (most such ranks would only be intermediary steps while the AI falls "upwards"), whereas it is much more conceivable that there are self-replicators which can e.g. transform enough carbon into carbon copies (of themselves) to render a planet uninhabitable, but which lack the oomph (and the agency) to do the same to their light cone.

Then I thought that Grey Goo may yet be more of a setback, a restart, not the ultimate planetary tombstone. Once everything got transformed into resident von Neumann machines, evolution amongst those copies would probably occur at some point, until eventually there may be new macroorganisms organized from self-replicating building blocks, which may again show significant agency and turn their gaze towards the stars.

Then again (round and round it goes), Grey Goo would still remain the better transient Great Filter candidate (and thus more likely than uFAI when viewed through the Great Filter spectroscope), simply because of the time scales involved. Assuming the Great Filter is in fact an actual absence of highly evolved civilizations in our neighborhood (as opposed to just hiding or other shenanigans), Grey Goo biosphere-resets may stall the Kardashev climb sufficiently to explain us not having witnessed other civs yet. Also, Grey Goo transformations may burn up all the local negentropy (nanobots don't work for free), precluding future evolution.

Anyways, I agree that FAI would be the most realistic long-term guardian against accidental nanogoo (ironically, also uFAI).

Comment author: RobbBB 20 January 2014 11:47:13PM *  4 points [-]

My own suspicion is that the bulk of the Great Filter is behind us. We've awoken into a fairly old universe. (Young in terms of total lifespan, but old in terms of maximally life-sustaining years.) If intelligent agents evolve easily but die out fast, we should expect to see a young universe.

We can also consider the possibility of stronger anthropic effects. Suppose intelligent species always succeed in building AGIs that propagate outward at approximately the speed of light, converting all life-sustaining energy into objects or agents outside our anthropic reference class. Then any particular intelligent species Z will observe a Fermi paradox no matter how common or rare intelligent species are, because if any other high-technology species had arisen first in Z's past light cone it would have prevented the existence of anything Z-like. (However, species in this scenario will observe much younger universes the smaller a Past Filter there is.)

So grey goo creates an actual Future Filter by killing their creators, but hyper-efficient hungry AGI creates an anthropic illusion of a Future Filter by devouring everything in their observable universe except the creator species. (And possibly devouring the creator species too; that's unclear. Evolved alien values are less likely to eat the universe than artificial unFriendly-relative-to-alien-values values are, but perhaps not dramatically less likely; and unFriendly-relative-to-creator AI is almost certainly more common than Friendly-relative-to-creator AI.)

Once everything got transformed into resident von Neumann machines, evolution amongst those copies would probably occur at some point, until eventually there may be new macroorganisms organized from self-replicating building blocks, which may again show significant agency and turn their gaze towards the stars.

Probably won't happen before the heat death of the universe. The scariest thing about nanodevices is that they don't evolve. A universe ruled by nanodevices is plausibly even worse (relative to human values) than one ruled by uFAI like Clippy, because it's vastly less interesting.

(Not because paperclips are better than nanites, but because there's at least one sophisticated mind to be found.)

Comment author: gjm 19 January 2014 02:07:29PM 3 points [-]

Presumably many people fear a very rapid "hard takeoff" where the time from "interesting slightly-smarter-than-human AI experiment" to "full-blown technological singularity underway" is measured in days (or less) rather than months or years.

Comment author: XiXiDu 19 January 2014 03:45:45PM *  1 point [-]

The AI risk scenario that Eliezer Yudkowsky relatively often uses is that of the AI solving the protein folding problem.

If you believe a "hard takeoff" to be probable, what reason is there to believe that the distance between a.) an AI capable of cracking that specific problem and b.) an AI triggering an intelligence explosion is too short for humans to do something similarly catastrophic as what the AI would have done with the resulting technological breakthrough?

In other words, does the protein folding problem require AI to reach a level of sophistication that would allow humans, or the AI itself, within days or months, to reach the stages where it undergoes an intelligence explosion? How so?

Comment author: NancyLebovitz 26 January 2014 01:03:13AM 2 points [-]

My assumption is that the protein-folding problem is unimaginably easier than an AI doing recursive self-improvement without breaking itself.

Admittedly, Eliezer is describing something harder than the usual interpretation of the protein-folding problem, but it still seems a lot less general than a program making itself more intelligent.

Comment author: TheOtherDave 19 January 2014 04:55:43PM 1 point [-]

Is this question equivalent to "Is the protein-folding problem equivalently hard to the build-a-smarter-intelligence-than-I-am problem?" ? It seems like it ought to be, but I'm genuinely unsure, as the wording of your question kind of confuses me.

If so, my answer would be that it depends on how intelligent I am, since I expect the second problem to get more difficult as I get more intelligent. If we're talking about the actual me... yeah, I don't have higher confidence either way.

Comment author: XiXiDu 19 January 2014 06:17:46PM *  1 point [-]

Is this question equivalent to "Is the protein-folding problem equivalently hard to the build-a-smarter-intelligence-than-I-am problem?" ?

It is mostly equivalent. Is it easier to design an AI that can solve one specific hard problem than an AI that can solve all hard problems?

Expecting that only a fully-fledged artificial general intelligence is able to solve the protein-folding problem seems to be equivalent to believing the conjunction "a universal problem solver can solve the protein-folding problem" AND "a universal problem solver is easier to create than a direct solution to the protein-folding problem". Are there good reasons to believe this?

ETA: My perception is that people who believe unfriendly AI to come sooner than nanotechnology believe that it is easier to devise a computer algorithm to devise a computer algorithm to predict protein structures from their sequences rather than to directly devise a computer algorithm to predict protein structures from their sequences. This seems counter-intuitive.

Comment author: TheOtherDave 19 January 2014 08:23:40PM 1 point [-]

it is easier to devise a computer algorithm to devise a computer algorithm to predict protein structures from their sequences rather than to directly devise a computer algorithm to predict protein structures from their sequences. This seems counter-intuitive.

Ah, this helps, thanks.

For my own part, the idea that we might build tools better at algorithm-development than our own brains are doesn't seem counterintuitive at all... we build a lot of tools that are better than our own brains at a lot of things. Neither does it seem implausible that there exist problems that are solvable by algorithm-development, but whose solution requires algorithms that our brains aren't good enough algorithm-developers to develop algorithms to solve.

So it seems reasonable enough that there are problems which we'll solve faster by developing algorithm-developers to solve them for us, than by trying to solve the problem itself.

Whether protein-folding is one of those problems, I have absolutely no idea. But it sounds like your position isn't unique to protein-folding.

Comment author: XiXiDu 20 January 2014 10:18:53AM *  -1 points [-]

For my own part, the idea that we might build tools better at algorithm-development than our own brains are doesn't seem counterintuitive at all...

So you believe that many mathematical problems are too hard for humans to solve but that humans can solve all of mathematics?

I already asked Timothy Gowers a similar question and I really don't understand how people can believe this.

In order to create an artificial mathematician it is first necessary to discover, prove and encode the mathematics of discovering and proving non-arbitrary mathematics (i.e. to encode a formalization of the natural language goal “be as good as humans at mathematics”). This seems much more difficult than solving any single problem. And that's just mathematics...

Neither does it seem implausible that there exist problems that are solvable by algorithm-development, but whose solution requires algorithms that our brains aren't good enough algorithm-developers to develop algorithms to solve.

I do not disagree with this in theory. After all, evolution is an example of this. But it was not computationally simple for evolution to do so, and it did so through a bottom-up approach, piece by piece.

So it seems reasonable enough that there are problems which we'll solve faster by developing algorithm-developers to solve them for us, than by trying to solve the problem itself.

To paraphrase your sentence: It seems reasonable that we can design an algorithm that can design algorithms that we are unable to design.

This can only be true in the sense that this algorithm-design-algorithm would run faster on other computational substrates than human brains. I agree that this is possible. But are relevant algorithms in a class for which a speed advantage would be substantial?

Again, in theory, all of this is fine. But how do you know that general algorithm design can be captured by an algorithm that a.) is simpler than most specific algorithms b.) whose execution is faster than that of evolution c.) which can locate useful algorithms within the infinite space of programs and d.) that humans will discover this algorithm?

Some people here seem to be highly confident about this. How?

ETA: Maybe this post better highlights the problems I see.

Comment author: [deleted] 21 January 2014 06:41:41PM 0 points [-]

I already asked Timothy Gowers a similar question and I really don't understand how people can believe this.

Why did you interview Gowers anyway? It's not like he has any domain knowledge in artificial intelligence.

Comment author: XiXiDu 21 January 2014 07:35:27PM *  2 points [-]

Why did you interview Gowers anyway?

He works on automatic theorem proving. In addition I was simply curious what a top-notch mathematician thinks about the whole subject.

Comment author: TheOtherDave 20 January 2014 02:50:30PM 0 points [-]

So you believe that many mathematical problems are too hard for humans to solve but that humans can solve all of mathematics?

All of mathematics? Dunno. I'm not even sure what that phrase refers to. But sure, there exist mathematical problems that humans can't solve unaided, but which can be solved by tools we create.

I really don't understand how people can believe this. In order to create an artificial mathematician it is first necssary to discover, prove and encode the mathematics of discovering and proving non-arbitrary mathematics (i.e. to encode a formalization of the natural language goal “be as good as humans at mathematics”). This seems much more difficult than solving any single problem.

In other words: you believe that if we take all possible mathematical problems and sort them by difficulty-to-humans, that one will turn out to be the most difficult?

I don't mean to put words in your mouth here, I just want to make sure I understood you.

If so... why do you believe that?

To paraphrase your sentence: It seems reasonable that we can design an algorithm that can design algorithms that we are unable to design.

Yes, that's a fair paraphrase.

This can only be true in the sense that this algorithm-design-algorithm would run faster on other computational substrates than human brains. I agree that this is possible. But are relevant algorithms in a class for which a speed advantage would be substantial?

Nah, I'm not talking about speed.

But how do you know that general algorithm design can be captured by an algorithm that a.) is simpler than most specific algorithms

Can you clarify what you mean by "simpler" here? If you mean in some objective sense, like how many bits would be required to specify it in a maximally compressed form or some such thing, I don't claim that. If you mean easier for humans to develop... well, of course I don't know that, but it seems more plausible to me than the idea that human brains happen to be the optimal machine for developing algorithms.

b.) whose execution is faster than that of evolution

We have thus far done pretty good at this; evolution is slow. I don't expect that to change.

c.) which can locate useful algorithms within the infinite space of programs

Well, this is part of the problem specification. A tool for generating useless algorithms would be much easier to build.

d.) that humans will discover this algorithm?

(shrug) Perhaps we won't. Perhaps we won't solve protein-folding, either.

Some people here seem to be highly confident about this. How?

Can you quantify "highly confident" here?

For example, what confidence do you consider appropriate for the idea that there exists at least one useful algorithm A, and at least one artificial algorithm-developer AD, such that it's easier for humans to develop AD than to develop A, and it's easier for AD to develop A than it is for humans to develop A?

Comment author: XiXiDu 20 January 2014 04:35:38PM 1 point [-]

In other words: you believe that if we take all possible mathematical problems and sort them by difficulty-to-humans, that one will turn out to be the most difficult?

If you want an artificial agent to solve problems for you then you need to somehow constrain it, since there are an infinite number of problems. In this sense it is easier to specify an AI to solve a single problem, such as the protein-folding problem, rather than all problems (whatever that means, supposedly "general intelligence").

The problem here is that goals and capabilities are not orthogonal. It is more difficult to design an AI that can play all possible games, and then tell it to play a certain game, than designing an AI to play a certain game in the first place.

Can you clarify what you mean by "simpler" here?

The information theoretic complexity of the code of a general problem solver constrained to solve a specific problem should be larger than the constraint itself. I assume here that the constraint is most of the work in getting an algorithm to do useful work. Which I like to exemplify by the difference between playing chess and doing mathematics. Both are rigorously defined activities, one of which has a clear and simple terminal goal, the other being infinite and thus hard to constrain.

For example, what confidence do you consider appropriate for the idea that there exists at least one useful algorithm A, and at least one artificial algorithm-developer AD, such that it's easier for humans to develop AD than to develop A, and it's easier for AD to develop A than it is for humans to develop A?

The more general the artificial algorithm-developer is, the less confident I am that it is easier to create than the specific algorithm itself.

Comment author: TheOtherDave 20 January 2014 08:48:02PM 1 point [-]

I agree that specialized tools to perform particular tasks are easier to design than general-purpose tools. It follows that if I understand a problem well enough to know what tasks must be performed in order to solve that problem, it should be easier to solve that problem by designing specialized tools to perform those tasks, than by designing a general-purpose problem solver.

I agree that the complexity of a general problem solver should be larger than that of whatever constrains it to work on a specific task.

I agree that for a randomly selected algorithm A2, and a randomly selected artificial algorithm-developer AD2, the more general AD2 is the more likely it is that A2 is easier to develop than AD2.

Comment author: gjm 19 January 2014 05:13:47PM -1 points [-]

I have no strong opinion on whether a "hard takeoff" is probable. (Because I haven't thought about it a lot, not because I think the evidence is exquisitely balanced.) I don't see any particular reason to think that protein folding is the only possible route to a "hard takeoff".

What is alleged to make for an intelligence explosion is having a somewhat-superhuman AI that's able to modify itself or make new AIs reasonably quickly. A solution to the protein folding problem might offer one way to make new AIs much more capable than oneself, I suppose, but it's hardly the only way one can envisage.

Comment author: MugaSofer 21 January 2014 12:58:24PM -1 points [-]

perfect global surveillance (an eternal tyranny)

Oooh, that would nicely solve the problem of the other impending apocalypses, wouldn't it?

Comment author: Eugine_Nier 20 January 2014 03:48:15AM -2 points [-]

How is grey goo realistically a threat, especially without a uFAI guiding it? Remember: grey goo has to out-compete the existing biosphere. This seems hard.

Comment author: Risto_Saarelma 20 January 2014 07:08:10AM 3 points [-]

Gray goo designs don't need to be built up with minuscule steps, each of which makes evolutionary sense, like the evolved biosphere was. This might open up designs that are feasible to invent, very difficult to evolve naturally, and sufficiently different from anything in the natural biosphere to do serious damage even without a billion years of evolutionary optimization.

Comment author: [deleted] 20 January 2014 08:32:54AM 1 point [-]

So far in the history of technology, deliberate design over a period of years has proven consistently less clever (in the sense of "efficiently capturing available mass-energy as living bodies") than evolution operating over aeons.

Comment author: Risto_Saarelma 20 January 2014 06:07:04PM 3 points [-]

And so far the more clever biosphere design is getting its thermodynamical shit handed to it everywhere the hairless apes go and decide to start building and burning stuff.

If a wish to a genie went really wrong and switched the terminal goals of every human on earth into destroying the earth's biosphere in the most thorough and efficient way possible, the biosphere would be toast, much cleverer than the humans or not. If the wish gave you a billion AGI robots with that terminal goal, any humans getting in their way would be dead and the biosphere would be toast again. But if the robots were really small and maybe not that smart, then we'd be entirely okay, right?

Comment author: [deleted] 20 January 2014 10:02:10PM 1 point [-]

Think about it: it's the intelligence that makes things dangerous. Try and engineer a nanoscale robot that's going to be able to unintelligently disassemble all living matter without getting eaten by a bacterium. Unintelligently, mind you: no invoking superintelligence as your fallback explanation.

Comment author: Risto_Saarelma 21 January 2014 03:32:14AM 1 point [-]

Humans aren't superintelligent, and are still able to design macroscale technology that can wipe out biospheres and that can be deployed and propagated with less intelligence than it took to design. I'm not taking the bet that you can't shrink down the scale of the technology and the amount of intelligence needed to deploy it while keeping around the at least human level designer. That sounds too much like the "I can't think of a way to do this right now, so it's obviously impossible" play.

Comment author: michaelsullivan 22 January 2014 06:14:23PM 1 point [-]

It seems that very few people considered the bad nanotech scenario obviously impossible, merely less likely to cause a near extinction event than uFAI.

Comment author: [deleted] 21 January 2014 07:42:43AM *  1 point [-]

In addition, to the best of my knowledge, trained scientists believe it impossible to turn the sky green and have all humans sprout spider legs. Mostly, they believe these things are impossible because they're impossible, not because scientists merely lack the leap of superintelligence or superdetermination necessary to kick logic out and do the impossible.

Comment author: CCC 21 January 2014 09:49:54AM 3 points [-]

If I wanted to turn the sky green for some reason (and had an infinite budget to work with), then one way to do it would be to release a fine, translucent green powder in the upper atmosphere in large quantities. (This might cause problems when it began to drift down far enough that it can be breathed in, of course). Alternatively, I could encase the planet Earth in a solid shell of green glass.

Comment author: CCC 21 January 2014 09:52:30AM 0 points [-]

Make it out of antimatter? Say, a nanoscale amount of anticarbon - just an unintelligent lump?

Dump enough of those on any (matter) biosphere and all the living matter will be very thoroughly disassembled.

Comment author: [deleted] 21 January 2014 12:30:14PM 2 points [-]

That's not a nanoscale robot, is it? It's antimatter: it annihilates matter, because that's what physics says it does. You're walking around the problem I handed you and just solving the "destroy lots of stuff" problem. Yes, it's easy to destroy lots of stuff: we knew that already. And yet if I ask you to invent grey goo in specific, you don't seem able to come up with a feasible design.

Comment author: CCC 21 January 2014 06:09:52PM 0 points [-]

How is it not a nanoscale robot? It is a nanoscale device that performs the assigned task. What does a robot have that the nanoscale anticarbon lump doesn't?

I admit that it's not the sort of thing one thinks of when one thinks of the word 'robot' (to be fair, though, what I think of when I think of the word 'robot' is not nanoscale either). But I have found that, often, a simple solution to a problem can be found by, as you put it, 'walking around' it to get to the desired outcome.

Comment author: Locaha 20 January 2014 09:06:59AM 1 point [-]

I'll have to disagree here. Evolution operating over aeons never got to jet engines and nuclear weapons. Maybe it needs more time?

Comment author: [deleted] 20 January 2014 04:20:35PM 3 points [-]

Category error: neither jet engines nor nuclear weapons capture available/free mass-energy as living (ie: self-reproducing) bodies. Evolution never got to those because it simply doesn't care about them: nuclear bombs can't have grandchildren.

Comment author: Locaha 20 January 2014 05:07:20PM 1 point [-]

You can use both jet engines and nuclear weapons to increase your relative fitness.

There are no living nuclear reactors, either, despite the vast potential of energy.

Comment author: Nornagest 20 January 2014 10:44:26PM 3 points [-]

There are organisms that use gamma radiation as an energy source. If we lived in an environment richer in naturally occurring radioisotopes, I think I'd expect to see more of this sort of thing -- maybe not up to the point of criticality, but maybe so.

Not much point in speculating, really; living on a planet that's better than four billion years old and of middling metallicity puts something of a damper on the basic biological potential of that pathway.

Comment author: Locaha 21 January 2014 07:14:52AM 1 point [-]

Not much point in speculating, really; living on a planet that's better than four billion years old and of middling metallicity puts something of a damper on the basic biological potential of that pathway.

And yet humanity did it, on a much smaller time scale. This is what I'm saying, we are better than evolution at some stuff.

Comment author: [deleted] 20 January 2014 10:03:09PM 0 points [-]

You can use both jet engines and nuclear weapons to increase your relative fitness.

Which living beings created by evolution have done -- also known as us!

Comment author: Locaha 21 January 2014 07:16:39AM 1 point [-]

This would be stretching the definition of evolution beyond its breaking point.

Comment author: CCC 21 January 2014 08:33:23AM *  2 points [-]

Evolution has got as far as basic jet engines; see the octopus for an example.

Interestingly, this page provides some relevant data; it seems that a squid's jet is significantly less energy-efficient than a fish's tail for propulsion. This may be why we see so little jet propulsion in the oceans...

Comment author: MugaSofer 21 January 2014 12:51:22PM 0 points [-]

So far in the history of technology, deliberate design over a period of years has proven consistently less clever (in the sense of "efficiently capturing available mass-energy as living bodies")

... because we don't know how to build "living bodies". That's a rather unfair comparison, regardless of whether your point is valid.

Although, of course, we built factory farms for that exact purpose, which are indeed more efficient at that task.

And there's genetic engineering, which can leapfrog over millions of years of evolution by nicking (simple, at our current tech level) adaptations from other organisms - whereas evolution would have to recreate them from scratch. I reflexively avoid anti-GM stuff due to overexposure when I was younger, but I wouldn't be surprised if a GM organism could outcompete a wild one, were a mad scientist to choose that as a goal rather than a disaster to be elaborately defended against. (Herbicide-resistant plants, for a start.)

So I suppose it isn't even very good at biasing the results, since it can still fail - depending, of course, on how true of a Scotsman you are, because those do take advantage of preexisting adaptations - and artificially induced ones, in the case of farm animals.

(Should this matter? Discuss.)

Comment author: Kawoomba 20 January 2014 09:57:52AM *  1 point [-]

grey goo has to out-compete the existing biosphere. This seems hard.

Really? Von Neumann machines (the universal assembler self-replicating variety, not the computer architecture) versus regular ol' mitosis, and you think mitosis would win out?

I've only ever heard "building self-replicating machinery on a nano-scale is really hard" as the main argument against the immediacy of that particular x-risk, never "even if there were self-replicators on a nano-scale, they would have a hard time out-competing the existing biosphere". Can you elaborate?

Comment author: Vaniver 20 January 2014 06:16:01PM *  3 points [-]

As one of my physics professors put it, "We already have grey goo. They're called bacteria."

The intuition behind the grey goo risk appears to be "as soon as someone makes a machine that can make itself, the world is a huge lump of matter and energy just waiting to be converted into copies of that machine." That is, of course, not true - matter and energy are prized and fought over, and any new contender is going to have to join the fight.

That's not to say it's impossible for an artificial self-replicating nanobot to beat the self-replicating nanobots which have evolved naturally, just that it's hard. For example, it's not clear to me what part of "regular ol' mitosis" you think is regular and easy to improve upon. Is it that the second copy is built internally, protecting it from attack and corruption?

Comment author: Kawoomba 20 January 2014 06:50:39PM *  2 points [-]

Bacteria et al. are only the locally optimal solution after a long series of selection steps, each of which generally needed to be an improvement upon the previous step, i.e. the result of a greedy algorithm. There are few problems in which you'd expect a greedy algorithm to end up anywhere but in a very local optimum:

DNA is a hilariously inefficient way of storing partly superfluous data (all of which must be copied at each mitosis): informational density could be an order of magnitude or more higher with minor modifications, and the safety redundancies are precarious at best compared to, e.g., a Hamming code. A few researchers in a poorly funded government lab can come up with deadlier viruses in a few years (remember the recent controversy) than what nature engineered in millennia. That's not to say that, compared to our current macroscopic technology, the informational feats of biological data transmission, duplication etc. aren't impressive, but that's only because we've not yet achieved molecular manufacturing (a necessity for a grey goo scenario). (We could go into more detail on gross biological inefficiencies if you'd like.)
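To make the Hamming-code comparison concrete, here's a minimal Hamming(7,4) sketch: 3 parity bits protect 4 data bits and let you correct any single-bit error, versus the 8 extra bits that naive triplication would need for the same guarantee. (Illustrative code only; real error-correcting schemes, and biology's own redundancy, are of course more involved.)

```python
# Hamming(7,4): 4 data bits + 3 parity bits, corrects any single-bit error.
# Overhead is 75%, versus 200% for naive "store everything three times".

def encode(d):
    # d: list of 4 bits [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # parity over codeword positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4  # parity over codeword positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4  # parity over codeword positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]  # positions 1..7

def decode(c):
    # c: list of 7 bits; returns the corrected 4 data bits
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based position of the flipped bit (0 = no error)
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1  # flip the corrupted bit back
    return [c[2], c[4], c[5], c[6]]

codeword = encode([1, 0, 1, 1])
corrupted = codeword[:]
corrupted[5] ^= 1  # a single bit flip in "transmission"
assert decode(corrupted) == [1, 0, 1, 1]  # original data recovered
```

The point isn't that a nanobot would literally use Hamming(7,4), just that engineered redundancy can buy far more error correction per bit of overhead than brute repetition.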

Would you expect some antibodies and phagocytosis to defeat an intelligently engineered self-replicating nanobot the size of a virus (but one which doesn't depend on live cells and lacks the telltale flaws and tradeoffs of the Pandemic-reminiscent "can't kill the host cell too quickly" variety, etc.)?

To me it seems like saying "if you drowned the world in acid, the biosphere could well win the fight in a semi-recognizable form and claim the negentropy for itself" (yes, cells can survive in extremely adverse environments and persist in some sort of niche, but I wouldn't exactly call such a pseudo-equilibrium winning, and the self-replicators wouldn't exactly wait for their carbon food source to evolutionarily adapt).

Comment author: Vaniver 20 January 2014 07:36:13PM 1 point [-]

A few researchers in a poorly funded government lab can come up with deadlier viruses in a few years (remember the recent controversy) than what nature engineered in millenia.

Killing one human is easier than converting the entire biosphere.

Would you expect some antibodies and phagocytosis to defeat an intelligently engineered self-replicating nanobot the size of a virus (but which doesn't depend on live cells and without the telltale flaws and tradeoffs of Pandemic-reminiscent"can't kill the host cell too quickly" etc.)?

Well, that depends on what I think the engineering constraints are. It could be that in order to be the size of a virus, self-assembly has to be outsourced. It could be that in order to be resistant to phagocytosis, it needs exotic materials which limit its growth rate and maximal growth.

To me it seems like saying "if you drowned the world in acid, the biosphere could well win the fight in a semi-recognizable form and claim the negentropy for themselves"

It's more "in order to drown the world in acid, you need to generate a lot of acid, and that's actually pretty hard."

Comment author: Eugine_Nier 21 January 2014 03:52:25AM 0 points [-]

A few researchers in a poorly funded government lab can come up with deadlier viruses in a few years (remember the recent controversy) than what nature engineered in millenia.

Yes, and you may have noticed that a bioengineered pandemic was voted the top threat.

Comment author: XiXiDu 20 January 2014 09:33:02AM 0 points [-]

How is grey goo realistically a threat, especially without a uFAI guiding it?

Is grey goo the only extinction type scenario possible if humans solve advanced nanotechnology? And do you really need an AI whose distance from an intelligence explosion is under 5 years in order to guide something like grey goo?

But yes, this is an answer to my original question. Thanks.