
JoshuaZ comments on For FAI: Is "Molecular Nanotechnology" putting our best foot forward? - Less Wrong Discussion

48 Post author: leplen 22 June 2013 04:44AM



Comment author: JoshuaZ 23 June 2013 06:29:09PM 6 points

Another key factor here is that the machine that does this would operate inside an alien environment compared to existing life - it would operate in a clean vacuum, possibly at low temperatures, and would use extremely stiff subunits made of covalently bonded silicon or carbon

If you have to do this, then the threat of nanotech looks a lot smaller. Replicators that need a nearly perfect vacuum aren't much of a threat.

Also, this is one place where AI comes in. The universe doesn't have any trouble modeling the energetics of a large network of atoms. If we have trouble doing the same, even using gigantic computers made of many, many of these same atoms, then maybe the problem is that we are doing it in a hugely inefficient way. An entity smarter than humans might find a way to reformulate the math for calculations that are many orders of magnitude more efficient, or it might find a way to build a computer that more efficiently uses the atoms it is composed of.

This sounds very close to a default assumption that these processes are genuinely easy, not just to compute, but also to work out what solutions one actually wants. Answering "how will this protein most likely fold?" is computationally much easier (as far as we can tell) than answering "what protein will fold like this?" It may well be that both are substantially computationally easier than we currently think. Heck, it could be that P = NP, or it could be that even with P != NP there's still some extremely slow-growing algorithm that solves NP-complete problems. But these don't seem like likely scenarios unless one has some evidence for them.
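The asymmetry being claimed here can be made concrete with a deliberately toy model (nothing below is real protein chemistry; `fold` is just an arbitrary function that is easy to compute forward): evaluating the "fold" of a given sequence takes linear time, while the naive inverse design is a search over an exponentially large space.

```python
from itertools import product

# Toy "folding" rule (purely illustrative): a sequence over {H, P}
# "folds" to the tuple of lengths of its maximal runs of H.

def fold(seq):
    """Forward problem: deterministic, O(n) in the sequence length."""
    runs, count = [], 0
    for c in seq:
        if c == 'H':
            count += 1
        elif count:
            runs.append(count)
            count = 0
    if count:
        runs.append(count)
    return tuple(runs)

def inverse_fold(target, n):
    """Inverse problem: naive brute force over all 2**n sequences."""
    for candidate in product('HP', repeat=n):
        if fold(candidate) == target:
            return ''.join(candidate)
    return None  # no sequence of length n has this fold

print(fold('HHPHHH'))           # (2, 3)
print(inverse_fold((2, 3), 6))  # 'HHPHHH'
```

In this toy the inverse direction happens to have easy structure, which is exactly the kind of loophole being discussed: the question is whether the real design problem has exploitable structure too, or whether brute search is close to the best one can do.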

Comment author: pengvado 24 June 2013 10:45:39AM 2 points

Answering "how will this protein most likely fold?" is computationally much easier (as far as we can tell) than answering "what protein will fold like this?"

Got a reference for that? It's not obvious to me (CS background, not bio).

What if you have an algorithm that attempts to solve the "how will this protein most likely fold?" problem, but is only tractable on 1% of possible inputs, and just gives up on the other 99%? As long as the 1% contains enough interesting structures, it'll still work as a subroutine for the "what protein will fold like this?" problem. The search algorithm just has to avoid the proteins that it doesn't know how to evaluate. That's how human engineers work, anyway: "what does this pile of spaghetti code do?" is uncomputable in the worst case, but that doesn't stop programmers from solving "write a program that does X".
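This restrict-the-search-to-what-you-can-evaluate pattern can be sketched in toy code (every name and the "physics" here are hypothetical; `evaluate` stands in for an expensive forward simulation that gives up on most inputs):

```python
import random

def is_tractable(design):
    """Toy criterion for the '1%' the evaluator can handle:
    adjacent values may differ by at most 1 ('smooth' designs)."""
    return all(abs(a - b) <= 1 for a, b in zip(design, design[1:]))

def evaluate(design):
    """Forward evaluator: returns None (gives up) outside its
    tractable subset, otherwise a score."""
    if not is_tractable(design):
        return None
    return sum(design)  # stand-in for a real physical simulation

def search(target, n, iters=10000, seed=0):
    """Look for a design with the desired score, generating only
    candidates the evaluator is known to handle."""
    rng = random.Random(seed)
    for _ in range(iters):
        # Build a smooth design directly, so evaluate() never gives up.
        candidate = [rng.randint(0, 3)]
        for _ in range(n - 1):
            step = rng.choice([-1, 0, 1])
            candidate.append(min(3, max(0, candidate[-1] + step)))
        if evaluate(candidate) == target:  # tractable by construction
            return candidate
    return None

design = search(target=6, n=4)
```

The point of the sketch is that `search` never asks `evaluate` about the 99% of inputs it can't handle; whether this works for real molecular design depends on the tractable subset containing enough interesting structures, which is exactly the open question.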

Comment author: JoshuaZ 24 June 2013 03:11:23PM 1 point

Sure, see for example here, which discusses some of the issues involved. Your essential point may still have merit, though, because it is likely that many of the proteins we would want have much more restricted shapes than those in the general problem. Also, I don't know much about what work has been done in the last few years, so it is possible that the state of the art has changed substantially.

Comment author: Baughn 26 June 2013 02:50:27PM 1 point

If you have to do this, then the threat of nanotech looks a lot smaller. Replicators that need a nearly perfect vacuum aren't much of a threat.

The idea is to have a vacuum inside the machinery; a macroscopic nanofactory can still exist in an atmosphere.

Comment author: JoshuaZ 26 June 2013 07:18:04PM 1 point

Sure, but a lot of the hypothetical nanotech disasters require nanotech devices that are themselves very small (e.g. the grey goo scenarios). If one requires a macroscopic object to keep a stable vacuum, then the set of threats goes down by a lot. Obviously some threats would still be present (such as the possibility that almost anyone will be able to refine uranium), but many of them would not, and many of the obvious scenarios connected to AI would then look less likely.

Comment author: Baughn 27 June 2013 10:32:47AM 1 point

I don't know... I think 'grey goo' scenarios would still work even if the individual goolets are insect-sized.