whpearson comments on Q&A with Michael Littman on risks from AI - Less Wrong Discussion

Post author: XiXiDu 19 December 2011 09:51AM


Comment author: whpearson 19 December 2011 09:53:58PM 6 points

It just has to be good at physics and engineering.

I would contend that it would also have to know what is in the current environment: what bacteria and other microorganisms it is likely to face (a question largely unexplored by humans), what chemicals it will have available (as potential feedstocks and poisons), and what radiation levels it will encounter.

To derive these from first principles, it would have to recreate the evolution of Earth from scratch.

Some engineering tasks are limited by computing power too; e.g. protein folding is an already formalized problem.

What do you mean by a formalized problem in this context? I'm interested in links on the subject.

Comment author: cousin_it 19 December 2011 11:12:23PM 2 points

Sorry for speaking so confidently. I don't really know much about protein folding, it was just the impression I got from Wikipedia: 1, 2.

Comment author: JoshuaZ 19 December 2011 10:06:59PM 3 points

There are a variety of formalized versions of protein folding; see for example this paper (pdf). There are, however, questions about whether these models are completely accurate. Computing how a protein will fold under a given model is often so difficult that testing the actual limits of the models is tricky. The model given in the paper I linked to is known to be too simplistic in many practical cases.
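To make "formalized" concrete: one classic formalization (not necessarily the one in the linked paper) is Dill's 2D HP lattice model. The sketch below is a hypothetical toy implementation under that model's assumptions: residues are H (hydrophobic) or P (polar), a fold is a self-avoiding walk on the integer grid, and the energy is -1 for each pair of H residues adjacent on the lattice but not consecutive in the chain.

```python
def hp_energy(sequence, fold):
    """Energy of a fold in the 2D HP lattice model.

    sequence: string of 'H'/'P' residues, e.g. "HPPH"
    fold: list of (x, y) grid points, one per residue
    """
    assert len(sequence) == len(fold)
    assert len(set(fold)) == len(fold), "fold must be self-avoiding"
    energy = 0
    for i in range(len(sequence)):
        for j in range(i + 2, len(sequence)):  # skip chain neighbours
            if sequence[i] == "H" == sequence[j]:
                (xi, yi), (xj, yj) = fold[i], fold[j]
                if abs(xi - xj) + abs(yi - yj) == 1:  # lattice contact
                    energy -= 1
    return energy

# A 4-residue chain folded into a unit square: the two H ends touch.
print(hp_energy("HPPH", [(0, 0), (1, 0), (1, 1), (0, 1)]))  # -> -1
```

"Folding" in this model means minimizing `hp_energy` over all self-avoiding walks, which is known to be NP-hard even for this drastically simplified energy function. That is the sense in which the problem is formalized yet still limited by computing power.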