Lumifer comments on The map of "Levels of defence" in AI safety

Post author: turchin, 12 December 2017 10:44AM

Comment author: Lumifer, 05 January 2018 04:35:27PM

There seems to be a complexity limit to what humans can build. A full AGI is likely to be somewhere beyond that limit.

The usual solution to that problem -- see EY's foom scenario -- is to make the process recursive: let a mediocre AI improve itself, and as it gets better, it can improve itself more rapidly. Exponential growth can go fast and far.
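(A toy model of that dynamic, my own sketch rather than anything from the foom literature: if each iteration's improvement is proportional to current capability, capability compounds geometrically. The 5% rate and starting level are made-up numbers.)

    # Toy model of recursive self-improvement; the rate and starting
    # capability are arbitrary illustrative parameters.
    def recursive_improvement(capability, rate, steps):
        """Each step improves the AI in proportion to its current
        capability: c_{n+1} = c_n * (1 + rate)."""
        trajectory = [capability]
        for _ in range(steps):
            capability *= 1 + rate  # a better AI makes a bigger next jump
            trajectory.append(capability)
        return trajectory

    # A mediocre AI gaining 5% per iteration is ~100x its starting
    # level within a hundred iterations.
    print(recursive_improvement(1.0, 0.05, 95)[-1])  # ~103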

This, of course, gives rise to another problem: you have no idea what the end product is going to look like. If you're looking at the gazillionth iteration, your compiler flags were probably lost around the thousandth iteration and your chained monitor system mutated into a cute puppy around the millionth iteration...
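To put a rough number on that: suppose, optimistically, that each self-modification independently preserves a given safety property with probability 0.999 (an arbitrary figure, purely for illustration). The survival probability after n iterations is 0.999^n, and it is mostly gone by the thousandth iteration:

    # Chance a safety invariant survives n independent
    # self-modifications, each preserving it with probability p.
    # p = 0.999 is an arbitrary illustration, not an estimate.
    p = 0.999
    for n in (100, 1000, 10000):
        print(f"after {n:>5} iterations: {p**n:.4f}")
    # after   100 iterations: 0.9048
    # after  1000 iterations: 0.3677
    # after 10000 iterations: 0.0000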

Probabilistic safety systems are indeed more tractable, but that's not the question. The question is whether they are good enough.
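One way to make "good enough" concrete, under the (unrealistic) assumption that layers fail independently: the combined failure probability of stacked defences is the product of the per-layer failure probabilities, and the question is whether that product clears the acceptable-risk bar. The layer numbers below are hypothetical:

    # Combined failure of independent defence layers:
    # P(all layers fail) = product of per-layer failure probabilities.
    # Real failures are correlated, which is exactly why establishing
    # "good enough" is hard; the 0.1 figures are hypothetical.
    from math import prod

    layer_failure = [0.1, 0.1, 0.1]   # three 90%-reliable layers
    print(prod(layer_failure))        # 0.001 -> 99.9% combined reliability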