Vladimir_Nesov comments on The Magnitude of His Own Folly - Less Wrong

Post author: Eliezer_Yudkowsky 30 September 2008 11:31AM

Comment author: Vladimir_Nesov 30 September 2008 03:43:45PM  5 points

Shane [Legg], unless you know that your plan leads to a good outcome, there is no point in getting there faster (and this applies to each step along the way). Outcompeting other risks only becomes relevant once you can provide a better outcome. If your plan says that you only launch an AGI when you know it's an FAI, you can't get there faster by omitting the FAI part. And if you do omit the FAI, you are just working for destruction, and there is no point in getting there faster.

Your argument might be amended to say that you can gain a crucial technical insight into FAI while working on AGI. I agree with that, but work on AGI should remain a strict subgoal: neither in the sense of "I'll fail at it anyway, but might learn something", nor "I'll genuinely try to build an AGI", but rather "I'll think about the technical side of developing an AGI, in order to learn something". Just as you study statistics, machine learning, information theory, computer science, cognitive science, evolutionary psychology, neuroscience, and so on, to develop an understanding of the problem of FAI, you might study your own FAI-care-free ideas about AGI. This is dangerous, but might prove useful. I don't know how useful it is, but neither do I know how useful modern machine learning is for the same task, beyond the basics. Thinking about AGI seems closer to the target than most of machine learning, yet we learn machine learning anyway. The catch is that there is currently no meaningful science of AGI.