timtyler comments on The Magnitude of His Own Folly - Less Wrong
Eli, sometimes I find it hard to pin down what your position actually is. It seems to be:
1) Work out an extremely robust solution to the Friendly AI problem
Only once this has been done do we move on to:
2) Build a powerful AGI
Practically, I think this strategy is risky. First, in my opinion, if you try to solve Friendliness without a concrete AGI design in hand, you will probably miss some important things. Second, I think solving Friendliness will take longer than building the first powerful AGI. Thus, if you insist on finishing 1 before starting 2, it's unlikely that you'll be first.
I say much the same thing in: The risks of caution.
The race doesn't usually go to the most cautious.