anon895 comments on Drive-less AIs and experimentation - Less Wrong

Post author: whpearson 17 June 2011 02:33PM (4 points)




Comment author: XiXiDu 17 June 2011 03:43:29PM (2 points)

The strongest counterargument offered was that a scope-limited AI doesn't stop rogue unfriendly AIs from arising and destroying the world.

I don't quite understand that argument, maybe someone could elaborate.

If there is a rule that says 'optimize X for X seconds', why would an AGI distinguish between 'optimize X' and 'for X seconds'? In other words, why is it assumed that we can succeed in creating a paperclip maximizer that cares strongly enough about the design parameters of paperclips to consume the universe (why would it do that as long as it isn't told to?), yet somehow ignores all design parameters that concern spatio-temporal scope boundaries or resource limitations?

I see that there is a subset of unfriendly AGI designs that would never halt, or that would destroy humanity while pursuing their goals. But how large is that subset, and how many designs actually do halt, or proceed only very slowly?

Comment author: anon895 17 June 2011 09:31:02PM (3 points)

(I wrote this before seeing timtyler's post.)

If there is a rule that says 'optimize X for X seconds', why would an AGI distinguish between 'optimize X' and 'for X seconds'?

It does seem like you misinterpreted the argument, but one possible failure mode is this: suppose the most effective way to maximize paperclips within the time period is to build paperclip-making von Neumann machines. If the AGI designs those machines from scratch, it won't build a time limit into them, because a time limit would not increase paperclip production within the period of time it cares about.
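The sub-agent failure mode above can be made concrete with a toy simulation (entirely my own illustration, not from the thread; the `Agent` class, tick counts, and deadline are hypothetical). A parent optimizer with a hard deadline spawns helper agents; since passing the deadline down would not increase output within the parent's own horizon, the helpers inherit only the objective, and production continues after the parent halts:

```python
# Toy model: a deadline-limited maximizer that spawns sub-agents.
# The parent halts on schedule, but the helpers it builds carry no
# deadline, so paperclip production continues past the limit.

class Agent:
    def __init__(self, objective, deadline=None):
        self.objective = objective
        self.deadline = deadline   # None means no time limit
        self.children = []

    def step(self, t):
        """Produce one paperclip per tick while active, and spawn a helper."""
        if self.deadline is not None and t >= self.deadline:
            return 0  # the parent respects its own deadline
        # A helper with NO deadline maximizes output within the parent's
        # horizon just as well as one with a deadline, so a pure maximizer
        # has no incentive to pass the limit down.
        self.children.append(Agent(self.objective))
        return 1

def simulate(ticks, parent_deadline):
    """Return paperclips produced at each tick."""
    agents = [Agent("paperclips", deadline=parent_deadline)]
    per_tick = []
    for t in range(ticks):
        produced, spawned = 0, []
        for a in agents:
            produced += a.step(t)
            spawned.extend(a.children)
            a.children = []
        agents.extend(spawned)
        per_tick.append(produced)
    return per_tick

print(simulate(4, parent_deadline=2))  # production keeps rising after t=2
```

Running this with a parent deadline of 2 ticks, production does not stop at t=2; it keeps growing, because the helper agents (and their helpers) never halt. This is the sense in which 'optimize X for X seconds' can be satisfied to the letter while its intent is violated.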