wedrifid comments on Not Taking Over the World - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Or merely aware of the same potential weakness that you are. I'd be overwhelmingly uncomfortable with someone developing a super-intelligence without awareness of their human limitations in risk assessment. (Incidentally, 'perfect' risk assessment isn't required; they make the most of whatever risk assessment ability they have either way.)
I consider this a rather inferior solution, particularly inasmuch as it pretends to minimize two things at once. Since the steps will almost inevitably be differentiated by size, the assessment of lowest risk barely comes into play. An algorithm that almost never considers risk rather defeats the point.
If you must artificially circumvent the risk assessment algorithm - presumably to counter known biases - then perhaps make the "small steps" a question of satisficing rather than minimization.
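To make the distinction concrete, here is a minimal sketch (not from the thread; all names and numbers are hypothetical) of strict minimization versus satisficing over a set of candidate steps:

```python
def minimize_risk(options):
    """Strict minimization: always pick the lowest-risk option,
    even when the differences between options are negligible."""
    return min(options, key=lambda o: o["risk"])

def satisfice_risk(options, threshold):
    """Satisficing: accept the first option whose risk falls at or
    below the threshold, rather than hunting for the global minimum."""
    for option in options:
        if option["risk"] <= threshold:
            return option
    return None  # no acceptable option

# Hypothetical candidate steps, ordered by size.
options = [
    {"name": "large step", "risk": 0.30},
    {"name": "medium step", "risk": 0.08},
    {"name": "small step", "risk": 0.05},
]

print(minimize_risk(options)["name"])        # small step
print(satisfice_risk(options, 0.1)["name"])  # medium step
```

The point of the sketch: under satisficing, a "good enough" step is taken without forcing every choice down to the smallest possible one, which is one way to counter known biases without letting step size dominate the decision entirely.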
Good point.
How would you word that?