endoself comments on Reduced impact AI: no back channels - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Don't you have to get the exact level of noise that will prevent the AI from hiding from P, without letting P reconstruct the AI's actions if it does allow itself to be destroyed? An error in either direction can be catastrophic. If the noise is too high, the AI takes over the world. If the noise is too low, E'(P(Sᵃ|X,Oᵃ,B)/P(Sᵃ|¬X,Õᵃ,B) | a) is going to be very far from 1 no matter what the AI does, so there is no reason to expect that optimizing it is still equivalent to reducing impact.
It's not so sensitive. The AI's actions in the box are very hard to detect from the perspective of fifty years, with minimal noise. The AI expanding dangerously across the universe would be easy to detect, even with a lot of noise (if nothing else, because humans would have recorded this and broadcast messages about it).