MichaelHoward comments on Cryptographic Boxes for Unfriendly AI - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Way in over my head here, but if the project is done in a series of steps, with a series of boxed AIs helping to work on the next step, wouldn't it be better to get something that is at the very least(*)…
..at a much earlier step than either…
(*) - if not perfectly/certifiably, then at least to whatever extent is feasible before doing anything on the second list.
I agree completely. I don't think that renders a strong quarantine system useless, though, because we don't really get to decide whether FAI or uFAI is found first; whoever discovers the first AI does.