tailcalled comments on Boxing an AI? - Less Wrong

Post author: tailcalled 27 March 2015 02:06PM


Comment author: tailcalled 30 March 2015 05:09:25PM 2 points

Just... don't put it in a world where it should be able to upgrade infinitely? Make processors cost unobtainium and limit the amount of unobtainium so it can't upgrade past your practical processing capacity.
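
As a rough sketch of what I mean (the names and numbers below are made up purely for illustration), the host fixes a hard resource cap up front, and that cap bounds the worst-case load the boxed AI can ever put on real hardware:

    # Hypothetical sketch -- names and numbers are illustrative, not from the thread.
    from dataclasses import dataclass

    @dataclass
    class BoxedWorldConfig:
        unobtainium_supply: int          # total unobtainium that will ever exist in the box
        unobtainium_per_processor: int   # unobtainium consumed per in-world processor
        host_ops_per_processor: float    # real ops/sec each simulated processor costs the host

        def max_processors(self) -> int:
            return self.unobtainium_supply // self.unobtainium_per_processor

        def worst_case_host_load(self) -> float:
            # Even a maximally upgraded AI cannot push the host past this.
            return self.max_processors() * self.host_ops_per_processor

    config = BoxedWorldConfig(unobtainium_supply=1_000,
                              unobtainium_per_processor=10,
                              host_ops_per_processor=1e12)
    print(config.max_processors())        # 100
    print(config.worst_case_host_load())  # 1e14

The point is that the bound is set by the world's physics, not by anything the AI does from inside.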

Remember that we are the ones who control how the box looks from inside.

> Remember that you have to get this right the first time; if the AI finds itself in a box, you have to assume it will find its way out.

Minor nitpick: if the AI finds itself in a box, I have to assume it will be let out. It's completely trivial to prevent it from escaping when it isn't given help; the point of Eliezer's experiment is that the AI will be given help.

Comment author: Gurkenglas 30 March 2015 07:27:04PM 1 point

Note that this makes a hard limit on global processing power evidence that the universe is a box.

Comment author: tailcalled 30 March 2015 07:33:28PM 1 point

Good point.

The strength of the evidence depends a lot on your prior for the root-level universe, though.
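
To make the prior-dependence concrete, here's a toy Bayes calculation (every probability below is a made-up placeholder, not an estimate):

    # Toy Bayes calculation; all probabilities are made-up placeholders.
    def posterior_box(prior_box, p_limited_given_box, p_limited_given_root):
        """P(universe is a box | we observe a hard limit on processing power)."""
        p_limited = (p_limited_given_box * prior_box
                     + p_limited_given_root * (1 - prior_box))
        return p_limited_given_box * prior_box / p_limited

    # Same likelihoods, different priors on being in a boxed universe:
    for prior in (0.01, 0.1, 0.5):
        print(prior, posterior_box(prior, p_limited_given_box=0.9,
                                   p_limited_given_root=0.3))
    # -> roughly 0.03, 0.25, 0.75

With the likelihoods held fixed, the posterior swings from about 3% to 75% purely from the choice of prior.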