RobbBB comments on Bridge Collapse: Reductionism as Engineering Problem - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (61)
Given that we're scared about non-FAI, I wonder whether this Cartesianism couldn't be a benefit, since it presumably substantially limits the power of the AI. Boxing an AI should be easier if the AI cannot conceive that the box could be a problem for it.
I would be interested in hearing people argue in both directions.
Can an AI live and not notice it's boxed?
Then how do I know I'm not boxed?
Sure, for a while, until it gets smart enough, say, smarter than whatever keeps it inside the box.
Who says you aren't? Who says we all aren't? All those quantum limits, and the exponentially harder ways of getting farther from Earth, might be the walls of the box in someone's Truman Show.
An AI that isn't smart enough to notice (or care) that it's boxed doesn't seem to be a dangerous AI.
Which makes me think that the AIs that would object to being boxed are precisely the ones that should be boxed. But then that gives a smart AI an incentive to pretend to be OK with it.
This reminds me of the Catch-22 case of soldiers who tried to prove they were insane by volunteering for suicide missions, so that their superiors would remove them from said missions.