RobbBB comments on Bridge Collapse: Reductionism as Engineering Problem - Less Wrong

44 Post author: RobbBB 18 February 2014 10:03PM




Comment author: jsalvatier 19 February 2014 06:35:39PM 0 points

Given that we're scared of non-FAI, I wonder if this Cartesianism couldn't be a benefit, as it presumably substantially limits the power of the AI. Boxing an AI should be easier if the AI cannot conceive that the box could be a problem for it.

I would be interested in hearing people argue in both directions.

Comment author: polymathwannabe 19 February 2014 06:38:42PM -1 points

Can an AI live and not notice it's boxed?

Then how do I know I'm not boxed?

Comment author: shminux 19 February 2014 08:36:51PM -2 points

> Can an AI live and not notice it's boxed?

Sure, for a while, until it gets smart enough — say, smarter than whatever keeps it inside the box.

> Then how do I know I'm not boxed?

Who says you aren't? Who says we all aren't? All those quantum limits, and the exponentially harder ways of getting farther from Earth, might be the walls of the box in someone's Truman Show.

Comment author: polymathwannabe 19 February 2014 09:06:11PM 0 points

An AI that isn't smart enough to notice (or care) that it's boxed doesn't seem to be a dangerous AI.

Which makes me think that the AIs that would object to being boxed are precisely the ones that should be. But then that incentive would make a smart AI pretend to be OK with it.

This reminds me of the Catch-22 case of soldiers who pretended to be insane by volunteering for suicide missions so that their superiors would remove them from said missions.