soreff comments on Should I believe what the SIAI claims? - Less Wrong
Good point. The resources expended toward a "small" goal aren't directly bounded by the size of the goal. As you said, an obstacle can make the resources used grow arbitrarily high. An alternative constraint would be on what the AI is allowed to use up in achieving the goal - "no more than 10 kilograms of matter, nor more than 10 megajoules of energy, nor any human lives, nor anything with a market value of more than $1000". This has problems of its own, when the AI thinks up something to use up that we never anticipated (we have a similar problem with corporations - but at least they operate on human timescales).
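To make the idea concrete, here's a minimal sketch of that kind of resource-budget constraint. Everything here is hypothetical and illustrative - the resource names, limits, and the `ResourceBudget` class are my own invention, not any real system. The key design choice is that any resource we didn't explicitly anticipate is rejected by default, rather than allowed by default:

```python
from dataclasses import dataclass, field

@dataclass
class ResourceBudget:
    """Hard caps on what an optimizer may consume while pursuing a goal."""
    limits: dict                         # e.g. {"matter_kg": 10, "usd": 1000}
    used: dict = field(default_factory=dict)

    def try_spend(self, costs: dict) -> bool:
        """Record the spend and return True only if every limit still holds."""
        for resource, amount in costs.items():
            if resource not in self.limits:
                return False             # unanticipated resource: veto by default
            if self.used.get(resource, 0) + amount > self.limits[resource]:
                return False             # would exceed the cap
        for resource, amount in costs.items():
            self.used[resource] = self.used.get(resource, 0) + amount
        return True

budget = ResourceBudget({"matter_kg": 10, "energy_mj": 10, "usd": 1000})
print(budget.try_spend({"matter_kg": 2, "usd": 500}))  # True: within budget
print(budget.try_spend({"usd": 600}))                  # False: would pass $1000
print(budget.try_spend({"human_lives": 1}))            # False: never whitelisted
```

Of course, the hard part the comment points at is exactly what this toy elides: enumerating the `limits` dictionary completely enough that "something we never anticipated" can't slip through as an unlisted resource.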
Part of the safety of existing optimizers is that they can only use resources or perform actions that we've explicitly exposed to them. An electronic CAD program may tweak transistor widths, but it isn't going to get creative and start satisfying its goals by hacking into the controls of the manufacturing line and changing their settings. An AI with the option to send arbitrary messages to arbitrary places is quite another animal...
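The closed-action-space point can be sketched in a few lines. This is a toy, with an invented objective and knob names - the safety property is structural: the optimizer's search runs only over the parameter values we enumerated, so "getting creative" outside that list isn't expressible in its action space at all:

```python
def optimize_width(evaluate, allowed_widths):
    """Pick the best transistor width, but only from the explicit whitelist.

    The optimizer cannot take any action other than returning one of the
    values in allowed_widths - its entire action space is this list.
    """
    return min(allowed_widths, key=evaluate)

# Toy delay model: minimized at width 4 (a simple quadratic bowl).
delay = lambda w: (w - 4) ** 2 + 1

allowed = [1, 2, 3, 4, 5]        # the only moves we let the tool try
print(optimize_width(delay, allowed))  # -> 4
```

The contrast with "send arbitrary messages to arbitrary places" is that there the action space is effectively open-ended, and no such enumeration bounds what the optimizer can attempt.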