To reiterate where we are: an AI is described as steadfast by Goertzel "if, over a long period of time, it either continues to pursue the same goals it had at the start of the time period, or stops acting altogether."[1] I took this to be a more technical specification of what you mean by "reliable"; you disagreed. I don't see what other definition you could mean.
[1] http://goertzel.org/GOLEM.pdf
Well, this argument I can understand, although Omohundro's point 6 is tenuous. Boxing setups could prevent the AI from acquiring resources, and non-agents won't be taking actions in the first place, to acquire resources or otherwise. And as you note, the 'undetectable' qualifier is important. Imagine you were locked in a box guarded by a gatekeeper of completely unknown and alien psychology. What procedure would you use to learn the gatekeeper's motives well enough to manipulate it, all while escaping detection? It's not at all obvious to me that, with proper operational security on our side, the AI would even be able to infer the gatekeeper's motivational structure well enough to deceive it, no matter how much time it is given.
MIRI is currently taking actions that only really make sense as priorities in a hard-takeoff future. There are also possible actions that align with a soft-takeoff scenario, or double-dip for both (e.g. Kaj's proposed research[1]), but MIRI does not seem to be involving itself in this work. That is a shame.
[1] http://intelligence.org/files/ConceptLearning.pdf
There's no guarantee that boxing will ensure the safety of a soft takeoff. When your boxed AI starts to become drastically smarter than a human -- 10 times, 1000 times, 1000000 times -- the sheer scale of its mind may slip beyond human ability to understand. All the while, a seemingly small dissonance between the AI's goals and human values -- or a small misunderstanding on our part of what goals we've actually imbued -- could magnify into catastrophe as the power differential between humanity and the AI explodes post-transition.
If an AI goes through the intelligence explosion, its goals will be what orchestrate all resources (as Omohundro's point 6 implies). If the goals of this AI do not align with human values, all we value will be lost.