The environment only adapts to your actions.
Is this how you define environment?
At least as an informal definition, it seems pretty good.
I always had the informal impression that the optimal policies were deterministic
Really? I wouldn't have ever thought that at all. Why do you think you thought that?
when facing the environment rather than other players. But stochastic policies can also be needed if the environment is partially observable.
Isn't that kind of what a player is? Part of the environment with a strategy and only partially observable states?
Although for this player, don't you have an optimal strategy, except for the first move? The Markov "Player" seems to like change.
Isn't this strategy basically optimal? ABABABABABAB... Deterministic, just not the same every round. Am I missing something?
ABABABABABAB...
It's deterministic, but not memoryless.
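To make the distinction concrete, here is a toy sketch (all names hypothetical): a memoryless policy is a function of the current observation only, so with a constant observation a deterministic memoryless policy must play the same action forever, whereas ABAB... needs one bit of internal state.

```python
def memoryless_policy(observation):
    # With a constant observation, a deterministic memoryless policy
    # can only ever return one fixed action.
    return "A"

def abab_policy():
    # ABAB... requires one bit of memory: which action came last.
    state = {"last": "B"}
    def act(observation):
        state["last"] = "A" if state["last"] == "B" else "B"
        return state["last"]
    return act

policy = abab_policy()
print([policy(None) for _ in range(6)])  # ['A', 'B', 'A', 'B', 'A', 'B']
```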
But it really does seem that there is a difference between facing an environment and another player - the other player adapts to your strategy in a way the environment doesn't. The environment only adapts to your actions.
I think for unbounded agents facing the environment, a deterministic policy is always optimal, but this might not be the case for bounded agents.
Is the Absent-minded Driver an example of a single-player decision problem whose optimal policy is stochastic? Isn't the optimal policy to condition your decision on an unbiased coin?
I ask because it seems like it might make a good intuitive example, as opposed to the POMDP in the OP. But I'm not sure who your intended audience is.
Yes, you can see this POMDP as a variant of the absent minded-driver, and get that result.
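For readers who want the intuitive example worked out, here is a quick numerical check of the absent-minded driver, using the standard payoffs (exiting at the first intersection pays 0, at the second pays 4, driving past both pays 1). Since the driver can't distinguish the intersections, a policy is just a single probability p of continuing.

```python
def expected_payoff(p):
    # The driver continues with probability p at each intersection.
    exit_first = (1 - p) * 0       # exits immediately: payoff 0
    exit_second = p * (1 - p) * 4  # continues once, then exits: payoff 4
    drive_past = p * p * 1         # continues twice: payoff 1
    return exit_first + exit_second + drive_past

# Grid search over p; the optimum is stochastic.
best_p = max((i / 1000 for i in range(1001)), key=expected_payoff)
print(best_p, expected_payoff(best_p))
# Deterministic policies (p = 0 or p = 1) pay at most 1;
# the stochastic optimum is p = 2/3 with expected payoff 4/3.
```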
Yup, I think I understand that, and agree it needs to at least tend to one. I'm just wondering why you initially use the looser definition of theta (where it doesn't need to tend to one, and can instead be just 0).
When defining safe interruptibility, we let theta tend to 1. We probably didn't specify that earlier, when we were just introducing the concept?
perfectly feasible
Citation needed.
In software, it's trivial: create a subroutine with only a very specific output, and include the entity inside it. Some precautions are then needed to prevent the entity from hacking out through hardware weaknesses, but that should be doable (using isolation in a Faraday cage if needed).
I like how the examples of the robot failures are... uhm... not like from the Terminator movie. May make some people discuss them more seriously.
Very interesting paper, congratulations on the collaboration.
I have a question about theta. When you initially introduce it, theta lies in [0,1]. But it seems that if you choose theta = (0_n)_n, i.e. the all-zeros sequence, all policies are interruptible. Is there much reason to initially allow such a wide-ranging theta - why not restrict them to converge to 1 from the very beginning? (Or have I just totally missed the point?)
We're working on the theta problem at the moment. Basically we're currently defining interruptibility in terms of convergence to optimality. Hence we need the agent to explore sufficiently, hence we can't set theta=1. But we want to be able to interrupt the agent in practice, so we want theta to tend to one.
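A minimal sketch of the kind of schedule this suggests (my own assumed example, not taken from the paper): an interruption probability theta_t that stays strictly below 1, leaving the agent a vanishing chance of ignoring interruptions so it can still explore, while tending to 1 so interruption works in practice.

```python
def theta(t):
    # Assumed schedule: 1/2, 2/3, 3/4, ... -> 1, always strictly below 1,
    # so the agent retains a shrinking probability of not being interrupted.
    return 1 - 1 / (t + 2)

print([round(theta(t), 3) for t in range(5)])
```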
These folks say that you won't be able to sandbox an AGI, due to the nature of computing itself.
Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) infeasible.
http://arxiv.org/abs/1607.00913v1
But perhaps we could fool it, by poisoning some crucial databases it uses in subtle ways.
DeepFool: a simple and accurate method to fool deep neural networks
strict containment requires simulations of such a program, something theoretically (and practically) infeasible.
Sandboxing just requires that you be sure that the sandboxed entity can't send bits outside the system (except on some defined channel, maybe), which is perfectly feasible.
With the internet of things physical goods can treat their owner differently than other people. A car can be programmed to only be driven by their owner.
Theoretically yes, but that doesn't seem to be how "smart" devices are actually being programmed.
With the internet of things physical goods can treat their owner differently than other people. A car can be programmed to only be driven by their owner.
Which shifts the verification to the imperfect car code.
So an impression that optimal memoryless policies were deterministic?
That seems even less likely to me. If the environment has state, and you're not allowed to, you're playing at a disadvantage. Randomness is one way to counter state when you don't have state.
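A toy illustration of that point (all names hypothetical): a stateful opponent that counters your previous move in rock-paper-scissors. A deterministic memoryless player loses every round, while a uniformly random player breaks even in expectation, despite also being memoryless.

```python
import random

# BEATS[x] is the move that beats x.
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def score(me, opp):
    if me == opp:
        return 0
    return 1 if BEATS[opp] == me else -1

def play(my_policy, rounds=10_000, seed=0):
    rng = random.Random(seed)
    last_me = "rock"  # the opponent's memory of our previous move
    total = 0
    for _ in range(rounds):
        opp = BEATS[last_me]   # the opponent counters our last move
        me = my_policy(rng)
        total += score(me, opp)
        last_me = me
    return total / rounds

always_rock = lambda rng: "rock"
uniform = lambda rng: rng.choice(["rock", "paper", "scissors"])
print(play(always_rock), play(uniform))  # -1.0 vs roughly 0
```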
I still don't see a difference. Your strategy is only known from your actions by both another player and the environment, so they're in the same boat.
Labeling something the environment or a player seems arbitrary and irrelevant. What capabilities are we talking about? Are these terms of art for which some standard specifying capability exists?
What formal distinctions have been made between players and environments?
Take a game with a mixed-strategy Nash equilibrium. If you and the other player follow it, using sources of randomness that remain random for the other player, then it is never to your advantage to deviate from it. You play this game, again and again, against another player or against the environment.
Consider an environment in which the opponent's strategies are in an evolutionary arms race, trying to beat you; this is an environment model. Under it, you'd tend to follow the Nash equilibrium on average, but at (almost) any given turn there's a deterministic choice that's slightly better than playing stochastically, and it's determined by the current equilibrium of strategies in the opponent/environment.
However, if you're facing another player, and you make deterministic choices, you're vulnerable if ever they figure out your choice. This is because they can peer into your algorithm, not just track your previous actions. To avoid this, you have to be stochastic.
This seems like a potentially relevant distinction.
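The mixed-equilibrium point can be checked numerically in matching pennies, whose only Nash equilibrium is mixed: if the opponent plays heads with probability 1/2, every strategy of yours earns the same expected payoff, so deviating from your own 1/2-1/2 mix gains nothing.

```python
def payoff(p_me_heads, p_opp_heads):
    # Matcher's expected payoff: +1 if the coins match, -1 otherwise.
    match = p_me_heads * p_opp_heads + (1 - p_me_heads) * (1 - p_opp_heads)
    return match * 1 + (1 - match) * (-1)

# Against an opponent at the equilibrium mix, every p gives the same value.
values = [payoff(p / 10, 0.5) for p in range(11)]
print(values)  # all (numerically) equal: deviating is never an advantage
```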