> you can just use a different utility function with U=0 if the constraints are violated.
I assume you meant "U = large negative number".
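To make this concrete, here is a minimal sketch (entirely my own illustration; `constrained_utility` and `BIG_PENALTY` are made-up names). The point is that if the base utility can go negative, returning 0 on violation can make violating the constraint *preferable* to compliance, whereas a large negative penalty dominates any ordinary outcome:

```python
# Sketch of why the penalty value matters for an unbounded utility.
# If base_utility(outcome) can be negative in ordinary states, then
# "U = 0 on violation" rewards violation whenever compliance scores
# below zero; a large negative penalty avoids that failure mode.

BIG_PENALTY = -1e9  # stand-in for "U = large negative number"

def constrained_utility(outcome, base_utility, constraints,
                        penalty=BIG_PENALTY):
    """Return the base utility unless some constraint is violated."""
    if any(violated(outcome) for violated in constraints):
        return penalty
    return base_utility(outcome)
```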
Conventional wisdom, I believe, is that setting up a constraint that has the desired effect is really difficult.
My intuition is that it becomes less difficult if you assign responsibility for maintaining the constraint to a different sub-agent from the one trying to maximize unconstrained U, and have those two sub-agents interact by bargaining to resolve their non-zero-sum game.
It is just an intuition. I'll be happy to clarify it, but less happy if someone insists that I rigorously defend it.
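That said, here is a toy sketch of what I mean (all names are hypothetical, and this is an illustration of the intuition, not a rigorous proposal): one sub-agent scores candidate actions by unconstrained U, the other by how well the constraint is satisfied, and they settle on the Nash bargaining solution over the candidates:

```python
# Toy sketch: two sub-agents resolve their non-zero-sum game by Nash
# bargaining over a discrete set of candidate actions.  Each sub-agent
# effectively holds a veto: actions scoring at or below its
# disagreement point are excluded from the bargaining set.

def nash_bargain(actions, maximizer_score, guardian_score,
                 disagreement=(0.0, 0.0)):
    """Return the action maximizing the Nash product of gains over
    the disagreement point, or None if no action suits both."""
    d_max, d_guard = disagreement
    best, best_product = None, float("-inf")
    for action in actions:
        gain_max = maximizer_score(action) - d_max
        gain_guard = guardian_score(action) - d_guard
        if gain_max <= 0 or gain_guard <= 0:
            continue  # one side prefers no deal: action is vetoed
        product = gain_max * gain_guard
        if product > best_product:
            best, best_product = action, product
    return best
```

The non-zero-sum structure lives in the disagreement point: the maximizer cannot simply overrule the constraint-keeper, because any action the constraint-keeper scores below its disagreement value drops out of the bargaining set entirely.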
> you can just use a different utility function with U=0 if the constraints are violated.

> I assume you meant "U = large negative number".
I was thinking about bounded utility, normalized to [0,1].
Link: aleph.se/andart/archives/2011/02/why_we_should_fear_the_paperclipper.html
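Under that assumption the original wording is fine as stated: with utility normalized to [0,1], assigning U = 0 to violations already makes them the worst possible outcomes, so no separate large negative number is needed. A minimal sketch (again my own illustration, with made-up names):

```python
# Sketch assuming utility is bounded and normalized to [0, 1]:
# U = 0 on violation is the global minimum, so it plays the same
# role a "large negative number" would for an unbounded utility.

def bounded_constrained_utility(outcome, base_utility, constraints):
    """Clamp the base utility into [0, 1]; violations get the floor."""
    if any(violated(outcome) for violated in constraints):
        return 0.0
    return min(1.0, max(0.0, base_utility(outcome)))
```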