StrivingForLegibility

Comments


Thank you! I think it's exactly the same kind of "conditioning my output on their output" that you were pointing to in your analogy to iterated games. And I expect there's a strong correspondence between "program equilibria where you only condition on predicted outputs" and "iterated game equilibria that can form a stable loop."

Thank you! Ideally, I think we'd all like a model of individual rationality that composes together into a nice model of group rationality. And geometric rationality seems like a promising step in that direction.

This might be a framing thing!

The background details I’d been imagining are that Alice and Bob were in essentially identical situations before their interaction, and it was just luck that they ended up with the capabilities they did.

Alice and Bob have two ways to convert tokens into money, and I’d claim that any rational joint strategy involves only using Bob’s way. Alice's ability to convert tokens into pennies is a red herring that any rational group should ignore.

At that point, it's just a bargaining game over how to split the $1,000,000,000. And I claim that game is symmetric, since they’re both equally necessary for that surplus to come into existence.

If Bob had instead paid huge costs to create the ability to turn tokens into tens of millions of dollars, I totally think his costs should be repaid before splitting the remaining surplus fairly.

Limiting it to economic/comparable values is convenient, but also very inaccurate for all known agents - utility is private and incomparable.

I think modeling utility functions as private information makes a lot of sense! One of the claims I’m making in this post is that utility valuations can be elicited and therefore compared.

My go-to example of an honest mechanism is a second-price auction, which we know we can implement from within the universe. The bids serve as a credible signal of valuation, and if everyone follows their incentives they’ll bid honestly. The person that values the item the most is declared the winner, and economic surplus is maximized.

(Assuming some background facts, which aren't always true in practice, like everyone having enough money to express their preferences through bids. I used tokens in this example so that “willingness to pay” and “ability to pay” can always line up.)
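To make that incentive structure concrete, here's a minimal sketch of a second-price auction in Python (the function and variable names are mine, just for illustration):

```python
def second_price_auction(bids):
    """Award the item to the highest bidder at the second-highest bid price.

    Under this rule, bidding your true valuation is a dominant strategy,
    so the bids serve as credible signals of how much each bidder values
    the item, and the item goes to whoever values it most.
    """
    assert len(bids) >= 2, "need at least two bidders for a second price to exist"
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # the winner pays the second-highest bid
    return winner, price

# Example: if Alice bids 5 tokens and Bob bids 15, Bob wins and pays 5.
print(second_price_auction({"Alice": 5, "Bob": 15}))  # ('Bob', 5)
```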

We use the same technique when we talk about the gains from trade, which I think the Ultimatum game is intended to model. If a merchant values a shirt at $5, and I value it at $15, then there's $10 of surplus to be split if we can agree on a price in that range.
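As a toy calculation with the numbers above: any price in the $5–$15 range splits the same $10 of surplus, it just changes who captures how much of it.

```python
merchant_value, my_value = 5, 15   # dollar valuations of the shirt
for price in (6, 10, 14):          # any agreed price between the two valuations
    merchant_gain = price - merchant_value
    my_gain = my_value - price
    print(price, merchant_gain, my_gain, merchant_gain + my_gain)  # total is always 10
```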

Bob values the tokens more than Alice does. We can tell because he can buy them from her at a price she's willing to accept. Side payments let us interpersonally compare valuations.

As I understand it, economic surplus isn't a subjective quantity. It's a measure of how much people would be willing to pay to go from the status quo to some better outcome. Which might start out as private information in people's heads, but there is an objective answer and we can elicit the information needed to compute and maximize it.

a purely rational Alice should not expect/demand more than $1.00, which is the maximum she could get from the best possible (for her) split without side payments.

I don't know of any results that suggest this should be true! My understanding of the classic analysis of the Ultimatum game is that if Bob makes a take-it-or-leave-it offer in which Alice would receive some tiny amount of money, like $0.01, she should take it, because $0.01 is better than $0.

My current take is that CDT-style thinking has crippled huge parts of economics and decision theory. The agreement of both parties is needed for this $1,000,000,000 of surplus to exist; if either walks away, they both get nothing. The Ultimatum game is symmetric and the gains should be split symmetrically.

If we actually found ourselves in this situation, would we actually accept $1 out of $1 billion? Is that how we’d program a computer to handle this situation on our behalf? Is that the sort of reputation we’d want to be known for?

The problem remains though: you make the ex ante call about which information to "decision-relevantly update on", and this can be a wrong call, and this creates commitment races, etc.

My understanding is that commitment races only occur in cases where "information about the commitments made by other agents" has negative value for all relevant agents. (All agents are racing to commit before learning more, which might scare them away from making such a commitment.)

It seems like updateless agents should not find themselves in commitment races.

My impression is that we don't have a satisfactory extension of UDT to multi-agent interactions. But I suspect that the updateless response to observing "your counterpart has committed to going Straight" will look less like "Swerve, since that's the best response" and more like "go Straight with enough probability that your counterpart wishes they'd coordinated with you rather than trying to bully you."

Offering to coordinate on socially optimal outcomes, and being willing to pay costs to discourage bullying, seems like a generalizable way for smart agents to achieve good outcomes.
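As an illustration of what "enough probability" means here, a toy calculation with an assumed Chicken payoff matrix (the numbers are mine, not from this thread): coordinating gives each player 2, successfully bullying gives the bully 3 and the victim 1, and a crash gives both 0.

```python
# Assumed payoffs for the would-be bully (illustrative numbers only):
COORDINATION_PAYOFF = 2  # what they get by coordinating instead of committing to Straight
BULLY_WIN = 3            # what they get if their commitment successfully extorts a Swerve
CRASH = 0                # what they get if we go Straight anyway

def min_straight_probability():
    """Smallest probability of going Straight that makes bullying unprofitable.

    If we go Straight with probability p, the bully's expected payoff is
    p * CRASH + (1 - p) * BULLY_WIN; we want that below COORDINATION_PAYOFF.
    """
    return (BULLY_WIN - COORDINATION_PAYOFF) / (BULLY_WIN - CRASH)

print(min_straight_probability())  # 1/3: above this, the bully wishes they'd coordinated
```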

Got it, thank you!

It seems like trapped priors and commitment races are exactly the sort of cognitive dysfunction that updatelessness would solve in generality. 

My understanding is that trapped priors are a symptom of a dysfunctional epistemology, which over-weights prior beliefs when updating on new observations. This results in an agent getting stuck, or even getting more and more confident in their initial position, regardless of what observations they actually make. 

Similarly, commitment races are the result of dysfunctional reasoning that regards accurate information about other agents as hazardous. It seems like the consensus is that updatelessness is the general solution to infohazards.

My current model of an "updateless decision procedure", approximated on a real computer, is something like "a policy which is continuously optimized, as an agent has more time to think, and the agent always acts according to the best policy it's found so far." And I like the model you use in your report, where an ecosystem of participants collectively optimize a data structure used to make decisions.

Since updateless agents use a fixed optimization criterion for evaluating policies, we can use something like an optimization market to optimize an agent's policy. It seems easy to code up traders that identify "policies produced by (approximations of) Bayesian reasoning", which I suspect won't be subject to trapped priors.

So updateless agents seem like they should be able to do at least as well as updateful agents: they can identify updateful policies and use those when they seem optimal. But they can also use different reasoning to identify policies like "pay Paul Ekman to drive you out of the desert", and automatically adopt those when they lead to higher EV than updateful policies.
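A minimal sketch of that "always act on the best policy found so far" loop, with the policy generators and scoring function left as hooks (all names here are hypothetical, not an established API):

```python
import time

def anytime_policy_search(initial_policy, candidate_generators, expected_value, deadline):
    """Continuously optimize a policy against a fixed criterion, keeping the best so far.

    `candidate_generators` could include "wrap (approximate) Bayesian updating as a
    policy" alongside generators proposing policies like "pay Paul Ekman to drive you
    out of the desert".  Because every candidate is scored by the same fixed
    `expected_value`, updateful policies get adopted exactly when they score best.
    """
    best_policy = initial_policy
    best_score = expected_value(best_policy)
    while time.monotonic() < deadline:
        for generate in candidate_generators:
            candidate = generate(best_policy)
            score = expected_value(candidate)
            if score > best_score:
                best_policy, best_score = candidate, score
    return best_policy  # the agent acts on this whenever it's interrupted
```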

I suspect that the generalization of updatelessness to multi-agent scenarios will involve optimizing over the joint policy space, using a social choice theory to score joint policies. If agents agree at the meta level about "how conflicts of interest should be resolved", then that seems like a plausible route for them to coordinate on socially optimal joint policies.
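One way to picture that: score every joint policy with an agreed-upon social choice rule and coordinate on the argmax. The sketch below uses a Nash-bargaining-style product of gains over a disagreement point as a stand-in rule; the rule itself is an assumption, and the function names are mine.

```python
from itertools import product
import math

def social_score(payoffs, disagreement):
    """Stand-in social choice rule: product of each agent's gain over the disagreement point."""
    gains = [p - d for p, d in zip(payoffs, disagreement)]
    if any(g <= 0 for g in gains):
        return float("-inf")  # joint policies that leave someone worse off score lowest
    return math.prod(gains)

def socially_optimal_joint_policy(policy_sets, joint_payoff, disagreement):
    """Search the joint policy space for the combination the social choice rule likes best."""
    return max(product(*policy_sets),
               key=lambda joint: social_score(joint_payoff(joint), disagreement))
```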

I think this approach also avoids the sky-rocketing complexity problem, if I understand the problem you're pointing to. (I think the problem you're pointing to involves trying to best-respond to another agent's cognition, which gets more difficult as that agent becomes more complicated.)

The distinction between "solving the problem for our prior" and "solving the problem for all priors" definitely helps! Thank you!

I want to make sure I understand the way you're using the term updateless, in cases where the optimal policy involves correlating actions with observations. Like pushing a red button upon seeing a red light, but pushing a blue button upon seeing a blue light. It seems like (See Red -> Push Red, See Blue -> Push Blue) is the policy that CDT, EDT, and UDT would all implement.
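For concreteness, the policy in question is just a lookup from observations to actions; what distinguishes an updateless procedure is that the whole lookup table gets evaluated and chosen up front:

```python
# The policy that CDT, EDT, and UDT all implement in this example.
POLICY = {"red light": "push red button", "blue light": "push blue button"}

def act(observation):
    # The action still depends on the observation; what's fixed in advance
    # is the mapping itself, which was selected before seeing anything.
    return POLICY[observation]
```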

In the way that I understand the terms, CDT and EDT are updateful procedures, and UDT is updateless. And all three are able to use information available to them. It's just that an updateless decision procedure always handles information in ways that are endorsed a priori. (True information can degrade the performance of updateful decision theories, but updatelessness implies infohazard immunity.)

Is this consistent with the way you're describing decision-making procedures as updateful and updateless?

 

It also seems like if an agent is regarding some information as hazardous, that agent isn't being properly updateless with respect to that information. In particular, if it finds that it's afraid to learn true information about other agents (such as their inclinations and pre-commitments), it already knows that it will mishandle that information upon learning it. And if it were properly updateless, it would handle that information properly.

It seems like we can use that "flinching away from true information" as a signal that we'd like to change the way our future self will handle learning that information. If our software systems ever notice themselves calculating a negative value of information for an observation (empirical or logical), the details of that calculation will reveal at least one counterfactual branch where they're mishandling that information. It seems like we should always be able to automatically patch that part of our policy, possibly using a commitment that binds our future self.

In the worst case, we should always be able to do what our ignorant self would have done, so information should never hurt us.
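A sketch of that patch, assuming we can score how a policy does both when it acts on the observation and when it behaves as our ignorant self would have (the function names are illustrative):

```python
def patch_against_infohazard(policy_using_info, policy_ignoring_info, expected_value):
    """Fall back to the ignorant policy whenever acting on the information would hurt.

    Value of information = EV(best policy that uses the observation)
                         - EV(the policy our ignorant self would have followed).
    Since ignoring the observation is always an available option, a policy patched
    this way never assigns the observation a negative value of information.
    """
    if expected_value(policy_using_info) >= expected_value(policy_ignoring_info):
        return policy_using_info
    # "Flinching away" detected: commit to doing what our ignorant self would have done.
    return policy_ignoring_info
```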

Got it, I think I understand better the problem you're trying to solve! It's not just being able to design a particular software system and give it good priors, it's also finding a framework that's robust to our initial choice of priors.

Is it possible for every prior to converge on optimal behavior, even given unlimited observations? I'm thinking of Yudkowsky's example of the anti-Occamian and anti-Laplacian priors: the more observations an anti-Laplacian agent makes, the further its beliefs drift from the truth.
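To make that example concrete (this is my formalization, not necessarily Yudkowsky's): take Laplace's rule of succession, which assigns probability (k+1)/(n+2) to the next flip being a 1 after seeing k ones in n flips, and flip it so that evidence for an outcome counts against it.

```python
def laplace(ones, flips):
    """Laplace's rule of succession: P(next flip is 1) after `ones` successes in `flips`."""
    return (ones + 1) / (flips + 2)

def anti_laplace(ones, flips):
    """An anti-Laplacian rule: the more often 1 has come up, the less likely it's judged to be."""
    return (flips - ones + 1) / (flips + 2)

# Against a coin that always lands 1, more observations push the
# anti-Laplacian agent's beliefs further from the truth.
for n in (10, 100, 1000):
    print(n, round(laplace(n, n), 4), round(anti_laplace(n, n), 4))
```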

I'm also surprised that dynamic stability leads to suboptimal outcomes that are predictable in advance. Intuitively, it seems like this should never happen.

It sounds like we already mostly agree!

I agree with Caspar's point in the article you linked: the choice of metric determines which decision theories score highly on it. The metric that I think points towards "going Straight sometimes, even after observing that your counterpart has pre-committed to always going Straight" is a strategic one. If Alice and Bob are writing programs to play open-source Chicken on their behalf, then there's a program equilibrium where:

  • Both programs first try to perform a logical handshake, coordinating on a socially optimal joint policy.
    • This only succeeds if they have compatible notions of social optimality.
  • As a fallback, Alice's program adopts a policy which
    • Caps Bob's expected payoff at what Bob would have received under Alice's notion of social optimality
      • Minus an extra penalty, to give Bob an incentive gradient to climb towards what Alice sees as the socially optimal joint policy
    • Otherwise maximizes Alice's payoff, given that incentive-shaping constraint
  • Bob's fallback operates symmetrically, with respect to his notion of social optimality.

The motivating principle is to treat one's choice of decision theory as itself strategic. If Alice chooses a decision theory which never goes Straight, after making the logical observation that Bob's decision theory always goes Straight, then Bob's best response is to pick a decision theory that always goes Straight and make that as obvious as possible to Alice's decision theory.

Whereas if Alice designs her decision theory to grant Bob the highest payoff when his decision theory legibly outputs Bob's part of what Alice sees as a socially optimal joint policy, then Bob's best response is to pick a decision theory that outputs Bob's part of that joint policy and make that as obvious as possible to Alice's decision theory.
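Here's a toy version of Alice's program for open-source Chicken along those lines. Everything specific is assumed for illustration: the payoff numbers (coordinating yields 2 each, a successful bully gets 3, a crash gets 0), and the simplification of the logical handshake to comparing declared notions of social optimality rather than verifying the counterpart's source code.

```python
SOCIAL_OPTIMUM_A = ("Swerve", "Swerve")  # the joint policy Alice considers socially optimal
STRAIGHT_PROB_IF_BULLIED = 0.6           # fallback: go Straight often enough to deter bullying

def alice_program(bob_declared_optimum):
    """Alice's side of the program equilibrium: handshake, else shape Bob's incentives."""
    # 1. Logical handshake: succeeds only if Bob's notion of social optimality matches Alice's.
    if bob_declared_optimum == SOCIAL_OPTIMUM_A:
        return {"Swerve": 1.0, "Straight": 0.0}  # play Alice's part of the social optimum
    # 2. Fallback: cap Bob's expected payoff below the 2 he'd get by coordinating.
    #    Against this mix, always-Straight earns Bob 0.4 * 3 = 1.2 and always-Swerve earns
    #    0.4 * 2 + 0.6 * 1 = 1.4, so his incentive gradient points back toward the handshake.
    return {"Swerve": 1.0 - STRAIGHT_PROB_IF_BULLIED, "Straight": STRAIGHT_PROB_IF_BULLIED}

print(alice_program(("Swerve", "Swerve")))    # handshake succeeds: Alice Swerves
print(alice_program(("Straight", "Swerve")))  # handshake fails: Alice sometimes goes Straight
```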

It seems like one general recipe for avoiding commitment races would be something like:

  • Design your decision theory so that no information is hazardous to it
    • We should never be willing to pay in order to not know certain implications of our beliefs, or true information about the world
  • Design your decision theory so that it is not infohazardous to sensible decision theories
    • Our counterparts should generally expect to benefit from reasoning more about us, because we are legibly trying to coordinate on good outcomes and we grant the highest payoffs to those that coordinate with us
    • If infohazard resistance is straightforward, then our counterpart should hopefully have that reflected in their prior.
  • Do all the reasoning you want about your counterpart's decision theory
    • It's fine to learn that your counterpart has pre-committed to going Straight. What's true is already so. Learning this doesn't force you to Swerve.
    • Plus, things might not be so bad! You might be a hypothetical inside your counterpart's mind, considering how you would react to learning that they've pre-committed to going Straight.
      • Your actions in this scenario can determine whether it becomes factual or counterfactual. Being willing to crash into bullies can discourage them from trying to bully you into Swerving in the first place.
    • You might also discover good news about your counterpart, like that they're also implementing your decision theory.
      • If this were bad news, like for commitment-racers, we'd want to rethink our decision theory.

So we seem to face a fundamental trade-off between the information benefits of learning (updating) and the strategic benefits of updatelessness. If I learn the digit, I will better navigate some situations which require this information, but I will lose the strategic power of coordinating with my counterfactual self, which is necessary in other situations.

 

It seems like we should be able to design software systems that are immune to any infohazard, including logical infohazards.

  • If it's helpful to act on a piece of information you know, act on it.
  • If it's not helpful to act on a piece of information you know, act as if you didn't know it.

Ideally, we could just prove that "Decision Theory X never calculates a negative value of information". But if needed, we could explicitly design a cognitive architecture with infohazard mitigation in mind. Some options include:

  • An "ignore this information in this situation" flag
    • Upon noticing "this information would be detrimental to act on in this situation", we could decide to act as if we didn't know it, in that situation.
    • (I think this is one of the designs you mentioned in footnote 4.)
  • Cognitive sandboxes
    • Spin up some software in a sandbox to do your thinking for you.
    • The software should only return logical information that is both true and useful in your current situation.
    • If it notices any hazardous information, it simply doesn't return it to you.
    • Upon noticing that a train of thought doesn't lead to any true and useful information, don't think about why that is and move on.

I agree with your point in footnote 4, that the hard part is knowing when to ignore information. Upon noticing that it would be helpful to ignore something, the actual ignoring seems easy.
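A minimal sketch of the sandbox option, assuming we can score each candidate finding by how well we'd do acting on it versus without it (the names and the scoring interface are mine):

```python
def cognitive_sandbox(candidate_findings, value_if_acted_on, value_if_ignored):
    """Filter the sandbox's (assumed-true) findings, releasing only the helpful ones.

    A finding is returned only if acting on it does at least as well as not
    knowing it, so the outer agent never ends up holding information it would
    assign a negative value to.
    """
    released = []
    for finding in candidate_findings:
        if value_if_acted_on(finding) >= value_if_ignored(finding):
            released.append(finding)
        # Otherwise: don't return it, and don't dwell on why it was withheld.
    return released
```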
