Frugality and working from finite data

27 Snowyowl 03 September 2010 09:37AM

The scientific method is wonderfully simple, intuitive, and above all effective. Based on the available evidence, you formulate several hypotheses and assign prior probabilities to each one. Then, you devise an experiment which will produce new evidence to distinguish between the hypotheses. Finally, you perform the experiment, and adjust your probabilities accordingly. 

So far, so good. But what do you do when you cannot perform any new experiments?

This may seem like a strange question, one that leans dangerously close to unprovable philosophical statements without real-world consequences. But it is in fact a serious problem facing the field of cosmology. We must learn that when there is no new evidence that could cause us to change our beliefs (or even when there is), the best thing to do is to rationally re-examine the evidence we already have.

continue reading »

Taking the awkwardness out of a Prenup - A Game Theoretic solution

29 VijayKrishnan 22 May 2010 12:45AM

I would strongly advise you to look at the short review of Thomas Schelling's Strategy of Conflict posted on Less Wrong some time back. The idea that deliberately constraining one's own choices can actually leave a person better off in a negotiation is a very interesting one. The classic game-theoretic example of this is the game of Chicken, in which two people drive toward each other on a wide freeway. If neither of them swerves, they both stand to lose by way of substantial financial damage and possible loss of life. Otherwise, the first one to swerve is the proverbial "chicken" and loses face against the other person, who was brave enough not to swerve. If one person were to throw away their steering wheel and blindfold themselves before driving onto the freeway, that would force the other person to swerve, given that the first person has completely given up control of the situation.

There is a slightly more generalizable example of a similar principle at work. Suppose you wanted to buy a used car from a dealer and were prepared to pay up to $5000 for it, while the dealer was willing to sell it for any price above $4000. In such a situation, any price between $4000 and $5000 is an admissible solution. However, you ideally want to pay as close to $4000 as possible, while the dealer would like you to pay close to $5000. Each party would therefore pretend that their "last price" (the price representing the worst possible outcome for them, which they would nonetheless be willing to accept) was different from their true last price, since if one party learns the other party's true last price, they can put it to effective use in the negotiation. Let us now assume a situation wherein you and the dealer know each other's financial details, the degree of urgency in having the transaction done, etc. perfectly well, and each has a very reliable idea of the other's last price. Now, you can break the symmetry and get the best possible deal out of the situation by deliberately handicapping yourself in the following fashion. You sign a contract with a third party which states that if you carry out this transaction and pay more than $4000, you must pay the third party $1500. Now, all you need to do is show this contract to the used car dealer, making it clear to him that your last price has shrunk to $4000, since paying anything above that effectively means paying in excess of $5500, well past your original last price.
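As a sanity check on that arithmetic, here is a minimal sketch in Python, using only the figures from the example above, of how the side contract shifts the buyer's effective last price:

```python
# Figures from the used-car example: the side contract makes any price
# above $4000 cost the buyer an extra $1500.

BUYER_LAST_PRICE = 5000   # the most the buyer would truly pay
DEALER_LAST_PRICE = 4000  # the least the dealer would truly accept
PENALTY = 1500            # owed to the third party if the buyer pays > $4000

def effective_cost(price):
    """Total cost to the buyer once the side contract is in force."""
    return price + (PENALTY if price > DEALER_LAST_PRICE else 0)

# Paying exactly $4000 triggers no penalty, so it remains acceptable;
# any higher price now exceeds the buyer's true last price of $5000.
assert effective_cost(4000) == 4000
assert effective_cost(4001) > BUYER_LAST_PRICE  # 5501 > 5000
```

So after the contract, the only admissible price left in the $4000-$5000 range is the dealer's own minimum.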

    For countries that face the menace of airline hijacking, it can likewise be an effective deterrent to future hijackers if release of terrorists or other kinds of negotiations with hijackers were explicitly prohibited by the country's laws, and these laws would be impossible to overturn during a hijacking incident. 

This brings to mind the following question: why don't there exist companies that, for a fee, explicitly sign contracts with individuals or other entities, handicapping those entities in some way that cannot be easily overturned and consequently giving them negotiating leverage?

One example I can think of pertains to wealthy individuals in California and other US states with community property laws. Given the high divorce rates in the US, it would be prudent for such individuals to have as tight a prenuptial agreement as possible prior to getting married, to minimize financial loss in the event of a divorce and also to avoid financially incentivizing one's spouse to initiate a divorce with the promise of a financial windfall. However, there are some practical difficulties that might make many such individuals shy away from doing this. Two of the practical issues are:

A. It is clearly rather unromantic to have to haggle with one's fiancée and their lawyers regarding a prenuptial agreement. The implied "lack of belief" in the potential durability of the marriage might be a turn off for one's partner and other close people involved.

B. The individuals themselves might get carried away by emotion, believe that they have found "the one", and assign a much lower probability to divorce, or to the concessions they would be forced to make in the future under the threat of divorce. In such a situation, they would fail to realize that roughly 50% of Americans who, just like them, felt they had found "the one" went on to eventually get divorced.

Now imagine the beneficial role a company signing such contracts could provide. The individual in question could sign a contract with this company stating that if they were to get married without a bulletproof, pre-specified prenuptial agreement, the company could lay claim to half their net worth immediately after the wedding was registered. Ideally, the individual would sign such a contract while single, or while not seriously seeing anyone with the intention of getting married. The advantages of such a contract are the following:

1. Community property and other modern divorce laws essentially change the defaults with regard to what happens in the aftermath of a divorce, compared to how marriages worked prior to the existence of such laws. Such a contract would reset the default state to one where neither party would financially profit in the aftermath of a divorce. Most of the awkwardness comes when trying to override the default state with a bunch of legal riders at the time of a wedding. 

2. The advantage of signing such a contract well in advance is that the aforesaid individual is then not exposed to issues A and B above. Signing a tight prenuptial agreement against the background of such a contract simply means that the individual has no desire to part with half their finances to this third-party company. It makes no implicit statement about the individual's probability estimate for the durability of the marriage. There always exists the plausible explanation that the individual was opposed to non-prenup marriages in the past but, having since found "the one", now sees no need for a prenup; however, they are constrained by a contract they signed in the past and are now powerless to change.

Do you know if there are entities that play the role of this third-party company, signing contracts that enable people to handicap themselves and consequently come out stronger in future negotiations? Do you know of people who did this specifically with regard to prenuptial agreements? If such companies don't exist, is that a potential business opportunity? I would love to hear from you in the comments.

 

 

Newcomb's problem happened to me

37 Academian 26 March 2010 06:31PM

Okay, maybe not me, but someone I know, and that's what the title would be if he wrote it.  Newcomb's problem and Kavka's toxin puzzle are more than just curiosities relevant to artificial intelligence theory.  Like a lot of thought experiments, they approximately happen.  They illustrate robust issues with causal decision theory that can deeply affect our everyday lives.

Yet somehow it isn't mainstream knowledge that these are more than merely abstract linguistic issues, as evidenced by this comment thread (please no Karma sniping of the comments, they are a valuable record).  Scenarios involving brain scanning, decision simulation, etc., can establish their validity and future relevance, but not that they are already commonplace.  For the record, I want to provide an already-happened, real-life account that captures the Newcomb essence and explicitly describes how.

So let's say my friend is named Joe.  In his account, Joe is very much in love with this girl named Omega… er… Kate, and he wants to get married.  Kate is somewhat traditional, and won't marry him unless he proposes, not only in the sense of explicitly asking her, but also expressing certainty that he will never try to leave her if they do marry.

Now, I don't want to make up the ending here.  I want to convey the actual account, in which Joe's beliefs are roughly schematized as follows: 

  1. If he proposes sincerely, she is effectively sure to believe it.
  2. If he proposes insincerely, she is 50% likely to believe it.
  3. If she believes his proposal, she is 80% likely to say yes.
  4. If she doesn't believe his proposal, she will surely say no, but will not be significantly upset in comparison to the significance of marriage.
  5. If they marry, Joe is 90% likely to be happy, and 10% likely to be unhappy.

He roughly values the happy and unhappy outcomes oppositely:

  1. being happily married to Kate:  125 megautilons
  2. being unhappily married to Kate:  -125 megautilons.

So what should he do?  What should this real person have actually done?[1]  Well, as in Newcomb, these beliefs and utilities present an interesting and quantifiable problem…
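The expected-utility arithmetic implied by beliefs 1-5 can be sketched as follows. (This deliberately sets aside the Newcomb-like question of whether sincerity can simply be chosen, which is exactly the issue the post develops; the variable names are mine.)

```python
# Expected-utility sketch from Joe's beliefs 1-5 and the two utility values.
# This naive calculation treats "propose sincerely" as a freely choosable
# option, which is the very assumption the Newcomb analogy puts in question.

P_BELIEVE = {"sincere": 1.0, "insincere": 0.5}  # beliefs 1 and 2
P_YES_GIVEN_BELIEVE = 0.8                       # belief 3
P_HAPPY = 0.9                                   # belief 5
U_HAPPY, U_UNHAPPY = 125.0, -125.0              # megautilons

def expected_utility(proposal):
    p_marry = P_BELIEVE[proposal] * P_YES_GIVEN_BELIEVE
    # Belief 4: a disbelieved proposal yields "no" at negligible cost.
    u_marriage = P_HAPPY * U_HAPPY + (1 - P_HAPPY) * U_UNHAPPY  # = 100
    return p_marry * u_marriage

assert abs(expected_utility("sincere") - 80.0) < 1e-9
assert abs(expected_utility("insincere") - 40.0) < 1e-9
```

On these numbers a sincere proposal is worth twice an insincere one, precisely because sincerity itself changes Kate's prediction.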

continue reading »

Information theory and the symmetry of updating beliefs

45 Academian 20 March 2010 12:34AM

Contents:

1.  The beautiful symmetry of Bayesian updating
2.  Odds and log odds: a short comparison
3.  Further discussion of information

Rationality is all about handling this thing called "information".  Fortunately, we live in an era after the rigorous formulation of Information Theory by C.E. Shannon in 1948, a basic understanding of which can actually help you think about your beliefs, in a way similar but complementary to probability theory. Indeed, it has flourished as an area of research exactly because it helps people in many areas of science to describe the world.  We should take advantage of this!

The information theory of events, which I'm about to explain, is about as difficult as high school probability.  It is certainly easier than the information theory of multiple random variables (which right now is explained on Wikipedia), even though the equations look very similar.  If you already know it, this can be a linkable source of explanations to save you writing time :)

So!  To get started, what better way to motivate information theory than to answer a question about Bayesianism?

The beautiful symmetry of Bayesian updating

The factor by which observing A increases the probability of B is the same as the factor by which observing B increases the probability of A.  This factor is P(A and B)/(P(A)·P(B)), which I'll denote by pev(A,B) for reasons to come.  It can vary from 0 to +infinity, and allows us to write Bayes' Theorem succinctly in both directions:

     P(A|B)=P(A)·pev(A,B),   and   P(B|A)=P(B)·pev(A,B)

What does this symmetry mean, and how should it affect the way we think?

A great way to think of pev(A,B) is as a multiplicative measure of mutual evidence, which I'll call mutual probabilistic evidence to be specific.  If pev=1, they're independent; if pev>1, they make each other more likely; and if pev<1, they make each other less likely.
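A quick numerical sketch of this symmetry, using an arbitrary toy joint distribution over two binary events (exact rationals avoid floating-point noise):

```python
from fractions import Fraction as F

# Arbitrary toy joint distribution over two binary events A and B,
# chosen purely for illustration.
p = {(True, True): F(3, 10), (True, False): F(2, 10),
     (False, True): F(1, 10), (False, False): F(4, 10)}

P_A = sum(v for (a, b), v in p.items() if a)   # 1/2
P_B = sum(v for (a, b), v in p.items() if b)   # 2/5
P_AB = p[(True, True)]                         # 3/10

# Mutual probabilistic evidence, as defined above.
pev = P_AB / (P_A * P_B)

# Bayes' theorem holds in both directions with the same factor:
assert P_AB / P_B == P_A * pev   # P(A|B) = P(A)·pev(A,B)
assert P_AB / P_A == P_B * pev   # P(B|A) = P(B)·pev(A,B)
assert pev == F(3, 2)            # pev > 1: A and B support each other
```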

But two ways to think are better than one, so I will offer a second explanation, in terms of information, which I often find quite helpful in analyzing my own beliefs:

continue reading »

Getting Over Dust Theory

6 jhuffman 15 December 2009 10:40PM

It has been well over a year since I first read Permutation City and related writings on the internet about Greg Egan's dust theory. It still haunts me. The theory has been discussed tangentially in this community, but I haven't found an article that directly addresses the rationality of Egan's own dismissal of the theory.

In the FAQ, Egan says things like:

I wrote the ending as a way of dramatising[sic] a dissatisfaction I had with the “pure” Dust Theory that I never could (and still haven't) made precise (see Q5): the universe we live in is more coherent than the Dust Theory demands, so there must be something else going on.

and:

I have yet to hear a convincing refutation of it on purely logical grounds...

However, I think the universe we live in provides strong empirical evidence against the “pure” Dust Theory, because it is far too orderly and obeys far simpler and more homogeneous physical laws than it would need to, merely in order to contain observers with an enduring sense of their own existence. If every arrangement of the dust that contained such observers was realised, then there would be billions of times more arrangements in which the observers were surrounded by chaotic events, than arrangements in which there were uniform physical laws.

continue reading »

Extreme risks: when not to use expected utility

4 Stuart_Armstrong 23 October 2009 02:40PM

Would you prefer a 50% chance of gaining €10, one chance in a million of gaining €5 million, or a guaranteed €5? The standard position on Less Wrong is that the answer depends solely on the difference between cash and utility. If your utility scales less-than-linearly with money, you are risk-averse and should choose the last option; if it scales more-than-linearly, you are risk-loving and should choose the second one. If we replaced €'s with utils in the example above, then it would simply be irrational to prefer one option over the others.

 

There are mathematical proofs of that result, but there are also strong intuitive arguments for it. What's the best way of seeing this? Imagine that X1 and X2 are two probability distributions, with means u1 and u2 and variances v1 and v2. If the two distributions are independent, then the sum X1 + X2 has mean u1 + u2, and variance v1 + v2.

 

Now if we multiply the returns of any distribution by a constant r, the mean scales by r and the variance scales by r². Consequently, if we have n probability distributions X1, X2, ... , Xn representing n equally expensive investments, the expected average return is (u1 + ... + un)/n, while the variance of this average is (v1 + ... + vn)/n². If the vi are bounded, then once we make n large enough, that variance must tend to zero. So if you have many investments, your averaged actual returns will be, with high probability, very close to your expected returns.
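The scaling can be checked numerically; the means and variances below are arbitrary illustrative values, not from the post:

```python
# Averaging n independent investments: each X_i is scaled by 1/n, so its
# mean scales by 1/n and its variance by 1/n^2. Illustrative numbers only.

means = [1.0, 2.0, 1.5, 0.5]      # u_i for four investments
variances = [4.0, 1.0, 2.0, 3.0]  # v_i for the same investments
n = len(means)

avg_mean = sum(means) / n          # (u1 + ... + un)/n
avg_variance = sum(variances) / n**2  # (v1 + ... + vn)/n^2

assert avg_mean == 1.25
assert avg_variance == 0.625  # already well below each individual v_i
```

With bounded v_i, the n² in the denominator is what drives the variance of the average toward zero as n grows.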

 

continue reading »

Localized theories and conditional complexity

7 jimmy 19 October 2009 07:29AM

Suppose I hand you a series of data points without providing the context. Consider the theory v = a*t for t<<1, v = b for t>>1. Without knowing anything a priori about the shapes of the curves, one must have enough data to make sure that v follows the right lines at the two limits since there is complexity that must be justified.  Here we have two one-parameter curves, so we need at least two data points to pick the right slope and offset, as well as at least a couple more to make sure it follows the right shape.
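A minimal sketch of this see-data-fit-curve procedure, with made-up data points, fitting each one-parameter regime from the points in its own limit:

```python
# Two-regime theory from the text: v = a*t for t << 1, v = b for t >> 1.
# The data points here are invented for illustration.

small_t = [(0.1, 0.21), (0.2, 0.39)]   # (t, v) pairs in the t << 1 regime
large_t = [(10.0, 5.1), (20.0, 4.9)]   # (t, v) pairs in the t >> 1 regime

# Least-squares slope through the origin for the linear regime:
a = sum(t * v for t, v in small_t) / sum(t * t for t, v in small_t)

# The plateau is just the mean of the large-t values:
b = sum(v for _, v in large_t) / len(large_t)

assert abs(a - 1.98) < 1e-9  # fitted slope of the small-t line
assert abs(b - 5.0) < 1e-9   # fitted large-t plateau
```

As the text notes, two points per regime pin down each parameter, and a couple more are needed to confirm the shape actually follows each limit.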

This is what I’ll call a completely local theory – see data, fit curve.  Dealing with problems at this level does not leave much room for human bias or error, but it also does not allow for improvement by including  background knowledge.

continue reading »

Correlated decision making: a complete theory

7 Stuart_Armstrong 26 September 2009 11:47AM

The title of this post most probably deserves a cautious question mark at the end, but I'll go out on a limb and start sawing it off behind me: I think I've got a framework that consistently solves correlated decision problems. That is, those situations where different agents (a forgetful you at different times, your duplicates, or Omega's prediction of you) will come to the same decision.

After my first post on the subject, Wei Dai asked whether my ideas could be formalised enough to be applied mechanically. There were further challenges: introducing further positional information, and dealing with the difference between simulations and predictions. Since I claimed this sort of approach could apply to Newcomb's problem, it is also useful to see it work in cases where the two decisions are only partially correlated - where Omega is good, but he's not perfect.

The theory

In standard decision making, it is easy to estimate your own contribution to your own utility; the contribution of others to your own utility is then estimated separately. In correlated decision-making, both steps are trickier; estimating your contribution is non-obvious, and the contribution from others is not independent. In fact, the question to ask is not "if I decide this, how much return will I make", but rather "in a world in which I decide this, how much return will I make".

You first estimate the contribution of each decision made to your own utility, using a simplified version of the CDP: if N correlated decisions are needed to gain some utility, then each decision maker is estimated to have contributed 1/N of the effort towards the gain of that utility.
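As a toy illustration of this 1/N rule (the scenario and numbers are invented for illustration, not taken from the post):

```python
# Toy illustration of the 1/N contribution rule: N perfectly correlated
# duplicates must all cooperate to gain a shared utility U, and each
# decision is credited 1/N of the gain. Scenario and numbers are invented.

def cdp_contribution(total_utility, n_correlated_decisions):
    """Credit each of the N correlated decisions with 1/N of the gain."""
    return total_utility / n_correlated_decisions

def world_return(my_decision, total_utility):
    # "In a world in which I decide this": with perfect correlation, my
    # cooperating means every duplicate cooperates, so the gain is realised.
    return total_utility if my_decision == "cooperate" else 0

assert cdp_contribution(30, 3) == 10.0       # 3 deciders, 10 utilons each
assert world_return("cooperate", 30) == 30
assert world_return("defect", 30) == 0
```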

continue reading »

Hypothetical Paradoxes

10 Psychohistorian 19 September 2009 06:28AM

When we form hypotheticals, they must use entirely consistent and clear language, and avoid hiding complicated operations behind simple assumptions. In particular, with respect to decision theory, hypotheticals must employ a clear and consistent concept of free will, and they must make all information available to the theorizer available to the decider in the question. Failure to do either of these can make a hypothetical meaningless or self-contradictory if properly understood.

Newcomb's problem and the Smoking Lesion fail to do both. I will argue that hidden assumptions in both problems imply internally contradictory concepts of free will, and thus both hypotheticals are incomprehensible and irrelevant when used to contradict decision theories.

And I'll do it without math or programming! Metatheory is fun.

continue reading »

Fighting Akrasia: Survey Design Help Request

1 gworley 14 August 2009 07:48PM

Follow-up to:  Fighting Akrasia:  Finding the Source

In the last post in this series I posted a link to a Google Docs survey to try to gather some data on what techniques, if any, work for people in conquering akrasia, but we haven't gotten very much information so far:  the response pool is fairly homogeneous in terms of age, sex, and personality type.  In part this is because we need to get more responses outside of the LW readership, but probably also because I'm not asking the right questions.  So, my challenge this weekend is to come up with some good revisions for the survey.

In order to maximize comment usefulness, please suggest one revision per top level comment and then any discussion of that revision can take place in the replies.

In the interest of keeping the comments on topic, I request a moratorium on discussions of whether or not akrasia exists and whether or not we can or should do something about it in the comments on this article.  It's not that I want to exclude or silence opinions contrary to what I'm trying to accomplish:  it's just that I would like to keep this article on the topic of revising the akrasia fighting survey.  By all means, if my posting about akrasia really bothers you, write up an article explaining why I'm wrong and we'll discuss the issue more there.

Thanks!
