A Fervent Defense of Frequentist Statistics

43 jsteinhardt 18 February 2014 08:08PM

[Highlights for the busy: debunking standard "Bayes is optimal" arguments; frequentist Solomonoff induction; and a description of the online learning framework. Note: cross-posted from my blog.]

Short summary. This essay makes many points, each of which I think is worth reading, but if you are only going to understand one point I think it should be “Myth 5” below, which describes the online learning framework as a response to the claim that frequentist methods need to make strong modeling assumptions. Among other things, online learning allows me to perform the following remarkable feat: if I’m betting on horses, and I get to place bets after watching other people bet but before seeing which horse wins the race, then I can guarantee that after a relatively small number of races, I will do almost as well overall as the best other person, even if the number of other people is very large (say, 1 billion), and their performance is correlated in complicated ways.
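(For readers who want to see what such a guarantee looks like concretely, below is a minimal sketch of one standard algorithm of this kind, the multiplicative-weights or "Hedge" algorithm. The code and toy data are mine, not the essay's; the point is only that the regret - our total loss minus the best bettor's - grows on the order of sqrt(T log N), with no probabilistic assumptions about how the losses are generated.)

```python
import math
import random

def hedge(expert_losses, eta):
    """Multiplicative weights: expert_losses holds one list of N losses in [0, 1] per round."""
    n = len(expert_losses[0])
    weights = [1.0] * n
    total_loss = 0.0
    for losses in expert_losses:
        z = sum(weights)
        probs = [w / z for w in weights]  # back each expert in proportion to its weight
        total_loss += sum(p * l for p, l in zip(probs, losses))
        # Shrink each expert's weight according to how badly it did this round.
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return total_loss

random.seed(0)
T, N = 2000, 100  # rounds and experts; toy sizes so the demo runs quickly
losses = [[random.random() for _ in range(N)] for _ in range(T)]
eta = math.sqrt(2 * math.log(N) / T)  # a standard learning-rate choice
regret = hedge(losses, eta) - min(sum(r[i] for r in losses) for i in range(N))
print(regret)  # stays on the order of sqrt(T * log N), even for adversarial losses
```

The dependence on the number of experts N is only logarithmic, which is why "1 billion other people" is no obstacle in principle.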

If you’re only going to understand two points, then also read about the frequentist version of Solomonoff induction, which is described in “Myth 6”.

Main article. I’ve already written one essay on Bayesian vs. frequentist statistics. In that essay, I argued for a balanced, pragmatic approach in which we think of the two families of methods as a collection of tools to be used as appropriate. Since I’m currently feeling contrarian, this essay will be far less balanced and will argue explicitly against Bayesian methods and in favor of frequentist methods. I hope this will be forgiven as so much other writing goes in the opposite direction of unabashedly defending Bayes. I should note that this essay is partially inspired by some of Cosma Shalizi’s blog posts, such as this one.

This essay will start by listing a series of myths, then debunk them one-by-one. My main motivation for this is that Bayesian approaches seem to be highly popularized, to the point that one may get the impression that they are the uncontroversially superior method of doing statistics. I actually think the opposite is true: I think most statisticians would for the most part defend frequentist methods, although there are also many departments that are decidedly Bayesian (e.g. many places in England, as well as some U.S. universities like Columbia). I have a lot of respect for many of the people at these universities, such as Andrew Gelman and Philip Dawid, but I worry that many of the other proponents of Bayes (most of them non-statisticians) tend to oversell Bayesian methods or undersell alternative methodologies.

If you are like me from, say, two years ago, you are firmly convinced that Bayesian methods are superior and that you have knockdown arguments in favor of this. If this is the case, then I hope this essay will give you an experience that I myself found life-altering: the experience of having a way of thinking that seemed unquestionably true slowly dissolve into just one of many imperfect models of reality. This experience helped me gain more explicit appreciation for the skill of viewing the world from many different angles, and of distinguishing between a very successful paradigm and reality.


Preferences without Existence

14 Coscott 08 February 2014 01:34AM

Cross-posted on By Way of Contradiction

My current beliefs say that there is a Tegmark 4 (or larger) multiverse, but there is no meaningful “reality fluid” or “probability” measure on it. We are all in this infinite multiverse, but there is no sense in which some parts of it exist more or are more likely than any other part. I have tried to illustrate these beliefs as an imaginary conversation between two people. My goal is to either share this belief, or more likely to get help from you in understanding why it is completely wrong.

A: Do you know what the game of life is?

B: Yes, of course, it is a cellular automaton. You start with a configuration of cells, and they update following a simple deterministic rule. It is a simple kind of simulated universe.

A: Did you know that when you run the game of life on an initial condition of a 2791 by 2791 square of live cells, and run it for long enough, creatures start to evolve? (Not true.)

B: No. That’s amazing!

A: Yeah, these creatures have developed language and civilization. Time step 1,578,891,000,000,000 seems like it is a very important era for them. They have developed much technology, and someone has developed the theory of a doomsday device that will kill everyone in their universe and replace the entire thing with emptiness, but at the same time, many people are working hard on developing a way to stop him.

B: How do you know all this?

A: We have been simulating them on our computers. We have simulated up to that crucial time.

B: Wow, let me know what happens. I hope they find a way to stop him.

A: Actually, the whole project is top secret now. The simulation will still be run, but nobody will ever know what happens.

B: That’s too bad. I was curious, but I still hope the creatures live long, happy, interesting lives.

A: What? Why do you hope that? It will never have any effect over you.

B: My utility function includes preferences between different universes even if I never get to know the result.

A: Oh, wait, I was wrong. It says here the whole project is canceled, and they have stopped simulating.

B: That is too bad, but I still hope they survive.

A: They won’t survive, we are not simulating them any more.

B: No, I am not talking about the simulation, I am talking about the simple set of mathematical laws that determine their world. I hope that those mathematical laws, if run long enough, do interesting things.

A: Even though you will never know, and it will never even be run in the real universe?

B: Yeah. It would still be beautiful if it never gets run and no one ever sees it.

A: Oh, wait. I missed something. It is not actually the game of life. It is a different cellular automaton they used. It says here that it is like the game of life, but the actual rules are really complicated, and take millions of bits to describe.

B: That is too bad. I still hope they survive, but not nearly as much.

A: Why not?

B: I think information theoretically simpler things are more important and more beautiful. It is a personal preference. It is much more desirable to me to have a complex interesting world come from simple initial conditions.

A: What if I told you I lied, and none of these simulations were run at all and never would be run. Would you have a preference over whether the simple configuration or the complex configuration had the life?

B: Yes, I would prefer the simple configuration to have the life.

A: Is this some sort of Solomonoff probability measure thing?

B: No actually. It is independent of that. If the only existing things were this universe, I would still want laws of math to have creatures with long happy interesting lives arise from simple initial conditions.

A: Hmm, I guess I want that too. However, that is negligible compared to my preferences about things that really do exist.

B: That statement doesn’t mean much to me, because I don’t think this existence you are talking about is a real thing.

A: What? That doesn’t make any sense.

B: Actually, it all adds up to normality.

A: I see why you can still have preferences without existence, but what about beliefs?

B: What do you mean?

A: Without a concept of existence, you cannot have Solomonoff induction to tell you how likely different worlds are to exist.

B: I do not need it. I said I care more about simple universes than complicated ones, so I already make my decisions to maximize utility weighted by simplicity. It comes out exactly the same, I do not need to believe simple things exist more, because I already believe simple things matter more.

A: But then you don’t actually anticipate that you will observe simple things rather than complicated things.

B: I care about my actions more in the cases where I observe simple things, so I prepare for simple things to happen. What is the difference between that and anticipation?

A: I feel like there is something different, but I can’t quite put my finger on it. Do you care more about this world than that game of life world?

B: Well, I am not sure which one is simpler, so I don’t know, but it doesn’t matter. It is a lot easier for me to change our world than it is for me to change the game of life world. I therefore will make choices that roughly maximize my preferences about the future of this world in the simplest models.

A: Wait, if simplicity changes preferences, but does not change the level of existence, how do you explain the fact that we appear to be in a world that is simple? Isn’t that a priori extremely unlikely?

B: This is where it gets a little bit fuzzy, but I do not think that question makes sense. Unlikely by what measure? You are presupposing an existence measure on the collection of theoretical worlds just to ask that question.

A: Okay, it seems plausible, but kind of depressing to think that we do not exist.

B: Oh, I disagree! I am still a mind with free will, and I have the power to use that will to change my own little piece of mathematics — the output of my decision procedure. To me that feels incredibly beautiful, eternal, and important.

How big of an impact would cleaner political debates have on society?

4 adamzerner 06 February 2014 12:24AM

See this Newsroom clip.

Basically, their news network is trying to change the way political debates work by having the moderator force the candidates to answer the questions that are asked of them, to not interrupt each other, to justify arguments that are based on obvious falsehoods, etc.

How big of a positive impact do you guys think that this would have on society?

My initial thoughts are that it would be huge. It would lead to better politicians, which would be a high-leverage change. The positive effects would trickle down into many aspects of our society.

The question then becomes: "Can we make this happen?" I don't see a way right now, but the idea has enough upside that I keep it in the back of my mind in case I come up with a plausible way of implementing the change.

Thoughts?

How Not to Make Money

-3 diegocaleiro 24 January 2014 08:36PM

Sarcastic Practical Advice Series: 1 How Not to Make Money

I'm calling this a series because I would like it to be a series; feel free to write your own post on "how not to do something many people want to do", especially you, future me.


I'm very good at not making money, and maybe this is a skill you have found yourself needing to perfect.

But worry not. Stop rationalizing! I'll teach you some of the craft before you can say all the palindromes in the Finnish language. 


(1) Be one of those people who actually turn knowledge, general knowledge, into personally designed actions and policies. The kind of person who, upon learning that driving is more dangerous than being attacked by spiders, and despite experiencing the first-person evolved fear of spiders, understands that he should be as afraid of driving badly as he is of spiders, or much more, and drives accordingly.

(2) Understand that there is no metaphysical Self, only a virtual center of narrative gravity (read Dennett), whose manner of discounting time is hyperbolic (read George Ainslie), weirdly self-representative (read GEB), and basically a mess.

(3) Read Reasons and Persons, by Parfit, and really give up on your naïve intuitions about personal identity over time. Using (1), act accordingly, i.e. screw future retired you.

(4) Go through a university program in the humanities, so no one tempts you by throwing money at you after you graduate. This has happened to an academically oriented friend of mine who graduated as a medical doctor but actually wanted to be in the lab playing with brains. If you can make it into Greek Mythology or Iranian Literature, good for you. Philosophy is OK, as are the social sciences, as long as you do theory and don't get into politics or institutional design later on. If you go into psychology, you are dangerously near Human Resources, so be sure to be doing it for the reasons Pinker would do it: because you want to understand our internal computer, not to treat people.

(5) Have some cash: This seems obvious, but it’s worth remembering. If you are a machine discounting hyperbolically, you'd better be safe for the next two months.

(6) Study research on happiness and money: Money doesn't buy happiness, and when it does, it's by buying things for others, regardless of price. Giving a bike, a Porsche, or a Starbucks coffee to your friends provides you the same amount of fuzzies. Use (1) and act accordingly.

(7) Be curious: If you are the kind of person who knows by heart that the Finnish language is unusually prone to palindromes, you are well on your way to not making money. If you get really excited about space, good for you. If you are so moved by curiosity you can't sleep before you finally figure it out, worry not, money ain't coming your way. Don’t forget all those really cool books you want to read.

(8) Avoid being Anhedonic: Anhedonia is one of the great enemies of those who don't want to make money. If all feels more or less the same to you, there is great incentive to go after the gold: it won't harm you much, and it will afford you the number one value of the Anhedonic, a false sense of security, and the illusion that happiness lies somewhere ahead of you in the future. If you can be thrilled or excited by the latest Adam Sandler movie, if a double rainbow will make you cry like a baby even in a video, and if you watch this sax video with a young, healthy, fertile female more than once because it’s a good video, rest assured, you’ll be fine.

(9) What do you care what other people think?:
Feynman nailed this aspect of the no-money-making business. You may not have noticed, but everyone, especially your family, thinks you should make money. As Graham says:

All parents tend to be more conservative for their kids than they would for themselves, simply because, as parents, they share risks more than rewards. If your eight year old son decides to climb a tall tree, or your teenage daughter decides to date the local bad boy, you won’t get a share in the excitement, but if your son falls, or your daughter gets pregnant, you’ll have to deal with the consequences. - How to do what you love.

It’s not just parents; everyone gets more shares of your money than of your excitement. If this were not the case, Effective Altruists would be advocating roller coasters and volcano lairs with cat people, not high-income careers.

(10) Couchsurf and meet couchsurfers and world travelers: If you never did it, go around couchsurfing for a while. As it happens, due to many factors, travelling all the time, a dream of the majority, is cheaper than staying in one spot. Meeting world travelers like (1) Mac Madison, (2) Puneet Sahani, (3) Frederico Balbiani, and (4) Rand Hunt made me realize, respectively, that: (1) it’s possible to travel two-thirds of the time as a CS major; (2) Indian citizenship and zero money won’t stop you; (3) not speaking English, or not wanting to work with what gave you your degrees, doesn’t stop you; (4) spending 90 dollars in 100 days is possible. You’ll feel much less pressure to make money after meeting similar people and being one of them.

(11) Don’t experience Status Anxiety: The world suffers from an intense affliction. Alain de Botton named it Status Anxiety. You are not just richer than most people alive today; you are unimaginably, unbelievably wealthy (in terms of resources you can use) in comparison to everyone who ever lived. But the point is, the less time you spend comparing, regardless of who you are comparing with, the happier you feel.

(12) Be persuadable by intellectuals outside traditional science, like De Botton and Alan Watts, but not by really terrible, The Secret-style self-help.

(13) Consider money over-valued: In economics, the price of things is determined by the supply of and demand for that particular thing. The interesting thing is that demand is not measured by how many people want something and how badly, but by this multiplied by each person’s wealth. If so many (wealthy) people value Rolex watches, they will be overpriced for you, especially if they are paying in luck, inheritance, or interest, and you are paying in work (though both use money as a medium).
Money is a medium of trade; how could it be over-valued?
Simple: there are many other mediums of trade (being nice, becoming more attractive, being a good listener, being in the right place at the right time, knowledge, enviable skills, prestige, dominance, strength, signaling, risk (stealing, Vegas, or bitcoin), sex, time, energy). If you think these items are cheaper than money, you go for them as your medium of trade. And indeed they are cheaper than money, because everyone knows that money is valuable, and nearly no one has thought consciously about the trade value of those things.

(14) Fake it till you don’t make it: My final advice would be to try out not spending money. Do it for a month (I did it for two); set a personal, unbearably low barrier according to your standards. Dine before going to dinner with friends, and get there by bike, of course. Carry water instead of buying it. Decline any social activity that would be somewhat costly and substitute some personal project, internet download, or analogous near-free alternative. Exercise outside, not in the gym. Take notes on how good your days were; you may find out, as did Kingsley, that “We act as though comfort and luxury were the chief requirements of life, when all that we need to make us happy is something to be enthusiastic about.” Furthermore, with Barry Schwartz, you may find out that less is more: when you have fewer options of what to do, this gives you not only happiness, but extra capacity to use your psychological attention to actually do what you want to do. Do as Obama did and save your precious share of mindspace.


There, I hope you feel more fully equipped not to make money, should you ever need this hard-earned, practical life skill. You’re welcome.

Do we underuse the genetic heuristic?

4 Stefan_Schubert 22 January 2014 05:37PM

Someone, say Anna, has uttered a certain proposition P, say "Betty is stupid", and we want to evaluate whether it is true or not. We can do this by investigating P directly - i.e. we disregard the fact that Anna has said that Betty is stupid, but look only at what we know about Betty's behaviour (and possibly, we try to find out more about it). Alternatively, we can do this indirectly, by evaluating Anna's credibility with respect to P. If we know, for instance, that Anna is in general very reliable, then we are likely to infer that Betty is indeed stupid, but if we know that Anna hates Betty and that she frequently bases her beliefs on emotion, we are not.
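(To make the indirect route concrete, here is a minimal worked example of the update it involves; the numbers are hypothetical, chosen only for illustration.)

```python
def posterior(prior, p_assert_if_true, p_assert_if_false):
    # Bayes' rule: P(P is true | the speaker asserts P)
    num = p_assert_if_true * prior
    return num / (num + p_assert_if_false * (1 - prior))

prior = 0.2  # our prior that Betty is stupid

# A generally reliable Anna asserts P far more often when it is true:
print(posterior(prior, 0.8, 0.1))   # ~0.67: her testimony counts for a lot

# An Anna who hates Betty asserts "Betty is stupid" almost regardless:
print(posterior(prior, 0.85, 0.8))  # ~0.21: her testimony tells us little
```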

Arguments of this latter, indirect kind are called ad hominem arguments or, in Hal Finney's apt phrase, the genetic heuristic (I'm going to use these terms interchangeably here). They are often criticized, not least within analytic philosophy, where the traditional view is that they are more often than not fallacious. Certainly the genetic heuristic is often applied in fallacious ways, some of which are pointed out in Yudkowsky's article on the topic. Moreover, it seems reasonable to assume that such fallacies would be much more common if they weren't so frequently pointed out by people (accusations of ad hominem fallacies are common in all sorts of debates). No doubt, we are biologically disposed to attack the person rather than what he is saying on irrelevant grounds.

The genetic heuristic is not always fallacious, though. If a reputable scientist tells us that P is true, where P falls under her domain, then we have reason to believe that P is true. Similarly, if we know that Liza is a compulsive liar, then we have reason to believe that P is false if Liza has said P.

We see that genetic reasoning can be both positive and negative - i.e. it can be used both to confirm, and to disconfirm, P. It should also be noted that negative genetic arguments typically only make sense if we assume that we generally put trust in what other people say - i.e. that we use a genetic argument to the effect that S's having said P makes P more likely to be true. If people don't use such arguments, but only look at P directly to evaluate whether it is true or not, it is unclear what importance arguments that throw doubt on the reliability of S have, since in that case, knowing whether S is reliable or not shouldn't affect our belief in P.

Three kinds of genetic arguments

We can differentiate between three kinds of genetic arguments (this list is not intended to be exhaustive):

1) Caren is unreliable. Hence we disregard anything she says (e.g. since Caren is three years old).

2) David says P, and given what we know about P and about David (especially of David's knowledge of, and attitude to, P), we have reason to believe that David is not reliable with respect to P. (For instance, P might be some complicated idea in theoretical physics, and we know that David greatly overestimates his knowledge of theoretical physics.)

3) Eric's beliefs on a certain topic have a certain pattern. Given what we know of Eric's beliefs and preferences, this pattern is best explained on the hypothesis that he uses some non-rational heuristic (e.g. wishful thinking). Hence we infer that Eric's beliefs on this topic are not justified. (E.g. Eric is asked to order different people with respect to friendliness, beauty and intelligence. Eric orders people very similarly on all these criteria - a striking pattern that is best explained, given what we now know of human psychology, by the halo effect.)

(Possibly 3) could be reduced to 2), but the prototypical instances of these categories are sufficiently different to justify listing them separately.)

Now I would like to put forward the hypothesis that we underuse the genetic heuristic, possibly to quite a great degree. I'm not completely sure of this, though, which is part of the reason why I'm writing this post: I'm curious to see what you think. In any case, here is how I'm thinking.

Direct arguments for the genetic heuristic

My first three arguments are direct arguments purporting to show that genetic arguments are extremely useful.

a) The differences in reliability between different people are vast (as I discuss here; Kaj Sotala gave some interesting data which backed up my speculations). Not only are the differences between, e.g., Steven Pinker and uneducated people vast, but also, and more interestingly, so are the differences between Steven Pinker and an average academic. If this is true, it makes sense to think that P is more probable conditional on Pinker having said it than conditional on some average academic in his field having said it. But also, and more importantly, it makes sense to read whatever Pinker has written. The main difference between Pinker and the average academic does not concern the probabilities that what they say is true, but the strikingness of what they are saying. Smart academics say interesting things, and hence it makes sense to read whatever they write, whereas not-so-smart academics generally say dull things. If this is true, then it definitely makes sense to keep good track of who's reliable and interesting (within a certain area or all-in-all), and who is not.

b) Psychologists have during the last decades amassed a lot of knowledge of different psychological mechanisms such as the halo effect, the IKEA effect, the just-world hypothesis, etc. This knowledge was not previously available (even though people did have a hunch of some of these mechanisms, as pointed out, e.g., by Daniel Kahneman in Thinking, Fast and Slow). This knowledge gives us a formidable tool for hypothesizing that others' (and, indeed, our own) beliefs are the result of unreliable processes. For instance, there are, I'd say, lots of patterns of beliefs which are suspicious in the same way Eric's are suspicious, and which also are best explained by reference to some non-rational psychological mechanism. (I think a lot of the posts on this site could be seen in these terms - as genetic arguments against certain beliefs or patterns of beliefs, which are based on our knowledge of different psychological mechanisms. I haven't seen anyone phrase this in terms of the genetic heuristic, though.)

c) As mentioned in the first paragraph, those who only use direct arguments concerning P disregard some information - i.e. the information that Anna has uttered P. It's a general principle in the philosophy of science and Bayesian reasoning that you should use all the available evidence and not disregard anything unless you have special reasons for doing so. Of course, there might be such reasons, but the burden of proof seems to be on those arguing that we should disregard it.

Genetic arguments for the genetic heuristic

My next arguments are genetic arguments (well, I should use genetic arguments when arguing for the usefulness of genetic arguments, shouldn't I?) intended to show why we fail to see how useful they are. Now, it should be pointed out that I think we do use them on a massive scale - even though that's too seldom pointed out (and hence it is important to do so). My main point is, however, that we don't use them enough.

d) There are several psychological mechanisms that block us from seeing the scale of the usefulness of the genetic heuristic. For instance, we have a tendency to "believe everything we read/are told". Hence it would seem that we do not disregard what poor reasoners (whose statements we shouldn't believe) say to a sufficient degree. Also, there is, as pointed out in my previous post, the Dunning-Kruger effect, which says that incompetent people massively overestimate their level of competence, while competent people underestimate theirs. This makes the levels of competence look more similar than they actually are. Also, it is just generally hard to assess reasoning skills, as frequently pointed out here, and in the absence of reliable knowledge people often go for the simple and egalitarian hypothesis that people are roughly equal (I think the Dunning-Kruger effect is partly due to something like this).

It could be argued that there is at least one other important mechanism that plays in the other direction, namely the fundamental attribution error (i.e. we explain others' actions by reference to their character rather than to situational factors). This could lead us to explain poor reasoning by lack of capability, even though the true cause is some situational factor such as fatigue. Now even though you sometimes do see this, my experience is that it is not as common as one would think. It would be interesting to see your take on this.

Of course people do often classify people who actually are quite reliable and interesting as stupid based on some irrelevant factor, and then use the genetic heuristic to disregard whatever they say. This does not imply that the genetic heuristic is generally useless, though - if you really are good at tracking down reliable and interesting people, it is, in my mind, a wonderful weapon. It does imply that we should be really careful when we classify people, though. Also, it's of course true that if you are absolutely useless at picking out strong reasoners, then you'd better not use the genetic heuristic and should stick to direct arguments.

e) Many social institutions are set up in a way which hides the extreme differences in capability between different people (this is also pointed out in my previous post). Professors are paid roughly the same, are given roughly the same speech time in seminars, etc., regardless of their competence. This is partly due to the psychological mechanisms that make us believe people are more cognitively equal than they are, but it also reinforces this idea. How could the differences between different academics be so vast, given that they are treated in roughly the same way by society? We are, as always, impressed by what is immediately visible and have difficulty understanding that huge differences in capability are hidden under the surface.

f) Another reason why these social institutions are set up in this way is egalitarianism: we have a political belief that people should be treated roughly equally, and letting the best professors talk all the time is not compatible with that. This egalitarianism also is, I think, an obstacle to us seeing the vast differences in capability. We engage in wishful thinking to the effect that talent is more equally distributed than it is.

g) There are strong social norms against giving ad hominem arguments to someone else's face. These norms are not entirely unjustified: ad hominem arguments do have a tendency to make debates derail into quarrels. In any case, this makes the genetic heuristic invisible, and, again, people tend to go by what they see and hear, so if they don't hear any ad hominem arguments, they'll use them less. I use the genetic heuristic much more often when I think than when I speak, and since I suspect that others do likewise, its visibility matches neither its use nor its usefulness. (More on this below.)

These social norms are also partly due to the history of analytic philosophy. Analytical philosophers were traditionally strongly opposed to ad hominem arguments. This had partly to do with their strong opposition to "psychologism" - a rather vague term which refers to different uses of psychology in philosophy and logic. Genetic arguments typically speculate that this or that belief was due to some non-rational psychological mechanism, and hence it is easy to see how someone who'd like to banish psychology from philosophy (under which argumentation theory was supposed to fall) would be opposed to such arguments.* 

h) Unlike direct arguments, genetic arguments can be seen as "embarrassing", in a sense. Starting to question why others, or you yourself, came to have a certain belief is a rather personal business. (This is of course an important reason why people get upset when someone gives an ad hominem argument against them.) Most people don't want to start questioning whether they believe in this or that simply because it's in their material interest, for if that turned out to be true, they'd come out as selfish. It seems to me that people who underuse genetic reasoning are generally poor not only at metacognition (thinking about one's own thinking) on a narrow construal - i.e. at thinking of what biases they suffer from - but also are bad at analyzing their own personalities as a whole. If that speculation is true, it indicates that genetic reasoning has an empathic and emotional component that direct reasoning typically lacks. I think I've observed many people who are really smart at direct reasoning, but who completely fail at genetic reasoning (e.g. they treat arguments coming from incompetent people as on a par with those from competent people). These people tend to lack empathy (i.e. they don't understand other people - or themselves, I would guess).

i) Another important and related reason why we underuse ad hominem arguments is, I think, that we wish to avoid negative emotions, and ad hominem reasoning often does give rise to negative feelings (we feel we're being judgy). This goes especially for the kind of ad hominem reasoning that classifies people into smart/dumb people in general. Most people have rather egalitarian views and don't like thinking those kinds of thoughts. Indeed, when I discuss this idea with people they are visibly uncomfortable with it, even though they admit that there is some truth to it. We often avoid thinking about ideas that we're not emotionally comfortable with.

j) Another reason is mostly relevant to the third kind of genetic argument and has to do with the fact that many of these patterns might be so complex as to be hard to spot. This is definitely so, but I'm convinced that with training you could be much better at spotting these patterns than most people are today. As stated, ad hominem arguments aren't held in high regard today, which makes people not so inclined to look for them. In groups where such arguments are seen as important - such as Marxists and Freudians - people come up with intricate ad hominem arguments all the time. True, these are generally invalid, as they postulate psychological mechanisms that simply aren't there, but there's no reason to believe that you couldn't come up with equally complex ad hominem arguments that track real psychological mechanisms.

Pragmatic considerations

It is true, as many have pointed out, that since genetic reasoning is bound to upset, we need to proceed cautiously if we're going to use it against someone we're discussing with. However, there are many situations where the object of our genetic reasoning doesn't know that we're using it, and hence can't get upset. For instance, I'm using it all the time when I'm thinking for myself, and this obviously doesn't upset anyone. Likewise, if I'm discussing someone's views - say Karl Popper's - with a friend and I use genetic arguments against Popper's views, that's unlikely to upset him.

Also, given the ubiquity of wishful thinking, the halo effect, etc., it seems to me that reasonable people shouldn't get too upset if others hypothesize that they have fallen prey to these biases when the patterns of their beliefs suggest this might be so (as they do in the case of Eric). Indeed, ideally they should anticipate such hypotheses, or objections, by explicitly showing that the patterns that seem to indicate that they have fallen prey to some bias actually do not do so. At the very least, they should acknowledge that these patterns are bound to raise their discussion partners' suspicion. I think it would be a great step forward if our debating culture changed so that this became standard practice.

In general, it seems to me that we pay too much heed to the arguments given by people who are not actually persuaded by those arguments, but rather have decided what to believe beforehand, and then simply pick whatever arguments support their view (e.g. doctors' arguments for why doctors should be better paid). It is true that such people might sometimes actually come up with good arguments or evidence for their position, but in general their arguments tend to be poor. I certainly often just turn off when I hear that someone is arguing in this way: I have a limited amount of time, and prioritize listening to people who are genuinely interested in the truth for its own sake.

Another factor that should be considered is that genetic reasoning is, to a certain extent, judgy, elitist and negative. This is not unproblematic: I consider it important to be generally optimistic and positive, not least for your own sake. I'm not really sure what to conclude from this, other than that I think genetic reasoning is an indispensable tool in the rationalist's toolbox, and that you thus have to use it frequently even if it has an emotional cost attached to it.

In genetic reasoning, you treat what is being said - P - as a "black box", more or less: you don't try to analyze P or look at how justified P is directly. Instead, you look at the process of how someone came to believe P. This is obviously especially useful when it's hard or time-consuming to assess P directly, while comparatively easy to assess the reliability of the process that gave rise to the belief in P. I'd say there are many such situations. To take but one example, consider a certain academic discipline - call it "modernpostism". We don't know much about the content of modernpostism, since modernpostists use terminology that is hard to penetrate for outsiders. We know, however, how the bigshots of modernpostism tend to behave and think in other areas. On the basis of this, we have inferred that they're intellectually dishonest, prone to all sorts of irrational thinking, and simply not very smart. On the basis of this, we infer that they probably have no justification for what they're saying in their professional life either. (More examples of useful ad hominem arguments are very welcome.)

Psychology is constantly uncovering new data relevant to ad hominem reasoning - data not only on cognitive biases but also on thinking styles, personality psychology, etc. Indeed, it might even be that brain-scanning could be used for these purposes in the future. In principle it should be possible to do a brain scan on the likes of Zizek, Derrida or Foucault, observe that there is nothing much going on in the relevant areas of the brain, and conclude that what they say is indeed rubbish. That would be a glorious victory of cold science over empty bullshit indeed...

I clearly need to learn to write shorter.

* "Anti-psychologism" is a rather absurd position, to my mind. Even though there have of course been misapplications of psychological knowledge in philosophy, a blanket prohibition of the use of psychological knowledge - knowledge of how people typically do reason - in philosophy - which is, at least in part, the study of how we ought to reason - seems to me to be quiet absurd. For an interesting sociological explanation of why this idea became so widespread, see Martin Kusch's Psychologism: A Case Study in the Sociology of Philosophical Knowledge - in effect a genetic argument against anti-psychologism...

Another reason was that analytical philosophers revolted against the rather crude genetic arguments often given by Marxists ("you only say so because you're bourgeois") and Freudians ("you only say so because you're sexually repressed"). Popper's name especially comes to mind here. The problem with their ad hominem arguments was not so much that they were ad hominem, though, but that they were based on flawed theories of how our mind works. We now know much better - the psychological mechanisms discussed here have been validated in countless experiments - and should make use of that knowledge.

There are also other reasons, such as early analytic philosophy's much too "individualistic" picture of human knowledge (a picture which I think comes naturally to us for biological reasons, but which also is an important aspect of Enlightenment thought, starting perhaps with Descartes). They simply underestimated the degree to which we rely on trusting other people in modern society (something discussed, e.g. by Hilary Putnam). I will come back to this theme in a later post but will not go into it further now.

Why don't more rationalists start startups?

-3 adamzerner 20 January 2014 07:29AM

My motivation behind this post stems from Aumann's agreement theorem. It seems that my opinions on startups differ from most of the rationality community, so I want to share my thoughts, and hear your thoughts, so we could reach a better conclusion.

I think that if you're smart and hard-working, there's a pretty good chance that you'll achieve financial independence within a decade of beginning your journey to start a startup. And that's my conservative estimate.

"Achieve financial independence" only scratches the surface of the benefits of succeeding with a startup. If you're an altruist, you'll get to help a lot of other people too. And making millions of dollars will also allow you the leverage you need to make riskier investments with much higher expected values, allowing you to grow your money quickly so you could do more good.

A lot of this is predicated on my belief that you have a good chance at succeeding if you're smart and hardworking, so let me explain why I think this.



Along the lines of reductionism, "success with a startup" is an outcome (I guess we could define success as a $5-10M exit in under 10 years). And outcomes consist of their components. My argument consists of breaking the main outcome into its components, and then arguing that the components are all likely enough for the main outcome to be likely (see the sketch of the arithmetic right after the list below).

I think that the 4 components are:

  1. Devise an idea for a product that creates demand.
  2. Build it.
  3. Market and sell it.
  4. Things run smoothly (some might call this luck).
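Here is the implicit arithmetic behind the argument, under the strong (and debatable) assumption that the components are independent; the per-component probabilities are hypothetical:

```python
# If overall success requires all four components, and they are independent,
# the overall probability is the product of the per-component probabilities.
p_idea, p_build, p_sell, p_smooth = 0.95, 0.95, 0.95, 0.95  # hypothetical
print(p_idea * p_build * p_sell * p_smooth)  # ~0.81

# The product drops quickly as per-component confidence falls:
print(0.8 ** 4)  # ~0.41
```

This is why the rest of the post argues, component by component, that each probability is high.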

The Idea

Your idea has to be for a product or service (I'll just say product to keep things simple) that creates demand, and can be met profitably. In other words, make something people want (this article spells it out pretty well).

What could go wrong?

  • Failure to think specifically about benefits. These articles explain what I mean by this better than I could.
  • Failure to understand customers. To put yourself in their minds and understand what it is that they do and don't want. This is distinct from the first bullet point. You could have a specific benefit in mind, but be wrong about whether it's something your customer really wants (or about how badly they want it).
  • Failure to research competitors. Maybe you came up with a great idea, but it turns out that it exists already.
The big issue here is the first bullet point. As spelled out in Eliezer's article, people are horrible at thinking specifically about the benefits that their idea will bring customers. They're horrible at moving down the ladder of abstraction. They think more along the lines of "we connect people" instead of "we let you talk to your friends". Even applicants to YC (probably the best startup accelerator in the world) suffer from this problem immensely. I think this problem is the single biggest cause of failure for startups. (They say that 90% of startups fail? Well, >99% of people can't think concretely.) However, I think it's something that could be avoided with willpower, reading the LessWrong Sequences, and taking some time to practice your new habit.

The second bullet point shouldn't be too hard, once your thinking becomes specific. And the third one is mostly a matter of taking a few days to do some research.


Build It

What I mean by 'build it' is pretty straightforward: take that idea you had, and make it real.

What could go wrong?
  • Our society doesn't have the technological or scientific progress necessary to build the product. For example, I have an idea for a machine that teleports you from one place to another. Unfortunately, we as a society aren't at a point where someone could build that.
  • You personally don't have the skills to build it.
  • You don't work hard enough. Maybe you try, and find that you don't have the willpower. Maybe you try, find that you do have the willpower, but realize that the amount of work it takes isn't worth it to you.
  • You can't find people with the skills to work on it with you (cofounders).
  • You can't raise money from investors to hire people to help you build it.
  • The people you work with/hire aren't good enough to build the product you envisioned.
There's probably other things that could go wrong that I can't think of, but I think this is enough to work with for now.

First bullet point: you really just have to avoid unfeasible ideas. Doesn't sound too hard. I guess this could be a problem for someone at the forefront of their field, trying to push the boundaries, but who makes an error in judging what's buildable. However, I think that there are plenty of ideas that don't run you this risk.

Second bullet point: if you don't have the skills, then get them. There's plenty of resources available to learn. For one, it only takes a couple months to get the skills you'll need to build a decent website. Or you could invest more time to study something like engineering or design, which will increase your options of what ideas you could build.

Third bullet point: if you don't have willpower, it'll be pretty tough to succeed. Possible, but pretty tough. I don't recommend trying.

Fourth bullet point: that's just another thing that limits the ideas you could build successfully. Some ideas you can't build without a cofounder or cofounders, and some you can. Finding a cofounder shouldn't be too difficult, though.

Fifth bullet point: this is actually a tough one. A lot of ideas will require at least seed funding (tens or hundreds of thousands of dollars) to build. There are definitely a bunch of ideas that you could build without any investment, but they're the minority. So let's say you have an idea that does require investment, but you're having trouble raising money (which I think would be understandable). Basically, I'd say that you should focus on peeling away the layers of risk. By doing that, reading up on fundraising, and using AngelList, I think you'd have a pretty good shot at raising the money you need. Still, though, I think not being able to find an investor is a legitimate risk.

Sixth bullet point: I've never hired anyone before, but it doesn't seem that hard. Doing a good job optimizing your hires seems like something you'd have to be skilled at, but satisficing to the point that they could do a sufficient job building the product you envision seems to be something that any reasonable person can do.


Market and Sell It

Once you think up your product and build it, you then have to sell it to your customers. This means reaching them, convincing them, and distributing to them.

What could go wrong?
  • You're unable to communicate clearly to your customers what benefits they'll be receiving if they use your product.
  • You're unable to persuade them. (There are other elements to persuasion aside from clear communication).
  • You didn't reach enough people. Maybe you didn't advertise enough. Maybe you thought word would spread, and it didn't.
  • You're having distribution problems (delivering the product to your customer).
  • PR problems. Something goes wrong and you obtain a bad reputation.
First bullet point: see The Idea.

Second bullet point: First of all, read that book (Influence by Robert Cialdini). I'm no expert on persuasion, but I think taking a little time to read a few books would make you sufficiently good at it. And it's not that hard to persuade people when you've got a product that they love.

Third bullet point: I'm no expert on this either. However, I do hear that internet ads nowadays make it pretty easy and affordable to reach a targeted and good sized audience. Also, as always for things you don't know too much about, read up on it and educate yourself. I don't know enough about this to argue it well, and I don't feel too strongly about it, but I get the sense that this is unlikely to prevent success. Doing this stuff seems like it'd be sufficient.

Fourth bullet point: I don't know much about distribution. It seems that distribution is really only a problem for certain types of businesses. For them, I guess that's something you have to take into account before you go forth with an idea. Otherwise, it doesn't seem like too big a deal.

Fifth bullet point: I guess this is something that could kill a business. To a reasonable person though, it doesn't seem like too big a risk.


Things Running Smoothly

Obviously, crazy things could happen. However, they don't seem too likely.

What could go wrong?
  • Legal issues (current). Maybe you did something illegal and didn't realize it (ex. copyright infringement), and sanctions or a lawsuit killed your startup.
  • Legal issues (future). Maybe new laws were enacted that killed your startup.
  • Something in your personal life goes wrong that requires you to quit.
  • Your competitors innovate and beat you out. Or a big company decides to enter the market, and crushes you.
  • Scientific findings lead to your product being obsolete.
  • Macroeconomic conditions change, which somehow leads to people not wanting your product.
  • Political/social conditions lead to people not wanting your product.
Most of these seem like they have pretty low probabilities of happening. Low enough that they don't influence the overall likelihood of success too much. Especially if you're doing something that genuinely helps people (if so, it's less likely that things like legal/economic/political/social changes will end up hurting you).

Regarding competitors beating you out, that's something that sounds like a big risk, but actually doesn't happen as often as you'd think. You'd think that if a startup comes across an innovative idea, big companies that are hundreds or thousands of times the size of that startup would just copy the idea and execute it themselves, given that the big company has so many resources. Somehow that doesn't happen too often. Big companies just seem slow to adapt. By the time they react, the startup usually has momentum, which often causes the big company to acquire the startup, or lose market share. So just based off of my understanding of what actually tends to happen, this risk seems to be something to note, but not something to really worry about (see lesson #4).


Conclusion

Given all of this, I think that if you're smart and hard working, you should have *at least* an 80-90% chance of succeeding at a startup. Again... you have to think about what specific benefits your idea provides... you have to map out how it'll be built, and work hard at doing so... and you have to read up on marketing, and work hard at it. As I argue above, the components all seem very doable, and thus the parent outcome seems very achievable.

I really mean for this article to be a starting point for discussion. I think that if we outline the components and discuss each one, we'll make a lot of progress in coming to an agreement. So let me know which components you think I omitted, and which components you think I'm mistaken about.


PS: A lot of people seem to disregard startups as something they don't know much about, and aren't too interested in. Why? Success = millions of dollars. Aren't you curious as to how likely that success is? If there's an outcome you desire, shouldn't you be interested in how achievable it is?

The first AI probably won't be very smart

-2 jpaulson 16 January 2014 01:37AM

Claim: The first human-level AIs are not likely to undergo an intelligence explosion.

1) Brains have a ton of computational power: ~86 billion neurons and trillions of connections between them. Unless there's a "shortcut" to intelligence, we won't be able to efficiently simulate a brain for a long time. http://io9.com/this-computer-took-40-minutes-to-simulate-one-second-of-1043288954 describes one of the largest computers in the world simulating 1 second of brain activity in 40 minutes (i.e. this "AI" would think 2400 times slower than you or me). The first AIs are not likely to be fast thinkers.
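(The 2400x figure is just the ratio of computing time to simulated time, using the numbers from the linked article:)

```python
# 40 minutes of computation per 1 second of simulated brain activity:
minutes_per_simulated_second = 40
print(minutes_per_simulated_second * 60)  # 2400 -> a 2400x slowdown vs. real time
```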

2) Being able to read your own source code does not mean you can self-modify. You know that you're made of DNA. You can even get your own "source code" for a few thousand dollars. No humans have successfully self-modified into an intelligence explosion; the idea seems laughable.

3) Self-improvement is not like compound interest: if an AI comes up with an idea to modify its source code to make it smarter, that doesn't automatically mean it will have a new idea tomorrow. In fact, as it picks off low-hanging fruit, new ideas will probably be harder and harder to think of. There's no guarantee that "how smart the AI is" will keep up with "how hard it is to think of ways to make the AI smarter"; to me, it seems very unlikely.

Community bias in threat evaluation

5 pianoforte611 17 January 2014 04:01AM

If I were to ask the question "What threat poses the greatest risk to society/humanity?" to several communities I would expect to get some answers that follow a predictable trend:

If I asked the question on an HBD blog I'd probably get one of the answers demographic disaster/dysgenics/immigration.

If I asked the question to a bunch of environmentalists they'd probably say global warming or pollution.

If I asked the question on a leftist blog I might get the answer: growing inequality/exploitation of workers.

If I asked the question to Catholic bishops they might say abortion/sexual immorality.

And if I were to ask the question on LessWrong (which is heavily populated by Computer scientists and programmers) many would respond with unfriendly AI.

One of these groups might be right, I don't know. However I would treat all of their claims with caution.

Edit: This may not be a bad thing from an instrumental rationality perspective. If you think that the problem you're working on is really important, then you're more likely to put a good effort into solving it.

Functional Side Effects

0 Coscott 14 January 2014 08:22PM

Cross Posted on By Way of Contradiction

You have probably heard the argument in favor of functional programming languages that functions act like functions in mathematics, and therefore have no side effects. When you call a function, you get an output, and with the possible exception of the running time, nothing matters except the output that you get. This is in contrast with other programming languages, where a function might change the value of some global variable and have a lasting effect.
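(A minimal illustration of the usual claim; the code is hypothetical and only contrasts the two styles:)

```python
calls = 0

def impure_add(x, y):
    global calls
    calls += 1     # side effect: mutates global state every time it is called
    return x + y

def pure_add(x, y):
    return x + y   # nothing observable besides the output (and the running time)
```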

Unfortunately the truth is not that simple. All functions can have side effects. Let me illustrate this with Newcomb’s problem. In front of you are two boxes. The first box contains 1000 dollars, while the second box contains either 1,000,000 or nothing. You may choose to take either both boxes or just the second box. An Artificial Intelligence, Omega, can predict your actions with high accuracy, and has put 1,000,000 in the second box if and only if he predicts that you will take only the second box.

You, being a good reflexive decision agent, take only the second box, and it contains 1,000,000.

Omega can be viewed as a single function in a functional programming language, which takes in all sorts of information about you and the universe, and outputs a single number, 1,000,000 or 0. This function has a side effect. The side effect is that you take only the second box. If Omega did not simulate you and just output 1,000,000, and you knew this, then you would take two boxes.
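(Here is a minimal sketch of this point; all names and the exact structure are hypothetical. Both Omegas below can output 1,000,000, but the agent's action - the "side effect" - depends on Omega's internals, not just on its output:)

```python
def omega_simulating(the_agent):
    # Predicts by running the agent; fills box 2 iff it predicts one-boxing.
    prediction = the_agent(facing_simulator=True)
    return 1_000_000 if prediction == "one-box" else 0

def omega_constant(the_agent):
    # Never simulates the agent; just outputs 1,000,000 unconditionally.
    return 1_000_000

def agent(facing_simulator):
    # A reflexive agent: one-boxes only when its choice is being predicted.
    return "one-box" if facing_simulator else "two-box"

print(omega_simulating(agent))        # 1000000, and the agent one-boxes
print(agent(facing_simulator=False))  # 'two-box': against omega_constant, the
                                      # same agent would take both boxes
```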

Perhaps you are thinking “No, I took one box because I BELIEVED I was being simulated. This was not a side effect of the function, but instead a side effect of my beliefs about the function. That doesn’t count.”

Or, perhaps you are thinking “No, I took one box because of the function from my actions to states of the box. The side effect is in no way dependent on the interior workings of Omega, but only on the output of Omega’s function in counterfactual universes. Omega’s code does not matter. All that matters is the mathematical function from the input to the output.”

These are reasonable rebuttals, but they do not carry over to other situations.

Imagine two programs, Omega 1 and Omega 2. They both simulate you for an hour, then output 0. The only difference is that Omega 1 tortures the simulation of you for an hour, while Omega 2 tries its best to satisfy the values of the simulation of you. Which of these functions would you rather have run?

The fact that you have a preference between these (assuming you do have a preference) shows that the function has a side effect that is not just a consequence of the function's application in counterfactual universes.

Further, notice that even if you never know which function is run, you still have a preference. It is possible to have preferences over things that you do not know about. Therefore, this side effect is not just a function of your beliefs about Omega.

Sometimes the input-output model of computation is an oversimplification.

Let’s look at an application of thinking about side effects to Wei Dai’s Updateless Decision Theory. I will not try to explain UDT if you don’t already know about it, so this post should not be viewed alone.

UDT 1.0 is an attempt at a reflexive decision theory. It views a decision agent as a machine with code S, given input X, and having to choose an output Y. It advises the agent to consider different possible outputs Y, and to consider all consequences of the fact that the code S, when run on X, outputs Y. It then outputs the Y which maximizes the agent's perceived utility over all the perceived consequences.

Wei Dai noticed an error with UDT 1.0 with the following thought experiment:

“Suppose Omega appears and tells you that you have just been copied, and each copy has been assigned a different number, either 1 or 2. Your number happens to be 1. You can choose between option A or option B. If the two copies choose different options without talking to each other, then each gets $10, otherwise they get $0.”

The problem is that all the reasons that S(1)=A are the exact same reasons why S(2)=A, so the two copies will probably produce the same result. Wei Dai proposes a fix, UDT 1.1: instead of choosing an output S(1), you choose a function S from {1,2} to {A,B}, out of the 4 available functions, which maximizes utility. I think this was not the correct correction, which I will probably talk about in the future. I prefer UDT 1.0 to UDT 1.1.
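For concreteness, here is a sketch of the UDT 1.1 rule on this problem (`utility_of_policy` is a hypothetical stand-in; real UDT reasons about logical consequences rather than a simple lookup):

```python
from itertools import product

def udt_1_1(inputs, outputs, utility_of_policy):
    # Choose a whole input-to-output map (a policy), not a single output.
    best_policy, best_u = None, float("-inf")
    for assignment in product(outputs, repeat=len(inputs)):
        policy = dict(zip(inputs, assignment))
        u = utility_of_policy(policy)
        if u > best_u:
            best_policy, best_u = policy, u
    return best_policy

def copy_problem_utility(policy):
    # Both copies run the same S, so copy 2's answer is simply policy[2].
    return 10 if policy[1] != policy[2] else 0

best = udt_1_1([1, 2], ["A", "B"], copy_problem_utility)
# best maps 1 and 2 to different options, e.g. {1: 'A', 2: 'B'}
```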

Instead, I would like to offer an alternative way of looking at this thought experiment. The error lies in the fact that S only looked at outputs and ignored possible side effects. I am aware that when S looked at the outputs, it was also considering its output in simulations of itself, but those are not side effects of the function. Those are direct results of the function’s output.

We should look at this problem and think, “I want to output A or B, but in a way that has the side effect that the other copy of me outputs B or A respectively.” S could search through functions, considering both their output on input 1 and their side effects. S might decide to run the UDT 1.1 algorithm, which would have the desired result.

The difference between this and UDT 1.1 is that in UDT 1.1, S(1) acts as though it had complete control over the output of S(2). In this thought experiment that seems like a fair assumption, but I do not think it is a fair assumption in general, so I am trying to construct a decision theory which does not have to make it. This matters because if the problem were different, S(1) and S(2) might have had different utility functions.

[Link] More ominous than a [Marriage] strike

6 GLaDOS 04 January 2014 05:34PM

Dalrock writes an interesting article related to Dr. Helen Smith's book the Marriage Strike. I really have to bump it up on my too rapidly growing reading list. (^_^)

Dr. Helen has a thoughtful post up asking if the title of her book is an accurate description of men’s response to the changes in the law and culture.  While the title of her book is extremely effective in opening the discussion (which is what it needs to do), it isn’t an accurate description of the problem we face in the West.  A strike can be negotiated with;  offer them a bit more and they’ll get back to work.  Better yet, offer a few of them a side deal and break the cohesion.  True strikes require moral or legal force to avoid this sort of peeling off.  The problem for the modern West is far worse.  What we are seeing isn’t men throwing a collective temper tantrum, noble or otherwise.  What we are seeing is men responding to incentives.  Even worse, inertia has delayed the response to incentives, which means much more adjustment is likely on the way.

There was an old joke in the Soviet Union to the effect of:

""We pretend to work.  They pretend to pay us.""

The problem for the Soviets was this wasn’t a movement.  They knew how to handle a movement, and Siberia had plenty of room above ground and below.  The Soviets were masters at coercion through fear, but the problem wasn’t a rebellion, it was that they had reached the limits of incentive through fear.  In the short and even medium term fear is a very effective motivator.  But over time, if overused, it loses some of its power, especially when it comes to the kind of productivity which requires creativity and risk taking.  Standing out is risky;  you don’t want to be the worst worker on the line in a fear-based system, but you also have reason to fear being the best worker on the line.  This doesn’t happen so much by conscious choice, but due to the influence of the incentive structure on the culture over time.  Conscious choices can be bargained with, and threats of punishment are still effective.  The culture itself is far harder to negotiate with.  No one is refusing anything.  So the Soviets had no choice but to assign quotas, and severely punish those who failed to meet them.  But while the quota/coercion system keeps production running, it works against human nature.  If you become the best producer you end up being assigned a larger share of the quota burden;  from each according to his abilities.  Over time the logic of this works its way into the culture, as everyone gets just a little more inclined to go with the flow and not do more than required.  The problem is that while momentum causes the response to be slow, it also means it is very difficult to deal with once there is enough of it to recognize.

The problem we presently face in the West is similar.  While we have a small number of men who have decided to slack off as a form of protest, the far more insidious risk to our economy is the across the board weakening of the incentive that a marriage based social structure creates for men to produce at their full potential.  We’ve moved from a mostly reward based incentive structure to a model the Soviets would have been proud of.

You can see this at the micro level with a man whose wife goes Jenny Erickson on him.  The courts understand that throwing a man out of the home and taking away his children naturally reduces the man’s normal incentive to work to support his family.  How could it not?  It isn’t that most men in this situation will stand by and watch their children starve, but they won’t be motivated to produce quite as much.  You can confiscate a percentage of his income in the form of child support, but he no longer has the incentive to fight his way quite so high up our progressive tax structure.  This is why the courts have to assign the man an income quota he has to meet, Soviet style.  Imputation of income isn’t incidental to the child support family model;  it is essential to the function of the model.  Note that this doesn’t mean the courts have to formally calculate an income quota for each man who ends up in the new child support family structure;  in most cases the man has already assigned himself a quota based on past production.  All the family courts need to do in most cases is make sure he doesn’t fall below this quota.

As I mentioned above, coercion is generally a very effective incentive in the near and medium term.  Part of the reason conservatives are so enamored with child support is the threatpoint it provides to keep existing husbands working as hard as possible.  While in the long run this will ultimately create a culture where husbands are less inclined to become standout earners, as Keynes famously put it, in the long run we are all dead.  The other problem is that the changes in the culture in response to overuse of coercion are by their very nature difficult to identify and quantify.  This isn’t unlike the Laffer Curve;  while both liberals and conservatives agree regarding the principle of the curve, the shape of the curve is impossible to get agreement on.  Eventually you can raise tax rates so high that you end up with lower revenue, but due to the problems of momentum, identifying exactly when you have (or will) hit that point can be very difficult.

The more immediate problem in the West is the reduced incentive young men perceive to compete as breadwinners due to the continuing delay in the age of marriage.  Again, this isn’t a movement; it is a delayed response by the culture to reality.  When the average woman marries in her late teens or even her early twenties, the average young man will see himself as competing with his peers for the job of husband.  Not only is he competing to not be left out of the game entirely, but he is jockeying for a better choice of wife.  But move the age of marriage out far enough, and eventually young men don’t see themselves so clearly as competing for the job of husband.  Extend the age of marriage far enough and eventually the culture of young men will be less focused on competing to signal provider status, and their priorities will shift (on the margin) toward slacking off.  The question isn’t if this will happen, but how long you can push the age of marriage out before it starts to happen, how much it will reduce the motivation of young men, and how long the lag will be between the change in reality and the change in culture.  Note also that men don’t have to swear off marriage entirely for this to greatly impact our tax base.  Changing the culture of men in their formative years will have a lasting impact.  You can’t rewind time and undo a decade of (relative) slacking.  Additionally, momentum tends to start working against you at some point.  As the expectations of men as providers decline, it eventually creates an expectation of decline.  As each generation of new husbands comes to the table with less to offer as providers, we will eventually start to expect future generations of husbands to offer even less.

As I’ve said before, all of this places our elites in a very difficult bind.  Eventually the momentum which initially masked the problem makes it extremely difficult to address.  Denial of the problem is a flawed strategy, but it has important advantages.  Once you acknowledge that the incentive structure is flawed, you tend to accelerate the delayed response to the new structure.  At the same time, the changes at the core of the problem are very close to the hearts of both liberals and conservatives.  However, ignoring the problem will become more and more difficult because of the impact on the bottom line.  Because of this, we can expect to see more of what we already see.  Feminists will continue their handwringing, tentatively asking if perhaps we have gone a bit too far, and conservatives will redouble their efforts to convince men they need to man up and stop sabotaging the glorious feminist progress.  Less conspicuously, I also expect we will see some dialing back of the worst excesses of the family courts.  However, because of the momentum involved and the reluctance to acknowledge the fundamental problem, these changes will at best only slow the problem, and they will always run the risk of initially accelerating it.
