
Comment author: IlyaShpitser 29 July 2014 06:48:11PM *  0 points [-]

"Statisticians" is a pretty large set.

I still don't understand your original "because." I am talking about modeling the truth, not modeling what humans do. If the truth is not linear and humans use a linear modeling algorithm, well then they aren't a very good role model, are they?

[ edit: did not downvote. ]

Comment author: Stuart_Armstrong 30 July 2014 09:42:22AM 0 points [-]

Because human flaws creep in during the process of modelling as well. Taking non-linear relationships into account (unless there is a causal reason to do so) is asking for statistical trouble, unless you very carefully account for how many models you have tested and tried (which almost nobody does).
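A minimal sketch of that multiple-comparisons worry (the data and the candidate transforms are purely illustrative, not from the comment): if you try many arbitrary non-linear features and keep the best in-sample fit, the "best" non-linear model always looks at least as good as the linear one, even when the truth is exactly linear.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)  # the truth really is linear

def r2(pred, y):
    # in-sample coefficient of determination
    return 1 - np.sum((y - pred) ** 2) / np.sum((y - np.mean(y)) ** 2)

def fit_eval(feats, y):
    # ordinary least squares on the given feature matrix
    beta, *_ = np.linalg.lstsq(feats, y, rcond=None)
    return r2(feats @ beta, y)

linear = fit_eval(np.column_stack([np.ones_like(x), x]), y)

# try many arbitrary non-linear transforms and keep the best in-sample score
candidates = [np.sin(k * x) for k in range(1, 30)]
best_nonlinear = max(
    fit_eval(np.column_stack([np.ones_like(x), x, c]), y) for c in candidates
)

# adding a column never hurts an OLS fit in-sample, so the winner of the
# search "beats" the linear model even though it has found nothing real
assert best_nonlinear >= linear - 1e-9
```

Without adjusting for the 29 models tried, the winning non-linear fit would be reported as an improvement over the linear one.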

Comment author: gwern 29 July 2014 05:40:18PM 1 point [-]

1968? Seriously?

Comment author: Stuart_Armstrong 30 July 2014 09:40:13AM 0 points [-]

Well there's Goldberg, Lewis R. "Five models of clinical judgment: An empirical comparison between linear and nonlinear representations of the human inference process." Organizational Behavior and Human Performance 6.4 (1971): 458-479.

The main thing is that these old papers still seem to be considered valid; see e.g. Shanteau, James. "How much information does an expert use? Is it relevant?" Acta Psychologica 81.1 (1992): 75-86.

Comment author: gwern 29 July 2014 04:07:32PM 2 points [-]

Best done? Better than, say, decision trees or expert systems or Bayesian belief networks? Citation needed.

Comment author: Stuart_Armstrong 29 July 2014 05:20:12PM 0 points [-]

Goldberg, Lewis R. "Simple models or simple processes? Some research on clinical judgments." American Psychologist 23.7 (1968): 483.

Comment author: John_D 29 July 2014 12:11:29PM *  -1 points [-]

I'm surprised by the lack of research on organic foods and health, and it seems like it wouldn't be too hard for a talented researcher to compare the health and mortality of people who consume organic vs. inorganic diets, after controlling for differences between the two groups, such as total nutrient consumption, exercise, premorbid conditions prior to organic consumption, etc. Modified food may or may not have adverse effects beyond different nutrient contents (which so far is debatable), but I'm surprised at the number of people who have jumped on this bandwagon with scant supporting evidence.

There is also the possibility that people will eat worse when consuming organic. I suspect that an inorganic diet composed of fish, fruits and vegetables, legumes, lean dairy, and nuts will be far healthier than an organic diet composed of fried chips, fatty artisan cheeses, chocolate bars, and low fiber carbs. Go to Trader Joe's or Whole Foods and watch how many carts are filled with the things you shouldn't eat. In fact, it seems the all-natural industry follows #1 (as far as they can) and #2 quite well, and if organic retailers are a proxy, they are about as good at ignoring #3 as the rest of the industry.

Comment author: Stuart_Armstrong 29 July 2014 04:49:24PM *  1 point [-]

Controlling doesn't get rid of all the confounders (easiest one: people who eat organic care more about what they eat, almost by definition - how do you control for that?), and long-term studies are very hard to do.

Comment author: IlyaShpitser 28 July 2014 07:53:10PM *  2 points [-]

I don't follow you. Overfitting happens when your model has too many parameters, relative to the amount of data you have. It is true that linear models may have few parameters compared to some non-linear models (for example linear regression models vs regression models with extra interaction parameters). But surely, we can have sparsely parameterized non-linear models as well.

All I am saying is that if things are surprising it is either due to "noise" (variance) or "getting the truth wrong" (bias). Or both.

I agree that "models we can quickly and easily use while under publish-or-perish pressure" is an important class of models in practice :). Moreover, linear models are often in this class, while a ton of very interesting non-linear models in stats are not, and thus are rarely used. It is a pity.
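A minimal sketch of the point above that "non-linear" does not imply "many parameters" (the model and numbers are my own illustration): y = a·exp(b·x) is non-linear in x but has only two parameters, the same count as a straight line, and can even be fit with ordinary linear regression on log(y).

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 2, 100)
# two-parameter non-linear truth: y = 3 * exp(1.5 * x), with small noise
y = 3.0 * np.exp(1.5 * x) * np.exp(rng.normal(scale=0.05, size=x.size))

# linear regression on log(y) recovers both parameters of the
# sparsely parameterized non-linear model
b_hat, log_a_hat = np.polyfit(x, np.log(y), 1)
a_hat = np.exp(log_a_hat)

print(a_hat, b_hat)  # close to 3.0 and 1.5
```

So the overfitting risk tracks the parameter count and the amount of data, not linearity per se.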

Comment author: Stuart_Armstrong 29 July 2014 09:44:42AM -1 points [-]

The problem is more practical than theoretical (I don't have the links to hand, but you can find some in my silos of expertise post). Statisticians do not adjust properly for extra degrees of freedom, so among some category of published models, the linear ones will be best. Also, it seems that linear models are very good for modelling human expertise - we might think we're complex, but we behave pretty linearly.

Comment author: gwern 28 July 2014 06:52:44PM 1 point [-]

Because in many fields, linear models (even poor ones) are the best we're going to get, with more complex models losing to overfitting.

I don't think that's true. What fields show optimal performance from linear models where better predictions can't be gotten from other techniques like decision trees or neural nets or ensembles of techniques?


Showing that crude linear models, with no form of regularization or priors, beat human clinical judgement doesn't show your previous claim.

Comment author: Stuart_Armstrong 29 July 2014 09:42:16AM 0 points [-]

Modelling human clinical judgement is best done with linear models, for instance.

Comment author: Kaj_Sotala 29 July 2014 09:07:20AM *  1 point [-]

(I liked your post, but here's a sidenote...)

It bothers me that we keep talking about preferences without actually knowing what they are. I mean yes, in the VNM formulation a preference is something that causes you to choose one of two options, but we also know that to be insufficient as a definition. Humans have lots of different reasons for why they might choose A over B, and we'd need to know the exact reasons for each choice if we wanted to declare some choices as "losing" and some as "not losing". To use Eliezer's paraphrase, maybe the person in question really likes riding a taxi between those locations, and couldn't in fact use their money in any better way.

The natural objection to this is that in that case, the person isn't "really" optimizing for their location and being irrational about it, but is rather optimizing for spending a lot of time in the taxi and being rational about it. But 1) human brains are messy enough that it's unclear whether this distinction actually cuts reality at the joints; and 2) "you have to look deeper than just their actions in order to tell whether they're behaving rationally or not" was my very point.

Comment author: Stuart_Armstrong 29 July 2014 09:39:43AM 1 point [-]

Valid point, but do let me take baby steps away from VNM and see where that leads, rather than solving the whole preference issue immediately :-)

Comment author: James_Miller 28 July 2014 07:24:39PM 2 points [-]

Unlosing agents, living in a world with extorters, might have to be classically irrational in the sense that they would not give in to threats even when a rational person would. Furthermore, unlosing agents living in a world in which other people can be threatened might need to have an irrationally strong desire to carry out threats, so as not to lose the opportunity of extorting others. These examples assume that others can correctly read your utility function.

Generalizing, an unlosing agent would have an attitude towards threats and promises that maximized his utility given that other people know his attitude towards threats and promises. I strongly suspect that this situation would have multiple equilibria when multiple unlosing agents interacted.

Comment author: Stuart_Armstrong 29 July 2014 09:34:35AM 0 points [-]

The problem isn't solved for expected utility maximisers. Would unlosing agents be easier to solve?

Comment author: Squark 28 July 2014 06:49:03PM 3 points [-]

Just a sidenote, but IMO the solution to Pascal's mugging is simply using a bounded utility function. I don't understand why people insist on unboundedness.
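A toy sketch of the bounded-utility point (the functional form, probabilities, and payoffs are my own illustration, not the commenter's): once utility is capped, an astronomically large promised payoff at a tiny probability contributes almost nothing to expected utility, so the mugging gets refused.

```python
import math

def bounded_utility(payoff, scale=100.0):
    # utility is bounded above by 1 no matter how large the payoff
    return 1.0 - math.exp(-payoff / scale)

p_mugger = 1e-20        # probability the mugger is telling the truth
huge_payoff = 3 ** 100  # stand-in for an astronomically large reward
cost_of_paying = 5.0

# expected utility gain from paying, minus the sure utility cost of paying:
# even granting the mugger the maximum possible utility (capped at 1),
# the expected gain is at most p_mugger * 1
eu_pay = p_mugger * bounded_utility(huge_payoff) - bounded_utility(cost_of_paying)
assert eu_pay < 0  # the bounded-utility agent declines
```

With an unbounded utility function the same calculation can be made to favour paying by inflating the promised payoff; the bound is what blocks that move.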

Comment author: Stuart_Armstrong 29 July 2014 09:30:42AM 0 points [-]

I probably agree in practice, but let's see if there's other avenues first...

Comment author: torekp 28 July 2014 09:58:09PM 0 points [-]

The justifications for the axioms of expected utility are, roughly: (Completeness) "If you don't decide, you'll probably lose pointlessly." [...] (Independence) "If your choices aren't independent, people can expect to make you lose pointlessly."

Just to record my objections: the axioms of Completeness and Independence are stronger than needed to guard against the threats mentioned.
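For reference, the usual formal statements of the two axioms in question (my paraphrase of the standard VNM formulation, not the poster's wording), for lotteries A, B, C:

```latex
% Completeness: every pair of lotteries is comparable.
\forall A, B:\quad A \succeq B \ \lor\ B \succeq A
% Independence: mixing both sides with a third lottery preserves preference.
A \succeq B \implies pA + (1-p)C \succeq pB + (1-p)C
\qquad \forall C,\ \forall p \in (0,1]
```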

Comment author: Stuart_Armstrong 29 July 2014 09:22:40AM 0 points [-]

I probably agree with you, but what's your exact point?
