Response to Man-with-a-hammer syndrome.

It's been claimed that there is no way to spot Affective Death Spirals, or cultish obsession with the One Big Idea of Everything. I'd like to posit a simple way to spot such error, with the caveat that it may not work for every case.

There's an old game called Two Truths and a Lie. I'd bet almost everyone's heard of it, but I'll summarize it just in case. A person makes three statements, and the other players must guess which of those statements is false. The statement-maker gets points for fooling people; the other players get points for not being fooled. That's it. I'd like to propose a rationalist's version of this game that should serve as a nifty check on certain Affective Death Spirals, runaway Theory-Of-Everythings, and Perfectly General Explanations. It's almost as simple.

Say you have a theory about human behaviour. Get a friend to do a little research and assert three factual claims about how people behave that your theory would realistically apply to. At least one of these claims must be false. See if you can explain every claim using your theory before learning which are false.
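For concreteness, here is a minimal sketch of how one might run the exercise as a little script. Everything in it is illustrative: the function name, the prompt wording, and the example claims are invented, and the real work (judging whether an explanation is convincing) still happens in your head.

```python
import random

def two_truths_and_a_lie_test(claims):
    """Run the proposed test interactively.

    `claims` is a list of (statement, is_true) pairs supplied by a friend;
    at least one statement should be false. The theory-holder sees only the
    statements, says for each whether their theory can explain it, and then
    sees which statements were actually false.
    """
    statements = [s for s, _ in claims]
    random.shuffle(statements)

    explained = {}
    for s in statements:
        answer = input(f"Can your theory explain this? '{s}' [y/n]: ")
        explained[s] = answer.strip().lower().startswith("y")

    truth = dict(claims)
    print("\nResults:")
    for s in statements:
        verdict = "TRUE " if truth[s] else "FALSE"
        call = "explained" if explained[s] else "could not explain"
        print(f"  {verdict} - you {call}: {s}")

    # Warning sign: the theory "explains" statements that are actually false.
    if any(explained[s] and not truth[s] for s in statements):
        print("Your theory explained at least one falsehood -- be wary of "
              "using it to infer facts.")

# Example usage (the claims and labels below are placeholders, not research):
# two_truths_and_a_lie_test([
#     ("People donate more when they are being observed", True),
#     ("People always prefer a larger reward later to a smaller one now", False),
#     ("Commuters underestimate their travel time", True),
# ])
```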

If you can come up with a convincing explanation for all three statements, you must be very cautious when using your One Theory. If it can explain falsehoods, there's a very high risk you're going to use it to justify whatever prior beliefs you have. Even worse, you may use it to infer facts about the world, even though it is clearly not consistent enough to do so reliably. You must exercise the utmost caution in applying your One Theory, if not abandon reliance on it altogether. If, on the other hand, you can't come up with a convincing way to explain some of the statements, and those turn out to be the false ones, then there's at least a chance you're on to something.

Come to think of it, this is an excellent challenge to any proponent of a Big Idea. Give them three facts, some of which are false, and see if their Idea can discriminate. Just remember to be ruthless when they get it wrong; it doesn't prove their idea is totally wrong, only that reliance upon it would be.

Edited to clarify: My argument is not that one should simply abandon a theory altogether. In some cases, this may be justified: if all the theory has going for it is its predictive power, and you show it lacks that, toss it. But in the case of broad, complex theories that actually can explain many divergent outcomes, this exercise should teach you not to rely on that theory as a means of inference. Yes, you should believe in evolution. No, you shouldn't make broad inferences about human behaviour without any data because they are consistent with evolution, unless your application of the theory of evolution is so precise and well-informed that you can consistently pass the Two-Truths-and-a-Lie Test.


It's an interesting experiment, and probably a good teaching exercise under controlled conditions to teach people about falsificationism, but real theories are too complex and theories about human behavior are way too complex.

Take the "slam dunk" theory of evolution. If "Some people and animals are homosexual" was in there, I'd pick that as the lie without even looking at the other two (well, if I didn't already know). There are some okay explanations of how homosexuality might fit into evolution, but they're not the sort of thing most people would start thinking about unless they already knew homosexuality existed.

(Another example: plate tectonics and "Hawaii, right smack in the middle of a huge plate, is full of volcanoes".)

Take the "slam dunk" theory of evolution. If "Some people and animals are homosexual" was in there, I'd pick that as the lie without even looking at the other two (well, if I didn't already know).

A rationalist ends up being wrong sometimes, and can only hope for well-calibrated probabilities. I think that, in the absence of observation, this is the sort of prediction that most human-level intelligences would end up getting wrong, and I wouldn't necessarily assume they were making any errors of rationality in doing so, but rather hitting the 1 out of 20 occasions when a 5% probability occurs.

it doesn't prove their idea is totally wrong, only that reliance upon it would be.

As that bit shows, I agree completely. But while evolution is correct, you can't use it to go around making broad factual inferences. While you should believe in evolution, you shouldn't go around making statements like, "There are no homosexuals," or "Every behaviour is adaptive in a fairly obvious way," just because your theory predicts it. This exercise properly demonstrates that while the theory is true in a general sense, broad inferences based on a simplistic model of it are not appropriate.

But evolution really does make homosexuality less likely to occur. If given a set of biological statements like "some animals are homosexual" together with the theory of evolution, you will be able to get many more true/false labelings correct than if you did not have the theory of evolution. Sure, you'll get that one wrong, but you'll still get a lot more right than you otherwise would. (In fact, I read part of a book, whose title I can't recall even after trying to look it up, by a professor who teaches evolution; its thesis was that armed only with the theory of evolution, you can correctly answer a large number of biological questions without knowing anything about the species involved.)

With complex theories and complex truths, you get statistical predictive value, rather than perfection. That doesn't mean that testing your theories on real data (the basic idea behind this post) is a bad thing! It just means you need a larger data set.

Also: "the human eye sees objects in incredible detail, but a third of people's eyes can't effectively see stuff when it's a few feet away". Wtf.

Anyone got any insight about eyes or homosexuality?

AFAIK, myopia seems to be caused, at least in part, by spending a lot of time focusing on close objects (such as books, computer screens, blackboards, walls, etc.); it's the result of another mismatch between the environment we live in and our genes. (Although it's fairly easily corrected, so there's not really any selection pressure against it these days.)

According to the studies referenced by the Wikipedia article, this is disputed and even if true would be, at most, a contributing factor active only in some of the cases. Even with no "near-work" many people would be myopic.

According to the WP article's section on epidemiology, possibly more than half of all people have a very weak form of myopia (0.5 to 1 diopters). The overall prevalence (as much as a third of the population for significant myopia) is much bigger than could be explained solely by the proposed correlations (genetic or environmental).

To me this high prevalence and smooth distribution (in degree of myopia) suggests that it should just be treated as a weakness or a disease. We shouldn't act surprised that such exist. It doesn't even mean that it's not selected against, as CronoDAS suggested (it would only be true within the last 50-100 years). Just that the selection isn't strong enough and hasn't been going on long enough to eliminate myopia. (With 30-50% prevalence, it would take quite strong selection effects.)

Why are you surprised that such defects exist? The average human body has lots of various defects. Compare: "many humans are physically incapable of the exertions required by the life of a professional Roman-era soldier, and couldn't be trained for it no matter how much they tried."

Maybe we should be surprised that so few defects exist, or maybe we shouldn't be surprised at all - how can you tell?

The two factors this suggests to me, over that time period, are "increase in TV watching among young children" and "change in diet toward highly processed foods high in carbohydrates". This hypothesis would also predict the finding that myopia increased faster among blacks than among whites, since these two factors have been stronger in poorer urban areas than in wealthier or more rural ones.

Hypotheses aside, good find!

change in diet toward highly processed foods high in carbohydrates

Has this happened since 1970?

(The article suggests "computers and handheld devices.")

It didn't begin then, but it certainly continued to shift in that direction. IIRC from The Omnivore's Dilemma, it was under Nixon that massive corn subsidies began and vast corn surpluses became the norm, which led to a frenzy of new, cheap high-fructose-corn-syrup-based products as well as the use of corn for cow feed (which, since cows can't digest corn effectively, led to a whole array of antibiotics and additives as the cheap solution).

Upshot: I'd expect that the diet changes in the 1970s through 1990s were quite substantial, that e.g. sodas became even cheaper and more ubiquitous, etc.

The surprise is that an incredibly highly selection-optimized trait isn't selection-optimized to work at all in a surprising fraction of people (including myself). So many bits of optimization pressure exerted, only to choke on the last few.

Well then it's not all that highly selection-optimized. The reality is that many people do have poor eyesight and they do survive and reproduce. Why do you expect stronger selection than is in fact the case?

Look, for thousands of generations, natural selection applied its limited quantity of optimization pressure toward refining the eye. But now it's at a point where natural selection only needs a few more bits of optimization to effect a huge vision improvement by turning a great-but-broken eye into a great eye.

The fact that most people have fantastic vision shows that this trait is high utility for natural selection to optimize. So it's astounding that natural selection doesn't think it's worth selecting for working fantastic eyes over broken fantastic eyes, when that selection only takes a few bits to make. Natural selection has already proved its willingness to spend way more bits on way less profound vision improvements, get it?

As Eliezer pointed out, the modern prevalence of bad vision is probably due to developmental factors specific to the modern world.

Just because you can imagine a better eye, doesn't mean that evolution will select for it. Evolution only selects for things that help the organisms it's acting on produce children and grandchildren, and it seems at least plausible to me that perfect eyesight isn't in that category, in humans. Even before we invented glasses, living in groups would have allowed us to assign the individuals with the best eyesight to do the tasks that required it, leaving those with a tendency toward nearsightedness to do less demanding tasks and still contribute to the tribe and win mates. In fact, in such a scenario it may even be plausible for nearsightedness to be selected for: It seems to me that someone assigned to fishing or planting would be less likely to be eaten by a tiger than someone assigned to hunting.

First of all I'm not "imagining a better eye"; by "fantastic eye" I mean the eye that natural selection spent 10,000 bits of optimization to create. Natural selection spent 10,000 bits for 10 units of eye goodness, then left 1/3 of us with a 5 bit optimization shortage that reduces our eye goodness by 3 units.

So I'm saying, if natural selection thought a unit of eye goodness was worth 1,000 bits, up to 10 units, why in modern humans doesn't it purchase 3 whole units for only 5 bits -- the same 3 units it previously purchased for 3,000 bits?

I am aware of your general point that natural selection doesn't always evolve things toward cool engineering accomplishments, but your just-so story about potential advantages of nearsightedness doesn't reduce my surprise.

Your strength as a rationalist is to be more confused by fiction than by reality. Making up a story to explain the facts in retrospect is not a reliable algorithm for guessing the causal structure of eye-goodness and its consequences. So don't increase the posterior probability of observing the data as if your story is evidence for it -- stay confused.

So I'm saying, if natural selection thought a unit of eye goodness was worth 1,000 bits, up to 10 units, why in modern humans doesn't it purchase 3 whole units for only 5 bits -- the same 3 units it previously purchased for 3,000 bits?

Perhaps, in the current environment, those 3 units aren't worth 5 bits, even though at one point they were worth 3,333 bits. (Evolution thoroughly ignores the sunk cost fallacy.)

This suggestion doesn't preclude other hypotheses; in fact, I'm not even intending to suggest that it's a particularly likely scenario - hence my use of the word plausible rather than anything more enthusiastic. But it is a plausible one, which you appeared to be vigorously denying was even possible earlier. Disregarding hypotheses for no good reason isn't particularly good rationality, either.

Why are you surprised that such defects exist?

A priori, I wouldn't have expected such a high-resolution retina to evolve in the first place, if the lens in front of it wouldn't have allowed one to take full advantage of it anyway. So I would have expected the resolving power of the lens to roughly match the resolution of the retina. (Well, oversampling can prevent moiré effects, but how likely was that to be an issue in the EEA?)

That may be diet, not evolutionary equilibrium.

I like the cuteness of turning an old parlor game into a theory-test. But I suspect a more direct and effective test would be to take one true fact, invert it, and then ask your test subject which statement fits their theory better. (I always try to do that to myself when I'm fitting my own pet theory to a new fact I've just heard, but it's hard once I already know which one is true.)

Other advantages of this test over the original one proposed in the post: (1) You don't have to go to the trouble of thinking up fake data (a problematic endeavor, because there is some art to coming up with a realistic-sounding false fact -- and also because you actually have to do some research to make sure that you didn't generate a true fact by accident). (2) Your test subject only has a 1 in 2 shot at guessing right by chance, as opposed to a 2 in 3 shot.
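Here is the same kind of illustrative sketch for this variant, with invented statements; the only job of the code is to hide which statement is the real one until you've committed to an answer.

```python
import random

def inversion_test(fact, inverted_fact):
    """Show a true fact and its inversion in random order and ask which one
    the theory-holder's theory predicts. Returns True if they picked the
    real fact."""
    options = [fact, inverted_fact]
    random.shuffle(options)
    for i, option in enumerate(options, 1):
        print(f"{i}) {option}")
    choice = int(input("Which statement does your theory predict? [1/2]: "))
    return options[choice - 1] == fact

# Example usage (the statements are placeholders, not real findings):
# inversion_test("People tip more on sunny days",
#                "People tip less on sunny days")
```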

I think you oversell the usefulness of this test, both because of how hard it is to make predictions about unrepeatable "experiments" that don't include value-judgments and because of how easy it is to game the statements - imagine:

(A) the false statement being false for reasons extraneous to the theory, and (B) the proponent of the Big Idea arguing that (A) is the case when it isn't.

Let's say my friend and I are doing this test. His Big Idea is signaling; my task is to construct three statements.

1) Men who want to mate spend a lot of money. (Signaling resources!)
2) Women who want to mate volunteer. (Signaling nurturing!)
3) Children often share with each other, unprompted, while young. (Signaling cooperation to parents!)

Well, obviously #3 isn't right because of other concerns - it turns out competing for, and hoarding, resources has been evolutionarily more successful than signaling social fitness. Does that mean signaling as an idea isn't useful? No; it explained (3) convincingly even though (3) is false, and (3) is false for reasons unrelated to signaling.


Psychohistorian doesn't say the idea isn't useful, just that reliance on it is incorrect. If the theory is "people mostly do stuff because of signalling", honestly, that's a pretty crappy theory. Once Signalling Guy fails this test, he should take that as a sign to go back and refine the theory, perhaps to

"People do stuff because of signalling when the benefit of the signal, in the environment of evolutionary adaptation, was worth more than its cost."

This means that making predictions requires estimating the cost and benefit of the behavior in advance, which requires a lot more data and computation, but that's what makes the theory a useful predictor instead of just another bogus Big Idea.

Not to point fingers at Freakonomics fans (not least because I'm guilty of this myself in party conversation) but it's real easy to look at a behavior that doesn't seem to make sense otherwise and say "oh, duh, signalling". The key is that the behavior doesn't make sense otherwise: it's costly, and that's an indication that, if people are doing it, there's a benefit you're not seeing. That technique may be helpful for explaining, but it's not helpful for predicting since, as you pointed out, it can explain anything if there's not enough cost/benefit information to rule it out.


it's real easy to look at a behavior that doesn't seem to make sense otherwise and say "oh, duh, signalling". The key is that the behavior doesn't make sense otherwise: it's costly, and that's an indication that, if people are doing it, there's a benefit you're not seeing.

People do all sorts of insane things for reasons other than signaling, though. Like because their parents did it, or because the behavior was rewarded at some point.

Of course, signaling behavior is often rewarded, due to it being successful signaling... which means it might be more accurate to say that people do things because they've been rewarded at some point for doing them, and it just so happens that signaling behavior is often rewarded.

(Which is just the sort of detail we would want to see from a good theory of signaling -- or anything else about human behavior.)

Unfortunately, the search for a Big Idea in human behavior is kind of dangerous. Not just because a big-enough idea gets close to being tautological, but also because it's a bad idea to assume that people are sane or do things for sane reasons!

If you view people as stupid robots that latch onto and imitate the first patterns they see that produce some sort of reward (as well as freezing out anything that produces pain early on) and then stubbornly refusing to change despite all reason, then that's definitely a Big Idea enough to explain nearly everything important about human behavior.

We just don't like that idea because it's not beautiful and elegant, the way Big Ideas like evolution and relativity are.

(It's also not the sort of idea we're looking for, because we want Big Ideas about psychology to help us bypass any need to understand individual human beings and their tortured histories, or even look at what their current programming is. Unfortunately, this is like expecting a Theory of Computing to let us equally predict obscure problems in Vista and OS X, without ever looking at their source code or development history of either one.)

(It's also not the sort of idea we're looking for, because we want Big Ideas about psychology to help us bypass any need to understand individual human beings and their tortured histories, or even look at what their current programming is. Unfortunately, this is like expecting a Theory of Computing to let us equally predict obscure problems in Vista and OS X, without ever looking at their source code or development history of either one.)

So do you think e.g. overcoming akrasia necessitates understanding your self-programming via a set of decent algorithms for doing so (e.g. what Less Wrong is for epistemic rationality) that allow you to figure out for yourself whatever problems you may have? That would be a little worrying insofar as something like akrasia might be similar to a blue screen of death in your Theory of Computing example: a common failure mode resulting from any number of different problems, one that can only be resolved by applying high-level learned algorithms that most people simply don't have and never bother to find, and that those who do find are unable to express succinctly enough to be memetically fit.

On top of that, just as most people never notice that they're horrible epistemic rationalists and that there is a higher standard to which they could aspire, most good epistemic rationalists may at least notice that they're sub-par along many dimensions of instrumental rationality and yet completely fail to be motivated to do anything about it: they pride themselves on being correct, not being successful, in the same way most people pride themselves on their success and not their correctness. (Most people gerrymander their definition of correctness to be success, just as rationalists may gerrymander their definition of success to be correctness; both end up losing, either by succeeding at the wrong things or by failing to succeed at the right things.)

So do you think e.g. overcoming akrasia necessitates understanding your self-programming via a set of decent algorithms for doing so (e.g. what Less Wrong is for epistemic rationality) that allow you to figure out for yourself whatever problems you may have?

Yes; see here for why.

Btw, it would be more accurate to speak of "akrasias" as individual occurrences, rather than "akrasia" as a non-countable. One can overcome an akrasia, but not "akrasia" in some general sense.

they pride themselves on being correct, not being successful

Yep, major failure mode. Been there, done that. ;-)

Btw, it would be more accurate to speak of "akrasias" as individual occurrences, rather than "akrasia" as a non-countable. One can overcome an akrasia, but not "akrasia" in some general sense.

I bet you think the war on terror is a badly framed concept.

I'd like to see this expanded into a post.

Signaling behavior is often rewarded, due to it being successful signaling... which means it might be more accurate to say that people do things because they've been rewarded at some point for doing them, and it just so happens that signaling behavior is often rewarded.

The evolutionary/signaling explanation is distinct from the rewards/conditioning explanation, because the former says that people are predisposed to engage in behaviors that were good signaling in the ancestral environment whether or not they are rewarded today.

The evolutionary/signaling explanation is distinct from the rewards/conditioning explanation, because the former says that people are predisposed to engage in behaviors that were good signaling in the ancestral environment whether or not they are rewarded today.

As a practical matter of evolution, signal-detection has to evolve before signal-generation, or there's no benefit to generating the signal. And evolution likes to reuse existing machinery, e.g. reinforcement.

In practice, human beings also seem to have some sort of "sociometer" or "how other people probably see me", so signaling behavior can be reinforcing even without others' direct interaction.

It's very unparsimonious to assume that specific human signaling behaviors are inborn, given that there are such an incredible number of such behaviors in use. Much easier to assume that signal detection and self-reflection add up to standard reinforcement, as signal-detection and self-reflection are independently useful, while standalone signaling behaviors are not.

As a practical matter of evolution, signal-detection has to evolve before signal-generation, or there's no benefit to generating the signal

Er?
This seems to preclude cases where pre-existing behaviors are co-opted as signals.
Did you mean to preclude such cases?

This seems to preclude cases where pre-existing behaviors are co-opted as signals. Did you mean to preclude such cases?

Bleah. I notice that I am confused. Or at least, confusing. ;-)

What I was trying to say was that there's no reason to fake (or enhance) a characteristic or behavior until after it's being evaluated by others. So the evolutionary process is:

  1. There's some difference between individuals that provides useful information
  2. A detector evolves to exploit this information
  3. Selection pressure causes faking of the signal

This process is also repeated in memetic form, as well as genetic form. People do a behavior for some reason, people learn to use it to evaluate, and then other people learn to game the signal.

Ah, gotcha. Yes, that makes sense.

It is very unparsimonious to assume that specific human signaling behaviors are inborn, given that there are such an incredible number of such behaviors in use.

I agree that the vast majority of specific human behaviors, signaling or otherwise, are learned, not inborn, as an Occam prior would suggest. That does not, however, mean that all signaling behaviors are learned. Many animals have instinctual mating rituals, and it would be quite surprising if the evolutionary pressures that enable these to develop in other species were entirely absent in humans.

Much easier to assume that signal detection and self-reflection add up to standard reinforcement, as signal-detection and self-reflection are independently useful, while standalone signaling behaviors are not.

I would expect signaling to show up both in reinforced behaviors and in the rewards themselves (the feeling of having signaled a given trait could itself feel rewarding). Again, most are probably behaviors that have been rewarded or learned memetically, but given the large and diverse set of signaling behaviors, the more complex explanation probably applies to some (but not most) of them.

People do all sorts of insane things for reasons other than signaling, though. Like because their parents did it, or because the behavior was rewarded at some point.

Minor quibble: the conscious reasons for someone's actions may not be signaling, but that may be little more than a rationalization for an unconsciously motivated attempt to signal some quality. Mating is filled with such signalling. While most people probably have some vague idea about sending the right signals to the opposite (or same) sex, few people realize that they are subconsciously sending and responding to signals. All they notice are their feelings.

Minor quibble: the conscious reasons for someone's actions may not be signaling, but that may be little more than a rationalization for an unconsciously motivated attempt to signal some quality.

If you read the rest of the comment to which you are replying, I pointed out that it's effectively best to assume that nobody knows why they're doing anything, and that we're simply doing what's been rewarded.

That some of those things that are rewarded can be classed as "signaling", may actually have less to do (evolutionarily) with the person exhibiting the behavior, and more to do with the person(s) rewarding or demonstrating those behaviors.

IOW, we may not have an instinct to "signal", but only to imitate what we see others responding to, and do more of what gives appropriate responses. That would allow our motivation to be far less conscious, for one thing.

(Somewhat-unrelated point: the most annoying thing about trying to study human motivation is the implicit assumption we have that people should know why they do things. But when viewed from an ev. psych perspective, it makes more sense to ask why is there any reason for us to know anything about our own motivations at all? We don't expect other animals to have insight into their own motivation, so why would we expect that, at 5% difference from a chimpanzee, we should automatically know everything about our own motivations? It's absurd.)

I'm not sure that the class of all actions that are motivated by signaling is the same as (or a subset of) the class of all actions that are rewarded. At least, if by rewarded, you mean something other than the rewards of pleasure and pain that the brain gives.

This activity doesn't sound terribly promising to me, but it DOES sound like the sort of thing that is easy enough to try that people should try it rather than just criticizing it. One of the great insights of the Enlightenment is that math-guys are more averse to doing actual experiments to test their ideas than they should be if they are trying to understand the world and thus to win.

Points for pjeby's comment once again, by the way.

I don't think it's anywhere as easy or as effective as you seem to suggest. In the land of theories so informal that there is a nontrivial question of whether they are trivial, one can get a hint of which statements are correct by other means and then rationalize these hints using the Great Theory. If it's not possible to tell which statements are correct, one can argue that the questions are too tough, and any theory has limits to its application.

Try this with Natural Selection and you will find that it can explain just about any animal behavior, even fake animal behavior. What should be the take away lesson from this?

...even fake animal behavior.

I didn't understand Psychohistorian's post as suggesting that we should make up fictional data - for then of course it may be no surprise that the given theory would have to bend in order to accommodate it. Rather, we should take real data, which is not explained by the theory (but which is understood in light of some different theory), and see just how easily the advocate can stretch his explanation to accommodate it. Does he/she notice the stretch? Can he/she distinguish that data from the others?

What should be the take away lesson from this?

People get into man-with-a-hammer mode with evolutionary explanations. A lot. Because of the nature of evolutionary biology, sometimes they just reason like, "I can imagine what advantages this feature could have conferred in the past. Thus, ...". And yes, a lot of the time what you get is ad hoc crap.

But what if we don't know which data is actually explained by the theory or not? That will make it hard to come up with "real data, which is not explained by the theory".

Rather, we should take real data...

Not quite. The idea is to see if the theory can convincingly explain fake data. If it can, it doesn't mean the theory is wrong, it just means your capacity to infer things from it is limited. Natural selection is interesting and useful, but it is not a reliable predictor in many cases. You routinely see people say, "The market must do X because of Y." If they could say basically the exact same thing about ~X, then it's a fake explanation; their theory really doesn't tell them what the market will do. If a theory can convincingly explain false data, you've got to be very cautious in using it to make predictions or claims about the world.

Conversely, theories with extremely high predictive power will consistently pass the test. If you used facts centered in physics or chemistry, a competent test-taker should always spot the false data, because our theories in physics and chemistry mostly have extremely precise predictive power.


That one ought not to attempt utterly unsupported natural-selection-based inferences in the domain of animal behavior but rather limit them to the domain of physical characteristics.

Do you have any examples in mind? It seems to me that only a misunderstanding of natural selection could explain fake animal behavior.

It seems that it is almost as easy to come up with a Natural Selection story which would explain why a bird in a certain environment would move slowly, stealthily, and have feathers that are a similar color to the ground as it is to explain why a bird moves quickly, calls loudly, and is brightly colored. It seems that the ability to explain animal behavior and physical characteristics using Natural Selection is in large part up to the creativity of the person doing the explaining.


Natural selection doesn't explain why or predict that a bird might have detrimental traits such as bright coloring that can betray it to predators. Darwin invented a whole other selective mechanism to explain the appearance of such traits -- sexual selection, later elaborated into the Handicap principle. Sexually selected traits are necessarily historically contingent, but you can't just explain away any hereditary handicap as a product of sexual selection: the theory makes the nontrivial prediction that mate selection will depend on such traits.

Sexual selection is just a type of natural selection, not a different mechanism. Just look at genes and be done with it.

I wish I could upvote this comment twice.


Why? I didn't really feel like trying to win over Michael Vassar, but since you feel so strongly about it, I should point out that biologists do find it useful to distinguish between "ecological selection" and "sexual selection".

For an analogy, consider the fact that mathematicians also find it useful to distinguish between "squares" and "rectangles" -- but they nevertheless correctly insist that all squares are in fact rectangles.

The problem here isn't that "sexual selection" isn't a useful concept on its own; the problem is the failure to appreciate how abstract the concept of "natural selection" is.

I have a similar feeling, ultimately, about the opposition between "natural selection" and "artificial selection", even though that contrast is perhaps more pedagogically useful.


The problem here isn't that "sexual selection" isn't a useful concept on its own; the problem is the failure to appreciate how abstract the concept of "natural selection" is.

I think there's a substantive dispute here, not merely semantics. The original complaint was that Natural Selection was an unconstrained theory; the point of my comment was that in specific cases, the actual operating selective mechanisms obey specific constraints. The more abstract a concept is (in OO terms, the higher in the class hierarchy), the fewer constraints it obeys. Saying that natural selection is an abstract concept that encompasses a variety of specific mechanisms is all well and good, but you can't instantiate an abstract class.

Sexually selected traits are necessarily historically contingent, but you can't just explain away any hereditary handicap as a product of sexual selection: the theory makes the nontrivial prediction that mate selection will depend on such traits.

Hmm. Generalization: a theory that concentrates probability mass in a high-dimensional space might not do so in a lower-dimensional projection. This seems important, but maybe only because I find false claims of nonfalsifiability/lack of predictive power very annoying.


I'm having trouble seeing the relation between your comment and mine, but I'm intrigued and would like to see it spelled out a bit.

Birds, which fly or which descend from birds that flew, are often brightly colored. Animals that don't fly are seldom brightly colored. A major exception is animals or insects that are poisonous. Natural selection handles this pretty well.

I expect that, in practice, the theory holders will demur rather than offer answers. They will instead say, "Those aren't the kinds of questions that my theory is designed to answer. My theory answers questions like . . .". They will then suggest questions that are either vague, impossible to answer empirically, or whose answers they know.

That's my theory, and I'm stickin' to it :).

Well, at least you know you can start ignoring them if they say that ...

No, you shouldn't make broad inferences about human behaviour without any data because they are consistent with evolution, unless your application of the theory of evolution is so precise and well-informed that you can consistently pass the Two-Truths-and-a-Lie Test.

This sentence could have beneficially ended after "behavior," "data," or "evolution." The last clause seems to be begging the question - why am I assuming the Two-Truths-and-a-Lie Test is so valuable? Shouldn't the test itself be put to some kind of test to prove its worth?

I think that it should be tested on our currently known theories, but I do think it will probably perform quite well. This is on the basis that it's analogous to cross-validation in the way that Occam's Razor is analogous to the information criteria (Akaike, Bayes, Minimum Description Length, etc.) used in statistics.

I think that, in some sense, it's the porting over of a statistical idea to the evaluation of general hypotheses.

OK, so my favorite man-with-a-hammer du jour is the "everyone does everything for selfish reasons" view of the world. If you give money to charity, you do it for the fuzzy feeling, not because you are altruistic.

What would you propose as the three factual claims to test this? I'm having a hard time figuring any that would be a useful discriminant.

Thinking about this a bit, it seems most useful to assert negative factual claims, i.e.: "X never happens".

OK, so my favorite man-with-a-hammer du jour is the "everyone does everything for selfish reasons" view of the world. If you give money to charity, you do it for the fuzzy feeling, not because you are altruistic.

That's not a disagreement about the nature of the world, it's a disagreement about the meaning of the word "altruistic".

Merely stylistic, I think, but I'd avoid "It has been said that...". Aside from being inappropriate passive voice (Who said it?), it has that weird feel of invoking ancient wisdom. That's only cute when Eliezer does it.

A modification for this game: come up with a list of 3 (or more) propositions, each of which is randomly true or false. This way the predictive theory can fail or succeed in a more obvious way.

I think this is cross-validation for tests. There have been several posts on Occam's Razor as a way to find correct theories, but this is the first I have seen on cross-validation.

In machine learning and statistics, a researcher often is trying to find a good predictor for some data, and they often have some "training data" which they can use to select the predictor from a class of potential predictors. Often one has more than one predictor that performs well on the training data, so the question is how else can one choose an appropriate predictor.

One way to handle the problem is to use only a class of "simple predictors" (I'm fudging details!) and then use the best one: that's Occam's razor. Theorists like this approach and usually attach the word "information" to it. The other, "practitioner" approach is to use a bigger class of predictors, where you tune some of the parameters on one part of the data and tune other parameters (often hyper-parameters, if you know the jargon) on a separate part of the data. That's the cross-validation approach.

There are some results on the asymptotic equivalence of the two approaches. But what's cool about this post is that I think it offers a way to apply cross-validation to an area where I have never heard it discussed (I think, in part, because it's the method of the practitioner and not so much the theorist -- there are exceptions, of course!)
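To make the contrast concrete, here is a toy sketch in Python (using NumPy) of the two approaches applied to choosing a polynomial degree. The data, the penalty constant, and the fold count are all arbitrary choices for illustration, not a claim about how either method is used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=x.shape)  # toy data

def test_mse(x_tr, y_tr, x_te, y_te, degree):
    """Fit a polynomial of the given degree on (x_tr, y_tr); return MSE on (x_te, y_te)."""
    coeffs = np.polyfit(x_tr, y_tr, degree)
    return np.mean((np.polyval(coeffs, x_te) - y_te) ** 2)

degrees = list(range(1, 7))

# "Occam" flavour: score every candidate on all the data, but add a penalty
# that grows with model complexity (the 0.02 constant is arbitrary here).
penalized = [test_mse(x, y, x, y, d) + 0.02 * d for d in degrees]
best_by_penalty = degrees[int(np.argmin(penalized))]

# Cross-validation flavour: score every candidate only on data it was not fit to.
folds = np.array_split(rng.permutation(len(x)), 5)

def cv_mse(degree):
    errs = []
    for test_idx in folds:
        train_idx = np.setdiff1d(np.arange(len(x)), test_idx)
        errs.append(test_mse(x[train_idx], y[train_idx],
                             x[test_idx], y[test_idx], degree))
    return float(np.mean(errs))

best_by_cv = min(degrees, key=cv_mse)

print("degree chosen by complexity penalty:", best_by_penalty)
print("degree chosen by cross-validation:  ", best_by_cv)
```

The penalized score plays the Occam's-razor/information-criterion role; the held-out folds play the cross-validation role, which is the analogue of checking a Big Idea against statements it wasn't fitted to.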

This is why I am not a libertarian. I believe in liberty in general, but it is just one more tool for getting what we want and should be treated accordingly.

But you need liberty to have wants.

No. Thou art physics, remember? Postulating magical preferences that materialize from the ether is silly. Your wants are shaped by forces beyond your control. Balking at a few more restrictions in cases where there is clearly a net good is silly.


This reminds me of David Deutsch's talk "A new way to explain explanation" (oddly enough, also reposted here).