
Comment author: EGarrett 18 August 2014 08:17:28PM *  2 points [-]

Sure, from page 22 of the 2nd paper:

"... Now, the reason for this, and what makes it especially interesting, is in what it reveals about the origins of our humor instinct, which is that it clearly evolved not only before language, but also before we had higher brain functions that allowed hypothetical scenarios or sophisticated deception."

Our humor instinct comes from a part of our brain that was evolutionarily programmed at a time when our intellect expressed itself in terms of do "A," then "B," and get "C": put the animal in the fire, wait until it smells good, then eat it; grab the stick, hit the branch, and the fruit will fall.

This study was announced a few days ago in Harvard Magazine (http://harvardmagazine.com/2014/09/was-the-human-brain-unleashed); it discusses the basic parts of the brain compared to the more "advanced" association areas in humans.

"...[while] the neurons in the sensory and motor areas seem to be playing a game of telephone, in which information follows serial paths, the cells in the association areas use a communications strategy more like the Internet—with lots of simultaneous connections and pathways.

Buckner and Krienen looked for a simple way to explain this phenomenon. Association areas not only evolved later in humans, they also form later in an individual’s development."


The functioning of our "misplacement" instinct, as I found in my study of humor, seemed to indicate precisely that: our early form of intelligence, and the laughter that came from it, functioned only in terms of A-B-C sequences and recognizing errors in those sequences, while the advanced functions that allow more abstract and hypothetical thinking must have arisen later. The study seems to have arrived at the same conclusion by different methods.

Comment author: Cyan 19 August 2014 08:44:41PM 1 point [-]

Awesome, thanks!

Comment author: Stuart_Armstrong 19 August 2014 12:09:02PM 1 point [-]

The challenge is not to combine different algorithms in the same area, but in different areas. A social bot and a stock market predictor - how should they interface best? And how would you automate the construction of interfaces?

Comment author: Cyan 19 August 2014 08:35:33PM *  0 points [-]

Meh. That's only a problem in practice, not in principle. In principle, all prediction problems can be reduced to binary sequence prediction. (By which I mean, in principle there's only one "area".)
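As a toy illustration of that in-principle reduction (my sketch, not Cyan's construction; it assumes a finite alphabet and a fixed-width encoding), any sequence over a finite alphabet can be re-expressed as a binary sequence, so a binary sequence predictor suffices:

```python
import math

def to_bits(seq, alphabet):
    """Encode a sequence over a finite alphabet as a 0/1 sequence."""
    width = max(1, math.ceil(math.log2(len(alphabet))))
    index = {sym: i for i, sym in enumerate(alphabet)}
    bits = []
    for sym in seq:
        bits.extend(int(b) for b in format(index[sym], "0{}b".format(width)))
    return bits

def from_bits(bits, alphabet):
    """Decode the fixed-width bit encoding back to the original symbols."""
    width = max(1, math.ceil(math.log2(len(alphabet))))
    return [alphabet[int("".join(map(str, bits[i:i + width])), 2)]
            for i in range(0, len(bits), width)]
```

Predicting the next symbol then amounts to predicting the next `width` bits, which is the sense in which there is "only one area".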

Comment author: Cyan 18 August 2014 07:16:51PM *  1 point [-]

I invite you to spell out the prediction that you drew about the evolution of human intelligence from your theory of humor and how the recently published neurology research verified it.

Comment author: Cyan 18 August 2014 06:43:58PM 1 point [-]

What if it was very hard to produce an intelligence that was of high performance across many domains?... There are a few strong counters to this - for instance, you could construct good generalists by networking together specialists...

In fact, we already know the minimax optimal algorithm for combining "expert" predictions (here "expert" denotes an online sequence prediction algorithm of any variety); it's the weighted majority algorithm.
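A minimal sketch of the deterministic weighted majority algorithm (assuming binary predictions and a fixed multiplicative penalty; the parameter names are mine):

```python
def weighted_majority(experts, outcomes, eta=0.5):
    """Deterministic weighted majority over binary predictions.

    experts: list of per-round 0/1 prediction sequences, one per expert.
    outcomes: the true 0/1 sequence.
    eta: penalty factor; a mistaken expert's weight shrinks by (1 - eta).
    Returns the number of mistakes the combined (master) predictor makes.
    """
    weights = [1.0] * len(experts)
    mistakes = 0
    for t, y in enumerate(outcomes):
        preds = [e[t] for e in experts]
        vote_one = sum(w for w, p in zip(weights, preds) if p == 1)
        vote_zero = sum(w for w, p in zip(weights, preds) if p == 0)
        guess = 1 if vote_one >= vote_zero else 0
        if guess != y:
            mistakes += 1
        # Multiplicatively penalize every expert that was wrong this round.
        weights = [w * (1 - eta) if p != y else w
                   for w, p in zip(weights, preds)]
    return mistakes
```

The standard guarantee is that the master's mistake count is within a constant factor of the best single expert's, plus a logarithmic term in the number of experts, regardless of which expert turns out to be best.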

Comment author: Thrasymachus 02 August 2014 02:15:08AM 2 points [-]

Interesting: Is there a story as to why that is the case? One guess that springs to mind is that market performance in sectors is always correlated, but you don't see it in well functioning markets due to range restriction/tails-come-apart reasons, but you do see it when things go badly wrong as it reveals more of the range.

Comment author: Cyan 02 August 2014 11:39:15AM *  2 points [-]

market performance in sectors is always correlated, but you don't see it

The problem is the word "always". If I interpret it to mean "over all possible time scales" then the claim is basically false; if I interpret it to mean "over the longest time scales" then the claim is true, but trivially so given that sector performances are sometimes correlated.

We won't get to an explanation by just thinking about probability measures on stochastic processes. What's needed here is a causal graph. The basic causal graph has the financial sector internally highly connected, with the vast majority of the connections between lenders/investors and debtors/investees passing through it. That, I think, is sufficient to explain the stylized fact in the grandparent (although of course financial researchers can and do find more to say).

Comment author: Cyan 27 July 2014 08:04:55PM 8 points [-]

Just as markets are anti-inductive, it turns out that markets reverse the "tails come apart" phenomenon found elsewhere. When times are "ordinary", performance in different sectors is largely uncorrelated, but when things go to shit, they go to shit all together, a phenomenon termed "tail dependence".
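One toy way to see tail dependence (an illustrative common-shock simulation of my own, not a calibrated market model): two "sector" return series are independent in ordinary times but share a rare large negative shock.

```python
import random
import statistics

def simulate(n=100_000, crash_p=0.02, shock=-5.0, seed=0):
    """Two 'sector' return series: independent noise in ordinary times,
    plus a rare shared crash that hits both sectors at once."""
    rng = random.Random(seed)
    a, b = [], []
    for _ in range(n):
        s = shock if rng.random() < crash_p else 0.0
        a.append(rng.gauss(0, 1) + s)
        b.append(rng.gauss(0, 1) + s)
    return a, b

def corr(x, y):
    """Sample (Pearson) correlation of two equal-length series."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = statistics.fmean((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return cov / (statistics.pstdev(x) * statistics.pstdev(y))

a, b = simulate()
overall = corr(a, b)  # noticeably positive: the shared crashes dominate
# Restrict to "ordinary" days by dropping the joint lower tail:
cut = sorted(a)[int(0.05 * len(a))]
pairs = [(x, y) for x, y in zip(a, b) if x > cut and y > cut]
ordinary = corr([x for x, _ in pairs], [y for _, y in pairs])
# 'overall' sits well above 'ordinary': the correlation lives in the tail.
```

The unconditional correlation is driven almost entirely by the crash days; once those are excluded, the series look nearly uncorrelated, which is the signature of tail dependence.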

In response to comment by Cyan on Too good to be true
Comment author: V_V 20 July 2014 03:36:19PM 1 point [-]

Thanks.

So if I set the size at 5%, collect the data, run the test, and repeat the whole experiment with fresh data multiple times, should I expect that, if the null hypothesis is true, the test rejects exactly 5% of the time, or at most 5% of the time?

In response to comment by V_V on Too good to be true
Comment author: Cyan 20 July 2014 04:10:27PM 2 points [-]

If the null hypothesis is simple (that is, if it picks out a single point in the hypothesis space), and the model assumptions are true blah blah blah, then the test (falsely) rejects the null with exactly 5% probability. If the null is composite (comprises a non-singleton subset of parameter space), and there is no nice reduction to a simple null via mathematical tricks like sufficiency or the availability of a pivot, then the test falsely rejects the null with at most 5% probability.
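A quick Monte Carlo check of the simple-null case (a sketch assuming a two-sided z-test of H0: mean = 0 with known unit variance; function names are mine):

```python
import math
import random

Z_CRIT = 1.959963984540054  # N(0,1) two-sided critical value for alpha = 0.05

def z_test_rejects(sample):
    """Two-sided z-test of H0: mean = 0, assuming known unit variance."""
    z = sum(sample) / math.sqrt(len(sample))
    return abs(z) > Z_CRIT

def rejection_rate(trials=20_000, n=30, seed=1):
    """Monte Carlo estimate of the test's size when the null is true."""
    rng = random.Random(seed)
    rejects = sum(z_test_rejects([rng.gauss(0, 1) for _ in range(n)])
                  for _ in range(trials))
    return rejects / trials
```

Because this null is simple (it pins down the full sampling distribution), the estimated rejection rate should land right at 5%, up to Monte Carlo noise.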

But that's all very technical; somewhat less technically, almost always, a bootstrap procedure is available that obviates these questions and gets you to "exactly 5%"... asymptotically. Here "asymptotically" means "if the sample size is big enough". This just throws the question onto "how big is big enough," and that's context-dependent. And all of this is about one million times less important than the question of how well each study addresses systematic biases, which is an issue of real, actual study design and implementation rather than mathematical statistical theory.
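A sketch of one such bootstrap procedure (testing H0: population mean = 0 by resampling from the recentered data; this particular recentering scheme is just one common choice, not the only one):

```python
import random

def bootstrap_pvalue(data, n_boot=2000, seed=0):
    """Bootstrap test of H0: population mean = 0.

    Recenter the data so the resampling distribution satisfies H0 exactly,
    then count how often a resampled mean is at least as extreme as the
    observed mean.
    """
    rng = random.Random(seed)
    n = len(data)
    obs = sum(data) / n
    centered = [x - obs for x in data]  # impose H0 on the bootstrap world
    extreme = 0
    for _ in range(n_boot):
        m = sum(rng.choice(centered) for _ in range(n)) / n
        if abs(m) >= abs(obs):
            extreme += 1
    return extreme / n_boot
```

Rejecting when this p-value falls below 5% gives a test whose size approaches 5% as the sample grows, without needing the null to be simple, which is exactly the "how big is big enough" caveat above.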

Comment author: Algernoq 16 July 2014 07:06:51AM 2 points [-]

Compared to a martial arts club, LW goals are typically more all-consuming. Martial arts is occasionally also about living well, while LW encourages optimizing all aspects of life.

Comment author: Cyan 16 July 2014 04:38:16PM *  1 point [-]

Sure, that's a distinction, but to the extent that one's goals include making/maintaining social connections with people without regard to their involvement in LW so as to be happy and healthy, it's a distinction that cuts against the idea that "involvement in LW pulls people away from non-LWers".

This falls under "the utility function is not up for grabs." It finds concrete expression in the goal factoring technique developed by CFAR, which is designed to avoid failure modes like, e.g., cutting out the non-LWers one cares about due to some misguided notion that that's what "rationality" requires.

Comment author: buybuydandavis 14 July 2014 07:19:25AM *  22 points [-]

LW has a cult-like social structure. ...

Where the evidence for this is:

- Appealing to people based on shared interests and values.
- Sharing specialized knowledge and associated jargon.
- Exhibiting a preference for like-minded people.
- More likely to appeal to people actively looking to expand their social circle.

Seems a rather gigantic net to cast for "cults".

Comment author: Cyan 14 July 2014 02:32:07PM 9 points [-]

Well, there's this:

However, involvement in LW pulls people away from non-LWers.

But that is similarly gigantic -- on this front, in my experience LW isn't any worse than, say, joining a martial arts club. The hallmark of cultishness is that membership is contingent on actively cutting off contact with non-cult members.

Comment author: V_V 14 July 2014 10:46:51AM 1 point [-]

According to Wikipedia:

In statistical significance testing, the p-value is the probability of obtaining a test statistic result at least as extreme as the one that was actually observed, assuming that the null hypothesis is true.[1][2] A researcher will often "reject the null hypothesis" when the p-value turns out to be less than a predetermined significance level, often 0.05[3][4] or 0.01.

In response to comment by V_V on Too good to be true
Comment author: Cyan 14 July 2014 02:23:29PM 1 point [-]

You want size, not p-value. The difference is that size is a "pre-data" (or "design") quantity, while the p-value is post-data, i.e., data-dependent.
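To make the pre-data/post-data distinction concrete (a toy z-test example; the observed statistic is hypothetical):

```python
import math

ALPHA = 0.05                 # size: chosen at design time, before any data
Z_CRIT = 1.959963984540054   # the critical value that ALPHA fixes in advance

def two_sided_p(z):
    """Post-data quantity: p-value of an observed z statistic under N(0,1)."""
    return math.erfc(abs(z) / math.sqrt(2))

z_obs = 2.5                       # hypothetical observed statistic
reject = abs(z_obs) > Z_CRIT      # the decision uses only the pre-data size
p_value = two_sided_p(z_obs)      # computable only after the data are in
```

The size (and hence the critical value) is locked in when the study is designed; the p-value is a function of the data that happened to be observed.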
