So, being able to observe that one behaviour causes the desired outcome more often than another behaviour counts as reasoning using Bayes' Theorem? At this level of vagueness we could proclaim children natural frequentists, or Popperian falsificationists, or whatever else with equal ease.
The children adjusted their hypotheses appropriately when they saw the statistical data
Using such words to describe small children trying to light up a toy makes me suspect that this post is a parody.
There also seems to be a reference to the Singularity Institute:
You should get out more :)
By this I mean to become more acquainted with non-SI efforts in machine learning and AI (which is almost the same as "efforts in machine learning and AI").
If children really are natural Bayesians, then why and how do you think we change?
"Change"? Are you saying that many adults would use an obviously less-reliable technique? Somehow I doubt it. Did they run the same experiment with the adult subjects?
Did they run the same experiment with the adult subjects?
Yes, they did. Gopnik writes:
As we get older our “priors,” rationally enough, get stronger and stronger. We rely more on what we already know, or think we know, and less on new data. In some studies we’re doing in my lab now, my colleagues and I found that the very fact that children know less makes them able to learn more. We gave 4-year-olds and adults evidence about a toy that worked in an unusual way. The correct hypothesis about the toy had a low “prior” but was strongly supported by the data. The 4-year-olds were actually more likely to figure out the toy than the adults were.
Interesting, and still perfectly Bayesian. Adults have stronger priors, so their updates are not as large.
Yup. The nature of the change in JQuinton's question was a change in the available evidence. (A quibble: this is not perfectly Bayesian, since adults ought not to treat toys in psychology experiments as exchangeable with toys encountered in the wild. I'd posit that Thinking, Fast and Slow is relevant here.)
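To make the "stronger priors, smaller updates" point concrete, here is a minimal sketch of a two-hypothesis Bayesian update. The numbers are made up for illustration only (they are not from Gopnik's study): a child holds a weak prior against the unusual hypothesis, an adult a strong one, and both see the same data.

```python
# H = "the toy works in the unusual way".
# Both observers see the same evidence; only the priors differ.

def posterior(prior_h, p_data_given_h, p_data_given_not_h):
    """Bayes' theorem for a single binary hypothesis."""
    evidence = prior_h * p_data_given_h + (1 - prior_h) * p_data_given_not_h
    return prior_h * p_data_given_h / evidence

# The data strongly favour H (a 10:1 likelihood ratio, chosen arbitrarily).
child = posterior(prior_h=0.30, p_data_given_h=0.9, p_data_given_not_h=0.09)
adult = posterior(prior_h=0.02, p_data_given_h=0.9, p_data_given_not_h=0.09)

print(f"child posterior: {child:.2f}")  # weak prior -> large update (0.81)
print(f"adult posterior: {adult:.2f}")  # strong prior -> small update (0.17)
```

With these assumed numbers the child ends up past 50% confidence in the correct hypothesis while the adult does not, despite identical evidence and perfectly Bayesian updating on both sides.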
Reality: 2 + 2 = 4.
Newspaper headline: "2 + 2 = 4 000 000 000. Scientists worry: Is our society prepared for a dramatic impact of large numbers, or will our civilization collapse? (read more on page 13...)"
It seems my model of LessWrong is somehow broken, and so I want to know why--
The OP is at -3. Why is that? (Note: I am not the OP.) The article is relevant, not a re-post, and contains both a link AND a synopsis. The only reason I can think of is that people thought it should go in the Open Thread (and didn't leave a comment to say so). I think the article is not controversial enough, too old, and too downvoted for it to merely be the initial downvote wave that posts sometimes get.
Anyways, my expectations would have been that the post is in the low positive numbers. A -3 punches my expectations in the face and insults my expectation's mother. So now I'm curious. Ideas?
I'm pretty sure that the -3 is just the initial downvote wave; it'll climb back up to ~2 during the next 24hrs. Of course the fact that this discussion is in the comments might affect things.
I am part of the "initial downvote wave". I downvoted the post because, although the "Bayesian" hypothesis might be interesting to LessWrong, the academic articles linked to from the Slate article didn't really support it, the Slate article was just written by some researcher trying to push their own research angle, and the LW post didn't do any further analysis.
My advice when linking to something like this is to link directly to the academic paper and to draw your summary directly from the abstract of the paper, so that you don't misrepresent what the paper claims. Popular science pieces normally write whatever they feel like and then link to a couple of vaguely related papers, so they can't be trusted at all.
My advice when linking to something like this is to link directly to the academic paper and to draw your summary directly from the abstract of the paper, so that you don't misrepresent what the paper claims.
Hear, hear.
I observe that there are also three reasonably highly upvoted comments critical of the OP.
My working theory is that the post was downvoted for reasons similar to those listed in those comments.
Perhaps even by the commenters themselves.
This recent article at Slate thinks so:
Why Your 4-Year-Old Is As Smart as Nate Silver
There also seems to be a reference to the Singularity Institute:
(Of course, I don't know how many other AI researchers are using Bayes' Theorem, so the author might not have the SI in mind.)
If children really are natural Bayesians, then why and how do you think we change?