To "come the uncle over someone" means, according to this highly-trustworthy-looking site, to "overdo your privilege of reproving or castigating" someone.
As an example of the illusion of transparency: on first reading, I interpreted your phrase 'highly-trustworthy-looking site' as sarcastic. Since it's a Webster's site, I'm going to guess that you were not intending to be sarcastic?
Wasn't Joseph already using the Rubik's cube as an example of a trivial toy?
Yes, I don't think Joseph's intention was to get Archimedes to understand a Rubik's Cube. I believe his intention was to get Archimedes to play with 'trivial toys', and so he thought talking about Rubik's Cubes might do the trick.
In the situation you described, it would be necessary to test values that did and didn't match the hypothesis, which ends up working an awful lot like adjusting away from an anchor. Is there a way of solving the 2 4 6 problem without coming up with a hypothesis too early?
The problem is not that they come up with a hypothesis too early; it's that they stop too early, without testing examples that are not supposed to work. In most cases people are given as many opportunities to test as they'd like, yet they are confident in their answer after testing only one or two cases (all of which came up positive).
The trick is to come up with one or more hypotheses as soon as you can (perhaps without announcing them), but to test cases which both do and don't conform to them, and to be prepared to change your hypothesis if you are proven wrong.
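The strategy described above can be sketched as a small simulation. Two assumptions are mine, not from the comments: I use the classic hidden rule from Wason's 2-4-6 task ("any strictly ascending triple"), and the hypothesis I test against it is an invented, illustrative guess.

```python
# A minimal sketch: hold a hypothesis, but probe the hidden rule with
# triples that both conform to the hypothesis and violate it.

def hidden_rule(t):
    # The classic 2-4-6 hidden rule: strictly ascending numbers.
    a, b, c = t
    return a < b < c

def hypothesis(t):
    # An illustrative guess: "second is double, third is triple the first".
    a, b, c = t
    return b == 2 * a and c == 3 * a

probes = [
    (3, 6, 9),   # conforms to the hypothesis
    (1, 2, 5),   # violates the hypothesis
]

for t in probes:
    expected = hypothesis(t)
    actual = hidden_rule(t)   # the experimenter's yes/no answer
    if expected != actual:
        print(f"{t}: hypothesis disconfirmed")
    else:
        print(f"{t}: consistent so far")
```

Here the positive probe alone would leave the guesser confident; it is the probe that violates the hypothesis, yet still gets a "yes" from the hidden rule, that exposes the guess as too narrow.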
I think something else is going on with the 2 4 6 experiment, as described. Many of the students are making an assumption about the set of potential rules. Specifically, the assumption is that most pairs of rules in this set have the following mutual relationship: most of the instances allowed by one rule are disallowed by the other. If that is the case, then the quickest way to test any hypothetical rule is to produce a variety of instances which conform to that rule and see whether they conform to the hidden rule.
I'll give you an example. Suppose that we are considering a family of rules, "the third number is an integer polynomial of the first two numbers". The quickest way to disconfirm a hypothetical rule is to produce instances in accordance with it and test them. If the rule is wrong, then the chances are good that an instance will quickly be discovered that does not match the hidden rule. It is much less efficient to proceed by producing instances not in accordance with it.
I'll give a specific example. Suppose the hidden rule is c = a + b, and the hypothesized rule being tested is c = a - b. Now pick just one random instance in accordance with the hypothesized rule. I will suppose a = 4, b = 6, so c = -2. So the instance is 4 6 -2. That instance does not match the hidden rule, so the hypothesized rule is immediately disconfirmed. Now try the following: instead of picking a random instance in accordance with the hypothesized rule, pick one not in accordance with it. I'll pick 4 6 8. This also fails to match the hidden rule, so it fails to tell us whether our hypothesized rule is correct. We see that it was quicker to test an instance that agrees with the hypothetical rule.
Thus we can see that in a certain class of situations, the most efficient way to test a hypothesis is to come up with instances that conform with the hypothesis.
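The specific example above (hidden rule c = a + b, hypothesized rule c = a - b) can be sketched in a few lines; the function names are my own, purely for illustration.

```python
# Hidden rule and hypothesized rule from the example above.
def hidden(a, b, c):
    return c == a + b

def hypothesized(a, b, c):
    return c == a - b

# A positive test: an instance constructed to conform to the hypothesis.
a, b = 4, 6
positive = (a, b, a - b)   # the instance 4 6 -2

# A negative test: an instance not in accordance with the hypothesis.
negative = (4, 6, 8)

# The positive instance fails the hidden rule, so the hypothesized
# rule is disconfirmed with a single query.
print(hidden(*positive))   # False: hypothesis disconfirmed

# The negative instance also fails the hidden rule, which tells us
# nothing about whether the hypothesized rule is correct.
print(hidden(*negative))   # False: uninformative
```

This mirrors the argument: when most rules in the assumed family mostly disagree with each other, a conforming instance is the fastest route to disconfirmation.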
Now you can fault people for having made this assumption. But if you do, it is still a different error from the one described. If the assumption about the kind of problem faced had been correct, then the approach (testing instances that agree with the hypothesis) would have been a good one. The error, if any, lies not in the approach per se but in the assumption.
Finally, I do not think one can rightly fault people for making that assumption, because it is inevitable that very large and completely untested assumptions must be made in order to come to a conclusion at all: infinitely many rules are consistent with the evidence, no matter how many instances you test. The only way to whittle this infinity of consistent rules down to a single concluded rule is to make very large assumptions. The assumption I have described may simply be the one they made (and they had to make some assumption).
Furthermore, no matter what assumptions people make (and they must make some, because of the nature of the problem), a clever scientist can learn what assumptions people tend to make and then violate them. So no matter what people do, someone can come along, construct an experiment in which those assumptions are violated, and say "gotcha" when the majority of his test subjects come to the wrong conclusion (because the assumptions they were making were violated by the experiment).
The problem is not that they are trying examples which confirm their hypothesis; it's that those are the only examples they try.
The article focuses on testing examples which don't work because people don't do this enough. Searching for positive examples is (as you argue) a necessary part of testing a hypothesis, and people seem to have no problem applying this. What people fail to do is to search for the negative as well.
Both positive and negative examples are, I'd say, equally important, but people's focus is completely imbalanced.
The sky is black about half the time
If you count navy as blue rather than as black, that happens more rarely than “half the time”. (I'd say “10% of the time” as I have that number cached in my mind as the duty cycle of fluorescence detectors for ultra-high-energy cosmic rays.) You know, the moon.
and it's pretty common for it to be white, too.
And when that happens, in places where electric lighting is widely used, it tends to become orange (not quite -- does that colour have a name?) during the night!
I believe CronoDAS was referring to overcast days when they said the sky is sometimes white.
I agree that "offense is all about status" is probably too simple and that a more complex and refined theory can have greater explanatory/predictive value. On the other hand, the simplicity does have a benefit in that it's easier to apply when you're addressing an audience. It's probably easier to think "will what I write/say cause someone to lose social status?" (with a broad view of what constitutes status) than to try to keep more detailed models of the audience's minds (ETA: except in situations where your social brain works well and does the latter for you automatically).
If you disagree, can you try to distill your theory into some practical advice for writers?
The context here is a human dealing with a human. Thus it can be considered a useful heuristic to think "will what I write/say cause someone to lose social status?" and depending on the reply that your brain returns, judge whether it could be considered offensive (since this might prove to be a more accurate means of judging offense than trying to do so directly).
Naturally, if you were actually trying to develop an artificial intelligence that needed to refrain from offending people, it probably wouldn't be as easy as just 'calculating the objective status change' and basing the response on that.
"I wish I could believe that no one could possibly believe in belief in belief in belief..."
You wish you could believe Eliezer? Is this a deliberate stroke of irony, or a subconscious hint that you do have an empathic understanding of the thought processes behind tailoring your own beliefs?
Ah, good to know.