Comments
peuddO

That's just not very correct. There are no external errors in measuring probability, seeing as the unit and the measure come from internal processes. Errors in perceptions of reality and errors in evaluating the strength of an argument will invariably come from oneself, or alternatively from ambiguity in the argument itself (which would make it a worse argument anyway).

Intelligent people do make bad ideas seem more believable and stupid people do make good ideas seem less believable, but you can still expect the intelligent people to be right more often. Otherwise, what you're describing as intelligence... ain't. That doesn't mean you should believe something just because a smart person said it - just that you shouldn't believe it less.

It's going back to the entire reverse stupidity thing. Trying to make yourself unbiased by compensating in the opposite direction doesn't remove the bias - you're still adjusting from the baseline it's established.

On a similar note, I may just have given you an uncharitable reading and assumed you meant something you didn't. Such a misunderstanding wouldn't change the truth of what I'm saying about the position I read into your words, and it wouldn't change the truth of what you were actually trying to say. Even if there's a bias on my part, it skews perception rather than reality.

peuddO

Signalling doesn't have to be that straightforward. A clever individual (of which we have a few) may choose to be significantly more circumspect, and imply that a piece of knowledge is obvious by omitting it from a statement that presupposes it, or by alluding to it off-hand. We do this all the time, but I'd say it probably has more to do with mind projection than anything else. It often simply won't occur to us to modulate a statement to fit its receivers.

However, I don't know if this is a ploy we can entirely defeat just by making obviousness a bad word. If anything, that just requires people attempting such a ploy to be more circumspect...

peuddO

I think a better approach than doing away with the notion that obviousness is bad (because, to be honest, if something really is obvious to you, getting a detailed explanation of it can be very annoying), might simply be to explain concepts like inferential distances and mind projection to posters who don't seem to understand them. If people understand those problems of communication and others like them implicitly, they can more easily allow themselves to say something that might be obvious. At least it works that way for me. I won't explain seemingly obvious preconditions of a discussion unless called upon to do so, but I do my best not to assume that everything that might be obvious, is. There are usually plenty of clues. Even if it sometimes requires someone eventually saying "Uh, what does that mean?"

Maybe I'm being terribly optimistic. In my example of one, however, knowing that I have knowledge others might not share is usually enough to make me check if they understand me instead of making the supposition that they do.

peuddO

Is this really a contextually relevant oversight? Most terms do have multiple uses, but they depend a lot on context for their applicability. I might be missing something, but I get the impression that the post's primary purpose is to highlight the problems with using the concept of obviousness here (a purpose that could plausibly extend to other circumstances where you're dealing with an audience whose inferential distance you can't immediately gauge).

Using the concept of obviousness to signal that you possess or anticipate a certain level of knowledge has its, uh, obvious strengths, but I happily read the post as an explanation of how that usage might include some rather undesirable side-effects.

I'm not really concerned with where the post belongs in a broader sense, so I'm not challenging that statement, just its prior condition.

peuddO

Time to abandon cryosleep. I hope this post isn't too big.

This comparison seems to rely on too many dubious assumptions: First, that the IQ scores reported in the survey were precise and reported against a uniform standard deviation. Second, that these scores correlate strongly with the forms of competence relevant to LessWrong. Third, that this correlation will in turn correlate strongly with a user's total Karma. Fourth, that the Dunning-Kruger effect and its implications work in a way that I either don't understand or don't at all agree with.

On the question of IQ: I have yet to see a LessWrong survey that required any specificity in the IQ answers. Standard deviation and test type aren't included with the answers, so the scores are hard to standardize, and their internal relationships are obscured by our not knowing which tests they came from. Yvain's request to only include "respectable tests" is sensible, but still leaves a lot of room for interpretation, and could reasonably include differing standard deviations. Even assuming a strong prior for the more common standard deviations of 15 and 16, a lot of these IQ scores are outside the bounds of what might be considered accurate testing. Tests with a wide battery of subtests are especially likely to have some scores hit the ceiling and distort the average - g may be strong, but it's difficult to combat test design. Don't expect anything above Mensa entry level to be measured especially accurately. 150 is probably indicative of higher ability than 135, but it's hard to say how strongly a score has been distorted by an arbitrary ceiling on the level tested.
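To make the standardization issue concrete, here's a minimal sketch (in Python, purely as illustration - the 148 is an arbitrary example score, not a number from the survey) of how the same reported score maps to different percentiles and different SD-15 equivalents depending on which convention the test used:

```python
# Minimal sketch: the same reported score implies different percentiles
# and different SD-15 equivalents depending on the test's standard deviation.
from statistics import NormalDist

def iq_to_percentile(score, sd, mean=100):
    """Population percentile implied by an IQ score under a given SD convention."""
    return NormalDist(mu=mean, sigma=sd).cdf(score) * 100

def rescale_iq(score, sd_from, sd_to=15, mean=100):
    """Re-express a score on another SD convention while keeping the percentile."""
    z = (score - mean) / sd_from
    return mean + z * sd_to

for sd in (15, 16):
    print(f"Reported 148 on an SD-{sd} test: "
          f"{iq_to_percentile(148, sd):.2f}th percentile, "
          f"= {rescale_iq(148, sd):.1f} on an SD-15 scale")
```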

Raw IQ scores from different tests (when standard deviations aren't accounted for) don't correlate especially well with one another to begin with, and failing to account for these variables and many others besides will only compound that problem. I'm sure there's interesting stuff you could do with the survey numbers, but I doubt an accurate intra-community comparison of the user base is one of them.

As for the second and third problems, IQ obviously correlates strongly with some forms of competence. However, I would also expect most posters here to be at least vaguely aware of the Dunning-Kruger effect or the general concept it derives from, and so to post selectively on the things they're fairly sure they know. That would skew the correlations towards widely supported sentiments, well-crafted posts, and total volume of posts, except for users who are polymaths of some description (of which, admittedly, we have a few).

As for the fourth problem: if the IQ results are anywhere near accurate, sub-normal ability is very abnormal on LessWrong. Most of us posting here aren't stupid, or even close to normal intelligence, let alone significantly below it. The Dunning-Kruger effect does not operate on a sliding scale where people of higher intelligence think of themselves as ever smarter; it inverts. People of actual ability tend to underestimate themselves. Accounting for that, it is also difficult to quantify the effect on people far above the LessWrong mean IQ (if we accept that measurement at all), mainly because such people are very rare. Do they tend to hold extremely pessimistic views of their own ability, or do they estimate themselves more accurately than less intelligent individuals do? It's difficult to muster a normative study of something like that - being far outside the bounds of any conventional IQ test certainly doesn't help - and arguing from conjecture would be inaccurate in the absence of some very strong priors.

peuddO

Thanks for the help. I'll see what works best for me.

peuddO

I find that LessWrong yields interesting subjects for study, as well as useful insights pertaining to said subjects, both in the articles themselves and in the attached comments.

However, because of the website format, I have a tendency to succumb to Chronic Internet Distraction Disease while browsing here. To solve this problem, I would like to devise a way to transfer articles and their associated commentary from LessWrong to my hard drive, where I can read them without the tantalizing proximity of embedded hyperlinks.

The articles themselves can be copy-pasted, but I can think of no good way to handle the issue of translating threaded comments. When I try to copy these directly, my word processor tells me to stick a finger up my nose, because producing smart looks evidently ain't in my nature.

Informed suggestions and clever solutions will be appreciated.
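For the record, here's roughly the kind of thing I've been fumbling towards - a rough sketch assuming Python with the requests and beautifulsoup4 libraries. The class names ("post-body", "comments", "comment", "comment-body") are hypothetical placeholders; I haven't checked what the site's markup actually uses.

```python
# Rough sketch: fetch a post, strip hyperlinks, and flatten threaded comments
# into indented plain text. All CSS class names below are hypothetical.
import requests
from bs4 import BeautifulSoup

def save_post_as_text(url, outfile):
    soup = BeautifulSoup(requests.get(url).text, "html.parser")

    # Remove every <a> tag but keep its visible text - no more embedded links.
    for link in soup.find_all("a"):
        link.unwrap()

    post = soup.find("div", class_="post-body")
    lines = [post.get_text("\n", strip=True) if post else "", ""]

    def walk(node, depth=0):
        # Assumes replies are <div class="comment"> elements nested inside their
        # parent comment, with the comment's own text in a "comment-body" child.
        for comment in node.find_all("div", class_="comment", recursive=False):
            body = comment.find("div", class_="comment-body", recursive=False)
            if body:
                lines.append("    " * depth + body.get_text(" ", strip=True))
            walk(comment, depth + 1)

    thread = soup.find("div", class_="comments")
    if thread:
        walk(thread)

    with open(outfile, "w", encoding="utf-8") as f:
        f.write("\n".join(lines))

# Example with a placeholder URL:
save_post_as_text("https://www.lesswrong.com/some-post", "some-post.txt")
```

Indentation stands in for the threading, which my word processor should at least be able to swallow.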

peuddO

I've learned that people significantly more knowledgeable and intelligent than me do exist, and not just as some mythical statistical entity at the fringes of what I'll realistically encounter in my everyday life.

The internet - and indeed communications technology in general - is beneficial like that, even if it takes some searching to find a suitable domain.

peuddO

One of the reasons why I took the step from lurker to user - a month or so ago - was that I thought I should reply to this comment. I subsequently forgot where to find it, and stumbled upon it again just now.

I'm 18. Whether or not that makes me qualified for whatever help you had in mind I do not know, but I'm certainly interested.

peuddO

Hrm. Now someone's downvoted your question, it seems. It's all a great, sinister conspiracy.

Well, regardless... peuddO is a username I occasionally utilize on internet forums. It's "upside down" in Norwegian, written upside down in Norwegian (I'm so very clever). Even so, I know that I personally prefer to know the names people go by out-of-internet. It's a strange quirk, perhaps, but it makes me feel obligated to provide my real first name when introducing myself.
