Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: asr 21 September 2012 12:23:29AM 4 points [-]

I think fictional evidence isn't terribly convincing. Note also that monarchy in the current era is constantly at risk of turning into either democracy or tyranny. "Ancient blood" hasn't been a reliable source of legitimacy since 1789. As a result, monarchs need either elections or raw force to keep their grip. And tyranny is unstable and tends to result in great wasted effort in preventing coups and insurrections.

Comment author: billswift 21 September 2012 07:13:57PM 2 points [-]

I think fictional evidence isn't terribly convincing.

Indeed. Try Hans-Hermann Hoppe's Democracy: The God that Failed or Graham's The Case Against Democracy. Neither is all that convincing that monarchy is much better than democracy, but they make a decent case that it is at least marginally better. Note that Hoppe's book obviously started as a collection of articles; it is seriously repetitive. Both books are short and fairly easy reads.

Comment author: cata 10 September 2012 06:30:36AM *  4 points [-]

I don't think it's useful to argue about the word "elitism" any longer. I think most people already agree with most of the points in your post about "elitism" except for the actual actions we should take as a result.

I think that the problem with making a beginner and advanced section is basically shame. In lieu of a quantifiable metric that classifies people into the two sections (not likely) it's going to be very hard for people in the "lower" section to admit that the people in the "higher" section are actually better writers or smarter or more rational or whatever, even if they are. The foundation of anti-intellectualism in the real world is a bunch of people in lower sections sneering at people in higher sections. With that as a backdrop, I don't think that the lower section would be a fertile place for actual self-improvement.

Comment author: billswift 10 September 2012 08:19:24PM *  1 point [-]

You are grossly over-simplifying anti-intellectualism, some streams of which are extremely valuable. Your claim only fits the "thalamic anti-intellectual", one of at least five broad types Eric Raymond discusses.

The most important and useful to society is the "epistemic-skeptical anti-intellectual": "His complaint is that intellectuals are too prone to overestimate their own cleverness and attempt to commit society to vast utopian schemes that invariably end badly." Of course lefties who want to change society to fit their theories try to smear them with claims like yours, but:

Because it’s extremely difficult to make people like F. A. Hayek or Thomas Sowell look stupid enough to be thalamic or totalitarian enough to be totalizers, the usual form of dishonest attack intellectuals use against epistemic skeptics is to accuse them of being traditionalists covertly intent on preserving some existing set of power relationships. Every libertarian who has ever been accused of conservatism knows about this one up close and personal.

And:

"If “intellectuals” really want to understand and defeat anti-intellectualism, they need to start by looking in the mirror. They have brought this hostility on themselves by serving their own civilization so poorly. Until they face that fact, and abandon their neo-clericalist presumptions, “anti-intellectualism” will continue to get not only more intense, but more deserved."

Comment author: NancyLebovitz 10 September 2012 06:31:28AM 7 points [-]

There are a lot of people who feel comfortable posting average YouTube comments; I don't think they are going to be welcome or useful at LessWrong, and I don't think that's a problem.

Raising the sanity waterline on a grand scale should affect the comments on YouTube, but we're a long way from that.

This being said, I'd like to see more rationality materials for people of average intelligence, but that's another long-term possibility. Not only does there not seem to be huge interest in the project, but figuring out simple explanations for new ideas is work, and it seems to be a relatively rare talent.

I only recently ran into a good simple explanation for Bayes-- that the more detailed a prediction becomes, the less likely it is to be true. And I got it from a woman who doesn't post on LW because she thinks the barriers to entry are too high. (It's possible that this explanation was on LW, and I didn't see it or it didn't register--- has anyone seen it here?)

There's some degree of natural sorting on LW-- I'm not the only person who doesn't read the more mathematical or technical material here, and I'm not commenting on that material, either.

I don't think having separate ranked areas is going to solve the problem of people living down to expectations.

Comment author: billswift 10 September 2012 08:06:25PM *  0 points [-]

I only recently ran into a good simple explanation for Bayes-- that the more detailed a prediction becomes, the less likely it is to be true.

That looks like a good way of explaining the conjunction and narrative fallacies, too. They could easily be looked at as adding details to a simpler argument. I wonder what other fallacies could be "generalized" similarly?
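The "more detail, less likely" explanation is just the conjunction rule: P(A and B) = P(A) × P(B | A) ≤ P(A), so adding a detail to a prediction can never raise its probability. A minimal sketch (the events and numbers here are made up purely for illustration):

```python
# Conjunction rule: a more detailed prediction is never more probable
# than the simpler prediction it contains.
# P(A and B) = P(A) * P(B | A) <= P(A), because P(B | A) <= 1.

p_rain = 0.30             # P(it rains tomorrow) -- illustrative number
p_wind_given_rain = 0.50  # P(high winds, given rain) -- illustrative number

# Probability of the more detailed prediction "rain AND high winds".
p_rain_and_wind = p_rain * p_wind_given_rain

# Adding the detail cut the probability in half here,
# and it can never increase it.
assert p_rain_and_wind <= p_rain
print(p_rain, p_rain_and_wind)  # 0.3 0.15
```

The conjunction fallacy is exactly the intuition that the detailed story is *more* plausible, when the arithmetic says it must be equally or less probable.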

One thing I think we should be working on is a way of organizing the mass of fallacies and heuristics. There are too many to keep straight without some sort of organizing principles.

Comment author: billswift 08 September 2012 06:42:04PM 3 points [-]

Go to Google Scholar and search on "argument maps" and "argument diagram"; you'll get plenty of hits.

Comment author: billswift 08 September 2012 02:56:43PM 3 points [-]

The survey has ended and he has posted the results: A Survey Question.

Comment author: Morendil 05 September 2012 12:54:44PM *  5 points [-]

Many of the jokes are constructed as twists on the Sorting Hat's "House X" pattern, and common food items prepared at home are called "House X". For instance "House Pickled Strawberries" prepared in a jar with a sweet brine. "Stewberries" is a corruption of "strawberries". (There's one blog out there suggesting it may have something to do with Jon Stewart of the Daily Show, but I think it more likely that it's meant to be the way a very young child would pronounce "strawberries".)

Comment author: billswift 05 September 2012 02:15:35PM 1 point [-]

Another possibility I saw, though it probably wasn't intended, is that both pickled and stewed are slang for drunk; maybe they are really powerful fruits.

Comment author: billswift 02 September 2012 03:15:17PM *  7 points [-]

You might find this useful: it isn't a source of papers, but a collection of first-hand accounts by autistics of what life and other people were like to them. This one, Don't Mourn For Us, is probably the best general description. A quote from it:

You try to relate to your autistic child, and the child doesn't respond. He doesn't see you; you can't reach her; there's no getting through. That's the hardest thing to deal with, isn't it? The only thing is, it isn't true.

Look at it again: You try to relate as parent to child, using your own understanding of normal children, your own feelings about parenthood, your own experiences and intuitions about relationships. And the child doesn't respond in any way you can recognize as being part of that system.

That does not mean the child is incapable of relating at all. It only means you're assuming a shared system, a shared understanding of signals and meanings, that the child in fact does not share. It's as if you tried to have an intimate conversation with someone who has no comprehension of your language. Of course the person won't understand what you're talking about, won't respond in the way you expect, and may well find the whole interaction confusing and unpleasant.

The best single source I know of is Tony Attwood's Complete Guide to Asperger's Syndrome.

As a more general response to your title, you need to learn more about the science, and especially pay attention to how the ideas in the field hang together. A less effective method is to consider how well what you are reading relates to what you already know to be true; unfortunately a lot of real science cannot pass this latter test unless you already know a lot of science.

Comment author: wedrifid 02 September 2012 04:52:53AM 0 points [-]

Astronomers use metal to mean elements other than hydrogen and helium.

Wow, Astronomers are lazy. It's not hard to make up new terms for things when the existing ones clearly don't fit. Heck, if making up a word was too difficult they could have used an arbitrary acronym.

Comment author: billswift 02 September 2012 02:42:06PM *  0 points [-]

Not really. If you look at a periodic table, the vast majority of elements actually are metals.

Comment author: blogospheroid 02 September 2012 08:47:50AM 1 point [-]

Sorry for missing the stupid questions thread, but since the sequences didn't have something direct about WBE, I thought Open thread might be a better place to ask this question.

I want to know: how is the fidelity of Whole Brain Emulation expected to be empirically tested, other than by replication of taught behaviour?

After uploading a rat, would someone look at the emulation of its lifetime and say, "I really knew this rat. This is that rat alone and no one else"?

Would replication of trained behaviour be the only empirical standard? What would that mean for emulating more complex beings, who might have their own thoughts beyond the behaviours taught to them? Please point me to any literature on this. I checked the WBE roadmap, and replication of trained behaviour seems to be the only way mentioned there.

Comment author: billswift 02 September 2012 02:30:52PM *  -1 points [-]

The world (including brains) is strictly deterministic. The only sources of our mental contents are our genetics and what we are "taught" by our environments (and the interactions between them). The only significant difference between rat and human brains for the purpose of uploading should be the greater capacity and more complex interactions supported by human brains.

Comment author: Stuart_Armstrong 30 August 2012 06:32:13PM 0 points [-]

Hum... That is one suggested way of going. But it does seem to ignore the fact that these non-causal models are claimed to be correct, without needing to know anything much about the underlying processes.

Maybe "small" should be calibrated by the claims of the model?

Comment author: billswift 30 August 2012 08:32:44PM 0 points [-]

At least for the three examples you cited, I seem to remember them being called approximations, not "correct".
