
Any time you find yourself being tempted to be loyal to an idea, it turns out that what you should actually be loyal to is whatever underlying feature of human psychology makes the idea look like a good idea; that way, you'll find it easier to fucking update when it turns out that the implementation of your favorite idea isn't as fun as you expected!

I agree that this is a step in the right direction, but I want to elaborate on why I think this is hard.

It is my impression that many utopians stay loyal to their chosen tactics, the ones that are supposed to bring the utopia closer, even after the efficacy of those tactics comes into question. My hypothesis for why this can happen is that the tactics are typically relatively concrete, whereas the goals they are supposed to achieve are usually quite vague (e.g. "the greater good"). Thus, when goals and tactics conflict, a person who tries to reconcile them will find it easier to modify the goals than the tactics, perhaps without even noticing that the new goals differ slightly from the old ones, since, due to their vagueness, the old and new goals overlap so much. Over time, the goals may drift until they are quite different from the starting point. The tactics, being more concrete, make any changes to them easier to notice.

I suspect that in your case we might observe something similar, since it is often quite hard to pinpoint exactly what underlying features of human psychology make a certain idea compelling.

really smart people who know lots of science and lots of probability and game theory might be able to do better for themselves

I agree that science, probability, and game theory put constraints on how the hard problems of politics can be solved. Nevertheless, I suspect that those constraints, coupled with the vagueness of our desires, may turn out to be lax enough to allow many different answers to most problems. In that case, this idea would help weed out a lot of bad ideas, but it might not be enough to choose among the rest. Alternatively, the constraints may turn out to be too restrictive to satisfy many of the desires people find compelling. Then we would get some kind of impossibility theorem (in the spirit of Arrow's theorem in social choice) and face the question of which requirements to relax and which imperfections to tolerate.

From doing this internet propaganda in the early years of the internet, I learned how to do propaganda. You don't appeal to emotion, or to reason, or anything. You just SHOUT. And REPEAT, and explain the position, and let the reader defend it for himself.

In the end, most readers agree with you (if you are right), but they will come up to you, much as you did, and say "While you are right, I see that, you are doing yourself a disservice by being so emotional--- you aren't persuasive...."

But I persuaded this reader! The fact is, I am persuasive, and maximally so. When there is a hostile political environment, if a paper is called "bullshit" or "pseudoscience", you need to first MOCK the idiots calling it that, so as to establish a level playing field. That means calling them "douchebag", "fuckwit", "turd-brain", etc, so that both you and the other person sound like children fighting in the playground, no authority.

Then you need to state the objective case (after the name-calling and cussing, or simultaneously), and then wait. If you are objectively right, people will sort it out on their own time, you don't have to do anything. The people who didn't sort it out will say "oh my, there's a controversy" and will keep an open mind.

It's classic propaganda techniques, and it can be used for good as easily as it can be used for evil. Of course, when calling people idiots for not agreeing with material that is called crackpot, you had better be careful, because if you are not right about the material, if it is crackpot, you are gone for good. The main difficulty is evaluating the work well, understanding it fully, and making sure that it is not crackpot, before posting the first cussword.

Ron Maimon

I have found it interesting and thought-provoking how this quote basically inverts the principle of charity. Sometimes, for various reasons, one idea is considered much more respectable than another. Since such an unequal playing field may make it harder for the correct idea to prevail, it might be desirable to level it. When two people believe different things and there is no cooperation between them, the person who holds the more respectable opinion can unilaterally apply the principle of charity and thus help establish a level playing field.

However, the person who holds the less respectable opinion cannot unilaterally level the playing field by applying the principle of charity, so they resort to shouting (as the quote describes) or, in other contexts, satire, although, just like shouting, satire is often used for other, sometimes less noble, purposes.

To what extent do you think these two things are symmetrical?

When we are talking about science, social science, history, or other similar disciplines, the disparity may arise from the fact that most introductory texts present the main ideas, which are already well understood and well articulated, whereas the actual researchers spend the vast majority of their time on the poorly understood edge cases of those ideas (it is almost tautological that the harder, less understood part of your work takes up more time, since well-understood ideas are often called such precisely because they no longer require a lot of time and effort).

A clunky solution: right-click on "context" in your inbox, select "copy link location", paste it into your browser's address bar, trim the URL, and press enter. At least that's what I do.

Different subjects do seem to require different thinking styles, but, at least for me, those styles are often quite hard to describe in words. If one has an inclination for one style of thinking, can this inclination manifest in seemingly unrelated areas, thus leading to unexpected correlations? This blog post presents an interesting anecdote.


I have taken the survey.

I remember reading the SEP article on Feminist Epistemology, where I got the impression that it models the world in a somewhat different way. Of course, this is probably one of those cases where the epistemology is tailored to suit political ideas (and its proponents themselves most likely wouldn't disagree), much more than vice versa.

When I (or, I suppose, most LWers) think about how knowledge about the world is obtained, the central example is empirical testing of hypotheses, i.e. a situation where I have more than one map of a territory and have to choose one of them. An archetypal example of this is a scientist testing hypotheses in a laboratory.

On the other hand, feminist epistemology seems to be largely based on Feminist Standpoint Theory, which basically models the world as being full of different people who are adversarial to each other and promote different maps. It seems to assume that the accuracies of maps cannot easily be compared, either because they are hard to check or because they depict different (or even incommensurable) things. The central question in this framework seems to be "Whose map should I choose?", i.e. the choice is not between maps, but between mapmakers. Well, there are situations where I would do something that fits this description very well: e.g. if I were trying to decide whether to buy a product I could not examine myself, and all the information I had was two reviews, one from the seller and one from an independent reviewer, I would be more likely to trust the latter's judgement.
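The seller-vs-reviewer case can actually be made quantitative: choosing between mapmakers amounts to weighting testimony by source reliability. Here is a minimal sketch (all probabilities are made-up numbers, purely for illustration) of why a positive review from a source that praises everything carries almost no information, while the same review from a discriminating source shifts the posterior substantially.

```python
# Toy Bayesian illustration: how much a positive review should move our
# belief that a product is good, depending on who wrote the review.
# All numbers are invented for illustration.

def posterior_good(prior, p_pos_if_good, p_pos_if_bad):
    """P(good | positive review), by Bayes' rule."""
    evidence = p_pos_if_good * prior + p_pos_if_bad * (1 - prior)
    return p_pos_if_good * prior / evidence

prior = 0.5  # initial belief that the product is good

# The seller praises the product whether or not it is good,
# so a positive review from them is nearly uninformative.
seller = posterior_good(prior, p_pos_if_good=0.99, p_pos_if_bad=0.95)

# The independent reviewer mostly praises good products and rarely
# praises bad ones, so their praise is strong evidence.
reviewer = posterior_good(prior, p_pos_if_good=0.90, p_pos_if_bad=0.20)

print(f"P(good | seller's praise)   = {seller:.2f}")    # ~0.51
print(f"P(good | reviewer's praise) = {reviewer:.2f}")  # ~0.82
```

In this framing, trusting the mapmaker collapses back into ordinary evidence-weighing about testimony, which is part of why the first archetype strikes me as the more general one.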

It seems to me that the first archetypal example is much more generalizable than the second one, and the strange claims cited in Pfft's comment are what one gets when one stretches the second example to extreme lengths.

There also exists Feminist Empiricism, which seems to be based on the idea that since one cannot interpret empirical evidence without a framework, something must be added to an inquiry, and since biases that favour a desirable interpretation are something, it is valid to add them (since this is not Bayesian inference, this is different from the problem of choosing priors). Since the whole process is deemed adversarial (scientists in this model look like prosecutors or defense attorneys), different people inject different biases and then argue that others should stop injecting theirs.

(Disclaimer: I read the SEP article some time ago and wrote about these ideas from memory, so it wouldn't be a big surprise if I misrepresented them in some way. Beyond that, there are other obvious sources of potential misrepresentation.)

I think that one very important difference between status games and things that merely remind people of status games is how long they are expected to stay in people's memory.

For example, I play pub quizzes and am often the person responsible for the answer sheet. Due to strict time limits, discussion must be as quick as possible, so in many situations I (or whoever is responsible for the answer sheet) have to reject an idea a person has come up with based on vague heuristic arguments, and there is usually no time for long and elaborate explanations. From the outside, this might look like a status-related thing, because I dismissed a person's opinion without a good explanation. However, the key difference is that none of this stays in anyone's memory. After a minute or two, all these things that might seem status-related are already forgotten. Ideally, people should not even come into the picture (because paying attention to anything other than the question is a waste of time); very often I do not even notice who came up with a correct answer. If people tend to forget, or not even notice, who deserves credit, and likewise tend to forget cases where their idea was dismissed in favour of another person's, then the small slights that happen because discussion has to be quick are not worth remembering, and one can be fairly certain that other people will not remember them either. Moreover, if "everyone knows" such slights are quickly forgotten, they are not very useful in status games. If something is forgotten, it cannot remain unforgiven.

Quite different dynamics arise if people have long memories for small slights and "everyone knows" that they do. Short memory made slights unimportant and useless for status games; but when they are important and "everyone knows" they are important, they become useful for social games, and therefore a greater proportion of them might carry some status-related intent rather than just being random noise.

Similarly, one might play a board game that involves things that look like social games, e.g. backstabbing. However, it is expected that when the pieces go back in the box, all of that is forgotten.

I think that what differentiates information sharing from social games is which of them is more likely to be remembered and which is likely to be quickly forgotten (and whether "everyone knows" which is most likely to be forgotten or remembered by others). Of course, different people might remember different things about the same situation, and they might be mistaken about what other people remember or forget; that is what a culture clash might look like. On the other hand, the same person might remember different things about different situations, so people cannot be neatly divided into different cultures, yet the frequency of each type of situation does seem to differ from person to person.
