Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: Sarunas 07 March 2017 02:10:10PM *  1 point [-]

Any time you find yourself being tempted to be loyal to an idea, it turns out that what you should actually be loyal to is whatever underlying feature of human psychology makes the idea look like a good idea; that way, you'll find it easier to fucking update when it turns out that the implementation of your favorite idea isn't as fun as you expected!

I agree that this is a step in the right direction, but I want to elaborate why I think this is hard.

It is my impression that many utopians stay loyal to their chosen tactics, the ones that are supposed to bring the utopia closer, even after the effectiveness of those tactics comes into question. My hypothesis for why this can happen is that, typically, the tactics are relatively concrete whereas the goals they are supposed to achieve are usually quite vague (e.g. "greater good"). Thus, when goals and tactics conflict, a person who tries to reconcile them will find it easier to modify the goals than the tactics, perhaps without even noticing that the new goals may be slightly different from the old ones, since, due to that vagueness, the old goals and the new goals overlap so much. Over time, the goals may drift into becoming quite different from the starting point. At the same time, since tactics are more concrete, it is easier to notice changes in them.

I suspect that in your case we might observe something similar, since it is often quite hard to pinpoint exactly what underlying features of human psychology make a certain idea compelling.

really smart people who know lots of science and lots of probability and game theory might be able to do better for themselves

I agree that science, probability and game theory put constraints on how the hard problems of politics could be solved. Nevertheless, I suspect that those constraints, coupled with the vagueness of our desires, may turn out to be lax enough to allow many different answers to most problems. In that case this idea would help to weed out a lot of bad ideas, but it might not be enough to choose among the rest. Alternatively, those constraints may turn out to be too restrictive to satisfy many desires people find compelling. Then we would get some kind of impossibility theorem, and the question would become which requirements to relax and which imperfections to tolerate.

Comment author: Sarunas 03 January 2017 10:52:07PM *  1 point [-]

From doing this internet propaganda in the early years of the internet, I learned how to do propaganda. You don't appeal to emotion, or to reason, or anything. You just SHOUT. And REPEAT, and explain the position, and let the reader defend it for himself.

In the end, most readers agree with you (if you are right), but they will come up to you, much as you did, and say "While you are right, I see that, you are doing yourself a disservice by being so emotional--- you aren't persuasive...."

But I persuaded this reader! The fact is, I am persuasive, and maximally so. When there is a hostile political environment, if a paper is called "bullshit" or "pseudoscience", you need to first MOCK the idiots calling it that, so as to establish a level playing field. That means calling them "douchebag", "fuckwit", "turd-brain", etc, so that both you and the other person sound like children fighting in the playground, no authority.

Then you need to state the objective case (after the name-calling and cussing, or simultaneously), and then wait. If you are objectively right, people will sort it out on their own time, you don't have to do anything. The people who didn't sort it out will say "oh my, there's a controversy" and will keep an open mind.

It's classic propaganda techniques, and it can be used for good as easily as it can be used for evil. Of course, when calling people idiots for not agreeing with material that is called crackpot, you had better be careful, because if you are not right about the material, if it is crackpot, you are gone for good. The main difficulty is evaluating the work well, understanding it fully, and making sure that it is not crackpot, before posting the first cussword.

Ron Maimon

I have found it interesting and thought-provoking how this quote basically inverts the principle of charity. Sometimes, for various reasons, one idea is considered much more respectable than another. Since such an unequal playing field of ideas may make it harder for the correct idea to prevail, it might be desirable to level it. When two people believe different things and there is no cooperation between them, the person who holds the more respectable opinion can unilaterally apply the principle of charity and thus help to establish a level playing field.

However, the person who holds the less respectable opinion cannot unilaterally level the playing field by applying the principle of charity, so they resort to shouting (as the quote describes) or, in other contexts, satire, although, just like shouting, satire is often used for other, sometimes less noble purposes.

To what extent do you think these two things are symmetrical?

Comment author: Viliam 02 January 2017 03:45:23PM *  9 points [-]

A consequence of availability bias: the less you understand what other people do, the easier "in principle" it seems.

By "in principle" I mean that you wouldn't openly call it easy, because the work obviously requires specialized knowledge you don't have, and cannot quickly acquire. But it seems like for people who already have the specialized knowledge, it should be relatively straightforward.

"It's all just a big black box for me, but come on, it's only one black box, don't act like it's hundreds of boxes."

as opposed to:

"It's a transparent box with hundreds of tiny gadgets. Of course it takes a lot of time to get it right!"

Comment author: Sarunas 03 January 2017 10:32:02PM *  1 point [-]

When we are talking about science, social science, history or other similar disciplines, the disparity may arise from the fact that most introductory texts present the main ideas, which are already well understood and well articulated, whereas the actual researchers spend the vast majority of their time on poorly understood edge cases of those ideas. (It is almost tautological to say that the harder, less understood part of your work takes up more time, since well-understood ideas are often called such precisely because they no longer require a lot of time and effort.)

Comment author: Viliam 14 December 2016 10:59:55AM *  5 points [-]

Here is a problem I have with LW user interface, please tell me if there is a simple solution I have not noticed:

Suppose someone posted a link post, a few weeks ago, and I have posted a comment below that article. Today I come to LW and a red icon tells me I got a reply in the discussion.

I want to display all the comments in the discussion below that link post, so that all yet unseen comments are marked as new. But the post is a bit old, and I don't want to browse through the Discussion pages to find it. I would like to somehow get there from my inbox. But without destroying the information about which comments were already seen in the process.

How to achieve this?

Below the reply in the inbox is the "Context" link. Clicking it will only display the reply to my comment, but will mark all comments below the article as seen. This is not what I want.

With non-link posts I usually do the following trick: I click on the name of the user who wrote the reply. Hopefully their reply is on the first page of their recent comments, with a header saying "In response to comment by Viliam on <article name>." Clicking on the article name displays the whole debate, with all unseen comments marked as new.

Unfortunately, with link posts, clicking on the article name leads to the linked article, not to the LW debate. So this solution is not available.

Comment author: Sarunas 14 December 2016 05:20:30PM *  5 points [-]

A clunky solution: right click on "context" in your inbox, select "copy link location", paste it into your browser's address bar, trim the URL and press enter. At least that's what I do.
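In the same clunky spirit, the trimming step can be scripted. This is a hypothetical sketch: it assumes old-style permalinks end in a trailing comment-id path segment (the example URL and its shape are assumptions, not confirmed by the site):

```python
from urllib.parse import urlsplit, urlunsplit

def thread_url(context_url: str) -> str:
    """Drop the trailing comment-id segment from a permalink,
    leaving the URL of the whole thread.

    Assumes permalinks look like
    http://lesswrong.com/lw/<post-id>/<slug>/<comment-id>
    (hypothetical shape; adjust if the real URLs differ)."""
    scheme, netloc, path, _query, _frag = urlsplit(context_url)
    # Remove the last path segment and keep a trailing slash.
    trimmed = path.rstrip("/").rsplit("/", 1)[0] + "/"
    return urlunsplit((scheme, netloc, trimmed, "", ""))

print(thread_url("http://lesswrong.com/lw/abc/example_post/d3f"))
```

Pasting the result into the address bar would then load the full discussion rather than the single-comment context view.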

Comment author: [deleted] 15 September 2016 06:04:06PM 0 points [-]

Has anyone here had associations/subjective feelings about subjects of study? (Probably in high school, where the range of subjects is wide and students' attitudes depend on the teachers' images, to some degree.) I tended to like algebra, because solving equations and the like reminded me of gradual shifts of attention in yoga-style exercise - flowing, ordered and always seeking the point of balance. Geometry, I took for a much more "masculine" discipline, a form of exercise in endurance, and was not fond of it. Of course, calculus messed things up...:)

In response to comment by [deleted] on Open thread, Sep. 12 - Sep. 18, 2016
Comment author: Sarunas 20 September 2016 02:38:42PM *  0 points [-]

Different subjects do seem to require different thinking styles, but, at least for me, those styles are often quite hard to describe in words. If one has an inclination for one style of thinking, can this inclination manifest in seemingly unrelated areas, thus leading to unexpected correlations? This blog post presents an interesting anecdote.

Comment author: Huluk 26 March 2016 12:55:37AM *  26 points [-]

[Survey Taken Thread]

By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.

Let's make these comments a reply to this post. That way we continue the tradition, but keep the discussion a bit cleaner.

Comment author: Sarunas 26 March 2016 03:42:26PM 42 points [-]

I have taken the survey.

Comment author: Viliam 17 February 2016 11:58:49AM 2 points [-]


Logic itself has a very gendered and white supremacist history.

These people are clearly unable to distinguish between "the territory" and "the person who talks about the territory".

I had to breathe calmly for a few moments. Okay, I'm not touching this shit on the object level again.

On a meta level, I wonder how many of the missing rationality skills these people never had, versus how many they had but lost later when they became politically mindkilled.

Comment author: Sarunas 17 February 2016 05:27:02PM *  3 points [-]

I remember reading the SEP article on Feminist Epistemology, where I got the impression that it models the world in a somewhat different way. Of course, this is probably one of those cases where the epistemology is tailored to suit political ideas (and its proponents themselves most likely wouldn't disagree), and much less vice versa.

When I (or, I suppose, most LWers) think about how knowledge of the world is obtained, the central example is empirical testing of hypotheses, i.e. a situation where I have more than one map of a territory and I have to choose one of them. An archetypal example of this is a scientist testing hypotheses in a laboratory.

On the other hand, feminist epistemology seems to be largely based on Feminist Standpoint Theory, which basically models the world as full of different people who are adversarial to each other and try to promote different maps. It seems to carry the assumption that you cannot easily compare the accuracy of maps, either because they are hard to check or because they depict different (or even incommensurable) things. The central question in this framework seems to be "Whose map should I choose?", i.e. the choice is not between maps, but between mapmakers. Well, there are situations where I would do something that fits this description very well: e.g. if I were trying to decide whether to buy a product I was not able to put my hands on, and all the information I had were two reviews, one from the seller and one from an independent reviewer, I would be more likely to trust the latter's judgement.

It seems to me that the first archetypal example is much more generalizable than the second, and the strange claims cited in Pfft's comment are what one gets when one stretches the second example to extreme lengths.

There also exists Feminist Empiricism, which seems to be based on the idea that since one cannot interpret empirical evidence without a framework, something must be added to an inquiry; and since biases that favour desirable interpretations are something, it is valid to add them (since this is not Bayesian inference, this is different from the problem of the choice of priors). Since the whole process is deemed adversarial (scientists in this model look like prosecutors or defense attorneys), different people inject different biases and then argue that the others should stop injecting theirs.

(Disclaimer: I read the SEP article some time ago and wrote about these ideas from memory; it wouldn't be a big surprise if I misrepresented them in some way. In addition, there are other obvious sources of potential misrepresentation.)

Comment author: bogus 15 January 2016 02:24:00PM *  0 points [-]

A person coming from "social culture" to "nerd culture" may interpret everything as a part of some devious status game.

The social person is right here. Remember 'X is not about Y'? The difference is that your 'social culture' person is in fact low-to-average status in the relevant hierarchy. Something that's just "harmless social banter" to people who are confident in their social position can easily become a 'status attack' or a 'microaggression' from the POV of someone who happens to be more vulnerable. This is not limited to information exchange at all; it's a ubiquitous social phenomenon. And this dynamic makes engaging in such status games a useful signal of confidence, so they're quite likely to persist.

Comment author: Sarunas 15 January 2016 06:02:56PM *  2 points [-]

I think that one very important difference between status games and things that might merely remind people of status games is how long they are expected to stay in people's memory.

For example, I play pub quizzes, and often I am the person responsible for the answer sheet. Due to strict time limits, discussion must be as quick as possible, so in many situations I (or whoever is responsible for the answer sheet) have to reject an idea a person has come up with based on vague heuristic arguments, and usually there is no time for long and elaborate explanations. From the outside, this might look like a status-related thing, because I dismissed a person's opinion without a good explanation. The key difference, however, is that this does not stay in anyone's memory. After a minute or two, all these things that might seem related to status are already forgotten. Ideally, people should not even come into the picture (because paying attention to anything but the question is a waste of time); very often I do not even notice who exactly came up with a correct answer.

If people tend to forget, or not even pay attention to, whom credit should be given, and if they tend to forget cases where their idea was dismissed in favour of another person's idea, then the small slights that happened because the discussion had to be as quick as possible are not worth remembering, and one can be pretty certain that other people will not remember them either. Also, if "everyone knows" they are quickly forgotten, they are not very useful in status games either. If something is forgotten, it cannot remain unforgiven.

Quite different dynamics arise if people have long memories for small slights and "everyone knows" that they do. Short memory made slights unimportant and useless for status games; but in this second case, where they are important and "everyone knows" they are important, they become useful for social games, and therefore a greater proportion of them might have some status-related intentionality behind them rather than being random noise.

Similarly, one might play a board game that involves things that look like social games, e.g. backstabbing. However, it is expected that when the figures go back into the box, all of that is forgotten.

I think that what differentiates information sharing from social games is which of the two is more likely to be remembered and which is likely to be quickly forgotten (and whether or not "everyone knows" which is most likely to be forgotten or remembered by others). Of course, different people might remember different things about the same situation, and they might be mistaken about what other people remember or forget; that is what a culture clash might look like. On the other hand, the same person might tend to remember different things about different situations, so people cannot be neatly divided into different cultures, even though the frequency of situations of each type seems to differ from person to person.

Comment author: Gunnar_Zarncke 03 January 2016 05:54:17PM 2 points [-]

Did I miss the 2015 survey or will there be none?

Comment author: Sarunas 03 January 2016 08:29:57PM 3 points [-]
Comment author: pianoforte611 01 January 2016 07:26:36PM 0 points [-]

Not sure if there is a thread for this, but does anyone have access to this article?

“Comparative Efficiency of Informal (Subjective, Impressionistic) and Formal (Mechanical, Algorithmic) Prediction Procedures: The Clinical–Statistical Controversy”, Psychology, Public Policy, and Law 2: 293–323

Comment author: Sarunas 02 January 2016 10:36:07PM 1 point [-]
