All of lucidian's Comments + Replies

5Gunnar_Zarncke
This goes deeper than just avoiding the form of to-be. Mainly by following up with "why". But also consider the Team X example.

Highly recommend kazerad, for Scott-level insights about human behavior. Here's his analysis of 4chan's anonymous culture. Here's another insightful essay of his. And a post on memetics. And these aren't necessarily the best posts I've read by him, just the three I happened to find first.

By the way, I'm really averse to the label "hidden rationalists". It's like complimenting people by saying "secretly a member of our ingroup, but just doesn't know it yet". Which simultaneously presupposes the person would want to be a member of o... (read more)

1OrphanWilde
I gave him/her a shot. After five or six pages of angry ranting about Gamergate, which was four or five pages too many, I quit. I have no dog in that fight, and I find the notion of arguing about specific people's specific lives as if they were culturally or socially significant to be a really misguided enterprise. It's tribal superstimulus, and it is both addictive and socially self-destructive.
4snarles
Hopefully people here do not interpret "rationalists" as synonymous for "the LW ingroup." For one, you can be a rationalist without being a part of LW. And secondly, being a part of LW in no way certifies you as a rationalist, no matter how many internal "rationality tests" you subject yourself to.

Here are the ten I thought of:

  • decorations for your house/apartment
  • a musical instrument
  • lessons for the musical instrument
  • nice speakers (right now I just have computer speakers and they suck)
  • camping equipment
  • instruction books for crafts you want to learn (I'm thinking stuff like knitting, sewing etc.)
  • materials for those crafts
  • gas money / money for motels, so you can take a random road trip to a place you've never been before
  • gym membership
  • yoga classes (or martial arts or whatever)

Also I totally second whoever said "nice kitchen knives".... (read more)

1taryneast
I agree - I was given a good set of knives as a gift and they are an excellent investment.

Read more things that agree with what you want to believe. Avoid content that disagrees with it or criticizes it.

I don't have an answer, but I would like to second this request.

This post demonstrates a common failure of LessWrong thinking, where it is assumed that there is one right answer to something, when in fact this might not be the case. There may be many "right ways" for a single person to think about how much to give to charity. There may be different "right ways" for different people, especially if those people have different utility functions.

I think you probably know this, I am just picking on the wording, because I think that this wording nudges us towards thinking about these kinds of questions in an unhelpful way.

But it asks about “the right way to think about how much to give to charity”, not “the right amount to give to charity”. It is well possible (depending on what one means by “way to think about”) that there is one right way to think about how much to give to charity but it returns different outputs given different inputs.

I think that we should have fewer meta posts like this. We spend too much time trying to optimize our use of this website, and not enough time actually just using the website.

Thanks for this post! I also spend far too much time worrying about inconsequential decisions, and it wouldn't surprise me if this is a common problem on LessWrong. In some sense, I think that rationality actually puts us at risk for this kind of decision anxiety, because rationality teaches us to look at every situation and ask, "Why am I doing it this way? Is there a different way I could do it that would be better?" By focusing on improving our lives, we end up overthinking our decisions. And we tend to frame these things as optimization ... (read more)

7TimMartin
That's very true re: mindset! There was one time in my life when the decision of where to live was made for me (I used to teach English in Japan), and I was placed in a location I never would have picked on my own. But because I didn't have a choice in the matter, I made the best of it, and things worked out pretty well. Telling yourself "this is fine, this is going to work" is necessary sometimes.

I think it's worth including inference on the list of things that make machine learning difficult. The more complicated your model is, the more computationally difficult it will be to do inference in it, meaning that researchers often have to limit themselves to a much simpler model than they'd actually prefer to use, in order to make inference actually tractable.
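To make the tractability point concrete, here's a toy sketch (my own illustration, with made-up numbers): exact inference in even a tiny discrete latent-variable model means summing over every joint assignment of the latent variables, so the cost grows exponentially with the data.

```python
import math
from itertools import product

# Toy example: exact marginal likelihood of a K-component Gaussian mixture,
# computed by brute-force enumeration of all K**N component assignments.
# N data points means K**N terms, which is why practitioners often fall back
# on simpler models or approximate inference.
def exact_marginal(data, means, noise=1.0):
    K, N = len(means), len(data)
    total = 0.0
    for assignment in product(range(K), repeat=N):  # K**N iterations
        lik = 1.0
        for x, z in zip(data, assignment):
            lik *= math.exp(-(x - means[z]) ** 2 / (2 * noise))
            lik /= math.sqrt(2 * math.pi * noise)
        total += lik / (K ** N)  # uniform prior over assignments
    return total

# 3 points and 2 components is already 2**3 = 8 terms; 100 points would be 2**100.
print(exact_marginal([0.1, -0.2, 2.9], means=[0.0, 3.0]))
```

Approximate methods (variational inference, MCMC) exist precisely to sidestep this enumeration, but they bring their own complications, which is part of why the simpler model often wins in practice.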

Analogies are pervasive in thought. I was under the impression that cognitive scientists basically agree that a large portion of our thought is analogical, and that we would be completely lost without our capacity for analogy? But perhaps I've only been exposed to a narrow subsection of cognitive science, and there are many other cognitive scientists who disagree? Dunno.

But anyway I find it useful to think of analogy in terms of hierarchical modeling. Suppose you have a bunch of categories, but you don't see any relation between them. So maybe you kno... (read more)

0SilentCal
Perhaps a better title would have been "The Correct System-II Use of Analogy", or "The Correct Use of Analogy in Intellectual Debate." What you're saying is true about day-to-day/on-the-fly thinking, but written argument requires a higher standard.

I'm also reading this book, and I'm actually finding it profoundly unimpressive. Basically it's a 500-page collection of examples, with very little theoretical content. The worst thing, though, is that its hypothesis seems to fundamentally undermine itself. Hofstadter and Sander claim that concepts and analogy are the same phenomenon. But they also say that concepts are very flexible, non-rigid things, and that we expand and contract their boundaries whenever it's convenient for reasoning, and that we do this by making analogies between the original co... (read more)

Potluck means we bring our own food and then share it? Is there a list of what people are bringing, to avoid duplicates?

Oh hey, this is convenient, I just got to Sydney yesterday and you guys have a meetup tonight. =) I'll probably attend. (I'm in town for three months, visiting from the United States.)

I have an ulterior motive for attending: I am looking for housing near Macquarie University for the next three months. I don't suppose anyone here has a room for rent, or knows of a good place to stay? (Sorry if this is the wrong place to ask about such things!)

Sure, but that understanding is very specific to our culture. It's only recently that we've come to see procreation as "recreation" - something unnecessary that we do for personal fulfillment.

Many people don't hold jobs just to avoid being poor. It's also a duty to society. If you can't support yourself, then you're a burden on society and its infrastructure.

Similarly, having children was once thought of as a duty to society. I read an article about this recently: http://www.artofmanliness.com/2014/03/03/the-3-ps-of-manhood-procreate/

Anyway, ... (read more)

1DanArmak
Maybe in other cultures children get more instructions on eventually having children of their own, too? I don't know.

To construct a friendly AI, you need to be able to make vague concepts crystal clear, cutting reality at the joints when those joints are obscure and fractal - and then implement a system that implements that cut.

Strongly disagree. The whole point of Bayesian reasoning is that it allows us to deal with uncertainty. And one huge source of uncertainty is that we don't have precise understandings of the concepts we use. When we first learn a new concept, we have a ton of uncertainty about its location in thingspace. As we collect more data (either thro... (read more)
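As a hedged sketch of what I mean (my own toy model, not anything from the original post): treat a concept's location in thingspace as a Gaussian belief about a category center. Each observed example narrows that belief via a standard conjugate update, so uncertainty about the concept shrinks gracefully rather than needing to be eliminated up front.

```python
# Belief about where a category's center sits in (one-dimensional)
# thingspace: a Gaussian with a mean and a variance. Each example
# tightens the belief via the conjugate update for a Gaussian mean
# with known observation noise.
def update_center(prior_mean, prior_var, observations, obs_var=1.0):
    mean, var = prior_mean, prior_var
    for x in observations:
        var_new = 1.0 / (1.0 / var + 1.0 / obs_var)
        mean = var_new * (mean / var + x / obs_var)
        var = var_new
    return mean, var

# Start nearly ignorant (variance 100), then see three examples near 2.0.
mean, var = update_center(0.0, 100.0, [2.1, 1.9, 2.0])
print(mean, var)  # belief concentrates near 2.0, variance shrinks below 1
```

The point is that nothing in the formalism requires the concept's boundaries to be crystal clear before we can reason with it; the uncertainty is carried along explicitly.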

1Kaj_Sotala
Related paper. Also sections 1 and 2 of this paper.

I can't help but think that some of this has to do with feminism, at least in the case of girl teenagers. I hear a lot of people emphasizing that having children is a choice, and it's not for everyone. People are constantly saying things like "Having children is a huge responsibility and you have to think very carefully whether you want to do it." The people saying this seem to have a sense that they're counterbalancing societal pressures that say everyone should have children, or that women should focus on raising kids instead of having a car... (read more)

1DanArmak
A career is often equated with having a job. Or rather, a stable job, job security, a good salary that increases with time, etc. Therefore, unless you are independently wealthy, having a job / career is seen as both good and necessary: the alternative is to be poor. On the other hand, having children is related mostly to happiness, satisfaction, and perhaps the social life. We know some people have no children and are still happy. So it's much easier to accept that having children is optional for others (whether or not you want it for yourself or for your children). There are certainly negative concepts associated with being childless-by-choice, but not as many or as strong as those associated with being poor-by-choice.

Cog sci question about how words are organized in our minds.

So, I'm a native English speaker, and for the last ~1.5 years, I've been studying Finnish as a second language. I was making very slow progress on vocabulary, though, so a couple days ago I downloaded Anki and moved all my vocab lists over to there. These vocab lists basically just contained random words I had encountered on the internet and felt like writing down; a lot of them were for abstract concepts and random things that probably won't come up in conversation, like "archipelago"... (read more)

5primality
I find that making up mnemonics works well to combat interference. They don't have to be good mnemonics for this to work. Example: I noticed I kept mixing up the Spanish words aquí (here) and allí (there). I then made up the mnemonic that aquí has a "k" sound so it's close, and allí contains l's so it's long away. A few days later, I encounter the word "allí". My thinking then goes "That's either here or there, I keep confusing those" -> "oh yeah, I made up a mnemonic" -> "allí means there". I wonder how well this method would work for others.
8ChristianKl
That problem is called memory interference. I think reading Wozniak's 20 rules gives you a good elementary understanding of concepts like that. In general there doesn't seem to be a good way to predict memory interference in advance. When faced with apparent interference I usually make a card specifically for the interference:

Front: (kai / hai) -> shark
Back: hai

Front: (kai / hai) -> probably
Back: kai
2Emile
I tend to think of this in terms of compression: you can use various compression schemes to store English words in fewer bits, but that will make you store foreign words in more bits. For example, you could order letters by frequency and represent frequent letters with fewer bits. You can do the same with groups of letters (e.g. "thing" = "th" + "ing", both very frequent combinations in English), or take advantage of conditional probabilities ('t' is much more likely to be followed by 'h' than 'n') to squeeze out a few more bits of compression.

Similarly, if a westerner wanted to describe the Chinese character 語 without any prior knowledge of Chinese, the description would be very long, but a Chinese speaker would describe it as "the key for speech, and a five above a mouth". This is just another way of describing what you call phonetic space.

Simple issues of frequency make learners see words as "closer" than native speakers do. Another problem is when the "phonetic space" of one language has more (or different) dimensions than those of another; e.g. many people find it hard to learn words when the distinction between voiced and unvoiced "th" is important, or when the tone of a syllable also carries meaning (as in Chinese). The Chinese words for "mother", "insult" and "horse" all sound like exactly the same word, "ma", to non-Chinese speakers.
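The letter-frequency scheme Emile describes can be sketched in a few lines. (The frequencies below are standard published approximations for English; the specific word comparison is my own illustration.) Under an ideal code, a letter with probability p costs -log2(p) bits, so a code tuned to English statistics makes common English words cheap and foreign words expensive.

```python
import math

# Approximate English letter frequencies (percent of running text).
english_freq = {'e': 12.7, 't': 9.1, 'a': 8.2, 'o': 7.5, 'i': 7.0, 'n': 6.7,
                's': 6.3, 'h': 6.1, 'r': 6.0, 'd': 4.3, 'l': 4.0, 'c': 2.8,
                'u': 2.8, 'm': 2.4, 'w': 2.4, 'f': 2.2, 'g': 2.0, 'y': 2.0,
                'p': 1.9, 'b': 1.5, 'v': 1.0, 'k': 0.8, 'j': 0.15, 'x': 0.15,
                'q': 0.1, 'z': 0.07}

def bits_per_letter(word, freqs):
    """Ideal code length under the frequency model: -log2 p(letter), averaged."""
    return sum(-math.log2(freqs[c] / 100) for c in word) / len(word)

# A common English word is cheaper per letter than a Finnish word under
# a code tuned to English letter statistics.
print(bits_per_letter("thing", english_freq))     # about 4.2 bits/letter
print(bits_per_letter("kuvastaa", english_freq))  # about 4.6 bits/letter
```

A real scheme would also model digraph frequencies and conditional probabilities, as the comment notes, which would widen the gap further.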
7TheOtherDave
You may find "linguistic cohort" a useful search phrase. When I studied linguistics back in the 80s it was a popular way of thinking about lexical retrieval. E.g., a cohort model might explain collisions between "kertautua" and "kuvastaa" by observing that they share an initial sound, final sound, and (I think?) number of syllables, all of which are lexical search keys. (Put another way: it's easy to list words that start with "k", words that end with "a", and three-syllable words.) That said, I remember thinking at the time that it was kind of vacuous. (After all, it's also easy to list words with "v" in the middle somewhere.)

These are interesting questions. I think the keyword you want for "hash collisions" is interference. Here's a more helpful overview from an education perspective: Learning Vocabulary in Lexical Sets: Dangers and Guidelines (2000). It mostly talks about semantic interference, but it mentions some other work on similar-sounding and similar-looking words.

Hmm. If you want to know how Bayesian models of cognition work, this paper might be a good place to start, but I haven't read it yet: "Bayesian Models of Cognition", by Griffiths, Kemp, and Tenenbaum.

I'm taking a philosophy class right now on Bayesian models of cognition, and we've read a few papers critiquing Bayesian approaches: "Bayesian Fundamentalism or Enlightenment?", by Jones and Love "Bayesian Just-So Stories in Psychology and Neuroscience", by Bowers and Davis Iirc, it's the latter that discusses the unfalsifiabilit... (read more)

It might be worth noting that Bayesian models of cognition have played a big role in the "rationality wars" lately. The idea is that if humans are basically rational, their behaviors will resemble the output of a Bayesian model. Since human behavior really does match the behavior of a Bayesian model in a lot of cases, people argue that humans really are rational. (There has been plenty of criticism of this approach, for instance that there are so many different Bayesian models in the world that one is sure to match the data, and thus the whole... (read more)

2Stefan_Schubert
That's very helpful! I've heard a lot of scattered remarks about this perspective but never read up on it systematically. I will look into Tennenbaum and Griffiths. Any particular suggestions (papers, books)? The unfalsifiability remark is interesting, btw.

This description/advice is awesome, and I mostly agree, but I think it presents an overly uniform impression of what love is like. I've been in Mature Adult Love multiple times, and the feelings involved have been different every time. I wouldn't necessarily reject your division into obsession, closeness, and sexual desire, but I think maybe there are different kinds (or components) of closeness, such as affection, understanding, appreciation, loyalty, etc., and any friendship or relationship will have these in differing degrees. For instance, for a lot of people, family love seems to involve a lot of loyalty but not as much understanding.

1Viliam_Bur
Yes, I agree completely. This classification is (aims to be) "hardware"-oriented; the three groups should be supported by different sets of hormones. (I am not a biologist, I merely copy the info from other sources; mostly the Married Man Sex Life blog. The author is a nurse, so I trust his expertise.) I can imagine that the same "hardware" foundation could be used to implement multiple different "software" emotional flavors in the brain. Actually, I believe there might be even some cultural variations; if nothing else, the mere belief that some two emotions should go together, or that some emotion should be felt in some situation, would create a cultural difference.

Hmm, I can see arguments for and against calling computationalism a form of dualism. I don't think it matters much, so I'll accept your claim that it's not.

As for embodied cognition, most of what I know about it comes from reading Lawrence Shapiro's book Embodied Cognition. I was much less impressed with the field after reading that book, but I do think the general idea is important, that it's a mistake to think of the mind and body as separate things, and that in order to study cognition we have to take the body into consideration.

I agree that embodimen... (read more)

1Said Achmiz
I agree. I don't think embodiment is irrelevant; my own field (human-computer interaction) takes embodiment quite seriously — it's an absolutely integral factor in natural user interface design, for example. I just don't think embodiment is in any way magic, the way that the embodied cognition people seem to think and imply. If you can simulate a human and their environment on any level you like, then embodiment stops being an issue. It seems like we don't actually disagree on this. This is certainly not impossible, but it's not clear to me why you couldn't then simulate the substrate on a sufficiently low level as to capture whatever aspect of the substrate is responsible for enabling cognition. After all, we could in principle simulate the entire universe down to quantum configuration distributions, right? If you wanted to make a weaker claim based on computational tractability, then that would of course be another thing. I concur with this. To the extent that I have any kind of a handle of what subjective experience even is, it does seem quite important. P.S. Yeah, this is probably a question of preferred terminology and I am not inclined to argue about it too much; I just wanted to clarify my actual views.

Ah. I'm not sure I agree with you on the nature of the self. What evidence do you have that your mind could be instantiated in a different medium and still lead to the same subjective experience? (Or is subjective experience irrelevant to your definition of self?)

I mean, I don't necessarily disagree with this kind of dualism; it seems possible, even given what I know about embodied cognition. I just am not sure how it could be tested scientifically.

0Said Achmiz
No direct evidence, just the totality of what we currently know about the mind (i.e. cognitive science). Subjective experience is not irrelevant, though I am still confused about its nature. I don't, however, have any reason to believe that it's tied to any particular instantiation. I don't think my view can properly be characterized as dualism. I don't posit any sort of nonmaterial properties of mind, for instance, nor that the mind itself is some nonmaterial substance. Computationalism merely says, essentially, that "the mind is what the brain does", and that other physical substrates can perform the same computation. Everything that I know about the idea of embodied cognition leads me to conclude that it is a brand of mysticism. I've never heard a cogent argument for why embodiment can't be simulated on some suitable level.

Hmm, I'll have to look into the predictive power thing, and the tradeoff between predictive power and efficiency. I figured viewing society as an organism would drastically improve computational efficiency over trying to reason about and then aggregate individual people's preferences, so that any drop in predictive power might be worth it. But I'm not sure I've seen evidence in either direction; I just assumed it based on analogy and priors.

As for why you should care, I don't think you should, necessarily, if you don't already. But I think for a lot of people, serving some kind of emergent structure or higher ideal is an important source of existential fulfillment.

1Said Achmiz
Sorry, when I said "predictive power", I was actually assuming normalization for efficiency. That is, my claim is that the total predictive capacity you get for your available computational resources is greatest by taking the physical stance in this case.

What does it mean to benefit a person, apart from benefits to the individual cells in that person's body? I don't think it's unreasonable to think of society as having emergent goals, and fulfilling those goals would benefit it.

1Said Achmiz
Oh, and: It is almost trivial to imagine such a thing. For example, my body may be destroyed utterly in the process of transferring my mind, unharmed, into a new, artificial body, better in every way than my old one. This would be great for me (assuming the new body suited my wants and needs), but bad for the cells making up my existing body. The core idea here is that I am not my body. I am currently instantiated in my body, but that's not the same thing. I care about my current instantiation only to the degree that doing so is necessary for me to survive and prosper.
2Said Achmiz
I actually do think it is unreasonable to take any but the physical stance toward society; the predictive power of taking the intentional stance (or the design stance, for that matter) is just less. But! We might assume, for the sake of argument, that we can think of society as having emergent goals, goals that do not benefit its members (or do not benefit a majority of its members, or something). In that case, however, my question is: Why should I care? Society's emergent goals can go take a flying leap, as can evolution's goals, the goals of my genes, the goals of the human species, and any other goals of any other entity that is not me or the people I care about.

Thanks for this post. I basically agree with you, and it's very nice to see this here, given how one-sided LW's discussion on death usually is.

I agree with you that the death of individual humans is important for the societal superorganism because it keeps us from stagnating. But even if that weren't true, I would still strongly believe in the value of accepting death, for pretty much exactly the reasons you mentioned. Like you, I also suspect that modern society's sheltering, both of children and adults, is leading to our obsession with preventing death ... (read more)

What would it mean to examine this issue dispassionately? From a utilitarian perspective, it seems like choosing between deathism and anti-deathism is a matter of computing the utility of each, and then choosing the one with the higher utility. I assume that a substantial portion of the negative utility surrounding death comes from the pain it causes to close family members and friends. Without having experienced such a thing oneself, it seems difficult to estimate exactly how much negative utility death brings.

(That said, I also strongly suspect that cultural views on death play a big role in determining how much negative utility there will be.)

I wish I could upvote this comment more than once. This is something I've struggled with a lot over the past few months: I know that my opinions/decisions/feelings are probably influenced by these physiological/psychological things more than by my beliefs/worldview/rational arguments, and the best way to gain mental stability would be to do more yoga (since in my experience, this always works). Yet I've had trouble shaking my attachment to philosophical justifications. There's something rather terrifying about methods (yoga, narrative, etc.) that work o... (read more)

Particularly frightening to me has been the idea that doing yoga or meditation might change my goals, especially since the teachers of these techniques always seem to wrap the techniques in some worldview or other that I may dislike.

Yesterday I was in a church, for a friend's wedding. I was listening to some readings from the Bible, about love (obviously 1 Cor 13) etc. I knew this was cherry-picking from a book that a few hundred pages sooner also describes how non-believers or people who violate some rule should be murdered. But still, the message was ... (read more)

5TheOtherDave
(nods) I think this happens to a lot of people, especially in our tribe... we tend to prefer to engage the world as an intellectual puzzle, even the parts of it that are better engaged with in other ways. I got better about this after my stroke and recovery, but it's still something I fall back to a lot. I hope it gets easier for you.

I was wondering this too. I haven't looked at this A_p distribution yet (nor have I read all the comments here), but having distributions over distributions is, like, the core of Bayesian methods in machine learning. You don't just keep a single estimate of the probability; you keep a distribution over possible probabilities, exactly like David is saying. I don't even know how updating your probability distribution in light of new evidence (aka a "Bayesian update") would work without this.

Am I missing something about David's post? I did go through it rather quickly.
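To make "a distribution over possible probabilities" concrete, here's a minimal Beta-Bernoulli sketch (my own illustration, not David's notation): the Beta distribution is a belief about a coin's bias, and each flip updates it by conjugacy.

```python
# A Beta(alpha, beta) distribution is a belief about a probability:
# alpha-1 heads and beta-1 tails "pseudo-observed" so far. Conjugacy
# makes the Bayesian update a simple count increment.
def update(alpha, beta, flips):
    for f in flips:
        if f:
            alpha += 1
        else:
            beta += 1
    return alpha, beta

# Uniform prior Beta(1, 1), then observe 3 heads and 1 tail.
alpha, beta = update(1, 1, [1, 1, 0, 1])
posterior_mean = alpha / (alpha + beta)
print(alpha, beta, posterior_mean)  # 4 2 0.666...
```

The posterior mean is the single-number summary, but the full Beta(4, 2) captures how confident we are in it, which is exactly what a point estimate of the probability throws away.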

Forgive me, but the premise of this post seems unbelievably arrogant. You are interested in communicating with "intellectual elites"; these people have their own communities and channels of communication. Instead of asking what those channels are and how you can become part of them, you instead ask how you can lure those people away from their communities, so that they'll devote their limited free time to posting on LW instead.

I'm in academia (not an "intellectual elite", just a lowly grad student), and I've often felt torn between my... (read more)

1Username
I would love to locate and learn how to integrate into more interesting high-signal channels! If anyone feels like they wouldn't be polluted with a little attention from LWers, would you mind sharing the ones you know?
3AlexMennen
Attracting academics to Less Wrong is not incompatible with approaching them through academic channels (which MIRI has been doing), and does not require separating them from academic communities (which I doubt MIRI intends to do). Point me to where Luke denied that academia has any advantages over LW. If you're going to claim that LW is obviously not "the highest-quality relatively-general-interest forum on the web", it would help your case to provide an obvious counterexample (academic channels themselves are generally not on the web, and LW has some advantages over them, even if the reverse is also true). LW is also not as homogeneous as you appear to believe; plenty of us are academics. It is at least as unreasonable to claim without justification that it is impossible to attract intellectual elites to LW, or that it would be bad for those people if they did.
0Lumifer
Nice rant :-) A bit overboard, though -- may I make a suggestion? Read it again, but replace "LW" with "internet discussion forum". That should put your statements like "LessWrong frames itself as an alternative to academia" or "LessWrong has repeatedly rejected academia" into proper perspective. LOL You do realize that LW has no shortage of grad students and even gasp! actual academics who read and post here?

LAUNCELOT: Look, my liege!
ARTHUR: Camelot!
GALAHAD: Camelot!
LAUNCELOT: Camelot!
PATSY: It's only a model.
ARTHUR: Shhh! Knights, I bid you welcome to your new home. Let us ride... to Camelot.
[singing]
We're knights of the round table
We dance when e'er we're able
We do routines and parlour scenes
With footwork impecc-Able.
We dine well here in Camelot
We eat ham and jam and spam a lot
[dancing]
We're knights of the Round Table
Our shows are for-mid-able
Though many times we're given rhymes
That are quite unsing-able
We not so fat in Camelot
We sing from the diaphragm a lot
[tap-dancing]
Oh we're tough and able
Quite indefatigable
Between our quests we sequin vests
And impersonate Clark Gable
It's a bit too loud in Camelot
I have to push the pram a lot.
ARTHUR: Well, on second thought, let's not go to Camelot -- it is a silly place.

Right.

Who are some of the best writers in the history of civilization?

Different writers have such different styles that I'm not sure it's possible to measure them all on a simple linear scale from "bad writing" to "good writing". (Or rather, of course it's possible, but I think it reduces the dimensionality so much that the answer is no longer useful.)

If I were to construct such a linear scale, I might do so by asking "How well does this writer's style serve his goals?" Or maybe "How well does this writer's style match his... (read more)

4Baughn
My mental image of writing quality is somewhat like a many-dimensional moss ball branching out from a central point. In the centre there is unequivocally bad writing, mostly written by writers with no experience writing. As you follow the moss outwards the writers get better, but they get better in different ways, and different readers have different requirements. It seems to match reality, at least somewhat. There are a lot more ways to be good at writing than there are to be bad at writing. Unfortunately, while this means it's possible to warn people away from bad books, it makes it hard to recommend good ones.

The tradeoff between efficiency and accuracy. It's essential for computational modeling, but it also comes up constantly in my daily life. It keeps me from being so much of a perfectionist that I never finish anything, for instance.

I cannot agree with this more strongly. I was burnt out for a year, and I've only just begun to recover over the last month or two. But one thing that speeded my recovery greatly over the last few weeks was stopping worrying about burnout. Every time I sat down to work, I would gauge my wanting-to-work-ness. When I inevitably found it lacking, I would go off on a thought spiral asking "why don't I like working? how can I make myself like working?" which of course distracted me from doing the actual work. Also, the constant worry about my bur... (read more)

1buybuydandavis
That's still a step up from the given alternative of miserable unproductivity.
3buybuydandavis
Bingo. When you have a seemingly intractable problem, always remember to ask yourself if what you're trying to do to solve the problem is making it worse. That's where intractable problems come from.

There are also things which are bad to learn for epistemic rationality reasons.

Sampling bias is an obvious case of this. Suppose you want to learn about the demographics of city X. Maybe half of the Xians have black hair, and the other half have blue hair. If you are introduced to 5 blue-haired Xians but no black-haired Xians, you might infer that all or most Xians have blue hair. That is a pretty obvious case of sampling bias. I guess what I'm trying to get at is that learning a few true facts (Xian1 has blue hair, Xian2 has blue hair, ... , Xian5 ha... (read more)
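A tiny simulation makes the hair example concrete (my own made-up numbers): the population really is 50/50, but if the process that introduces you to people rarely surfaces black-haired residents, the naive sample estimate lands far from the truth while every individual observation is still a true fact.

```python
import random

random.seed(0)
population = ['blue'] * 5000 + ['black'] * 5000  # ground truth: 50/50

def biased_sample(pop, n, p_meet_black=0.1):
    """You only 'meet' a black-haired person 10% of the times you otherwise would."""
    out = []
    while len(out) < n:
        person = random.choice(pop)
        if person == 'black' and random.random() > p_meet_black:
            continue  # the introduction process filters this person out
        out.append(person)
    return out

sample = biased_sample(population, 1000)
print(sample.count('blue') / len(sample))  # roughly 0.9, far from the true 0.5
```

Every entry in `sample` is a genuine Xian with genuinely that hair color; the falsehood lives entirely in the sampling process, which is the point.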

0Kurros
If a priori you had no reason to expect that the population was dominantly blue-haired, then you should begin to suspect some alternative hypothesis, like your sampling being biased for some reason, rather than believe everyone is blue-haired.

This kind of AI might not cause the same kinds of existential risk typically described on this website, but I certainly wouldn't call it "safe". These technologies have a huge potential to reshape our lives. In particular, they can have a huge influence on our perceptions.

All of our search results come filtered through google's algorithm, which, when tailored to the individual user, creates a filter bubble. This changes our perception of what's on the web, and we're scarcely even conscious that the filter bubble exists. If you don't know abou... (read more)

4bartimaeus
A post from the sequences that jumps to mind is Interpersonal Entanglement: If people gain increased control of their reality, they might start simplifying it past the point where there are no more sufficiently complex situations to allow your mind to grow, and for you to learn new things. People will start interacting more and more with things that are specifically tailored to their own brains; but if we're only exposed to things we want to be exposed to, the growth potential of our mind becomes very limited. Basically an extreme version of Google filtering your search results to only show you what it thinks you'll like, as opposed to what you should see. Seems like a step in the wrong direction.
4ChristianKl
I think most people don't like the idea of shutting down their own perception in this way. Having people go invisible to yourself feels like losing control over your reality. That said, humans are quite adaptable and can speak differently to the computer than they speak to their fellow humans. I mean, do parents speak with their 3-year-old toddler the same way they speak on the job? The computer is just an additional audience.
9savageorange
...and occasionally, they instead have direct implications for perception-filtering. Altering my query because you couldn't match a term, and not putting this fact in glaring huge red print, leads me to think there are actual results here, rather than a selection of semi-irrelevance. Automatically changing my search terms is similar in effect -- no, I don't care about 'pick', I'm searching for 'gpick'! This is worse than mere suggestions ;) I can notice these things, but I also wonder whether Google Glass users would have their availability heuristic become even more skewed by these kinds of misleading behaviours. I wonder whether mine is.
7ThrustVectoring
How much this is true is up for quite a bit of debate. Sapir-Whorf hypothesis and whatnot.

Hmm, you're probably right. I guess I was thinking that quick heuristics (vocabulary choice, spelling ability, etc.) form a prior when you are evaluating the actual quality of the argument based on its contents, but evidence might be a better word.

Where is the line drawn between evidence and prior? If I'm evaluating a person's argument, and I know that he's made bad arguments in the past, is that knowledge prior or evidence?

1Luke_A_Somers
Where that goes depends on whether you're evaluating "He's right" or "This argument is right".
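One way to cash out the prior-vs-evidence question: in odds form, they are just bookkeeping stages of the same update. Knowledge of past bad arguments was evidence when you acquired it, and it sits in your prior the next time you evaluate the person. A minimal sketch, with likelihood ratios invented for illustration:

```python
# Odds-form Bayes: yesterday's posterior is today's prior.
# The likelihood ratios below are made-up numbers, not from the discussion.

def update(prior_odds, likelihood_ratio):
    """One Bayesian update in odds form."""
    return prior_odds * likelihood_ratio

odds = 1.0                       # 1:1 prior that "this argument is right"
odds = update(odds, 0.5)         # he's made bad arguments before (assumed LR 0.5)
odds = update(odds, 3.0)         # but this argument checks out (assumed LR 3.0)

probability = odds / (1 + odds)  # 1.5 / 2.5 = 0.6
print(probability)
```

Whether you call the first update "prior" or "evidence" changes nothing about the final number; what matters, as Luke_A_Somers notes, is which proposition ("he's right" vs. "this argument is right") the likelihood ratios are about.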

Unless the jargon perpetuates a false dichotomy, or otherwise obscures relevant content. In politics, those who think in terms of a black-and-white distinction between liberal and conservative may have a hard time understanding positions that fall in the middle (or defy the spectrum altogether). Or, on LessWrong, people often employ social-status-based explanations. We all have the jargon for that, so it's easy to think about and communicate, but focusing on status-motivations obscures people's other motivations.

(I was going to explain this in terms of dimensionality reduction, but then I thought better of using potentially-obscure machine learning jargon. =) )

I agree with you that it's useful to optimize communication strategies for your audience. However, I don't think that always results in using shared jargon. Deliberately avoiding jargon can presumably provide new perspectives, or clarify issues and definitions in much the way that a rationalist taboo would.

6James_Miller
But good jargon reduces the time it takes to communicate ideas and so allows for more time to gain new perspectives.

This is closely related to something my friend pointed out a couple of weeks ago. Jargon doesn't just make us less able to communicate with people from outside groups - it makes us less willing to communicate with them.

As truth-seeking rationalists, we should be interested in communicating with people who make good arguments, consider points carefully, etc. But I think we often judge someone's rationality based on jargon instead of the content of their message. If someone uses a lot of LessWrong jargon, it gives a prior that they are rational, which may bia... (read more)

5Luke_A_Somers
That's not what prior means. You mean evidence.

I think it's a grave mistake to equate self-esteem with social status. Self-esteem is an internal judgment of self-worth; social status is an external judgment of self-worth. By conflating the two, you surrender all control of your own self-worth to the vagaries of the slavering crowd.

Someone can have high self-esteem without high social status, and vice versa. In fact, I might expect someone with a strong internal sense of self-worth to be less interested in seeking high social status markers (like a fancy car, important career, etc.). When I say &quo... (read more)

1gothgirl420666
Yeah, I was using the term self-esteem in a specific sense to mean "the result of some primitive algorithm in the brain that attempts to compute your tribal status". I tried to find some alternative term to call the result of this algorithm to prevent this exact confusion, but everything I could come up with was awkward. Maybe "status meter"? I agree with you in that I think there's only a moderate correlation between the result of this algorithm and a person's self-worth as it's usually understood.

I don't really agree with this, assuming that I'm right in reading you as saying "A low-status person can hack their brain into running off the high-status algorithm by developing a strong sense of self-worth." At least it's not true for me personally. To be completely honest, I think I'm very intelligent and creative, and I do spend a sizeable chunk of every day working on my major life goals, which I enjoy doing. But at the same time, I would definitely say I'm running off of a low-status algorithm in most of my interactions. And even self-esteem purely in social interactions doesn't really seem to help my "status meter". For example, when I lost my virginity, I thought that it would make talking to girls much easier in the future. But this didn't really happen at all.

Yeah, now that I think about it, this seems like the weakest link in my argument. I imagine most people fluidly switch from low status to high status algorithms on a regular basis depending on who they're interacting with. But maybe there's also a sort of larger meter somewhere in the brain that maintains a more constant level and guides long-term behavior? I don't know.

Thank you for your response, though - this is definitely the most interesting response I've gotten for this comment. :)

Regarding PUA jargon...

I'm female and submissive and I've always been attracted to guys about eight years older than me. (When I say "always", I mean since my first serious crush at age 13.) My parents are feminists, they're the same age as each other, and they strongly believe in power equality in relationships. Thus, growing up, I always thought there was something terribly wrong with me.

In college, I learned about PUA and alpha males an all of that. Suddenly, here was an ideological system that treated my desires as natural instead of pe... (read more)

7A1987dM
I think PUAs' essentialist explanations are correct statistically, the way men are taller than women statistically, but there still are quite a few five-foot-six (1.68 m) men and five-foot-eleven (1.80 m) women.
7[anonymous]
In the interests of luminosity, to what extent do you believe this statement is an example of the naturalistic fallacy? That is, if feminism is an ethical stance, then it is concerned with how people ought to act, not how they do act. Your justification of PUA seems to be that it better describes reality, which wasn't the goal of feminism to begin with.

From what I could tell, feminism was just another optimistic belief system built on a very common but very rotten foundation: the idea that humans are rational creatures, that our rationality elevates us high above our brutal and bestial forebears.

I believe that at least a large part of feminism was created by women who were unhappy in an environment which didn't suit their personalities and/or made it very easy for men to abuse women.

Suddenly, here was an ideological system that treated my desires as natural instead of perverted.

Revolutions generally come from an impulse to throw off imposed ideals, but usually end up imposing new ideals. The king is dead. Long live the king.

The desire for freedom is freedom from a constraint, and doesn't allow for the natural coalition-building and power accretion of those who would impose constraints.

...I'm not really sure why I'm telling this story.

My guess - you saw the value to yourself of seeing your views not being portrayed as perverted, and took the opportunity to give the same kind of support to others who might feel that way.

I am female, and (to a large extent) my experience agrees with Submitter E's. I'm glad to see this posted here, because after reading the other LW and Women posts, I had begun to suspect that I was a complete outlier, and that I couldn't use my own experiences as a reference point for other women's at all.

This relates to something I've been concerned about with regard to social justice discussions: the discussions actively discourage people from saying that they aren't being hurt, even though they're in a group which (probably) gets hurt more than the other group on the same axis.

While I can see discouraging people from saying "I'm not getting hurt, therefore getting hurt almost never happens/doesn't matter", leaving out single data points about not getting hurt leads to another version of not knowing what's going on.

Do you know of any features for predicting who will recover from burnout, and who won't?

4Shmi
I would have quit if the alternative wasn't even more unbearable.

You may be interested in the literature on "concept learning", a topic in computational cognitive science. Researchers in this field have sought to formalize the notion of a concept, and to develop methods for learning these concepts from data. (The concepts learned will depend on which specific data the agent encounters, and so this captures some of the subjectivity you are looking for.)

In this literature, concepts are usually treated as probability distributions over objects in the world. If you google "concept learning" you should find some stuff.
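For a flavor of what that literature does, here is a toy sketch in the spirit of the "number game" style of Bayesian concept learning (my own simplification, not code from any paper): concepts are sets of objects, the likelihood of an example under a concept is 1/|concept| (the "size principle"), and so the data favor the tightest concept that contains them.

```python
# Toy Bayesian concept learning over numbers 1..100.
# Hypotheses (concepts) are sets; prior is uniform over hypotheses (assumed).

hypotheses = {
    "even":            {n for n in range(1, 101) if n % 2 == 0},
    "powers of two":   {1, 2, 4, 8, 16, 32, 64},
    "multiples of 10": {n for n in range(1, 101) if n % 10 == 0},
}

def posterior(data, hypotheses):
    """Posterior over concepts given examples, using the size principle."""
    scores = {}
    for name, h in hypotheses.items():
        if all(x in h for x in data):
            scores[name] = (1 / len(h)) ** len(data)  # each example: likelihood 1/|h|
        else:
            scores[name] = 0.0  # concept can't generate the data
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

post = posterior([2, 4, 8], hypotheses)
best = max(post, key=post.get)
print(best)  # the smallest consistent concept wins
```

Given the examples 2, 4, 8, both "even" and "powers of two" are consistent, but the smaller concept gets much more posterior mass — which is the sense in which the learned concept depends on exactly which data the agent happens to see.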

This is one of the big reasons that niceness annoys me. I think I've developed a knee-jerk negative reaction to comments like "good job!" because I don't want to be manipulated by them. Even when the speaker is just trying to express gratitude, and has no knowledge of behaviorism, "good job!" annoys me. I think it's an issue of one-place vs. two-place predicates - I have no problem with people saying "I like that" or "I find that interesting".

If I let my emotional system process both statements without filtering, ... (read more)

0buybuydandavis
I have a real problem with "good job". For me, it's associated with positive encouragement after someone screws up on a team sport. No, it wasn't a good job, it was a screw-up. It's epitomized by a very sweet, very positive Christian girl in PE class, while playing volleyball. The contrast of a hypersaccharine "good job!" and the annoyance at the screw-up had me grinding my teeth.

The more general issue with "good job" is the inherent condescension. I am your superior, here to judge your performance and pat you on the head to encourage you to improve. No thanks.

That goes a little with your point about the difference between "good job" and "I like that". I made a similar distinction between "You're wrong"/"That's wrong" and "I disagree". It does seem less annoying or insulting to have people phrase their opinion in terms of themselves, instead of as an objective fact about you or reality. Convey your evaluation as your evaluation, instead of as an objective fact. Seems to feel better for the issues we're annoyed with.

But do the nicies want to hear "good jobs" that the meanies don't?

Hmm, so I'm thinking about smileys and exclamation points now. I don't think they just demonstrate friendliness - I think they also connote femininity. I used to use them all the time on IRC, until I realized that the only people who did so were female, or were guys who struck me as more feminine as a result. I didn't want to be conspicuously feminine on IRC, so I stopped using smileys/exclamation points there.

It never bothered me when other people didn't use smileys/exclamations. But when I stopped using them on IRC, everything I wrote sounded cold or... (read more)

9mstevens
Okay, after threatening, I had a go at hacking up a smiley gender detector for lesswrong irc. Looking at the counts of smileys-per-message by nick, no obvious pattern. Looking at averages: male avg 0.015764359871 female avg 0.0194180023583 The dataset I'm using is so male dominated I don't think the results can be particularly meaningful.
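The detector mstevens describes might look something like the sketch below (the smiley regex, nicks, and log lines are invented; this is not the actual script): count smiley matches per message, then average per nick.

```python
# Rough sketch of a per-nick smiley counter for an IRC log.
# The pattern and sample data are made up for illustration.
import re
from collections import defaultdict

SMILEY = re.compile(r"[:;=]-?[)(DPp]")  # matches :) ;-) =D etc.

def smileys_per_message(log):
    """log: list of (nick, message) pairs; returns nick -> avg smileys per message."""
    counts = defaultdict(lambda: [0, 0])  # nick -> [smiley count, message count]
    for nick, message in log:
        counts[nick][0] += len(SMILEY.findall(message))
        counts[nick][1] += 1
    return {nick: s / m for nick, (s, m) in counts.items()}

log = [("alice", "that worked :)"), ("alice", "thanks!"), ("bob", "ok")]
print(smileys_per_message(log))  # {'alice': 0.5, 'bob': 0.0}
```

As mstevens notes, with a heavily male-skewed channel the per-group averages from something like this wouldn't be particularly meaningful, whatever they came out to.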
1curiousepic
I am male, don't associate smileys with femininity, and often use them in most text conversations and also posts online if I would smile in meatspace when saying what I'm typing (which usually is not the case in work emails). It can occasionally put me on edge if I type with someone who does not use them, in a conversation where I would expect them to smile in meatspace.
3Luke_A_Somers
My associations... Well, first I check if the smiley significantly clarifies the tone of the comment. If so, I take that as the explanation. Beyond that, I associate youth, extroversion, being hip to tech, and emotional openness. This last has a tendency to be associated as feminine, though not particularly by me.
5mstevens
we must create a smiley based gender detector! for science!
4jooyous
RE: smileys in formal settings. I grew up speaking Russian, which is a language that has a formal-you pronoun, and I spent most of my school life feeling really weird writing "you" to adults in emails, because it felt too friendly and rude and presumptuous. Badly-raised child! I generally don't use smileys in professional emails unless the other person has used them first or I really want to make a nerdy joke. But sometimes that policy feels weird if your co-workers in meatspace are fun, joking, informal people. Why would you limit yourself with people if you know you don't have to? Also, I will add this link to a relevant post you might find interesting, mostly because I didn't notice this until the author pointed it out but also because I'm proud that I managed to hunt it down. (It is unfortunately not that well-written and touches on a lot of mind-killer topics.)

Do other people associate smileys and exclamations with femininity, or is it just me?

Apparently! I started talking to someone about this and he just told me this exact thing independently of you. He said men can only use smileys with women because it's flirting. (??) Which is weird to me because I've met men who are WAY more animated than I am in meatspace. Do they also not use exclamation marks? I don't think I'd be able to chat with them online if they didn't; my brain would explode.

But actually, I think this whole issue comes up because we subconsciousl... (read more)

Hmm, I definitely see where you're coming from, and I don't (usually) want my comments to hurt anyone. If my comments were consistently upsetting people when I was just trying to have a normal conversation, then I would want to know about this and fix it - both because I actually do care about people's feelings, and because I don't want to prevent every single interesting person from conversing with me. It would take a lot of work, and it would go against my default conversational style, but it would be worth it in the long run.

However, it sounds more li... (read more)

4buybuydandavis
Yes, any niceness level will involve a trade off between the two preferences. I prefer a leaner and meaner LW.

Personally, I find the niceness-padding to be perfectly well-calibrated for dealing with disagreements because people are thoughtful and respectful. I find it to be insufficient when dealing with people talking past each other. It's really frustrating! This is a community full of interesting, intelligent people whose opinion I want to know ... that sometimes aren't bothering to carefully read what I wrote. And then not bothering to read carefully when I politely tell them that they misread what I wrote and clarify. So then I start thinking that this isn't ... (read more)

1ahartell
I would tend towards the last two, I think, and wouldn't find the fourth to be rude (though it might depend on the nature and scale of the clarifications, with this method being most apt for smaller ones). However, I am one of those who likes the style of discussion on LessWrong.

I agree with your second paragraph completely, and I would be averse to comments whose only content was "niceness". I'm on LW for intellectual discussions, not for feel-goodism and self-esteem boosts.

I think it's worth distinguishing niceness from respect here. I define niceness to be actions done with the intention of making someone feel good about him/herself. Respect, on the other hand, is an appreciation for another person's viewpoint and intelligence. Respect is saying "We disagree on topic X, but I acknowledge that you are intellig... (read more)

2buybuydandavis
Those are all things I'd have to discover about you. There are some here I consider worthwhile conversation partners because I recognize their usernames and have formed opinions of them. I don't expect respect from people who don't know me, and I don't even expect it from those that do know me. I am not due respect from anyone, I have to earn it, by their lights.
-1jooyous
I feel like part of this is not acknowledging that quite a few people will experience non-fuzzy or anti-fuzzy feelings if they are disagreed with in a dismissive way. Or maybe when they feel like they are disagreed with in a dismissive way. And this may happen while the disagree-er is completely oblivious to this perception, and I think it is a little bit on the disagree-er to add some padding of niceness? Like, you're going to be a bit careful if you're in danger of accidentally stepping on people's feet in real life, right? That has pretty little to do with respect and more to do with compassion. It's a mutual understanding that human feet are squishy and hurt to be stepped on. Or you'd add niceness if you accidentally offend someone in a meatspace discussion? So why not here? I feel like it doesn't take away from the discussion to say "Oh sorry! I really meant [this]" instead of "I said [this] not [that]," which sounds pretty unfriendly on the internet. (Also, I feel like I'm the only person here that regularly uses exclamation marks.) I feel like I've come across a lot of discussions where it's pretty obvious that the parties involved are frustrated, but they don't acknowledge it because there's a little bit of that Spocklike rationalists-don't-get-frustrated attitude still lingering around.
9Vaniver
That's a good interpretation, but I wonder if status is a simpler lens. Defining people and their traits is a high-status thing; the guy retorting that she's a thinker moves power from her to him in a way that suggesting wouldn't. Respect also seems subjective; I have basically stopped stating opinions around a friend whose rationality I do not respect because I don't think discussing contentious subjects with them is a good use of either of our times. If they say that they're a good judge of character, and I can think of three counterexamples, I'll only state those counterexamples if I respect them enough to think they can handle it. I also wonder about how much respect is subject-specific, and how much it's global. I can easily imagine someone who I trust when it comes to mathematics but don't trust when it comes to introspection.

Your comment has me wondering whether some folks expect niceness and respect to correlate. I've noticed some social contexts where fake niceness seems to be expected to cloak lack of respect. I wouldn't be surprised if some people around here are embittered from experiences with that.

It sounds like part of what Submitter B is complaining about is lack of respect. The guys she dated didn't respect her intellect enough to believe assertions she made about her internal experiences. I suspect this is a dearth of respect that no quantity of friendliness can r

... (read more)