I've recently been thinking: mispronunciation of names (of books, concepts, intellectuals) is usually taken as a sign that the person in question doesn't really have a place in the debate and is in way over his head, faking knowledge for status signalling.
But if the person makes cogent arguments and generally shows understanding beyond the mispronunciation, what does this really say about him? Why does he still carry a clear low-status penalty? Is this perhaps low status because it's a signal that the person, while otherwise of sufficient calibre, displays one or more of the following undesirable traits:
In the modern world, with the advent of the internet, the fraction of people for whom c) is true doesn't seem to have increased.
Is the incentive to discriminate based on points a or b perhaps stronger than ever, because one can't signal proper class and group affiliations simply by having wide interests and being well read and well versed in various facts (a set of traits now also true of very low-status Wikipedia-bingers)? Could this perhaps be related to stronger credentialism, even in circumstances outside of simple economic calculations?
Cheap information, in other words, means one can't signal high status and intelligence by possessing said newly cheapened information.
Information that is only trivially harder to get, but that one tends to acquire anyway if one receives one's cheap information from the proper sources, seems a useful substitute in some circumstances.
Overall, I'm not really sure if this is just about signalling or if it's a good heuristic to weed out a stronger onslaught of those who produce only facsimiles of knowledge. One is hard-pressed to deny that a culture of intellectual overconfidence has been fostered in certain fields by cheap information.
I think one possibility you're not considering, and one which applies frequently when mispronunciation is used as a negative signal, is the case of a shibboleth. A mispronunciation becomes a shibboleth when it's frequently discussed, joked about, known in the field. If you fail the shibboleth test, that's evidence that you're not familiar enough with the field to have encountered the test before, and that may legitimately be a negative signal.
E.g. consider the mispronunciation of nuclear as nu-cu-lar, which carries a stigma and is liable to get you mocked. The mispronunciation has become, by itself, a widely known symbol. There are genuine linguistic reasons for this mispronunciation, and many people grow up with it "innocently", so to speak, but then correct themselves as they start studying physics, because they've heard or read about its shibboleth status; even if you're self-taught and get most of your knowledge from books, it's difficult to miss it. Thus if you pronounce nu-cu-lar, I can infer that either you haven't seen its shibboleth status discussed/maintained, which hints that you haven't been much around physics, in verbal or written form; or that you've seen that, but didn't care to update - your c), which happens rather infrequently.
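In odds form, the inference runs something like this (a toy formalization of my own, just to show the direction of the update):

\[
\frac{P(\text{outsider} \mid \text{nu-cu-lar})}{P(\text{insider} \mid \text{nu-cu-lar})} \;=\; \frac{P(\text{nu-cu-lar} \mid \text{outsider})}{P(\text{nu-cu-lar} \mid \text{insider})} \cdot \frac{P(\text{outsider})}{P(\text{insider})}
\]

Because the shibboleth is hard to miss once you've spent any time around physics, P(nu-cu-lar | insider) is small, so the likelihood ratio is large and the mispronunciation is strong evidence of outsider status.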
Contrast this with the many possible and actual mispronunciations in physics that did not achieve shibboleth status - in those cases, I think, people most often don't care. Is "boson" pronounced with [s] or [z]? The dictionary happily lists both. Names are mangled on a regular basis without listeners batting an eye, as long as they understand who is meant. And so forth. Even if the listener could plausibly infer your a), I haven't found it to be a strong negative signal in many circles.
I was reminded to post this because we didn't have an open thread I could put this link in: Rules for AIs who want to edit Wikipedia.
Interesting. If you know of any other similar codifications of Rules for AIs that have been proposed or promulgated for other potential AI playgrounds, I would be interested in seeing links.
A question about this:
AI editors may not port themselves to other editors' machines unless invited so to do.
Does "other editors' machines" refer to the machines hosting other Wikipedian AIs, or to machines owned by human Wikipedians? I notice one curious feature of this rule: it seems to proscribe behavior that is not directly of concern to Wikipedia itself, but rather behavior which victimizes members of the Wikipedia community. No sanction against AIs who engage in this activity is mentioned. Presumably, the sanction would be loss of editing privileges. Is this assumption correct?
It's an essay with humorous intent, I have no idea how much the editor actually knows about the subject. I'd suggest asking on the talk page :-)
It seems to me that on LessWrong there is an overemphasis on status as a human motivator. For example, I think it's possible for a scientist to want to make an important discovery not to gain status in the scientific community, but for the beauty of knowledge.
It seems like an 'if you're a hammer, you see all problems as nails' kind of situation, where 'doing it for status' is such a readily available explanation that it gets over-applied.
Thoughts?
"Possible" is not a refutation of a general statement, only of an absolute one.
Rather, I suspect the emphasis is to compensate for nerds of various sorts - who make up most of the LessWrong audience - placing far less emphasis on status than most people do, and thus failing to understand the overwhelming power of tribal politics in almost every human interaction.
Remember: we grew this great big brain just to do tribal politics. We grew general intelligence as a better way to do tribal politics. We discovered quantum mechanics and built a huge technological civilisation as side-effects of a mechanism to do tribal politics better. So I'd say that stuff is likely important to dealing effectively with other people, i.e., winning.
Good question, though :-)
We discovered quantum mechanics and built a huge technological civilisation as side-effects of a mechanism to do tribal politics better.
I cannot begin to express how delighted I am to hear someone else saying this. I'm dancing a little dance of glee in my chair.
Indeed! Why, we've formed our own little transient political alliance here. I can practically hear the endorphin-secreting glands squirting away in response.
Remember: we grew this great big brain just to do tribal politics. We grew general intelligence as a better way to do tribal politics. We discovered quantum mechanics and built a huge technological civilisation as side-effects of a mechanism to do tribal politics better. So I'd say that stuff is likely important to dealing effectively with other people, i.e., winning.
You're sure the theory that the large brain is at least partly for precision throwing is wrong?
You're sure the theory that the large brain is at least partly for precision throwing is wrong?
A very, very small 'part' perhaps. That sort of specialised behavior doesn't particularly need a massive cortex.
If anything, that's even worse news for nerds. It'd provide a handy evolutionary explanation for why basketball players tend to be popular, though.
It seems to me that on lesswrong there is an overemphasis on status as a human motivator. For example, I think it's possible for a scientist to want to make an important discovery not to gain status in the scientific community but for the beauty of knowledge.
I think you miss the point of how status is related to motivation. Relatively few people actually think "I want status and so I will do X". Instead, they just actually want to do X because that is what they feel like doing. However when we wish to model or predict how humans will behave the status concept is powerful. "What would we expect people to do in this situation assuming they were optimized to do what would work to gain social status in their environment of adaptation?" often gives good predictions of what people will do.
Note that people's feelings and desires being real and sincere does not make a behavior less about status. Likewise, a behavior being about status does not make feelings and desires less 'real'.
I'm unsure of whether it is overused. However, I'm not convinced that the heavy emphasis on status pays much of its rent. What do the overarching status hypotheses predict that we would expect not to see if they weren't the case?
It's certainly possible, but that doesn't mean that status isn't a powerful motivator, and one which we're far more likely to underestimate.
The "hammer that makes you see all problems as nails" bit is a description I've used myself, though, in regard to Robin Hanson in particular and his treatment of status and signalling. On Overcoming Bias, far more than here, I get the impression that a lot of the essays develop out of posing a question and asking "is there a way I can use status and signalling to explain this?"
I don't think so, other than the massive security issues of having an Internet-connected device be authorised to spend your money at the push of a button. Every bank I've been with required me to employ a handheld security device to verify a one-time code before authorising direct online payments, but the alarm clock could just use the credit card system instead.
Tenet-style "undying", super coincidental anti-entropy. Stories don't have tidy beginnings. The past is always present.
Everything is possible. Quantum immortality refers to subjective anticipation: a ranking of future observations according to their importance for future plans that one can prepare and carry out with an impact on the environment (utility). Since an agent can't personally carry out plans in possible worlds where it's dead, those worlds have low priority in reflection on possible plans, and thus most anticipation is concentrated where the agent survives, however improbable that is.
On the other hand, one doesn't prepare plans for carrying out in the past, and so the heuristic of subjective anticipation doesn't apply. There is no salient decision-theoretic measure to be concentrated in the possible worlds where the agent operated in the past.
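A crude way to formalize this (my own notation, nothing standard): weight each possible observation o by

\[
w(o) \;\propto\; P(o)\,\mathbf{1}[\text{the agent can still act in } o].
\]

After normalization, all of the anticipation mass lands on worlds where the agent survives, no matter how small the total probability of survival is; and since there is no analogous indicator for "can act in the past," nothing forces a similar concentration onto any particular past worlds.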
Two webcomics that seemed salient:
Does anyone have decent experience with, or references about, the viability of biphasic sleep, i.e. sleep in two chunks of, say, at least 2 hours each?
I can't rely on the polyphasic crowd (who don't collect data and don't care about memory problems at all) and can't find any serious attempt at it besides "if you're 60 and don't do much, then it's just fine".
I'm going to Florida next week to watch the shuttle launch, which is the second-to-last launch of the program. Will any other LWians be watching?
According to my understanding, ideal utilitarians may be modeled as selfish agents for whom any world state that induces a positive change in another being's utility function induces a proportional positive change in their own utility function, and any world state that induces a negative change in another being's utility function induces a proportional negative change in their own.
My question is regarding the 'proportional' bit -- under the standard definition of a utilitarian, would a utilitarian bias its decisions towards helping beings whose utility functions have a greater capacity for increase and decrease (in other words, beings who feel more strongly than others)?
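One way to make the model concrete (my own formalization, not a standard definition): write the utilitarian's own utility for world state s as

\[
U_{\text{self}}(s) \;=\; \sum_i k_i \,\Delta U_i(s), \qquad k_i > 0,
\]

where \Delta U_i(s) is the change s induces in being i's utility and k_i is the proportionality constant for being i. The question then splits in two: are the k_i equal across beings? And note that even with equal k_i, beings with a larger attainable range of \Delta U_i dominate the sum simply because their terms are bigger - so equal constants already produce a bias toward those who feel more strongly.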
There have been a few open threads in the discussion section - the concern was that some subjects were too minor to deserve separate discussion posts.
We're getting late with these ...
This thread is for the discussion of Less Wrong topics that have not appeared in recent posts and are too short or inchoate even for a discussion post. If a discussion gets unwieldy, celebrate by turning it into a top-level post.