Comment reply 1 of 2.
I don't recall attempting to make any (partial) jokes, no. I'm not sure what you're referring to as "these reactions". I'll try to respond to what I think is your (not necessarily explicit) question. I'm sort of responding to everyone in this thread.
When I suspect that a negative judgment of me or some thing(s) associated with me might be objectively correct or well-motivated---when I suspect that I might be objectively unjustified in a way that I hadn't already foreseen, even if it would be "objectively" unreasonable for me/others to expect me to have foreseen it---well, that causes me to, how should I put it?, "freak out". My omnipresent background fear of being objectively unjustified causes me to actually do things, like update my beliefs, or update my strategy (e.g. by flying to California to volunteer for SingInst), or help people I care about (e.g. by flying back to Tucson on a day's notice if I fear that someone back home might be in danger). This strong fear of being objectively (e.g. reflectively) morally (thus epistemically) antijustified---contemptible, unvirtuous, not awesome, imperfect---has been part of me forever. You can see why I would put an abnormally large amount of effort into becoming a decent "rationalist", and why I would have learned abnormally much, abnormally quickly from my year-long stint as a Visiting Fellow. (Side note: It saddens me that there are no longer any venues for such in-depth rationality training, though admittedly it's hard/impossible for most aspiring rationalists to take advantage of that sort of structure.) You can see why I would take LW's reactions very, very seriously---unless I had some heavyweight ultra-good reasons for laughing at them instead.
(It's worth noting that I can make an incorrect epistemic argument and this doesn't cause me to freak out as long as the moral-epistemic state I was in that caused me to make that argument wasn't "particularly" unjustified. It's possible that I should make myself more afraid of ever being literally wrong, but by default I try not to compound my aversions. Reality's great at doing that without my help.)
"Luckily", judgments of me or my ideas, as made by most humans, tend to be straightforwardly objectively wrong. Obviously this default of dismissal does not extend to judgments made by humans who know me or my ideas well, e.g. my close friends if the matter is moral in nature and/or some SingInst-related people if the matter is epistemic and/or moral in nature. If someone related to SingInst were to respond like Less Wrong did then that would be serious cause for concern, "heavyweight ultra-good reasons" be damned; but such people aren't often wrong and thus did not in fact respond in a manner similar to LW's. Such people know me well enough to know that I am not prone to unreflective stupidity (e.g. prone to unreflective stupidity in the ways that Less Wrong unreflectively interpreted me as being).
If they were like, "The implicit or explicit strategy that motivates you to make comments like that on LW isn't really helping you achieve your goals, you know that right?", then I'd be like, "Burning as much of my credibility as possible with as little splash damage as possible is one of my goals; but yes, I know that half-trolling LW doesn't actually teach them what they need to learn.". But if they responded like LW did, I'd cock an eyebrow, test if they were trolling me, and if not, tell them to bring up Mage: The Ascension or chakras or something next time they were in earshot of Michael Vassar. And if that didn't shake their faith in my stupidity, I'd shrug and start to explain my object-level research questions.
The problem of having to avoid the object-level problems when talking to LW is simple enough. My pedagogy is liable to excessive abstraction, lack of clear motivation, and general vagueness if I can't point out object-level weird slippery ideas in order to demonstrate why it would be stupid not to load your procedural memory with lots and lots of different perspectives on the same thing, or in order to demonstrate the necessity and nature of many other probably-useful procedural skills. This causes people to assume that I'm suggesting certain policies only out of weird aesthetics or a sense of moral duty, when in reality, though aesthetic and moral reasons also count, I'm actually frustrated because I know of many object-level confusions that cannot be dealt with satisfactorily without certain knowledge and fundamental skills, and also can't be dealt with without avoiding many, many, many different errors that even the best LW members are just not yet experienced enough to avoid. And that would be a problem even if my general audience weren't already primed to interpret my messages as semi-sensical notes-to-self at best. ("General audience", because sadly my intended audience mostly doesn't exist, yet.)
This cleared things up somewhat for me, but not completely. You might consider making a post that explains why your writing style differs from other writing and what you're trying to accomplish (in a style that is more easily understood by other LWers) and then linking to it when people get confused (or just habitually).
Anyone who does not believe mental states are ontologically fundamental - i.e. anyone who denies the reality of something like a soul - has two choices about where to go next. They can try reducing mental states to smaller components, or they can stop talking about them entirely.
In a utility-maximizing AI, mental states can be reduced to smaller components. The AI will have goals, and those goals, upon closer examination, will be lines in a computer program.
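To make the contrast concrete, here is a minimal hypothetical sketch (names and numbers invented for illustration) of an agent whose goal really is a line in its program: an explicit, inspectable utility term.

```python
# Hypothetical sketch: the "goal" of a utility-maximizing agent exists
# as an explicit, inspectable line of code.

def utility(outcome):
    # The goal lives right here: more paperclips is better.
    return outcome.get("paperclips", 0)

def choose_action(actions, predicted_outcomes):
    # Standard utility maximization: pick the action whose predicted
    # outcome scores highest under the explicit utility function.
    return max(actions, key=lambda a: utility(predicted_outcomes[a]))
```

Upon closer examination of this agent, you find its goal: it is the body of `utility`.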
But in the blue-minimizing robot, its "goal" isn't even a line in its program. There's nothing that looks remotely like a goal in its programming, and goals appear only when you make rough generalizations from its behavior in limited cases.
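By contrast, a hypothetical sketch of the blue-minimizing robot's entire program might look like this (the threshold and frame representation are invented for illustration) - note that no variable or line anywhere represents "minimize blue":

```python
# Hypothetical sketch of the blue-minimizing robot's entire program.
# There is no goal variable anywhere below: just a fixed stimulus-response rule.

BLUE_THRESHOLD = 0.5  # invented calibration constant

def fraction_blue(frame):
    """Fraction of 'blue' pixels in a frame (frame = list of rows of color names)."""
    pixels = [px for row in frame for px in row]
    return pixels.count("blue") / len(pixels) if pixels else 0.0

def control_step(frame, fire_laser):
    # The robot's whole behavioral repertoire: enough blue in view -> shoot.
    if fraction_blue(frame) > BLUE_THRESHOLD:
        fire_laser()
```

An observer generalizing from its behavior may ascribe it the goal "minimize blue", but that goal corresponds to no line above; it exists only in the observer's summary.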
Philosophers are still very much arguing about whether this applies to humans; the two schools call themselves reductionists and eliminativists (with a third school of wishy-washy half-and-half people calling themselves revisionists). Reductionists want to reduce things like goals and preferences to the appropriate neurons in the brain; eliminativists want to prove that humans, like the blue-minimizing robot, don't have anything of the sort until you start looking at high level abstractions.
I took a similar tack answering ksvanhorn's question in yesterday's post - how can you get a more accurate picture of what your true preferences are? I said:
A more practical example: when people discuss cryonics or anti-aging, the following argument usually comes up in one form or another: if you were in a burning building, you would try pretty hard to get out. Therefore, you must strongly dislike death and want to avoid it. But if you strongly dislike death and want to avoid it, you must be lying when you say you accept death as a natural part of life and think it's crass and selfish to try to cheat the Reaper. And therefore your reluctance to sign up for cryonics violates your own revealed preferences! You must just be trying to signal conformity or something.
The problem is that not signing up for cryonics is also a "revealed preference". "You wouldn't sign up for cryonics, which means you don't really fear death so much, so why bother running from a burning building?" is an equally good argument, although no one except maybe Marcus Aurelius would take it seriously.
Both these arguments assume that somewhere, deep down, there's a utility function with a single term for "death" in it, and all decisions just call upon this particular level of death or anti-death preference.
A better explanation of how people actually behave is that there's no unified preference for or against death, but rather a set of behaviors. Being in a burning building activates fleeing behavior; contemplating death from old age does not activate cryonics-buying behavior. People guess at their opinions about death by analyzing these behaviors, usually with a bit of signalling thrown in. If they desire consistency - and most people do - maybe they'll change some of their other behaviors to conform to their hypothesized opinion.
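This picture can be sketched as a lookup from contexts to responses - a purely illustrative toy, with invented contexts, and crucially with no shared "death" parameter tying the entries together:

```python
# Hypothetical sketch: behavior as context-triggered responses rather than
# a single "death" term in a utility function. The entries are independent;
# nothing forces them to be consistent with one underlying preference.

BEHAVIOR_TRIGGERS = {
    "smoke and flames nearby": "flee the building",
    "offered a cryonics brochure": None,  # no behavior activated
}

def respond(context):
    # Each context either activates a behavior or it doesn't.
    return BEHAVIOR_TRIGGERS.get(context)
```

Inferring a "revealed preference" about death from one entry tells you nothing about the other, which is why both cryonics arguments above misfire.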
One more example. I've previously brought up the case of a rationalist who knows there's no such thing as ghosts, but is still uncomfortable in a haunted house. So does he believe in ghosts or not? If you insist on there being a variable somewhere in his head marked $belief_in_ghosts = (0,1) then it's going to be pretty mysterious when that variable looks like zero when he's talking to the Skeptics Association, and one when he's running away from a creaky staircase at midnight.
But it's not at all mysterious that the thought "I don't believe in ghosts" gets reinforced because it makes him feel intelligent and modern, and staying around a creaky staircase at midnight gets punished because it makes him afraid.
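The reinforcement story can be sketched with a toy update rule (all numbers invented): each (context, response) pair has its own strength, strengthened by reward in that context alone, with no global belief variable reconciling the two.

```python
# Hypothetical sketch of context-dependent reinforcement (numbers invented):
# each (context, response) pair has its own strength, and reward updates
# that pair only. No $belief_in_ghosts variable exists anywhere.

LEARNING_RATE = 0.1
strengths = {
    ("skeptics meeting", "say 'I don't believe in ghosts'"): 0.5,
    ("creaky staircase at midnight", "run away"): 0.5,
}

def reinforce(context, response, reward):
    # Positive reward strengthens the response in that context only.
    strengths[(context, response)] += LEARNING_RATE * reward

reinforce("skeptics meeting", "say 'I don't believe in ghosts'", 1.0)  # feels smart
reinforce("creaky staircase at midnight", "run away", 1.0)             # fear relief
```

Both responses end up strengthened, and the apparent contradiction between them never has to be resolved, because no single variable is asked to take both values.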
Behaviorism was one of the first and most successful eliminativist theories. I've so far ignored the most modern and exciting eliminativist theory, connectionism, because it involves a lot of math and is very hard to process on an intuitive level. In the next post, I want to try to explain the very basics of connectionism, why it's so exciting, and why it helps justify discussion of behaviorist principles.