I recently made a dissenting comment on a biggish, well-known-ish social-justice-y blog. The comment was on a post about a bracelet one could wear which would zap the wearer with a painful (though presumably safe) electric shock at the end of the day if they hadn't done enough exercise. The post was decrying this as an example of society's rampant body-shaming and fat-shaming, which had reached such an insane pitch that people are now willing to torture themselves in order to be content with their body image.
I explained as best I could, in a couple of shortish paragraphs, some ideas about akrasia and precommitment in light of which this device made some sense. I also mentioned in passing that there are good reasons to want to exercise that have nothing to do with an unhealthy body image, such as that it's good for you and improves your mood. For reasons I don't fully understand, these latter points turned out to be surprisingly controversial. (For example, surreally enough, someone asked to see my trainer's certificate and/or medical degree before they would let me get away with the outlandish claim that exercise makes you live longer. Someone else brought up the weird edge ...
but we had entirely different background assumptions about how one makes a case for said position. There was a near-Kuhnian incommensurability between us.
This is very frustrating, and when I realize it is happening, I stop the engagement. In my experience, rationalists are not that different from smart science or philosophy types, because we agree on very basic things like the structure of an argument and the probabilistic nature of evidence. Normal people, though, are very difficult to have productive discussions with. Some glaring things that I notice happening are:
a) Different definitions of evidence. The Bayesian definition of evidence for A is anything that is more likely to be observed if A is true than if A is false (see the formula below this list). But for many people, evidence is anything that would happen given A. For example, a conspiracy theorist might say "Well of course they would deny it if it were true; this only proves that I'm right."
b) Aristotelianism: the idea that every statement is either true or false and you can prove statements deductively via reasoning. If you've reasoned that something is true, then you've proved it, so it must be true. Here is a gem from an Aristotelian friend of mine: "The people in ...
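To make point (a) precise, here is the standard odds form of Bayes' theorem (my own gloss, not part of the original comment):

$$\frac{P(A \mid E)}{P(\neg A \mid E)} \;=\; \frac{P(E \mid A)}{P(E \mid \neg A)} \cdot \frac{P(A)}{P(\neg A)}$$

An observation $E$ shifts the odds toward $A$ exactly when the likelihood ratio $P(E \mid A)/P(E \mid \neg A)$ exceeds 1. The conspiracy theorist's denial fails this test: if they would deny it whether or not it were true, then $P(\text{denial} \mid A) \approx P(\text{denial} \mid \neg A)$, and the denial is approximately zero evidence either way.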
I was also struck by how weird it was that people were nitpicking totally incidental parts of my post which, even if granted, didn't actually detract from the essence of what I was saying. This seemed like a sort of "argument by attrition", or even just a way of saying "go away; we can tell you're not one of us."
A general pattern I've noticed: when processing an argument to which they are hostile, people often parse generalizations as unsympathetically as they can. General statements which would ordinarily pass without a second thought are taken as absolutes and then "disproved" by citations of noncentral examples and weird edge cases. I think this is pretty bad faith, and it seems common enough. Do we have a name for it? (I have to stop myself doing it sometimes.)
Your symbolic arguments made me laugh.
Social justice, apropos of the name, is largely an exercise in the manipulation of cultural assumptions and categorical boundaries, especially the manipulation of taboos like body weight. We probably shouldn't expect the habits and standards of the social justice community to be well suited to factual discovery, if only because factual discovery is usually a poor way to convince whole cultures of things.
But the tricky thing about conversation in that style is that disagreement is rarely amicable. In a conversation where external realities are relevant, the 'winner' gets social respect and the 'loser' gets to learn things, so disagreement can be a mutually beneficial event. But if external realities are not considered, debate becomes a zero-sum game of social influence. In that case, you start to see tactics pop up that might otherwise feel like 'bad faith.' For example, you win if the other person finds debate so unpleasant that they stop vocalizing their disagreement, leaving you free to make assertions unopposed. On a site like Less Wrong, this result is catastrophic, but if your focus is primarily on the spread of social influence, then it can be an acceptable cost (or...
In regular English, “exercise increases lifespan” doesn't mean ‘all exercise increases lifespan’ any more than “ducks lay eggs” means ‘all ducks [including males] lay eggs’.
It's a first contact situation. You need to establish basic things first, e.g. "do you recognize this is a sequence of primes," "is there such a thing as 'good' and 'bad'," "how do you treat your enemies," etc.
Here is how to win the argument:
Create another nickname, pretending to be a Native American woman. Say that the idea of precommitment to exercise reminds you that in the ancient times the hunters of your tribe believed that it is spiritually important to be fit. (Then the white people came and ruined everything.) If anyone disagrees with you, act emotional and tell them to check their privilege.
The only problem is that winning in this way is a lost purpose. Unless you count it as expanding your communication skills.
I've actually seen an argument online in which some social justicers (with the same bad habits as in the story above) were convinced that it is acceptable to care about male circumcision on the grounds that it made SRS (sexual reassignment surgery) more difficult for trans women. Typically (in this community), if you thought male circumcision was an issue, you were quickly shouted down as a dreaded MRA (men's rights activist).
even though what they're doing is crusading against sexism, racism, patriarchy, etc., simply because no True Social Justice Warrior would engage in rational debate or respond to disagreement with sensible engagement rather than outrage.
Slightly off topic, but can I ask why patriarchy is assumed to be obviously bad?
I can certainly see the negative aspects of even moderate patriarchy, and wouldn't endorse extreme patriarchy or all forms of it, but its positive aspect seems to be civilization as we know it. It makes monogamy viable, reduces the time preferences of the people in a society, makes men invested in society by encouraging them to become fathers and husbands, boosts fertility rates to above replacement, likely makes the average man more attractive to the average woman improving many relationships, results in a political system of easily scalable hierarchy, etc.
I propose an alternative explanation. Some people are just born psychopaths; they love to hurt other people.
Whatever nice cause you start, if it gains just a little power, sooner or later one of them will notice it and decide they like it. Then they will try to join it and optimize it for their own purposes. You will recognize that this has happened when people around you start repeating memes that hurting other people is actually good for your cause. In such an environment, the people most skilled in hurting others can quickly rise to the top.
(Actually, both our explanations can be true at the same time. Maybe any movement that doesn't open its doors to psychopaths is doomed in the long term, because other people simply don't have enough power to change society.)
True, but you don't do that by mimicking their rhetoric.
The point isn't to blindly mimic their rhetoric, it's to talk their language: not just the soundbites, but the motivations under them. To use your example, talking about letting Jesus into your heart isn't going to convince anyone to donate a large chunk of their salary to GiveWell's top charities. There's a Christian argument for charity already, though, and talking effective altruism in those terms might well convince someone that accepts it to donate to real charity rather than some godawful sad puppies fund; or to support or create Christian charities that use EA methodology, which given comparative advantage might be even better. But you're not going to get there without understanding what makes Christian charity tick, and it's not the simple utilitarian arguments that we're used to in an EA context.
A lot of people are pointing out that perhaps it wasn't very wise for you to engage with such commenters. I mostly agree. But I also partially disagree. The negative effects of your commenting there are, of course, very clear. But there are positive effects as well.
The outside world (i.e., outside the rationalist community and academia) shouldn't get too isolated from us. While many people made stupid comments, I'm sure that there were many more people who looked at your argument and went, "Huh. Guess I didn't think of that," or at least registered some discomfort with their currently held worldview. Of course, none of them would've commented.
Also, I'm sure your way of argumentation appealed to many people, and they'll be on the lookout for this kind of argumentation in the future. Maybe one of them will eventually stumble upon LW. Looking at the quality of argumentation was also how I selected which blogs to follow. I tried (and often failed) to avoid those blogs that employed rhetoric and emotional manipulation. One of the good blogs eventually linked to LW.
Thus, while the cost to you was probably great and perhaps wasn't worth the effort, I don't think it was entirely fruitless.
I don't think it's a good idea to get into a discussion on any forum where the term "mansplaining" is used to stifle dissent, even (or especially) if you have "a clear, concise, self-contained point".
I recently made a dissenting comment on a biggish, well-known-ish social-justice-y blog.
Um, why?
I mean, walking through a monkey house when all they're going to do is fling shit everywhere isn't something I would choose to do.
Only because I had a clear, concise, self-contained point to make and I figured I'd be able to walk away once I was done. I'll know better next time.
Website suggestion: Retracted comments should collapse the thread (just like downvoted comments do now).
The philosopher John Danaher is doing a series of posts on Bostrom's Superintelligence book. Posts that were up at the time of writing this comment:
Bostrom on Superintelligence (1): The Orthogonality Thesis
Bostrom on Superintelligence (2): The Instrumental Convergence Thesis
Bostrom on Superintelligence (3): Doom and the Treacherous Turn
Danaher has also blogged about AI risk topics before: see here, here, here, here, and here. He's also written on mind uploading and human enhancement.
Since we are way too confident that bad things won't happen to us, I have been researching how to prepare for several rare events with disastrous consequences. When I started the research, I realised I had yet to work out what exactly those events are. So far I have found these, with a remedy given if known:
Anytime you're thinking about buying insurance, double-check whether it actually makes more sense to self-insure. It may be better to put all the money you would otherwise spend on insurance into a "rainy day fund" rather than buying ten different types of insurance.
In general, if you can financially survive the bad thing, then buying insurance isn't a good idea. This is why it almost never makes sense to insure a $1000 computer or get the "extended warranty." Just save all the money you would spend on extended warranties on your devices, and if one breaks, pay out of pocket to repair it or get a new one.
This is a harshly rational view, so I certainly appreciate that some people get "peace of mind" from having insurance, which can have real value.
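For concreteness, a back-of-the-envelope sketch of the self-insure arithmetic (all numbers are invented for illustration):

```python
# Back-of-the-envelope sketch of the self-insurance argument.
# All numbers are made up for illustration.

warranty_price = 150.0   # hypothetical extended-warranty cost
repair_cost = 500.0      # hypothetical out-of-pocket repair/replacement
p_failure = 0.10         # hypothetical chance the device breaks in the period

expected_cost_insured = warranty_price          # you pay the premium regardless
expected_cost_self = p_failure * repair_cost    # expected out-of-pocket loss

print(f"insured:      ${expected_cost_insured:.2f}")
print(f"self-insured: ${expected_cost_self:.2f}")
# Premiums include overhead and profit, so expected_cost_insured usually
# exceeds expected_cost_self; insurance only wins when you couldn't
# absorb the loss (or when you value the peace of mind).
```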
Though note that insurance may still be useful if you have self-control problems with regard to money. If you've paid your yearly premium, the money is spent and will protect you for the rest of the year. If you instead put the money in a rainy day fund, there may be a constant temptation to dip into that fund even for things that aren't actual emergencies.
Of course, that money being permanently spent and not being available for other purposes does have its downsides, too.
"if you can not afford to buy it twice, you can't afford it in the first place"
An excellent maxim, which has crystallised for me why I am so reluctant to move to a bigger house, even though I would like one, and I could buy one immediately for cash plus the price I'd get for my current house. It's because I can't afford to do that twice. With an extra cost-of-a-house in the bank I might.
A quick search for tDCS (transcranial direct current stimulation) did not turn up any major discussion on LW newer than 2012. tDCS devices are now sub-$100, and the technique's safety track record seems to be intact. I bought one. There are places to discuss tDCS, like the relevant subreddits, but I'd like to restart the conversation here with you rationalists.
Recently, Radiolab did a piece about it.
This is a followup to a post I made in the open thread last week; apologies if this comes off as spammy. I will be running a program equilibrium iterated prisoner's dilemma tournament (inspired by the one last year). There are a few key differences from last year's tournament: First, the tournament is in Haskell rather than Scheme. Second, the penalty for bots that do not finish their computation within the pre-set time limit has been reduced. Third, bots have the ability to run/simulate each other but cannot directly view each other's source code.
Here are...
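To illustrate the "can run/simulate but can't read source" rule, here is a toy sketch in Python (the tournament itself is in Haskell; the bot names and the depth-limit convention are my own invention, not the tournament's actual interface):

```python
# Toy model of program-equilibrium bots: each bot is a function that may
# *call* (simulate) its opponent, but cannot inspect the opponent's source.
# A recursion-depth budget stands in for the tournament's time limit.

COOPERATE, DEFECT = "C", "D"

def cooperate_bot(opponent, depth):
    return COOPERATE

def defect_bot(opponent, depth):
    return DEFECT

def mirror_bot(opponent, depth):
    """Simulate the opponent playing against us, then copy its move."""
    if depth <= 0:
        return COOPERATE  # out of simulation budget: default move
    return opponent(mirror_bot, depth - 1)

if __name__ == "__main__":
    print(mirror_bot(cooperate_bot, 10))  # C: cooperates with a cooperator
    print(mirror_bot(defect_bot, 10))     # D: defects against a defector
    print(mirror_bot(mirror_bot, 10))     # C: mutual simulation bottoms out
```

Mutual simulation terminates because the depth budget shrinks on every nested call, which is roughly the role the real tournament's time penalty plays.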
The elimination tournament better simulates evolution and capitalism. With a round robin, you can have a successful strategy of taking resources from stupid people. But in nature and the marketplace, stupid people won't consistently hold resources, so a strategy of taking from them will not be effective in the long term.
I was thinking about the idea of lost purposes in my kitchen, and a vivid illustration of the idea occurred to me:
You plan to make homemade ice cream for your partner's birthday party next week, so you put "cream" on your shopping list. The next day, you break up with your partner on surprisingly unfriendly terms. You are no longer going to be attending the birthday party. But then you find yourself at the supermarket, with your shopping list in hand, putting a carton of cream into your cart.
EDIT: The birthday-party/break-up thing is a fictional scenario, not something that actually happened to me. Sorry for any worries!
I feel like parables here on LW, especially the longer and more tortured ones, are pretty much fallacy and bias breeding grounds. A couple of egregious offenders, to my mind, include
Blue and Green Martians; about pick-up artistry
and
The Fable of the Dragon Tyrant; about death
Why do I take issue with them? Because while using analogies, including fanciful ones, can help us take the outside view on a problem where we are irrationally biased, these sorts of parables can also be a selective re-telling of the facts, and conclusions drawn from them simply don't tr...
I'm fairly sure this comment was not exactly intended as a compliment, but I can think of worse insults than having my writing put in the same category as Nick Bostrom's. As the author of the first of these parables, even I recognize that these two stories differ very significantly in quality.
The Blue and Green Martians parable was an attempt to discuss a question of ethics that is important to many members of this community, and which it is almost impossible to discuss elsewhere. The decision to use an analogy was an attempt to minimize mindkill. This did not succeed. However, I am fairly sure that if I had chosen not to use an analogy, the resulting flamewar would have been immense. This probably means that there are certain topics we just can't discuss, which feels distinctly suboptimal, but I'm not sure I have a better solution.
Parables about the danger of nuclear weapons that ignore the fact that this danger was successfully handled (there was something on here using it as an analogy for AI).
The danger wasn't successfully handled for a lot of values of "successful". The fact that you survive playing Russian roulette doesn't show that you successfully handled danger. Once, a nuclear bomb nearly exploded in the US after 3 of its 4 safety features failed. If I remember right, launching a nuclear weapon from the Russian submarine in the Cuban missile crisis would have required all 3 of its senior officers to agree, and 2 of them wanted to launch. There are also various lost nuclear weapons.
Well I don't like the dragon parable either. It's overlong, a bit condescending and ignores the core problem that anti-aging research has done a pretty poor job of showing concrete achievements, even if it's right that it's under-prioritized.
Hmm. I suppose I thought the point of "Dragon Tyrant" was not to narrowly advocate for the anti-aging research program; but rather to get people to take seriously the "naïve" idea that death is bad.
Or, more specifically, to say that even though ① defeating death seems like an insurmountable goal because death has always been around, and ② there are people advocating on a wide variety of grounds against attempting to defeat death, it is nonetheless reasonable and desirable to consider.
"Dragon Tyrant" uses the technique, common to sociology and "soft" science fiction (e.g. Kurt Vonnegut, Douglas Adams), of making the familiar strange — taking something that we are so accustomed to that it is unquestioned, and portraying it as alien.
Okay, women have a preference along a single axis which they do nothing about and do not express at all. The framework as described is all about what active agenty men could or should do to entirely passive npc women. I'm very far from being a feminist, but come on -- this is objectification and "don't worry your pretty head about it".
I have a preference for eating tasty food in restaurants. But I am absolutely not interested in teaching chefs how to cook. If I am not satisfied with the food, I will simply never come back to that restaurant again. There are many restaurants to choose from. I don't really care about what happens to the owner of the bad restaurant; it's their problem, not mine.
Does this make me an entirely passive NPC, because I completely refuse to participate in this "how to get better at cooking" business and merely evaluate my satisfaction with the results? I don't think this would be a fair description. I am not waiting helplessly; my strategy is evaluating different restaurants and choosing the best. Yeah, if we assume that each chef can only make a limited amount of food, I am kinda playing a zero-sum game against other customers here. But still, playing zero-sum games is not passivity.
But a naive chef could complain: "All those customers do is criticize. They never help us, never teach us. How are we supposed to learn? Everyone's first cooked meal is far from perfect. Practice makes perfect, but practice inevitably includes making a few mistakes." From his point of view...
I've started to play with directed graphs, kind of like Bayesian networks, to visualize my belief structures. So a node is a belief (with some indication of confidence), while edges between nodes indicate how beliefs influence (my confidence in) other beliefs.
This seems useful for summarizing complex arguments, for memorization, and (when looking at a belief structure that's bigger than my working memory) for organizing and revising thought.
However, there are a few decisions in how to design the visual language of such graphs that I can't see obvious ...
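As a minimal sketch of what such a belief graph might look like in code (assuming the networkx library; the attribute names and example beliefs are my own illustrative choices, not the commenter's):

```python
# Minimal sketch of a belief graph: nodes are beliefs, directed edges
# indicate that one belief supports (shifts confidence in) another.

import networkx as nx

g = nx.DiGraph()

# Nodes are beliefs, annotated with a subjective confidence in [0, 1].
g.add_node("exercise improves mood", confidence=0.9)
g.add_node("I should exercise daily", confidence=0.7)

# An edge means the source belief supports the target belief;
# a weight could encode how strongly it shifts confidence.
g.add_edge("exercise improves mood", "I should exercise daily", weight=0.6)

# Inspect which beliefs feed into a given conclusion.
for src, dst, data in g.in_edges("I should exercise daily", data=True):
    print(f"{src!r} supports {dst!r} with weight {data['weight']}")
```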
If I wanted to learn about a precise formulation of UDT, where should I look / who should I ask? Info on the wiki is hopelessly outdated, and there's no single clear exposition.
My diet seems to influence my mind and body a lot more strongly than is normal. (Food intolerances that mess with my emotions or focus, apparent hypoglycemia that goes away when I take vitamin B, that sort of thing. I know a lot of people have something like this, but I've got so many that diet is the default first suspect whenever anything goes wrong.) I'm not sure whether this makes me a potentially useful test subject for things like nootropics because the effects might get inflated and easier to notice, or just an outlier whose results won't work on an...
I have noticed that maintaining a decent diet makes a massive difference to my mental state, but I have no reason to think this is unusual. You may not be either.
In short, consider generalizing from one example more.
My diet seems to influence my mind and body a lot more strongly than is normal.
Alternative hypothesis: Diet has a huge influence on mind and body but most people lack the mindfulness to notice.
The Food Sense App might help you. http://www.bulletproofexec.com/find-your-kryptonite-with-the-free-bulletproof-food-sense-iphone-app/
Why discuss it? Wouldn't it be better to A/B test which encourages new visitors to click on a link to another page?
Yes, you're right. I didn't like the change, that's all, and was hoping for a majority to back me. But if anyone wants to do that, it would certainly be a good idea.
I was tasked to replace it, because it apparently tested better. The timing of the reddit thread linked by asd was just coincidence.
For a quick reminder of the power of many independent trials, estimate and then answer the following question:
I have 2 biased coins in my pocket. The first comes up heads with probability 51%, while the second comes up heads with probability 49%. I take a coin out of my pocket, uniformly at random, and flip it a million times. I observe that it comes up heads 508,634 times. What is the probability that it is the first coin?
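For readers who want to check their estimate afterwards, here is a worked sketch of the Bayesian calculation (my addition, not part of the original comment):

```python
# Worked solution sketch: the posterior odds are the prior odds (1:1)
# times the likelihood ratio, which here is (0.51/0.49)**(heads - tails).

from math import log, exp

n, heads = 1_000_000, 508_634
tails = n - heads

# Log posterior odds for coin 1 (prior odds are 1:1, so log prior odds = 0).
log_odds = heads * log(0.51 / 0.49) + tails * log(0.49 / 0.51)
# Equivalently: (heads - tails) * log(0.51 / 0.49).

print(log_odds)                  # ~691 nats of evidence for coin 1
print(1 / (1 + exp(-log_odds)))  # posterior probability: prints 1.0
# The true posterior is 1 minus roughly 1e-300; it rounds to 1.0 in floats.
```

The point of the exercise: a 2-percentage-point per-flip edge, compounded over a million independent trials, produces overwhelming evidence.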
In his latest newsletter, Louie Helm advises taking "activated" vitamin D in the form of Calcitriol or Paricalcitol to raise one's Klotho levels, which is likely to increase one's IQ and longevity if one doesn't already have the gene for it. Since Calcitriol and Paricalcitol aren't over-the-counter, what would be the best way to acquire some?
http://rockstarresearch.com/increase-longevity-and-intelligence-with-boosted-klotho-levels/
Less Wrong overlaps with Overcoming Bias, which plugs SciCast. So there should be a bunch of folks on here competing in SciCast. If so, how are you folks doing?
I tried to think of the most harmless thing. Something I loved from my childhood. Something that could never ever possibly destroy us.
A thought occurred to me a while back. Call it the "Ghostbusters" approach to the existential risk of AI research. The basic idea is that rather than trying to make the best FAI on the first try, you hedge your bets. Work to make an AI that is a) unlikely to disrupt human civilization in a permanent way at all, and b) available for study.
Part of the stress of the 'one big AI' interpretation of the intelligence ...
I am looking for methods by which I can gain experience working with state or federal (American) organizations. I plan to begin applying for jobs with government libraries and archives next year, and I would like some experience besides what I am doing for my current job. I do not mean that my current job is pointless, only that I feel no reason not to spend time augmenting it.
As I am not a student, I cannot apply for online internships, which was my first plan. I could enroll in an online class but I do not have the money for the tuition. So, I am looking...
What are your opinions on professional certificates? Do they actually increase potential earnings, or do they only make money for the certifying body? And are there very broad certificates that can be useful in any vaguely quantitative profession that uses mathematical models?
I ask because I am in the later part of studying physics and am pretty sure that I neither want to make it in academia nor would be able to, so I am working on alternative plans. I figured that some certificates could enhance my employability or point to some alternatives I haven't been aware of yet. What do you think?
In light of how long it usually takes for statistical models and discoveries to crawl out of academic articles and into practice, the LessWrong community will probably appreciate the efforts by the Consortium of Food Allergy Research (established with money from the US National Institutes of Health) to provide online probabilistic calculators for people's long-term prognoses:
Sorry if this has been addressed, but is there a way to buy a copy of the major sequences (I assume it'd be too long without the word "major"?) in dead-tree book form? Further, if not, is anyone interested in getting the relevant permissions and putting one together on Lulu or some such? I like highlighting my books, and I know some folks I'd like to give the major sequences to, but stapled copies would make me feel like a crackpot pamphleteer. Thanks in advance if there is a solution.
I am looking for a proofreader or three. The thing I would like proofed is short and so could easily be sent over PMs. I would like an outside view before submitting it.
"Memory is the framework of reality" This quote just popped into my head recently and I can't stop thinking about it.
Regarding where to draw the boundary, are R, W, and Y really vowels? The biggest difference I've noticed between vowels and consonants is that vowels don't involve touching parts of your mouth together. This makes it a lot easier to transition between letters. The second thing is that vowels are always voiced. H is otherwise a vowel, but it seems like it might be worth calling a consonant on that basis.
R at least is always considered a consonant, but the sound W makes is considered a vowel if it's a hard U or an OO, and the sound Y makes is considered a vo...
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the 'open_thread' tag.
2. Check if there is an active Open Thread before posting a new one.
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.