If it's worth saying, but not worth its own post (even in Discussion), then it goes here.
Given our known problems with actively expressing approval for things, I'd like to mention that I approve of the more frequent open threads.
While reading a psychology paper, I ran into the following comment:
Unfamiliar things are distrusted and hard to process, overly familiar things are boring, and the perfect object of beauty lies somewhere in between (Sluckin, Hargreaves, & Colman, 1983). The familiar comes as standard equipment in every empirical paper: scientific report structure, well-known statistical techniques, established methods. In fact, the form of a research article is so standardized that it is in danger of becoming deathly dull. So the burden is on the author to provide content and ideas that will knock the reader’s socks off—at least if the reader is one of the dozen or so potential reviewers in that sub-subspecialty.
Besides the obvious connection to Schmidhuber's esthetics, it occurred to me that this has considerable relevance to LW/OB. Hanson has in the past counseled contrarians like us to pick our battles and conform in most ways while not conforming in a few carefully chosen ones (e.g., Dear Young Eccentric, Against Free Thinkers, Even When Contrarians Win, They Lose); this struck me as obviously correct, and that one could think of oneself as having a "budget" where non-conforming on both dr...
Awesome job, whoever made this "latest open thread," "latest rationality diary," and "latest rationality quote" thing happen!
One of the most salient differences between groups that succeed and groups that fail is the group members' ability to work well with one another.
A corollary: If you want a group to fail, undermine its members' ability to work with each other. This was observed and practiced by intelligence agencies in Turing's day, and well before then.
Better yet: Get them to undermine it themselves.
By using the zero-sum conversion trick, we can ask ourselves: What ideas do I possess that the Devil¹ approves of me possessing because they undermine my ability to accomplish my goals?
¹ "The Devil" is shorthand for a purely notional opponent whose values are the opposite of mine.
One of the Devil's tools against cooperation is reminding people that cooperation is cultish, and that if they cooperate, they are sheep.
But there is a big exception! If you work for a corporation, then you are expected to be a team player, and you have to participate in various team-building activities, which are like cult activities, just a bit less effective. You are expected to be a sheep, if you are asked to be one, and to enjoy it. -- It's just somehow wrong to use the same winning strategy outside the corporation, for yourself or your friends.
So we get the interesting result that most people are willing to cooperate if it is for someone else's benefit, but have an aversion to cooperating for their own. If I were trying to brainwash people into obedient masses, I would be proud to have achieved this.
This said, I am not sure what exactly caused this. It could be the natural result of a thousand small-scale interactions: people winning locally by undermining their nearest competitors' agency, and losing globally by polluting the common meme-space. And the people who overcome this and become able to optimize for their own benefit probably find it much easier to attract followers than peers; thus they get out of the system, but don't change the system.
My friend and I are organizing a new meetup in Zagreb, but I don't have enough karma to make an announcement here. Thanks!
[Meta] Most meetup threads have no comments. It seems like it would be useful for people to post to say "I'm coming", both for the organiser and for other people to judge the size of the group. Would this be a good social norm to cultivate? I worry slightly that it would annoy people who follow the recent comments feed, but I can't offhand think of other downsides.
Suggested alternative to reduce the recent-comment clutter issue: have a poll attached to each meetup where people say whether they are coming. Then people can tell at a glance how many are probably coming, and anyone who wants to note something specific (say, that they aren't a regular) can mention it in the comment thread.
Here is some verse about steelmanning I wrote to the tune of Keelhauled. Compliments, complaints, and improvements are welcome.
*dun-dun-dun-dun
Steelman that shoddy argument
Mend its faults so they can't be seen
Help that bastard make more sense
A reformulation to see what they mean
Being in Seattle has taught me something I never would have thought of otherwise:
Working in a room with a magnificent view has a positive effect on my productivity.
Is this true for other people, as well? I normally favor ground-level apartments and small villages, but if the multiplier is as consistent as it's been this past week, I may have to rethink my long-term plans.
It could be just the novelty of such a view. I suspect that any interesting modification to your working environment leads to a short-term productivity boost, but these things don't necessarily persist in the long term. In any case, it seems like the value of information (VoI) of exploring different working environments is high.
Question: Who coined the term "steelman" or "steelmanning", and when?
I was surprised not to find it in the wiki, but the term is gaining currency outside LessWrong.
Also, I'd be surprised if the concept were new. Are there past names for it? Principle of charity is pretty close, but not as extreme.
Google search with a date restriction and a few other tricks to filter out late comments on earlier blog posts suggests Luke's post Better disagreement as the first online reference, though the first widely linked reference is quite recent, from the Well Spent Journey blog.
Saw this on twitter. Hilarious: "Ballad of Big Yud"
There's a user at RationalWiki, one of the dedicated LW critics there, called "Baloney Detection". I often wondered who it was. The image at 5:45 in this video, and the fact that "Baloney Detection" also edited the "Julia Galef" page at RW to decry her association with LW, tells me this is him...
By the way, the RW article about LW now seems more... rational... than the last time I checked. (Possibly because our hordes of cultists sponsored by the right-wing extremist conspiracy fixed it, hoping to receive the promised 3^^^3 robotic virgins in singularitarian paradise as a reward.) You can't say the same thing about the talk pages, though.
It's strange. Now I should probably update towards "a criticism of LW found online probably somehow comes from two or three people on RW". On their talk pages, Aris Katsaris sounds like a lonely sane voice in a desert of... I guess it's supposed to be "rationality with a snarky point of view", which works like this: I can say anything, and if you catch me lying, I say I was exaggerating to make it funnier.
Some interesting bits from the (mostly boring) talk page:
Yudkowsky is an uneducated idiot because there simply can't be 3^^^3 distinct people
A proper skeptical argument about why "Torture vs Dust Specks" is wrong.
...what happened is that they hired Luke Muehlhauser who doesn't know about anything technical but can adequately/objectively research what a research organization would look like, and then
I agree, but we are speaking about approximately 13 downvotes from 265 total votes. So we have at least 13 people on LessWrong who oppose a high-quality criticism.
Or there are approximately 13 people who believe the post is worth a mere 250 votes, not 265, and so used their vote to push it in the desired direction. Votes aren't necessarily made, or meant to be read, independently of one another.
In 2011, he described himself as "a very small-'l' libertarian" in this essay at Cato Unbound.
Could I get some career advice?
I'd like to work in software. I can graduate next year with a math degree and look for work, or I can study for additional CS-specific credentials (two or three extra years for a Master's degree).
On the one hand, I'm told online that programming is unusually meritocratic, and that formal education and credentials matter very little if you can learn and demonstrate competency in other ways, like writing your own software or contributing to open-source projects.
On the other hand, mid-career professionals in other fields (mostly engineering) have told me that educational credentials are an unavoidable filter for raises, hiring, layoffs, and just getting interesting work. They say that getting a graduate degree will be worthwhile even if I could have learned equally valuable skills by other means.
I think I would enjoy and do well in graduate school, but if it makes little career difference, I don't think I would go. I'm skeptical that marginal credentials are unimportant (or will remain unimportant in ten years), but I don't know any programmers in person whom I could ask.
Any thoughts or experiences here?
I've recently noticed a new variant of a failure mode in political discussions. It seems most common in discussions where the participants are already almost all Blues or all Greens. It goes like this:
Blue 1: "Hey look at this silly thing said by random silly Green. See this website here."
Blue 2, Blue 3... up to Blue n: "Haha! What evil idiots."
Blue n+1 (or possibly Blue sympathizer or outright interloper or maybe even a Red or a Yellow): "Um, the initial link given by Blue 1 is a parody. That website does satire."
Large subset of Blue 2 through Blue n: "Wow, the fact that we can't tell that's a parody shows how ridiculous the Greens are."
Now at this point, the actual failure of rationality happened with Blues, not Greens. But somehow Blues will then count this as further evidence against Greens. Is there any way to politely get Blues to understand the failure mode that has occurred in this context?
What with the popularity of rationalist!fanfiction, I feel like there's an irresistible opportunity for anyone familiar with the Animorphs books.
Imagine it! A book series where sentient slugs control people's bodies, yet can communicate with their hosts. To borrow from the AI Box experiments, the Yeerks are the Gatekeepers, and the Controlled humans are the AIs! One could use the resident black-sheep character David Hunting as the rationalist!character, who was introduced in the middle of the series, removed three books later, and didn't really do anything important. I couldn't write such a thing, but it would be wicked if someone else did.
I've run into a roadblock on the Less Wrong Study Hall reprogramming project. I've been writing against Google Hangouts, but it seems that there's no way to have a permanent, public hangout URL that also runs a specified application. (that is, I can get a fixed URL, or a hangout that runs an app for all users, but I can't do both)
Any of the programmers here know a way around that? At the moment it's looking like I'll have to go back to square one and find an entirely different approach.
What are good sources for "rational" (or at least not actively harmful) advice on relationships?
What sort of relationships? Business? Romantic? Domestic? Shared hobby?
The undercurrent that runs along good advice for most is "make your presence a pleasant influence in the other person's life." (This is good advice for only some business relationships.)
Athol's advice is useful; he does excellent work advising couples with very poor marriages. So far I have not encountered anything of his that is more unethical than mainstream relationship advice. Indeed, I think it is less toxic than mainstream relationship advice.
As to misogyny, this is a bit awkward: I actually cite him as an example of a very much not-woman-hating red pill blogger. Call Roissy a misogynist and I will nod. Call Athol one and I will revise downward my estimate of how bad misogyny is.
I disagree that his outlook is toxic. He uses a realistic model of the people involved and recommends advice that would achieve what you want under that model. He repeatedly states that it is a mistake to make negative moral judgement of your partner just because they are predictable in certain ways. His advice is never about manipulation, instead being win-win improvements that your partner would also endorse if they were aware of all the details, and he suggests that they should be made aware of such details.
I see nothing to be outraged about, except that things didn't turn out to be how we previously imagined them. In any case, that's not his fault, and he does an admirable job of recommending ethical relationship advice in a world where people are actually physical machines that react in predictable ways to stimuli.
Seriously, would you enjoy playing the part of a cynical, paranoid control freak with a person whom you want to be your life partner?
Drop the adjectives. I strive to be self-aware, and to act in the way that works best (in the sense of happiness, satisfaction, and all the other things we care about) for me and my wife, given my best model of the situation....
Serious damage to whom? Idiots who fail to adopt his advice because he calls it a name that is associated with other (even good) ideas that other idiots happen to be attracted to? That's a tragedy, of course, but it hardly seems pressing.
Seems to me that people should be able to judge ideas on their quality, not on which "team" is tangentially associated with them. Maybe that's asking too much, though, and writers should just assume the readers are morally retarded, like you suggest.
I'm somewhat familiar. My impression is that the steelman version of it is a blanket label for views that reject the controversial empirical and philosophical claims of the left-wing mainstream:
Pointing out that an idea has stupid people who believe it is not really a good argument against that idea. Hitler was a vegetarian and a eugenicist, but those ideas are still OK.
It selects for these attitudes in its adherents
So?
Here's why that's true: "Red Pill" covers empirical revisionism of mainstream leftism. What kind of people do you expect to be attracted to such a label without considering which ideas are correct? I would expect bitter social outcasts, people who fail to ideologically conform, a few unapologetic intellectuals, and people who reject leftism for other reasons.
Then how are those people going to appear to someone who is "blue pilled" (ie reasonable mainstream progressive) for lack of a better word? They are going t...
I've been reading a lot of red pill stuff lately (while currently remaining agnostic), and my impression is that most of the prominent "red pill" writers are in fact really nasty. They seem to revel in how offensive their beliefs are to the general public and crank it up to eleven just to cause a reaction. Roissy is an obvious example. About one third of his posts don't even have any point, they're just him ranting about how much he hates fat women. Moldbug bafflingly decides to call black people "Negroes" (while offering some weird historical justification for doing so). Regardless of the actual truth of the red pill movement's literal beliefs, I think they bring most of their misanthropic, hateful reputation on themselves.
I haven't read Athol Kay, so I don't know what his deal is.
I wrote a (highly speculative) article on my blog about the conversion of negative energy into ordinary mass-energy.
http://protokol2020.wordpress.com/2013/07/07/the-menace-that-is-dark-energy/
I don't expect mercy, though.
Ben Goertzel will take your money and try to put an AGI inside a robot.
Trigger warning: those creepy semi-human robots whose jerky, human-imitating facial gestures will make anyone who hasn't spent months and months locked in a workshop building them recoil in horror.
Hey everyone, long-time lurker here (I ran a LW group in Ft. Lauderdale, FL for about a year) and this is my first comment. I would like to post a discussion topic on a proposal for potential low-hanging fruit: fixing up Wikipedia pages related to LessWrong's interests (existential risk, rationality, decision theory, cognitive biases, etc., and the organizations/people associated with them). I'd definitely be interested in getting some feedback on creating a wiki project that focuses on improving these pages.
Is there a (more well-known/mainstream) name for the arguments-as-soldiers bias?
More specifically, interpreting an explanation of why or how an event happened as approval of that event. Or claiming that someone who points out a flaw in an argument against X is a supporter of X. (maybe these have separate names?)
We've been having beautiful weather recently in my corner of the world, which is something of a rarity. I have a number of side projects and hobbies that I tinker with during the evenings, all of them indoors. The beautiful days were making me feel guilty about not spending time outside.
So I took to going on bike rides after work, dropping by the beach on occasion, and hiking on weekends. Unfortunately, during these activities, my mind was usually back on my side projects, planning what to do next. I'd often rush my excursions. I was trying to tick the "...
Anyone around here familiar with Stoicism and/or cognitive-behavioural therapy? I am reading this book, and it seems relevant to this site -- especially its focus on training the mind, as a kind of habit, to question whether something is ultimately in our control or not.
Also, I am kind of sad that there is nothing around here like a self-study guide that is easily accessible to the public.
And finally, I am confused again and again why there are so many posts about epistemic rationality and so few about instrumental r...
As a psychotherapy, CBT is the only one with evidence of working better than just talking with someone for the same length of time. (Not to denigrate the value of simple attention, but e.g. counselors are way cheaper than psychiatrists.) It seems to work well if it's guided, i.e. you have an actual therapist as well as the book to work through.
I don't know how well it works for people who aren't coming to it with an actual problem to solve, but who want self-knowledge as a philosophical end, or to gain the power of hacking themselves.
And finally, I am confused again and again why there are so many posts about epistemic rationality and so few about instrumental rationality.
Probably because teaching instrumental rationality isn't the comparative advantage of anyone here. There are already tons of resources out there on improving your willpower, getting rich, becoming happier, being more attractive, losing weight, etc. You can go out and buy a CBT workbook written by a PhD psychologist on almost any subject -- why would you want some internet user to write up a post instead?
Out of curiosity, what type of instrumental rationality posts would you like to see here?
There are already tons of resources out there on improving your willpower, getting rich, becoming happier, being more attractive, losing weight, etc. You can go out and buy a CBT workbook written by a PhD psychologist on almost any subject -- why would you want some internet user to write up a post instead?
Then linking to them would be interesting. I can't reasonably review the whole literature (which in turn reviews the academic literature) to find the better or best books on the topics of my interest.
So many self-help books are either crap because their content is worthless, or painful to read because they have such a low content-to-word ratio by any reasonable metric. I want just the facts. Take investing as an example: it can be summarized in one sentence -- "Take as much money as you are comfortable with and invest it in a broad index fund, drawing it down so as to reach zero at the moment of your death, unless you want to leave money to your heirs." And still there is a host of books from professional investors detailing technical analysis of the most obscure financial products.
...Out of curiosity, what type of instrumental rationality posts would you like to see
So many self-help books are either crap because their content is worthless, or painful to read because they have such a low content-to-word ratio by any reasonable metric. I want just the facts.
I've found that "just the facts" doesn't really work for self-help, because you need to a) be able to remember the advice b) believe on an emotional, not just rational level that it works and c) be actually motivated to implement the advice. This usually necessitates having the giver of advice drum it into you a whole bunch of different ways over the course of the eight hours or so spent reading the book.
Have reading groups review books of interest. Post summaries or reviews of such books. Discuss the cutting edge of practical research, where relevant to our lives. This stays with your observation that most of the practically interesting material has already been written.
One problem with this is that "reviewing" self-help books is hard because ultimately the judge of a good self-help book is whether or not it helps you, and you can't judge that until a few months down the road. Plus there can be an infinity of confounding factors.
But I can see your point. Making prac...
Hello and welcome to Phoenix Wright: Ace Economist.
Briefly, Phoenix Wright: Ace Attorney is a series of games where you play as Phoenix Wright, an attorney who defends his clients and solves crimes. Using a free online application that lets you make your own trials, I've turned Phoenix Wright into an economist and unleashed him upon the world.
I'm posting it here just in case it interests anyone. The LessWrong crowd is smart and well-educated, so I'd appreciate any feedback I can get from you fine folk.
Play it here (works best in Firefox):
I'm trying to decide whether to marry someone, but I'm having a lot of trouble deciding. Anyone have any advice?
1) do you plan on spending a long period of time in a relationship with someone?
2) do you have a job where they will get benefits from being married to you, or vice versa?
3) do you expect to have children or buy property soon?
4) do you hang out with people who care whether or not you're married rather than just a long-term couple?
5) do you expect the other person to ever leave you and take half your stuff?
6) do you want to have a giant ceremony?
7) do you live in a country where you get tax credits or something for being married?
8) do you expect yourself or them to act differently if "married" or not?
9) do you have the money to blow on a wedding?
10) is there any benefit to getting married soon over later? If you expect to be together in several years as a married couple, can you just stay together a year and THEN get married?
These are some useful questions off the top of my head for this situation.
Other than in special circumstances, I think marriage is one of those occasions where "having trouble deciding" pretty clearly means "NO".
While funny as jests go, your reply sounds rather condescending in the "transhumanists are better than muggles" sort of way. Unless I misunderstand your point.
How credible is the research that inspired this popularisation? The subject is the effect of status on antisocial behaviour and so forth. Nothing seemed particularly surprising to me, but that may be confirmation bias with respect to my general philosophy and way of thinking.
So, there's this multiplayer zombie FPS for the blind called Swamp, and the developer recently (as in the past few months) added an AI to help with the massive work of banning troublemakers who use predictable methods to subvert bans. Naturally, a lot of people distrust the AI (which became known as Swampnet), and it makes a convenient scapegoat for all the people demanding to be unbanned (when it turns out that they did indeed violate the user agreement).
In the past 24 hours, several high-status, obviously innocent players started getting banned. I predic...
Has anyone read Dennett's Intuition Pumps? I'm thinking of reading it next. The main thing I want to know: does he offer new ways of thinking which one can actually apply while thinking about (a) everyday situations and (b) math and physics (which is my work).
Is it possible to train yourself in the Big Five personality traits? Specifically, conscientiousness seems to be correlated with a lot of positive outcomes, so a way of actively promoting it would seem a very useful trick to learn.
Note: The following post is a cross of humor and seriousness.
After reading another reference to an AI failure, it seems to me that almost every "the AI is an unfriendly failure" story begins with "The humans are wasting too many resources, which I can more efficiently use for something else."
I felt like I should also consider potential solutions that look at the next type of failure. My initial reasoning is: assuming that a bunch of AI researchers are determined to avoid that particular failure mode and only that one, they're probably go...
The Good Judgement Project is using the Brier score to rate participants' forecasts. This is not LW's usual preferred scoring system (negative log odds); Brier is much more forgiving of incorrect assignments of 0 probability. I checked the maths, and your expected score is still minimised by honestly reporting your subjective probabilities, but are there any more subtle ways to game the system?
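A quick way to check the propriety claim numerically -- a minimal sketch in Python (the function name and the numbers are just for illustration):

```python
import numpy as np

def expected_brier(report, true_p):
    # Expected Brier score for a binary event whose true probability is true_p,
    # when you report probability `report` that the event occurs.
    return true_p * (report - 1) ** 2 + (1 - true_p) * report ** 2

true_p = 0.7
reports = np.linspace(0, 1, 101)
best = reports[np.argmin([expected_brier(r, true_p) for r in reports])]
print(round(best, 2))  # 0.7 -- honest reporting minimises the expected Brier score

# The forgiveness mentioned above: reporting 0 on an event that then occurs
# costs a bounded (0 - 1)^2 = 1 under Brier, versus -log(0) = infinity under
# the negative log score.
```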
Is there a name for the bias of assuming that information can just happen, rather than having to be derived by someone using some means?
Anyone have a good recommendation for an app/timer that goes off at pseudo-random intervals (not too short -- maybe every 15 minutes to an hour)? Someone suggested to me today that I would benefit from a luminosity-style exercise of noting my emotions at intervals throughout the day, and it seems like something I ought to automate as much as possible.
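Failing a ready-made app, a minimal sketch of the idea in Python (this assumes a Linux desktop with notify-send available; swap in whatever notification command your OS provides):

```python
import random
import subprocess
import time

while True:
    # Sleep for a uniformly random interval between 15 and 60 minutes.
    time.sleep(random.uniform(15 * 60, 60 * 60))
    # Hypothetical prompt text; notify-send pops up a desktop notification.
    subprocess.run(["notify-send", "Check-in", "What are you feeling right now?"])
```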
Why do the maps of meetups on the front page and on the meetups page differ? Why does neither of them show the regular meetups?
Does anyone know anything about yoga as a spiritual practice (as opposed to exercise or whatever)? I get the sense that it's in the same "probably works" category as meditation and I'd be interested in learning more about it, but I don't know where to start, and I feel like there's probably "real" yoga and "pop" yoga that I need to be able to differentiate between.
Also, I can't sit in any of the standard meditation positions - I can only do maybe five minutes Indian-style before I get intense pain. When I ask people how to r...
Running an interest check for an "Auto-Bayes."
Something I've noticed when reading articles on the web is that I occasionally run across the same beliefs, but have completely forgotten my last assigned probability -- my current prior. In order to avoid this, I'm writing a program that keeps track of a database of beliefs and current priors, with automated Bayesian updating. If nothing else, it'll also make it easier to get statistics on how accurate my predictions are, and keep me honest.
Anyway, I got halfway started and realized that this might be something other people might be interested in, so: interest check!
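For anyone curious, a minimal sketch of the core update such a tool might perform (the function name and example numbers are hypothetical, not from any actual implementation):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    # Posterior probability of a hypothesis H after observing evidence E,
    # given the prior and the likelihood of E under H and under not-H.
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Example: prior 0.3, and the evidence is 4x as likely if the belief is true.
posterior = bayes_update(0.3, 0.8, 0.2)
print(round(posterior, 3))  # 0.632
```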
Do animal altruists regard zoos as a major contributor to animal suffering? Or do the numbers not compare when matched up against factory farming and the like?
(Longpost warning; I find myself wondering if I shouldn't post it to my livejournal and just link it here.)
A few hours shy of a week ago, I got a major update to my commercial game up to releasable standards. When I got to the final scene, I was extremely happy--on a scale of 1=omnicidally depressed to 10=wireheading, possibly pushing 9 (I've tried keeping data on happiness levels in April/May and determined that I'm not well calibrated for determining the value of a single point).
That high dwindled, of course, but for about 24 hours it kept up pretty well...
I posted this in the previous open thread, and would like to carry on the discussion into this thread. As before, I regard this entire subject as a memetic hazard, and will rot13 accordingly. Also, if you're going to downvote it, at least tell me why; karma means nothing to me, even in increments of 5, but it makes others less likely to respond.
Jung qbrf rirelbar guvax bs Bcra Vaqvivqhnyvfz, rkcynvarq ol Rqjneq Zvyyre nf gur pbaprcg juvpu cbfvgf:
... gung gurer vf bayl bar crefba va gur havirefr, lbh, naq rirelbar lbh frr nebhaq lbh vf ernyyl whfg lbh.
Gur...
Another day, another (controversial) opinion!
I think this misunderstands the state of modern complexity theory.
There are lots of NP-complete problems that are well known to have highly accurate approximations that can be computed efficiently. The knapsack problem and traveling-salesperson in 2D Euclidean space are both examples of this. Unfortunately, having an epsilon-close approximation for one NP-complete problem doesn't necessarily help you on other NP-complete problems.
There's nothing particularly magic about evolutionary algorithms here. Any sensible local search will often work well on instances of NP-complete problems.
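To make that concrete, here is a minimal sketch (in Python, with made-up numbers) of the kind of simple heuristic that often does well on knapsack instances: greedy by value density, patched with the best single item, which yields the classic 1/2-approximation guarantee.

```python
def greedy_knapsack(items, capacity):
    # items: list of (value, weight) pairs.
    # Greedy by value density, then compare against the best single item;
    # the better of the two is guaranteed at least half the optimal value.
    by_density = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total_value = total_weight = 0
    for value, weight in by_density:
        if total_weight + weight <= capacity:
            total_value += value
            total_weight += weight
    best_single = max((v for v, w in items if w <= capacity), default=0)
    return max(total_value, best_single)

print(greedy_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 160 (optimum: 220)
```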
There is another video from the same author, explaining his opinions on LW. It takes two minutes just to get to LW, so here are the important parts: ---
The Sequences are hundreds and hundreds of blog posts, written by one man. They are like a catechism, teaching strange vocabulary like "winning", "paying rent", "mindkilling", "being Bayesian".
The claim that Bayes theorem, which is just a footnote in statistics textbooks, has the power to reshape your thinking so that you can maximize the outcomes of your life... has no evidence. You can't reduce the complexity of life to simple probabilities. EY is a high-school dropout and he has no peer-reviewed articles.
People on LW say that criticism of LW is upvoted. Actually, that "criticism" does not disagree with anything -- it just asks MIRI to be more specific. Is that LW's best defense against accusations of cultishness?
The LW community believes in the Singularity, which, again, has no evidence, and the scientific community does not support it. MIRI asks for your money, and does not say how specifically it will be used to save the world.
LW claims that politics is the mindkiller, yet EY admits that he is a libertarian. Most of MIRI's money comes from Peter Thiel -- a right-wing libertarian billionaire.
Roko's basilisk...
...and these guys pretend to be skeptics?
Now let's look at CFAR. They have EY on their board, and they force you to read the Sequences if you want to join them.
Julia Galef is a rising star in the skeptical movement; she has a podcast, "Rationally Speaking". But she is connected with LW, she believes in Bayes theorem, and she only criticizes the political left. She is obviously used as the face of the LW movement because she is pretty! -- This is sexism on LW's part, because men at LW agree in comments that Julia is pretty. If they weren't sexist, they would talk about how smart she is.
People like this are not skeptics and should not be invited to Skepticon!
Chorus ... We should help him read the sequences ... shambles forward
The anti-LW'ers have become quite the community themselves; the video references XiXiDu and others.
It's thoroughly entertaining, the music especially.
Edit: I must say I found this statement by the video's author quite illuminating in regard to his strong discounting of Bayesian reasoning:
To his benefit, Dmytry explained it to him, and now all is well again.