Why doesn't CFAR just record one of the workshops and put it on YouTube? Or at least put the notes online and update them each time they change for the next workshop? These two things seem like they would take very little effort, and while not perfect, they would be a good middle ground for those unable to attend a workshop.
I can definitely appreciate the idea that person-to-person learning can't be matched by these, but it seems to me that if the goal is to help the world through rationality, and not to make money by forcing people to attend workshops, then something like recording would make sense. (Not an attack on CFAR, just a question from someone not overly familiar with it.)
I'm a keen swing dancer. Over the past year or so, a pair of internationally reputable swing dance teachers have been running something called "Swing 90X" (riffing off P90X). The idea is that you establish a local practice group, film your progress, submit your recordings to them, and they give you exercises and feedback over the course of 90 days. By the end of it, you're a significantly more badass dancer.
It would obviously be better if everything happened in person (and a lot does happen in person; there's a massive international swing dance scene), but time, money, and travel constraints make this prohibitively difficult for a lot of people. The whole Swing 90X thing is a response to this, and it's significantly better than the next best thing.
It's worth considering whether a similar sort of model could work for CFAR training.
One of the core ideas of CFAR is to develop tools for teaching rationality. For that purpose it's useful to avoid making the course material completely open at this point. CFAR wants to publish scientific papers that validate its ideas about teaching rationality.
Doing things in person helps with running experiments, and those experiments might give less clear results if some participants have already viewed the lectures online.
I think one of my very favorite things about commenting on Less Wrong is that when you make a short statement or ask a question, people will usually just respond to what you said, rather than taking it as a signal to attack whatever tribe they think the question implies you belong to.
This article, written by Dreeves's wife, has displaced Yvain's polyamory essay as the most interesting relationships article I've read this year. The basic idea is that instead of trying to split chores or common goods equally, you use auctions. For example, if the bathroom needs to be cleaned, each partner says how much they'd be willing to clean it for. The person with the higher bid pays what the other person bid, and that other person does the cleaning.
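A minimal sketch of that mechanism as described above (the names, dollar amounts, and tie-breaking rule are my own invention):

```python
def chore_auction(bid_a, bid_b, name_a="A", name_b="B"):
    """Each partner bids the payment they'd accept to do the chore.
    The lower bidder does the chore and is paid their own bid by
    the higher bidder. Ties go to the first bidder."""
    if bid_a <= bid_b:
        return name_a, name_b, bid_a   # doer, payer, price
    return name_b, name_a, bid_b

doer, payer, price = chore_auction(30, 50, "Alice", "Bob")
print(f"{doer} cleans; {payer} pays {doer} ${price}.")
# -> Alice cleans; Bob pays Alice $30.
```

One mechanism-design note: paying the doer their own bid makes this a first-price auction, where each person has some incentive to inflate their bid; paying the doer the other person's (higher) bid instead would make it a Vickrey-style second-price auction, where bidding your true price is the dominant strategy.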
It's easy to see why commenters accused them of being libertarian. But I think egalitarians should examine this system too. Most couples agree that chores and common goods should be split equally. But what does "equally" mean? It's hard to quantify exactly how much each person contributes to a relationship, and this ambiguity allows the more powerful person to exaggerate their contributions and pressure the weaker person into doing more than their fair share. Auctions safeguard against this abuse by requiring participants to quantify how much they value each task.
For example, feminists argue that women do more domestic chores than men, and that these chores go unnoticed by men. Men do a little bit, but because men don't see all the work...
This sounds interesting for cases where both parties are economically secure.
However I can't see it working in my case since my housemates each earn somewhere around ten times what I do. Under this system, my bids would always be lowest and I would do all the chores without exception. While I would feel unable to turn down this chance to earn money, my status would drop from that of an equal to that of a servant. I would find this unacceptable.
Wasn't it Ariely's Predictably Irrational that went over market norms vs. social norms? If you just had ordinary people start doing this, I would guess it would crash and burn for the obvious market-norm reasons (the urge to game the system, basically). And some ew-squick power-disparity stuff if this is ever enforced by a third party, or even by social pressure.
Most couples agree that chores and common goods should be split equally.
I'm skeptical that most couples agree with this.
Anyway, all of these types of 'chore division' systems that I've seen so far totally disregard human psychology. Remember that the goal isn't to have a fair chore system. The goal is to have a system that preserves a happy and stable relationship. If the resulting system winds up not being 'fair', that's ok.
Wow, someone else thought of doing this too!
My roommate and I started doing this a year ago. It went pretty well for the first few months. Then our neighbor heard about how much we were paying each other for chores and started outbidding us.
Then our neighbor heard about how much we were paying each other for chores and started outbidding us.
This is one of the features of this policy, actually: you can use it as a natural measure of which tasks you should outsource. If a maid would cost $20 to clean the apartment, and you and your roommates all want at least $50 to do it, then the efficient thing to do is to hire a maid.
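A tiny sketch of that decision rule (the function name and framing are mine; the $20/$50 numbers are from the comment above):

```python
def clean_or_outsource(bids, market_price):
    """If every housemate demands more than the market price,
    outsourcing is the efficient choice; otherwise the lowest
    bidder does the chore at their bid."""
    if min(bids) > market_price:
        return f"hire out for ${market_price}"
    return f"lowest bidder cleans for ${min(bids)}"

print(clean_or_outsource([50, 60, 55], market_price=20))
# -> hire out for $20
```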
The polyamory and BDSM subcultures prove that nerds can create new social rules that improve sex. Of course, you can't just theorize about what the best social rules would be and then declare that you've "solved the problem." But when you see people living happier lives as a result of changing their social rules, there's nothing wrong with inviting other people to take a look.
I don't understand your postscript. I didn't say there is no inequality in chore division because, if there were, a chore market would have removed it. I said a chore market would have more equality than the standard each-person-does-what-they-think-is-fair system. Your response seems like a fully general counterargument: anyone who proposes a way to reduce inequality can be accused of denying that the inequality exists.
I think it's much better than monthly open threads - back then, I would sometimes think "Hmm, I'd like to ask this in an open thread, but the last one is too old, nobody's looking at it any more".
I prefer it to the old format; once a month is too clumpy for an open thread. It was fine when this was a two-man blog, but not for a discussion forum.
Last week, I gave a presentation at the Boston meetup, about using causal graphs to understand bias in the medical literature. Some of you requested the slides, so I have uploaded them at http://scholar.harvard.edu/files/huitfeldt/files/using_causal_graphs_to_understand_bias_in_the_medical_literature.pptx
Note that this is intended as a "Causality for non-majors" type presentation. If you need a higher level of precision, and are able to follow the maths, you would be much better off reading Pearl's book.
(Edited to change file location)
If you have the time, I heartily recommend Ben Polak's Introduction to Game Theory lectures. They are highly watchable and give a very solid introduction to the topic.
In terms of books, The Strategy of Conflict is the classic popular work, and it's good, but it's very much a product of its time. I imagine there are more accessible books out there. Yvain recommends The Art of Strategy, which I haven't read.
A word of warning: you will probably draw all sorts of wacky conclusions about human interaction when first dabbling with game theory. There is huge potential for hatching beliefs that you may later regret expressing, especially on politically-charged subjects.
What's the name of the bias/fallacy/phenomenon where you learn something (new information, an approach, a calculation, a way of thinking, ...) but after a while revert to your old ideas/habits/views?
I don't know how technically viable hyperloop is, but it seems especially well suited for the United States.
Investing in a hyperloop system doesn't make as much sense in Europe or Japan for a number of reasons:
European and Japanese cities are closer together, so Hyperloop's long acceleration times are a larger relative penalty in terms of speed; the existing HSR systems reach their lower top speeds more quickly. (A rough calculation of this penalty appears after this list.)
Most European countries and Japan already have decent HSR systems and are set to decline in population. Big new infrastructure projects tend not to make as much sense when populations are declining and the ratio of infrastructure cost to population is increasing by default.
Existing HSR systems create a natural political enemy for Hyperloop proposals. For most countries, having both HSR and Hyperloop doesn't make sense.
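Here is the rough calculation mentioned above; the top speed, comfort limit, and city spacings are all numbers I've assumed for illustration:

```python
# Why acceleration time hurts short routes more. Assumes a simple
# trapezoidal speed profile: accelerate, cruise, decelerate.
V = 333.0  # assumed top speed, m/s (~1,200 km/h)
A = 4.9    # assumed acceleration, m/s^2 (~0.5 g comfort limit)

def trip_time(distance_m):
    # With a trapezoidal profile the total time simplifies to
    # distance/V + V/A (one V/A of "lost" time per trip).
    return distance_m / V + V / A

for km in (300, 600):  # roughly European vs. US city spacing
    ideal = km * 1000 / V
    extra = trip_time(km * 1000) - ideal
    print(f"{km} km: extra {extra:.0f} s = {100 * extra / ideal:.1f}% of ideal time")
# 300 km: extra 68 s = 7.5% of ideal time
# 600 km: extra 68 s = 3.8% of ideal time
```

The absolute time lost to acceleration is fixed, so the shorter the route, the bigger the relative penalty.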
In contrast, the US seems far better suited:
The US is set for a massive population increase, requiring large new investments in transportation infrastructure in any case.
The US has lots of large but far-flung cities, so long acceleration times are not as much of a relative penalty.
The US has little existing HSR to act as a competitor. The political class h
Don't forget Australia. We have a few large cities separated by long distances. In particular, Melbourne to Sydney is one of the highest-traffic air routes in the world, roughly the same distance as the proposed Hyperloop, and there has been on-and-off talk of high-speed rail links. Additionally, Sydney Airport has a curfew and is more or less operating at capacity. Offloading Melbourne-bound passengers to a cheaper, faster option would free up more flights for other destinations.
I lost an AI box experiment today on IRC against PatrickRobotham, with me as the AI. If anyone else wants to play against me, PM me here or contact me on #lesswrong.
When you're trying to raise the sanity waterline, dredging the swamps can be a hazardous occupation. Indian rationalist skeptic Narendra Dabholkar was assassinated this morning.
Political activism, especially in the third world, is inherently dangerous, whether or not it is rationality-related.
So, are $POORETHNICGROUP so poor, badly off, and socially failed because they are about 15 IQ points stupider than $RICHETHNICGROUP? No, it may be the other way around: poverty directly costs you around 15 IQ points on average.
Or so say Anandi Mani et al., "Poverty Impedes Cognitive Function", Science 341, 976 (2013); DOI: 10.1126/science.1238041. A PDF while it lasts (from the nice person with the candy on /r/scholar), and the newspaper article I first spotted it in. The authors have written quite a lot of papers on this subject.
The racists claim that this is irrelevant because of research that corrects for socioeconomic status and still finds IQ differences. Of course, researchers have found plenty of evidence of important environmental influences on IQ not measured by SES. It seems especially bad for the racial-realist hypothesis that people who, for example, identify as "black" in America have the same IQ disadvantage compared to whites whether their ancestry is 4% European or 40% European; how much African vs. European ancestry someone has seems to matter only indirectly to the IQ effects, which seem to follow directly from whichever artificial, simplified category someone is identified as belonging to.
Sorry if this has been asked before, but can someone explain to me whether there is any selfish reason to join Alcor while one is in good health? If I die suddenly, it will be too late to join, but even if I had joined, it seems unlikely that they would get to me in time.
The only reason I can think of is to support Alcor.
There is a Google Doc circulating for people who are moving to the Bay Area soonish.
Any tips for people moving in, from those who are already there?
If you have available rooms or houses, let Nick Ryder know.
Artificial intelligence and Solomonoff induction: what to read?
Olle Häggström, Professor of Mathematical Statistics at Chalmers University of Technology, reads some of Marcus Hutter's work, comes away unimpressed, and asks for recommendations.
...One concept that is sometimes claimed to be of central importance in contemporary AGI research is the so-called AIXI formalism. [...] In the presentation, Hutter advises us to consult his book Universal Artificial Intelligence. Before embarking on that, however, I decided to try one of the two papers that he also di
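For readers who haven't met it: AIXI is Hutter's definition of an optimal reinforcement-learning agent. Roughly, in the notation of his papers (k the current step, m the horizon, U a universal Turing machine, ℓ(q) the length of program q, and the o's and r's observations and rewards), the agent chooses

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} (r_k + \cdots + r_m) \sum_{q \,:\, U(q,\, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$

i.e. expected-reward maximization under the Solomonoff prior, which weights each environment-program q by 2^(-length).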
Has anyone done a good analysis of the expected value of purchasing health insurance? I will need to purchase health insurance when I turn 26. How comprehensive should the insurance I purchase be?
At first I thought I should purchase a high-deductible plan that only protects against catastrophes. I have low living expenses and considerable savings, so this wouldn't be risky. The logic here is that insurance costs the expected value of the goods provided plus overhead, so the cost of insurance will always be more than its expected value. If I purchase less insurance, I wa...
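As a starting point, here is a minimal Monte Carlo sketch of the comparison being weighed; the loss distribution, premiums, and deductibles are all made-up numbers, not real quotes:

```python
import random

random.seed(0)

def annual_medical_cost():
    """Toy loss model: usually modest costs, rarely a catastrophe.
    Every number here is invented for illustration only."""
    if random.random() < 0.01:            # 1% chance of catastrophe
        return random.uniform(50_000, 500_000)
    return random.uniform(0, 3_000)

def yearly_outlay(cost, deductible, premium):
    """Premium plus everything below the deductible."""
    return premium + min(cost, deductible)

N = 100_000
costs = [annual_medical_cost() for _ in range(N)]

# Hypothetical plans; real premiums bake in the insurer's overhead,
# which is why insurance is EV-negative for the buyer on average.
plans = [("high-deductible", 10_000, 1_200),
         ("comprehensive", 500, 4_800)]
for name, deductible, premium in plans:
    avg = sum(yearly_outlay(c, deductible, premium) for c in costs) / N
    print(f"{name}: average yearly outlay ${avg:,.0f}")
```

On numbers like these, the high-deductible plan comes out ahead on average while still capping the worst-case loss, which is the logic of the comment above.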
If you had to group Less Wrong content into eight categories by subject matter, what would those categories be?
This essay on internet forum behavior by the people behind Discourse is the greatest thing I've seen in the genre in the past two or three years. It rivals even some of the epic examples of Wikipedian rule-lawyering that I've witnessed.
Their aggregation of common internet forum rules could have been done by anyone, but they were the ones who actually did it. My confidence in Discourse's success has increased.
We wonder about the moral impact of dust specks in the eyes of 3^^^3 people.
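(For anyone new to the notation, 3^^^3 is Knuth's up-arrow notation, where each extra arrow iterates the operation below it:

$$3\uparrow 3 = 3^3 = 27, \qquad 3\uparrow\uparrow 3 = 3^{3^3} \approx 7.6\times 10^{12}, \qquad 3\uparrow\uparrow\uparrow 3 = 3\uparrow\uparrow(3\uparrow\uparrow 3)$$

so 3^^^3 is a power tower of 3s roughly 7.6 trillion levels tall.)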
What about dust specks in the eyes of 3^^^3 poodles? Or, more to the point, what is the moral cost of killing one person vs. one poodle? How many poodles' lives would we trade for the life of one person?
Or even within humans, is it human years we would count in coming up with moral equivalences? Do we discount humans who are less smart, on the theory that we almost certainly discount poodles against humans because they are not as smart as us? Do we discount evil humans com...
No, but I strongly suspect that all Earthly life without a frontal cortex would be regarded by my idealized morals as a more complicated paperclip. There may be exceptions: I have heard rumors that octopi pass the mirror test, and I will not be eating any octopus meat until that is resolved, because even in a world where I eat meat because optimizing my diet is more important and my civilization lets me get away with it, I do not eat anything that recognizes itself in a mirror. So a spider is a definite no, a chimpanzee is an extremely probable yes, a day-old human infant is an extremely probable no (though there are non-sentience-related reasons for me to care in that case), and pigs I am genuinely unsure of.
I've got an (IMHO) interesting discussion article written up, but I am unable to post it; I get a "webpage cannot be found" error when I try. I'm using IE 9. Is this a known issue, or have I done something wrong?
Here's a question that's been distracting me for the last few hours, and I want to get it out of my head so I can think about something else.
You're walking down an alley after making a bank withdrawal of a small sum of money. Just about when you realize this may have been a mistake, two Muggers appear from either side of the alley, blocking trivial escapes.
Mugger A: "Hi there. Give me all of that money or I will inflict 3^^^3 disutility on your utility function."
Mugger B: "Hi there. Give me all of that money or I will inflict maximum disutil...
That's not fighting the hypothetical. Fighting the hypothetical is first paying one, then telling the other you'll go back to the bank to pay him too. Or pulling out your kung fu skills; that really is fighting the hypothetical.
I wonder if it makes sense to have something like a registry of the LW regulars who are experts in certain areas. For example, this forum has a number of trained mathematicians, philosophers, computer scientists...
Something like a table containing [nick, general area, training/credentials, area of interest, additional info (e.g. personal site)], maybe?
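A minimal sketch of what one row of such a table could look like as a data structure (the field names and the example entry are invented):

```python
from dataclasses import dataclass

@dataclass
class RegistryEntry:
    nick: str           # LW username
    general_area: str   # e.g. mathematics, philosophy, CS
    credentials: str    # training / degrees
    interests: str      # current areas of interest
    info: str = ""      # additional info, e.g. personal site

# A made-up example row:
example = RegistryEntry(
    nick="example_user",
    general_area="mathematics",
    credentials="PhD student, probability theory",
    interests="statistics, game theory",
    info="https://example.com",
)
print(example)
```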
This is unrelated to rationality, but I'm posting it here in case someone decides it serves their goals to help me be more effective in mine.
I recently bought a computer, used it for a while, then decided I didn't want it. What's the simplest way to securely wipe the hard drive before returning it? Is it necessary to create an external boot volume (via USB or optical disc)?
I don't suppose there are any regularly scheduled LW meetups in San Diego, are there? I'll be there this week from Saturday to Wednesday for a conference.
Has anyone done a study on redundant information in languages?
I'm just mildly curious, because a back-of-the-envelope calculation suggests that English is about 4.7x redundant: written English has a raw capacity of about log2(27) ≈ 4.75 bits per character (26 letters plus space), against an actual entropy of roughly 1 bit per character by Shannon's classic estimate. Which, on a side note, explains how we can esiayl regnovze eevn hrriofclly msispled wrods.
(Actually, that would be an interesting experiment - remove or replace fraction x of the letters in a paragraph and see at what average x participants can no longer make a "corrected" copy.)
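A minimal sketch of that experiment's stimulus generation (the deletion marker and the choice of fractions are arbitrary):

```python
import random

def degrade(text, x, marker="_"):
    """Replace a random fraction x of the letters in `text` with
    `marker`, leaving spaces and punctuation intact."""
    chars = list(text)
    letter_positions = [i for i, c in enumerate(chars) if c.isalpha()]
    k = round(x * len(letter_positions))
    for i in random.sample(letter_positions, k):
        chars[i] = marker
    return "".join(chars)

random.seed(1)
sample = "The quick brown fox jumps over the lazy dog."
for x in (0.2, 0.4, 0.6):
    print(f"x={x}: {degrade(sample, x)}")
# Sweep x upward until participants can no longer reconstruct the
# original; the threshold gives a crude lower bound on redundancy.
```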
I'd predict that Chinese is much less redundant in its spoken form, and that I have no idea how to measure redundancy in its written form. (By stroke? By radical?)
Consider the following scenario. Suppose it can be shown that the laws of physics imply that if we do a certain action (costing 5 utils to perform), then in 1/googol of our descendant universes, 3^^^3 utils can be generated. Intuitively, it seems that we should do this action! (At least to me.) But this scenario also seems isomorphic to a Pascal's mugging situation. What is different?
If I attempt to describe the thought process that leads to these differences, it seems to be something like this. What is the measure of the causal descendants where 3^^^3...
A new study shows that manipulative behavior could be linked to the development of some forms of altruism. The study itself is unfortunately behind a paywall.
This paper about AI by Hector J. Levesque seems interesting: http://www.cs.toronto.edu/~hector/Papers/ijcai-13-paper.pdf
It extensively discusses 'Winograd schema questions'; a canonical example is: "The trophy would not fit in the brown suitcase because it was too big. What was too big?" If you want more examples, there is a list here: http://www.cs.nyu.edu/faculty/davise/papers/WS.html
The paper's abstract does a fairly good job of summing it up, although it doesn't explicitly mention Winograd schema questions:
...The science of AI is concerned with the study of intelligent forms of behaviour in computational terms. But wh
I have made it up to episode 5 of Umineko, and I've found one incident in particular unusually easy to resolve (easy enough that, even though the answer hasn't been suggested by anyone in-game, I am sure I know how it was or could be done); I'm wondering how much of that is due to specialized knowledge, and whether it really looks harder to other people. (Because of the curse of knowledge, it's now difficult for me to see whether the puzzle really is as trivial as it looks to me.) So, a little poll, even though LWers are not the best people to ask.
In episode 5, a...
V pna guvax bs guerr jnlf bs qbvat guvf gevpx.
Ur uvq sbhe fyvcf bs cncre, bar sbe rnpu frnfba. Cerfhznoyl ur jvyy erzbir gur bgure guerr ng gur svefg bccbeghavgl.
Ur unf qbar fbzr erfrnepu gb qvfpbire fbzr snpg nobhg ure gb hfr va uvf qrzbafgengvba.
Fur unf hfrq ure snibevgr frnfba nf gur nafjre gb n frphevgl dhrfgvba ba n jro fvgr gung ur unf nqzva-yriry npprff gb.
Gurer znl or bgure jnlf. Jvgu fb znal, V pnaabg or irel fher gung nal fvatyr bar gung V pubbfr vf evtug.
I have never consciously noticed a dust speck going into my eye; at least, I don't remember it. That means it didn't make a big enough impact on my mind to leave a lasting impression on my memory. When I first read the post about dust specks and torture, I had to think hard about what a speck going into your eye even means.
Does this mean that I should attribute zero negative utility to a dust speck going into my eye?
Do consequentialists generally hold as axiomatic that there must be a morally preferable choice (or conceivably multiple equally preferable choices) in a given situation? If so, could somebody point me to a deeper discussion of this axiom (it probably has a name, which I don't know.)
Um... In the HPMOR notes section, this little thing got mentioned.
"I am auctioning off A Day Of My Time, to do with as the buyer pleases – this could include delivering a talk at your company, advising on your fiction novel in progress, applying advanced rationality skillz to a problem which is tying your brain in knots, or confiding the secret answer to the hard problem of conscious experience (it’s not as exciting as it sounds). I retain the right to refuse bids which would violate my ethics or aesthetics. Disposition of funds as above."
That...
I enjoyed this non-technical piece about the life of Kolmogorov, who is responsible for a commonly used measure of complexity (Kolmogorov complexity) as well as the now-standard axiomatization of probability. I wanted to share: http://nautil.us/issue/4/the-unlikely/the-man-who-invented-modern-probability
I find the idea of commitment devices strongly aversive. If I change my mind about doing something in the future, I want to be able to do whatever I choose to do, and don't want my past self to create negative repercussions for me if I change my mind.
If it's worth saying, but not worth its own post (even in Discussion), then it goes here.