If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Commercials sound funnier if you mentally replace "up to" with "no more than."

0Sabiola11y
Also easier to translate. In fact, we often translate "up to" with "maximaal", the equivalent of "up to a maximum of" in Dutch. But of course that only translates the practical sense, and leaves out the implication of "up to a maximum of xx (and that is a LOT)". We could translate it with "wel" ("wel xx" ~ "even as much as xx"), but in most contexts, that sounds really... American, over the top, exaggerated. And also it doesn't sound exact enough, when it clearly is intended to be a hard limit.

Why doesn't CFAR just tape record one of the workshops and throw it on YouTube? Or at least put the notes online and update them each time they change for the next workshop? It seems like these two things would take very little effort, and while not perfect, would be a good middle ground for those unable to attend a workshop.

I can definitely appreciate the idea that person-to-person learning can't be matched with these, but it seems to me that if the goal is to help the world through rationality, and not to make money by forcing people to attend workshops, then something like tape recording would make sense. (not an attack on CFAR, just a question from someone not overly familiar with it).

I'm a keen swing dancer. Over the past year or so, a pair of internationally reputable swing dance teachers have been running something called "Swing 90X" (riffing off P90X). The idea is that you establish a local practice group, film your progress, submit your recordings to them, and they give you exercises and feedback over the course of 90 days. By the end of it, you're a significantly more badass dancer.

It would obviously be better if everything happened in person (and a lot does happen in person; there's a massive international swing dance scene), but time, money and travel constraints make this prohibitively difficult for a lot of people, and the whole Swing 90X thing is a response to this, which is significantly better than the next best thing.

It's worth considering whether a similar sort of model could work for CFAR training.

One of the core ideas of CFAR is to develop tools to teach rationality. For that purpose it's useful to avoid making the course material completely open at this point in time. CFAR wants to publish scientific papers that validate their ideas about teaching rationality.

Doing things in person helps with running experiments and those experiments might be less clear when some people already viewed the lectures online.

7pan11y
I guess I don't see why the two are mutually exclusive. I doubt everyone would stop attending workshops if the material was freely available, and I don't understand why something can't be published if it's open-sourced first?
4Frood11y
I'm guessing that the goal here is to gather information on how to teach rationality to the 'average' person? As in, the person off the street who's never asked themselves "what do I think I know and how do I think I know it?". But as far as I can tell, LWers make up a large portion of the workshop attendees. Many of us will have already spent enough time reading articles/sequences about related topics that it's as if we've "already viewed the lectures online". Also, it's not as if the entire internet is going to flock to the content the second that it gets posted. There will still be an endless pool of people to use in the experiments. And wouldn't the experiments be more informative if the data points weren't all paying participants with rationality as a high priority? Shouldn't the experiments involve trying to teach a random class of high-schoolers or something? What am I missing?
1ChristianKl11y
As far as I understand, that isn't the case. They do give out scholarships, so not everyone pays. I also think that they test the techniques outside of the workshops. Doing research costs money, and CFAR seems to want to fund itself through workshop fees. If they focused on high-school classes, they would need a different source of funding.
5Ben Pace11y
Is a CFAR workshop like a lecture? I thought it would be closer to a group discussion, and perhaps subgroups within. This would make a recording highly unfocused and difficult to follow.
4somervta11y
Any one unit in the workshop is probably something in between a lecture, a practice session and a discussion between the instructor and the attendees. Each unit is different in this respect. For most of the units, a recording of a session would probably not be very useful on its own.
4somervta11y
(April 2013 Workshop Attendee) (The argument is that) A lot of the CFAR workshop material is very context dependent, and would lose significant value if distilled into text or video. Personally speaking, a lot of what I got out of the workshop was only achievable in the intensive environment - the casual discussion about the material, the reasons behind why you might want to do something, etc - a lot of it can't be conveyed in a one hour video. Now, maybe CFAR could go ahead and try to get at least some of the content value into videos, etc, but that has two concerns. One is the reputational problem with 'publishing' lesser-quality material, and the other is sorta-almost akin to the 'valley of bad rationality'. If you teach someone, say, the mechanics of aversion therapy, but not when to use it, or they learn a superficial version of the principle, that can be worse than never having learned it at all, and it seems plausible that this is true of some of the CFAR material also.
3pan11y
I agree that there are concerns, and you would lose a lot of the depth, but my real concern is with how this makes me perceive CFAR. When I am told that there are things I can't see/hear until I pay money, it makes me feel like it's all some sort of money making scheme, and question whether the goal is actually just to teach as many people as much as possible, or just to maximize revenue. Again, let me clarify that I'm not trying to attack CFAR, I believe that they probably are an honest and good thing, but I'm trying to convey how I initially feel when I'm told that I can't get certain material until I pay money. It's akin to my personal heuristic of never taking advice from anyone who stands to gain from my decision. Being told by people at CFAR that I can't see this material until I pay the money is the opposite of how I want to decide to attend a workshop, I instead want to see the tapes or read the raw material and decide on my own that I would benefit from being in person.
4metastable11y
Yeah, I feel these objections, and I don't think your heuristic is bad. I would say, though, and I hold no brief for CFAR, never having donated or attended a workshop, that there is another heuristic possibly worth considering: generally more valuable products are not free. There are many exceptions to this, and it is possible for sellers to counterhack this common heuristic by using higher prices to falsely signal higher quality to consumers. But the heuristic is not worthless, it just has to be applied carefully.
2palladias11y
We do offer some free classes in the Bay Area. As we beta-test tweaks or work on developing new material, we invite people in to give us feedback on classes in development. We don't charge for these test sessions, and, if you're local, you can sign up here. Obviously, this is unfortunately geographically limited. We do have a sample workshop schedule up, so you can get a sense of what we teach. If the written material online isn't enough, you can try to chat with one of us if we're in town (I dropped in on a NYC group at the beginning of August). Or you can drop in an application, and you'll automatically be chatting with one of us and can ask as many questions as you like in a one-on-one interview. Applying doesn't create any obligation to buy; the skype interview is meant to help both parties learn more about each other.
2tgb11y
While you have good points, I would like to say that making money is not unaligned with the goal of teaching as many people as possible. It seems like a good strategy is to develop high-quality material by starting off teaching only those able to pay. This lets some subsidize the development of more open course material. If they haven't gotten to the point where they have released the subsidized material, then I'd give them some more time and judge them again in some years. It's a young organization trying to create material from scratch in many areas.
1somervta11y
I feel your concerns, but tbh I think the main disconnect is the research/development vs teaching dichotomy, not (primarily) the considerations I mentioned. The volunteers at the workshop (who were previous attendees) were really quite emphatic about how much they had improved, including content and coherency as well as organization. (Relevant)

I think one of my very favorite things about commenting on LessWrong is that usually when you make a short statement or ask a question, people will just respond to what you said, rather than taking it as a cue to attack whatever tribe they think your question implies you belong to.

This article, written by Dreeves's wife, has displaced Yvain's polyamory essay as the most interesting relationships article I've read this year. The basic idea is that instead of trying to split chores or common goods equally, you use auctions. For example, if the bathroom needs to be cleaned, each partner says how much they'd be willing to clean it for. The person with the higher bid pays what the other person bid, and the lower bidder does the cleaning.
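Here's a minimal sketch of that two-person mechanism in Python (the names, dollar amounts, and tie-breaking rule are illustrative assumptions, not details from the article):

```python
def chore_auction(bid_a, bid_b):
    """Each bid is (name, least payment that person would accept to do
    the chore). The lower bidder does the chore; the higher bidder pays
    them the lower bid. Ties are broken arbitrarily (an assumption)."""
    (doer, low), _ = sorted([bid_a, bid_b], key=lambda bid: bid[1])
    return doer, low

# Alice would clean for $10; Bob would want $25, so Bob pays Alice $10.
print(chore_auction(("alice", 10.0), ("bob", 25.0)))  # ('alice', 10.0)
```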

It's easy to see why commenters accused them of being libertarian. But I think egalitarians should examine this system too. Most couples agree that chores and common goods should be split equally. But what does "equally" mean? It's hard to quantify exactly how much each person contributes to a relationship. This allows the more powerful person to exaggerate their contributions and pressure the weaker person into doing more than their fair share. But auctions safeguard against this abuse by requiring participants to quantify how much they value each task.

For example, feminists argue that women do more domestic chores than men, and that these chores go unnoticed by men. Men do a little bit, but because men don't see all the work... (read more)

This sounds interesting for cases where both parties are economically secure.

However I can't see it working in my case since my housemates each earn somewhere around ten times what I do. Under this system, my bids would always be lowest and I would do all the chores without exception. While I would feel unable to turn down this chance to earn money, my status would drop from that of an equal to that of a servant. I would find this unacceptable.

4Viliam_Bur11y
I believe you are wrong. (Or I am; in which case please explain to me how.) Here is what I would do if I lived with a bunch of millionaires, assuming my money is limited:

The first time, I would ask a realistic price X, and I would do the chores. I would put the money I gained aside into a "money I don't really own, because I will use it in the future to get my status back" budget. The second time, I would ask 1.5 × X. The third time, 2 × X. The fourth time, 3 × X. If asked, I would explain the change by saying: "I guess I was totally miscalibrated about how I value my time. Well, I'm learning. Sorry, this bidding system is so new and confusing to me." But I would act like I am not really required to explain anything.

Let's assume I always do the chores. Then my income grows exponentially, which is a nice thing per se, but most importantly, it cannot continue forever. At some moment, my bid would be so insanely high that even Bill Gates would volunteer to do the chores instead. -- Which is completely okay for me, because I would pay him the $1000000000 per hour from my "get the status back" budget, which at that point already contains the money.

That's it. Keep your money from chores in a separate budget and use it only to pay others for doing the chores. Increase or decrease the bids depending on the state of that budget. If the price becomes relatively stable, there is no way you would do more chores than the other people around you.

The only imbalance I can imagine is if you have a housemate A who always bids more than a housemate B, in which case you will end up between them, always doing more chores than A but fewer than B. Assuming there are 10 A's and 1 B, and B is considered very low status, this might result in a rather low status for you, too. -- The system merely guarantees you won't get the lowest status, even if you are the least wealthy person in the house; but you can still get the second-lowest place.
2Fronken11y
Could one not change the bidding to use "chore points" or somesuch? I mean, the system described is designed for spouses, but there's no reason it couldn't be adapted for you and your housemates.

Wasn't it Ariely's Predictably Irrational that went over market norms vs. tribe norms? If you just had ordinary people start doing this, I would guess it would crash and burn for the obvious market-norm reasons (the urge to game the system, basically). And some ew-squick power disparity stuff if this is ever enforced by a third party or even social pressure.

1maia11y
Empirically speaking, this system has worked in our house (of 7 people, for about 6 months so far). What kind of gaming the system were you thinking of? We do use social pressure: there is social pressure to do your contracted chores, and keep your chore point balance positive. This hasn't really created power disparities per se.
6someonewrongonthenet11y
If the idea is to say exactly how much you are willing to pay, there would be an incentive to:

1) Broadcast that you find all labor extra unpleasant and all goods extra valuable, to encourage people to bid high.

2) Bid artificially lower values when you know someone enjoys a labor / doesn't mind parting with a good and will bid accordingly.

In short, optimal play would involve deception, and it happens to be a deception of the sort that might not be difficult to commit subconsciously. You might deceive yourself into thinking you find a chore unpleasant - I have read experimental evidence to support the notion that intrinsically rewarding tasks lose some of their appeal when paired with extrinsic rewards.

No comment on whether the traditional way is any better or worse - I think these two testimonials are sufficient evidence for this to be worth trying for people who have a willing human tribe handy, despite the theoretical issues.

Edit: There is another, more pleasant problem: If you and I are engaged in trade, and I actually care about your utility function, that's going to affect the price. The whole point of this system is to communicate utility even after subtracting for the fact that you care about each other (otherwise why bother with a system?). Concrete example: We are trying to transfer ownership of a computer monitor, and I'm willing to give it to you for free because I care about you. But if I were to take that into account, then we are essentially back to the traditional method. I'd have to attempt to conjure up the value at which I'd sell the monitor to someone I was neutral towards.

Of course, you could just use this as an argument stopper - whenever there is real disagreement, you use money to effect an easy compromise. But then there is monetary pressure to be argumentative and difficult, and social pressure not to be - it would be socially awkward and monetarily advantageous if you were constantly the one who had a problem with unme
3maia11y
But if other people bid high, then you have to pay more. And they will know if you bid lower, because the auctions are public. How does this help you?

I don't understand how this helps you either; if you bid lower and therefore win the auction, then you have to do the chore for less than you value it at. That's no fun.

The way our system works, it actually gives the lowest bidder, not their actual bid, but the second-lowest bid minus 1; that way you don't have to have bidding wars, and can more or less just bid what you value it at. It does create the issue you mention - bid sniping: if you know what the lowest bidder will bid, you can bid just above it so they get as little as possible - but this is at the risk of having to actually do the chore for that little, because bids are binding.

I'd very much like to understand the issues you bring up, because if they are real problems, we might be able to take some stabs at solving them.

This has become somewhat of a norm in our house. We can pass around chore points in exchange for rides to places and so forth; it's useful, because you can ask for favors without using up your social capital. (Just your chore-points capital, which is easier to gain more of and more transparent.)
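A quick sketch of that payout rule (names, point values, and the assumption of integer chore points and arbitrary tie-breaking are mine):

```python
def house_chore_auction(bids):
    """Reverse second-price auction as described: the lowest bidder does
    the chore, but is paid (second-lowest bid - 1) chore points rather
    than their own bid, so roughly honest bidding is safe."""
    ranked = sorted(bids, key=bids.get)  # names, cheapest first
    doer, runner_up = ranked[0], ranked[1]
    return doer, bids[runner_up] - 1

bids = {"ann": 10, "ben": 30, "cal": 40}  # illustrative housemates/points
print(house_chore_auction(bids))  # ('ann', 29): ann cleans, gets 29 points
```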
3someonewrongonthenet11y
You only do this when you plan to be the buyer. The idea is to win the auction and become the buyer, while putting up as little money as possible. If you know that the other guy will do it for $5, you bid $6, even if you actually value it at $10. As you said, I'm talking about bid sniping.

Ah, I should have written "broadcast that you find all labor extra unpleasant and all goods extra valuable when you are the seller (giving up a good or doing a labour), so that people pay you more to do it." If you're willing to do a chore for $10, but you broadcast that you find it more than $10 of unpleasantness, the other party will be influenced to bid higher - say, $40. Then, you can bid $30, and get paid more. It's just price inflation - in a traditional transaction, a seller wants the buyer to pay as much as they are willing to pay. To do this, the seller must artificially inflate the buyer's perception of how much the item is worth to the seller. The same holds true here.

When you intend to be the buyer you do the opposite - broadcast that you're willing to do the labor for cheap to lower prices, then bid snipe. As in a traditional transaction, the buyer wants the seller to believe that the item is not of much worth to the buyer. The buyer also has to try to guess the minimum amount for which the seller will part with the item.

So what I wrote above assumed the price was a midpoint between the buyer's and seller's bids, which gives them both equal power to set the price. This rule slightly alters things by putting all the price-setting power in the buyer's hands. Under this rule, after all the deceptive price inflation is said and done, you should still bid an honest $10 if you are only playing once - though since this is an iterated case, you probably want to bid higher just to keep up appearances if you are trying to be deceptive. One of the nice things about this rule is that there is no incentive to be deceptive unless other people are bid sniping. The weakness of
3rocurley11y
(I'm one of the other users/devs of Choron.) There are two ways I know of that the market can try to defeat bid sniping, and one way a bidder can (that I know of).

Our system does not display the lowest bid, only the second-lowest bid. For a one-shot auction where you had poor information about the others' preferences, this would solve bid sniping. However, in our case, chores come up multiple times, and I'm pretty sure it's public knowledge how much I bid on shopping, for example.

If you're in a situation where the lowest bid is hidden, but your bidding is predictable, you can sometimes bid higher than you normally would. This punishes people who bid less than they're actually willing to do the chore for, but imposes costs on you and on the market as a whole as well, in the form of higher prices for the chore.

A third option, which we do not implement (credit to Richard for this idea), is to randomly award the auction to one of the two (or n) lowest bidders, with probability inversely related to their bid. In particular, if you pick between the lowest 2 bidders, both have claimed to be willing to do the job for the 2nd bidder's price (so the price isn't higher and no one can claim they were forced to do something for less than they wanted). This punishes bid-snipers by taking them at their word that they're willing to do the chore for the reduced price, at the cost of the determinism that allows better planning.
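A sketch of that randomized variant; the comment doesn't pin down the exact weighting, so probability inversely proportional to the bid is an assumption:

```python
import random

def randomized_award(bids, n=2):
    """Randomly award the chore to one of the n lowest bidders, with
    probability inversely related to their bid (assumed inversely
    proportional). The winner is paid the n-th lowest bid, which every
    candidate has claimed to accept - so bid-snipers are taken at
    their word."""
    ranked = sorted(bids, key=bids.get)[:n]          # n cheapest bidders
    weights = [1.0 / bids[name] for name in ranked]
    winner = random.choices(ranked, weights=weights, k=1)[0]
    return winner, bids[ranked[-1]]                  # pay the n-th lowest bid

print(randomized_award({"a": 30, "b": 40, "c": 50}))  # ('a' or 'b', 40)
```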
1someonewrongonthenet11y
And market efficiency. Plus, I think it doesn't work when there are only two players? If I honestly bid $30, and you bid $40 and randomly get awarded the auction, then I have to pay you $40. And that leaves me at -$10 disutility, since the task was only -$30 to me.
1rocurley11y
To be sure I'm following you: If the 2nd bidder gets it (for the same price as the first bidder), the market efficiency is lost because the 2nd person is indifferent between winning and not, while the first would have liked to win it? If so, I think that's right. If there are two players... I agree the first bidder is worse off than they would be if they had won. This seems like a special case of the above though: why is it more broken with 2 players?
1someonewrongonthenet11y
Yes, that's one of the inefficiencies. The other inefficiency is that whenever the 2nd player wins, the service gets more expensive.

And that is also why it's more broken with two players: because the service gets more expensive. When there are multiple players, this might not seem like such a big deal - sure, you might pay more than the cheapest possible price, but you are still ultimately all benefiting (even if you aren't maximally benefiting). Small market inefficiencies are tolerable. It's not so bad with 3 players who bid 20, 30, 40, since even if the 30-bidder wins, the other two players only have to pay 15 each. It's still inefficient, but it's not worse than no trade.

However, when your economy consists of two people, market inefficiency is felt more keenly. Consider the example I gave earlier once more: I bid 30. You bid 40. So I can sell you my service for $30-$40, and we both benefit. But wait! The coin flip makes you win the auction. So now I have to pay you $40. My stated preference is that I would not be willing to pay more than $30 for this service. But I am forced to do so. The market inefficiency has not merely resulted in a sub-optimal outcome - it's actually worse than if I had not traded at all!

Edit: What's worse is that you can name any price. So suppose it's just us two, I bid $10 and you bid $100, and it goes to the second bidder...
2rocurley11y
I don't think that the service gets more expensive under a second price auction (which Choron uses). If you bid $10 and I bid $100, normally it would go to you for $100. In the randomized case, it might go to me for $100. I think I agree with you about the possibility of harm in the 2 person case.
1someonewrongonthenet11y
Oh yes, that's right. I think I initially misunderstood the rules of the second-price auction - I thought it would be $10 to me or $100 to you, randomly chosen.
4Manfred11y
Yeah, bidding = deception. But in addition to someonewrong's answer, I was thinking you could just end up doing a shitty job at things (e.g. cleaning the bathroom). Which is to say, if this were an actual labor market, and not a method of communicating between people who like each other and have outside-the-market reasons to cooperate, the market doesn't have much competition.
2maia11y
Yeah, that's unfortunately not something we can really handle other than decreeing "Doing this chore entails doing X and it doesn't count if you don't do X." Enforcing the system isn't solved by the system itself. Good way to describe it.
0juliawise10y
Except she specifies that if they're bidding above market wages for a task (cleaning the bathroom would work fine), they'll just pay someone else to do it. Of course, chores like getting up to deal with a sick child are not so outsourceable.

Most couples agree that chores and common goods should be split equally.

I'm skeptical that most couples agree with this.

Anyway, all of these types of 'chore division' systems that I've seen so far totally disregard human psychology. Remember that the goal isn't to have a fair chore system. The goal is to have a system that preserves a happy and stable relationship. If the resulting system winds up not being 'fair', that's ok.

3A1987dM11y
Most couples worldwide, or most couples in W.E.I.R.D. societies?
8passive_fist11y
Both.

Wow someone else thought of doing this too!

My roommate and I started doing this a year ago. It went pretty well for the first few months. Then our neighbor heard about how much we were paying each other for chores and started outbidding us.

Then our neighbor heard about how much we were paying each other for chores and started outbidding us.

This is one of the features of this policy, actually - you can use this as a natural measure of what tasks you should outsource. If a maid would cost $20 to clean the apartment, and you and your roommates all want at least $50 to do it, then the efficient thing to do is to hire a maid.

7Viliam_Bur11y
The problem could be that they actually are willing to do it for $10, but it's a low-status thing to admit. If we both lived in the same apartment, and we both pretended that our time is so precious that we are only willing to clean the apartment for $1000... and I do it 50% of the time, and you do it 50% of the time, then in the end neither of us gets poor despite the unrealistic prices, because each of us gets all the money back. Now when a third person comes along and cares about money more than about status (which is easier for them, because they don't live in the same apartment with us), our pretending is exposed and we become either more honest or poor.
8Luke_A_Somers11y
I can see this working better than a dysfunctional household, but if you're both in the habit of just doing things, this is going to make everything worse.
1dreeves11y
Very fair point! Just like with Beeminder, if you're lucky enough to simply not suffer from akrasia then all the craziness with commitment devices is entirely superfluous. I liken it to literal myopia. If you don't have the problem then more power to you. If you do, then apply the requisite technology to fix it (glasses, commitment devices, decision auctions). But actually I think decision auctions are different. There's no such thing as not having the problem they solve: preferences will conflict sometimes. It's just that normal people have perfectly adequate approximations (turn-taking, feeling each other out, informal mental point systems, barter) to what we've formalized and nerded up with our decision auctions.
7Multiheaded11y
P.S.: those last two sentences ("No room for misogynist cultural machines to pressure the wife into doing more than her fair share. Just a market transaction that is efficient and fair.") also remind me of "If those women were really oppressed, someone would have tended to have freed them by then."

The polyamory and BDSM subcultures prove that nerds can create new social rules that improve sex. Of course, you can't just theorize about what the best social rules would be and then declare that you've "solved the problem." But when you see people living happier lives as a result of changing their social rules, there's nothing wrong with inviting other people to take a look.

I don't understand your postscript. I didn't say there is no inequality in chore division because if there were, a chore market would have removed it. I said a chore market would have more equality than the standard each-person-does-what-they-think-is-fair system. Your response seems like a fully generalized counterargument: anyone who proposes a way to reduce inequality can be accused of denying that the inequality exists.

8Nornagest11y
The modern BDSM culture's origins are somewhat obscure, but I don't think I'd be comfortable saying it was created by nerds despite its present demographics. The leather scene is only one of its cultural poles, but that's generally thought to have grown out of the post-WWII gay biker scene: not the nerdiest of subcultures, to say the least. I don't know as much about the origins of poly, but I suspect the same would likely be true there.
0fubarobfusco11y
Hmm, I don't know that I would consider those rules overall to be clearly superior for everyone, although they do reasonably well for me. Rather, I value the existence of different subcultures with different norms, so that people can choose those that suit their predilections and needs. (More politically: A "liberal" society composed of overlapping subcultures with different norms, in a context of individual rights and social support, seems to be almost certain to meet more people's needs than a "totalizing" society with a single set of norms.) There are certain of those social rules that seem to be pretty clear improvements to me, though — chiefly the increased care on the subject of consent. That's an improvement in a vanilla-monogamous-heteronormative subculture as well as a kink-poly-genderqueer one.
0Viliam_Bur11y
This works best if none of the "subcultures with different norms" creates huge negative externalities for the rest of the society. Otherwise, some people get angry. -- And then we need to go meta and create some global rules that either prevent the former from creating the externalities, or the latter from expressing their anger. I guess in the case of the BDSM subculture this works without problems. And I guess the test of the polyamorous community will be how well they treat their children (hopefully better than polygamous Mormons treat their sons), or perhaps how they will handle the poly equivalents of divorce, especially the economic aspects of it (if there is significant shared property).
5NancyLebovitz11y
One datapoint: I know of one household (two adults, one child) which worked out chores by having people list which chores they liked, which they tolerated, and which they hated. It turned out that there was enough intrinsic motivation to make taking care of the house work.
5maia11y
Roger and I wrote a web app for exactly this purpose - dividing chores via auction. This has worked well for chore management for a house of 7 roommates, for about 6 months so far. The feminism angle didn't even occur to us! It's just been really useful for dividing chores optimally.
3shminux11y
I can see it working when all parties are trustworthy and committed to fairness, which is a high threshold to begin with. Also, everyone has to buy into the idea of other people being autonomous agents, with no shoulds attached. Still, this might run into trouble when one party badly wants something flatly unacceptable to the other, and so is unable to afford it and ends up feeling resentful. One (unrelated) interesting quote:

Weekly open threads - how do you think it's working?

I think it's much better than monthly open threads - back then, I would sometimes think "Hmm, I'd like to ask this in an open thread, but the last one is too old, nobody's looking at it any more".

4Manfred11y
You haven't ever posted a top-level comment in a weekly open thread.

I have, and I agree with Emile's assessment.

0Tenoke11y
What has that to do with it?
4Manfred11y
Suppose we were wondering about changing the flavor of our pizza. Someone says "Yeah, I'm really glad you've got these new flavors on your menu, I used to think the old recipe was boring and didn't order it much." And then it turns out that this person hasn't ever actually tried any of your new flavors of pizza. Sort of sets an upper bound on how much the introduction of new flavors has impacted this person's behavior.
9Tenoke11y
You can judge a lot more about a thread than about a pizza by just looking at it. Also, if you seriously think that Open Threads can only be evaluated by people with top-level comments in them you probably misunderstand both how most people use the Open Threads and what is required to judge them.
-3Manfred11y
I think you can judge quite a lot about pizza without eating it. That merely wasn't what I was talking about. Don't bait and switch conversations please.
-8Tenoke11y
5bogdanb11y
Note that he didn’t say “I didn’t post much”, he just said that there existed times when he thought about posting but didn’t because of the age of the thread. That is useful evidence, you can’t just ignore it if it so happens that there are no instances of posting at all. (In pizza terms, Emile said “I used to think the old recipe was bad and I never ordered it.” It’s not that surprising in that case that there are no instances of ordering.)
5Emile11y
Sure! Though here it's more of a case of "once in a blue moon I go to the pizza place ... and I'm bored and tired of life ... and want to try something crazy for a change ... but then I see the same old stuff on the menu, and I think man, this world sucks ... but now they have the Sushi-Harissa-Livarot pizza, I know next time I'm going to feel better!" I agree it's a bit weird that I say that p(post | weekly thread) > p(post | monthly thread) when so far there are no instances of post|weekly thread.
3Kawoomba11y
Well, it's evidence for "Hmm, I'd like to ask this in an open thread, but the last one is too old, nobody's looking at it any more."
3Tenoke11y
Haha but no, Manfred says that he hasn't ever posted a top-level comment in a weekly open thread.

I prefer it to the old format; once a month is too clumpy for an open thread. It was fine when this was a two-man blog, but not for a discussion forum.

Last week, I gave a presentation at the Boston meetup, about using causal graphs to understand bias in the medical literature. Some of you requested the slides, so I have uploaded them at http://scholar.harvard.edu/files/huitfeldt/files/using_causal_graphs_to_understand_bias_in_the_medical_literature.pptx

Note that this is intended as a "Causality for non-majors" type presentation. If you need a higher level of precision, and are able to follow the maths, you would be much better off reading Pearl's book.

(Edited to change file location)

1Adele_L11y
Thanks for making these available. Even if you can follow the math, these sorts of things can be useful for orienting someone new to the field, or laying a conceptually simple map of the subject that can be elaborated on later. Sometimes, it's easier to use a map to get a feel for where things are than it is to explore directly.

I want to know more (i.e. anything) about game theory. What should I read?

If you have the time, I heartily recommend Ben Polak's Introduction to Game Theory lectures. They are highly watchable and give a very solid introduction to the topic.

In terms of books, The Strategy of Conflict is the classic popular work, and it's good, but it's very much a product of its time. I imagine there are more accessible books out there. Yvain recommends The Art of Strategy, which I haven't read.

0mstevens11y
I hate trying to learn things from videos, but the books look interesting.
4sixes_and_sevens11y
(If you want a specific link, here is Yvain's introduction to game theory sequence. There are some problems and inaccuracies with it which are generally discussed in comments, but as a quick overview aimed at a LW audience it should serve pretty well.)
2sixes_and_sevens11y
What are your motives for learning about it? If it's to gain a bare-bones understanding sufficient for following discussion in Less Wrong, existing Less Wrong articles would probably equip you well enough.
6mstevens11y
My possibly crazy theory is that game theory would be a good way to understand feminism.
2sixes_and_sevens11y
OK, I'm interested. Can you explain a little more?
2mstevens11y
It's a little bit intuition and might turn out to be daft, but a) I've read just enough about game theory in the past to know what the prisoner's dilemma is, and b) I was reading an argument/discussion on another blog about the scenario of men chatting up women who may or may not be interested, and various discussions on IRC with MixedNuts have given me the feeling that male/female interactions (which are obviously an area of central interest to feminism) are a similar class of thing, and possibly game theory will help me understand said feminism and/or opposition to it.
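For readers who want the reference point: here's a minimal prisoner's dilemma in Python, using standard textbook payoffs (nothing here is specific to the chatting-up scenario):

```python
# Standard prisoner's dilemma payoffs for the row player:
# T > R > P > S (here 5 > 3 > 1 > 0), so Defect strictly dominates,
# yet mutual cooperation (3, 3) beats mutual defection (1, 1).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def best_response(their_move):
    return max(["C", "D"], key=lambda mine: PAYOFF[(mine, their_move)])

assert best_response("C") == "D" and best_response("D") == "D"
print("Defection dominates; (D, D) is the unique equilibrium.")
```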

A word of warning: you will probably draw all sorts of wacky conclusions about human interaction when first dabbling with game theory. There is huge potential for hatching beliefs that you may later regret expressing, especially on politically-charged subjects.

5JQuinton11y
I also had the same intuition about male/female dynamics and the prisoner's dilemma. It also seems like a lot of men's behavior towards women is a result of a scarcity mentality. Surely there are some economic models that explain how people behave -- especially their bad behavior -- when they feel some product is scarce, and if these models were applied to male/female dynamics they might predict some behavior. But since feminism is such a mind-killing topic, I wouldn't feel too comfortable expressing alternative explanations (especially among non-rationalists), since people tend to feel that if you disagree with the explanation then you disagree with the normative goals.
1satt11y
One model which I've seen come up repeatedly in the humanities is the "marriage market". Unsurprisingly, economists seem to use this idea most often in the literature, but peeking through the Google Scholar hits I see demographers, sociologists, and historians too. (At least one political philosopher uses the idea too.) I don't know how predictive these models are. I haven't done a systematic review or anything remotely close to one, but when I've seen the marriage market metaphor used it's usually to explain an observation after the fact. Here is a specific example I spotted in Randall Collins's book "Violence: A Micro-sociological Theory". On pages 149 & 150 Collins offers this gloss on an escalating case of domestic violence:

(Digression: Collins calls this a sociological interpretation, but I usually associate this kind of bargaining power-based explanation with microeconomics or game theory, not sociology. Perhaps I should expand my idea of what constitutes sociology. After all, Collins is a sociologist, and he has partly melded the bargaining power-based explanation with his own micro-sociological theory of violence.)
2Viliam_Bur11y
All sciences are describing various aspects of the reality, but there is one reality, and all these aspects are connected. Asking whether some explanation belongs to science X or science Y is useful when we want to find the best tools to deal with it; but the more important question is whether the explanation is true or false; how well it predicts reality.

Some applied topics may be considered by various sciences to be in their (extended) territory. For example, I have seen game theory considered a part of a) mathematics, b) economics, and c) psychology. I guess the mechanism itself is mathematical, and it has important economic and psychological consequences, so it is useful for all of them to know about it.

There may be the case that one outcome is influenced by many factors, and the different factors are best explained by different sciences. For example, some aspects of relationships in marriage can be explained by biology, psychology, economics, sociology, perhaps even theology when the people are religious. Then it is good to check across all sciences to see whether we didn't miss some important factor. But the goal would be to create the best model, not to pick the favourite explanation. (The best model would include all relevant factors, weighted relative to their strength.)

Trying to focus on one science only... I guess it is trying to influence the outcome; motivated thinking. For example, if someone decides to ignore the biology and only focus on sociology, that already makes it obvious what kind of answer they want to get. And if someone decides to ignore the sociology and only focus on biology, that also makes it obvious. But the real question should be how specifically both biological and sociological aspects influence the result.
0satt11y
Indeed. Still, I want my mental models/stereotypes of different sciences to roughly match what scientists in those different fields are actually doing.
5Manfred11y
I actually found The Selfish Gene a pretty good book for developing game theory intuitions. I'd put it as #2 on my list after "the first 2/3 of The Strategy of Conflict".
4[anonymous]11y
If you're looking for something shorter than a full text, I can recommend this entry at the Stanford Encyclopedia of Philosophy.

Open comment thread:

If it's worth saying, but not worth its own top-level comment in the open thread, it goes here.

(Copied since it was well received last time.)

What's the name of the bias/fallacy/phenomenon where you learn something (new information, approach, calculation, way of thinking, ...) but after awhile revert to the old ideas/habits/views etc.?

Relapse? Backsliding? Recidivism? Unstickiness? Retrogression? Downdating?

1shminux11y
Hmm, some of these are good terms, but the issue is so common that I assumed there would be a standard term for it, at least in education circles.
0moreati11y
I can't think of an academic name; the common phrases in Britain are 'stuck in your ways', 'bloody minded', 'better the devil you know'.
0A1987dM11y
Depending on what timescales shminux is thinking of as “awhile” (hours or months?), RobbBB's suggestions may be better.
-1Armok_GoB11y
Open subcomment subthread: If it's not worth saying anywhere, it goes here.
0Dorikka11y
I thought it wasn't necessary to paste the note (not mine) that accompanied the original comment. :P
0Armok_GoB11y
Hey now, half the joke was sort of original - the part about the implication of sufficient metalevels in this direction. :p

I don't know how technically viable hyperloop is, but it seems especially well suited for the United States.

Investing in a hyperloop system doesn't make as much sense in Europe or Japan for a number of reasons:

  1. European/Japanese cities are closer together, so Hyperloop's long acceleration times are a larger relative penalty in terms of speed. The existing HSR systems reach their lower top speeds more quickly.

  2. Most European countries and Japan already have decent HSR systems and are set to decline in population. Big new infrastructure projects tend not to make as much sense when populations are declining and the infrastructure cost : population ratio is increasing by default.

  3. Existing HSR systems create a natural political enemy for Hyperloop proposals. For most countries, having both HSR and Hyperloop doesn't make sense.

In contrast, the US seems far better suited:

  1. The US is set for a massive population increase, requiring large new investments in transportation infrastructure in any case.

  2. The US has lots of large but far-flung cities, so long acceleration times are not as much of a relative penalty.

  3. The US has little existing HSR to act as a competitor. The political class h

... (read more)

Don't forget Australia. We have a few large cities separated by long distances. In particular, Melbourne to Sydney is one of the highest traffic air routes in the world, roughly the same distance as the proposed Hyperloop, and there has been on and off talk of high speed rail links. Additionally, Sydney airport has a curfew, and is more or less operating at capacity. Offloading Melbourne-bound passengers to a cheaper, faster option would free up more flights for other destinations.

7[anonymous]11y
In theory there is no difference between theory and practice. In practice, there is. I continue to fail to see how this idea is anything more than a cool idea that would require huge amounts of testing and the clearing of engineering hurdles to get going, if it indeed proves viable. Nothing is ever as simple as its untested dream. Not hating on it, but seriously, hold your horses...
1knb11y
I feel like I covered this in the first sentence with, "I don't know how technically viable hyperloop is." My point is just to argue that the US would be especially well-suited for hyperloop if it turns out to be viable. My goal was mainly to try to argue against the apparent popular wisdom that hyperloop would never be built in the US for the same reason HSR (mostly) wasn't.
3CAE_Jones11y
I was only vaguely following the Hyperloop thread on LessWrong, but this analysis convinced me to Google it to learn more. I was immediately bombarded with a page full of search results that were pessimistic at best (mocking, pretending at the fallacy of gray but still patronizing, and politically indignant (the LA Times) were among the results on the first page)[1]. I was actually kinda hopeful about the concept, since America desperately needs better transit infrastructure, and knb's analysis of it being best suited for America makes plenty of sense so far as I can tell. [1] I didn't actually open any of the results, just read the titles and descriptions. The tone might have been exaggerated or even completely mutated by that filter, but that seems unlikely for the titles and excerpts I read.
4RolfAndreassen11y
I suggest that this is very weak evidence against the viability, either political, economic, or technical, of the Hyperloop. Any project that is obviously viable and useful has been done already; consequently, both useful and non-useful projects get the same amount of resistance of the form "Here's a problem I spent at least ten seconds thinking up, now you must take three days to counter it or I will pout. In public. Thus spoiling all your chances of ever getting your pet project accepted, hah!"
0DanielLC11y
I've been told that railways primarily get money from freight, and nobody cares that much about freight getting there immediately. As such, high speed railways are not a good idea. I know you can't leave this to free enterprise per se. If someone doesn't want to sell their house, you can't exactly steer a railroad around it. However, if eminent domain is used, then if it's worth building, the market will build it. Let the government offer eminent domain use for railroads, and let them be built if they're truly needed.
2kalium11y
Much of Amtrak uses tracks owned by freight companies, and this is responsible for a good chunk of Amtrak's poor performance. However, high-speed rail on non-freight-owned tracks works pretty well in the rest of the world; it just needs its own right-of-way (in some cases running freight at night when the high-speed trains aren't running, but still having priority over freight traffic).
0DanielLC11y
Are high speed trains profitable enough for people to build them without government money? I'm not sure how to look that up.
2knb11y
Many of the private passenger rail companies were losing money before they were nationalized, but that was under heavy regulation and price controls. The freight rail companies were losing money before they were deregulated as well. These days they are quite profitable. A lot of the old right-of-way has been lost so they would certainly need government help to overcome the tragedy-of-the-anticommons problem.
0DanielLC11y
You mean the problem that someone isn't going to be willing to sell their property? Eminent domain is certainly necessary. I'm just wondering if it's sufficient.
2kalium11y
That's not at all the same question as "Are high-speed trains a good idea?"

* Any decent HSR would generate quite a lot of value not captured by fares. It would be more informative to compare the economic development of regions that have built high-speed rail against that of similar regions which haven't or which did so later.

* France's TGV is profitable. Do you think that because it might not have been built without government funding it was a bad idea to build?
2DanielLC11y
If the HSR charges based on marginal cost, and marginal and average cost are significantly different, then this could be a problem. I intuitively assumed they'd be fairly close. Thinking about it more, I've heard that airports charge vastly more for people who are flying for business than for pleasure, which suggests there is a significant difference. Of course, it also suggests that they might be able to capture it through price discrimination, since the airports seem to manage.

How much government help is necessary for a train to be built? The economics of a train is not comparable to the economics of a city. If you can actually notice the difference in economic development caused by the train, then the train is so insanely valuable that it would be blindingly obvious from looking at how often they're built by the private sector.

Making a profit is not a sufficient condition for it to be worthwhile to build. It has to make enough profit to make up for the capital cost. It might well do that, and it is possible to check, but it's a lot easier to ask if one has been built without government funding. If it is worthwhile to build trains in general, and the government doesn't always fund them, then someone will build one without the government funding them.
2kalium11y
I don't understand the reasoning by which you conclude that if an effect is measurable it must be so overwhelmingly huge that you wouldn't have to measure it. On a much smaller scale, property values rise substantially in the neighborhood of light rail stations, but this value is not easily captured by whoever builds the rails. Despite the measurability of this created value, we do not find that "[light rail] is so insanely valuable that it would be blindingly obvious from looking at how often they're built by the private sector."
2DanielLC11y
If the effect is measurable on an accurate but imprecise scale (such as the effect of a train on the economy), then it will be overwhelming on an inaccurate but precise scale (such as ticket sales). You are suggesting we measure the utility of a single business by its effect on the entire economy. Unless my guesses of the relative sizes are way off, the cost of a train is tiny compared to the normal variation of the economy. In order for the effect to be noticeable, the train would have to pay for itself many, many times over. Ticket sales, and by extension the free market, might not be entirely accurate in judging the value of a train. But it's not so inaccurate that an effect of that magnitude will go unnoticed. Am I missing something? Are trains really valuable enough that they'd be noticed on the scale of cities?
1kalium11y
Are you claiming that a scenario in which

* Fares cover 90% of (construction + operating costs)

* Faster, more convenient transportation creates non-captured value worth 20% of (construction + operating costs)

is impossible? You seem to be looking at this from a very all-or-nothing point of view.
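To spell out the arithmetic in that scenario (the 90% and 20% figures come from the comment above; normalizing total cost to 100 is an illustrative assumption):

```python
cost = 100.0                  # construction + operating costs, normalized
fares = 0.90 * cost           # fares cover 90% of costs
uncaptured = 0.20 * cost      # external value not captured by fares

private_profit = fares - cost               # -10: a private builder loses money
social_surplus = fares + uncaptured - cost  # +10: society still comes out ahead
print(private_profit, social_surplus)       # -10.0 10.0
```

So the line would be privately unprofitable (and never built by the market alone) while still being socially worth building.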
2DanielLC11y
Faster, more convenient transportation is what fares are charging for. Non-captured value is more complicated than that. If the non-captured value is 20% of the captured value, it's highly unlikely that trains will frequently be worth building but rarely capture enough value; that would require that the true value stay within a very narrow range.

If it's not a monopoly good, and marginal costs are close to average costs, then captured value will only go down as people build more trains, so that value not being captured doesn't prevent trains from being built. If it is a monopoly good (I think it is, but I would appreciate it if someone who actually knows tells me), and marginal costs are much lower than average costs, then a significant portion of the value will not be captured. Much more than 20%. It's not entirely unreasonable that the true value is such that trains are rarely built when they should often be built.

That's part of why I asked: if the government is subsidizing it by, say, 20%, then the trains are likely worthwhile. If the government practically has to pay for the infrastructure to get people to operate trains, not so much.

Also, that comment isn't really applicable to what you just posted it as a response to. It would fit better as a response to my last comment. The comment you responded to was just saying that unless the value of trains is orders of magnitude more than the cost, you'd never notice by looking at the economy.
0kalium11y
Marginal and average cost are obviously different, but your example of business fliers is not relevant. Business fliers aren't paying for their flights, but do often get to choose which airline they take. If there is one population that pays for their own flights and another population that does not even consider cost, it would be silly not to discriminate whatever the relation between marginal and average cost.
0DanielLC11y
The businesses are perfectly capable of choosing not to pay for their employees' flights. The fact that they do, and that they don't consider the costs, shows that their willingness to pay is much higher than the marginal cost. If it weren't for price discrimination, consumer surplus would be high, and a large amount of the value produced by the airlines would go to consumers. Are high-speed trains natural monopolies? That is, are the capital costs (e.g. rail lines) much higher than the marginal costs (e.g. train cars)? I think they are, and if they are, considering the consumer surplus is important; but if they're not, then it doesn't matter.
0kalium11y
What marginal cost are you referring to here? If it's the cost to the airline of one butt-in-seat, we know it's less than one fare because the airline is willing to sell that ticket. And this has nothing to do with average cost. I think you've lost the thread a bit.
0DanielLC11y
What I mean is that, if everyone paid what people who travel for pleasure pay, then people travelling for business would pay much less than they're willing to, so the amount of value airports get would be a lot less than the value they produce. If they charged everyone the same, either it would get so expensive that people would only travel for business, even though it's worthwhile for people to travel for pleasure, or it would be cheap enough that people travelling for business would fly for a fraction of what they're willing to pay. Either way, airports that are worth building would go unbuilt, since the airport wouldn't actually be able to make enough money to build it.
1fubarobfusco11y
Are highways?
3DanielLC11y
Some roads do collect tolls. Again, I don't know how to look it up, but I don't think they have government help. They're in the minority, but they show that having roads is socially optimal. Similarly, if there are high-speed trains that operate without government help, we know that it's good to have high-speed trains, and while it may be that government encouragement is resulting in too many of them being built, we should still build some.
0knb11y
I'm not sure what your point is here. Passenger rail and freight rail are usually decoupled. Amtrak operates on freight rail in most places because the government orders the rail companies to give preference to passenger rail (at substantial cost to the private freight railways). Hyperloop would help out a lot, since it takes the burden off of freight rail. I suppose hyperloop could be privately operated (that would be my preference, so long as there was commonsense regulation against monopolistic pricing).
4DanielLC11y
If competitors can simply build more hyperloops, monopolistic pricing won't be a problem. If you only need one hyperloop, then monopolistic pricing is insufficient. They will still make less money than they produce. Getting rid of monopolistic pricing runs the risk of keeping anyone from building the hyperloops.
0metastable11y
I'd like to hear more about possibilities in China, if you've got more. Everything I've read lately suggests that they've extensively overbuilt their infrastructure, much of it with bad debt, in the rush to create urban jobs. And it seems like they're teetering on the edge of a land-development bubble, and that urbanization has already started slowing. But they do get rights-of-way trivially, as you say, and they're geographically a lot more like the US than Europe.
0Eliezer Yudkowsky11y
(The Money Illusion would like to dispute this view of China. Not sure how much to trust Sumner on this but he strikes me as generally smart.)
1gattsuru11y
Mr. Sumner has some pretty clear systemic assumptions toward government spending on infrastructure. This article seems to agree with both aspects, without conflicting with either, however. The Chinese government /is/ opening up new opportunities for non-Chinese companies to provide infrastructure, in order to further cover land development. But they're doing so at least in part because urbanization is slowing and these investments are perceived locally as higher-risk to already risk-heavy banks, and foreign investors are likely to be more adventurous or to lack information.

I lost an AI box experiment on IRC today, playing as the AI against PatrickRobotham. If anyone else wants to play against me, then PM me here or contact me on #lesswrong.

0Kawoomba11y
Do we still keep up with those secrecy shenanigans even when no MIRI employees were involved, or can you share some details?
8Tenoke11y
I don't share details because subsequent games will be less fun and because if I am using dick moves I don't want people to know how much of a dick I am.
-3shminux11y
Failing to convince your jailer to let you out is the highly likely outcome, so it is not very interesting. I would love to hear about any simulated AI winning against an informed opponent.

I posted this to advertise that I am looking for people to play with me.

When you're trying to raise the sanity waterline, dredging the swamps can be a hazardous occupation. Indian rationalist skeptic Narendra Dabholkar was assassinated this morning.

Political activism, especially in the third world, is inherently dangerous, whether or not it is rationality-related.

3knb11y
He was trying to pass a law to suppress religious freedoms of small sects. That doesn't raise the sanity waterline, it just increases tensions and hatred between groups.
3David_Gerard11y
That's a ludicrously forgiving reading of what the bill (which looks like going through) is about. Steelmanning is an exercise in clarifying one's own thoughts, not in justifying fraud and witch-hunting.
0fubarobfusco11y
I haven't been able to find the text of the bill — only summaries such as this one. Do you have a link?
-1knb11y
Did you even read my comment?
-2David_Gerard11y
Yes, I did. Your characterisation of the new law is factually ridiculous.
-2knb11y
That isn't all the law does, as you would know if you actually read it.

So, are $POORETHNICGROUP so poor, badly off and socially failed because they are about 15 IQ points stupider than $RICHETHNICGROUP? No, it may be the other way around: poverty directly loses you around 15 IQ points on average.

Or so says Anandi Mani et al. "Poverty Impedes Cognitive Function" Science 341, 976 (2013); DOI: 10.1126/science.1238041. A PDF while it lasts (from the nice person with the candy on /r/scholar) and the newspaper article I first spotted it in. The authors have written quite a lot of papers on this subject.

3Transfuturist11y
The biggest problem I have with racists claiming racial realism is this.

The racists claim that this is irrelevant because of research that corrects for socioeconomic status and still finds IQ differences. Of course, researchers have found plenty of evidence of important environmental influences on IQ not measured by SES. It seems especially bad for the racial realist hypothesis that people who, for example, identify as "black" in America have the same IQ disadvantage compared to whites whether their ancestry is 4% European or 40% European; how much African vs. European ancestry someone has seems to matter only indirectly to the IQ effects, which seem to directly follow whichever artificial simplified category someone is identified as belonging to.

6Viliam_Bur11y
Not completely serious, just wondering about possible implications, for the sake of munchkinism: Would it be possible to invent some new color, for example "purple", so that identifying with that color would increase someone's IQ? I guess it would first require the rest of society accepting the superiority (at least in intelligence) of the purple people, and their purpleness being easy to identify and difficult for others to fake. (Possible to achieve with some genetic manipulation.) Also, could this mechanism possibly explain the higher intelligence of Jews? I mean, if we stopped suspecting them of making international conspiracies and secretly ruling the world (which obviously requires a lot of intelligence), would their IQs consequently drop to the average level? Also... what about Asians? Is it the popularity of anime that increases their IQ, or what?
2Protagoras11y
Unfortunately, while we know there are lots of environmental factors that affect IQ, we mostly don't know the details well enough to be sure of very much, or to have much idea how to manipulate it. However, as I understand it, some research has suggested that there are interesting cultural similarities between Jews in most of the world and Chinese who don't live in China, and that the IQ advantage of Chinese is primarily among Chinese who don't live in China, so something in common between how the Chinese and Jewish cultures deal with being minority outsiders may explain part of why both show unusually high IQs when they are minority outsiders (and could explain a lot of East Asians generally; considering how enormous the cultural influence of China has been in the region, it would not be terribly surprising if many other East Asian groups had acquired whatever the relevant factor is). This paper by Ogbu and Simons discusses some of the theories about groups that do poorly (the "involuntary" or "caste-like" minorities). Unfortunately I couldn't find a citation for any discussion of differences between voluntary minorities which would explain why some voluntary minorities outperform rather than merely equalling the majority, apart from Ned Block's passing reference to a culture of "self-respect" in his review of The Bell Curve.
-2bogus11y
It's been done - many people do in fact self-identify as 'Indigo children', 'Indigos' or even 'Brights'. The label tends to come with a broadly humanistic and strongly irreligious worldview, but many of them are in fact highly committed to some form of spirituality and mysticism: indeed, they credit these perhaps unusual convictions for their increased intelligence and, more broadly, their highly developed intuition.
5David_Gerard11y
Ah, "Brights" is Dawkins and Dennett's terrible word for atheists; "Indigos" is completely insane and incoherent new-age nonsense about allegedly superpowered children. How did you conflate the two?
2Vaniver11y
I've seen mixed reports on this. Human Varieties, for example, has a series of posts on colorism which finds a relationship between skin color and intelligence in the population of African Americans, as predicted by both the hereditarian and "colorist" (i.e. discrimination) theories, but does not find a relationship between skin color and intelligence within families (as predicted by the hereditarian but not the colorist theory), and I know there were studies using blood type which didn't support the hereditarian theory but appear to have been too weakly designed to do that even if hereditarianism were true. Are you aware of any studies that actually look at genetic ancestry and compare it to IQ? (Self-reported ancestry would still be informative, but not as accurate.)
2David_Gerard11y
It's because Europeans are 4% Neanderthal and partake of the Neanderthals' larger brains, and Africans aren't.
0Vaniver11y
There is large enough variance in Neanderthal ancestry among Europeans that we might actually be able to see differences within the European population (and then extrapolate those to guess how much of the European-African gap that explains). I seem to recall seeing some preliminary reports on this, but I can't find them right now so I'm not confident they were evidence-driven instead of theory-driven.
5David_Gerard11y
The really interesting thing is that you see results from all over the world showing this. Catholics in Northern Ireland in the 1970s measuring 15 points lower than Protestants. Burakumin in Japan measuring 15 points lower than non-Burakumin. SAME GENE POOL. This strongly suggests you get at least 15 points really easily just from social factors, and these studies may (because a single study isn't solid science yet, nor even a string of studies from the same group) point to one reason.
6Viliam_Bur11y
Could be interesting to know how much of that is the status directly, and how much is better nutrition and medical care.
0Vaniver11y
So, I totally buy the "cognitive load decreases intellectual performance, both in life and on IQ tests" claim. This is very well replicated, and has immediate personal implications (don't try to remember everything, write it all down; try to minimize sources of stress in your life; try to think about as few projects at a time as possible). I don't think it's valid to say "instead of A->B, it's B->A," or see this as a complete explanation, because the ~13 point drop is only present in times of financial stress. Take standardized school tests, and suppose that half of the minority students are under immediate financial stress (their parents just got a hefty car repair bill) and the other half aren't (the 'easy' condition in the test), whereas none of the majority students are under immediate financial stress. Then we should expect the minority students to be, on average, 6.5 points lower, but what we see is a gap of 15 points. It's also plausible that the differentiator between people is their reaction to stress--I know a lot of high-powered managers and engineers under significant stress at work, who lose much less than a standard deviation of their ability to make good decisions and focus on other things and so on. Some people even seem to perform better under stress, but it's hard to separate out the difference between motivation and fluid intelligence there.
0David_Gerard11y
Being poor means living a life of stress, financial and social. John Scalzi attempts to explain it. John Cheese has excellent ha-ha-only-serious stuff on Cracked on the subject too. I wasn't meaning to put forward a study as settled science, of course; but I think it's interesting, and that they have a pile of other studies showing similar stuff. Now it's replication time.
-2Vaniver11y
Then why, during the experiment, did the poor participants and the rich participants have comparable scores when presented with a hypothetical easy financial challenge (a repair of $150)? The claim the paper makes is that there are temporary challenges which lower cognitive functionality, that are easier to induce in the poor than the rich. If you expect that those challenges are more likely to occur to the poor than the rich (which seems reasonable to me), then this should explain some part of the effect- but isn't on all the time, or the experiment wouldn't have come out the way it did. While I have my doubts about the replicability of any social science article that made it into Science, the interpretation concerns here are assuming the effect the paper saw is entirely real and at the strength they reported.

Sorry if this has been asked before, but can someone explain to me if there is any selfish reason to join Alcor while one is in good health? If I die suddenly, it will be too late to have joined, but even if I had joined it seems unlikely that they would get to me in time.

The only reason I can think of is to support Alcor.

7Randy_M11y
It's like what the TV preacher told Bart Simpson: "Yes, a deathbed conversion is a pretty sweet angle, but if you join now, you're also covered in case of accidental death and dismemberment!" (may not be an exact quote)
7Turgurth11y
I don't think it's been asked before on Less Wrong, and it's an interesting question. It depends on how much you value not dying. If you value it very strongly, the risk of sudden, terminal, but not immediately fatal injuries or illnesses, as mentioned by paper-machine, might be unacceptable to you, and would point toward joining Alcor sooner rather than later. The marginal increase your support would add to the probability of Alcor surviving as an institution might also matter to you selfishly, since this would increase the probability that there will exist a stronger Alcor when you are older and will likely need it more than you do now. Additionally, while it's true that it's unlikely that Alcor would reach you in time if you were to die suddenly, compare this risk to the chance of your survival if alternately you don't join Alcor soon enough, and, after your hypothetical fatal car crash, you end up rotting in the ground. And hey, if you really want selfish reasons: signing up for cryonics is high-status in certain subcultures, including this one. There are also altruistic reasons to join Alcor, but that's a separate issue.
1brazil8411y
Thank you for your response; I suppose one would need to estimate the probability of dying in such a way that having previously joined Alcor would make a difference. Perusing Ben Best's web site and using some common sense, it seems that the most likely causes of death for a reasonably healthy middle aged man are cancer, stroke, heart attack, accident, suicide, and homicide. We need to estimate the probability of sudden serious loss of faculties followed by death. It seems that for cancer, that probability is extremely small. For stroke, heart attack, and accidents, one could look it up but just guesstimating a number based on general observations, I would guess roughly 10 to 15 percent. Suicide and homicide are special cases -- I imagine that in those cases I would be autopsied so there would be much less chance of cryopreservation even if I had already joined Alcor. Of course even if you pre-joined Alcor, there is still a decent chance that for whatever reason they would not be able to preserve you after, for example, a fatal accident which killed you a few days later. So all told, my rough estimate is that the improvement in my chances of being cryopreserved upon death if I joined Alcor now as opposed to taking a wait and see approach is 5% at best. Does that sound about right?
0Turgurth11y
That does sound about right, but with two potential caveats: one is that individual circumstances might also matter in these calculations. For example, my risk of dying in a car accident is much lowered by not driving and only rarely riding in cars. However, my risk of dying of heart disease is raised by a strong family history. There may also be financial considerations. Cancer almost certainly, and often heart disease and stroke, take time to kill. If you were paying for cryonics out-of-pocket, this wouldn't matter, but if you were paying with life insurance the cost of the policy would go up, perhaps dramatically, if you were to wait until the onset of serious illness to make your arrangements, as life insurance companies are not fond of pre-existing conditions. It might be worth noting that age alone also increases the cost of life insurance. That being said, it's also fair to say that even a successful cryopreservation has a (roughly) 10-20% chance of preserving your life, taking most factors into account. So again, the key here is determining how strongly you value your continued existence. If you could come up with a roughly estimated monetary value of your life, taking the probability of radical life extension into account, that may clarify matters considerably. There are values at which that (roughly) 5% chance is too little, or close to the line, or plenty sufficient, or way more than sufficient; it's quite a spectrum.
0brazil8411y
Yes I totally agree. Similarly your chances of being murdered are probably a lot lower than the average if you live in an affluent neighborhood and have a spouse who has never assaulted you. Suicide is an interesting issue -- I would like to think that my chances of committing suicide are far lower than average but painful experience has taught me that it's very easy to be overconfident in predicting one's own actions. Yes, but there is an easy way around this: Just buy life insurance while you are still reasonably healthy. Actually this is what got me thinking about the issue: I was recently buying life insurance to protect my family. When I got the policy, I noticed that it had an "accelerated death benefit rider," i.e. if you are certifiably terminally ill, you can get a $100k advance on the policy proceeds. When you think about it, that's not the only way to raise substantial money in such a situation. For example, if you were terminally ill, your spouse probably wouldn't mind if you borrowed $200k against the house for cryopreservation if she knew that when you finally kicked the bucket she would get a check for a million from the insurance company. So the upshot is that from a selfish perspective, there is a lot to be said for taking a "wait and see" approach. (There's another issue I thought of: Like most life insurance policies, the ones I bought are good only for 20 years. There is a pretty good chance that I will live for those 20 years but in the meantime develop a serious health condition which makes it almost impossible to buy more insurance. What then?) I agree with this to an extent.
8gwern11y
That's a feature, not a bug, of term life insurance. That's the tradeoff you're making to get coverage now at a cheap rate. But of course, the option value exists on both sides - so if you want to lock in relatively lower rates, well, that's why whole life insurance exists.
1brazil8411y
Yes, good point. I actually looked into getting whole life insurance but the policies contained so many bells, whistles, and other confusions that I put it all on hold until I had bought some term insurance. Maybe I will look into that again. Of course if I were disciplined, it would probably make sense to just "buy term and invest the difference" for the next 30 years.
3Turgurth11y
Hmmm. You do have some interesting ideas regarding cryonics funding that do sound promising, but to be safe I would talk to Alcor, specifically Diane Cremeens, about them directly to ensure ahead of time that they'll work for them.
0brazil8411y
Probably that's a good idea. But on the other hand, what are the chances that they would turn down a certified check for $200k from someone who has a few months to live? I suppose one could argue that setting things up years in advance so that Alcor controls the money makes it difficult for family members to obstruct your attempt to get frozen.
3Ben_LandauTaylor11y
In addition to the money, Alcor requires a lot of legal paperwork, including a notarized will. You can probably do that if you have "a few months," but it's one more thing to worry about, especially if you're dying of something that leaves you mentally impaired and makes legal consent complicated. I don't know how strict about this Alcor would be; I second the grandparent's advice to ask Diane.
0[anonymous]11y
There is some background base rate of sudden, terminal, but not immediately fatal, injury or illness. For example, I currently do not value life insurance highly, and therefore I value cryonics insurance even less. Otherwise, there's only some marginal increase in the probability of Alcor surviving as an institution. Seeing as there's precedent for healthy cryonics orgs to adopt the patients of unhealthy cryonics orgs, this marginal increase should be viewed as a yet more marginal increase in the survival of cryonics locations in your locality. (Assuming transportation costs are prohibitive enough to be treated as a rounding error.)

There is a Google Doc circulating for people who are moving to the Bay Area soonish.

Any tips for people moving in from those who are in?

If you have available rooms or houses, let Nick Ryder know.

0Nisan11y
Some advice for people who want to rent from landowners.

Artificial intelligence and Solomonoff induction: what to read?

Olle Häggström, Professor of Mathematical Statistics at Chalmers University of Technology, reads some of Marcus Hutter's work, comes away unimpressed, and asks for recommendations.

One concept that is sometimes claimed to be of central importance in contemporary AGI research is the so-called AIXI formalism. [...] In the presentation, Hutter advises us to consult his book Universal Artificial Intelligence. Before embarking on that, however, I decided to try one of the two papers that he also di

... (read more)
0Wei Dai11y
My current thinking is that Kolmogorov complexity / Solomonoff induction is probably only a small piece of the AGI puzzle. It seems obvious to me that the ideas are relevant to AGI, but hard to tell in what way exactly. I think Hutter correctly recognized the relevance of the ideas, but tends to exaggerate their importance, and as Olle Häggström recognized, can't really back up his claims as to how central these ideas are. If Olle wanted to become an FAI researcher then I'd suggest getting an overview of the AIT field from Li and Vitanyi's textbook, but if he is more interested in what I called "Singularity Strategies" (which from Google translations of his other blog entries, it sounds like he is) and wants an understanding of just how Solomonoff Induction is relevant to AGI, in order to better understand AI risk and generally figure out how to best influence the Singularity in a positive direction, I'm afraid nobody has the answers at the moment. (I wonder if we could convince Olle to join LW? I'd comment on some of Olle's posts but I'm really wary of personal blogs, which tend to disappear and take all of my comments with them.)
4gwern11y
Nothing stops you from setting up some program to archive URLs you visit, which will deal with most comments. I also tend to excerpt my best comments into Evernote as well, to make them easier to refind.
0linkhyrule511y
Random question - is AGI7 a typo, or a term?
8Manfred11y
Open link, control+f "relavant to AGI". Get directed to "relavant to AGI7". Footnote 7 is "7) I am not a computer scientist, so the following should perhaps be taken with a grain of salt. While I do think that computability and concepts derived from it such as Kolmogorov complexity may be relevant to AGI, I have the feeling that the somewhat more down-to-earth issue of computability in polynomial time is even more likely to be of crucial importance."

Has anyone done a good analysis on the expected value of purchasing health insurance? I will need to purchase health insurance when I turn 26. How comprehensive should the insurance I purchase be?

At first I thought I should purchase a high-deductible plan that only protects against catastrophes. I have low living expenses and considerable savings, so this wouldn't be risky. The logic here is that insurance costs the expected value of the goods provided plus overhead, so the cost of insurance will always be more than its expected value. If I purchase less insurance, I wa... (read more)
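A minimal sketch of the kind of expected-value comparison this is pointing at, using entirely invented premiums, deductibles, and cost distribution rather than real quotes:

    set.seed(1)
    # Hypothetical annual medical bills, averaging $2,000 (an assumption).
    costs <- rexp(1e5, rate = 1/2000)
    # Total yearly cost to you: 12 monthly premiums plus everything you pay
    # out of pocket before the insurer takes over at the deductible.
    total <- function(premium, deductible) premium * 12 + pmin(costs, deductible)
    mean(total(premium = 80,  deductible = 6000))   # high-deductible plan
    mean(total(premium = 250, deductible = 500))    # comprehensive plan

This ignores coinsurance, copays, and negotiated rates, but it shows the shape of the calculation: the plan with the lower expected total can still be the worse choice if you can't absorb the bad tail.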

5Randy_M11y
"Also, insurance companies can reduce the cost of health care by negotiating lower prices for you. " This is the case even with a high deductable plan. The insurance will have a different rate when you use an in-network doctor or hospital service. If you haven't met the deductible and you go in, they'll send you a bill--but that bill will still be much cheaper than if you had gone in and paid out of pocket (like paying less than half). But make sure that the high deductable plan actually has a cheaper monthly payment by an amount that matters. With new regulations of what must be covered, the differences between plans may not end up being very big.

If you had to group Less Wrong content into eight categories by subject matter, what would those categories be?

  • Self-improvement, optimal living, life hacks
  • Philosophy
  • Futurism (cryonics, the singularity)
  • Friendly AI and SIAI, I mean, MIRI
  • Maths, Decision Theory, Game theory
  • Meetups
  • General-interest discussion (biased towards the interests of atheist nerds)
  • Meta
5somervta11y
I would remove meetups, as that isn't really LW content as such.
0RolfAndreassen11y
It would be good to have it in a separate category, though, so you could disappear it from the front page.
5Dorikka11y
For unspecified levels of meta. :P
2palladias11y
I'd subdivide Lifehacks into:
  • debiasing lifehacks - practical ways to subvert/avoid cognitive biases (CoZE exercises, Monday-Tuesday game, etc)
  • non-epistemological lifehacks - domain-specific clever ideas (frameworks for chore negotiation, investment strategies, etc)
2Viliam_Bur11y
epistemic lifehacks; general instrumental lifehacks (e.g. how to overcome procrastination); specific instrumental lifehacks (domain-specific)

I don't understand the graph in Stephen Hsu on Cognitive Genomics - help?

7gwern11y
So first, to quote Hsu's description: [...]

I'll try to explain it in different terms. What you are looking at is a graph of 'results vs effort'. How much work do you have to do to get out some useful results?

The importance of this is that it's showing you a visual version of statistical power analysis (introduction). Ordinary power analysis is about examining the inherent zero-sum trade-offs of power vs sample size vs effect size vs statistical significance, where you try to optimize each thing for one's particular purpose; so, for example, you can choose to have a small (=cheap) sample size and a small Type I (false positive) error rate in detecting a small effect size - as long as you don't mind a huge Type II error rate (low power, false negatives, failure to detect real effects). If you look at my nootropics or sleep experiments, you'll see I do power analysis all the time as a way of understanding how big my experiments need to be before they are not worthlessly uninformative; if your sample size is too small, you simply won't observe anything, even if there really is an effect (e.g. you might conclude, 'with such a small n as 23, at the predicted effect size and the usual alpha of 0.05, our power will be very low, like 10%, so the experiment would be a waste of time'). Even though we know intelligence is very influenced by genes, you can't find 'the genes for intelligence' by looking at just 10 people - but how many do you need to look at?

In the case of the graph, the statistical significance is hardwired & the effect sizes are all known to be small, and we ignore power, so that leaves two variables: sample size and number of null-rejections/findings. The graph shows us simply that as we get a larger sample, we can successfully find more associations (because we have more power to get a subtle genetic effect to pass our significance cutoffs). Simple enough. It's not news to anyone that the more data you collect, the more results you get. What's useful here is t
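To make the n=23 example concrete, a quick sketch in R (the effect size d = 0.2 is my illustrative assumption, not a figure from the post):

    # With n = 23 per group and a small effect (d = 0.2), power is only ~10%:
    power.t.test(n = 23, delta = 0.2, sig.level = 0.05)$power   # ~0.10
    # Solving the same trade-off for sample size instead: roughly 394 per
    # group are needed before that effect is detected 80% of the time.
    power.t.test(delta = 0.2, power = 0.80, sig.level = 0.05)$n # ~394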
2Paul Crowley11y
Many thanks for this! So in broad strokes: the smaller a correlation is, the more samples you're going to need to detect it, so the more samples you take, the more correlations you can detect. For five different human variables, this graph shows number of samples against number of correlations detected with them on a log/log scale; from that we infer that a similar slope is likely for intelligence, and so we can use it to take a guess at how many samples we'll need to find some number of SNPs for intelligence. Am I handwaving in the right direction?
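The extrapolation being described is just a straight-line fit on the log/log scale. A minimal sketch, with invented (sample size, findings) pairs standing in for the real GWAS data:

    # Invented data: sample sizes and number of significant hits found at each.
    n    <- c(1e4, 3e4, 1e5, 3e5)
    hits <- c(2, 10, 60, 350)
    fit <- lm(log10(hits) ~ log10(n))     # straight line on the log/log scale
    # Extrapolate: how many hits would a sample of one million give?
    10^predict(fit, newdata = data.frame(n = 1e6))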
0gwern11y
Yes, although I'd phrase this more as 'the more samples you take, the bigger your "budget", which you can then spend on better estimates of a single variable or if you prefer, acceptable-quality estimates of several variables'. Which one you want depends on what you're doing. Sometimes you want one variable, other times you want more than one variable. In my self-experiments, I tend to spend my entire budget on getting good power on detecting changes in a single variable (but I could have spent my data budget in several ways: on smaller alphas or smaller effect sizes or detecting changes to multiple variables). Genomics studies like these, however, aren't interested so much in singling out any particular gene and studying it in close detail, but finding 'any relevant gene at all and as many as possible'.
0Paul Crowley11y
And there's a "budget" because if you "double-spend", you end up with the XKCD green acne jelly beans?
0gwern11y
Eh, I'm not sure the idea of 'double-spending' really applies here. In the multiple comparisons case, you're spending all your budget on detecting the observed effect size and getting high-power/reducing-Type-II-errors (if there's an effect lurking there, you'll find it!), but you then can't buy as much Type I error reduction as you want. This could be fine in some applications. For example, when I'm A/B testing visual changes to gwern.net, I don't care if I commit a Type I error, because if I replace one doohickey with another doohickey and they work equally well (the null hypothesis), all I've lost is a little time. I'm worried about coming up with an improvement, testing the improvement, and mistakenly believing it isn't an improvement when actually it is. The problem with multiple comparisons comes when people don't realize they've used up their budget and they believe they really have controlled alpha errors at 5% or whatever. When they think they've had their cake & ate it too. I guess a better financial analogy would be more like "you spend all your money on the new laptop you need for work, but not having checked your bank account balance, promise to take your friends out for dinner tomorrow"?
0Lumifer11y
I am a bit confused -- is the framework for this thread observation (where the number of samples is pretty much the only thing you can affect pre-analysis) or experiment design (where you you can greatly affect which data you collect)? I ask because I'm intrigued by the idea of trading off Type I errors against Type II errors, but I'm not sure it's possible in the observation context without introducing bias.
0gwern11y
I'm not sure about this observation vs experiment design dichotomy you're thinking of. I think of power analysis as something which can be done both before an experiment to design it and understand what the data could tell one, and post hoc, to understand why you did or did not get a result and to estimate things for designing the next experiment.
0Lumifer11y
Well, I think of statistical power as the ability to distinguish signal from noise. If you expect signal of a particular strength you need to find ways to reduce the noise floor to below that strength (typically through increasing sample size). However my standard way of thinking about this is: we have data, we build a model, we evaluate how good the model output is. Building a model, say, via some sort of maximum likelihood, gives you "the" fitted model with specific chances to commit a Type I or a Type II error. But can you trade off chances of Type I errors against chances of Type II errors other than through crudely adding bias to the model output?
0gwern11y
Model-building seems like a separate topic. Power analysis is for particular approaches, where I certainly can trade off Type I against Type II. Here's a simple example for a two-group t-test, where I accept a higher Type I error rate and immediately see my Type II go down (power go up):

    R> power.t.test(n=40, delta=0.5, sig.level=0.05)

         Two-sample t test power calculation

                  n = 40
              delta = 0.5
                 sd = 1
          sig.level = 0.05
              power = 0.5981
        alternative = two.sided

    NOTE: n is number in *each* group

    R> power.t.test(n=40, delta=0.5, sig.level=0.10)

         Two-sample t test power calculation

                  n = 40
              delta = 0.5
                 sd = 1
          sig.level = 0.1
              power = 0.7163
        alternative = two.sided

    NOTE: n is number in *each* group

In exchange for accepting 10% Type I rather than 5%, I see my Type II fall from 1-0.60=40% to 1-0.72=28%. Tada, I have traded off errors and as far as I know, the t-test remains exactly as unbiased as it ever was.
0Lumifer11y
I am not explaining myself well. Let me try again. To even talk about Type I / II errors you need two things -- a hypothesis or a prediction (generally, output of a model, possibly implicit) and reality (unobserved at prediction time). Let's keep things very simple and deal with binary variables: say we have an object foo and we want to know whether it belongs to class bar (or does not belong to it). We have a model, maybe simple and even trivial, which, when fed the object foo, outputs the probability of it belonging to class bar. Let's say this probability is 92%. Now, at this point we are still in probability land. Saying that "foo belongs to class bar with a probability of 92%" does not subject us to Type I / II errors. It's only when we commit to the binary outcome and say "foo belongs to class bar, full stop" that they appear. The point is that in probability land you can't trade off Type I error against Type II -- you just have the probability (or a full distribution in the more general case). It's the commitment to a certain outcome on the basis of an arbitrarily picked threshold that gives rise to them. And if so, it is that threshold (e.g. traditionally 5%) that determines the trade-off between errors. Changing the threshold changes the trade-off, but this doesn't affect the model and its output; it's all post-prediction interpretation.
0gwern11y
So you're trying to talk about overall probability distributions in a Bayesian framework? I haven't ever done power analysis with that approach, so I don't know what would be analogous to Type I and II errors and whether one can trade them off; in fact, the only paper I can recall discussing how one does it is Kruschke's paper (starting on pg11) - maybe he will be helpful?
0Lumifer11y
Not necessarily in the Bayesian framework, though it's kinda natural there. You can think in terms of complete distributions within the frequentist framework perfectly well, too. The issue that we started with was of statistical power, right? While it's technically defined in terms of the usual significance (=rejecting the null hypothesis), you can think about it in broader terms. Essentially it's the capability to detect a signal (of certain effect size) in the presence of noise (in certain amounts) with a given level of confidence. Thanks for the paper; I've seen it before but didn't have a handy link to it.
0gwern11y
Does anyone do that, though? Well, if you want to think of it like that, you could probably formulate all of this in information-theoretic terms and speak of needing a certain number of bits; then the sample size & effect size interact to say how many bits each n contains. So a binary variable contains a lot less than a continuous variable, a shift in a rare observation like 90/10 is going to be harder to detect than a shift in a 50/50 split, etc. That's not stuff I know a lot about.
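A quick illustration of the bits-per-observation point (my example, not from the thread): the Shannon entropy of a binary variable is maximal at a 50/50 split and much lower at 90/10.

    # Shannon entropy (in bits) of a binary variable with P(success) = p:
    H <- function(p) -(p * log2(p) + (1 - p) * log2(1 - p))
    H(0.5)   # 1 bit per observation
    H(0.9)   # ~0.47 bits per observation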
0Lumifer11y
Well, sure. The frequentist approach, aka mainstream statistics, deals with distributions all the time and the arguments about particular tests or predictions being optimal, or unbiased, or asymptotically true, etc. are all explicitly conditional on characteristics of underlying distributions. Yes, something like that. Take a look at Fisher information, e.g. "The Fisher information is a way of measuring the amount of information that an observable random variable X carries about an unknown parameter θ upon which the probability of X depends."
[anonymous]11y60

This essay on internet forum behavior by the people behind Discourse is the greatest thing I've seen in the genre in the past two or three years. It rivals even some of the epic examples of wikipedian rule-lawyering that I've witnessed.

Their aggregation of common internet forum rules could have been done by anyone, but it was ultimately they that did it. My confidence in Discourse's success has improved.

2David_Gerard11y
"Don't be a dick" is now "Wheaton's law"? Pfeh!

We wonder about the moral impact of dust specks in the eyes of 3^^^3 people.

What about dust specks in the eyes of 3^^^3 poodles? Or more to the point, what is the moral cost of killing one person vs one poodle? How many poodle lives would we trade for the life of one person?

Or even within humans, is it human years we would account in coming up with moral equivalencies? Do we discount humans that are less smart, on the theory that we almost certainly discount poodles against humans because they are not as smart as us? Do we discount evil humans com... (read more)

4wedrifid11y
I observe that the answer to the last question is not constrained to be positive.
6Randy_M11y
"Letting those people die was worth it, because they took their cursed yapping poodle with them!" (quote marks to indicate not my actual views)
1David_Gerard11y
Do the nervous systems of 3^^^3 nematodes beat the nervous systems of a mere 7x10^9 humans? If not, why not?
8Eliezer Yudkowsky11y
I believe that I care nothing for nematodes, and that as the nervous systems at hand became incrementally more complicated, I would eventually reach a sharp boundary wherein my degree of caring went from 0 to tiny. Or rather, I currently suspect that an idealized version of my morality would output such.
7ahbwramc11y
I'm kind of curious as to why you wouldn't expect a continuous, gradual shift in caring. Wouldn't mind design space (which I would imagine your caring to be a function of) be continuous?
8Eliezer Yudkowsky11y
Something going from 0 to 10^-20 is behaving pretty close to continuously in one sense. It is clear that there are some configurations of matter I don't care about at all (like a paperclip), while I do care about other configurations (like twelve-year-old human children), so it is elementary that at some point my utility function must go from 0 to nonzero. The derivative, the second derivative, or even the function itself could easily be discontinuous at this point.
1Bakkot11y
But needn't be! See for example f(x) = exp(-1/x) (x > 0), 0 (x ≤ 0). Wikipedia has an analysis. (Of course, the space of objects isn't exactly isomorphic to the real line, but it's still a neat example.)
1Eliezer Yudkowsky11y
Agreed, but it is not obvious to me that my utility function needs to be differentiable at that point.
0Armok_GoB11y
I dispute that; the paperclip is almost certainly either more or less likely to become a Boltzmann brain than an equivalent volume of vacuum.
-1MugaSofer11y
And ... it isn't clear that there are some configurations you care for ... a bit? Sparrows being tortured and so on? You don't care more about dogs than insects and more for chimpanzees than dogs? (I mean, most cultures have a Great Chain Of Being or whatever, so surely I haven't gone dreadfully awry in my introspection ...)
4Eliezer Yudkowsky11y
This is not incompatible with what I just said. It goes from 0 to tiny somewhere, not from 0 to 12-year-old.
1shminux11y
Can you bracket this boundary reasonably sharply? Say, mosquito: no, butterfly: yes?

No, but I strongly suspect that all Earthly life without frontal cortex would be regarded by my idealized morals as a more complicated paperclip. There may be exceptions and I have heard rumors that octopi pass the mirror test, and I will not be eating any octopus meat until that is resolved, because even in a world where I eat meat because optimizing my diet is more important and my civilization lets me get away with it, I do not eat anything that recognizes itself in a mirror. So a spider is a definite no, a chimpanzee is an extremely probable yes, a day-old human infant is an extremely probable no but there are non-sentience-related causes for me to care in this case, and pigs I am genuinely unsure of.

8Eliezer Yudkowsky11y
To be clear, I am unsure if pigs are objects of value, which incorporates both empirical uncertainty about their degree of reflectivity, philosophical uncertainty about the precise relation of reflectivity to degrees of consciousness, and ethical uncertainty about how much my idealized morals would care about various degrees of consciousness to the extent I can imagine that coherently. I can imagine that there's a sharp line of sentience which humans are over and pigs are under, and imagine that my idealized caring would drop to immediately zero for anything under the line, but my subjective probability for both of these being simultaneously true is under 50% though they are not independent. However it is plausible to me that I would care exactly zero about a pig getting a dust speck in the eye... or not.
1fubarobfusco11y
Does it matter to you that octopuses are quite commonly cannibalistic?
7Eliezer Yudkowsky11y
No. Babyeater lives are still important.
2MugaSofer11y
Funny, I parsed that as "should we then maybe be capturing them all to stop them eating each other?" Didn't even occur to me that was an argument about extrapolated octopus values.
4Eliezer Yudkowsky11y
It wasn't, your first parse would be a correct moral implication. The Babyeaters must be stopped from eating themselves.
2MugaSofer11y
... whoops. I meant I parsed fubarobfusco's comment differently to you, ("they want to be cannibals, therefore it's ... OK to eat them? Somehow?"), because I just assumed that obviously you should save the poor octopi (i.e. it would "bother" you in the sense of moral anguish, not "betcha didn't think of this!")
2shminux11y
I was unable to empathize with this view when reading 3WC. To me the Prime Directive approach makes more sense. I was willing to accept that the Superhappies have an anti-suffering moral imperative, since they are aliens with their alien morals, but that all the humans on the IPW or even its bridge officers would be unanimous in their resolute desire to end suffering of the Babyeater children strained my suspension of disbelief more than no one accidentally or intentionally making an accurate measurement of the star drive constant.
1Viliam_Bur11y
As an example outside of sci-fi, if you see an abusive husband and a brainwashed battered wife, the Prime Directive tells you to ignore the whole situation, because they both think it's more or less okay that way. Would you accept this consequence? Would it make a moral difference if the husband and wife were members of a different culture; if they were humans living on a different planet; or if they belonged to a different sapient species?
1shminux11y
The idea behind the PD is that for foreign enough cultures
  • you can't predict the consequences of your intervention with a reasonable certainty,
  • you can't trust your moral instincts to guide you to do the "right" thing,
  • the space of all favorable outcomes is likely much smaller than that of all possible outcomes, like in the literal genie case,
  • so you end up acting like a UFAI more likely than not.
Hence non-intervention has a higher expected utility than an intervention based on your personal deontology or virtue ethics. This is not true for sufficiently well analyzed cases, like abuse in your own society. The farther you stray from the known territory, the more chances that your intervention will be a net negative. Human history is rife with examples of this. So, unless you can do a full consequentialist analysis of applying your morals to an alien culture, keep the hell out.
0Emile11y
Assuming pigs were objects of value, would that make it morally wrong to eat them? Unlike octopi, most pigs exist because humans plan on eating them, so if a lot of humans stopped eating pigs, there would be fewer pigs, and the life of the average pig might not be much better. (this is not a rhetorical question)
2Eliezer Yudkowsky11y
Yes. If pigs were objects of value, it would be morally wrong to eat them, and indeed the moral thing to do would be to not create them.
3Vladimir_Nesov11y
This needs a distinction between the value of creating pigs, existence of living pigs, and killing of pigs. If existing pigs are objects of value, but the negative value of killing them (of the event itself, not of the change in value between a living pig and a dead one) doesn't outweigh the value of their preceding existence, then creating and killing as many pigs as possible has positive value (relative to noise; with opportunity cost the value is probably negative, there are better things to do with the same resources; by the same token, post-FAI the value of "classical" human lives is also negative, as it'll be possible to make significant improvements).
2drethelin11y
I don't think it's morally wrong to eat people, if they happen to be in irrecoverable states
1MugaSofer11y
... really? Um, that strikes me as very unlikely. Could you elaborate on your reasoning?
1David_Gerard11y
But zero is not a probability. Edit: Adele_L is right, I was confusing utilities and probabilities.

Zero is a utility, and utilities can even be negative (i.e. if Eliezer hated nematodes).

0MugaSofer11y
... are you pointing out that there is a nonzero probability that Eliezer's CEV actually cares about nematodes?
3David_Gerard11y
No, Adele_L is right, I was confusing utilities and probabilities.
0Armok_GoB11y
Keyword here is believe. What probability do you assign? And if you say epsilon or something like that, is the epsilon bigger or smaller than 1/(3^^^3/10^100)?

I've got an (IMHO) interesting discussion article written up, but I am unable to post it; I get a "webpage cannot be found" error when I try. I'm using IE 9. Is this a known issue, or have I done something wrong?

6gwern11y
Have you tried searching the LW bugtracker or using a different browser?
6Salemicus11y
Thank you for this suggestion. I have discovered that this works in Chrome.
[anonymous]11y50

Here's a question that's been distracting me for the last few hours, and I want to get it out of my head so I can think about something else.

You're walking down an alley after making a bank withdrawal of a small sum of money. Just about when you realize this may have been a mistake, two Muggers appear from either side of the alley, blocking trivial escapes.

Mugger A: "Hi there. Give me all of that money or I will inflict 3^^^3 disutility on your utility function."

Mugger B: "Hi there. Give me all of that money or I will inflict maximum disutil... (read more)

7Emile11y
I may be fighting the hypothetical here, but ... If utility is unbounded, maximum disutility is undefined, and if it's bounded, then 3^^^3 is by definition smaller than the maximum, so you should pay all to mugger B. I think trading a 10% chance of utility A for a 10% chance of utility B, with B < A, is irrational per the definition of utility (as far as I understand; you can have diminishing marginal utility on money, but not diminishing marginal utility on utility. I'm less sure about risk aversion though.)

That's not fighting the hypothetical. Fighting the hypothetical is first paying one, then telling the other you'll go back to the bank to pay him too. Or pulling out your kung fu skills, which is really fighting the hypothetical.

4sixes_and_sevens11y
If you have some concept of "3^^^3 disutility" as a tractable measure of units of disutility, it seems unlikely you don't also have a reasonable idea of the upper and lower bounds of your utility function. If the values are known this becomes trivial to solve. I am becoming increasingly convinced that VNM-utility is a poor tool for ad-hoc decision-theoretics, not because of dubious assumptions or inapplicability, but because finding corner-cases where it appears to break down is somehow ridiculously appealing.
3[anonymous]11y
If they're both telling the truth: since B gives maximum disutility, being mugged by both is no worse than being mugged by B. If you think your maximum disutility is X*3^^^3, I think if you run the numbers you should give a fraction X/2 to B, and the rest to A. (or all to B if X>2) If they might be lying, you should probably ignore them. Or pay B, whose threat is more credible if you don't think your utility function goes as far as 3^^^3 (although, what scale? Maybe a dust speck is 3^^^^3)
2Armok_GoB11y
Give it all to mugger B, obviously. I almost certainly am experiencing -3^^^3 utilons according to almost any measure every millisecond anyway, given I live in a Big World.

I wonder if it makes sense to have something like a registry of the LW regulars who are experts in certain areas. For example, this forum has a number of trained mathematicians, philosophers, computer scientists...

Something like a table containing [nick, general area, training/credentials, area of interest, additional info (e.g. personal site)], maybe?

0Viliam_Bur11y
On a wiki page. Allowing anyone to opt out. The first step would be to gather data... probably in an article made for this purpose... or in a fresh open thread.

This is unrelated to rationality, but I'm posting it here in case someone decides it serves their goals to help me be more effective in mine.

I recently bought a computer, used it for a while, then decided I didn't want it. What's the simplest way to securely wipe the hard drive before returning it? Is it necessary to create an external boot volume (via USB or optical disc)?

4tut11y
Probably use dban.
0Document11y
How should I answer this dialog? The help link at the bottom was unhelpful.
2tut11y
I used the second option, but it would surprise me if it didn't work either way.
0Document11y
Seems to have worked; thanks.
0Document11y
Thanks; I'll try it. (I should have mentioned that it was a Windows 8 PC, but your link mentions working under Windows, so thanks again.)
0tut11y
It doesn't work under any operating system, it has its own very simple OS on the CD.
0Document11y
Good point; not sure what I was thinking. I could have said something about the CPU and BIOS(?), but for now I'll just see if it works. (Edit: seems to have worked; thanks.)

I don't suppose there's any regularly scheduled LW meetups in San Diego, is there? I'll be there this week from Saturday to Wednesday for a conference.

How can I apply rationality to business?

9wedrifid11y
  • Avoid sunk costs.
  • If stuff doesn't work, figure out why and (in most cases) do different stuff.
  • When predicting how long a project will take, consider how long similar tasks tend to take and use that as a (rather strong) guide.

Has anyone done a study on redundant information in languages?

I'm just mildly curious, because a back-of-the-envelope calculation suggests that English is about 4.7x redundant - which on a side note explains how we can esiayl regnovze eevn hrriofclly msispled wrods.

(Actually, that would be an interesting experiment - remove or replace fraction x of the letters in a paragraph and see at what average x participants can no longer make a "corrected" copy.)

I'd predict that Chinese is much less redundant in its spoken form, and that I have no idea how to measure redundancy in its written form. (By stroke? By radical?)
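A minimal sketch of both halves of this, treating gzip output size as a crude stand-in for entropy (so don't expect it to reproduce the 4.7x figure exactly):

    # Redundancy estimate: raw bytes divided by gzip-compressed bytes.
    redundancy <- function(text) {
      raw <- charToRaw(text)
      length(raw) / length(memCompress(raw, type = "gzip"))
    }
    # The proposed experiment: delete roughly a fraction x of the characters
    # and see whether readers can still reconstruct the original.
    degrade <- function(text, x) {
      keep <- runif(nchar(text)) > x
      paste(strsplit(text, "")[[1]][keep], collapse = "")
    }
    degrade("you can easily recognize even horrifically misspelled words", 0.3)

gzip carries fixed header overhead, so feed redundancy() at least a few kilobytes of English before trusting the ratio.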

7gwern11y
Yes, it's been studied quite a bit by linguists. You can find some pointers in http://www.gwern.net/Notes#efficient-natural-language which may be helpful.
2linkhyrule511y
Thanks. ... huh. Now I'm thinking about actually doing that experiment...
5gwern11y
I ran into another thing in that vein: --The Man Who Invented Modern Probability - Issue 4: The Unlikely - Nautilus
0JQuinton11y
This also happens to me with music. I enjoy "unpredictable" music more than predictable music. Knowing music theory I know which notes are supposed to be played -- if a song is in a certain key -- and if a note or chord isn't predicted then it feels a bit more enjoyable. I wonder if the same technique could be applied to different genres of music with the same result, i.e. radio-friendly pop music vs non-mainstream music.
0linkhyrule511y
I wonder what that metric has to say about Finnigan's Wake...
0Douglas_Knight11y
By other metrics, Joyce became less compressible throughout his life. Going closer to the original metric, you demonstrate that the title is hard to compress (especially the lack of apostrophe).
0palladias11y
If you do, please post about it!
2wedrifid11y
Studies of this form have been done at least on the edge case where all the material removed is from the end (ie. tests of the ability of subjects to predict the next letter in an English text). I'd be interested to see your more general test but am not sure if it has been done. (Except, perhaps, as a game show).

Consider the following scenario. Suppose that it can be shown that the laws of physics imply that if we do a certain action (costing 5 utils to perform), then in 1/googol of our descendant universes, 3^^^3 utils can be generated. Intuitively, it seems that we should do this action! (at least to me) But this scenario also seems isomorphic to a Pascal's mugging situation. What is different?

If I attempt to describe the thought process that leads to these differences, it seems to be something like this. What is the measure of the causal descendants where 3^^^3... (read more)

0Armok_GoB11y
You can't pay for things in Utils, you can only pay for them in Opportunities. This is where Pascal's mugging goes wrong as well; the only reason to not give Pascal's mugger the money is the possibility of an even greater opportunity coming along later: a mugger that's more credible, and/or offers an even greater potential payoff. (And once any mugger offers INFINITE utility, there's only credibility left to increase.)
0Adele_L11y
That doesn't work, because the expected value of things that you should do, e.g. donating to an effective charity, is far lower than the expected value of a Pascal's mugging.
2Armok_GoB11y
I expect an FAI to have at least 10% probability of acquiring infinite computational power. This means donations to MIRI have infinite expected utility.

A new study shows that manipulative behavior could be linked to the development of some forms of altruism. The study itself is unfortunately behind a paywall.

4somervta11y
I have access - PM me if you're interested in it.
2Richard_Kennaway11y
It's about eusocial animals. Human relevance?
0JoshuaZ11y
Unclear. One could conceive of similar action occurring in highly social species that aren't eusocial but have limited numbers of breeding pairs, but that's not frequently done by primates.
1diegocaleiro11y
Didn't Sci-Hub work to find an unpaid version? It often does: http://sci-hub.org/
1gwern11y
Sci-hub does not work for US users AFAIK.
[anonymous]11y30

This paper about AI from Hector J. Levesque seems to be interesting: http://www.cs.toronto.edu/~hector/Papers/ijcai-13-paper.pdf

It extensively discusses something called 'Winograd schema questions': pronoun-resolution questions that are trivial for people but require commonsense knowledge, e.g. "The trophy doesn't fit in the brown suitcase because it's too big. What is too big?" If you want more examples of Winograd schema questions, there is a list here: http://www.cs.nyu.edu/faculty/davise/papers/WS.html

The paper's abstract does a fairly good job of summing it up, although it doesn't explicitly mention Winograd schema questions:

The science of AI is concerned with the study of intelligent forms of behaviour in computational terms. But wh

... (read more)

I have made it up to episode 5 of Umineko, and I've found one incident in particular unusually easy to resolve (easy enough that though the answer hasn't been suggested by anyone in-game, I am sure that I know how it was/could be done); I'm wondering how much it is due to specialized knowledge and whether it really looks harder to other people. (Because of the curse of knowledge, it's now difficult for me to see whether the puzzle really is as trivial as it looks to me.) So, a little poll, even though LWers are not the best people to ask.


In episode 5, a... (read more)

V pna guvax bs guerr jnlf bs qbvat guvf gevpx.

  1. Ur uvq sbhe fyvcf bs cncre, bar sbe rnpu frnfba. Cerfhznoyl ur jvyy erzbir gur bgure guerr ng gur svefg bccbeghavgl.

  2. Ur unf qbar fbzr erfrnepu gb qvfpbire fbzr snpg nobhg ure gb hfr va uvf qrzbafgengvba.

  3. Fur unf hfrq ure snibevgr frnfba nf gur nafjre gb n frphevgl dhrfgvba ba n jro fvgr gung ur unf nqzva-yriry npprff gb.

Gurer znl or bgure jnlf. Jvgu fb znal, V pnaabg or irel fher gung nal fvatyr bar gung V pubbfr vf evtug.

5ygert11y
Guvf "chmmyr" frrzf rnfl gb na rkgerzr, gb zr ng yrnfg. Gur gevivny fbyhgvba jbhyq or gb uvqr nyy gur cbffvoyr nafjref va qvssrerag cynprf, naq bayl gryy ure gb ybbx va gur cynpr jurer ur uvq gur nafjre ur trgf gbyq vf pbeerpg. (Va guvf pnfr, haqre gur pybpx.)
4palladias11y
Cerqvpgvba: Ur chg sbhe fyvcf bs cncre va gur ebbz (r.t. pybpx, grqql orne, fubr, cntr # bs grkgobbx), naq pubfr juvpu bowrpg gb qverpg ure gb onfrq ba ure erfcbafr. Ur'f unir gb erzbir gur bgure guerr fbbavfu, ohg ur boivbhfyl unq npprff bapr, naq vs gurl'er nyy va fhssvpvragyl bofpher cynprf, vg jbhyq or cerggl rnfl
3Adele_L11y
My thought was the same as palladias'. I'm not seeing an obvious way involving cryptography though, but I am somewhat familiar with it (I understand RSA and its proof).
1gwern11y
Zl crefbany guvaxvat jnf "Bar bs gur rnfvrfg jnlf gb purng n pelcgbtencuvp unfu cerpbzzvgzrag vf gb znxr zhygvcyr fhpu unfurf naq fryrpgviryl erirny n fcrpvsvp bar nf nccebcevngr; gur punenpgre unf irevsvnoyl cerpbzzvggrq gb n cnegvphyne cerqvpgvba bs 'jvagre', ohg unf ur irevsvnoyl cerpbzvggrq gb bayl bar cerqvpgvba?" (Nqzvggrqyl V unir orra guvaxvat nobhg unfu cerpbzzvgzragf zber guna hfhny orpnhfr V unir n ybat-grez cebwrpg jubfr pbapyhfvba vaibyirf unfu cerpbzzvgzragf naq V qba'g jnag gb zvfhfr gurz be yrnir crbcyr ebbz sbe bowrpgvba.)
2palladias11y
V qvqa'g guvax ng nyy nobhg unfurf (naq V qba'g unir zhpu rkcrevrapr jvgu gurz rkprcg n ovg bs gurbel). V whfg ena 'jung jbhyq V qb jvgu npprff gb gur ebbz nurnq bs gvzr naq jung qb V xabj?' naq bhg cbccrq sbhe furrgf bs cncre.
0saturn11y
Bs pbhefr, erirnyvat n unfu nsgre gur snpg cebirf abguvat, rira vs vg'f irevsvnoyl gvzrfgnzcrq. Nabgure cbffvoyr gevpx vf gb fraq n qvssrerag cerqvpgvba gb qvssrerag tebhcf bs crbcyr fb gung ng yrnfg bar tebhc jvyy frr lbhe cerqvpgvba pbzr gehr. V qba'g xabj bs na rnfl jnl nebhaq gung vs gur tebhcf qba'g pbzzhavpngr.
0David_Gerard11y
Guvf vf irel yvxr gur sbbgonyy cvpxf fpnz.
3Alicorn11y
V'z abg fher V jbhyq unir pnyyrq guvf n sbez bs pelcgbtencul jrer V hacevzrq, ohg jvgu bayl sbhe cbffvoyr nafjref ur whfg unf gb cvpx sbhe uvqvat cynprf naq gryy ure gb ybbx va gur evtug bar, evtug?
2MugaSofer11y
Gurer jrer abgrf sbe rnpu bs gur sbhe frnfbaf uvqqra va qvssrerag cynprf nebhaq gur ebbz. Gur pnyyre fvzcyl ersreerq ure gb gur uvqvat-cynpr bs gur abgr gung zngpurq ure nafjre. Zl svefg gubhtug ba ernqvat gur ceboyrz - juvpu fgvyy frrzf yvxr zl orfg thrff, ba ersyrpgvba, gubhtu. Qvqa'g ibgr ba gur "ubj fher ner lbh", orpnhfr V'z ab ybatre fher ubj fher V nz - V'z hasnzvyvne jvgu gur fubj, naq gur ersrerapr gb pelcgbtencul fhttrfgf fbzr bgure fbyhgvba (V'z snzvyvne jvgu ehqvzragnel zntvp gevpxf, juvpu vf cebonoyl jurer ZL fbyhgvba pbzrf sebz.) Ohg V pregnvayl qba'g unir "ab vqrn" ubj vg jnf qbar.
2NancyLebovitz11y
Posted before I read other replies: V fhfcrpg gurer ner sbhe fyvcf bs cncre va qvssrerag cnegf bs ure ebbz. Naq vs ur pbhyq farnx gurz va, gura gurer'f n ernfbanoyr punapr ur pna farnx gur guerr fyvcf ersreevat gb aba-jvagre frnfbaf bhg orsber fur svaqf gurz.
2beoShaffer11y
Yvxr frireny bs gur bgure pbzzragref V dhvpxyl fnj ubj guvf pbhyq or qbar jvgu onfvp fgntr zntvp, ohg qrfcvgr orvat snveyl snzvyvne jvgu pelcgb V qvqa'g vzzrqvngryl znxr gur pbaarpgvba gb pelcgb hagvy V fnj lbhe pbzzrag ba unfu cer-pbzzvgzragf. Univat n fvatyr pnabavpny yvfg bs lbhe cer-pbzzvgzragf choyvfurq va nqinapr jbhyq frrz gb cngpu guvf fcrpvsvp ihyarenovyvgl.
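In plaintext, since it concerns the general technique rather than the story: the cheat under discussion is to commit to every possible outcome and then reveal only the hash matching what actually happened. Here is a minimal Python sketch with made-up outcome labels (a real commitment scheme would also salt each message, since a four-option message space is trivially brute-forced):

    import hashlib

    def commit(message):
        # A SHA-256 hash commits to a message without revealing it.
        return hashlib.sha256(message.encode()).hexdigest()

    # The cheat: privately commit to *every* possible outcome...
    outcomes = ["red", "green", "blue", "yellow"]  # illustrative labels only
    commitments = {o: commit("I predict " + o) for o in outcomes}

    # ...then, once the truth is known, reveal only the matching hash,
    # creating the illusion of a single verified prediction.
    truth = "blue"
    print("See my timestamped commitment:", commitments[truth])

    # The patch suggested above: publish one canonical list of ALL your
    # commitments in advance, so a selective reveal becomes detectable.
    print("\n".join(sorted(commitments.values())))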
2Risto_Saarelma11y
V cnggrea zngpurq zl vqrn bs gur fbyhgvba gb gur onfvp fgntr zntvp gevpx bs univat znal uvqqra bcgvbaf naq znxvat gur znex guvax lbh bayl unq gur bar lbh fubjrq gurz, abg pelcgbtencul.
1gjm11y
I'm rather alarmed at how many people appear to have said they're very sure they know how he did it, on (I assume, but I think it's pretty clear) the basis of having thought of one very credible way he could have done it. I'm going to be optimistic and suppose that all those people thought something like "Although gwern asked how sure we are that we know how it was done, context suggests that the puzzle is really 'find a way to do it' rather than 'identify the specific way used in this case', so I'll say 'very' even though for all I know there could be other ways". (For what it's worth, I pedantically chose the "middle" option for that question, but I found the same obvious solution as everyone else.)
1gwern11y
In the case of Umineko, there's not really any difference between 'find a way' and 'find the way', since it adheres to a relativistic Schrodinger's-cat-inspired epistemology where all that matters is successfully explaining the observed evidence. So I don't expect the infelicitous wording to make a difference.
0gjm11y
Ah, OK. I wasn't aware of that bit of context. Thanks.
0gwern11y
As it turns out, there's a second possible way using a detail I didn't bother to mention (because I assumed it was a red herring and not as satisfactory a solution anyway): Angfhuv npghnyyl fnlf fur'f arire rire gbyq nalbar ure snibevgr frnfba rkprcg sbe gur srznyr freinag Funaaba lrnef ntb, naq guvaxf nobhg jurgure Funaaba pbhyq or pbafcvevat jvgu gur lbhat znyr pnyyre. Rkprcg Funaaba vf n ebyr cynlrq ol gur traqre-pbashfrq pebffqerffvat phycevg Lnfh (nybat jvgu gur ebyrf bs Xnaba & Orngevpr), fb gur thrff pbhyq unir orra onfrq ba abguvat ohg ure zrzbel bs orvat gbyq gung. Crefbanyyl, rira vs V jnf va fhpu n cbfvgvba, V jbhyq fgvyy cersre hfvat gur pneq gevpx: jul pbhyqa'g Angfhuv unir punatrq ure zvaq bire gur lrnef? Be abg orra frevbhf va gur svefg cynpr? Be Funaaba unir zvferzrzorerq? rgp
1Kindly11y
Mentally move my vote from "No idea" to "Very", since apparently I can read poll answers better than poll questions.
0garethrees10y
Creuncf gur fyvc bs cncre ybbxrq fbzrguvat yvxr guvf. (Qrfvtavat na nzovtenz jbhyq or nanybtbhf gb svaqvat zhygvcyr zrffntrf jvgu gur fnzr unfu.)
0gwern10y
Gung'q arire jbex sbe n frpbaq ba n uhzna. V qba'g guvax V'ir frra nal nzovtenzf juvpu ner fb fzbbgu gung lbh pbhyq frr rvgure bar onfrq ba n cevzr jvgubhg abgvat gung gur jevgvat vf irel bqq. V pna'g rira ernq nal bs gung nzovtenz rkprcg sbe 'fcevat', fgenvavat uneq.
0garethrees10y
Gung cnegvphyne nzovtenz, fher. (Vg'f nyfb qvssvphyg gb svaq zhygvcyr zrffntrf jvgu gur fnzr unfu.) Ohg Qreera Oebja hfrq guvf nzovtenz va uvf 2007 frevrf "Gevpx be Gerng" jvgu ng yrnfg gur nccrnenapr bs fhpprff (gubhtu nf nyjnlf jvgu Oebja, vg'f cbffvoyr ur jnf sbbyvat hf engure guna gur cnegvpvcnag).
0gwern11y
Thanks for all the poll submissions. Since I just finished Umineko, I decided this is a good time to analyze the 49 responses. The gist is that the direction seems to be as predicted and the effect size reasonable (odds ratio of 1.77), but not big enough to yield any impressive level of statistical significance (p=0.24):

    R> poll <- read.csv("http://dl.dropboxusercontent.com/u/182368464/umineko-poll.csv")
    R> library(ordinal)
    R> summary(clm(as.ordered(Certainty) ~ Crypto, data=poll))
    formula: as.ordered(Certainty) ~ Crypto
    data:    poll

     link  threshold nobs logLik AIC   niter max.grad cond.H
     logit flexible  48   -30.58 67.16 5(0)  5.28e-09 2.9e+01

    Coefficients:
           Estimate Std. Error z value Pr(>|z|)
    Crypto    0.571      0.491    1.16     0.24

    Threshold coefficients:
        Estimate Std. Error z value
    0|1    1.988      0.708    2.81
    1|2    3.075      0.822    3.74
    (1 observation deleted due to missingness)
    R> exp(0.571)
    [1] 1.77

Or if you prefer, a linear regression:

    R> summary(lm(Certainty ~ Crypto, data=poll))

    Call:
    lm(formula = Certainty ~ Crypto, data = poll)

    Residuals:
       Min     1Q Median     3Q    Max
    -0.409 -0.287 -0.287 -0.164  1.836

    Coefficients:
                Estimate Std. Error t value Pr(>|t|)
    (Intercept)    0.164      0.151    1.09     0.28
    Crypto         0.122      0.117    1.05     0.30
0David_Gerard11y
Zhygvcyr ovgf bs cncre, boivbhfyl.
[anonymous]11y20

I have never consciously noticed a dust speck going into my eye; at least, I don't remember it. This means it didn't have a big enough effect on my mind to leave a lasting impression in my memory. When I first read the post about dust specks and torture, I had to think hard about wtf a speck going into your eye even means.

Does this mean that I should attribute zero negative utility to a dust speck going into my eye?

4gwern11y
You could consider the analogous problem of waking up during surgery & then forgetting it afterwards.
3Locaha11y
The dust speck is just a symbol for the smallest negative utility unit. Just imagine something else.
2[anonymous]11y
Oh, I was already aware of that (and this is not just hindsight bias; I remember reading about this today, and someone suggested replacing the speck with the smallest actual negative utility unit). This isn't really about the original question anyway. I was just wondering whether something that doesn't even register on a conscious level could have negative utility.
1Locaha11y
I guess anything with a negative cumulative effect. Imagine the dust specks piling up in your eye until they start to interfere with your vision.
1linkhyrule511y
Well, yes, but it's one dust speck per person... And it's entirely possible that the utility of dust specks isn't additive. In fact, it's trivially non-additive: one dust speck is fine; a few trillion will do gruesome things to your head.
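A toy sketch of that non-additivity, with an entirely made-up disutility curve (the constants mean nothing; the point is only that a superlinear curve makes "trillions of people, one speck each" and "one person, all the specks" come apart):

    def speck_disutility(n):
        # Toy model: disutility grows faster than linearly in the number
        # of specks one person receives. Both constants are arbitrary.
        per_speck = 1e-9
        return per_speck * n + per_speck * (n ** 2) / 1e6

    n = 3 * 10 ** 12                 # "a few trillion"
    print(n * speck_disutility(1))   # trillions of people, one speck each: ~3000
    print(speck_disutility(n))       # one person, all the specks: ~9e9, far worse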
0Locaha11y
I'm now thinking of developing a Dust Speck Machine Gun. Or Shotgun, possibly. Well, I don't see how anything that never registers on any level can have any utility. But... I dunno. Something that lowers your IQ by 1 point may be something you will never discover, and yet it will cause you negative utility...

What if this were a video game? That could be a way of becoming more strategic.

Do consequentialists generally hold as axiomatic that there must be a morally preferable choice (or conceivably multiple equally preferable choices) in a given situation? If so, could somebody point me to a deeper discussion of this axiom? (It probably has a name, which I don't know.)

2somervta11y
Not explicitly as an axiom AFAIK, but if you're valuing states-of-the-world, any choice you make will lead to some state, which means that unless your valuation is circular, the answer is yes. Basically, as long as your valuation is VNM-rational, definitely yes. Utilitarians are a special case of this, and I think most consequentialists would adhere to that also.
5asr11y
What happens if my valuation is noncircular, but is incomplete? What if I only have a partial order over states of the world? Suppose I say "I prefer state X to Z, and don't express a preference between X and Y, or between Y and Z." I am not saying that X and Y are equivalent; I am merely refusing to judge. My impression is that real human preference routinely looks like this; there are lots of cases people refuse to evaluate or don't evaluate consistently. It seems like even with partial preferences, one can be consequentialist -- if you don't have clear preferences between outcomes, you have a choice that isn't morally relevant. Or is there a self-contradiction lurking?
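For reference (this gloss is mine, not asr's): the VNM axiom that such a partial order drops is completeness, which in the usual notation says

    % Completeness: every pair of outcomes is comparable.
    \forall A, B : \quad A \preceq B \ \lor \ B \preceq A

asr's example (X preferred to Z, no judgment about Y) can satisfy transitivity while violating completeness.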
1pengvado11y
If the result of that partial preference is that you start with Z and then decline the sequence of trades Z->Y->X, then you got Dutch booked. On the other hand, maybe you want to accept the sequence Z->Y->X if you expect both trades to be offered, but decline each in isolation? But then your decision procedure is dynamically inconsistent: standing at Z and expecting both trade offers, you have to precommit to using a different algorithm to evaluate the Y->X trade than you will want to use once you have Y.
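A minimal sketch of the first branch of pengvado's point, assuming an agent that accepts a swap only when the offered state is strictly preferred under its incomplete ordering (state names follow asr's example):

    # X is strictly preferred to Z; Y is incomparable with both.
    strictly_prefers = {("X", "Z")}

    def accepts_swap(held, offered):
        # Incomparable offers are declined: the agent "refuses to judge".
        return (offered, held) in strictly_prefers

    state = "Z"
    for give, get in [("Z", "Y"), ("Y", "X")]:  # the sequence of trades Z -> Y -> X
        if state == give and accepts_swap(state, get):
            state = get

    print(state)  # "Z": both swaps declined in isolation, so the agent never
                  # reaches X despite strictly preferring X to Z.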
0asr11y
I think I see the point about dynamic inconsistency. It might be that "I got to state Y from Z" will alter my decision-making about Y versus X. I suppose it means that my decision of what to do in state Y no longer depends purely on consequences, but also on history, at which point they revoke my consequentialist party membership. But why is that so terrible? It's a little weird, but I'm not sure it's actually inconsistent or violates any of my moral beliefs. I have all sorts of moral beliefs about ownership and rights that are history-dependent, so it's not like history-dependence is a new strange thing.
0somervta11y
You could have undefined value, but it's not particularly intuitive, and I don't think anyone actually advocates it as a component of a consequentialist theory. Whether, in real life, people actually do it is a different story. I mean, it's quite likely that humans violate the VNM model of rationality, but that could just be because we're not rational.
0metastable11y
Thanks! Do consequentialists kind of port the first axiom (completeness) from the VNM utility theorem, changing it from decision theory to meta-ethics?

And for others, to put my original question another way: before we start comparing utilons or utility functions, insofar as consequentialists begin with moral intuitions and reason their way to the existence of utility, is one of their starting intuitions that all moral questions have correct answers? Or am I just making this up? And has anybody written about this?

To put that in one popular context: in the Trolley Switch and Fat Man problem, it seems like most people start with the assumption that there exists a right answer (or preferable, or best, whatever your terminology), and that it could never be the case that an agent will do the wrong/immoral/unethical thing no matter what he or she chooses. Am I right that this assumption exists?
1asr11y
Most people do have this belief. I think it's a safe one, though. It follows from a substantive belief most people have, which is that agents are only morally responsible for things that are under their control. In the context of a trolley problem, it's stipulated that the person is being confronted with a choice -- in the context of the problem, they have to choose. And so it would be blaming them for something beyond their control to say "no matter what you do, you are blameworthy." One way to fight the hypothetical of the trolley problem is to say "people are rarely confronted with this sort of moral dilemma involuntarily, and it's evil to put yourself in a position of choosing between evils." I suppose for consistency, if you say this, you should avoid jury service, voting, or political office.
1somervta11y
Not explicitly (except in the case of some utilitarians), but I don't think many would deny it. The boundaries between meta-ethics and normative ethics are vaguer than you'd think, but consequentialism is already sort of metaethical. The VNM theorem isn't explicitly discussed that often (many ethicists won't have heard of it), but the axioms are fairly intuitive anyway. However, although I don't know enough about weird forms of consequentialism to know if anyone's made a point of denying completeness, I wouldn't be that surprised if that position exists. Yes, I think it certainly exists. I'm not sure if it's universal or not, but I haven't read a great deal on the subject yet, so I'm not sure I would know.

Um... In the HPMOR notes section, this little thing got mentioned.

"I am auctioning off A Day Of My Time, to do with as the buyer pleases – this could include delivering a talk at your company, advising on your fiction novel in progress, applying advanced rationality skillz to a problem which is tying your brain in knots, or confiding the secret answer to the hard problem of conscious experience (it’s not as exciting as it sounds). I retain the right to refuse bids which would violate my ethics or aesthetics. Disposition of funds as above."

That...

4ArisKatsaris11y
Well, keep in mind that Eliezer himself claims that "it's not as exciting as it sounds". And of course you always need to have in mind that what Eliezer considers to be "the secret answer to the hard problem of conscious experience" may not be as satisfying an answer to you as it is to him. After all, some people think that the non-secret answer to the hard problem of conscious experience is something like "consciousness is what an algorithm feels like from the inside" and this is quite non-satisfactory to me (and I think it was non-satisfactory to Eliezer too). (And also, I think the bidding started at something like $4000.)
0CAE_Jones11y
I got excited for the fraction of a second it took me to remember that everyone who could possibly want to bid could probably afford to spend more money than I have to my name on this without it cutting into their living expenses. Unless my plan was "Bid $900, hope no one outbids, ask Eliezer to get me a job as quickly as possible", which isn't really that exciting a category, however useful.
0Mitchell_Porter11y
I might have bid on that, but the auction is already over.

I enjoyed this non-technical piece about the life of Kolmogorov, who was responsible for a commonly used measure of complexity as well as several now-conventional conceptions of probability. I wanted to share: http://nautil.us/issue/4/the-unlikely/the-man-who-invented-modern-probability

What is a reliable way of identifying arbitrary solved or unsolved problems?

0[anonymous]11y
The existence of an industry indicates a common problem that humans can make some progress toward solving. http://en.wikipedia.org/wiki/Standard_Industrial_Classification

A manual or a textbook for a field that is more applied than descriptive is full of procedural knowledge for solving the problems of that domain. You can find very good books explaining how to draw portraits, but for some reason people don't openly say portrait drawing is solved.

Maybe in applied fields we just work to solve bigger and harder problems, like figuring out how to forecast the weather ever more accurately, and once the problems are mostly and reliably solved the fields just quietly disappear. Like we don't have lamplighters anymore, because light bulbs mostly and reliably solve the problem that lamplighters were specialized to deal with. Or it's unusual for a university education to build up to theology these days, when theology used to be the main reason for universities existing.
0Alsadius11y
Arbitrary, as in ones you pick yourself? Well, pick a problem, then Google it. Do you mean random?
0Flipnash11y
I do mean random. The only way I've come up with that can reliably identify a problem is to pick a random household item, then think of what problem it is supposed to solve, thereby identifying a problem. But that doesn't work for unsolved problems...
7Pentashagon11y
I think you have to start by imagining better possible states of the world, and then see if anyone has thought of a practical way to get from the current state to the better possible state; if not, it's an unsolved problem. In household terms, start by imagining the household in a "random" better state (cleaner, more efficient, more interesting, more comfortable, etc.) and once you have a clear idea of something better, search for ways to achieve the better state. In concrete terms, always having clean dishes and delicious prepared food would be much better than dirty dishes and no food. Dishwashers help with the former, but are manual and annoying. Microwaves and frozen food help with the latter, but I like fresh food. Paying a cook is expensive. Learning to cook and then cooking costs time. What is cheap, practical, and yields good results? Unsolved problem, unless you want to eat Soylent.
3RolfAndreassen11y
Skilled slaves? Perhaps 'ethical' should be added to your list of constraints. :)
1Lumifer11y
(cheap, practical, and yields good results) = (skilled slaves) ?? We must live in radically different environments X-D
4Manfred11y
You could pick words from the dictionary at random until they either describe a problem or are nonsensical - if nonsense, try again. Warning: may take a few million tries to work.
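A throwaway sketch of that procedure; the word-list path is a common Unix location rather than a given, and judging "problem vs. nonsense" is left to the human:

    import random

    # Assumes a Unix-style word list; substitute any word list you have.
    with open("/usr/share/dict/words") as f:
        words = [line.strip() for line in f]

    def candidate(n=4):
        # Draw n random words; the reader judges whether the phrase
        # describes a problem or is nonsense (if nonsense, redraw).
        return " ".join(random.sample(words, n))

    for _ in range(5):
        print(candidate())  # expect mostly nonsense, as warned above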

I find the idea of commitment devices strongly aversive. If I change my mind about doing something in the future, I want to be able to do whatever I then choose, and I don't want my past self to have created negative repercussions for me for changing my mind.