Comment author: Emile · 19 August 2013 07:22:52AM · 19 points
I think it's much better than monthly open threads - back then, I would sometimes think "Hmm, I'd like to ask this in an open thread, but the last one is too old, nobody's looking at it any more".
Comment author: Manfred · 19 August 2013 01:57:56PM · 4 points
Suppose we were wondering about changing the flavor of our pizza. Someone says "Yeah, I'm really glad you've got these new flavors on your menu, I used to think the old recipe was boring and didn't order it much."
And then it turns out that this person hasn't ever actually tried any of your new flavors of pizza.
Sort of sets an upper bound on how much the introduction of new flavors has impacted this person's behavior.
Comment author: Tenoke · 19 August 2013 02:16:31PM · 5 points
You can judge a lot more about a thread than about a pizza by just looking at it.
Also, if you seriously think that Open Threads can only be evaluated by people with top-level comments in them, you probably misunderstand both how most people use the Open Threads and what is required to judge them.
Comment author: Manfred · 19 August 2013 04:42:00PM · -3 points
I think you can judge quite a lot about pizza without eating it. That just wasn't what I was talking about. Please don't bait-and-switch the conversation.
Comment author: Emile · 19 August 2013 04:26:35PM · 3 points
Sure!
Though here it's more a case of "once in a blue moon I go to the pizza place ... and I'm bored and tired of life ... and want to try something crazy for a change ... but then I see the same old stuff on the menu and think, man, this world sucks ... but now that they have the Sushi-Harissa-Livarot pizza, I know next time I'm going to feel better!"
I agree it's a bit weird that I say that p(post | weekly thread) > p(post | monthly thread) when so far there are no instances of post | weekly thread.
Comment author: bogdanb · 19 August 2013 07:34:42PM · 4 points
Note that he didn’t say “I didn’t post much”, he just said that there existed times when he thought about posting but didn’t because of the age of the thread. That is useful evidence, you can’t just ignore it if it so happens that there are no instances of posting at all.
(In pizza terms, Emile said “I used to think the old recipe was bad and I never ordered it.” It’s not that surprising in that case that there are no instances of ordering.)
I prefer it to the old format; once a month is too clumpy for an open thread. It was fine when this was a two-man blog, but not for a discussion forum.
Comment author: bbleeker · 20 August 2013 02:56:22PM · -1 points
Also easier to translate. In fact, we often translate "up to" with "maximaal", the equivalent of "up to a maximum of" in Dutch. But of course that only translates the practical sense, and leaves out the implication of "up to a maximum of xx (and that is a LOT)". We could translate it with "wel" ("wel xx" ~ "even as much as xx"), but in most contexts, that sounds really... American, over the top, exaggerated. And also it doesn't sound exact enough, when it clearly is intended to be a hard limit.
Comment author: JoshuaZ · 19 August 2013 09:20:09PM · 0 points
Unclear. One could conceive of similar action occurring in highly social species that aren't eusocial but have limited numbers of breeding pairs, but that's not frequently done by primates.
Comment author: shminux · 19 August 2013 08:22:55PM · 5 points
What's the name of the bias/fallacy/phenomenon where you learn something (new information, an approach, a calculation, a way of thinking, ...) but after a while revert to your old ideas/habits/views etc.?
Note that this is intended as a "Causality for non-majors" type presentation. If you need a higher level of precision, and are able to follow the maths, you would be much better off reading Pearl's book.
Comment author: Adele_L · 19 August 2013 10:35:49PM · 1 point
Thanks for making these available.
Even if you can follow the math, these sorts of things can be useful for orienting someone new to the field, or laying a conceptually simple map of the subject that can be elaborated on later. Sometimes, it's easier to use a map to get a feel for where things are than it is to explore directly.
Comment author: Flipnash · 20 August 2013 12:17:04AM · 0 points
I do mean random. The only way I've come up with that can reliably identify a problem is to pick a random household item, then think about what problem it is supposed to solve, thereby identifying a problem; but that doesn't work for unsolved problems....
Comment author: Pentashagon · 20 August 2013 12:44:03AM · 5 points
I think you have to start by imagining better possible states of the world, and then see if anyone has thought of a practical way to get from the current state to the better possible state; if not, it's an unsolved problem.
In household terms, start by imagining the household in a "random" better state (cleaner, more efficient, more interesting, more comfortable, etc.) and once you have a clear idea of something better, search for ways to achieve the better state. In concrete terms, always having clean dishes and delicious prepared food would be much better than dirty dishes and no food. Dishwashers help with the former, but are manual and annoying. Microwaves and frozen food help with the latter, but I like fresh food. Paying a cook is expensive. Learning to cook and then cooking costs time. What is cheap, practical, and yields good results? Unsolved problem, unless you want to eat Soylent.
Comment author: Manfred · 20 August 2013 09:46:03AM · 3 points
You could pick words from the dictionary at random until they either describe a problem or are nonsensical - if nonsense, try again. Warning: may take a few million tries to work.
Comment author: pan · 19 August 2013 10:12:51PM · 23 points
Why doesn't CFAR just tape record one of the workshops and put it on YouTube? Or at least put the notes online and update them each time they change for the next workshop? It seems like these two things would take very little effort, and while not perfect, would be a good middle ground for those unable to attend a workshop.
I can definitely appreciate the idea that person-to-person learning can't be matched by these, but it seems to me that if the goal is to help the world through rationality, and not to make money by forcing people to attend workshops, then something like tape recording would make sense. (Not an attack on CFAR, just a question from someone not overly familiar with it.)
Comment author: ChristianKl · 19 August 2013 10:28:05PM · 8 points
One of the core ideas of CFAR is to develop tools to teach rationality. For that purpose it's useful to avoid making the course material completely open at this point in time. CFAR wants to publish scientific papers that validate their ideas about teaching rationality.
Doing things in person helps with running experiments and those experiments might be less clear when some people already viewed the lectures online.
Comment author: pan · 19 August 2013 11:31:23PM · 5 points
I guess I don't see why the two are mutually exclusive. I doubt everyone would stop attending workshops if the material were freely available, and I don't understand why something can't be published just because it was open-sourced first.
Comment author: Frood · 20 August 2013 06:07:16AM · 4 points
I'm guessing that the goal here is to gather information on how to teach rationality to the 'average' person? As in, the person off of the street who's never asked themselves "what do I think I know and how do I think I know it?". But as far as I can tell, LWers make up a large portion of the workshop attendees. Many of us will have already spent enough time reading articles/sequences about related topics that it's as if we've "already viewed the lectures online".
Also, it's not as if the entire internet is going to flock to the content the second that it gets posted. There will still be an endless pool of people to use in the experiments. And wouldn't the experiments be more informative if the data points weren't all paying participants with rationality as a high priority? Shouldn't the experiments involve trying to teach a random class of high-schoolers or something?
Comment author: somervta · 21 August 2013 01:51:24AM · 3 points
(April 2013 Workshop Attendee)
(The argument is that) A lot of the CFAR workshop material is very context dependent, and would lose significant value if distilled into text or video. Personally speaking, a lot of what I got out of the workshop was only achievable in the intensive environment - the casual discussion about the material, the reasons behind why you might want to do something, etc - a lot of it can't be conveyed in a one hour video. Now, maybe CFAR could go ahead and try to get at least some of the content value into videos, etc, but that has two concerns. One is the reputational problem with 'publishing' lesser-quality material, and the other is sorta-almost akin to the 'valley of bad rationality'. If you teach someone, say, the mechanics of aversion therapy, but not when to use it, or they learn a superficial version of the principle, that can be worse than never having learned it at all, and it seems plausible that this is true of some of the CFAR material also.
Comment author: pan · 21 August 2013 03:33:22PM · 3 points
I agree that there are concerns, and you would lose a lot of the depth, but my real concern is with how this makes me perceive CFAR. When I am told that there are things I can't see or hear until I pay money, it makes me feel like it's all some sort of money-making scheme, and question whether the goal is actually to teach as many people as much as possible, or just to maximize revenue. Again, let me clarify that I'm not trying to attack CFAR; I believe that they probably are an honest and good thing, but I'm trying to convey how I initially feel when I'm told that I can't get certain material until I pay money.
It's akin to my personal heuristic of never taking advice from anyone who stands to gain from my decision. Being told by people at CFAR that I can't see this material until I pay is the opposite of how I want to decide whether to attend a workshop; I'd rather see the tapes or read the raw material and decide on my own that I would benefit from being there in person.
Comment author: tgb · 21 August 2013 06:28:37PM · 2 points
While you have good points, I would like to say that making money is not unaligned with the goal of teaching as many people as possible. It seems like a good strategy is to develop high-quality material by starting off teaching only those able to pay, letting them subsidize the development of more open course material. If they haven't yet gotten to the point of releasing that material, then I'd give them some more time and judge them again in a few years. It's a young organization trying to create material from scratch in many areas.
Comment author: metastable · 21 August 2013 07:16:22PM · 3 points
Yeah, I feel these objections, and I don't think your heuristic is bad. I would say, though, and I hold no brief for CFAR, never having donated or attended a workshop, that there is another heuristic possibly worth considering: generally more valuable products are not free. There are many exceptions to this, and it is possible for sellers to counterhack this common heuristic by using higher prices to falsely signal higher quality to consumers. But the heuristic is not worthless, it just has to be applied carefully.
Comment author: somervta · 22 August 2013 09:21:09AM · 1 point
I feel your concerns, but tbh I think the main disconnect is the research/development vs teaching dichotomy, not (primarily) the considerations I mentioned. The volunteers at the workshop (who were previous attendees) were really quite emphatic about how much they had improved, including content and coherency as well as organization.
I'm a keen swing dancer. Over the past year or so, a pair of internationally reputable swing dance teachers have been running something called "Swing 90X", (riffing off P90X). The idea is that you establish a local practice group, film your progress, submit your recordings to them, and they give you exercises and feedback over the course of 90 days. By the end of it, you're a significantly more badass dancer.
It would obviously be better if everything happened in person, (and a lot does happen in person; there's a massive international swing dance scene), but time, money and travel constraints make this prohibitively difficult for a lot of people, and the whole Swing 90X thing is a response to this, which is significantly better than the next best thing.
It's worth considering if a similar sort of model could work for CFAR training.
Comment author: Benito · 21 August 2013 01:59:35PM · 4 points
Is a CFAR workshop like a lecture? I thought it would be closer to a group discussion, and perhaps subgroups within. This would make a recording highly unfocused and difficult to follow.
Comment author: somervta · 22 August 2013 09:25:36AM · 3 points
Any one unit in the workshop is probably something in between a lecture, a practice session and a discussion between the instructor and the attendees. Each unit is different in this respect. For most of the units, a recording of a session would probably not be very useful on its own.
Comment author: Adele_L · 19 August 2013 11:39:54PM · 3 points
Consider the following scenario. Suppose that it can be shown that the laws of physics imply that if we do a certain action (costing 5 utils to perform), then in 1/googol of our descendent universes, 3^^^3 utils can be generated. Intuitively, it seems that we should do this action! (at least to me) But this scenario also seems isomorphic to a Pascal's mugging situation. What is different?
If I attempt to describe the thought process that leads to these differences, it seems to be something like this. What is the measure of the causal descendents where 3^^^3 utils are generated? In typical Pascal's mugging, I expect there to be absolutely zero causal descendents where 3^^^3 utils are generated, but in this example, I expect there to be "1/googol" such causal descendents, even though the subjective probability of these two scenarios is roughly the same. I then do my expected utility maximization with (# of utils)(best guess of my measure) instead of (# of utils)(subjective probability), which seems to match with my intuitions better, at least.
But this also just seems like I am passing the buck to the subjective probability of a certain model of the universe, and that this will suffer from the mugging problem as well.
So does thinking about it this way add anything, or is it just more confusing?
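The measure-weighted rule described above can be sketched numerically. This is only an illustration with invented stand-in numbers (a `HUGE_PAYOFF` in place of 3^^^3, which is far too large for floating point, and a 1/googol measure), not anything from the comment itself:

```python
# Illustrative sketch (all numbers hypothetical): comparing ordinary
# probability-weighted expected utility with the measure-weighted
# variant described above.

HUGE_PAYOFF = 1e120   # stand-in for 3^^^3 utils
COST = 5.0            # utils the action costs

def expected_utility(payoff, weight, cost):
    """Weight is either a subjective probability or, in the variant
    above, the measure of causal descendants where the payoff occurs."""
    return payoff * weight - cost

# Physics case: a tiny but nonzero measure (1/googol) of descendant universes.
physics_case = expected_utility(HUGE_PAYOFF, 1e-100, COST)  # hugely positive

# Pascal's mugging: similar subjective probability of the scenario, but the
# expected measure of descendants where the payoff occurs is zero.
mugging_case = expected_utility(HUGE_PAYOFF, 0.0, COST)     # always -COST
```

The two cases can have roughly the same subjective probability while the measure-weighted calculation gives opposite recommendations, which is the asymmetry the comment is pointing at.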
Comment author: knb · 20 August 2013 01:15:13AM · 8 points
I don't know how technically viable hyperloop is, but it seems especially well suited for the United States.
Investing in a hyperloop system doesn't make as much sense in Europe or Japan for a number of reasons:
- European/Japanese cities are closer together, so Hyperloop's long acceleration times are a larger relative penalty in terms of speed. The existing HSR systems reach their lower top speeds more quickly.
- Most European countries and Japan already have decent HSR systems and are set to decline in population. Big new infrastructure projects tend not to make as much sense when populations are declining and the infrastructure cost : population ratio is increasing by default.
- Existing HSR systems create a natural political enemy for Hyperloop proposals. For most countries, having both HSR and Hyperloop doesn't make sense.
In contrast, the US seems far better suited:
- The US is set for a massive population increase, requiring large new investments in transportation infrastructure in any case.
- The US has lots of large but far-flung cities, so long acceleration times are not as much of a relative penalty.
- The US has little existing HSR to act as a competitor. The political class has expressed interest in increasing passenger rail infrastructure.
- Hyperloop is proposed to carry automobiles. Low walkability of US towns is the big killer of intercity passenger rail in the US. Taking HSR might be faster than driving, but in addition to other benefits, driving saves money on having to rent a car when you reach the destination city.
Another possible early adopter is China (because they still need more transport infrastructure, land acquisition is a trivial problem for the Communist party, and they have a larger area, mitigating the slow acceleration problem.) I see China as less likely than the US because they do have a fairly large HSR system and it is expanding quickly. Also, China is set for population decline within a few decades, although they have some decades of slow growth left.
Russia is another possible candidate. Admittedly they have the declining population problem, but they still need more transport infrastructure and they have several big, far-flung cities. The current Russian transportation system is quite unsafe, so they could be expected to be willing to invest in big new projects. The slow acceleration problem would again be mitigated by Russia's large size.
Comment author: CAE_Jones · 20 August 2013 02:30:06AM · 2 points
I was only vaguely following the Hyperloop thread on LessWrong, but this analysis convinced me to Google it to learn more. I was immediately bombarded with a page full of search results that were pessimistic at best (mocking, pretending at the fallacy of gray but still patronizing, and politically indignant (the LA Times) were among the results on the first page)[1]. I was actually kind of hopeful about the concept, since America desperately needs better transit infrastructure, and knb's analysis of it being best suited for America makes plenty of sense so far as I can tell.
[1] I didn't actually open any of the results, just read the titles and descriptions. The tone might have been exaggerated or even completely mutated by that filter, but that seems unlikely for the titles and excerpts I read.
I suggest that this is very weak evidence against the viability, either political, economic, or technical, of the Hyperloop. Any project that is obviously viable and useful has been done already; consequently, both useful and non-useful projects get the same amount of resistance of the form "Here's a problem I spent at least ten seconds thinking up, now you must take three days to counter it or I will pout. In public. Thus spoiling all your chances of ever getting your pet project accepted, hah!"
Comment author: CellBioGuy · 20 August 2013 04:29:09AM · 5 points
In theory there is no difference between theory and practice. In practice, there is.
I continue to fail to see how this idea is anything more than a cool idea that would take huge amounts of testing and engineering hurdles to get going if it indeed would prove viable. Nothing is as simple as its untested dream ever is.
Not hating on it, but seriously, hold your horses...
Comment author: knb · 21 August 2013 09:48:28AM · 1 point
I feel like I covered this in the first sentence with, "I don't know how technically viable hyperloop is." My point is just to argue that the US would be especially well-suited for hyperloop if it turns out to be viable. My goal was mainly to try to argue against the apparent popular wisdom that hyperloop would never be built in the US for the same reason HSR (mostly) wasn't.
Comment author: metastable · 20 August 2013 05:21:15AM · 0 points
I'd like to hear more about possibilities in China, if you've got more. Everything I've read lately suggests that they've extensively overbuilt their infrastructure, much of it with bad debt, in the rush to create urban jobs. And it seems like they're teetering on the edge of a land-development bubble, and that urbanization has already started slowing. But they do get rights-of-way trivially, as you say, and they're geographically a lot more like the US than Europe.
Comment author: gattsuru · 20 August 2013 05:01:31PM · 1 point
Mr. Sumner has some pretty clear systemic assumptions about government spending on infrastructure. This article seems consistent with both aspects, however.
The Chinese government /is/ opening up new opportunities for non-Chinese companies to provide infrastructure, in order to further cover land development. But they're doing so at least in part because urbanization is slowing and these investments are perceived locally as higher-risk to already risk-heavy banks, and foreign investors are likely to be more adventurous or to lack information.
Comment author: DanielLC · 20 August 2013 05:57:51AM · 0 points
I've been told that railways primarily make money from freight, and nobody cares that much about freight arriving immediately. As such, high-speed railways are not a good idea.
I know you can't leave this to free enterprise per se. If someone doesn't want to sell their house, you can't exactly steer a railroad around it. However, if eminent domain is used, then if it's worth building, the market will build it. Let the government offer eminent domain use for railroads, and let them be built if they're truly needed.
Comment author: kalium · 20 August 2013 05:19:41PM · 1 point
Much of Amtrak uses tracks owned by freight companies, and this is responsible for a good chunk of Amtrak's poor performance. However, high-speed rail on non-freight-owned tracks works pretty well in the rest of the world; it just needs its own right-of-way (in some cases running freight at night when the high-speed trains aren't running, but with passenger trains still having priority over freight traffic).
Comment author: kalium · 20 August 2013 11:34:34PM · 1 point
That's not at all the same question as "Are high-speed trains a good idea?"
Any decent HSR would generate quite a lot of value not captured by fares. It would be more informative to compare the economic development of regions that have built high-speed rail against that of similar regions which haven't or which did so later.
France's TGV is profitable. Do you think that because it might not have been built without government funding it was a bad idea to build?
Comment author: DanielLC · 21 August 2013 03:36:40AM · 1 point
> It would be more informative to compare the economic development of regions that have built high-speed rail against that of similar regions which haven't or which did so later.

If the HSR charges based on marginal cost, and marginal and average cost are significantly different, then this could be a problem. I intuitively assumed they'd be fairly close. Thinking about it more, I've heard that airports charge vastly more for people who are flying for business than for pleasure, which suggests there is a significant difference. Of course, it also suggests that they might be able to capture it through price discrimination, since the airports seem to manage.
How much government help is necessary for a train to be built?
> It would be more informative to compare the economic development of regions that have built high-speed rail against that of similar regions which haven't or which did so later.
The economics of a train is not comparable to the economics of a city. If you can actually notice the difference in economic development caused by the train, then the train is so insanely valuable that it would be blindingly obvious from looking at how often they're built by the private sector.
> France's TGV is profitable. Do you think that because it might not have been built without government funding it was a bad idea to build?
Making a profit is not a sufficient condition for it to be worthwhile to build. It has to make enough profit to make up for the capital cost. It might well do that, and it is possible to check, but it's a lot easier to ask whether one has been built without government funding.
If it is worthwhile to build trains in general, and the government doesn't always fund them, then someone will build one without government funding.
Comment author: kalium · 21 August 2013 04:43:06AM · 0 points
> If the HSR charges based on marginal cost, and marginal and average cost are significantly different, then this could be a problem. I intuitively assumed they'd be fairly close. Thinking about it more, I've heard that airports charge vastly more for people who are flying for business than for pleasure, which suggests there is a significant difference.
Marginal and average cost are obviously different, but your example of business fliers is not relevant. Business fliers aren't paying for their flights, but do often get to choose which airline they take. If there is one population that pays for their own flights and another population that does not even consider cost, it would be silly not to discriminate whatever the relation between marginal and average cost.
Comment author: DanielLC · 21 August 2013 05:13:11AM · 0 points
The businesses are perfectly capable of choosing not to pay for their employees' flights. The fact that they do, and that they don't consider the costs, shows that their willingness to pay is much higher than the marginal cost. Without price discrimination, consumer surplus would be high, and a large amount of the value produced by the airlines would go to the consumers.
Are high-speed trains natural monopolies? That is, are the capital costs (e.g. rail lines) much higher than the marginal costs (e.g. train cars)? I think they are, and if they are considering the consumer surplus is important, but if they're not, then it doesn't matter.
Comment author: kalium · 21 August 2013 05:20:06AM · 0 points
> The fact that they do, and that they don't consider the costs, shows that their willingness to pay is much higher than the marginal cost.
What marginal cost are you referring to here? If it's the cost to the airline of one butt-in-seat, we know it's less than one fare because the airline is willing to sell that ticket. And this has nothing to do with average cost. I think you've lost the thread a bit.
Comment author: DanielLC · 21 August 2013 10:59:55PM · 0 points
What I mean is that if everyone paid what people who travel for pleasure pay, then people travelling for business would pay much less than they're willing to, so the amount airports could capture would be a lot less than the value they produce. If they charged everyone the same, either it would be so expensive that people would only travel for business, even though it's worthwhile for people to travel for pleasure, or it would be cheap enough that people travelling for business would fly for a fraction of what they're willing to pay. Either way, airports that are worth building would go unbuilt, since the airport wouldn't actually be able to make enough money to build it.
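A toy calculation, with entirely invented numbers, may make this argument concrete: with one uniform price, neither a high nor a low price recovers the fixed capital cost, while discriminating between the two groups does.

```python
# Toy numbers (invented for illustration): two buyer groups with very
# different willingness to pay (WTP), and a fixed capital cost to recover.

BUSINESS_WTP, BUSINESS_N = 1000.0, 100   # high willingness to pay, few buyers
LEISURE_WTP, LEISURE_N = 100.0, 1000     # price-sensitive, many buyers
FIXED_COST = 150_000.0                   # capital cost of the airport

def revenue_single_price(price):
    """Revenue when everyone pays the same price; a group buys
    only if the price is at or below its willingness to pay."""
    buyers = 0
    if price <= BUSINESS_WTP:
        buyers += BUSINESS_N
    if price <= LEISURE_WTP:
        buyers += LEISURE_N
    return price * buyers

high = revenue_single_price(1000.0)  # only business flies: 100,000
low = revenue_single_price(100.0)    # everyone flies: 110,000

# Charging each group near its willingness to pay:
discriminating = BUSINESS_WTP * BUSINESS_N + LEISURE_WTP * LEISURE_N  # 200,000
```

Here neither uniform price covers the fixed cost, so the airport "worth building" (total willingness to pay exceeds cost) only gets built if price discrimination is possible.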
Comment author: kalium · 21 August 2013 04:50:57AM · 1 point
> If you can actually notice the difference in economic development caused by the train, then the train is so insanely valuable that it would be blindingly obvious from looking at how often they're built by the private sector.
I don't understand the reasoning by which you conclude that if an effect is measurable it must be so overwhelmingly huge that you wouldn't have to measure it.
On a much smaller scale, property values rise substantially in the neighborhood of light rail stations, but this value is not easily captured by whoever builds the rails. Despite the measurability of this created value, we do not find that "[light rail] is so insanely valuable that it would be blindingly obvious from looking at how often they're built by the private sector."
Comment author: DanielLC · 21 August 2013 05:06:03AM · 1 point
If the effect is measurable on an accurate but imprecise scale (such as the effect of a train on the economy), then it will be overwhelming on an inaccurate but precise scale (such as ticket sales).
You are suggesting we measure the utility of a single business by its effect on the entire economy. Unless my guesses of the relative sizes are way off, the cost of a train is tiny compared to the normal variation of the economy. In order for the effect to be noticeable, the train would have to pay for itself many, many times over. Ticket sales, and by extension the free market, might not be entirely accurate in judging the value of a train. But it's not so inaccurate that an effect of that magnitude will go unnoticed.
Am I missing something? Are trains really valuable enough that they'd be noticed on the scale of cities?
Comment author: DanielLC · 21 August 2013 11:02:00PM · 1 point
Faster, more convenient transportation is what the fares pay for; non-captured value is more complicated than that.
If the non-captured value is 20% of the captured value, it's highly unlikely that trains will frequently be worth building yet rarely capture enough value. That would require that the true value fall within a very narrow range.
If it's not a monopoly good, and marginal costs are close to average costs, then captured value will only go down as people build more trains, so value not being captured doesn't prevent trains from being built. If it is a monopoly good (I think it is, but I would appreciate it if someone who actually knows tells me), and marginal costs are much lower than average costs, then a significant portion of the value will not be captured - much more than 20%. It's not entirely unreasonable that the true value is such that trains are rarely built when they should often be built.
That's part of why I asked:
> How much government help is necessary for a train to be built?
If the government is subsidizing it by, say, 20%, then the trains are likely worthwhile. If the government practically has to pay for the infrastructure to get people to operate trains, not so much.
Also, that comment isn't really applicable to the comment you posted it in response to; it would fit better as a response to my last comment. The comment you responded to was just saying that unless the value of trains is orders of magnitude more than the cost, you'd never notice it by looking at the economy.
Comment author: DanielLC · 21 August 2013 03:23:34AM · 2 points
Some roads do collect tolls. Again, I don't know how to look it up, but I don't think they have government help. They're in the minority, but they show that having roads is socially optimal. Similarly, if there are high-speed trains that operate without government help, we know that it's good to have high-speed trains, and while it may be that government encouragement is resulting in too many of them being built, we should still build some.
Comment author: knb · 21 August 2013 09:40:12AM · 2 points
Many of the private passenger rail companies were losing money before they were nationalized, but that was under heavy regulation and price controls. The freight rail companies were losing money before they were deregulated as well. These days they are quite profitable.
A lot of the old right-of-way has been lost so they would certainly need government help to overcome the tragedy-of-the-anticommons problem.
Comment author: DanielLC · 21 August 2013 10:55:28PM · 0 points
> A lot of the old right-of-way has been lost so they would certainly need government help to overcome the tragedy-of-the-anticommons problem.
You mean the problem that someone isn't going to be willing to sell their property? Eminent domain is certainly necessary. I'm just wondering if it's sufficient.
Comment author: knb · 21 August 2013 09:36:05AM · 0 points
I'm not sure what your point is here. Passenger rail and freight rail are usually decoupled. Amtrak operates on freight rail in most places because the government orders the rail companies to give preference to passenger rail (at substantial cost to the private freight railways).
Hyperloop would help out a lot, since it takes the burden off of freight rail. I suppose hyperloop could be privately operated (that would be my preference, so long as there was commonsense regulation against monopolistic pricing).
Comment author:DanielLC
21 August 2013 11:04:44PM
2 points
[-]
so long as there was commonsense regulation against monopolistic pricing
If competitors can simply build more hyperloops, monopolistic pricing won't be a problem. If you only need one hyperloop, then even monopolistic pricing is insufficient: the builder will still make less money than the value they produce. Getting rid of monopolistic pricing runs the risk of keeping anyone from building the hyperloops at all.
Comment author:luminosity
20 August 2013 11:24:09AM
10 points
[-]
Don't forget Australia. We have a few, large cities separated by long distances. In particular, Melbourne to Sydney is one of the highest traffic air routes in the world, roughly the same distance as the proposed Hyperloop, and there has been on and off talk of high speed rail links. Additionally, Sydney airport has a curfew, and is more or less operating at capacity. Offloading Melbourne-bound passengers to a cheaper, faster option would free up more flights for other destinations.
Olle Häggström, Professor of Mathematical Statistics at Chalmers University of Technology, reads some of Marcus Hutter's work, comes away unimpressed, and asks for recommendations.
One concept that is sometimes claimed to be of central importance in contemporary AGI research is the so-called AIXI formalism. [...] In the presentation, Hutter advises us to consult his book Universal Artificial Intelligence. Before embarking on that, however, I decided to try one of the two papers that he also directs us to in the presentation, namely his A philosophical treatise of universal induction, coauthored with Samuel Rathmanner and published in the journal Entropy in 2011. After reading the paper, I have moved the reading of Hutter's book far down my list of priorities, because generalizing from the paper leads me to suspect that the book is not so good.
I find the paper bad. There is nothing wrong with the ambition - to sketch various approaches to induction from Epicurus and onwards, and to try to argue how it all culminates in the concept of Solomonoff induction. There is much to agree with in the paper, such as the untenability of relying on uniform priors and the limited interest of the so-called No Free Lunch Theorems (points I've actually made myself in a different setting). The authors' emphasis on the difficulty of defending induction without resorting to circularity (see the well-known anti-induction joke for a drastic illustration) is laudable. And it's a nice perspective to view Solomonoff's prior as a kind of compromise between Epicurus and Ockham, but does this particular point need to be made in quite so many words? Judging from the style of the paper, the word "philosophical" in the title seems to mean something like "characterized by lack of rigor and general verbosity".4 Here are some examples of my more specific complaints [...]
I still consider it plausible to think that Kolmogorov complexity and Solomonoff induction are relavant to AGI7 (as well as to statistical inference and the theory of science), but the experience of reading Uncertainty & Induction in AGI and A philosophical treatise of universal induction strongly suggests that Hutter's writings are not the place for me to go in order to learn more about this. But where, then? Can the readers of this blog offer any advice?
Comment author:Manfred
20 August 2013 09:26:50AM
*
6 points
[-]
Open link, control+f "relavant to AGI". Get directed to "relavant to AGI<sup>7</sup>".
Footnote 7 is "7) I am not a computer scientist, so the following should perhaps be taken with a grain of salt. While I do think that computability and concepts derived from it such as Kolmogorov complexity may be relevant to AGI, I have the feeling that the somewhat more down-to-earth issue of computability in polynomial time is even more likely to be of crucial importance."
In terms of books, The Strategy of Conflict is the classic popular work, and it's good, but it's very much a product of its time. I imagine there are more accessible books out there. Yvain recommends The Art of Strategy, which I haven't read.
What are your motives for learning about it? If it's to gain a bare-bones understanding sufficient for following discussion in Less Wrong, existing Less Wrong articles would probably equip you well enough.
Comment author:mstevens
21 August 2013 10:20:00AM
1 point
[-]
It's a little bit intuition and might turn out to be daft, but
a) I've read just enough about game theory in the past to know what the prisoner's dilemma is
b) I was reading an argument/discussion on another blog about the "men chatting up women, who may or may not be interested" scenario, and various discussions on IRC with MixedNuts have given me the feeling that male/female interactions (which are obviously an area of central interest to feminism) are a similar class of thing, and possibly game theory will help me understand said feminism and/or opposition to it.
A word of warning: you will probably draw all sorts of wacky conclusions about human interaction when first dabbling with game theory. There is huge potential for hatching beliefs that you may later regret expressing, especially on politically-charged subjects.
Comment author:JQuinton
23 August 2013 05:14:48PM
3 points
[-]
I also had the same intuition about male/female dynamics and the prisoner's dilemma. It also seems like a lot of men's behavior towards women is a result of a scarcity mentality. Surely there are some economic models that explain how people behave -- especially their bad behavior -- when they feel some product is scarce, and if these models were applied to male/female dynamics it might predict some behavior.
But since feminism is such a mind-killing topic, I wouldn't feel too comfortable expressing alternative explanations (especially among non-rationalists) since people tend to feel that if you disagree with the explanation then you disagree with the normative goals.
Comment author:satt
24 August 2013 03:36:00PM
1 point
[-]
It also seems like a lot of men's behavior towards women is a result of a scarcity mentality. Surely there are some economic models that explain how people behave -- especially their bad behavior -- when they feel some product is scarce, and if these models were applied to male/female dynamics it might predict some behavior.
One model which I've seen come up repeatedly in the humanities is the "marriage market". Unsurprisingly, economists seem to use this idea most often in the literature, but peeking through the Google Scholar hits I see demographers, sociologists, and historians too. (At least one political philosopher uses the idea too.)
I don't know how predictive these models are. I haven't done a systematic review or anything remotely close to one, but when I've seen the marriage market metaphor used it's usually to explain an observation after the fact. Here is a specific example I spotted in Randall Collins's book "Violence: A Micro-sociological Theory". On pages 149 & 150 Collins offers this gloss on an escalating case of domestic violence:
It appears that the husband's occupational status is rising relative to his wife's; in this social class, their socializing is likely to be with the man's professional associates (Kanter 1977), and thus it is when she is in the presence of his professional peers that he belittles her, and it is in regard to what he perceives as her faulty self-presentation in these situations that he begins to engage in tirades at home. He is becoming relatively stronger socially, and she is coming to accept that relationship. Then he escalates his power advantage, as the momentum of verbal tirades flows into physical violence.
A sociological interpretation of the overall pattern is that within the first two years of their marriage, the man has discovered that he is in an improving position on the interactional market relative to his wife; since he apparently does not want to leave his wife, or seek additional partners, he uses his implicit market power to demand greater subservience from his wife in their own personal and sexual relationships. Blau's (1964) principle applies here: the person with a weaker exchange position can compensate by subservience. [...] In effect, they are trying out how their bargaining resources will be turned into ongoing roles: he is learning techniques of building his emotional momentum as dominator, she is learning to be a victim.
(Digression: Collins calls this a sociological interpretation, but I usually associate this kind of bargaining power-based explanation with microeconomics or game theory, not sociology. Perhaps I should expand my idea of what constitutes sociology. After all, Collins is a sociologist, and he has partly melded the bargaining power-based explanation with his own micro-sociological theory of violence.)
(If you want a specific link, here is Yvain's introduction to game theory sequence. There are some problems and inaccuracies with it which are generally discussed in comments, but as a quick overview aimed at a LW audience it should serve pretty well.)
Comment author:Manfred
20 August 2013 05:02:37PM
*
3 points
[-]
I actually found The Selfish Gene a pretty good book for developing game theory intuitions. I'd put it as #2 on my list after "the first 2/3 of The Strategy of Conflict".
Comment author:[deleted]
20 August 2013 04:00:09PM
2 points
[-]
Here's a question that's been distracting me for the last few hours, and I want to get it out of my head so I can think about something else.
You're walking down an alley after making a bank withdrawal of a small sum of money. Just about when you realize this may have been a mistake, two Muggers appear from either side of the alley, blocking trivial escapes.
Mugger A: "Hi there. Give me all of that money or I will inflict 3^^^3 disutility on your utility function."
Mugger B: "Hi there. Give me all of that money or I will inflict maximum disutility on your utility function."
You: "You're working together?"
Mugger A: "No, you're just really unlucky."
Mugger B: "Yeah, I don't know this guy."
You: "But I can't give both of you all of this money!"
Mugger A: "Tell you what. You're having a horrible day, so if you give me half your money, I'll give you a 50% chance of avoiding my 3^^^3 disutility. And if you give me a quarter of your money, I'll give you a 25% chance of avoiding my 3^^^3 disutility. Maybe the other Mugger will let you have the same kind of break. Sound good to you, other Mugger?"
Mugger B: "Works for me. Start paying."
You: Do what, exactly?
I can see at least 4 vaguely plausible answers:
Pay Mugger A: 3^^^3 disutility is likely going to be more than whatever you think your maximum is, and you want to be as likely as possible to avoid that. You'll just have to try to resist/escape from Mugger B (unless he's just faking).
Pay Mugger B: Maximum disutility is, by its definition, greater than or equal to any other disutility (so at least as bad as 3^^^3), and has probably happened to at least a few people with utility functions (although probably NOT to a 3^^^3 extent), so it's a serious threat and you want to be as likely as possible to avoid it. You'll just have to try to resist/escape from Mugger A (unless he's just faking).
Pay both Muggers a split of the money: For example: If you pay half to each, and they're both telling the truth, you have a 25% chance of not getting either disutility and not having to resist/escape at all (unless one or both is faking, which may improve your odds.)
Don't Pay: Getting away with this seems generally less likely than in a normal Pascal's mugging, since there are no clear escape routes and you're outnumbered, so there is at least some real threat unless they're both faking.
The problem is, I can't seem to justify any of my vaguely plausible answers to this conundrum well enough to stop thinking about it. Which makes me wonder if the question is ill-formed in some way.
Comment author:Khoth
20 August 2013 04:25:10PM
*
3 points
[-]
If they're both telling the truth: since B gives maximum disutility, being mugged by both is no worse than being mugged by B. If you think your maximum disutility is X*3^^^3, I think if you run the numbers you should give a fraction X/2 to B, and the rest to A. (or all to B if X>2)
If they might be lying, you should probably ignore them. Or pay B, whose threat is more credible if you don't think your utility function goes as far as 3^^^3 (although, what scale? Maybe a dust speck is 3^^^^3)
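Khoth's X/2 split can be checked numerically. A minimal sketch, under assumptions the thread seems to intend but never states: paying a mugger a fraction f of your money buys probability f of avoiding that mugger's threat, the two threats land independently, and total disutility is capped at the maximum M = X * 3^^^3.

```python
def expected_disutility(b, X):
    """Pay fraction b to mugger B, the rest (a = 1 - b) to mugger A.
    Units: the 3^^^3 disutility is 1; B's maximum disutility is X >= 1.
    Hit by B with prob (1 - b): disutility is the cap, X.
    Spared by B (prob b) but hit by A (prob 1 - a = b): disutility 1."""
    a = 1.0 - b
    return (1.0 - b) * X + b * (1.0 - a) * 1.0

def best_split(X, steps=10000):
    """Grid-search the fraction paid to B that minimizes expected loss."""
    return min((i / steps for i in range(steps + 1)),
               key=lambda b: expected_disutility(b, X))
```

For X = 1.5 the minimum sits at b = X/2 = 0.75, and for X > 2 it pins to b = 1 (everything to B), matching the comment above.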
If you have some concept of "3^^^3 disutility" as a tractable measure of units of disutility, it seems unlikely you don't also have a reasonable idea of the upper and lower bounds of your utility function. If the values are known this becomes trivial to solve.
I am becoming increasingly convinced that VNM-utility is a poor tool for ad-hoc decision-theoretics, not because of dubious assumptions or inapplicability, but because finding corner-cases where it appears to break down is somehow ridiculously appealing.
Comment author:Emile
20 August 2013 05:21:31PM
*
5 points
[-]
I may be fighting the hypothetical here, but ...
If utility is unbounded, maximum disutility is undefined, and if it's bounded, then 3^^^3 is by definition smaller than the maximum so you should pay all to mugger B.
Pay both Muggers a split of the money: For example: If you pay half to each, and they're both telling the truth, you have a 25% chance of not getting either disutility and not having to resist/escape at all (unless one or both is faking, which may improve your odds.)
I think trading a 10% chance of utility A for a 10% chance of utility B, with B < A, is irrational per the definition of utility (as far as I understand; you can have diminishing marginal utility of money, but not diminishing marginal utility of utility. I'm less sure about risk aversion though.)
That's not fighting the hypothetical. Fighting the hypothetical is first paying one, then telling the other you'll go back to the bank to pay him too. Or pulling out your kung fu skills, which is really fighting the hypothetical.
Comment author:Omid
20 August 2013 04:02:11PM
*
15 points
[-]
This article, written by Dreeves's wife, has displaced Yvain's polyamory essay as the most interesting relationships article I've read this year. The basic idea is that instead of trying to split chores or common goods equally, you use auctions. For example, if the bathroom needs to be cleaned, each partner says how much they'd be willing to clean it for. The person with the higher bid pays what the other person bid, and that person does the cleaning.
It's easy to see why commenters accused them of being libertarian. But I think egalitarians should examine this system too. Most couples agree that chores and common goods should be split equally. But what does "equally" mean? It's hard to quantify exactly how much each person contributes to a relationship. This allows the more powerful person to exaggerate their contributions and pressure the weaker person into doing more than their fair share. But auctions safeguard against this abuse by requiring participants to quantify how much they value each task.
For example, feminists argue that women do more domestic chores than men, and that these chores go unnoticed by men. Men do a little bit, but because men don't see all the work women do, they end up thinking that they're doing their share when they aren't. Auctions safeguard against this abuse. Instead of the wife just cleaning the bathroom, she and her husband bid on how much they'd be willing to clean the bathroom for. The lower bid is considered the fair market price of cleaning the bathroom. Then she and her husband engage in a joint-purchase auction to decide if the bathroom will be cleaned at all. Either the bathroom gets cleaned and the cleaner gets fairly compensated, or the bathroom doesn't get cleaned because the total utility of cleaning the bathroom is less than the disutility of cleaning the bathroom.
And that's it. No arguing about who cleaned it last. No debating whether it really needs to be cleaned. No room for misogynist cultural machines to pressure the wife into doing more than her fair share. Just a market transaction that is efficient and fair.
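For concreteness, here is one reading of the mechanism as a sketch (the names, numbers, and exact payment rule are my interpretation of the post, not something it spells out): each partner bids the least they'd accept to do the chore, the lower bidder does it, and the other partner pays them that bid.

```python
def chore_auction(bids):
    """Reverse auction for a single chore between two people.
    `bids` maps each person to the least they'd accept to do it.
    The lower bidder does the chore; the higher bidder pays them
    the winning (lower) bid. Ties are ignored for brevity."""
    cleaner = min(bids, key=bids.get)
    payer = max(bids, key=bids.get)
    return {"cleaner": cleaner, "payer": payer, "price": bids[cleaner]}

# Hypothetical numbers: she'd clean the bathroom for $15, he'd want $25.
# She cleans, and he pays her $15.
result = chore_auction({"her": 15, "him": 25})
```

If both bids exceed what an outside cleaner would charge, that's a signal to outsource the chore instead, a point Vaniver makes further down the thread.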
Comment author:Manfred
20 August 2013 05:16:54PM
*
10 points
[-]
Wasn't it Ariely's Predictably Irrational that went over market norms vs. tribe norms? If you just had ordinary people start doing this, I would guess it would crash and burn for the obvious market-norm reasons (the urge to game the system, basically). And some ew-squick power disparity stuff if this is ever enforced by a third party or even social pressure.
Comment author:maia
20 August 2013 06:16:35PM
2 points
[-]
Empirically speaking, this system has worked in our house (of 7 people, for about 6 months so far). What kind of gaming the system were you thinking of?
We do use social pressure: there is social pressure to do your contracted chores, and keep your chore point balance positive. This hasn't really created power disparities per se.
What kind of gaming the system were you thinking of?
If the idea is to say exactly how much you are willing to pay, there would be an incentive to:
1) Broadcast that you find all labor extra unpleasant and all goods extra valuable, to encourage people to bid high
2) Bid artificially lower values when you know someone enjoys a labor / doesn't mind parting with a good and will bid accordingly.
In short, optimal play would involve deception, and it happens to be a deception of the sort that might not be difficult to commit subconsciously. You might deceive yourself into thinking you find a chore unpleasant - I have read experimental evidence to support the notion that intrinsically rewarding tasks lose some of their appeal when paired with extrinsic rewards.
No comment on whether the traditional way is any better or worse - I think these two testimonials are sufficient evidence for this to be worth people who have a willing human tribe handy to try it, despite the theoretical issues. After all,
we trust each other not to be cheats and jerks. That’s true love, baby
Edit: There is another, more pleasant problem: If you and I are engaged in trade, and I actually care about your utility function, that's going to affect the price. The whole point of this system is to communicate utility evenly after subtracting for the fact that you care about each other (otherwise why bother with a system?)
Concrete example: We are trying to transfer ownership of a computer monitor, and I'm willing to give it to you for free because I care about you. But if I were to take that into account, then we are essentially back to the traditional method. I'd have to attempt to conjure up the value at which I'd sell the monitor to someone I was neutral towards.
Of course, you could just use this as an argument stopper - whenever there is real disagreement, you use money to effect an easy compromise. But then there is monetary pressure to be argumentative and difficult, and social pressure not to be - it would be socially awkward and monetarily advantageous if you were constantly the one who had a problem with unmet needs.
Comment author:maia
21 August 2013 02:51:59AM
2 points
[-]
1) Broadcast that you find all labor extra unpleasant and all goods extra valuable, to encourage people to bid high
But if other people bid high, then you have to pay more. And they will know if you bid lower, because the auctions are public. How does this help you?
2) Bid artificially lower values when you know someone enjoys a labor / doesn't mind parting with a good and will bid accordingly.
I don't understand how this helps you either; if you bid lower and therefore win the auction, then you have to do the chore for less than you value it at. That's no fun.
The way our system works, it actually gives the lowest bidder, not their actual bid, but the second lowest bid minus 1; that way you don't have to do bidding wars, and can more or less just bid what you value it at. It does create the issue that you mention - bid sniping, if you know what the lowest bidder will bid you can bid just above it so they get as little as possible - but this is at the risk of having to actually do the chore for that little, because bids are binding.
I'd very much like to understand the issues you bring up, because if they are real problems, we might be able to take some stabs at solving them.
whenever there is real disagreement, you use money to effect an easy compromise.
This has become somewhat of a norm in our house. We can pass around chore points in exchange for rides to places and so forth; it's useful, because you can ask for favors without using up your social capital. (Just your chore points capital, which is easier to gain more of and more transparent.)
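The house rule maia describes (lowest bidder wins, but is paid the second-lowest bid minus 1) is essentially a reverse second-price auction, which is why honest bidding is roughly safe: your own bid decides whether you win, not what you are paid. A sketch, with the housemates' names and point values invented for illustration:

```python
def award_chore(bids):
    """Lowest bidder does the chore, but is paid the runner-up's
    bid minus 1 point, so underbidding can only cost you, not
    (directly) change your payment."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    winner = ranked[0][0]
    payment = ranked[1][1] - 1
    return winner, payment

# Hypothetical housemates: alice does the chore and receives
# 13 points (bob's bid of 14, minus 1).
winner, payment = award_chore({"alice": 10, "bob": 14, "carol": 20})
```

As the comments note, the remaining weakness is bid sniping: if alice's usual bid is public knowledge, bob can bid 11 and squeeze her payment down to 9, at the risk of winning the chore himself.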
if you bid lower and therefore win the auction, then you have to do the chore for less than you value it at. That's no fun.
You only do this when you plan to be the buyer. The idea is to win the auction and become the buyer, while putting up as little money as possible. If you know that the other guy will do it for $5, you bid $6, even if you actually value it at $10. As you said, I'm talking about bid sniping.
But if other people bid high, then you have to pay more.
Ah, I should have written "broadcast that you find all labor extra unpleasant and all goods extra valuable when you are the seller (giving up a good or doing a labour) so that people pay you more to do it."
If you're willing to do a chore for -$10, but you broadcast that you find it more than -$10 of unpleasantness, the other party will be influenced to bid higher - say, $40. Then, you can bid $30, and get paid more. It's just price inflation - in a traditional transaction, a seller wants the buyer to pay as much as they are willing to pay. To do this, the seller must artificially inflate the buyer's perception of how much the item is worth to the seller. The same holds true here.
When you intend to be the buyer you do the opposite - broadcast that you're willing to do the labor for cheap to lower prices, then bid snipe. As in a traditional transaction, the buyer wants the seller to believe that the item is not of much worth to the buyer. The buyer also has to try to guess the minimum amount that the seller will part with the item.
it actually gives the lowest bidder, not their actual bid, but the second lowest bid minus 1
So what I wrote above was assuming the price was a midpoint between the buyer's and seller's bid, which gives them both equal power to set the price. This rule slightly alters things, by putting all the price setting power in the buyer's hands.
Under this rule, after all the deceptive price inflation is said and done you should still bid an honest $10 if you are only playing once - though since this is an iterated case, you probably want to bid higher just to keep up appearances if you are trying to be deceptive.
One of the nice things about this rule is that there is no incentive to be deceptive unless other people are bid sniping. The weakness of this rule is that it creates a stronger incentive to bid snipe.
Price inflation (seller's strategy) and bid sniping (buyer's strategy) are the two basic forms of deception in this game. Your rule empowers the buyer to set the price, thereby making price inflation harder at the cost of making bid sniping easier. I don't think there is a way around this - it seems to be a general property of trading. Finding a way around it would probably solve some larger scale economic problems.
Comment author:rocurley
21 August 2013 07:36:18PM
2 points
[-]
(I'm one of the other users/devs of Choron)
There are two ways I know of that the market can try to defeat bid sniping, and one way a bidder can (that I know of).
Our system does not display the lowest bid, only the second lowest bid. For a one-shot auction where you had poor information about the others' preferences, this would solve bid sniping. However, in our case, chores come up multiple times, and I'm pretty sure that it's public knowledge how much I bid on shopping, for example.
If you're in a situation where the lowest bid is hidden, but your bidding is predictable, you can sometimes bid higher than you normally would. This punishes people who bid less than they're willing to actually do the chore for, but imposes costs on you and the market as a whole as well, in the form of higher prices for the chore.
A third option, which we do not implement (credit to Richard for this idea), is to randomly award the auction to one of the two (or n) lowest bidders, with probability inversely related to their bid. In particular, if you pick between the lowest 2 bidders, both have claimed to be willing to do the job for the 2nd bidder's price (so the price isn't higher and no one can claim they were forced to do something for less than they wanted). This punishes bid-snipers by taking them at their word that they're willing to do the chore for the reduced price, at the cost of determinism (which allows better planning).
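Richard's randomized variant could look something like this. The inverse-bid weighting is my guess; the comment only says "probability inversely related to their bid", and bids of zero would need special handling.

```python
import random

def randomized_award(bids, rng=random):
    """Pick between the two lowest bidders, lower bid favored, and pay
    the winner the second-lowest bid. Both of them claimed willingness
    to work at that price, so a bid-sniper risks actually having to do
    the chore at their quoted number. Assumes all bids are positive."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    (name1, bid1), (name2, bid2) = ranked[0], ranked[1]
    # Assumed weighting: P(lowest bidder wins) rises as their bid falls.
    p1 = (1.0 / bid1) / (1.0 / bid1 + 1.0 / bid2)
    winner = name1 if rng.random() < p1 else name2
    return winner, bid2
```

With bids of 9 and 10, the undercutter wins only about 53% of the time, and either way the job pays 10, which blunts the payoff from sniping an honest bid.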
Plus, I think it doesn't work when there are only two players? If I honestly bid $30, and you bid $40 and randomly get awarded the auction, then I have to pay you $40. And that leaves me at -$10 disutility, since the task was only -$30 to me.
Comment author:rocurley
23 August 2013 03:44:30AM
0 points
[-]
To be sure I'm following you: If the 2nd bidder gets it (for the same price as the first bidder), the market efficiency is lost because the 2nd person is indifferent between winning and not, while the first would have liked to win it? If so, I think that's right.
If there are two players... I agree the first bidder is worse off than they would be if they had won. This seems like a special case of the above though: why is it more broken with 2 players?
Comment author:Manfred
20 August 2013 08:54:50PM
2 points
[-]
What kind of gaming the system were you thinking of?
Yeah, bidding = deception. But in addition to someonewrong's answer, I was thinking you could just end up doing a shitty job at things (e.g. cleaning the bathroom). Which is to say, if this were an actual labor market, and not a method of communicating between people who like each other and have outside-the-market reasons to cooperate, the market doesn't have much competition.
Comment author:maia
21 August 2013 02:42:28AM
1 point
[-]
Yeah, that's unfortunately not something we can really handle other than decreeing "Doing this chore entails doing X and it doesn't count if you don't do X." Enforcing the system isn't solved by the system itself.
a method of communicating between people who like each other and have outside-the-market reasons to cooperate
I can see this working better than a dysfunctional household, but if you're both in the habit of just doing things, this is going to make everything worse.
Comment author:maia
20 August 2013 06:14:14PM
3 points
[-]
Roger and I wrote a web app for exactly this purpose - dividing chores via auction. This has worked well for chore management for a house of 7 roommates, for about 6 months so far.
The feminism angle didn't even occur to us! It's just been really useful for dividing chores optimally.
Comment author:shminux
20 August 2013 06:26:36PM
1 point
[-]
I can see it working when all parties are trustworthy and committed to fairness, which is a high threshold to begin with. Also, everyone has to buy into the idea of other people being autonomous agents, with no shoulds attached. Still, this might run into trouble when one party badly wants something flatly unacceptable to the other, and so is unable to afford it and ends up feeling resentful.
One (unrelated) interesting quote:
my womb is worth about the cost of one graduate-level course at Columbia, assuming I’m interested in bearing your kid to begin with.
Comment author:knb
20 August 2013 10:39:06PM
7 points
[-]
Wow someone else thought of doing this too!
My roommate and I started doing this a year ago. It went pretty well for the first few months. Then our neighbor heard about how much we were paying each other for chores and started outbidding us.
Comment author:Vaniver
22 August 2013 11:54:58PM
*
7 points
[-]
Then our neighbor heard about how much we were paying each other for chores and started outbidding us.
This is one of the features of this policy, actually: you can use this as a natural measure of what tasks you should outsource. If a maid would cost $20 to clean the apartment, and you and your roommates all want at least $50 to do it, then the efficient thing to do is to hire a maid.
Comment author:kalium
21 August 2013 05:46:58AM
11 points
[-]
This sounds interesting for cases where both parties are economically secure.
However I can't see it working in my case since my housemates each earn somewhere around ten times what I do. Under this system, my bids would always be lowest and I would do all the chores without exception. While I would feel unable to turn down this chance to earn money, my status would drop from that of an equal to that of a servant. I would find this unacceptable.
Comment author:Fronken
24 August 2013 05:50:37PM
1 point
[-]
Could one not change the bidding to use "chore points" or somesuch? I mean, the system described is designed for spouses, but there's no reason it couldn't be adapted for you and your housemates.
Most couples agree that chores and common goods should be split equally.
I'm skeptical that most couples agree with this.
Anyway, all of these types of 'chore division' systems that I've seen so far totally disregard human psychology. Remember that the goal isn't to have a fair chore system. The goal is to have a system that preserves a happy and stable relationship. If the resulting system winds up not being 'fair', that's ok.
Comment author:Multiheaded
22 August 2013 05:33:39PM
*
3 points
[-]
And that's it. No arguing about who cleaned it last. No debating whether it really needs to be cleaned. No room for misogynist cultural machines to pressure the wife into doing more than her fair share. Just a market transaction that is efficient and fair.
Comment author:Omid
23 August 2013 12:59:22AM
*
7 points
[-]
The polyamory and BDSM subcultures prove that nerds can create new social rules that improve sex. Of course, you can't just theorize about what the best social rules would be and then declare that you've "solved the problem." But when you see people living happier lives as a result of changing their social rules, there's nothing wrong with inviting other people to take a look.
I don't understand your postscript. I didn't say there is no inequality in chore division because if there were a chore market would have removed it. I said a chore market would have more equality than the standard each-person-does-what-they-think-is-fair system. Your response seems like fully generalized counterargument: anyone who proposes a way to reduce inequality can be accused of denying that the inequality exists.
One datapoint: I know of one household (two adults, one child) which worked out chores by having people list which chores they liked, which they tolerated, and which they hated. It turned out that there was enough intrinsic motivation to make taking care of the house work.
Comment author:knb
20 August 2013 10:11:02PM
*
4 points
[-]
He was trying to pass a law to suppress religious freedoms of small sects. That doesn't raise the sanity waterline, it just increases tensions and hatred between groups.
That's a ludicrously forgiving reading of what the bill (which looks like going through) is about. Steelmanning is an exercise in clarifying one's own thoughts, not in justifying fraud and witch-hunting.
Comment author:Salemicus
20 August 2013 09:29:43PM
3 points
[-]
I've got an (IMHO) interesting discussion article written up, but I am unable to post it; I get a "webpage cannot be found" error when I try. I'm using IE 9. Is this a known issue, or have I done something wrong?
Comment author:metastable
21 August 2013 12:18:21AM
1 point
[-]
Do consequentialists generally hold as axiomatic that there must be a morally preferable choice (or conceivably multiple equally preferable choices) in a given situation? If so, could somebody point me to a deeper discussion of this axiom (it probably has a name, which I don't know).
Comment author:somervta
21 August 2013 01:34:11AM
2 points
[-]
Not explicitly as an axiom AFAIK, but if you're valuing states-of-the-world, any choice you make will lead to some state, which means that unless your valuation is circular, the answer is yes.
Basically, as long as your valuation is VNM-rational, definitely yes. Utilitarians are a special case of this, and I think most consequentialists would adhere to that also.
Comment author:metastable
21 August 2013 03:17:32AM
0 points
[-]
Thanks! Do consequentialists kind of port the first axiom (completeness) from the VNM utility theorem, changing it from decision theory to meta-ethics?
And for others, to put my original question another way: before we start comparing utilons or utility functions, insofar as consequentialists begin with moral intuitions and reason their way to the existence of utility, is one of their starting intuitions that all moral questions have correct answers? Or am I just making this up? And has anybody written about this?
To put that in one popular context: in the Trolley Switch and Fat Man problem, it seems like most people start with the assumption that there exists a right answer (or preferable, or best, whatever your terminology), and that it could never be the case that an agent will do the wrong/immoral/unethical thing no matter what he or she chooses. Am I right that this assumption exists?
Comment author:somervta
21 August 2013 04:52:19AM
1 point
[-]
Thanks! Do consequentialists kind of port the first axiom (completeness) from the VNM utility theorem, changing it from decision theory to meta-ethics?
Not explicitly (except in the case of some utilitarians), but I don't think many would deny it. The boundaries between meta-ethics and normative ethics are vaguer than you'd think, but consequentialism is already sort of metaethical. The VNM theorem isn't explicitly discussed that often (many ethicists won't have heard of it), but the axioms are fairly intuitive anyway. However, although I don't know enough about weird forms of consequentialism to know if anyone's made a point of denying completeness, I wouldn't be that surprised if that position exists.
To put that in one popular context: in the Trolley Switch and Fat Man problem, it seems like most people start with the assumption that there exists a right answer (or preferable, or best, whatever your terminology), and that it could never be the case that an agent will do the wrong/immoral/unethical thing no matter what he or she chooses. Am I right that this assumption exists?
Yes, I think it certainly exists. I'm not sure if it's universal or not, but I haven't read a great deal on the subject yet, so I'm not sure I would know.
Comment author:asr
21 August 2013 05:02:03AM
*
1 point
[-]
it could never be the case that an agent will do the wrong/immoral/unethical thing no matter what he or she chooses.
Most people do have this belief. I think it's a safe one, though. It follows from a substantive belief most people have, which is that agents are only morally responsible for things that are under their control.
In the context of a trolley problem, it's stipulated that the person is being confronted with a choice -- in the context of the problem, they have to choose. And so it would be blaming them for something beyond their control to say "no matter what you do, you are blameworthy."
One way to fight the hypothetical of the trolley problem is to say "people are rarely confronted with this sort of moral dilemma involuntarily, and it's evil to put yourself in a position of choosing between evils." I suppose for consistency, if you say this, you should avoid jury service, voting, or political office.
Comment author:asr
21 August 2013 05:08:50AM
*
3 points
[-]
What happens if my valuation is noncircular, but is incomplete? What if I only have a partial order over states of the world? Suppose I say "I prefer state X to Z, and don't express a preference between X and Y, or between Y and Z." I am not saying that X and Y are equivalent; I am merely refusing to judge.
My impression is that real human preference routinely looks like this; there are lots of cases people refuse to evaluate or don't evaluate consistently.
It seems like even with partial preferences, one can be consequentialist -- if you don't have clear preferences between outcomes, you have a choice that isn't morally relevant. Or is there a self-contradiction lurking?
Comment author:somervta
21 August 2013 02:56:20PM
0 points
[-]
You could have undefined value, but it's not particularly intuitive, and I don't think anyone actually advocates it as a component of a consequentialist theory.
Whether, in real life, people actually do it is a different story. I mean, it's quite likely that humans violate the VNM model of rationality, but that could just be because we're not rational.
Comment author:pengvado
21 August 2013 05:37:45PM
*
1 point
[-]
Suppose I say "I prefer state X to Z, and don't express a preference between X and Y, or between Y and Z." I am not saying that X and Y are equivalent; I am merely refusing to judge.
If the result of that partial preference is that you start with Z and then decline the sequence of trades Z->Y->X, then you got dutch booked.
Otoh, maybe you want to accept the sequence Z->Y->X if you expect both trades to be offered, but decline each in isolation? But then your decision procedure is dynamically inconsistent: Standing at Z and expecting both trade offers, you have to precommit to using a different algorithm to evaluate the Y->X trade than you will want to use once you have Y.
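The first horn of that dilemma is easy to make concrete. Here is a toy sketch (the states and preference relation are invented for illustration) of an agent whose only strict preference is X over Z, and who declines any trade it doesn't strictly prefer:

```python
# Toy model of the argument above: an agent with the incomplete preference
# {X > Z} (Y incomparable to both) that only accepts a trade when it
# strictly prefers the offered state to its current one.
strict_prefs = {("X", "Z")}  # (preferred, dispreferred) pairs

def accepts(current, offered):
    """Accept a trade only if the offered state is strictly preferred."""
    return (offered, current) in strict_prefs

state = "Z"
trades = [("Z", "Y"), ("Y", "X")]  # the sequence Z -> Y -> X
for give, get in trades:
    # each trade is only available if we currently hold what it asks for
    if state == give and accepts(state, get):
        state = get

# Evaluated in isolation, both trades are declined (Y is incomparable to
# Z, and X is incomparable to Y), so the agent is stranded at Z even
# though it strictly prefers the reachable state X.
print(state)  # prints Z
```

The agent never loses money outright, but it forgoes a state it strictly prefers, which is the sense in which the partial preference gets exploited.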
Comment author:asr
21 August 2013 07:46:18PM
*
0 points
[-]
I think I see the point about dynamic inconsistency. It might be that "I got to state Y from Z" will alter my decisionmaking about Y versus X.
I suppose it means that my decision of what to do in state Y no longer depends purely on consequences, but also on history, at which point they revoke my consequentialist party membership.
But why is that so terrible? It's a little weird, but I'm not sure it's actually inconsistent or violates any of my moral beliefs. I have all sorts of moral beliefs about ownership and rights that are history-dependent so it's not like history-dependence is a new strange thing.
Comment author:brazil84
21 August 2013 02:51:15PM
6 points
[-]
Sorry if this has been asked before, but can someone explain to me if there is any selfish reason to join Alcor while one is in good health? If I die suddenly, it will be too late to have joined, but even if I had joined it seems unlikely that they would get to me in time.
The only reason I can think of is to support Alcor.
Comment author:[deleted]
21 August 2013 03:43:47PM
*
-1 points
[-]
There is some background base rate of sudden, terminal, but not immediately fatal, injury or illness.
For example, I currently do not value life insurance highly, and therefore I value cryonics insurance even less.
Otherwise, there's only some marginal increase in the probability of Alcor surviving as an institution. Seeing as there's precedent for healthy cryonics orgs to adopt the patients of unhealthy cryonics orgs, this marginal increase should be viewed as a yet more marginal increase in the survival of cryonics locations in your locality.
(Assuming transportation costs are prohibitive enough to be treated as a rounding error.)
Comment author:Turgurth
22 August 2013 01:08:16AM
5 points
[-]
I don't think it's been asked before on Less Wrong, and it's an interesting question.
It depends on how much you value not dying. If you value it very strongly, the risk of sudden, terminal, but not immediately fatal injuries or illnesses, as mentioned by paper-machine, might be unacceptable to you, and would point toward joining Alcor sooner rather than later.
The marginal increase your support would add to the probability of Alcor surviving as an institution might also matter to you selfishly, since this would increase the probability that there will exist a stronger Alcor when you are older and will likely need it more than you do now.
Additionally, while it's true that it's unlikely that Alcor would reach you in time if you were to die suddenly, compare this risk to your chance of survival if instead you don't join Alcor soon enough and, after your hypothetical fatal car crash, end up rotting in the ground.
And hey, if you really want selfish reasons: signing up for cryonics is high-status in certain subcultures, including this one.
There are also altruistic reasons to join Alcor, but that's a separate issue.
Comment author:brazil84
22 August 2013 10:13:24PM
1 point
[-]
Thank you for your response; I suppose one would need to estimate the probability of dying in such a way that having previously joined Alcor would make a difference.
Perusing Ben Best's web site and using some common sense, it seems that the most likely causes of death for a reasonably healthy middle aged man are cancer, stroke, heart attack, accident, suicide, and homicide. We need to estimate the probability of sudden serious loss of faculties followed by death.
It seems that for cancer, that probability is extremely small. For stroke, heart attack, and accidents, one could look it up but just guesstimating a number based on general observations, I would guess roughly 10 to 15 percent. Suicide and homicide are special cases -- I imagine that in those cases I would be autopsied so there would be much less chance of cryopreservation even if I had already joined Alcor.
Of course even if you pre-joined Alcor, there is still a decent chance that for whatever reason they would not be able to preserve you after, for example, a fatal accident which killed you a few days later.
So all told, my rough estimate is that the improvement in my chances of being cryopreserved upon death if I joined Alcor now as opposed to taking a wait and see approach is 5% at best.
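That rough estimate can be spelled out as arithmetic. All of the numbers here are the guesses from the comments above (and one filled-in guess for the conditional, labeled as such), not actuarial data:

```python
# Back-of-envelope version of the estimate above. Illustrative guesses
# only -- not actuarial data.

# Probability that death comes via sudden serious loss of faculties
# followed by death (stroke / heart attack / accident), i.e. the cases
# where having pre-joined would make the difference:
p_sudden_incapacitation = 0.125  # midpoint of the 10-15% guess

# Even having pre-joined, there is "a decent chance" Alcor cannot
# preserve you in such a case; 0.4 is an assumed stand-in for that:
p_preserved_given_joined = 0.4

# Net improvement in the chance of cryopreservation from joining now
# rather than taking a wait-and-see approach:
improvement = p_sudden_incapacitation * p_preserved_given_joined
print(round(improvement, 3))  # prints 0.05, i.e. the "5% at best" figure
```

The structure matters more than the particular numbers: the headline 5% is the product of two probabilities, so halving either guess halves the conclusion.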
Comment author:Turgurth
23 August 2013 01:53:29AM
0 points
[-]
That does sound about right, but with two potential caveats: one is that individual circumstances might also matter in these calculations. For example, my risk of dying in a car accident is much lowered by not driving and only rarely riding in cars. However, my risk of dying of heart disease is raised by a strong family history.
There may also be financial considerations. Cancer almost certainly, and often heart disease and stroke, take time to kill. If you were paying for cryonics out-of-pocket, this wouldn't matter, but if you were paying with life insurance, the cost of the policy would go up, perhaps dramatically, if you were to wait until the onset of serious illness to make your arrangements, as life insurance companies are not fond of pre-existing conditions. It might be worth noting that age alone also increases the cost of life insurance.
That being said, it's also fair to say that even a successful cryopreservation has a (roughly) 10-20% chance of preserving your life, taking most factors into account.
So again, the key here is determining how strongly you value your continued existence. If you could come up with a roughly estimated monetary value of your life, taking the probability of radical life extension into account, that may clarify matters considerably. There are values at which that (roughly) 5% chance is too little, or close to the line, or plenty sufficient, or way more than sufficient; it's quite a spectrum.
Comment author:brazil84
23 August 2013 01:28:35PM
0 points
[-]
One is that individual circumstances might also matter in these calculations. For example, my risk of dying in a car accident is much lowered by not driving and only rarely riding in cars.
Yes I totally agree. Similarly your chances of being murdered are probably a lot lower than the average if you live in an affluent neighborhood and have a spouse who has never assaulted you.
Suicide is an interesting issue -- I would like to think that my chances of committing suicide are far lower than average but painful experience has taught me that it's very easy to be overconfident in predicting one's own actions.
There may also be financial considerations. Cancer almost certainly, and often heart disease and stroke, take time to kill. If you were paying for cryonics out-of-pocket, this wouldn't matter, but if you were paying with life insurance, the cost of the policy would go up, perhaps dramatically, if you were to wait until the onset of serious illness to make your arrangements, as life insurance companies are not fond of pre-existing conditions.
Yes, but there is an easy way around this: Just buy life insurance while you are still reasonably healthy.
Actually this is what got me thinking about the issue: I was recently buying life insurance to protect my family. When I got the policy, I noticed that it had an "accelerated death benefit rider," i.e. if you are certifiably terminally ill, you can get a $100k advance on the policy proceeds. When you think about it, that's not the only way to raise substantial money in such a situation. For example, if you were terminally ill, your spouse probably wouldn't mind if you borrowed $200k against the house for cryopreservation if she knew that when you finally kicked the bucket she would get a check for a million from the insurance company.
So the upshot is that from a selfish perspective, there is a lot to be said for taking a "wait and see" approach.
(There's another issue I thought of: Like most life insurance policies, the ones I bought are good only for 20 years. There is a pretty good chance that I will live for those 20 years but in the meantime develop a serious health condition which makes it almost impossible to buy more insurance. What then?)
So again, the key here is determining how strongly you value your continued existence.
Comment author:Turgurth
23 August 2013 06:56:29PM
2 points
[-]
Hmmm. You do have some interesting ideas regarding cryonics funding that sound promising, but to be safe I would talk to Alcor, specifically Diane Cremeens, about them directly to ensure ahead of time that they'll work for them.
Comment author:brazil84
23 August 2013 07:26:09PM
0 points
[-]
Probably that's a good idea. But on the other hand, what are the chances that they would turn down a certified check for $200k from someone who has a few months to live?
I suppose one could argue that setting things up years in advance so that Alcor controls the money makes it difficult for family members to obstruct your attempt to get frozen.
Comment author:gwern
23 August 2013 07:56:42PM
5 points
[-]
(There's another issue I thought of: Like most life insurance policies, the ones I bought are good only for 20 years. There is a pretty good chance that I will live for those 20 years but in the meantime develop a serious health condition which makes it almost impossible to buy more insurance. What then?)
That's a feature, not a bug, of term life insurance. That's the tradeoff you're making to get coverage now at a cheap rate. But of course, the option value exists on both sides - so if you want to lock in relatively lower rates, well, that's why whole life insurance exists.
Comment author:brazil84
23 August 2013 10:05:06PM
1 point
[-]
That's a feature, not a bug, of term life insurance. That's the tradeoff you're making to get coverage now at a cheap rate. But of course, the option value exists on both sides - so if you want to lock in relatively lower rates, well, that's why whole life insurance exists.
Yes, good point. I actually looked into getting whole life insurance but the policies contained so many bells, whistles, and other confusions that I put it all on hold until I had bought some term insurance. Maybe I will look into that again.
Of course if I were disciplined, it would probably make sense to just "buy term and invest the difference" for the next 30 years.
Comment author:Randy_M
23 August 2013 03:25:47PM
6 points
[-]
It's like what the TV preacher told Bart Simpson: "Yes, a deathbed conversion is a pretty sweet angle, but if you join now, you're also covered in case of accidental death and dismemberment!"
The paper's abstract does a fairly good job of summing it up, although it doesn't explicitly mention Winograd schema questions:
The science of AI is concerned with the study of intelligent forms of behaviour in computational terms. But what does it tell us when a good semblance of a behaviour can be achieved using cheap tricks that seem to have little to do with what we intuitively imagine intelligence to be? Are these intuitions wrong, and is intelligence really just a bag of tricks? Or are the philosophers right, and is a behavioural understanding of intelligence simply too weak? I think both of these are wrong. I suggest in the context of question-answering that what matters when it comes to the science of AI is not a good semblance of intelligent behaviour at all, but the behaviour itself, what it depends on, and how it can be achieved. I go on to discuss two major hurdles that I believe will need to be cleared.
If you have time, this seems worth a read. I started reading other Hector J. Levesque papers because of it.
Edit: Upon searching, I also found some critiques of Levesque's work as well, so looking up opposition to some of these points may also be a good idea.
Comment author:mwengler
21 August 2013 06:50:29PM
*
4 points
[-]
We wonder about the moral impact of dust specks in the eyes of 3^^^3 people.
What about dust specks in the eyes of 3^^^3 poodles? Or more to the point, what is the moral cost of killing one person vs. one poodle? How many poodles' lives would we trade for the life of one person?
Or even within humans, is it human-years we would use in coming up with moral equivalences? Do we discount humans that are less smart, on the theory that we almost certainly discount poodles against humans because they are not as smart as us? Do we discount evil humans compared to helpful humans? Discount unproductive humans against productive ones? And what about sims: if it is human-years we count rather than human lives, does a sim which might be expected to run for more than a trillion subjective years in simulation carry billions of times more moral weight than a single meat human who has precommitted to eschew cryonics or upload?
And of course I am using poodle as an algebraic symbol to represent any one of many intelligences. Do we discount poodles against humans because they are not as smart, or is there some other measure of how to relate the moral value of a poodle to the moral value of a person? Does a sim (simulated human running in software) count equal to a meat human? Does an earthworm have epsilon<<1 times the worth of a human, or is it identically 0 times the worth of a human?
What about really big smart AI? Would an AI as smart as an entire planet be worth (morally) preserving at the expense of losing one-fifth the human population?
I believe that I care nothing for nematodes, and that as the nervous systems at hand became incrementally more complicated, I would eventually reach a sharp boundary wherein my degree of caring went from 0 to tiny. Or rather, I currently suspect that an idealized version of my morality would output such.
Comment author:ahbwramc
22 August 2013 11:28:20PM
5 points
[-]
I'm kind of curious as to why you wouldn't expect a continuous, gradual shift in caring. Wouldn't mind design space (which I would imagine your caring to be a function of) be continuous?
Something going from 0 to 10^-20 is behaving pretty close to continuously in one sense. It is clear that there are some configurations of matter I don't care about at all (like a paperclip), while I do care about other configurations (like twelve-year-old human children), so it is elementary that at some point my utility function must go from 0 to nonzero. The derivative, the second derivative, or even the function itself could easily be discontinuous at this point.
Comment author:MugaSofer
23 August 2013 03:57:07PM
*
-1 points
[-]
It is clear that there are some configurations of matter I don't care about at all (like a paperclip), while I do care about other configurations (like twelve-year-old human children), so it is elementary that at some point my utility function must go from 0 to nonzero.
And ... it isn't clear that there are some configurations you care for ... a bit? Sparrows being tortured and so on? You don't care more about dogs than insects and more for chimpanzees than dogs?
(I mean, most cultures have a Great Chain Of Being or whatever, so surely I haven't gone dreadfully awry in my introspection ...)
No, but I strongly suspect that all Earthly life without frontal cortex would be regarded by my idealized morals as a more complicated paperclip. There may be exceptions and I have heard rumors that octopi pass the mirror test, and I will not be eating any octopus meat until that is resolved, because even in a world where I eat meat because optimizing my diet is more important and my civilization lets me get away with it, I do not eat anything that recognizes itself in a mirror. So a spider is a definite no, a chimpanzee is an extremely probable yes, a day-old human infant is an extremely probable no but there are non-sentience-related causes for me to care in this case, and pigs I am genuinely unsure of.
To be clear, I am unsure if pigs are objects of value, which incorporates both empirical uncertainty about their degree of reflectivity, philosophical uncertainty about the precise relation of reflectivity to degrees of consciousness, and ethical uncertainty about how much my idealized morals would care about various degrees of consciousness to the extent I can imagine that coherently. I can imagine that there's a sharp line of sentience which humans are over and pigs are under, and imagine that my idealized caring would drop to immediately zero for anything under the line, but my subjective probability for both of these being simultaneously true is under 50% though they are not independent.
However it is plausible to me that I would care exactly zero about a pig getting a dust speck in the eye... or not.
Comment author:wedrifid
22 August 2013 02:26:19AM
3 points
[-]
What about dust specks in the eyes of 3^^^3 poodles? Or more to the point, what is the moral cost of killing one person vs. one poodle? How many poodles' lives would we trade for the life of one person?
I observe that the answer to the last question is not constrained to be positive.
Comment author:linkhyrule5
21 August 2013 09:15:53PM
2 points
[-]
Has anyone done a study on redundant information in languages?
I'm just mildly curious, because a back-of-the-envelope calculation suggests that English is about 4.7x redundant - which on a side note explains how we can esiayl regnovze eevn hrriofclly msispled wrods.
(Actually, that would be an interesting experiment - remove or replace fraction x of the letters in a paragraph and see at what average x participants can no longer make a "corrected" copy.)
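A minimal sketch of the degradation step of that experiment (the `degrade` helper and the choice of underscores as the replacement character are my own invented conventions; the human-correction step would of course need actual participants):

```python
import random

def degrade(text, x, rng=None):
    """Replace a fraction x of the letters in text with underscores.

    Non-letter characters (spaces, punctuation) are left intact, so word
    boundaries still give participants something to work with.
    """
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    chars = list(text)
    letter_positions = [i for i, c in enumerate(chars) if c.isalpha()]
    k = int(x * len(letter_positions))
    for i in rng.sample(letter_positions, k):
        chars[i] = "_"
    return "".join(chars)

sample = "the quick brown fox jumps over the lazy dog"
print(degrade(sample, 0.3))  # ~30% of letters hidden; still legible?
```

Sweeping `x` upward and recording the point where participants' corrected copies start diverging from the original would give the redundancy estimate the comment is after.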
I'd predict that Chinese is much less redundant in its spoken form, and that I have no idea how to measure redundancy in its written form. (By stroke? By radical?)
Comment author:gwern
22 August 2013 09:47:32PM
3 points
[-]
I ran into another thing in that vein:
To measure the artistic merit of texts, Kolmogorov also employed a letter-guessing method to evaluate the entropy of natural language. In information theory, entropy is a measure of uncertainty or unpredictability, corresponding to the information content of a message: the more unpredictable the message, the more information it carries. Kolmogorov turned entropy into a measure of artistic originality. His group conducted a series of experiments, showing volunteers a fragment of Russian prose or poetry and asking them to guess the next letter, then the next, and so on. Kolmogorov privately remarked that, from the viewpoint of information theory, Soviet newspapers were less informative than poetry, since political discourse employed a large number of stock phrases and was highly predictable in its content. The verses of great poets, on the other hand, were much more difficult to predict, despite the strict limitations imposed on them by the poetic form. According to Kolmogorov, this was a mark of their originality. True art was unlikely, a quality probability theory could help to measure.
By other metrics, Joyce became less compressible throughout his life. Going closer to the original metric, you demonstrate that the title is hard to compress (especially the lack of apostrophe).
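Kolmogorov's observation can be crudely demonstrated with an off-the-shelf compressor standing in for predictability; both sample texts below are invented for the demonstration:

```python
import zlib

def compressed_ratio(text):
    """Compressed size / raw size: lower means more predictable."""
    raw = text.encode("utf-8")
    return len(zlib.compress(raw, 9)) / len(raw)

# Stock phrases repeated -- a stand-in for predictable political prose:
boilerplate = "the party reaffirms its commitment to the plan. " * 20
# Varied wording -- a stand-in for less predictable text:
varied = ("Quartz jackdaws vex my sphinx; brine-dark gulls wheel over "
          "rust, umber, violet dusk, each clause unlike the last one.")

# The repetitive text compresses far better than the varied one.
assert compressed_ratio(boilerplate) < compressed_ratio(varied)
```

This is only a proxy: gzip-style compressors reward literal repetition, whereas Kolmogorov's letter-guessing volunteers were exploiting every regularity of Russian they knew. But the ordering it produces (stock phrases compress well, original prose doesn't) is the same one he reported.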
Comment author:JQuinton
23 August 2013 08:41:16PM
0 points
[-]
The verses of great poets, on the other hand, were much more difficult to predict, despite the strict limitations imposed on them by the poetic form. According to Kolmogorov, this was a mark of their originality. True art was unlikely, a quality probability theory could help to measure.
This also happens to me with music. I enjoy "unpredictable" music more than predictable music. Knowing music theory, I know which notes are supposed to be played if a song is in a certain key, and when a note or chord isn't the predicted one, it feels a bit more enjoyable. I wonder if the same technique could be applied to different genres of music with the same result, i.e. radio-friendly pop music vs. non-mainstream music.
Comment author:wedrifid
22 August 2013 02:33:55AM
*
1 point
[-]
(Actually, that would be an interesting experiment - remove or replace fraction x of the letters in a paragraph and see at what average x participants can no longer make a "corrected" copy.)
Studies of this form have been done at least on the edge case where all the material removed is from the end (ie. tests of the ability of subjects to predict the next letter in an English text). I'd be interested to see your more general test but am not sure if it has been done. (Except, perhaps, as a game show).
Comment author:Omid
22 August 2013 05:46:01PM
4 points
[-]
Has anyone done a good analysis on the expected value of purchasing health insurance? I will need to purchase health insurance when I turn 26. How comprehensive should the insurance I purchase be?
At first I thought I should purchase a high-deductible plan that only protects against catastrophes. I have low living expenses and considerable savings, so this wouldn't be risky. The logic here is that insurance costs the expected value of the goods provided plus overhead, so the cost of insurance will always be more than its expected value. If I purchase less insurance, I waste less money on overhead.
On the other hand, there's a tax break for purchasing health insurance, and soon there will be subsidies as well. Also, insurance companies can reduce the cost of health care by negotiating lower prices for you. So the insurance company will pay less than the person who pays out of pocket. All these together might outweigh money wasted on overhead.
On the third hand, I'm a young healthy male. Under the ACA, my insurance premiums will be inflated so that old, sick, and female persons can have lower premiums. The money that's being transferred to these groups won't be spent on me, so it reduces the expected value of my insurance.
Has anyone added all these effects up? Would you recommend I purchase skimpy insurance or comprehensive?
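One way to add the effects up is a simplified expected-cost comparison. Every figure below is a made-up placeholder to show the structure of the calculation, not a quote from any real plan:

```python
# All figures are illustrative placeholders, not real premiums or rates.
def expected_annual_cost(premium, deductible, p_big_claim, big_claim_cost,
                         negotiated_discount, tax_break):
    """Expected yearly outlay for one simplified plan.

    negotiated_discount: fraction knocked off billed prices in-network.
    tax_break: yearly tax savings from buying the plan.
    """
    # What you'd pay yourself if the big claim happens: the negotiated
    # price, capped by the deductible.
    out_of_pocket = min(deductible, big_claim_cost * (1 - negotiated_discount))
    return premium + p_big_claim * out_of_pocket - tax_break

# Skimpy high-deductible plan vs. comprehensive plan (made-up numbers):
skimpy = expected_annual_cost(premium=1200, deductible=6000,
                              p_big_claim=0.05, big_claim_cost=50000,
                              negotiated_discount=0.5, tax_break=300)
comprehensive = expected_annual_cost(premium=4800, deductible=500,
                                     p_big_claim=0.05, big_claim_cost=50000,
                                     negotiated_discount=0.5, tax_break=300)
print(skimpy, comprehensive)  # prints 1200.0 4525.0
```

With these placeholder numbers the skimpy plan wins, but the answer flips as the premium gap narrows or the claim probability rises; the real question is which actual numbers to plug in, which is what the comment is asking for.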
Comment author:Randy_M
23 August 2013 03:32:16PM
*
3 points
[-]
"Also, insurance companies can reduce the cost of health care by negotiating lower prices for you. "
This is the case even with a high-deductible plan. The insurance will have a different rate when you use an in-network doctor or hospital service. If you haven't met the deductible and you go in, they'll send you a bill--but that bill will still be much cheaper than if you had gone in and paid out of pocket (like paying less than half).
But make sure that the high-deductible plan actually has a cheaper monthly payment by an amount that matters. With new regulations of what must be covered, the differences between plans may not end up being very big.
Comment author:drethelin
22 August 2013 07:00:12PM
18 points
[-]
I think one of my very favorite things about commenting on Lesswrong is that usually when you make a short statement or ask a question people will just respond to what you said rather than taking it as a sign to attack what they think that question implies is your tribe.
Comment author:blacktrance
23 August 2013 04:38:44AM
0 points
[-]
I find the idea of commitment devices strongly aversive. If I change my mind about doing something in the future, I want to be able to do whatever I choose to do, and don't want my past self to create negative repercussions for me if I change my mind.
Comment author:[deleted]
23 August 2013 11:45:51PM
3 points
[-]
This essay on internet forum behavior by the people behind Discourse is the greatest thing I've seen in the genre in the past two or three years. It rivals even some of the epic examples of wikipedian rule-lawyering that I've witnessed.
Their aggregation of common internet forum rules could have been done by anyone, but it was ultimately they that did it. My confidence in Discourse's success has improved.
Comment author:ahbwramc
24 August 2013 03:14:35AM
2 points
[-]
I don't suppose there's any regularly scheduled LW meetups in San Diego, is there? I'll be there this week from Saturday to Wednesday for a conference.
Comment author:Document
25 August 2013 08:32:34AM
2 points
[-]
This is unrelated to rationality, but I'm posting it here in case someone decides it serves their goals to help me be more effective in mine.
I recently bought a computer, used it for a while, then decided I didn't want it. What's the simplest way to securely wipe the hard drive before returning it? Is it necessary to create an external boot volume (via USB or optical disc)?
Weekly open threads - how do you think it's working?
I think it's much better than monthly open threads - back then, I would sometimes think "Hmm, I'd like to ask this in an open thread, but the last one is too old, nobody's looking at it any more".
You haven't ever posted a top-level comment in a weekly open thread.
What has that to do with it?
Well, it's evidence for "Hmm, I'd like to ask this in an open thread, but the last one is too old, nobody's looking at it any more."
Haha but no, Manfred says that he hasn't ever posted a top-level comment in a weekly open thread.
Suppose we were wondering about changing the flavor of our pizza. Someone says "Yeah, I'm really glad you've got these new flavors on your menu, I used to think the old recipe was boring and didn't order it much."
And then it turns out that this person hasn't ever actually tried any of your new flavors of pizza.
Sort of sets an upper bound on how much the introduction of new flavors has impacted this person's behavior.
You can judge a lot more about a thread than about a pizza by just looking at it.
Also, if you seriously think that Open Threads can only be evaluated by people with top-level comments in them you probably misunderstand both how most people use the Open Threads and what is required to judge them.
Sure!
Though this is more a case of "once in a blue moon I go to the pizza place ... and I'm bored and tired of life ... and want to try something crazy for a change ... but then I see the same old stuff on the menu, and I think, man, this world sucks ... but now they have the Sushi-Harissa-Livarot pizza, and I know next time I'm going to feel better!"
I agree it's a bit weird that I say that p(post|weekly thread) > p(post|monthly thread) when so far there are no instances of post|weekly thread.
Note that he didn’t say “I didn’t post much”, he just said that there existed times when he thought about posting but didn’t because of the age of the thread. That is useful evidence, you can’t just ignore it if it so happens that there are no instances of posting at all.
(In pizza terms, Emile said “I used to think the old recipe was bad and I never ordered it.” It’s not that surprising in that case that there are no instances of ordering.)
I have, and I agree with Emile's assessment.
I prefer it to the old format; once a month is too clumpy for an open thread. It was fine when this was a two-man blog, but not for a discussion forum.
Commercials sound funnier if you mentally replace "up to" with "no more than."
Also easier to translate. In fact, we often translate "up to" with "maximaal", the equivalent of "up to a maximum of" in Dutch. But of course that only translates the practical sense, and leaves out the implication of "up to a maximum of xx (and that is a LOT)". We could translate it with "wel" ("wel xx" ~ "even as much as xx"), but in most contexts, that sounds really... American, over the top, exaggerated. And also it doesn't sound exact enough, when it clearly is intended to be a hard limit.
A new study shows that manipulative behavior could be linked to the development of some forms of altruism. The study itself is unfortunately behind a paywall.
It's about eusocial animals. Human relevance?
Unclear. One could conceive of similar action occurring in highly social species that aren't eusocial but have limited numbers of breeding pairs, but that's not frequently done by primates.
Doesn't Sci-Hub work to find an unpaid version? It often does: http://sci-hub.org/
Sci-hub does not work for US users AFAIK.
I have access - PM me if you're interested in it.
Open comment thread:
If it's worth saying, but not worth its own top-level comment in the open thread, it goes here.
(Copied since it was well received last time.)
What's the name of the bias/fallacy/phenomenon where you learn something (new information, approach, calculation, way of thinking, ...) but after a while revert to the old ideas/habits/views etc.?
I can't think of an academic name; the common phrases in Britain are 'stuck in your ways', 'bloody-minded', 'better the devil you know'.
Depending on what timescales shminux is thinking of as “awhile” (hours or months?), RobbBB's suggestions may be better.
Relapse? Backsliding? Recidivism? Unstickiness? Retrogression? Downdating?
Hmm, some of these are good terms, but the issue is so common, I assumed there would be a standard term for it, at least in the education circles.
Last week, I gave a presentation at the Boston meetup, about using causal graphs to understand bias in the medical literature. Some of you requested the slides, so I have uploaded them at http://scholar.harvard.edu/files/huitfeldt/files/using_causal_graphs_to_understand_bias_in_the_medical_literature.pptx
Note that this is intended as a "Causality for non-majors" type presentation. If you need a higher level of precision, and are able to follow the maths, you would be much better off reading Pearl's book.
(Edited to change file location)
Thanks for making these available.
Even if you can follow the math, these sorts of things can be useful for orienting someone new to the field, or laying a conceptually simple map of the subject that can be elaborated on later. Sometimes, it's easier to use a map to get a feel for where things are than it is to explore directly.
What is a reliable way of identifying arbitrary solved or unsolved problems?
Arbitrary, as in ones you pick yourself? Well, pick a problem, then Google it.
Do you mean random?
I do mean random. The only way I've come up with that can reliably identify a problem is to pick a random household item, then think of what problem it is supposed to solve, thereby identifying a problem; but that doesn't work for unsolved problems...
I think you have to start by imagining better possible states of the world, and then see if anyone has thought of a practical way to get from the current state to the better possible state; if not, it's an unsolved problem.
In household terms, start by imagining the household in a "random" better state (cleaner, more efficient, more interesting, more comfortable, etc.) and once you have a clear idea of something better, search for ways to achieve the better state. In concrete terms, always having clean dishes and delicious prepared food would be much better than dirty dishes and no food. Dishwashers help with the former, but are manual and annoying. Microwaves and frozen food help with the latter, but I like fresh food. Paying a cook is expensive. Learning to cook and then cooking costs time. What is cheap, practical, and yields good results? Unsolved problem, unless you want to eat Soylent.
Skilled slaves? Perhaps 'ethical' should be added to your list of constraints. :)
(cheap, practical, and yields good results) = (skilled slaves) ??
We must live in radically different environments X-D
You could pick words from the dictionary at random until they either describe a problem or are nonsensical - if nonsense, try again. Warning: may take a few million tries to work.
Why doesn't CFAR just tape record one of the workshops and throw it on youtube? Or at least put the notes online and update them each time they change for the next workshop? It seems like these two things would take very little effort, and while not perfect, would be a good middle ground for those unable to attend a workshop.
I can definitely appreciate the idea that person to person learning can't be matched with these, but it seems to me if the goal is to help the world through rationality, and not to make money by forcing people to attend workshops, then something like tape recording would make sense. (not an attack on CFAR, just a question from someone not overly familiar with it).
One of the core ideas of CFAR is to develop tools to teach rationality. For that purpose it's useful to avoid making the course material completely open at this point in time. CFAR wants to publish scientific papers that validate their ideas about teaching rationality.
Doing things in person helps with running experiments and those experiments might be less clear when some people already viewed the lectures online.
I guess I don't see why the two are mutually exclusive, I doubt everyone would stop attending workshops if the material was freely available, and I don't understand why something can't be published if it's open sourced first?
I'm guessing that the goal here is to gather information on how to teach rationality to the 'average' person? As in, the person off of the street who's never asked themselves "what do I think I know and how do I think I know it?". But as far as I can tell, LWers make up a large portion of the workshop attendees. Many of us will have already spent enough time reading articles/sequences about related topics that it's as if we've "already viewed the lectures online".
Also, it's not as if the entire internet is going to flock to the content the second that it gets posted. There will still be an endless pool of people to use in the experiments. And wouldn't the experiments be more informative if the data points weren't all paying participants with rationality as a high priority? Shouldn't the experiments involve trying to teach a random class of high-schoolers or something?
What am I missing?
(April 2013 Workshop Attendee)
(The argument is that) A lot of the CFAR workshop material is very context dependent, and would lose significant value if distilled into text or video. Personally speaking, a lot of what I got out of the workshop was only achievable in the intensive environment - the casual discussion about the material, the reasons behind why you might want to do something, etc - a lot of it can't be conveyed in a one hour video. Now, maybe CFAR could go ahead and try to get at least some of the content value into videos, etc, but that has two concerns. One is the reputational problem with 'publishing' lesser-quality material, and the other is sorta-almost akin to the 'valley of bad rationality'. If you teach someone, say, the mechanics of aversion therapy, but not when to use it, or they learn a superficial version of the principle, that can be worse than never having learned it at all, and it seems plausible that this is true of some of the CFAR material also.
I agree that there are concerns, and you would lose a lot of the depth, but my real concern is with how this makes me perceive CFAR. When I am told that there are things I can't see/hear until I pay money, it makes me feel like it's all some sort of money making scheme, and question whether the goal is actually just to teach as many people as much as possible, or just to maximize revenue. Again, let me clarify that I'm not trying to attack CFAR, I believe that they probably are an honest and good thing, but I'm trying to convey how I initially feel when I'm told that I can't get certain material until I pay money.
It's akin to my personal heuristic of never taking advice from anyone who stands to gain from my decision. Being told by people at CFAR that I can't see this material until I pay the money is the opposite of how I want to decide to attend a workshop, I instead want to see the tapes or read the raw material and decide on my own that I would benefit from being in person.
While you have good points, I would like to say that making money is not unaligned with the goal of teaching as many people as possible. It seems like a good strategy is to develop high-quality material by starting off teaching only those able to pay. This lets some subsidize the development of more open course material. If they haven't gotten to the point where they have released the subsidized material, then I'd give them some more time and judge them again in some years. It's a young organization trying to create material from scratch in many areas.
Yeah, I feel these objections, and I don't think your heuristic is bad. I would say, though, and I hold no brief for CFAR, never having donated or attended a workshop, that there is another heuristic possibly worth considering: generally more valuable products are not free. There are many exceptions to this, and it is possible for sellers to counterhack this common heuristic by using higher prices to falsely signal higher quality to consumers. But the heuristic is not worthless, it just has to be applied carefully.
I feel your concerns, but tbh I think the main disconnect is the research/development vs teaching dichotomy, not (primarily) the considerations I mentioned. The volunteers at the workshop (who were previous attendees) were really quite emphatic about how much they had improved, including content and coherency as well as organization.
(Relevant)
I'm a keen swing dancer. Over the past year or so, a pair of internationally reputable swing dance teachers have been running something called "Swing 90X", (riffing off P90X). The idea is that you establish a local practice group, film your progress, submit your recordings to them, and they give you exercises and feedback over the course of 90 days. By the end of it, you're a significantly more badass dancer.
It would obviously be better if everything happened in person, (and a lot does happen in person; there's a massive international swing dance scene), but time, money and travel constraints make this prohibitively difficult for a lot of people, and the whole Swing 90X thing is a response to this, which is significantly better than the next best thing.
It's worth considering if a similar sort of model could work for CFAR training.
Is a CFAR workshop like a lecture? I thought it would be closer to a group discussion, and perhaps subgroups within. This would make a recording highly unfocused and difficult to follow.
Any one unit in the workshop is probably something in between a lecture, a practice session and a discussion between the instructor and the attendees. Each unit is different in this respect. For most of the units, a recording of a session would probably not be very useful on its own.
Consider the following scenario. Suppose that it can be shown that the laws of physics imply that if we do a certain action (costing 5 utils to perform), then in 1/googol of our descendent universes, 3^^^3 utils can be generated. Intuitively, it seems that we should do this action! (at least to me) But this scenario also seems isomorphic to a Pascal's mugging situation. What is different?
If I attempt to describe the thought process that leads to these differences, it seems to be something like this. What is the measure of the causal descendents where 3^^^3 utils are generated? In typical Pascal's mugging, I expect there to be absolutely zero causal descendents where 3^^^3 utils are generated, but in this example, I expect there to be "1/googol" such causal descendents, even though the subjective probability of these two scenarios is roughly the same. I then do my expected utility maximization with (# of utils)(best guess of my measure) instead of (# of utils)(subjective probability), which seems to match with my intuitions better, at least.
But this also just seems like I am passing the buck to the subjective probability of a certain model of the universe, and that this will suffer from the mugging problem as well.
So does thinking about it this way add anything, or is it just more confusing?
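Maybe it adds a little. Here's a toy sketch of the two calculations being contrasted, with entirely made-up numbers (1e200 stands in for 3^^^3, which is far too large to represent; the 1e-6 subjective probability of the model/mugger being right is assumed for illustration):

```python
COST = 5.0
PAYOFF = 1e200   # stand-in for 3^^^3, which no float can hold
GOOGOL = 1e100
p_model = 1e-6   # assumed subjective probability the model (or mugger) is right

# Typical Pascal's mugging: even conditional on the mugger's claim,
# the expected measure of causal descendants where the payoff occurs is ~0.
ev_mugging = p_model * 0.0 * PAYOFF - COST

# Physics scenario: conditional on the model, the measure of
# payoff-generating descendant universes is 1/googol by assumption.
ev_physics = p_model * (1.0 / GOOGOL) * PAYOFF - COST
```

On these numbers the mugging comes out negative while the physics action comes out hugely positive, matching the intuition in the parent comment; but as you say, the whole result still hangs on `p_model`, so the buck is indeed passed to the subjective probability of the model.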
I don't know how technically viable hyperloop is, but it seems especially well suited for the United States.
Investing in a hyperloop system doesn't make as much sense in Europe or Japan for a number of reasons:
European/Japanese cities are closer together, so Hyperloop's long acceleration times are a larger relative penalty in terms of speed. The existing HSR systems reach their lower top speeds more quickly.
Most European countries and Japan already have decent HSR systems and are set to decline in population. Big new infrastructure projects tend not to make as much sense when populations are declining and the infrastructure cost : population ratio is increasing by default.
Existing HSR systems create a natural political enemy for Hyperloop proposals. For most countries, having HSR and Hyperloop doesn't make sense.
In contrast, the US seems far better suited:
The US is set for a massive population increase, requiring large new investments in transportation infrastructure in any case.
The US has lots of large but far-flung cities, so long acceleration times are not as much of a relative penalty.
The US has little existing HSR to act as a competitor. The political class has expressed interest in increasing passenger rail infrastructure.
Hyperloop is proposed to carry automobiles. Low walkability of US towns is the big killer of intercity passenger rail in the US. Taking HSR might be faster than driving, but in addition to other benefits, driving saves money on having to rent a car when you reach the destination city.
Another possible early adopter is China (because they still need more transport infrastructure, land acquisition is a trivial problem for the Communist party, and they have a larger area, mitigating the slow acceleration problem.) I see China as less likely than the US because they do have a fairly large HSR system and it is expanding quickly. Also, China is set for population decline within a few decades, although they have some decades of slow growth left.
Russia is another possible candidate. Admittedly they have the declining population problem, but they still need more transport infrastructure and they have several big, far-flung cities. The current Russian transportation system is quite unsafe, so they could be expected to be willing to invest in big new projects. The slow acceleration problem would again be mitigated by Russia's large size.
I was only vaguely following the Hyperloop thread on Less Wrong, but this analysis convinced me to Google it to learn more. I was immediately bombarded with a page full of search results that were pessimistic at best (mocking, pretending at the fallacy of gray but still patronizing, and politically indignant (the LA Times) were among the results on the first page)[1]. I was actually kinda hopeful about the concept, since America desperately needs better transit infrastructure, and KND's analysis of it being best suited for America makes plenty of sense so far as I can tell.
[1] I didn't actually open any of the results, just read the titles and descriptions. The tone might have been exaggerated or even completely mutated by that filter, but that seems unlikely for the titles and excerpts I read.
I suggest that this is very weak evidence against the viability, either political, economic, or technical, of the Hyperloop. Any project that is obviously viable and useful has been done already; consequently, both useful and non-useful projects get the same amount of resistance of the form "Here's a problem I spent at least ten seconds thinking up, now you must take three days to counter it or I will pout. In public. Thus spoiling all your chances of ever getting your pet project accepted, hah!"
In theory there is no difference between theory and practice. In practice, there is.
I continue to fail to see how this idea is anything more than a cool idea that would take huge amounts of testing and engineering hurdles to get going if it indeed would prove viable. Nothing is as simple as its untested dream ever is.
Not hating on it, but seriously, hold your horses...
I feel like I covered this in the first sentence with, "I don't know how technically viable hyperloop is." My point is just to argue that the US would be especially well-suited for hyperloop if it turns out to be viable. My goal was mainly to try to argue against the apparent popular wisdom that hyperloop would never be built in the US for the same reason HSR (mostly) wasn't.
I'd like to hear more about possibilities in China, if you've got more. Everything I've read lately suggests that they've extensively overbuilt their infrastructure, much of it with bad debt, in the rush to create urban jobs. And it seems like they're teetering on the edge of a land-development bubble, and that urbanization has already started slowing. But they do get rights-of-way trivially, as you say, and they're geographically a lot more like the US than Europe.
(The Money Illusion would like to dispute this view of China. Not sure how much to trust Sumner on this but he strikes me as generally smart.)
Mr. Sumner has some pretty clear systemic assumptions toward government spending on infrastructure. This article seems to agree with both aspects, without conflicting with either, however.
The Chinese government /is/ opening up new opportunities for non-Chinese companies to provide infrastructure, in order to further cover land development. But they're doing so at least in part because urbanization is slowing and these investments are perceived locally as higher-risk to already risk-heavy banks, and foreign investors are likely to be more adventurous or to lack information.
I've been told that railways primarily get money from freight, and nobody cares that much about freight getting there immediately. As such, high speed railways are not a good idea.
I know you can't leave this to free enterprise per se. If someone doesn't want to sell their house, you can't exactly steer a railroad around it. However, if eminent domain is used, then if it's worth building, the market will build it. Let the government offer eminent domain use for railroads, and let them be built if they're truly needed.
Much of Amtrak uses tracks owned by freight companies, and that this is responsible for a good chunk of Amtrak's poor performance. However, high-speed rail on non-freight-owned tracks works pretty well in the rest of the world; it just needs its own right-of-way (in some cases running freight at night when the high-speed trains aren't running, but still having priority over freight traffic).
Are high speed trains profitable enough for people to build them without government money? I'm not sure how to look that up.
That's not at all the same question as "Are high-speed trains a good idea?"
Any decent HSR would generate quite a lot of value not captured by fares. It would be more informative to compare the economic development of regions that have built high-speed rail against that of similar regions which haven't or which did so later.
France's TGV is profitable. Do you think that because it might not have been built without government funding it was a bad idea to build?
If the HSR charges based on marginal cost, and marginal and average cost are significantly different, then this could be a problem. I intuitively assumed they'd be fairly close. Thinking about it more, I've heard that airports charge vastly more for people who are flying for business than for pleasure, which suggests there is a significant difference. Of course, it also suggests that they might be able to capture it through price discrimination, since the airports seem to manage.
How much government help is necessary for a train to be built?
The economics of a train is not comparable to the economics of a city. If you can actually notice the difference in economic development caused by the train, then the train is so insanely valuable that it would be blindingly obvious from looking at how often they're built by the private sector.
Making a profit is not a sufficient condition for it to be worthwhile to build. It has to make enough profit to make up for the capital cost. It might well do that, and it is possible to check, but it's a lot easier to ask if one has been built without government funding.
If it is worthwhile to build trains in general, and the government doesn't always fund them, then someone will build one without the government funding them.
Marginal and average cost are obviously different, but your example of business fliers is not relevant. Business fliers aren't paying for their flights, but do often get to choose which airline they take. If there is one population that pays for their own flights and another population that does not even consider cost, it would be silly not to discriminate whatever the relation between marginal and average cost.
The businesses are perfectly capable of choosing not to pay for their employees flights. The fact that they do, and that they don't consider the costs, shows that their willingness to pay is much higher than the marginal cost. If it wasn't for price discrimination, consumer surplus would be high, and a large amount of value produced by the airlines would go towards the consumers.
Are high-speed trains natural monopolies? That is, are the capital costs (e.g. rail lines) much higher than the marginal costs (e.g. train cars)? I think they are, and if they are considering the consumer surplus is important, but if they're not, then it doesn't matter.
What marginal cost are you referring to here? If it's the cost to the airline of one butt-in-seat, we know it's less than one fare because the airline is willing to sell that ticket. And this has nothing to do with average cost. I think you've lost the thread a bit.
What I mean is that, if everyone paid what people who travel for pleasure pay, then people travelling for business would pay much less than they're willing to, so the amount airports could capture would be a lot less than the value they produce. If they charged everyone the same, either it would get so expensive that people would only travel for business, even though it's worthwhile for people to travel for pleasure, or it would be cheap enough that people travelling for business would fly for a fraction of what they're willing to pay. Either way, airports that are worth building would go unbuilt since the airport wouldn't actually be able to make enough money to build it.
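A toy version of that argument, with assumed numbers (the traveller counts, willingness-to-pay figures, and fixed cost below are invented purely for illustration):

```python
# Assumed: 100 business travellers who'd pay $500, 900 leisure
# travellers who'd pay $100, and $120,000 in fixed costs.
n_biz, wtp_biz = 100, 500
n_leisure, wtp_leisure = 900, 100
fixed_cost = 120_000

# With one uniform price, the only sensible choices are $500
# (only business flies) or $100 (everyone flies).
revenue_high = n_biz * wtp_biz                    # only business travellers
revenue_low = (n_biz + n_leisure) * wtp_leisure   # everyone flies
best_uniform = max(revenue_high, revenue_low)

# With price discrimination, each group pays its willingness to pay.
revenue_discrim = n_biz * wtp_biz + n_leisure * wtp_leisure
```

Here the best uniform price brings in $100,000, below the $120,000 fixed cost, so the airport goes unbuilt even though total willingness to pay is $140,000; discrimination captures the full $140,000 and makes it viable.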
I don't understand the reasoning by which you conclude that if an effect is measurable it must be so overwhelmingly huge that you wouldn't have to measure it.
On a much smaller scale, property values rise substantially in the neighborhood of light rail stations, but this value is not easily captured by whoever builds the rails. Despite the measurability of this created value, we do not find that "[light rail] is so insanely valuable that it would be blindingly obvious from looking at how often they're built by the private sector."
If the effect is measurable on an accurate but imprecise scale (such as the effect of a train on the economy), then it will be overwhelming on an inaccurate but precise scale (such as ticket sales).
You are suggesting we measure the utility of a single business by its effect on the entire economy. Unless my guesses of the relative sizes are way off, the cost of a train is tiny compared to the normal variation of the economy. In order for the effect to be noticeable, the train would have to pay for itself many, many times over. Ticket sales, and by extension the free market, might not be entirely accurate in judging the value of a train. But it's not so inaccurate that an effect of that magnitude will go unnoticed.
Am I missing something? Are trains really valuable enough that they'd be noticed on the scale of cities?
Are you claiming that a scenario in which
Fares cover 90% of (construction + operating costs)
Faster, more convenient transportation creates non-captured value worth 20% of (construction + operating costs)
is impossible? You seem to be looking at this from a very all-or-nothing point of view.
Faster, more convenient transportation is what fares are charging for. Non-captured value is more complicated than that.
If the non-captured value is 20% of the captured value, it's highly unlikely that trains will frequently be worth building, but rarely capture enough value. That would require that the true value stay within a very narrow area.
If it's not a monopoly good, and marginal costs are close to average costs, then captured value will only go down as people build more trains, so that value not being captured doesn't prevent trains from being built. If it is a monopoly good (I think it is, but I would appreciate it if someone who actually knows tells me), and marginal costs are much lower than average costs, then a significant portion of the value will not be captured. Much more than 20%. It's not entirely unreasonable that the true value is such that trains are rarely built when they should often be built.
That's part of why I asked:
If the government is subsidizing it by, say, 20%, then the trains are likely worth while. If the government practically has to pay for the infrastructure to get people to operate trains, not so much.
Also, that comment isn't really applicable to what you just posted it as a response to. It would fit better as a response to my last comment. The comment you responded to was just saying that unless the value of trains is orders of magnitude more than the cost, you'd never notice by looking at the economy.
Are highways?
Some roads do collect tolls. Again, I don't know how to look it up, but I don't think they have government help. They're in the minority, but they show that having roads is socially optimal. Similarly, if there are high-speed trains that operate without government help, we know that it's good to have high-speed trains, and while it may be that government encouragement is resulting in too many of them being built, we should still build some.
Many of the private passenger rail companies were losing money before they were nationalized, but that was under heavy regulation and price controls. The freight rail companies were losing money before they were deregulated as well. These days they are quite profitable.
A lot of the old right-of-way has been lost so they would certainly need government help to overcome the tragedy-of-the-anticommons problem.
You mean the problem that someone isn't going to be willing to sell their property? Eminent domain is certainly necessary. I'm just wondering if it's sufficient.
I'm not sure what your point is here. Passenger rail and freight rail are usually decoupled. Amtrak operates on freight rail in most places because the government orders the rail companies to give preference to passenger rail (at substantial cost to the private freight railways).
Hyperloop would help out a lot, since it takes the burden off of freight rail. I suppose hyperloop could be privately operated (that would be my preference, so long as there was commonsense regulation against monopolistic pricing).
If competitors can simply build more hyperloops, monopolistic pricing won't be a problem. If you only need one hyperloop, then monopolistic pricing is insufficient. They will still make less money than they produce. Getting rid of monopolistic pricing runs the risk of keeping anyone from building the hyperloops.
Don't forget Australia. We have a few, large cities separated by long distances. In particular, Melbourne to Sydney is one of the highest traffic air routes in the world, roughly the same distance as the proposed Hyperloop, and there has been on and off talk of high speed rail links. Additionally, Sydney airport has a curfew, and is more or less operating at capacity. Offloading Melbourne-bound passengers to a cheaper, faster option would free up more flights for other destinations.
There is a circulating Google Doc for people who are moving to the Bay Area soonish.
Any tips for people moving in from those who are in?
People who have available rooms or houses. Let Nick Ryder know.
Some advice for people who want to rent from landlords.
Artificial intelligence and Solomonoff induction: what to read?
Olle Häggström, Professor of Mathematical Statistics at Chalmers University of Technology, reads some of Marcus Hutter's work, comes away unimpressed, and asks for recommendations.
Random question - is AGI7 a typo, or a term?
Open link, control+f "relavant to AGI". Get directed to "relavant to AGI<sup>7</sup>".
Footnote 7 is "7) I am not a computer scientist, so the following should perhaps be taken with a grain of salt. While I do think that computability and concepts derived from it such as Kolmogorov complexity may be relevant to AGI, I have the feeling that the somewhat more down-to-earth issue of computability in polynomial time is even more likely to be of crucial importance."
I want to know more (ie anything) about game theory. What should I read?
If you have the time, I heartily recommend Ben Polak's Introduction to Game Theory lectures. They are highly watchable and give a very solid introduction to the topic.
In terms of books, The Strategy of Conflict is the classic popular work, and it's good, but it's very much a product of its time. I imagine there are more accessible books out there. Yvain recommends The Art of Strategy, which I haven't read.
I hate trying to learn things from videos, but the books look interesting.
What are your motives for learning about it? If it's to gain a bare-bones understanding sufficient for following discussion in Less Wrong, existing Less Wrong articles would probably equip you well enough.
My possibly crazy theory is that game theory would be a good way to understand feminism.
OK, I'm interested. Can you explain a little more?
It's a little bit intuition and might turn out to be daft, but
a) I've read just enough about game theory in the past to know what the prisoner's dilemma is
b) I was reading an argument/discussion on another blog about the scenario of men chatting up women who may or may not be interested, and various discussions on IRC with MixedNuts have given me the feeling that male/female interactions (which are obviously an area of central interest to feminism) are a similar class of thing, and possibly game theory will help me understand said feminism and/or opposition to it.
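For anyone at the "(a)" stage, the prisoner's dilemma itself is tiny to write down. A minimal sketch with the standard textbook payoff numbers (nothing here is specific to the male/female scenario in (b)):

```python
# payoffs[(row_move, col_move)] = (row_player_payoff, col_player_payoff)
C, D = "cooperate", "defect"
payoffs = {
    (C, C): (3, 3),   # mutual cooperation
    (C, D): (0, 5),   # sucker's payoff vs. temptation
    (D, C): (5, 0),
    (D, D): (1, 1),   # mutual defection
}

def best_response(opponent_move):
    """The move that maximizes the row player's own payoff."""
    return max((C, D), key=lambda m: payoffs[(m, opponent_move)][0])
```

The punchline is that `best_response` returns "defect" whatever the opponent does, even though mutual defection (1, 1) is worse for both players than mutual cooperation (3, 3); that gap between individual and collective rationality is what makes the game a useful lens for social dynamics.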
A word of warning: you will probably draw all sorts of wacky conclusions about human interaction when first dabbling with game theory. There is huge potential for hatching beliefs that you may later regret expressing, especially on politically-charged subjects.
I also had the same intuition about male/female dynamics and the prisoner's dilemma. It also seems like a lot of men's behavior towards women is a result of a scarcity mentality. Surely there are some economic models that explain how people behave -- especially their bad behavior -- when they feel some product is scarce, and if these models were applied to male/female dynamics it might predict some behavior.
But since feminism is such a mind-killing topic, I wouldn't feel too comfortable expressing alternative explanations (especially among non-rationalists) since people tend to feel that if you disagree with the explanation then you disagree with the normative goals.
One model which I've seen come up repeatedly in the humanities is the "marriage market". Unsurprisingly, economists seem to use this idea most often in the literature, but peeking through the Google Scholar hits I see demographers, sociologists, and historians too. (At least one political philosopher uses the idea too.)
I don't know how predictive these models are. I haven't done a systematic review or anything remotely close to one, but when I've seen the marriage market metaphor used it's usually to explain an observation after the fact. Here is a specific example I spotted in Randall Collins's book ''Violence: A Micro-sociological Theory''. On pages 149 & 150 Collins offers this gloss on an escalating case of domestic violence:
(Digression: Collins calls this a sociological interpretation, but I usually associate this kind of bargaining power-based explanation with microeconomics or game theory, not sociology. Perhaps I should expand my idea of what constitutes sociology. After all, Collins is a sociologist, and he has partly melded the bargaining power-based explanation with his own micro-sociological theory of violence.)
(If you want a specific link, here is Yvain's introduction to game theory sequence. There are some problems and inaccuracies with it which are generally discussed in comments, but as a quick overview aimed at a LW audience it should serve pretty well.)
If you're looking for something shorter than a full text, I can recommend this entry at the Stanford Encyclopedia of Philosophy.
I actually found The Selfish Gene a pretty good book for developing game theory intuitions. I'd put it as #2 on my list after "the first 2/3 of The Strategy of Conflict".
If you had to group Less Wrong content into eight categories by subject matter, what would those categories be?
For unspecified levels of meta. :P
I would remove meetups, as that isn't really LW content as such.
It would be good to have it in a separate category, though, so you could disappear it from the front page.
Here's a question that's been distracting me for the last few hours, and I want to get it out of my head so I can think about something else.
You're walking down an alley after making a bank withdrawal of a small sum of money. Just about when you realize this may have been a mistake, two Muggers appear from either side of the alley, blocking trivial escapes.
Mugger A: "Hi there. Give me all of that money or I will inflict 3^^^3 disutility on your utility function."
Mugger B: "Hi there. Give me all of that money or I will inflict maximum disutility on your utility function."
You: "You're working together?"
Mugger A: "No, you're just really unlucky."
Mugger B: "Yeah, I don't know this guy."
You: "But I can't give both of you all of this money!"
Mugger A: "Tell you what. You're having a horrible day, so if you give me half your money, I'll give you a 50% chance of avoiding my 3^^^3 disutility. And if you give me a quarter of your money, I'll give you a 25% chance of avoiding my 3^^^3 disutility. Maybe the other Mugger will let you have the same kind of break. Sound good to you, other Mugger?"
Mugger B: "Works for me. Start paying."
You: Do what, exactly?
I can see at least 4 vaguely plausible answers:
Pay Mugger A: 3^^^3 disutility is likely going to be more than whatever you think your maximum is, and you want the best possible chance of avoiding it. You'll just have to try to resist/escape from Mugger B (unless he's just faking).
Pay Mugger B: Maximum disutility is, by definition, greater than or equal to any other disutility, and thus at least as bad as 3^^^3. It has probably happened to at least a few people with utility functions (although probably NOT to a 3^^^3 extent), so it's a serious threat and you want the best possible chance of avoiding it. You'll just have to try to resist/escape from Mugger A (unless he's just faking).
Pay both Muggers a split of the money: For example: If you pay half to each, and they're both telling the truth, you have a 25% chance of not getting either disutility and not having to resist/escape at all (unless one or both is faking, which may improve your odds.)
Don't Pay: This seems generally less attractive than in a normal Pascal's mugging, since there are no clear escape routes and you're outnumbered, so there is at least some real threat unless they're both faking.
The problem is, I can't seem to justify any of my vaguely plausible answers to this conundrum well enough to stop thinking about it. Which makes me wonder if the question is ill-formed in some way.
Thoughts?
If they're both telling the truth: since B gives maximum disutility, being mugged by both is no worse than being mugged by B. If you think your maximum disutility is X*3^^^3, I think if you run the numbers you should give a fraction X/2 to B, and the rest to A. (or all to B if X>2)
If they might be lying, you should probably ignore them. Or pay B, whose threat is more credible if you don't think your utility function goes as far as 3^^^3 (although, what scale? Maybe a dust speck is 3^^^^3)
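The X/2 split can be sanity-checked numerically. Below is a minimal sketch (my own formalization, not from the thread): money is normalized to 1, the 3^^^3 disutility to 1 unit, so B's maximum disutility is X units. Since B's threat is the maximum, A's threat only adds disutility in the worlds where B's is avoided.

```python
# Hedged sketch: paying fraction f to a mugger buys probability f of avoiding
# that mugger's threat. With f_a = 1 - f_b, expected disutility is
#   E(f_b) = (1 - f_b) * X  +  f_b * (1 - f_a) * 1  =  X - X*f_b + f_b**2

def expected_disutility(f_b: float, x: float) -> float:
    """Expected disutility when fraction f_b goes to Mugger B, the rest to A."""
    return x - x * f_b + f_b ** 2

def best_split(x: float) -> float:
    """Closed-form minimizer over f_b in [0, 1]: f_b = min(x / 2, 1)."""
    return min(x / 2.0, 1.0)

# Brute-force check of the closed form for a few values of X:
for x in (0.5, 1.0, 1.5, 3.0):
    grid = [i / 1000 for i in range(1001)]
    numeric = min(grid, key=lambda f: expected_disutility(f, x))
    assert abs(numeric - best_split(x)) < 0.01
```

This reproduces the recommendation above: give fraction X/2 to B when X ≤ 2, and everything to B otherwise.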
If you have some concept of "3^^^3 disutility" as a tractable measure of units of disutility, it seems unlikely you don't also have a reasonable idea of the upper and lower bounds of your utility function. If the values are known this becomes trivial to solve.
I am becoming increasingly convinced that VNM-utility is a poor tool for ad-hoc decision-theoretics, not because of dubious assumptions or inapplicability, but because finding corner-cases where it appears to break down is somehow ridiculously appealing.
I may be fighting the hypothetical here, but ...
If utility is unbounded, maximum disutility is undefined, and if it's bounded, then 3^^^3 is by definition smaller than the maximum so you should pay all to mugger B.
I think trading a 10% chance of utility A for a 10% chance of utility B, with B < A, is irrational per the definition of utility (as far as I understand; you can have diminishing marginal utility of money, but not diminishing marginal utility of utility itself. I'm less sure about risk aversion, though.)
That's not fighting the hypothetical. Fighting the hypothetical is first paying one, then telling the other you'll go back to the bank to pay him too. Or pulling out your kung fu skills, which is really fighting the hypothetical.
This article, written by Dreeves's wife, has displaced Yvain's polyamory essay as the most interesting relationships article I've read this year. The basic idea is that instead of trying to split chores or common goods equally, you use auctions. For example, if the bathroom needs to be cleaned, each partner says how much they'd be willing to clean it for. The person with the higher bid pays what the other person bid, and that person does the cleaning.
It's easy to see why commenters accused them of being libertarian. But I think egalitarians should examine this system too. Most couples agree that chores and common goods should be split equally. But what does "equally" mean? It's hard to quantify exactly how much each person contributes to a relationship. This allows the more powerful person to exaggerate their contributions and pressure the weaker person into doing more than their fair share. But auctions safeguard against this abuse by requiring participants to quantify how much they value each task.
For example, feminists argue that women do more domestic chores than men, and that these chores go unnoticed by men. Men do a little bit, but because men don't see all the work women do, they end up thinking that they're doing their share when they aren't. Auctions safeguard against this abuse. Instead of the wife just cleaning the bathroom, she and her husband bid for how much they'd be willing to clean the bathroom for. The lower bid is considered the fair market price of cleaning the bathroom. Then she and her husband engage in a joint-purchase auction to decide if the bathroom will be cleaned at all. Either the bathroom gets cleaned and the cleaner gets fairly compensated, or the bathroom doesn't get cleaned because the total utility of cleaning the bathroom is less than the disutility of cleaning it.
And that's it. No arguing about who cleaned it last. No debating whether it really needs to be cleaned. No room for misogynist cultural machines to pressure the wife into doing more than her fair share. Just a market transaction that is efficient and fair.
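For concreteness, here's a hedged sketch of how such a chore auction might run. The proportional-to-value cost split is my own guess at how the joint purchase settles; the article may do it differently.

```python
# Hedged sketch of the chore auction. The cost-split rule (pay in proportion
# to your stated value of the clean bathroom) is my own assumption.

def chore_auction(clean_bids, values):
    """clean_bids: name -> payment that person would accept to do the chore.
    values: name -> how much each person values the chore getting done.
    Returns (cleaner, payments) if the chore happens, else None."""
    cleaner = min(clean_bids, key=clean_bids.get)
    price = clean_bids[cleaner]            # lowest bid = fair market price
    total_value = sum(values.values())
    if total_value < price:                # total utility < disutility: skip it
        return None
    payments = {p: price * v / total_value for p, v in values.items()}
    return cleaner, payments

# Wife bids $15 to clean, husband $40; she values it done at $5, he at $20.
# The chore happens: she cleans, and the $15 price is split 3/12 by value.
assert chore_auction({"wife": 15, "husband": 40},
                     {"wife": 5, "husband": 20}) == \
    ("wife", {"wife": 3.0, "husband": 12.0})
```

If nobody values the cleaning enough to cover the lowest bid, the function returns `None` and the bathroom stays dirty, exactly the "doesn't get cleaned" branch above.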
Wasn't it Ariely's Predictably Irrational that went over market norms vs. tribe norms? If you just had ordinary people start doing this, I would guess it would crash and burn for the obvious market-norm reasons (the urge to game the system, basically). And some ew-squick power disparity stuff if this is ever enforced by a third party or even social pressure.
Empirically speaking, this system has worked in our house (of 7 people, for about 6 months so far). What kind of gaming the system were you thinking of?
We do use social pressure: there is social pressure to do your contracted chores, and keep your chore point balance positive. This hasn't really created power disparities per se.
If the idea is to say exactly how much you are willing to pay, there would be an incentive to:
1) Broadcast that you find all labor extra unpleasant and all goods extra valuable, to encourage people to bid high
2) Bid artificially lower values when you know someone enjoys a labor / doesn't mind parting with a good and will bid accordingly.
In short, optimal play would involve deception, and it happens to be a deception of the sort that might not be difficult to commit subconsciously. You might deceive yourself into thinking you find a chore unpleasant - I have read experimental evidence to support the notion that intrinsically rewarding tasks lose some of their appeal when paired with extrinsic rewards.
No comment on whether the traditional way is any better or worse - I think these two testimonials are sufficient evidence for this to be worth trying by people who have a willing human tribe handy, despite the theoretical issues. After all,
Edit: There is another, more pleasant problem: if you and I are engaged in trade, and I actually care about your utility function, that's going to affect the price. The whole point of this system is to communicate utility evenly after subtracting for the fact that you care about each other (otherwise, why bother with a system?)
Concrete example: We are trying to transfer ownership of a computer monitor, and I'm willing to give it to you for free because I care about you. But if I were to take that into account, then we are essentially back to the traditional method. I'd have to attempt to conjure up the value at which I'd sell the monitor to someone I was neutral towards.
Of course, you could just use this as an argument stopper - whenever there is real disagreement, you use money to effect an easy compromise. But then there is monetary pressure to be argumentative and difficult, and social pressure not to be - it would be socially awkward and monetarily advantageous if you were constantly the one who had a problem with unmet needs.
But if other people bid high, then you have to pay more. And they will know if you bid lower, because the auctions are public. How does this help you?
I don't understand how this helps you either; if you bid lower and therefore win the auction, then you have to do the chore for less than you value it at. That's no fun.
The way our system works, it actually gives the lowest bidder, not their actual bid, but the second lowest bid minus 1; that way you don't have to do bidding wars, and can more or less just bid what you value it at. It does create the issue that you mention - bid sniping, if you know what the lowest bidder will bid you can bid just above it so they get as little as possible - but this is at the risk of having to actually do the chore for that little, because bids are binding.
I'd very much like to understand the issues you bring up, because if they are real problems, we might be able to take some stabs at solving them.
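The second-lowest-bid-minus-one rule described above is easy to state as code (a minimal sketch; the names are mine, not Choron's):

```python
# The lowest bidder does the chore, but is paid the second-lowest bid minus 1
# rather than their own bid, so bidding your true reservation price is safe.

def award_chore(bids):
    """bids: name -> payment that person would accept to do the chore.
    Returns (winner, payment) under the second-lowest-bid-minus-one rule."""
    ordered = sorted(bids.items(), key=lambda kv: kv[1])
    winner = ordered[0][0]
    payment = ordered[1][1] - 1  # second-lowest bid, minus 1
    return winner, payment

# Alice bids 10, Bob 15, Carol 30 -> Alice cleans and is paid 14.
assert award_chore({"Alice": 10, "Bob": 15, "Carol": 30}) == ("Alice", 14)
```

This is essentially a reverse second-price (Vickrey) auction, which is why honest bidding works well when bids are sealed and one-shot; the repeated, publicly-visible setting described above is what reopens the door to sniping.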
This has become somewhat of a norm in our house. We can pass around chore points in exchange for rides to places and so forth; it's useful, because you can ask for favors without using up your social capital. (Just your chore points capital, which is easier to gain more of and more transparent.)
You only do this when you plan to be the buyer. The idea is to win the auction and become the buyer while putting up as little money as possible. If you know that the other guy will do it for $5, you bid $6, even if you actually value it at $10. As you said, I'm talking about bid sniping.
Ah, I should have written "broadcast that you find all labor extra unpleasant and all goods extra valuable when you are the seller (giving up a good or doing a labour) so that people pay you more to do it."
If you're willing to do a chore for $10, but you broadcast that it costs you more than $10 of unpleasantness, the other party will be influenced to bid higher - say, $40. Then, you can bid $30 and get paid more. It's just price inflation - in a traditional transaction, a seller wants the buyer to pay as much as they are willing to pay. To do this, the seller must artificially inflate the buyer's perception of how much the item is worth to the seller. The same holds true here.
When you intend to be the buyer you do the opposite - broadcast that you're willing to do the labor for cheap to lower prices, then bid snipe. As in a traditional transaction, the buyer wants the seller to believe that the item is not of much worth to the buyer. The buyer also has to try to guess the minimum amount that the seller will part with the item.
So what I wrote above was assuming the price was a midpoint between the buyer's and seller's bid, which gives them both equal power to set the price. This rule slightly alters things, by putting all the price setting power in the buyer's hands.
Under this rule, after all the deceptive price inflation is said and done you should still bid an honest $10 if you are only playing once - though since this is an iterated case, you probably want to bid higher just to keep up appearances if you are trying to be deceptive.
One of the nice things about this rule is that there is no incentive to be deceptive unless other people are bid sniping. The weakness of this rule is that it creates a stronger incentive to bid snipe.
Price inflation (seller's strategy) and bid sniping (buyer's strategy) are the two basic forms of deception in this game. Your rule empowers the buyer to set the price, thereby making price inflation harder at the cost of making bid sniping easier. I don't think there is a way around this - it seems to be a general property of trading. Finding a way around it would probably solve some larger scale economic problems.
(I'm one of the other users/devs of Choron)
There are two ways I know of that the market can try to defeat bid sniping, and one way a bidder can (that I know of).
Our system does not display the lowest bid, only the second lowest bid. For a one-shot auction where you had poor information about the others preferences, this would solve bid sniping. However, in our case, chores come up multiple times, and I'm pretty sure that it's public knowledge how much I bid on shopping, for example.
If you're in a situation where the lowest bid is hidden, but your bidding is predictable, you can sometimes bid higher than you normally would. This punishes people who bid less than they're willing to actually do the chore for, but imposes costs on you and the market as a whole as well, in the form of higher prices for the chore.
A third option, which we do not implement (credit to Richard for this idea), is to randomly award the auction to one of the two (or n) lowest bidders, with probability inversely related to their bid. In particular, if you pick between the lowest 2 bidders, both have claimed to be willing to do the job for the 2nd bidder's price (so the price isn't higher and no one can claim they were forced to do something for less than they wanted). This punishes bid-snipers by taking them at their word that they're willing to do the chore for the reduced price, at the cost of determinism, which allows better planning.
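Here's a hedged sketch of that randomized variant. "Probability inversely related to their bid" is underspecified, so weights proportional to 1/bid are my own concretization:

```python
import random

# Randomized award between the two lowest bidders: either way the payment is
# the second-lowest bid, but lower bids are more likely to win.

def award_chore_randomized(bids, rng=random):
    """bids: name -> payment that person would accept to do the chore."""
    ordered = sorted(bids.items(), key=lambda kv: kv[1])
    (name_a, bid_a), (name_b, bid_b) = ordered[0], ordered[1]
    payment = bid_b  # both claimed they'd do it for this price or less
    # Weight inversely by bid (my assumption): lower bid -> higher chance.
    p_a = (1 / bid_a) / (1 / bid_a + 1 / bid_b)
    winner = name_a if rng.random() < p_a else name_b
    return winner, payment
```

With bids of 10 and 15, the lower bidder wins 60% of the time under this weighting, and whoever wins is paid 15, so a sniper who bid 14 risks actually doing the chore for that price.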
And market efficiency.
Plus, I think it doesn't work when there are only two players? If I honestly bid $30, and you bid $40 and randomly get awarded the auction, then I have to pay you $40. And that leaves me at -$10 disutility, since the task was only -$30 to me.
To be sure I'm following you: If the 2nd bidder gets it (for the same price as the first bidder), the market efficiency is lost because the 2nd person is indifferent between winning and not, while the first would have liked to win it? If so, I think that's right.
If there are two players... I agree the first bidder is worse off than they would be if they had won. This seems like a special case of the above though: why is it more broken with 2 players?
Yeah, bidding = deception. But in addition to someonewrong's answer, I was thinking you could just end up doing a shitty job at things (e.g. cleaning the bathroom). Which is to say, if this were an actual labor market, and not a method of communicating between people who like each other and have outside-the-market reasons to cooperate, the market doesn't have much competition.
Yeah, that's unfortunately not something we can really handle other than decreeing "Doing this chore entails doing X and it doesn't count if you don't do X." Enforcing the system isn't solved by the system itself.
Good way to describe it.
I can see this working better than the status quo in a dysfunctional household, but if you're both in the habit of just doing things, this is going to make everything worse.
Roger and I wrote a web app for exactly this purpose - dividing chores via auction. This has worked well for chore management for a house of 7 roommates, for about 6 months so far.
The feminism angle didn't even occur to us! It's just been really useful for dividing chores optimally.
I can see it working when all parties are trustworthy and committed to fairness, which is a high threshold to begin with. Also, everyone has to buy into the idea of other people being autonomous agents, with no shoulds attached. Still, this might run into trouble when one party badly wants something flatly unacceptable to the other, and so is unable to afford it and ends up feeling resentful.
One (unrelated) interesting quote:
Wow someone else thought of doing this too!
My roommate and I started doing this a year ago. It went pretty well for the first few months. Then our neighbor heard about how much we were paying each other for chores and started outbidding us.
This is one of the features of this policy, actually- you can use this as a natural measure of what tasks you should outsource. If a maid would cost $20 to clean the apartment, and you and your roommates all want at least $50 to do it, then the efficient thing to do is to hire a maid.
This sounds interesting for cases where both parties are economically secure.
However I can't see it working in my case since my housemates each earn somewhere around ten times what I do. Under this system, my bids would always be lowest and I would do all the chores without exception. While I would feel unable to turn down this chance to earn money, my status would drop from that of an equal to that of a servant. I would find this unacceptable.
Could one not change the bidding to use "chore points" or somesuch? I mean, the system described is designed for spouses, but there's no reason it couldn't be adapted for you and your housemates.
I'm skeptical that most couples agree with this.
Anyway, all of these types of 'chore division' systems that I've seen so far totally disregard human psychology. Remember that the goal isn't to have a fair chore system. The goal is to have a system that preserves a happy and stable relationship. If the resulting system winds up not being 'fair', that's ok.
Most couples worldwide, or most couples in W.E.I.R.D. societies?
Both.
P.S.: those last two sentences ("No room for misogynist cultural machines to pressure the wife into doing more than her fair share. Just a market transaction that is efficient and fair.") also remind me of "If those women were really oppressed, someone would have tended to have freed them by then."
The polyamory and BDSM subcultures prove that nerds can create new social rules that improve sex. Of course, you can't just theorize about what the best social rules would be and then declare that you've "solved the problem." But when you see people living happier lives as a result of changing their social rules, there's nothing wrong with inviting other people to take a look.
I don't understand your postscript. I didn't say there is no inequality in chore division because if there were, a chore market would have removed it. I said a chore market would have more equality than the standard each-person-does-what-they-think-is-fair system. Your response seems like a fully generalized counterargument: anyone who proposes a way to reduce inequality can be accused of denying that the inequality exists.
One datapoint: I know of one household (two adults, one child) which worked out chores by having people list which chores they liked, which they tolerated, and which they hated. It turned out that there was enough intrinsic motivation to make taking care of the house work.
When you're trying to raise the sanity waterline, dredging the swamps can be a hazardous occupation. Indian rationalist skeptic Narendra Dabholkar was assassinated this morning.
Political activism, especially in the third world, is inherently dangerous, whether or not it is rationality-related.
He was trying to pass a law to suppress religious freedoms of small sects. That doesn't raise the sanity waterline, it just increases tensions and hatred between groups.
That's a ludicrously forgiving reading of what the bill (which looks like going through) is about. Steelmanning is an exercise in clarifying one's own thoughts, not in justifying fraud and witch-hunting.
Did you even read my comment?
Yes, I did. Your characterisation of the new law is factually ridiculous.
That isn't all the law does, as you would know if you actually read it.
I haven't been able to find the text of the bill — only summaries such as this one. Do you have a link?
I've got an (IMHO) interesting discussion article written up, but I am unable to post it; I get a "webpage cannot be found" error when I try. I'm using IE 9. Is this a known issue, or have I done something wrong?
Have you tried searching the LW bugtracker or using a different browser?
Thank you for this suggestion. I have discovered that this works in Chrome.
Do consequentialists generally hold as axiomatic that there must be a morally preferable choice (or conceivably multiple equally preferable choices) in a given situation? If so, could somebody point me to a deeper discussion of this axiom (it probably has a name, which I don't know.)
Not explicitly as an axiom AFAIK, but if you're valuing states-of-the-world, any choice you make will lead to some state, which means that unless your valuation is circular, the answer is yes.
Basically, as long as your valuation is VNM-rational, definitely yes. Utilitarians are a special case of this, and I think most consequentialists would adhere to that also.
Thanks! Do consequentialist kind of port the first axiom (completeness) from the VN-M utility theorem, changing it from decision theory to meta-ethics?
And for others, to put my original question another way: before we start comparing utilons or utility functions, insofar as consequentialists begin with moral intuitions and reason their way to the existence of utility, is one of their starting intuitions that all moral questions have correct answers? Or am I just making this up? And has anybody written about this?
To put that in one popular context: in the Trolley Switch and Fat Man problem, it seems like most people start with the assumption that there exists a right answer (or preferable, or best, whatever your terminology), and that it could never be the case that an agent will do the wrong/immoral/unethical thing no matter what he or she chooses. Am I right that this assumption exists?
Not explicitly (except in the case of some utilitarians), but I don't think many would deny it. The boundaries between meta-ethics and normative ethics are vaguer than you'd think, but consequentialism is already sort of metaethical. The VNM theorem isn't explicitly discussed that often (many ethicists won't have heard of it), but the axioms are fairly intuitive anyway. However, although I don't know enough about weird forms of consequentialism to know if anyone's made a point of denying completeness, I wouldn't be that surprised if that position exists.
Yes, I think it certainly exists. I'm not sure if it's universal or not, but I haven't read a great deal on the subject yet, so I'm not sure I would know.
Most people do have this belief. I think it's a safe one, though. It follows from a substantive belief most people have, which is that agents are only morally responsible for things that are under their control.
In the context of a trolley problem, it's stipulated that the person is being confronted with a choice -- in the context of the problem, they have to choose. And so it would be blaming them for something beyond their control to say "no matter what you do, you are blameworthy."
One way to fight the hypothetical of the trolley problem is to say "people are rarely confronted with this sort of moral dilemma involuntarily, and it's evil to put yourself in a position of choosing between evils." I suppose for consistency, if you say this, you should avoid jury service, voting, or political office.
What happens if my valuation is noncircular, but is incomplete? What if I only have a partial order over states of the world? Suppose I say "I prefer state X to Z, and don't express a preference between X and Y, or between Y and Z." I am not saying that X and Y are equivalent; I am merely refusing to judge.
My impression is that real human preference routinely looks like this; there are lots of cases people refuse to evaluate or don't evaluate consistently.
It seems like even with partial preferences, one can be consequentialist -- if you don't have clear preferences between outcomes, you have a choice that isn't morally relevant. Or is there a self-contradiction lurking?
You could have undefined value, but it's not particularly intuitive, and I don't think anyone actually advocates it as a component of a consequentialist theory.
Whether, in real life, people actually do it is a different story. I mean, it's quite likely that humans violate the VNM model of rationality, but that could just be because we're not rational.
If the result of that partial preference is that you start with Z and then decline the sequence of trades Z->Y->X, then you got dutch booked.
Otoh, maybe you want to accept the sequence Z->Y->X if you expect both trades to be offered, but decline each in isolation? But then your decision procedure is dynamically inconsistent: Standing at Z and expecting both trade offers, you have to precommit to using a different algorithm to evaluate the Y->X trade than you will want to use once you have Y.
I think I see the point about dynamic inconsistency. It might be that "I got to state Y from Z" will alter my decisionmaking about Y versus X.
I suppose it means that my decision of what to do in state Y no longer depends purely on consequences, but also on history, at which point they revoke my consequentialist party membership.
But why is that so terrible? It's a little weird, but I'm not sure it's actually inconsistent or violates any of my moral beliefs. I have all sorts of moral beliefs about ownership and rights that are history-dependent so it's not like history-dependence is a new strange thing.
Sorry if this has been asked before, but can someone explain to me if there is any selfish reason to join Alcor while one is in good health? If I die suddenly, it will be too late to have joined, but even if I had joined it seems unlikely that they would get to me in time.
The only reason I can think of is to support Alcor.
There is some background base rate of sudden, terminal, but not immediately fatal, injury or illness.
For example, I currently do not value life insurance highly, and therefore I value cryonics insurance even less.
Otherwise, there's only some marginal increase in the probability of Alcor surviving as an institution. Seeing as there's precedent for healthy cryonics orgs to adopt the patients of unhealthy cryonics orgs, this marginal increase should be viewed as a yet more marginal increase in the survival of cryonics locations in your locality.
(Assuming transportation costs are prohibitive enough to be treated as a rounding error.)
I don't think it's been asked before on Less Wrong, and it's an interesting question.
It depends on how much you value not dying. If you value it very strongly, the risk of sudden, terminal, but not immediately fatal injuries or illnesses, as mentioned by paper-machine, might be unacceptable to you, and would point toward joining Alcor sooner rather than later.
The marginal increase your support would add to the probability of Alcor surviving as an institution might also matter to you selfishly, since this would increase the probability that there will exist a stronger Alcor when you are older and will likely need it more than you do now.
Additionally, while it's true that it's unlikely that Alcor would reach you in time if you were to die suddenly, compare this risk to the chance of your survival if alternately you don't join Alcor soon enough, and, after your hypothetical fatal car crash, you end up rotting in the ground.
And hey, if you really want selfish reasons: signing up for cryonics is high-status in certain subcultures, including this one.
There are also altruistic reasons to join Alcor, but that's a separate issue.
Thank you for your response; I suppose one would need to estimate the probability of dying in such a way that having previously joined Alcor would make a difference.
Perusing Ben Best's web site and using some common sense, it seems that the most likely causes of death for a reasonably healthy middle aged man are cancer, stroke, heart attack, accident, suicide, and homicide. We need to estimate the probability of sudden serious loss of faculties followed by death.
It seems that for cancer, that probability is extremely small. For stroke, heart attack, and accidents, one could look it up but just guesstimating a number based on general observations, I would guess roughly 10 to 15 percent. Suicide and homicide are special cases -- I imagine that in those cases I would be autopsied so there would be much less chance of cryopreservation even if I had already joined Alcor.
Of course even if you pre-joined Alcor, there is still a decent chance that for whatever reason they would not be able to preserve you after, for example, a fatal accident which killed you a few days later.
So all told, my rough estimate is that the improvement in my chances of being cryopreserved upon death if I joined Alcor now as opposed to taking a wait and see approach is 5% at best.
Does that sound about right?
That does sound about right, but with two potential caveats: one is that individual circumstances might also matter in these calculations. For example, my risk of dying in a car accident is much lowered by not driving and only rarely riding in cars. However, my risk of dying of heart disease is raised by a strong family history.
There may also be financial considerations. Cancer almost certainly, and often heart disease and stroke, take time to kill. If you were paying for cryonics out of pocket, this wouldn't matter, but if you were paying with life insurance, the cost of the policy would go up, perhaps dramatically, if you were to wait until the onset of serious illness to make your arrangements, as life insurance companies are not fond of pre-existing conditions. It might be worth noting that age alone also increases the cost of life insurance.
That being said, it's also fair to say that even a successful cryopreservation has a (roughly) 10-20% chance of preserving your life, taking most factors into account.
So again, the key here is determining how strongly you value your continued existence. If you could come up with a rough monetary value for your life, taking the probability of radical life extension into account, that may clarify matters considerably. There are values at which that (roughly) 5% chance is too little, or close to the line, or plenty sufficient, or way more than sufficient; it's quite a spectrum.
Yes I totally agree. Similarly your chances of being murdered are probably a lot lower than the average if you live in an affluent neighborhood and have a spouse who has never assaulted you.
Suicide is an interesting issue -- I would like to think that my chances of committing suicide are far lower than average but painful experience has taught me that it's very easy to be overconfident in predicting one's own actions.
Yes, but there is an easy way around this: Just buy life insurance while you are still reasonably healthy.
Actually this is what got me thinking about the issue: I was recently buying life insurance to protect my family. When I got the policy, I noticed that it had an "accelerated death benefit rider," i.e. if you are certifiably terminally ill, you can get a $100k advance on the policy proceeds. When you think about it, that's not the only way to raise substantial money in such a situation. For example, if you were terminally ill, your spouse probably wouldn't mind if you borrowed $200k against the house for cryopreservation if she knew that when you finally kicked the bucket she would get a check for a million from the insurance company.
So the upshot is that from a selfish perspective, there is a lot to be said for taking a "wait and see" approach.
(There's another issue I thought of: Like most life insurance policies, the ones I bought are good only for 20 years. There is a pretty good chance that I will live for those 20 years but in the meantime develop a serious health condition which makes it almost impossible to buy more insurance. What then?)
I agree with this to an extent.
Hmmm. You do have some interesting ideas regarding cryonics funding that do sound promising, but to be safe I would talk to Alcor, specifically Diane Cremeens, about them directly to ensure ahead of time that they'll work for them.
Probably that's a good idea. But on the other hand, what are the chances that they would turn down a certified check for $200k from someone who has a few months to live?
I suppose one could argue that setting things up years in advance so that Alcor controls the money makes it difficult for family members to obstruct your attempt to get frozen.
That's a feature, not a bug, of term life insurance. That's the tradeoff you're making to get coverage now at a cheap rate. But of course, the option value exists on both sides - so if you want to lock in relatively lower rates, well, that's why whole life insurance exists.
Yes, good point. I actually looked into getting whole life insurance but the policies contained so many bells, whistles, and other confusions that I put it all on hold until I had bought some term insurance. Maybe I will look into that again.
Of course if I were disciplined, it would probably make sense to just "buy term and invest the difference" for the next 30 years.
It's like what the TV preacher told Bart Simpson: "Yes, a deathbed conversion is a pretty sweet angle, but if you join now, you're also covered in case of accidental death and dismemberment!"
(may not be an exact quote)
This paper on AI by Hector J. Levesque seems interesting: http://www.cs.toronto.edu/~hector/Papers/ijcai-13-paper.pdf
It extensively discusses something called 'Winograd schema questions'; if you want examples of Winograd schema questions, there is a list here: http://www.cs.nyu.edu/faculty/davise/papers/WS.html
The paper's abstract does a fairly good job of summing it up, although it doesn't explicitly mention Winograd schema questions:
If you have time, this seems worth a read. I started reading other Hector J. Levesque papers because of it.
Edit: Upon searching, I also found some critiques of Levesque's work as well, so looking up opposition to some of these points may also be a good idea.
We wonder about the moral impact of dust specks in the eyes of 3^^^3 people.
What about dust specks in the eyes of 3^^^3 poodles? Or more to the point, what is the moral cost of killing one person vs. one poodle? How many poodles' lives would we trade for the life of one person?
Or even within humans, is it human-years we would count in coming up with moral equivalences? Do we discount humans who are less smart, on the theory that we almost certainly discount poodles against humans because they are not as smart as us? Do we discount evil humans compared to helpful humans? Unproductive humans against productive ones? And if it is human-years we count rather than human lives, what of a sim that might be expected to run for more than a trillion subjective years in simulation -- does it carry billions of times more moral weight than a single meat human who has precommitted to eschew cryonics or uploading?
And of course I am using poodle as an algebraic symbol to represent any one of many intelligences. Do we discount poodles against humans because they are not as smart, or is there some other measure of how to relate the moral value of a poodle to the moral value of a person? Does a sim (simulated human running in software) count equal to a meat human? Does an earthworm have epsilon<<1 times the worth of a human, or is it identically 0 times the worth of a human?
What about really big smart AI? Would an AI as smart as an entire planet be worth (morally) preserving at the expense of losing one-fifth the human population?
Do the nervous systems of 3^^^3 nematodes beat the nervous systems of a mere 7x10^9 humans? If not, why not?
I believe that I care nothing for nematodes, and that as the nervous systems at hand became incrementally more complicated, I would eventually reach a sharp boundary wherein my degree of caring went from 0 to tiny. Or rather, I currently suspect that an idealized version of my morality would output such.
But zero is not a probability.
Edit: Adele_L is right, I was confusing utilities and probabilities.
Zero is a utility, and utilities can even be negative (i.e. if Eliezer hated nematodes).
... are you pointing out that there is a nonzero probability that Eliezer's CEV actually cares about nematodes?
No, Adele_L is right, I was confusing utilities and probabilities.
I'm kind of curious as to why you wouldn't expect a continuous, gradual shift in caring. Wouldn't mind design space (which I would imagine your caring to be a function of) be continuous?
Something going from 0 to 10^-20 is behaving pretty close to continuously in one sense. It is clear that there are some configurations of matter I don't care about at all (like a paperclip), while I do care about other configurations (like twelve-year-old human children), so it is elementary that at some point my utility function must go from 0 to nonzero. The derivative, the second derivative, or even the function itself could easily be discontinuous at this point.
And ... it isn't clear that there are some configurations you care for ... a bit? Sparrows being tortured and so on? You don't care more about dogs than insects and more for chimpanzees than dogs?
(I mean, most cultures have a Great Chain Of Being or whatever, so surely I haven't gone dreadfully awry in my introspection ...)
This is not incompatible with what I just said. It goes from 0 to tiny somewhere, not from 0 to 12-year-old.
Can you bracket this boundary reasonably sharply? Say, mosquito: no, butterfly: yes?
No, but I strongly suspect that all Earthly life without a frontal cortex would be regarded by my idealized morals as a more complicated paperclip. There may be exceptions: I have heard rumors that octopi pass the mirror test, and I will not be eating any octopus meat until that is resolved, because even in a world where I eat meat because optimizing my diet is more important and my civilization lets me get away with it, I do not eat anything that recognizes itself in a mirror. So a spider is a definite no, a chimpanzee is an extremely probable yes, a day-old human infant is an extremely probable no (though there are non-sentience-related reasons for me to care in that case), and pigs I am genuinely unsure of.
To be clear, I am unsure if pigs are objects of value, which incorporates both empirical uncertainty about their degree of reflectivity, philosophical uncertainty about the precise relation of reflectivity to degrees of consciousness, and ethical uncertainty about how much my idealized morals would care about various degrees of consciousness to the extent I can imagine that coherently. I can imagine that there's a sharp line of sentience which humans are over and pigs are under, and imagine that my idealized caring would drop to immediately zero for anything under the line, but my subjective probability for both of these being simultaneously true is under 50% though they are not independent.
However it is plausible to me that I would care exactly zero about a pig getting a dust speck in the eye... or not.
But needn't be! See for example f(x) = exp(-1/x) (x > 0), 0 (x ≤ 0).
Wikipedia has an analysis.
(Of course, the space of objects isn't exactly isomorphic to the real line, but it's still a neat example.)
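As a quick numerical illustration (a Python sketch, not part of the original exchange), one can check that this function is continuous at 0 and "infinitely flat" there -- it vanishes faster than any power of x, which is why every derivative matches up (to 0) at the origin even though the function is not analytic:

```python
import math

def f(x):
    # Smooth but non-analytic: 0 for x <= 0, exp(-1/x) for x > 0
    return math.exp(-1.0 / x) if x > 0 else 0.0

# f(x) shrinks faster than any polynomial as x -> 0+;
# even f(x)/x^5 tends to 0, illustrating the flatness at the origin.
for x in (1e-1, 1e-2, 1e-3):
    print(x, f(x), f(x) / x**5)
```

So a utility function can be perfectly smooth while still being exactly zero on a whole region and nonzero elsewhere, which is the point of the example.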
Agreed, but it is not obvious to me that my utility function needs to be differentiable at that point.
... really?
Um, that strikes me as very unlikely. Could you elaborate on your reasoning?
I observe that the answer to the last question is not constrained to be positive.
"Letting those people die was worth it, because they took their cursed yapping poodle with them!"
(quote marks to indicate not my actual views)
Has anyone done a study on redundant information in languages?
I'm just mildly curious, because a back-of-the-envelope calculation suggests that English is about 4.7x redundant - which on a side note explains how we can esiayl regnovze eevn hrriofclly msispled wrods.
(Actually, that would be an interesting experiment - remove or replace fraction x of the letters in a paragraph and see at what average x participants can no longer make a "corrected" copy.)
I'd predict that Chinese is much less redundant in its spoken form, and that I have no idea how to measure redundancy in its written form. (By stroke? By radical?)
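The proposed experiment could be sketched roughly as follows (a Python sketch with a hypothetical `delete_fraction` helper; whether deleted letters should leave a placeholder, and whether punctuation counts, are open design choices):

```python
import random

def delete_fraction(text, x, seed=0):
    """Randomly delete roughly fraction x of the letters in a text,
    leaving spaces and punctuation intact, for a reconstruction test."""
    rng = random.Random(seed)  # seeded so every participant sees the same deletions
    kept = []
    for ch in text:
        if ch.isalpha() and rng.random() < x:
            continue  # drop this letter
        kept.append(ch)
    return "".join(kept)

sample = "We can easily recognize even horrifically misspelled words."
for x in (0.1, 0.3, 0.5):
    print(x, delete_fraction(sample, x))
```

Sweeping x upward and scoring participants' reconstructions against the original would give the "average x at which correction fails" the comment asks about.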
Yes, it's been studied quite a bit by linguists. You can find some pointers in http://www.gwern.net/Notes#efficient-natural-language which may be helpful.
Thanks.
... huh. Now I'm thinking about actually doing that experiment...
I ran into another thing in that vein:
--The Man Who Invented Modern Probability - Issue 4: The Unlikely - Nautilus
I wonder what that metric has to say about Finnigan's Wake...
By other metrics, Joyce became less compressible throughout his life. Going closer to the original metric, you demonstrate that the title is hard to compress (especially the lack of apostrophe).
This also happens to me with music. I enjoy "unpredictable" music more than predictable music. Knowing music theory I know which notes are supposed to be played -- if a song is in a certain key -- and if a note or chord isn't predicted then it feels a bit more enjoyable. I wonder if the same technique could be applied to different genres of music with the same result, i.e. radio-friendly pop music vs non-mainstream music.
Studies of this form have been done at least on the edge case where all the material removed is from the end (i.e., tests of the ability of subjects to predict the next letter in an English text). I'd be interested to see your more general test, but am not sure whether it has been done. (Except, perhaps, as a game show.)
Has anyone done a good analysis of the expected value of purchasing health insurance? I will need to purchase health insurance when I turn 26. How comprehensive should the insurance I purchase be?
At first I thought I should purchase a high-deductible plan that only protects against catastrophes. I have low living expenses and considerable savings, so this wouldn't be risky. The logic here is that insurance costs the expected value of the goods provided plus overhead, so the cost of insurance will always exceed its expected value. If I purchase less insurance, I waste less money on overhead.
On the other hand, there's a tax break for purchasing health insurance, and soon there will be subsidies as well. Also, insurance companies can reduce the cost of health care by negotiating lower prices for you. So the insurance company will pay less than the person who pays out of pocket. All these together might outweigh money wasted on overhead.
On the third hand, I'm a young healthy male. Under the ACA, my insurance premiums will be inflated so that old, sick, and female persons can have lower premiums. The money that's being transferred to these groups won't be spent on me, so it reduces the expected value of my insurance.
Has anyone added all these effects up? Would you recommend I purchase skimpy insurance or comprehensive?
"Also, insurance companies can reduce the cost of health care by negotiating lower prices for you."
This is the case even with a high-deductible plan. The insurance will have a different rate when you use an in-network doctor or hospital service. If you haven't met the deductible and you go in, they'll send you a bill -- but that bill will still be much cheaper than if you had gone in and paid out of pocket (like paying less than half).
But make sure that the high-deductible plan actually has a monthly payment that is cheaper by an amount that matters. With new regulations on what must be covered, the differences between plans may not end up being very big.
I think one of my very favorite things about commenting on Lesswrong is that usually when you make a short statement or ask a question people will just respond to what you said rather than taking it as a sign to attack what they think that question implies is your tribe.
I find the idea of commitment devices strongly aversive. If I change my mind about doing something in the future, I want to be able to do whatever I choose to do, and don't want my past self to create negative repercussions for me if I change my mind.
How can I apply rationality to business?
This essay on internet forum behavior by the people behind Discourse is the greatest thing I've seen in the genre in the past two or three years. It rivals even some of the epic examples of wikipedian rule-lawyering that I've witnessed.
Their aggregation of common internet forum rules could have been done by anyone, but it was ultimately they that did it. My confidence in Discourse's success has improved.
"Don't be a dick" is now "Wheaton's law"? Pfeh!
I don't suppose there are any regularly scheduled LW meetups in San Diego, are there? I'll be there this week from Saturday to Wednesday for a conference.
What if this were a video game? A way of becoming more strategic.
This is unrelated to rationality, but I'm posting it here in case someone decides it serves their goals to help me be more effective in mine.
I recently bought a computer, used it for a while, then decided I didn't want it. What's the simplest way to securely wipe the hard drive before returning it? Is it necessary to create an external boot volume (via USB or optical disc)?
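On the mechanics: yes, the usual approach is to boot from an external volume (USB or optical) so the internal disk isn't in use, then overwrite the raw device. The principle can be illustrated on an ordinary file (a Python sketch; a real wipe would target the device node itself from that external boot volume, and on SSDs the drive's built-in secure-erase command is generally more reliable than overwriting):

```python
import os

def overwrite_file(path, passes=1):
    """Overwrite a file's bytes in place with random data.
    A full-disk wipe applies the same idea to the raw device,
    run from an external boot volume so the disk is not mounted."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # force the bytes to disk

# demonstration on a scratch file
with open("scratch.bin", "wb") as f:
    f.write(b"sensitive data")
overwrite_file("scratch.bin")
with open("scratch.bin", "rb") as f:
    assert b"sensitive" not in f.read()
os.remove("scratch.bin")
```

Deleting files or reformatting alone leaves the old bytes recoverable, which is why the overwrite (or the drive's secure-erase feature) is the step that matters.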