Comment author:gwern
02 September 2013 06:12:10PM
4 points
So to first quote Hsu's description:
This graph displays the number of GWAS hits versus sample size for height, BMI, etc. Once the minimal sample size to discover the alleles of largest impact (large MAF, large effect size) is exceeded, one generally expects a steady accumulation of new hits at lower MAF / effect size. I expect the same sort of progress for g. (MAF = Minor Allele Frequency. Variants that are common in the population are easier to detect than rare variants.)
We can’t predict the sample size required to obtain most of the additive variance for g (this depends on the details of the distribution of alleles), but I would guess that about a million genotypes together with associated g scores will suffice. When, exactly, we will reach this sample size is unclear, but I think most of the difficulty is in obtaining the phenotype data. Within a few years, over a million people will have been genotyped, but probably we will only have g scores for a small fraction of the individuals.
I'll try to explain it in different terms. What you are looking at is a graph of 'results vs effort'. How much work do you have to do to get out some useful results? The importance of this is that it's showing you a visual version of statistical power analysis (introduction).
Ordinary power analysis is about examining the inherent zero-sum trade-offs of power vs sample size vs effect size vs statistical-significance, where you try to optimize each thing for one's particular purpose; so for example, you can choose to have a small (=cheap) sample size and a small Type I (false positives) error rate in detecting a small effect size - as long as you don't mind a huge Type II error rate (low power, false negative, failure to detect real effects).
If you look at my nootropics or sleep experiments, you'll see I do power analysis all the time as a way of understanding how big my experiments need to be before they are not worthlessly uninformative; if your sample size is too small, you simply won't observe anything, even if there really is an effect (eg. you might conclude, 'with such a small n as 23, at the predicted effect size and the usual alpha of 0.05, our power will be very low, like 10%, so the experiment would be a waste of time').
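To make that concrete, here is a minimal sketch of such a power calculation (in Python, for illustration; it uses a normal approximation to the two-sample t-test, so R's power.t.test would give a slightly different exact answer):

```python
from statistics import NormalDist

def two_sample_power(delta, n, alpha=0.05):
    """Approximate power of a two-sided two-sample t-test with sd = 1
    and n subjects per group (normal approximation to the t-test)."""
    norm = NormalDist()
    z = norm.inv_cdf(1 - alpha / 2)   # significance cutoff
    ncp = delta * (n / 2) ** 0.5      # noncentrality under the alternative
    return norm.cdf(ncp - z) + norm.cdf(-ncp - z)

print(two_sample_power(0.2, 23))   # small effect, n=23: power around 0.10
```

Plug in a small effect size and n=23 and the power comes out around 10%, which is exactly the 'waste of time' situation described above.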
Even though we know intelligence is very influenced by genes, you can't find 'the genes for intelligence' by looking at just 10 people - but how many do you need to look at?
In the case of the graph, the statistical-significance is hardwired & the effect sizes are all known to be small, and we ignore power, so that leaves two variables: sample size and number of null-rejection/findings. The graph shows us simply that as we get a larger sample, we can successfully find more associations (because we have more power to get a subtle genetic effect to pass our significance cutoffs). Simple enough. It's not news to anyone that the more data you collect, the more results you get.
What's useful here is that the slope of the points encodes the joint relationship of power & significance & effect size for genetic findings, so we can simply vary sample size and spit out an estimated number of findings. The intercept remains uncertain, though. What Hsu finds so important about this graph is that it lets us predict, for intelligence, how many hits we will get at any sample size once we have a datapoint which nails down a unique line. What's the datapoint? Well, he mentions the very interesting recent finding of ~3 associations - which happened at n=126k. So we plot this IQ datapoint, guessing at roughly where it would go (please pardon my Paint usage):
OK, but how does that let Hsu predict anything? Well, the slope ought to be the same for future IQ findings, since the procedures are basically the same. So all we have to do is guess at the line, and anchor it on this new finding:
So if you want to know what we'll find at 200000 samples, you extend the line and it looks like we'll have ~10 SNPs at that point. Or, if you wanted to know when we'll have found 100 SNPs for intelligence, you simply continue extending the line until it reaches 100 on the y-axis, which apparently Hsu thinks will happen somewhere around 1000000 on the x-axis (which extends off the screen because no one has collected that big a sample yet for anything else, much less intelligence).
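The arithmetic of that extrapolation is simple enough to sketch (Python; the two anchor points - ~3 hits at n=126k and Hsu's guess of ~100 hits at n=1M - are read off the discussion above, and for illustration the slope is backed out of those two endpoints rather than fitted to the height/BMI data as Hsu actually does):

```python
import math

# Hypothetical anchors read off the discussion above.
n1, hits1 = 126_000, 3        # ~3 associations found at n = 126k
n2, hits2 = 1_000_000, 100    # Hsu's guess: ~100 SNPs at n = 1 million

# Slope of the straight line through those two points on a log-log plot.
slope = (math.log(hits2) - math.log(hits1)) / (math.log(n2) - math.log(n1))

def predicted_hits(n):
    """Extrapolate the expected number of hits at sample size n."""
    return hits1 * (n / n1) ** slope
```

With the slope pinned down (here ~1.7), predicting hits at any sample size, or inverting to ask what sample size yields a given number of hits, is just sliding along the line.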
I hope that helps; if you don't understand power, it might help to look at my own little analyses where the problem is usually much simpler.
Comment author:ciphergoth
03 September 2013 12:19:50PM
1 point
Many thanks for this!
So in broad strokes: the smaller a correlation is, the more samples you're going to need to detect it, so the more samples you take, the more correlations you can detect. For five different human variables, this graph shows number of samples against number of correlations detected with them on a log/log scale; from that we infer that a similar slope is likely for intelligence, and so we can use it to take a guess at how many samples we'll need to find some number of SNPs for intelligence. Am I handwaving in the right direction?
Comment author:gwern
03 September 2013 03:26:54PM
0 points
so the more samples you take, the more correlations you can detect.
Yes, although I'd phrase this more as 'the more samples you take, the bigger your "budget", which you can then spend on better estimates of a single variable or if you prefer, acceptable-quality estimates of several variables'.
Which one you want depends on what you're doing. Sometimes you want one variable, other times you want more than one variable. In my self-experiments, I tend to spend my entire budget on getting good power on detecting changes in a single variable (but I could have spent my data budget in several ways: on smaller alphas or smaller effect sizes or detecting changes to multiple variables). Genomics studies like these, however, aren't interested so much in singling out any particular gene and studying it in close detail, but finding 'any relevant gene at all and as many as possible'.
Comment author:gwern
03 September 2013 04:38:47PM
0 points
Eh, I'm not sure the idea of 'double-spending' really applies here. In the multiple comparisons case, you're spending all your budget on detecting the observed effect size and getting high-power/reducing-Type-II-errors (if there's an effect lurking there, you'll find it!), but you then can't buy as much Type I error reduction as you want.
This could be fine in some applications. For example, when I'm A/B testing visual changes to gwern.net, I don't care if I commit a Type I error, because if I replace one doohickey with another doohickey and they work equally well (the null hypothesis), all I've lost is a little time. I'm worried about coming up with an improvement, testing the improvement, and mistakenly believing it isn't an improvement when actually it is.
The problem with multiple comparisons comes when people don't realize they've used up their budget and they believe they really have controlled alpha errors at 5% or whatever. When they think they've had their cake & eaten it too.
I guess a better financial analogy would be more like "you spend all your money on the new laptop you need for work, but not having checked your bank account balance, promise to take your friends out for dinner tomorrow"?
Comment author:Lumifer
03 September 2013 05:27:53PM
0 points
I am a bit confused -- is the framework for this thread observation (where the number of samples is pretty much the only thing you can affect pre-analysis) or experiment design (where you can greatly affect which data you collect)?
I ask because I'm intrigued by the idea of trading off Type I errors against Type II errors, but I'm not sure it's possible in the observation context without introducing bias.
Comment author:gwern
03 September 2013 06:57:26PM
0 points
I'm not sure about this observation vs experiment design dichotomy you're thinking of. I think of power analysis as something which can be done both before an experiment to design it and understand what the data could tell one, and post hoc, to understand why you did or did not get a result and to estimate things for designing the next experiment.
Comment author:Lumifer
03 September 2013 07:20:53PM
0 points
Well, I think of statistical power as the ability to distinguish signal from noise. If you expect signal of a particular strength you need to find ways to reduce the noise floor to below that strength (typically through increasing sample size).
However my standard way of thinking about this is: we have data, we build a model, we evaluate how good the model output is. Building a model, say, via some sort of maximum likelihood, gives you "the" fitted model with specific chances to commit a Type I or a Type II error. But can you trade off chances of Type I errors against chances of Type II errors other than through crudely adding bias to the model output?
Comment author:gwern
03 September 2013 07:28:38PM
0 points
But can you trade off chances of Type I errors against chances of Type II errors other than through crudely adding bias to the model output?
Model-building seems like a separate topic. Power analysis is for particular approaches, where I certainly can trade off Type I against Type II. Here's a simple example for a two-group t-test, where I accept a higher Type I error rate and immediately see my Type II go down (power go up):
R> power.t.test(n=40, delta=0.5, sig.level=0.05)
Two-sample t test power calculation
n = 40
delta = 0.5
sd = 1
sig.level = 0.05
power = 0.5981
alternative = two.sided
NOTE: n is number in *each* group
R> power.t.test(n=40, delta=0.5, sig.level=0.10)
Two-sample t test power calculation
n = 40
delta = 0.5
sd = 1
sig.level = 0.1
power = 0.7163
alternative = two.sided
NOTE: n is number in *each* group
In exchange for accepting 10% Type I rather than 5%, I see my Type II fall from 1-0.60=40% to 1-0.72=28%. Tada, I have traded off errors and as far as I know, the t-test remains exactly as unbiased as it ever was.
Comment author:Skeptityke
31 August 2013 05:20:15PM
0 points
Um... In the HPMOR notes section, this little thing got mentioned.
"I am auctioning off A Day Of My Time, to do with as the buyer pleases – this could include delivering a talk at your company, advising on your fiction novel in progress, applying advanced rationality skillz to a problem which is tying your brain in knots, or confiding the secret answer to the hard problem of conscious experience (it’s not as exciting as it sounds). I retain the right to refuse bids which would violate my ethics or aesthetics. Disposition of funds as above."
That sounds like really exciting news to me, TBH. Someone seriously needs to bid. There are less than 7 hours left and nobody has taken him up on the offer.
Comment author:ArisKatsaris
01 September 2013 01:07:24PM
2 points
That sounds like really exciting news to me
Well, keep in mind that Eliezer himself claims that "it's not as exciting as it sounds".
And of course you always need to have in mind that what Eliezer considers to be "the secret answer to the hard problem of conscious experience" may not be as satisfying an answer to you as it is to him.
After all, some people think that the non-secret answer to the hard problem of conscious experience is something like "consciousness is what an algorithm feels like from the inside" and this is quite non-satisfactory to me (and I think it was non-satisfactory to Eliezer too).
(And also, I think the bidding started at something like $4000.)
Comment author:CAE_Jones
01 September 2013 02:58:51AM
0 points
I got excited for the fraction of a second it took me to remember that everyone who could possibly want to bid could probably afford to spend more money than I have to my name on this without it cutting into their living expenses. Unless my plan was "Bid $900, hope no one outbids, ask Eliezer to get me a job as quickly as possible", which isn't really that exciting a category, however useful.
Comment author:David_Gerard
30 August 2013 10:49:40PM
6 points
So, are $POORETHNICGROUP so poor, badly off and socially failed because they are about 15 IQ points stupider than $RICHETHNICGROUP? No, it may be the other way around: poverty directly loses you around 15 IQ points on average.
Or so says Anandi Mani et al. "Poverty Impedes Cognitive Function" Science 341, 976 (2013); DOI: 10.1126/science.1238041. A PDF while it lasts (from the nice person with the candy on /r/scholar) and the newspaper article I first spotted it in. The authors have written quite a lot of papers on this subject.
Comment author:Vaniver
01 September 2013 05:32:36PM
0 points
So, I totally buy the "cognitive load decreases intellectual performance, both in life and on IQ tests" claim. This is very well replicated, and has immediate personal implications (don't try to remember everything, write it all down; try to minimize sources of stress in your life; try to think about as few projects at a time as possible).
I don't think it's valid to say "instead of A->B, it's B->A," or see this as a complete explanation, because the ~13 point drop is only present in times of financial stress. Take standardized school tests, and suppose that half of the minority students are under immediate financial stress (their parents just got a hefty car repair bill) and the other half aren't (the 'easy' condition in the test), whereas none of the majority students are under immediate financial stress. Then we should expect the minority students to be, on average, 6.5 points lower, but what we see is the gap of 15 points.
It's also plausible that the differentiator between people is their reaction to stress--I know a lot of high-powered managers and engineers under significant stress at work, who lose much less than a standard deviation of their ability to make good decisions and focus on other things and so on. Some people even seem to perform better under stress, but it's hard to separate out the difference between motivation and fluid intelligence there.
Comment author:David_Gerard
01 September 2013 08:41:40PM
-1 points
Being poor means living a life of stress, financial and social. John Scalzi attempts to explain it. John Cheese has excellent ha-ha-only-serious stuff on Cracked on the subject too.
I wasn't meaning to put forward a study as settled science, of course; but I think it's interesting, and that they have a pile of other studies showing similar stuff. Now it's replication time.
Comment author:Vaniver
01 September 2013 09:04:04PM
-1 points
Being poor means living a life of stress, financial and social.
Then why, during the experiment, did the poor participants and the rich participants have comparable scores when presented with a hypothetical easy financial challenge (a repair of $150)?
The claim the paper makes is that there are temporary challenges which lower cognitive functionality, that are easier to induce in the poor than the rich. If you expect that those challenges are more likely to occur to the poor than the rich (which seems reasonable to me), then this should explain some part of the effect- but isn't on all the time, or the experiment wouldn't have come out the way it did.
I wasn't meaning to put forward a study as settled science, of course; but I think it's interesting, and that they have a pile of other studies showing similar stuff. Now it's replication time.
While I have my doubts about the replicability of any social science article that made it into Science, the interpretation concerns here assume that the effect the paper saw is entirely real and at the strength they reported.
Comment author:David_Gerard
31 August 2013 07:56:49AM
2 points
The really interesting thing is that you see results from all over the world showing this. Catholics in Northern Ireland in the 1970s measuring 15 points lower than Protestants. Burakumin in Japan measuring 15 points lower than non-Burakumin. SAME GENE POOL. This strongly suggests you get at least 15 points really easily just from social factors, and these studies may (because a study isn't solid science yet, not even a string of studies from the same group) point to one reason.
Comment author:Protagoras
31 August 2013 03:13:09AM
6 points
The racists claim that this is irrelevant because of research that corrects for socioeconomic status and still finds IQ differences. Of course, researchers have found plenty of evidence of important environmental influences on IQ not measured by SES. It seems especially bad for the racial realist hypothesis that people who, for example, identify as "black" in America have the same IQ disadvantage compared to whites whether their ancestry is 4% European or 40% European; how much African vs. European ancestry someone has seems to matter only indirectly to the IQ effects, which seem to directly follow whichever artificial simplified category someone is identified as belonging to.
Comment author:Vaniver
01 September 2013 05:40:39PM
2 points
It seems especially bad for the racial realist hypothesis that people who, for example, identify as "black" in America have the same IQ disadvantage compared to whites whether their ancestry is 4% European or 40% European
I've seen mixed reports on this. Human Varieties, for example, has a series of posts on colorism which finds a relationship between skin color and intelligence in the population of African Americans, as predicted by both the hereditarian and "colorist" (i.e. discrimination) theories, but does not find a relationship between skin color and intelligence within families (as predicted by the hereditarian but not the colorist theory), and I know there were studies using blood type which didn't support the hereditarian theory but appear to have been too weakly designed to do that even if hereditarianism were true. Are you aware of any studies that actually look at genetic ancestry and compare it to IQ? (Self-reported ancestry would still be informative, but not as accurate.)
Comment author:Vaniver
01 September 2013 05:44:18PM
1 point
There is large enough variance in Neanderthal ancestry among Europeans that we might actually be able to see differences within the European population (and then extrapolate those to guess how much of the European-African gap that explains). I seem to recall seeing some preliminary reports on this, but I can't find them right now so I'm not confident they were evidence-driven instead of theory-driven.
Comment author:Viliam_Bur
31 August 2013 12:30:27PM
3 points
Not completely serious, just wondering about possible implications, for sake of munchkinism:
Would it be possible to invent some new color, for example "purple", so that identifying with that color would increase someone's IQ?
I guess it would first require the rest of the society accepting the superiority (at least in intelligence) of the purple people, and their purpleness being easy to identify and difficult for others to fake. (Possible to achieve with some genetic manipulation.)
Also, could this mechanism possibly explain the higher intelligence of Jews? I mean, if we stopped suspecting them of running international conspiracies and secretly ruling the world (which obviously requires a lot of intelligence), would their IQs consequently drop to the average level?
Also... what about Asians? Is it the popularity of anime that increases their IQ, or what?
Comment author:Protagoras
31 August 2013 03:35:09PM
0 points
Unfortunately, while we know there are lots of environmental factors that affect IQ, we mostly don't know the details well enough to be sure of very much, or to have much idea how to manipulate them. However, as I understand it, some research has suggested that there are interesting cultural similarities between Jews in most of the world and Chinese who don't live in China, and that the IQ advantage of Chinese is primarily found among Chinese living outside China. So something in common between how the Chinese and Jewish cultures deal with being minority outsiders may explain part of why both show unusually high IQs when they are minority outsiders. (It could also explain a lot about East Asians generally; considering how enormous the cultural influence of China has been in the region, it would not be terribly surprising if many other East Asian groups had acquired whatever the relevant factor is.)
This paper by Ogbu and Simons discusses some of the theories about groups that do poorly (the "involuntary" or "caste-like" minorities). Unfortunately I couldn't find a citation for any discussion of differences between voluntary minorities which would explain why some voluntary minorities outperform rather than merely equalling the majority, apart from Ned Block's passing reference to a culture of "self-respect" in his review of The Bell Curve.
Comment author:bogus
31 August 2013 01:59:42PM
0 points
Would it be possible to invent some new color, for example "purple", so that identifying with that color would increase someone's IQ?
It's been done - many people do in fact self-identify as 'Indigo children', 'Indigos' or even 'Brights'. The label tends to come with a broadly humanistic and strongly irreligious worldview, but many of them are in fact highly committed to some form of spirituality and mysticism: indeed, they credit these perhaps unusual convictions for their increased intelligence and, more broadly, their highly developed intuition.
Comment author:Tenoke
30 August 2013 01:33:04PM
5 points
I lost an AI box experiment against PatrickRobotham with me as the AI today on irc. If anyone else wants to play against me then PM me here or contact me on #lesswrong.
Comment author:shminux
30 August 2013 06:14:44PM
-3 points
Failing to convince your jailer to let you out is the highly likely outcome, so it is not very interesting. I would love to hear about any simulated AI winning against an informed opponent.
Comment author:Tenoke
30 August 2013 05:34:23PM
3 points
I don't share details because subsequent games will be less fun and because if I am using dick moves I don't want people to know how much of a dick I am.
Comment author:shminux
26 August 2013 06:26:39PM
2 points
I wonder if it makes sense to have something like a registry of the LW regulars who are experts in certain areas. For example, this forum has a number of trained mathematicians, philosophers, computer scientists...
Something like a table containing [nick, general area, training/credentials, area of interest, additional info (e.g. personal site)], maybe?
Comment author:gwern
23 September 2013 01:54:19AM
0 points
Thanks for all the poll submissions. I decided since I just finished Umineko, this is a good time to analyze the 49 responses.
The gist is that the direction seems to be as predicted and the effect size reasonable (odds-ratio of 1.77), but not big enough to yield any impressive level of statistical-significance (p=0.24):
R> poll <- read.csv("http://dl.dropboxusercontent.com/u/182368464/umineko-poll.csv")
R> library(ordinal)
R> summary(clm(as.ordered(Certainty) ~ Crypto, data=poll))
formula: as.ordered(Certainty) ~ Crypto
data: poll
link threshold nobs logLik AIC niter max.grad cond.H
logit flexible 48 -30.58 67.16 5(0) 5.28e-09 2.9e+01
Coefficients:
Estimate Std. Error z value Pr(>|z|)
Crypto 0.571 0.491 1.16 0.24
Threshold coefficients:
Estimate Std. Error z value
0|1 1.988 0.708 2.81
1|2 3.075 0.822 3.74
(1 observation deleted due to missingness)
R> exp(0.571)
[1] 1.77
Or if you prefer, a linear regression:
R> summary(lm(Certainty ~ Crypto, data=poll))
Call:
lm(formula = Certainty ~ Crypto, data = poll)
Residuals:
Min 1Q Median 3Q Max
-0.409 -0.287 -0.287 -0.164 1.836
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.164 0.151 1.09 0.28
Crypto 0.122 0.117 1.05 0.30
Comment author:gjm
31 August 2013 08:49:56PM
1 point
I'm rather alarmed at how many people appear to have said they're very sure they know how he did it, on (I assume, but I think it's pretty clear) the basis of having thought of one very credible way he could have done it.
I'm going to be optimistic and suppose that all those people thought something like "Although gwern asked how sure we are that we know how it was done, context suggests that the puzzle is really 'find a way to do it' rather than 'identify the specific way used in this case', so I'll say 'very' even though for all I know there could be other ways."
(For what it's worth, I pedantically chose the "middle" option for that question, but I found the same obvious solution as everyone else.)
Comment author:gwern
01 September 2013 02:55:45PM
1 point
I'm going to be optimistic and suppose that all those people thought something like "Although gwern asked how sure we are that we know how it was done, context suggests that the puzzle is really 'find a way to do it' rather than 'identify the specific way used in this case', so I'll say 'very' even though for all I know there could be other ways."
In the case of Umineko, there's not really any difference between 'find a way' and 'find the way', since it adheres to a relativistic Schrodinger's-cat-inspired epistemology where all that matters is successfully explaining the observed evidence. So I don't expect the infelicitous wording to make a difference.
Comment author:gwern
23 September 2013 02:38:56PM
0 points
As it turns out, there's a second possible way using a detail I didn't bother to mention (because I assumed it was a red herring and not as satisfactory a solution anyway):
Angfhuv npghnyyl fnlf fur'f arire rire gbyq nalbar ure snibevgr frnfba rkprcg sbe gur srznyr freinag Funaaba lrnef ntb, naq guvaxf nobhg jurgure Funaaba pbhyq or pbafcvevat jvgu gur lbhat znyr pnyyre. Rkprcg Funaaba vf n ebyr cynlrq ol gur traqre-pbashfrq pebffqerffvat phycevg Lnfh (nybat jvgu gur ebyrf bs Xnaba & Orngevpr), fb gur thrff pbhyq unir orra onfrq ba abguvat ohg ure zrzbel bs orvat gbyq gung.
Crefbanyyl, rira vs V jnf va fhpu n cbfvgvba, V jbhyq fgvyy cersre hfvat gur pneq gevpx: jul pbhyqa'g Angfhuv unir punatrq ure zvaq bire gur lrnef? Be abg orra frevbhf va gur svefg cynpr? Be Funaaba unir zvferzrzorerq? rgp
Comment author:tut
27 August 2013 09:20:49AM
-4 points
DUH
Downvoted because you made a poll in the open thread, thus making the RSS feed impossible to subscribe to, and producing a whole thread full of encrypted nonsense.
Comment author:gwern
27 August 2013 03:10:29PM
0 points
DUH
The answer to the question I am asking (whether perceived difficulty interacts with cryptography knowledge) is not 'duh', and is difficult-to-impossible to answer without a poll. If you think the answer is duh, you are not understanding the point of the poll and you are underrating the possible inferential distance & curses of knowledge at play in trying to guess the answer.
V fhfcrpg gurer ner sbhe fyvcf bs cncre va qvssrerag cnegf bs ure ebbz. Naq vs ur pbhyq farnx gurz va, gura gurer'f n ernfbanoyr punapr ur pna farnx gur guerr fyvcf ersreevat gb aba-jvagre frnfbaf bhg orsber fur svaqf gurz.
Comment author:ygert
25 August 2013 10:12:20PM
4 points
Guvf "chmmyr" frrzf rnfl gb na rkgerzr, gb zr ng yrnfg. Gur gevivny fbyhgvba jbhyq or gb uvqr nyy gur cbffvoyr nafjref va qvssrerag cynprf, naq bayl gryy ure gb ybbx va gur cynpr jurer ur uvq gur nafjre ur trgf gbyq vf pbeerpg. (Va guvf pnfr, haqre gur pybpx.)
Comment author:Adele_L
25 August 2013 08:23:15PM
2 points
My thought was the same as palladias'. I'm not seeing an obvious way involving cryptography though, but I am somewhat familiar with it (I understand RSA and its proof).
Comment author:[deleted]
25 August 2013 10:41:23AM
2 points
I have never consciously noticed a dust speck going into my eye; at least, I don't remember it. This means it didn't make a big enough impression on my mind to leave any lasting memory. When I first read the post about dust specks and torture, I had to think hard about wtf the speck going into your eye even means.
Does this mean that I should attribute zero negative utility to dust speck going into my eye?
Comment author:[deleted]
25 August 2013 12:34:20PM
2 points
Oh, I was already aware of that (and this is not just hindsight bias, I remember reading about this today and someone suggested replacing the speck with the smallest actual negative utility unit). This isn't really about the original question anyway. I was just thinking if something that doesn't even register on a conscious level could have negative utility.
Comment author:linkhyrule5
29 August 2013 10:38:14AM
1 point
Well, yes, but it's one dust speck per person...
And it's entirely possible that utility of dust speck isn't additive. In fact, it's trivially so: one dust speck is fine, a few trillion will do gruesome things to your head.
Comment author:Document
25 August 2013 08:32:34AM
2 points
This is unrelated to rationality, but I'm posting it here in case someone decides it serves their goals to help me be more effective in mine.
I recently bought a computer, used it for a while, then decided I didn't want it. What's the simplest way to securely wipe the hard drive before returning it? Is it necessary to create an external boot volume (via USB or optical disc)?
Comment author:ahbwramc
24 August 2013 03:14:35AM
2 points
I don't suppose there's any regularly scheduled LW meetups in San Diego, is there? I'll be there this week from Saturday to Wednesday for a conference.
Comment author:[deleted]
23 August 2013 11:45:51PM
3 points
This essay on internet forum behavior by the people behind Discourse is the greatest thing I've seen in the genre in the past two or three years. It rivals even some of the epic examples of wikipedian rule-lawyering that I've witnessed.
Their aggregation of common internet forum rules could have been done by anyone, but it was ultimately they that did it. My confidence in Discourse's success has improved.
Comment author:blacktrance
23 August 2013 04:38:44AM
0 points
I find the idea of commitment devices strongly aversive. If I change my mind about doing something in the future, I want to be able to do whatever I choose to do, and don't want my past self to create negative repercussions for me if I change my mind.
Comment author:drethelin
22 August 2013 07:00:12PM
18 points
I think one of my very favorite things about commenting on Lesswrong is that usually when you make a short statement or ask a question people will just respond to what you said rather than taking it as a sign to attack what they think that question implies is your tribe.
Comment author:Omid
22 August 2013 05:46:01PM
4 points
Has anyone done a good analysis on the expected value of purchasing health insurance? I will need to purchase health insurance when I turn 26. How comprehensive should the insurance I purchase be?
At first I thought I should purchase a high-deductible plan that only protects against catastrophes. I have low living expenses and considerable savings, so this wouldn't be risky. The logic here is that insurance costs the expected value of the goods provided plus overhead, so the cost of insurance will always be more than its expected value. If I purchase less insurance, I waste less money on overhead.
On the other hand, there's a tax break for purchasing health insurance, and soon there will be subsidies as well. Also, insurance companies can reduce the cost of health care by negotiating lower prices for you. So the insurance company will pay less than the person who pays out of pocket. All these together might outweigh money wasted on overhead.
On the third hand, I'm a young healthy male. Under the ACA, my insurance premiums will be inflated so that old, sick, and female persons can have lower premiums. The money that's being transferred to these groups won't be spent on me, so it reduces the expected value of my insurance.
Has anyone added all these effects up? Would you recommend I purchase skimpy insurance or comprehensive?
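One way to make the comparison concrete is a quick expected-value sketch. Every number below (premiums, deductible, cost distribution, negotiated discount) is a made-up assumption for illustration, not a real quote:

```python
premium_high_ded = 1200    # annual premium, high-deductible plan (assumed)
premium_comp = 3000        # annual premium, comprehensive plan (assumed)
deductible = 5000
negotiated_discount = 0.5  # insurer's negotiated rate vs sticker price (assumed)

# Assumed distribution of annual sticker-price medical costs: (prob, cost)
outcomes = [(0.70, 0), (0.25, 2000), (0.04, 20000), (0.01, 100000)]

def expected_total(premium, out_of_pocket):
    """Premium plus expected out-of-pocket spending."""
    return premium + sum(p * out_of_pocket(c) for p, c in outcomes)

# High-deductible: pay negotiated rates yourself, capped at the deductible.
high_ded = expected_total(
    premium_high_ded, lambda c: min(c * negotiated_discount, deductible))

# Comprehensive: assume the insurer covers everything beyond the premium.
comp = expected_total(premium_comp, lambda c: 0)

print(f"high-deductible: ${high_ded:,.0f}/yr, comprehensive: ${comp:,.0f}/yr")
```

Under these particular assumptions the high-deductible plan comes out ahead, but the conclusion flips easily if the premiums or the cost distribution change, which is why the subsidy and negotiated-rate effects raised above matter.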
Comment author:Randy_M
23 August 2013 03:32:16PM
*
3 points
[-]
"Also, insurance companies can reduce the cost of health care by negotiating lower prices for you. "
This is the case even with a high-deductible plan. The insurance will have a different rate when you use an in-network doctor or hospital service. If you haven't met the deductible and you go in, they'll send you a bill--but that bill will still be much cheaper than if you had gone in and paid out of pocket (like paying less than half).
But make sure that the high-deductible plan actually has a cheaper monthly payment by an amount that matters. With new regulations on what must be covered, the differences between plans may not end up being very big.
Comment author:linkhyrule5
21 August 2013 09:15:53PM
2 points
[-]
Has anyone done a study on redundant information in languages?
I'm just mildly curious, because a back-of-the-envelope calculation suggests that English is about 4.7x redundant - which on a side note explains how we can esiayl regnovze eevn hrriofclly msispled wrods.
(Actually, that would be an interesting experiment - remove or replace fraction x of the letters in a paragraph and see at what average x participants can no longer make a "corrected" copy.)
I'd predict that Chinese is much less redundant in its spoken form, and that I have no idea how to measure redundancy in its written form. (By stroke? By radical?)
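The 4.7 figure presumably comes from log2(26) ≈ 4.7, the bits per letter English would carry if letters were uniform and independent; Shannon-style estimates put the effective rate near 1 bit per letter. Below is a minimal sketch of the proposed blanking experiment (the sample sentence and fractions are arbitrary):

```python
import math
import random

# If letters were uniform and independent, each would carry log2(26) bits.
print(f"log2(26) = {math.log2(26):.2f} bits per letter")

def blank_out(text, x, seed=0):
    """Replace a fraction x of the letters with underscores."""
    rng = random.Random(seed)
    return "".join("_" if ch.isalpha() and rng.random() < x else ch
                   for ch in text)

sentence = "redundancy lets readers reconstruct heavily damaged text"
for x in (0.2, 0.4, 0.6):
    print(f"x={x}: {blank_out(sentence, x)}")
```

The experiment would then ask at what x human subjects can no longer restore the original; the gap between that threshold and 0 is a direct, if crude, measure of redundancy.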
Comment author:wedrifid
22 August 2013 02:33:55AM
*
1 point
[-]
(Actually, that would be an interesting experiment - remove or replace fraction x of the letters in a paragraph and see at what average x participants can no longer make a "corrected" copy.)
Studies of this form have been done at least on the edge case where all the material removed is from the end (i.e. tests of the ability of subjects to predict the next letter in an English text). I'd be interested to see your more general test but am not sure if it has been done. (Except, perhaps, as a game show.)
Comment author:gwern
22 August 2013 09:47:32PM
3 points
[-]
I ran into another thing in that vein:
To measure the artistic merit of texts, Kolmogorov also employed a letter-guessing method to evaluate the entropy of natural language. In information theory, entropy is a measure of uncertainty or unpredictability, corresponding to the information content of a message: the more unpredictable the message, the more information it carries. Kolmogorov turned entropy into a measure of artistic originality. His group conducted a series of experiments, showing volunteers a fragment of Russian prose or poetry and asking them to guess the next letter, then the next, and so on. Kolmogorov privately remarked that, from the viewpoint of information theory, Soviet newspapers were less informative than poetry, since political discourse employed a large number of stock phrases and was highly predictable in its content. The verses of great poets, on the other hand, were much more difficult to predict, despite the strict limitations imposed on them by the poetic form. According to Kolmogorov, this was a mark of their originality. True art was unlikely, a quality probability theory could help to measure.
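A mechanical stand-in for the letter-guessing method: train a bigram model and use the average surprisal of each actual next letter as an entropy estimate. Repetitive, stock-phrase text should score lower than unfamiliar text, which is the effect Kolmogorov reported. The toy strings below are placeholders, not a real corpus:

```python
import math
from collections import Counter, defaultdict

def entropy_rate(train, test):
    """Bits per character of `test` under a bigram model fit on `train`."""
    counts = defaultdict(Counter)
    for a, b in zip(train, train[1:]):
        counts[a][b] += 1
    alphabet = set(train)
    total_bits = 0.0
    for a, b in zip(test, test[1:]):
        c = counts[a]
        # add-one smoothing so unseen transitions get nonzero probability
        p = (c[b] + 1) / (sum(c.values()) + len(alphabet))
        total_bits += -math.log2(p)
    return total_bits / (len(test) - 1)

boilerplate = "the plan was fulfilled the plan was fulfilled " * 20
novel = "each clause here tries quite hard to avoid repeating itself"
print(f"repetitive text: {entropy_rate(boilerplate, boilerplate):.2f} bits/char")
print(f"unfamiliar text: {entropy_rate(boilerplate, novel):.2f} bits/char")
```

A bigram model is far weaker than a human guesser, so the absolute numbers are inflated, but the ordering (predictable prose below surprising prose) is the point.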
Comment author:JQuinton
23 August 2013 08:41:16PM
0 points
[-]
The verses of great poets, on the other hand, were much more difficult to predict, despite the strict limitations imposed on them by the poetic form. According to Kolmogorov, this was a mark of their originality. True art was unlikely, a quality probability theory could help to measure.
This also happens to me with music. I enjoy "unpredictable" music more than predictable music. Knowing music theory I know which notes are supposed to be played -- if a song is in a certain key -- and if a note or chord isn't predicted then it feels a bit more enjoyable. I wonder if the same technique could be applied to different genres of music with the same result, i.e. radio-friendly pop music vs non-mainstream music.
By other metrics, Joyce became less compressible throughout his life. Going closer to the original metric, you demonstrate that the title is hard to compress (especially the lack of apostrophe).
Comment author:mwengler
21 August 2013 06:50:29PM
*
4 points
[-]
We wonder about the moral impact of dust specks in the eyes of 3^^^3 people.
What about dust specks in the eyes of 3^^^3 poodles? Or more to the point, what is the moral cost of killing one person vs one poodle? How many poodles lives would we trade for the life of one person?
Or even within humans, is it human-years we would account for in coming up with moral equivalencies? Do we discount humans that are less smart, on the theory that we almost certainly discount poodles against humans because they are not as smart as us? Do we discount evil humans compared to helpful humans? Discount unproductive humans against productive ones? And what about sims? If it is human-years we count rather than human lives, does a sim which might be expected to run for more than a trillion subjective years in simulation carry billions of times more moral weight than a single meat human who has precommitted to eschew cryonics or upload?
And of course I am using poodle as an algebraic symbol to represent any one of many intelligences. Do we discount poodles against humans because they are not as smart, or is there some other measure of how to relate the moral value of a poodle to the moral value of a person? Does a sim (simulated human running in software) count equal to a meat human? Does an earthworm have epsilon<<1 times the worth of a human, or is it identically 0 times the worth of a human?
What about really big smart AI? Would an AI as smart as an entire planet be worth (morally) preserving at the expense of losing one-fifth the human population?
Comment author:wedrifid
22 August 2013 02:26:19AM
3 points
[-]
What about dust specks in the eyes of 3^^^3 poodles? Or more to the point, what is the moral cost of killing one person vs one poodle? How many poodles lives would we trade for the life of one person?
I observe that the answer to the last question is not constrained to be positive.
I believe that I care nothing for nematodes, and that as the nervous systems at hand became incrementally more complicated, I would eventually reach a sharp boundary wherein my degree of caring went from 0 to tiny. Or rather, I currently suspect that an idealized version of my morality would output such.
Comment author:ahbwramc
22 August 2013 11:28:20PM
5 points
[-]
I'm kind of curious as to why you wouldn't expect a continuous, gradual shift in caring. Wouldn't mind design space (which I would imagine your caring to be a function of) be continuous?
Something going from 0 to 10^-20 is behaving pretty close to continuously in one sense. It is clear that there are some configurations of matter I don't care about at all (like a paperclip), while I do care about other configurations (like twelve-year-old human children), so it is elementary that at some point my utility function must go from 0 to nonzero. The derivative, the second derivative, or even the function itself could easily be discontinuous at this point.
Comment author:MugaSofer
23 August 2013 03:57:07PM
*
-1 points
[-]
It is clear that there are some configurations of matter I don't care about at all (like a paperclip), while I do care about other configurations (like twelve-year-old human children), so it is elementary that at some point my utility function must go from 0 to nonzero.
And ... it isn't clear that there are some configurations you care for ... a bit? Sparrows being tortured and so on? You don't care more about dogs than insects and more for chimpanzees than dogs?
(I mean, most cultures have a Great Chain Of Being or whatever, so surely I haven't gone dreadfully awry in my introspection ...)
No, but I strongly suspect that all Earthly life without frontal cortex would be regarded by my idealized morals as a more complicated paperclip. There may be exceptions and I have heard rumors that octopi pass the mirror test, and I will not be eating any octopus meat until that is resolved, because even in a world where I eat meat because optimizing my diet is more important and my civilization lets me get away with it, I do not eat anything that recognizes itself in a mirror. So a spider is a definite no, a chimpanzee is an extremely probable yes, a day-old human infant is an extremely probable no but there are non-sentience-related causes for me to care in this case, and pigs I am genuinely unsure of.
Comment author:Emile
25 August 2013 12:11:03PM
*
1 point
[-]
I do not eat anything that recognizes itself in a mirror.
Assuming pigs were objects of value, would that make it morally wrong to eat them? Unlike octopi, most pigs exist because humans plan on eating them, so if a lot of humans stopped eating pigs, there would be fewer pigs, and the life of the average pig might not be much better.
To be clear, I am unsure if pigs are objects of value, which incorporates both empirical uncertainty about their degree of reflectivity, philosophical uncertainty about the precise relation of reflectivity to degrees of consciousness, and ethical uncertainty about how much my idealized morals would care about various degrees of consciousness to the extent I can imagine that coherently. I can imagine that there's a sharp line of sentience which humans are over and pigs are under, and imagine that my idealized caring would drop to immediately zero for anything under the line, but my subjective probability for both of these being simultaneously true is under 50% though they are not independent.
However it is plausible to me that I would care exactly zero about a pig getting a dust speck in the eye... or not.
The paper's abstract does a fairly good job of summing it up, although it doesn't explicitly mention Winograd schema questions:
The science of AI is concerned with the study of intelligent forms of behaviour in computational terms. But what does it tell us when a good semblance of a behaviour can be achieved using cheap tricks that seem to have little to do with what we intuitively imagine intelligence to be? Are these intuitions wrong, and is intelligence really just a bag of tricks? Or are the philosophers right, and is a behavioural understanding of intelligence simply too weak? I think both of these are wrong. I suggest in the context of question-answering that what matters when it comes to the science of AI is not a good semblance of intelligent behaviour at all, but the behaviour itself, what it depends on, and how it can be achieved. I go on to discuss two major hurdles that I believe will need to be cleared.
If you have time, this seems worth a read. I started reading other Hector J. Levesque papers because of it.
Edit: Upon searching, I also found some critiques of Levesque's work as well, so looking up opposition to some of these points may also be a good idea.
Comment author:brazil84
21 August 2013 02:51:15PM
6 points
[-]
Sorry if this has been asked before, but can someone explain to me if there is any selfish reason to join Alcor while one is in good health? If I die suddenly, it will be too late to have joined, but even if I had joined it seems unlikely that they would get to me in time.
The only reason I can think of is to support Alcor.
Comment author:Randy_M
23 August 2013 03:25:47PM
6 points
[-]
It's like what the TV preacher told Bart Simpson: "Yes, a deathbed conversion is a pretty sweet angle, but if you join now, you're also covered in case of accidental death and dismemberment!"
Comment author:Turgurth
22 August 2013 01:08:16AM
5 points
[-]
I don't think it's been asked before on Less Wrong, and it's an interesting question.
It depends on how much you value not dying. If you value it very strongly, the risk of sudden, terminal, but not immediately fatal injuries or illnesses, as mentioned by paper-machine, might be unacceptable to you, and would point toward joining Alcor sooner rather than later.
The marginal increase your support would add to the probability of Alcor surviving as an institution might also matter to you selfishly, since this would increase the probability that there will exist a stronger Alcor when you are older and will likely need it more than you do now.
Additionally, while it's true that it's unlikely that Alcor would reach you in time if you were to die suddenly, compare this risk to your chance of survival if, alternatively, you don't join Alcor soon enough and, after your hypothetical fatal car crash, you end up rotting in the ground.
And hey, if you really want selfish reasons: signing up for cryonics is high-status in certain subcultures, including this one.
There are also altruistic reasons to join Alcor, but that's a separate issue.
Comment author:brazil84
22 August 2013 10:13:24PM
1 point
[-]
Thank you for your response; I suppose one would need to estimate the probability of dying in such a way that having previously joined Alcor would make a difference.
Perusing Ben Best's web site and using some common sense, it seems that the most likely causes of death for a reasonably healthy middle aged man are cancer, stroke, heart attack, accident, suicide, and homicide. We need to estimate the probability of sudden serious loss of faculties followed by death.
It seems that for cancer, that probability is extremely small. For stroke, heart attack, and accidents, one could look it up but just guesstimating a number based on general observations, I would guess roughly 10 to 15 percent. Suicide and homicide are special cases -- I imagine that in those cases I would be autopsied so there would be much less chance of cryopreservation even if I had already joined Alcor.
Of course even if you pre-joined Alcor, there is still a decent chance that for whatever reason they would not be able to preserve you after, for example, a fatal accident which killed you a few days later.
So all told, my rough estimate is that the improvement in my chances of being cryopreserved upon death if I joined Alcor now as opposed to taking a wait and see approach is 5% at best.
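A back-of-the-envelope version of that estimate. Every number below is an assumption echoing the comment's own guesses, not data:

```python
causes = {
    # cause: (assumed share of deaths, P(pre-joining would have mattered))
    "cancer":   (0.30, 0.01),   # slow onset; time to sign up after diagnosis
    "stroke":   (0.10, 0.15),   # the 10-15% guess for sudden incapacitation
    "heart":    (0.25, 0.15),
    "accident": (0.15, 0.15),
    "suicide":  (0.10, 0.02),   # autopsy likely either way
    "homicide": (0.10, 0.02),
}
p_join_helps = sum(share * helps for share, helps in causes.values())
p_preserved_ok = 0.6            # chance Alcor actually preserves you (assumed)
improvement = p_join_helps * p_preserved_ok
print(f"estimated improvement from joining now: {improvement:.1%}")
```

With these guesses the improvement lands right around the 5% figure above; the answer is dominated by the sudden-death shares, which are the softest inputs.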
Comment author:Turgurth
23 August 2013 01:53:29AM
0 points
[-]
That does sound about right, but with two potential caveats: one is that individual circumstances might also matter in these calculations. For example, my risk of dying in a car accident is much lowered by not driving and only rarely riding in cars. However, my risk of dying of heart disease is raised by a strong family history.
There may also be financial considerations. Cancer almost certainly, and often heart disease and stroke, take time to kill. If you were paying for cryonics out-of-pocket, this wouldn't matter, but if you were paying with life insurance the cost of the policy would go up, perhaps dramatically, if you were to wait until the onset of serious illness to make your arrangements, as life insurance companies are not fond of pre-existing conditions. It might be worth noting that age alone also increases the cost of life insurance.
That being said, it's also fair to say that even a successful cryopreservation has a (roughly) 10-20% chance of preserving your life, taking most factors into account.
So again, the key here is determining how strongly you value your continued existence. If you could come up with a roughly estimated monetary value of your life, taking the probability of radical life extension into account, that may clarify matters considerably. There are values at which that (roughly) 5% chance is too little, or close to the line, or plenty sufficient, or way more than sufficient; it's quite a spectrum.
Comment author:brazil84
23 August 2013 01:28:35PM
0 points
[-]
One is that individual circumstances might also matter in these calculations. For example, my risk of dying in a car accident is much lowered by not driving and only rarely riding in cars.
Yes I totally agree. Similarly your chances of being murdered are probably a lot lower than the average if you live in an affluent neighborhood and have a spouse who has never assaulted you.
Suicide is an interesting issue -- I would like to think that my chances of committing suicide are far lower than average but painful experience has taught me that it's very easy to be overconfident in predicting one's own actions.
There may also be financial considerations. Cancer almost certainly, and often heart disease and stroke, take time to kill. If you were paying for cryonics out-of-pocket, this wouldn't matter, but if you were paying with life insurance the cost of the policy would go up, perhaps dramatically, if you were to wait until the onset of serious illness to make your arrangements, as life insurance companies are not fond of pre-existing conditions
Yes, but there is an easy way around this: Just buy life insurance while you are still reasonably healthy.
Actually this is what got me thinking about the issue: I was recently buying life insurance to protect my family. When I got the policy, I noticed that it had an "accelerated death benefit rider," i.e. if you are certifiably terminally ill, you can get a $100k advance on the policy proceeds. When you think about it, that's not the only way to raise substantial money in such a situation. For example, if you were terminally ill, your spouse probably wouldn't mind if you borrowed $200k against the house for cryopreservation if she knew that when you finally kicked the bucket she would get a check for a million from the insurance company.
So the upshot is that from a selfish perspective, there is a lot to be said for taking a "wait and see" approach.
(There's another issue I thought of: Like most life insurance policies, the ones I bought are good only for 20 years. There is a pretty good chance that I will live for those 20 years but in the meantime develop a serious health condition which makes it almost impossible to buy more insurance. What then?)
So again, the key here is determining how strongly you value your continued existence.
Comment author:gwern
23 August 2013 07:56:42PM
5 points
[-]
(There's another issue I thought of: Like most life insurance policies, the ones I bought are good only for 20 years. There is a pretty good chance that I will live for those 20 years but in the meantime develop a serious health condition which makes it almost impossible to buy more insurance. What then?)
That's a feature, not a bug, of term life insurance. That's the tradeoff you're making to get coverage now at a cheap rate. But of course, the option value exists on both sides - so if you want to lock in relatively lower rates, well, that's why whole life insurance exists.
Comment author:brazil84
23 August 2013 10:05:06PM
1 point
[-]
That's a feature, not a bug, of term life insurance. That's the tradeoff you're making to get coverage now at a cheap rate. But of course, the option value exists on both sides - so if you want to lock in relatively lower rates, well, that's why whole life insurance exists.
Yes, good point. I actually looked into getting whole life insurance but the policies contained so many bells, whistles, and other confusions that I put it all on hold until I had bought some term insurance. Maybe I will look into that again.
Of course if I were disciplined, it would probably make sense to just "buy term and invest the difference" for the next 30 years.
Comment author:Turgurth
23 August 2013 06:56:29PM
2 points
[-]
Hmmm. You do have some interesting ideas regarding cryonics funding that do sound promising, but to be safe I would talk to Alcor, specifically Diane Cremeens, about them directly to ensure ahead of time that they'll work for them.
Comment author:brazil84
23 August 2013 07:26:09PM
0 points
[-]
Probably that's a good idea. But on the other hand, what are the chances that they would turn down a certified check for $200k from someone who has a few months to live?
I suppose one could argue that setting things up years in advance so that Alcor controls the money makes it difficult for family members to obstruct your attempt to get frozen.
what are the chances that they would turn down a certified check for $200k from someone who has a few months to live?
In addition to the money, Alcor requires a lot of legal paperwork, including a notarized will. You can probably do that if you have "a few months," but it's one more thing to worry about, especially if you're dying of something that leaves you mentally impaired and makes legal consent complicated. I don't know how strict about this Alcor would be; I second the grandparent's advice to ask Diane.
Comment author:[deleted]
21 August 2013 03:43:47PM
*
-1 points
[-]
There is some background base rate of sudden, terminal, but not immediately fatal, injury or illness.
For example, I currently do not value life insurance highly, and therefore I value cryonics insurance even less.
Otherwise, there's only some marginal increase in the probability of Alcor surviving as an institution. Seeing as there's precedent for healthy cryonics orgs to adopt the patients of unhealthy cryonics orgs, this marginal increase should be viewed as a yet more marginal increase in the survival of cryonics locations in your locality.
(Assuming transportation costs are prohibitive enough to be treated as a rounding error.)
Comment author:metastable
21 August 2013 12:18:21AM
1 point
[-]
Do consequentialists generally hold as axiomatic that there must be a morally preferable choice (or conceivably multiple equally preferable choices) in a given situation? If so, could somebody point me to a deeper discussion of this axiom (it probably has a name, which I don't know.)
Comment author:somervta
21 August 2013 01:34:11AM
2 points
[-]
Not explicitly as an axiom AFAIK, but if you're valuing states-of-the-world, any choice you make will lead to some state, which means that unless your valuation is circular, the answer is yes.
Basically, as long as your valuation is VNM-rational, definitely yes. Utilitarians are a special case of this, and I think most consequentialists would adhere to that also.
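The point above can be sketched in a few lines: once states of the world carry a complete, non-circular valuation, any finite menu of choices has a best (or tied-best) option, found by taking the argmax. The states and utilities below are placeholders:

```python
# A complete valuation over world-states (hypothetical names and numbers).
utility = {"save_five": 5.0, "save_one": 1.0, "do_nothing": 0.0}

def best_choice(options):
    """With a complete, acyclic valuation, a best option always exists."""
    return max(options, key=utility.__getitem__)

print(best_choice(["save_one", "do_nothing"]))   # save_one
print(best_choice(list(utility)))                # save_five
```

The interesting failure modes are exactly the ones discussed below: valuations that are incomplete (some pairs unranked), where `max` over the full menu is no longer well-defined.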
Comment author:asr
21 August 2013 05:08:50AM
*
3 points
[-]
What happens if my valuation is noncircular, but is incomplete? What if I only have a partial order over states of the world? Suppose I say "I prefer state X to Z, and don't express a preference between X and Y, or between Y and Z." I am not saying that X and Y are equivalent; I am merely refusing to judge.
My impression is that real human preference routinely looks like this; there are lots of cases people refuse to evaluate or don't evaluate consistently.
It seems like even with partial preferences, one can be consequentialist -- if you don't have clear preferences between outcomes, you have a choice that isn't morally relevant. Or is there a self-contradiction lurking?
Comment author:pengvado
21 August 2013 05:37:45PM
*
1 point
[-]
Suppose I say "I prefer state X to Z, and don't express a preference between X and Y, or between Y and Z." I am not saying that X and Y are equivalent; I am merely refusing to judge.
If the result of that partial preference is that you start with Z and then decline the sequence of trades Z->Y->X, then you got dutch booked.
Otoh, maybe you want to accept the sequence Z->Y->X if you expect both trades to be offered, but decline each in isolation? But then your decision procedure is dynamically inconsistent: Standing at Z and expecting both trade offers, you have to precommit to using a different algorithm to evaluate the Y->X trade than you will want to use once you have Y.
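A toy version of the first scenario, with hypothetical state names: an agent that strictly prefers X to Z but refuses to rank X vs Y or Y vs Z, and accepts trades only under a strict preference, declines the sequence and forgoes a strictly preferred endpoint:

```python
strict_prefs = {("X", "Z")}        # X strictly preferred to Z; rest unranked

def accepts(holding, offered):
    """Myopic rule: trade only when the offer is strictly preferred."""
    return (offered, holding) in strict_prefs

state = "Z"
for frm, to in [("Z", "Y"), ("Y", "X")]:   # the offered sequence Z -> Y -> X
    if state == frm and accepts(frm, to):
        state = to

print(f"final holding: {state}")                       # stays at Z
print(f"direct Z -> X accepted: {accepts('Z', 'X')}")  # True
```

No money changes hands here, so it isn't a money pump in the strict sense; the exploitability is the forgone strict improvement, and fixing it by accepting the pair jointly produces the dynamic inconsistency described in the second paragraph.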
Comment author:asr
21 August 2013 07:46:18PM
*
0 points
[-]
I think I see the point about dynamic inconsistency. It might be that "I got to state Y from Z" will alter my decisionmaking about Y versus X.
I suppose it means that my decision of what to do in state Y no longer depends purely on consequences, but also on history, at which point they revoke my consequentialist party membership.
But why is that so terrible? It's a little weird, but I'm not sure it's actually inconsistent or violates any of my moral beliefs. I have all sorts of moral beliefs about ownership and rights that are history-dependent so it's not like history-dependence is a new strange thing.
Comment author:somervta
21 August 2013 02:56:20PM
0 points
[-]
You could have undefined value, but it's not particularly intuitive, and I don't think anyone actually advocates it as a component of a consequentialist theory.
Whether, in real life, people actually do it is a different story. I mean, it's quite likely that humans violate the VNM model of rationality, but that could just be because we're not rational.
Comment author:metastable
21 August 2013 03:17:32AM
0 points
[-]
Thanks! Do consequentialists kind of port the first axiom (completeness) from the VN-M utility theorem, changing it from decision theory to meta-ethics?
And for others, to put my original question another way: before we start comparing utilons or utility functions, insofar as consequentialists begin with moral intuitions and reason their way to the existence of utility, is one of their starting intuitions that all moral questions have correct answers? Or am I just making this up? And has anybody written about this?
To put that in one popular context: in the Trolley Switch and Fat Man problem, it seems like most people start with the assumption that there exists a right answer (or preferable, or best, whatever your terminology), and that it could never be the case that an agent will do the wrong/immoral/unethical thing no matter what he or she chooses. Am I right that this assumption exists?
Comment author:asr
21 August 2013 05:02:03AM
*
1 point
[-]
it could never be the case that an agent will do the wrong/immoral/unethical thing no matter what he or she chooses.
Most people do have this belief. I think it's a safe one, though. It follows from a substantive belief most people have, which is that agents are only morally responsible for things that are under their control.
In the context of a trolley problem, it's stipulated that the person is being confronted with a choice -- in the context of the problem, they have to choose. And so it would be blaming them for something beyond their control to say "no matter what you do, you are blameworthy."
One way to fight the hypothetical of the trolley problem is to say "people are rarely confronted with this sort of moral dilemma involuntarily, and it's evil to put yourself in a position of choosing between evils." I suppose for consistency, if you say this, you should avoid jury service, voting, or political office.
Comment author:somervta
21 August 2013 04:52:19AM
1 point
[-]
Thanks! Do consequentialists kind of port the first axiom (completeness) from the VN-M utility theorem, changing it from decision theory to meta-ethics?
Not explicitly (except in the case of some utilitarians), but I don't think many would deny it. The boundaries between meta-ethics and normative ethics are vaguer than you'd think, but consequentialism is already sort of metaethical. The VNM theorem isn't explicitly discussed that often (many ethicists won't have heard of it), but the axioms are fairly intuitive anyway. However, although I don't know enough about weird forms of consequentialism to know if anyone's made a point of denying completeness, I wouldn't be that surprised if that position exists.
To put that in one popular context: in the Trolley Switch and Fat Man problem, it seems like most people start with the assumption that there exists a right answer (or preferable, or best, whatever your terminology), and that it could never be the case that an agent will do the wrong/immoral/unethical thing no matter what he or she chooses. Am I right that this assumption exists?
Yes, I think it certainly exists. I'm not sure if it's universal or not, but I haven't read a great deal on the subject yet, so I'm not sure I would know.
Comment author:Salemicus
20 August 2013 09:29:43PM
3 points
[-]
I've got an (IMHO) interesting discussion article written up, but I am unable to post it; I get a "webpage cannot be found" error when I try. I'm using IE 9. Is this a known issue, or have I done something wrong?
Comment author:knb
20 August 2013 10:11:02PM
*
4 points
[-]
He was trying to pass a law to suppress religious freedoms of small sects. That doesn't raise the sanity waterline, it just increases tensions and hatred between groups.
That's a ludicrously forgiving reading of what the bill (which looks like going through) is about. Steelmanning is an exercise in clarifying one's own thoughts, not in justifying fraud and witch-hunting.
Comment author:Omid
20 August 2013 04:02:11PM
*
15 points
[-]
This article, written by Dreeves's wife, has displaced Yvain's polyamory essay as the most interesting relationships article I've read this year. The basic idea is that instead of trying to split chores or common goods equally, you use auctions. For example, if the bathroom needs to be cleaned, each partner says how much they'd be willing to clean it for. The person with the higher bid pays what the other person bid, and that person does the cleaning.
It's easy to see why commenters accused them of being libertarian. But I think egalitarians should examine this system too. Most couples agree that chores and common goods should be split equally. But what does "equally" mean? It's hard to quantify exactly how much each person contributes to a relationship. This allows the more powerful person to exaggerate their contributions and pressure the weaker person into doing more than their fair share. But auctions safeguard against this abuse by requiring participants to quantify how much they value each task.
For example, feminists argue that women do more domestic chores than men, and that these chores go unnoticed by men. Men do a little bit, but because men don't see all the work women do, they end up thinking that they're doing their share when they aren't. Auctions safeguard against this abuse. Instead of the wife just cleaning the bathroom, she and her husband bid for how much they'd be willing to clean the bathroom for. The lower bid is considered the fair market price of cleaning the bathroom. Then she and her husband engage in a joint-purchase auction to decide if the bathroom will be cleaned at all. Either the bathroom gets cleaned and the cleaner gets fairly compensated, or the bathroom doesn't get cleaned because the total utility of cleaning the bathroom is less than the disutility of cleaning the bathroom.
And that's it. No arguing about who cleaned it last. No debating whether it really needs to be cleaned. No room for misogynist cultural machines to pressure the wife into doing more than her fair share. Just a market transaction that is efficient and fair.
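The two-stage mechanism as described above can be sketched as follows, with made-up bids and valuations (and the simplifying assumption that the cleaner is simply paid their own ask):

```python
def chore_auction(bids, values):
    """bids: each partner's willingness-to-accept for doing the chore.
    values: each partner's value for having the chore done at all.
    Returns (cleaner, payer, price), or None if it isn't worth doing."""
    cleaner = min(bids, key=bids.get)      # lowest ask does the cleaning
    price = bids[cleaner]                  # and is paid their own ask
    if sum(values.values()) < price:       # joint-purchase stage:
        return None                        # not worth doing at that price
    payer = max(bids, key=bids.get)
    return cleaner, payer, price

print(chore_auction({"alice": 15, "bob": 25}, {"alice": 12, "bob": 10}))
print(chore_auction({"alice": 30, "bob": 40}, {"alice": 5, "bob": 5}))
```

A known weakness of first-price mechanisms like this is the incentive to shade bids upward; the article's actual system may handle pricing differently, so treat this purely as an illustration of the structure.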
One datapoint: I know of one household (two adults, one child) which worked out chores by having people list which chores they liked, which they tolerated, and which they hated. It turned out that there was enough intrinsic motivation to make taking care of the house work.
Comment author:Multiheaded
22 August 2013 05:33:39PM
*
3 points
[-]
And that's it. No arguing about who cleaned it last. No debating whether it really needs to be cleaned. No room for misogynist cultural machines to pressure the wife into doing more than her fair share. Just a market transaction that is efficient and fair.
Comment author:Omid
23 August 2013 12:59:22AM
*
7 points
[-]
The polyamory and BDSM subcultures prove that nerds can create new social rules that improve sex. Of course, you can't just theorize about what the best social rules would be and then declare that you've "solved the problem." But when you see people living happier lives as a result of changing their social rules, there's nothing wrong with inviting other people to take a look.
I don't understand your postscript. I didn't say there is no inequality in chore division because if there were a chore market would have removed it. I said a chore market would have more equality than the standard each-person-does-what-they-think-is-fair system. Your response seems like a fully generalized counterargument: anyone who proposes a way to reduce inequality can be accused of denying that the inequality exists.
Comment author:Nornagest
26 August 2013 12:37:46AM
*
5 points
[-]
The polyamory and BDSM subcultures prove that nerds can create new social rules that improve sex
The modern BDSM culture's origins are somewhat obscure, but I don't think I'd be comfortable saying it was created by nerds despite its present demographics. The leather scene is only one of its cultural poles, but that's generally thought to have grown out of the post-WWII gay biker scene: not the nerdiest of subcultures, to say the least.
I don't know as much about the origins of poly, but I suspect the same is true there.
Comment author:fubarobfusco
25 August 2013 11:48:03PM
*
-1 points
[-]
The polyamory and BDSM subcultures prove that nerds can create new social rules that improve sex.
Hmm, I don't know that I would consider those rules overall to be clearly superior for everyone, although they do reasonably well for me. Rather, I value the existence of different subcultures with different norms, so that people can choose those that suit their predilections and needs.
(More politically: A "liberal" society composed of overlapping subcultures with different norms, in a context of individual rights and social support, seems to be almost certain to meet more people's needs than a "totalizing" society with a single set of norms.)
There are certain of those social rules that seem to be pretty clear improvements to me, though — chiefly the increased care on the subject of consent. That's an improvement in a vanilla-monogamous-heteronormative subculture as well as a kink-poly-genderqueer one.
Comment author:Viliam_Bur
31 August 2013 11:34:48AM
*
-1 points
[-]
(More politically: A "liberal" society composed of overlapping subcultures with different norms, in a context of individual rights and social support, seems to be almost certain to meet more people's needs than a "totalizing" society with a single set of norms.)
This works best if none of the "subcultures with different norms" creates huge negative externalities for the rest of the society. Otherwise, some people get angry. -- And then we need to go meta and create some global rules that either prevent the former from creating the externalities, or the latter from expressing their anger.
I guess in the case of the BDSM subculture this works without problems. And I guess the test of the polyamorous community will be how well they treat their children (hopefully better than polygamous Mormons treat their sons), or perhaps how they will handle the poly- equivalents of divorce, especially the economic aspects (if there is significant shared property).
Most couples agree that chores and common goods should be split equally.
I'm skeptical that most couples agree with this.
Anyway, all of these types of 'chore division' systems that I've seen so far totally disregard human psychology. Remember that the goal isn't to have a fair chore system. The goal is to have a system that preserves a happy and stable relationship. If the resulting system winds up not being 'fair', that's ok.
Comment author:kalium
21 August 2013 05:46:58AM
11 points
[-]
This sounds interesting for cases where both parties are economically secure.
However I can't see it working in my case since my housemates each earn somewhere around ten times what I do. Under this system, my bids would always be lowest and I would do all the chores without exception. While I would feel unable to turn down this chance to earn money, my status would drop from that of an equal to that of a servant. I would find this unacceptable.
Comment author:Viliam_Bur
31 August 2013 10:58:41AM
*
3 points
[-]
my housemates each earn somewhere around ten times what I do. Under this system, my bids would always be lowest and I would do all the chores without exception.
I believe you are wrong. (Or I am; in which case please explain to me how.) Here is what I would do if I lived with a bunch of millionaires, assuming my money is limited:
The first time, I would ask a realistic price X. And I would do the chores. I would set the money I gained aside, into a budget of "money I don't really own, because I will use it in the future to get my status back".
The second time, I would ask 1.5 × X. The third time, 2 × X. The fourth time, 3 × X. If asked, I would explain the change by saying: "I guess I was totally miscalibrated about how I value my time. Well, I'm learning. Sorry, this bidding system is so new and confusing to me." But I would act like I am not really required to explain anything.
Let's assume I always do the chores. Then my income grows exponentially, which is a nice thing per se, but most importantly, it cannot continue forever. At some moment, my bid would be so insanely high that even Bill Gates would volunteer to do the chores instead. -- Which is completely okay for me, because I would pay him the $1000000000 per hour from my "get the status back" budget, which by that time already contains the money.
That's it. Keep the money from chores in a separate budget and use it only to pay others for doing the chores. Increase or decrease the bids depending on the state of that budget. If the price becomes relatively stable, there is no way you would do more chores than the other people around you.
The only imbalance I can imagine is if you have a housemate A who always bids more than a housemate B, in which case you will end up between them, always doing more chores than A but fewer than B. Assuming there are 10 A's and 1 B, and the B is considered very low status, this might result in a rather low status for you, too. -- The system merely guarantees you won't get the lowest status, even if you are the least wealthy person in the house; but you can still get the second-lowest place.
Comment author:Fronken
24 August 2013 05:50:37PM
1 point
[-]
Could one not change the bidding to use "chore points" or somesuch? I mean, the system described is designed for spouses, but there's no reason it couldn't be adapted for you and your housemates.
Comment author:knb
20 August 2013 10:39:06PM
7 points
[-]
Wow someone else thought of doing this too!
My roommate and I started doing this a year ago. It went pretty well for the first few months. Then our neighbor heard about how much we were paying each other for chores and started outbidding us.
Comment author:Vaniver
22 August 2013 11:54:58PM
*
7 points
[-]
Then our neighbor heard about how much we were paying each other for chores and started outbidding us.
This is one of the features of this policy, actually- you can use this as a natural measure of what tasks you should outsource. If a maid would cost $20 to clean the apartment, and you and your roommates all want at least $50 to do it, then the efficient thing to do is to hire a maid.
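This outsourcing rule fits in a few lines of Python (prices invented for illustration):

```python
def who_cleans(bids, outside_price):
    """Outsourcing check sketched above: hire out whenever even the
    cheapest insider bid exceeds the market price for the same job.

    bids -- dict: housemate -> minimum payment they'd accept to clean
    outside_price -- what a maid would charge for the same work
    """
    insider = min(bids, key=bids.get)
    if bids[insider] > outside_price:
        return "maid", outside_price   # efficient to outsource
    return insider, bids[insider]

# Everyone wants at least $50, a maid charges $20: hire the maid.
print(who_cleans({"you": 50, "roomie1": 55, "roomie2": 60}, 20))  # → ('maid', 20)
```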
Comment author:Viliam_Bur
31 August 2013 10:44:37AM
5 points
[-]
The problem could be that they actually are willing to do it for $10, but it's a low-status thing to admit.
If we both lived in the same apartment, and we both pretended that our time is so precious that we are only willing to clean the apartment for $1000... and I do it 50% of the time, and you do it 50% of the time, in the end neither of us gets poor despite the unrealistic prices, because each of us gets all the money back.
Now when a third person comes along and cares about money more than about status (which is easier for them, because they don't live in the same apartment with us), our pretending is exposed and we become either more honest or poor.
Comment author:shminux
20 August 2013 06:26:36PM
1 point
[-]
I can see it working when all parties are trustworthy and committed to fairness, which is a high threshold to begin with. Also, everyone has to buy into the idea of other people being autonomous agents, with no shoulds attached. Still, this might run into trouble when one party badly wants something flatly unacceptable to the other, and so is unable to afford it and ends up feeling resentful.
One (unrelated) interesting quote:
my womb is worth about the cost of one graduate-level course at Columbia, assuming I’m interested in bearing your kid to begin with.
Comment author:maia
20 August 2013 06:14:14PM
3 points
[-]
Roger and I wrote a web app for exactly this purpose - dividing chores via auction. This has worked well for chore management for a house of 7 roommates, for about 6 months so far.
The feminism angle didn't even occur to us! It's just been really useful for dividing chores optimally.
I can see this working better than the default in a dysfunctional household, but if you're both in the habit of just doing things, this is going to make everything worse.
Comment author:dreeves
23 September 2013 01:37:26AM
1 point
[-]
Very fair point! Just like with Beeminder, if you're lucky enough to simply not suffer from akrasia then all the craziness with commitment devices is entirely superfluous. I liken it to literal myopia. If you don't have the problem then more power to you. If you do then apply the requisite technology to fix it (glasses, commitment devices, decision auctions).
But actually I think decision auctions are different. There's no such thing as not having the problem they solve. Preferences will conflict sometimes. It's just that normal people have perfectly adequate approximations (turn taking, feeling each other out, informal mental point systems, barter) to what we've formalized and nerded up with our decision auctions.
Comment author:Manfred
20 August 2013 05:16:54PM
*
10 points
[-]
Wasn't it Ariely's Predictably Irrational that went over market norms vs. tribe norms? If you just had ordinary people start doing this, I would guess it would crash and burn for the obvious market-norm reasons (the urge to game the system, basically). And some ew-squick power disparity stuff if this is ever enforced by a third party or even social pressure.
Comment author:maia
20 August 2013 06:16:35PM
2 points
[-]
Empirically speaking, this system has worked in our house (of 7 people, for about 6 months so far). What kind of gaming the system were you thinking of?
We do use social pressure: there is social pressure to do your contracted chores, and keep your chore point balance positive. This hasn't really created power disparities per se.
Comment author:Manfred
20 August 2013 08:54:50PM
2 points
[-]
What kind of gaming the system were you thinking of?
Yeah, bidding = deception. But in addition to someonewrong's answer, I was thinking you could just end up doing a shitty job at things (e.g. cleaning the bathroom). Which is to say, if this were an actual labor market, and not a method of communicating between people who like each other and have outside-the-market reasons to cooperate, the market doesn't have much competition.
Comment author:juliawise
13 October 2013 03:00:26PM
0 points
[-]
Except she specifies that if they're bidding above market wages for a task (cleaning the bathroom would work fine), they'll just pay someone else to do it. Of course, chores like getting up to deal with a sick child are not so outsourceable.
Comment author:maia
21 August 2013 02:42:28AM
1 point
[-]
Yeah, that's unfortunately not something we can really handle other than decreeing "Doing this chore entails doing X and it doesn't count if you don't do X." Enforcing the system isn't solved by the system itself.
a method of communicating between people who like each other and have outside-the-market reasons to cooperate
What kind of gaming the system were you thinking of?
If the idea is to say exactly how much you are willing to pay, there would be an incentive to:
1) Broadcast that you find all labor extra unpleasant and all goods extra valuable, to encourage people to bid high
2) Bid artificially lower values when you know someone enjoys a labor / doesn't mind parting with a good and will bid accordingly.
In short, optimal play would involve deception, and it happens to be a deception of the sort that might not be difficult to commit subconsciously. You might deceive yourself into thinking you find a chore unpleasant - I have read experimental evidence to support the notion that intrinsically rewarding tasks lose some of their appeal when paired with extrinsic rewards.
No comment on whether the traditional way is any better or worse - I think these two testimonials are sufficient evidence to make this worth trying for people who have a willing human tribe handy, despite the theoretical issues. After all,
we trust each other not to be cheats and jerks. That’s true love, baby
Edit: There is another, more pleasant problem: If you and I are engaged in trade, and I actually care about your utility function, that's going to affect the price. The whole point of this system is to communicate utility evenly after subtracting for the fact that you care about each other (otherwise why bother with a system?)
Concrete example: We are trying to transfer ownership of a computer monitor, and I'm willing to give it to you for free because I care about you. But if I were to take that into account, then we are essentially back to the traditional method. I'd have to attempt to conjure up the value at which I'd sell the monitor to someone I was neutral towards.
Of course, you could just use this as an argument stopper - whenever there is real disagreement, you use money to effect an easy compromise. But then there is monetary pressure to be argumentative and difficult, and social pressure not to be - it would be socially awkward and monetarily advantageous if you were constantly the one who had a problem with unmet needs.
Comment author:maia
21 August 2013 02:51:59AM
2 points
[-]
1) Broadcast that you find all labor extra unpleasant and all goods extra valuable, to encourage people to bid high
But if other people bid high, then you have to pay more. And they will know if you bid lower, because the auctions are public. How does this help you?
2) Bid artificially lower values when you know someone enjoys a labor / doesn't mind parting with a good and will bid accordingly.
I don't understand how this helps you either; if you bid lower and therefore win the auction, then you have to do the chore for less than you value it at. That's no fun.
The way our system works, it actually gives the lowest bidder, not their actual bid, but the second-lowest bid minus 1; that way you don't have to do bidding wars, and can more or less just bid what you value it at. It does create the issue that you mention - bid sniping: if you know what the lowest bidder will bid, you can bid just above it so they get as little as possible - but this is at the risk of having to actually do the chore for that little, because bids are binding.
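For concreteness, here's a minimal Python sketch of that pricing rule (bidder names and amounts invented):

```python
def reverse_second_price(bids):
    """Second-price reverse auction as described above: the lowest bidder
    does the chore, but is paid the second-lowest bid minus 1 rather than
    their own bid.

    bids -- dict: person -> minimum payment they'd accept for the chore
    """
    ranked = sorted(bids, key=bids.get)
    winner, runner_up = ranked[0], ranked[1]
    return winner, bids[runner_up] - 1

# The winner's payment depends only on the runner-up's bid, so shading
# your own bid below your true value can't raise what you're paid.
print(reverse_second_price({"alice": 5, "bob": 9, "carol": 12}))  # → ('alice', 8)
```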
I'd very much like to understand the issues you bring up, because if they are real problems, we might be able to take some stabs at solving them.
whenever there is real disagreement, you use money to effect an easy compromise.
This has become somewhat of a norm in our house. We can pass around chore points in exchange for rides to places and so forth; it's useful, because you can ask for favors without using up your social capital. (Just your chore points capital, which is easier to gain more of and more transparent.)
if you bid lower and therefore win the auction, then you have to do the chore for less than you value it at. That's no fun.
You only do this when you plan to be the buyer. The idea is to win the auction and become the buyer, but putting up as little money as possible. If you know that the other guy will do it for $5, you bid $6, even if you actually value it at $10. As you said, I'm talking about bid sniping.
But if other people bid high, then you have to pay more.
Ah, I should have written "broadcast that you find all labor extra unpleasant and all goods extra valuable when you are the seller (giving up a good or doing a labour) so that people pay you more to do it."
If you're willing to do a chore for -$10, but you broadcast that you find it worse than -$10 of unpleasantness, the other party will be influenced to bid higher - say, $40. Then, you can bid $30, and get paid more. It's just price inflation - in a traditional transaction, a seller wants the buyer to pay as much as they are willing to pay. To do this, the seller must artificially inflate the buyer's perception of how much the item is worth to the seller. The same holds true here.
When you intend to be the buyer you do the opposite - broadcast that you're willing to do the labor for cheap to lower prices, then bid snipe. As in a traditional transaction, the buyer wants the seller to believe that the item is not of much worth to the buyer. The buyer also has to try to guess the minimum amount that the seller will part with the item.
it actually gives the lowest bidder, not their actual bid, but the second lowest bid minus 1
So what I wrote above was assuming the price was a midpoint between the buyer's and seller's bid, which gives them both equal power to set the price. This rule slightly alters things, by putting all the price setting power in the buyer's hands.
Under this rule, after all the deceptive price inflation is said and done you should still bid an honest $10 if you are only playing once - though since this is an iterated case, you probably want to bid higher just to keep up appearances if you are trying to be deceptive.
One of the nice things about this rule is that there is no incentive to be deceptive unless other people are bid sniping. The weakness of this rule is that it creates a stronger incentive to bid snipe.
Price inflation (seller's strategy) and bid sniping (buyer's strategy) are the two basic forms of deception in this game. Your rule empowers the buyer to set the price, thereby making price inflation harder at the cost of making bid sniping easier. I don't think there is a way around this - it seems to be a general property of trading. Finding a way around it would probably solve some larger scale economic problems.
Comment author:rocurley
21 August 2013 07:36:18PM
2 points
[-]
(I'm one of the other users/devs of Choron)
There are two ways I know of that the market can try to defeat bid sniping, and one way a bidder can (that I know of).
Our system does not display the lowest bid, only the second-lowest bid. For a one-shot auction where you had poor information about the others' preferences, this would solve bid sniping. However, in our case, chores come up multiple times, and I'm pretty sure that it's public knowledge how much I bid on shopping, for example.
If you're in a situation where the lowest bid is hidden, but your bidding is predictable, you can sometimes bid higher than you normally would. This punishes people who bid less than they're willing to actually do the chore for, but imposes costs on you and the market as a whole as well, in the form of higher prices for the chore.
A third option, which we do not implement (credit to Richard for this idea), is to randomly award the auction to one of the two (or n) lowest bidders, with probability inversely related to their bid. In particular, if you pick between the lowest 2 bidders, both have claimed to be willing to do the job for the 2nd bidder's price (so the price isn't higher and no one can claim they were forced to do something for less than they wanted). This punishes bid-snipers by taking them at their word that they're willing to do the chore for the reduced price, at the cost of the determinism that allows better planning.
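A rough Python sketch of this randomized rule; "inversely related" is made concrete here as weight 1/bid, which is just one possible choice:

```python
import random

def randomized_reverse_auction(bids, rng=random.random):
    """Sketch of the randomized anti-sniping rule described above: either
    of the two lowest bidders may win, with probability proportional to
    1/bid (an assumed reading of 'inversely related'), and the winner is
    paid the second-lowest bid either way."""
    ranked = sorted(bids, key=bids.get)
    low, second = ranked[0], ranked[1]
    price = bids[second]                       # price never exceeds 2nd bid
    w_low, w_second = 1 / bids[low], 1 / bids[second]
    winner = low if rng() < w_low / (w_low + w_second) else second
    return winner, price

# A sniper who bids just above the honest low bidder now risks actually
# having to do the chore at that barely-reduced price.
print(randomized_reverse_auction({"honest": 30, "sniper": 31}))
```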
Plus, I think it doesn't work when there are only two players? If I honestly bid $30, and you bid $40 and randomly get awarded the auction, then I have to pay you $40. And that leaves me at -$10 disutility, since the task was only -$30 to me.
Comment author:rocurley
23 August 2013 03:44:30AM
0 points
[-]
To be sure I'm following you: If the 2nd bidder gets it (for the same price as the first bidder), the market efficiency is lost because the 2nd person is indifferent between winning and not, while the first would have liked to win it? If so, I think that's right.
If there are two players... I agree the first bidder is worse off than they would be if they had won. This seems like a special case of the above though: why is it more broken with 2 players?
Yes, that's one of the inefficiencies. The other inefficiency is that whenever the 2nd player wins, the service gets more expensive.
If there are two players... I agree the first bidder is worse off than they would be if they had won. This seems like a special case of the above though: why is it more broken with 2 players?
Because of the fact that the service gets more expensive. When there are multiple players, this might not seem like such a big deal - sure, you might pay more than the cheapest possible price, but you are still ultimately all benefiting (even if you aren't maximally benefiting). Small market inefficiencies are tolerable.
It's not so bad with 3 players who bid 20, 30, 40, since even if the 30-bidder wins, the other two players only have to pay 15 each. It's still inefficient, but it's not worse than no trade.
However, when your economy consists of two people, market inefficiency is felt more keenly. Consider the example I gave earlier once more:
I bid 30. You bid 40. So I can sell you my service for $30-$40, and we both benefit.
But wait! The coin flip makes you win the auction. So now I have to pay you $40.
My stated preference is that I would not be willing to pay more than $30 for this service. But I am forced to do so. The market inefficiency has not merely resulted in a sub-optimal outcome - it's actually worse than if I had not traded at all!
Edit: What's worse is that you can name any price. So suppose it's just us two, I bid $10 and you bid $100, and it goes to the second bidder...
Comment author:[deleted]
20 August 2013 04:00:09PM
2 points
[-]
Here's a question that's been distracting me for the last few hours, and I want to get it out of my head so I can think about something else.
You're walking down an alley after making a bank withdrawal of a small sum of money. Just about when you realize this may have been a mistake, two Muggers appear from either side of the alley, blocking trivial escapes.
Mugger A: "Hi there. Give me all of that money or I will inflict 3^^^3 disutility on your utility function."
Mugger B: "Hi there. Give me all of that money or I will inflict maximum disutility on your utility function."
You: "You're working together?"
Mugger A: "No, you're just really unlucky."
Mugger B: "Yeah, I don't know this guy."
You: "But I can't give both of you all of this money!"
Mugger A: "Tell you what. You're having a horrible day, so if you give me half your money, I'll give you a 50% chance of avoiding my 3^^^3 disutility. And if you give me a quarter of your money, I'll give you a 25% chance of avoiding my 3^^^3 disutility. Maybe the other Mugger will let you have the same kind of break. Sound good to you, other Mugger?"
Mugger B: "Works for me. Start paying."
You: "Do what, exactly?"
I can see at least 4 vaguely plausible answers:
Pay Mugger A: 3^^^3 disutility is likely going to be more than whatever you think your maximum is, and you want to be as likely as possible to avoid that. You'll just have to try to resist/escape from Mugger B (unless he's just faking).
Pay Mugger B: Maximum disutility is, by its definition, greater than or equal to any other disutility - at least as bad as 3^^^3 - and has probably happened to at least a few people with utility functions (although probably NOT to a 3^^^3 extent), so it's a serious threat and you want to be as likely as possible to avoid that. You'll just have to try to resist/escape from Mugger A (unless he's just faking).
Pay both Muggers a split of the money: For example: If you pay half to each, and they're both telling the truth, you have a 25% chance of not getting either disutility and not having to resist/escape at all (unless one or both is faking, which may improve your odds.)
Don't Pay: This seems like it becomes generally less likely than in a normal Pascal's mugging since there are no clear escape routes, and you're outnumbered, so there is at least some real threat unless they're both faking.
The problem is, I can't seem to justify any of my vaguely plausible answers to this conundrum well enough to stop thinking about it. Which makes me wonder if the question is ill-formed in some way.
Comment author:Armok_GoB
27 August 2013 08:33:08PM
1 point
[-]
Give it all to mugger B, obviously. I almost certainly am experiencing -3^^^3 utilons according to almost any measure every millisecond anyway, given I live in a Big World.
Comment author:Emile
20 August 2013 05:21:31PM
*
5 points
[-]
I may be fighting the hypothetical here, but ...
If utility is unbounded, maximum disutility is undefined, and if it's bounded, then 3^^^3 is by definition smaller than the maximum so you should pay all to mugger B.
Pay both Muggers a split of the money: For example: If you pay half to each, and they're both telling the truth, you have a 25% chance of not getting either disutility and not having to resist/escape at all (unless one or both is faking, which may improve your odds.)
I think trading a 10% chance of utility A for a 10% chance of utility B, with B < A, is irrational per the definition of utility (as far as I understand; you can have diminishing marginal utility on money, but not diminishing marginal utility on utility. I'm less sure about risk aversion, though.)
That's not fighting the hypothetical. Fighting the hypothetical is first paying one, then telling the other you'll go back to the bank to pay him too. Or pulling out your kung fu skills, which is really fighting the hypothetical.
If you have some concept of "3^^^3 disutility" as a tractable measure of units of disutility, it seems unlikely you don't also have a reasonable idea of the upper and lower bounds of your utility function. If the values are known this becomes trivial to solve.
I am becoming increasingly convinced that VNM-utility is a poor tool for ad-hoc decision-theoretics, not because of dubious assumptions or inapplicability, but because finding corner-cases where it appears to break down is somehow ridiculously appealing.
Comment author:Khoth
20 August 2013 04:25:10PM
*
3 points
[-]
If they're both telling the truth: since B gives maximum disutility, being mugged by both is no worse than being mugged by B. If you think your maximum disutility is X*3^^^3, I think if you run the numbers you should give a fraction X/2 to B, and the rest to A. (or all to B if X>2)
If they might be lying, you should probably ignore them. Or pay B, whose threat is more credible if you don't think your utility function goes as far as 3^^^3 (although, what scale? Maybe a dust speck is 3^^^^3)
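The X/2 split checks out numerically under the assumptions implicit above: paying a mugger a fraction of the money buys that probability of escaping his threat, the two threats land independently, and being hit by both is no worse than being hit by B alone (since B's threat is the maximum). A quick Python check:

```python
def expected_disutility(b, X):
    """Expected disutility, in units of 3^^^3, of giving fraction b of the
    money to mugger B and the rest to mugger A, where B's 'maximum' threat
    is X * 3^^^3 with X >= 1."""
    a = 1.0 - b            # fraction given to mugger A
    p_hit_A = 1.0 - a      # paying fraction a buys probability a of escaping A
    p_hit_B = 1.0 - b      # likewise for B
    # B's threat is the maximum, so when it lands it dominates A's entirely.
    return p_hit_B * X + (1.0 - p_hit_B) * p_hit_A * 1.0

def best_split(X, steps=10001):
    """Grid-search the fraction to give B; analytically this is min(X/2, 1)."""
    return min((i / (steps - 1) for i in range(steps)),
               key=lambda b: expected_disutility(b, X))

print(best_split(1.2))   # optimum fraction to B: X/2 = 0.6
print(best_split(3.0))   # X/2 > 1, so give everything to B
```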
I don't understand the graph in Stephen Hsu on Cognitive Genomics - help?
So to first quote Hsu's description:
This graph displays the number of GWAS hits versus sample size for height, BMI, etc. Once the minimal sample size to discover the alleles of largest impact (large MAF, large effect size) is exceeded, one generally expects a steady accumulation of new hits at lower MAF / effect size. I expect the same sort of progress for g. (MAF = Minor Allele Frequency. Variants that are common in the population are easier to detect than rare variants.)
We can't predict the sample size required to obtain most of the additive variance for g (this depends on the details of the distribution of alleles), but I would guess that about a million genotypes together with associated g scores will suffice. When, exactly, we will reach this sample size is unclear, but I think most of the difficulty is in obtaining the phenotype data. Within a few years, over a million people will have been genotyped, but probably we will only have g scores for a small fraction of the individuals.
I'll try to explain it in different terms. What you are looking at is a graph of 'results vs effort'. How much work do you have to do to get out some useful results? The importance of this is that it's showing you a visual version of statistical power analysis (introduction).
Ordinary power analysis is about examining the inherent zero-sum trade-offs of power vs sample size vs effect size vs statistical-significance, where you try to optimize each thing for one's particular purpose; so for example, you can choose to have a small (=cheap) sample size and a small Type I (false positives) error rate in detecting a small effect size - as long as you don't mind a huge Type II error rate (low power, false negative, failure to detect real effects).
If you look at my nootropics or sleep experiments, you'll see I do power analysis all the time as a way of understanding how big my experiments need to be before they are not worthlessly uninformative; if your sample size is too small, you simply won't observe anything, even if there really is an effect (eg. you might conclude, 'with such a small n as 23, at the predicted effect size and the usual alpha of 0.05, our power will be very low, like 10%, so the experiment would be a waste of time').
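For a rough sense of the numbers, here is a back-of-the-envelope power calculation using the normal approximation; the effect size d = 0.15 is an assumed illustrative value, not one taken from any actual experiment:

```python
from statistics import NormalDist

def approx_power(effect_size, n, alpha=0.05):
    """Normal-approximation power for a two-sided one-sample test:
    the probability of rejecting the null when the true standardized
    effect is effect_size and the sample size is n (the far rejection
    tail is negligible and ignored)."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(effect_size * n ** 0.5 - z_crit)

# With an assumed small effect of d = 0.15, n = 23 and alpha = 0.05
# give power of only about 10% - the 'waste of time' scenario above.
print(round(approx_power(0.15, 23), 2))
```

Raising n (or accepting a looser alpha) pulls the power up, which is exactly the zero-sum trade-off described earlier.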
Even though we know intelligence is very influenced by genes, you can't find 'the genes for intelligence' by looking at just 10 people - but how many do you need to look at?
In the case of the graph, the statistical-significance is hardwired & the effect sizes are all known to be small, and we ignore power, so that leaves two variables: sample size and number of null-rejection/findings. The graph shows us simply that as we get a larger sample, we can successfully find more associations (because we have more power to get a subtle genetic effect to pass our significance cutoffs). Simple enough. It's not news to anyone that the more data you collect, the more results you get.
What's useful here is that the slope of the points encodes the joint relationship of power & significance & effect size for genetic findings, so we can simply vary sample size and spit out an estimated number of findings. The intercept remains uncertain, though. What Hsu finds so important about this graph is that it lets us predict, for intelligence, how many hits we will get at any sample size once we have a datapoint, which then nails down a unique line. What's the datapoint? Well, he mentions the very interesting recent findings of ~3 associations - which happened at n=126k. So we plot this IQ datapoint, guessing roughly where it would go (please pardon my Paint usage):
OK, but how does that let Hsu predict anything? Well, the slope ought to be the same for future IQ findings, since the procedures are basically the same. So all we have to do is guess at the line, and anchor it on this new finding:
So if you want to know what we'll find at 200000 samples, you extend the line and it looks like we'll have ~10 SNPs at that point. Or, if you wanted to know when we'll have found 100 SNPs for intelligence, you simply continue extending the line until it reaches 100 on the y-axis, which apparently Hsu thinks will happen somewhere around 1000000 on the x-axis (which extends off the screen because no one has collected that big a sample yet for anything else, much less intelligence).
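To make the extrapolation concrete in code: a straight line in log-log space is a power law, anchored here at the ~3 hits at n = 126k mentioned above. The slope of 1.7 is an invented illustrative value, not one read off Hsu's actual graph, so only the anchor point is from the text:

```python
def predict_hits(n, n0=126_000, hits0=3, slope=1.7):
    """Extrapolate GWAS hit counts along an assumed straight line in
    log-log space: a line through (n0, hits0) with the given slope is
    the power law hits0 * (n / n0) ** slope.  The slope used here is a
    made-up placeholder for whatever the real graph's slope is."""
    return hits0 * (n / n0) ** slope

# With this (hypothetical) slope, extending the line to n = 1,000,000
# lands in the ~100-hit ballpark Hsu describes.
print(predict_hits(200_000), predict_hits(1_000_000))
```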
I hope that helps; if you don't understand power, it might help to look at my own little analyses where the problem is usually much simpler.
Many thanks for this!
So in broad strokes: the smaller a correlation is, the more samples you're going to need to detect it, so the more samples you take, the more correlations you can detect. For five different human variables, this graph shows number of samples against number of correlations detected with them on a log/log scale; from that we infer that a similar slope is likely for intelligence, and so we can use it to take a guess at how many samples we'll need to find some number of SNPs for intelligence. Am I handwaving in the right direction?
Yes, although I'd phrase this more as 'the more samples you take, the bigger your "budget", which you can then spend on better estimates of a single variable or if you prefer, acceptable-quality estimates of several variables'.
Which one you want depends on what you're doing. Sometimes you want one variable, other times you want more than one variable. In my self-experiments, I tend to spend my entire budget on getting good power on detecting changes in a single variable (but I could have spent my data budget in several ways: on smaller alphas or smaller effect sizes or detecting changes to multiple variables). Genomics studies like these, however, aren't interested so much in singling out any particular gene and studying it in close detail, but finding 'any relevant gene at all and as many as possible'.
And there's a "budget" because if you "double-spend", you end up with the XKCD green acne jelly beans?
Eh, I'm not sure the idea of 'double-spending' really applies here. In the multiple comparisons case, you're spending all your budget on detecting the observed effect size and getting high-power/reducing-Type-II-errors (if there's an effect lurking there, you'll find it!), but you then can't buy as much Type I error reduction as you want.
This could be fine in some applications. For example, when I'm A/B testing visual changes to gwern.net, I don't care if I commit a Type I error, because if I replace one doohickey with another doohickey and they work equally well (the null hypothesis), all I've lost is a little time. I'm worried about coming up with an improvement, testing the improvement, and mistakenly believing it isn't an improvement when actually it is.
The problem with multiple comparisons comes when people don't realize they've used up their budget and they believe they really have controlled alpha errors at 5% or whatever. When they think they've had their cake & eaten it too.
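A minimal sketch of the exhausted budget, using the XKCD comic's twenty jelly-bean colors: run k independent tests each at a per-test alpha of 5%, and the familywise chance of at least one false positive balloons.

```python
alpha = 0.05  # per-test Type I error rate

# Familywise error rate: P(at least one false positive) across k independent
# tests of true nulls is 1 - (1 - alpha)^k.
for k in (1, 5, 20):
    fwer = 1 - (1 - alpha) ** k
    print(k, round(fwer, 3))  # at k=20 (the jelly-bean case), roughly 0.64
```

So by the twentieth jelly-bean color your "5%" alpha budget has quietly inflated to something like a coin flip, which is exactly the having-your-cake-and-eating-it problem.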
I guess a better financial analogy would be more like "you spend all your money on the new laptop you need for work, but not having checked your bank account balance, promise to take your friends out for dinner tomorrow"?
I am a bit confused -- is the framework for this thread observation (where the number of samples is pretty much the only thing you can affect pre-analysis) or experiment design (where you can greatly affect which data you collect)?
I ask because I'm intrigued by the idea of trading off Type I errors against Type II errors, but I'm not sure it's possible in the observation context without introducing bias.
I'm not sure about this observation vs experiment design dichotomy you're thinking of. I think of power analysis as something which can be done both before an experiment to design it and understand what the data could tell one, and post hoc, to understand why you did or did not get a result and to estimate things for designing the next experiment.
Well, I think of statistical power as the ability to distinguish signal from noise. If you expect signal of a particular strength you need to find ways to reduce the noise floor to below that strength (typically through increasing sample size).
However my standard way of thinking about this is: we have data, we build a model, we evaluate how good the model output is. Building a model, say, via some sort of maximum likelihood, gives you "the" fitted model with specific chances to commit a Type I or a Type II error. But can you trade off chances of Type I errors against chances of Type II errors other than through crudely adding bias to the model output?
Model-building seems like a separate topic. Power analysis is for particular approaches, where I certainly can trade off Type I against Type II. Here's a simple example for a two-group t-test, where I accept a higher Type I error rate and immediately see my Type II go down (power go up):
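The code output for this example isn't preserved in this copy. As a rough reconstruction, here's a stdlib-Python sketch using the normal approximation to two-sample t-test power, with an assumed n=40 per group and effect size d=0.5; these assumed inputs land near the quoted figures, though an exact t-based calculation (e.g. R's power.t.test) would differ slightly.

```python
import math

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def inv_phi(p, lo=-10.0, hi=10.0):
    """Inverse of phi by bisection (phi is monotone increasing)."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def power_two_sample(n, d, alpha):
    """Approximate power of a two-sided two-sample t-test with n per group
    and standardized effect size d, using the normal approximation."""
    z_crit = inv_phi(1 - alpha / 2)
    ncp = d * math.sqrt(n / 2)  # noncentrality parameter for equal groups
    return 1 - phi(z_crit - ncp)

print(power_two_sample(40, 0.5, 0.05))  # ~0.61 (exact t-test: ~0.60)
print(power_two_sample(40, 0.5, 0.10))  # ~0.72
```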
In exchange for accepting 10% Type I rather than 5%, I see my Type II fall from 1-0.60=40% to 1-0.72=28%. Tada, I have traded off errors and as far as I know, the t-test remains exactly as unbiased as it ever was.
Um... In the HPMOR notes section, this little thing got mentioned.
"I am auctioning off A Day Of My Time, to do with as the buyer pleases – this could include delivering a talk at your company, advising on your fiction novel in progress, applying advanced rationality skillz to a problem which is tying your brain in knots, or confiding the secret answer to the hard problem of conscious experience (it’s not as exciting as it sounds). I retain the right to refuse bids which would violate my ethics or aesthetics. Disposition of funds as above."
That sounds like really exciting news to me, TBH. Someone seriously needs to bid. There are less than 7 hours left and nobody has taken him up on the offer.
Well, keep in mind that Eliezer himself claims that "it's not as exciting as it sounds".
And of course you always need to have in mind that what Eliezer considers to be "the secret answer to the hard problem of conscious experience" may not be as satisfying an answer to you as it is to him.
After all, some people think that the non-secret answer to the hard problem of conscious experience is something like "consciousness is what an algorithm feels like from the inside" and this is quite non-satisfactory to me (and I think it was non-satisfactory to Eliezer too).
(And also, I think the bidding started at something like $4000.)
I got excited for the fraction of a second it took me to remember that everyone who could possibly want to bid could probably afford to spend more money than I have to my name on this without it cutting into their living expenses. Unless my plan was "Bid $900, hope no one outbids, ask Eliezer to get me a job as quickly as possible", which isn't really that exciting a category, however useful.
I might have bid on that, but the auction is already over.
So, are $POORETHNICGROUP so poor, badly off and socially failed because they are about 15 IQ points stupider than $RICHETHNICGROUP? No, it may be the other way around: poverty directly loses you around 15 IQ points on average.
Or so says Anandi Mani et al. "Poverty Impedes Cognitive Function" Science 341, 976 (2013); DOI: 10.1126/science.1238041. A PDF while it lasts (from the nice person with the candy on /r/scholar) and the newspaper article I first spotted it in. The authors have written quite a lot of papers on this subject.
So, I totally buy the "cognitive load decreases intellectual performance, both in life and on IQ tests" claim. This is very well replicated, and has immediate personal implications (don't try to remember everything, write it all down; try to minimize sources of stress in your life; try to think about as few projects at a time as possible).
I don't think it's valid to say "instead of A->B, it's B->A," or see this as a complete explanation, because the ~13 point drop is only present in times of financial stress. Take standardized school tests, and suppose that half of the minority students are under immediate financial stress (their parents just got a hefty car repair bill) and the other half aren't (the 'easy' condition in the test), whereas none of the majority students are under immediate financial stress. Then we should expect the minority students to be, on average, 6.5 points lower, but what we see is a gap of 15 points.
It's also plausible that the differentiator between people is their reaction to stress--I know a lot of high-powered managers and engineers under significant stress at work, who lose much less than a standard deviation of their ability to make good decisions and focus on other things and so on. Some people even seem to perform better under stress, but it's hard to separate out the difference between motivation and fluid intelligence there.
Being poor means living a life of stress, financial and social. John Scalzi attempts to explain it. John Cheese has excellent ha-ha-only-serious stuff on Cracked on the subject too.
I wasn't meaning to put forward a study as settled science, of course; but I think it's interesting, and that they have a pile of other studies showing similar stuff. Now it's replication time.
Then why, during the experiment, did the poor participants and the rich participants have comparable scores when presented with a hypothetical easy financial challenge (a repair of $150)?
The claim the paper makes is that there are temporary challenges which lower cognitive functionality, that are easier to induce in the poor than the rich. If you expect that those challenges are more likely to occur to the poor than the rich (which seems reasonable to me), then this should explain some part of the effect- but isn't on all the time, or the experiment wouldn't have come out the way it did.
While I have my doubts about the replicability of any social science article that made it into Science, the interpretation concerns here are assuming the effect the paper saw is entirely real and at the strength they reported.
The biggest problem I have with racists claiming racial realism is this.
The really interesting thing is that you see results from all over the world showing this. Catholics in Northern Ireland in the 1970s measuring 15 points lower than Protestants. Burakumin in Japan measuring 15 points lower than non-Burakumin. SAME GENE POOL. This strongly suggests you get at least 15 points really easily just from social factors, and these studies may (because a study isn't solid science yet, not even a string of studies from the same group) point to one reason.
That's not obvious. Remember, there were strong taboos against interbreeding with Burakumin in Japan.
They separated only a few hundred years ago.
Could be interesting to know how much of that is the status directly, and how much is better nutrition and medical care.
The racists claim that this is irrelevant because of research that corrects for socioeconomic status and still finds IQ differences. Of course, researchers have found plenty of evidence of important environmental influences on IQ not measured by SES. It seems especially bad for the racial realist hypothesis that people who, for example, identify as "black" in America have the same IQ disadvantage compared to whites whether their ancestry is 4% European or 40% European; how much African vs. European ancestry someone has seems to matter only indirectly to the IQ effects, which seem to directly follow whichever artificial simplified category someone is identified as belonging to.
I've seen mixed reports on this. Human Varieties, for example, has a series of posts on colorism which finds a relationship between skin color and intelligence in the population of African Americans, as predicted by both the hereditarian and "colorist" (i.e. discrimination) theories, but does not find a relationship between skin color and intelligence within families (as predicted by the hereditarian but not the colorist theory), and I know there were studies using blood type which didn't support the hereditarian theory but appear to have been too weakly designed to do that even if hereditarianism were true. Are you aware of any studies that actually look at genetic ancestry and compare it to IQ? (Self-reported ancestry would still be informative, but not as accurate.)
It's because Europeans are 4% Neanderthal and partake of the Neanderthals' larger brains, and Africans aren't. </completelyspuriousjustsostory>
There is large enough variance in Neanderthal ancestry among Europeans that we might actually be able to see differences within the European population (and then extrapolate those to guess how much of the European-African gap that explains). I seem to recall seeing some preliminary reports on this, but I can't find them right now so I'm not confident they were evidence-driven instead of theory-driven.
Not completely serious, just wondering about possible implications, for sake of munchkinism:
Would it be possible to invent some new color, for example "purple", so that identifying with that color would increase someone's IQ?
I guess it would first require the rest of the society accepting the superiority (at least in intelligence) of the purple people, and their purpleness being easy to identify and difficult for others to fake. (Possible to achieve with some genetic manipulation.)
Also, could this mechanism possibly explain the higher intelligence of Jews? I mean, if we stopped suspecting them of making international conspiracies and secretly ruling the world (which obviously requires a lot of intelligence), would their IQs consequently drop to the average level?
Also... what about Asians? Is it the popularity of anime that increases their IQ, or what?
Unfortunately, while we know there are lots of environmental factors that affect IQ, we mostly don't know the details well enough to be sure of very much, or to have much idea how to manipulate it. However, as I understand it, some research has suggested that there are interesting cultural similarities between Jews in most of the world and Chinese who don't live in China, and that the IQ advantage of Chinese is primarily among Chinese who don't live in China, so something in common between how the Chinese and Jewish cultures deal with being minority outsiders may explain part of why both show unusually high IQs when they are minority outsiders (and could explain a lot of East Asians generally; considering how enormous the cultural influence of China has been in the region, it would not be terribly surprising if many other East Asian groups had acquired whatever the relevant factor is).
This paper by Ogbu and Simons discusses some of the theories about groups that do poorly (the "involuntary" or "caste-like" minorities). Unfortunately I couldn't find a citation for any discussion of differences between voluntary minorities which would explain why some voluntary minorities outperform rather than merely equalling the majority, apart from Ned Block's passing reference to a culture of "self-respect" in his review of The Bell Curve.
It's been done - many people do in fact self-identify as 'Indigo children', 'Indigos' or even 'Brights'. The label tends to come with a broadly humanistic and strongly irreligious worldview, but many of them are in fact highly committed to some form of spirituality and mysticism: indeed, they credit these perhaps unusual convictions for their increased intelligence and, more broadly, their highly developed intuition.
Ah, "Brights" is Dawkins and Dennett's terrible word for atheists; "Indigos" is completely insane and incoherent new-age nonsense about allegedly superpowered children. How did you conflate the two?
I lost an AI box experiment against PatrickRobotham with me as the AI today on irc. If anyone else wants to play against me then PM me here or contact me on #lesswrong.
Do we still keep up with those secrecy shenanigans even when no MIRI employees were involved, or can you share some details?
I don't share details because subsequent games will be less fun and because if I am using dick moves I don't want people to know how much of a dick I am.
I enjoyed this non-technical piece about the life of Kolmogorov - responsible for a commonly used measure of complexity, as well as several now-conventional conceptions of probability. I wanted to share: http://nautil.us/issue/4/the-unlikely/the-man-who-invented-modern-probability
I wonder if it makes sense to have something like a registry of the LW regulars who are experts in certain areas. For example, this forum has a number of trained mathematicians, philosophers, computer scientists...
Something like a table containing [nick, general area, training/credentials, area of interest, additional info (e.g. personal site)], maybe?
On a wiki page. Allowing anyone to opt out.
The first step would be to gather data... probably in an article made for this purpose... or in a fresh open thread.
Creuncf gur fyvc bs cncre ybbxrq fbzrguvat yvxr guvf. (Qrfvtavat na nzovtenz jbhyq or nanybtbhf gb svaqvat zhygvcyr zrffntrf jvgu gur fnzr unfu.)
Gung'q arire jbex sbe n frpbaq ba n uhzna. V qba'g guvax V'ir frra nal nzovtenzf juvpu ner fb fzbbgu gung lbh pbhyq frr rvgure bar onfrq ba n cevzr jvgubhg abgvat gung gur jevgvat vf irel bqq. V pna'g rira ernq nal bs gung nzovtenz rkprcg sbe 'fcevat', fgenvavat uneq.
Gung cnegvphyne nzovtenz, fher. (Vg'f nyfb qvssvphyg gb svaq zhygvcyr zrffntrf jvgu gur fnzr unfu.) Ohg Qreera Oebja hfrq guvf nzovtenz va uvf 2007 frevrf "Gevpx be Gerng" jvgu ng yrnfg gur nccrnenapr bs fhpprff (gubhtu nf nyjnlf jvgu Oebja, vg'f cbffvoyr ur jnf sbbyvat hf engure guna gur cnegvpvcnag).
Thanks for all the poll submissions. I decided since I just finished Umineko, this is a good time to analyze the 49 responses.
The gist is that the direction seems to be as predicted and the effect size reasonable (odds-ratio of 1.77), but not big enough to yield any impressive level of statistical-significance (p=0.24):
Or if you prefer, a linear regression:
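The analysis output isn't preserved in this copy. For illustration only, here is how an odds ratio falls out of a 2x2 table, with hypothetical counts chosen to land near the reported 1.77; these are not the actual poll responses.

```python
# Hypothetical 2x2 table (two groups x yes/no outcome), chosen so the
# odds ratio comes out near the reported 1.77 -- NOT the real poll data.
a, b = 23, 13   # group 1: yes, no
c, d = 10, 10   # group 2: yes, no

# Odds ratio: ratio of the two groups' odds of a "yes" outcome.
odds_ratio = (a / b) / (c / d)
print(round(odds_ratio, 2))  # ~1.77
```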
I'm rather alarmed at how many people appear to have said they're very sure they know how he did it, on (I assume, but I think it's pretty clear) the basis of having thought of one very credible way he could have done it.
I'm going to be optimistic and suppose that all those people thought something like "Although gwern asked how sure we are that we know how it was done, context suggests that the puzzle is really 'find a way to do it' rather than 'identify the specific way used in this case', so I'll say 'very' even though for all I know there could be other ways'.
(For what it's worth, I pedantically chose the "middle" option for that question, but I found the same obvious solution as everyone else.)
In the case of Umineko, there's not really any difference between 'find a way' and 'find the way', since it adheres to a relativistic Schrodinger's-cat-inspired epistemology where all that matters is successfully explaining the observed evidence. So I don't expect the infelicitous wording to make a difference.
Ah, OK. I wasn't aware of that bit of context. Thanks.
As it turns out, there's a second possible way using a detail I didn't bother to mention (because I assumed it was a red herring and not as satisfactory a solution anyway):
Angfhuv npghnyyl fnlf fur'f arire rire gbyq nalbar ure snibevgr frnfba rkprcg sbe gur srznyr freinag Funaaba lrnef ntb, naq guvaxf nobhg jurgure Funaaba pbhyq or pbafcvevat jvgu gur lbhat znyr pnyyre. Rkprcg Funaaba vf n ebyr cynlrq ol gur traqre-pbashfrq pebffqerffvat phycevg Lnfh (nybat jvgu gur ebyrf bs Xnaba & Orngevpr), fb gur thrff pbhyq unir orra onfrq ba abguvat ohg ure zrzbel bs orvat gbyq gung.
Crefbanyyl, rira vs V jnf va fhpu n cbfvgvba, V jbhyq fgvyy cersre hfvat gur pneq gevpx: jul pbhyqa'g Angfhuv unir punatrq ure zvaq bire gur lrnef? Be abg orra frevbhf va gur svefg cynpr? Be Funaaba unir zvferzrzorerq? rgp
Zhygvcyr ovgf bs cncre, boivbhfyl.
Gurer jrer abgrf sbe rnpu bs gur sbhe frnfbaf uvqqra va qvssrerag cynprf nebhaq gur ebbz. Gur pnyyre fvzcyl ersreerq ure gb gur uvqvat-cynpr bs gur abgr gung zngpurq ure nafjre.
Zl svefg gubhtug ba ernqvat gur ceboyrz - juvpu fgvyy frrzf yvxr zl org thrff, ba ersyrpgvba, gubhtu.
Qvqa'g ibgr ba gur "ubj fher ner lbh", orpnhfr V'z ab ybatre fher ubj fher V nz - V'z hasnzvyvne jvgu gur fubj, naq gur ersrerapr gb pelcgbtencul fhttrfgf fbzr bgure fbyhgvba (V'z snzvyvne jvgu ehqvzragnel zntvp gevpxf, juvpu vf cebonoyl jurer ZL fbyhgvba pbzrf sebz.) Ohg V pregnvayl qba'g unir "ab vqrn" ubj vg jnf qbar.
V pna guvax bs guerr jnlf bs qbvat guvf gevpx.
Ur uvq sbhe fyvcf bs cncre, bar sbe rnpu frnfba. Cerfhznoyl ur jvyy erzbir gur bgure guerr ng gur svefg bccbeghavgl.
Ur unf qbar fbzr erfrnepu gb qvfpbire fbzr snpg nobhg ure gb hfr va uvf qrzbafgengvba.
Fur unf hfrq ure snibevgr frnfba nf gur nafjre gb n frphevgl dhrfgvba ba n jro fvgr gung ur unf nqzva-yriry npprff gb.
Gurer znl or bgure jnlf. Jvgu fb znal, V pnaabg or irel fher gung nal fvatyr bar gung V pubbfr vf evtug.
Posted before I read other replies:
V fhfcrpg gurer ner sbhe fyvcf bs cncre va qvssrerag cnegf bs ure ebbz. Naq vs ur pbhyq farnx gurz va, gura gurer'f n ernfbanoyr punapr ur pna farnx gur guerr fyvcf ersreevat gb aba-jvagre frnfbaf bhg orsber fur svaqf gurz.
Yvxr frireny bs gur bgure pbzzragref V dhvpxyl fnj ubj guvf pbhyq or qbar jvgu onfvp fgntr zntvp, ohg qrfcvgr orvat snveyl snzvyvne jvgu pelcgb V qvqa'g vzzrqvngryl znxr gur pbaarpgvba gb pelcgb hagvy V fnj lbhe pbzzrag ba unfu cer-pbzzvgzragf. Univat n fvatyr pnabavpny yvfg bs lbhe cer-pbzzvgzragf. choyvfurq va nqinapr jbhyq frrz gb fbyir cngpu guvf fcrpvsvp irarenovyvgl.
V cnggrea zngpurq zl vqrn bs gur fbyhgvba gb gur onfvp fgntr zntvp gevpx bs univat znal uvqqra bcgvbaf naq znxvat gur znex guvax lbh bayl unq gur bar lbh fubjrq gurz, abg pelcgbtencul.
Mentally subtract my vote from "No idea" onto "Very" since apparently I can read poll answers better than poll questions.
Guvf "chmmyr" frrzf rnfl gb na rkgerzr, gb zr ng yrnfg. Gur gevivny fbyhgvba jbhyq or gb uvqr nyy gur cbffvoyr nafjref va qvssrerag cynprf, naq bayl gryy ure gb ybbx va gur cynpr jurer ur uvq gur nafjre ur trgf gbyq vf pbeerpg. (Va guvf pnfr, haqre gur pybpx.)
My thought was the same as palladias'. I'm not seeing an obvious way involving cryptography though, but I am somewhat familiar with it (I understand RSA and its proof).
Zl crefbany guvaxvat jnf "Bar bs gur rnfvrfg jnlf gb purng n pelcgbtencuvp unfu cerpbzzvgzrag vf gb znxr zhygvcyr fhpu unfurf naq fryrpgviryl erirny n fcrpvsvp bar nf nccebcevngr; gur punenpgre unf irevsvnoyl cerpbzzvggrq gb n cnegvphyne cerqvpgvba bs 'jvagre', ohg unf ur irevsvnoyl cerpbzvggrq gb bayl bar cerqvpgvba?"
(Nqzvggrqyl V unir orra guvaxvat nobhg unfu cerpbzzvgzragf zber guna hfhny orpnhfr V unir n ybat-grez cebwrpg jubfr pbapyhfvba vaibyirf unfu cerpbzzvgzragf naq V qba'g jnag gb zvfhfr gurz be yrnir crbcyr ebbz sbe bowrpgvba.)
Bs pbhefr, erirnyvat n unfu nsgre gur snpg cebirf abguvat, rira vs vg'f irevsvnoyl gvzrfgnzcrq. Nabgure cbffvoyr gevpx vf gb fraq n qvssrerag cerqvpgvba gb qvssrerag tebhcf bs crbcyr fb gung ng yrnfg bar tebhc jvyy frr lbhe cerqvpgvba pbzr gehr. V qba'g xabj bs na rnfl jnl nebhaq gung vs gur tebhcf qba'g pbzzhavpngr.
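For readers following the (rot13'd) hash-precommitment tangent, a minimal sketch of the idea, and of the selective-reveal loophole that publishing a single canonical list of commitments in advance would close. The strings and nonce below are illustrative, not from any actual protocol.

```python
import hashlib

def commit(prediction: str, nonce: str) -> str:
    """Publish only this digest now; reveal prediction+nonce later to
    prove you committed to the prediction in advance."""
    return hashlib.sha256((prediction + ":" + nonce).encode()).hexdigest()

def verify(prediction: str, nonce: str, digest: str) -> bool:
    return commit(prediction, nonce) == digest

# Honest protocol: one commitment, published before the event.
digest = commit("winter", "s3cret-nonce")

# The cheat: commit to every option privately, then reveal only the
# digest matching whatever actually happened. A single canonical
# published list of all your commitments closes this loophole.
cheats = {s: commit(s, "s3cret-nonce") for s in ("spring", "summer", "autumn", "winter")}

print(verify("winter", "s3cret-nonce", digest))  # True
print(verify("summer", "s3cret-nonce", digest))  # False
```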
Guvf vf irel yvxr gur sbbgonyy cvpxf fpnz.
V qvqa'g guvax ng nyy nobhg unfurf (naq V qba'g unir zhpu rkcrevrapr jvgu gurz rkprcg n ovg bs gurbel). V whfg ena 'jung jbhyq V qb jvgu npprff gb gur ebbz nurnq bs gvzr naq jung qb V xabj?' naq bhg cbccrq sbhe furrgf bs cncre.
Cerqvpgvba: Ur chg sbhe fyvcf bs cncre va gur ebbz (r.t. pybpx, grqql orne, fubr, cntr # bs grkgobbx), naq pubfr juvpu bowrpg gb qverpg ure gb onfrq ba ure erfcbafr. Ur'f unir gb erzbir gur bgure guerr fbbavfu, ohg ur boivbhfyl unq npprff bapr, naq vs gurl'er nyy va fhssvpvragyl bofpher cynprf, vg jbhyq or cerggl rnfl
V'z abg fher V jbhyq unir pnyyrq guvf n sbez bs pelcgbtencul jrer V hacevzrq, ohg jvgu bayl sbhe cbffvoyr nafjref ur whfg unf gb cvpx sbhe uvqvat cynprf naq gryy ure gb ybbx va gur evtug bar, evtug?
I have never consciously noticed a dust speck going into my eye; at least, I don't remember it. That means it didn't make a big enough impact on my mind to leave a lasting impression in my memory. When I first read the post about dust specks and torture, I had to think hard about wtf a speck going into your eye even means.
Does this mean that I should attribute zero negative utility to dust speck going into my eye?
You could consider the analogous problem of waking up during surgery & then forgetting it afterwards.
The dust speck is just a symbol for the smallest negative utility unit. Just imagine something else.
Oh, I was already aware of that (and this is not just hindsight bias, I remember reading about this today and someone suggested replacing the speck with the smallest actual negative utility unit). This isn't really about the original question anyway. I was just thinking if something that doesn't even register on a conscious level could have negative utility.
I guess anything with a negative cumulative effect.
Imagine the dust specks piling in your eye until they start to interfere with your vision.
Well, yes, but it's one dust speck per person...
And it's entirely possible that utility of dust speck isn't additive. In fact, it's trivially so: one dust speck is fine, a few trillion will do gruesome things to your head.
I'm now thinking of developing a Dust Speck Machine Gun. Or Shotgun, possibly.
Well, I don't see how anything that never registers on any level can have any utility.
But... I dunno. Something that lowers your IQ by 1 point may be something you will never discover, and yet it will cause you negative utility...
This is unrelated to rationality, but I'm posting it here in case someone decides it serves their goals to help me be more effective in mine.
I recently bought a computer, used it for a while, then decided I didn't want it. What's the simplest way to securely wipe the hard drive before returning it? Is it necessary to create an external boot volume (via USB or optical disc)?
Probably use dban.
How should I answer this dialog? The help link at the bottom was unhelpful.
I used the second option, but it would surprise me if it didn't work either way.
Seems to have worked; thanks.
Thanks; I'll try it. (I should have mentioned that it was a Windows 8 PC, but your link mentions working under Windows, so thanks again.)
It doesn't work under any operating system, it has its own very simple OS on the CD.
Good point; not sure what I was thinking. I could have said something about the CPU and BIOS(?), but for now I'll just see if it works.
(Edit: seems to have worked; thanks.)
What if this were a video game? A way of becoming more strategic.
I don't suppose there's any regularly scheduled LW meetups in San Diego, is there? I'll be there this week from Saturday to Wednesday for a conference.
This essay on internet forum behavior by the people behind Discourse is the greatest thing I've seen in the genre in the past two or three years. It rivals even some of the epic examples of wikipedian rule-lawyering that I've witnessed.
Their aggregation of common internet forum rules could have been done by anyone, but it was ultimately they that did it. My confidence in Discourse's success has improved.
"Don't be a dick" is now "Wheaton's law"? Pfeh!
How can I apply rationality to business?
I find the idea of commitment devices strongly aversive. If I change my mind about doing something in the future, I want to be able to do whatever I choose to do, and don't want my past self to create negative repercussions for me if I change my mind.
I think one of my very favorite things about commenting on Lesswrong is that usually when you make a short statement or ask a question people will just respond to what you said rather than taking it as a sign to attack what they think that question implies is your tribe.
Has anyone done a good analysis on the expected value of purchasing health insurance? I will need to purchase health insurance when I turn 26. How comprehensive should the insurance I purchase be?
At first I thought I should purchase a high-deductible plan that only protects against catastrophes. I have low living expenses and considerable savings, so this wouldn't be risky. The logic here is that insurance costs the expected value of the goods provided plus overhead, so the cost of insurance will always be more than its expected value. If I purchase less insurance, I waste less money on overhead.
On the other hand, there's a tax break for purchasing health insurance, and soon there will be subsidies as well. Also, insurance companies can reduce the cost of health care by negotiating lower prices for you. So the insurance company will pay less than the person who pays out of pocket. All these together might outweigh money wasted on overhead.
On the third hand, I'm a young healthy male. Under the ACA, my insurance premiums will be inflated so that old, sick, and female persons can have lower premiums. The money that's being transferred to these groups won't be spent on me, so it reduces the expected value of my insurance.
Has anyone added all these effects up? Would you recommend I purchase skimpy insurance or comprehensive?
"Also, insurance companies can reduce the cost of health care by negotiating lower prices for you. "
This is the case even with a high-deductible plan. The insurance will have a different rate when you use an in-network doctor or hospital service. If you haven't met the deductible and you go in, they'll send you a bill, but that bill will still be much cheaper than if you had gone in and paid out of pocket (like paying less than half).
But make sure that the high-deductible plan actually has a cheaper monthly payment by an amount that matters. With new regulations of what must be covered, the differences between plans may not end up being very big.
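The trade-off in the question above can be sketched as a toy expected-cost model. Every number below (premiums, deductibles, coinsurance, the cost distribution) is hypothetical, not a real plan quote:

```python
# Hypothetical annual medical-cost distribution: (probability, billed cost).
outcomes = [(0.60, 0), (0.30, 2_000), (0.09, 10_000), (0.01, 100_000)]

def expected_total(premium, deductible, coinsurance):
    """Annual premiums plus expected out-of-pocket spending: you pay
    everything up to the deductible, then a coinsurance share of the rest."""
    ev = 0.0
    for p, cost in outcomes:
        oop = min(cost, deductible) + max(cost - deductible, 0) * coinsurance
        ev += p * oop
    return premium * 12 + ev

high_deductible = expected_total(premium=150, deductible=6_000, coinsurance=0.0)
comprehensive = expected_total(premium=350, deductible=500, coinsurance=0.1)
print(round(high_deductible), round(comprehensive))
```

Under these made-up numbers the high-deductible plan wins on expectation; the real question is how the subsidies, negotiated rates, and tax treatment mentioned above shift the premiums, and how much you value protection against the tail outcome.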
Has anyone done a study on redundant information in languages?
I'm just mildly curious, because a back-of-the-envelope calculation suggests that English is about 4.7x redundant - which on a side note explains how we can esiayl regnovze eevn hrriofclly msispled wrods.
(Actually, that would be an interesting experiment - remove or replace fraction x of the letters in a paragraph and see at what average x participants can no longer make a "corrected" copy.)
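A minimal sketch of the proposed experiment's stimulus generation, deleting a random fraction x of the characters so participants can attempt a "corrected" copy:

```python
import random

def delete_fraction(text, x, seed=0):
    """Remove roughly fraction x of the characters, uniformly at random.
    A fixed seed keeps the stimulus reproducible across participants."""
    rng = random.Random(seed)
    return "".join(ch for ch in text if rng.random() >= x)

sample = "the quick brown fox jumps over the lazy dog"
for x in (0.1, 0.3, 0.5):
    print(x, delete_fraction(sample, x))
```

Ramping x upward and recording where reconstruction accuracy collapses would give the average threshold the parenthetical describes.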
I'd predict that Chinese is much less redundant in its spoken form, and that I have no idea how to measure redundancy in its written form. (By stroke? By radical?)
Studies of this form have been done at least on the edge case where all the material removed is from the end (i.e. tests of the ability of subjects to predict the next letter in an English text). I'd be interested to see your more general test but am not sure if it has been done. (Except, perhaps, as a game show).
Yes, it's been studied quite a bit by linguists. You can find some pointers in http://www.gwern.net/Notes#efficient-natural-language which may be helpful.
Thanks.
... huh. Now I'm thinking about actually doing that experiment...
If you do, please post about it!
I ran into another thing in that vein:
--The Man Who Invented Modern Probability - Issue 4: The Unlikely - Nautilus
This also happens to me with music. I enjoy "unpredictable" music more than predictable music. Knowing music theory I know which notes are supposed to be played -- if a song is in a certain key -- and if a note or chord isn't predicted then it feels a bit more enjoyable. I wonder if the same technique could be applied to different genres of music with the same result, i.e. radio-friendly pop music vs non-mainstream music.
I wonder what that metric has to say about Finnigan's Wake...
By other metrics, Joyce became less compressible throughout his life. Going closer to the original metric, you demonstrate that the title is hard to compress (especially the lack of apostrophe).
We wonder about the moral impact of dust specks in the eyes of 3^^^3 people.
What about dust specks in the eyes of 3^^^3 poodles? Or more to the point, what is the moral cost of killing one person vs one poodle? How many poodles' lives would we trade for the life of one person?
Or even within humans, is it human-years we would count in coming up with moral equivalencies? Do we discount humans that are less smart, on the theory that we almost certainly discount poodles against humans because they are not as smart as us? Do we discount evil humans compared to helpful humans? Discount unproductive humans against productive ones? And what about sims? If it is human-years we count rather than human lives, does a sim which might be expected to run for more than a trillion subjective years in simulation carry billions of times more moral weight than a single meat human who has precommitted to eschew cryonics or uploading?
And of course I am using poodle as an algebraic symbol to represent any one of many intelligences. Do we discount poodles against humans because they are not as smart, or is there some other measure of how to relate the moral value of a poodle to the moral value of a person? Does a sim (simulated human running in software) count equal to a meat human? Does an earthworm have epsilon<<1 times the worth of a human, or is it identically 0 times the worth of a human?
What about really big smart AI? Would an AI as smart as an entire planet be worth (morally) preserving at the expense of losing one-fifth the human population?
I observe that the answer to the last question is not constrained to be positive.
"Letting those people die was worth it, because they took their cursed yapping poodle with them!"
(quote marks to indicate not my actual views)
Do the nervous systems of 3^^^3 nematodes beat the nervous systems of a mere 7x10^9 humans? If not, why not?
I believe that I care nothing for nematodes, and that as the nervous systems at hand became incrementally more complicated, I would eventually reach a sharp boundary wherein my degree of caring went from 0 to tiny. Or rather, I currently suspect that an idealized version of my morality would output such.
Keyword here is believe. What probability do you assign?
And if you say epsilon or something like that, is the epsilon bigger or smaller than 1/(3^^^3/10^100)?
... really?
Um, that strikes me as very unlikely. Could you elaborate on your reasoning?
I'm kind of curious as to why you wouldn't expect a continuous, gradual shift in caring. Wouldn't mind design space (which I would imagine your caring to be a function of) be continuous?
Something going from 0 to 10^-20 is behaving pretty close to continuously in one sense. It is clear that there are some configurations of matter I don't care about at all (like a paperclip), while I do care about other configurations (like twelve-year-old human children), so it is elementary that at some point my utility function must go from 0 to nonzero. The derivative, the second derivative, or even the function itself could easily be discontinuous at this point.
I dispute that; the paperclip is almost certainly either more or less likely to become a Boltzmann brain than an equivalent volume of vacuum.
But it needn't be! See for example f(x) = exp(-1/x) for x > 0, and f(x) = 0 for x ≤ 0.
Wikipedia has an analysis.
(Of course, the space of objects isn't exactly isomorphic to the real line, but it's still a neat example.)
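Numerically, the flatness of that function near zero is easy to see; here is a quick sketch (the sample points are chosen purely for illustration):

```python
import math

def f(x):
    """Smooth transition function: 0 for x <= 0, exp(-1/x) for x > 0.
    Continuous everywhere, with every derivative at 0 equal to zero."""
    return math.exp(-1.0 / x) if x > 0 else 0.0

# Identically zero to the left of the boundary...
assert f(-1.0) == 0.0 and f(0.0) == 0.0

# ...and positive but extraordinarily flat just to the right of it:
for x in [0.5, 0.1, 0.05, 0.01]:
    print(f"f({x}) = {f(x):.3e}")
# f(0.01) = exp(-100), smaller than 1e-43: the function creeps away from
# zero more slowly than any polynomial, so no derivative detects a jump.
```

So a function really can sit at exactly zero over a whole region and then rise, while staying infinitely differentiable at the crossover.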
Agreed, but it is not obvious to me that my utility function needs to be differentiable at that point.
And ... it isn't clear that there are some configurations you care for ... a bit? Sparrows being tortured and so on? You don't care more about dogs than insects and more for chimpanzees than dogs?
(I mean, most cultures have a Great Chain Of Being or whatever, so surely I haven't gone dreadfully awry in my introspection ...)
This is not incompatible with what I just said. It goes from 0 to tiny somewhere, not from 0 to 12-year-old.
Can you bracket this boundary reasonably sharply? Say, mosquito: no, butterfly: yes?
No, but I strongly suspect that all Earthly life without a frontal cortex would be regarded by my idealized morals as a more complicated paperclip. There may be exceptions: I have heard rumors that octopi pass the mirror test, and I will not be eating any octopus meat until that is resolved, because even in a world where I eat meat (because optimizing my diet is more important and my civilization lets me get away with it), I do not eat anything that recognizes itself in a mirror. So a spider is a definite no, a chimpanzee is an extremely probable yes, a day-old human infant is an extremely probable no (though there are non-sentience-related reasons for me to care in that case), and pigs I am genuinely unsure of.
Does it matter to you that octopuses are quite commonly cannibalistic?
Assuming pigs were objects of value, would that make it morally wrong to eat them? Unlike octopi, most pigs exist because humans plan on eating them, so if a lot of humans stopped eating pigs, there would be fewer pigs, and the life of the average pig might not be much better.
(this is not a rhetorical question)
To be clear, I am unsure if pigs are objects of value, which incorporates empirical uncertainty about their degree of reflectivity, philosophical uncertainty about the precise relation of reflectivity to degrees of consciousness, and ethical uncertainty about how much my idealized morals would care about various degrees of consciousness, to the extent I can imagine that coherently. I can imagine that there's a sharp line of sentience which humans are over and pigs are under, and imagine that my idealized caring would drop immediately to zero for anything under the line, but my subjective probability for both of these being simultaneously true is under 50%, though they are not independent.
However it is plausible to me that I would care exactly zero about a pig getting a dust speck in the eye... or not.
But zero is not a probability.
Edit: Adele_L is right, I was confusing utilities and probabilities.
... are you pointing out that there is a nonzero probability that Eliezer's CEV actually cares about nematodes?
No, Adele_L is right, I was confusing utilities and probabilities.
Zero is a utility, and utilities can even be negative (i.e. if Eliezer hated nematodes).
This paper about AI by Hector J. Levesque seems interesting: http://www.cs.toronto.edu/~hector/Papers/ijcai-13-paper.pdf
It extensively discusses something called 'Winograd schema questions'. If you want examples of Winograd schema questions, there is a list here: http://www.cs.nyu.edu/faculty/davise/papers/WS.html
The paper's abstract does a fairly good job of summing it up, although it doesn't explicitly mention Winograd schema questions:
If you have time, this seems worth a read. I started reading other Hector J. Levesque papers because of it.
Edit: Upon searching, I found some critiques of Levesque's work as well, so looking up opposition to some of these points may also be a good idea.
Sorry if this has been asked before, but can someone explain to me if there is any selfish reason to join Alcor while one is in good health? If I die suddenly, it will be too late to have joined, but even if I had joined it seems unlikely that they would get to me in time.
The only reason I can think of is to support Alcor.
It's like what the TV preacher told Bart Simpson: "Yes, a deathbed conversion is a pretty sweet angle, but if you join now, you're also covered in case of accidental death and dismemberment!"
(may not be an exact quote)
I don't think it's been asked before on Less Wrong, and it's an interesting question.
It depends on how much you value not dying. If you value it very strongly, the risk of sudden, terminal, but not immediately fatal injuries or illnesses, as mentioned by paper-machine, might be unacceptable to you, and would point toward joining Alcor sooner rather than later.
The marginal increase your support would add to the probability of Alcor surviving as an institution might also matter to you selfishly, since this would increase the probability that there will exist a stronger Alcor when you are older and will likely need it more than you do now.
Additionally, while it's true that it's unlikely that Alcor would reach you in time if you were to die suddenly, compare this risk to the chance of your survival if alternately you don't join Alcor soon enough, and, after your hypothetical fatal car crash, you end up rotting in the ground.
And hey, if you really want selfish reasons: signing up for cryonics is high-status in certain subcultures, including this one.
There are also altruistic reasons to join Alcor, but that's a separate issue.
Thank you for your response; I suppose one would need to estimate the probability of dying in such a way that having previously joined Alcor would make a difference.
Perusing Ben Best's web site and using some common sense, it seems that the most likely causes of death for a reasonably healthy middle aged man are cancer, stroke, heart attack, accident, suicide, and homicide. We need to estimate the probability of sudden serious loss of faculties followed by death.
It seems that for cancer, that probability is extremely small. For stroke, heart attack, and accidents, one could look it up but just guesstimating a number based on general observations, I would guess roughly 10 to 15 percent. Suicide and homicide are special cases -- I imagine that in those cases I would be autopsied so there would be much less chance of cryopreservation even if I had already joined Alcor.
Of course even if you pre-joined Alcor, there is still a decent chance that for whatever reason they would not be able to preserve you after, for example, a fatal accident which killed you a few days later.
So all told, my rough estimate is that the improvement in my chances of being cryopreserved upon death if I joined Alcor now as opposed to taking a wait and see approach is 5% at best.
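As a sanity check, the arithmetic behind that rough 5% figure can be written out explicitly; all numbers below are the guesses from this thread, not data:

```python
# Guessed probability of dying in a way that is sudden enough that you
# couldn't sign up afterward, yet slow enough that preservation is still
# possible (midpoint of the 10-15% range guessed above).
p_death_where_joining_helps = 0.125

# Guessed probability that, having joined, Alcor actually manages to
# preserve you in such a scenario ("a decent chance they could not").
p_preservation_given_joined = 0.5

improvement = p_death_where_joining_helps * p_preservation_given_joined
print(f"Improvement from joining now: {improvement:.1%}")  # on the order of 5%
```

Different guesses for the two inputs move the answer around, but it stays in single digits of percent unless one of them is much larger than estimated here.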
Does that sound about right?
That does sound about right, but with two potential caveats: one is that individual circumstances might also matter in these calculations. For example, my risk of dying in a car accident is much lowered by not driving and only rarely riding in cars. However, my risk of dying of heart disease is raised by a strong family history.
There may also be financial considerations. Cancer almost certainly, and often heart disease and stroke, take time to kill. If you were paying for cryonics out-of-pocket, this wouldn't matter, but if you were paying with life insurance the cost of the policy would go up, perhaps dramatically, if you were to wait until the onset of serious illness to make your arrangements, as life insurance companies are not fond of pre-existing conditions. It might be worth noting that age alone also increases the cost of life insurance.
That being said, it's also fair to say that even a successful cryopreservation has a (roughly) 10-20% chance of preserving your life, taking most factors into account.
So again, the key here is determining how strongly you value your continued existence. If you could come up with a roughly estimated monetary value of your life, taking the probability of radical life extension into account, that may clarify matters considerably. There are values at which that (roughly) 5% chance is too little, or close to the line, or plenty sufficient, or way more than sufficient; it's quite a spectrum.
Yes I totally agree. Similarly your chances of being murdered are probably a lot lower than the average if you live in an affluent neighborhood and have a spouse who has never assaulted you.
Suicide is an interesting issue -- I would like to think that my chances of committing suicide are far lower than average but painful experience has taught me that it's very easy to be overconfident in predicting one's own actions.
Yes, but there is an easy way around this: Just buy life insurance while you are still reasonably healthy.
Actually this is what got me thinking about the issue: I was recently buying life insurance to protect my family. When I got the policy, I noticed that it had an "accelerated death benefit rider," i.e. if you are certifiably terminally ill, you can get a $100k advance on the policy proceeds. When you think about it, that's not the only way to raise substantial money in such a situation. For example, if you were terminally ill, your spouse probably wouldn't mind if you borrowed $200k against the house for cryopreservation if she knew that when you finally kicked the bucket she would get a check for a million from the insurance company.
So the upshot is that from a selfish perspective, there is a lot to be said for taking a "wait and see" approach.
(There's another issue I thought of: Like most life insurance policies, the ones I bought are good only for 20 years. There is a pretty good chance that I will live for those 20 years but in the meantime develop a serious health condition which makes it almost impossible to buy more insurance. What then?)
I agree with this to an extent.
That's a feature, not a bug, of term life insurance. That's the tradeoff you're making to get coverage now at a cheap rate. But of course, the option value exists on both sides - so if you want to lock in relatively lower rates, well, that's why whole life insurance exists.
Yes, good point. I actually looked into getting whole life insurance but the policies contained so many bells, whistles, and other confusions that I put it all on hold until I had bought some term insurance. Maybe I will look into that again.
Of course if I were disciplined, it would probably make sense to just "buy term and invest the difference" for the next 30 years.
Hmmm. You do have some interesting ideas regarding cryonics funding that do sound promising, but to be safe I would talk to Alcor, specifically Diane Cremeens, about them directly to ensure ahead of time that they'll work for them.
Probably that's a good idea. But on the other hand, what are the chances that they would turn down a certified check for $200k from someone who has a few months to live?
I suppose one could argue that setting things up years in advance so that Alcor controls the money makes it difficult for family members to obstruct your attempt to get frozen.
In addition to the money, Alcor requires a lot of legal paperwork, including a notarized will. You can probably do that if you have "a few months," but it's one more thing to worry about, especially if you're dying of something that leaves you mentally impaired and makes legal consent complicated. I don't know how strict about this Alcor would be; I second the grandparent's advice to ask Diane.
There is some background base rate of sudden, terminal, but not immediately fatal, injury or illness.
For example, I currently do not value life insurance highly, and therefore I value cryonics insurance even less.
Otherwise, there's only some marginal increase in the probability of Alcor surviving as an institution. Seeing as there's precedent for healthy cryonics orgs to adopt the patients of unhealthy cryonics orgs, this marginal increase should be viewed as a yet more marginal increase in the survival of cryonics locations in your locality.
(Assuming transportation costs are prohibitive enough to be treated as a rounding error.)
Do consequentialists generally hold as axiomatic that there must be a morally preferable choice (or conceivably multiple equally preferable choices) in a given situation? If so, could somebody point me to a deeper discussion of this axiom (it probably has a name, which I don't know.)
Not explicitly as an axiom AFAIK, but if you're valuing states-of-the-world, any choice you make will lead to some state, which means that unless your valuation is circular, the answer is yes.
Basically, as long as your valuation is VNM-rational, definitely yes. Utilitarians are a special case of this, and I think most consequentialists would adhere to that also.
What happens if my valuation is noncircular, but is incomplete? What if I only have a partial order over states of the world? Suppose I say "I prefer state X to Z, and don't express a preference between X and Y, or between Y and Z." I am not saying that X and Y are equivalent; I am merely refusing to judge.
My impression is that real human preference routinely looks like this; there are lots of cases people refuse to evaluate or don't evaluate consistently.
It seems like even with partial preferences, one can be consequentialist -- if you don't have clear preferences between outcomes, you have a choice that isn't morally relevant. Or is there a self-contradiction lurking?
If the result of that partial preference is that you start with Z and then decline the sequence of trades Z->Y->X, then you got dutch booked.
Otoh, maybe you want to accept the sequence Z->Y->X if you expect both trades to be offered, but decline each in isolation? But then your decision procedure is dynamically inconsistent: Standing at Z and expecting both trade offers, you have to precommit to using a different algorithm to evaluate the Y->X trade than you will want to use once you have Y.
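A minimal sketch of that money pump, assuming an agent who accepts any free trade it does not strictly disprefer, and pays only for strict improvements (the state names and the fee of 1 are made up for illustration):

```python
# The agent strictly prefers X to Z, and has no preference at all
# involving Y -- the "gap" in the partial order.
strict_prefs = {("X", "Z")}  # (better, worse) pairs

def accepts(current, offered, fee=0):
    """Accept any trade we don't strictly disprefer; pay only for strict improvements."""
    if (current, offered) in strict_prefs:  # offered is strictly worse: refuse
        return False
    if (offered, current) in strict_prefs:  # strict improvement: worth paying for
        return True
    return fee == 0  # no preference either way: accept only free trades

wealth = 0
holding = "X"
for _ in range(3):  # three trips around the pump
    for nxt in ["Y", "Z"]:  # two free swaps through the preference gap
        assert accepts(holding, nxt)
        holding = nxt
    assert accepts(holding, "X", fee=1)  # buy back the strictly better state
    holding, wealth = "X", wealth - 1

print(wealth)  # the agent ends where it started, 3 units poorer
```

Each cycle exploits the gap: the agent drifts from X down to Z through trades it has no grounds to refuse, then pays to recover X, which it strictly prefers.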
I think I see the point about dynamic inconsistency. It might be that "I got to state Y from Z" will alter my decisionmaking about Y versus X.
I suppose it means that my decision of what to do in state Y no longer depends purely on consequences, but also on history, at which point they revoke my consequentialist party membership.
But why is that so terrible? It's a little weird, but I'm not sure it's actually inconsistent or violates any of my moral beliefs. I have all sorts of moral beliefs about ownership and rights that are history-dependent so it's not like history-dependence is a new strange thing.
You could have undefined value, but it's not particularly intuitive, and I don't think anyone actually advocates it as a component of a consequentialist theory.
Whether, in real life, people actually do it is a different story. I mean, it's quite likely that humans violate the VNM model of rationality, but that could just be because we're not rational.
Thanks! Do consequentialist kind of port the first axiom (completeness) from the VN-M utility theorem, changing it from decision theory to meta-ethics?
And for others, to put my original question another way: before we start comparing utilons or utility functions, insofar as consequentialists begin with moral intuitions and reason the existence of utility, is one of their starting intuitions that all moral questions have correct answers? Or am I just making this up? And has anybody written about this?
To put that in one popular context: in the Trolley Switch and Fat Man problem, it seems like most people start with the assumption that there exists a right answer (or preferable, or best, whatever your terminology), and that it could never be the case that an agent will do the wrong/immoral/unethical thing no matter what he or she chooses. Am I right that this assumption exists?
Most people do have this belief. I think it's a safe one, though. It follows from a substantive belief most people have, which is that agents are only morally responsible for things that are under their control.
In the context of a trolley problem, it's stipulated that the person is being confronted with a choice -- in the context of the problem, they have to choose. And so it would be blaming them for something beyond their control to say "no matter what you do, you are blameworthy."
One way to fight the hypothetical of the trolley problem is to say "people are rarely confronted with this sort of moral dilemma involuntarily, and it's evil to put yourself in a position of choosing between evils." I suppose for consistency, if you say this, you should avoid jury service, voting, or political office.
Not explicitly (except in the case of some utilitarians), but I don't think many would deny it. The boundaries between meta-ethics and normative ethics are vaguer than you'd think, but consequentialism is already sort of metaethical. The VNM theorem isn't explicitly discussed that often (many ethicists won't have heard of it), but the axioms are fairly intuitive anyway. However, although I don't know enough about weird forms of consequentialism to know if anyone's made a point of denying completeness, I wouldn't be that surprised if that position exists.
Yes, I think it certainly exists. I'm not sure if it's universal or not, but I haven't read a great deal on the subject yet, so I'm not sure if I would know.
I've got an (IMHO) interesting discussion article written up, but I am unable to post it; I get a "webpage cannot be found" error when I try. I'm using IE 9. Is this a known issue, or have I done something wrong?
Have you tried searching the LW bugtracker or using a different browser?
Thank you for this suggestion. I have discovered that this works in Chrome.
When you're trying to raise the sanity waterline, dredging the swamps can be a hazardous occupation. Indian rationalist skeptic Narendra Dabholkar was assassinated this morning.
He was trying to pass a law to suppress religious freedoms of small sects. That doesn't raise the sanity waterline, it just increases tensions and hatred between groups.
That's a ludicrously forgiving reading of what the bill (which looks like going through) is about. Steelmanning is an exercise in clarifying one's own thoughts, not in justifying fraud and witch-hunting.
I haven't been able to find the text of the bill — only summaries such as this one. Do you have a link?
Did you even read my comment?
Yes, I did. Your characterisation of the new law is factually ridiculous.
That isn't all the law does, as you would know if you actually read it.
Political activism, especially in the third world, is inherently dangerous, whether or not it is rationality-related.
This article, written by Dreeves's wife, has displaced Yvain's polyamory essay as the most interesting relationships article I've read this year. The basic idea is that instead of trying to split chores or common goods equally, you use auctions. For example, if the bathroom needs to be cleaned, each partner says how much they'd be willing to clean it for. The person with the higher bid pays what the other person bid, and the lower bidder does the cleaning.
It's easy to see why commenters accused them of being libertarian. But I think egalitarians should examine this system too. Most couples agree that chores and common goods should be split equally. But what does "equally" mean? It's hard to quantify exactly how much each person contributes to a relationship. This allows the more powerful person to exaggerate their contributions and pressure the weaker person into doing more than their fair share. But auctions safeguard against this abuse by requiring participants to quantify how much they value each task.
For example, feminists argue that women do more domestic chores than men, and that these chores go unnoticed by men. Men do a little bit, but because they don't see all the work women do, they end up thinking that they're doing their share when they aren't. Auctions safeguard against this abuse. Instead of the wife just cleaning the bathroom, she and her husband each bid how much they'd be willing to clean the bathroom for. The lower bid is considered the fair market price of cleaning the bathroom. Then she and her husband engage in a joint-purchase auction to decide whether the bathroom will be cleaned at all. Either the bathroom gets cleaned and the cleaner gets fairly compensated, or the bathroom doesn't get cleaned because the total utility of a clean bathroom is less than the disutility of cleaning it.
And that's it. No arguing about who cleaned it last. No debating whether it really needs to be cleaned. No room for misogynist cultural machines to pressure the wife into doing more than her fair share. Just a market transaction that is efficient and fair.
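The bidding rule described above can be sketched in a few lines. This covers only the simple two-person case and ignores ties and the joint-purchase step; the names and bid amounts are made up:

```python
def chore_auction(bids):
    """Each person states the payment they'd require to do the chore.
    The lower bidder does it; the higher bidder pays them the lower bid."""
    (low_name, low_bid), (high_name, _) = sorted(bids.items(), key=lambda kv: kv[1])
    return {"cleaner": low_name, "payer": high_name, "payment": low_bid}

result = chore_auction({"Alice": 15, "Bob": 25})
print(result)  # Alice cleans the bathroom; Bob pays her 15
```

One design consequence worth noticing: whoever minds the chore less ends up doing it, and gets compensated at a price both parties implicitly accepted, which is where the claimed fairness comes from.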
One datapoint: I know of one household (two adults, one child) which worked out chores by having people list which chores they liked, which they tolerated, and which they hated. It turned out that there was enough intrinsic motivation to make taking care of the house work.
P.S.: those last two sentences ("No room for misogynist cultural machines to pressure the wife into doing more than her fair share. Just a market transaction that is efficient and fair.") also remind me of "If those women were really oppressed, someone would have tended to have freed them by then."
The polyamory and BDSM subcultures prove that nerds can create new social rules that improve sex. Of course, you can't just theorize about what the best social rules would be and then declare that you've "solved the problem." But when you see people living happier lives as a result of changing their social rules, there's nothing wrong with inviting other people to take a look.
I don't understand your postscript. I didn't say there is no inequality in chore division because if there were, a chore market would have removed it. I said a chore market would have more equality than the standard each-person-does-what-they-think-is-fair system. Your response seems like a fully generalized counterargument: anyone who proposes a way to reduce inequality can be accused of denying that the inequality exists.
The modern BDSM culture's origins are somewhat obscure, but I don't think I'd be comfortable saying it was created by nerds despite its present demographics. The leather scene is only one of its cultural poles, but that's generally thought to have grown out of the post-WWII gay biker scene: not the nerdiest of subcultures, to say the least.
I don't know as much about the origins of poly, but I suspect the same would likely be true there.
Hmm, I don't know that I would consider those rules overall to be clearly superior for everyone, although they do reasonably well for me. Rather, I value the existence of different subcultures with different norms, so that people can choose those that suit their predilections and needs.
(More politically: A "liberal" society composed of overlapping subcultures with different norms, in a context of individual rights and social support, seems to be almost certain to meet more people's needs than a "totalizing" society with a single set of norms.)
There are certain of those social rules that seem to be pretty clear improvements to me, though — chiefly the increased care on the subject of consent. That's an improvement in a vanilla-monogamous-heteronormative subculture as well as a kink-poly-genderqueer one.
This works best if none of the "subcultures with different norms" creates huge negative externalities for the rest of the society. Otherwise, some people get angry. -- And then we need to go meta and create some global rules that either prevent the former from creating the externalities, or the latter from expressing their anger.
I guess in the case of the BDSM subculture this works without problems. And I guess the test of the polyamorous community will be how well they treat their children (hopefully better than polygamous Mormons treat their sons), or perhaps how they will handle the poly- equivalents of divorce, especially the economic aspects of it (if there is significant shared property).
I'm skeptical that most couples agree with this.
Anyway, all of these types of 'chore division' systems that I've seen so far totally disregard human psychology. Remember that the goal isn't to have a fair chore system. The goal is to have a system that preserves a happy and stable relationship. If the resulting system winds up not being 'fair', that's ok.
Most couples worldwide, or most couples in W.E.I.R.D. societies?
Both.
This sounds interesting for cases where both parties are economically secure.
However I can't see it working in my case since my housemates each earn somewhere around ten times what I do. Under this system, my bids would always be lowest and I would do all the chores without exception. While I would feel unable to turn down this chance to earn money, my status would drop from that of an equal to that of a servant. I would find this unacceptable.
I believe you are wrong. (Or I am; in which case please explain to me how.) Here is what I would do if I lived with a bunch of millionaires, assuming my money is limited:
The first time, I would ask a realistic price X. And I would do the chores. I would set the gained money apart into "the money I don't really own, because I will use it in the future to get my status back" budget.
The second time, I would ask 1.5 × X. The third time, 2 × X. The fourth time, 3 × X. If asked, I would explain the change by saying: "I guess I was totally miscalibrated about how I value my time. Well, I'm learning. Sorry, this bidding system is so new and confusing to me." But I would act like I am not really required to explain anything.
Let's assume I always do the chores. Then my income grows exponentially, which is a nice thing per se, but most importantly, it cannot continue forever. At some moment, my bid would be so insanely high that even Bill Gates would volunteer to do the chores instead. -- Which is completely okay for me, because I would pay him the $1000000000 per hour from my "get the status back" budget, which by that time already contains the money.
That's it. Keep your money from chores in a separate budget and use it only to pay others for doing the chores. Increase or decrease your bids depending on the state of that budget. If the price becomes relatively stable, there is no way you would do more chores than the other people around you.
The only imbalance I can imagine is if you have a housemate A who always bids more than a housemate B, in which case you will end up between them, always doing more chores than A but fewer than B. Assuming there are 10 A's and 1 B, and the B is considered very low status, this might result in a rather low status for you, too. -- The system merely guarantees you won't get the lowest status, even if you are the least wealthy person in the house; but you can still get the second-lowest place.
Could one not change the bidding to use "chore points" or somesuch? I mean, the system described is designed for spouses, but there's no reason it couldn't be adapted for you and your housemates.
Wow someone else thought of doing this too!
My roommate and I started doing this a year ago. It went pretty well for the first few months. Then our neighbor heard about how much we were paying each other for chores and started outbidding us.
This is one of the features of this policy, actually: you can use it as a natural measure of which tasks you should outsource. If a maid would cost $20 to clean the apartment, and you and your roommates all want at least $50 to do it, then the efficient thing to do is to hire a maid.
The problem could be that they actually are willing to do it for $10, but it's a low-status thing to admit.
If we both lived in the same apartment, and we both pretended that our time is so precious that we are only willing to clean the apartment for $1000... and I do it 50% of the time, and you do it 50% of the time, then at the end neither of us gets poor despite the unrealistic prices, because each of us gets all the money back.
Now when a third person comes along who cares about money more than about status (which is easier for them, because they don't live in the same apartment with us), our pretending is exposed and we become either more honest or poor.
I can see it working when all parties are trustworthy and committed to fairness, which is a high threshold to begin with. Also, everyone has to buy into the idea of other people being autonomous agents, with no shoulds attached. Still, this might run into trouble when one party badly wants something flatly unacceptable to the other, and so is unable to afford it and feels resentful.
One (unrelated) interesting quote:
Roger and I wrote a web app for exactly this purpose - dividing chores via auction. This has worked well for chore management for a house of 7 roommates, for about 6 months so far.
The feminism angle didn't even occur to us! It's just been really useful for dividing chores optimally.
I can see this working better than a dysfunctional household, but if you're both in the habit of just doing things, this is going to make everything worse.
Very fair point! Just like with Beeminder, if you're lucky enough to simply not suffer from akrasia then all the craziness with commitment devices is entirely superfluous. I liken it to literal myopia. If you don't have the problem then more power to you. If you do then apply the requisite technology to fix it (glasses, commitment devices, decision auctions).
But actually I think decision auctions are different. There's no such thing as not having the problem they solve. Preferences will conflict sometimes. Just that normal people have perfectly adequate approximations (turn taking, feeling each other out, informal mental point systems, barter) to what we've formalized and nerded up with our decision auctions.
Wasn't it Ariely's Predictably Irrational that went over market norms vs. tribe norms? If you just had ordinary people start doing this, I would guess it would crash and burn for the obvious market-norm reasons (the urge to game the system, basically). And some ew-squick power disparity stuff if this is ever enforced by a third party or even social pressure.
Empirically speaking, this system has worked in our house (of 7 people, for about 6 months so far). What kind of gaming the system were you thinking of?
We do use social pressure: there is social pressure to do your contracted chores, and keep your chore point balance positive. This hasn't really created power disparities per se.
Yeah, bidding = deception. But in addition to someonewrong's answer, I was thinking you could just end up doing a shitty job at things (e.g. cleaning the bathroom). Which is to say, if this were an actual labor market, and not a method of communicating between people who like each other and have outside-the-market reasons to cooperate, the market doesn't have much competition.
Except she specifies that if they're bidding above market wages for a task (cleaning the bathroom would work fine), they'll just pay someone else to do it. Of course, chores like getting up to deal with a sick child are not so outsourceable.
Yeah, that's unfortunately not something we can really handle other than decreeing "Doing this chore entails doing X and it doesn't count if you don't do X." Enforcing the system isn't solved by the system itself.
Good way to describe it.
If the idea is to say exactly how much you are willing to pay, there would be an incentive to:
1) Broadcast that you find all labor extra unpleasant and all goods extra valuable, to encourage people to bid high
2) Bid artificially lower values when you know someone enjoys a labor / doesn't mind parting with a good and will bid accordingly.
In short, optimal play would involve deception, and it happens to be a deception of the sort that might not be difficult to commit subconsciously. You might deceive yourself into thinking you find a chore unpleasant - I have read experimental evidence to support the notion that intrinsically rewarding tasks lose some of their appeal when paired with extrinsic rewards.
No comment on whether the traditional way is any better or worse - I think these two testimonials are sufficient evidence that this is worth trying for people who have a willing human tribe handy, despite the theoretical issues.
Edit: There is another, more pleasant problem: if you and I are engaged in trade, and I actually care about your utility function, that's going to affect the price. The whole point of this system is to communicate utility evenly after subtracting for the fact that you care about each other (otherwise why bother with a system?)
Concrete example: We are trying to transfer ownership of a computer monitor, and I'm willing to give it to you for free because I care about you. But if I were to take that into account, then we are essentially back to the traditional method. I'd have to attempt to conjure up the value at which I'd sell the monitor to someone I was neutral towards.
Of course, you could just use this as an argument stopper - whenever there is real disagreement, you use money to effect an easy compromise. But then there is monetary pressure to be argumentative and difficult, and social pressure not to be - it would be socially awkward and monetarily advantageous if you were constantly the one who had a problem with unmet needs.
But if other people bid high, then you have to pay more. And they will know if you bid lower, because the auctions are public. How does this help you?
I don't understand how this helps you either; if you bid lower and therefore win the auction, then you have to do the chore for less than you value it at. That's no fun.
The way our system works, the lowest bidder is actually paid not their own bid but the second-lowest bid minus 1; that way you don't have to do bidding wars, and can more or less just bid what you value it at. It does create the issue that you mention - bid sniping: if you know what the lowest bidder will bid, you can bid just above it so they get as little as possible. But that comes at the risk of having to actually do the chore for that little, because bids are binding.
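A minimal sketch of the clearing rule just described - lowest bidder wins, paid the second-lowest bid minus 1. The names and point values are invented for illustration; Choron's actual implementation may differ:

```python
# Sketch of the chore-auction clearing rule: a reverse second-price
# (Vickrey-style) auction. The lowest asker does the chore and is paid
# the second-lowest ask minus 1, so honest bidding carries no risk of
# working for less than your own ask.

def clear_chore_auction(bids):
    """bids: {person: least they'd accept, in chore points}.
    Returns (winner, payment)."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders for a second-price rule")
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    winner = ranked[0][0]
    second_lowest = ranked[1][1]
    return winner, second_lowest - 1

# Alice asks 10 points, Bob 15, Carol 30:
winner, payment = clear_chore_auction({"alice": 10, "bob": 15, "carol": 30})
# Alice wins and is paid 14 (Bob's ask minus 1), more than she asked for.
```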
I'd very much like to understand the issues you bring up, because if they are real problems, we might be able to take some stabs at solving them.
This has become somewhat of a norm in our house. We can pass around chore points in exchange for rides to places and so forth; it's useful, because you can ask for favors without using up your social capital. (Just your chore points capital, which is easier to gain more of and more transparent.)
You only do this when you plan to be the buyer. The idea is to win the auction and become the buyer while putting up as little money as possible. If you know that the other guy will do it for $5, you bid $6, even if you actually value it at $10. As you said, I'm talking about bid sniping.
Ah, I should have written "broadcast that you find all labor extra unpleasant and all goods extra valuable when you are the seller (giving up a good or doing a labour) so that people pay you more to do it."
If you're willing to do a chore for $10, but you broadcast that you find it more than $10 of unpleasantness, the other party will be influenced to bid higher - say, $40. Then you can bid $30, and get paid more. It's just price inflation - in a traditional transaction, the seller wants the buyer to pay as much as the buyer is willing to pay. To do this, the seller must artificially inflate the buyer's perception of how much the item is worth to the seller. The same holds true here.
When you intend to be the buyer you do the opposite - broadcast that you're willing to do the labor for cheap to lower prices, then bid snipe. As in a traditional transaction, the buyer wants the seller to believe that the item is not of much worth to the buyer. The buyer also has to try to guess the minimum amount that the seller will part with the item.
So what I wrote above was assuming the price was a midpoint between the buyer's and seller's bid, which gives them both equal power to set the price. This rule slightly alters things, by putting all the price setting power in the buyer's hands.
Under this rule, after all the deceptive price inflation is said and done you should still bid an honest $10 if you are only playing once - though since this is an iterated case, you probably want to bid higher just to keep up appearances if you are trying to be deceptive.
One of the nice things about this rule is that there is no incentive to be deceptive unless other people are bid sniping. The weakness of this rule is that it creates a stronger incentive to bid snipe.
Price inflation (seller's strategy) and bid sniping (buyer's strategy) are the two basic forms of deception in this game. Your rule empowers the buyer to set the price, thereby making price inflation harder at the cost of making bid sniping easier. I don't think there is a way around this - it seems to be a general property of trading. Finding a way around it would probably solve some larger scale economic problems.
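The one-shot claim above - that under the second-price rule you should still bid your honest value - can be checked by brute force. This is a toy sketch with invented numbers, ignoring the minus-1 tweak and treating ties as a loss:

```python
# One-shot reverse second-price auction: for any rival ask, no bid
# does better than bidding your true cost of doing the chore.

def payoff(my_bid, my_cost, rival_bid):
    """My payoff from bidding my_bid when the chore costs me my_cost.
    The lower bid wins and is paid the rival's (second-lowest) bid."""
    if my_bid < rival_bid:
        return rival_bid - my_cost  # I do the chore, paid rival's ask
    return 0                        # rival wins; I'm unaffected

my_cost = 10
for rival_bid in range(1, 31):
    honest = payoff(my_cost, my_cost, rival_bid)
    for my_bid in range(1, 31):
        # honest bidding weakly dominates every alternative
        assert payoff(my_bid, my_cost, rival_bid) <= honest
```

This is exactly the truthfulness property of Vickrey auctions; as the comment notes, it only fails once you start worrying about repeated play and bid sniping.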
(I'm one of the other users/devs of Choron)
There are two ways I know of that the market can try to defeat bid sniping, and one way a bidder can (that I know of).
Our system does not display the lowest bid, only the second-lowest bid. For a one-shot auction where you had poor information about the others' preferences, this would solve bid sniping. However, in our case, chores come up multiple times, and I'm pretty sure that it's public knowledge how much I bid on shopping, for example.
If you're in a situation where the lowest bid is hidden, but your bidding is predictable, you can sometimes bid higher than you normally would. This punishes people who bid less than they're willing to actually do the chore for, but imposes costs on you and the market as a whole as well, in the form of higher prices for the chore.
A third option, which we do not implement (credit to Richard for this idea), is to randomly award the auction to one of the two (or n) lowest bidders, with probability inversely related to their bid. In particular, if you pick between the lowest 2 bidders, both have claimed to be willing to do the job for the 2nd bidder's price (so the price isn't higher and no one can claim they were forced to do something for less than they wanted). This punishes bid-snipers by taking them at their word that they're willing to do the chore for the reduced price, at the cost of determinism, which allows better planning.
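A sketch of that randomized variant. The comment doesn't pin down how "inversely related to their bid" is weighted, so the inverse-bid weights here are one plausible reading, and the names are invented:

```python
import random

# Randomized award (not implemented in Choron): the chore goes to one of
# the two lowest bidders, lower bids being more likely to win, and the
# winner is paid the second-lowest bid either way.

def randomized_award(bids, rng=random.random):
    """bids: {person: least they'd accept}. Returns (winner, price)."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1])
    (p1, b1), (p2, b2) = ranked[0], ranked[1]
    price = b2  # both claimed willingness to work at this price
    # assumed weighting: chance proportional to 1/bid
    w1, w2 = 1.0 / b1, 1.0 / b2
    winner = p1 if rng() < w1 / (w1 + w2) else p2
    return winner, price

# With bids of 20/30/40, the price is always 30, and the 20-bidder
# wins with probability (1/20) / (1/20 + 1/30) = 0.6.
```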
And market efficiency.
Plus, I think it doesn't work when there are only two players? If I honestly bid $30, and you bid $40 and randomly get awarded the auction, then I have to pay you $40. And that leaves me at -$10 disutility, since the task was only -$30 to me.
To be sure I'm following you: If the 2nd bidder gets it (for the same price as the first bidder), the market efficiency is lost because the 2nd person is indifferent between winning and not, while the first would have liked to win it? If so, I think that's right.
If there are two players... I agree the first bidder is worse off than they would be if they had won. This seems like a special case of the above though: why is it more broken with 2 players?
Yes, that's one of the inefficiencies. The other inefficiency is that whenever the 2nd player wins, the service gets more expensive.
Because of the fact that the service gets more expensive. When there are multiple players, this might not seem like such a big deal - sure, you might pay more than the cheapest possible price, but you are still ultimately all benefiting (even if you aren't maximally benefiting). Small market inefficiencies are tolerable.
It's not so bad with 3 players who bid 20, 30, 40, since even if the 30-bidder wins, the other two players only have to pay 15 each. It's still inefficient, but it's not worse than no trade.
However, when your economy consists of two people, market inefficiency is felt more keenly. Consider the example I gave earlier once more:
I bid 30. You bid 40. So I can sell you my service for $30-$40, and we both benefit. But wait! The coin flip makes you win the auction. So now I have to pay you $40.
My stated preference is that I would not be willing to pay more than $30 for this service. But I am forced to do so. The market inefficiency has not merely resulted in a sub-optimal outcome - it's actually worse than if I had not traded at all!
Edit: What's worse is that you can name any price. So suppose it's just us two, I bid $10 and you bid $100, and it goes to the second bidder...
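The two examples above can be made concrete. This is a toy sketch using the thread's hypothetical numbers, assuming the non-winners split the price evenly:

```python
# Outcomes under the randomized award, relative to each person's own
# cost of doing the chore (their bid).

def randomized_price_outcome(bids, winner):
    """bids: {person: least they'd accept to do the chore}.
    The winner is paid the second-lowest bid, split among the others."""
    price = sorted(bids.values())[1]
    share = price / (len(bids) - 1)
    return {p: (price - bids[p] if p == winner else -share)
            for p in bids}

# Three players, bids 20/30/40: even if the 30-bidder wins, the other
# two pay only 15 each, which still beats doing the chore themselves.
three = randomized_price_outcome({"a": 20, "b": 30, "c": 40}, winner="b")
# three["a"] == -15.0, better than a's own cost of -20: inefficient, but
# not worse than no trade.

# Two players, bids 30/40: if the coin flip awards it to the 40-bidder,
# the 30-bidder pays the full 40, worse than the -30 of just doing the
# chore themselves.
two = randomized_price_outcome({"me": 30, "you": 40}, winner="you")
# two["me"] == -40.0, i.e. $10 worse than not trading at all.
```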
Here's a question that's been distracting me for the last few hours, and I want to get it out of my head so I can think about something else.
You're walking down an alley after making a bank withdrawal of a small sum of money. Just about when you realize this may have been a mistake, two Muggers appear from either side of the alley, blocking trivial escapes.
Mugger A: "Hi there. Give me all of that money or I will inflict 3^^^3 disutility on your utility function."
Mugger B: "Hi there. Give me all of that money or I will inflict maximum disutility on your utility function."
You: "You're working together?"
Mugger A: "No, you're just really unlucky."
Mugger B: "Yeah, I don't know this guy."
You: "But I can't give both of you all of this money!"
Mugger A: "Tell you what. You're having a horrible day, so if you give me half your money, I'll give you a 50% chance of avoiding my 3^^^3 disutility. And if you give me a quarter of your money, I'll give you a 25% chance of avoiding my 3^^^3 disutility. Maybe the other Mugger will let you have the same kind of break. Sound good to you, other Mugger?"
Mugger B: "Works for me. Start paying."
You: Do what, exactly?
I can see at least 4 vaguely plausible answers:
Pay Mugger A: 3^^^3 disutility is likely going to be more than whatever you think your maximum is, and you want to be as likely as possible to avoid it. You'll just have to try to resist/escape from Mugger B (unless he's just faking).
Pay Mugger B: Maximum disutility is, by its definition, greater than or equal to any other disutility - so at least as bad as 3^^^3 - and has probably happened to at least a few people with utility functions (although probably NOT to a 3^^^3 extent), so it's a serious threat and you want to be as likely as possible to avoid it. You'll just have to try to resist/escape from Mugger A (unless he's just faking).
Pay both Muggers a split of the money: For example, if you pay half to each, and they're both telling the truth, you have a 25% chance of not getting either disutility and not having to resist/escape at all (unless one or both is faking, which may improve your odds).
Don't Pay: This option seems generally less attractive than in a normal Pascal's mugging, since there are no clear escape routes and you're outnumbered, so there is at least some real threat unless they're both faking.
The problem is, I can't seem to justify any of my vaguely plausible answers to this conundrum well enough to stop thinking about it. Which makes me wonder if the question is ill-formed in some way.
Thoughts?
Give it all to mugger B obviously. I almost certainly am experiencing -3^^^3 utilons according to almost any measure every millisecond anyway, given I live in a Big World.
I may be fighting the hypothetical here, but ...
If utility is unbounded, maximum disutility is undefined, and if it's bounded, then 3^^^3 is by definition smaller than the maximum so you should pay all to mugger B.
I think trading a 10% chance of utility A for a 10% chance of utility B, with B < A, is irrational per the definition of utility (as far as I understand; you can have diminishing marginal utility on money, but not diminishing marginal utility on utility itself. I'm less sure about risk aversion though.)
That's not fighting the hypothetical. Fighting the hypothetical is first paying one, then telling the other you'll go back to the bank to pay him too. Or pulling out your kung fu skills, which is really fighting the hypothetical.
If you have some concept of "3^^^3 disutility" as a tractable measure of units of disutility, it seems unlikely you don't also have a reasonable idea of the upper and lower bounds of your utility function. If the values are known this becomes trivial to solve.
I am becoming increasingly convinced that VNM-utility is a poor tool for ad-hoc decision-theoretics, not because of dubious assumptions or inapplicability, but because finding corner-cases where it appears to break down is somehow ridiculously appealing.
If they're both telling the truth: since B gives maximum disutility, being mugged by both is no worse than being mugged by B. If you think your maximum disutility is X*3^^^3, I think if you run the numbers you should give a fraction X/2 to B, and the rest to A. (or all to B if X>2)
If they might be lying, you should probably ignore them. Or pay B, whose threat is more credible if you don't think your utility function goes as far as 3^^^3 (although, what scale? Maybe a dust speck is 3^^^^3)
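The X/2 split claimed above checks out numerically, under one reading of the hypothetical (these assumptions are mine, not spelled out in the thread): measure disutility in units of 3^^^3, so A threatens 1 unit and B threatens the maximum, X units; being hit by both is no worse than being hit by B alone; and paying a mugger fraction f of your money buys probability f of avoiding that mugger.

```python
# Expected disutility when you give fraction b of your money to mugger B
# and the rest (1 - b) to mugger A, in units of 3^^^3.

def expected_disutility(b, X):
    a = 1.0 - b  # fraction given to A, so P(avoid A) = a
    # With prob (1 - b), B's maximum disutility X lands and swamps A's.
    # With prob b you dodge B; then A's 1 unit lands with prob (1 - a) = b.
    return (1 - b) * X + b * b * 1.0

X = 1.2  # suppose the maximum disutility is 1.2 * 3^^^3
best_b = min((i / 1000 for i in range(1001)),
             key=lambda b: expected_disutility(b, X))
# Minimizing (1 - b)*X + b^2 gives b = X/2, so best_b comes out at 0.6,
# matching the comment; for X > 2 the optimum pins to b = 1 (all to B).
```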
If you had to group Less Wrong content into eight categories by subject matter, what would those categories be?
I'd subdivide Lifehacks into:
epistemic lifehacks;
general instrumental lifehacks (e.g. how to overcome procrastination);
specific instrumental lifehacks (domain-specific)
I would remove meetups, as that isn't really LW content as such.
It would be good to have it in a separate category, though, so you could disappear it from the front page.
For unspecified levels of meta. :P