
Open Thread: July 2010

6 Post author: komponisto 01 July 2010 09:20PM

This thread is for the discussion of Less Wrong topics that have not appeared in recent posts. If a discussion gets unwieldy, celebrate by turning it into a top-level post.

Part 2

Comments (653)

Comment author: michaelkeenan 02 July 2010 02:43:06AM 20 points [-]

I propose that LessWrong should produce a quarterly magazine of its best content.

LessWrong readership has a significant overlap with the readers of Hacker News, a reddit/digg-like community of tech entrepreneurs. So you might be familiar with Hacker Monthly, a print magazine version of Hacker News. The first edition, featuring 16 items that were voted highly on Hacker News, came out in June, and the second came out today. The curator went to significant effort to contact the authors of the various articles and blog posts to include them in the magazine.

Why would we want LessWrong content in a magazine? I personally would find it a great recruitment tool; I could have copies at my house and show/lend/give them to friends. As someone at the Hacker News discussion commented, "It's weird but I remember reading some of these articles on the web but, reading them again in magazine form, they somehow seem much more authoritative and objective. Ah, the perils of framing!"

The publishing and selling part is not too difficult. Hacker Monthly uses MagCloud, a company that makes it easy to publish PDFs as printed magazines and sell them.

Unfortunately, I don't have the skills or time to do this myself, at least not in the short-term. If someone wants to pick up this project, major tasks would include creating a process to choose articles for inclusion, contacting the authors for permission, and designing the magazine.

There's also the possibility of advertisements. I personally would be excited to see what kinds of companies would like to advertise to an audience of rationalists. Cryonics companies? Index funds? Rationalist books? Non-profits seeking donations?

Should advertising be used just to defray costs, or could the magazine make money? Make money for whom?

If people think it's a good idea but no-one takes it on, I might have some time free early next year to make this happen. But I hope someone gets to it earlier.

Comment author: mattnewport 02 July 2010 03:59:28PM 5 points [-]

Does anyone else find the idea of creating a printed magazine rather anachronistic?

Comment author: Blueberry 02 July 2010 04:12:46PM 3 points [-]

The rumors of print media's death have been greatly exaggerated.

Comment author: Larks 04 July 2010 05:32:37PM 11 points [-]

This comment would seem much more authoritative if seen in print.

Comment author: LucasSloan 02 July 2010 04:37:05AM 2 points [-]

I don't think there's enough content on LW to make publishing a magazine worthwhile. However, Eliezer's book on rationality should offer many of the same benefits.

Comment author: michaelkeenan 02 July 2010 05:15:43AM *  7 points [-]

Not all of the content needs to be from the most recent quarter. There could be classic articles too. But I think we might have enough content each quarter anyway. Let's see...

There were about 120 posts to Less Wrong from April 1 to June 30. The top ten highest-voted were Diseased thinking: dissolving questions about disease by Yvain, Eight Short Studies On Excuses by Yvain, Ugh Fields by Roko, Bayes Theorem Illustrated by komponisto, Seven Shiny Stories by Alicorn, Ureshiku Naritai by Alicorn, The Psychological Diversity of Mankind by Kaj Sotala, Abnormal Cryonics by Will Newsome, Defeating Ugh Fields In Practice by Psychohistorian, and Applying Behavioral Psychology on Myself by John Maxwell IV.

Maybe not all of those are appropriate for a magazine (e.g. Bayes Theorem Illustrated is too long). So maybe swap a couple of them out for other ones. Then maybe add a few classic LessWrong articles (for example, Disguised Queries would make a good companion piece to Diseased Thinking), add a few pages of advertising and maybe some rationality quotes, and you'd have at least 30 pages. I know I'd buy it.

Comment author: NancyLebovitz 02 July 2010 05:21:57AM 3 points [-]

Monthly seems too often. Quarterly might work.

Comment author: gwern 02 July 2010 05:04:49AM *  3 points [-]

A yearly anthology would be pretty good, though. HN is reusing others' content and can afford a faster tempo; but that simply means we need to be slower. Monthly is too fast, I suspect that quarterly may be a little too fast unless we lower our standards to include probably wrong but still interesting essays. (I think of "Is cryonics necessary?: Writing yourself into the future" as an example of something I'm sure is wrong, but was still interesting to read.)

Comment author: Kevin 02 July 2010 06:24:04AM 2 points [-]

How about thirdly!?

Comment author: JohannesDahlstrom 07 July 2010 09:51:26AM *  16 points [-]

Drowning Does Not Look Like Drowning

A fascinating warning against generalizing from fictional evidence in a very real life-or-death situation.

Comment author: ciphergoth 08 July 2010 02:39:19PM *  15 points [-]

A New York Times article on Robin Hanson and his wife Peggy Jackson's disagreement on cryonics:

http://www.nytimes.com/2010/07/11/magazine/11cryonics-t.html?ref=health&pagewanted=all

Comment author: WrongBot 08 July 2010 05:12:21PM 9 points [-]

While I'm not planning to pursue cryopreservation myself, I don't believe that it's unreasonable to do so.

Industrial coolants came up in a conversation I was having with my parents (for reasons I am completely unable to remember), and I mentioned that I'd read a bunch of stuff about cryonics lately. My mom then half-jokingly threatened to write me out of her will if I ever signed up for it.

This seemed... disproportionately hostile. She was skeptical of the singularity and my support for the SIAI when it came up a few weeks ago, but she's not particularly interested in the issue and didn't make a big deal about it. It wasn't even close to the level of scorn she apparently has for cryonics. When I asked her about it, she claimed she opposed it based on the physical impossibility of accurately copying a brain. When my father and I pointed out that this would literally require the existence of magic, she conceded the point, mentioned that she still thought it was ridiculous, and changed the subject.

This was obviously a case of my mom avoiding her belief's true weak points by not offering her true objection, rationality failures common enough to deserve blog posts pointing them out; I wasn't shocked to observe them in the wild. What is shocking to me is that someone who is otherwise quite rational would feel so motivated to protect this particular belief about cryonics. Why is this so important?

That the overwhelming majority of those who share this intense motivation are women (it seems) just makes me more confused. I've seen a couple of explanations for this phenomenon, but they aren't convincing: if these people object to cryonics because they see it as selfish (for example), why do so many of them come up with fake objections? The selfishness objection doesn't seem like it would be something one would be penalized for making.

Comment author: SilasBarta 08 July 2010 05:36:45PM *  6 points [-]

if these people object to cryonics because they see it as selfish (for example), why do so many of them come up with fake objections?

I -- quite predictably -- think this is a special case of the more general problem that people have trouble explaining themselves. Your mom doesn't give her real reason because she can't (yet) articulate it. In your case, I think it's due to two factors: 1) part of the reasoning process is something she doesn't want to say to your face, so she avoids thinking it, and 2) she's using hidden assumptions that she falsely assumes you share.

For my part, my dad's wife is nominally unopposed, bitterly noting that "It's your money" and then ominously adding, "you'll have to talk about this with your future wife, who may find it loopy".

(Joke's on her -- at this rate, no woman will take that job!)

Comment deleted 08 July 2010 10:31:28PM *  [-]
Comment author: Alicorn 08 July 2010 10:35:28PM 5 points [-]

If you're right, this suggests a useful spin on the disclosure: "I want you to run away with me - to the FUTURE!"

However, it was my dad, not my mom, who called me selfish when I brought up cryo.

Comment author: Wei_Dai 08 July 2010 10:51:24PM 3 points [-]

Maybe the husband/son should preemptively play the "if you don't sign up with me, you're betraying me" card?

Comment deleted 08 July 2010 11:07:18PM *  [-]
Comment author: lmnop 08 July 2010 11:25:12PM *  3 points [-]

In the case of refusing cryonics, I doubt that fear of social judgment is the largest factor, or even close. It's relatively easy to avoid judgment without incurring terrible costs -- many people who are signed up for cryonics have simply never mentioned it to the girls and boys in the office. I'm willing to bet that most people, even if you promised that their decision to choose cryonics would be entirely private, would hardly waver in their refusal.

Comment author: NancyLebovitz 08 July 2010 06:09:16PM 3 points [-]

I don't have anything against cryo, so these are tentative suggestions.

Maybe going in for cryo means admitting how much death hurts, so there's a big ugh field.

Alternatively, some people are trudging through life, and they don't want it to go on indefinitely.

Or there are people they want to get away from.

However, none of this fits with "I'll write you out of my will". This sounds to me like seeing cryo as a personal betrayal, but I can't figure out what the underlying premises might be. Unless it's that being in the will implies that the recipient will also leave money to descendants, and if you aren't going to die, then you won't.

Comment author: Blueberry 08 July 2010 06:01:06PM *  2 points [-]

That the overwhelming majority of those who share this intense motivation are women (it seems) just makes me more confused.

Is there evidence for this? Specifically the "intense" part?

ETA: Did you ask her why she had such strong feelings about it? Was she able to answer?

Comment author: Vladimir_Nesov 08 July 2010 03:25:53PM *  4 points [-]

A factual error:

when he first announced his intention to have his brain surgically removed from his freshly vacated cadaver and preserved in liquid nitrogen

I'm fairly sure that head-only preservation doesn't involve any brain removal. It's interesting that in context the purpose of the phrase was to present a creepy image of cryonics, and so the bias towards phrases that accomplish this goal won out over the constraint of not generating fiction.

Comment author: Wei_Dai 08 July 2010 07:06:06PM 2 points [-]

I wonder if Peggy's apparent disvalue of Robin's immortality represents a true preference, and if so, how should an FAI take it into account while computing humanity's CEV?

Comment author: Clippy 08 July 2010 07:22:06PM 3 points [-]

It should store a canonical human "base type" in a data structure somewhere. Then it should store the information about how all humans deviate from the base type, so that they can in principle be reconstituted as if they had just been through a long sleep.

Then it should use Peggy's body and Robin's body for fuel.

Comment author: wedrifid 08 July 2010 03:19:10PM 2 points [-]

That was very nearly terrifying.

Comment author: lsparrish 04 July 2010 04:53:30PM 13 points [-]

Cryonics scales very well. People who think cryonics is costly (even if you had to come up with the entire lump sum near the end of your life) are generally ignorant of this fact.

So long as you keep the shape constant, a container's surface area follows a square law whereas its volume follows a cube law. For example, with a cube-shaped object, one side squared times 6 is the surface area, whereas one side cubed is the volume. Surface area is where heat gets in, so a huge container holding cryogenic goods (humans, in this case) costs much less per unit volume (per human) than a smaller container of equal insulation. A way to understand this is that you only have to insulate the outside -- the inside gets free insulation.
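To make the square-cube point concrete, here is a minimal sketch (mine, not lsparrish's; the sides are arbitrary units, not real tank dimensions):

```python
# Surface area grows with the square of the side, volume with the cube,
# so cooling cost per unit of stored volume falls as the container grows.
def surface_to_volume_ratio(side):
    surface = 6 * side ** 2   # heat leaks in through the surface
    volume = side ** 3        # patients occupy the volume
    return surface / volume

for side in (1, 10, 100):
    print(side, surface_to_volume_ratio(side))
# 1 -> 6.0, 10 -> 0.6, 100 -> 0.06: a container 100x larger on a side needs
# roughly 100x less cooling per unit of volume, before thicker insulation.
```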

But you aren't stuck using equal insulation. You can use thicker insulation, which has a much smaller proportional effect on total surface area at bigger sizes. Imagine the difference between adding a foot of insulation to a marble-sized freezer and to a house-sized freezer. The outside of the insulation is where it begins collecting heat, but a gigantic freezer could gain a meter of insulation without a significant proportional impact on the surface area it already has.

Another factor to take into account is that liquid nitrogen, the super-cheap coolant used by cryonics facilities around the world, is vastly cheaper (more than a factor of 10) when purchased in huge quantities of several tons. The scaling factors for storage tanks are a big part of the reason for this. CI has used bulk purchasing as a mechanism for getting their prices down to $100 per patient per year for their newer tanks. They are actually storing 3,000 gallons of the stuff and using it slowly over time, which means there is a boiloff rate associated with the 3,000 gallon tank as well.

The conclusion I get from this is that there is a very strong self-interested case, as well as an altruistic case, to be made for megascale cryonics versus small independently run units. People who say they won't sign up for cost reasons may be reachable at a later date. To deal with such people's objections, it might be smart to get them to agree on a particular hypothetical price point at which they would feel it is justified. In large enough quantities, it is conceivable that indefinite storage could cost as little as $50 per person, or 50 cents per year.

That is much cheaper than saving a life any other way, but of course there's still the risk that it might not work. However, given a sufficient chance of working, it could still be morally superior to other life-saving strategies that cost more money. It also has an inherent ecological advantage over other forms of life-saving in that it temporarily reduces population, giving the environment a chance to recover and green tech more time to take hold, so that revived patients can be supported sustainably and comfortably.

Comment author: Morendil 04 July 2010 10:31:33PM *  5 points [-]

This needs to be a top-level post. Even with minimal editing. Please.

(ETA: It's not so much that we need to have another go at the cryonics debate; but the above is an argument that I can't recall seeing discussed here previously, that does substantially change the picture, and that illustrates various kinds of reasoning - about scaling properties, about predefining thresholds of acceptability, and about what we don't know we don't know - that are very relevant to LW's overall mission.)

Comment author: NancyLebovitz 02 July 2010 03:50:04AM 9 points [-]

I was at a recent Alexander Technique workshop, and some of the teachers had been observing how two year olds crawl.

If you've had any experience with two year olds, you know they can cover ground at an astonishing rate.

The thing is, adults typically crawl with their faces perpendicular to the ground, and crawling feels clumsy and unpleasant.

Two year olds crawl with their faces at 45 degrees to the ground, and a gentle curve through their upper backs.

Crawling that way gives access to a surprisingly strong forward impetus.

The relevance to rationality and to akrasia is the implication that if something seems hard, it may be that the preconditions for making it easy haven't been set up.

Comment author: VNKKET 01 July 2010 10:07:28PM *  9 points [-]

This is a mostly-shameless plug for the small donation matching scheme I proposed in May:

I'm still looking for three people to cross the "membrane that separates procrastinators and helpers" by donating $60 to the Singularity Institute. If you're interested, see my original comment. I will match your donation.

Comment author: Kutta 02 July 2010 07:32:08AM 5 points [-]

Done, 60 USD sent.

Comment author: VNKKET 02 July 2010 06:16:09PM 2 points [-]

Thank you! Matched.

Comment author: Yvain 02 July 2010 02:10:11AM 4 points [-]

Done!

Comment author: WrongBot 02 July 2010 12:37:28AM 4 points [-]

I'm sorry I didn't see that earlier; I donated $30 to the SIAI yesterday, and I probably could have waited a little while longer and donated $60 all at once. If this offer will still be open in a month or two, I will take you up on it.

Comment author: zero_call 02 July 2010 09:35:38PM 2 points [-]

Without any way of authenticating the donations, I find this to be rather silly.

Comment author: VNKKET 02 July 2010 09:59:14PM *  3 points [-]

I'd also like these donations to be authenticated, but I'm willing to wait if necessary. Here's step 2, including the new "ETA" part, from my original comment:

In your donation's "Public Comment" field, include both a link to your reply to this thread and a note asking for a Singularity Institute employee to kindly follow that link and post a response saying that you donated. ETA: Step 2 didn't work for me, so I don't expect it to work for you. For now, I'll just believe you if you say you've donated. If you would be convinced to donate by seeing evidence that I'm not lying, let me know and I'll get you some.

Would you be willing to match my third $60 if I could give you better evidence that I actually matched the first two? If so, I'll try to get some.

Comment author: Alexandros 04 July 2010 12:32:15PM 8 points [-]

Is there an on-line 'rationality test' anywhere, and if not, would it be worth making one?

The idea would be to have some type of on-line questionnaire, testing for various types of biases, etc. Initially I thought of it as a way of getting data on the rationality of different demographics, but it could also be a fantastic promotional tool for LessWrong (taking a page out of the Scientology playbook tee-hee). People love tests, just look at the cottage industry around IQ-testing. This could help raise the sanity waterline, if only by making people aware of their blind spots.

There are of course the typical problems with 'putting a number on a person's rationality' and perhaps it would need some focused expertise to pull off plausibly, but I do think it's a useful thing to have around, even just to iterate on.

Comment author: SilasBarta 06 July 2010 05:47:21PM 7 points [-]

My kind of test would be like this:

1) Do you always seem to be able to predict the future, even as others doubt your predictions?

If they say yes ---> "That's because of confirmation bias, moron. You're not special."

Comment author: RobinZ 06 July 2010 06:19:52PM 5 points [-]

In their defense, it might be hindsight bias instead. :P

Comment author: Cyan 06 July 2010 05:26:26PM 5 points [-]

There's an online test for calibration of subjective probabilities.

Comment author: Alexandros 06 July 2010 06:20:14PM 2 points [-]

That was pretty awesome, thanks. Not precisely what I had in mind, but close enough to be an inspiration. Cheers.

Comment author: NancyLebovitz 04 July 2010 12:39:12PM 4 points [-]

The test should include questions about applying rationality in one's life, not just abstract problems.

Comment author: michaelkeenan 06 July 2010 03:03:14PM 3 points [-]

I would love for this to exist! I have some notes on easily-tested aspects of rationality which I will share:

The Conjunction Fallacy easily fits into a short multi-choice question.

I'm not sure what the error is called, but you can do the test described in Lawful Uncertainty:

Subjects were asked to predict whether the next card the experimenter turned over would be red or blue in a context in which 70% of the cards were blue, but in which the sequence of red and blue cards was totally random. In such a situation, the strategy that will yield the highest proportion of success is to predict the more common event. For example, if 70% of the cards are blue, then predicting blue on every trial yields a 70% success rate. What subjects tended to do instead, however, was match probabilities - that is, predict the more probable event with the relative frequency with which it occurred. For example, subjects tended to predict 70% of the time that the blue card would occur and 30% of the time that the red card would occur. Such a strategy yields a 58% success rate.
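A quick simulation of the two strategies in the quoted passage, just to make the 70% vs. 58% figures concrete (my own sketch, not part of the quoted study):

```python
import random

random.seed(0)
trials = 100_000
cards = [random.random() < 0.7 for _ in range(trials)]  # True = blue (70% of cards)

# Strategy 1: always predict the more common color (blue).
maximizing = sum(cards) / trials

# Strategy 2: "probability matching" - predict blue on a random 70% of trials.
matching = sum(card == (random.random() < 0.7) for card in cards) / trials

print(round(maximizing, 3), round(matching, 3))  # ~0.70 vs ~0.58 (= 0.7*0.7 + 0.3*0.3)
```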

You could do the positive bias test where you tell someone the triplet "2-4-6" conforms to a rule and have them figure out the rule.

You might be able to come up with some questions that test resistance to anchoring.

It might be out of scope of rationality and getting closer to an intelligence test, but you could take some "cognitive reflection" questions from here, which were discussed at LessWrong here.

Comment author: [deleted] 06 July 2010 07:00:51PM *  3 points [-]

That Virginia Postrel article was interesting.

I was wondering why more reflective people were both more patient and less risk-averse -- she doesn't make this speculation, but it occurs to me that non-reflective people don't trust themselves and don't trust the future. If you aren't good at math and you know it, you won't take a gamble, because you know that good gamblers have to be clever. If you aren't good at predicting the future, you won't feel safe waiting for money to arrive later. Tomorrow the gods might send you an earthquake.

Risk aversion and time preference are both sensible adaptations for people who know they're not clever. People who are good at math and science don't retain such protections because they can estimate probabilities, and because their world appears intelligible and predictable.

Comment author: oliverbeatson 05 July 2010 01:27:09PM 3 points [-]

The test's questions may need to be fairly dynamic, to avert the possibility that people become conditioned to specific problems without shedding the entire infected heuristic. Someone who had read Less Wrong a few times, but hadn't made the knowledge truly a part of them, might return a false negative for certain biases while retaining those biases in real-life situations. We don't want to make the test about guessing the teacher's password.

Comment author: utilitymonster 05 July 2010 01:26:59PM 3 points [-]

I'd suggest starting with a list of common biases and producing a question (or a few?) for each. The questions could test the biases and you could have an explanation of why the biased reasoning is bad, with examples.

It would also be useful to group the biases together in natural clusters, if possible.

Comment author: [deleted] 06 July 2010 12:56:56AM 2 points [-]

Sounds like a good idea. Doesn't have to be invented from scratch; adapt a few psychological or behavioral-economics experiments. It's hard to ask about rationality in one's own life because of self-reporting problems; if we're going to do it, I think it's better to use questions of the form "Scenario: would you do a, b, c, or d?" rather than self-descriptive questions of the form "Are you more: a or b?"

Comment author: utilitymonster 03 July 2010 05:28:47PM *  8 points [-]

Here's a puzzle I've been trying to figure out. It involves observation selection effects and agreeing to disagree. It is related to a paper I am writing, so help would be appreciated. The puzzle is also interesting in itself.

Charlie tosses a fair coin to determine how to stock a pond. If heads, it gets 3/4 big fish and 1/4 small fish. If tails, the other way around. After Charlie does this, he calls Al into his office. He tells him, "Infinitely many scientists are curious about the proportion of fish in this pond. They are all good Bayesians with the same prior. They are going to randomly sample 100 fish (with replacement) each and record how many of them are big and how many are small. Since so many will sample the pond, we can be sure that for any n between 0 and 100, some scientist will observe that n of his 100 fish were big. I'm going to take the first one that sees 25 big and team him up with you, so you can compare notes." (I don't think it matters much whether infinitely many scientists do this or just 3^^^3.)

Okay. So Al goes and does his sample. He pulls out 75 big fish and becomes nearly certain that 3/4 of the fish are big. Afterwards, a guy named Bob comes to him and tells him he was sent by Charlie. Bob says he randomly sampled 100 fish, 25 of which were big. They exchange ALL of their information.

Question: How confident should each of them be that 3/4 of the fish are big?

Natural answer: Al should remain nearly certain that ¾ of the fish are big. He knew in advance that someone like Bob was certain to talk to him regardless of what proportion of fish were big. So he shouldn't be the least bit impressed after talking to Bob.

But what about Bob? What should he think? At first glance, you might think he should be 50/50, since 50% of the fish he knows about have been big and his access to Al's observations wasn't subject to a selection effect. But that can't be right, because then he would just be agreeing to disagree with Al! (This would be especially puzzling, since they have ALL the same information, having shared everything.) So maybe Bob should just agree with Al: he should be nearly certain that ¾ of the fish are big.

But that's a bit odd. It isn't terribly clear why Bob should discount all of his observations, since they don't seem to be subject to any observation selection effect; at least from his perspective, his observations were a genuine random sample.

Things get weirder if we consider a variant of the case.

VARIANT: as before, but Charlie has a similar conversation with Bob. Only this time, he tells him he's going to introduce Bob to someone who observed exactly 75 of 100 fish to be big.

New Question: Now what should Bob and Al think?

Here, things get really weird. By the reasoning that led to the Natural Answer above, Al should be nearly certain that ¾ are big and Bob should be nearly certain that ¼ are big. But that can't be right. They would just be agreeing to disagree! (Which would be especially puzzling, since they have ALL the same information.) The idea that they should favor one hypothesis in particular is also disconcerting, given the symmetry of the case. Should they both be 50/50?

Here's where I'd especially appreciate enlightenment:

1. If Bob should defer to Al in the original case, why? Can someone walk me through the calculations that lead to this?

2. If Bob should not defer to Al in the original case, is that because Al should change his mind? If so, what is wrong with the reasoning in the Natural Answer? If not, how can they agree to disagree?

3. If Bob should defer to Al in the original case, why not in the symmetrical variant?

4. What credence should they have in the symmetrical variant?

5. Can anyone refer me to some info on observation selection effects that could be applied here?

Comment author: Vladimir_M 03 July 2010 09:46:22PM *  6 points [-]

First, let's calculate the concrete probability numbers. If we are to trust this calculator, the probability of finding exactly 75 big fish in a sample of a hundred from a pond where 75% of the fish are big is approximately 0.09, while getting the same number in a sample from a 25% big pond has a probability on the order of 10^-25. The same numbers hold in the reverse situation, of course.

Now, Al and Bob have to consider two possible scenarios:

  1. The fish are 75% big, Al got the decently probable 75/100 sample, but Bob happened to be the first scientist who happened to get the extremely improbable 25/100 sample, and there were likely 10^(twenty-something) or so scientists sampling before Bob.

  2. The fish are 25% big, Al got the extremely improbable 75/100 big sample, while Bob got the decently probable 25/100 sample. This means that Bob is probably among the first few scientists who have sampled the pond.

So, let's look at it from a frequentist perspective: if we repeat this game many times, what will be the proportion of occurrences in which each scenario takes place?

Here we need an additional critical piece of information: how exactly was Bob's place in the sequence of scientists determined? At this point, an infinite number of scientists will give us lots of headache, so let's assume it's some large finite number N_sci, and Bob's place in the sequence is determined by a random draw with probabilities uniformly distributed over all places in the sequence. And here we get an important intermediate result: assuming that at least one scientist gets to sample 25/100, the probability for Bob to be the first to sample 25/100 is independent of the actual composition of the pond! Think of it by means of a card-drawing analogy. If you're in a group of 52 people whose names are repeatedly called out in random order to draw from a deck of cards, the proportion of drawings in which you get to be the first one to draw the ace of spades will always be 1/52, regardless of whether it's a normal deck or a non-standard one with multiple aces of spades, or even a deck of 52 such aces!

Now compute the following probabilities:

P1 = p(75% big fish) * p(Al samples 75/100 | 75% big fish) * p(Bob gets to be the first to sample 25/100)
~ 0.5 * 0.09 * 1/N_sci

P2 = p(25% big fish) * p(Al samples 75/100 | 25% big fish) * p(Bob gets to be the first to sample 25/100)
~ 0.5 * 10^-25 * 1/N_sci

(We ignore the finite, but presumably negligible probabilities that no scientist samples 25/100 in either case; these can be made arbitrarily low by increasing N_sci.)

Therefore, we have P1 >> P2, i.e. the overwhelming majority of meetings between Al and Bob -- which are by themselves extremely rare, since Al usually meets someone from the other (N_sci-1) scientists -- happen under the first scenario, where Al gets a sample closely matching the actual ratio.
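As a numerical sanity check of the P1 >> P2 claim (my own sketch, not part of Vladimir_M's comment; N_sci is an assumed finite number of scientists and the 1/N_sci factor cancels out of the posterior):

```python
from scipy.stats import binom

p_al_given_mostly_big = binom.pmf(75, 100, 0.75)    # ~0.092
p_al_given_mostly_small = binom.pmf(75, 100, 0.25)  # ~2e-25

N_sci = 10 ** 30  # assumed number of scientists

P1 = 0.5 * p_al_given_mostly_big * (1 / N_sci)
P2 = 0.5 * p_al_given_mostly_small * (1 / N_sci)

# Probability that the pond is 75% big, given that the Al/Bob meeting happened:
print(P1 / (P1 + P2))  # ~1 - 2e-24, so Bob should indeed defer to Al
```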

Now, you say:

It isn't terribly clear why Bob should discount all of his observations, since they don't seem to subject to any observation selection effect; at least from his perspective, his observations were a genuine random sample.

Not really, when you consider repeating the experiment. For the overwhelming majority of repetitions, Bob will get results close to the actual ratio, and on rare occasions he'll get extreme outlier samples. Those repetitions in which he gets summoned to meet with Al, however, are not a representative sample of his measurements! The criteria for when he gets to meet with Al are biased towards including a greater proportion of his improbable 25/100 outlier results.

As for this:

VARIANT: as before, but Charlie has a similar conversation with Bob. Only this time, he tells him he's going to introduce Bob to someone who observed exactly 75 of 100 fish to be big.

I don't think this is a well defined scenario. Answers will depend on the exact process by which this second observer gets selected. (Just like in the preceding discussion, the answer would be different if e.g. Bob had been always assigned the same place in the sequence of scientists.)

Comment author: Blueberry 03 July 2010 05:38:55PM 3 points [-]

Interesting problem!

(This would be especially puzzling, since they have ALL the same information, having shared everything.)

It isn't terribly clear why Bob should discount all of his observations, since they don't seem to be subject to any observation selection effect; at least from his perspective, his observations were a genuine random sample.

I think these two statements are inconsistent. If Bob is as certain as Al that Bob was picked specifically for his result, then they do have the same information, and they should both discount Bob's observations to the same degree for that reason. If Bob doesn't trust Al completely, they don't have the same information. Bob doesn't know for sure that Charlie told Al about the selection. From his point of view, Al could be lying.

VARIANT: as before, but Charlie has a similar conversation with Bob. Only this time, he tells him he's going to introduce Bob to someone who observed exactly 75 of 100 fish to be big.

If Charlie tells both of them they were both selected, they have the same information (that both their observations were selected for that purpose, and thus give them no information) and they can only decide based on their priors about Charlie stocking the pond.

If each of them only knows the other was selected and they both trust the other one's statements, same thing. But if each puts more trust in Charlie than in the other, then they don't have the same information.

Comment author: RobinZ 03 July 2010 06:10:16PM 2 points [-]

One key observation is that Al made his observation after being told that he would meet someone who made a particular observation - specifically, the first person to make that specific observation, Bob. This makes Al and Bob special in different ways:

  • Al is special because he has been selected to meet Bob regardless of what he observes. Therefore his data is genuinely uncorrelated with how he was selected for the meeting.
  • Bob is special because he has been selected to meet Al because of the specific data he observes. More precisely, because he will be the first to obtain that specific result. Therefore his result has been selected, and he is only at the meeting because he happens to be the first one to get that result.

In the original case, Bob's result is effectively a lottery ticket - when he finds out from Al the circumstances of the meeting, he can simply follow the Natural Answer himself and conclude that his results were unlikely.

In the modified case, assuming perfect symmetry in all relevant aspects, they can conclude that an astronomically unlikely event has occurred and they have no net information about the contents of the pond.

Comment author: JGWeissman 03 July 2010 06:22:49PM 1 point [-]

From Bob's perspective, he was more likely to be chosen as the one to talk to Al if there are fewer scientists who observed exactly 25 big fish, which would happen if there are more big fish. So Bob should update on the evidence of being chosen.

Comment author: GreenRoot 06 July 2010 03:58:18PM 7 points [-]

Does anybody know what is depicted in the little image named "mini-landscape.gif" at the bottom of each top level post, or why it appears there?

Comment author: Leonhart 03 July 2010 09:58:34PM 7 points [-]

I can't remember if this has come up before...

Currently the Sequences are mostly as-imported from OB, including all the comments, which are flat and voteless as per the old mechanism.

Given that the Sequences are functioning as our main corpus for teaching newcomers, should we consider doing some comment topiary on at least the most-read articles? Specifically, I wonder if an appropriate thread structure could be inferred from context; we could also vote the comments up or down to make the useful-in-hindsight stuff more salient. There's a lot of great stuff in there, but IIRC some that is less good as well. Not that we should actually get rid of any of it, of course.

Having said that, I'm already thinking of reasons that this is a bad idea, but I'm throwing it out anyway. Any thoughts? Should we be treating the Sequences as a time capsule or a living textbook? (I think that those phrases have roughly equal vague positive affect :)

Comment author: RobinZ 04 July 2010 02:35:16AM 5 points [-]

Voting is highly recommended - please do, and feel free to reply to comments with additional commentary as well. Otherwise I'd say leave them be.

Comment author: JamesAndrix 25 July 2010 08:47:08PM 2 points [-]

Also related: A lot of the Sequences show marks of their origin on Overcoming Bias that could be confusing to someone who lands on that article:

Example: "Since this is an econblog... " in http://lesswrong.com/lw/j3/science_as_curiositystopper/

I think some kind of editorial note is in order here, if not a rewrite.

Comment author: JamesAndrix 05 July 2010 06:46:24AM 2 points [-]

Alternatively, we could repost/revisit the sequences on a schedule, and let the new posts build fresh comments.

Or even better, try to cover the same topics from a different perspective.

Comment author: gwern 05 July 2010 08:10:33AM 8 points [-]

I've suggested in the past that we use the old posts as filler; that is, if X days go by without something new making it to the front page, the next oldest item gets promoted instead.

Even if we collectively have nothing to say that is completely new, we likely have interesting things to say about old stuff - even if only linking it forward to newer stuff.

Comment author: gwern 06 July 2010 08:00:13AM 2 points [-]

So, from the 7 upboats, I take it that people in general approve of this idea. What's next? What do we do to make this a reality?

Looking back at an old post from OB (I think), like http://lesswrong.com/lw/od/37_ways_that_words_can_be_wrong/ I don't see any option to promote it to the front page. I thought I had enough karma to promote other peoples' articles, but it looks like I may be wrong about this. Is it even currently technically possible to promote old articles?

Comment author: JohannesDahlstrom 03 July 2010 09:11:56AM *  7 points [-]

http://www.badscience.net/2010/07/yeah-well-you-can-prove-anything-with-science/

Priming people with scientific data that contradicts a particular established belief of theirs will actually make them question the utility of science in general. So in such a near-mode situation people actually seem to bite the bullet and avoid compartmentalization in their world-view.

From a rationality point of view, is it better to be inconsistent than consistently wrong?

There may be status effects in play, of course: reporting glaringly inconsistent views to those smarty-pants boffin types just may not seem a very good idea.

Comment author: cupholder 04 July 2010 08:11:18AM 2 points [-]

See also 'crank magnetism.'

I wonder if this counts as evidence for my heuristic of judging how seriously to take someone's belief on a complicated scientific subject by looking to see if they get the right answer on easier scientific questions.

Comment author: Yoreth 02 July 2010 07:11:38AM *  7 points [-]

Long ago I read a book that asked the question “Why is there something rather than nothing?” Contemplating this question, I asked “What if there really is nothing?” Eventually I concluded that there really isn’t – reality is just fiction as seen from the inside.

Much later, I learned that this idea had a name: modal realism. After I read some about David Lewis’s views on the subject, it became clear to me that this was obviously, even trivially, correct, but since all the other worlds are causally unconnected, it doesn't matter at all for day-to-day life. Except as a means of dissolving the initial vexing question, it was pointless, I thought, to dwell on this topic any more.

Later on I learned about the Cold War and the nuclear arms race and the fears of nuclear annihilation. Apparently, people thought this was a very real danger, to the point of building bomb shelters in their backyards. And yet somehow we survived, and not a single bomb was dropped. In light of this, I thought, “What a bunch of hype this all is. You doomsayers cried wolf for decades; why should I worry now?”

But all of that happened before I was born.

If modal realism is correct,* then for all I know there was a nuclear holocaust in most world-lines; it’s just that I never existed there at all. Hence I cannot use the fact of my existence as evidence against the plausibility of existential threats, any more than we can observe life on Earth and thereby conclude that life is common throughout the universe.

(*Even setting aside MWI, which of course only strengthens the point.)

Strange how abstract ideas come back to bite you. So, should I worry now?

Comment author: cousin_it 02 July 2010 07:17:45AM *  5 points [-]

If you think doom is very probable and we only survived due to the anthropic principle, then you should expect doom any day now, and every passing day without incident should weaken your faith in the anthropic explanation.

If you think all possible worlds exist, then you should expect our small bubble of ordered existence to erupt into chaos any day now, because way more copies of it are contained in chaotic worlds than in ordered ones. Every day you spend without spontaneously turning into a pheasant should weaken your faith in the multiverse.

(These arguments are not standard LW fare, but I've floated them here before and they seem to work okay.)

Comment author: JoshuaZ 02 July 2010 12:30:02PM 5 points [-]

If you think all possible worlds exist, then you should expect our small bubble of ordered existence to erupt into chaos any day now, because way more copies of it are contained in chaotic worlds than in ordered ones. Every day you spend without spontaneously turning into a pheasant should weaken your faith in the multiverse.

This depends on which level of the Tegmark classification you are talking about. Level III, for example, quantum MWI, gives very low probabilities for things like turning into a pheasant, since those events, while possible, have tiny chances of occurring. Level IV, the ultimate ensemble, which seems to be the main emphasis of the poster above, may have your argument as a valid rebuttal; but since level IV requires consistency, it would require a much better understanding of what consistent rule systems look like. And it may be that the vast majority of those universes don't have observers, so we would actually need to look at consistent rule systems with observers. Without a lot more information, it is very hard to estimate the expected probabilities of weird events in a level IV setting.

Comment author: cousin_it 02 July 2010 07:10:01PM 5 points [-]

since level IV requires consistency, it would require a much better understanding of what consistent rule systems look like

Wha? Any sequence of observations can be embedded in a consistent system that "hardcodes" it.

Comment author: Vladimir_Nesov 02 July 2010 12:08:46PM 3 points [-]

If you think all possible worlds exist, then you should expect our small bubble of ordered existence to erupt into chaos any day now

Not if you interpret your preference about those worlds as assigning most of them low probability, so that only the ordered ones matter.

Comment deleted 05 July 2010 10:24:23AM *  [-]
Comment author: Mitchell_Porter 05 July 2010 11:25:53AM 3 points [-]

There are many momentous issues here.

First: I think a historical narrative can be constructed, according to which a future unexpected in, say, 1900 or even in 1950 slowly comes into view, in three stages each characterized by an extra increment of knowledge. The first increment is cryonics, the second increment is nanotechnology, and the third increment is superintelligence. This is a highly selective view; if you were telling the history of futurist visions in general, you would need to include biotechnology, robotics, space travel, nuclear power, even aviation, and many other things.

In any case, among all the visions of the future that exist out there, there is definitely one consisting of cryonics + nanotechnology + superintelligence. Cryonics is a path from the present to the future, nanotechnology will make the material world as pliable as the bits in a computer, and superintelligence guided by some utility function will rule over all things.

Among the questions one might want answered:

1) Is this an accurate vision of the future?

2) Why is it that so few people share this perspective, even now?

3) Is that a situation which ought to be changed, and if so, how could it be changed?

Question 1 is by far the most discussed.

Question 2 is mostly pondered by the few people who have answered 'yes' to question 1, and usually psychological answers are given. I think that a certain type of historical thinking could go a long way towards answering question 2, but it would have to be carried out with care, intelligence, and a will to objectivity.

This is what I have in mind: You can find various histories of the world which cover the period from 1960. Most of them will not mention Ettinger's book, or Eric Drexler's, or any of the movements to which they gave rise. To find a history which notices any of that, you will have to specialize, e.g. to a history of American technological subcultures, or a history of 20th-century futurological enthusiasms. An overkill history-based causal approach to question 2 would have a causal model of world history since 1960, a causal model of those small domains in which Ettinger and Drexler's publications had some impact, and finally it would seek to understand why the causal processes of the second sort remained invisible on the scale of the first.

Question 3 is also, intrinsically, a question which will mostly be of interest to the small group who have already answered 'yes' to question 1.

Comment author: cupholder 05 July 2010 11:32:40AM 2 points [-]

A good illustration of multiple discovery (not strictly 'discovery' in this case, but anyway) too:

While Ettinger was the first, most articulate, and most scientifically credible person to argue the idea of cryonics,[citation needed] he was not the only one. In 1962, Evan Cooper had authored a manuscript entitled Immortality, Scientifically, Physically, Now under the pseudonym "N. Durhing".[8] Cooper's book contained the same argument as did Ettinger's, but it lacked both scientific and technical rigor and was not of publication quality.[citation needed]

Comment author: JohannesDahlstrom 03 July 2010 10:46:00AM *  6 points [-]

I'm a bit surprised that nobody seems to have brought up The Salvation War yet. [ETA: direct links to first and second part]

It's a Web Original documentary-style techno-thriller, based around the premise that humans find out that a Judeo-Christian Heaven and (Dantean) Hell (and their denizens) actually exist, but it turns out there's nothing supernatural about them, just some previously-unknown/unapplied physics.

The work opens in medias res into a modern-day situation where Yahweh has finally gotten fed up with those hairless monkeys no longer being the blind obedient slaves of yore, making a Public Service Announcement that Heaven's gates are closed and Satan owns everyone's souls from now on.

When commanded to lie down and die, some actually do. The majority of humankind instead does the logical thing and unites to declare war on Heaven and Hell. Hilarity ensues.

The work is rather saturated with WarmFuzzies and AwesomeMoments appealing to the atheist/rationalist crowd, and features some very memorable characters. It's a work in progress, with the second part of the trilogy now nearing its finale.

Comment author: cousin_it 05 July 2010 01:46:28PM *  7 points [-]

Okay, I've read through the whole thing so far.

This is not rationalist fiction. This is standard war porn, paperback thriller stuff. Many many technical descriptions of guns, rockets, military vehicles, etc. Throughout the story there's never any real conflict, just the American military (with help from the rest of the world) steamrolling everything, and the denizens of Heaven and Hell admiring the American way of life. It was well-written enough to hold my attention like a can of Pringles would, but I don't feel enriched by reading it.

Comment author: NancyLebovitz 05 July 2010 03:58:44PM 2 points [-]

I've only read about a chapter and a half, and may not read any more of it, but there's one small rationalist aspect worthy of note -- the author has a very solid grasp of the idea that machines need maintenance.

Comment author: cousin_it 03 July 2010 07:09:24PM 3 points [-]

Why did you link to TV Tropes instead of the thing itself?

Comment author: NancyLebovitz 26 July 2010 02:58:07AM 5 points [-]

A few years after I became an assistant professor, I realized the key thing a scientist needs is an excuse. Not a prediction. Not a theory. Not a concept. Not a hunch. Not a method. Just an excuse — an excuse to do something, which in my case meant an excuse to do a rat experiment. If you do something, you are likely to learn something, even if your reason for action was silly. The alchemists wanted gold so they did something. Fine. Gold was their excuse. Their activities produced useful knowledge, even though those activities were motivated by beliefs we now think silly. I’d like to think none of my self-experimentation was based on silly ideas but, silly or not, it often paid off in unexpected ways. At one point I tested the idea that standing more would cause weight loss. Even as I was doing it I thought the premise highly unlikely. Yet this led me to discover that standing a lot improved my sleep.

Seth Roberts

I'm not sure he's right about this, but I'm not sure he's wrong, either. What do you think?

Comment author: [deleted] 06 July 2010 05:14:17PM 5 points [-]

Poking around on Cosma Shalizi's website, I found this long, somewhat technical argument for why the general intelligence factor, g, doesn't exist.

The main thrust is that g is an artifact of hierarchical factor analysis, and that whenever you have groups of variables that have positive correlations between them, a general factor will always appear that explains a fair amount of the variance, whether it actually exists or not.

I'm not convinced, mainly because it strikes me as unlikely that an error of this type would persist for so long, and because even his conception of intelligence as a large number of separate abilities would need some sort of high-level selection and sequencing function. But neither of those are particularly compelling reasons for disagreement - can anyone more familiar with the psychological/statistical territory shed some light?

Comment author: [deleted] 07 July 2010 02:54:46PM 10 points [-]

I pointed this out to my buddy who's a psychology doctoral student, his reply is below:

I don't know enough about g to say whether the people talking about it are falling prey to the general correlation between tests, but this phenomenon is pretty well-known to social science researchers.

I do know enough about CFA and EFA to tell you that this guy has an unreasonable boner for CFA. CFA doesn't test against truth, it tests against other models. Which means it only tells you whether the model you're looking at fits better than a comparator model. If that's a null model, that's not a particularly great line of analysis.

He pretty blatantly misrepresents this. And his criticisms of things like Big Five are pretty wild. Big Five, by its very nature, fits the correlations extremely well. The largest criticism of Big Five is that it's not theory-driven, but data-driven!

But my biggest beef has got to be him arguing that EFA is not a technique for determining causality. No shit. That is the very nature of EFA -- it's a technique for loading factors (which have no inherent "truth" to them by loading alone, and are highly subject to reification) in order to maximize variance explained. He doesn't need to argue this point for a million words. It's definitional.

So regardless of whether g exists or not, which I'm not really qualified to speak on, this guy is kind of a hugely misleading writer. MINUS FIVE SCIENCE POINTS TO HIM.

Comment author: satt 07 July 2010 11:28:49AM *  7 points [-]

But neither of those are particularly compelling reasons for disagreement - can anyone more familiar with the psychological/statistical territory shed some light?

Shalizi's most basic point — that factor analysis will generate a general factor for any bunch of sufficiently strongly correlated variables — is correct.

Here's a demo. The statistical analysis package R comes with some built-in datasets to play with. I skimmed through the list and picked out six monthly datasets (72 data points in each).

It's pretty unlikely that there's a single causal general factor that explains most of the variation in all six of these time series, especially as they're from mostly non-overlapping time intervals. They aren't even that well correlated with each other: the mean correlation between different time series is -0.10 with a std. dev. of 0.34. And yet, when I ask R's canned factor analysis routine to calculate a general factor for these six time series, that general factor explains 1/3 of their variance!

However, Shalizi's blog post covers a lot more ground than just this basic point, and it's difficult for me to work out exactly what he's trying to say, which in turn makes it difficult to say how correct he is overall. What does Shalizi mean specifically by calling g a myth? Does he think it is very unlikely to exist, or just that factor analysis is not good evidence for it? Who does he think is in error about its nature? I can think of one researcher in particular who stands out as just not getting it, but beyond that I'm just not sure.

Comment author: HughRistik 07 July 2010 06:00:15PM *  3 points [-]

In your example, we have no reason to privilege the hypothesis that there is an underlying causal factor behind that data. In the case of g, wouldn't its relationships to neurobiology be a reason to give a higher prior probability to the hypothesis that g is actually measuring something real? These results would seem surprising if g was merely a statistical "myth."

Comment author: satt 07 July 2010 07:14:30PM 7 points [-]

In the case of g, wouldn't its relationships to neurobiology be a reason to give a higher prior probability to the hypothesis that g is actually measuring something real?

The best evidence that g measures something real is that IQ tests are highly reliable, i.e. if you get your IQ or g assessed twice, there's a very good correlation between your first score and your second score. Something has to generate the covariance between retestings; that g & IQ also correlate with neurobiological variables is just icing on the cake.

To answer your question directly, g's neurobiological associations are further evidence that g measures something real, and I believe g does measure something real, though I am not sure what.

These results would seem surprising if g was merely a statistical "myth."

Shalizi is, somewhat confusingly, using the word "myth" to mean something like "g's role as a genuine physiological causal agent is exaggerated because factor analysis sucks for causal inference", rather than its normal meaning of "made up". Working with Shalizi's (not especially clear) meaning of the word "myth", then, it's not that surprising that g correlates with neurobiology, because it is measuring something — it's just not been proven to represent a single causal agent.

Personally I would've preferred Shalizi to use some word other than "myth" (maybe "construct") to avoid exactly this confusion: it sounds as if he's denying that g measures anything, but I don't believe that's his intent, nor what he actually believes. (Though I think there's a small but non-negligible chance I'm wrong about that.)

Comment author: RobinZ 07 July 2010 02:58:42PM 2 points [-]

By the way, welcome to Less Wrong! Feel free to introduce yourself on that thread!

If you haven't been reading through the Sequences already, there was a conversation last month about good, accessible introductory posts that has a bunch of links and links-to-links.

Comment author: satt 07 July 2010 03:29:08PM 2 points [-]

Thank you!

Comment author: [deleted] 07 July 2010 02:49:44PM 2 points [-]

From what I can gather, he's saying all other evidence points to a large number of highly specialized mental functions instead of one general intelligence factor, and that psychologists are making a basic error by not understanding how to apply and interpret the statistical tests they're using. It's the latter which I find particularly unlikely (not impossible though).

Comment author: gwern 03 April 2013 11:19:29PM 4 points [-]

Here is a useful post directly criticizing Shalizi's claims: http://humanvarieties.org/2013/04/03/is-psychometric-g-a-myth/

Comment author: cousin_it 06 July 2010 06:12:26PM *  7 points [-]

I think this is one of the few cases where Shalizi is wrong. (Not an easy thing to say, as I'm a big fan of his.)

In the second part of the article he generates synthetic "test scores" of people who have three thousand independent abilities - "facets of intelligence" that apply to different problems - and demonstrates that standard factor analysis still detects a strong single g-factor explaining most of the variance between people. From that he concludes that g is a "statistical artefact" and lacks "reality". This is exactly like saying the total weight of the rockpile "lacks reality" because the weights of individual rocks are independent variables.
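A rough Python re-creation of the simulation described above (my own sketch of the setup, not Shalizi's actual code; the test sizes are assumptions, and with these numbers the effect is milder than in his version, but the pattern is the same): even though every underlying ability is independent, the first factor of the test battery still carries far more variance than unrelated tests would.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_abilities, n_tests, abilities_per_test = 2000, 3000, 12, 500

# Every "facet of intelligence" is an independent standard normal.
abilities = rng.normal(size=(n_people, n_abilities))

# Each test score sums a random subset of abilities, so any two tests share
# roughly 500*500/3000 ~ 83 abilities and therefore correlate mildly.
tests = np.column_stack([
    abilities[:, rng.choice(n_abilities, abilities_per_test, replace=False)].sum(axis=1)
    for _ in range(n_tests)
])

corr = np.corrcoef(tests, rowvar=False)
first_eigenvalue = np.linalg.eigvalsh(corr)[-1]   # largest eigenvalue
print(first_eigenvalue / n_tests)
# Typically around 0.25: a single "general factor" soaks up roughly a quarter
# of the variance, versus 1/12 ~ 0.08 if the tests were genuinely unrelated.
```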

As for the reason why he is wrong, it's pretty clear: Shalizi is a Marxist (fo' real) and can't give an inch to those pesky racists. A sad sight, that.

Comment author: Vladimir_M 07 July 2010 04:56:36AM *  7 points [-]

cousin_it:

A sad sight, that.

Indeed. A while ago, I got intensely interested in these controversies over intelligence research, and after reading a whole pile of books and research papers, I got the impression that there is some awfully bad statistics being pushed by pretty much every side in the controversy, so at the end I was left skeptical towards all the major opposing positions (though to varying degrees). If there existed a book written by someone as smart and knowledgeable as Shalizi that would present a systematic, thorough, and unbiased analysis of this whole mess, I would gladly pay $1,000 for it. Alas, Shalizi has definitely let his ideology get the better of him this time.

He also wrote an interesting long post on the heritability of IQ, which is better, but still clearly slanted ideologically. I recommend reading it nevertheless, but to get a more accurate view of the whole issue, I recommend reading the excellent Making Sense of Heritability by Neven Sesardić alongside it.

Comment author: satt 07 July 2010 02:22:12PM 3 points [-]

If there existed a book written by someone as smart and knowledgeable as Shalizi that would present a systematic, thorough, and unbiased analysis of this whole mess, I would gladly pay $1,000 for it.

There is no such book (yet), but there are two books that cover the most controversial part of the mess that I'd recommend: Race Differences in Intelligence (1975) and Race, IQ and Jensen (1980). They are both systematic, thorough, and about as unbiased as one can reasonably expect on the subject of race & IQ. On the down side, they don't really cover other aspects of the IQ controversies, and they're three decades out of date. (That said, I personally think that few studies published since 1980 bear strongly on the race & IQ issue, so the books' age doesn't matter that much.)

Comment author: Vladimir_M 08 July 2010 08:17:18AM 3 points [-]

Yes, among the books on the race-IQ controversy that I've seen, I agree that these are the closest thing to an unbiased source. However, I disagree that nothing very significant has happened in the field since their publication -- although unfortunately, taken together, these new developments have led to an even greater overall confusion. I have in mind particularly the discovery of the Flynn effect and the Minnesota adoption study, which have made it even more difficult to argue coherently either for a hereditarian or an environmentalist theory the way it was done in the seventies.

Also, even these books fail to present a satisfactory treatment of some basic questions where a competent statistician should be able to clarify things fully, but horrible confusion has nevertheless persisted for decades. Here I refer primarily to the use of the regression to the mean as a basis for hereditarian arguments. From what I've seen, Jensen is still using such arguments as a major source of support for his positions, constantly replying to the existing superficial critiques with superficial counter-arguments, and I've never seen anyone giving this issue the full attention it deserves.

Comment author: Morendil 07 July 2010 06:28:53AM 3 points [-]

long post on the heritability of IQ, which is better, but still clearly slanted ideologically

OK, I'll bite. Can you point to specific parts of that post which are in error owing to ideologically motivated thinking?

Comment author: Vladimir_M 07 July 2010 07:29:34AM *  10 points [-]

Morendil:

Can you point to specific parts of that post which are in error owing to ideologically motivated thinking?

A piece of writing biased for ideological reasons doesn't even have to have any specific parts that can be shown to be in error per se. Enormous edifices of propaganda can be constructed -- and have been constructed many times in history -- based solely on the selection and arrangement of the presented facts and claims, which can all be technically true by themselves.

In areas that arouse strong ideological passions, all sorts of surveys and other works aimed at broad audiences can be expected to suffer from this sort of bias. For a non-expert reader, this problem can be recognized and overcome only by reading works written by people espousing different perspectives. That's why I recommend that people should read Shalizi's post on heritability, but also at least one more work addressing the same issues written by another very smart author who doesn't share the same ideological position. (And Sesardić's book is, to my knowledge, the best such reference about this topic.)

Instead of getting into a convoluted discussion of concrete points in Shalizi's article, I'll just conclude with the following remark. You can read Shalizi's article, conclude that it's the definitive word on the subject, and accept his view of the matter. But you can also read more widely on the topic, and see that his presentation is far from unbiased, even if you ultimately conclude that his basic points are correct. The relevant literature is easily accessible if you just have internet and library access.

Comment author: [deleted] 06 July 2010 07:26:05PM 2 points [-]

Your analogy is flawed, I think.

The weight of the rock pile is just what we call the sum of the weights of the rocks. It's just a definition; but the idea of general intelligence is more than a definition. If there were a real, biological thing called g, we would expect all kinds of abilities to be correlated. Intelligence would make you better at math and music and English. We would expect basically all cognitive abilities to be affected by g, because g is real -- it represents something like dendrite density, some actual intelligence-granting property.

People hypothesized that g is real because results of all kinds of cognitive tests are correlated. But what Shalizi showed is that you can generate the same correlations if you let test scores depend on three thousand uncorrelated abilities. You can get the same results as the IQ advocates even when absolutely no single factor determines different abilities.

Sure, your old g will correlate with multiple abilities -- hell, you could let g = "test score" and that would correlate with all the abilities -- but that would be meaningless. If size and location determine the price of a house, you don't declare that there is some factor that causes both large size and desirable location!

Comment author: Vladimir_M 07 July 2010 05:46:27AM *  8 points [-]

SarahC:

But what Shalizi showed is that you can generate the same correlations if you let test scores depend on three thousand uncorrelated abilities. You can get the same results as the IQ advocates even when absolutely no single factor determines different abilities.

Just to be clear, this is not an original idea by Shalizi, but the well known "sampling theory" of general intelligence first proposed by Godfrey Thomson almost a century ago. Shalizi states this very clearly in the post, and credits Thomson with the idea. However, for whatever reason, he fails to mention the very extensive discussions of this theory in the existing literature, and writes as if Thomson's theory had been ignored ever since, which definitely doesn't represent the actual situation accurately.

In a recent paper by van der Maas et al., which presents an extremely interesting novel theory of correlations that give rise to g (and which Shalizi links to at one point), the authors write:

Thorndike (1927) and Thomson (1951) proposed one such alternative mechanism, namely, sampling. In this sampling theory, carrying out cognitive tasks requires the use of many lower order uncorrelated modules or neural processes (so-called bonds). They hypothesized that the samples of modules or bonds used for different cognitive tests partly overlap, causing a positive correlation between the test scores. In this view, the positive manifold is due to a measurement problem in the sense that it is very difficult to obtain independent measures of the lower order processes. Jensen (1998) and Eysenck (1987) identified three problems with this sampling theory. First, whereas some complex mental tests, as predicted by sampling theory, highly load on the g factor, some very narrowly defined tests also display high g loadings. Second, some seemingly completely unrelated tests, such as visual and memory scan tasks, are consistently highly correlated, whereas related tests, such as forward and backward digit span, are only modestly correlated. Third, in some cases brain damage leads to very specific impairments, whereas sampling theory predicts general impairments. These three facts are difficult to explain with sampling theory, which as a consequence has not gained much acceptance. Thus, the g explanation remains very dominant in the current literature (see Jensen, 1998, p. 107).

Note that I take no position here about whether these criticisms of the sampling theory are correct or not. However, I think this quote clearly demonstrates that an attempt to write off g by merely invoking the sampling theory is not a constructive contribution to the discussion.

I would also add that if someone managed to construct multiple tests of mental ability that would sample disjoint sets of Thomsonesque underlying abilities and thus fail to give rise to g, it would be considered a tremendous breakthrough. Yet, despite the strong incentive to achieve this, nobody who has tried so far has succeeded. This evidence is far from conclusive, but far from insignificant either.

Comment author: satt 07 July 2010 03:44:35PM 2 points [-]

I think Shalizi isn't too far off the mark in writing "as if Thomson's theory had been ignored". Although a few psychologists & psychometricians have acknowledged Thomson's sampling model, in everyday practice it's generally ignored. There are far more papers out there that fit g-oriented factor models as a matter of course than those that try to fit a Thomson-style model. Admittedly, there is a very good reason for that — Thomson-style models would be massively underspecified on the datasets available to psychologists, so it's not practical to fit them — but that doesn't change the fact that a g-based model is the go-to choice for the everyday psychologist.

There's an interesting analogy here to Shalizi's post about IQ's heritability, now I think about it. Shalizi writes it as if psychologists and behaviour geneticists don't care about gene-environment correlation, gene-environment interaction, nonlinearities, there not really being such a thing as "the" heritability of IQ, and so on. One could object that this isn't true — there are plenty of papers out there concerned with these complexities — but on the other hand, although the textbooks pay lip service to them, researchers often resort to fitting models that ignore these speedbumps. The reason for this is the same as in the case of Thomson's model: given the data available to scientists, models that accounted for these effects would usually be ruinously underspecified. So they make do.

Comment author: Vladimir_M 08 July 2010 08:37:49AM *  4 points [-]

However, it seems to me that the fatal problem of the sampling theory is that nobody has ever managed to figure out a way to sample disjoint sets of these hypothetical uncorrelated modules. If all practically useful mental abilities and all the tests successfully predicting them always sample some particular subset of these modules, then we might as well look at that subset as a unified entity that represents the causal factor behind g, since its elements operate together as a group in all relevant cases.

Or is there some additional issue here that I'm not taking into account?

Comment author: satt 08 July 2010 07:26:03PM 8 points [-]

I can't immediately think of any additional issue. It's more that I don't see the lack of well-known disjoint sets of uncorrelated cognitive modules as a fatal problem for Thomson's theory, merely weak disconfirming evidence. This is because I assign a relatively low probability to psychologists detecting tests that sample disjoint sets of modules even if they exist.

For example, I can think of a situation where psychologists & psychometricians have missed a similar phenomenon: negatively correlated cognitive tests. I know of a couple of examples which I found only because the mathematician Warren D. Smith describes them in his paper "Mathematical definition of 'intelligence' (and consequences)". The paper's about the general goal of coming up with universal definitions of and ways to measure intelligence, but in the middle of it is a polemical/sceptical summary of research into g & IQ.

Smith went through a correlation matrix for 57 tests given to 240 people, published by Thurstone in 1938, and saw that the 3 most negative of the 1596 intercorrelations were between these pairs of tests:

  • "100-word vocabulary test // Recognize pictures of hand as Right/Left" (correlation = -0.22)
  • "Find lots of synonyms of a given word // Decide whether 2 pictures of a national flag are relatively mirrored or not" (correlation = -0.16)
  • "Describe somebody in writing: score=# words used // figure recognition test: decide which numbers in a list of drawings of abstract figures are ones you saw in a previously shown list" (correlation = -0.12)

In Smith's words: "This seems too much to be a coincidence!" Smith then went to the 60-item correlation matrix for 710 schoolchildren published by Thurstone & Thurstone in 1941 and did the same, discovering that

the three most negative [correlations], with values -0.161, -0.152, and -0.138 respectively, are the pairwise correlations of the performance on the "scattered Xs" test (circle the Xs in a random scattering of letters) with these three tests: (a) Sentence completion ... (b) Reading comprehension II ... (c) Reading comprehension I ... Again, it is difficult to believe this also is a coincidence!

The existence of two pairs of negatively correlated cognitive skills leads me to increase my prior for the existence of uncorrelated cognitive skills.

Also, the way psychologists often analyze test batteries makes it harder to spot disjoint sets of uncorrelated modules. Suppose we have a 3-test battery, where test 1 samples uncorrelated modules A, B, C, D & E, test 2 samples F, G, H, I & J, and test 3 samples C, D, E, F & G. If we administer the battery to a few thousand people and extract a g from the results, as is standard practice, then by construction the resulting g is going to correlate with scores on tests 1 & 2, although we know they sample non-overlapping sets of modules. (IQ, being a weighted average of test/module scores, will also correlate with all of the tests.) A lot of psychologists would interpret that as evidence against tests 1 & 2 measuring distinct mental abilities, even though we see there's an alternative explanation.
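
To make that toy battery concrete, here's a quick simulation (mine, purely illustrative): tests 1 and 2 share no modules and come out essentially uncorrelated with each other, yet both correlate with the first factor extracted from the battery.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Ten uncorrelated "modules" A..J.
A, B, C, D, E, F, G, H, I, J = rng.normal(size=(n, 10)).T

test1 = A + B + C + D + E           # samples modules A-E
test2 = F + G + H + I + J           # samples modules F-J, disjoint from test 1
test3 = C + D + E + F + G           # bridges the two sets

battery = np.column_stack([test1, test2, test3])
corr = np.corrcoef(battery, rowvar=False)
print("corr(test1, test2):", round(corr[0, 1], 3))   # ~0 by construction

# "g" as the first principal component of the standardized battery.
_, eigvecs = np.linalg.eigh(corr)
v = eigvecs[:, -1]
if v.sum() < 0:
    v = -v                           # eigenvector sign is arbitrary; fix it
g = ((battery - battery.mean(0)) / battery.std(0)) @ v
for i in range(3):
    print(f"corr(g, test{i + 1}):", round(np.corrcoef(g, battery[:, i])[0, 1], 3))
```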

Even if we did find an index of intelligence that didn't correlate with IQ/g, would we count it as such? Duckworth & Seligman discovered that in a sample of 164 schoolchildren, a composite measure of self-discipline predicted GPA significantly better than IQ, and self-discipline didn't correlate significantly with IQ. Does self-discipline now count as an independent intellectual ability? I'd lean towards saying it doesn't, but I doubt I could justify being dogmatic about that; it's surely a cognitive ability in the term's broadest sense.

Comment author: Vladimir_M 09 July 2010 08:12:25AM *  4 points [-]

satt:

For example, I can think of a situation where psychologists & psychometricians have missed a similar phenomenon: negatively correlated cognitive tests. I know of a couple of examples which I found only because the mathematician Warren D. Smith describes them in his paper "Mathematical definition of 'intelligence' (and consequences)".

That's an extremely interesting reference, thanks for the link! This is exactly the kind of approach that this area desperately needs: no-nonsense scrutiny by someone with a strong math background and without an ideological agenda.

David Hilbert allegedly once quipped that physics is too important to be left to physicists; the way things are, it seems to me that psychometrics should definitely not be left to psychologists. That they haven't immediately rushed to explore further these findings by Smith is an extremely damning fact about the intellectual standards in the field.

Duckworth & Seligman discovered that in a sample of 164 schoolchildren, a composite measure of self-discipline predicted GPA significantly better than IQ, and self-discipline didn't correlate significantly with IQ. Does self-discipline now count as an independent intellectual ability?

Wouldn't this closely correspond to the Big Five "conscientiousness" trait? (Which the paper apparently doesn't mention at all?!) From what I've seen, even among the biggest fans of IQ, it is generally recognized that conscientiousness is at least similarly important as general intelligence in predicting success and performance.

Comment author: Douglas_Knight 10 July 2010 09:46:19PM 3 points [-]

I haven't looked at Smith yet, but the quote looks like parody to me. Since you seem to take it seriously, I'll respond. Awfully specific tests defying the predictions looks like data mining to me. I predict that these negative correlations are not replicable. The first seems to be the claim that verbal ability is not correlated with spatial ability, but this is a well-tested claim. As Shalizi mentions, psychometricians do look for separate skills and these are commonly accepted components. I wouldn't be terribly surprised if there were ones they completely missed, but these two are popular and positively correlated. The second example is a little more promising: maybe that scattered Xs test is independent of verbal ability, even though it looks like other skills that are not, but I doubt it.

With respect to self-discipline, I think you're experiencing some kind of halo effect. Not every positive mental trait should be called intelligence. Self-discipline is just not what people mean by intelligence. I knew that conscientiousness predicted GPAs, but I'd never heard such a strong claim. But it is true that a lot of people dismiss conscientiousness (and GPA) in favor of intelligence, and they seem to be making an error (or being risk-seeking).

Comment author: satt 11 July 2010 06:03:38PM *  2 points [-]

I haven't looked at Smith yet, but the quote looks like parody to me. Since you seem to take it seriously, I'll respond.

Once you read the relevant passage in context, I anticipate you will agree with me that Smith is serious. Take this paragraph from before the passage I quoted from:

Further, let us return to Gould's criticism that due to "validation" of most other highly used IQ tests and subtests, Spearman's g was forced to appear to exist from then on, regardless of whether it actually did. In view of this ... probably the only place we can look in the literature to find data truly capable of refuting or confirming Spearman, is data from the early days, before too much "validation" occurred, but not so early on that Spearman's atrocious experimental and statistical practices were repeated.

The prime candidate I have been able to find for such data is Thurstone's [205] "primary mental abilities" dataset published in 1938.

Smith then presents the example from Thurstone's 1938 data.

Awfully specific tests defying the predictions looks like data mining to me. I predict that these negative correlations are not replicable.

I'd be inclined to agree if the 3 most negative correlations in the dataset had come from very different pairs of tests, but the fact that they come from sets of subtests that one would expect to tap similar narrow abilities suggests they're not just statistical noise.

The first seems to be the claim that verbal ability is not correlated with spatial ability, but this is a well-tested claim. As Shalizi mentions, psychometricians do look for separate skills and these are commonly accepted components. I wouldn't be terribly surprised if there were ones they completely missed, but these two are popular and positively correlated.

Smith himself does not appear to make that claim; he presents his two examples merely as demonstrations that not all mental ability scores positively correlate. I think it's reasonable to package the 3 verbal subtests he mentions as strongly loading on verbal ability, but it's not clear to me that the 3 other subtests he pairs them with are strong measures of "spatial ability"; two of them look like they tap a more specific ability to handle mental mirror images, and the third's a visual memory test.

Even if it transpires that the 3 subtests all tap substantially into spatial ability, they needn't necessarily correlate positively with specific measures of verbal ability, even though verbal ability correlates with spatial ability.

With respect to self-discipline, I think you're experiencing some kind of halo effect. Not every positive mental trait should be called intelligence. Self-discipline is just not what people mean by intelligence.

I'm tempted to agree but I'm not sure such a strong generalization is defensible. Take a list of psychologists' definitions of intelligence. IMO self-discipline plausibly makes sense as a component of intelligence under definitions 1, 7, 8, 13, 14, 23, 25, 26, 27, 28, 32, 33 & 34, which adds up to 37% of the list of definitions. A good few psychologists appear to include self-discipline as a facet of intelligence.

Comment author: [deleted] 06 July 2010 07:30:31PM 3 points [-]

"All of this, of course, is completely compatible with IQ having some ability, when plugged into a linear regression, to predict things like college grades or salaries or the odds of being arrested by age 30. (This predictive ability is vastly less than many people would lead you to believe [cf.], but I'm happy to give them that point for the sake of argument.) This would still be true if I introduced a broader mens sana in corpore sano score, which combined IQ tests, physical fitness tests, and (to really return to the classical roots of Western civilization) rated hot-or-not sexiness. Indeed, since all these things predict success in life (of one form or another), and are all more or less positively correlated, I would guess that MSICS scores would do an even better job than IQ scores. I could even attribute them all to a single factor, a (for arete), and start treating it as a real causal variable. By that point, however, I'd be doing something so obviously dumb that I'd be accused of unfair parody and arguing against caricatures and straw-men."

This is the point here. There's a difference between coming up with linear combinations and positing real, physiological causes.

Comment author: cousin_it 06 July 2010 07:52:38PM *  8 points [-]

My beef isn't with Shalizi's reasoning, which is correct. I disagree with his text connotationally. Calling something a "myth" because it isn't a causal factor and you happen to study causal factors is misleading. Most people who use g don't need it to be a genuine causal factor; a predictive factor is enough for most uses, as long as we can't actually modify dendrite density in living humans or something like that.

Comment author: nhamann 01 July 2010 11:26:35PM 5 points [-]

This seems extremely pertinent for LW: a paper by Andrew Gelman and Cosma Shalizi. Abstract:

A substantial school in the philosophy of science identifies Bayesian inference with inductive inference and even rationality as such, and seems to be strengthened by the rise and practical success of Bayesian statistics. We argue that the most successful forms of Bayesian statistics do not actually support that particular philosophy but rather accord much better with sophisticated forms of hypothetico-deductivism. We examine the actual role played by prior distributions in Bayesian models, and the crucial aspects of model checking and model revision, which fall outside the scope of Bayesian confirmation theory. We draw on the literature on the consistency of Bayesian updating and also on our experience of applied work in social science.

I'm still reading it so I don't have anything to say about it, and I'm not very statistics-savvy so I doubt I'll have much to say about it after I read it, but I thought others here would find it an interesting read.

I stole this from a post by mjgeddes over in the OB open thread for July (Aside: mjgeddes, why all the hate? Where's the love, brotha?)

Comment author: cousin_it 02 July 2010 07:02:18AM 2 points [-]

steven0461 already posted this to the previous Open Thread and we had a nice little talk.

Comment author: TraditionalRationali 02 July 2010 05:18:03AM *  2 points [-]

I wrote a backlink to here from OB. I am not yet expert enough to evaluate this myself, but I do think the question mjgeddes asks is important and interesting. As an active (although low-level) rationalist, I think it is important to try, at least to some extent, to follow what expert philosophers of science actually find out about how we can obtain reasonably reliable knowledge. The dominant theory of how science proceeds seems to be the hypothetico-deductive model, described somewhat informally. No formalised model of the scientific process seems so far to have withstood the serious criticism raised in the philosophy-of-science community. "Bayesianism" seems to be a serious candidate for such a formalised model, but it apparently still needs further development if it is to answer all the serious criticism. The recent article by Gelman and Shalizi is of course just the latest in a tradition of Bayesian critique. A classic article is Glymour's "Why I am Not a Bayesian" (also in the reference list of Gelman and Shalizi). That is from 1980, so probably a lot has happened since then. I myself am not up to date with most of the development, but it seems an important topic to discuss here on Less Wrong, which seems to be quite Bayesian-oriented.

Comment author: Cyan 02 July 2010 02:56:38AM *  2 points [-]

mjgeddes, why all the hate?

ETA: Never mind. I got my crackpots confused.

Original text was:

mjgeddes was once publicly dissed by Eliezer Yudkowsky on OB (can't find the link now, but it was a pretty harsh display of contempt). Since then, he has often bashed Bayesian induction, presumably in an effort to undercut EY's world view and thereby hurt EY as badly as he himself was hurt.

Comment author: Unnamed 01 July 2010 11:00:56PM 5 points [-]

Has anyone continued to pursue the Craigslist charity idea that was discussed back in February, or did that just fizzle away? With stakes that high and a non-negligible chance of success, it seemed promising enough for some people to devote some serious attention to it.

Comment author: Kevin 01 July 2010 11:18:55PM *  10 points [-]

Thanks for asking! I also really don't want this to fizzle away.

It is still being pursued by myself, Michael Vassar, and Michael GR via back channels rather than what I outlined in that post and it is indeed getting serious attention, but I don't expect us to have meaningful results for at least a year. I will make a Less Wrong post as soon as there is anything the public at large can do -- in the meanwhile, I respectfully ask that you or others do not start your own Craigslist charity group, as it may hurt our efforts at moving forward with this.

ETA: Successfully pulling off this Craigslist thing has big overlaps with solving optimal philanthropy in general.

Comment author: Kevin 08 July 2010 01:38:59AM 4 points [-]

Conway's Game of Life in HTML 5

http://sixfoottallrabbit.co.uk/gameoflife/

Comment author: Wei_Dai 06 July 2010 11:43:24AM 4 points [-]

I wish there were an area of science that gives reductionist explanations of morality, that is, the detailed contents of our current moral values and norms. One example that came up earlier was monogamy - why do all modern industrialized countries have monogamy as a social norm?

The thing that's puzzling me now is egalitarianism. As Carl Shulman pointed out, the problem that CEV has with people being able to cheaply copy themselves in the future is shared with democracy and other political and ethical systems that are based on equal treatment or rights of all individuals within a society. Before trying to propose alternatives, I'd like to understand how we came to value such equality in the first place.

Comment author: michaelkeenan 06 July 2010 12:00:51PM 4 points [-]

I wish there were an area of science that gives reductionist explanations of morality, that is, the detailed contents of our current moral values and norms. One example that came up earlier was monogamy - why do all modern industrialized countries have monogamy as a social norm?

I'm currently reading The Moral Animal by Robert Wright, because it was recommended by, among others, Eliezer. I'm summarizing the chapters online as I read them. The fifth chapter, noting that more human societies have been polygynous than have been monogamous, examines why monogamy is popular today; you might want to check it out.

As for the wider question of reductionist explanations of morality, I'm a fan of the research of moral psychologist Jonathan Haidt (New York Times article, very readable paper).

Comment author: beriukay 17 July 2010 12:57:05PM *  3 points [-]

I know this thread is a bit bloated already without me adding to the din, but I was hoping to get some assistance on page 11 of Pearl's Causality (I'm reading 2nd edition).

I've been following along and trying to work out the examples, and I'm hitting a road block when it comes to deriving the property of Decomposition using the given definition (X || Y | Z) iff P( x | y,z ) = P( x | z ), and the basic axioms of probability theory. Part of my problem comes because I haven't been able to meaningfully define the 'YW' in (X || YW | Z), and how that translates into P( ). My best guess was that it is a union operation, but then if they aren't disjoint we wouldn't be using the axioms defined earlier in the book. I doubt someone as smart as Pearl would be sloppy in that way, so it has to be something I am overlooking.

I've been googling variations of the terms on the page, as well as trying to get derivations from Dawid, Spohn, and all the other sources in the footnote, but they all pretty much say the same thing, which is slightly unhelpful. Help would be appreciated.

Edit: It appears I failed at approximating the symbol used in the book. Hopefully that isn't distracting. It should look like the symbol used for orthogonality/perpendicularity, except with a double bar in the vertical.
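
For reference, here is a sketch of one way the derivation can go, assuming (as I gather Pearl intends) that YW denotes the union of the variable sets Y and W, i.e. the joint variable (Y, W), so that (X || YW | Z) reads P( x | y,w,z ) = P( x | z ) whenever P( y,w,z ) > 0:

```latex
% Decomposition: (X || YW | Z)  implies  (X || Y | Z).
% Premise: P(x | y, w, z) = P(x | z) for all y, w, z with P(y, w, z) > 0.
\begin{aligned}
P(x \mid y, z) &= \sum_w P(x, w \mid y, z)                       \\
               &= \sum_w P(x \mid y, w, z)\, P(w \mid y, z)      \\
               &= \sum_w P(x \mid z)\, P(w \mid y, z)            \\
               &= P(x \mid z) \sum_w P(w \mid y, z) \;=\; P(x \mid z).
\end{aligned}
```

On this reading the "union" is a union of sets of variables, not of events, so overlap between Y and W causes no trouble: a shared variable simply appears once in the conditioning set.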

Comment author: mstevens 07 July 2010 03:36:00PM 3 points [-]

Something I've been pondering recently:

This site appears to have two related goals:

a) How to be more rational yourself
b) How to promote rationality in others

Some situations appear to trigger a conflict between these two goals - for example, you might wish to persuade someone they're wrong. You could either make a reasoned, rational argument as to why they're wrong, or a more rhetorical, emotional argument that might convince many but doesn't actually justify your position.

One might be more effective in the short term, but you might think the rational argument preferable as a long term education project, for example.

I don't really have an answer here, I'm just interested in the conflict and what people think.

Comment author: apophenia 05 July 2010 11:41:48PM 3 points [-]

I have begun a design for a general computer tool to calculate utilities. To give a concrete example, you give it a sentence like

I would prefer X1 amount of money in Y1 months to X2 in Y2 months.

Then give it reasonable bounds for X and Y, some simple additional information (e.g. you always prefer more money to less), and let it interview some people. It'll plot a utility function for each person, and you can check the fit of various models (e.g. exponential discounting, no discounting, hyperbolic discounting).

My original goals were to:

  • Empirically check the hyperbolic discounting claim.
  • Determine the best-priced value meal at Arby's.

However, I lost interest without further motivation. Given that this is of presumed interest to Less Wrong, I propose the following: If someone offers to sponsor me (give money to me on completion of the computer program), I'll work on the project. Or, if enough people bug me, I'll probably do it for no money. I would prefer only one of these two methods, to see which works better. Anybody who wants to bug me / pay me money, please respond in a comment.
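
For what it's worth, the model-checking step needs very little code. Here is a rough, purely illustrative sketch (the data format, numbers, and parameter names are mine, not part of any actual tool): fit exponential and hyperbolic discount curves to one person's indifference points and compare the residuals.

```python
import numpy as np
from scipy.optimize import curve_fit

# Each pair is (delay in months, indifference value as a fraction of the
# immediate amount); e.g. "indifferent between $100 now and $150 in 12
# months" gives (12, 100/150).  Made-up numbers for illustration only.
delays = np.array([1, 3, 6, 12, 24, 48], dtype=float)
values = np.array([0.95, 0.88, 0.80, 0.67, 0.52, 0.35])

def exponential(t, k):
    return np.exp(-k * t)       # exponential discounting

def hyperbolic(t, k):
    return 1.0 / (1.0 + k * t)  # hyperbolic discounting

for name, model in [("exponential", exponential), ("hyperbolic", hyperbolic)]:
    (k,), _ = curve_fit(model, delays, values, p0=[0.05])
    sse = np.sum((values - model(delays, k)) ** 2)
    print(f"{name}: k = {k:.3f}, sum of squared errors = {sse:.4f}")
```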

Comment author: Alexandros 04 July 2010 12:37:51PM *  3 points [-]

I know Argumentum ad populum does not work, and I know Arguments from authority do not work, but perhaps they can be combined into something more potent:

Can anyone recall a hypothesis that had been supported by a significant subset of the lay population, consistently rejected by the scientific elites, and turned out to be correct?

It seems belief in creationism has this structure: the lower you go in education level, the more common the belief. I wonder whether this alone can be used as evidence against this 'theory' and others like it.

Comment author: wedrifid 04 July 2010 10:03:00AM 3 points [-]

We have recently had a discussion on whether the raw drive for status seeking benefits society. This link seems all too appropriate (or, well, at least apt.)

Comment author: NancyLebovitz 04 July 2010 12:06:12AM 3 points [-]

The comments on the Methods of Rationality thread are heading towards 500. Might this be time for a new thread?

Comment author: RobinZ 04 July 2010 12:30:13AM 2 points [-]

That sounds like a reasonable criterion.

Comment author: utilitymonster 03 July 2010 02:13:39PM *  3 points [-]

Is there a principled reason to worry about being in a simulation but not worry about being a Boltzmann brain?

Here are very similar arguments:

  • If posthumans run ancestor simulations, most of the people in the actual world with your subjective experiences will be sims.

  • If two beings exist in one world and have the same subjective experiences, your probability that you are one should equal your probability that you are the other.

  • Therefore, if posthumans run ancestor simulations, you are probably a sim.

vs.

  • If our current model of cosmology is correct, most of the beings in the history of the universe with your subjective experiences will be Boltzmann brains.

  • If two beings exist in one world and have the same subjective experiences, your probability that you are one should equal your probability that you are the other.

  • Therefore, if our current model of cosmology is correct, you are probably a Boltzmann brain.

Expanding your evidence from your present experiences to all the experiences you've had doesn't help. There will still be lots more Boltzmann brains that last for as long as you've had experiences, having experiences just like yours. Most plausible ways of expanding your evidence have similar effects.

I suppose you could try arguing that the Boltzmann brain scenario, but not the simulation scenario, is self-defeating. In the Boltzmann scenario, your reasons for accepting the theory (results of various experiments, etc.) are no good, since none of it really happened. In the simulation scenario, you really did see those results; they were just realized in a funny sort of way that you didn't expect. It would be nice if the relevance of this argument were better spelled out and cashed out in a plausible Bayesian principle.

edited for format

Comment author: Nisan 03 July 2010 02:42:41PM *  3 points [-]

Is there really a cosmology that says that most beings with my subjective experiences are Boltzmann brains? It seems to me that in a finite universe, most beings will not be Boltzmann brains. And in an infinite universe, it's not clear what "most" means.

Comment author: utilitymonster 03 July 2010 04:12:52PM *  4 points [-]

I gathered this from a talk by Sean Carroll that I attended, and it was supposed to be a consequence of the standard picture. All the Boltzmann brains come up in the way distant future, after thermal equilibrium, as random fluctuations. Carroll regarded this as a defect of the normal approach, and used this as a launching point to speculate about a different model.

I wish I had a more precise reference, but this isn't my area and I only heard this one talk. But I think this issue is discussed in his book From Eternity to Here. Here's a blogpost that, I believe, faithfully summarizes the relevant part of the talk. The normal solution to Boltzmann brains is to add a past hypothesis. Here is the key part where the post discusses the benefits and shortcomings of this approach:

Solution: Albert adds a Past Hypothesis (PAST), which says roughly that the universe started in very low entropy state (much lower than this one). So the objective probability that this is the lowest entropy state of the universe is 0—meaning we can’t be Boltzmann brains. As a bonus, we get an explanation of the direction of time, why ice cubes melt, why we can cause things to happen in the future and not the past, and how we have records of the past and not the future: all these things get a very high objective probability.

But (Sean Carroll argues) this moves too fast: just adding the past hypothesis allows the universe to eventually reach thermal equilibrium. Once that happens (in about 10100 years) there will be an extremely long period (~10^10120 years) during which random fluctuations bring about all sorts of things, including our old enemies, Boltzmann brains. And there will be a lot of them. And some of them will have the same experiences we do.

The years there are missing some carats. Should be 10^100 and 10^10^120.

Comment author: Nisan 03 July 2010 08:55:25PM 2 points [-]

Oh I see. I... I'd forgotten about the future.

Comment author: utilitymonster 03 July 2010 04:20:44PM 2 points [-]

This is always hard with infinities. But I think it can be a mistake to worry about this too much.

A rough way of making the point would be this. Pick a freaking huge number of years, like 3^^^3. Look at our universe after it has been around for that many years. You can be pretty damn sure that most of the beings with evidence like yours are Boltzmann brains on the model in question.

Comment author: murat 02 July 2010 08:59:04AM *  3 points [-]

I have a few questions.

1) What's "Bayescraft"? I don't recall seeing this word elsewhere. I haven't seen a definition on LW wiki either.

2) Why do some people capitalize some words here? Like "Traditional Rationality" and whatnot.

Comment author: Morendil 02 July 2010 09:21:01AM *  4 points [-]

To me "Bayescraft" has the connotation of a particular mental attitude, one inspired by Eliezer Yudkowsky's fusion of the ev-psych, heuristics-and-biases literature with E.T. Jaynes' idiosyncratic take on "Bayesian probabilistic inference", and in particular the desiderata for an inference robot: take all relevant evidence into account, rather than filter evidence according to your ideological biases, and allow your judgement of a proposition's plausibility to move freely in the [0..1] range rather than seek all-or-nothing certainty in your belief.

Comment author: Nisan 02 July 2010 12:43:44PM 3 points [-]

Capitalized words are often technical terms. So "Traditional Rationality" refers to certain epistemic attitudes and methods which have, in the past, been called "rational" (a word which is several hundred years old). This frees up the lower-case word "rationality", which on this site is also a technical term.

Comment author: Oscar_Cunningham 02 July 2010 09:10:46AM 1 point [-]

Bayescraft is just a synonym for Rationality, with connotations of a) Bayes' theorem, since that's what epistemic rationality must be based on, and b) the notion that rationality is a skill which must be developed personally and as a group (see also: Martial Art of Rationality (oh look, more capitals!))

The capitals are just for emphasis of concepts that the writer thinks are fundamentally important.

Comment author: cousin_it 02 July 2010 07:41:52AM *  3 points [-]

A small koan on utility functions that "refer to the real world".

  1. Question to Clippy: would you agree to move into a simulation where you'd have all the paperclips you want?

  2. Question to humans: would you agree to all of humankind moving into a simulation where we would fulfill our CEV (at least, all terms of it that don't mention "not living in a simulation")?

In both cases assume you have mathematical proof that the simulation is indestructible and perfectly tamper-resistant.

Comment author: Kingreaper 02 July 2010 02:25:45PM 5 points [-]

Would the simulation allow us to exit, in order to perform further research on the nature of the external world?

If so, I would enter it. If not? Probably not. I do not want to live in a world where there are ultimate answers and you can go no further.

The fact that I may already live in one is just bloody irritating :p

Comment author: Alicorn 02 July 2010 07:40:55PM *  3 points [-]

If we move into the same simulation and can really interact with others, then I wouldn't mind the move at all. Apart from that, experiences are the important bit and simulations can have those.

Comment author: lsparrish 02 July 2010 12:20:22AM 3 points [-]

Paul Graham has written extensively on Startups and what is required. A highly focused team of 2-4 founders, who must be willing to admit when their business model or product is flawed, yet enthused enough about it to pour their energy into it.

Steve Blank has also written about the Customer Development process, which he sees as paralleling the Product Development cycle. The idea is to get empirical feedback by trying to sell your product from the get-go, as soon as you have something minimal but useful. Then you test it for scalability. Eventually you have strong empirical evidence to present to potential investors, aka "traction".

These strike me as good examples of applied rationality. I wonder what percentage of Less Wrong readers would succeed as startup founders?

Comment author: RichardKennaway 02 July 2010 07:20:19AM *  3 points [-]

These strike me as good examples of applied rationality. I wonder what percentage of Less Wrong readers would succeed as startup founders?

I wonder what percentage have ever tried?

Comment author: pjeby 02 July 2010 02:39:02PM *  2 points [-]

I wonder what percentage have ever tried?

That at least partly depends on what you define as a "startup". Graham's idea of one seems to be oriented towards "business that will expand and either be bought out by a major company or become one", vs. "enterprise that builds personal wealth for the founder(s)".

By Graham's criteria, Joel Spolsky's company, Fog Creek, would not have been considered a startup, for example, nor would any business I've ever personally run or been a shareholder of.

[Edit: I should say, "or been a 10%+ shareholder of"; after all, I've held shares in public companies, some of which were undoubtedly startups!]

Comment author: wedrifid 02 July 2010 03:02:31AM 2 points [-]

These strike me as good examples of applied rationality. I wonder what percentage of Less Wrong readers would succeed as startup founders?

I would not deviate too much from the prior (most would fail).

Comment author: [deleted] 07 July 2010 08:27:22PM 6 points [-]

Here are some assumptions one can make about how "intelligences" operate:

  1. An intelligent agent maintains a database of "beliefs"
  2. It has rules for altering this database according to its experiences.
  3. It has rules for making decisions based on the contents of this database.

and an assumption about what "rationality" means:

  4. Whether or not an agent is "rational" depends only on the rules it uses in 2. and 3.

I have two questions:

I think that these assumptions are implicit in most and maybe all of what this community writes about rationality, decision theory, and similar topics. Does anyone disagree? Or agree?

Have assertions 1-4, or something similar to them, been made explicit and defended or criticized anywhere on this website?

The background is that I've been kicking around the idea that a focus on "beliefs" is misleading when modeling intelligence or intelligent agents.

This is my first post, please tell me if I'm misusing any jargon.

Comment author: apophenia 02 July 2010 11:59:27AM 5 points [-]

The following is a story I wrote down so I could sleep. I don't think it's any good, but I posted it on the basis that, if that's true, it should quickly be voted down and vanish from sight.

one five eight nine eight eight eight nine nine eight SEVEN wait. why seven. seven is the nine thousandth deviation. update. simplest explanation. all ones. next explanation. all ones and one zero. next explanation. random ones and zeros with probability point seven nine nine seven repeating. next explanation pi. gap. next explanation. decimal pi with random errors according to poisson distribution converted to binary. next explanation. one seven one eight eight five two decimals of pi with random errors according to poisson distribution converted to binary followed by eight five nine zero one digits of reflexive code. current explanation--

"Eric, you've got to come over and look at this!" Jerry explained excitedly into the phone.

"It's not those damn notebooks again, is it? I've told you, I could just write a computer program and you'd have all your damn results for the last year inside a week," Eric explained sleepily for the umpteenth time.

"No, no. Well... yes. But this is something new, you've got to take a look," Jerry wheedled.

"What is it this time? I know, it can calculate pi with 99.9% percent accuracy, yadda yadda. We have pi to billions of decimal places with total accuracy, Jerry. You're fifty years too late."

"No, I've been trying something new. Come over." Jerry hung up the phone, clearly upset. Eric rubbed his eyes. Fifteen minutes peering at the crackpot notebooks and nodding appreciatively would sooth his friend's ego, he knew. And he was a good friend, if a little nuts. Eric took one last longing look at his bed and grabbed his house key.

"And you see this pattern? The ones that are nearly diagonal here?"

"Jerry, it's all a bunch of digits to me. Are you sure you didn't make a mistake?"

"I double check all my work, I don't want to go back too far when I make a mistake. I've explained the pattern twice already, Eric."

"I know, I know. But it's Saturday morning, I'm going to be a bit--let me get this straight. You decided to apply the algorithm to its old output."

"No, not its own output, that's mostly just pi. The whole pad."

"Jerry, you must have fifty of these things. There's no way you can--"

"Yeah, I didn't go very far. Besides, the scratch pads grow faster than the output as I work through the steps anyway."

"Okay, okay. So you run through these same steps with your scratch pad numbers, and you get correct predictions then too?"

"That's not the point!"

"Calm down, calm down. What's the point then?"

"The point is these patterns in the scratch work--"

"The memory?"

"Yeah, the memory."

"You know, if you'd just let me write a program, I--"

"No! It's too dangerous."

"Jerry, it's a math problem. What's it going to do, write pi at you? Anyway, I don't see this pattern..."

"Well, I do. And so then I wondered, what if I just fed it ones for the input? Just rewarded it no matter what it did?"

"Jerry, you'd just get random numbers. Garbage in, garbage out."

"That's the thing, they weren't random."

"Why the hell are you screwing around with these equations anyway? If you want to find patterns in the Bible or something... just joking! Oww, stop. I kid, kid!"

"But, I didn't get random numbers! I'm not just seeing things, take a look. You see here in the right hand column of memory? We get mostly zeros, but every once in a while there's a one or two."

"Okaaay?"

"And if you write those down we have 2212221..."

"Not very many threes?"

"Ha ha. It's the perfect numbers, Eric. I think I stumbled on some way of outputting the perfect numbers. Although the digits are getting further spaced apart, so I don't know how long it will stay faster than factoring."

"Huh. That's actually kinda cool, if they really are the perfect numbers. You have what, five or six so far? Let's keep feeding it ones and see what happens. Want me to write a program? I hear there's a cash prize for the larger ones."

"NO! I mean, no, that's fine, Eric. I'd prefer you not write a program for this, just in case."

"Geez, Jerry. You're so paranoid. Well, in that case can I help with the calculations by hand? I'd love to get my claim to fame somehow."

"Well... I guess that's okay. First, you copy this digit from here to here..."

Comment author: cousin_it 02 July 2010 02:34:23PM *  4 points [-]

Ooh, an LW-themed horror story. My humble opinion: it's awesome! This phrase was genius:

What's it going to do, write pi at you?

Moar please.

Comment author: Oscar_Cunningham 02 July 2010 02:45:33PM 3 points [-]

How does 2212221 represent perfect numbers?

Comment author: pjeby 02 July 2010 02:30:41PM 4 points [-]

Wait, is that the whole story? 'cause if so, I really don't get it. Where's the rest of it? What happens next? Is Jerry afraid that his algorithm is a self-improving AI or something?

Comment author: apophenia 02 July 2010 11:24:11PM 3 points [-]

Apparently my story is insufficiently explicit. The gag here is that the AI is sentient, and has tricked Jerry into feeding it only reward numbers.

Comment author: Sniffnoy 02 July 2010 11:53:32PM 5 points [-]

I'm going to second the idea that that isn't clear at all.

Comment author: SilasBarta 01 July 2010 09:28:47PM *  6 points [-]

Okay, here's something that could grow into an article, but it's just rambling at this point. I was planning this as a prelude to my ever-delayed "Explain yourself!" article, since it eases into some of the related social issues. Please tell me what you would want me to elaborate on given what I have so far.


Title: On Mechanizing Science (Epistemology?)

"Silas, there is no Bayesian ‘revival’ in science. There is one amongst people who wish to reduce science to a mechanical procedure." – Gene Callahan

“It is not possible … to construct a system of thought that improves on common sense. … The great enemy of the reservationist is the automatist[,] who believes he can reduce or transcend reason. … And the most pernicious [of them] are algorithmists, who believe they have some universal algorithm which is a drop-in replacement for any and all cogitation.” – "Mencius Moldbug"

And I say: What?

Forget about the issue of how many Bayesians are out there – I’m interested in the other claim. There are two ways to read it, and I express those views here (with a bit of exaggeration):

View 1: “Trying to come up with a mechanical procedure for acquiring knowledge is futile, so you are foolish to pursue this approach. The remaining mysterious aspects of nature are so complex you will inevitably require a human to continually intervene to ‘tweak’ the procedure based on human judgment, making it no mechanical procedure at all.”

View 2: “How dare, how dare those people try to mechanize science! I want science to be about what my elite little cadre has collectively decided is real science. We want to exercise our own discretion, and we’re not going to let some Young Turk outsiders upstage us with their theories. They don’t ‘get’ real science. Real science is about humans, yes, humans making wise, reasoned judgments, in a social context, where expertise is recognized and a rewarded. A machine necessarily cannot do that, so don’t even try.”

View 1, I find respectable, even as I disagree with it.

View 2, I hold in utter contempt.

Comment author: Vladimir_M 02 July 2010 07:32:09AM *  8 points [-]

I think there is an additional interpretation that you're not taking into account, and an eminently reasonable one.

First, to clarify the easy question: unless you believe that there is something mysteriously uncomputable going on in the human brain, the question of whether science can be automated in principle is trivial. Obviously, all you'd need to do is to program a sufficiently sophisticated AI, and it will do automated science. That much is clear.

However, the more important question is -- what about our present abilities to automate science? By this I mean both the hypothetical methods we could try and the ones that have actually been tried in practice. Here, at the very least, a strong case can be made that the 20th century attempt to transform science into a bureaucratic enterprise that operates according to formal, automated procedures has largely been a failure. It has undoubtedly produced an endless stream of cargo-cult science that satisfies all these formal bureaucratic procedures, but is nevertheless worthless -- or worse. At the same time, it's unclear how much valid science is coming out except from those scientists who have maintained a high degree of purely informal and private enthusiasm for discovering truth (and perhaps also from those in highly practical applied fields where the cash worth of innovations provides a stringent reality check).

This is how I read Moldbug: in many important questions, we can only admit honestly that we still have no way to find answers backed by scientific evidence in any meaningful sense of the term, and we have to grapple with less reliable forms of reasoning. Yet, there is the widespread idea that if only the proper formal bureaucratic structures are established, we can get "science" to give us answers about whichever questions we find interesting, and we should guide our lives and policies according to the results of such "science." It's not hard to see how this situation can give birth to a diabolical network of perverse incentives, producing endless reams of cargo-cult scientific work published by prestigious outlets and venerated as "science" by the general public and the government.

The really scary prospect is that our system of government might lead us to a complete disaster guided by policy prescriptions coming from this perverted system that has, arguably, already become its integral part.

Comment author: SilasBarta 02 July 2010 04:24:38PM *  3 points [-]

Okay, thanks, that tells me what I was looking for: clarification of what it is I'm trying to refute, and what substantive reasons I have to disagree.

So "Moldbug" is pointing out that the attempt to make science into an algorithm has produced a lot of stuff that's worthless but adheres to the algorithm, and we can see this with common sense, however less accurate it might be.

The point I would make in response (and elaborate on in the upcoming article) is that this is no excuse not to look inside the black box that we call common sense and understand why it works, and what about it could be improved, while the Moldbug view asks that we not do it. As E. T. Jaynes says in chapter 1 of PT:LOS (Probability Theory: The Logic of Science), the question we should ask is: if we were going to make a robot that infers everything we should infer, what constraints would we place on it?

This exercise is not just some attempt to make robots "as good as humans"; rather, it reveals why that-which-we-call "common sense" works in the first place, and exposes more general principles of superior inference.

In short, I claim that we can have Level 3 understanding of our own common sense. That, contra Moldbug, we can go beyond just being able to produce its output (Level 1), but also know why we regard certain things as common sense but not others, and be able to explain why it works, for what domains, and why and where it doesn't work.

This could lead to a good article.

Comment author: Tyrrell_McAllister 08 July 2010 07:46:39PM 2 points [-]

View 2: “How dare, how dare those people try to mechanize science! . . .

The pithy reply would be that science already is mechanized. We just don't understand the mechanism yet.

Comment author: cupholder 03 July 2010 04:46:44AM *  2 points [-]

"Silas, there is no Bayesian ‘revival’ in science. There is one amongst people who wish to reduce science to a mechanical procedure." – Gene Callahan

Am I the only one who finds this extremely unlikely? So far as I know, Bayesian methods have become massively more popular in science over the last 50 years. (Count JSTOR hits for the word 'Bayesian,' for example, and watch the numbers shoot up over time!)

Comment author: TraditionalRationali 02 July 2010 05:47:00AM *  2 points [-]

That it should be possible to Algorithmize Science seems clear from the fact that the human brain can do science, and the human brain should be possible to describe algorithmically -- if not at a higher level, then at least, in principle, by quantum electrodynamics, which is the (known, and in principle computable) dynamics of the electrons and nuclei that are the building blocks of the brain. (If it were to be done in practice it would have to be done at a higher level, but as a proof of principle that argument should be enough.)

I guess, however, that what is actually meant is whether the scientific method itself could be formalised (algorithmized), so that science could be "mechanized" in a more direct way than building human-level AIs and then letting them learn and do science by the somewhat informal process used by human scientists today. That seems plausible, but it has still to be done and seems rather difficult. The philosophers of science are working on understanding the scientific process better and better, but they still seem to have a long way to go before an actually working algorithmic description is achieved. See also the discussion below on the recent article by Gelman and Shalizi criticizing Bayesianism.

EDIT "done at a lower level" changed to "done at a higher level"

Comment author: WrongBot 02 July 2010 03:45:49PM 2 points [-]

The scientific method is already a vague sort of algorithm, and I can see how it might be possible to mechanize many of the steps. The part that seems AGI-hard to me is the process of generating good hypotheses. Humans are incredibly good at plucking out reasonable hypotheses from the infinite search space that is available; that we are nevertheless so very often wrong says more about the difficulty of the problem than about our own abilities.

Comment author: NancyLebovitz 02 July 2010 03:44:39AM 2 points [-]

How hard do you think mechanizing science would be? It strikes me as being at least in the same class with natural language.

Comment author: PeerInfinity 29 July 2010 04:58:47AM 2 points [-]

an interesting site I stumbled across recently: http://youarenotsosmart.com/

They talk about some of the same biases we talk about here.

Comment author: SilasBarta 07 July 2010 09:01:41PM *  2 points [-]

Information theory challenge: A few posters have mentioned here that the average entropy of a character in English is about one bit. This carries an interesting implication: you should be able to create an interface using only two of the keyboard's keys, such that composing an English message requires just as many keystrokes, on average, as it does on a regular keyboard.

To do so, you'd have to exploit all the regularities of English to offer suggestions that save the user from having to specify individual letters. Most of the entropy is in the initial characters of a word or message, so you would probably spend more strokes on specifying those, but then make it up with some "autocomplete" feature for large portions of the message.

If that's too hard, it should be a lot easier to do a 3-input method, which only requires your message set to have an entropy of less than ~1.5 bits per character.

Just thought I'd point that out, as it might be something worth thinking about.
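
A minimal sketch of the kind of calculation involved (the bigram model and the toy text below are my own illustrative choices, not part of Silas's claim): estimate how many bits per character a simple predictive model assigns to a text. A two-key interface driven by such a model, arithmetic-coding style, would need roughly that many keypresses per character on average; a three-key interface buys log2(3) ≈ 1.58 bits per press.

    # Rough sketch: estimate bits/character under a character-bigram model.
    # This is an in-sample estimate on toy text, so it understates real English
    # entropy; serious estimates use large corpora and much better models.
    import math
    from collections import Counter, defaultdict

    def bits_per_char(text):
        pair_counts = defaultdict(Counter)
        for prev, cur in zip(text, text[1:]):
            pair_counts[prev][cur] += 1
        total_bits = 0.0
        for prev, cur in zip(text, text[1:]):
            counts = pair_counts[prev]
            total_bits -= math.log2(counts[cur] / sum(counts.values()))
        return total_bits / (len(text) - 1)

    sample = "the quick brown fox jumps over the lazy dog " * 50
    print(round(bits_per_char(sample), 2))  # roughly the presses/char an ideal 2-key UI would need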

Comment author: gwern 07 July 2010 11:59:45PM 3 points [-]

Already done; see Dasher and especially its Google Tech Talk.

It doesn't reach the 0.7-1 bit per character limit, of course, but then, according to the Hutter challenge no compression program (online or offline) has.

Comment author: SilasBarta 08 July 2010 02:16:41AM 2 points [-]

Wow, and Dasher was invented by David MacKay, author of the famous free textbook on information theory!

Comment author: Christian_Szegedy 07 July 2010 09:21:06PM 2 points [-]

This is already exploited on cell phones to some extent.

Comment deleted 05 July 2010 11:15:29AM *  [-]
Comment author: Douglas_Knight 06 July 2010 02:17:30AM *  3 points [-]

Why do you link to a blog, rather than an introduction or a summary? Is this to test whether we find it so silly that we don't look for their best arguments?

My impression is that antinatalists are highly verbal people who base their idea of morality on how people speak about morality, ignoring how people act. They get the idea that morality is about assigning blame and so feel compelled only to worry about bad acts, thus becoming strict negative utilitarians or rights-deontologists with very strict and uncommon rights. I am not moved by such moralities.

Maybe some make more factual claims, eg, that most lives are net negative or that reflective life would regret itself. These seem obviously false, but I don't see that they matter. These arguments should not have much impact on the actions of the utilitarians that they seem aimed at. They should build a superhuman intelligence to answer these questions and implement the best course of action. If human lives are not worth living, then other lives may be. If no lives are worth living, then a superintelligence can arrange for no lives to be led, while people evangelizing antinatalism aren't going to make a difference.

Incidentally, Eliezer sometimes seems to be an anti-human-natalist.

Comment author: Nisan 05 July 2010 11:50:12AM 3 points [-]

Here's one: I bet if you asked lots of people whether their birth was a good thing, most of them would say yes.

If it turns out that after sufficient reflection, people, on average, regard their birth as a bad thing, then this argument breaks down.

Comment deleted 05 July 2010 11:57:17AM [-]
Comment author: Nisan 05 July 2010 03:29:57PM 5 points [-]

If our contrarian position was as wrong as we think antinatalism is, would we realize?

If there was an argument for antinatalism that was capable of moving us, would we have seen it? Maybe not. A LessWrong post summarizing all of the good arguments for antinatalism would be a good idea.

Comment author: Leonhart 05 July 2010 03:05:24PM *  5 points [-]

I don't think antinatalism is silly, although I have not really tried to find problems with it yet. My current, not-fully-reflected position is that I would have preferred not to have existed (if that's indeed possible) but, given that I in fact exist, I do not want to die. I don't, right now, see screaming incoherency here, although I'm suspicious.

I would very much appreciate anyone who can point out faultlines for me to investigate. I may be missing something very obvious.

Comment author: JoshuaZ 05 July 2010 03:10:26PM *  4 points [-]

The reason I ask is that antinatalism is a contrarian position we think is silly, but has some smart supporters.

Do people here really think that antinatalism is silly? I disagree with the position (very strongly) but it isn't a view that I consider to be silly in the same way that I would consider say, most religious beliefs to be silly.

But keep in mind that having smart supporters is by no means a strong indication that a viewpoint is not silly. For example, Jonathan Sarfati is a prominent young earth creationist who before he became a YEC proponent was a productive chemist. He's also a highly ranked chess master. He's clearly a bright individual. Now, you might be able to argue that YECism has a higher proportion of people who aren't smart (There's some evidence to back this up. See for example this breakdown of GSS data and also this analysis. Note that the metric used in the first one, the GSS WORDSUM, is surprisingly robust under education levels by some measures so the first isn't just measuring a proxy for education.) That might function as a better indicator of silliness. But simply having smart supporters seems insufficient to conclude that a position is not silly.

It does, however, seem that on LW there's a common tendency to label beliefs silly when what is meant is "I assign a very low probability to this belief being correct" or "I don't understand how someone's mind could be so warped as to have this belief." Both of these are problematic, the second more so than the first, because different humans have different value systems. In this particular example, value systems that weight harm to others more heavily are more likely to support a coherent antinatalist position. In that regard, note that people are able to discuss things like paperclippers but seem to have more difficulty discussing value systems which are in many ways closer to their own. This may be simply because paperclipping is a simple moral system. It may also be because it is so far removed from their own moral systems that it becomes easier to map out in a consistent fashion, whereas something like antinatalism is close enough to their own moral systems that people conflate some of their own moral/ethical/value conclusions with those of the antinatalist, and this occurs subtly enough for them not to notice.

Comment author: cupholder 05 July 2010 08:21:00PM 2 points [-]

Do people here really think that antinatalism is silly?

A data point: I don't think antinatalism (as defined by Roko above - 'it is a bad thing to create people') is silly under every set of circumstances, but neither is it obviously true under all circumstances. If my standard of living is phenomenally awful, and I knew my child's life would be equally bad, it'd be bad to have a child. But if I were living it up, knew I could be a good parent, and wanted a kid, what would be so awful about having one?

Comment author: Blueberry 05 July 2010 08:26:28PM 4 points [-]

But if I were living it up, knew I could be a good parent, and wanted a kid, what would be so awful about having one?

That your child might experience a great deal of pain which you could prevent by not having it.

That your child might regret being born and wish you had made the other decision.

That you can be a good parent, raise a kid, and improve someone's life without having a kid (adopt).

That the world is already overpopulated and our natural resources are not infinite.

Comment author: cupholder 05 July 2010 08:54:22PM 1 point [-]

Points taken.

Let me restate what I mean more formally. Conditional on high living standards, high-quality parenting, and desire to raise a child, one can reasonably calculate that the expected utility (to myself, to the potential child and to others) of having the child is higher than the expected utility of not having a child. In which case I wouldn't think the antinatalism position has legs.

Comment author: Blueberry 06 July 2010 01:42:52AM 3 points [-]

one can reasonably calculate that the expected utility (to myself, to the potential child and to others) of having the child is higher than the expected utility of not having a child.

I'm not sure about this. It's most likely that anything your kid does in life will get done by someone else instead. There is also some evidence that having children decreases your happiness (though there may be other reasons to have kids).

But even if this is true, it's still not enough for antinatalism. Increasing total utility is not enough justification to create a life. The act of creation makes you responsible for the utility of the individual created, and you have a duty not to create an entity you have reason to think may have negative personal utility. (Strict utilitarians will disagree.)

Comment author: NancyLebovitz 05 July 2010 09:49:42PM 3 points [-]

I'd throw in considering how stable you think those high living standards are.

Comment author: RichardKennaway 05 July 2010 01:25:31PM *  2 points [-]

If our contrarian position was as wrong as we think antinatalism is, would we realize?

We have many contrarian positions, but antinatalism is one position. Personally, I think that some of the contrarian positions that some people advocate here are indeed silly.

Comment author: Kingreaper 05 July 2010 01:16:12PM *  4 points [-]

Even if antinatalism is true at present (I have no major opinion on the issue yet) it need not be true in all possible future scenarios.

In fact, should the human race shrink significantly [due to antinatalism perhaps], without societal collapse, the average utility of a human life should increase. I find it highly unlikely that even the maximum average utility is still less than zero.

Comment author: Jayson_Virissimo 07 July 2010 04:58:28AM *  2 points [-]

In fact, should the human race shrink significantly [due to antinatalism perhaps], without societal collapse, the average utility of a human life should increase.

Why shouldn't having a higher population lead to greater specialization of labor, economies of scale, greater gains from trade, and thus greater average utility?

Comment author: Kingreaper 07 July 2010 01:12:57PM *  0 points [-]

Resource limitations.

There is only a limited amount of any given resource available. Decreasing the number of people therefore increases the amount of resource available per person.

There is a point at which decreasing the population will begin decreasing average utility, but to me it seems nigh certain that that point is significantly below the current population.
I could be wrong, and if I am wrong I would like to know.

Do you feel that the current population is optimum, below optimum, or above optimum?

Comment author: cousin_it 05 July 2010 12:27:23PM *  3 points [-]

The antinatalist argument goes that humans suffer more than they have fun, therefore not living is better than living. Why don't they convert their loved ones to the same view and commit suicide together, then? Or seek out small isolated communities and bomb them for moral good.

I believe the answer to antinatalism is that pleasure != utility. Your life (and the lives of your hypothetical kids) could create net positive utility despite containing more suffering than joy. The "utility functions" or whatever else determines our actions contain terms that don't correspond to feelings of joy and sorrow, or are out of proportion with those feelings.

Comment author: Leonhart 05 July 2010 02:55:43PM *  2 points [-]

The suicide challenge is a non sequitur, because death is not equivalent to never having existed, unless you invent a method of timeless, all-Everett-branch suicide.

Comment author: cousin_it 05 July 2010 03:39:53PM *  3 points [-]

By the standard you propose, "never having existed" is also inadequate unless you invent a timeless, all-Everett-branch means of never having existed. Whatever kids an antinatalist can stop from existing in this branch may still exist in other branches.

Comment author: Kingreaper 05 July 2010 03:01:10PM 2 points [-]

Precisely.

If the utility of the first ten or fifteen years of life is extremely negative, and the utility of the rest slightly positive, then it can be logical to believe that not being born is better than being born, but suicide (after a certain age) is worse than either.

Comment author: orthonormal 06 July 2010 05:47:43AM *  4 points [-]

If the utility of the first ten or fifteen years of life is extremely negative

I think that's getting at a non-silly defense of antinatalism: what if the average experience of middle school and high school years is absolutely terrible, outweighing other large chunks of life experience, and adults have simply forgotten for the sake of their sanity?

I don't buy this, but it's not completely silly. (However, it suggests a better Third Alternative exists: applying the Geneva Convention to school social life.)

Comment author: gwern 06 July 2010 07:13:30AM 3 points [-]

adults have simply forgotten for the sake of their sanity?

not completely silly.

Quite right. Suicide rates spike in adolescence, go down, and only spike again in old age, don't they? Suicide is, I think, a good indicator that someone is having a bad life.

(Also, I've seen mentions on LW of studies that people raising kids are unhappier than if they were childless, but once the kids are older, they retrospectively think they were much happier than they actually were.)

Comment author: ocr-fork 29 July 2010 11:35:30PM *  6 points [-]

Quite right. Suicide rates spike in adolescence, go down, and only spike again in old age, don't they? Suicide is, I think, a good indicator that someone is having a bad life.

Suicide rates start at .5 in 100,000 for ages 5-14 and rise to about 15 in 100,000 for seniors.

Comment author: gwern 30 July 2010 04:27:50AM 4 points [-]

Interesting. From page 30, suicide rates increase monotonically in the 5 age groups up to and including 45-54 (peaking at 17.2 per 100,000), but then drop by about 3 to 14.5 (age 55-64), drop roughly another 2 for the 65-74 age bracket (12.6), and then rise again after 75 (15.9).

So, I was right that the rates increase again in old age, but wrong about when the first spike was.

Comment author: pjeby 30 July 2010 04:27:27PM 2 points [-]

So, I was right that the rates increase again in old age, but wrong about when the first spike was.

Unfortunately, the age brackets don't really tell you if there's a teenage spike, except that if there is one, it happens after age 14. That 9.9 could actually be a much higher level concentrated within a few years, if I understand correctly.

Comment author: Mass_Driver 06 July 2010 05:52:01AM 3 points [-]

Whenever anyone mentions how much it sucks to be a kid, I plug this article. It does suck, of course, but the suckage is a function of what our society is like, and not of something inherent about being thirteen years old.

Why Nerds Hate Grade School

Comment author: Mitchell_Porter 06 July 2010 02:55:56AM 2 points [-]

I have long wrestled with the idea of antinatalism, so I should have something to say here. Certainly there were periods in my life in which I thought that the creation of life is the supreme folly.

We all know that terrible things happen, things that should never happen to anyone. The simplest antinatalist argument of all is that any life you create will be at risk of such intolerably bad outcomes; and so, if you care, the very least you can do is not create new life. No new life, no possibility of awful outcomes in it, problem avoided! And it is very easy to elaborate this into a stinging critique of anyone who proposes that nonetheless one shouldn't take this seriously or absolutely (because most people are happy, most people don't commit suicide, etc). You intend to gamble with this new life you propose to create, simply because you hope that it won't turn out terribly? And this gamble you propose appears to be completely unnecessary - it's not as if people have children for the greater good. Etc.

A crude utilitarian way to moderate the absoluteness of this conclusion would be to say, well, surely some lives are worth creating, and it would make a lot of people sad to never have children, so we reluctantly say to the ones who would be really upset to forego reproduction, OK, if you insist... but for people who can take it, we could say: There is always something better that you could do with your life. Have the courage not to hide from the facts of your own existence in the boisterous distraction of naive new lives.

It is probably true that philanthropic antinatalists, like the ones at the blog to which you link, are people who have personally experienced some profound awfulness, and that is why they take human suffering with such deadly seriousness. It's not just an abstraction to them. For example, Jim Crawford (who runs that blog) was once almost killed in a sword attack, had his chest sliced open, and after they stitched him up, literally every breath was agonizing for a long time thereafter. An experience like that would sensitize you to the reality of things which luckier people would prefer not to think about.

Comment deleted 06 July 2010 10:47:56AM *  [-]
Comment author: Mitchell_Porter 07 July 2010 06:22:08AM 1 point [-]

I think that for you, a student of the singularity concept, to arrive at a considered and consistent opinion regarding antinatalism, you need to make some judgments regarding the quality of human life as it is right now, "pre-singularity".

Suppose there is no possibility of a singularity. Suppose the only option for humanity is life more or less as it is now - ageing, death, war, economic drudgery, etc, with the future the same as the past. Everyone who lives will die; most of them will drudge to stay alive. Do you still consider the creation of a human life justifiable?

Do you have any personal hopes attached to the singularity? Do you think, yes, it could be very bad, it could destroy us, that makes me anxious and affects what I do; but nonetheless, it could also be fantastic, and I derive meaning and hope from that fact?

If you are going to affirm the creation of human life under present conditions, but if you are also deriving hope from the anticipation of much better future conditions, then you may need to ask yourself how much of your toleration of the present derives from the background expectation of a better future.

It would be possible to have the attitude that life is already great and a good singularity would just make it better; or that the serious possibility of a bad singularity is enough for the idea to urgently command our attention; but it's also clear that there are people who either use singularity hope to sustain them in the present, or who have simply grown up with the concept and haven't yet run into difficulty.

I think the combination of transhumanism and antinatalism is actually a very natural one. Not at all an inevitable one; biotechnology, for example, is all about creating life. But if you think, for example, that the natural ageing process is intolerable, something no-one should have to experience, then probably you should be an antinatalist.

Comment author: red75 06 July 2010 04:44:15AM *  1 point [-]

Either antinatalism is futile in the long run, or it is an existential threat.

If we assume that antinatalism is rational, then in the long run it will lead to a reduction of the part of the human population that is capable of, or trained in, making rational decisions, thus making antinatalists' efforts futile. As we can see, the people who should be most susceptible to antinatalism don't even consider this option (en masse, at least). And given their circumstances they have a clear reason for that: every extra child makes it less likely that they will starve to death in old age, since more children mean more chances for the family to control more resources. It is a big prisoner's dilemma, where the defectors win.

Edit: Post-humans are not considered. They will have other means to acquire resources.

Edit: My point: antinatalism can be rational for individuals, but it cannot be rational for humankind to accept (even if it is universally true as antinatalists claim).

Comment author: multifoliaterose 04 July 2010 11:41:41PM 2 points [-]

Another reference request: Eliezer made a post about how it's ultimately incoherent to talk about how "A causes B" in the physical world because at root, everything is caused by the physical laws and initial conditions of the universe. But I don't remember what it is called. Does anybody else remember?

Comment author: Vladimir_Nesov 06 July 2010 09:50:55AM *  4 points [-]

It is coherent to talk about "A causes B"; on the contrary, it's a mistake to say that everything is caused by physical laws and that therefore you have no free will, for example (as if your actions don't cause anything). Of course, any given event won't normally have only one cause, but considering the causes of an event makes sense. See the posts on free will, and then the solution posts linked from there. The picture you were thinking about is probably from these posts.

Comment author: steven0461 04 July 2010 09:46:01PM 2 points [-]

We think of Aumann updating as updating upward if the other person's probability is higher than you thought it would be, or updating downward if the other person's probability is lower than you thought it would be. But sometimes it's the other way around. Example: there are blue urns that have mostly blue balls and some red balls, and red urns that have mostly red balls and some blue balls. Except on Opposite Day, when the urn colors are reversed. Opposite Day is rare, and if it's OD you might learn it's OD or you might not. A and B are given an urn and are trying to find out whether it's red. It's OD, which A knows but B doesn't. They both draw a few balls. Then A knows that if B draws red balls, B (not knowing it's OD) will estimate a high probability for red, and therefore A (knowing it's OD) should estimate a low probability for red, and vice versa. So this is a sense in which the intelligent estimate can be the inverse of the misguided one.

Another thought: suppose in the above example, there's a small chance (let's say equal to the chance that it's OD) that A is insane and will behave as if always knowing for sure it's OD. Then if we're back in the case where it actually is OD and A is sane, the estimates of A and B will remain substantially different forever. So taking this as an example it seems like even tiny failures of common knowledge of rationality can (in correspondingly improbable cases) cause big persistent disagreements between rational agents.

Is the reasoning here correct? Are the examples important in practice?
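
A quick numeric check of the first example (the specific numbers are my own assumed values, not from the comment): with a 5% prior on Opposite Day, urns whose balls match their color 70% of the time on normal days, and three red balls drawn, B's posterior for "red urn" rises to roughly 0.88 while A's drops to roughly 0.07; A, knowing it's OD, updates in the opposite direction from B.

    # Toy sketch of steven0461's urn example; all parameter values are assumptions.
    P_OD = 0.05          # prior probability of Opposite Day
    P_RED_URN = 0.5      # prior probability the urn is red
    P_MATCH = 0.7        # P(ball matches urn color) on a normal day; flipped on OD

    def p_red_ball(urn_is_red, opposite_day):
        match = (1 - P_MATCH) if opposite_day else P_MATCH
        return match if urn_is_red else 1 - match

    draws = [True, True, True]  # three red balls observed

    def likelihood(urn_is_red, opposite_day):
        p = 1.0
        for ball_is_red in draws:
            pr = p_red_ball(urn_is_red, opposite_day)
            p *= pr if ball_is_red else 1 - pr
        return p

    # A conditions on knowing it's Opposite Day.
    a_red = likelihood(True, True) * P_RED_URN
    a_blue = likelihood(False, True) * (1 - P_RED_URN)
    print("A's P(red urn):", round(a_red / (a_red + a_blue), 3))   # ~0.073

    # B doesn't know, so B marginalizes over whether it's Opposite Day.
    def b_marginal(urn_is_red):
        return ((1 - P_OD) * likelihood(urn_is_red, False)
                + P_OD * likelihood(urn_is_red, True))

    b_red = b_marginal(True) * P_RED_URN
    b_blue = b_marginal(False) * (1 - P_RED_URN)
    print("B's P(red urn):", round(b_red / (b_red + b_blue), 3))   # ~0.884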

Comment author: Emile 03 July 2010 10:17:37AM *  2 points [-]

I have some half-baked ideas about getting interesting information on lesswronger's political opinions.

My goal is to give everybody an "alien's eye" view of their opinions, something like "You hold position Foo on issue Bar, and justify it by the X books you read on Bar; but among the people in our sample who read X or more books on Bar, 75% hold position ~Foo, suggesting that you are likely to be overconfident".

Something like collecting:

  • your positions on various issues

  • your confidence in that position

  • how important various characteristics are at predicting correct opinions on that issue (intelligence, general education, reading on the issue, age ("general experience"), specific work or life experience with the issue, etc.)

  • How well you fare on those characteristics

  • Whether you expect to be above or below average (for LessWrong) on those characteristics

  • How many lesswrongers you expect will disagree with you on that issue

  • Whether you expect those who disagree with you to be above or below average on the various characteristics

  • How much you would be willing to change your mind if you saw surprising information

What data we could get from that

  • Are differences in opinion due to different "criteria for rightness" (book-knowledge vs. experience), to different "levels of knowledge" (Smart people believe A, stupid people believe B), or to something else ?

Problems with this approach:

  • Politics is the mind-killer. We may not want too much (or any) politics on LessWrong. If the data is collected anonymously, this may not be a huge problem.

  • It's easier to do data-mining etc. with multiple-choice questions rather than with open-ended questions (because two people never answer the same thing, so it leaves space to interpretation), but doing that correctly requires very good advance knowledge of what possible answers exist.

  • Questions would have to be veeery carefully phrased.

  • Ideally I would want confidence factors for all answers, but the end result may be too intimidating :P (And discourage people from answering, which makes a small sample size, which means questionable results).

I would certainly be interested in seeing the result of such a survey, but for now my idea is too rough to be actionable - any suggestions? Comments?
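
For what it's worth, the "alien's eye" summary itself is straightforward to compute once anonymized responses exist. Here is a rough sketch with invented field names and toy data, just to show the shape of the query:

    # Toy sketch of the "you hold Foo, but 75% of equally-well-read respondents
    # hold ~Foo" summary. Field names and data are invented for illustration.
    responses = [
        {"issue": "Bar", "position": "Foo", "books_read": 4},
        {"issue": "Bar", "position": "not-Foo", "books_read": 6},
        {"issue": "Bar", "position": "not-Foo", "books_read": 5},
        {"issue": "Bar", "position": "Foo", "books_read": 1},
    ]

    def peer_agreement(issue, position, books_read):
        """Fraction of respondents with at least as much reading who share the position."""
        peers = [r for r in responses
                 if r["issue"] == issue and r["books_read"] >= books_read]
        return sum(r["position"] == position for r in peers) / len(peers) if peers else None

    # "You hold Foo and read 4 books; among those who read 4 or more, only 33% agree."
    print(round(peer_agreement("Bar", "Foo", 4), 2))  # -> 0.33

The harder problems are the ones listed above (phrasing the questions and getting enough honest, anonymous responses), not the arithmetic.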

Comment author: Douglas_Knight 03 July 2010 07:51:58PM 2 points [-]
Comment author: JamesPfeiffer 02 July 2010 05:19:32PM 2 points [-]

I have been thinking about "holding off on proposing solutions." Can anyone comment on whether this is more about the social friction involved in rejecting someone's solution without injuring their pride, or more about the difficulty of getting an idea out of your head once it's there?

If it's mostly social, then I would expect the method to not be useful when used by a single person; and conversely. My anecdote is that I feel it's helped me when thinking solo, but this may be wishful thinking.

Comment author: Oscar_Cunningham 02 July 2010 05:28:36PM 2 points [-]

Definitely the latter: even when I'm on my own, any subsequent ideas after my first one tend to be variations on my first solution, unless I try extra hard to escape its grip.

Comment author: Cyan 07 July 2010 02:03:53AM *  3 points [-]

I love that on LW, feeding the trolls consists of writing well-argued and well-supported rebuttals.

Comment author: kpreid 07 July 2010 02:13:19AM 4 points [-]

This is not a distortion of the original meaning. “Feeding the trolls” is just giving them replies of any sort — especially if they're well-written, because you’re probably investing more effort than the troll.

Comment author: JoshuaZ 07 July 2010 02:08:58AM 2 points [-]

I don't think this is unique to LW at all. I've seen well-argued rebuttals to trolls labeled as feeding in many different contexts including Slashdot and the OOTS forum.

Comment author: Will_Newsome 09 July 2010 01:52:40AM 2 points [-]

So, probably like most everyone else here, I sometimes get complaints (mostly from my ex-girlfriend, you can always count on them to point out your flaws) that I'm too logical and rational and emotionless and I can't connect with people or understand them et cetera. Now, it's not like I'm actually particularly bad at these things for being as nerdy as I am, and my ex is a rather biased source of information, but it's true that I have a hard time coming across as... I suppose the adjective would be 'warm', or 'human'. I've attributed a lot of this to a) my always-seeking-outside-confirmation-of-competence-style narcissism, b) my overly precise (for most people, not here) speech patterns. (For instance, when my ex said I suck at understanding people, I asked "Why do you believe that?" instead of the simpler and less clinical-psychologist-sounding "How so?" or "How?" or what not.) and c) accidentally randomly bringing up terms like 'a priori' which apparently most people haven't heard. I think there's more low hanging fruit here, though. Tsuyoku naritai!

Has anyone else tackled these problems? It's not that I lack charisma - I've managed to pull off that insane/passionate/brilliant thing among my friends - but I do seem to lack the ability to really connect with people - even people I really care about. Do Less Wrongers experience similar problems? Any advice? Or meta-advice about how to learn hard-to-describe dispositions? I've noticed that consciously acting like I was Regina Spektor in one situation or Richard Feynman in another seems to help, for instance.

Comment author: wedrifid 09 July 2010 02:29:22AM *  6 points [-]

I suggest a lot of practice talking to non-nerds or nerds who aren't in their nerd mode. (And less time with your ex!)

A perfect form of practice is dance. Take swing dancing lessons, for example. That removes the possibility of using your overwhelming verbal fluency and persona of intellectual brilliance. It makes it far easier to activate that part that is sometimes called 'human' but perhaps more accurately called 'animal'. Once you master maintaining the social connection in a purely non-verbal setting adding in a verbal component yet maintaining the flow should be far simpler.

Comment author: Will_Newsome 09 July 2010 02:34:53AM *  2 points [-]

I suggest a lot of practice talking to non-nerds or nerds who aren't in their nerd mode.

Non-nerdy people who are interesting are surprisingly difficult to find, and I have a hard time connecting with the ones I do find, so I don't get much practice in. I'm guessing that the biggest demographic here would be artists (musicians). Being passionate about something abstract seems to be the common denominator.

(And less time with your ex!)

Ha, perhaps a good idea, but I enjoy the criticism. She points out flaws that I might have missed otherwise. I wonder if one could market themselves as a professional personality flaw detector or the like. I'd pay to see one.

Once you master maintaining the social connection in a purely non-verbal setting adding in a verbal component yet maintaining the flow should be far simpler.

Interesting, I had discounted dancing because of its nonverbality. Thanks for alerting me to my mistake!

Comment author: wedrifid 09 July 2010 03:45:56AM 4 points [-]

Interesting, I had discounted dancing because of its nonverbality. Thanks for alerting me to my mistake!

I was using very similar reasoning when I suggested "non nerds or nerds not presently in nerd mode". The key is hide the abstract discussion crutch!

Ha, perhaps a good idea, but I enjoy the criticism. She points out flaws that I might have missed otherwise. I wonder if one could market themselves as a professional personality flaw detector or the like. I'd pay to see one.

Friends who are willing to suggest improvements (Tsuyoku naritai) sincerely are valuable resources! If your ex is able to point out a flaw then perhaps you could ask her to lead you through an example of how to have a 'warm, human' interaction, showing you the difference between that and what you usually do? Mind you, it is still almost certainly better to listen to criticism from someone who has a vested interest in your improvement rather than your acknowledgement of flaws. Like, say, a current girlfriend. ;)

Comment author: WrongBot 09 July 2010 02:08:32AM 6 points [-]

"Fake it until you make it" is surprisingly good advice for this sort of thing. I had moderate self-esteem issues in my freshman year of college, so I consciously decided to pretend that I had very high self-esteem in every interaction I had outside of class. This may be one of those tricks that doesn't work for most people, but I found that using a song lyric (from a song I liked) as a mantra to recall my desired state of mind was incredibly helpful, and got into the habit of listening to that particular song before heading out to meet friends. (The National's "All The Wine" in this particular case. "I am a festival" was the mantra I used.)

That's in the same class of thing as acting like Regina Spektor or Feynman; if you act in a certain way consistently enough, your brain will learn that pattern and it will begin to feel more natural and less conscious. I don't worry about my self-esteem any more (in that direction, at least).

Comment author: Kevin 09 July 2010 06:52:09AM *  4 points [-]

b) my overly precise (for most people, not here) speech patterns

The kind of ultra-rational Bayesian linguistic patterns used around here would be considered obnoxiously intellectual and pretentious (and incomprehensible?) by most people. Practice mirroring the speech patterns of the people you are communicating with, and slip into rationalist talk when you need to win an argument about something important.

When I'm talking to street people, I say "man" a lot because it's something of a high honorific. Maybe in California I will need to start saying "dude", though man seems inherently more respectful.

Comment author: [deleted] 10 July 2010 04:48:30PM 3 points [-]

I think most people here have some sort of similar problem. Mine isn't being emotionless (ha!) but not knowing the right thing to say, putting my foot in my mouth, and so on. Occasionally coming across as a pedant, which is so embarrassing.

I may be getting better at it, though. One thing is: if you are a nerd (in the sense of passionate about something abstract) just roll with it. You will get along better with similar people. Your non-nerdy friends will know you're a nerd. I try to be as nice as possible so that when, inevitably, I say something clumsy or reveal that I'm ignorant of something basic, it's not taken too negatively. Nice but clueless is much better than arrogant.

And always wait for a cue from the other person to reveal something about yourself. Don't bring up politics unless he does; don't mention your interests unless he asks you; don't use long words unless he does.

I can't dance for shit, but various kinds of exercise are a good way to meet a broader spectrum of people.

Do I still feel like I'm mostly tolerated rather than liked? Yeah. It can be pretty depressing. But such is life.

As for dating -- the numbers are different from my perspective, of course, but so far I've found I'm not going to click really profoundly with guys who aren't intelligent. I don't mean that in a snobbish way, it's just a self-knowledge thing -- conversation is really fun for me, and I have more fun spending time with quick, talkative types. There's no point forcing yourself to be around people you don't enjoy.

Comment author: knb 09 July 2010 09:30:02AM 2 points [-]

In my experience, something as simple as adding a smile can transform a demeanor otherwise perceived as "cold" or "emotionless" to "laid-back" or "easy-going".

Comment author: JoshuaZ 09 July 2010 02:14:44AM 2 points [-]

Date nerdier people? In general, many nerdy rational individuals have a lot of trouble getting along with not-so-nerdy individuals. There's some danger that I'm other-optimizing, but I have trouble seeing how an educated rational individual would be able to date someone who thought that there was something wrong with using terms like "a priori." That's a common enough term, and if someone hears a term they don't know they should be happy to learn something. So maybe just date a different sort of person?

Comment author: Will_Newsome 09 July 2010 02:27:25AM *  1 point [-]

I wasn't talking mostly about dating, but I suppose that's an important subfield.

The topic you mention came up at the Singularity Institute Visiting Fellows house a few weeks ago. 3 or 4 guys, myself included, expressed a preference for girls who had specialized in some other area of life: gains from trade of specialized knowledge. And I just love explaining to a girl how big the universe is and how gold is formed in supernovas... most people can appreciate that, even if they see no need for using the word 'a priori'. I don't mean average intelligence, but one standard deviation above the mean. Maybe more; I tend to underestimate people. There was 1 person who was rather happy with his relationship with a girl who was very like him. However, the common theme was that people who had more dating experience consistently preferred less traditionally intelligent and more emotionally intelligent girls (I'm not using that term technically, by the way), whereas those with less dating experience had weaker preferences for girls who were like themselves. Those with more dating experience also seemed to put much more emphasis on the importance of attractiveness instead of e.g. intelligence or rationality. Not that you have to choose or anything, most of the time. I'm going to be so bold as to claim that most people with little dating experience who believe they would be happiest with a rationalist girlfriend should update on expected evidence and broaden their search criteria for potential mates.

As for preferences of women, I'm sorry, but the sample size was too small for me to see any trends. (To be fair this was a really informal discussion, not an official SIAI survey of course. :) )

Important addendum: I never actually checked to see if any of the guys in the conversation had dated women who were substantially more intelligent than average, and thus they might not have been making a fair comparison (imagining silly arguments about deism versus atheism or something). I myself have never dated a girl that was 3 sigma intelligent, for instance. I'm mostly drawing my comparison from fictional (imagined) evidence.

Comment author: JoshuaZ 09 July 2010 02:33:50AM 2 points [-]

I've dated females who were clearly less intelligent than I am, some about the same, and some clearly more intelligent. I'm pretty sure the last category was the most enjoyable (I'm pretty sure that rational, intelligent, nerdy females don't want to date guys who aren't as smart as they are either). There may be issues with sample size.