Less Wrong is a community blog devoted to refining the art of human rationality.

The Popularization Bias

Post author: Wei_Dai 17 July 2009 03:43PM

I noticed that most recommendations in the recent recommended readings thread consist of either fiction or popularizations of specific scientific disciplines. This introduces a potential bias: aspiring rationalists may never learn about some fields or ideas that are important for the art of rationality, just because they've never been popularized.

In my recent post on the fair division of black-hole negentropy, I tried to introduce two such ideas/fields (which may be one too many for a single post :). One is that black holes have entropy quadratic in mass, and therefore are ideal entropy dumps (or equivalently, negentropy mines). This is a well-known result in thermodynamics, plus an obvious application of it. Some have complained that the idea is too sci-fi, but actually the opposite is true. Unlike other perhaps equally obvious futuristic ideas such as cryonics, AI and the Singularity, I've never read or watched a piece of science fiction that explored this one. (BTW, in case it's not clear why black-hole negentropy is important for rationality, it implies that value probably scales superlinearly with material and that huge gains from cooperation can be directly derived from the fundamental laws of physics.)
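To make the "quadratic in mass" claim concrete, here is a short sketch (my own illustration, using the standard Bekenstein-Hawking formulas rather than anything from the post) computing a solar-mass black hole's entropy and Hawking temperature, and checking that doubling the mass quadruples the entropy while halving the temperature:

```python
import math

# Bekenstein-Hawking entropy: S = 4*pi*G*k*M^2 / (hbar*c)   (quadratic in M)
# Hawking temperature:        T = hbar*c^3 / (8*pi*G*M*k)   (inverse in M)
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
hbar = 1.055e-34   # reduced Planck constant, J s
k = 1.381e-23      # Boltzmann constant, J/K

def bh_entropy(M):
    """Bekenstein-Hawking entropy (J/K) of a black hole of mass M (kg)."""
    return 4 * math.pi * G * k * M**2 / (hbar * c)

def bh_temperature(M):
    """Hawking temperature (K) of a black hole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * k)

M_sun = 1.989e30   # kg
print(f"S ~ {bh_entropy(M_sun):.2e} J/K, T ~ {bh_temperature(M_sun):.2e} K")

# Quadratic entropy: doubling the mass quadruples the entropy and halves T,
# so pooling mass into one hole beats keeping it split.
assert abs(bh_entropy(2 * M_sun) / bh_entropy(M_sun) - 4) < 1e-9
assert abs(bh_temperature(M_sun) / bh_temperature(2 * M_sun) - 2) < 1e-9
```

The superlinearity claim corresponds to the M² in the first formula: two merged holes have four times the entropy capacity of one, not two.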

Similarly, there are many popularizations of topics such as the Prisoner's Dilemma and the Nash Equilibrium in non-cooperative game theory (and even a blockbuster movie about John Nash!), but I'm not aware of any for cooperative game theory.

Much of Less Wrong, and Overcoming Bias before it, can be seen as an attempt to correct this bias. Eliezer's posts have provided fictional treatments or popular accounts of probability theory, decision theory, MWI, algorithmic information theory, Bayesian networks, and various ethical theories, to name a few, and others have continued the tradition to some extent. But since popularization and writing fiction are hard, and not many people have both the skills and the motivation to do them, I wonder if there are still other important ideas/fields that most of us don't know about yet.

So here's my request: if you know of such a field or idea, just name it in a comment and give a reference for it, and maybe say a few words about why it's important, if that's not obvious. Some of us may be motivated to learn about it for whatever reason, even from a textbook or academic article, and may eventually produce a popular account for it.


Comments (53)

Comment author: Vladimir_Nesov 17 July 2009 08:22:43PM *  4 points [-]

One must select what's important; there is too much science to tell about it all. "Correcting" popularization bias must consist in steering the selection effect according to some specific criteria different from the sum-total of popularization in the world. Since what's important to specific people heavily depends on their interests, it's unlikely for there to be a magic bullet that more or less universally improves on available popularized material.

The valid way out of this debacle seems to be to acquire general knowledge: to learn to see what science knows and understand it for yourself, given enough effort. Popularizing this skill instead of popularizing specific content may be a better strategy.

Comment author: whpearson 17 July 2009 08:13:18PM *  4 points [-]

The No Free Lunch theorems of search could do with a popular write-up.

Basically, to tell people making AIs that their methods need to reference the world/problems they are trying to deal with.

Comment author: timtyler 17 July 2009 09:48:50PM -1 points [-]

Occam's razor means that the no free lunch theorems are practically irrelevant.

Comment author: sketerpot 17 July 2009 11:45:11PM 1 point [-]

There are an awful lot of caveats that apply to the No Free Lunch theorem. Is it really very applicable in practice? If you're just going to use it as a hand-wave concept, I think it's more honest to use TANSTAAFL and make your lack of rigorous mathematical backing clear.

So, can anybody list a few lessons we can draw from the NFL theorem?
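One lesson can at least be demonstrated directly. Below is a toy illustration (my own construction, not from the thread) of the basic NFL statement for deterministic, non-retracing search: averaged over all objective functions on a finite domain, any two fixed query orders find equally good values after any number of evaluations.

```python
from itertools import product

DOMAIN = [0, 1, 2, 3]                              # 4 search points
FUNCTIONS = list(product([0, 1], repeat=len(DOMAIN)))  # all 16 objective functions

def avg_best_after_k(order, k):
    """Average (over all functions) of the best value found in the first k queries."""
    total = sum(max(f[x] for x in order[:k]) for f in FUNCTIONS)
    return total / len(FUNCTIONS)

order_a = [0, 1, 2, 3]   # one deterministic search strategy: fixed sweep
order_b = [3, 1, 0, 2]   # a different deterministic strategy

# NFL: averaged over ALL functions, the two strategies are indistinguishable.
for k in range(1, 5):
    assert avg_best_after_k(order_a, k) == avg_best_after_k(order_b, k)
print("Averaged over all functions, both search orders perform identically.")
```

whpearson's point appears when you restrict the average to a structured subclass of functions: then some orders do better, which is exactly what "referencing the problems you are trying to deal with" buys you.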

Comment author: Wei_Dai 17 July 2009 04:38:42PM 3 points [-]

To start things off, here are my entries:

Comment author: timtyler 17 July 2009 05:16:16PM 0 points [-]

Hypercomputation seems like a misguided attack on the Church-Turing thesis to me. If nobody can build a hypercomputer - and there's no evidence that anyone ever will be able to - then I am not sure I can see what the point is.

Comment author: timtyler 17 July 2009 09:59:27PM -1 points [-]

I guess it's because there is no proof that someone won't find a way of computing the uncomputable. It seems unlikely to me - but I suppose there is not much harm in philosophers speculating.

Comment author: timtyler 18 July 2009 07:49:36AM 0 points [-]

Re: Toby's "Regardless of the actual computational limits of our universe, I have no doubt that the study of hypercomputation will lead to many important theoretical results across computer science, philosophy, mathematics and physics."

Hmm. What have we got so far out of Omegas and Oracles? I expect what we will get out of hypercomputation will be mostly confusion, since it sounds as though it were a field with a real object of study.

Comment author: Wei_Dai 19 July 2009 12:03:15PM 0 points [-]

Well, one practical result we've got is that we shouldn't program AIs to assume (either implicitly or explicitly) that the universe must be computable. See this discussion between Eliezer and me about this.

Comment author: timtyler 20 July 2009 08:34:01AM -1 points [-]

Building agents with assumptions about anything whose truth we are not confident of seems like a dubious strategy.

We are fairly confident of the Church-Turing thesis, though: "Today the thesis has near-universal acceptance" - http://en.wikipedia.org/wiki/Church–Turing_thesis

Comment author: Wei_Dai 30 July 2009 10:19:05PM 2 points [-]

The Theory of Bayesian Aggregation - Bayesian Group Agents and Two Modes of Aggregation by Mathias Risse.

ABSTRACT: Suppose we have a group of Bayesian agents, and suppose that they would like for their group as a whole to be a Bayesian agent as well. Moreover, suppose that those agents want the probabilities and utilities attached to this group agent to be aggregated from the individual probabilities and utilities in reasonable ways. Two ways of aggregating their individual data are available to them, viz., ex ante aggregation and ex post aggregation. The former aggregates expected utilities directly, whereas the latter aggregates probabilities and utilities separately. A number of recent formal results show that both approaches have problematic implications. This study discusses the philosophical issues arising from those results. In this process, I hope to convince the reader that these results about Bayesian aggregation are highly significant to decision theorists, but also of immense interest to theorists working in areas such as ethics and political philosophy.
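A tiny numerical illustration (my own construction, not Risse's) of why the two modes can disagree: with two equally-weighted agents, averaging expected utilities directly (ex ante) need not equal the expected utility computed from separately averaged probabilities and utilities (ex post).

```python
# Two agents, one binary event E. Agent i assigns probability p[i] to E
# and gets utility u[i] if E occurs (0 otherwise).
p = [0.9, 0.1]     # agents' probabilities for E
u = [10.0, 2.0]    # agents' utilities conditional on E
w = [0.5, 0.5]     # aggregation weights

# Ex ante: aggregate the agents' expected utilities directly.
ex_ante = sum(wi * pi * ui for wi, pi, ui in zip(w, p, u))

# Ex post: aggregate probabilities and utilities separately, then combine.
p_group = sum(wi * pi for wi, pi in zip(w, p))
u_group = sum(wi * ui for wi, ui in zip(w, u))
ex_post = p_group * u_group

print(ex_ante, ex_post)   # the two modes disagree
```

The agent who assigns high probability also happens to have high stakes, so the ex ante average is pulled up in a way the ex post average cannot reproduce; this correlation between beliefs and utilities is the kind of thing the impossibility results turn on.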

Comment author: Vladimir_Nesov 31 July 2009 03:24:35PM *  0 points [-]

Wasn't as enlightening as the abstract made it sound.

Comment author: Wei_Dai 01 August 2009 02:08:36AM 0 points [-]

The results seem quite significant, even if it's not clear what they mean. One possible interpretation is that expected utility maximization is not the correct ideal for group rationality.

Comment author: Vladimir_Nesov 01 August 2009 02:20:32AM 0 points [-]

Or they just do it totally wrong.

Comment author: Wei_Dai 18 July 2009 04:58:04AM 2 points [-]

I wonder if I over-corrected upon learning about cooperative game theory. Based on the relative lack of responses here, perhaps there aren't that many nuggets of knowledge left to be picked off the street, so to speak.

I'm curious, was anyone else aware of cooperative game theory, before I mentioned it here?

Comment author: gwern 18 July 2009 10:52:35PM 6 points [-]

I'm curious, was anyone else aware of cooperative game theory, before I mentioned it here?

I had vaguely heard of it and the main result you presented, but I didn't find it very interesting - and I still don't, even after your post. (The black hole material was much more interesting.)

In comparison, the first time I read about the Prisoner's Dilemma and the Tragedy of the Commons, my reaction was: 'this is amazing! It provides a new way to look at just about everything - littering on sidewalks, war, traffic & SUVs, cheating on taxes...' For a year or two, I saw everything through that lens.

Comment author: gworley 21 July 2009 06:27:07PM *  0 points [-]

Yes. Not to sound like a jerk, but I didn't realize it was so poorly known.

On the issue of nuggets of knowledge left, I think it's more that we just don't know where we'll find them, or which ones aren't already well known. It will take someone who is aware of the details of some field realizing that a popular account is needed, because even his/her fellow smart people don't know about it.

Comment author: cousin_it 21 July 2009 12:02:07PM 0 points [-]

I'd read the Wikipedia page before; for some reason it didn't seem interesting enough to pursue further.

Comment author: conchis 21 July 2009 11:41:22AM 0 points [-]

Yup. Although I think that the core is possibly a more useful concept than the Shapley value. (I actually had a vague suspicion it could be useful for Toby and Nick Bostrom's work on dealing with moral uncertainty, but never bothered to follow up.)

Comment author: GuySrinivasan 19 July 2009 05:29:12PM 0 points [-]

Yes, when I first learned about the Shapley value, I bothered everyone I knew by telling them all excited-like about it when they obviously didn't much care. :)
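For readers who haven't seen it, the Shapley value is short enough to compute from its definition: each player's share is their marginal contribution to the coalition, averaged over all orders in which players could join. A minimal sketch (the 3-player game below is my own toy example, not from the thread):

```python
import math
from itertools import permutations

# Characteristic function of a toy 3-player game:
# v(S) = the value coalition S can guarantee on its own.
v = {
    frozenset(): 0,
    frozenset({1}): 1, frozenset({2}): 1, frozenset({3}): 2,
    frozenset({1, 2}): 4, frozenset({1, 3}): 5, frozenset({2, 3}): 5,
    frozenset({1, 2, 3}): 9,
}

def shapley(players, v):
    """Shapley value: average marginal contribution over all join orders."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v[coalition | {p}] - v[coalition]
            coalition |= {p}
    n_orders = math.factorial(len(players))
    return {p: total / n_orders for p, total in phi.items()}

phi = shapley([1, 2, 3], v)
print(phi)
# Efficiency: the shares always sum to the grand coalition's value.
assert abs(sum(phi.values()) - v[frozenset({1, 2, 3})]) < 1e-9
```

Players 1 and 2 are symmetric in this game and get equal shares, while player 3, who contributes more at the margin, gets more: the fairness axioms fall straight out of the averaging.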

Comment author: RichardKennaway 17 July 2009 06:56:16PM 2 points [-]

Complexity theory. Back when I learned it, Garey and Johnson was the standard book, but there must be more up to date sources -- perhaps even popular ones (for some less than Harry Potter-sized value of popular).

Comment author: anonym 18 July 2009 07:27:04PM 1 point [-]

Michael Sipser's Introduction to the Theory of Computation is an extremely friendly introduction to the theory of computation, including complexity theory and computability theory. As opposed to Garey and Johnson, it is broader and shallower, covering computability theory as well as complexity theory (incl. space complexity and other non-NP-complete topics), and probably in a much friendlier fashion. It's one of the few compsci books I've ever read that I would describe as a "page turner": it was so interesting and readable that I couldn't put it down when reading it, and I still like to pick it up from time to time just to reread sections for pleasure.

[The 1st edition is much cheaper than the 2nd edition for anybody interested in buying ($10-$20 used, versus >$55 used for the 2nd edition or $115 new).]

Comment author: Eliezer_Yudkowsky 17 July 2009 06:18:31PM 2 points [-]

Unlike other perhaps equally obvious futuristic ideas such as cryonics, AI and the Singularity, I've never read or watched a piece of science fiction that explored this one.

"The Gravity Mine" by Stephen Baxter. http://www.infinityplus.co.uk/stories/gravitymine.htm

Comment author: Wei_Dai 18 July 2009 07:29:27AM 1 point [-]

That's not a bad story, but the author seems more interested in using black holes as exotic locales with cool "special effects", rather than exploring the implications of their physics. The reader walks away entertained, but not really having learned anything about black-hole thermodynamics.

Comment author: timtyler 17 July 2009 04:58:13PM *  0 points [-]

Re: if it's not clear why black-hole negentropy is important for rationality, it implies that value probably scales superlinearly with material and that huge gains from cooperation can be directly derived from the fundamental laws of physics.

That is supposed to help clear up the issue?!? It has rather the opposite effect here.

Comment author: timtyler 17 July 2009 04:56:48PM 1 point [-]

Re: One is that black holes have entropy quadratic in mass, and therefore are ideal entropy dumps (or equivalently, negentropy mines).

What would anyone want a black hole entropy dump for? If you are in orbit around a star, you can just let entropy radiate off as heat. Compared to that, sending it into the nearest black hole would probably require a lot of energy. This seems like a bad idea - so what is the proposed point?

Comment author: Wei_Dai 17 July 2009 05:20:50PM 4 points [-]

The point is that a black hole is much colder than interstellar space, and its temperature decreases as its mass increases. This coldness implies that it takes much less energy to dump a certain amount of entropy into a black hole than into interstellar space. Of course you probably don't want to ship that entropy across interstellar distances before dumping. That would likely wipe out any savings. You'd create a black hole close by, or build your civilization around an existing one.
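Wei's point can be put in numbers with Landauer's bound: erasing one bit of information costs at least kT ln 2 of free energy, where T is the temperature of the reservoir receiving the entropy, so a colder dump makes every erasure cheaper. A sketch with my own illustrative numbers (2.7 K for interstellar space today, versus the roughly 6×10^-8 K Hawking temperature of a solar-mass hole):

```python
import math

k = 1.381e-23   # Boltzmann constant, J/K
LN2 = math.log(2)

def erase_cost(T):
    """Landauer bound: minimum energy (J) to dump one bit of entropy at temperature T."""
    return k * T * LN2

T_cmb = 2.7     # interstellar space today, K
T_bh = 6.2e-8   # approximate Hawking temperature of a solar-mass black hole, K

ratio = erase_cost(T_cmb) / erase_cost(T_bh)
print(f"Dumping a bit into the hole is ~{ratio:.1e}x cheaper than into the CMB")
```

The ratio is tens of millions, and it only improves as the hole (and the universe) grows colder, which is why "negentropy mine" is not an exaggeration.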

Comment author: timtyler 17 July 2009 05:38:54PM *  1 point [-]

It still doesn't seem to make sense. Building a black hole anywhere near a sentient agent seems like a really, really bad idea. Orbiting around one doesn't help you drop things into it much - because of orbital inertia. The suggestion seems rather like proposing that we dump the planet's excess heat into the Sun - as opposed to radiating it off in all directions. Yes, we could build a heat ray and point it at the sun - but if you think about that for a moment, you will realise why it wouldn't help get rid of entropy, and would actually just make things worse.

The tiny relative temperature difference between the surface of the hole and interstellar space hardly makes much difference if you are many millions of miles away from it. Also, the hole is likely to be surrounded by extremely hot stuff in orbit around it. Are you sure that you have thought this idea through?

Comment author: RolfAndreassen 17 July 2009 05:59:07PM *  1 point [-]

By the time your civilisation is taking advantage of black holes, it's large enough that even a small temperature difference can scale to quite a bit of negentropy. Further, you don't have to be in orbit, you can build a Dyson shell around the hole at such a distance that the surface gravity is one g. (Or several shells, if people prefer different levels of gravity.) Then there's no orbital velocity to deal with. (And in any case, you could brake by tidal friction and extract some entropy that way.) Or to be shorter, why are you objecting to the practical details of a thought experiment? Nothing about the game theory relies on black holes or the particular exponent 2; it could just as well be mass^1.5, and the analysis would remain the same although the numbers would change a bit.
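The one-g shell is easy to size with a back-of-envelope Newtonian approximation (my own arithmetic, and fine at these distances): set GM/r² = g and solve for r.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
g = 9.81           # target surface gravity, m/s^2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # kg

def shell_radius(M):
    """Radius (m) at which a shell around mass M feels 1 g (Newtonian)."""
    return math.sqrt(G * M / g)

r = shell_radius(M_sun)
r_s = 2 * G * M_sun / c**2   # Schwarzschild radius, for comparison
print(f"one-g shell at {r:.2e} m, vs horizon at {r_s:.2e} m")
```

For a solar-mass hole the shell sits at a few million kilometres, roughly a million times the ~3 km horizon radius, so the proposal is at least dimensionally sensible.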

Comment author: Tiiba 18 July 2009 06:07:19AM *  0 points [-]

Regarding this discussion, I'm totally confused what people are talking about. It sounds like you want to take some of your excess energy and throw it into a black hole. Wouldn't it be smarter to give it to me? How can energy be "excess"?

Comment author: Wei_Dai 18 July 2009 09:30:10AM 1 point [-]

Eliezer has a post that explains some of the background assumed here: http://lesswrong.com/lw/o5/the_second_law_of_thermodynamics_and_engines_of/.

Comment author: Tiiba 19 July 2009 07:38:22AM *  2 points [-]

I have just finished reading this article. I still have no idea what it is that you intend to do with the black hole, or why it's useful. Seriously, not even an inkling. And I seem to be unique in this regard, which sucks.

The only way that I can think of for a black hole to reduce entropy is if you throw things into it. Give them to me.

Comment author: HalFinney 19 July 2009 11:01:26PM 0 points [-]

Tiiba, Wei's earlier post pointed to this article:


You might also need to know that computation can be done in principle almost without expending energy, and the colder you do the computation, the less energy is wasted. Hence being cold is a good thing, and black holes are very cold.

Comment author: Tiiba 20 July 2009 03:13:29AM 1 point [-]

I didn't get it right away, but now that I do, it's pretty ingenious. Let me see if I got it right. Build a big ball in space. If the ball was empty, starlight and cosmic background would heat it up, the inner surface would emit photons, and they would bounce around the shell - so you're back to square one. But the black hole at the center can absorb those photons without becoming hot. And the photons are unusable because they are ambient.

On the other hand, there is now a temperature difference between the inside and the outside. Can it be used to make usable energy?
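On Tiiba's last question: in principle, yes. A heat engine running between the warm exterior (hot reservoir) and the cold hole (cold reservoir) is limited by Carnot efficiency, which for these temperatures is essentially 1. A sketch with my own illustrative numbers:

```python
def carnot_efficiency(T_hot, T_cold):
    """Maximum fraction of heat convertible to work between two reservoirs."""
    return 1 - T_cold / T_hot

T_machinery = 300.0   # waste heat of the shell's machinery, K (illustrative)
T_bh = 6.2e-8         # approximate Hawking temperature of a solar-mass hole, K

eta = carnot_efficiency(T_machinery, T_bh)
print(f"Carnot limit: {eta:.12f}")   # indistinguishable from 1
```

So nearly all of the heat flowing inward can, in the ideal limit, be re-extracted as work, with only a vanishing residue actually crossing the horizon as entropy.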

Comment author: timtyler 18 July 2009 06:57:28AM 0 points [-]

Not energy, entropy. Energy is useful - entropy is useless.

Comment author: djcb 17 July 2009 09:45:20PM 0 points [-]

+1; indeed, this is interesting from a sci-fi-itch-scratching viewpoint, but I guess we have the next 10^6 years to worry about the details.

Anyway, I like LW for bringing such things to my attention (thanks Wei_Dai!), but apart from being interesting, this doesn't seem like an idea that needs mass-popularization, does it?

Comment author: Wei_Dai 18 July 2009 07:03:18AM 2 points [-]

You ask a fair question, I think. Here are some potential short-term implications of black-hole negentropy:

  • The far future will most likely not be dominated by an everyone-for-himself type of scenario (like Robin Hanson's Burning the Cosmic Commons). Knowing that, and possibly having a chance to see the far future for yourself, does that affect your short-term goals?
  • There is less need to adopt drastic policies to prevent the Burning the Cosmic Commons scenario.
  • The universe is capable of supporting much more life than we might intuit, even after seeing calculations like the one in Nick Bostrom's Astronomical Waste, which fail to take into account quadratic negentropy. What are the ethical implications of that? I'm not sure yet, but I'd be surprised if there weren't any.

Comment author: timtyler 17 July 2009 05:21:41PM *  -1 points [-]

If anyone else would like to read up on maximum entropy thermodynamics - particularly Dewar's recent work - that would be cool. This material explains much about why self-organising systems (including living ones) behave as they do - in thermodynamic terms. I discuss this here now and again, but - despite the links to Bayes and Jaynes - no-one seems to know very much about it.

A primer: http://en.citizendium.org/wiki/Life/Signed_Articles/John_Whitfield

Comment author: SilasBarta 17 July 2009 06:38:20PM 0 points [-]

That looked to be interesting until I glanced down at Figure 1, which reads:

Entropy and biodiversity are mathematically equivalent, making tropical forests the most entropic [entropy exporting] environments on Earth.

Eeek! Tropical forests the most entropy-exporting? Not, say, the 1000 °C regions below the earth's surface? Not volcanoes or geysers?

Comment author: timtyler 17 July 2009 07:05:38PM *  -1 points [-]

Volcanoes and geysers are mostly uncommon, intermittent phenomena. Some volcano craters do stay pretty hot, for extended periods, though - it's true.

I'm not sure how to measure the rate of entropy dissipation within the Earth - but I doubt it radiates as much heat from the surface as ultimately comes from the sun.

The insides of nuclear reactors, and other power plants are probably the most entropic places of all - again, per unit area. Whether those count as "environments" could be debated.

Comment author: HalFinney 19 July 2009 11:04:40PM 1 point [-]

I'd like to see a more popular discussion of Aumann's disagreement theorem (and its follow-ons), and of what I believe is called Kripkean possible-world semantics, the formalism used in Aumann's original proof. The proof is very short, just a couple of sentences, but explaining the possible-world formalism is a big job.

Comment author: knb 17 July 2009 06:34:22PM *  1 point [-]

I've never read or watched a piece of science fiction that explored this one.

I believe the Silent Ones in the Golden Age trilogy used black holes for this purpose.

Comment author: thomblake 17 July 2009 06:26:20PM 0 points [-]

Unlike other perhaps equally obvious futuristic ideas such as cryonics, AI and the Singularity, I've never read or watched a piece of science fiction that explored this one.

In Dr. Who, the Time Lords used a black hole as a 'mysterious energy source'.

Comment author: eirenicon 17 July 2009 06:51:44PM 2 points [-]

That has as much relevance to black-hole negentropy as Demolition Man does to cryonics. In science fiction, the inability to explain something is indistinguishable from attributing it to magic.

Comment author: thomblake 17 July 2009 06:59:30PM 0 points [-]

That has as much relevance to black-hole negentropy as Demolition Man does to cryonics.

Meh. Given that the impression was that no science fiction deals with it, I'd count it, just as I'd count Demolition Man as relevant to cryonics.

Comment author: eirenicon 17 July 2009 07:18:12PM 2 points [-]

As far as I can recall, the last time we saw a black hole in Doctor Who, the TARDIS pulled another spaceship across its event horizon to safety. Just prior to that, they faced off against the actual literal Devil, who was chained in a hellish inferno inside a moon serviced by telepathic squid-people. I love Doctor Who, but I have a hard time calling it science fiction.

Comment author: thomblake 17 July 2009 07:26:16PM 0 points [-]

Aha. You're referring to that other show, also coincidentally called Doctor Who. But yes, the original series was just about that silly.

As for the implausibility of telepathic squid people, just stay out of the dark places of the world and you should be fine for now. Until then, Cthulhu f'thagn.

Comment author: Document 02 November 2010 06:45:57PM *  1 point [-]

In Dr. Who, the Time Lords used a black hole as a 'mysterious energy source'.

Same for the Ori in the SG-1 episode Beachhead (transcript here; summary and transcript of prior black-hole episode here and here, which may partly explain the writers' thinking).