(Cross-posted from another blog but written with LessWrong in mind. Don't worry, if this post isn't well-received then for LessWrong the series will end with this post.)
Summary: This post is the beginning of a systematic attempt to answer the question "what is the most important thing?". (Updateless) decision theory is used to provisionally define "importance" and Juergen Schmidhuber's theory of beauty is introduced as a possible answer to the question. The motivations for bringing in Schmidhuber's theory are discussed. This post is also intended to serve as an example of how to understand and solve hard problems in general, and emphasizes the heuristic "go meta".
This post is the first in a series about what might be the most important question we know to ask: What is the most important thing?
Don't try to answer the question yet. When faced with a tough question our first instinct should always be to go meta. What is it that causes me to ask the question "what is the most important thing"? What makes me think the question is itself important? Is that thing important? Does it point to itself as the most important thing? If not, then where does it point? Does the thing it points to, point to itself? If we follow this chain, where do we end up? How path-dependent is the answer? How much good faith do we have to assume on the part of the various things, to trust that they'll give their honest opinions? If we can't simply assume good faith, can we design a mechanism to promote honesty? What mechanisms are already in place, and are there cheap, local improvements we can make to those mechanisms?
And to ask all those questions we have to assume various commonsense notions that we might in fact need to pin down more precisely beforehand. Like, what is importance? Luckily we have some tools we can use to try to figure that part out.
Decision theory is one such tool. In Bayesian decision theory, "importance" might be a fair name for what is measured by your decision policy, which you get by weighting your value function by your probabilistic beliefs, i.e., by computing expected utility. Informally, your decision policy tells you which options or actions to pay the most attention to, or which possibilities matter most. But arguably it's your values themselves that should be considered "important", with your beliefs just telling you how the important stuff relates to what is actually going on in the world. Of the decision policy and the utility function, which should we provisionally consider the better referent for "importance"?
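To make the decision-policy notion concrete, here's a minimal sketch of a Bayesian decision policy with made-up numbers (the states, actions, and utilities are purely hypothetical, chosen only to illustrate the weighting of values by beliefs):

```python
# Beliefs: a probability distribution over world states.
beliefs = {"rain": 0.3, "sun": 0.7}

# Value function: utility of each (action, state) pair (hypothetical numbers).
utility = {
    ("take umbrella", "rain"): 5,   ("take umbrella", "sun"): -1,
    ("leave umbrella", "rain"): -10, ("leave umbrella", "sun"): 2,
}

def expected_utility(action):
    # Weight the value function by the beliefs and sum over states.
    return sum(p * utility[(action, state)] for state, p in beliefs.items())

# The decision policy attends most to whichever action scores highest:
# that action is the one it treats as most "important".
policy = max(["take umbrella", "leave umbrella"], key=expected_utility)
print(policy)  # → take umbrella
```

Here "importance" as measured by the policy (0.8 expected utility for taking the umbrella versus -1.6 for leaving it) depends jointly on the beliefs and the values, which is exactly the ambiguity the paragraph above is pointing at.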
Luckily, decision theories like updateless decision theory (UDT) un-ask the question for us. As the name suggests, and unlike Bayesian decision theories such as Eliezer's timeless decision theory, UDT doesn't update its beliefs. It just has a utility function defined over all of the possible worlds it might find itself in, and it acts so as to maximize that function. It doesn't care about the state of the world over and above its utility function (i.e., it doesn't have beliefs) because which worlds it cares about is already specified by its utility function, not something added on. So "importance" can only be one thing, and that one thing is a surprisingly simple notion, yet powerful enough to solve simple decision problems. UDT does have problems with mathematical uncertainty and reflection: it relies on a magical "mathematical intuition module", and weird things happen when it proves things about its own output after taking into account that it will always give the "optimal" solution to a problem. But those issues don't change the fact that decision theory's notion of importance is a decent provisional notion for us to work with.
Of course, many meta-ethicists would have reservations about defining importance this way. They would say that (moral) importance isn't something agent-specific: it's an objective fact of the universe what's (morally) important. But even granting that, as bounded agents we have to find out what's actually important somehow, so when we're making decisions we can talk about our best guess at what's important without committing ourselves to any meta-ethical position. The kind of importance that has bearing on all our decisions is a prescriptive notion of importance, neither a descriptive one nor a normative one. It's our agent-specific, best approximation of normative importance.
So, given our decision-theoretic notion of importance, we can get back to the question given above: what is the most important thing? If counterfactually we had all of our values represented as a utility function, which term would have the most utility associated with it? We don't yet know how to represent our values computationally, so for now we'll let ourselves use vague human concepts. Would the most important thing be eudaimonia, maybe? How about those other Aristotelian emphases of arete (virtue) and phronesis (practical and moral wisdom)? Maybe the sum of all three? Taken together they surely cover a lot of ground.
Various answers are plausible, but again, this is a perfect time to go meta. What causes the question "what is the most important thing?" to rise to our attention, and what causes us to try to find the answer?
One reason we ask is that it's an interesting question of its own accord. We want to understand the world, and we're curious about the answers to some questions even when they don't seem to have any practical significance, like with chess problems or with jigsaw puzzles. We're curious by nature.
We can always go meta again; we can always seek whence cometh a sequence [pdf]. What causes us to be interested in things, and what causes things to be interesting? It's a subtle point that these can be distinct questions. Maybe aliens are way more interested in sorting pebbles into prime-numbered heaps than we are. In that case we might want to acknowledge that sorting pebbles into prime-numbered heaps can be interesting in a certain general sense; it just doesn't really interest us. But we might be interested that the aliens find it interesting: I'd certainly want to know why the aliens are so into prime numbers, pebbles, and the conjunction of the two. Given my knowledge of psychology and sociology, their hypothetical fixation strikes me as highly unlikely. And that brings us to the question of what in general, in a fairly mind-universal sense, causes things to be interesting.
Luckily we can take a computational perspective to get a preliminary answer. Juergen Schmidhuber's theory of beauty and other stuff is an attempt to answer the question of what makes things interesting. The best introduction to his theory is his descriptively-titled paper "Driven by Compression Progress: A Simple Principle Explains Essential Aspects of Subjective Beauty, Novelty, Surprise, Interestingness, Attention, Curiosity, Creativity, Art, Science, Music, Jokes". Here's the abstract:
I argue that data becomes temporarily interesting by itself to some self-improving, but computationally limited, subjective observer once he learns to predict or compress the data in a better way, thus making it subjectively simpler and more beautiful. Curiosity is the desire to create or discover more non-random, nonarbitrary, regular data that is novel and surprising not in the traditional sense of Boltzmann and Shannon but in the sense that it allows for compression progress because its regularity was not yet known. This drive maximizes interestingness, the first derivative of subjective beauty or compressibility, that is, the steepness of the learning curve. It motivates exploring infants, pure mathematicians, composers, artists, dancers, comedians, yourself, and (since 1990) artificial systems.
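The core idea of compression progress can be sketched in a few lines of code. This is my own toy illustration, not Schmidhuber's implementation: `zlib` stands in for whatever compressive model the observer learns, and "progress" is just the bytes saved once that model is available, compared with storing the data raw:

```python
import os
import zlib

def compression_progress(data: bytes) -> int:
    # Before learning: the observer has no model and stores the data raw.
    # After learning: the observer has a compressive model (zlib here is a
    # stand-in for whatever regularities the observer has discovered).
    # Progress = bytes saved by the newly learned regularity.
    return len(data) - len(zlib.compress(data, 9))

regular = b"abab" * 256   # lawful, highly learnable data
noise = os.urandom(1024)  # incompressible random data, same length

# Regular data permits large compression progress, so it is "interesting";
# noise permits essentially none, so it is boring despite being "novel"
# in the Boltzmann/Shannon sense.
print(compression_progress(regular) > compression_progress(noise))  # → True
```

In Schmidhuber's terms, interestingness is the derivative of compressibility over the observer's learning process; this static sketch only compares the "before" and "after" endpoints, but it shows why regular data rewards a curious compressor while pure noise does not.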
This compression-centric formulation of beauty and interestingness reminds me of a Dinosaur Comic.
In Schmidhuber's beautiful and interesting theory, compression plays a key role, and explains many things that we find important. So is compression the most important thing? Should we structure our decision theory around a compression progress drive, as Schmidhuber has done with some of his artificial intelligences?
I doubt it—I don't think we've gone meta enough. But we'll further consider that question, and continue our exploration of the more important question "what's the most important thing?" in future posts.