The I-Less Eye
or: How I Learned to Stop Worrying and Love the Anthropic Trilemma
Imagine you live in a future society where the law allows up to a hundred instances of a person to exist at any one time, but insists that your property belongs to the original you, not to the copies. (Does this sound illogical? I may ask my readers to believe in the potential existence of uploading technology, but I would not insult your intelligence by asking you to believe in the existence of a society where all the laws were logical.)
So you decide to create your full allowance of 99 copies, and a customer service representative explains how the procedure works: the first copy is made, and informed he is copy number one; then the second copy is made, and informed he is copy number two, etc. That sounds fine until you start thinking about it, whereupon the native hue of resolution is sicklied o'er with the pale cast of thought. The problem lies in your anticipated subjective experience.
After step one, you have a 50% chance of finding yourself the original; there is nothing controversial about this much. If you are the original, you have a 50% chance of finding yourself still so after step two, and so on. That means after step 99, your subjective probability of still being the original is 0.5^99, in other words as close to zero as makes no difference.
Assume you prefer existing as a dependent copy to not existing at all, but preferable still would be existing as the original (in the eyes of the law) and therefore still owning your estate. You might reasonably have hoped for a 1% chance of the subjectively best outcome. 0.5^99 sounds entirely unreasonable!
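The arithmetic behind that contrast can be checked directly. The step-by-step halving argument gives 2^-99, vastly smaller than the one-in-a-hundred chance that a uniform assignment over the hundred instances would give:

```python
# Sequential-copy reasoning: at each of the 99 copy steps, the thread of
# experience is assumed to have a 1/2 chance of continuing as the original.
p_sequential = 0.5 ** 99   # ~1.58e-30, as close to zero as makes no difference
p_uniform = 1 / 100        # the hoped-for odds: one chance in a hundred

print(p_sequential)
print(p_uniform / p_sequential)  # the uniform view is ~6e27 times more generous
```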
Priors as Mathematical Objects
Followup to: "Inductive Bias"
What exactly is a "prior", as a mathematical object? Suppose you're looking at an urn filled with red and white balls. When you draw the very first ball, you haven't yet had a chance to gather much evidence, so you start out with a rather vague and fuzzy expectation of what might happen - you might say "fifty/fifty, even odds" for the chance of getting a red or white ball. But you're ready to revise that estimate for future balls as soon as you've drawn a few samples. So then this initial probability estimate, 0.5, is not repeat not a "prior".
An introduction to Bayes's Rule for confused students might refer to the population frequency of breast cancer as the "prior probability of breast cancer", and the revised probability after a mammography as the "posterior probability". But in the scriptures of Deep Bayesianism, such as Probability Theory: The Logic of Science, one finds a quite different concept - that of prior information, which includes e.g. our beliefs about the sensitivity and specificity of mammography exams. Our belief about the population frequency of breast cancer is only one small element of our prior information.
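The distinction can be made concrete with a minimal Bayes' rule calculation. The numbers below are illustrative assumptions, not figures from the text: the "prior probability" is just one input, while the sensitivity and false-positive rate are the rest of the prior information.

```python
def posterior(prior, sensitivity, false_positive_rate):
    """Bayes' rule: P(cancer | positive mammogram).

    `prior` is the population frequency; `sensitivity` and
    `false_positive_rate` are the rest of the prior information.
    """
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

# Illustrative numbers: 1% population frequency, 80% sensitivity,
# 9.6% false-positive rate.
print(posterior(0.01, 0.80, 0.096))  # ≈ 0.078
```

Changing the assumed sensitivity or specificity changes the posterior just as surely as changing the population frequency does, which is why all three belong to the prior information.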
Frequentist Magic vs. Bayesian Magic
[I posted this to open thread a few days ago for review. I've only made some minor editorial changes since then, so no need to read it again if you've already read the draft.]
This is a belated reply to cousin_it's 2009 post Bayesian Flame, which claimed that frequentists can give calibrated estimates for unknown parameters without using priors:
And here's an ultra-short example of what frequentists can do: estimate 100 independent unknown parameters from 100 different sample data sets and have 90 of the estimates turn out to be true to fact afterward. Like, fo'real. Always 90% in the long run, truly, irrevocably and forever.
And indeed they can. Here's the simplest example that I can think of that illustrates the spirit of frequentism:
Suppose there is a machine that produces biased coins. You don't know how the machine works, except that each coin it produces is either biased towards heads (in which case each toss of the coin will land heads with probability .9 and tails with probability .1) or towards tails (in which case each toss of the coin will land tails with probability .9 and heads with probability .1). For each coin, you get to observe one toss, and then have to state whether you think it's biased towards heads or tails, and what is the probability that's the right answer.
Let's say that you decide to follow this rule: after observing heads, always answer "the coin is biased towards heads with probability .9" and after observing tails, always answer "the coin is biased towards tails with probability .9". Do this for a while, and it will turn out that 90% of the time you are right about which way the coin is biased, no matter how the machine actually works. The machine might always produce coins biased towards heads, or always towards tails, or decide based on the digits of pi, and it wouldn't matter—you'll still be right 90% of the time. (To verify this, notice that in the long run you will answer "heads" for 90% of the coins actually biased towards heads, and "tails" for 90% of the coins actually biased towards tails.) No priors needed! Magic!
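A quick simulation makes the claim vivid. The sketch below (with made-up machine policies) follows the stated rule and checks that the long-run accuracy is about 90% no matter how the machine chooses biases:

```python
import random

def calibration(n_coins, heads_biased_for):
    """Follow the rule: after one toss, declare the coin biased towards
    whatever face was observed. Returns the fraction of correct calls."""
    correct = 0
    for i in range(n_coins):
        heads_biased = heads_biased_for(i)  # the machine's (unknown) policy
        p_heads = 0.9 if heads_biased else 0.1
        observed_heads = random.random() < p_heads
        correct += (observed_heads == heads_biased)
    return correct / n_coins

random.seed(0)
# Three very different machines: all heads-biased, all tails-biased, alternating.
for policy in (lambda i: True, lambda i: False, lambda i: i % 2 == 0):
    print(calibration(100_000, policy))  # each close to 0.9
```

The 90% figure is a property of the decision rule, not of any belief about the machine, which is exactly the frequentist point.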
Ureshiku Naritai
This is a supplement to the luminosity sequence. In this comment, I mentioned that I have raised my happiness set point (among other things), and this declaration was met with some interest. Some of the details are lost to memory, but below, I reconstruct for your analysis what I can of the process. It contains lots of gooey self-disclosure; skip if that's not your thing.
In summary: I decided that I had to and wanted to become happier; I re-labeled my moods and approached their management accordingly; and I consistently treated my mood maintenance and its support behaviors (including discovering new techniques) as immensely important. The steps in more detail:
1. I came to understand the necessity of becoming happier. Being unhappy was not just unpleasant. It was dangerous: I had a history of suicidal ideation. This hadn't resulted in actual attempts at killing myself, largely because I attached hopes for improvement to concrete external milestones (various academic progressions) and therefore imagined that a magical healing would arrive when I got the next diploma (the next one, the next one). Once I noticed I was doing that, it was unsustainable. If I wanted to live, I had to find a safe emotional place on which to stand. It had to be my top priority. This required several sub-projects:
More thoughts on assertions
Response to: The "show, don't tell" nature of argument
Morendil says not to trust simple assertions. He's right, for the particular class of simple assertions he's talking about. But in order to see why, let's look at different types of assertions and see how useful it is to believe them.
Summary:
- Hearing an assertion can be strong evidence if you know nothing else about the proposition in question.
- Hearing an assertion is not useful evidence if you already have a reasonable estimate of how many people do or don't believe the proposition.
- An assertion by a leading authority is stronger than an assertion by someone else.
- An assertion plus an assertion that there is evidence makes no factual difference, but is a valuable signal.
Levels of communication
Communication fails when the participants in a conversation aren't talking about the same thing. This can be something as subtle as having slightly differing mappings of verbal space to conceptual space, or it can be a question of being on entirely different levels of conversation. There are at least four such levels: the level of facts, the level of status, the level of values, and the level of socialization. I suspect that many people with rationalist tendencies tend to operate primarily on the fact level and assume others to be doing so as well, which might lead to plenty of frustration.
The level of facts. This is the most straightforward one. When everyone is operating on the level of facts, they are detachedly trying to discover the truth about a certain subject. Pretty much nothing else than the facts matter.
The level of status. Probably the best way of explaining what happens when everyone is operating on the level of status is the following passage, originally found in Keith Johnstone's Impro:
A Much Better Life?
(Response to: You cannot be mistaken about (not) wanting to wirehead, Welcome to Heaven)
The Omega Corporation
Internal Memorandum
To: Omega, CEO
From: Gamma, Vice President, Hedonic Maximization
Sir, this concerns the newest product of our Hedonic Maximization Department, the Much-Better-Life Simulator. This revolutionary device allows our customers to essentially plug into the Matrix, except that instead of providing robots with power in flagrant disregard for the basic laws of thermodynamics, they experience a life that has been determined by rigorously tested algorithms to be the most enjoyable life they could ever experience. The MBLS even eliminates all memories of being placed in a simulator, generating a seamless transition into a life of realistic perfection.
Our department is baffled. Orders for the MBLS are significantly lower than estimated. We cannot fathom why every customer who could afford one has not already bought it. It is simply impossible to have a better life otherwise. Literally. Our customers' best possible real life has already been modeled and improved upon many times over by our programming. Yet, many customers have failed to make the transition. Some are even expressing shock and outrage over this product, and condemning its purchasers.
Applying utility functions to humans considered harmful
There's a lot of discussion on this site that seems to be assuming (implicitly or explicitly) that it's meaningful to talk about the utility functions of individual humans. I would like to question this assumption.
To clarify: I don't question that you could, in principle, model a human's preferences by building some insanely complex utility function. But there are infinitely many methods by which you could model a human's preferences. The question is which model is the most useful, and which models carry the fewest underlying assumptions that will lead your intuitions astray.
Utility functions are a good model to use if we're talking about designing an AI. We want an AI to be predictable, to have stable preferences, and to do what we want. They are also a good tool for building agents that are immune to Dutch book tricks. Utility functions are a bad model for beings that do not meet these criteria.
Are wireheads happy?
Related to: Utilons vs. Hedons, Would Your Real Preferences Please Stand Up
And I don't mean that question in the semantic "but what is happiness?" sense, or in the deep philosophical "but can anyone not facing struggle and adversity truly be happy?" sense. I mean it in the totally literal sense. Are wireheads having fun?
They look like they are. People and animals connected to wireheading devices get upset when the wireheading is taken away and will do anything to get it back. And it's electricity shot directly into the reward center of the brain. What's not to like?
Only now are neuroscientists starting to recognize a difference between "reward" and "pleasure", or call it "wanting" and "liking". The two are usually closely correlated. You want something, you get it, then you feel happy. The simple principle behind our entire consumer culture. But do neuroscience and our own experience really support that?
That other kind of status
"Human nature 101. Once they've staked their identity on being part of the defiant elect who know the Hidden Truth, there's no way it'll occur to them that they're our catspaws." - Mysterious Conspirator A
This sentence sums up a very large category of human experience and motivation. Informally we talk about this all the time; formally it usually gets ignored in favor of a simple ladder model of status.
In the ladder model, status is a one-dimensional line from low to high. Every person occupies a certain rung on the ladder determined by other people's respect. When people take status-seeking actions, their goal is to change other people's opinions of themselves and move up the ladder.
But many, maybe most human actions are counterproductive at moving up the status ladder. 9-11 Conspiracy Theories are a case in point. They're a quick and easy way to have most of society think you're stupid and crazy. So is serious interest in the paranormal or any extremist political or religious belief. So why do these stay popular?