Dying Outside
A man goes in to see his doctor, and after some tests, the doctor says, "I'm sorry, but you have a fatal disease."
Man: "That's terrible! How long have I got?"
Doctor: "Ten."
Man: "Ten? What kind of answer is that? Ten months? Ten years? Ten what?"
The doctor looks at his watch. "Nine."
Recently I received some bad medical news (although not as bad as in the joke). Unfortunately I have been diagnosed with a fatal disease, Amyotrophic Lateral Sclerosis or ALS, sometimes called Lou Gehrig's disease. ALS causes nerve damage, progressive muscle weakness and paralysis, and ultimately death. Patients lose the ability to talk, walk, move, eventually even to breathe, which is usually the end of life. This process generally takes about 2 to 5 years.
There are however two bright spots in this picture. The first is that ALS normally does not affect higher brain functions. I will retain my abilities to think and reason as usual. Even as my body is dying outside, I will remain alive inside.
The second relates to survival. Although ALS is generally described as a fatal disease, this is not quite true. It is only mostly fatal. When breathing begins to fail, ALS patients must make a choice. They have the option to either go onto invasive mechanical respiration, which involves a tracheotomy and breathing machine, or they can die in comfort. I was very surprised to learn that over 90% of ALS patients choose to die. And even among those who choose life, for the great majority this is an emergency decision made in the hospital during a medical respiratory crisis. In a few cases the patient will have made his wishes known in advance, but most of the time the procedure is done as part of the medical management of the situation, and then the ALS patient either lives with it or asks to have the machine disconnected so he can die. Probably fewer than 1% of ALS patients arrange to go onto ventilation when they are still in relatively good health, even though this provides the best odds for a successful transition.
Hypothetical Paradoxes
When we form hypotheticals, they must use entirely consistent and clear language, and avoid hiding complicated operations behind simple assumptions. In particular, with respect to decision theory, hypotheticals must employ a clear and consistent concept of free will, and they must make all information available to the theorizer available to the decider in the question. Failure to do either of these can make a hypothetical meaningless or self-contradictory if properly understood.
Newcomb's problem and the Smoking Lesion fail on both counts. I will argue that hidden assumptions in both problems imply internally contradictory concepts of free will, and thus both hypotheticals are incomprehensible and irrelevant when used to contradict decision theories.
And I'll do it without math or programming! Metatheory is fun.
Thou Art Physics
Followup to: Dissolving the Question, Hand vs. Fingers, Timeless Causality, Living in Many-Worlds
Three months ago—jeebers, has it really been that long?—I posed the following homework assignment: Do a stack trace of the human cognitive algorithms that produce debates about 'free will'. Note that this task is strongly distinguished from arguing that free will does, or does not exist.
Now, as expected, the notion of "timeless physics" is causing people to ask, "If the future is determined, how can our choices control it?" The wise reader can guess that it all adds up to normality; but this leaves the question of how.
People hear: "The universe runs like clockwork; physics is deterministic; the future is fixed." And their minds form a causal network that looks like this:
Here we see the causes "Me" and "Physics", competing to determine the state of the "Future" effect. If the "Future" is fully determined by "Physics", then obviously there is no room for it to be affected by "Me".
Ingredients of Timeless Decision Theory
Followup to: Newcomb's Problem and Regret of Rationality, Towards a New Decision Theory
Wei Dai asked:
"Why didn't you mention earlier that your timeless decision theory mainly had to do with logical uncertainty? It would have saved people a lot of time trying to guess what you were talking about."
...
All right, fine, here's a fast summary of the most important ingredients that go into my "timeless decision theory". This isn't so much an explanation of TDT, as a list of starting ideas that you could use to recreate TDT given sufficient background knowledge. It seems to me that this sort of thing really takes a mini-book, but perhaps I shall be proven wrong.
The one-sentence version is: Choose as though controlling the logical output of the abstract computation you implement, including the output of all other instantiations and simulations of that computation.
The three-sentence version is: Factor your uncertainty over (impossible) possible worlds into a causal graph that includes nodes corresponding to the unknown outputs of known computations; condition on the known initial conditions of your decision computation to screen off factors influencing the decision-setup; compute the counterfactuals in your expected utility formula by surgery on the node representing the logical output of that computation.
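The "surgery on the logical output node" idea can be made concrete with Newcomb's problem. The sketch below is an illustrative toy, not a formal specification of TDT: the function names and payoff values are assumptions chosen to match the standard statement of the problem. The key move is that because the agent and Omega's prediction are instantiations of the same abstract computation, setting that computation's output sets both at once.

```python
# Toy sketch of TDT-style "surgery" in Newcomb's problem.
# Names and payoffs are illustrative assumptions, not canon.

def newcomb_payoff(action, prediction):
    """Payoff given the agent's action and Omega's prediction."""
    box_b = 1_000_000 if prediction == "one-box" else 0
    box_a = 1_000
    return box_b + (box_a if action == "two-box" else 0)

def tdt_choice():
    # The agent and Omega's simulation run the same abstract
    # computation, so one surgical assignment to its logical output
    # fixes both the action and the prediction together.
    outputs = ["one-box", "two-box"]
    return max(outputs, key=lambda out: newcomb_payoff(out, out))

print(tdt_choice())  # one-box
```

A causal decision theorist would instead hold the prediction fixed while varying the action, and so two-box; the difference is entirely in which node the counterfactual surgery is performed on.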
Bloggingheads: Yudkowsky and Aaronson talk about AI and Many-worlds
Eliezer Yudkowsky and Scott Aaronson - Percontations: Artificial Intelligence and Quantum Mechanics
Sections of the diavlog:
- When will we build the first superintelligence?
- Why quantum computing isn’t a recipe for robot apocalypse
- How to guilt-trip a machine
- The evolutionary psychology of artificial intelligence
- Eliezer contends many-worlds is obviously correct
- Scott contends many-worlds is ridiculous (but might still be true)
Sense, Denotation and Semantics
J. Y. Girard, et al. (1989). Proofs and types. Cambridge University Press, New York, NY, USA. (PDF)
I found the introductory description of a number of ideas given at the beginning of this book intuitively clear, and these ideas should be relevant to our discussion, preoccupied as we are with the meaning of meaning. Though the book itself is quite technical, the first chapter should be accessible to many readers.
From the beginning of the chapter:
Let us start with an example. There is a standard procedure for multiplication, which yields for the inputs 27 and 37 the result 999. What can we say about that?
A first attempt is to say that we have an equality
27 × 37 = 999
This equality makes sense in the mainstream of mathematics by saying that the two sides denote the same integer and that × is a function in the Cantorian sense of a graph.
This is the denotational aspect, which is undoubtedly correct, but it misses the essential point:
There is a finite computation process which shows that the denotations are equal. It is an abuse (and this is not cheap philosophy — it is a concrete question) to say that 27 × 37 equals 999, since if the two things we have were the same then we would never feel the need to state their equality. Concretely we ask a question, 27 × 37, and get an answer, 999. The two expressions have different senses and we must do something (make a proof or a calculation, or at least look in an encyclopedia) to show that these two senses have the same denotation.
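Girard's distinction can be made concrete in a few lines of code (my illustration, not the book's): the two expressions are different as pieces of syntax, which is the difference in sense, and a finite computation is what establishes that they denote the same integer.

```python
# "27 * 37" and "999" are different expressions (different senses);
# running the computation is what shows their denotations are equal.
question = "27 * 37"
answer = "999"

assert question != answer             # the senses (the texts) differ
assert eval(question) == int(answer)  # the computation equates denotations
print(eval(question))                 # 999
```

The point survives the triviality of the example: the assertion of equality is informative precisely because something had to be done to establish it.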
Honesty: Beyond Internal Truth
When I expect to meet new people who have no idea who I am, I often wear a button on my shirt that says:
SPEAK THE TRUTH,
EVEN IF YOUR VOICE TREMBLES
Honesty toward others, it seems to me, obviously bears some relation to rationality. In practice, the people I know who seem to make unusual efforts at rationality are unusually honest, or, failing that, at least have unusually bad social skills.
And yet it must be admitted and fully acknowledged, that such morals are encoded nowhere in probability theory. There is no theorem which proves a rationalist must be honest - must speak aloud their probability estimates. I have said little of honesty myself, these past two years; the art which I've presented has been more along the lines of:
SPEAK THE TRUTH INTERNALLY,
EVEN IF YOUR BRAIN TREMBLES
I do think I've conducted my life in such fashion, that I can wear the original button without shame. But I do not always say aloud all my thoughts. And in fact there are times when my tongue emits a lie. What I write is true to the best of my knowledge, because I can look it over and check before publishing. What I say aloud sometimes comes out false because my tongue moves faster than my deliberative intelligence can look it over and spot the distortion. Oh, we're not talking about grotesque major falsehoods - but the first words off my tongue sometimes shade reality, twist events just a little toward the way they should have happened...
From the inside, it feels a lot like the experience of un-consciously-chosen, perceptual-speed, internal rationalization. I would even say that so far as I can tell, it's the same brain hardware running in both cases - that it's just a circuit for lying in general, both for lying to others and lying to ourselves, activated whenever reality begins to feel inconvenient.
A Request for Open Problems
Open problems are clearly defined problems that have not been solved. In older fields, such as mathematics, the list is rather intimidating. Rationality, on the other hand, seems to have no list.
While we are all here together to crunch on problems, let's aim higher than thinking of solutions and then finding problems to match them. What questions remain unsolved? Is it reasonable to assume those questions have concrete, absolute answers?
The catch is that these problems cannot be inherently fuzzy problems. "How do I become less wrong?" is not a problem that can be clearly defined. As such, it does not have a concrete, absolute answer. Does Rationality have a set of problems that can be clearly defined? If not, how do we work toward getting our problems clearly defined?
See also: Open problems at LW:Wiki
Without models
Followup to: What is control theory?
I mentioned in my post testing the water on this subject that control systems are not intuitive until one has learnt to understand them. The point I am going to talk about is one of those non-intuitive features of the subject. It is (a) basic to the very idea of a control system, and (b) something that almost everyone gets wrong when they first encounter control systems.
I'm going to address just this one point, not in order to ignore the rest, but because the discussion arising from my last post has shown that this is presently the most important thing.
There is a great temptation to think that to control a variable -- that is, to keep it at a desired value in spite of disturbing influences -- the controller must contain a model of the process to be controlled and use it to calculate what actions will have the desired effect. In addition, it must measure the disturbances, or better still predict them and their effects in advance, and take those into account in deciding its actions.
In terms more familiar here, the temptation is to think that to bring about desired effects in the world, one must have a model of the relevant parts of the world and predict what actions will produce the desired results.
However, this is absolutely wrong. This is not a minor mistake or a small misunderstanding; it is the pons asinorum of the subject.
Note the word "must". It is not disputed that one can use models and predictions, only that one must, that the task inherently requires it.
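The point can be demonstrated with a simulation (my sketch, not from the post; the gain and disturbance values are arbitrary assumptions). The controller below contains no model of the process and never measures or predicts the disturbance; it only compares its perception of the variable to the reference and acts on the error, yet the variable stays near the reference anyway.

```python
# A model-free negative-feedback controller: no process model,
# no disturbance measurement, no prediction. Only error-driven action.
import random

def simulate(steps=2000, reference=10.0, gain=0.5, seed=0):
    rng = random.Random(seed)
    value = 0.0
    for _ in range(steps):
        disturbance = rng.uniform(-1.0, 1.0)  # unmodelled, unmeasured
        error = reference - value             # the only input used
        value += gain * error + disturbance   # act on the error alone
    return value

print(abs(simulate() - 10.0) < 3.0)  # value held near the reference
```

Each step the error is partially cancelled, so past disturbances decay geometrically and the variable is held within a bounded band around the reference, whatever the disturbance sequence happens to be.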
This Didn't Have To Happen
My girlfriend/SO's grandfather died last night, running on a treadmill when his heart gave out.
He wasn't signed up for cryonics, of course. She tried to convince him, and I tried myself a little the one time I met her grandparents.
"This didn't have to happen. Fucking religion."
That's what my girlfriend said.
I asked her if I could share that with you, and she said yes.
Just so that we're clear that all the wonderful emotional benefits of self-delusion come with a price, and the price isn't just to you.