Shifting Load to Explicit Reasoning
Related to: Which Parts Are "Me"?, Making your explicit reasoning trustworthy, The 5-Second Level.
What's damaging about moralizing that we wish to avoid, what useful purpose does moralizing usually serve, and what allows us to avoid the damage while retaining the usefulness? Moralizing engages psychological adaptations that promote conflict (by playing on social status), which are unpleasant to experience and can lead to undesirable consequences in the long run (such as feeling systematically uncomfortable interacting with a person, and so being unable to live or work or be friends with them). It serves the purpose of imprinting your values, which you feel to be right, on the people you interact with. Consequentialist elucidation of the reasons for approving or disapproving of a given policy (virtue) is an effective persuasion technique if your values are actually right (for the people you are trying to confer them on), and it doesn't engage the same parts of your brain that make moralizing undesirable.
What happens here is a transfer of responsibility for important tasks from the imperfect machinery that historically managed them (machinery with systematic problems, in any given context, that humans but not evolution can notice) to explicit reasoning.
The Cognitive Costs to Doing Things
What's the mental burden of trying to do something? What does it cost? What price are you going to pay if you try to do something out in the world?
I think that by figuring out what the usual costs to doing things are, we can reduce the costs and otherwise structure our lives so that it's easier to reach our goals.
When I sat down to identify cognitive costs, I found seven. There might be more. Let's get started -
Activation Energy - As covered in more detail in this post, starting an activity seems to take a larger amount of willpower and other resources than keeping going with it. Required activation energy can be adjusted over time - making something into a routine lowers the activation energy needed to do it. Things like poorly defined next steps increase the activation energy required to get started. This is a major hurdle for a lot of people in a lot of disciplines - just getting started.
Opportunity cost - We're all familiar with general opportunity cost. When you're doing one thing, you're not doing something else. You have limited time. But there also seems to be a cognitive cost to this - a natural second-guessing of choices from taking one path and not another. This is the sort of thing covered by Barry Schwartz in his Paradox of Choice work (there's some faulty thought and some omissions in PoC, but it's valuable overall). It's also why basically every significant military work ever has said you don't want to put the enemy in a position where their only way out is through you - Sun Tzu argued for always leaving a way for the enemy to escape, which splits their focus and options. Hernan Cortes famously burned the boats behind him. When you're doing something, your mind is subtly aware of and bothered by the other things you're not doing. This is a significant cost.
Inertia - Eliezer Yudkowsky wrote that humans are "Adaptation-Executers, not Fitness-Maximizers." He was speaking in terms of large scale evolution, but this is also true of our day to day affairs. Whatever personal adaptations and routines we've gotten into, we tend to perpetuate. Usually people do not break these routines unless a drastic event happens. Very few people self-scrutinize and do drastic things without an external event happening.
The difference between activation energy and inertia is that you can want to do something but have a hard time getting started - that's activation energy. Inertia, by contrast, suggests you'll keep doing what you've been doing, with your mind largely switched off. Breaking out of inertia takes serious energy and tends to make people uncomfortable; they usually only do it if something else makes them more uncomfortable (or, very rarely, when they get incredibly inspired).
Ego/willpower depletion - The Wikipedia article on ego depletion is pretty good. Basically, a lot of recent research shows that doing something that takes significant willpower drains your "battery" of willpower somewhat, and it becomes harder to do other willpower-intensive tasks. From Wikipedia: "In an illustrative experiment on ego depletion, participants who controlled themselves by trying not to laugh while watching a comedian did worse on a later task that required self-control compared to participants who did not have to control their laughter while watching the video." I'd strongly recommend doing some reading on this topic if you haven't - Roy Baumeister has written some excellent papers on it. The pattern holds pretty firm: when someone resists, say, eating a snack they want, it becomes harder for them to focus and persist at rote work later.
Neurosis/fear/etc - Almost all humans are naturally more risk-averse than gain-seeking. This seems to have been selected for evolutionarily. We also tend to become afraid far out of proportion to the actual stakes of certain kinds of activities - especially ones that risk social embarrassment.
I never realized how strong these forces were until I tried to break free of them - whenever I got a strong negative reaction from someone to my writing, it made it considerably harder to write pieces later that I thought would be popular. Basic things like writing titles that would make a post spread, or polishing the first paragraph and last sentence - it's as if my mind was putting "it will generate criticism" on the con side of a pro/con list, and it was... frightening's not quite the right word, but something like that.
Some tasks can legitimately be called "neurosis-inducing" - meaning you start getting more neurotic as you ponder and begin them. Things that are almost guaranteed to generate criticism or risk rejection frequently do this. Anything that risks compromising a person's self-image can be neurosis-inducing too.
Altering of hormonal balance - A far too frequently ignored cost. A lot of activities will change your hormonal balance for better or worse. Entering into conflict-like situations can and does increase adrenaline, cortisol, and other stress hormones - and then you face adrenaline withdrawal and a crash later. Of course, we basically are biochemistry, so a significant change in hormonal balance affects much of the body - immune system, respiration, digestion, etc. A lot of people are peripherally aware of this, but there hasn't been much discussion of the hormone-altering costs of many activities.
Maintenance costs from the idea re-emerging in your thoughts - Another under-appreciated cognitive cost is the maintenance cost of an idea recurring in your thoughts, especially when its cycle isn't complete. In Getting Things Done, David Allen talks about how "open loops" are "anything that's not where it's supposed to be." These re-emerge in our thoughts periodically, often at inopportune times, consuming thought and energy. That's fine if the topic is exceedingly pleasant, but if it's not, it can wear you out. Completing an activity seems to reduce the maintenance cost (though not completely). An example would be not having filed your taxes yet - it emerges in your thoughts at random times, derailing other thought. And it's usually not pleasant.
Taking on any project, initiative, business, or change can generate these maintenance costs from thoughts re-emerging.
Conclusion
I identified these seven as the mental/cognitive costs of trying to do something -
- Activation Energy
- Opportunity cost
- Inertia
- Ego/willpower depletion
- Neurosis/fear/etc
- Altering of hormonal balance
- Maintenance costs from the idea re-emerging in your thoughts
I think we can reduce some of these costs by planning our tasks, work lives, social lives, and environment intelligently. Of the others, it's good just to be aware, so we know what's happening when we start to drag or have a hard time. Thoughts on other costs, or on ways to reduce these, are very welcome.
Thomas C. Schelling's "Strategy of Conflict"
It's an old book, I know, and one that many of us have already read. But if you haven't, you should.
If there's anything in the world that deserves to be called a martial art of rationality, this book is the closest approximation yet. Forget rationalist Judo: this is rationalist eye-gouging, rationalist gang warfare, rationalist nuclear deterrence. Techniques that let you win, but you don't want to look in the mirror afterward.
Imagine you and I have been separately parachuted into an unknown mountainous area. We both have maps and radios, and we know our own positions, but don't know each other's positions. The task is to rendezvous. Normally we'd coordinate by radio and pick a suitable meeting point, but this time you got lucky. So lucky in fact that I want to strangle you: upon landing you discovered that your radio is broken. It can transmit but not receive.
Two days of rock-climbing and stream-crossing later, tired and dirty, I arrive at the hill where you've been sitting all this time smugly enjoying your lack of information.
And after we split the prize and cash our checks I learn that you broke the radio on purpose.
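The radio trick is a commitment device, and its logic can be sketched as a tiny game. The payoffs below are my toy numbers, not anything from Schelling's text: each parachutist either HIKEs to the other's position or STAYs put; meeting is worth 10, hiking costs 4, and uncoordinated simultaneous hiking means we miss each other. Breaking your receiver effectively deletes your own HIKE option - you can never learn where to go - which leaves only the equilibrium you prefer.

```python
# A toy commitment game (my payoffs, not from the book). Each parachutist
# either HIKEs to the other's position or STAYs put. Meeting is worth 10,
# hiking costs 4, and if both hike uncoordinated they miss each other.

from itertools import product

MOVES = ("STAY", "HIKE")

def payoffs(a, b):
    """Return (payoff_a, payoff_b) for one play of the game."""
    if a == "STAY" and b == "STAY":
        return (0, 0)            # no rendezvous, no prize
    if a == "HIKE" and b == "HIKE":
        return (-4, -4)          # both wander the mountains and miss
    # Exactly one player hikes: they meet at the stayer's position.
    return (10 - (4 if a == "HIKE" else 0),
            10 - (4 if b == "HIKE" else 0))

def pure_nash(moves_a, moves_b):
    """Pure-strategy equilibria: no player gains by deviating alone."""
    eqs = []
    for a, b in product(moves_a, moves_b):
        pa, pb = payoffs(a, b)
        if (all(payoffs(x, b)[0] <= pa for x in moves_a)
                and all(payoffs(a, y)[1] <= pb for y in moves_b)):
            eqs.append((a, b))
    return eqs

print(pure_nash(MOVES, MOVES))
# [('STAY', 'HIKE'), ('HIKE', 'STAY')] -- each prefers the other to walk.
# Breaking your receiver removes your own HIKE option:
print(pure_nash(("STAY",), MOVES))      # [('STAY', 'HIKE')]
```

With both radios working, there are two equilibria and each of us would rather the other do the walking; the broken receiver picks the winner.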
Understanding your understanding
Related to: Truly Part of You, A Technical Explanation of Technical Explanation
Partly because of LessWrong discussions about what really counts as understanding (some typical examples), I came up with a scheme to classify different levels of understanding so that posters can be more precise about what they mean when they claim to understand -- or fail to understand -- a particular phenomenon or domain.
Each level has a description, so that you know whether you meet it, and a note on what to watch out for when you're at or close to that level. I have taken the liberty of naming the levels after the LW articles that describe what each is like.
Level 0: The "Guessing the Teacher's Password" Stage
Summary: You have no understanding, because you don't see how any outcome is more or less likely than any other.
Heading Toward: No-Nonsense Metaethics
Part of the sequence: No-Nonsense Metaethics
A few months ago, I predicted that we could solve metaethics in 15 years. To most people, that was outrageously optimistic. But I've updated since then: I think much of metaethics can be solved now (depending on where you draw the boundary around the term 'metaethics'). My upcoming sequence 'No-Nonsense Metaethics' will solve the part that can be solved, and make headway on the parts of metaethics that aren't yet solved. Solving the easier problems of metaethics will give us a clear and stable platform from which to solve the hard questions of morality.
Metaethics has been my target for a while now, but first I had to explain the neuroscience of pleasure and desire, and how to use intuitions for philosophy.
Luckily, Eliezer laid most of the groundwork when he explained couldness, terminal and instrumental values, the complexity of human desire and happiness, how to dissolve philosophical problems, how to taboo words and replace them with their substance, how to avoid definitional disputes, how to carve reality at its joints with our words, how an algorithm feels from the inside, the mind projection fallacy, how probability is in the mind, reductionism, determinism, free will, evolutionary psychology, how to grasp slippery things, and what you would do without morality.
Of course, Eliezer wrote his own metaethics sequence. Eliezer and I seem to have similar views on morality, but I'll be approaching the subject from a different angle, I'll be phrasing my solution differently, and I'll be covering a different spread of topics.
Why do I think much of metaethics can be solved now? We have enormous resources not available just a few years ago. The neuroscience of pleasure and desire didn't exist two decades ago. (Well, we thought dopamine was 'the pleasure chemical', but we were wrong.) Detailed models of reductionistic metaethics weren't developed until the 1980s and '90s (by Peter Railton and Frank Jackson). Reductionism has been around for a while, but few philosophers relentlessly play Rationalist's Taboo. Eliezer didn't write How an Algorithm Feels from the Inside until 2008.
Our methods will be familiar ones, already used to dissolve problems ranging from free will to disease. We will play Taboo with our terms, reducing philosophical questions into scientific ones. Then we will examine the cognitive algorithms that make it feel like open questions remain.
Along the way, we will solve or dissolve the traditional problems of metaethics: moral epistemology, the role of moral intuition, the is-ought gap, matters of moral psychology, the open question argument, moral realism vs. moral anti-realism, moral cognitivism vs. non-cognitivism, and more.
You might respond, "Sure, Luke, we can do the reduce-to-algorithm thing with free will or disease, but morality is different. Morality is fundamentally normative. You can't just dissolve moral questions with Taboo-playing and reductionism and cognitive science."
Well, we're going to examine the cognitive algorithms that generate that intuition, too.
And at the end, we will see what this all means for the problem of Friendly AI.
I must note that I didn't exactly invent the position I'll be defending. After sharing my views on metaethics with many scientifically-minded people in private conversation, many have said something like "Yeah, that's basically what I think about metaethics, I've just never thought it through in so much detail and cited so much of the relevant science [e.g. recent work in neuroeconomics and the science of intuition]."
But for convenience I do need to invent a name for my theory of metaethics. I call it pluralistic moral reductionism.
Next post: What is Metaethics?
The Correct Contrarian Cluster
Followup to: Contrarian Status Catch-22
Suppose you know someone believes that the World Trade Center was rigged with explosives on 9/11. What else can you infer about them? Are they more or less likely than average to believe in homeopathy?
I couldn't cite an experiment to verify it, but it seems likely that:
- There are persistent character traits which contribute to someone being willing to state a contrarian point of view.
- All else being equal, if you know that someone advocates one contrarian view, you can infer that they are more likely than average to have other contrarian views.
All sorts of obvious disclaimers can be included here. Someone who expresses an extreme-left contrarian view is less likely to hold an extreme-right contrarian view. Different character traits may contribute to expressing contrarian views that are counterintuitive vs. low-prestige vs. anti-establishment etcetera. Nonetheless, it seems likely that you could usefully distinguish a c-factor, a general contrarian factor, in people and beliefs, even though it would break down further on closer examination; there would be a cluster of contrarian people and a cluster of contrarian beliefs, however those clusters decompose into subclusters.
(If you perform a statistical analysis of contrarian ideas and you find that they form distinct subclusters of ideologies that don't correlate with each other, then I'm wrong and no c-factor exists.)
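To make that falsification test concrete, here is a hedged sketch on simulated data (my parameters; no real survey). If a general contrarian factor exists, the first principal component of the belief correlations should load positively on every belief and carry a dominant share of the variance; distinct uncorrelated subclusters would instead show several comparable eigenvalues with mixed-sign loadings.

```python
# A hedged sketch of the c-factor test on simulated data (my parameters,
# no real survey). Endorsement scores are driven by one latent trait
# ("contrarianism") plus belief-specific noise.

import numpy as np

rng = np.random.default_rng(0)
n_people, n_beliefs = 1000, 6

c = rng.normal(size=(n_people, 1))               # latent contrarianism
scores = 0.7 * c + rng.normal(size=(n_people, n_beliefs))

corr = np.corrcoef(scores, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)          # eigenvalues ascending
loadings = eigvecs[:, -1]
loadings *= np.sign(loadings.sum())              # fix the arbitrary sign

print("share of variance on first factor:", eigvals[-1] / n_beliefs)
print("loadings (all positive => c-factor):", loadings.round(2))
```

Running the same analysis on real survey data, and finding mixed-sign loadings instead, is exactly the outcome that would refute the c-factor.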
Now, suppose that someone advocates the many-worlds interpretation of quantum mechanics. What else can you infer about them?
Levels of Action
One of the most useful concepts I have learned recently is the distinction between actions which directly improve the world, and actions which indirectly improve the world.
Suppose that you go onto Mechanical Turk, open an account, and spend a hundred hours transcribing audio. At current market rates, you'd get paid around $100 for your labor. By taking this action, you have made yourself $100 wealthier. This is an example of what I'd call a Level 1 or object-level action: something that directly moves the world from a less desirable state into a more desirable state.
On the other hand, suppose you take a typing class, which teaches you to type twice as fast. On the object level, this doesn't move the world into a better state - nothing about the world has changed, other than you. However, the typing class can still be very useful, because every Level 1 project you tackle later that involves typing will go better - you'll be able to do it more efficiently, and you'll get a higher return on your time. This is what I'd call a Level 2 or meta-level action, because it doesn't make the world better directly - it makes the world better indirectly, by improving the effectiveness of Level 1 actions. There are also Level 3 (meta-meta-level) actions, Level 4 (meta-meta-meta-level) actions, and so on.
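The trade-off is easy to make quantitative. Here is a toy back-of-envelope calculation: the $1/hour rate comes from the transcription example above, while the 20-hour class length and the exact 2x speedup are hypothetical numbers.

```python
# A toy back-of-envelope. The $1/hour rate comes from the transcription
# example above; the 20-hour class length and 2x speedup are hypothetical.

RATE = 1.0          # dollars/hour for object-level transcription work
CLASS_HOURS = 20    # assumed time cost of the typing class (Level 2)
SPEEDUP = 2.0       # assumed multiplier on all later typing work

def earnings(total_hours, take_class):
    """Dollars earned over a horizon of total_hours."""
    if not take_class:
        return total_hours * RATE
    work_hours = max(0, total_hours - CLASS_HOURS)
    return work_hours * RATE * SPEEDUP

for horizon in (20, 40, 100, 500):
    print(f"{horizon:4d}h horizon: Level 1 only ${earnings(horizon, False):6.0f}"
          f"  vs  class first ${earnings(horizon, True):6.0f}")
```

Under these assumptions the break-even horizon is 40 hours - twice the class length; beyond that, the meta-level action strictly dominates, and the gap widens with every additional hour of Level 1 work.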
The Many Worlds of Hugh Everett
I've just finished this book and it's one of the most enjoyable things I've read in a long time. Given that Many Worlds is a staple of science fiction and the only interpretation of quantum mechanics to have entered the popular imagination, it's a little surprising that "The Many Worlds of Hugh Everett" by Peter Byrne is the first biography of the originator of that amazing idea. Everett certainly had an interesting life: he was a libertarian and a libertine; he became a cold warrior who, with his top-secret clearance, was comfortable with the idea of megadeath; and he became wealthy by starting one of the first successful software companies, until alcoholism drove him and his company into the ground. Everett died of heart failure in 1982 at the age of 51; he was legally drunk at the time. He requested that his body be cremated and his ashes thrown into the garbage. And so he was.
Byrne had an advantage other potential biographers did not: the cooperation of Everett's son Mark, a successful rock musician and composer whose music has been featured in big-budget movies such as American Beauty, Hellboy, Yes Man, all three Shrek movies, and many others. Mark gave Byrne full access to his garage, which was full of his father's papers that nobody had looked at in decades.
Everett was an atheist all his life. After his death, Paul Davies - who received 1,000,000 pounds for winning the Templeton religion prize - said that if Many Worlds is true, it destroys the anthropic argument for the existence of God. Everett would have been delighted. Nevertheless, Everett ended up going to the Catholic University of America near Washington, DC. Although Byrne doesn't tell us exactly what was in it, Everett as a freshman devised a logical proof against the existence of God. Apparently it was good enough that one of his pious professors became very upset and depressed with "ontological horror" when he read it. Everett liked the professor and felt so guilty that he decided not to use it on a person of faith again. This story is atypical of the man: most of the time Everett seemed to care little for the feelings of others, and although quite brilliant, he wasn't exactly lovable.
Everett wasn't the only one dissatisfied with the Copenhagen Interpretation, which insisted the measuring device had to be outside the wave function; but unlike other dissidents such as Bohm or Cramer, Everett saw no need to add new terms to Schrodinger's equation, and he thought the equation meant exactly what it said. The only reason those extra terms were added was to try to rescue the single-universe idea, and there was no experimental justification for that. Everett was unique in thinking that quantum mechanics gave a description of nature that was literally true.
John Wheeler, Everett's thesis adviser, made him cut about half the material from his original 137-page thesis and tone down the language so it didn't sound as though he thought all those other universes were equally real - when in fact he did. For example, Wheeler didn't like the word "split" and was especially uncomfortable with talk of conscious observers splitting; most seriously, he made Everett remove the entire chapter on information and probability, which today many consider the best part of the work. The long thesis was not published until 1973. If that version had been published in 1957 instead of the truncated, bowdlerized one, things would have been different: plenty of people would still have disagreed, but he would not have been ignored for as long as he was.
Byrne writes of Everett's views: "the splitting of observers share an identity because they stem from a common ancestor, but they also embark on different fates in different universes. They experience different lifespans, dissimilar events (such as a nuclear war perhaps) and at some point are no longer the same person, even though they share certain memory records." Everett says that when an observer splits, it is meaningless to ask "which of the final observers corresponds to the initial one since each possess the total memory of the first"; he says it is as foolish as asking which amoeba is the original after it splits into two. Wheeler made him remove all such talk of amoebas from the published short thesis.
Byrne says Everett did not think there were just an astronomically large number of other universes, but rather an infinite number of them - and not only that, he thought there were a non-denumerably infinite number of other worlds. This means the number of them is larger than the infinite set of integers, but Byrne does not make it clear whether they are as numerous as the points on a line, or as numerous as an even larger infinite set like the set of all possible clock faces, or perhaps an even larger infinity still, for which easy-to-understand examples are hard to come by. Neill Graham tried to reformulate the theory so that only a countably infinite number of branches would be needed; Everett at first liked the idea but later rejected it, concluding that you couldn't derive probability by counting universes. Eventually even Graham seems to have agreed, abandoning the idea that the number of universes was small enough to count.
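For readers who want the standard names for these sizes of infinity, here is the usual ladder in set-theoretic notation. Matching the "clock faces" example to arbitrary real functions is my gloss on a common popular-science illustration, not anything from Byrne's book.

```latex
% The standard ladder of infinite cardinals; mapping "all possible clock
% faces" onto arbitrary functions is a gloss, not from the book.
\begin{align*}
  \beth_0 &= \aleph_0 = |\mathbb{N}|
    && \text{countable: the integers} \\
  \beth_1 &= 2^{\aleph_0} = |\mathbb{R}|
    && \text{the points on a line} \\
  \beth_2 &= 2^{2^{\aleph_0}} = |\mathbb{R}^{\mathbb{R}}|
    && \text{all functions } \mathbb{R}\to\mathbb{R}
\end{align*}
% Cantor's theorem forces the strict inequalities
% $\beth_0 < \beth_1 < \beth_2 < \dots$
```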
Taken as a whole, Everett's multiverse - where all things happen - is deterministic, and probability is not a useful concept within it. However, for observers like us trapped in a single branch of the multiverse, observers who do not have access to the entire wave function and all the information it contains but only to a small sliver of it, probability is the best we can do. The probability we see is not part of the thing itself but just a subjective measure of our ignorance.
Infinity can cause problems in figuring out probability, but Everett said his theory could calculate the probability that any event would be observed in any branch of the multiverse, and it turns out to be the Born Rule (discovered by Max Born, grandfather of Olivia Newton-John): the probability of finding a particle at a point is the square of the amplitude of the Schrodinger wave function at that point. The Born Rule has been shown experimentally to be true, but the Copenhagen Interpretation just postulates it; Everett said he could derive it from his theory - it "emerges naturally as a measure of probability for observers confined to a single branch (like our branch)". He proved the mathematical consistency of the idea by adding up the probabilities of an event across all the branches and getting exactly 100%. Dieter Zeh said Everett may not have rigorously derived the Born Rule but did justify it, showing it "as being the only reasonable choice for a probability measure if objective reality is represented by the universal wave function [Schrodinger's wave equation]". Rigorous proof or not, that's more than any other quantum interpretation has managed to do.
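As a minimal numeric illustration of that statement (my example amplitudes, not Everett's derivation): branch probability is the squared magnitude of the branch amplitude, and the probabilities across branches sum to exactly one, which is the consistency check described above.

```python
# A minimal numeric illustration (example amplitudes are mine, not
# Everett's derivation): branch probability is the squared magnitude of
# the branch amplitude, and probabilities across branches sum to 1.

import numpy as np

amplitudes = np.array([0.6 + 0.0j, 0.0 + 0.48j, 0.64 + 0.0j])
amplitudes /= np.linalg.norm(amplitudes)   # normalize the state vector

probs = np.abs(amplitudes) ** 2            # the Born rule
print("branch probabilities:", probs.round(4))   # [0.36 0.2304 0.4096]
print("sum over all branches:", probs.sum())     # 1.0 -- Everett's check
```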
Everett wrote to his friend Max Jammer:
"None of these physicists had grasped what I consider to be the major accomplishment of the theory- the "rigorous" deduction of the probability interpretation of Quantum Mechanics from wave mechanics alone. This deduction is just as "rigorous" as any deductions of classical statistical mechanics. [...] What is unique about the choice of measure and why it is forced upon one is that in both cases it is the only measure that satisfies the law of conservation of probability through the equations of motion. Thus logically in both classical statistical mechanics and in quantum mechanics, the only possible statistical statements depend upon the existence of a unique measure which obeys this conservation principle."
Nevertheless, some complained that Everett did not use enough rigor in his derivation. David Deutsch has helped close that rigor gap: he showed that the number of Everett-worlds after a branching is proportional to the conventional probability density, and then used game theory to show that all of these worlds are equally likely to be observed. Everett would likely have been delighted, as he used game theory extensively in his other life as a cold warrior. Deutsch also supplied one of the best quotations in the entire book, saying that talking about many worlds as an interpretation of quantum mechanics "is like talking about dinosaurs as an interpretation of the fossil record".
Everett was disappointed at the poor reception his doctoral dissertation received, and he never published anything on quantum mechanics again for the rest of his life; instead he became a Dr. Strangelove type, making computer nuclear-war games and doing grim operations research for the Pentagon about Armageddon. He was one of the first to point out that any defense against intercontinental ballistic missiles would be ineffectual, and that building an anti-ballistic-missile system could not be justified except on "political or psychological grounds". Byrne makes the case that Everett was the first to convince high military leaders, through mathematics and no-nonsense, unsentimental reasoning, that a nuclear war could not be won: "after an attack by either superpower on the other, the majority of the attacked population that survived the initial blasts would be sterilized and gradually succumb to leukemia. Livestock would die quickly and survivors would be forced to rely on eating grains potatoes and vegetables. Unfortunately the produce would be seething with radioactive Strontium 90 which seeps into human bone marrow and causes cancer". Linus Pauling credited Everett by name and quoted from his pessimistic report in his acceptance speech for the 1962 Nobel Peace Prize.
Despite his knowledge of the horrors of nuclear war, Everett, like most of his fellow cold-warrior colleagues in the '50s and '60s, thought the probability of it happening was very high, and that it would probably happen very soon. Byrne speculates in a footnote that Everett may privately have used anthropic reasoning, taking the fact that we live in a world where such a war has not happened (at least not yet) as further confirmation that his Many Worlds idea was right. Incidentally, this is one of those rare books where the footnotes are almost as much fun to read as the main text.
Hugh's daughter Liz Everett killed herself a few years after her father's death. In her suicide note she said: "Funeral requests: I prefer no church stuff. Please burn me and DON'T FILE ME. Please sprinkle me in some nice body of water or the garbage, maybe that way I'll end up in the correct parallel universe to meet up with Daddy". And so she was.
John K Clark
Tolerate Tolerance
Followup to: Why Our Kind Can't Cooperate
One of the likely characteristics of someone who sets out to be a "rationalist" is a lower-than-usual tolerance for flaws in reasoning. This doesn't strictly follow. You could end up, say, rejecting your religion, just because you spotted more or deeper flaws in the reasoning, not because you were, by your nature, more annoyed at a flaw of fixed size. But realistically speaking, a lot of us probably have our level of "annoyance at all these flaws we're spotting" set a bit higher than average.
That's why it's so important for us to tolerate others' tolerance if we want to get anything done together.
For me, the poster case of tolerance I need to tolerate is Ben Goertzel, who among other things runs an annual AI conference, and who has something nice to say about everyone. Ben even complimented the ideas of M*nt*f*x, the most legendary of all AI crackpots. (M*nt*f*x apparently started adding a link to Ben's compliment in his email signatures, presumably because it was the only compliment he'd ever gotten from a bona fide AI academic.) (Please do not pronounce his True Name correctly or he will be summoned here.)
But I've come to understand that this is one of Ben's strengths—that he's nice to lots of people that others might ignore, including, say, me—and every now and then this pays off for him.
And if I subtract points off Ben's reputation for finding something nice to say about people and projects that I think are hopeless—even M*nt*f*x—then what I'm doing is insisting that Ben dislike everyone I dislike before I can work with him.
Is that a realistic standard? Especially if different people are annoyed in different amounts by different things?
But it's hard to remember that when Ben is being nice to so many idiots.
Cooperation is unstable, in both game theory and evolutionary biology, without some kind of punishment for defection. So it's one thing to subtract points off someone's reputation for mistakes they make themselves, directly. But if you also look askance at someone for refusing to castigate a person or idea, then that is punishment of non-punishers, a far more dangerous idiom that can lock an equilibrium in place even if it's harmful to everyone involved.
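The instability claim can be illustrated with a standard public-goods toy model; the setup is textbook, the parameters are mine, and only first-order punishment is modeled (the dangerous idiom in question, punishing non-punishers, adds another layer on top of this).

```python
# A toy public-goods round (standard textbook setup; parameters are mine,
# and only first-order punishment is modeled). n players each contribute
# 1 or 0; the pot is multiplied by r and split evenly among everyone.

def payoff(my_contrib, n_contributors, n_players=10, r=1.6,
           fine=0.0, n_punishers=0):
    share = r * n_contributors / n_players
    p = share - my_contrib               # contributing costs you 1
    if my_contrib == 0:
        p -= fine * n_punishers          # each punisher fines each defector
    return p

# Without punishment, defecting beats cooperating (4 others cooperate):
print(round(payoff(1, 5), 2), "vs", round(payoff(0, 4), 2))   # -0.2 vs 0.64
# With three punishers levying a fine of 0.5, cooperation wins:
print(round(payoff(1, 5), 2), "vs",
      round(payoff(0, 4, fine=0.5, n_punishers=3), 2))        # -0.2 vs -0.86
```

Without the fine, the defector pockets the shared benefit while skipping the contribution cost, so defection spreads; the fine reverses the ordering, which is why punishment is what makes cooperation stable.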
Anthropics makes sense with shorter people
Reproduced from Meteuphoric by request.
Often people think that various forms of anthropic reasoning require you to change your beliefs in ways other than conditionalizing on evidence. This is false, at least in the cases I know of. I shall talk about Frank Arntzenius' paper Some Problems for Conditionalization and Reflection [gated] because it explains the issue well, though I believe his current views agree with mine.
He presents five thought experiments: Two Roads to Shangri La, The Prisoner, John Collins's Prisoner, Sleeping Beauty, and Duplication. In each of them, it seems the (arguably) correct answer violates van Fraassen's reflection principle, which basically says that if you expect to believe something in the future without having been e.g. hit over the head between now and then, you should believe it now. For instance, the thirder position in Sleeping Beauty seems to violate this principle because before the experiment Beauty believes there is a fifty percent chance of heads, yet she knows that when she wakes up she will think there is a thirty-three percent chance. Arntzenius argued that these seemingly correct answers really are correct, and claimed that they violate the reflection principle because credences can evolve in two ways other than by conditionalization.
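The thirder arithmetic is easy to check by brute force. A Monte Carlo sketch (mine, not Arntzenius's argument): tails produces two awakenings, so conditional on experiencing an awakening, the coin shows heads only about a third of the time.

```python
# A Monte Carlo sketch (mine, not Arntzenius's argument) of the thirder
# arithmetic: tails produces two awakenings, so conditional on experiencing
# an awakening, the coin shows heads only about a third of the time.

import random

random.seed(0)
total_awakenings = 0
heads_awakenings = 0
for _ in range(100_000):
    heads = random.random() < 0.5        # a fair coin toss per experiment
    wakes = 1 if heads else 2            # heads: Monday; tails: Mon and Tue
    total_awakenings += wakes
    if heads:
        heads_awakenings += wakes

print("P(heads | this is an awakening) ~", heads_awakenings / total_awakenings)
# ~0.333, even though P(heads) for the experiment as a whole stays 0.5.
```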
The first way, he said, is that credences can shift, for instance through time. I know that tomorrow I will have a higher credence in its being Monday than I do today, and yet it would not be rational for me to increase my credence in its being Monday now on that basis. Credences can also 'spread out'. For instance, if you know you are in Fairfax today, and that tomorrow a perfect replica of your brain experiencing Fairfax will be made and placed in a vat in Canberra, then tomorrow your credence will go from being concentrated in Fairfax to being spread between there and Canberra. This is despite no damage having been done to your own brain. As Arntzenius pointed out, such an evolution of credence looks like quite the opposite of conditionalization, since conditionalization consists of striking out possibilities that your information excludes - it never opens up new possibilities.
I agree that beliefs should evolve in these two ways. However they are both really conditionalization, just obscured. They make sense as conditionalization when you think of them as carried out by different momentary agents, based on the information they infer from their connections to other momentary agents with certain beliefs (e.g. an immediately past self).
Normal cases can be considered this way quite easily. Knowing that you are the momentary agent that followed a few seconds after an agent who knew a certain set of facts about the objective world, and who is (you assume) completely trustworthy, means you can simply update the same prior with those same facts and come to the same conclusion. That is, you don't really have to do anything. You can treat a stream of moments as a single agent. This is what we usually do.
However, sometimes being connected in a certain way to another agent does not make everything that is true for them true for you. Most obviously, if they are a past self who knows it is 12 o'clock, your connection via being their one-second-later self means you should exclude worlds where you are not at time 12:00:01. You have still learned from your known relationship to that agent and conditionalized, but you have not learned that what is true of them is true of you, because it isn't. This is the first way Arntzenius mentioned that credences seem to evolve through time not by conditionalization.
The second way occurs when one person-moment is at location X, and another person moment has a certain connection to the person at X, but there is more than one possible connection of that sort. For instance when two later people both remember being an earlier person because the earlier person was replicated in some futuristic fashion. Then while the earlier person moment could condition on their exact location, the later one must condition on being in one of several locations connected that way to the earlier person's location, so their credence spreads over more possibilities than that of the earlier self. If you call one of these later momentary agents the same person as the earlier one, and say they are conditionalizing, it seems they are doing it wrong. But considered as three different momentary people learning from their connections they are just conditionalizing as usual.
What exactly the later momentary people should believe is a matter of debate, but I think that can be framed entirely as a question of what their state spaces and priors look like.
Momentary humans almost always pass lots of information from one to the next, chronologically along chains of memory through non-duplicated people, knowing their approximate distance from one another. So most of the time they can treat themselves as single units who just have to update on any information coming from outside, as I explained. But conditionalization is not specific to these particular biological constructions; and when it is applied to information gained through other connections between agents, the resulting time series of beliefs within one human will end up looking different to that in a chain with no unusual extra connections.
This view also suggests that having cognitive defects, such as memory loss, should not excuse anyone from having credences, as for instance Arntzenius argued it should in his paper Reflections on Sleeping Beauty: "in the face of forced irrational changes in one's degrees of belief one might do best simply to jettison them altogether". There is nothing special about credences derived from beliefs of a past agent you identify with. They are just another source of information. If the connection to other momentary agents is different to usual, for instance through forced memory loss, update on it as usual.