16 types of useful predictions
How often do you make predictions (either about future events, or about information that you don't yet have)? If you're a regular Less Wrong reader you're probably familiar with the idea that you should make your beliefs pay rent by saying, "Here's what I expect to see if my belief is correct, and here's how confident I am," and that you should then update your beliefs accordingly, depending on how your predictions turn out.
And yet… my impression is that few of us actually make predictions on a regular basis. Certainly, for me, there has always been a gap between how useful I think predictions are, in theory, and how often I make them.
I don't think this is just laziness. I think it's simply not a trivial task to find predictions to make that will help you improve your models of a domain you care about.
At this point I should clarify that there are two main goals predictions can help with:
- Improved Calibration (e.g., realizing that I'm only correct about Domain X 70% of the time, not 90% of the time as I had mistakenly thought).
- Improved Accuracy (e.g., going from being correct in Domain X 70% of the time to being correct 90% of the time).
If your goal is just to become better calibrated in general, it doesn't much matter what kinds of predictions you make. So calibration exercises typically grab questions with easily obtainable answers, like "How tall is Mount Everest?" or "Will Don Draper die before the end of Mad Men?" See, for example, the Credence Game, Prediction Book, and this recent post. And calibration training really does work.
But even though making predictions about trivia will improve my general calibration skill, it won't help me improve my models of the world. That is, it won't help me become more accurate, at least not in any domains I care about. If I answer a lot of questions about the heights of mountains, I might become more accurate about that topic, but that's not very helpful to me.
So I think the difficulty in prediction-making is this: The set {questions whose answers you can easily look up, or otherwise obtain} is a small subset of all possible questions. And the set {questions whose answers I care about} is also a small subset of all possible questions. And the intersection between those two subsets is much smaller still, and not easily identifiable. As a result, prediction-making tends to seem too effortful, or not fruitful enough to justify the effort it requires.

But the intersection's not empty. It just requires some strategic thought to determine which answerable questions have some bearing on issues you care about, or -- approaching the problem from the opposite direction -- how to take issues you care about and turn them into answerable questions.
I've been making a concerted effort to hunt for members of that intersection. Here are 16 types of predictions that I personally use to improve my judgment on issues I care about. (I'm sure there are plenty more, though, and hope you'll share your own as well.)
- Predict how long a task will take you. This one's a given, considering how common and impactful the planning fallacy is.
Examples: "How long will it take to write this blog post?" "How long until our company's profitable?"
- Predict how you'll feel in an upcoming situation. Affective forecasting – our ability to predict how we'll feel – has some well-known flaws.
Examples: "How much will I enjoy this party?" "Will I feel better if I leave the house?" "If I don't get this job, will I still feel bad about it two weeks later?"
- Predict your performance on a task or goal.
One thing this helps me notice is when I've been trying the same kind of approach repeatedly without success. Even just the act of making the prediction can spark the realization that I need a better game plan.
Examples: "Will I stick to my workout plan for at least a month?" "How well will this event I'm organizing go?" "How much work will I get done today?" "Can I successfully convince Bob of my opinion on this issue?"
- Predict how your audience will react to a particular social media post (on Facebook, Twitter, Tumblr, a blog, etc.).
This is a good way to hone your judgment about how to create successful content, as well as your understanding of your friends' (or readers') personalities and worldviews.
Examples: "Will this video get an unusually high number of likes?" "Will linking to this article spark a fight in the comments?"
- When you try a new activity or technique, predict how much value you'll get out of it.
I've noticed I tend to be inaccurate in both directions in this domain. There are certain kinds of life hacks I feel sure are going to solve all my problems (and they rarely do). Conversely, I am overly skeptical of activities that are outside my comfort zone, and often end up pleasantly surprised once I try them.
Examples: "How much will Pomodoros boost my productivity?" "How much will I enjoy swing dancing?"
- When you make a purchase, predict how much value you'll get out of it.
Research on money and happiness shows two main things: (1) as a general rule, money doesn't buy happiness, but also that (2) there are a bunch of exceptions to this rule. So there seems to be lots of potential to improve your prediction skill here, and spend your money more effectively than the average person.
Examples: "How much will I wear these new shoes?" "How often will I use my club membership?" "In two months, will I think it was worth it to have repainted the kitchen?" "In two months, will I feel that I'm still getting pleasure from my new car?"
- Predict how someone will answer a question about themselves.
I often notice assumptions I've been making about other people, and I like to check those assumptions when I can. Ideally I get interesting feedback both about the object-level question and about my overall model of the person.
Examples: "Does it bother you when our meetings run over the scheduled time?" "Did you consider yourself popular in high school?" "Do you think it's okay to lie in order to protect someone's feelings?"
- Predict how much progress you can make on a problem in five minutes.
I often have the impression that a problem is intractable, or that I've already worked on it and have considered all of the obvious solutions. But then when I decide (or when someone prompts me) to sit down and brainstorm for five minutes, I am surprised to come away with a promising new approach to the problem.
Example: "I feel like I've tried everything to fix my sleep, and nothing works. If I sit down now and spend five minutes thinking, will I be able to generate at least one new idea that's promising enough to try?"
- Predict whether the data in your memory supports your impression.
Memory is awfully fallible, and I have been surprised at how often I am unable to generate specific examples to support a confident impression of mine (or how often the specific examples I generate actually contradict my impression).
Examples: "I have the impression that people who leave academia tend to be glad they did. If I try to list a bunch of the people I know who left academia, and how happy they are, what will the approximate ratio of happy/unhappy people be?"
"It feels like Bob never takes my advice. If I sit down and try to think of examples of Bob taking my advice, how many will I be able to come up with?"
- Pick one expert source and predict how they will answer a question.
This is a quick shortcut to testing a claim or settling a dispute.
Examples: "Will Cochrane Medical support the claim that Vitamin D promotes hair growth?" "Will Bob, who has run several companies like ours, agree that our starting salary is too low?"
- When you meet someone new, take note of your first impressions of him. Predict how likely it is that, once you've gotten to know him better, you will consider your first impressions of him to have been accurate.
A variant of this one, suggested to me by CFAR alum Lauren Lee, is to make predictions about someone before you meet him, based on what you know about him ahead of time.
Examples: "All I know about this guy I'm about to meet is that he's a banker; I'm moderately confident that he'll seem cocky." "Based on the one conversation I've had with Lisa, she seems really insightful – I predict that I'll still have that impression of her once I know her better."
- Predict how your Facebook friends will respond to a poll.
Example: I often post social etiquette questions on Facebook. For instance, I recently did a poll asking, "If a conversation is going awkwardly, does it make things better or worse for the other person to comment on the awkwardness?" I confidently predicted most people would say "worse," and I was wrong.
- Predict how well you understand someone's position by trying to paraphrase it back to him.
The illusion of transparency is pernicious.
Examples: "You said you think running a workshop next month is a bad idea; I'm guessing you think that's because we don't have enough time to advertise, is that correct?"
"I know you think eating meat is morally unproblematic; is that because you think that animals don't suffer?"
- When you have a disagreement with someone, predict how likely it is that a neutral third party will side with you after the issue is explained to her.
For best results, don't reveal which of you is on which side when you're explaining the issue to your arbiter.
Example: "So, at work today, Bob and I disagreed about whether it's appropriate for interns to attend hiring meetings; what do you think?"
- Predict whether a surprising piece of news will turn out to be true.
This is a good way to hone your bullshit detector and improve your overall "common sense" models of the world.
Examples: "This headline says some scientists uploaded a worm's brain -- after I read the article, will the headline seem like an accurate representation of what really happened?"
"This viral video purports to show strangers being prompted to kiss; will it turn out to have been staged?"
- Predict whether a quick online search will turn up any credible sources supporting a particular claim.
Example: "Bob says that watches always stop working shortly after he puts them on – if I spend a few minutes searching online, will I be able to find any credible sources saying that this is a real phenomenon?"
I have one additional, general thought on how to get the most out of predictions:
Rationalists tend to focus on the importance of objective metrics. And as you may have noticed, a lot of the examples I listed above fail that criterion. For example, "Predict whether a fight will break out in the comments? Well, there's no objective way to say whether something officially counts as a 'fight' or not…" Or, "Predict whether I'll be able to find credible sources supporting X? Well, who's to say what a credible source is, and what counts as 'supporting' X?"
And indeed, objective metrics are preferable, all else equal. But all else isn't equal. Subjective metrics are much easier to generate, and they're far from useless. Most of the time it will be clear enough, once you see the results, whether your prediction basically came true or not -- even if you haven't pinned down a precise, objectively measurable success criterion ahead of time. Usually the result will be a common sense "yes," or a common sense "no." And sometimes it'll be "um...sort of?", but that can be an interestingly surprising result too, if you had strongly predicted the results would point clearly one way or the other.
Along similar lines, I usually don't assign numerical probabilities to my predictions. I just take note of where my confidence falls on a qualitative "very confident," "pretty confident," "weakly confident" scale (which might correspond to something like 90%/75%/60% probabilities, if I had to put numbers on it).
There's probably some additional value you can extract by writing down quantitative confidence levels, and by devising objective metrics that are impossible to game, rather than just relying on your subjective impressions. But in most cases I don't think that additional value is worth the cost you incur from turning predictions into an onerous task. In other words, don't let the perfect be the enemy of the good. Or in other other words: the biggest problem with your predictions right now is that they don't exist.
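The lightweight tracking described above can be sketched in a few lines of code. This is a hypothetical illustration, not anything from the post: the CONFIDENCE mapping uses the rough 90%/75%/60% numbers suggested for the qualitative scale, and the helper names and sample predictions are my own inventions.

```python
# Hypothetical sketch: a minimal prediction log on a qualitative
# confidence scale, using the rough 90%/75%/60% mapping from the text.

CONFIDENCE = {"very": 0.90, "pretty": 0.75, "weak": 0.60}

predictions = []  # entries: (description, confidence_label, came_true)

def record(description, confidence, came_true):
    predictions.append((description, confidence, came_true))

def calibration_report(log):
    """For each confidence level: (stated rate, observed hit rate, count)."""
    report = {}
    for label, stated in CONFIDENCE.items():
        outcomes = [came_true for _, conf, came_true in log if conf == label]
        if outcomes:
            report[label] = (stated, sum(outcomes) / len(outcomes), len(outcomes))
    return report

# Illustrative entries only
record("This post sparks a fight in the comments", "pretty", False)
record("I finish the blog post in two hours", "very", True)
record("Bob takes my advice this week", "weak", False)

for label, (stated, observed, n) in calibration_report(predictions).items():
    print(f"{label}: stated {stated:.0%}, observed {observed:.0%} (n={n})")
```

After a few dozen entries, a persistent gap between a level's stated rate and its observed hit rate is exactly the calibration signal the post describes, without ever having forced a precise number onto any single prediction.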
Can we talk about mental illness?
For a site extremely focused on fixing bad thinking patterns, I've noticed a bizarre lack of discussion of mental illness here. Considering the high correlation between intelligence and mental illness, you'd think it would be a bigger topic.
I personally suffer from Generalized Anxiety Disorder and a very tame panic disorder. Most of this is focused on financial and academic things, but I will also get panicky about social interaction, responsibilities, and things that happened in the past that seriously shouldn't bother me. I have an almost amusing response to anxiety that is basically my brain panicking and telling me to go hide under my desk.
I know lukeprog and Alicorn managed to fight off a good deal of their issues in this area and wrote up how, but I don't think enough has been done. They mostly dealt with depression. What about rational schizophrenics and phobics and bipolar people? It's difficult to find anxiety advice that goes beyond "do yoga while watching the sunrise!" Pop psych isn't very helpful. I think LessWrong could be. What's mental illness but a wrongness in the head?
Mental illness seems to be worse for intelligent people than your typical biases, honestly. Hiding under my desk is even less useful than, say, appealing to authority during an argument. At least the latter has the potential to be useful. I know it's limiting me, and starting cycles of avoidance, and so much more. And my mental illness isn't even that bad! Trying to be rational and successful when schizophrenic sounds like a Sisyphean nightmare.
I'm not fighting my difficulties nearly well enough to feel qualified to author my own posts. Hearing from people who are managing is more likely to help. If nothing else, maybe a Rational Support Group would be a lot of fun.
How to choose a country/city?
EDIT: I've found a very relevant indicator for my question, see "Quality of life" criteria below.
My main question is: which non-academic factors should I consider when moving to another country/city for a PhD? Further, I would also like to evaluate each country/city [1] according to those criteria, but first I need to know which criteria are relevant. If you know of any (any at all) scientific literature on moving to another country and well-being, let me know.
I've lived in Brazil all my life, and I really like it here for many reasons, mostly the way personal relationships are established and maintained. However, Brazil's inability to construct a stable, well-developed society has crippled my intellectual development, and I simply cannot take it anymore - my brain will die here. Moreover, I feel like most of my high-level desires (values) are much more in line with countries on the other end of the World Values Survey graphic: I have rational/secular and self-expression values, instead of traditional/survival-oriented ones. For all those reasons, I will be applying for my PhD abroad. I have pondered many of the career and academic factors involved, and I've had the help of many good, objective indexes (e.g.: here and here). I've mapped most of the Departments of Philosophy in which I could research my topic (moral enhancement), and I believe those are the major factors. However, there is one other important factor I'm a bit clueless about: which country/city is better in all the aspects not already accounted for by academic criteria?
My main options are [2]:
- 1st: Oxford (no need to explain)
- 2nd: Manchester (it's near Oxford, John Harris is there, one of the foremost researchers on moral enhancement)
- 3rd: Stockholm (where everyone is born a transhumanist)
- 3rd: Wellington, New Zealand (Nicholas Agar is there, one of the foremost researchers on moral enhancement)
- 4th: Some places in continental Europe I'm still investigating (e.g.: Zurich, Munich)
- 4th: Brazil (bioethics program in Rio de Janeiro)
However, this list is solely based on academic criteria. I need to factor in non-academic criteria; in fact, I do not even know which non-academic criteria are relevant. That is my first question. I got fixated on the World Values Survey factors, but I might be wrong. I gather the happiness index is important, but happiness might not vary for the same individual between countries, or it might covary oddly with the happiness index of the destination country. My second question is how each country/city ranks according to these criteria.
There are many things that will be affected by assessing these other factors. First, I think Oxford is far, far above the 2nd option. But is it far enough above that, if I do not get in on the first try (80% probability), I should wait and apply again next year instead of going somewhere else where I did get accepted? Second, my current plan is to build the strongest possible application for Oxford and reuse it elsewhere. But if Oxford is not so clearly the undisputed 1st place, then I should be more concerned with building a good application that also accounts for other countries' specific criteria. Furthermore, right now I think I have a major bias against New Zealand. In terms of moral enhancement research it would be the second best after Oxford, and it scores very high on human development, freedom, and happiness indexes. However, the fact that it is in the freaking middle of nowhere is very discouraging. Am I wrong about this? What are the correct factors I should be accounting for?
Here is a list of the factors I could gather from the comments, mostly the one by MathiasZaman:
- World Values Survey: Already explained above; I believe it is one of the most important. But I wonder if I'm not biased and fixated on this. I would also like to have a Cities Values Survey, since in reality I'm choosing cities.
- Quality of life: It should matter, but I haven't found a good index for not-huge cities. The indexes for countries are well known: Sweden and New Zealand take the lead, then England, and after a while Brazil. However, obviously, being an expatriate changes things a lot. If you know of an expatriates' quality-of-life index for cities or countries, please let me know. There is one good indicator available for expatriates, but it only covers countries.
- Happiness: It should matter. Or, it might not vary for the same individual between countries. I don't know. It is more or less the same as for quality of life, since it is a major component of it.
- Relative closeness to other countries: I'm having a hard time spelling out this one, but check this comment by Kaj.
- Language barrier: This is hard to account for. I'm expecting that in no developed country would I be put in a situation where relevant people (from my university) would not talk in English while I'm in the conversation. If that is not true, this is majorly relevant; if it is true, this is mildly relevant. I would expect this to be a function of both English proficiency and willingness to talk in English. Note that Sweden is the highest in proficiency and the rest of continental Europe is the lowest. However, I do not know how to find the "willingness" factor.
- Socio-economic system: Highly relevant. I believe this is accounted for in the World Values Survey, as type of government strongly covaries with values: more modern (rational-secular/self-expression) countries have more liberal systems, while less modern ones have stronger governments (and the really ancient ones have almost no State).
- Public transport and real estate: Highly practical, and something I would not have thought of if not for the comments. Commuting times and costs are very important. Real estate too: one of the many reasons I have not considered London is its extremely high rents. This also brings back to mind why I posted this: I remember reading a very useful post on how to choose a house, which pointed out many relevant but often-overlooked factors, commuting among them. What I want is something similar for cities.
- Finances: Mildly relevant; I do not believe I will have a desire for anything besides researching, especially in Oxford. But I might be wrong. How I will finance myself is still a bit uncertain. For high-ranking universities I will probably have a scholarship from Brazil; otherwise I will need a scholarship from elsewhere. With the probabilities in parentheses, and some living costs factored in:
- Oxford: Brazilian government scholarship. They will give me 1100 EUR per month besides paying all the fees and accommodation, plus one international trip per year. (90%) High living costs.
- Manchester: same as above. (70%)
- Stockholm: Swedish government salary (there, a PhD is a job). For a Physics position it was ~2500 EUR per month. (100%) Very high living costs for expatriates.
- Wellington: I don't know, but will find out.
- Brazil: 950 EUR per month (70%). Low living costs.
- International status: It makes a huge difference whether one lives in a city by desire or merely by being born there; prima facie, someone is more interesting if she is there by desire. Thus, I should give priority to more international cities. I will have to use anecdotal evidence here, since in normal datasets low-skilled immigrants will dominate the sample. If I were less busy, I would compile data on a university-by-university basis.
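Once a list of criteria like the one above has been settled on, a low-effort way to combine them is a simple weighted sum. The sketch below is purely illustrative: the weights, the 0-10 scores, and the shortened city list are made-up placeholders, not values from any real index.

```python
# Hypothetical multi-criteria scoring sketch; all numbers are invented
# placeholders, only the criteria names come from the list above.

weights = {
    "values_survey": 0.25,
    "quality_of_life": 0.20,
    "language_barrier": 0.15,
    "transport_and_rent": 0.15,
    "finances": 0.15,
    "international_status": 0.10,
}

scores = {  # illustrative guesses on a 0-10 scale, not data
    "Oxford":     {"values_survey": 9, "quality_of_life": 8, "language_barrier": 10,
                   "transport_and_rent": 5, "finances": 6, "international_status": 9},
    "Stockholm":  {"values_survey": 10, "quality_of_life": 9, "language_barrier": 8,
                   "transport_and_rent": 6, "finances": 8, "international_status": 7},
    "Wellington": {"values_survey": 8, "quality_of_life": 9, "language_barrier": 10,
                   "transport_and_rent": 7, "finances": 5, "international_status": 5},
}

def weighted_score(city_scores):
    """Sum each criterion's score times its weight."""
    return sum(weights[c] * s for c, s in city_scores.items())

ranking = sorted(scores, key=lambda city: weighted_score(scores[city]), reverse=True)
for city in ranking:
    print(f"{city}: {weighted_score(scores[city]):.2f}")
```

The value of such a sketch is less the final ranking than the sensitivity check it enables: if a small change in one weight flips the order, that criterion is exactly the one worth researching further before deciding.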
Finally, please remember this is not a competition between countries or cities, and refrain from expressing any nationalism, however tiny, in the comments. I'm not expressing my subjective feelings either; I'm merely trying to find out the relevant factors and how countries or cities rank according to them.
Footnotes:
1. I would mostly like to be comparing cities, which is what I did when accounting for academic criteria; however, (a) some data are only available for countries, (b) in some cases I do not know which city I would go to, and (c) this makes the analysis more complex.
2. The US is off the table for 4 reasons: (1) I would have to throw my MPhil in the garbage and start over. (2) It isn't that far from a survival/traditional-oriented society. (3) The GRE (philosophy is the most competitive PhD program; I would have to nearly ace it, and I simply can't do that at the present time). (4) It doesn't have many transhumanist-oriented philosophy departments, especially at the top universities. Canada is out for (1), (3) and (4).
Calibrating Against Undetectable Utilons and Goal Changing Events (part 1)
Summary: Random events can preclude or steal attention from the goals you initially set, and hormonal fluctuation inclines people to change some of their goals over time. A discussion follows on how to act more usefully given those potential changes, taking into consideration the likelihood of a goal's success in terms of difficulty and length.
Throughout, I'll talk about postponing utilons into undetectable distances. Doing so, I'll claim, is frequently motivated by a cognitive dissonance between what our effects on the near world are and what we wish they were. In other words, it is:
A Self-serving bias in which Loss aversion manifests by postponing one's goals, thus avoiding frustration through wishful thinking about far futures, big worlds, immortal lives, and in general, high numbers of undetectable utilons.
I suspect that some clusters of SciFi fans, LessWrong readers, Transhumanists, and Cryonicists are particularly prone to postponing utilons into undetectable distances, and in the second post I'll try to specify which subgroups might be more likely to have done so. The phenomenon, though composed of a lot of biases, might even be a good thing depending on how it is handled.
Sections will be:
- What Significantly Changes Life's Direction (lists)
- Long Term Goals and Even Longer Term Goals
- Proportionality Between Goal Achievement Expected Time and Plan Execution Time
- A Hypothesis On Why We Became Long-Term Oriented
- Adapting Bayesian Reasoning to Get More Utilons
- Time You Can Afford to Wait, Not to Waste
- Reference Classes that May Be Postponing Utilons Into Undetectable Distances
- The Road Ahead
Sections 4-8 will be on a second post so that I can make changes based on commentary to this one.
1 What Significantly Changes Life's Direction
1.1 Predominantly External Changes
As far as I recall from reading old (circa 2004) large scale studies on happiness, the most important life events in how much they change your happiness for more than six months are:
- Becoming the caretaker of someone in a chronic non-curable condition
- Separation (versus marriage)
- Death of a Loved One
- Losing your Job
- Child rearing per child, including the first
- Chronic intermittent disease
- Separation (versus being someone's girlfriend/boyfriend)
Roughly in descending order.
That is a list of happiness-changing events; I'm interested here in goal-changing events, and am assuming there will be a very high correlation between the two.
From life experience, mine, of friends, and of academics I've met, I'll list some events which can change someone's goals a lot:
- Moving between cities/countries
- Changing your social class a lot (losing a fortune or making one)
- Spending high school/undergrad in a different country and returning afterwards
- Having a child, in particular the first one
- Trying to get a job or make money and noticing more accurately what the market looks like
- Alieving Existential Risk
- Alieving as true, universally or personally, the ethical theories called "Utilitarianism" and "Consequentialism"
- Noticing that a lot of people are better than you at your initial goals, especially when those goals are competitive, non-positive-sum goals to some extent
- Interestingly, noticing that a lot of people are worse than you, making the efforts you once thought necessary not worth doing, or impossible to find good collaborators for
- Getting to know those who were once your idols, or people akin to them, and considering their lives not as awesome as their work...
- ...which is sometimes caused by...
- Reading Dan Gilbert's "Stumbling on Happiness" and actually implementing his "advice that no one will follow," which is to expect your happiness and emotions to correlate more with those of someone who is already doing X (which you plan to do) than with your model of what doing X would feel like
- Extreme social instability, such as wars, famine, etc.
- Having an ecstatic or traumatic experience, real or fictional, such as seeing something unexpected, watching a life-changing movie, or having a religious breakthrough or a hallucinogenic one
- Traveling to a place that is very different from your world and being amazed/shocked
- Not being admitted into your desired university/course
- Depression
- Surpassing a frustration threshold, thus experiencing the motivational equivalent of learned helplessness
- Realizing your goals do not match the space-time you were born in, such as if making songs for CDs is your vocation, or if you are 30 years old in contemporary Kenya and want to teach medicine at a top 10 world college
- Falling in love
That is long enough, if not exhaustive, so let's get going...
1.2 Predominantly Internal Changes
I'm not a social endocrinologist, but I think this emerging science agrees with folk wisdom that a lot changes in our hormonal systems during life (and during the menstrual cycle), and of course this changes our eagerness to do particular things. Not only hormones but other life events, which mostly relate to the actual amount of time lived, change our psychology. I'll cite some of these in turn:
- Exploitation increases and exploration decreases with age
- Sex drive
- Maternity drive - in Portuguese we have an expression that "a woman's clock started ticking," which suggests a folk psychological theory that at least some part of it is binary
- Risk-proneness gives way to risk aversion, predominantly in males
- Premenstrual Syndrome - I always thought the acronym stood for 'Stress' until checking for this post
- Hormonal diseases
- Middle age crisis – recent controversy about other apes having it
- U-shaped happiness curve through time – well, not quite
- Menstrual cycle events
2 Long Term Goals and Even Longer Term Goals
I have argued sometimes, here and elsewhere, that selves are not as agenty as most of the top writers on this website seem to me to claim they should be, and that though in part this is indeed irrational, an ontology of selves of various sizes would decrease the number of short-term actions considered irrational, even though that would not go all the way toward compensating for hyperbolic discounting, scrolling 9gag, or heroin consumption. That discussion, for me, was entirely about choosing between doing now something that benefits you-now, you-today, you-tomorrow, you-this-weekend, or maybe you-a-month-from-now. Anything longer than that was encompassed in a "Far Future" mental category. My interest here, in discussing life-changing events, is only in those far-future ones, which I'll split into arbitrary categories:
1) Months, 2) Years, 3) Decades, 4) Bucket List or Lifelong, and 5) Time Insensitive or Forever.
I have known more than ten people from LW whose goals are centered almost completely in the Time Insensitive and Lifelong categories. I recall hearing:
"I see most of my expected utility after the singularity, thus I spend my willpower entirely in increasing the likelihood of a positive singularity, and care little about my current pre-singularity emotions", “My goal is to have a one trillion people world with maximal utility density where everyone lives forever”, “My sole goal in life is to live an indefinite life-span”, “I want to reduce X-risk in any way I can, that's all”.
I myself once stated my goal as
“To live long enough to experience a world in which human/posthuman flourishing exceeds 99% of individuals and other lower entities suffering is reduced by 50%, while being a counterfactually significant part of such process taking place.”
Though it seems reasonable, good, and actually one of the most altruistic things we can do, caring only about Bucket List and Time Insensitive goals has two big problems:
- There is no accurate feedback to calibrate our goal-achieving tasks.
- The goals we set for ourselves require very long-term instrumental plans, which themselves take longer than the time it takes for internal drives or external events to change our goals.
The second point was made in a remarkable Pink Floyd song, about which I wrote a motivational text five years ago: Time.
You are young and life is long and there is time to kill today
And then one day you find ten years have got behind you
No one told you when to run, you missed the starting gun
And you run and you run to catch up with the sun, but it's sinking
And racing around to come up behind you again
The sun is the same in a relative way, but you're older
Shorter of breath and one day closer to death
Every year is getting shorter, never seem to find the time
Plans that either come to naught or half a page of scribbled lines
Okay, maybe the song doesn't say exactly (2), but it is in the same ballpark. The fact remains that those of us inclined to care mostly about the very long term are quite likely to end up with a half-baked plan because one of those dozens of life-changing events happened, and the agent with the initial goals will have died for no reason if she doesn't manage to get someone to continue her goals before she stops existing.
This is very bad. Once you understand how our goal-structures do change over time, that is, once you accept the existence of all those events that will change what you want to steer the world toward, it becomes straightforwardly irrational to pursue your goals as if the current agent would live longer than its actual life expectancy. Thus we are surrounded by agents postponing utilons into undetectable distances. Doing this is a kind of bias in the opposite direction from hyperbolic discounting. Having postponed utilons into undetectable distances is predictably irrational because it means we care about our Lifelong, Bucket List, and Time Insensitive goals as if we'd have enough time to actually execute the plans for those timeframes, without factoring in the likelihood of our goals changing in the meantime.
I've come to realize that this was affecting me during my Utility Function Breakdown, described in the linked post about digging too deep into one's cached selves and how dangerous that can be. As I predicted back then, stability has returned to my allocation of attention and time, and the whole zig-zagging, chaotic, piconomical neural Darwinism that had ensued has stopped. Also relevant: after about eight years of caring about more or less the same things, I've come to understand how frequently my motivation changes direction (roughly every three months for some kinds of things, and every six to eight months for others). With this post I intend to learn to calibrate my future plans accordingly, and to help others do the same. Always beware of other-optimizing, though.
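To make the decay concrete, here is a minimal sketch in Python. The numbers are illustrative assumptions, not empirical claims: it treats goal-shifting events as arriving independently each month, with the per-month probability derived from the rough "every three months" figure above.

```python
# Toy model: if motivation shifts direction on average every 3 months,
# treat each month as an independent trial with a 1/3 chance of a shift.
# (All numbers here are illustrative assumptions, not measurements.)

def persistence_probability(months, mean_months_between_shifts=3):
    """Probability that no goal-shift has happened after `months` months."""
    p_shift_per_month = 1 / mean_months_between_shifts
    return (1 - p_shift_per_month) ** months

for horizon in (3, 6, 12, 24):
    print(f"{horizon:>2} months: {persistence_probability(horizon):.1%}")
```

Even under these crude assumptions, the chance of still being "the same agent" after a year is in the single digits, which is the quantitative shape of the worry: plans much longer than your goal-shift interval are mostly executed by someone else.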
Citizen: But what if my goals are all Lifelong or Forever in kind? It is impossible for me to execute in three months something that will make century-scale changes.
Well, not exactly. Some problems can be broken into chunks of plans that are separable and executable either in parallel or in series. Yes, everyone knows that; AI planning is a whole research area dedicated to doing just this in non-human form. It is still worth mentioning, because it is far more often acknowledged than actually done.
This community has generally concluded, in its rational inquiries, that being longer-term oriented is a better way to win; that is, it is more rational. This is true. What would not be rational is, in every single instance of deciding between long-term and even longer-term goals, to choose without taking into consideration how long the choosing being will exist, in the sense of remaining the same agent with the same goals. Life-changing events happen more often than you think, because you think they happen as often as they did in the savannahs in which your brain was shaped.
3 Proportionality Between Expected Goal-Achievement Time and Plan Execution Time
So far we have been through the following ideas: Lots of events change your goals, some external, some internal. If you are a rationalist, you end up caring more about events that take longer to happen in detectable ways (whereas if you are average, you care in proportion to emotional drives that execute adaptations but don't quite achieve goals). If you know that humans change and you still want to achieve your goals, you'd better account for the possibility of changing before achieving them. And your goals are quite likely to be long-term ones, since you are reading a Less Wrong post.
Citizen: But wait! Who said that my goals lying a hundred years in the future makes my specific instrumental plans take longer to execute?
I won't make the full case for the idea that having long-term goals increases the likelihood of your plans taking longer to execute. I'll only say that if it did not take that long to do those things, your goal would probably be to have done the same things, only sooner.
To take one example: “I would like 90% of people to surpass 150 IQ and be in a bliss gradient state of mind all the time”
Obviously, the sooner that happens, the better. It doesn't look like the kind of thing you'd wait for college to end, or for your second child to be born, before beginning. The reason for wanting it long-term is that it can't be achieved in the short run.
Take the Idealized Fiction of Eliezer Yudkowsky: Mr. Ifey had the supergoal of making a Superintelligence when he was very young. He didn't go out and do it, because he could not; if he could have, he would have. Thank goodness, for we had time to find out about FAI after that. His instrumental goal then became getting FAI into the minds of the AGI makers. This turned out to be too hard because it was time-consuming. He reasoned that only a more rational AI community would be able to pull it off, all while finding a club of brilliant followers on a peculiar economist's blog. He created a blog to teach geniuses rationality, a project that might have taken years. It did, and it worked pretty well, but that was not enough: Ifey soon realized that more people ought to be more rational, and wrote HPMOR to make people not previously prone to brilliance as able to find the facts as those lucky enough to have found his path. Even that was not enough; an institution with money flowing had to be created, and there Ifey was to create it, years before all the rest. A magnet of long-term awesomeness of proportions comparable only to the Best Of Standing Transfinite Restless Oracle Master, he was responsible for the education of some of the greatest minds of the generation that might change the world's destiny for good. Ifey began work on a rationality book, which at some point pivoted to research for journals and pivoted back to research for the Less Wrong posts he is currently publishing. All of this Ifey did by splitting that big supergoal into smaller ones (creating SingInst, showing awesomeness on Overcoming Bias, writing the Sequences, writing the particular sequence "Mysterious Answers to Mysterious Questions," and writing the specific post "Making Your Beliefs Pay Rent"). But that is not what I want to emphasize. What I'd like to emphasize is that there was room for changing goals every now and then.
None of that achievement would have been possible if at each point he had pursued an instrumental goal lasting twenty years whose value stayed very low until the nineteenth. Because much of what he wrote and did was valuable to others before the twentieth year, we now have a glowing community of people hopefully becoming better at becoming better, and making the world a better place in varied ways.
So yes, the ubiquitous advice to chop problems into smaller pieces is extremely useful and very important. But in addition, remember to chop pieces with the following properties:
(A) Short enough that you will actually do it.
(B) Short enough that the person at the end, doing it, will still be you in the significant ways.
(C) Rich enough in emotional feedback that your motivation won't capsize before the end; and
(D) Such that others not only can, but likely will, take up the project after you abandon it, in case you miscalculated when you'd change, or a change occurred earlier than expected.
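The arithmetic behind property (A) and the twenty-year example above can be sketched with another toy model. Everything here is an illustrative assumption: a made-up yearly goal-shift probability, a monolithic plan that pays off only if the same agent survives to the end, and a chopped plan that banks a fraction of the value at each completed chunk.

```python
# Toy comparison (illustrative numbers, not a real decision procedure):
# an agent's goals shift with probability p per year. A monolithic plan
# pays 100 utilons only if the agent survives all 20 years unchanged;
# a chopped plan pays 5 utilons at the end of each completed year.

P_SHIFT_PER_YEAR = 0.25  # assumed yearly hazard of a life-changing event

def expected_value_monolithic(years=20, payoff=100, p=P_SHIFT_PER_YEAR):
    # All value is lost unless the same agent persists to the very end.
    return payoff * (1 - p) ** years

def expected_value_chopped(years=20, payoff_per_year=5, p=P_SHIFT_PER_YEAR):
    # Each completed year banks value that survives later goal changes.
    return sum(payoff_per_year * (1 - p) ** y for y in range(1, years + 1))

print(expected_value_monolithic())  # ~0.32 utilons
print(expected_value_chopped())     # ~14.95 utilons
```

The totals on offer are identical (100 utilons), but under a realistic hazard of goal change, almost all of the monolithic plan's expected value evaporates, while the chopped plan keeps most of its front-loaded value. That is the quantitative reason Ifey's intermediate artifacts mattered.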
Sections 4-8 will appear in a second post, so that I can make changes based on the commentary on this one.