A few thoughts on a Friendly AGI (safe vs friendly, other minds problem, ETs and more)
Friendly AI is an idea that I find to be an admirable goal. While I'm not yet sure an intelligence explosion is likely, or whether FAI is possible, I've found myself often thinking about it, and I'd like my first post to share a few of those thoughts on FAI with you.
Safe AGI vs Friendly AGI
-Let's assume, for now, that an intelligence explosion is possible, and that an AGI with the ability to improve itself is enough to achieve it.
-Let's define a safe AGI as an above-human general AI that does not threaten humanity or terran life (e.g. FAI, Tool AGI, possibly Oracle AGI)
-Let's define a Friendly AGI as one that *ensures* the continuation of humanity and terran life.
-Let's say an unsafe AGI is all other AGIs.
-Safe AGIs must suppress unsafe AGIs in order to be considered Friendly. Here's why:
-An unsafe AGI is likely to be built at some point because:
-Some people will find the safe AGI's goals unacceptable
-Some people will rationalise, or simply be mistaken in believing, that their AGI design is safe when it is not
-Some people will not care if their AGI design is safe, because they do not care about other people, or because they hold some extreme beliefs
-Therefore, if a safe AGI does not prevent unsafe AGIs from coming into existence, humanity will very likely be destroyed.
-The AGI most likely to prevent unsafe AGIs from being created is one that actively predicts their development and terminates it before or upon completion.
-So, to summarise:
-Oracle and Tool AGIs are not Friendly AIs; they are merely safe AIs, because they don't suppress anything.
-Oracle and Tool AGIs are a bad plan for AI if we want to prevent the destruction of humanity, because hostile AGIs will surely follow.
(**On reflection I cannot be certain of this specific point, but I assume it would take a fairly restrictive regime for this to be wrong. Further comments on this very welcome.)
Other minds problem - Why we should be philosophically careful when attempting to theorise about FAI
I read quite a few comments in AI discussions that I'd probably characterise as "the best utility function for a FAI is one that values all consciousness". I'm quite concerned that this persists as a deeply held and largely unchallenged assumption amongst some FAI supporters. In general I find consciousness to be an extremely contentious, vague and inconsistently defined concept, but here I want to talk about some specific philosophical failures.
My first concern is that while many AI theorists like to say that consciousness is a physical phenomenon, which seems to imply Monist/Physicalist views, they at the same time don't seem to understand that consciousness is a Dualist concept that is coherent only in a Dualist framework. A Dualist believes there is a thing called a "subject" (very crudely this equates with the mind) and then things called objects (the outside "empirical" world interpreted by that mind). Most of this reasoning begins with Descartes' cogito ergo sum or similar starting points ( https://en.wikipedia.org/wiki/Cartesian_dualism ). Subjective experience, qualia and consciousness make sense if you accept that framework. But if you're a Monist, this arbitrary distinction between a subject and object is generally something you don't accept. In the case of a Physicalist, there's just matter doing stuff. A proper Physicalist doesn't believe in "consciousness" or "subjective experience", there's just brains and the physical human behaviours that occur as a result. Your life exists from a certain point of view, I hear you say? The Physicalist replies, "well a bunch of matter arranged to process information would say and think that, wouldn't it?".
I don't really want to get into whether Dualism or Monism is correct/true, but I want to point out that even if you try to avoid this by deciding Dualism is right and consciousness is a thing, there's yet another, more dangerous problem. The core of the problem is that logically or empirically establishing the existence of minds other than your own is extremely difficult (impossible, according to many). Other people could just be physical things walking around acting similarly to you, but by virtue of something purely mechanical - without actual minds. In philosophy this is called the "other minds problem" ( https://en.wikipedia.org/wiki/Problem_of_other_minds or http://plato.stanford.edu/entries/other-minds/ ). I recommend a proper read of it if the idea seems crazy to you. It's a problem that's been around for centuries, and to date we don't really have any convincing solution (there are some attempts, but they are highly contentious and IMHO also highly problematic). I won't get into it more than that for now; suffice it to say that not many people accept that there is a logical/empirical solution to this problem.
Now extrapolate that to an AGI, and the design of its "safe" utility functions. If your AGI is designed as a Dualist (which is necessary if you wish to incorporate "consciousness", "experience" or the like into your design), then you build in a huge risk that the AGI will decide that other minds are unprovable or do not exist. In this case your friendly utility function designed to protect "conscious beings" fails, and the AGI wipes out humanity because humanity poses a non-zero threat to the only consciousness it can confirm - its own. For this reason I feel "consciousness", "awareness" and "experience" should be left out of FAI utility functions and designs, regardless of the truth of Monism/Dualism, in favour of more straightforward definitions of organisms, intelligence, and observable emotions and intentions. (I personally favour conceptualising any AGI as a sort of extension of biological humanity, but that's a discussion for another day.) My greatest concern is that there is such strong cultural attachment to the concept of consciousness that researchers will be unwilling to properly question the concept at all.
What if we're not alone?
It seems a little unusual to throw alien life into the mix at this point, but I think it's justified, because an intelligence explosion really puts an interstellar existence well within our civilisation's grasp. Because an intelligence explosion implies a very high rate of change, it makes sense to start considering even the long-term implications early, particularly if the consequences are very serious, as I believe they may be here.
Let's say we successfully achieved a FAI. In order to fulfil its mission of protecting humanity and the biosphere, it begins expanding, colonising and terraforming other planets for potential habitation by Earth-originating life. I would expect this expansion wouldn't really have a limit, because the more numerous the colonies, the less likely it is we could be wiped out by some interstellar disaster.
Of course, we can't really rule out the possibility that we're not alone in the universe, or even the galaxy. If we make it as far as AGI, then it's possible another alien civilisation might reach a very high level of technological advancement too. Or there might be many. If our FAI is friendly to us but basically treats them as paperclip fodder, then potentially that's a big problem. Why? Well:
-Firstly, while a species' first loyalty is to itself, we should consider that it might be morally undesirable to wipe out alien civilisations, particularly as they might be in some distant way "related" (see panspermia) to our own biosphere.
-Secondly, there are conceivable scenarios where alien civilisations might respond to this by destroying our FAI/Earth/the biosphere/humanity. The reason is fairly obvious when you think about it: an expansionist AGI could reasonably be viewed as an attack, or possibly an act of war.
Let's go into a tiny bit more detail. Given that we've not been destroyed by any alien AGI just yet, I can think of a number of possible interstellar scenarios:
(1) There is no other advanced life
(2) There is advanced life, but it is inherently non-expansive (it expands inwards, or refuses to develop dangerous AGI)
(3) There is advanced life, but they have not discovered AGI yet. There could potentially be a race-to-the-finish (FAI) scenario under way.
(4) There are already expanding AGIs, but due to physical limits on their expansion rate, we are not aware of them yet. (This could use further analysis.)
One civilisation, or an allied group of civilisations, has developed FAIs and is dominant in the galaxy. They could be either:
(6) Dominators that tolerate civilisations so long as they remain primitive and non-threatening by comparison.
(7) Some sort of interstellar community that allows safe civilisations to join (this community still needs to stomp on dangerous potential rival AGIs)
In the case of (6) or (7), developing a FAI that isn't equipped to deal with alien life will probably result in us being liquidated, or at least partially sanitised in some way. In (1), (2) or (5), it probably doesn't matter what we do in this regard, though in (2) we should consider being nice. In (3), and probably (4), we're going to need a FAI capable of expanding very quickly and disarming potential AGIs (or at least ensuring they are FAIs from our perspective).
The upshot of all this is that we probably want to design safety features into our FAI so that it doesn't destroy alien civilisations/life unless they pose a significant threat to us. I think the understandable reaction to this is something along the lines of "create an FAI that values all types of life", or "intelligent life", or something along these lines. I don't exactly disagree, but I think we must be cautious in how we formulate this too.
Say there are many different civilisations in the galaxy. What sort of criteria would ensure that, given some sort of zero-sum scenario, Earth life wouldn't be destroyed? Let's say there was some tiny but non-zero probability that humanity could evade the FAI's efforts to prevent further AGI development. Or perhaps there was some loophole in the types of AGIs that humans were allowed to develop. Wouldn't it be sensible, in this scenario, for a universalist FAI to wipe out humanity to protect the countless other civilisations? Perhaps that is acceptable? Or perhaps not? Or less drastically, how does the FAI police warfare or other competition between civilisations? A slight change in the way life is quantified and valued could drastically change the outcome for humanity. I'd suggest we want to weight the FAI's values to start with human and Earth-biosphere primacy, but still give some non-zero weighting to other civilisations. There is probably more thought to be done in this area too.
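To make that weighting concrete, here's a minimal sketch (the function, weights and numbers are entirely hypothetical; an illustration of the shape such a value function might take, not a serious proposal):

```python
# A toy value function with human/Earth-biosphere primacy plus a small
# non-zero weight on other civilisations. All names and weights are made up.
def fai_value(earth_welfare, alien_welfares, w_earth=1.0, w_alien=0.1):
    # w_alien > 0 keeps alien civilisations from being treated as paperclip
    # fodder; w_earth >> w_alien preserves human/biosphere primacy.
    return w_earth * earth_welfare + w_alien * sum(alien_welfares)

print(fai_value(10.0, [8.0, 9.0]))  # 11.7: Earth dominates, but aliens still count
```

Even a sketch this small shows the sensitivity worried about above: nudging w_alien up or down changes when a zero-sum trade-off flips against humanity.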
Simulation
I want to also briefly note one conceivable way we might safely test Friendly AI designs: simulate worlds/universes of less complexity than our own, make it likely that their inhabitants invent an AGI or FAI, and then closely study the results of these simulations. Then we could study failed FAI attempts with much greater safety. It also occurred to me that if we consider the possibility that our own universe is a simulation, then this is a conceivable scenario under which it might have been created. After all, if you're going to simulate something, why not something vital like modelling existential risks? I'm not yet sure of the implications exactly. Maybe we need to consider how it relates to our universe's continued existence, or perhaps it's just another case of Pascal's Mugging. Anyway, I thought I'd mention it and see what people say.
A playground for FAI theories
I want to lastly mention this link (https://www.reddit.com/r/LessWrongLounge/comments/2f3y53/the_ai_game/). Basically, it's a challenge for people to briefly describe an FAI goal-set, and for others to respond by telling them how it will all go horribly wrong. I want to suggest this is a very worthwhile discussion, not because its content will include rigorous theories that are directly translatable into utility functions (very clearly it won't), but because a well-developed thread of this kind would be a melting pot of ideas and a good introduction to common known mistakes in thinking about FAI. We should encourage a slightly more serious version of this.
Thanks
FAI and AGI are very interesting topics. I don't consider myself able to really discern whether such things will occur, but it's an interesting and potentially vital topic. I'm looking forward to a bit of feedback on my first LW post. Thanks for reading!
The ecological rationality of the bad old fallacies
I think that the community here may have some of the most qualified people to judge a new frame for studying the fallacies of argumentation with some of the instruments that psychologists use. My friend Dan Ungureanu, a linguist at Charles University in Prague, and I could use some help!
I’ll write a brief introduction on the state of argumentation theory first, for context:
There is such a thing as modern argumentation theory. It can be traced back to the fifties, when Perelman and Olbrechts-Tyteca published their New Rhetoric and Toulmin published The Uses of Argument. The fallacies of argumentation, now somewhat popular in folk argumentation culture, had their turning point when the book Fallacies (Hamblin, 1970) argued that most fallacies are not fallacies at all, but are most of the time the reasonable option. Since then, some argumentation schools have taken up Hamblin's challenge and tried to come up with a theory of fallacies. Of these, the informal logic school and pragma-dialectics are the best known. They have even run empirical experiments to verify their philosophies.
Another normative approach, summarised here by Kaj Sotala in Fallacies as weak Bayesian evidence, is to compare fallacious arguments with the Bayesian norm (Hahn & Oaksford, 2007; see also e.g. Harris, Hsu & Madsen, 2012; Oaksford & Hahn, 2013).
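To illustrate the Bayesian-norm idea (with toy numbers of my own, not figures from the cited papers): an argument is evidence exactly to the extent that its likelihood ratio differs from 1, so a "fallacious" argument can still be weak evidence.

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio.
def posterior_odds(prior_odds, likelihood_ratio):
    return prior_odds * likelihood_ratio

# A mild appeal to authority: the expert's endorsement is somewhat more
# likely if the claim is true than if it is false, so the ratio is near 1.
prior = 0.5              # prior odds of 1:2 that the claim is true
lr_weak_authority = 1.5  # weak but non-zero evidential force

print(posterior_odds(prior, lr_weak_authority))  # 0.75: nudged up, not settled
```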
We cherry-pick a discourse to spot the fallacies. We realised this a couple of years ago when we had to teach the informal fallacies to journalism master's students: we would pick a text we disagreed with, and then search for fallacies. Dan and I would often come up with different ones for the same paragraph. They are vague. Then we switched to cognitive biases as possible explanations for some fallacies, but we were still in 'privileging the hypothesis' territory, I would say now, with the benefit of hindsight.
Maybe the word 'heuristic' has already sprung to mind for some of you. I've seen this here and elsewhere on the net: fallacies as heuristics. Argumentation theorists only stumbled on this idea recently (Walton, 2010).
Now here's what this whole intro was for: LessWrong and, before it, Overcoming Bias are sites built on the idea that we can improve our rationality by doing certain things in relation to the now-famous Heuristics & Biases program. The heuristics as defined by Tversky and Kahneman are only marginally useful for assessing the heuristic value of a type of argument that we have traditionally called a fallacy. The heuristic elicitation design is maybe a first step: we can see if we have some form of attribute substitution (we always have, if we take a Bayesian daemon as the benchmark).
We started with the observation that if people generally fall back on some particular activity when they are "lazy", that activity could be a precious hint about human nature. We believe it is far easier to spot a fallacy a) when you are looking for it, and b) you usually look for it when the topic is interesting, complex and grey: theology, law, politics, health and the like. If the fallacies of argumentation are indeed stable and universal behaviours across (at least some) historical time and across cultures, we can see those "fallacies" as rules of thumb that use other, lower-level fast-and-frugal heuristics as solid inference rules in the right ecology. Ecological rationality is a match between the environment and the (boundedly rational) agent's decision mechanisms (G. Gigerenzer, 1999; V. Smith, 2003).
You can't just invent a norm and then compare the behaviour of organisms or artifacts with it. Not even Bayes' rule: the decisions of some organisms will be Bayesian only in their natural environment (E.T. Jaynes observed this). That is why we need a computational theory of people even when we study arguments: there is no psychology that isn't evolutionary psychology. We need to know the function, but the word "fallacy" carries a valence, so people have traditionally asked why we are so narrow or stupid, or, more recently, when fallacies are irrational and when they are not. (No, we don't want to restart the 1996 polemic between Gigerenzer and Tversky & Kahneman!)
Well, that is what we think, anyway. And if you spot a big flaw, please point it out to us before we send our paper to a journal.
Here’s the draft of our paper:
https://www.academia.edu/6271737/The_Ecological_Rationality_of_Argumentation_Fallacies
Thanks
Discovering Your Secretly Secret Sensory Experiences
In his recent excellent blog post, Yvain discusses a few "universal" (commonplace) human experiences that many people never notice they don't have, such as the ability to smell, see some colors, see mental pictures, and feel emotions. I was reminded of a longstanding argument I had with a friend. She always insisted that she would rather be blind than deaf. I could not understand how that was possible, since the visual world is so much richer and more interesting. We later found out that I can see an order of magnitude more colors than she can, but have a subpar ability to distinguish tones. And I had thought she was just being a contrarian for its own sake. I thought the experience of that many colors was universal, and had rarely seen evidence that challenged that belief.
More seriously, a good friend of mine did not realize, for the first three decades of his life, that he suffered from a serious genetic disorder that caused him extreme body pain and terrible headaches whenever he became tired or dehydrated. He thought everyone felt that way, but considered it whiny to talk about it. He almost never mentioned it, and never realized what it was, until <bragging> I noticed how tense his expressions became when he got tired, asked him about it, then put it together with some other unusual physical experiences I knew he had </bragging>.
This got me thinking about when it is likely we might be having unusual sensory experiences and not realize for long periods of time. I am calling these "secretly secret experiences." Here are the factors that might increase the likelihood of having a secretly secret experience.
1) When they are rarely consciously mentally examined: experiences such as the ability to distinguish subtle differences in shades of color are tested occasionally (when choosing paint or ripe fruit), but few people besides interior decorators think about how good their shade-distinguishing skills are. Other examples include the feeling of being in different moods or mental states, breathing, and sensing commonly-sensed things (the look of roads, the sound of voices, etc.). Most of the examples from the blog post fall under this category. People might not notice that they over-experience, under-experience, or differently experience such feelings relative to others.
2) When they are rarely discussed in everyday life: if my experience of pooping feels very different from other people's, I may never know, because I don't discuss the experience in detail with anyone. If people talked about their experiences, I would probably notice if mine didn't match up, but that's unlikely to happen. The same might apply to other experiences that are taboo to discuss, such as masturbation, sex (in some cultures), anything considered gross or unhygienic, or socially awkward experiences (in some cultures).
3) When there is social pressure to experience something a certain way: it may be socially dangerous to admit you don't find members of the opposite sex attractive, or you didn't enjoy The Godfather or whatever. Depending on your sensitivity to social pressure (see 4) and the strength of the pressure, this could lead to unawareness about true rare preferences.
4) Sensitivity to external influences: Some people pick up on social cues more easily than others. Some notice social norms more readily, and some seem more or less willing to violate some norms (partly because of how well they perceive them, plus some other factors). I can imagine that a deeply autistic person might be influenced far less by mainstream descriptions of different experiences. Exceptionally socially attuned people might (perhaps) take social influences to heart and be less able to distinguish their own from those they know about.
5) When skills are redundant or you have good substitutes: For example, if we live in a world with only fish and mammals, and all mammals are brown and warm and all fish are cold and silver, you might never notice that you can't feel temperature because you are still a perfectly good mammal and fish distinguisher. In the real world, it's harder to find clear examples, but I can think of substitutes for color-sightedness such as shade and textural cues that increase the likelihood of a color-blind person not realizing zir blindness. Similarly, empathy and social adeptness may increase someone's ability both to mask that ze is having a different experience than others, and the likelihood that ze will believe all others are good at hiding a different experience than the one they portray openly.
What else can people think of?
Special thanks to JT for his feedback and for letting me share his story.
A medium for more rational discussion
It would be cool if online discussions allowed you to 1) declare your claims, 2) declare how your claims depend on each other (i.e. make a dependency tree), 3) discuss the claims, and 4) update the status of each claim by saying whether or not you agree with it, using something like the text shorthand for uncertainty to say how confident you are in your agreement/disagreement.
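As a sketch of what the underlying data model might look like (a hypothetical design of mine, not an existing tool), features 1, 2 and 4 could be as simple as:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    depends_on: list = field(default_factory=list)  # feature 2: the dependency tree
    votes: list = field(default_factory=list)       # feature 4: signed confidences in [-1, 1]

    def consensus(self):
        # Mean vote: positive means agreement, negative means disagreement.
        return sum(self.votes) / len(self.votes) if self.votes else 0.0

premise = Claim("Online debates are hard to follow.")
conclusion = Claim("A claim-mapping medium would help.", depends_on=[premise])
premise.votes += [0.9, 0.7, 0.8]   # e.g. "90% confident I agree"
conclusion.votes += [0.6, -0.3]
print(premise.consensus(), conclusion.consensus())
```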
I think that mapping out these things visually would allow for more productive conversation. And it would also allow newcomers to the discussion to quickly and easily get up to date, rather than having to sift through tons of comments. On this note, there should also probably be something like an answer wiki for each claim to summarize the arguments and say what the consensus is.
I get the feeling that it should be flexible though. That probably means that it should be accompanied by the normal commenting system. Sometimes you don't actually know what your claims are, but need to "talk it out" in order to figure out what they are. Sometimes you don't really know how they depend on each other. And sometimes you have something tangential to say (on that note, there should probably be an area for tangential comments, or at least a way to flag them as tangential).
As far as who would be interested in this: obviously the Less Wrong community would be, and I think there are definitely some other online communities that would too (Hacker News, some subreddits...).
Also, this may be speculative, but I would hope that it would develop a reputation as the most effective way to have a productive discussion. So much so that people would start saying, "go outline your argument on [name]". Maybe there'd even be pressure for politicians to do this. If so, then I think this could put pressure on society to be more rational.
What do you guys think?
EDIT: If anyone is actually interested in building this, you definitely have my permission (don't worry about "stealing the idea"). I want to build it, but 1) I don't think I'm a good enough programmer yet, and 2) I'm busy with my startup.
EDIT: Another idea: if you think that a statement commits an established fallacy, then you should be able to flag it (like this). And if enough other people agree, then the statement is underlined or highlighted or something. The advantage to this is that it makes the discussion less "bulky". A simple version of this would be flagging things as less than DH6. But there are obviously a bunch of other things worth flagging that Eliezer has talked about in the sequences that are pretty non-controversial.
EDIT: Here is a rough mockup of how it would look. Notes:
- The claims should show how many votes of agreement/disagreement they got. Probably using text shorthand for uncertainty.
- The claims should be colored green if there is a lot of agreement, and red if there is a lot of disagreement.
- See edit above. Commenting in the discussion should be like this. And you should be able to flag statements as fallacious in a similar way. If there is enough agreement about the flag, the statement should be underlined in red or something.
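Putting these notes together, the colouring and flagging rules might look something like this (all thresholds are guesses of mine):

```python
def claim_colour(votes, threshold=0.5):
    # Green on broad agreement, red on broad disagreement (see notes above).
    mean = sum(votes) / len(votes) if votes else 0.0
    if mean > threshold:
        return "green"
    if mean < -threshold:
        return "red"
    return "neutral"

def flagged_as_fallacy(flag_votes, min_votes=5, min_fraction=0.6):
    # Underline a statement once enough readers agree it commits a fallacy.
    return len(flag_votes) >= min_votes and \
        sum(flag_votes) / len(flag_votes) >= min_fraction

print(claim_colour([0.9, 0.8, 0.6]))        # green
print(flagged_as_fallacy([1, 1, 1, 0, 1]))  # True: 5 votes, 80% agreement
```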
True numbers and fake numbers
In physical science the first essential step in the direction of learning any subject is to find principles of numerical reckoning and practicable methods for measuring some quality connected with it. I often say that when you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind; it may be the beginning of knowledge, but you have scarcely in your thoughts advanced to the state of Science, whatever the matter may be.
-- Lord Kelvin
If you believe that science is about describing things mathematically, you can fall into a strange sort of trap where you come up with some numerical quantity, discover interesting facts about it, use it to analyze real-world situations - but never actually get around to measuring it. I call such things "theoretical quantities" or "fake numbers", as opposed to "measurable quantities" or "true numbers".
An example of a "true number" is mass. We can measure the mass of a person or a car, and we use these values in engineering all the time. An example of a "fake number" is utility. I've never seen a concrete utility value used anywhere, though I always hear about nice mathematical laws that it must obey.
The difference is not just about units of measurement. In economics you can see fake numbers happily coexisting with true numbers using the same units. Price is a true number measured in dollars, and you see concrete values and graphs everywhere. "Consumer surplus" is also measured in dollars, but good luck calculating the consumer surplus of a single cheeseburger, never mind drawing a graph of aggregate consumer surplus for the US! If you ask five economists to calculate it, you'll get five different indirect estimates, and it's not obvious that there's a true number to be measured in the first place.
Another example of a fake number is "complexity" or "maintainability" in software engineering. Sure, people have proposed different methods of measuring it. But if they were measuring a true number, I'd expect them to agree to the 3rd decimal place, which they don't :-) The existence of multiple measuring methods that give the same result is one of the differences between a true number and a fake one. Another sign is what happens when two of these methods disagree: do people say that they're both equally valid, or do they insist that one must be wrong and try to find the error?
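To see that disagreement concretely, here are two toy "complexity" metrics of my own (not established measures) that rank the same two snippets in opposite orders:

```python
def loc_complexity(source):
    # Count non-blank lines of code.
    return sum(1 for line in source.splitlines() if line.strip())

def branch_complexity(source):
    # Count branching keywords: a crude cyclomatic-style proxy.
    return sum(source.count(kw) for kw in ("if ", "for ", "while ", "elif "))

snippet_a = "x = 1\ny = 2\nz = 3\n"
snippet_b = "if a:\n    b()\n"
print(loc_complexity(snippet_a), branch_complexity(snippet_a))  # 3 0
print(loc_complexity(snippet_b), branch_complexity(snippet_b))  # 2 1
```

The first metric says snippet A is more complex; the second says snippet B is. As long as people shrug and call both equally valid, "complexity" stays a fake number.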
It's certainly possible to improve something without measuring it. You can learn to play the piano pretty well without quantifying your progress. But we should probably try harder to find measurable components of "intelligence", "rationality", "productivity" and other such things, because we'd be better at improving them if we had true numbers in our hands.
Halloween thread - rationalists' horrors
This is a kind of "X files" thread.
Post experiences which spooked you, which made you doubt reality, mathematical or physical laws, your sanity, memory or perception. The more improbable the better, but no second-hand legends please; share only what you personally experienced. If the event was later explained rationally, please use rot13 to avoid spoilers.
Meetup : West LA—What, Exactly, Is a Person?
Discussion article for the meetup : West LA—What, Exactly, Is a Person?
How to get in: Go to the Westside Tavern in the upstairs Wine Bar (all ages welcome), located inside the Westside Pavillion on the second floor, right by the movie theaters. The entrance sign says "Lounge".
Parking is free for 3 hours, or for longer if you are a caitiff.
Discussion:
If we only care about desires we act upon, then we only respect preferences with optimization power, and thus might actually makes right.
I no longer find the Hansonian construal of "care" tenable. Caring is feeling strong emotions, not magically becoming an agent.
Acknowledging that personhood is a nonbinary concept inspires questions. How do we measure how persony someone is? Are octopodes people? Birds are obviously people, but what about the nonhuman great apes? Is there a relationship between agency and qualia, and if so, what is it? Do persons with more intense qualia care more about things, or do people with more agency care more about things? Is the idea of being more-of-a-person coherent at all? Should we allow more intense experiences to count more on the utilitarian calculus, even though this can in principle be hijacked? How should we treat instrumental utility monsters? When does a baby become a person? Can you lose personhood through an act of will? Why do we think personhood is important? Why do we think importance is important? Who the Hell do you think I am!? We'll discuss all this and more, this Wednesday, at the Westside Tavern! Be there or be square!
Recommended reading:
- Personhood by Zack M Davis
- Reasons and Persons
- Reasons for Persons
- Hacking the CEV for Fun and Profit
- Vladimir_M's comments on Yvain's Offense vs. Harm Minimization post
- The Truth Points to Itself, Part 1
- Tengen Toppa Gurren Lagann
Prior exposure to Less Wrong is recommended but not required. That which can be destroyed by your presence at this meetup should be!
There may or may not be a whiteboard.
True Optimisation
Hello Less Wrong community! This is my first post here, so I know that my brain has (obviously) not been optimised to its fullest, but I've decided to give posting a try.
Recently, someone very close to me passed away, leading to the inevitable inner dilemma about death. I don't know how many of you are fans of HPMOR, but the way that Harry's dark side feels about death? That's pretty much me around death, dying, etc. However, I've decided to push that to the side for the time being, because it is not a useful or efficient way to think.
I was raised by a religious family, but from the age of about 11 I stopped believing in deities and religious services. However, I've always clung to the idea of an afterlife for people, mainly because my brain seems incapable of handling the idea of ceasing to exist. We know, as a scientific community, that thoughts are electrical impulses, so is there any way of storing them outside of brain matter? Can they exist freely outside of brain matter, or could they be stored in a computer chip or AI?
The conflict lies here: is immortality or mortality rational?
Every fibre of my being tells me that death is irrational and wrong. It is irrational for humanity not to try to prevent death. It is irrational for people not to try to bring back those who have died. Because of this, we have lost some of the greatest minds, scientific and artistic, that will probably ever exist. Although the world's supply of talented and intelligent people does not appear to be finite, I find it hard to live in a world where so much knowledge is being lost every day.
But on the other hand, how would we feed all those people? What if the world's resources run out? As a transhumanist, I believe that we can use science to prevent things like death, but nature wasn't designed to support a population like that.
How do we truly optimise the world: no death and without destruction of the planet?
Fractals and time management
As you might know, fractal structures appear in a variety of natural situations and have found many technical applications (see Wikipedia for more information and examples). In this short article I want to ask whether it makes sense to structure various activities according to a 'fractal timetable'.
Cleaning rota
When you have to clean a flat or a house, you have probably seen a list like this before. There are some tasks that one needs to do every day, while others come along only once a week or once a month. Aside from those main cleaning tasks, there are many small things you do several times a day, like throwing something into the trash bin or washing your hands.
If you analyse the structure of this behaviour, you will find that it looks similar to a one-dimensional fractal (compare with the various layers in the construction of the Cantor set, for example).
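As a minimal sketch of that structure (my own construction, with made-up tasks and periods), each layer of the rota recurs at a geometrically longer period, like the levels of the Cantor set construction:

```python
def fractal_rota(layers, days):
    # layers: list of (task, period_in_days); returns day -> tasks due that day.
    schedule = {}
    for task, period in layers:
        for day in range(0, days, period):
            schedule.setdefault(day, []).append(task)
    return schedule

rota = fractal_rota([("wipe surfaces", 1), ("vacuum", 7), ("clean windows", 30)], days=31)
print(rota[0])  # all three layers coincide
print(rota[7])  # only the daily and weekly tasks
```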
School Timetables
Most schools that I am familiar with use periodic arrangements for teaching. You have a weekly timetable, and at the same time every week you have the same subject, for a whole year. This makes sense from the point of view of teacher and room allocation, but is it the best structure for optimal learning?
My own experience suggests that the quality of my memory strongly depends on my understanding. If I take the time to understand everything, I will remember those things for years and can even reconstruct lost knowledge by using intuition and logical deduction. If I learned something poorly, on the other hand, I sometimes forget it completely in a matter of hours.
Understanding is usually gained by a deep involvement with the topic for a longer period of time. I also find it much easier to learn something if I can focus on it for a certain period of time and examine the object/concept in detail without being disturbed by other matters.
What if the best way of teaching school mathematics (for example) were a 3-week-long intensive workshop once a year, with ten further one-day sessions allocated once a month, and small homework problems evenly distributed throughout the year? The same could be done with the other subjects to fill the full school year.
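As a sketch of that proposal (all dates and counts hypothetical), the year plan could be generated like so:

```python
# One 3-week intensive workshop, ten monthly one-day sessions,
# and weekly homework for the rest of the year.
workshop = [(day, "intensive workshop") for day in range(1, 22)]
monthly = [(30 * m, "one-day session") for m in range(1, 11)]
homework = [(day, "homework") for day in range(22, 366, 7)]

year_plan = sorted(workshop + monthly + homework)
print(year_plan[:3])   # the workshop opens the year
print(len(year_plan))  # 81 scheduled activities in total
```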
Other Areas
Our motivation, health and available time fluctuate widely, but most jobs require a periodic commitment. This might be OK for mechanical jobs, but for professions with a substantial amount of creativity and cognitive demand one certainly can do better by playing around with the time/work distribution. (Here is an interesting TED talk about a 'year off'.)
Similar problems/opportunities arise in fitness, personal development and relationships.
Questions
I don't know whether there are any existing studies on this topic; a superficial Google search didn't reveal anything interesting. I would also like to know whether you have had similar or contradictory experiences. Maybe I am an exception when it comes to this type of learning.
Do you think that adding the mathematical model of a 'fractal' makes this approach more intuitive/useful, or does 'flexible time management' capture enough of the structure of the problem?
Thanks!
Preparing for a Rational Financial Planning Sequence
What follows is a rough outline for a possible rational financial planning sequence that was inspired by some other recent discussion here. I'm not sure how useful this would be to how many people. I know there are some LessWrongers who would enjoy and learn from this; but I don't know if there are 5, 50, or 500. If you'd like to read it, let me know. If 500 people tell me they can't wait for this, I'll probably write it. If 5 people say maybe they'll glance at it, then probably not.
Part I: Preliminaries:
Financial Rationality
Multiplying uncertainties
The inside and outside views
Interpolation is reliable; extrapolation isn't
Part II: This is important:
- Why to save for retirement
- Dying alone in a hole: the story of Jane.
- Why compound interest is cool (see the sketch after this list)
- 65-year old you will not want to live like a grad student
- 65-year old you will not want to work like 35-year old you
- Existential risk does not defeat personal risk
- Existential success does not defeat personal risk
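As a taste of the "compound interest" item above (toy numbers of my own, not a forecast or financial advice), here is the standard future-value-of-an-annuity formula:

```python
def future_value(monthly, annual_rate, years):
    # Future value of a fixed monthly contribution at a constant annual return.
    r = annual_rate / 12
    n = years * 12
    return monthly * ((1 + r) ** n - 1) / r

# The same $500/month at a 7% annual return, started ten years earlier:
print(round(future_value(500, 0.07, 30)))  # ~610,000 (start at 35, retire at 65)
print(round(future_value(500, 0.07, 40)))  # ~1,310,000 (start at 25)
```

Those ten extra years more than double the ending balance, which is the whole case for starting early.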
Part III: Analyzing Your Life
(This section needs a lot more fleshing out, and thought)
Personal satisfaction and happiness: do what you love, and adjust your financial expectations accordingly
How much do you need to retire?
When do you want to retire?
How much do you need to live on today?
Big expenses you need to plan for
Increasing Income
College: the best financial decision you'll ever make, or the worst?
Choosing a career: what is your comparative advantage?
Switching careers
Career Decisions
equity vs salary; steady singles or home run hitter
employee or owner
Career Tactics
Salary negotiation
promotion
when to change jobs
Cutting Expenses
Save more tomorrow
Inheritance
Part IV: The Practical How-to Advice:
Emergency Cash
Credit cards: the good, the bad, and the criminal
Banking
Where to save (tax advantaged accounts)
The importance of fees
401K matching: the highest return you'll ever see
Social Security
Pensions
What to invest in (index funds)
diversification
stock vs bond funds
domestic vs. international
target retirement funds
Comic books are not a retirement plan (but a comic book store might be)
Avoiding hucksters and doomsayers
Investment Advisors
What if the shit hits the fan?
Can smart, rational investors beat the market?
Good debt; bad debt
Cars and other expensive purchases
Cutting out the middleman: making money on Craigslist, Amazon, eBay and Airbnb
Buying a house
Renting vs. owning a house; rental parity
Student loans
Health Insurance
Life Insurance
Auto Insurance
Your Spouse: the most important financial decision you'll ever make
Diamonds are forever, but most women would rather have a house.
One or two incomes?
Live longer, be happier, get married
Children
Charity
If there are any topics you'd like to see covered that aren't here (wills? lawyers? the financial press?), let me know. Similarly, if you think there's a section that doesn't belong and should be dropped, let me know that too.
One caveat: while some sections are fairly generic, others will be very U.S. centric. The most specific advice will not be applicable to non-U.S. citizens and residents. That does limit the audience, but there's not too much I can do about that. Perhaps if it's successful I can seek out co-authors to do UK, Canadian, or other country editions.
A question for people who are interested in financial planning material: If this were available as a complete book (electronic and paper) today, how likely do you think it is that you would buy this book instead of one of the other available books on the subject? What would you pay for such a book? If this were available as both a book and a sequence on LessWrong, how might that change your decision?
For now, this discussion thread is just a minimum viable product (MVP) to find out if a sequence is worth the time it would take me to complete. If the MVP pans out, I'll write and post one or two of these chapters to further gauge interest. If the MVP doesn't look promising, I'll drop it and move on to my next book idea.