A current article in Science reports on this study of how good people are at predicting what their future selves will be like. Not very good, apparently. Daniel Gilbert, a psychologist at Harvard, and colleagues conducted several online experiments in which 19,000 people were asked about such things as personality traits and preferences in music, answering about the present, about themselves 10 years earlier, and about what they expected 10 years hence. More precisely, this not being a longitudinal study, people of any age X predicted less difference from their X+10 selves than people of age X+10 recollected of themselves at age X. The effect did not go away with increasing age: 58-year-olds still expected less change in the next 10 years than 68-year-olds reported in the last ten.
Gilbert and colleagues call this effect "the end of history illusion," because it suggests that people believe, consciously or not, that the present marks the point at which they've finally stopped changing.
"What these data suggest, and what scads of other data from our lab and others suggest, is that people really aren't very good at knowing who they're going to be and hence what they're going to want a decade from now," Gilbert says.
Someone suggests an alternative explanation:
Another possibility is that people "might well anticipate substantial change, yet not know how they would change, and thus, just predict the status quo"
An actionable moral:
"The single best way to make predictions about what you're going to want in the future isn't to imagine yourself in the future, … it's to look at other people who are in the very future you're imagining," [Gilbert] says.
Half-closing my eyes and looking at the recent topic of morality from a distance, I am struck by the following trend.
In mathematics, there are no substantial controversies. (I am speaking of the present era in mathematics, since around the early 20th century; before then, it had not yet been clearly worked out what was a proof and what was not.) There are few in physics, chemistry, molecular biology, and astronomy: some, but not the bulk of any of these subjects. In biology more generally, and in history, psychology, and sociology, controversy is a larger and larger part of the practice, in proportion to the subject's distance from the possibility of reasonably conclusive experiments. Finally, politics and morality consist of nothing but controversy, and always have done.
Curiously, participants in discussions of all of these subjects seem equally confident, regardless of the field's distance from experimental acquisition of reliable knowledge. What correlates with distance from objective knowledge is not uncertainty, but controversy. Across these fields (not necessarily within them), opinions are firmly held, independently of how well they can be supported. They are firmly defended and attacked in inverse proportion to that support. The less information there is about actual facts, the more scope there is for continuing the fight instead of changing one's mind. (So much for the Aumann agreement of Bayesian rationalists.)
Perhaps mathematicians and hard scientists are not more rational than others, but merely work in fields where it is easier to be rational. When they turn into crackpots outside their discipline, perhaps they were that irrational all along, but have now wandered into an area without safety rails.
"A Whole-Cell Computational Model Predicts Phenotype from Genotype" by Jonathan Karr et al.
This paper appeared a few days ago in Cell, and describes a computational simulation of the bacterium Mycoplasma genitalium, conducted at this lab. The paper is behind a paywall, but is blogged about here. The simulation software is freely available from the project web site.
From the abstract: "Here, we present a ‘‘whole-cell’’ model of the bacterium Mycoplasma genitalium, a human urogenital parasite whose genome contains 525 genes. Our model attempts to: (1) describe the life cycle of a single cell from the level of individual molecules and their interactions; (2) account for the specific function of every annotated gene product; and (3) accurately predict a wide range of observable cellular behaviors."
According to an editorial commentary in the same issue, this is the first simulation of a complete free-living microbe.
It appears that standard lab rats and mice are all morbidly obese. Using them as model organisms may give misleading results that fail to transfer to humans, or even to healthy rats and mice.
I don't know how well this is going to work, but I mention it here because it's actually going to be done in a few weeks' time at a day-long meeting of the research group that I work with. (Not my idea. I don't know which of us thought it up.)
Keyword game: explaining a scientific term. Everyone puts a keyword used in their project (for example, "Selective Sweep") into a hat. For each keyword in turn, get someone who does not understand the keyword to explain what they think it might mean. They can then be enlightened by the people who know (of whom there should be at least one!).
This is to be done in groups of four, and afterwards, the groups reassemble and each group presents its newly understood keyword meanings to the main group.
A recent article in PLoS Computational Biology suggests that memory is encoded in the microtubules. "Signaling and encoding in MTs and other cytoskeletal structures offer rapid, robust solid-state information processing which may reflect a general code for MT-based memory and information processing within neurons and other eukaryotic cells."
They argue that synaptic connections are transient compared with the lifetime of memories, and therefore memories cannot be stored in them, but must reside in some more persistent structure. The structure they suggest is the phosphorylation state of sites on microtubule lattices within neurons. And that's about as much of the technical detail as I feel able to summarise. It's not all speculation; they report technical work on the structures of these cellular components. Total memory capacity would be somewhere upwards of 10^20 bits (or in more everyday units, over 10 million terabytes), depending on the encoding, of which they suggest several schemes.
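The "10 million terabytes" figure is just a unit conversion of the paper's 10^20 bits, which can be checked with a quick back-of-envelope calculation (the 10^20 figure itself is taken from the article; everything else here is arithmetic):

```python
# Convert the paper's capacity estimate of 10^20 bits into terabytes.
bits = 1e20
bytes_total = bits / 8          # 8 bits per byte
terabytes = bytes_total / 1e12  # 1 TB = 10^12 bytes (decimal convention)
print(f"{terabytes:.3g} TB")    # 1.25e+07 TB, i.e. roughly 10 million terabytes
```

So "upwards of 10^20 bits" works out to about 12.5 million TB; the figure quoted in everyday units is rounded down.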
Journalistic writeup here.
Note that Stuart Hameroff, one of the authors, is known for his proposals for microtubules as the mechanism of consciousness through quantum effects (and with Penrose, quantum gravitational effects). The present paper, however, is solely about memory and does not touch on quantum coherence or consciousness.
Thus the subtitle of this blog posting at PLoS, referencing this article on "Cigarette smoking: an underused tool in high-performance endurance training". The point being that you can write a review article to argue anything you want, with sufficient cherry-picking and chains of links.
If you are doing actual experiments and making observations or proving theorems, then to a large extent -- larger in some sciences than in others -- you are constrained by the brute facts. But when writing secondary literature, especially in areas where data is generally fuzzier, it is easy, whether deliberately or not, to write to a bottom line, including findings you like and excluding those you don't.
Something to bear in mind when reading or writing any review article.
I have just received a survey questionnaire regarding future directions in EU (European Union) research funding, and thought it would be interesting to see how LessWrong would answer the main question:
Imagine that EU funding is available for one ambitious, visionary project extending beyond 2020.
- What kind of research challenges should such a project address in your area?
- What would be the most urgent research tasks?
Inspired by PuyaSharif's conundrum, I find myself continually faced with the opposite problem, which is identical to the original except in the bold-faced sentences:
You are given the following information:
Your task is to hide a coin in your house (or any familiar finite environment).
After you've hidden the coin, your memory will be erased and restored to a state just before you received this information.
Then you will be told about the task (i.e that you have hidden a coin), and asked to try to find the coin.
If you find it you win; the faster you find it, the better.
Where do you leave the coin so that when you have no memory of where you put it, you can lay your hands on it at once?
For just one coin, you might think up some suitable Schelling point, but now multiply the task a thousandfold, for all of your possessions. (I am not a minimalist; of books alone I have 3500.) How do you arrange all your stuff, all your life, in such a way that everything is exactly where you would first think of looking for it?
Article in current Scientific American (first para and bullet points, rest is paywalled).
Podcast by the author (free).
The author, Douglas Fox, argues that there may be physical limits to how intelligent a brain made of neurons can become, limits that may not be very distant from where we are now.
He makes evolutionary arguments at a couple of points, suggesting that he is talking about how smart an organism could have evolved, rather than how smart we might make ourselves; he certainly isn't talking about how smart a machine we might create out of different materials.
From the podcast (I don't have access to the article):
Four routes to higher intelligence, which he argues won't get us very far:
- Increase the speed of axons. But that means making them fatter, which drives the neurons further apart, neutralising the gain.
- Increase brain size. That needs more energy, and before long you get something unsustainable. A larger brain also means longer pathways, which slow signals down. The neurons make more connections, which makes them bigger, so the number of neurons scales more slowly than the volume of the brain. And anyway, whales and elephants have bigger brains than us but don't seem to be more intelligent, and cows have brains a hundred times the size of a mouse brain but aren't a hundred times smarter. So brain size doesn't seem to matter; at best its relationship with intelligence is unclear.
- Pack more neurons into the existing volume by making them smaller. You run into signal-to-noise problems. The ion channels involved in generating action potentials are a certain size, so a smaller neuron must have fewer of them, hence more random variation. The result is neurons spontaneously firing.
- Offload intelligence support. Books and the internet will remember things for you and help you tap the collective intelligence of your social network. Compare social insects doing things that they couldn't do individually. But by alleviating the necessity of intelligence this may even have reduced the evolutionary pressure to get smarter.
He's described simply as an "award-winning author", but I don't know if he has any scientific background, and there are too many people of the same name to Google him.