As you know, the Singularity Summit 2009 is on the weekend of Oct 3 - Oct 4. What is it, you ask? I'll start from the beginning...
An interesting collection of molecules occupied a certain tide pool 3.5 to 4.5 billion years ago, interesting because the molecule collection built copies of itself out of surrounding molecules, and the resulting molecule collections also replicated while accumulating beneficial mutations. Those molecule collections satisfied a high-level functional criterion called "genetic fitness", and it happened by pure chance.
If you think about all the possible arrangements of atoms that can occupy a 1-millimeter by 1-millimeter by 1-millimeter cube of space, most of them are going to suck at causing the future universe to contain copies of themselves. Genetic fitness is a vanishingly small target in configuration-space.
And if you studied the universe 5 billion years ago, you would not see a process capable of hitting such a small target. No physical process could create low-entropy collections of atoms satisfying high-level functional criteria. The second law of thermodynamics thus ensured that mice, as well as mousetraps, were physically impossible.
Take a second to go upvote You Are A Brain if you haven't already...
Liron's post reminded me of something that I meant to say a while ago. In the course of giving literally hundreds of job interviews to extremely high-powered technical undergraduates over the last five years, one thing has become painfully clear to me: even very smart and accomplished and mathy people know nothing about rationality.
For instance, reasoning by expected utility, which you probably consider too basic to mention, is something they absolutely fall flat on. Ask them why they choose as they do in simple gambles involving risk, and they stutter and mutter and fail. Even the Econ majors. Even--perhaps especially--the Putnam winners.
Of those who have learned about heuristics and biases, a nontrivial minority have gotten confused to the point that they offer Kahneman and Tversky's research as justifying their exhibition of a bias!
So foundational explanatory work like Liron's is really pivotal. As I've touched on before, I think there's a huge amount to be done in organizing this material and making it approachable for people who don't have the basics. Who's going to write the Intuitive Explanation of Utility Theory?
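As a taste of what such an explanation might cover, here is a minimal sketch of expected-utility reasoning over simple gambles. The numbers, the log utility function, and the gambles themselves are all illustrative choices of mine, not anything from the post:

```python
import math

def expected_utility(gamble, utility):
    """Expected utility of a gamble given as [(probability, outcome), ...]."""
    return sum(p * utility(x) for p, x in gamble)

# A risk-averse (concave) utility function over total wealth.
def u(wealth):
    return math.log(wealth)

base = 1000  # current wealth, chosen so the log is well-defined

# Gamble A: a sure $100. Gamble B: 50% chance of $300, 50% chance of losing $80.
gamble_a = [(1.0, base + 100)]
gamble_b = [(0.5, base + 300), (0.5, base - 80)]

eu_a = expected_utility(gamble_a, u)
eu_b = expected_utility(gamble_b, u)
choice = "A" if eu_a > eu_b else "B"
# B has the higher expected dollar value ($110 vs $100), yet the
# concave utility makes the sure thing A the better choice.
print(choice)  # → A
```

The point the interviewees stumble on is exactly this gap: maximizing expected dollars and maximizing expected utility come apart as soon as the utility function is curved.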
Update: Tweaked language per suggestion, added Kahneman and Tversky link.
Say you're taking your car to an auto mechanic for repairs. You've been told he's the best mechanic in town. The mechanic rolls up the steel garage door before driving the car into the garage, and you look inside and notice something funny. There are no tools. The garage is bare - just an empty concrete space with four bay doors and three other cars.
You point this out to the mechanic. He shrugs it off, saying, "This is how I've always worked. I'm just that good. You were lucky I had an opening; I'm usually booked." And you believe him, having seen the parking lot full of cars waiting to be repaired.
You take your car to another mechanic in the same town. He, too, has no tools in his garage. You visit all the mechanics in town, and find a few that have some wrenches, and others with a jack or an air compressor, but no one with a full set of tools.
You notice the streets are nearly empty besides your car. Most of the cars in town seem to be in for repairs. You talk to the townsfolk, and they tell you how they take their cars from one shop to another, hoping to someday find the mechanic who is brilliant and gifted enough to fix their car.
I sometimes tell people how I believe that governments should not be documents, but semi-autonomous computer programs. I have a story that I'm not going to tell now, about incorporating inequalities into laws, then incorporating functions into them, then feedback loops, then statistical measures, then learning mechanisms, on up to the point where voters and/or legislatures set only the values that control the system, and the system produces the low-level laws and policy decisions (in a way that balances exploration and exploitation). (Robin's futarchy in which you "vote on values, bet on beliefs" describes a similar, though less-automated system of government.)
And one reaction - actually, one of the most intelligent reactions - is, "But then... legislators would have to understand something about math." As if that were a bug, and not a feature.
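For concreteness, the "balance exploration and exploitation" step could be sketched with something as simple as an epsilon-greedy bandit over candidate policy settings. Everything below — the candidate rates, the made-up "welfare score", the constants — is a hypothetical toy of mine, not part of any actual proposal:

```python
import random

random.seed(0)

# Voters/legislators set only the high-level value: maximize a measured
# "welfare score" (illustrative). The system searches low-level settings.
POLICY_OPTIONS = [0.1, 0.2, 0.3, 0.4]  # e.g. candidate tax rates (made up)

def observed_welfare(rate):
    """Stand-in for noisy statistical measurement of outcomes."""
    return 1.0 - (rate - 0.3) ** 2 + random.gauss(0, 0.01)

# Epsilon-greedy: mostly exploit the best-known policy, sometimes explore.
estimates = {r: 0.0 for r in POLICY_OPTIONS}
counts = {r: 0 for r in POLICY_OPTIONS}
EPSILON = 0.1

for step in range(2000):
    if random.random() < EPSILON:
        rate = random.choice(POLICY_OPTIONS)           # explore
    else:
        rate = max(POLICY_OPTIONS, key=estimates.get)  # exploit
    reward = observed_welfare(rate)
    counts[rate] += 1
    estimates[rate] += (reward - estimates[rate]) / counts[rate]  # running mean

best = max(POLICY_OPTIONS, key=estimates.get)
print(best)  # with this toy welfare function, the system settles near 0.3
```

The legislature's only job in this sketch is choosing what `observed_welfare` measures; the loop, not the legislators, picks the low-level setting.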
(This is meant as an entirely rewritten version of the original post. It is still long, but hopefully clearer.)
Theism is often bashed. Part of that bashing is gratuitous and undeserved. Some people therefore feel compelled to defend theism. Their defence of theism goes further than just setting the record straight, though. It attempts to show how theism can be a good thing, or right. That is probably going too far.
I would argue several points. And for that I will be using the most idealistic vision of religion I can conjure, keeping in mind that real world examples may not be as utopian. My intended conclusion is that fairness and tolerance are a necessary and humane means to the end of helping people, which cannot, however, be used to justify as right something that is ultimately wrong.
Theism is indeed a good thing in the short and mid term, both for individuals and society, as it holds certain benefits: it helps people stick together in close-knit communities, helps them live a more virtuous life by giving themselves incentives to do so, and helps them feel better when life feels unbearable or meaningless.
Another point is that theism also possesses deep similarities with science, and uses rational arguments and induction as optimally as its premises allow. Those premises, what we could call its priors, are, in Christianity for instance, to be found in the Bible.
Finally, I also wanted to draw on further similarities between religion and secular groups of people. Atheism, humanism, transhumanism, even rationalism as we know it on LW. These similarities lie in the objectives which any of those groups honestly strives to attain. Those goals are, for instance, truth, the welfare of human beings, and their betterment.
Within the world view of each of those groups, each is indeed doing its best to achieve those ends. One of Catholicism's guiding beacons, used to direct people's life paths, can roughly be stated as "what action should I take that will make me more able to love others, and myself?" This involves understanding and following the word of God, as love and morality are understood to emanate from that source.
And so the Bible is supposed to hold those absolute truths, not in a straightforward, explained way, but rather in the same way that the observable universe is supposed to hold absolute truth for secular science. And just as it is possible to misconstrue observations and build flawed theories from experimental data in the scientific model, so is it possible for a Christian to misunderstand the data presented in the Bible. Rational edifices of thought have therefore been built to derive humanly understandable, internally cross-checked, usable-on-a-daily-basis truth from the Bible.
That is about as far as the similarities go: shared purity of purpose, intellectual honesty, and the attempt to fit the real world.
The premise of theism itself is flawed. Theism presupposes the supernatural. Therefore the priors of theism do not correspond to the real state of the universe as we observe it, and this has two main consequences.
The first is that an intellectual edifice based upon flawed premises, no matter how carefully crafted, will still be flawed itself.
The second runs deeper: the premises of theism are themselves in part incompatible with rationality, and hence limit the potential use of rational methods. In other words, some methods of rationality, as well as some particular arguments, are forbidden or unknown to what we could tentatively call religious science.
From that, my first conclusion is that theism is wrong. Epistemically wrong, but also doing itself a disservice, as the goals it has set for itself cannot be achieved through its program. This program will not be able to hit its targets in optimization space, because of that epistemic flaw. Even though theism possesses short- and mid-term advantages, its whole edifice makes it a dead end, which will at the very least slow down humanity's progress towards nobler objectives like truth or betterment, if not render that progress outright impossible past a certain point.
Yet it seems to me that this mistaken edifice isn't totally insane, far from it, at least at its roots. Hence it should be possible to heal it, or at least to help the people who are part of it.
But religion cannot honestly be called right, no matter how deeply that idea is rooted in our culture and collective consciousness. In the long term, theism deprives us of our potential; it builds a virtual, unnecessary cage around us.
To conclude on that, I wanted to point out that religious belief appears to be a human universal, and probably a hard-coded part of human nature. It seems fair to recognize it in ourselves, if we have that tendency. I know I do, for instance, and fairly strongly so. The same goes for belief in the supernatural.
This should be part of a more general mental discipline of admitting to our faults and biases, rather than trying to hide and compensate for them. The only way to dissect and correct them is to first thoroughly observe those faults in our reasoning, even publicly. In a community of rationalists, there should be no question that even the most flawed, irrational of us should be treated as a friend in need of help, if he so desires, and if we have enough resources to provide for his needs. The important thing is to have someone with a willingness to learn and grow past his mistakes. This can indeed be made easier if we are supportive of each other, and unconditionally tolerant.
Yet, at the same time, even for that purpose, we can't yield to falseness. We can and must admit, for instance, that religion has good points, that we may not have a licence to change people against their will, and that if people want to be helped, they should feel at ease explaining all the relevant information about what they perceive to be their problem. We can't go as far as saying that such a flaw, or problem, is in itself alright, though.
What if you could choose which memories and associations to retain and which to discard? Using that capability rationally (whatever that word means to you) would be a significant challenge -- and that challenge has just come one step closer to being a reality.
Dr. Fenton had already devised a clever way to teach animals strong memories for where things are located. He teaches them to move around a small chamber to avoid a mild electric shock to their feet. Once the animals learn, they do not forget. Placed back in the chamber a day later, even a month later, they quickly remember how to avoid the shock and do so.
But when injected — directly into their brain — with a drug called ZIP that interferes with PKMzeta, they are back to square one, almost immediately. “When we first saw this happen, I had grad students throwing their hands up in the air, yelling,” Dr. Fenton said. Well, they needed a lot more than that one study.
They now have it. Dr. Fenton’s lab repeated the experiment, in various ways...
I recently saw this Reuters article on Yahoo News. In typical science reporting fashion, the headline seems to be pure hyperbole - does anyone here know enough to clarify what the groups referenced have actually achieved?
These links represent what I could find:
Homepage of the "Robot Scientist" project: http://www.aber.ac.uk/compsci/Research/bio/robotsci/
Homepage of Hod Lipson: http://www.mae.cornell.edu/lipson/
Hod Lipson's 2007 paper "Automated reverse engineering of nonlinear dynamical systems" (pdf)
People have long noted that individuals diagnosed as schizophrenic usually manifest disturbances of language, communication, and abstract thought. One way to examine that disturbance is to ask patients to interpret various common proverbs, as psychiatrists have done since before the turn of the century. (Interested readers can find a layperson-suitable discussion of this method's utility in the modern day at the following link: AAPL newsletter.)
Originally, patients' responses were evaluated by their correctness. Now they're graded on their degree of abstraction. Responses that interpret the sayings literally or in simplistically concrete terms are generally considered to be signs of a failure to abstract, although illiterate or mentally challenged individuals also tend to respond that way, and individuals encountering a proverb for the first time are less likely to recognize its symbolic meaning. It seems clear that cultural exposure to proverbial forms, to the idiomatic usage of phrases and scenarios, affects how we recognize such methods of communication.
But why was the 'correctness' criterion dropped? Because perfectly normal people, whom no one would consider schizophrenic, often gave interpretations that wildly conflicted with what the interviewer considered to be the correct one. Which interpretations were 'correct' depended heavily on the traditions and cultures that the listeners came from.
Let's consider a classic example of a proverb often given divergent interpretations:
The rolling stone gathers no moss.
People from societies where stability and slowly-developed connections are valued consider this saying to be a warning of the dangers of activity and change. Without staying still, beautiful moss won't grow. People from societies where activity and change are valued, however, consider it to be a prescription for how to avoid decay and degeneration. If you don't keep moving, you'll be covered by moss!
When asked to explain their interpretation, people typically present moss growth as desirable or undesirable, depending on the meaning they are defending. But if you start out by asking people whether moss is something to seek or avoid, there's no clear preference outside of specific contexts. People generally don't have aesthetic preferences either way; overall, people don't care.
So the symbolic meaning of the mossy growth doesn't determine how people interpret the saying; people invest the moss with meaning to justify the judgment they had already reached. This may be an example of what people at this site would call a 'cached thought'. Rather than giving a reason for their judgment, people reply with rationalizations that have nothing to do with why they reached their conclusion. Rather than thinking about why they decided as they did, people bring out a ready smokescreen.
What's the actual logical structure of the saying? Rational analysis sheds a great deal of light on the question. The meaning can be stated in various ways, all equivalent.
Stability is required for the development of certain states. Activity is incompatible with the development of certain states. (Desirable/undesirable) states can be (encouraged/prevented) by (engaging in/avoiding) (necessary precursors/incompatible conditions).
The saying encodes a pattern that expresses a relationship, but the pattern is devoid of evaluation. It's a blank screen upon which people project their pre-existing values and judgments. To truly understand the proverb, it's necessary to recognize which aspects of our perception are the saying itself, and which are our own ideas projected onto it.
- Eliezer Yudkowsky was once attacked by a Moebius strip. He beat it to death with the other side, non-violently.
- Inside Eliezer Yudkowsky's pineal gland is not an immortal soul, but another brain.
- Eliezer Yudkowsky's favorite food is printouts of Rice's theorem.
- Eliezer Yudkowsky's favorite fighting technique is a roundhouse dustspeck to the face.
- Eliezer Yudkowsky once brought peace to the Middle East from inside a freight container, through a straw.
- Eliezer Yudkowsky once held up a sheet of paper and said, "A blank map does not correspond to a blank territory". It was thus that the universe was created.
- If you dial Chaitin's Omega, you get Eliezer Yudkowsky on the phone.
- Unless otherwise specified, Eliezer Yudkowsky knows everything that he isn't telling you.
- Somewhere deep in the microtubules inside an out-of-the-way neuron somewhere in the basal ganglia of Eliezer Yudkowsky's brain, there is a little XML tag that says awesome.
- Eliezer Yudkowsky is the Muhammad Ali of one-boxing.
- Eliezer Yudkowsky is a 1400 year old avatar of the Aztec god Aixitl.
- The game of "Go" was abbreviated from "Go Home, For You Cannot Defeat Eliezer Yudkowsky".
- When Eliezer Yudkowsky gets bored, he pinches his mouth shut at the 1/3 and 2/3 points and pretends to be a General Systems Vehicle holding a conversation among itselves. On several occasions he has managed to fool bystanders.
- Eliezer Yudkowsky has a swiss army knife that has folded into it a corkscrew, a pair of scissors, an instance of AIXI which Eliezer once beat at tic tac toe, an identical swiss army knife, and Douglas Hofstadter.
- If I am ignorant about a phenomenon, that is not a fact about the phenomenon; it just means I am not Eliezer Yudkowsky.
- Eliezer Yudkowsky has no need for induction or deduction. He has perfected the undiluted master art of duction.
- There was no ice age. Eliezer Yudkowsky just persuaded the planet to sign up for cryonics.
- There is no spacetime symmetry. Eliezer Yudkowsky just sometimes holds the territory upside down, and he doesn't care.
- Eliezer Yudkowsky has no need for doctors. He has implemented a Universal Curing Machine in a system made out of five marbles, three pieces of plastic, and some of MacGyver's fingernail clippings.
- Before Bruce Schneier goes to sleep, he scans his computer for uploaded copies of Eliezer Yudkowsky.
If you know more Eliezer Yudkowsky facts, post them in the comments.
Since the first days of civilization, humans have entertained themselves with virtual reality games. The 6th century saw the birth of chess, a game where a few carved figures placed on a checkered board mimic the human social hierarchy. Later, the technological breakthroughs of the 20th century allowed the creation of significantly more sophisticated games. For instance, the highly addictive “Civilization” lets players create a history for an entire nation, guiding it from the initial troubles of inventing the wheel to the headaches of global warming. Here is a quick summary of virtual reality games' features.
1) The “reality” of the game, while being superficially similar to the reality of the player, must at the same time be much simpler. Hence three-dimensional humans play in a two-dimensional world.
2) The laws of the game must be largely deterministic to allow a meaningful intervention by the player. Yet, in order not to make it too predictable and hence boring, an element of chance must be introduced.
3) The game protagonists must appear to have freedom of movement and yet be limited to the borders of the screen/allocated memory size. The limits of this virtual freedom are usually low at the early stages of the game, but grow as the scenario develops.
4) The game scenario must end before it reaches the limit of the allocated resources.
I now propose a little Gedanken experiment. Imagine a four-dimensional world hosting a civilization whose technology is way ahead of ours. Is there a strong reason to think that such a civilization is impossible, or that its members would not play virtual reality games? If the answer is no, what might these games look like? Using the analogy with our own games, we might expect the following.
1) The game protagonists would resemble the players. Yet, the need for simplification would require them to be three-dimensional.
2) To satisfy the second rule, we need to combine determinism and chance. In our three-dimensional universe, quantum mechanics is known to do the trick.
3) At the early stages of the game, protagonists’ freedom of movement is constrained by low technological development. At later stages a physical limit may be required (speed of light?).
4) This point may have something to do with the Fermi Paradox.
I’m interested in other possible analogies. If somebody can suggest a way to rule out the whole idea, that would be even better.