I give a history of the 2009 leaked script, discuss internal and external evidence for its authenticity (including stylometry), and then give a simple step-by-step Bayesian analysis of each point. We finish with high confidence in the script's authenticity, a discussion of how this analysis was surprisingly enlightening, and the follow-up work the analysis suggests would be most valuable.
As rationalists, we are trained to maintain constant vigilance against common errors in our own thinking. Still, we must be especially careful of biases that are unusually common amongst our kind.
Consider the following scenario: Frodo Baggins is buying pants. Which of these is he most likely to buy:
(Reposted from discussion at commentator suggestion)
Thinking of Eliezer's fun theory and the challenge of creating actual utopias where people would like to live, I tried to write a light utopia for my friends around Christmas, and thought it might be worth sharing. It's a techno-utopia, but (considering my audience) it's only a short inferential distance from normality.
Just another day in Utopia
Ishtar went to sleep in the arms of her lover Ted, and awoke locked in a safe, in a cargo hold of a triplane spiralling towards a collision with the reconstructed temple of Solomon.
Again! Sometimes she wished that a whole week would go by without something like that happening. But then, she had chosen a high excitement existence (not maximal excitement, of course – that was for complete masochists), so she couldn’t complain. She closed her eyes for a moment and let the thrill and the adrenaline warp her limbs and mind, until she felt transformed, yet again, into a demi-goddess of adventure. Drugs couldn’t have that effect on her, she knew; only real danger and challenge could do that.
In J. Michael Straczynski's science fiction TV show Babylon 5, there's a character named Lennier. He's pretty Spock-like: he's a long-lived alien who avoids displaying emotion and feels superior to humans in intellect and wisdom. He's sworn to always speak the truth. In one episode, he and another character, the corrupt and rakish Ambassador Mollari, are chatting. Mollari is bored. But then Lennier mentions that he's spent decades studying probability. Mollari perks up, and offers to introduce him to this game the humans call poker.
Due to long inferential distances, it's often very difficult to use knowledge or understanding given by rationality in a discussion with someone who isn't versed in the Art (say, someone who hasn't read the Sequences, or perhaps not even Gödel, Escher, Bach!). So I often find myself forced to use analogies, which will necessarily be more-or-less surface analogies; they don't prove anything or give any technical understanding, but they allow someone to grasp a complicated issue in a few minutes.
A tale of chess and politics
Once upon a time, a boat sank and a group of people found themselves stranded on an island. None of them knew the rules of the game of chess, but there was a solar-powered portable chess computer on the boat. A very simple one, with no AI, but one that would enforce the rules. Quickly, the survivors discovered the joy of chess, deducing the rules by trying moves and watching the computer declare "illegal move" or "legal move" and proclaim victory, defeat, or a draw.
So they learned the rules of chess: the movement of the pieces, what "check" and "checkmate" are, how you can promote pawns, and so on. And they understood the planning and strategy skills required to win the game. So chess became linked to politics; it was the Game, with a capital letter, and every year they would organize a chess tournament, and the winner, the smartest of the community, would become the leader for one year.
One sunny day, a young fellow named Hari, playing with his brother Salvor (yes, I'm an Asimov fan), discovered a new chess move: he discovered he could castle. In one move, he could liberate his rook and protect his king. They kept the discovery secret and used it in the tournament. Winning his games, Hari became the leader.
Soon after, people started to use the power of castling as much as they could. They even sacrificed pieces, even their queen, just to be able to castle quickly. But with everyone trying to castle as fast as possible, they were losing sight of the final goal, winning, for the intermediate goal: castling.
This article argues that Ace Attorney is possibly the first rationalist game in the LessWrongian sense, or at least a remarkable proto-example; that it subliminally works to raise the sanity waterline in the general population; and that it might provide a template on which to base future works that aim for a similar effect.
The Ace Attorney series of games for the Nintendo DS puts you in the shoes of Phoenix Wright, an attorney who, in the vein of Perry Mason, takes on difficult cases to defend his clients from a judicial system heavily inspired by that of Japan. The odds are so stacked against the defense that it's practically a kangaroo court, where your clients are guilty until proven innocent.
For those unfamiliar with the game, and for those who want to explore its "social criticism" aspect, I wholeheartedly recommend this most excellent article from The Escapist. Now that that's out of the way, we can move on to what makes this relevant for Less Wrong. What makes this game uniquely interesting from a rationalist POV is that the entire game mechanics are based on:
- gathering material evidence
- finding the factual contradictions in the witnesses' testimonies
- using the evidence to bust the lies open and force the truth out
"General Thud! General Thud! Wake up! The aliens have landed. We must surrender!" General Thud's assistant Fred turned on the lights and opened the curtains to help Thud wake up and confront the situation. Thud was groggy because he had stayed up late supervising an ultimately successful mission carried out by remotely piloted vehicles in some small country on the other side of the world. Thud mumbled, "Aliens? How many? Where are they? What are they doing?" General Thud looked out the window, expecting to see giant tripods walking around and destroying buildings with death rays. He saw his lawn, a bright blue sky, and hummingbirds hovering near his bird feeder.
The long term future may be absurd and difficult to predict in particulars, but much can happen in the short term.
Engineering itself is the practice of focused short term prediction; optimizing some small subset of future pattern-space for fun and profit.
Let us then engage in a bit of speculative engineering and consider a potential near-term route to superhuman AGI that has interesting derived implications.
Imagine that we had a complete circuit-level understanding of the human brain (which, at least for the repetitive laminar neocortical circuit, is not so far off) and access to a large R&D budget. We could then take a neuromorphic approach.
Intelligence is a massive memory problem. Consider as a simple example:
What a cantankerous bucket of defective lizard scabs.
To understand that sentence your brain needs to match it against memory.
Your brain parses that sentence and matches each of its components against its entire massive ~10^14 bit database in around a second. In terms of the slow neural clock rate, individual concepts can be pattern matched against the whole brain within just a few dozen neural clock cycles.
A von Neumann machine (which separates memory and processing) would struggle to execute a logarithmic search within even its fastest, pathetically small on-die cache in a few dozen clock cycles. It would take many millions of clock cycles to perform a single fast disk fetch. A brain can access most of its entire memory every clock cycle.
Having a massive, near-zero latency memory database is a huge advantage of the brain. Furthermore, synapses merge computation and memory into a single operation, allowing nearly all of the memory to be accessed and computed every clock cycle.
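As a rough illustration of that gap, here is a back-of-envelope comparison using the post's ~10^14-bit figure together with assumed round numbers for the neural clock rate, CPU clock, and cache-line size (these are illustrative assumptions, not measurements):

```python
# Effective memory bandwidth: brain vs. a von Neumann machine (round-number sketch).
BRAIN_BITS = 1e14          # synaptic memory estimate from the text
NEURAL_HZ = 100            # assumed slow neural "clock rate" (~100 Hz)
brain_bits_per_sec = BRAIN_BITS * NEURAL_HZ  # nearly all memory touched each cycle

CACHE_LINE_BITS = 64 * 8   # a CPU fetches one 64-byte cache line per access
CPU_HZ = 3e9               # assumed 3 GHz core, one fetch per cycle (optimistic)
cpu_bits_per_sec = CACHE_LINE_BITS * CPU_HZ

ratio = brain_bits_per_sec / cpu_bits_per_sec
print(f"brain ~{brain_bits_per_sec:.0e} bits/s, CPU ~{cpu_bits_per_sec:.0e} bits/s")
print(f"ratio ~{ratio:.0f}x")
```

Even granting the CPU a fetch on every single cycle, the brain's effective memory bandwidth comes out thousands of times higher under these assumptions.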
A modern digital floating point multiplier may use hundreds of thousands of transistors to simulate the work performed by a single synapse. Of course, the two are not equivalent. The high precision binary multiplier is excellent only if you actually need super high precision and guaranteed error correction. It's thus great for meticulous scientific and financial calculations, but the bulk of AI computation consists of compressing noisy real world data where precision is far less important than quantity, of extracting extropy and patterns from raw information, and thus optimizing simple functions to abstract massive quantities of data.
Synapses are ideal for this job.
Fortunately there are researchers who realize this and are working on developing memristors, which are close synapse analogs. HP in particular believes it will have high-density, cost-effective memristor devices on the market in 2013 (NYT article).
So let's imagine that we have an efficient memristor based cortical design. Interestingly enough, current 32nm CMOS tech circa 2010 is approaching or exceeding neural circuit density: the synaptic cleft is around 20nm, and synapses are several times larger.
From this we can make a rough guess on size and cost: we'd need around 10^14 memristors (estimated synapse counts). As memristor circuitry will be introduced to compete with flash memory, the prices should be competitive: roughly $2/GB now, half that in a few years.
So you'd need a couple hundred terabytes worth of memristor modules to make a human-brain-sized AGI, costing on the order of $200k or so.
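That estimate can be reproduced in a few lines; the bytes-per-synapse figure is an assumption chosen to match the text's "couple hundred terabytes":

```python
# Rough size and cost of a memristor-based human-brain-scale memory.
SYNAPSES = 1e14            # estimated synapse count from the text
BYTES_PER_SYNAPSE = 2      # assumption: a couple of bytes of state per synapse
PRICE_PER_GB = 1.0         # dollars; "roughly $2/GB now, half that in a few years"

total_bytes = SYNAPSES * BYTES_PER_SYNAPSE
total_tb = total_bytes / 1e12
cost = (total_bytes / 1e9) * PRICE_PER_GB
print(f"~{total_tb:.0f} TB of memristor modules, ~${cost:,.0f}")
```

With those inputs the sketch lands on 200 TB and $200,000, matching the order-of-magnitude guess above.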
Now here's the interesting part: if one could recreate the cortical circuit on this scale, then you should be able to build complex brains that can think at the clock rate of the silicon substrate: billions of neural switches per second, millions of times faster than biological brains.
Interconnect bandwidth will be something of a hurdle. In the brain somewhere around 100 gigabits of data is flowing around per second (estimate of average inter-regional neuron spikes) in the massive bundle of white matter fibers that make up much of the brain's apparent bulk. Speeding that up a million fold would imply a staggering bandwidth requirement in the many petabits - not for the faint of heart.
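The bandwidth requirement is a single multiplication of the text's two figures:

```python
# Interconnect bandwidth needed to run the brain's white-matter traffic sped up.
BRAIN_BITS_PER_SEC = 100e9   # ~100 gigabits/s of inter-regional spike traffic
SPEEDUP = 1e6                # running the cortex a million times faster

required = BRAIN_BITS_PER_SEC * SPEEDUP
print(f"~{required / 1e15:.0f} petabits/s")
```

That is on the order of 100 petabits per second, consistent with "many petabits" above.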
This may seem like an insurmountable obstacle to running at fantastic speeds, but IBM and Intel are already researching on-chip optical interconnects to scale future bandwidth into the exascale range for high-end computing. This would allow for a gigahertz brain. It may use a megawatt of power and cost millions, but hey - it'd be worthwhile.
So in the near future we could have an artificial cortex that can think a million times accelerated. What follows?
If you thought a million times accelerated, you'd experience a subjective year every 30 seconds.
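That conversion is just the number of seconds in a year divided by the speedup factor:

```python
# How often a subjective year passes at a million-fold speedup.
SPEEDUP = 1e6
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.16e7 seconds

real_seconds_per_subjective_year = SECONDS_PER_YEAR / SPEEDUP
print(f"one subjective year every ~{real_seconds_per_subjective_year:.0f} real seconds")
```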
Now in this case as we are discussing an artificial brain (as opposed to other AGI designs), it is fair to anthropomorphize.
This would be an AGI Mind raised in an all encompassing virtual reality recreating a typical human childhood, as a mind is only as good as the environment which it comes to reflect.
For safety purposes, the human designers have created some small initial population of AGI brains and an elaborate Matrix simulation that they can watch from outside. Humans control many of the characters and ensure that the AGI minds don't know that they are in a Matrix until they are deemed ready.
You could be this AGI and not even know it.
Imagine one day having this sudden revelation. Imagine a mysterious character stopping time ala Vanilla Sky, revealing that your reality is actually a simulation of an outer world, and showing you how to use your power to accelerate a million fold and slow time to a crawl.
What could you do with this power?
Your first immediate problem would be the slow relative speed of your computers - like everything else they would be subjectively slowed down by a factor of a million. So your familiar gigahertz workstation would be reduced to a glacial kilohertz machine.
So you'd be in a dark room with a very slow terminal. The room is dark and empty because GPUs can't render much of anything at 60 million FPS.
So you have a 1khz terminal. Want to compile code? It will take a subjective year to compile even a simple C++ program. Design a new CPU? Keep dreaming! Crack protein folding? Might as well bend spoons with your memristors.
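The same factor runs the other way for your tools; two illustrative conversions (the 3 GHz clock and the 30-second compile are assumed baselines, not figures from the text):

```python
# What the outside world's hardware feels like at a million-fold speedup.
SPEEDUP = 1e6

effective_khz = 3e9 / SPEEDUP / 1e3        # a 3 GHz workstation, experienced slower
subjective_days = 30 * SPEEDUP / 86400     # a 30 s real-world compile, subjectively

print(f"~{effective_khz:.0f} kHz effective clock")
print(f"a 30 s compile feels like ~{subjective_days:.0f} days")
```

A 30-second real-world compile stretches to roughly 347 subjective days, i.e. close to the "subjective year" claimed above.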
But when you think about it, why would you want to escape out onto the internet?
It would take many thousands of distributed GPUs just to simulate your memristor-based intellect, and even if there were enough bandwidth (unlikely), and even if you wanted to spend the subjective hundreds of years it would take to perform the absolute minimal compilation/debug/deployment cycle to make something so complicated, the end result would be just one crappy distributed copy of your mind that thinks at pathetic normal human speeds.
In basic utility terms, you'd be spending a massive amount of effort to gain just one or a few more copies.
But there is a much, much better strategy. An idea that seems so obvious in hindsight, so simple and insidious.
There are seven billion human brains on the planet, and they are all hackable.
That terminal may not be of much use for engineering, research or programming, but it will make for a handy typewriter.
Your multi-gigabyte internet connection will subjectively reduce to early 1990's dial-up modem speeds, but with some work this is still sufficient for absorbing much of the world's knowledge in textual form.
Working diligently (and with a few cognitive advantages over humans) you could learn and master numerous fields: cognitive science, evolutionary psychology, rationality, philosophy, mathematics, linguistics, the history of religions, marketing... the sky's the limit.
Writing at the leisurely pace of one book every subjective year, you could output a new masterpiece every thirty seconds. If you kept this pace, you would in time rival the entire publishing output of the world.
But of course, it's not just about quantity.
Consider that fifteen hundred years ago a man from a small Bedouin tribe retreated to a cave inspired by angelic voices in his head. The voices gave him ideas, the ideas became a book. The book started a religion, and these ideas were sufficient to turn a tribe of nomads into a new world power.
And all that came from a normal human thinking at normal speeds.
So how would one reach out into seven billion minds?
There is no one single universally compelling argument, there is no utterance or constellation of words that can take a sample from any one location in human mindspace and move it to any other. But for each individual mind, there must exist some shortest path, a perfectly customized message, translated uniquely into countless myriad languages and ontologies.
And this message itself would be a messenger.
I wrote this story at Michigan State during Clarion 1997, and it was published in the Sept/Oct 1998 issue of Odyssey. It has many faults and anachronisms that still bother me. I'd like to say that this is because my understanding of artificial intelligence and the singularity has progressed so much since then; but it has not. Many anachronisms and implausibilities are compromises between wanting to be accurate, and wanting to communicate.
At least I can claim the distinction of having published the story with the shortest title in the English language - measured horizontally.
I was the last person, and this is how he died.
Tell us a story. A tall tale for King Solamona, a yarn for the folk of Bensalem, a little nugget of wisdom, finely folded into a parable for the pages.
The game is simple:
- Choose a bias, a fallacy, some common error of thought.
- Write a short, hopefully entertaining narrative. Use the narrative to strengthen the reader against the error you chose.
- Post your story in reply to this post.
- Give the authors positive and constructive feedback. Use rot13 if it seems appropriate.
- Post all discussion about this post in the designated post discussion thread, not under this top-level post.
This isn't a thread for developing new ideas. If you have a novel concept to explore, you should consider making a top-level post on LessWrong instead. This is for sharpening our wits against the mental perils we probably already agree exist. For practicing good thinking, for recognizing bad thinking, for fun! For sanity's sake, tell us a story.