King Under The Mountain: Adventure Log + Soundtrack
With the help of many dedicated Less Wrongers (players muflax, Karl, Charlie, and Emile; musicians Mike Blume and Alicorn, technical support Ari Rahikkala) we have successfully completed what is, as far as I know, the first actual Dungeons and Discourse adventure anywhere. Except we're not calling it that, because I don't have the rights to use that name. Though it's not precisely rationality related, I hope it is all right if I post a summary of the adventure by popular demand.
Also, at some point it turned into a musical. The first half of the songs are only available as lyrics at the moment, but Alicorn and MBlume very kindly produced the second half as real music, which I've uploaded to YouTube and linked at the bottom of this post (skip to it now).
THE ADVENTURE
BACKGROUND
The known world has many sects and religions, but all contain shadowy legends of two primeval deities: Sophia, Goddess of Wisdom; and Aleithos, God of Truth. When Sophia announced her plan to create thinking, rational beings, Aleithos objected, declaring that they would fall into error and produce endless falsehoods. Sophia ignored her brother's objections and created humankind, who named the world after their goddess-mother. But Aleithos' fears proved well-founded: humankind fell into error and produced endless falsehoods, and their clamor drove the God of Truth insane.
The once mighty Aleithos fell from heaven, and all of his angelic servants turned into Paradox Beasts, arachnoid monstrosities that sought and devoured those who challenged the laws of logic. Over centuries, most of the Paradox Beasts were banished, but Aleithos himself remained missing. And though thousands of seekers set off to all the corners of the world in search of Truth, the Mad God keeps his own counsel, if He still even exists at all.
The Truth God's madness had one other effect: the laws of physics, once inviolable, turned fluid, and those sufficiently advanced in the study of Truth gained apparently magical abilities. With knowledge literally being power, great philosophers and scientists built mighty cities and empires.
In the middle of the Cartesian Plain at the confluence of the rivers Ordinate and Abcissa stands the mightiest of all, the imperial city of Origin. At the very center of the city stands the infinitely tall Z-Axis Tower, on whose bottom floor lives the all-seeing Wizard of 0=Z. Surrounding the Tower are a host of colleges and universities that attract the greatest scholars from all over Origin, all gathered in service to the great project to find Truth.
Into the city comes Lady Cerune Russell, an exotic noblewoman from far-off parts seeking great thinkers to join her on a dangerous adventure. Four scholars flock to her banner. Nomophilos the Elder the Younger (Emile) is a political scientist studying the central role of laws in creating a just society. Phaidros (muflax) is a zealous Protestant theologian trying to meld strains of thought as disparate as Calvinism, Gnosticism, and W.L. Craig's apologetics. Ephraim (Charlie) is a Darwinian biologist with strong leftist sympathies and an experimental streak that sometimes gets him in trouble. And Macx (Karl) is a quiet but very precise logician with a talent for puzzles.
Cerune explains to the Original scholars that she is the last living descendant of Good King Bertrand, historic ruler of the land of Russellia far to the west. Russellia was the greatest nation in the world until two hundred years ago, when a cataclysm destroyed the entire kingdom in a single day and night. Now the skies above Russellia are dark and filled with choking ash, monsters roam its plains, and the Good King is said to be locked in a magical undying sleep deep beneath the Golden Mountain in the kingdom's center. Though many have traveled to Russellia in search of answers, none have returned alive; Cerune, armed with secret information from the Turing Oracle which she refuses to share, thinks she can do better. The four Originals agree to protect her as she makes the dangerous journey to the Golden Mountain to investigate the mysterious disaster and perhaps lift the curse. Cerune gives them a day in Origin to prepare for the journey.
CHAPTER ONE: ORIGIN
The party skip the city's major attractions, including the Z-Axis Tower and the Hagia Sophia, in favor of more academic preparations: a visit to the library to conduct research, and a shopping trip to Barnes & Aristoi Booksellers, where they purchase reading material for the journey ahead. Here, they find a map of the lands on the road to Russellia, including the unpleasant-sounding Slough of Despotism and the Shadow City of Xar-Morgoloth, whose very name inexplicably chills the air when spoken aloud. After a long discussion on how this thermodynamics-defying effect could probably be used to produce unlimited free energy, they return to more immediate matters and head to the armory to pick up some weapons - a trusty isosceles triangle for Nomophilos, a bow for Macx - before the stores close for the evening. After a final night in Origin, they meet Cerune at the city gates and set off.
They originally intend to stick to the course of the Abcissa, but it is flooding its banks and Cerune recommends crossing the river into Platonia at the Pons Asinorum. After being attacked by a Euclidean Elemental charged with letting no one enter who does not know geometry, they reach the other bank and find a strange old man, raving incomprehensibly. His turns of phrase start to make sense only after the party realizes that he is speaking as if he - and all objects - had no consistent identity.
In his roundabout way, he identifies himself as Heraclitus, the Fire Mage, one of the four great Elemental Mages of Platonia. Many years ago, he crossed into Origin on some errand, only to be ambushed by his arch-enemy, the Water Mage Thales. Thales cursed Heraclitus so that he could never cross the same river twice, trapping him on the wrong side of the Abcissa and preventing his return to Platonia. Heraclitus eventually found a loophole: he convinced himself that objects have no permanent identity, so that he could never cross the same river twice anyway, since it would not be the same river and he would not be the same man. Accepting this thesis, he crossed the Abcissa without incident - only to find that his new metaphysics of identity prevented him from forming goals, executing long-term plans, or doing anything more complicated than sitting by the riverbank and eating the fish that swim by.
This sets off a storm of conversation, as each member of the party tries to set Heraclitus right in their own way: Phaidros by appealing to God as a final arbiter of identity, Macx and Nomophilos by arguing that duty is independent of identity and that Heraclitus has a duty to his family and followers. Unfortunately, they make a logical misstep and end up convincing Heraclitus that it is illogical, from his perspective, to hold conversation; this ends the debate. And as the five philosophers stand around discussing what to do, they are ambushed by a party of assassins, who shoot poisoned arrows at them from a nearby knoll.
The party is outnumbered and outflanked, and the situation seems hopeless, until Macx notices that several of the attackers are confused and unwilling to attack. With this clue, he identifies them as Buridan's Assassins, who in the presence of two equally good targets will hesitate forever, unable to choose. He yells to his friends to position themselves so that two or more adventurers are equidistant from each assassin, and sure enough, this paralyzes the archers and allows the party some breathing space.
But when a second group of assassins arrives to join the first, the end seems near - until Heraclitus, after much pondering, decides to accept his interlocutors' arguments for object permanence and joins in the battle. His fire magic makes short work of the remaining assassins, and when the battle is over, he thanks them and gives a powerful magic item as a gift to each. Then he disappears in a burst of flame after warning his new friends to beware the dangers ahead.
The party searches the corpses of the assassins - who all carry obsidian coins marked PLXM - and then camps for the night on the fringe of the Slough of Despotism.
CHAPTER TWO: THE SLOUGH OF DESPOTISM
The Slough of Despotism is a swamp unfortunately filled with allegators, giant reptiles who thrive on moral superiority and on casting blame. They accuse our heroes of trespassing on their property; our heroes counter that the allegators, who do not have a state to enforce property rights, cannot have a meaningful concept of property. The allegators threaten to form a state, but before they can do so the party manages to turn them against each other by pointing out where their property rights conflict; while the allegators argue, the adventurers sneak off.
They continue through the swamp, braving dense vegetation, giant snakes, and more allegators (who are working on the whole state thing; the party tells them that they're too small and disorganized to be a real state, and that they would have to unite the entire allegator nation under a mutually agreed system of laws) before arriving at an old barrow tomb. Though four of the five adventurers want to leave well enough alone, Ephraim's experimental spirit gets the better of him, and he enters the mound. The tomb's resident Barrow Wight has long since departed, but it has left behind a suit of Dead Wight Mail, which confers powerful bonuses on Conservatives and followers of the Right-Hand Path. Nomophilos, the party's Conservative, is all set to take the Mail when Phaidros objects that it is morally wrong to steal from the dead; this sparks a fight that almost becomes violent before Nomo finally backs down. With a sigh of remorse, he leaves the magic item where he found it.
Beyond the barrow tomb lies the domain of the Hobbesgoblins, the mirror image of the allegators in that they have a strong - some might say dictatorial - state under the rule of their unseen god-king, Lord-Over-All. They are hostile to any foreigners who refuse to swear allegiance to their ruler, but after seeing an idol of the god-king - a tentacled monstrosity bearing more than a passing resemblance to Cthulhu - our heroes are understandably reluctant to do so. As a result, the Hobbesgoblins try to refuse them passage through their capital city of Malmesbury on the grounds that, without being subordinated to Lord-Over-All or any other common ruler, the adventurers are in a state of nature relative to the Hobbesgoblins and may rob, murder, or otherwise exploit them. The Hobbesgoblins don't trust mere oaths or protestations of morality - but Nomophilos finally comes up with a compromise that satisfies them. He offers a hostage as surety for the party's good conduct, handing over his pet tortoise Xeno. The Hobbesgoblins accept, and the party passes through Malmesbury without incident.
On the far side of Malmesbury they come to a great lake, around which the miasmas of the swamp seem to swirl expectantly. On the shore of the lake lives Theseus with his two ships. Theseus tells his story: when he came of age, he set off on a trading expedition upon his father's favorite ship. His father made him swear to return the ship intact, but after many years of travel, Theseus realized that every part of the ship had been replaced and repaired, so that there was not a single piece of the ship that was the same as when it had left port. Mindful of his oath, he hunted down the old pieces he had replaced, and joined them together into a second ship. But now he is confused: is it the first or the second ship which he must return to his father?
The five philosophers tell Theseus that it is the first ship: the ship's identity is linked to its causal history, not to the matter that composes it. Delighted with this answer, he offers the second ship to the adventurers, who sail toward the far shore.
Halfway across the lake, they meet an old man sitting upon a small island. He introduces himself as Thomas Hobbes, and says that his spies and secret police have told him everything about the adventurers since they entered the Slough. Their plan to save Russellia is a direct threat to his own scheme to subordinate the entire world under one ruler, and so he will destroy them. When the party expresses skepticism, his "island" rises out of the water and reveals itself to be the back of the monstrous sea creature, Leviathan, the true identity of the Hobbesgoblins' Lord-Over-All. After explaining his theory of government ("Let's Hear It For Leviathan", lyrics only), Hobbes and the monster attack for the game's first boss battle. The fight is immediately plagued by mishaps, including one incident where Phaidros's "Calvin's Predestined Hellfire" spell causes Hobbes to briefly turn into a Dire Tiger. When one of Leviathan's tentacles grabs Cerune, she manifests a battle-axe of magic fire called the Axe of Separation and hacks the creature's arm off. She refuses to explain this power, but inspired by the small victory, the party defeat Hobbes and reduce Leviathan to a state of Cartesian doubt; the confused monster vanishes into the depths, and the adventurers hurry to the other side and out of the Slough.
CHAPTER THREE: THE SHADOW CITY
Although our heroes make good time, they soon spot a detachment of Hobbesgoblins pursuing them. Afraid the goblins will be angry at the defeat of their god, the party hides; this turns out to be unnecessary, as the goblins only want Ephraim - the one who actually dealt the final blow against Leviathan - to be their new Lord-Over-All. Ephraim rejects the position, and the party responds to the goblins' desperate pleading by offering a few pointers for creating a new society - punishing violence, promoting stability, reinforcing social behavior. The Hobbesgoblins grumble, but eventually depart - just in time for the party to be attacked by more of Buridan's Assassins. These killers' PLXM coins seem to suggest an origin in Xar-Morgoloth, the Shadow City, and indeed its jet-black walls now loom before them. But the city sits upon the only pass through the Central Mountains, so the party reluctantly enters.
Xar-Morgoloth turns out to be a pleasant town of white-washed fences and laughing children. In search of an explanation for the incongruity, the five seek out the town's spiritual leader, the Priest of Lies. The Priest explains that although Xar-Morgoloth is superficially a nice place, the town is evil by definition. He argues that all moral explanations must be grounded in basic moral facts that cannot themselves be explained, whether these be respect for others, a preference for pleasure over pain, or simple convictions that murder and theft are wrong. One of these basic moral facts, he says, is that Xar-Morgoloth is evil. It is so evil, in fact, that it is a moral imperative to keep people out of the city - which is why he sent assassins to scare them off.
Doubtful, the party seeks out the mysterious visiting philosopher who, the Priest claims, originated these ideas: they find Immanuel Kant living alone on the outskirts of the city. Kant tells his story: he came from a parallel universe, but one day a glowing portal appeared in the sky, flinging him into the caves beyond Xar-Morgoloth. Wandering into Xar-Morgoloth, he tried to convince the citizens of his meta-ethical theories, but they insisted they could ground good and evil in basic moral intuitions instead. Kant proposed that Xar-Morgoloth was evil as a thought experiment to disprove them, but it got out of hand.
When our heroes challenge Kant's story and blame him for the current state of the city, Kant gets angry and casts Parmenides' Stasis Hex, freezing them in place. Then he announces his intention to torture and kill them all. For although in this world Immanuel Kant is a moral philosopher, in his own world (he explains) Immanuel Kant is a legendary villain and figure of depravity ("I'm Evil Immanuel Kant", lyrics only). Cerune manifests a second magic weapon, the Axe of Choice, to break the Stasis Hex, and the party have their second boss battle, which ends in defeat for Evil Kant. Searching his home, they find an enchanted Parchment of Natural Law that causes the chill in the air whenever the city's name is spoken.
Armed with this evidence, they return to the Priest of Lies and convince him that his moral theory is flawed. The Priest dispels the shadow over the city, recalls his assassins, and restores the town name to its previous non-evil transliteration of Summerglass. He then offers free passage through the caverns that form the only route through the Central Mountains.
CHAPTER FOUR: THE CAVERNS OF ABCISSA
Inside the caverns, which are nearly flooded by the overflowing Abcissa River, the party encounter an army of Water Elementals, leading them to suspect that they may be nearing the headquarters of Heraclitus' arch-enemy, Thales. The Water Elementals are mostly busy mining the rock for gems and magic artifacts, but one of them is sufficiently spooked by Phaidros to cast a spell on him, temporarily turning him to water. This is not immediately a disaster - Phaidros assumes a new form as a water elemental but keeps his essential personality - until, in an Ephraimesque display of overexperimentation, he wonders what would happen if he temporarily relaxed the morphogenic field holding him in shape. As a result, he loses his left hand, a wound that remains when he reverts to his normal form a few hours later. A resigned Phaidros merely quotes the Bible ("And if your hand offend you, cut it off: it is better for you to enter into life maimed, than having two hands to go into hell" - Mark 9:43) and trusts in the Divine plan.
The Caverns of Abcissa are labyrinthine and winding, but eventually the party encounters a trio who will reappear several times in their journey: Ruth (who tells the truth), Guy (who'll always lie) and Clancy (who acts on fancy). These three have a habit of hanging around branching caverns and forks in the road, and Ephraim solves their puzzle thoroughly enough to determine what route to take to the center of the cave system.
Here, in a great cavern, lives a civilization of cave-men whose story sounds a lot like Evil Kant's - from another world, minding their own business until a glowing portal appeared in the sky and sucked them into the caves. The cave-men are currently on the brink of civil war after one of their number, Thag, claims to have visited the mythical "outside" and discovered a world of magic and beauty far more real than the shadows dancing on the walls of their cavern. Most of the other cave-men, led by the very practical Vur, have rejected his tale, saying that the true magic and beauty lies in accepting the real, in-cave world rather than chasing after some outside paradise - but a few of the youth have flocked to Thag's banner, including Antil, a girl with mysterious magic powers.
Only the timely arrival of the adventurers averts a civil war; the party negotiates a truce and offers to solve the dispute empirically - they will escort Vur and Antil with them through the caverns so that representatives of both sides can see whether or not the "outside" really exists. This calms most of the cave-men down, and with Vur and Antil alongside, they head onward to the underground source of the Abcissa - which, according to their research, is the nerve center of Thales' watery empire.
On the way, they encounter several dangers. First, they wake a family of hibernating bears, who are quickly dispatched but who manage to maul the frail Vur so severely that only some divine intervention mediated by Phaidros saves his life. Second, they come across a series of dimensional portals clearly linked to the stories related by Evil Kant and the cave-men. Some link directly to otherworldly seas, pouring their water into the Abcissa and causing the recent floods. Others lead to otherworldly mines and quarries, and are being worked by gangs of Water Elementals. After some discussion of the ethics of stranding the Water Elementals, the five philosophers decide to shut down as many of the portals as possible.
They finally reach the source of the Abcissa and, expecting a battle, deck themselves out in magic armor that grants immunity to water magic. As expected, they encounter Thales, who reveals the full scale of his dastardly plot - to turn the entire world into water. But his exposition is marred by a series of incongruities, including his repeated mispronunciations of his own name ("All is Water", lyrics only). And when the battle finally begins, the party dispatches Thales with minimal difficulty - and the resulting corpse is not that of a Greek philosopher at all, but rather that of Davidson's Swampman, a Metaphysical summon that can take the form of any creature it encounters and imitate it perfectly.
Before anyone has time to consider the implications of their discovery, they are attacked by the real Water Mage, who bombards them with powerful water spells to which their magic armor mysteriously offers no protection. Worse, the Mage is able to create dimensional portals at will, escaping attacks effortlessly. After getting battered by a series of magic Tsunamis that nearly kill several of the weaker party members, the adventurers are in dire straits.
Then the tide begins to turn. Antil manifests the power to go invisible and attack the Water Mage from an unexpected vantage. Cerune manifests another magic weapon, the Axe of Extension, which gives her allies the same powers over space as the Water Mage seems to possess. And with a little prompting from Cerune, Phaidros and Nomophilos realize the Water Mage's true identity. Magic armor doesn't grant protection from his water spells because they are not water at all, but XYZ, a substance superficially identical to H2O but chemically entirely different. And his mastery of dimensional portals arises from his own origin in a different dimension, Twin Earth. He is Hilary Putnam ("All is Water, Reprise", lyrics only), who has crossed dimensions, defeated Thales, and assumed his identity in order to take over his watery empire and complete his world domination plot. With a last push of magic, the party manage to defeat Putnam, who is knocked into the raging Abcissa and drowned in the very element he sought to control.
They tie up the loose ends of the chapter by evacuating the Water Elementals from Twin Earth, leading the cave-men to the promised land of the Outside, and confronting Antil about her mysterious magic. Antil gives them the source of her power to turn invisible: the Ring of Gyges, which she found on the cave floor after an earthquake. She warns them never to use it, as it presents a temptation which their ethics might be unable to overcome.
CHAPTER FIVE: CLIMBING MOUNT IMPROBABLE
Now back on the surface, the party finds their way blocked by the towering Mount Improbable, which at first seems too tall to ever climb. But after some exploration, they find there is a gradual path sloping upward, and begin their ascent. They are blocked, however, by a regiment of uniformed apes: cuteness turns to fear when they get closer and find the apes have machine guns. They decide to negotiate, and the apes prove willing to escort them to their fortress atop the peak if they can prove their worth by answering a few questions about their religious beliefs.
Satisfied with their answers, the ape army leads them to a great castle at the top of the mountain, where Richard Dawkins ("Beware the Believers", credit Michael Edmondson) and his snow leopard daemon plot their war against the gods themselves. Dawkins believes the gods to be instantiated memes - creations of human belief that have taken on a life of their own due to Aleithos' madness - and accuses them of causing disasters, poverty, and ignorance in order to increase humanity's dependence upon them and keep the belief that sustains their existence intact. With the help of his genetically engineered apes and a fleet of flying battleships, he has been waging war against all the major pantheons of polytheism simultaneously. Dawkins is now gearing up to attack his most implacable foe, Jehovah Himself, although he admits He has so far managed to elude him.
Hoping the adventurers will join his forces, he takes them on a tour of the castle, showing them the towering battlements, the flotilla of flying battleships, and finally, the dungeons. In these last are imprisoned Fujin, Japanese god of storms; Meretseger, Egyptian goddess of the flood; and even Ares, the Greek god of war (whom Dawkins intends to try for war crimes: not any specific war crime, just war crimes in general). When the party reject Dawkins' offer to join his forces (most vocally Phaidros, most reluctantly Ephraim), Dawkins locks them in those same dungeons.
They are rescued late at night by their old friend Theseus. Theseus lost his ship in a storm (caused by the Japanese storm god, Fujin) and joined Dawkins' forces to get revenge; he is now captain of the aerial battleships. Theseus loads the adventurers onto a flying battleship and deposits them on the far side of the mountain, where Dawkins and his apes will be unlikely to find them.
Their troubles are not yet over, however, for they quickly encounter a three-man crusade consisting of Blaise Pascal, Johann Tetzel, and St. Augustine of Hippo (mounted, cavalry-style, upon an actual hippopotamus). The three have come, led by a divine vision, to destroy Dawkins and his simian armies as an abomination unto the Lord, and upon hearing that the adventurers have themselves escaped Dawkins, invite them to come along. But the five, despite their appreciation for Pascal's expository fiddle music ("The Devil and Blaise Pascal"), are turned off by Tetzel's repeated attempts to sell them indulgences and Augustine's bombastic preaching. After Phaidros gets into a heated debate with Augustine over the role of pacifism in Christian thinking, the two parties decide to go their separate ways, despite Augustine's fiery condemnations and Pascal's warning that there is a non-zero chance the adventurers' choice will doom them to Hell.
After another encounter with Ruth, Guy, and Clancy, our heroes reach the base of Mount Improbable and at last find themselves in Russellia.
CHAPTER SIX: THE PALL OVER RUSSELLIA
Russellia is, as the legends say, shrouded in constant darkness. The gloom and the shock of being back in her ancestral homeland are too much for Cerune, who breaks down and reveals her last few secrets. Before beginning the quest, she consulted the Turing Oracle in Cyberia, who told her to seek the aid of a local wizard, Zermelo the Magnificent. Zermelo gave her nine magic axes of holy fire, which he said possessed the power to break the curse over Russellia. But in desperation, she has already used three of the magic axes, and with only six left she is uncertain whether she will have the magic needed.
At that moment, Heraclitus appears in a burst of flame, seeking a debriefing on the death of his old enemy Thales. After recounting the events of the past few weeks, our heroes ask Heraclitus whether, as a Fire Mage, he can reforge the axes of holy fire. Heraclitus admits the possibility, but says he would need to know more about the axes, their true purpose, and the enemy they were meant to fight. He gives the party an enchanted matchbook, telling them to summon him by striking a match when they gather the information he needs.
Things continue going wrong when, in the midst of a discussion about large numbers, Phaidros makes a self-contradictory statement that summons a Paradox Beast. Our heroes stand their ground and manage to destroy the abomination, despite its habit of summoning more Paradox Beasts to its aid through its Principle of Explosion spell. Bruised and battered, they limp into the nearest Russellian city on their map, the town of Ravenscroft.
The people of Ravenscroft tell their story: in addition to the eternal darkness, Russellia is plagued by vampire attacks and by a zombie apocalypse, which has turned the population of the entire country, save Ravenscroft, into ravenous brain-eating zombies. Despite the burghers' claim that the zombie apocalypse was confirmed by no less a figure than Thomas Nagel, who passed through the area a century ago, our heroes are unconvinced: for one thing, the Ravenscrofters are unable to present any evidence that the other Russellians are zombies except for their frequent attacks on Ravenscroft - and the Ravenscrofters themselves attack the other towns as a "pre-emptive measure". But the Ravenscrofters remain convinced, and even boast of their plan to launch a surprise attack on neighboring Brixton the next day.
Suspicious, our heroes head to the encampment of the Ravenscroft army, where they are just in time to see Commander David Chalmers give a rousing oration against the zombie menace ("Flee! A History of Zombieism In Western Thought", credit Emerald Rain). They decide to latch on to Chalmers' army, both because it is heading in the same direction they are and because they hope they may be able to resolve the conflict between Ravenscroft and Brixton before it turns violent.
They camp with the army in some crumbling ruins from the golden age of the Russellian Empire. Entering a ruined temple, they disarm a series of traps to enter a vault containing a legendary artifact, the Morningstar of Frege. They also encounter a series of statues and bas-reliefs of the Good King, in which he demonstrates his chivalry by swearing an oath to Aleithos that he will defend all those who cannot defend themselves. Before they can puzzle out the meaning of all they have seen, they are attacked by vampires, confirming the Ravenscrofters' tales; they manage to chase them away with their magic and a hare-brained idea of Phaidros's to bless the vampires' body water, turning it into holy water and burning them up from the inside.
The next morning, they sneak into Brixton before the main army, and find their fears confirmed: the Brixtonites are normal people, no different from the Ravenscrofters, and they claim that Thomas Nagel told them that they were the only survivors of the zombie apocalypse. They manage to forge a truce between Ravenscroft and Brixton, but to their annoyance, the two towns make peace only to attack a third town, Mountainside, which they claim is definitely populated by zombies this time. In fact, they say, the people of Mountainside openly admit to being zombies and don't even claim to have souls.
Once again, our heroes rush to beat the main army to Mountainside. There they find the town's leader, Daniel Dennett, who explains the theory of eliminative materialism ("The Zombies' Secret"). The party tries to explain the subtleties of Dennett's position to a bloodthirsty Chalmers, and finally all sides agree to drop loaded terms like "human" and "zombie" and replace them with a common word that suggests a fundamental humanity but without an internal Cartesian theater (one of our heroes suggests "NPC", and it sticks). The armies of the three towns agree to ally against their true common enemy - the vampires who live upon the Golden Mountain and kidnap their friends and families in their nighttime raids.
Before the attack, Nomophilos and Ephraim announce their intention to build an anti-vampire death ray. The theory is that places on the fringe of Russellia receive some sunlight, while places in the center are shrouded in endless darkness. If the towns of Russellia can set up a system of mirrors from their highest towers, they can reflect the sunlight from the borderlands into a central collecting mirror in Mountainside, which can be aimed at the vampires' hideout to flood it with daylight, turning them to ashes. Ephraim, who invested most of his skill points into techne, comes up with schematics for the mirror, and after constructing a successful prototype, Chalmers and Dennett sound the attack order.
The death ray takes out many of the vampires standing guard, but within their castle they are protected from its light: our heroes volunteer to infiltrate the stronghold, but are almost immediately captured and imprisoned - the vampires intend to sacrifice Cerune in a ritual to use her royal blood to increase their power. But the adventurers make a daring escape: arch-conservative Nomophilos uses the invisible hand of the marketplace to steal the keys out of the jailer's pocket, and Phaidros summons a five hundred pound carnivorous Christ metaphor to maul the guards. Before the party can escape the castle, they are confronted by the vampire lord himself, who is revealed to be none other than Thomas Nagel ("What Is It Like To Be A Bat?"). In the resulting battle, Nagel is turned to ashes and the three allied cities make short work of the remaining vampires, capturing the castle.
The next morning finds our heroes poring over the vampire lord's library. Inside, they find an enchanted copy of Gödel, Escher, Bach (with the power to summon an identical enchanted copy of Gödel, Escher, Bach) and a slew of books on Russellian history. From the latter, they finally piece together what curse has fallen over the land, and what role the magic axes play in its removal.
[spoiler alert; stop here if you want to figure it out for yourself]
The Good King's oath to defend those who could not defend themselves was actually more complicated than that: he swore an oath to the god Aleithos to defend those and only those who could not defend themselves. His enemies, realizing the inherent contradiction, attacked him, trapping Russell in a contradiction - if he defended himself, he was prohibited from doing so; if he did not defend himself, he was obligated to do so. Trapped, he was forced to break his oath, and the Mad God punished him by casting his empire into eternal darkness and himself into an endless sleep.
The nine axes of Zermelo the Magnificent embody the nine axioms of ZFC. If applied to the problem, they will allow set theory to be reformulated in a way that makes the paradox impossible, lifting the curse and waking the Good King.
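For readers who want the mathematical skeleton spelled out, here is a brief gloss in standard notation (my own aside, not part of the game text). The oath has the shape of Russell's paradox:

\[ R = \{\, x : x \notin x \,\} \quad\Longrightarrow\quad R \in R \iff R \notin R \]

Replace "x is a member of x" with "x defends x" and instantiate at the King himself, and you get exactly the bind described above. ZFC escapes by abandoning unrestricted comprehension in favor of the axiom schema of Separation, which only licenses sets of the form \( \{\, x \in A : \varphi(x) \,\} \) carved out of an already-given set \( A \), so the paradoxical set can never be formed in the first place.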
Upon figuring out the mystery, the party strike the enchanted match and summon Heraclitus, who uses fire magic to reforge the Axes of Choice, Separation, and Extension. Thus armed, the party leave the Vampire Lord's castle and enter the system of caverns leading into the Golden Mountain.
CHAPTER SEVEN: THE KING UNDER THE MOUNTAIN
The party's travels through the cavern are quickly blocked by a chasm too deep to cross. Nomophilos saves the day by realizing that the enchanted copy of Gödel, Escher, Bach creates the possibility of infinite recursion: he uses each copy of GEB to create another copy, and eventually fills the entire chasm with books, allowing the party to walk across to the other side.
There they meet Ruth, Clancy, and Guy one last time; the three are standing in front of a Logic Gate, and to open it the five philosophers must solve the Hardest Logic Puzzle Ever. In an epic feat that the bards will no doubt sing for years to come, Macx comes up with a solution to the puzzle, identifies each of the three successfully, and opens the Logic Gate.
Inside the gate is the Good King, still asleep after two centuries. His resting place is guarded by the monster he unleashed, a fallen archangel who has become a Queen Paradox Beast. The Queen summons a small army of Paradox Beast servants with Principle of Explosion, and the battle begins in earnest. Cerune stands in a corner, trying to manifest her nine magic axes, while Nomophilos uses his Conservative spell "Morning in America" to summon a Raygun capable of piercing the Queen Paradox Beast's armored exoskeleton. Macx summons a Universal Quantifier and attaches it to his Banish Paradox Beast spell to decimate the Queen's armies. Ephraim desperately tries to wake the Good King, while Phaidros simply prays.
After an intense battle, Cerune manifests all nine axes and casts them at the Queen Paradox Beast, dissolving the paradox and destroying the beast's magical defenses. The four others redouble their efforts, and finally manage to banish the Queen. When the Queen Paradox Beast is destroyed, Good King Bertrand awakens.
Bertrand is temporarily discombobulated, but eventually regains his bearings and listens to the tale of the entire adventure. Then he tells his own story. The attack that triggered the curse upon him, he says, was no coincidence, but rather a plot by a sinister organization against whom he had been waging a shadow war: the Bayesian Conspiracy. He first encountered the conspiracy when their espionage arm, the Bayes Network, tried to steal a magic emerald of unknown origin from his treasury. Since then, he had worked tirelessly to unravel the conspiracy, and had reached the verge of success - learning that their aim was in some way linked to a plan to gain the shattered power of the Mad God Aleithos for themselves - when the Conspiracy took advantage of his oath and managed to put him out of action permanently.
He is horrified to hear that two centuries have passed, and worries that the Bayesians' mysterious plan may be close to fruition. He begs the party to help him re-establish contact with the Conspiracy and continue figuring out their plans, which may be a dire peril to the entire world. But he expresses doubt that such a thing is even possible at this stage.
In a burst of flame, Heraclitus appears, announcing that all is struggle and that he has come to join in theirs. He admits that the situation is grim, but declares it is not as hopeless as it seems, because they do not fight alone. He invokes the entire Western canon as the inspiration they follow and the giants upon whose shoulders they stand ("Grand Finale").
Heraclitus, Good King Bertrand, and the five scholars end the adventure by agreeing to seek out the Bayesian Conspiracy and discover whether Russell's old adversaries are still active. There are nebulous plans to continue the campaign (subject to logistical issues) in a second adventure, Fermat's Last Stand.
MUSIC
LYRICS ONLY
Hobbes' Song: Let's Hear It For Leviathan
Kant's Song: I'm Evil Immanuel Kant
Thales' Song: All Is Water
Putnam's Song: All Is Water, Reprise
GOOD ARTISTS BORROW, GREAT ARTISTS STEAL
Dawkins' Song: Beware The Believers (credit: Michael Edmondson)
Chalmers' Song: Flee: A History of Zombieism In Western Thought (credit: Emerald Rain)
ORIGINAL ADAPTATIONS
Pascal's Song: The Devil and Blaise Pascal
Dennett's Song: The Zombies' Secret
Vampire Nagel's Song: What Is It Like To Be A Bat?
Heraclitus' Song: Grand Finale
[Sequence announcement] Introduction to Mechanism Design
Mechanism design is the theory of how to construct institutions for strategic agents, spanning applications like voting systems, school admissions, regulation of monopolists, and auction design. Think of it as the engineering side of game theory, building algorithms for strategic agents. While it doesn't have much to say about rationality directly, mechanism design provides tools and results for anyone interested in world optimization.
In this sequence, I'll touch on
- The basic mechanism design framework, including the revelation principle and incentive compatibility.
- The Gibbard-Satterthwaite impossibility theorem for strategyproof implementation (a close analogue of Arrow's Theorem), and restricted domains like single-peaked or quasilinear preferences where we do have positive results.
- The power and limitations of Vickrey-Clarke-Groves mechanisms for efficiently allocating goods, generalizing Vickrey's second-price auction (a minimal sketch of which appears just after this list).
- Characterizations of incentive-compatible mechanisms and the revenue equivalence theorem.
- Profit-maximizing auctions.
- The Myerson-Satterthwaite impossibility for bilateral trade.
- Two-sided matching markets à la Gale and Shapley, school choice, and kidney exchange.
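Since the list above is abstract, here is a minimal sketch of the second-price auction mentioned in the VCG item. It is my own illustration, with made-up bidder names and values, not anything from the sequence itself:

```python
# A minimal sketch of Vickrey's second-price sealed-bid auction -- the special
# case that VCG mechanisms generalize. Names and numbers are made up.

def vickrey_auction(bids):
    """bids: dict mapping bidder -> bid amount. Returns (winner, price paid)."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # the winner pays the second-highest bid, not their own
    return winner, price

# Truthful bidding: each bidder simply reports their private value.
print(vickrey_auction({"alice": 120, "bob": 95, "carol": 110}))  # ('alice', 110)
```

The point is incentive compatibility: because the price a bidder pays never depends on their own bid, overbidding only risks winning at a price above their value, and underbidding only risks losing an item they would happily have bought at the going price, so reporting one's true value is a (weakly) dominant strategy. That is the property the sequence will generalize.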
As the list above suggests, this sequence is going to be semi-technical, but my foremost goal is to convey the intuition behind these results. Since mechanism design builds on game theory, take a look at Yvain's Game Theory Intro if you want to brush up.
Various resources:
- For further introduction, you can start with the popular or the more scholarly survey of mechanism design from the 2007 Nobel Memorial Prize in economics.
- Jeff Ely has lecture notes and short videos to accompany an undergraduate class in microeconomic theory from the perspective of mechanism design.
- The textbook A Toolbox for Economic Design by Dimitrios Diamantaras is very accessible and comprehensive if you can get ahold of a copy.
- Tilman Börgers has a draft textbook intended for graduate students.
- Chapters 9-16 of Algorithmic Game Theory and chapters 10-11 of Multiagent Systems cover various topics in mechanism design from the perspective of computer scientists.
- Video lectures introducing market design and computational aspects of mechanism design.
I plan on following up on this sequence with another focusing on group rationality and information aggregation, surveying scoring rules and prediction markets among other topics.
Suggestions and comments are very welcome.
Litany of a Bright Dilettante
So, here is one more litany; hopefully someone else finds it as useful as I do.
It is an understatement to say that humility is not a common virtue in online discussions, even, or especially, when it is most needed.
I'll start with my own recent example. I thought up a clear and obvious objection to one of the assertions in Eliezer's critique of comparing the FAI effort with Pascal's Wager, and started writing a witty reply. ...And then I stopped. In large part because I had just gone through the same situation, but on the other side, dealing with some of the comments on my post about time-turners and General Relativity from people who know next to nothing about General Relativity. It was irritating, yet here I was, falling into the same trap. And not for the first time, far from it. The following is the resulting thought process, distilled to one paragraph.
I have not spent 10,000+ hours thinking about this topic in a professional, all-out, do-the-impossible way. I probably have not spent even one hour seriously thinking about it. I probably do not have the prerequisites required to do so. I probably don't even know what prerequisites are required to think about this topic productively. In short, there are almost guaranteed to exist unknown unknowns which are bound to trip up a novice like me. The odds that I will find a clever argument contradicting someone who works on this topic for a living, just by reading one or two popular explanations of it, are minuscule. So if I think up such an argument, the odds of it being both new and correct are heavily stacked against me. It is true that they are non-zero, and there are popular examples of non-experts finding flaws in an established theory where there is a consensus among the experts. Some of them might even be true stories. No, Einstein was not one of these non-experts, and even if he were, I am not Einstein.
And so on. So I came up with the following, rather unpolished mantra:
If I think up what seems like an obvious objection, I will resist assuming that I have found a Weaksauce Weakness in the experts' logic. Instead I may ask politely whether my argument is a valid one, and if not, where the flaw lies.
If you think it useful, feel free to improve the wording.
Only say 'rational' when you can't eliminate the word
Almost all instances of the word "true" can be eliminated from the sentences in which they appear by applying Tarski's formula. For example, if you say, "I believe the sky is blue, and that's true!" then this can be rephrased as the statement, "I believe the sky is blue, and the sky is blue." For every "The sentence 'X' is true" you can just say X and convey the same information about what you believe - just talk about the territory the map allegedly corresponds to, instead of talking about the map.
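(Spelled out, the schema being applied is Tarski's T-schema, stated here in my own notation rather than quoted from the post:

\[ \text{``}X\text{'' is true} \iff X \]

Substituting any particular sentence for \( X \) lets you drop the "is true" wrapper, which is exactly the move made in the sky-is-blue example above.)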
When can't you eliminate the word "true"? When you're generalizing over map-territory correspondences, e.g., "True theories are more likely to make correct experimental predictions." There's no way to take the word 'true' out of that sentence because it's talking about a feature of map-territory correspondences in general.
Similarly, you can eliminate the word 'rational' from almost any sentence in which it appears. "It's rational to believe the sky is blue", "It's true that the sky is blue", and "The sky is blue", all convey exactly the same information about what color you think the sky is - no more, no less.
When can't you eliminate the word "rational" from a sentence?
When you're generalizing over cognitive algorithms for producing map-territory correspondences (epistemic rationality) or steering the future where you want it to go (instrumental rationality). So while you can eliminate the word 'rational' from "It's rational to believe the sky is blue", you can't eliminate the concept 'rational' from the sentence "It's epistemically rational to increase belief in hypotheses that make successful experimental predictions." You can Taboo the word, of course, but then the sentence just becomes, "To increase map-territory correspondences, follow the cognitive algorithm of increasing belief in hypotheses that make successful experimental predictions." You can eliminate the word, but you can't eliminate the concept without changing the meaning of the sentence, because the primary subject of discussion is, in fact, general cognitive algorithms with the property of producing map-territory correspondences.
The word 'rational' should never be used on any occasion except when it is necessary, i.e., when we are discussing cognitive algorithms as algorithms.
If you want to talk about how to buy a great car by applying rationality, but you're primarily talking about the car rather than considering the question of which cognitive algorithms are best, then title your post Optimal Car-Buying, not Rational Car-Buying.
Thank you for observing all safety precautions.
Free research help, editing and article downloads for LessWrong
Update: Please use the most recent thread.
The LW Public Goods Team wants to encourage useful research projects (as well other kinds of projects) for the LW community. If you're interested in doing this kind of work, you might run into a problem that is best solved by good outside assistance. Without assistance you might get discouraged and stop working on the project or never even start it. We want to help you avoid that. Do you
- Not know how to interpret a finding and want help figuring it out?
- Need access to a particular paper and need someone with a library subscription to download it for you?
- Need someone to edit your writing?
- Not even know what you're having trouble with, but know that you're stuck and need someone to troubleshoot you?
Then we want to help!
How do you request such help? For now, I think the best way is to post to the discussion section about your problem. That way other interested people can also provide help and be interested in your research. If you feel uncomfortable doing this, you may post to the public goods team mailing list (lw-public-goods-team@googlegroups.com) or if it's not too long after this was posted, post in the comments.
I personally commit to doing at least 3 hours a week of tasks like these for people doing LessWrong-related projects (assuming demand for it; I'll be keeping a log) for at least the next month. Morendil has committed to doing at least an hour of this, and atucker has promised to do some as well.
Our goal is to find out whether this kind of help is effective and encourages people. If this kind of assistance turns out to be valuable, we'll continue to offer it.
If you would like to volunteer some time (a little or a lot), say so in the comments!
We Should Introduce Ourselves Differently
I told an intelligent, well-educated friend about Less Wrong, so she googled, and got "Less Wrong is an online community for people who want to apply the discovery of biases like the conjunction fallacy, the affect heuristic, and scope insensitivity in order to fix their own thinking." and gave up immediately because she'd never heard of the biases.
While hers might not be the best possible attitude, I can't see that we win anything by driving people away with obscure language.
Possible improved introduction: "Less Wrong is a community for people who would like to think more clearly in order to improve their own and other people's lives, and to make major disasters less likely."
Elon Musk donates $10M to the Future of Life Institute to keep AI beneficial
We are delighted to report that technology inventor Elon Musk, creator of Tesla and SpaceX, has decided to donate $10M to the Future of Life Institute to run a global research program aimed at keeping AI beneficial to humanity.
There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. A long list of leading AI-researchers have signed an open letter calling for research aimed at ensuring that AI systems are robust and beneficial, doing what we want them to do. Musk's donation aims to support precisely this type of research: "Here are all these leading AI researchers saying that AI safety is important", says Elon Musk. "I agree with them, so I'm today committing $10M to support research aimed at keeping AI beneficial for humanity."
[...] The $10M program will be administered by the Future of Life Institute, a non-profit organization whose scientific advisory board includes AI-researchers Stuart Russell and Francesca Rossi. [...]
The research supported by the program will be carried out around the globe via an open grants competition, through an application portal at http://futureoflife.org that will open by Thursday January 22. The plan is to award the majority of the grant funds to AI researchers, and the remainder to AI-related research involving other fields such as economics, law, ethics and policy (a detailed list of examples can be found here [PDF]). "Anybody can send in a grant proposal, and the best ideas will win regardless of whether they come from academia, industry or elsewhere", says FLI co-founder Viktoriya Krakovna.
[...] Along with research grants, the program will also include meetings and outreach programs aimed at bringing together academic AI researchers, industry AI developers and other key constituents to continue exploring how to maximize the societal benefits of AI; one such meeting was held in Puerto Rico last week with many of the open-letter signatories.
Elon Musk donates $10M to keep AI beneficial, Future of Life Institute, Thursday January 15, 2015
My simple hack for increased alertness and improved cognitive functioning: very bright light
This is a simple idea I came up with on my own. I was looking for a way to enter high-functioning, lots-of-beta-waves modes without the use of chemical stimulants. What I found was that very bright light works really, really well.
I got the brightest light bulbs I could find cheaply: 105-watt halogen incandescents, billed as the equivalent of 130 watts of ordinary incandescent light. And I got an adaptor like this that lets me screw four of them into the same socket in the ceiling. The result is about as painful to look at as the sun. It makes my (small) room brighter than a clear summer's day at my latitude and slightly brighter than a supermarket.
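For a rough sense of how bright that is, here is a back-of-the-envelope estimate; every number in it is my own assumption, not a measurement from this post:

```python
# Very rough illuminance estimate for a four-bulb setup like the one described.
# All figures are assumed values for illustration only.

bulbs = 4
lumens_per_bulb = 1900   # assumed output of one 105 W halogen incandescent
utilization = 0.5        # assumed fraction of emitted light reaching work surfaces
room_area_m2 = 8.0       # assumed floor area of a small room

lux = bulbs * lumens_per_bulb * utilization / room_area_m2
print(f"roughly {lux:.0f} lux")  # several hundred lux, well above typical home lighting (~100-300 lux)
```

Under these assumptions you land in bright-retail territory, consistent with the supermarket comparison; your own numbers will vary with the bulbs and the room.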
I guess it affects adenosine much like caffeine does because that's what it feels like. Yet unlike caffeine, it can be rapidly turned on and off, literally with the flip of a switch.
For waking up in the morning, I find bright light more effective than a 200mg caffeine tablet, although my caffeine tolerance is moderate for a scientist.
I have not compared the effects of very bright light to modafinil, which requires a prescription in my country.
When under this amount of light, I need to remind myself to go to bed, because I tire about three hours later than I would under ordinary lighting. Yet once I switch it off, I can usually sleep within a few minutes, as (I'm guessing) a flood of unblocked adenosine suddenly overwhelms me. I used to have those unproductive late hours where I was too awake to sleep but too tired to be smart. I don't have those anymore.
You've probably heard of light therapy, which uses light to help manage seasonal affective disorder. I don't have that issue, but I definitely notice that the light does improve my mood. (Maybe that's simply because I like to function well.) I'm pretty sure the expensive "light therapy bulbs" you can get are scams, because the color of the light doesn't actually make a difference. The amount of light does.
One nice side benefit is that it keeps me awake while meditating, so I don't need the upright posture that usually does that job. Without the need for an upright posture, I can go beyond two hours straight, which helps enter more profoundly altered states.
After about 10 months of almost daily use of this lighting, I have not noticed any decrease in effectiveness. I do notice that I find normally-lit rooms comparatively gloomy, and have an increasingly hard time understanding why people tolerate that. Supermarkets and offices are brightly lit to make the rats move faster - why don't we do that in our homes and, while we're at it, amp it up even further? After all, our brains were made for the African savanna, which during the day is a lot brighter than most apartments today.
Since everyone can try this for a few bucks, I hope some of you will. If you do, please provide feedback on whether it works as well for you as it does for me. Any questions?
AGI and Friendly AI in the dominant AI textbook
AI: A Modern Approach is by far the dominant textbook in the field. It is used in 1200 universities, and is the 25th most-cited publication in computer science. If you're going to learn AI, this is how you learn it.
Luckily, the concepts of AGI and Friendly AI get pretty good treatment in the 3rd edition, released in 2009.
The Singularity is mentioned in the first chapter on page 12. Both AGI and Friendly AI are also mentioned in the first chapter, on page 27:
[Many leaders in the field] believe AI should return to its roots of striving for, in Simon's words, "machines that think, that learn and that create." They call the effort human-level AI or HLAI: their first symposium was in 2004 (Minsky et al. 2004)...
A related idea is the subfield of Artificial General Intelligence or AGI (Goertzel and Pennachin, 2007), which held its first conference and organized the Journal of Artificial General Intelligence in 2008. AGI looks for a universal algorithm for learning and acting in any environment, and has its roots in the work of Ray Solomonoff (1965), one of the attendees of the original 1956 Dartmouth conference. Guaranteeing that what we create is really Friendly AI is also a concern (Yudkowsky, 2008; Omohundro, 2008), one we will return to in Chapter 26.
Chapter 26 is about the philosophy of AI, and section 26.3 is "The Ethics and Risks of Developing Artificial Intelligence." The risks it discusses are:
- People might lose their jobs to automation.
- People might have too much (or too little) leisure time.
- People might lose their sense of being unique.
- AI systems might be used toward undesirable ends.
- The use of AI systems might result in a loss of accountability.
Each of those sections is one or two paragraphs long. The final risk of AI takes up 3.5 pages: (6) The Success of AI might mean the end of the human race. Here's a snippet:
The question is whether an AI system poses a bigger risk than traditional software. We will look at three sources of risk. First, the AI system's state estimation may be incorrect, causing it to do the wrong thing. For example... a missile defense system might erroneously detect an attack and launch a counterattack, leading to the death of billions...
Second, specifying the right utility function for an AI system to maximize is not so easy. For example, we might propose a utility function designed to minimize human suffering, expressed as an additive reward function over time as in Chapter 17. Given the way humans are, however, we'll always find a way to suffer even in paradise; so the optimal decision for the AI system is to terminate the human race as soon as possible - no humans, no suffering...
Third, the AI system's learning function may cause it to evolve into a system with unintended behavior. This scenario is the most serious, and is unique to AI systems, so we will cover it in more depth. I.J. Good wrote (1965),
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an "intelligence explosion," and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. The "intelligence explosion" has also been called the technological singularity by... Vernor Vinge...
Then they mention Moravec, Kurzweil, and transhumanism, before returning to a more concerned tone about AI. They cover Asimov's three laws of robotics, and then:
Yudkowsky (2008) goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism design - to define a mechanism for evolving AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes. We can't just give a program a static utility function, because circumstances, and our desired responses to circumstances, change over time. For example, if technology had allowed us to design a super-powerful AI agent in 1800 and endow it with the prevailing morals of the time, it would be fighting today to reestablish slavery and abolish women's right to vote. On the other hand, if we build an AI agent today and tell it how to evolve its utility function, how can we assure that it won't read that "Humans think it is moral to kill annoying insects, in part because insect brains are so primitive. But human brains are primitive compared to my powers, so it must be moral for me to kill humans."
Omohundro (2008) hypothesizes that even an innocuous chess program could pose a risk to society. Similarly, Marvin Minsky once suggested that an AI program designed to solve the Riemann Hypothesis might end up taking over all the resources of Earth to build more powerful supercomputers to help achieve its goal. The moral is that even if you only want your program to play chess or prove theorems, if you give it the capability to learn and alter itself, you need safeguards.
It's good this work is getting such mainstream coverage!
Require contributions in advance
If you find it difficult to say "no" to your friends, this one weird trick may save you a lot of time!
Scenario 1
Alice: "Hi Bob! You are a programmer, right?"
Bob: "Hi Alice! Yes, I am."
Alice: "I have this cool idea, but I need someone to help me. I am not good with computers, and I need someone smart whom I could trust, so they wouldn't steal my idea. Would you have a moment to listen to me?"
Alice explains to Bob her idea that would completely change the world. Well, at least the world of bicycle shopping.
Instead of having many shops for bicycles, there could be one huge e-shop that would collect all the information about bicycles from all the existing shops. The customers would specify what kind of bike they want (and where they live), and the system would find all bikes that fit the specification, and display them ordered by lowest price, including the price of delivery; then it would redirect them to the specific page of the specific vendor. Customers would love to use this one website, instead of having to visit multiple shops and compare. And the vendors would have to use this shop, because that's where the customers would be. Taking a fraction of a percent from the sales could make Alice (and also Bob, if he helps her) incredibly rich.
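(Purely for concreteness, here is a minimal sketch of the search-and-sort step Alice imagines; the field names and sample data are invented for illustration and are not part of her pitch.)

```python
from dataclasses import dataclass

@dataclass
class Offer:
    vendor: str
    model: str
    price: float
    delivery_cost: float

def matching_offers(offers, fits_spec):
    """Offers that fit the customer's specification, ordered by total delivered price."""
    return sorted(
        (o for o in offers if fits_spec(o)),
        key=lambda o: o.price + o.delivery_cost,
    )

# Hypothetical example: a customer looking for a city bike.
offers = [
    Offer("ShopA", "city bike 28", 300.0, 25.0),
    Offer("ShopB", "city bike 28", 290.0, 45.0),
    Offer("ShopC", "mountain bike 26", 250.0, 20.0),
]
for o in matching_offers(offers, lambda o: "city bike" in o.model):
    print(o.vendor, o.price + o.delivery_cost)  # ShopA 325.0, then ShopB 335.0
```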
Bob is skeptical about it. The project suffers from the obvious chicken-and-egg problem: without vendors already there, the customers will not come (and if they come by accident, they will quickly leave, never to return again); and without customers already there, there is no reason for the vendors to cooperate. There are a few ways to approach this problem, but the fact that Alice didn't even think about it is a red flag. She also has no idea who the big players in the world of bicycle selling are; and generally, she didn't do her homework. But after pointing out all these objections, Alice still remains super enthusiastic about the project. She promises she will take care of everything -- she just cannot write code, and she needs Bob's help for this part.
Bob believes strongly in the division of labor, and that friends should help each other. He considers Alice his friend, and he will likely need some help from her in the future. The fact is, with a perfect specification, he could make the webpage in a week or two. But he considers bicycles to be an extremely boring topic, so he wants to spend as little time as possible on this project. Finally, he has an idea:
"Okay, Alice, I will make the website for you. But first I need to know exactly how the page will look like, so that I don't have to keep changing it over and over again. So here is the homework for you -- take a pen and paper, and make a sketch of how exactly the web will look like. All the dialogs, all the buttons. Don't forget logging in and logging out, editing the customer profile, and everything else that is necessary for the website to work as intended. Just look at the papers and imagine that you are the customer: where exactly would you click to register, and to find the bicycle you want? Same for the vendor. And possibly a site administrator. Also give me the list of criteria people will use to find the bike they want. Size, weight, color, radius of wheels, what else? And when you have it all ready, I will make the first version of the website. But until then, I am not writing any code."
Alice leaves, satisfied with the outcome.
This happened a year ago.
No, Alice doesn't have the design ready, yet. Once in a while, when she meets Bob, she smiles at him and apologizes that she didn't have the time to start working on the design. Bob smiles back and says it's okay, he'll wait. Then they change the topic.
Scenario 2
Cyril: "Hi Diana! You speak Spanish, right?"
Diana: "Hi Cyril! Yes, I do."
Cyril: "You know, I think Spanish is the most cool language ever, and I would really love to learn it! Could you please give me some Spanish lessons, once in a while? I totally want to become fluent in Spanish, so I could travel to Spanish-speaking countries and experience their culture and food. Would you please help me?"
Diana is happy that someone takes an interest in her favorite hobby. It would be nice to have someone around she could practice Spanish conversation with. Her first instinct is to say yes.
But then she remembers (she has known Cyril for some time; they have a lot of friends in common, so they meet quite regularly) that Cyril is always super enthusiastic about something he is totally going to do... but when she meets him next time, he is super enthusiastic about something completely different; and she has never heard of him doing anything serious about his previous dreams.
Also, Cyril seems to seriously underestimate how much time it takes to learn a foreign language fluently. A few lessons once in a while will not do it. He also needs to study on his own. Preferably every day, but twice a week is probably the minimum if he hopes to speak the language fluently within a year. Diana would be happy to teach someone Spanish, but not if her effort will most likely be wasted.
Diana: "Cyril, there is this great website called Duolingo, where you can learn Spanish online completely free. If you give it about ten minutes every day, maybe after a few months you will be able to speak fluently. And anytime we meet, we can practice the vocabulary you have already learned."
This would be the best option for Diana. No work, and another opportunity to practice. But Cyril insists:
"It's not the same without the live teacher. When I read something from the textbook, I cannot ask additional questions. The words that are taught are often unrelated to the topics I am interested in. I am afraid I will just get stuck with the... whatever was the website that you mentioned."
For Diana this feels like a red flag. Sure, textbooks are not optimal. They contain many words that the student will not use frequently and will soon forget. On the other hand, the grammar is always useful; and Diana doesn't want to waste her time explaining the basic grammar that any textbook could explain instead. If Cyril learns the grammar and some basic vocabulary, then she can teach him all the specialized vocabulary he is interested in. But now it feels like Cyril wants to avoid all work. She has to draw a line:
"Cyril, this is the address of the website." She takes his notebook and writes 'www.duolingo.com'. "You register there, choose Spanish, and click on the first lesson. It is interactive, and it will not take you more than ten minutes. If you get stuck there, write here what exactly it was that you didn't understand; I will explain it when we meet. If there is no problem, continue with the second lesson, and so on. When we meet next time, tell me which lessons you have completed, and we will talk about them. Okay?"
Cyril nods reluctantly.
This happened a year ago.
Cyril and Diana have met repeatedly during the year, but Cyril never brought up the topic of Spanish again.
Scenario 3
Erika: "Filip, would you give me a massage?"
Filip: "Yeah, sure. The lotion is in the next room; bring it to me!"
Erika brings the massage lotion and lies on the bed. Filip massages her back. Then they make out and have sex.
This happened a year ago. Erika and Filip are still a happy couple.
Filip's previous relationships didn't work well in the long term. In retrospect, they all followed a similar scenario. At the beginning, everything seemed great. Then at some moment the girl started acting... unreasonably?... asking Filip to do various things for her, and then acting annoyed when Filip did exactly what he was asked to do. This happened more and more frequently, and at some moment she broke up with him. Sometimes she provided an explanation for breaking up that Filip was unable to decipher.
Filip has a friend who is a successful salesman. Successful both professionally and with women. When Filip admitted to himself that he was unable to solve the problem on his own, he asked his friend for advice.
"It's because you're a f***ing doormat," said the friend. "The moment a woman asks you to do anything, you immediately jump and do it, like a well-trained puppy. Puppies are cute, but not attractive. Have you read any of those books I sent you, like, ten years ago? I bet you didn't. Well, it's all there."
Filip sighed: "Look, I'm not trying to become a pick-up artist. Or a salesman. Or anything. No offense, but I'm not like you, personality-wise, I never have been, and I don't want to become your - or anyone else's - copy. Even if it would mean greater success in anything. I prefer to treat other people just like I would want them to treat me. Most people reciprocate nice behavior; and those who don't, well, I avoid them as much as possible. This works well with my friends. It also works with the girls... at the beginning... but then somehow... uhm... Anyway, all your books are about manipulating people, which is ethically unacceptable for me. Isn't there some other way?"
"All human interaction is manipulation; the choice is between doing it right or wrong, acting consciously or driven by your old habits..." started the friend, but then he gave up. "Okay, I see you're not interested. Just let me show you the most obvious mistake you make. You believe that when you are nice to people, they will perceive you as nice, and most of them will reciprocate. And when you act like an asshole, it's the other way round. That's correct, on some level; and in a perfect world this would be the whole truth. But on a different level, people also perceive nice behavior as weakness; especially if you do it habitually, as if you don't have any other option. And being an asshole obviously signals strength: you are not afraid to make other people angry. Also, in long term, people become used to your behavior, good or bad. The nice people don't seem so nice anymore, but they still seem weak. Then, ironicaly, if the person well-known to be nice refuses to do something once, people become really angry, because their expectations were violated. And if the asshole decides to do something nice once, they will praise him, because he surprised them pleasantly. You should be an asshole once in a while, to make people see that you have a choice, so they won't take your niceness for granted. Or if your girlfriend wants something from you, sometimes just say no, even if you could have done it. She will respect you more, and then she will enjoy more the things you do for her."
Filip: "Well, I... probably couldn't do that. I mean, what you say seems to make sense, however much I hate to admit it. But I can't imagine doing it myself, especially to a person I love. It's just... uhm... wrong."
"Then, I guess, the very least you could do is to ask her to do something for you first. Even if it's symbolic, that doesn't matter; human relationships are mostly about role-playing anyway. Don't jump immediately when you are told to; always make her jump first, if only a little. That will demonstrate strength without hurting anyone. Could you do that?"
Filip wasn't sure, but at the next opportunity he tried it, and it worked. And it kept working. Maybe it was all just a coincidence, maybe it was a placebo effect, but Filip doesn't mind. At first it felt kind of artificial, but then it became natural. And later, to his surprise, Filip realized that practicing these symbolic demands actually made it easier to ask when he really needed something. (In which case he was sometimes asked to do something first, because his girlfriend -- knowingly or not? he never had the courage to ask -- copied the pattern; or maybe she had already known it long before. But he didn't mind that either.)
The lesson is: If you find yourself repeatedly in situations where people ask you to do something for them, but in the end they don't seem to appreciate what you did for them, or don't even care about the thing they asked you to do... and yet you find it difficult to say "no"... ask them to contribute to the project first.
This will help you get rid of the projects they don't care about (including the ones they think they care about in far mode, but do not care about enough to actually work on them in near mode) without being the one who refuses cooperation. Also, the act of asking the other person to contribute, after being asked to do something for them, mitigates the status loss inherent in working for them.
Intellectual insularity and productivity
Guys I'd like your opinion on something.
Do you think LessWrong is too intellectually insular? What I mean by this is that we very seldom seem to adopt useful vocabulary or arguments or information from outside of LessWrong. For example, all I can think of is some of Robin Hanson's and Paul Graham's stuff. But I don't think Robin Hanson really counts, as Overcoming Bias used to be LessWrong.
The community seems not to update on ideas and concepts that didn't originate here. The only major examples fellow LWers brought up in conversation were works that Eliezer cited as great or influential. :/
Another thing, and I could be wrong about this naturally, but it seems clear that LessWrong has not grown. I'm not talking numerically. I can't point to major progress made in the past two years. I have heard several other users express similar sentiments. To quote one user:
I notice that, in topics that Eliezer did not explicitly cover in the sequences (and some that he did), LW has made zero progress in general.
I've recently come to think this is probably true to a first approximation. I was checking out a blogroll and saw LessWrong listed as Eliezer's blog about rationality. I realized that essentially it is. And worse, this makes it a very crappy blog, since the author doesn't make new updates any more. Originally the man had high hopes for the site. He wanted to build something that could keep going on its own, growing without him. It turned out to be a community mostly dedicated to studying the scrolls he left behind. We don't even seem to do a good job of getting others to read the scrolls.
Overall there seems to be little enthusiasm for actually systematically reading the old material. I'm going to share my take on what I think is a symptom of this. I was debating which title to pick for my first ever original content Main article (it was originally titled "On Conspiracy Theories") and made what at first felt like a joke but then took on a horrible ring of truth:
Over time the meaning of an article will tend to converge with the literal meaning of its title.
We like linking articles, and while people may read a link the first time, they don't tend to read it the second or third time they run across it. The phrase is eventually picked up and used out of its appropriate context. Something that was supposed to be shorthand for a nuanced argument starts to mean exactly what "it says". Well, not exactly: people still recall it as a vague applause light. Which is actually worse.
I cited precisely "Politics is the Mindkiller" as an example of this. In the original article Eliezer basically argues that gratuitous politics, political thinking that isn't outweighed by its value to the art of rationality, is to be avoided. This soon came to mean that it is forbidden to discuss politics in Main and Discussion articles, though it does live on in the comment sections.
Now, the question of whether LessWrong remains intellectually productive is separate from the question of whether it is insular. But I feel both need to be discussed. If our community wasn't growing but wasn't insular either, it could at least remain relevant.
This site has a wonderful ethos for discussion and thought. Why do we seem to be wasting it?
A signaling theory of class x politics interaction
The media, most recently The Economist and Scientific American, have been publicizing a surprising statistical finding: in the current economic climate, when more Americans than ever are poor, support for policies that redistribute wealth to the poor is at its lowest level ever. This new-found antipathy towards aid to the poor concentrates in people who are near but not yet on the lowest rung of the social ladder. The Economist adds some related statistics: those who earn slightly more than the minimum wage are most against raising the minimum wage, and support for welfare in an area decreases as the percentage of welfare recipients in the area rises.
Both articles explain the paradoxical findings by appealing to something called "last place aversion", an observed tendency for people to overvalue not being in last place. For example, in laboratory experiments where everyone gets randomly determined amounts of money, most people are willing to help those with less money than themselves gain cash - except the person with the second to lowest amount of money, who tends to try to thwart the person in last place even if it means enriching those who already have the most.
"Last place aversion" is interesting, and certainly deserves at least a footnote in the catalogue of cognitive biases and heuristics, but I find it an unsatisfying explanation for the observations about US attitudes toward wealth redistribution. For one thing, the entire point of last place aversion is that it only affects those in last place, but in a massive country like the United States, everyone can find someone worse off than themselves (with one exception). For another, redistributive policies usually stop short of making those who need government handouts wealthier than those who do not; subsidizing more homeless shelters doesn't risk giving the homeless a nicer house than your own. Finally, many of the policies people oppose, like taxing the rich, don't directly translate to helping those in last place.
I propose a different mechanism, one based on ... wait for it ... signaling.
In a previous post, I discussed multi-level signaling and counter-signaling, where each level tries to differentiate itself from the level beneath it. For example, the nouveau riche differentiate themselves from the middle class by buying ostentatious bling, and the nobility (who are at no risk of being mistaken for the middle class) differentiate themselves from the nouveau riche by not buying ostentatious bling.
The very poor have one strong incentive to support redistribution of wealth: they need the money. They also have a second, subtler incentive: most redistributive policies come packaged with a philosophy that the poor are not personally responsible for their poverty, but are at least partially the victims of the rest of society. Therefore, these policies inflate both their pocketbook and their ego.
The lower middle class gain what status they have by not being the very poor; effective status signaling for a lower middle class person is that which proves that she is certainly not poor. One effective method is to hold opinions contrary to those of the poor: that redistribution of wealth is evil and that the poor deserve their poverty. This ideology celebrates the superiority of the lower middle class over the poor by emphasizing the biggest difference between the lower middle class and the very poor: self-reliance. By asserting this ideology, a lower middle class person can prove her lower middle class status.
The upper middle class gain what status they have by not being the lower middle class; effective status signaling for an upper middle class person is that which proves that she is certainly not lower middle class. One effective way is to hold opinions contrary to those of the lower middle class: that really the poor and lower middle class are the same sort of people, but some of them got lucky and some of them got unlucky. The only people who can comfortably say "Deep down there's really no difference between myself and a poor person" are people confident that no one will actually mistake them for a poor person after they say this.
As a thought experiment, imagine your reactions to the following figures:
1. A bearded grizzled man in ripped jeans, smelling slightly of alcohol, ranting about how the government needs to give more free benefits to the poor.
2. A bearded grizzled man in ripped jeans, smelling slightly of alcohol, ranting about how the poor are lazy and he worked hard to get where he is today.
3. A well-dressed, stylish man in a business suit, ranting about how the government needs to give more free benefits to the poor.
4. A well-dressed, stylish man in a business suit, ranting about how the poor are lazy and he worked hard to get where he is today.
My gut reactions are (1, lazy guy who wants free money) (2, honorable working class salt-of-the-earth) (3, compassionate guy with good intentions) (4, insensitive guy who doesn't realize his privilege). If these are relatively common reactions, these would suffice to explain the signaling patterns in these demographics.
If this were true, it would explain the unusual trends cited in the first paragraph. An area where welfare became more common would see support for welfare drop, as it became more and more necessary for people to signal that they themselves were not welfare recipients. Support for minimum wage would be lowest among people who earn just slightly more than minimum wage, and who need to signal that they are not minimum wage earners. And since upper middle class people tend to favor redistribution as a status signal and lower middle class people tend to oppose it, a recession that drives more people into the lower middle class would cause a drop in support for redistributive policies.
My story / owning one's reasons
This is my first post. I've lurked for quite some time and just recently took the opportunity to join this week. I doubt that anything I post will be groundbreaking, but one thought has been developing that I thought I could at least try writing about. I'd appreciate suggestions regarding the content, but also about appropriateness at LessWrong in general. I have mainly read top level threads, but in my perusal of the discussion area it seems that most things are acceptable... so here goes.
Background
I consider this relevant and somewhat necessary. I also think many may find this interesting. I went through a "conversion experience" approximately 9 years ago next month. In my teens I was a heavy user of drugs and alcohol and was sent to a 12 step boarding school in upstate New York from my home in Milwaukee. After a "breakdown" experience there which amounted to realizing the legal ramifications of my substance usage and receiving a reprieve from those consequences (probation), I believed that god had saved my life. I dedicated myself to the 12 steps [1] and a spiritual path, which took the form of taking seriously my Catholic faith.
I moved to Minnesota for college and joined a Catholic Outreach group. I believed that living out a religious faith was the key to maintaining my sobriety. I also attended AA meetings. I maintained an extremely orthodox and passionate faith for 6 years. I was about as religious as they come -- attending adoration nightly for a month at one point, daily prayer/scripture study, prayer and "discernment" for big decisions (marrying my wife, buying a house, etc.), and so on. And don't view these as pew-warming exercises; I meant everything I did. I was passionate about the second chance I believed I'd been given, thought god was responsible, and had dedicated my life to being his faithful servant and living a holy life.
Turning Point
Last Christmas while visiting my parents, I suddenly began to doubt. I still couldn't tell you exactly why. I simply recall wondering if anyone other than the gospel writers wrote about Jesus. Google let me down. I was very disappointed to find that hardly anyone had even cared to mention him. Now, as an aside, I am almost positive that under different circumstances I would have assumed there was a perfectly reasonable explanation and simply moved on. I had never before actually thought that I might be wrong about my faith. This time was different. The seed was planted. I actually opened up to the idea that I might be wrong. Several key thoughts/developments arose:
- I trusted that if god existed, study and research should only serve to prove that fact more concretely
- I thought the most objective way to find an answer about god's existence would be to suspect that Christianity was not true and attempt to prove it back to myself
- When I realized that other than my personal conversion I had no justification for my belief, I felt absolutely horrid and decided that I never wanted that to be the case about anything again. While perhaps unrealistic, I wished to always know precisely where I stood on matters, as well as to be prepared to provide evidence for how I had reached that stance
It's been one year since my journey to research the "god question" began. You can find out more if you're interested at my blog. I can't say I've reached the level of conclusiveness I was hoping for by now, but I can say that I no longer believe.
Main Point
The previous material was a setup for focusing on the last of the three points above. What compelled me to write this was a discussion with a friend (who's still a believer) over Christmas. I had just listened to Richard Dawkins discuss Noah's ark, and was summarizing for my friend what he had said, highlighting that Noah's ark offers nothing in the way of an explanation for the isolation of particular species to various locations around the globe when compared to the explanation provided by evolution. I should point out that Catholics are not of an inerrant/literalist tradition. All of the Bible is inspired, but that doesn't require it to be factually valid (as odd as that sounds... it's what the dogma proclaims). In fact, the Church has pointed to Genesis and Revelation as open to figurative interpretation. In any case, my friend acknowledges that most instances of fundamentalist thought, like belief in a young earth and the simultaneous development of all life (man riding dinosaurs), are silly.
But then I asked her what she thought about the story of Noah's ark. Silence. More silence. Then I asked her,
"Are you wondering what you're supposed to think right now?"
She responded in the affirmative and asked how I knew. I simply said that it's what I would have been wondering if I were asked something I suspected intersected an official Church teaching but didn't know what the actual teaching was.
This interaction produced two responses: gratitude and caution. First off, I'm grateful that since my non-belief I have been truly liberated to think about many issues -- abortion, stem cell research, homosexuality, etc. It is truly wonderful to earnestly consider these topics in a rational way without my previous requirement to be allegiant-under-all-circumstances-and-rationality-be-damned. I only knew what my friend was thinking because it used to be me.
---
Inquirer: Are you pro-life?
Me: thinking as follows
- All Catholics are pro-life
- I'm Catholic
- Therefore, I'm pro-life
Me: Why, yes I am, sir.
---
It was like this for many topics. I had a bag full of cached thoughts ready to go because rather than making my choices one at a time... I had subscribed to the equivalent of a political party, which required me to buy into everything under a particular umbrella whether I had thought about it or not.
So, again, I'm grateful to have been liberated from the umbrella and be free to learn about trusted methods of rationality and make better decisions.
However... my friend's response got me on my guard as well. That was the purpose of sharing this perhaps verbose story in the first place. I wanted it to serve as a reminder to myself and to others about the importance of "owning one's reasons." Her response made me wonder if I have cached thoughts operating in other realms. Do I know why I recommend a vs. b? Or why I subscribe to policy/side-of-debate/method/product x vs. y? And, most importantly, do my answers ever change, even slightly, depending on which "umbrella" I sense I'm standing under? For example, at work when I'm surrounded by those I know to be strongly conservative... do my voiced answers/reasons change compared to when I'm with those I know to be liberal?
My answer to that is, "Yes." There are circumstances where I lessen my conclusions/impact/boldness because I'm letting the "umbrella" I feel I've subscribed to by belonging to a particular group influence my answer. One may respond that this is simply a desire not to offend or be attacked (peer pressure), but I don't think that's necessarily it. I think it's a result of me not "owning my reasons" sufficiently -- knowing the rational approach I took, the supporting evidence behind my decision, the ability to recall said evidence, etc.
My reflection has led me to suspect that if my efforts at rationality focused as much on the path as the satisfaction of having arrived at the destination, I'd be more confident and less swayed by wondering what I'm supposed to think in a given situation. In other words, I'd be more confident to state, "The answer is x. Would you like me to show my work?"
Perhaps it's not this easy or simple, but it's my current stab at some recent ideas. I'd appreciate any feedback, especially since this is my first post! I'm happy to be here.
Moving on from Cognito Mentoring
Back in December 2013, Jonah Sinick and I launched Cognito Mentoring, an advising service for intellectually curious students. Our goal was to improve the quality of learning, productivity, and life choices of the student population at large, and we chose to focus on intellectually curious students because of their greater potential as well as our greater ability to relate with that population. We began by offering free personalized advising. Jonah announced the launch in a LessWrong post, hoping to attract the attention of LessWrong's intellectually curious readership.
Since then, we feel we've done a fair amount, with a lot of help from LessWrong. We've published a few dozen blog posts and have an information wiki. Slightly under a hundred people contacted us asking us for advice (many from LessWrong), and we had substantive interactions with over 50 of them. As our reviews from students and parents suggest, we've made a good impression and have had a positive impact on many of the people we've advised. We're proud of what we've accomplished and grateful for the support and constructive criticism we've received on LessWrong.
However, what we've learned in the last few months has led us to the conclusion that Cognito Mentoring is not ripe for being a full-time work opportunity for the two of us.
For the last few months, we've eschewed regular jobs and instead done contract work that provides us the flexibility to work on Cognito Mentoring, eating into our savings somewhat to cover the cost of living differences. This is a temporary arrangement and is not sustainable. We therefore intend to scale back our work on Cognito Mentoring to "maintenance mode" so that people can continue to benefit from the resources we've already collected, with minimal additional effort on our part, freeing us up to take regular jobs with more demanding time requirements.
We might revive Cognito Mentoring as a part-time or full-time endeavor in the future if there are significant changes to our beliefs about the traction, impact, and long-run financial viability of Cognito Mentoring. Part of the purpose of "maintenance mode" will be to leave open the possibility of such a revival if the idea does indeed have potential.
In this post, I discuss some of the factors that led us to change our view, the conditions under which we might revive Cognito Mentoring, and more details about how "maintenance mode" for Cognito Mentoring will look.
Reason #1: Downward update on social value
We do think that the work we've done on Cognito Mentoring so far has generated social value, and the continued presence of the website will add more value over time. However, our view has shifted in the direction of lower marginal social value from working on Cognito Mentoring full-time, relative to simply keeping the website live and doing occasional work to improve it. Specifically:
- It's quite possible that the lowest-hanging fruit with respect to the advisees who would be most receptive to our advice has already been plucked. We received the bulk of our advisees through LessWrong within the month after our initial posting. Other places where we've posted about our service have led to fewer advisees (more here).
- Of our website content, only a small fraction of the content gets significant traction (see our list of popular pages), so honing and promoting our best content might be a better strategy for improving social value than trying to create a comprehensive resource. This can be done while in maintenance mode, and does not require full-time effort on our part.
What might lead us to change our minds: If we continue to be contacted by large numbers of potentially high-impact people, or we get evidence that the advising we've already done has had significantly greater impact than we think it did, we'll update our social value upward.
Reason #2: Downward update on long-run financial viability
We have enough cash to go on for a few more months. But for Cognito Mentoring to be something that we work full time on, we need an eventual steady source of income from it. Around mid-March 2014, we came to the realization that charging advisees is not a viable revenue source, as Jonah described at the end of his post about how Cognito Mentoring can do the most good (see also this comment by Luke Muehlhauser and Jonah's response to it below the comment). At that point, we decided to focus more on our informational content and on looking for philanthropic funding.
Our effort at looking into philanthropic funding did give us a few leads, and some of them could plausibly result in us getting small grants. However, none of the leads we got pointed to potential steady long-term income sources. In other words, we don't think philanthropic funding is a viable long-term revenue model for Cognito Mentoring.
Our (anticipated) difficulty in getting philanthropic funding arises from two somewhat different reasons.
- What we're doing is somewhat new and does not fit the standard mold of educational grants. Educational foundations tend to give grants for fairly specific activities, and what we're doing does not seem to fit those.
- We haven't demonstrated significant traction or impact yet (even though we've had a reasonable amount of per capita impact, the total number of people we've influenced so far is relatively small). This circles back to Reason #1: funders' reluctance to fund us may in part stem from their belief that we won't have much social value, given our lack of traction so far. Insofar as funders' judgment carries some information value, this should also strengthen Reason #1.
What might lead us to change our minds: If we are contacted by a funder who is willing to bankroll us for over a year and also offer a convincing reason for why he/she thinks bankrolling us is a good idea (so that we're convinced that our funding can be sustained beyond a year) we'll change our minds.
Reason #3: Acquisition of knowledge and skills
One of the reasons we've been able to have an impact through Cognito Mentoring so far is that both Jonah and I have knowledge of many diverse topics related to the questions that our advisees have posed to us. But our knowledge is still woefully inadequate in a number of areas. In particular, many advisees have asked us questions in the realms of technology, entrepreneurship, and the job environment, and while we have pointed them to resources on these, firsthand experience, or close secondhand experience, would help us more effectively guide advisees. We intend to take jobs related to computer technology (in fields such as programming or data science), and these jobs might be at startups or put us in close contact with startups. This will better position us to return to mentoring later if we choose to resume it part-time or full-time.
Knowledge and skills we acquire working in the technology sector could also help us design better interfaces or websites that can more directly address the needs of our audience. So far, we've thought of ourselves as content-oriented people, so we've used standard off-the-shelf software such as WordPress (for our main website and blog) and MediaWiki (for our information wiki). Part of the reason is that we wanted to focus on content creation rather than interface design, but part of the reason we've stuck to these is that we didn't think we could design interfaces. Once we've acquired more programming and design experience, we might be more open to the idea of designing interfaces and software that can meet particular needs of our target audience. We might design an interface that helps people study more effectively, make better life decisions, or share reviews of courses and colleges, in a manner similar to software or websites such as Anki, Beeminder, or Goodreads. There might also be potential for a more effective online resource that teaches programming than those in existence (e.g. Codecademy). It's not clear right now whether there exists a useful opportunity of this sort that we are particularly well-suited to, but with more coding experience, we'll at least be able to implement an idea of this sort if we decide it has promise.
Reason #4: Letting it brew in the background can give us a better idea of the potential
If we continue to gradually add content to the wiki, and continue to get links and traffic to it from other sources, it's likely that the traffic will grow slowly and steadily. The extent of organic growth will help us figure out how much promise Cognito Mentoring has. If our wiki gets to the point of steadily receiving thousands of pageviews a day, we will reconsider reviving Cognito Mentoring as a part-time or full-time endeavor. If, on the other hand, traffic remains at approximately the current level (about a hundred pageviews a day, once we exclude spikes arising from links from LessWrong and Marginal Revolution) then the idea is probably not worth revisiting, and we'll leave it in maintenance mode.
In addition, by maintaining contact with the people we've advised, we can get more insight into the sort of impact we've had, whether it is significant over the long term, and how it can be improved. This again can tell us whether our impact is sufficiently large as to make Cognito Mentoring worth reviving.
What "maintenance mode" entails
- We'll continue to have contact information available, but will scale back on personalized advising: People are welcome to contact us with questions and suggestions about content, but we will not generally offer detailed personalized responses or do research specific to individuals who contact us. We'll attempt to point people to relevant content we've already written, or to other resources we're already aware of that can address their concerns.
- The information wiki will remain live, and we will continue to make occasional improvements, but we won't have a time schedule of when particular improvements have to be implemented by.
- Existing blog posts will remain, but we probably won't be making many new blog posts. New blog posts will happen only if one of us has an idea that really seems worth sharing and for which the Cognito Mentoring blog is an ideal forum.
- We'll continue our administrative roles in the communities of existing Cognito Mentoring advisees.
- We'll continue periodically reviewing the progress of people we've advised so far: This will help us get a better sense of how valuable our work has been, and can be useful should we choose to revive Cognito Mentoring.
- We'll continue to correspond with advisees we have so far (time permitting), though we'll give more priority to advisees who continue to maintain contact of their own accord and those whose activities seem to have higher impact potential.
- We'll try to get our best content linked from other sources, such as about.com: Sources like about.com are targeted at the general population. We can try to get linked to from there as an additional resource for the more intellectually curious population that's outside the core focus of about.com.
- We'll link more extensively to other sources that people can use: For instance, we can more emphatically point to 80,000 Hours for people who are interested in career advising in relation to effective altruist pursuits. We can point to about.com and College Confidential for more general information about mainstream institutions. We already make a number of recommendations on our website, but as we stop working actively, it becomes all the more important that people who come to us are appropriately redirected to other sources that can help them.
Conclusion and summary (TL;DR)
We (qua Cognito Mentoring) are grateful to LessWrong for being welcoming of our posts, offering constructive criticism, and providing us with some advisees we've enjoyed working with. We think that the work we've done has value, but don't think that there's enough marginal value from full-time work on Cognito Mentoring. We think we can do more good for ourselves and the world by switching Cognito Mentoring to maintenance mode and freeing our time currently spent on Cognito Mentoring for other pursuits. The material that we have already produced will continue to remain in the public domain and we hope that people will benefit from it. We may revisit our "maintenance mode" decision if new evidence changes our view regarding traction, impact, and long-run financial viability.
The Singularity Wars
(This is an introduction, for those not immersed in the Singularity world, to the history of and relationships between SU, SIAI [SI, MIRI], SS, LW, CSER, FHI, and CFAR. It also has some opinions, which are strictly my own.)
The good news is that there were no Singularity Wars.
The Bay Area had a Singularity University and a Singularity Institute, each going in a very different direction. You'd expect to see something like the People's Front of Judea and the Judean People's Front, burning each other's grain supplies as the Romans moved in.
"Epiphany addiction"
LW doesn't seem to have a discussion of the article Epiphany Addiction, by Chris at succeedsocially. First paragraph:
"Epiphany Addiction" is an informal little term I came up with to describe a process that I've observed happen to people who try to work on their personal issues. How it works is that someone will be trying to solve a problem they have, say a lack of confidence around other people. Somehow they'll come across a piece of advice or a motivational snippet that will make them have an epiphany or a profound realization. This often happens when people are reading self-help materials and they come across something that stands out to them. People can also come up with epiphanies themselves if they're doing a lot of writing and reflecting in an attempt to try and analyze their problems.
I like that article because it describes a dangerous failure mode of smart people. One example was the self-help blog of Phillip Eby (pjeby), where each new post seemed to bring new amazing insights, and after a while you became jaded. An even better, though controversial, example could be Eliezer's Sequences, if you view them as a series of epiphanies about AI research that didn't lead to much tangible progress. (Please don't make that statement the sole focus of discussion!)
The underlying problem seems to be that people get a rush of power from neat-sounding realizations, and mistake that feeling for actual power. I don't know any good remedy for that, but being aware of the problem could help.
Son of Shit Rationalists Say
A long time ago, in the colder seasons, I asked for suggestions for a Shit Rationalists Say video. Due to other concerns it took me this long to put it together, and the meme has long since passed. However, here it is.
It is my first time in front of a camera, so I'm shaky. But I learned, and there it is.
What we're losing
More and more, LessWrong's posts are meta-rationality posts, about how to be rational, how to avoid akrasia, in general, without any specific application. This is probably the intended purpose of the site. But they're starting to bore me.
What drew me to LessWrong is that it's a place where I can put rationality into practice, discussing specific questions of philosophy, value, and possible futures, with the goal of finding a good path through the Singularity. Many of these topics have no other place where rational discussion of them is possible, online or off. Such applied topics have almost all moved to Discussion now, and may be declining in frequency.
This isn't entirely new. Applied discussions have always suffered bad karma on LW (statistically; please do not respond with anecdotal data). I thought this was because people downvote a post if they find anything in it that they disagree with. But perhaps a lot of people would rather talk about rationality than use it.
Does anyone else have this perception? Or am I just becoming a LW old geezer?
At the same time, LW is taking off in terms of meetups and number of posts. Is it finding its true self? Does the discussion of rationality techniques have a larger market than debates over Sleeping Beauty (I'm even beginning to miss those!)? Is the old concern with values, artificial intelligence, and the Singularity something for LW to grow out of?
(ADDED: Some rationality posts are good. I am also a lukeprog fan.)
[moderator action] The_Lion and The_Lion2 are banned
Accounts "The_Lion" and "The_Lion2" are banned now. Here is some background, mostly for the users who weren't here two years ago:
User "Eugine_Nier" was banned for retributive downvoting in July 2014. He keeps returning to the website using new accounts, such as "Azathoth123", "Voiceofra", "The_Lion", and he keeps repeating the behavior that got him banned originally.
The original ban was permanent. It will be enforced on all future known accounts of Eugine. (At random moments, because moderators sometimes feel too tired to play whack-a-mole.) This decision is not open to discussion.
Please note that the moderators of LW are the opposite of trigger-happy. Not counting spam, there is on average less than one account per year banned. I am writing this explicitly, to avoid possible misunderstanding among the new users. Just because you have read about someone being banned, it doesn't mean that you are now at risk.
Most of the time, LW discourse is regulated by the community voting on articles and comments. Stupid or offensive comments get downvoted; you lose some karma, then everyone moves on. In rare cases, moderators may remove specific content that goes against the rules. The account ban is only used in the extreme cases (plus for obvious spam accounts). Specifically, on LW people don't get banned for merely not understanding something or disagreeing with someone.
What does "retributive downvoting" mean? Imagine that in a discussion you write a comment that someone disagrees with. Then in a few hours you will find that your karma has dropped by hundreds of points, because someone went through your entire comment history and downvoted all comments you ever wrote on LW; most of them completely unrelated to the debate that "triggered" the downvoter.
Such behavior is damaging to the debate and the community. Unlike downvoting a specific comment, this kind of mass downvoting isn't used to correct a faux pas, but to drive a person away from the website. It has an especially strong impact on new users, who don't know what is going on, so they may mistake it for a reaction of the whole community. But even for experienced users it creates an "ugh field" around certain topics known to invoke the reaction. Thus a single user has achieved disproportionate control over the content and the user base of the website. This is not desired, and will be punished by the site owners and the moderators.
To avoid rules lawyering, there is no exact definition of how much downvoting breaks the rules. The rule of thumb is that you should upvote or downvote each comment based on the value of that specific comment. You shouldn't vote on the comments regardless of their content merely because they were written by a specific user.
Anthropic signature: strange anti-correlations
Imagine that the only way that civilization could be destroyed was by a large pandemic that occurred at the same time as a large recession, so that governments and other organisations were too weakened to address the pandemic properly.
Then if we looked at the past, as observers in a non-destroyed civilization, what would we expect to see? We could see years with no pandemics or no recessions; we could see mild pandemics, mild recessions, or combinations of the two; we could see large pandemics with no or mild recessions; or we could see large recessions with no or mild pandemics. We wouldn't see large pandemics combined with large recessions, as that would have caused us to never come into existence. These are the only things ruled out by anthropic effects.
Assume that pandemics and recessions are independent (at least, in any given year) in terms of "objective" (non-anthropic) probabilities. Then what would we see? We would see that pandemics and recessions appear to be independent when either of them is of small intensity. But as the intensity rose, they would start to become anti-correlated, with a large version of one completely precluding a large version of the other.
The effect is even clearer if we have a probabilistic relation between pandemics, recessions and extinction (something like: extinction risk proportional to product of recession size times pandemic size). Then we would see an anti-correlation rising smoothly with intensity.
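Purely as an illustration of that claim, here is a minimal Monte Carlo sketch under assumed distributions (uniform severities, extinction probability equal to the product of the two sizes); the numbers are synthetic, not drawn from any historical data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

# "Objective" severities, independent by construction.
pandemic = rng.uniform(0, 1, n)
recession = rng.uniform(0, 1, n)

# Assumed rule: probability of extinction = pandemic size * recession size.
survived = rng.uniform(0, 1, n) > pandemic * recession
p, r = pandemic[survived], recession[survived]

# Survivors show an anti-correlation that was absent in the objective draws.
print("corr, all years:       %+.3f" % np.corrcoef(pandemic, recession)[0, 1])
print("corr, surviving years: %+.3f" % np.corrcoef(p, r)[0, 1])

# Conditional on survival, mean recession size falls as pandemic severity rises,
# and it falls faster at the high end: the anti-correlation grows with intensity.
for lo, hi in [(0.0, 0.2), (0.4, 0.6), (0.8, 1.0)]:
    mask = (p >= lo) & (p < hi)
    print(f"mean recession size | pandemic in [{lo:.1f}, {hi:.1f}): {r[mask].mean():.3f}")
```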
Thus one way of looking for anthropic effects in humanity's past is to look for different classes of incidents that are uncorrelated at small magnitudes, and anti-correlated at large magnitudes. More generally, to look for different classes of incidents where the correlation changes at different magnitudes - without any obvious reasons. That might be the signature of an anthropic disaster we missed - or rather, that missed us.
LessWrong could grow a lot, but we're doing it wrong.
How do I know this? I got a copy of the website analytics.
The bounce rate for LessWrong's home page is 60%!
To be clear: Over half the people who visit LessWrong are going away without even clicking anything.
Yet how many NEW visitors are there? Almost half of the visitors are new!
Granted, new visitor statistics aren't perfect, but that's a LOT of people.

Simple math should tell us this:
If we got the bounce rate down to around 30% (a reasonable rate for a good site) by making sure every visitor sees something awesome immediately, AND made sure that each visitor can quickly gauge how much they're going to relate to the community (assuming the new users are the right target audience), it would theoretically double the rate of growth, or more. There's a multiplier effect if the bounce rate is improved: you get better placement in search engines. Search engines get more users if they feel that the engine finds interesting content, not just relevant content.
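As a rough back-of-the-envelope check on that claim (the monthly visitor count below is a made-up number; only the bounce rates and the roughly-half-new figure come from the statistics above):

```python
monthly_visitors = 10_000   # hypothetical figure, for illustration only
new_fraction = 0.5          # "almost half of the visitors are new"

def engaged_new_visitors(bounce_rate):
    # New visitors who stay and look around instead of bouncing.
    return monthly_visitors * new_fraction * (1 - bounce_rate)

current = engaged_new_visitors(0.60)   # today's ~60% bounce rate
improved = engaged_new_visitors(0.30)  # a "good site" bounce rate
print(f"engaged new visitors now:    {current:.0f}")   # 2000
print(f"engaged new visitors at 30%: {improved:.0f}")  # 3500
print(f"multiplier: {improved / current:.2f}x")        # 1.75x, before any search-ranking boost
```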
It's been argued that it's possible that most of the bounces are returning visitors checking for new content. Well if half the visitors to the site each month are new, and we did a wonderful job of showing them that LessWrong is awesome, then the amount of returning visitors could double each month. We're getting a tiny, tiny fraction of that growth:

http://www.sitemeter.com/?a=stats&s=s18lesswrong&r=36
Why did I write you guys so much in the home page rewrites thread? Because I am a web professional who works with web marketing professionals at my job and to me it was blatantly obvious that there's that much room for improvement in the growth of LessWrong. Doing changes like the ones I suggested wouldn't even take long. Because I like this site, and I knew it had potential to grow by leaps and bounds if somebody just paid a little bit of attention to real web marketing. Because I was confused when I first found this site - I had no idea what it's about, or why it's awesome. I closed the home page, myself. Another friend mentioned LessWrong. Curiosity perked up again. I came back and read the about page. That didn't make things clearer either. I left again without going further. Friends kept telling me it was awesome. I came back one day and finally found an awesome article! It took me three tries to figure out why you guys are awesome because the web marketing is so bad. The new proposals, although they are well-meaning and it's obvious that John_Maxwell_IV cares about the site, are more of the same bad marketing.
I've been interested in web marketing for ten years. It's a topic I've accumulated a lot of information about. As I see it, the way these guys are going about this is totally counter-intuitive to basic web marketing principles. They don't even seem to know how harsh users are the first time they see a new website. They tend to just go away if it doesn't grab them in a few seconds. They're like "well we will put interesting links in" but that's not how it works! The links don't make the site interesting - the site has got to be interesting enough for users to click the links. Thinking the links will make the site interesting is backward. If you want to improve your bounce rate, your goal is to be awesome immediately in order to get the user to stay on the page long enough to want to click your link. If it wasn't usually hard to get users to click links, we wouldn't track bounce rates. These guys know this particular group of users better than I do, but I know web marketing principles that they're not even seeing when pointed out. To me, they seem to be totally unaware of the field of web marketing. The numbers don't lie, and they're saying there's huge room for improvement.
If you want to grow, it's time to try something different.
Here's a thought: There is a lot of awesome content on this website. We need to take what's awesome and make it in-your-face obvious. I wrote a plan for how to quickly find the most effective awesome content (the website statistics will tell you which pages keep new visitors on them the longest), and how to use them to effectively get the attention of new users - copy the first paragraph from one of those pages, which was most likely constructed by a competent writer in a way that hooks people (if it's keeping them on the page then it's essentially proven to!), and place that as bait right on the front page. (There is also a wrong way to do this.) Then of course, the user needs to find out why the LessWrong community might be a place where they belong. I shared ideas for that in "About us - Building Interest".
Let's not just assume that growth is going to be good. You're going to get more internet trolls, more spam (there's a way to control spam which I would be happy to share), and more newbies who don't know what they're doing (I provided some suggestions to help get them on track quickly, preventing annoyance for both you and them). There will be people with new ideas, but if the wrong audience is targeted... well. We'd better choose what audience to target. I saw an internet forum take off once - it seemed to be growing slowly, until we looked at the curve and saw that it was exponential. The slow-looking growth quickly turned dazzling, and suddenly the new users outnumbered the old ones. That could happen here - even if we do nothing. YOU can get involved. YOU can influence who gets targeted. They're taking suggestions on rewrites right now. Go to the thread. I invite brutal honesty on everything I wrote there. Or pick my brain, if you'd prefer.
What do you want, LessWrong? Do you want to grow optimally? Who do you want to see showing up?
Hugo Awards - HP:MoR
This year I'm a supporting member of WorldCon for the first time, and I noticed that the Hugos have a category for Best Fan Writer ("Any person whose writing has appeared in semiprozines or fanzines or in generally available electronic media during 2010."). I do believe Harry Potter and the Methods of Rationality counts, and I plan on nominating it. Even making it onto the ballot will probably expose it to a wider audience. Is anyone else here a WorldCon member and thinking of nominating MoR?
Bill Gates: problem of strong AI with conflicting goals "very worthy of study and time"
Steven Levy: Let me ask an unrelated question about the raging debate over whether artificial intelligence poses a threat to society, or even the survival of humanity. Where do you stand?
Bill Gates: I think it’s definitely important to worry about. There are two AI threats that are worth distinguishing. One is that AI does enough labor substitution fast enough to change work policies, or [affect] the creation of new jobs that humans are uniquely adapted to — the jobs that give you a sense of purpose and worth. We haven’t run into that yet. I don’t think it’s a dramatic problem in the next ten years but if you take the next 20 to 30 it could be. Then there’s the longer-term problem of so-called strong AI, where it controls resources, so its goals are somehow conflicting with the goals of human systems. Both of those things are very worthy of study and time. I am certainly not in the camp that believes we ought to stop things or slow things down because of that. But you can definitely put me more in the Elon Musk, Bill Joy camp than, let’s say, the Google camp on that one.
"Bill Gates on Mobile Banking, Connecting the World and AI", Medium, 2015-01-21
Optimal Exercise
Followup to: Lifestyle interventions to increase longevity.
What does it mean for exercise to be optimal?
- Optimal for looks
- Optimal for time
- Optimal for effort
- Optimal for performance
- Optimal for longevity
There may be even more criteria.
We're all likely going for a mix of outcomes, and optimal exercise is going to change depending on your weighting of different factors. So I'm going to discuss something close to a minimum viable routine based on meta-analyses of exercise studies.
Not knowing which sort of exercise yields the best results gives our brains an excuse to stop thinking about it. The intent of this post is to go over the dose responses to various types of exercise. We’re going to break through vague notions like “exercise is good” and “I should probably exercise more” with a concrete plan where you understand the relevant parameters that will cause dramatic improvements.
Dr. Jubjub predicts a crisis
Dr. Jubjub: Sir, I have been running some calculations and I’m worried about the way our slithy toves are heading.
Prof. Bandersnatch: Huh? Why? The toves seem fine to me. Just look at them, gyring and gimbling in the wabe over there.
Dr. Jubjub: Yes, but there is a distinct negative trend in my data. The toves are gradually losing their slithiness.
Prof. Bandersnatch: Hmm, okay. That does sound serious. How long until it becomes a problem?
Dr. Jubjub: Well, I’d argue that it’s already having negative effects but I’d say we will reach a real crisis in around 120 years.
Prof. Bandersnatch: Phew, okay, you had me worried there for a moment. But it sounds like this is actually a non-problem. We can carry on working on the important stuff – technology will bail us out here in time.
Dr. Jubjub: Sir! We already have the technology to fix the toves. The most straightforward way would be to whiffle their tulgey wood but we could also...
Prof. Bandersnatch: What?? Whiffle their tulgey wood? Do you have any idea what that would cost? And besides, people won’t stand for it – slithy toves with unwhiffled tulgey wood are a part of our way of life.
Dr. Jubjub: So, when you say technology will bail us out you mean you expect a solution that will be cheap, socially acceptable and developed soon?
Prof. Bandersnatch: Of course! Prof. Jabberwock assures me the singularity will be here around tea-time on Tuesday. That is, if we roll up our sleeves and don’t waste time with trivialities like your tove issue.
Maybe it’s just me but I feel like I run into a lot of conversations like this around here. On any problem that won’t become an absolute crisis in the next few decades, someone will take the Bandersnatch view that it will be more easily solved later (with cheaper or more socially acceptable technology) so we shouldn’t work directly on it now. The way out is forward - let’s step on the gas and get to the finish line before any annoying problems catch up with us.
For all I know, Bandersnatch is absolutely right. But my natural inclination is to take the Jubjub view. I think the chances of a basically business-as-usual future for the next 200 or 300 years are not epsilon. They may not be very high but they seem like they need to be seriously taken into account. Problems may prove harder than they look. Apparently promising technology may not become practical. Maybe we'll have the capacity for AI in 50 years - but need another 500 years to make it friendly. I'd prefer humanity to plan in such a way that things will gradually improve rather than gradually deteriorate, even in a slow-technology scenario.
Glenn Beck discusses the Singularity, cites SI researchers
From the final chapter of his new book Cowards, titled "Adapt or Die: The Coming Intelligence Explosion."
The year is 1678 and you’ve just arrived in England via a time machine. You take out your new iPhone in front of a group of scientists who have gathered to marvel at your arrival.
“Siri,” you say, addressing the phone’s voice-activated artificial intelligence system, “play me some Beethoven.”
Dunh-Dunh-Dunh-Duuunnnhhh! The famous opening notes of Beethoven’s Fifth Symphony, stored in your music library, play loudly.
“Siri, call my mother.”
Your mother’s face appears on the screen, a Hawaiian beach behind her. “Hi, Mom!” you say. “How many fingers am I holding up?”
“Three,” she correctly answers. “Why haven’t you called more—”
“Thanks, Mom! Gotta run!” you interrupt, hanging up.
“Now,” you say. “Watch this.”
Your new friends look at the iPhone expectantly.
“Siri, I need to hide a body.”
Without hesitation, Siri asks: “What kind of place are you looking for? Mines, reservoirs, metal foundries, dumps, or swamps?” (I’m not kidding. If you have an iPhone 4S, try it.)
You respond “Swamps,” and Siri pulls up a satellite map showing you nearby swamps.
The scientists are shocked into silence. What is this thing that plays music, instantly teleports video of someone across the globe, helps you get away with murder, and is small enough to fit into a pocket?
At best, your seventeenth-century friends would worship you as a messenger of God. At worst, you’d be burned at the stake for witchcraft. After all, as science fiction author Arthur C. Clarke once said, “Any sufficiently advanced technology is indistinguishable from magic.”
Now, imagine telling this group that capitalism and representative democracy will take the world by storm, lifting hundreds of millions of people out of poverty. Imagine telling them their descendants will eradicate smallpox and regularly live seventy-five or more years. Imagine telling them that men will walk on the moon, that planes, flying hundreds of miles an hour, will transport people around the world, or that cities will be filled with buildings reaching thousands of feet into the air.
They’d probably escort you to the madhouse.
Unless, that is, one of the people in that group had been a man named Ray Kurzweil.
Kurzweil is an inventor and futurist who has done a better job than most at predicting the future. Dozens of the predictions from his 1990 book The Age of Intelligent Machines came true during the 1990s and 2000s. His follow-up book, The Age of Spiritual Machines, published in 1999, fared even better. Of the 147 predictions that Kurzweil made for 2009, 78 percent turned out to be entirely correct, and another 8 percent were roughly correct. For example, even though every portable computer had a keyboard in 1999, Kurzweil predicted that most portable computers would lack a keyboard by 2009. It turns out he was right: by 2009, most portable computers were MP3 players, smartphones, tablets, portable game machines, and other devices that lacked keyboards.
Kurzweil is most famous for his “law of accelerating returns,” the idea that technological progress is generally “exponential” (like a hockey stick, curving up sharply) rather than “linear” (like a straight line, rising slowly). In nongeek-speak that means that our knowledge is like the compound interest you get on your bank account: it increases exponentially as time goes on because it keeps building on itself. We won’t experience one hundred years of progress in the twenty-first century, but rather twenty thousand years of progress (measured at today’s rate).
Many experts have criticized Kurzweil’s forecasting methods, but a careful and extensive review of technological trends by researchers at the Santa Fe Institute came to the same basic conclusion: technological progress generally tends to be exponential (or even faster than exponential), not linear.
So, what does this mean? In his 2005 book The Singularity Is Near, Kurzweil shares his predictions for the next few decades:
- In our current decade, Kurzweil expects real-time translation tools and automatic house-cleaning robots to become common.
- In the 2020s he expects to see the invention of tiny robots that can be injected into our bodies to intelligently find and repair damage and cure infections.
- By the 2030s he expects “mind uploading” to be possible, meaning that your memories and personality and consciousness could be copied to a machine. You could then make backup copies of yourself, and achieve a kind of technological immortality.
[sidebar]
Age of the Machines?
“We became the dominant species on this planet by being the most intelligent species around. This century we are going to cede that crown to machines. After we do that, it will be them steering history rather than us.”
—Jaan Tallinn, co-creator of Skype and Kazaa
[/sidebar]
If any of that sounds absurd, remember again how absurd the eradication of smallpox or the iPhone 4S would have seemed to those seventeenth-century scientists. That’s because the human brain is conditioned to believe that the past is a great predictor of the future. While that might work fine in some areas, technology is not one of them. Just because it took decades to put two hundred transistors onto a computer chip doesn’t mean that it will take decades to get to four hundred. In fact, Moore’s Law, which states (roughly) that computing power doubles every two years, shows how technological progress must be thought of in terms of “hockey stick” progress, not “straight line” progress. Moore’s Law has held for more than half a century already (we can currently fit 2.6 billion transistors onto a single chip) and there’s little reason to expect that it won’t continue to.
But the aspect of his book that has the most far-ranging ramifications for us is Kurzweil’s prediction that we will achieve a “technological singularity” in 2045. He defines this term rather vaguely as “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.”
Part of what Kurzweil is talking about is based on an older, more precise notion of “technological singularity” called an intelligence explosion. An intelligence explosion is what happens when we create artificial intelligence (AI) that is better than we are at the task of designing artificial intelligences. If the AI we create can improve its own intelligence without waiting for humans to make the next innovation, this will make it even more capable of improving its intelligence, which will . . . well, you get the point. The AI can, with enough improvements, make itself smarter than all of us mere humans put together.
The really exciting part (or the scary part, if your vision of the future is more like the movie The Terminator) is that, once the intelligence explosion happens, we’ll get an AI that is as superior to us at science, politics, invention, and social skills as your computer’s calculator is to you at arithmetic. The problems that have occupied mankind for decades— curing diseases, finding better energy sources, etc.— could, in many cases, be solved in a matter of weeks or months.
Again, this might sound far-fetched, but Ray Kurzweil isn’t the only one who thinks an intelligence explosion could occur sometime this century. Justin Rattner, the chief technology officer at Intel, predicts some kind of Singularity by 2048. Michael Nielsen, co-author of the leading textbook on quantum computation, thinks there’s a decent chance of an intelligence explosion by 2100. Richard Sutton, one of the biggest names in AI, predicts an intelligence explosion near the middle of the century. Leading philosopher David Chalmers is 50 percent confident an intelligence explosion will occur by 2100. Participants at a 2009 conference on AI tended to be 50 percent confident that an intelligence explosion would occur by 2045.
If we can properly prepare for the intelligence explosion and ensure that it goes well for humanity, it could be the best thing that has ever happened on this fragile planet. Consider the difference between humans and chimpanzees, which share 95 percent of their genetic code. A relatively small difference in intelligence gave humans the ability to invent farming, writing, science, democracy, capitalism, birth control, vaccines, space travel, and iPhones— all while chimpanzees kept flinging poo at each other.
[sidebar]
Intelligent Design?
The thought that machines could one day have superhuman abilities should make us nervous. Once the machines are smarter and more capable than we are, we won’t be able to negotiate with them any more than chimpanzees can negotiate with us. What if the machines don’t want the same things we do?
The truth, unfortunately, is that every kind of AI we know how to build today definitely would not want the same things we do. To build an AI that does, we would need a more flexible “decision theory” for AI design and new techniques for making sense of human preferences. I know that sounds kind of nerdy, but AIs are made of math and so math is really important for choosing which results you get from building an AI.
These are the kinds of research problems being tackled by the Singularity Institute in America and the Future of Humanity Institute in Great Britain. Unfortunately, our silly species still spends more money each year on lipstick research than we do on figuring out how to make sure that the most important event of this century (maybe of all human history)— the intelligence explosion— actually goes well for us.
[/sidebar]
Likewise, self-improving machines could perform scientific experiments and build new technologies much faster and more intelligently than humans can. Curing cancer, finding clean energy, and extending life expectancies would be child’s play for them. Imagine living out your own personal fantasy in a different virtual world every day. Imagine exploring the galaxy at near light speed, with a few backup copies of your mind safe at home on earth in case you run into an exploding supernova. Imagine a world where resources are harvested so efficiently that everyone’s basic needs are taken care of, and political and economic incentives are so intelligently fine-tuned that “world peace” becomes, for the first time ever, more than a Super Bowl halftime show slogan.
With self-improving AI we may be able to eradicate suffering and death just as we once eradicated smallpox. It is not the limits of nature that prevent us from doing this, but only the limits of our current understanding. It may sound like a paradox, but it’s our brains that prevent us from fully understanding our brains.
Turf Wars
At this point you might be asking yourself: “Why is this topic in this book? What does any of this have to do with the economy or national security or politics?”
In fact, it has everything to do with all of those issues, plus a whole lot more. The intelligence explosion will bring about change on a scale and scope not seen in the history of the world. If we don’t prepare for it, things could get very bad, very fast. But if we do prepare for it, the intelligence explosion could be the best thing that has happened since . . . literally ever.
But before we get to the kind of life-altering progress that would come after the Singularity, we will first have to deal with a lot of smaller changes, many of which will throw entire industries and ways of life into turmoil. Take the music business, for example. It was not long ago that stores like Tower Records and Sam Goody were doing billions of dollars a year in compact disc sales; now people buy music from home via the Internet. Publishing is currently facing a similar upheaval. Newspapers and magazines have struggled to keep subscribers, booksellers like Borders have been forced into bankruptcy, and customers are forcing publishers to switch to ebooks faster than the publishers might like.
All of this is to say that some people are already witnessing the early stages of upheaval firsthand. But for everyone else, there is still a feeling that something is different this time; that all of those years of education and experience might be turned upside down in an instant. They might not be able to identify it exactly but they realize that the world they’ve known for forty, fifty, or sixty years is no longer the same.
There’s a good reason for that. We feel it and sense it because it’s true. It’s happening. There’s absolutely no question that the world in 2030 will be a very different place than the one we live in today. But there is a question, a large one, about whether that place will be better or worse.
It’s human nature to resist change. We worry about our families, our careers, and our bank accounts. The executives in industries that are already experiencing cataclysmic shifts would much prefer to go back to the way things were ten years ago, when people still bought music, magazines, and books in stores. The future was predictable. Humans like that; it’s part of our nature.
But predictability is no longer an option. The intelligence explosion, when it comes in earnest, is going to change everything— we can either be prepared for it and take advantage of it, or we can resist it and get run over.
Unfortunately, there are a good number of people who are going to resist it. Not only those in affected industries, but those who hold power at all levels. They see how technology is cutting out the middlemen, how people are becoming empowered, how bloggers can break national news and YouTube videos can create superstars.
And they don’t like it.
A Battle for the Future
Power bases in business and politics that have been forged over decades, if not centuries, are being threatened with extinction, and they know it. So the owners of that power are trying to hold on. They think they can do that by dragging us backward. They think that, by growing the public’s dependency on government, by taking away the entrepreneurial spirit and rewards and by limiting personal freedoms, they can slow down progress.
But they’re wrong. The intelligence explosion is coming so long as science itself continues. Trying to put the genie back in the bottle by dragging us toward serfdom won’t stop it and will, in fact, only leave the world with an economy and society that are completely unprepared for the amazing things that it could bring.
Robin Hanson, author of “The Economics of the Singularity” and an associate professor of economics at George Mason University, wrote that after the Singularity, “The world economy, which now doubles in 15 years or so, would soon double in somewhere from a week to a month.”
That is unfathomable. But even if the rate were much slower, say a doubling of the world economy in two years, the shock-waves from that kind of growth would still change everything we’ve come to know and rely on. A machine could offer the ideal farming methods to double or triple crop production, but it can’t force a farmer or an industry to implement them. A machine could find the cure for cancer, but it would be meaningless if the pharmaceutical industry or Food and Drug Administration refused to allow it. The machines won’t be the problem; humans will be.
And that’s why I wanted to write about this topic. We are at the forefront of something great, something that will make the Industrial Revolution look in comparison like a child discovering his hands. But we have to be prepared. We must be open to the changes that will come, because they will come. Only when we accept that will we be in a position to thrive. We can’t allow politicians to blame progress for our problems. We can’t allow entrenched bureaucrats and power-hungry executives to influence a future that they may have no place in.
Many people are afraid of these changes— of course they are: it’s part of being human to fear the unknown— but we can’t be so entrenched in the way the world works now that we are unable to handle change out of fear for what those changes might bring.
Change is going to be as much a part of our future as it has been of our past. Yes, it will happen faster and the changes themselves will be far more dramatic, but if we prepare for it, the change will mostly be positive. But that preparation is the key: we need to become more well-rounded as individuals so that we’re able to constantly adapt to new ways of doing things. In the future, the way you do your job may change four to five or fifty times over the course of your life. Those who cannot, or will not, adapt will be left behind.
At the same time, the Singularity will give many more people the opportunity to be successful. Because things will change so rapidly there is a much greater likelihood that people will find something they excel at. But it could also mean that people’s successes are much shorter-lived. The days of someone becoming a legend in any one business (think Clive Davis in music, Steven Spielberg in movies, or the Hearst family in publishing) are likely over. But those who embrace and adapt to the coming changes, and surround themselves with others who have done the same, will flourish.
When major companies, set in their ways, try to convince us that change is bad and that we must stick to the status quo, no matter how much human inquisitiveness and ingenuity try to propel us forward, we must look past them. We must know in our hearts that these changes will come, and that if we welcome them into our world, we’ll become more successful, more free, and more full of light than we could have ever possibly imagined.
Ray Kurzweil once wrote, “The Singularity is near.” The only question will be whether we are ready for it.
The citations for the chapter include:
- Luke Muehlhauser and Anna Salamon, "Intelligence Explosion: Evidence and Import"
- Daniel Dewey, "Learning What to Value"
- Eliezer Yudkowsky, "Artificial Intelligence as a Positive and a Negative Factor in Global Risk"
- Luke Muehlhauser and Louie Helm, "The Singularity and Machine Ethics"
- Luke Muehlhauser, "So You Want to Save the World"
- Michael Anissimov, "The Benefits of a Successful Singularity"
Why people want to die
Over and over again, someone says that living for a very long time would be a bad thing, and then some futurist tries to persuade them that their reasoning is faulty. They tell them that they think that way now, but they'll change their minds when they're older.
The thing is, I don't see that happening. I live in a small town full of retirees, and those few I've asked about it are waiting for death peacefully. When I ask them about their ambitions, or things they still want to accomplish, they have none.
Suppose that people mean what they say. Why do they want to die?
Even better cryonics – because who needs nanites anyway?
Abstract: in this post I propose a protocol for cryonic preservation (with the central idea of using high pressure, rather than highly toxic cryoprotectants, to prevent water from expanding), which I think has a chance of being non-destructive enough for us to be able to preserve and then resuscitate an organism with modern technologies. In addition, I propose a simplified experimental protocol for a shrimp - or another small model organism capable of surviving in very deep and cold waters; building a large pressure chamber is hard, and shrimp are a nice trade-off between depth of habitat and ease of obtaining them at the market - which is simple enough to be doable in a small lab or well-equipped garage setting.
Are there obvious problems with this, and how can they be addressed?
Is there a chance of pitching this experiment to a proper academic institution, or is it the garage after all?
Originally posted here.
I do think that the odds of ever developing advanced nanomachines and/or molecular-level brain scanning, plus algorithms for reversing information distortion - everything you need to undo the damage from conventional cryonic preservation, and even to some extent that of brain death by its modern definition, if the brain was preserved before it was too late - are high enough for currently existing cryonics to be a bet worth taking. This is dead serious, and it's an actionable item.
Less of an action item: what if future generations actually build a quantum Bayesian superintelligence, close enough in its capabilities to Solomonoff induction, at which point even a mummified brain, or one preserved in formalin, would be enough evidence to restore its original state? Or what if they invent read-only time travel, and make backups of everyone's mind right before they died (at which point it becomes indistinguishable from the belief in an afterlife existing right now)? Even without time travel, they could just use a Universe-sized supercomputer to simulate every single human physically possible, and naturally one of them is gonna be you. But aside from the obvious identity issues (and screw timeless identity), that relies on unknown unknowns with uncomputable probabilities, and I'd like to have as few leaps of faith and quantum suicides in my life as possible.
So although vitrification right after diagnosed brain death relies on far smaller assumptions, and is totally worth doing - let me reiterate that: go sign up for cryonics - it'd be much better if we had preservation protocols so non-destructive that we could actually freeze a living human, and then bring them back alive. If nothing else, that would hugely increase the public outreach, grant the preserved the status of patient (rather than cadaver) along with the corresponding human rights, get the procedure recognized as medicine covered by insurance or a single payer, allow doctors to initiate the preservation of a dying patient before brain death (again: I think everything short of information-theoretic death should potentially be reversible, but why take chances?), allow a suffering patient to opt for preservation rather than euthanasia (actually, I think this should be allowed right now: why on earth would we let a person do something that's guaranteed to kill them, but not something that might kill them or might eventually cure them?), or even allow patients suffering from degrading brain conditions (e.g. Alzheimer's) to opt for preservation before their memory and personality are permanently destroyed.
Let's fix cryonics! First of all, why can't we do it on living organisms? Because of cryoprotectant toxicity - every cryoprotectant efficient enough to prevent the formation of ice crystals is a strong enough poison to kill the organism (never mind that we can't even saturate the whole body with it - current technology only allows doing that for the brain alone). But without cryoprotectants the water will expand upon freezing and break the cells. There is, however, another way to prevent this: at pressures above 350 MPa, water slightly shrinks upon freezing rather than expanding.
So that's basically that: the key idea is to freeze (and keep) everything under pressure. Now, there are some tricks to that too.
It's not easy to put basically any animal, especially a mammal, under 350 MPa (about 3x the pressure at the bottom of the Mariana Trench). At this point even Trimix becomes toxic. Basically the only remaining solution is total liquid ventilation, which has one problem: it has never been applied successfully to a human. There's one fix to that I see: as far as I can tell, no one has ever attempted to perform it under high pressure, and the attempts basically failed because of the insufficient solubility of oxygen and carbon dioxide in perfluorocarbons. Well then, let's increase the pressure! Namely, go to 3 MPa on Trimix, which is doable, and only then switch to TLV, whose efficiency is improved by the higher gas solubility under high pressure. But there's another solution too. If you just connect a cardiopulmonary bypass (10 hours should be enough for the whole procedure), you don't need the surrounding liquid to even be breathable - it can just be saline. CPB also solves the problem of surviving the period after cardiac arrest (which will occur at around 30 centigrade) but before the freezing happens - you can just keep the blood circulating and delivering oxygen.
Speaking of hypoxia: even with the CPB, it's still a problem. You positively don't want the blood to circulate when freezing starts, lest it act like an abrasive water cutter. It's not that much of a problem at near-freezing temperatures, but still. Fortunately, this effect can be mitigated by administering insulin first (yay, it's the first proper academic citation in this post! Also yay, I thought of this before I even discovered that it's actually true). This makes sense: if oxygen is primarily used to metabolize glucose, less glucose means less oxygen consumed, and less damage done by hypoxia. Then there's another thing: on the phase diagram you can see that before entering the region of high-temperature ice at 632 MPa, the freezing temperature actually dips down to roughly -30 centigrade at 209~350 MPa. That would allow us to really shut down metabolism for good while water is still liquid and blood can be pumped by the CPB. From this point there are two options. First, we can do the normal thing and start freezing very slowly, to minimize the formation of ice crystals (even though they're smaller than the original water volume, they may still be sharp). Second, we can increase the pressure. That would lead to near-instantaneous freezing everywhere, thus completely eliminating the problem of hypoxia - before the freezing the blood is still circulating, and the freezing itself is very quick, way faster than can ever be achieved even by throwing a body into liquid helium under normal pressure. Video evidence suggests that quick freezing of water leads to the formation of a huge number of crystals, which is bad, but I don't know whether near-instantaneous freezing from a supercooled state and near-instantaneous freezing upon raising the pressure lead to the same effect. More experiments are needed, preferably not on humans.
So here is my preservation protocol:
- Anesthetize a probably terminally ill, but still conscious person.
- Connect them to a cardiopulmonary bypass.
- Replacing their blood with perfluorohexane is not necessary, since we seem to be already doing a decent job at having medium-term (several days) cardiopulmonary bypasses, but that could still help.
- Submerge them in perfluorohexane, making sure that no air bubbles are left.
- Slowly raise the ambient pressure to 350 MPa (~3.5kBar) without stopping the bypass.
- Apply a huge dose of insulin to reduce all their metabolic processes.
- Slowly cool them to -30 centigrade (at which point, given such pressure, water is still liquid), while increasing the dose of insulin, and raising the oxygen supply to the barely subtoxic level.
- Slowly raise the pressure to 1 GPa (~10 kBar), at which point the water solidifies, but does so by shrinking rather than expanding. Don't cut off the blood circulation until the moment when ice crystals start forming in the blood/perfluorohexane flow.
- Slowly lower the temperature to -173 centigrade or lower, as you wish.
And then back:
- Raise the temperature to -20 centigrade.
- Slowly lower the pressure to 350 MPa, at which point ice melts.
- Start artificial blood circulation with a barely subtoxic oxygen level.
- Slowly raise the temperature to +4 centigrade.
- Slowly lower the pressure to 1 Bar.
- Drain the ambient perfluorohexane and replace it with pure oxygen. Attach and start a medical ventilator.
- Slowly raise the temperature to +32 centigrade.
- Apply a huge dose of epinephrine and sugar, while transfusing the actual blood (preferably autotransfusion), to restart the heart.
- Rejoice.
I claim that this protocol allows you to freeze a living human to an arbitrarily low temperature, and then bring them back alive without brain damage, thus being the first true victory over death.
But let's start with something easy and small, like a shrimp. They already live in water, so there's no need to figure out the protocol for putting them into liquid. And they're already adapted to live under high pressure (no swim bladders or other cavities). And they're already adapted to live in cold water, so they should be expected to survive further cooling.
Small ones can be about 1 inch long, so let's be safe and use a 5 cm wide cylinder. To form ice III we need about 350 MPa, which gives us 350e6 * 3.14 * 0.025^2 / 9.8 ≈ 70 tonnes, or roughly 690 kN of force. Applying it directly or with a lever is unreasonable, since 70 tonnes of bending force is a lot even for steel, given the 5 cm target. A block and tackle system is probably a good solution - actually, two of them, one on each side of a beam used for compression, so we have 345 kN per system. And it looks like you can buy 40-50 ton manual hoists from Alibaba, though I have no idea about their quality.
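If you want to double-check that back-of-the-envelope number, here is a minimal sketch of the same arithmetic (my own, simply reusing the post's figures of 350 MPa and a 5 cm bore):

```python
# Back-of-the-envelope check: force needed to hold 350 MPa on a 5 cm bore.
import math

pressure = 350e6                 # Pa, roughly the ice III threshold
bore_diameter = 0.05             # m, the 5 cm cylinder from the post
area = math.pi * (bore_diameter / 2) ** 2

force_newtons = pressure * area              # ~6.9e5 N
force_tonnes = force_newtons / 9.8 / 1000    # ~70 tonnes-force

print(f"piston area:  {area * 1e4:.1f} cm^2")
print(f"force needed: {force_newtons / 1e3:.0f} kN (~{force_tonnes:.0f} t)")
```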
I'm not sure to what extent Pascal's law applies to solids, but if it does, the whole setup can be vastly optimized by creating a bottleneck for the piston. One problem is that we can no longer assume that water is completely incompressible - it has to be compressed to about 87% of its original volume - but aside from that, 350 MPa on a millimetre-thick rod is just 28 kg of force. To compress a 0.05 m by 0.1 m cylinder to 87% of its original volume we need to pump an extra 1e-4 m^3 of water into it, which amounts to 148 meters of rod travel - not terribly good. A 1 cm thick rod, on the other hand, would require almost 3 tonnes of force, but would move only 1.5 meters. Alternatively, the problem of applying constant pressure can be solved by enclosing the water in a plastic bag and filling the rest of the chamber with a liquid that has a lower freezing point but the same density. That way, for all the time it takes the water to freeze, it is under uniform external pressure, and afterwards it simply has nowhere to go.
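Here is a rough sketch of the thin-rod arithmetic as well (my own; it takes the post's ~1e-4 m^3 extra-volume figure at face value, so the travel numbers land in the same ballpark as, but not exactly on, the ones quoted above, depending on the assumed geometry):

```python
# Rough check of the "thin rod as piston" idea. Assumes water must be
# compressed to ~87% of its volume and that ~1e-4 m^3 of extra water has to
# be pumped into the chamber, as stated in the post.
import math

pressure = 350e6       # Pa
extra_volume = 1e-4    # m^3, figure taken from the post

def rod_force_and_travel(rod_diameter_m):
    area = math.pi * (rod_diameter_m / 2) ** 2
    force_kgf = pressure * area / 9.8   # force on the rod, in kg-force
    travel_m = extra_volume / area      # distance the rod must travel
    return force_kgf, travel_m

for d in (0.001, 0.01):                 # 1 mm and 1 cm rods
    force, travel = rod_force_and_travel(d)
    print(f"rod {d * 1000:.0f} mm: ~{force:,.0f} kgf, ~{travel:.1f} m of travel")
```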
Alternatively, one can just buy a 90,000 psi pump and 100,000 psi tubes and vessels, but let's face it: if they don't even list the price on their website, you probably don't want to know it. And since no institutions that can afford this sort of thing seem to be interested in cryonics research, we'll have to stick to makeshift solutions (at least until the shrimp experiment works, which would probably result in a publication in Nature and enough academic recognition for proper research to start).
If you can see the box, you can open the box
First post here, and I'm disagreeing with something in the main sequences. Hubris acknowledged, here's what I've been thinking about. It comes from the post "Are your enemies innately evil?":
On September 11th, 2001, nineteen Muslim males hijacked four jet airliners in a deliberately suicidal effort to hurt the United States of America. Now why do you suppose they might have done that? Because they saw the USA as a beacon of freedom to the world, but were born with a mutant disposition that made them hate freedom?
Realistically, most people don't construct their life stories with themselves as the villains. Everyone is the hero of their own story. The Enemy's story, as seen by the Enemy, is not going to make the Enemy look bad. If you try to construe motivations that would make the Enemy look bad, you'll end up flat wrong about what actually goes on in the Enemy's mind.
If I'm misreading this, please correct me, but the way I am reading this is:
1) People do not construct their stories so that they are the villains,
therefore
2) the idea that Al Qaeda is motivated by a hatred of American freedom is false.
Reading the Al Qaeda document released after the attacks, called "Why We Are Fighting You," you find the following:
What are we calling you to, and what do we want from you?
1. The first thing that we are calling you to is Islam.
A. The religion of tahwid; of freedom from associating partners with Allah Most High , and rejection of such blasphemy; of complete love for Him, the Exalted; of complete submission to his sharia; and of the discarding of all the opinions, orders, theories, and religions that contradict with the religion He sent down to His Prophet Muhammad. Islam is the religion of all the prophets and makes no distinction between them.
It is to this religion that we call you …
2. The second thing we call you to is to stop your oppression, lies, immorality and debauchery that has spread among you.
A. We call you to be a people of manners, principles, honor and purity; to reject the immoral acts of fornication, homosexuality, intoxicants, gambling and usury.
We call you to all of this that you may be freed from the deceptive lies that you are a great nation, which your leaders spread among you in order to conceal from you the despicable state that you have obtained.
B. It is saddening to tell you that you are the worst civilization witnessed in the history of mankind:
i. You are the nation who, rather than ruling through the sharia of Allah, chooses to invent your own laws as you will and desire. You separate religion from your policies, contradicting the pure nature that affirms absolute authority to the Lord your Creator….
ii. You are the nation that permits usury…
iii. You are a nation that permits the production, spread, and use of intoxicants. You also permit drugs, and only forbid the trade of them, even though your nation is the largest consumer of them.
iv. You are a nation that permits acts of immorality, and you consider them to be pillars of personal freedom.
"Freedom" is of course one of those words. It's easy enough to imagine an SS officer saying indignantly: "Of course we are fighting for freedom! For our people to be free of Jewish domination, free from the contamination of lesser races, free from the sham of democracy..."
If we substitute the symbol with the substance though, what we mean by freedom - "people to be left more or less alone, to follow whichever religion they want or none, to speak their minds, to try to shape society's laws so they serve the people" - then Al Qaeda is absolutely inspired by a hatred of freedom. They wouldn't call it "freedom", mind you, they'd call it "decadence" or "blasphemy" or "shirk" - but the substance is what we call "freedom".
Returning to the syllogism at the top, it seems to be that there is an unstated premise. The conclusion "Al Qaeda cannot possibly hate America for its freedom because everyone sees himself as the hero of his own story" only follows if you assume that What is heroic, what is good, is substantially the same for all humans, for a liberal Westerner and an Islamic fanatic.
(for Americans, by "liberal" here I mean the classical sense that includes just about everyone you are likely to meet, read or vote for. US conservatives say they are defending the American revolution, which was broadly in line with liberal principles - slavery excepted, but since US conservatives don't support that, my point stands).
When you state the premise baldly like that, you can see the problem. There's no contradiction in thinking that Muslim fanatics think of themselves as heroic precisely for being opposed to freedom, because they see their heroism as trying to extend the rule of Allah - Shariah - across the world.
Now to the point - we all know the phrase "thinking outside the box". I submit that if you can recognize the box, you've already opened it. Real bias isn't when you have a point of view you're defending, but when you cannot imagine that another point of view seriously exists.
That phrasing has a bit of negative baggage associated with it - that this is just a matter of pigheaded close-mindedness. Try thinking about it another way. Would you say to someone with dyscalculia, "You can't get your head around the basics of calculus? You are just being so close-minded!"? No, that's obviously nuts. We know that different people's minds work in different ways, that some people can see things others cannot.
Orwell once wrote about the British intellectuals' inability to "get" fascism, in particular in his essay on H.G. Wells. He wrote that the only people who really understood the nature and menace of fascism were either those who had felt the lash on their backs, or those who had a touch of the fascist mindset themselves. I suggest that some people just cannot imagine, cannot really believe, the enormous power of faith - of the idea of serving and fighting and dying for your god and His prophet. It is a kind of thinking that is simply alien to many.
Perhaps this is resisted because people think that "Being able to think like a fascist makes you a bit of a fascist". That's not really true in any way that matters - Orwell was one of the greatest anti-fascist writers of his time, and fought against it in Spain.
So - if you can see the box you are in, you can open it, and already have half-opened it. And if you are really in the box, you can't see the box. So, how can you tell if you are in a box that you can't see versus not being in a box?
The best answer I've been able to come up with is not to think of "box or no box" but rather "open or closed box". We all work from a worldview, simply because we need some knowledge to get further knowledge. If you know you come at an issue from a certain angle, you can always check yourself. You're in a box, but boxes can be useful, and you have the option to go get some stuff from outside the box.
The second is to read people in other boxes. I like steelmanning, it's an important intellectual exercise, but it shouldn't preclude finding actual Men of Steel - that is, people passionately committed to another point of view, another box, and taking a look at what they have to say.
Now you might say: "But that's steelmanning!" Not quite. Steelmanning is "the art of addressing the best form of the other person’s argument, even if it’s not the one they presented." That may, in some circumstances, lead you to make the mistake of assuming that what you think is the best argument for a position is the same as what the other guy thinks is the best argument for his position. That's especially important if you are addressing a belief held by a large group of people.
Again, this isn't to run down steelmanning - the practice is sadly rare, and anyone who attempts it has gained a big advantage in figuring out how the world is. It's just a reminder that the steelman you make may not be quite as strong as the steelman that is out to get you.
[EDIT: Link included to the document that I did not know was available online before now]
What Direct Instruction is
A couple of days ago, prompted by several recent posts by Owen_Richardson, I checked out the book "Theory of Instruction" (Engelmann and Carnine, 1982) from my university library and promised to read it this weekend and write a post about Direct Instruction. This is that post.
For FAI: Is "Molecular Nanotechnology" putting our best foot forward?
Molecular nanotechnology, or MNT for those of you who love acronyms, seems to be a fairly common trope on LW and in related literature. It's not really clear to me why. In many of the examples of "how AIs could help us" or "how AIs could rise to power," phrases like "cracks protein folding" or "making a block of diamond is just as easy as making a block of coal" are thrown about in ways that make me very, very uncomfortable. Maybe it's all true, maybe I'm just late to the transhumanist party and the obviousness of this information was with my invitation that got lost in the mail, but seeing all the physics swept under the rug like that sets off every crackpot alarm I have.
I must post the disclaimer that I have done a little bit of materials science, so maybe I'm just annoyed that you're making me obsolete, but I don't see why this particular possible future gets so much attention. Let us assume that a smarter-than-human AI will be very difficult to control and represents a large positive or negative utility for the entirety of the human race. Even given that assumption, it's still not clear to me that MNT is a likely element of the future. It isn't clear to me that MNT is physically practical. I don't doubt that it can be done in principle. I don't doubt that very clever metastable arrangements of atoms with novel properties can be dreamed up - indeed, that's my day job - but I have a hard time believing that the only reason you can't make a nanoassembler capable of arbitrary manipulations out of a handful of bottles you ordered from Sigma-Aldrich is that we're just not smart enough. Manipulating individual atoms means climbing huge binding energy curves; it's an enormously steep, enormously complicated energy landscape, and the Schrodinger equation scales very, very poorly as you add additional particles and degrees of freedom. Building molecular nanotechnology seems to me roughly equivalent to making arbitrary Lego structures by shaking a large bin of Lego in a particular way while blindfolded. Maybe a superhuman intelligence is capable of doing so, but it's not at all clear to me that it's even possible.
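To give a feel for the scaling complaint, here is a toy illustration (mine, not anything from the FAI literature): the dimension of a joint quantum state grows exponentially with the number of particles, so even storing an exact many-particle state, never mind solving for it, becomes hopeless very quickly. The "levels per particle" number below is an arbitrary assumption.

```python
# Toy illustration of why exact many-particle quantum calculations scale badly:
# the state-space dimension grows exponentially with the number of particles.
levels_per_particle = 10   # arbitrary assumption: 10 basis states per particle

for n_particles in (1, 2, 5, 10, 20):
    dim = levels_per_particle ** n_particles
    memory_gb = dim * 16 / 1e9   # one 16-byte complex amplitude per basis state
    print(f"{n_particles:2d} particles -> dimension {dim:.1e}, "
          f"~{memory_gb:.1e} GB just to store one state vector")
```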
I assume the reason that MNT gets added to discussions of AI is that we're trying to make the future sound more plausible by adding burdensome details. I understand that "AI and MNT" is less probable than AI or MNT alone, but the conjunction is supposed to sound more plausible. This is precisely where I have difficulty. I would estimate the probability of molecular nanotechnology (in the form of programmable replicators, grey goo, and the like) as lower than the probability of human- or superhuman-level AI. I can think of all sorts of objections to the former, but very few objections to the latter. Including MNT as a consequence of AI - especially including it without addressing any of the fundamental difficulties of MNT - harms, I would argue, the credibility of AI researchers. It makes me nervous about sharing FAI literature with people I work with, and it continues to bother me.
I am particularly bothered by this because it seems irrelevant to FAI. I'm fully convinced that a smarter-than-human AI could take control of the Earth via less magical means, using time-tested methods such as manipulating humans, rigging elections, making friends, killing its enemies, and generally being only marginally more clever and motivated than a typical human leader. A smarter-than-human AI could out-manipulate human institutions and out-plan human opponents with the sort of ruthless efficiency with which modern computers beat humans at chess. I don't think convincing people that smarter-than-human AIs have enormous potential for good and evil is particularly difficult, once you can get them to concede that smarter-than-human AIs are possible. I do think that waving your hands and saying "super-intelligence" at things that may be physically impossible makes the whole endeavor seem less serious. If I had read the chain of reasoning smart computer -> nanobots before I had built up a store of good will from reading the Sequences, I would have almost immediately dismissed the whole FAI movement as a bunch of soft science fiction, and it would have been very difficult to get me to take a second look.
Put in LW parlance, suggesting things not known to be possible by modern physics without detailed explanations puts you in the reference class "people on the internet who have their own ideas about physics". It didn't help, in my particular case, that one of my first interactions on LW was in fact with someone who appears to have their own view about a continuous version of quantum mechanics.
And maybe it's just me. Maybe this did not bother anyone else, and it's an incredible shortcut for getting people to realize just how different a future a greater than human intelligence makes possible and there is no better example. It does alarm me though, because I think that physicists and the kind of people who notice and get uncomfortable when you start invoking magic in your explanations may be the kind of people FAI is trying to attract.
The Zeroth Skillset
Related: 23 Cognitive Mistakes that make People Play Bad Poker
Followed by: Situational Awareness And You
If epistemic rationality is the art of updating one's beliefs based on new evidence to better correspond with reality, the zeroth skillset of epistemic rationality-- the one that enables all other skills to function-- is that of situational awareness. Situational awareness-- sometimes referred to as "situation awareness" or simply "SA"-- is the skillset and related state of mind that allows one to effectively perceive the world around them.
One might ask how this relates to rationality at all. The answer is simple. Just as the skill of lucid dreaming is near-useless without dream recall,[1] the skills of updating based on evidence and actually changing your mind are near-useless without good awareness skills-- after all, you can't update based on evidence that you haven't collected! A high degree of situational awareness is thus an important part of one's rationalist toolkit, as it allows you to notice evidence about the world around you that you would otherwise miss. At times, this evidence can be of critical importance. I can attest that I have personally saved the lives of friends on two occasions thanks to good situational awareness, and have saved myself from serious injury or death many times more.
Situational awareness is further lauded by elite military units, police trainers, criminals, intelligence analysts, and human factors researchers. In other words, people who have to make very important-- often life-or-death-- decisions based on limited information consider situational awareness a critical skill. This should tell us something-- if those individuals for whom correct decisions are most immediately relevant all stress the importance of situational awareness, it may be a more critical skill than we realize.
Unfortunately, the only discussion of situational awareness that I've seen on LessWrong or related sites has been a somewhat oblique reference in Louie Helm's "roadmap of errors" from 23 Cognitive Mistakes that make People Play Bad Poker.[2] I believe that situational awareness is important enough that it merits an explicit sequence of posts on its advantages and how to cultivate it, and this post will serve as the introduction to that sequence.
The first post in the sequence, unimaginatively titled "Situational Awareness and You," will be posted within the week. Other planned posts include "Cultivating Awareness," "How to Win a Duel," "Social Awareness," "Be Aware of Your Reference Class," "Signaling and Predation," and "Constant Vigilance!"
If you have any requests for things to add, general questions about the sequence, meta-thoughts about SA, and so on, this post is an appropriate place for that discussion; as this is primarily a meta post, it has been posted to Discussion. Core posts in the sequence will be posted to Main.
[1] What good are lucid dreams if you can't remember them?
[2] This is a very useful summary and you should read it even if you don't play poker.
From First Principles
Related: Truly a Part of You, What Data Generated That Thought
Some Case Studies
The other day my friend was learning to solder, and he asked an experienced hacker for advice. The hacker told him that because heat rises, you should apply the soldering iron underneath the work to maximize heat transfer. Seems reasonable - logically inescapable, even. When I heard of this, I thought through why heat rises and when, and saw that it was not so. I don't remember the conversation, but the punchline is that hot things become less dense, and less dense things float - which only matters for convection within a fluid, and a soldering joint isn't sitting in one in any relevant sense. In the case of soldering, the primary mode of heat transfer is conduction through the liquid metal, so to maximize heat transfer, get the tip wet before you stick it in, and don't worry about position.
This is a case of surface reasoning failing because the heuristic (heat rises) was not truly a part of my friend or the random hacker. I want to focus on the actual 5-second skill of going back To First Principles that catches those failures.
Here's another; watch for the 5-second cues and responses. A few years ago, I was building a robot submarine for a school project. We were in the initial concept design phase, wondering what it should look like. My friend Peter said, "It should be wide, because stability is important." I noticed the heuristic "low and wide is stable" and thought to myself, "Where does that come from? When is it valid?" In the case of catamarans or sports cars, wide is stable because it increases the lever arm between the restoring force (gravity) and the support point (wheel or hull), and low makes the tipping point harder to reach. Under water, there is no tipping point, and things are better modeled as hanging from their center of volume. In other words, underwater the stability criterion is vertical separation, instead of horizontal separation. (More precisely, you can model the submarine as a damped pendulum, and notice that you want to tune the parameters for approximately critical damping.) We went back to First Principles and figured out what actually mattered, then went on to build an awesome robot.
Let's review what happened. We noticed a heuristic or bit of qualitative knowledge (wide is stable), and asked "Why? When? How much?", which led us to the quantitative answer, which told us much more precisely exactly what matters (critical damping) and what does not matter (width, maximizing restoring force, etc).
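Here is a minimal sketch of what "pull out the equations" looked like in that case: a small-angle damped-pendulum model of the hull's roll, with made-up parameter values (they are illustrative only, not the project's actual numbers). The point it makes is that the damping you tune, not the width, sets how fast the thing settles; critical damping (c = 2*omega) settles fastest.

```python
# Small-angle damped-pendulum model of an underwater hull's roll:
#   theta'' + c*theta' + omega**2 * theta = 0
# omega is set by the vertical separation of the centers of volume and mass;
# the numbers below are made up purely for illustration.
def settle_time(c, omega=1.0, theta0=0.3, dt=1e-3, tol=0.01, t_max=60.0):
    """Time after which the roll angle stays within `tol` radians of upright."""
    theta, vel, t = theta0, 0.0, 0.0
    last_outside = 0.0
    while t < t_max:
        acc = -c * vel - omega ** 2 * theta
        vel += acc * dt
        theta += vel * dt
        t += dt
        if abs(theta) > tol:
            last_outside = t
    return last_outside

omega = 1.0
for c in (0.2 * omega, 2.0 * omega, 10.0 * omega):  # under-, critically-, over-damped
    print(f"c = {c:4.1f}: settles in ~{settle_time(c, omega):.1f} s")
```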
A more Rationality-related example: I recently thought about Courage, and the fact that most people are too afraid of risk (beyond just utility concavity), and as a heuristic we should be failing more. Around the same time, I'd been hounding Michael Vassar (at minicamp) for advice. One piece that stuck with me was "use decision theory". Ok, Courage is about decisions; let's go.
"You should be failing more", they say. You notice the heuristic, and immediately ask yourself "Why? How much more? Prove it from first principles!" "Ok", your forked copy says. "We want to take all actions with positive expected utility. By the law of large numbers, in (non-black-swan) games we play a lot of, observed utility should approximate expected utility, which means you should be observing just as much fail as win on the edge of what you're willing to do. Courage is being well calibrated on risk; If your craziest plans are systematically succeeding, you are not well calibrated and you need to take more risks." That's approximately quantitative, and you can pull out the equations to verify if you like.
Notice all the subtle qualifications that you may not have guessed from the initial advice: the game is non-Pascalian and the law of large numbers applies, you can actually observe utility, it's your craziest plans, and just as much fail as win (not just as many, not more). (Example application: one of the best matches for those conditions is social interaction.) Those of you who actually busted out the equations and saw the math of it, notice how much more you understand than I am able to communicate with just words.
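If you'd rather run the law-of-large-numbers claim than take it on faith, here's a minimal simulation sketch. The symmetric payoffs and the uniform prior over success probability are my toy assumptions, not part of the argument: an agent that takes every positive-expected-utility gamble should see its marginal, barely-worth-it gambles fail about as often as they succeed.

```python
import random

# Toy sketch: take every gamble with positive expected utility, then look only
# at the *marginal* gambles -- the ones barely worth taking. If those mostly
# succeed, the acceptance threshold was too conservative (miscalibration).

random.seed(0)

def simulate(n=100_000):
    marginal_wins = marginal_losses = 0
    for _ in range(n):
        p = random.random()          # probability this plan succeeds (toy prior)
        win, loss = 1.0, -1.0        # symmetric payoffs for simplicity
        ev = p * win + (1 - p) * loss
        if ev <= 0:
            continue                  # only take positive-EV plans
        if ev < 0.05:                 # the "craziest" accepted plans: barely positive EV
            if random.random() < p:
                marginal_wins += 1
            else:
                marginal_losses += 1
    return marginal_wins, marginal_losses

wins, losses = simulate()
print(wins, losses)   # roughly equal; systematically more wins here means miscalibration
```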
Ok, now I've named three, so we can play the generalization game without angering the gods.
On the Five-Second Level
Trigger: Notice an attempt to use some bit of knowledge or a heuristic. Something qualitative, something with unclear domain, something that affects what you are doing, something where you can't see the truth.
Action: Ask yourself: What problem does it try to solve (what's its interface, type signature, domain, etc)? What's the specific mechanism of its truth when it is true? In what situations does that hold? Is this one of those? If not, can we derive what the correct result would be in this case? Basically "prove it". Sometimes it will take 2 seconds, sometimes a day or two; if it looks like you can't immediately see it, come up with whatever quick approximation you can and update towards "I don't know what's going on here". Come back later for practice.
It doesn't have to be a formal proof that would convince even the most skeptical mathematician or outsmart even the most powerful demon, but be sure to see the truth.
Without this skill of going back to First Principles, I think you would not fully get the point of truly a part of you. Why is being able to regenerate your knowledge useful? What are the hidden qualifications on that? How does it work? (See what I'm doing here?) Once you see many examples of the kind of expanded and formidably precise knowledge you get from having performed a derivation, and the vague and confusing state of having only a theorem, you will notice the difference. What the difference is, in terms of a derivation From First Principles, is left as an exercise for the reader (i.e. I don't know). Even without that, though, having seen the difference is a huge step up.
From having seen the difference between derived and taught knowledge, I notice that one of the caveats of making knowledge Truly a Part of You is that just being able to get it From First Principles is not enough; actually having done the proof tells you a lot more than simply what the correct theorem is. Do not take my word for it; go do some proofs; see the difference.
So far I've just described something that has been unusually valuable for me. Can it be taught? Will others gain as much? I don't know; I got this one more or less by intellectual lottery. It can probably be tested, though:
Testing the "Prove It" Habit
In school, we had this awesome teacher for thermodynamics and fluid dynamics. He was usually voted best in faculty. His teaching and testing style fit perfectly with my "learn first principles and derive on the fly" approach that I've just outlined above, so I did very well in his classes.
In the lectures and homework, we'd learn all the equations, where they came from (with derivations), how they are used, etc. He'd get us to practice and be good at straightforward application of them. Some of the questions required a bit of creativity.
On the exams, the questions were substantially easier, but they all required creativity and really understanding the first principles. "Curve Balls", we called them. Otherwise smart people found his tests very hard; I got all my marks from them. It's fair to say I did well because I had a very efficient and practiced From First Principles groove in my mind. (This was fair, because actually studying for the test was a reasonable substitute for having that groove.)
So basically, I think a good discriminator would be to throw people difficult problems that can be solved with standard procedure and surface heuristics, and then some easier problems that require creative application of first principles, or don't quite work with standard heuristics (but seem to).
If your subjects have consistent scores between the two types, they are doing it From First Principles. If they get the standard problems right, but not the curve balls, they aren't.
Examples:
Straight: Bayesian cancer test. Curve: Here's the base rate and positive rate, how good is the test (likelihood ratio)? (See the sketch after this list.)
Straight: Sunk cost on some bad investment. Curve: Something where switching costs, opportunity for experience make staying the correct thing.
Straight: Monty Hall. Curve: Ignorant Monty Hall.
Etc.
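For the first pair, here's a hedged sketch with the standard textbook numbers rather than any particular test: the straight version computes the posterior from the base rate and test characteristics; one reading of the curve ball takes the base rate and the probability of disease after a positive result, and backs out the likelihood ratio using the odds form of Bayes.

```python
# Hedged sketch of the "straight" problem and one reading of the "curve ball"
# (numbers are the usual textbook ones, not from the post):

def posterior(prior, sensitivity, false_positive_rate):
    """Straight: P(disease | positive test) by Bayes' theorem."""
    p_pos = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_pos

def likelihood_ratio(prior, posterior_given_positive):
    """Curve ball (one reading): given the base rate and the posterior after a
    positive result, back out how good the test is, as a likelihood ratio."""
    prior_odds = prior / (1 - prior)
    post_odds = posterior_given_positive / (1 - posterior_given_positive)
    return post_odds / prior_odds

p = posterior(prior=0.01, sensitivity=0.8, false_positive_rate=0.096)
print(round(p, 3))                          # ~0.078
print(round(likelihood_ratio(0.01, p), 1))  # ~8.3, i.e. sensitivity / false positive rate
```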
Exercises
Again, maybe this can't be taught, but here are some practice ideas just in case it can. I got substantial value from figuring these out From First Principles. Some may be correct, others incorrect, or correct in a limited range. The point is to use them to point you to a problem to solve; once you know the actual problem, ignore the heuristic and just go for truth:
Science says good theories make bold predictions.
Deriving From First Principles is a good habit.
Boats go where you point them, so just sail with the bow pointed to the island.
People who do bad things should feel guilty.
I don't have to feel responsible for people getting tortured in Syria.
If it's broken, fix it.
(post more in comments)
Value of Information: 8 examples
ciphergoth just asked what the actual value of Quantified Self/self-experimentation is. This finally tempted me into running value of information calculations on my own experiments. It took me all afternoon because it turned out I didn’t actually understand how to do it, and I had a hard time figuring out the right values for specific experiments. (I may not have gotten it right, still. Feel free to check my work!) Then it turned out to be too long for a comment, and as usual the master versions will be on my website at some point. But without further ado!
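(For orientation before the examples, here is the basic shape of such a calculation as a minimal sketch, with made-up numbers rather than any of my actual experiments: you're deciding whether to adopt an intervention, and the value of information is how much better your decision gets if you learn whether it works before committing.)

```python
# Minimal value-of-information sketch (toy numbers, not the experiments above).
# Decision: adopt an intervention or not. You're 30% sure it works; if it works
# it's worth $500/year, if not it costs $100/year in hassle.

p_works = 0.3
value_if_works = 500.0
cost_if_not = -100.0

# Best action *without* more information: pick the higher expected value.
ev_adopt = p_works * value_if_works + (1 - p_works) * cost_if_not
ev_skip = 0.0
ev_no_info = max(ev_adopt, ev_skip)

# With perfect information you adopt only in the worlds where it works.
ev_perfect_info = p_works * value_if_works + (1 - p_works) * 0.0

evpi = ev_perfect_info - ev_no_info
print(ev_adopt, evpi)   # 80.0, 70.0 -> worth paying up to ~$70 to find out for sure
# A real experiment gives imperfect information, so its value is at most this
# EVPI, minus the cost of actually running it.
```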
Teenage Rationalists and Changing Your Mind
I remember the moment when I became an atheist.
I was reading Religion's Claim to Be Non-Disprovable, an uneasy feeling growing in my head, and then I reached the bottom of the article, stared at the screen for a couple of seconds, and got it.
"There is no God," I whispered. (Then I braced myself to be hit by a thunderbolt from the sky, so the belief was still paying rent, right to the very end).
No thunderbolt came. I tried again, a little louder. "There is no God."
It was...
kinda obvious, actually. I mostly felt disappointed in myself for needing someone to explain it to me, like I'd failed a test and hadn't even realized it was a test until it was too late. Friendly AI? Never would have figured that one out myself. But it shouldn't have taken Eliezer-level intelligence to point out that there's no one sensible in charge of the universe. And so - without a crisis of faith, without worry, without further drama - I changed my mind.
Over the last 6 months, I've changed my beliefs about a lot of things. I get the impression that's pretty standard, for a first read-through of the sequences. The interesting part is that it wasn't hard. After reading everything on How to Actually Change Your Mind, I'd expected letting go of beliefs I'd held my entire life to be a bit of an ordeal. It really wasn't. I didn't agree with the LessWrong consensus on every issue (I still don't), but whenever I came to agree (or to modify my position in that direction) I said so, and reevaluated the appropriate assumptions, and adjusted my model of the world, and then went on to the next article.
When I started the Sequences, I was 16. I don't think I'm generalizing from one example in terms of my ease of accepting new ideas; when I've explained these concepts to other smart teenagers, they usually also get the implications immediately and change their mind without apparent difficulty. It may be that most people rarely change their mind, but teenagers - at least the teenagers I know - change their mind a lot. I've watched my friends change their mind on life-changing decisions - colleges, careers, religion - every couple of weeks. Eliezer writes in "We Change our Mind Less Often Than We Think":
once I could guess what my answer would be - once I could assign a higher probability to deciding one way than other - then I had, in all probability, already decided.
I haven't asked my friends to specify the probability they'll make a given decision (typical people find this annoying for some reason), but I've routinely heard them express high levels of confidence in a choice, only to have made a totally different decision the next day.
There are both advantages and disadvantages to changing your mind easily, but I think it's worth looking at the reasons it's easier for younger people to change their mind, and whether they have any implications for changing your mind in general. I've identified a couple reasons why it seems to be easier for teenagers to change their mind:
- There is less social pressure to be consistent when you're younger. Most adults I know remember switching their major four times in college, and switching which college they wanted to go to more often than that. Adults who change their career four times in four years are seen as undesirable employees, indecisive and probably untrustworthy; kids who do the same are taking advantage of all the opportunities available to them.
Lessons for Rationalists: Social pressure to be consistent is one of the big reasons why people don't change their minds. Don't state opinions publicly if you'll later feel pressured to stick by them; ask yourself how much of your attachment to a belief is related to what other people will think of you; foster a community where changing your mind is expected and encouraged. I think LessWrong does really well at all of these.
- Kids have less invested in their beliefs. If you're married to a theist and raising your kids in the tradition of a particular religion, it's a lot harder to suddenly change your mind about the foundations of your life. Similarly, people who've already experienced the loss of people close to them seem to have a lot more invested in the idea that death is the natural cycle of life.
Lessons for Rationalists: It's been suggested before (as a way of avoiding the sunk cost fallacy) that you imagine you've been teleported into this life, and have to decide what paths to take (independent of what the person-who-used-to-be-here was doing with their life). Ask yourself what you have invested in your current beliefs and what you would give up if you changed your mind. Try to find a third alternative between rejecting everything you once believed and clinging stubbornly to a lie; those are rarely really the only options.
- The fewer Fully General Counterarguments you know, the harder it is to maintain a belief in the face of opposing evidence. It's easier to convince a regular religious person of atheism than a theistic philosopher; if you haven't heard all the arguments for atheism before, they seem pretty devastating; if you have already heard them, and built up an elaborate mental defense system, it's easier to ignore them. Knowing about biases can hurt people; knowing more in general seems to also hurt people, unless they first learn how to avoid motivated skepticism.
Lessons for Rationalists: We really should start teaching this stuff in elementary schools. The more people learn about rationality before they get good at clever arguments, the better the odds they'll internalize it. LessWrong has discussed this a fair bit, but not done a ton about it. If people agree this is important, I'm planning a couple more posts on outreach to teenagers.
What other explanations are there?
tl;dr: Changing your mind is easier when you're younger. When you want to change your mind, try thinking like a teenager; if you want to be involved in rationality outreach, teach kids.
Recent site changes
Recent site changes have generated more unhappiness than I expected. This post is a brief note to share resources that will make it easier for concerned site users to track what's happening and what we intend.
- First, know that we're listening. We'll make further site changes next week that will likely include some reversions.
- The official site issue tracker remains unchanged, but for the next week or so we'll work from this public Google Doc (just because it's lighter weight). Nothing on that document is a promise - just evidence of our current thinking. We'll strike out items on that list as we deliver them to our (private) staging server, and will roll them out onto the live site soon after.
- I've reached out to a small handful of SIAI and LessWrong heavyweights to track my balance as we make these changes. My feed should make it clear that I'm trying to act with calm rationality, but I'm obviously invested in the work we've shared to date and asking for some external help seems prudent.
- I'll track discussion on this post.
Meta: Cleaning the front page
All the meetup announcements get promoted, so the front page ends up full of 'em: half of it right now (5/10) is meetup announcements, and with the addition of the quote threads only 30% of the front page is currently 'content'. While meetup announcements are all well and good, it seems counterproductive to have them up there after the meetup date, as is the case with four out of the current five -- it just clutters up the front page even more without providing any benefit.
If post promotion is reversible, it would seem to be a simple step for one of the moderators to depromote each meetup announcement once it's taken place.
(Apologies if this is the wrong place to put an organizational suggestion; I didn't find any obvious better place.)
Wear a Helmet While Driving a Car
A 2006 study showed that “280,000 people in the U.S. receive a motor vehicle induced traumatic brain injury every year” so you would think that wearing a helmet while driving would be commonplace. Race car drivers wear helmets. But since almost no one wears a helmet while driving a regular car, you probably fear that if you wore one you would look silly, attract the notice of the police for driving while weird, or the attention of another driver who took your safety attire as a challenge. (Car drivers are more likely to hit bicyclists who wear helmets.)
The $30+shipping Crasche hat is designed for people who should wear a helmet but don’t. It looks like a ski cap, but contains concealed lightweight protective material. People who have signed up for cryonics, such as myself, would get an especially high expected benefit from using a driving helmet because we very much want our brains to “survive” even a “fatal” crash. I have been using a Crasche hat for about a week.
Announcement: The Sequences eBook will be released in mid-March
The Sequences are being released as an eBook, titled Rationality: From AI to Zombies, on March 12.
We went with the name "Rationality: From AI to Zombies" (based on shminux's suggestion) to make it clearer to people — who might otherwise be expecting a self-help book, or an academic text — that the style and contents of the Sequences are rather unusual. We want to filter for readers who have a wide-ranging interest in (/ tolerance for) weird intellectual topics. Alternative options tended to obscure what the book is about, or obscure its breadth / eclecticism.
The book's contents
Around 340 of Eliezer's essays from 2009 and earlier will be included, collected into twenty-six sections ("sequences"), compiled into six books:
- Map and Territory: sequences on the Bayesian conceptions of rationality, belief, evidence, and explanation.
- How to Actually Change Your Mind: sequences on confirmation bias and motivated reasoning.
- The Machine in the Ghost: sequences on optimization processes, cognition, and concepts.
- Mere Reality: sequences on science and the physical world.
- Mere Goodness: sequences on human values.
- Becoming Stronger: sequences on self-improvement and group rationality.
The six books will be released as a single sprawling eBook, making it easy to hop back and forth between different parts of the book. The whole book will be about 1,800 pages long. However, we'll also be releasing the same content as a series of six print books (and as six audio books) at a future date.
The Sequences have been tidied up in a number of small ways, but the content is mostly unchanged. The largest change is to how the content is organized. Some important Overcoming Bias and Less Wrong posts that were never officially sorted into sequences have now been added — 58 additions in all, forming four entirely new sequences (and also supplementing some existing sequences). Other posts have been removed — 105 in total. The following old sequences will be the most heavily affected:
- Map and Territory and Mysterious Answers to Mysterious Questions are being merged, expanded, and reassembled into a new set of introductory sequences, with more focus placed on cognitive biases. The name 'Map and Territory' will be re-applied to this entire collection of sequences, constituting the first book.
- Quantum Physics and Metaethics are being heavily reordered and heavily shortened.
- Most of Fun Theory and Ethical Injunctions is being left out. Taking their place will be two new sequences on ethics, plus the modified version of Metaethics.
I'll provide more details on these changes when the eBook is out.
Unlike the print and audio-book versions, the eBook version of Rationality: From AI to Zombies will be entirely free. If you want to purchase it on the Kindle Store and download it directly to your Kindle, it will also be available on Amazon for $4.99.
To make the content more accessible, the eBook will include introductions I've written up for this purpose. It will also include a link to a LessWrongWiki glossary, which I'll be recruiting LessWrongers to help populate with explanations of references and jargon from the Sequences.
I'll post an announcement to Main as soon as the eBook is available. See you then!
Bayes Academy: Development report 1
Some of you may remember me proposing a game idea that went by the name of The Fundamental Question. Some of you may also remember me talking a lot about developing an educational game about Bayesian Networks for my MSc thesis, but not actually showing you much in the way of results.
Insert the usual excuses here. But thanks to SSRIs and mytomatoes.com and all kinds of other stuff, I'm now finally on track towards actually accomplishing something. Here's a report on a very early prototype.
This game has basically two goals: to teach its players something about Bayesian networks and probabilistic reasoning, and to be fun. (And third, to let me graduate by giving me material for my Master's thesis.)
We start with the main character stating that she is nervous. Hitting any key, the player proceeds through a number of lines of internal monologue:
I am nervous.
I’m standing at the gates of the Academy, the school where my brother Opin was studying when he disappeared. When we asked the school to investigate, they were oddly reluctant, and told us to drop the issue.
The police were more helpful at first, until they got in contact with the school. Then they actually started threatening us, and told us that we would get thrown in prison if we didn’t forget about Opin.
That was three years ago. Ever since it happened, I’ve been studying hard to make sure that I could join the Academy once I was old enough, to find out what exactly happened to Opin. The answer lies somewhere inside the Academy gates, I’m sure of it.
Now I’m finally 16, and facing the Academy entrance exams. I have to do everything I can to pass them, and I have to keep my relation to Opin a secret, too.
???: “Hey there.”
Eep! Someone is talking to me! Is he another applicant, or a staff member? Wait, let me think… I’m guessing that an applicant would look a lot younger than a staff member! So, to find that out… I should look at him!
[You are trying to figure out whether the voice you heard is a staff member or another applicant. While you can't directly observe his staff-nature, you believe that he'll look young if he's an applicant, and like an adult if he's a staff member. You can look at him, and therefore reveal his staff-nature, by right-clicking on the node representing his appearance.]
Here is our very first Bayesian Network! Well, it's not really much of a network: I'm starting with the simplest possible case in order to provide an easy start for the player. We have one node that cannot be observed ("Student", its hidden nature represented by showing it in greyscale), and an observable node ("Young-looking") whose truth value is equal to that of the Student node. All nodes are binary random variables, either true or false.
According to our current model of the world, "Student" has a 50% chance of being true, so it's half-colored in white (representing the probability of it being true) and half-colored in black (representing the probability of it being false). "Young-looking" inherits its probability directly. The player can get a bit of information about the two nodes by left-clicking on them.
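If you prefer code to prose, this first "network" is small enough to write out by hand. Here's a minimal sketch of the same update in Python (not the game's actual Java code), assuming "Young-looking" is a perfect copy of "Student":

```python
# Sketch of the first network as described above: one hidden node, Student,
# with a 50% prior, and one observable node, Young-looking, that copies it.

def p_student_given_looks(looks_young: bool, prior: float = 0.5) -> float:
    """P(Student | observation), with P(Young-looking | Student) = 1
    and P(Young-looking | not Student) = 0."""
    p_obs_given_student = 1.0 if looks_young else 0.0
    p_obs_given_staff = 0.0 if looks_young else 1.0
    p_obs = p_obs_given_student * prior + p_obs_given_staff * (1 - prior)
    return p_obs_given_student * prior / p_obs

print(p_student_given_looks(True))   # 1.0 -- observing "young" settles the hidden node
```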
The game also offers alternate color schemes for colorblind people who may have difficulties distinguishing red and green.
Now we want to examine the person who spoke to us. Let's look at him, by right-clicking on the "Young-looking" node.
Not too many options here, because we're just getting started. Let's click on "Look at him", and find out that he is indeed young, and thus a student.
This was the simplest type of minigame offered within the game. You are given a set of hidden nodes whose values you're tasked with discovering by choosing which observable nodes to observe. Here the player had no way to fail, but later on, the minigames will involve a time limit and too many observable nodes to inspect within that time limit. It then becomes crucial to understand how probability flows within a Bayesian network, and which nodes will actually let you know the values of the hidden nodes.
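To gesture at what "which nodes will actually let you know the values of the hidden nodes" means quantitatively, here's a sketch of how I think about it (my toy model, not anything implemented in the game yet): score each candidate observation by its expected information gain about a binary hidden node, and under time pressure observe the highest-scoring nodes first.

```python
import math

# Toy sketch: pick which binary node to observe by expected information gain
# (mutual information) with the hidden binary variable H.

def entropy(p):
    return 0.0 if p in (0.0, 1.0) else -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def info_gain(p_h, p_obs_given_h, p_obs_given_not_h):
    """Expected reduction in entropy of H from observing one binary node."""
    p_obs = p_obs_given_h * p_h + p_obs_given_not_h * (1 - p_h)
    post_if_true = p_obs_given_h * p_h / p_obs if p_obs else 0.0
    p_not_obs = 1 - p_obs
    post_if_false = (1 - p_obs_given_h) * p_h / p_not_obs if p_not_obs else 0.0
    expected_post_entropy = p_obs * entropy(post_if_true) + p_not_obs * entropy(post_if_false)
    return entropy(p_h) - expected_post_entropy

# A nearly deterministic indicator beats a weakly correlated one:
print(round(info_gain(0.5, 0.95, 0.05), 3))  # ~0.714 bits
print(round(info_gain(0.5, 0.60, 0.40), 3))  # ~0.029 bits
```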
The story continues!
Short for an adult, face has boyish look, teenagerish clothes... yeah, he looks young!
He's a student!
...I feel like I’m overthinking things now.
...he’s looking at me.
I’m guessing he’s either waiting for me to respond, or there’s something to see behind me, and he’s actually looking past me. If there isn’t anything behind me, then I know that he must be waiting for me to respond.
Maybe there's a monster behind me, and he's paralyzed with fear! I should check that possibility before it eats me!
[You want to find out whether the boy is waiting for your reply or staring at a monster behind you. You know that he's looking at you, and your model of the world suggests that he will only look in your direction if he's waiting for you to reply, or if there's a monster behind you. So if there's no monster behind you, you know that he's waiting for you to reply!]
Slightly more complicated network, but still, there's only one option here. Oops, apparently the "Looks at you" node says it's an observable variable that you can right-click to observe, despite the fact that it's already been observed. I need to fix that.
Anyway, right-clicking on "Attacking monster" brings up a "Look behind you" option, which we'll choose.
You see nothing there. Besides trees, that is.
Boy: “Um, are you okay?”
“Yeah, sorry. I just… you were looking in my direction, and I wasn’t sure of whether you were expecting me to reply, or whether there was a monster behind me.”
He blinks.
Boy: “You thought that there was a reasonable chance for a monster to be behind you?”
I’m embarrassed to admit it, but I’m not really sure of what the probability of a monster having snuck up behind me really should have been.
My studies have entirely focused on getting into this school, and Monsterology isn’t one of the subjects on the entrance exam!
I just went with a 50-50 chance since I didn’t know any better.
Boy: “Okay, look. Monsterology is my favorite subject. Monsters avoid the Academy, since it’s surrounded by a mystical protective field. There’s no chance of them getting even near! 0 percent chance.”
“Oh. Okay.”
[Your model of the world has been updated! The prior of the variable 'Monster Near The Academy' is now 0%.]
Then stuff happens and they go stand in line for the entrance exam or something. I haven't written this part. Anyway, then things get more exciting, for a wild monster appears!
Stuff happens
AAAAAAH! A MONSTER BEHIND ME!
Huh, the monster is carrying a sword.
Well, I may not have studied Monsterology, but I sure did study fencing!
[You draw your sword. Seeing this, the monster rushes at you.]
He looks like he's going to strike. But is it really a strike, or is it a feint?
If it's a strike, I want to block and counter-attack. But if it's a feint, that leaves him vulnerable to my attack.
I have to choose wisely. If I make the wrong choice, I may be dead.
What did my master say? If the opponent has at least two of dancing legs, an accelerating midbody, and ferocious eyes, then it's an attack!
Otherwise it's a feint! Quick, I need to read his body language before it's too late!
Now get to the second type of minigame! Here, you again need to discover the values of some number of hidden variables within a time limit, but here it is in order to find out the consequences of your decision. In this one, the consequence is simple - either you live or you die. I'll let the screenshot and tutorial text speak for themselves:
[Now for some actual decision-making! The node in the middle represents the monster's intention to attack (or to feint, if it's false). Again, you cannot directly observe his intention, but on the top row, there are things about his body language that signal his intention. If at least two of them are true, then he intends to attack.]
[Your possible actions are on the bottom row. If he intends to attack, then you want to block, and if he intends to feint, you want to attack. You need to inspect his body language and then choose an action based on his intentions. But hurry up! Your third decision must be an action, or he'll slice you in two!]
In reality, the top three variables are not really independent of each other. We want to make sure that the player can always win this battle despite only having three actions. That's two actions for inspecting variables, and one action for actually making a decision. So this battle is rigged: either the top three variables are all true, or they're all false.
...actually, now that I think of it, the order of the variables is wrong. Logically, the body language should be caused by the intention to attack, and not vice versa, so the arrows should point from the intention to body language. I'll need to change that. I got these mixed up because the prototypical exemplar of a decision minigame is one where you need to predict someone's reaction from their personality traits, and there the personality traits do cause the reaction. Anyway, I want to get this post written before I go to bed, so I won't change that now.
Right-clicking "Dancing legs", we now see two options besides "Never mind"!
We can find out the dancingness of the enemy's legs by thinking about our own legs - we are well-trained, so our legs are instinctively mirroring our opponent's actions to prevent them from getting an advantage over us - or by just instinctively feeling where they are, without the need to think about them! Feeling them would allow us to observe this node without spending an action.
Unfortunately, feeling them has "Fencing 2" as a prerequisite skill, and we don't have that. Nor could we have it at this point in the game. The option is just there to let the player know that there are skills to be gained in this game, and to make them look forward to the moment when they can actually gain that skill. As well as giving them an idea of how the skill can be used.
Anyway, we take a moment to think of our legs, and even though our opponent gets closer to us in that time, we realize that our legs are dancing! So his legs must be dancing as well!
With our insider knowledge, we now know that he's attacking, and we could pick "Block" right away. But let's play this through. The network has automatically recalculated the probabilities to reflect our increased knowledge, and is now predicting a 75% chance for our enemy to be attacking, and for "Blocking" to thus be the right decision to make.
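As a sanity check on that 75%, here's my reconstruction of the network's math (assuming the three body-language nodes are treated as independent 50/50 variables and "attack" means at least two of them are true; the rigged all-or-nothing correlation lives only in the scenario script, not in the network):

```python
from itertools import product

# Enumerate the 2^3 body-language configurations and condition on what we've
# observed; "attack" is true when at least two of the three signals are true.

def p_attack(observations):
    """P(attack | observed nodes); observations maps node index -> True/False."""
    consistent = attacking = 0
    for combo in product([True, False], repeat=3):
        if any(combo[i] != v for i, v in observations.items()):
            continue
        consistent += 1
        if sum(combo) >= 2:
            attacking += 1
    return attacking / consistent

print(p_attack({}))                  # 0.5  -- prior
print(p_attack({0: True}))           # 0.75 -- after seeing dancing legs, as in the screenshot
print(p_attack({0: True, 1: True}))  # 1.0  -- two signals observed: the decision is forced
```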
Next we decide to find out what his eyes say, by matching our gaze with his. Again, there would be a special option that cost us no time - this time around, one enabled by Empathy 1 - but we again don't have that option.
Except that his gaze is so ferocious that we are forced to look away! While we are momentarily distracted, he closes the distance, ready to make his move. But now we know what to do... block!
Success!
Now the only thing that remains to do is to ask our new-found friend for an explanation.
"You told me there was a 0% chance of a monster near the academy!"
Boy: “Ehh… yeah. I guess I misremembered. I only read like half of our course book anyway, it was really boring.”
“Didn’t you say that Monsterology was your favorite subject?”
Boy: “Hey, that only means that all the other subjects were even more boring!”
“. . .”
I guess I shouldn’t put too much faith on what he says.
[Your model of the world has been updated! The prior of the variable 'Monster Near The Academy' is now 50%.]
[Your model of the world has been updated! You have a new conditional probability variable: 'True Given That The Boy Says It's True', 25%]
And that's all for now. Now that the basic building blocks are in place, future progress ought to be much faster.
Notes:
As you might have noticed, my "graphics" suck. A few of my friends have promised to draw art, but besides that, the whole generic Java look could go. This is where I was originally planning to put in the sentence "and if you're a Java graphics whiz and want to help fix that, the current source code is conveniently available at GitHub", but then getting things to this point took longer than I expected and I didn't have the time to actually figure out how the whole Eclipse-GitHub integration works. I'll get to that soon. GitHub link here!
I also want to make the nodes more informative - right now they only show their marginal probability. Ideally, clicking on them would expand them to a representation where you could visually see what components their probability is composed of. I've got some scribbled sketches of what this should look like for various node types, but none of that is implemented yet.
I expect some of you to also note that the actual Bayes theorem hasn't shown up yet, at least in no form resembling the classic mammography problem. (It is used implicitly in the network belief updates, though.) That's intentional - there will be a third minigame involving that form of the theorem, but somehow it felt more natural to start this way, to give the player a rough feeling of how probability flows through Bayesian networks. Admittedly I'm not sure of how well that's happening so far, but hopefully more minigames should help the player figure it out better.
What's next? Once the main character (who needs a name) manages to get into the Academy, there will be a lot of social scheming, and many mysteries to solve in order for her to find out just what did happen to her brother... also, I don't mind people suggesting things, such as what could happen next, and what kinds of network configurations the character might face in different minigames.
(Also, everything that you've seen might get thrown out and rewritten if I decide it's no good. Let me know what you think of the stuff so far!)