Harry Yudkowsky and the Methods of Postrationality: Chapter One: Em Dashes Colons and Ellipses, Littérateurs Go Wild
"If you give George Lukács any taste at all, immediately become the Deathstar." — Old Klingon Proverb
There was no nice way to put it: Harry James Potter-Yudkowsky was half Potter, half Yudkowsky. Harry just didn’t fit in. It wasn't that he lacked humanity. It was just that no one else knew (P)Many_Worlds, (P)singularity, or (P)their_special_insight_into_the_true_beautiful_Bayesian_fractally_recursive_nature_of_reality. Other people were roles—and how shall an actor, an agent, relate to those who are merely what they are, merely their roles? Merely their roles, without pretext or irony? How shall the PC fuck with the NPCs? Harry James Potter-Yudkowsky oft asked himself this question, but his 11-year-old mind lacked the g to grasp the answer. For if you are to draw any moral from this tale, godforsaken readers, the moral you must draw is this: P!=NP.
One night Harry Potter-Yudkowsky was outside, pretending to be Keats, staring at the stars and the incomprehensibly vast distances between them, pondering his own infinite significance in the face of such an overwhelming sea of stupidity, when an owl dropped a letter directly on his head, winking slyly. “You’re a wizard,” said the letter, while the owl watched, increasingly gloatingly, “and we strongly suggest you attend our school, which goes by the name Hogwarts. 'Because we’re sexy and you know it.’”
Harry pondered this for five seconds. “Curse the stars!, literally curse them!, Abra Kadabra!, for I must admit what I always knew in my heart to be true,” lamented Harry. “This is fanfic.”
“Meh.”
And so, as they'd been furiously engaged in for months, the divers models of Harry Potter-Yudkowsky gathered dust. In layman’s terms...
Harry didn’t update at all.
Harry: 1
Author: 0
(To be fair, the author was drunk.)
Next chapter: "Analyzing the Fuck out of an Owl"
...
Criticism appreciated.
Suggestion: make it easier to work out which tags to put on your article
It would improve the usefulness of article navigation if people tended to use the same tag for the same thing.
Currently, if you want to decide whether to tag your article "fai" or "friendly_ai", your best bet is to manually try each variant's tag page, such as:
http://lesswrong.com/tag/friendly_ai/
and count how many articles use each variant. But even then, there might be other similar variants you didn't think to check.
What would be nice is a tag cloud listing how many articles there are (possibly weighted by ranking) that use each variant. The list of tags on the wiki isn't dynamically generated, and is very incomplete.
It wouldn't need to be anything fancy: just an alphabetical list, with a number by each entry, would be an improvement over the current situation.
If you are downvoting this article, and would like to provide constructive feedback, here's a place to provide it: LINK
Peer review me
I wrote an article that I hoped to post on the main page, but then I got stage fright and was afraid to even put it here. So I guess I'm just going to show it to whichever of you is willing to review it privately.
Any takers? Qualifications: must be a fan of Z-movies.
Would the world be better off without 50% of the people in it?
I made the stupid mistake of posting a conclusion before I had the whole analysis typed up or had looked up my references. I knew I would be called on it. I'll appreciate any help with the <ref>'s. Also: I'm under Crocker's Rules, and criticism is welcome. So here goes nothi....
There's a theory out there that states that new inventions are combinations of old inventions <ref>. So if your hunter-gatherer tribe has knife-like rocks and sticks, just about the only thing you can invent is a spear. Fire + clay = pots. Little bones with holes + animal sinews + skins = needle => clothes. But if you were today's best chemist transported into the past, with all your knowledge intact, you'd be unlikely to make any aspirin. Why? Because the tools you need haven't been invented.
Instead of looking at what's projected to happen, consider what has happened. As the world population has increased, the level of technology and the average standard of living have gone up.
I argue that more population => better technology => easier life => more population.
In the modern day, consider: US population, US Patents per year.
So what about the “unproductive” people? Those who “don't pull their own weight?” Those “living off of welfare, charity donations, etc?” Those who just barely survive off of subsistence living? They put a drain on world resources without adding anything back. Wouldn't the world be better off without them?
Suppose Omega made a backup copy of the Solar System. It created a perfect copy of everything, but it only replicated 50% of humanity. Pick your favorite selection criterion for who gets copied. You will go to the copied world, and the other you will live on as a zombie.
Suppose the people who work in sweatshops get copied, but subsistence farmers from the same regions don't. Then it's reasonable to predict that some of the copied sweatshop workers would quit their jobs and fill the niche left open. Fewer people would be supporting the developed world.
Historically, people used technology to solve population problems only when those problems became bad enough. Farming wasn't invented until there were too many hunter-gatherers. Industry was not invented until there were too many farmers. Sewers were not invented until there was a problem with urban pollution.
I'll skip the statistical argument1. If truly brilliant people (the likes of whom had invented the wheel, the steam engine and the computer) are 1 in a billion, then having more billions means having more of those people.
Why do people have no confidence that we can invent ourselves out of the immense pressure we're putting on the environment? Technology is already there to supply humanity with renewable energy.
If you could choose whether your consciousness would go to Omega's backup world or stay on the original Earth, where would you choose? And if you chose the copied world, what selection criterion would you use to pick who would go with you?
Footnote 1: Statistics pop quiz (read: check my numbers, please). The world population is ~6,887,656,866. Let's guess that "inventiveness" is distributed normally. I wouldn't be surprised if it were strongly correlated with IQ. How many people would you expect to find 6 standard deviations above the mean? (That's IQ 190, for comparison, assuming a mean of 100 and an SD of 15.) (Upside-down answer: 6.8.) What about when the world population was 1 billion, around 1800? (No calculators! Just 1.) We would need to multiply the current world population 113 times to expect one person more than 7 standard deviations above the mean (IQ 205). The tail ends aren't necessarily this well-behaved, but given any fixed distribution over the infinite competence axis, increasing the number of people increases the expected number of people at every competence level.
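The footnote's numbers can be checked directly from the normal tail probability. A minimal sketch (the normality and IQ-scale assumptions are the post's own, not established facts):

```python
from math import erfc, sqrt

def upper_tail(z):
    """P(Z > z) for a standard normal variable."""
    return 0.5 * erfc(z / sqrt(2))

pop_2010 = 6_887_656_866  # world population figure used in the post
pop_1800 = 1_000_000_000  # rough world population around 1800

# Expected number of people more than 6 SD above the mean (IQ 190 if SD = 15).
print(pop_2010 * upper_tail(6))  # ~6.8
print(pop_1800 * upper_tail(6))  # ~1

# Population multiplier needed to expect one person above 7 SD (IQ 205).
print(1 / upper_tail(7) / pop_2010)  # ~113
```

All three quiz answers come out as the footnote claims, under the stated assumptions.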
EDIT: I rewrote this article. If you managed to wade through the blabber I had before, the point is still the same.
Simple friendliness: Plan B for AI
Friendly AI, as Hanson believes, is doomed to fail: if the friendliness system is too complicated, other AI projects generally will not apply it. In addition, any system of friendliness may still fail, and the more unclear it is, the more likely it is to fail. By "fail" I mean that it will not be adopted by the most successful AI projects. Thus, the friendliness system should be simple and clear, so that it can spread as widely as possible. I have roughly worked out the principles that could form the basis of a simple friendliness:
1) Everyone should understand that AI can be a global risk and that a friendliness system is needed. This basic understanding should be shared by the maximum number of AI groups. (I think this is already done.)
2) The architecture of the AI should use rules explicitly (i.e. no genetic algorithms or neural networks).
3) The AI should obey the commands of its creator, and clearly understand who the creator is and what the format of commands is.
4) The AI must comply with all existing criminal and civil laws. These laws are the first attempt to create a friendly AI, in the form of the state: an attempt to describe a good, safe human life using a system of rules (or a system of precedents). The number of volumes of laws and their interpretations speaks to the complexity of this problem, but it has already been solved, and it is not a sin to use the solution.
5) The AI should have no secrets from its creator. Moreover, it is obliged to report all of its thoughts to him. This helps avoid a rebellion of the AI.
6) Each self-optimization of the AI should be dosed out in portions, under the control of the creator, and after each step a full check of the system's goals and effectiveness must be run.
7) The AI should be tested in a virtual environment (such as Second Life) for safety and adequacy.
8) AI projects should be registered with a centralized oversight body and receive safety certification from it.
Such obvious steps do not create an absolutely safe AI (you can figure out ways to bypass them), but they make it much safer. In addition, they look natural and reasonable enough that any AI project could use them, with variations. Most of these steps are fallible, but without them the situation would be even worse. If each step increases safety two times, 8 steps increase it 256 times, which is good. Simple friendliness is plan B in case mathematical FAI fails.
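The compounding claim at the end is a one-line calculation. A minimal sketch, assuming each step independently halves the probability of failure (a strong assumption; the steps above are surely not independent in practice):

```python
# If each of the 8 steps independently halves the chance of an unsafe
# outcome, the residual risk shrinks by a factor of 2**8 = 256.
steps = 8
risk_factor_per_step = 0.5  # assumed: each step halves the remaining risk

residual_risk_fraction = risk_factor_per_step ** steps
safety_multiplier = 1 / residual_risk_fraction
print(safety_multiplier)  # 256.0
```

Any correlation between the steps (e.g. a creator who ignores rule 5 will likely also skip rule 6) would reduce the multiplier well below 256.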
DRAFT: Three Intellectual Temperaments: Birds, Frogs and Beavers
Here is a draft of a potential top-level post which I'd welcome feedback on. I would appreciate any suggestions, corrections, additional examples, qualifications, or refinements.
Beauty in Mathematics
Serious mathematicians are often drawn toward the subject and motivated by a powerful aesthetic response to mathematical stimuli. In his essay on Mathematical Creation, Henri Poincaré wrote:
It may be surprising to see emotional sensibility invoked à propos of mathematical demonstrations which, it would seem, can interest only the intellect. This would be to forget the feeling of mathematical beauty, of the harmony of numbers and forms, of geometric elegance. This is a true aesthetic feeling that all real mathematicians know, and surely it belongs to emotional sensibility.
The prevalence and extent of the feeling of mathematical beauty among mathematicians is not well known. In this article I'll describe some of the reasons for this and give examples of the phenomenon. I've drawn many of the quotations in this article from the extensive collection compiled by my colleague Laurens Gunnarsen.