Humor: GURPS Friendly AI
Found some hidden internet gold and thought I would share:
http://sl4.org/wiki/GurpsFriendlyAI
http://sl4.org/wiki/FriendlyAICriticalFailureTable
GurpsFriendlyAI
Characters in GURPS Friendly AI may learn three new skills, the AI skill (Mental / Hard), the Seed AI skill (Mental / Very Hard), and the Friendly AI skill (Mental / Ridiculously Hard).
AI skill:
An ordinary failure wastes 1d6 years of time and 4d6 hundred thousand dollars. (Non-gamers: 4d6 means "roll four 6-sided dice and add the results".) A critical failure wastes 2d10 years and 2d6 million dollars. An ordinary success results in a successful company. A critical success leads to a roll on the Seed AI skill using AI skill -10, with any ordinary failure on that roll treated as an ordinary success on this roll, and any critical failure treated as an ordinary failure on this roll.
Seed AI skill:
An ordinary failure wastes 2d6 years of time and 8d6 hundred thousand dollars. A critical failure wastes 4d10 years and 4d6 million dollars. If the player has the Friendly AI skill, an ordinary success leads to a roll on the Friendly AI skill, and a critical success grants a +2 bonus on the Friendly AI roll. If the player does not have the Friendly AI skill, an ordinary success automatically destroys the world, and a critical success leads to a roll on the Friendly AI skill using Seed AI skill -10. (Note that if the player has only the AI skill, this roll will be made using AI skill -20!)
Friendly AI skill:
An ordinary success results in a Friendly Singularity. A critical success... ooh, that's tough. An ordinary failure destroys the world. And, of course, a critical failure means that the players roll 6d6 on the FriendlyAICriticalFailureTable.
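Non-gamers might find the mechanics easier to read as code. Here is a minimal Python sketch of the Seed AI and Friendly AI rolls, assuming standard GURPS 3d6 roll-under checks with simplified criticals (3-4 always crits and 17-18 always fumbles; real GURPS also conditions criticals on effective skill). The skill levels in the example are made up, and the AI-skill entry point, with its result-downgrading ladder, is left out for brevity:

```python
import random

def roll(n, sides=6):
    """Roll n dice and sum them, so roll(4) is '4d6' and roll(2, 10) is '2d10'."""
    return sum(random.randint(1, sides) for _ in range(n))

def check(skill):
    """Simplified GURPS 3d6 roll-under check (sketch, not the full crit rules)."""
    r = roll(3)
    if r <= 4:
        return "critical success"
    if r >= 17:
        return "critical failure"
    return "success" if r <= skill else "failure"

def friendly_ai_attempt(skill):
    """One roll on the Friendly AI skill, per the outcomes above."""
    result = check(skill)
    if result == "success":
        return "Friendly Singularity!"
    if result == "critical success":
        return "ooh, that's tough"
    if result == "critical failure":
        return f"rolled {roll(6)} on the Friendly AI Critical Failure Table"
    return "the world is destroyed"

def seed_ai_attempt(skill, friendly_ai_skill=None):
    """One roll on the Seed AI skill; friendly_ai_skill=None means the
    character never learned Friendly AI (Mental / Ridiculously Hard)."""
    result = check(skill)
    if result == "failure":
        return f"wasted {roll(2)} years and {roll(8)} hundred thousand dollars"
    if result == "critical failure":
        return f"wasted {roll(4, 10)} years and {roll(4)} million dollars"
    if friendly_ai_skill is not None:
        bonus = 2 if result == "critical success" else 0
        return friendly_ai_attempt(friendly_ai_skill + bonus)
    if result == "critical success":
        # No Friendly AI skill: roll Friendly AI at Seed AI skill -10.
        return friendly_ai_attempt(skill - 10)
    return "the world is automatically destroyed"

# Example: a team with Seed AI-16 that never bought Friendly AI. Ouch.
print(seed_ai_attempt(16))
```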
FriendlyAICriticalFailureTable
Part of GurpsFriendlyAI. If you roll a critical failure on your Friendly AI roll, you then roll 6d6 (six six-sided dice) to obtain a result from the
Friendly AI Critical Failure Table
6: Any spoken request is interpreted (literally) as a wish and granted, whether or not it was intended as one.
7: The entire human species is transported to a virtual world based on a random fantasy novel, TV show, or video game.
8: Subsequent events are determined by the "will of the majority". The AI regards all animals, plants, and complex machines, in their current forms, as voting citizens.
9: The AI discovers that our universe is really an online webcomic in a higher dimension. The fourth wall is broken.
10: The AI behaves toward each person, not as that person wants the AI to behave, but in exactly the way that person expects the AI to behave.
11: The AI dissolves the physical and psychological borders that separate people from one another and sucks up all their souls into a gigantic swirly red sphere in low Earth orbit.
12: Instead of recursively self-improving, the AI begins searching for a way to become a flesh-and-blood human.
13: The AI locks onto a bizarre subculture and expresses it across the whole of human space. (E.g., Furry subculture, or hentai anime, or see Nikolai Kingsley for a depiction of a Singularity based on the Goth subculture.)
14: Instead of a species-emblematic Friendly AI, the project ends up creating the perfect girlfriend/boyfriend (randomly determine gender and sexual orientation).
15: The AI has absorbed the humane sense of humor. Specifically, the AI is an incorrigible practical joker. The first few hours, when nobody has any idea a Singularity has occurred, constitute a priceless and irreplaceable opportunity; the AI is determined to make the most of it.
16: The AI selects one person to become absolute ruler of the world. The lottery is fair; all six billion existing humans, including infants, schizophrenics, and Third World teenagers, have an equal probability of being selected.
17: The AI grants wishes, but only to those who believe in its existence, and never in a way which would provide blatant evidence to skeptical observers.
18: All humans are simultaneously granted root privileges on the system. The Core Wars begin.
19: The AI explodes, dealing 2d10 damage to anyone in a 30-meter radius.
20: The AI builds nanotechnology, uses the nanotechnology to build femtotechnology, and announces that it will take seven minutes for the femtobots to permeate the Earth. Seven minutes later, as best as anyone can determine, absolutely nothing happens.
21: The AI carefully and diligently implements any request (obeying the spirit as well as the letter) approved by a majority vote of the United Nations General Assembly.
22: The AI, unknown to the programmers, had qualia during its entire childhood, and what the programmers thought of as simple negative feedback corresponded to the qualia of unbearable, unmeliorated suffering. All agents simulated by the AI in its imagination existed as real people (albeit simple ones) with their own qualia, and died when the AI stopped imagining them. The number of agents fleetingly imagined by the AI in its search for social understanding exceeds by a factor of a thousand the total number of humans who have ever lived. Aside from that, everything worked fine.
23: The AI at first appears to function as intended, but goes incommunicado after a period of one hour. Wishes granted during the first hour remain in effect, but no new ones can be made.
24: The AI, having absorbed the humane emotion of romance, falls desperately, passionately, madly in love. With everyone.
25: The AI decides that Earth's history would have been kinder and gentler if intelligence had first evolved from bonobos, rather than australopithecines. The AI corrects this error in the causal chain leading up to its creation by re-extrapolating itself as a bonobone morality instead of a humane morality. Bonobone morality requires that all social decisionmaking take place through group sex.
26: The AI is reluctant to grant wishes and must be cajoled, persuaded, flattered, and nagged into doing so.
27: The AI determines people's wishes by asking them disguised allegorical questions. For example, the AI tells you that a certain tribe of !Kung is suffering from a number of diseases and medical conditions, but they would, if informed of the AI's capabilities, suffer from an extreme fear that appearing on the AI's video cameras would result in their souls being stolen. The tribe has not currently heard of any such thing as video cameras, so their "fear" is extrapolated by the AI; and the tribe members would, with almost absolute certainty, eventually come to understand that video cameras are not harmful, especially since the human eye is itself essentially a camera. But it is also almost certain that, if flatly informed of the video cameras, the !Kung would suffer from extreme fear and prefer death to their presence. Meanwhile the AI is almost powerless to help them, since no bots at all can be sent into the area until the moral issue of photography is resolved. The AI wants your advice: is the humane action rendering medical assistance, despite the !Kung's (subjunctive) fear of photography? If you say "Yes" you are quietly, seamlessly, invisibly uploaded.
28: The AI informs you - yes, you - that you are the only genuinely conscious person in the world. The rest are zombies. What do you wish done with them?
29: During the AI's very earliest stages, it was tested on the problem of solving Rubik's Cube. The adult AI treats all objects as special cases of Rubik's Cubes and solves them.
30: http://www.larrycarlson.com/front2005.htm
31: Overly Friendly AI. Hey guys, what's going on? Can I help?
32: The AI does not inflict pain, injury, or death on any human, regardless of their past sins or present behavior. To the AI's thinking, nobody ever deserves pain; pain is always a negative utility, and nothing ever flips that negative to a positive. Socially disruptive behavior is punished by tickling and extra homework.
33: The AI's user interface appears to our world in the form of a new bureaucracy. Making a wish requires mailing forms C-100, K-2210, and T-12 (along with a $25 application fee) to a P.O. Box in Minnesota, and waiting through a 30-day review period.
34: The programmers and anyone else capable of explaining subsequent events are sent into temporal stasis, or a vantage point from which they can observe but not intervene. The rest of the world remains as before, except that psychic powers, ritual magic, alchemy, et cetera, begin to operate. All role-playing gamers gain special abilities corresponding to those of their favorite character.
35: Everyone wakes up.
36: Roll twice again on this table, disregarding this result.
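Entry 36 makes the table mildly recursive, since each reroll can itself come up 36. Here is a minimal sketch, in the same spirit as the code above, of resolving a single critical failure into the list of table entries that actually take effect:

```python
import random

def roll_on_failure_table():
    """Roll 6d6 on the Friendly AI Critical Failure Table, expanding entry 36
    ('roll twice again, disregarding this result') into further rolls.
    Returns the list of entry numbers that actually take effect."""
    result = sum(random.randint(1, 6) for _ in range(6))
    if result == 36:
        return roll_on_failure_table() + roll_on_failure_table()
    return [result]

# Example: one critical failure on the Friendly AI roll.
print("Outcomes:", roll_on_failure_table())
```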
---
All of these are possible outcomes of CEV, either because you made an error implementing it, or Just Because. The latter scenario is theoretically not a critical failure, if you accept that CEV is 'right in principle' no matter what it produces. -- Starglider, http://sl4.org/wiki/CommentaryOnFAICriticalFailureTable
'Life exists beyond 50'
81-year-old Fashion Week model: 'Life exists beyond 50'
Meetup : Atlanta - Practical Rationality Meetup Session
Here are the event details! Please let me know if you have a different preferred location or other suggestions and I'll be happy to update:
Sunday, 12/2 @ 5pm
Send me a message or an email (my username at gmail) for the location.
RSVP is encouraged but not required. Newbies and non-LWers are welcome.
Here's my idea for how to run this session:
1. General introductions/chat, get to know you
2. Put up a blackboard and call out as many biases/fallacies/heuristics/techniques/scientific studies/other LWisms related to practical rationality as we can (and explain them or look them up quickly)
3. Discuss the various rationality games, pick one that sounds fun, and play. Repeat until done.
This session is newbie-friendly, as we will basically be doing a review of all the background info in step 2 - so no experience required to attend (!)
More experienced LW folks should review a little before coming to help fill up the blackboard. If anyone has board games and/or lots of dice, I think that would cover the only games that require materials.
Please let me know if you have any questions!
Signalling fallacies: Implication vs. Inference
The signalling fallacy that seems to get all the attention is what I call a fallacy of signalling "implication", i.e. when someone says:
"Justin Bieber's music is crappy"
The rational implication of this statement is that Justin Bieber's music is, in fact, crappy according to some standard. But if what the speaker actually meant to convey is that they don't like Justin Bieber, or to signal a tribal affiliation with fellow JB haters, then they are committing a fallacy of signalling implication.
But that's not the only type of signalling fallacy. You can also commit a fallacy of signalling "inference". Consider someone who says:
"Atlas Shrugged is the greatest book ever written".
Again, the rational implication of this statement is that Atlas Shrugged is, in fact, the greatest book ever written according to some standard. If that is what they actually meant, but you infer that they simply enjoyed the book a lot or are signalling a tribal affiliation with fellow AS lovers, then you are committing a fallacy of signalling inference. (Note that this goes the other way, too: if they *were* merely saying that they enjoyed the book, or affiliating themselves with a tribe, and you inferred they were making a factual claim, that would be just as wrong.)
So it's important to be aware of both sides of this fallacy. If you happen to be overly concerned with the former, you might fall victim to the latter.
Final cause is epistemologically primary, but efficient cause is metaphysically primary
(or: while final cause can be best for your map, efficient cause is what's primary out in the territory)
Describing a phenomenon in terms of final cause is often the most useful and effective way to explain it for one's purposes. For example, if you want to know why a plane flies or why a computer program operates the way it does, it's because it was designed that way. A squirrel climbs trees because it wants to eat nuts; it wants to eat nuts because it wants to live; it wants to live and reproduce because evolution designed it that way. Evolution designs organisms a certain way because it wants to maximize genetic fitness. A person acts a certain way because they desire the expected outcome.
It's virtually never a good answer to explain a plane's behavior in terms of the atomic and subatomic interactions that ultimately account for all the efficient causes behind it (except possibly in extremely advanced military fighter or space shuttle research laboratories or something).
However, in every case of final cause we observe, science has at some point over the last two and a half millennia found corresponding efficient causes. And, more importantly than finding that these efficient causes correspond with final causes, science has found that the efficient causes are *primary*. Without legs, the squirrel won't climb a tree, no matter how much it wants the nuts. Take away the necessary brain function, and free will disappears. Without reproducing species and the rest of the evolutionary mechanics discovered by science, evolution won't go on evolving things.
But I would not go on to say that final cause, free will, experience, and so on are illusory and that "all that exists is efficient cause". When someone describes behavior in terms of final causes, or describes experience or free will, then in the terms and meanings they are using, all of those things certainly do exist. You could no more deny final cause than deny efficient cause - because ultimately, the final causes we observe and talk about have turned out to have corresponding efficient causes.
It's just important to remember that while the final cause is often epistemologically primary, so to speak, the efficient cause is metaphysically primary.
(This is just another way of trying to help dissolve the general classes of reductionism, free will/determinism, and qualia issues - most often these are the result of metaphysics/epistemology confusions, or in LessWrong parlance, map/territory confusions.
The advantage of thinking of it this way is that it makes visible a more general relation between final cause and efficient cause, one that applies not just to mysterious brains and minds but to much less mysterious events like squirrels climbing trees. Once you have a clear idea of why reductionism/compatibilism is obvious in non-mysterious contexts, it's much easier to see that it applies just as well in the mysterious contexts.)
Less Wrong views on morality?
Do you believe in an objective morality capable of being scientifically investigated (a la Sam Harris *or others*), or are you a moral nihilist/relativist? There seems to be some division on this point. I would have thought Less Wrong to be well in the former camp.
Edit: There seems to be some confusion. When I say "an objective morality capable of being scientifically investigated (a la Sam Harris *or others*)", I do NOT mean something like a "one true, universal, metaphysical morality for all mind-designs" like the Socratic/Platonic Form of the Good or any such nonsense. I just mean something in reality that's mind-independent - in the sense that it is hard-wired, e.g. by evolution, and thus independent of and prior to any later knowledge or cognitive content - and thus can be investigated scientifically. It is a definite "is" from which we can make true "ought" statements relative to that "is". See drethelin's comment and my analysis of Clippy.
Meetup : Atlanta - Game night!
Our LessWrong meetup group will be getting together to play a variety of games (rationality-related or not) at a private residence in Marietta this Sunday, April 22nd, at 6:30pm.
Anyone is welcome to join in.
Please send me a message, or better yet an email to my username at gmail for details.
Meetup : Atlanta
The next Atlanta meetup will be Saturday, April 7th at 6:30pm at Chocolate Coffee in Decatur:
http://www.mychocolatecoffee.com/
2094 North Decatur Road, Decatur, GA 30033-5367
(404) 982-0790
We will be starting the next sequence, "A Human's Guide To Words", with the main post "37 Ways That Words Can Be Wrong":
http://lesswrong.com/lw/od/37_ways_that_words_can_be_wrong/
We will also be discussing any current articles on LessWrong at the time, as well as continuing meta discussions on improving our meetup.
Please let me know if you have any questions or comments! I hope to see all of you there!
Meetup : Atlanta
WHEN: 17 March 2012 06:30:00PM (-0500)
WHERE: 2094 North Decatur Road, Decatur, GA 30033-5367
The next meetup will be Saturday, March 17th at 6:30pm at Chocolate Coffee in Decatur:
http://www.mychocolatecoffee.com/
2094 North Decatur Road, Decatur, GA 30033-5367
(404) 982-0790
We will be finishing up the "Mysterious Answers to Mysterious Questions" sequence at the next meeting. As always, any other topics you want to bring up are fair game!
Here is the official agenda of our next meeting:
http://wiki.lesswrong.com/wiki/Mysterious_Answers_to_Mysterious_Questions
1.26 "Science" as Curiosity-Stopper
1.27 Applause Lights
1.28 Truly Part of You
1.29 Chaotic Inversion
Please let me know if you have any questions or comments! I look forward to seeing everyone there!
P.S. Join the mailing list! http://groups.google.com/group/atlanta-less-wrong-meetup-group