[Link] Short story by Yvain
Yvain isn't a big enough self-promoter to link to this, but I liked it a lot and I think you will too.
The Fiction Genome Project
The Music Genome Project is what powers Pandora. According to Wikipedia:
The Music Genome Project was first conceived by Will Glaser and Tim Westergren in late 1999. In January 2000, they joined forces with Jon Kraft to found Pandora Media to bring their idea to market.[1] The Music Genome Project was an effort to "capture the essence of music at the fundamental level" using almost 400 attributes to describe songs and a complex mathematical algorithm to organize them. Under the direction of Nolan Gasser, the musical structure and implementation of the Music Genome Project, made up of 5 Genomes (Pop/Rock, Hip-Hop/Electronica, Jazz, World Music, and Classical), was advanced and codified.
A given song is represented by a vector (a list of attributes) containing approximately 400 "genes" (analogous to trait-determining genes for organisms in the field of genetics). Each gene corresponds to a characteristic of the music, for example, gender of lead vocalist, level of distortion on the electric guitar, type of background vocals, etc. Rock and pop songs have 150 genes, rap songs have 350, and jazz songs have approximately 400. Other genres of music, such as world and classical music, have 300–500 genes. The system depends on a sufficient number of genes to render useful results. Each gene is assigned a number between 1 and 5, in half-integer increments.[2]
Given the vector of one or more songs, a list of other similar songs is constructed using a distance function. Each song is analyzed by a musician in a process that takes 20 to 30 minutes per song.[3] Ten percent of songs are analyzed by more than one technician to ensure conformity with the in-house standards and statistical reliability. The technology is currently used by Pandora to play music for Internet users based on their preferences. Because of licensing restrictions, Pandora is available only to users whose location is reported to be in the USA by Pandora's geolocation software.[4]
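The vector-plus-distance-function idea described above is simple enough to sketch directly. Below is a minimal illustration, assuming a toy three-gene catalog; the song names, gene values, and function names are all invented for illustration (real songs use roughly 150–500 genes, scored 1 to 5 in half-integer steps, and Pandora's actual algorithm is proprietary):

```python
import math

# Hypothetical mini-catalog: each song is a vector of "gene" scores
# (1.0 to 5.0 in half-integer steps, per the description above).
# Three genes suffice for illustration; real songs use ~150-500.
catalog = {
    "Song A": [3.0, 4.5, 1.0],
    "Song B": [3.5, 4.0, 1.5],
    "Song C": [1.0, 1.5, 5.0],
}

def distance(u, v):
    """Euclidean distance between two gene vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def most_similar(seed, catalog):
    """Rank every other song in the catalog by distance to the seed song."""
    seed_vec = catalog[seed]
    others = [(name, distance(seed_vec, vec))
              for name, vec in catalog.items() if name != seed]
    return sorted(others, key=lambda pair: pair[1])

print(most_similar("Song A", catalog))
```

Here "Song B" ranks closest to "Song A" because its gene scores differ only slightly, while "Song C" sits far away in gene-space. Any distance function over the vectors would do; Euclidean distance is just the most obvious choice.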
Eminent lesswronger, strategist, and blogger Sebastian Marshall wonders:
Personally, I was thinking of doing a sort of “DNA analysis” of successful writing. Have you heard of the Music Genome Project? It powers Pandora.com.
So I was thinking, you could probably do something like that for writing, and then try to craft a written work with elements known to appeal to people. For instance, if you wished to write a best selling detective novel, you might do an analysis of when the antagonist(s) appear in the plot for the first time. You might find that 15% of bestsellers open with the primary antagonist committing their crime, 10% have the antagonist mixed in quickly into the plot, and 75% keep the primary antagonist a vague and shadowy figure until shortly before the climax.
I don’t know if the pattern fits that – I don’t read many detective novels – but it would be a bit of a surprise if it did. You might think, well, hey, I better either introduce the antagonist right away having them commit their crime, or keep him shadowy for a while.
Or, to use an easier example – perhaps you could wholesale adopt the use of engineering checklists into your chosen discipline? It seems to me like lots of fields don’t use checklists that could benefit tremendously from them. I run this through my mind again and again – what kind of checklist could be built here? I first came across the concept of checklists being adopted in surgery from engineering, and then having surgical accidents and mistakes go way down.
Some people at TV Tropes came across that article, and thought that their wiki's database might be a good starting point to make this project a reality. I came here to look for the savvy, intelligence, and technical expertise in all things AI and IT that I've come to expect of this site's user-base, hoping that some of you might be interested in having a look at the discussion and, perhaps, would feel like joining in, or at least sharing some good advice.
Thank you. (Also, should I make this post "Discussion" or "Top Level"?)
"Where Am I?", by Daniel Dennett
"Where Am I?" is a short story by Daniel Dennett from his book Brainstorms: Philosophical Essays on Mind and Psychology. Some of you might already be familiar with it.
The story is a humorous, semi-science-fictional one in which Dennett gets a job offer from the Pentagon that entails moving his brain into a vat without actually moving his point of view. Later it raises questions about uploading and what it would mean in terms of diverging perspectives and so on. Aside from being a joy to read, it offers solutions to a few hurdles about the nature of consciousness and personal identity.
Suppose, I argued to myself, I were now to fly to California, rob a bank, and be apprehended. In which state would I be tried: in California, where the robbery took place, or in Texas, where the brains of the outfit were located? Would I be a California felon with an out-of-state brain, or a Texas felon remotely controlling an accomplice of sorts in California? It seemed possible that I might beat such a rap just on the undecidability of that jurisdictional question, though perhaps it would be deemed an interstate, and hence Federal, offense.
[Book Suggestions] Summer Reading for Younglings.
I bought my niece a Kindle, which just arrived, and I'm about to load it up with books before giving it to her tomorrow for her birthday. I've decided to be a sneaky uncle and include good books that can teach better thinking skills, or at least present science as cool and interesting. She is currently in the 4th grade, with 5th coming after the summer.
She reads basically at her own grade level, so while I'm open to stuffing the Kindle with books to be read when she's ready, I'd like to focus on giving her books she can read now. Ender's Game will be on there most likely. Game of Thrones will not.
What books would you give a youngling? Her interests currently trend toward the young mystery section, Hardy Boys and the like, but in my experience she is very open to trying new books with particular interest in YA fantasy but not much interest in Sci Fi (if I'm doing any other optimizing this year, I'll try to change her opinion on Sci Fi).
Harry Potter and the Methods of Rationality discussion thread, part 16, chapter 85
The next discussion thread is here.
This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 85. The previous thread has long passed 500 comments. Comment in the 15th thread until you read chapter 85.
There is now a site dedicated to the story at hpmor.com, which is now the place to go to find the author’s notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author’s Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author’s notes from chapter 77 onwards are on hpmor.com.)
The first 5 discussion threads are on the main page under the harry_potter tag. Threads 6 and on (including this one) are in the discussion section using its separate tag system. Also: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15.
As a reminder, it’s often useful to start your comment by indicating which chapter you are commenting on.
Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:
You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).
If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.
Mental Clarity; or How to Read Reality Accurately
Hey all - I typed this out to help me understand, well... how to understand things:
Mental clarity is the ability to read reality accurately.
I don't mean being able to look at the complete objective picture of an event, as you don't have any direct access to that. I'm talking about the ability to read the data presented by your subjective experience: thoughts, sights, sounds, etc. Once you get a clear picture of what that data is, you can then go on to use it to build or falsify your ideas about the world.
This post will focus on the "getting a clear picture" part.
I use the word "read" because it's no different than reading from a book, or from these words. When you read a book, you are actually curious as to what the words are saying. You wouldn't read anything into it that's not there, which would be counterproductive to your understanding.
You just look at the words plainly, and through this your mind automatically recognizes and presents the patterns: the meaning of the sentences, their relation to the topic, the visual imagery associated with them, all of that. If you want to know a truth about reality, just look at it and read what's there.
Want to know what the weather's like? Look outside - read what's going on.
Want to know if the Earth revolves around the Sun, or vice versa? Look at the movement of the planets, read what they're doing, see which theory fits better.
Want to check if your beliefs about the world are correct? Take one, read the reality that the belief tries to correspond to, and see how well they compare.
This is the root of all science and all epiphanies.
But if it's so simple and obvious, why am I talking about it?
It's not something that we as a species often do. For trivial matters, sure, for science too, but not for our strongly-held opinions. Not for the beliefs and positions that shape our self-image, make us feel good/comfortable, or get us approval. Not for our political opinions, religious ideas, moral judgements, and little white lies.
If you were utterly convinced that your wife was faithful, more so if you liked to think of her in that way, and your friend came along and said she was cheating on you, you'd be reluctant to read reality and check whether that's true. Doing so would challenge your comfort and throw you into an unknown world with some potentially massive changes. It would be much more comforting to rationalize why she might still be faithful than to take one easy look at the true information. It would also be more damaging.
Delusion is reading into reality things which aren't there. Telling yourself that everything's fine when it obviously isn't, for example. It's the equivalent of looking at a book about vampires and jumping to the conclusion that it's about wizards.
Sounds insane. You do it all the time. You'll catch yourself if you're willing to read the book of your own thoughts: flowing through your head, in plain view, is a whole mess of opinions and ideas of people, places, and positions you've never even encountered. Crikey!
That mess is incredibly dangerous to have. Being a host to unchecked or false beliefs about the world is like having a faulty map of a terrain: you're bound to get lost or fall off a cliff. Reading the terrain and re-drawing the map accordingly is the only way to accurately know where you're going. Having an accurate map is the only way to achieve your goals.
So you want to develop mental clarity? Be less confused, or more successful? Have a better understanding of the world, the structure of reality, or the accuracy of your ideas?
Just practice the accurate reading of what's going on. Surrender the content of your beliefs to the data gathered by your reading of reality. It's that simple.
It can also be scary, especially when it comes to challenging your "personal" beliefs. It's well worth the fear, however, as a life built on truth won't crumble like one built on fiction.
Truth doesn't crumble.
Stay true.
Further reading:
Stepvhen from Burning true on truth vs. fantasy.
Kevin from Truth Strike on why this skill is important to develop.
Harry Potter and the Methods of Rationality discussion thread, part 15, chapter 84
The next discussion thread is here.
This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 84. The previous thread has passed 500 comments. Comment in the 14th thread until you read chapter 84.
There is now a site dedicated to the story at hpmor.com, which is now the place to go to find the author’s notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author’s Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author’s notes from chapter 77 onwards are on hpmor.com.)
The first 5 discussion threads are on the main page under the harry_potter tag. Threads 6 and on (including this one) are in the discussion section using its separate tag system. Also: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14.
As a reminder, it’s often useful to start your comment by indicating which chapter you are commenting on.
Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:
You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).
If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.
Harry Potter and the Methods of Rationality predictions
The recent spate of updates has reminded me that while each chapter is enjoyable, the approaching end of MoR, as awesome as it no doubt will be, also means the end of our ability to learn from predicting the truth of the MoR-verse and its future.
With that in mind, I have compiled a page of predictions on sundry topics, much like my other page on predictions for Neon Genesis Evangelion; I encourage people to suggest plausible predictions that I've omitted, register their probabilities on PredictionBook.com, and come up with their own predictions. Then we can all look back when MoR finishes and reflect on what we (or Eliezer) did poorly or well.
The page is currently up to >182 predictions.
Harry Potter and the Methods of Rationality discussion thread, part 14, chapter 82
The new discussion thread (part 15) is here.
This is a new thread to discuss Eliezer Yudkowsky’s Harry Potter and the Methods of Rationality and anything related to it. This thread is intended for discussing chapter 82. The previous thread passed 1000 comments as of the time of this writing, and so has long passed 500. Comment in the 13th thread until you read chapter 82.
There is now a site dedicated to the story at hpmor.com, which is now the place to go to find the author’s notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author’s Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author’s notes from chapter 77 onwards are on hpmor.com.)
The first 5 discussion threads are on the main page under the harry_potter tag. Threads 6 and on (including this one) are in the discussion section using its separate tag system. Also: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13.
As a reminder, it’s often useful to start your comment by indicating which chapter you are commenting on.
Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:
You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).
If there is evidence for X in MOR and/or canon then it’s fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that “Eliezer said X is true” unless you use rot13.
Harry Potter and the Methods of Rationality discussion thread, part 12
The new thread, discussion 13, is here.
This is a new thread to discuss Eliezer Yudkowsky's Harry Potter and the Methods of Rationality and anything related to it. With three chapters posted recently, the previous thread has very quickly reached 1000 comments. The latest chapter as of 25th March 2012 is Ch. 80.
There is now a site dedicated to the story at hpmor.com, which is now the place to go to find the author's notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author's Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author's notes from chapter 77 onwards are on hpmor.com.)
The first 5 discussion threads are on the main page under the harry_potter tag. Threads 6 and on (including this one) are in the discussion section using its separate tag system. Also: one, two, three, four, five, six, seven, eight, nine, ten, eleven.
As a reminder, it's often useful to start your comment by indicating which chapter you are commenting on.
Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:
You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).
If there is evidence for X in MOR and/or canon then it's fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that "Eliezer said X is true" unless you use rot13.
Harry Potter and the Methods of Rationality discussion thread, part 11
EDIT: New discussion thread here.
This is a new thread to discuss Eliezer Yudkowsky's Harry Potter and the Methods of Rationality and anything related to it. With two chapters posted recently, the previous thread has very quickly reached 500 comments. The latest chapter as of 17th March 2012 is Ch. 79.
There is now a site dedicated to the story at hpmor.com, which is now the place to go to find the author's notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author's Notes. (This goes up to the notes for chapter 76, and is no longer updating. The author's notes from chapter 77 onwards are on hpmor.com.)
The first 5 discussion threads are on the main page under the harry_potter tag. Threads 6 and on (including this one) are in the discussion section using its separate tag system. Also: one, two, three, four, five, six, seven, eight, nine, ten.
As a reminder, it's often useful to start your comment by indicating which chapter you are commenting on.
Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:
You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).
If there is evidence for X in MOR and/or canon then it's fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that "Eliezer said X is true" unless you use rot13.
Harry Potter and the Methods of Rationality discussion thread, part 10
(The HPMOR discussion thread after this one is here.)
This is a new thread to discuss Eliezer Yudkowsky's Harry Potter and the Methods of Rationality and anything related to it. There haven't been any chapters recently, but it looks like there are a bunch in the pipeline and the old thread is nearing 700 comments. The latest chapter as of 7th March 2012 is Ch. 77.
There is now a site dedicated to the story at hpmor.com, which is now the place to go to find the author's notes and all sorts of other goodies. AdeleneDawner has kept an archive of Author's Notes.
The first 5 discussion threads are on the main page under the harry_potter tag. Threads 6 and on (including this one) are in the discussion section using its separate tag system. Also: one, two, three, four, five, six, seven, eight, nine.
As a reminder, it's often useful to start your comment by indicating which chapter you are commenting on.
Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:
You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).
If there is evidence for X in MOR and/or canon then it's fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that "Eliezer said X is true" unless you use rot13.
Writing about Singularity: needing help with references and bibliography
It was Yudkowsky's Fun Theory sequence that inspired me to undertake the work of writing a novel on a singularitarian society... however, there are gaps I need to fill, and I need all the help I can get. It's mostly book recommendations that I'm asking for.
One of the things I'd like to tackle in it would be the interactions between the modern, geeky Singularitarianisms and Marxism, which I hold to be somewhat prototypical in that sense, as well as other utopianisms. And contrasting them with more down-to-earth ideologies and attitudes, by examining the seriously dangerous bumps of the technological point of transition between "baseline" and "singularity". But I need to do a lot of research before I'm able to write anything good: if I'm not going to have any original ideas, at least I'd like to serve my readers with a collection of well-researched, solid ones.
So I'd like to have everything that is worth reading about the Singularity, specifically the Revolution it entails (in one way or another) and the social aftermath. I'm particularly interested in the consequences of the lag in the spread of the technology from the wealthy to the baselines, and the potential for oppression of baselines and other continuations of current social imbalances, as well as suboptimal distribution of wealth. After all, according to many authors, we've had the means to end war, poverty, famine, and most infectious diseases since the sixties, and it's only our irrational methods of wealth distribution that have prevented it. That is, supposing the commonly alleged ideal of total lifespan and material welfare maximization for all humanity is what actually drives the way things are done. But even under other premises and axioms, there's much that can be improved and isn't, thanks to basic human irrationality, which is what we combat here.
Also, yes, this post makes my political leanings fairly clear, but I'm open to alternative viewpoints and actively seek them. I also don't intend to write any propaganda as such, just to examine ideas and scenarios for the sake of writing a compelling story with wide audience appeal. The idea is to raise awareness of the Singularity as something rather imminent ("Summer's Coming"), and to prompt normal people (or at least prepare them) to question its wonders and dangers rationally.
It's a frighteningly ambitious, long-term challenge; I am terribly aware of that. And the first thing I'll need to read is a style book, to correct my horrendous grasp of standard acceptable writing (and not seem arrogant by doing anything else), so please feel free to recommend as many books, blog articles, and other materials as you like. I'll take my time going through it all.
HPMOR: What could've been done better?
Warning: As per the official spoiler policy, the following discussion may contain unmarked spoilers for up to the current chapter of the Methods of Rationality. Proceed at your own risk.
Assume HPMOR was written by a super-intelligence implementing the CEV of Eliezer Yudkowsky and assorted literary critics. What would it have written differently?
... is what I want to know, but that's hard to answer. So here's an easier question:
In what ways do you think Eliezer's characterisations/world-building/plot-fu are sub-optimal? <optional> How could they be made less sub-optimal? </optional>
(My own ideas are in the comments.)
To put it another way... Assume a group of intrepid fanfic writers in the late 2020s are planning to write a reboot. What parts of Eliezer's story do you think they should tweak?
And just to make sure we're all on the same page: Eliezer isn't going to go back and change anything he's written to bring it in line with anything suggested here. This is purely an "Ah, just consider the possibilities!" thread.
... which means that we can safely suggest drastic rewrites encompassing 30 chapters or something. Or change fundamental facts about the world.
(Exercise due restraint on this one. Getting rid of the Ministry/the Noble Houses/blood purism would probably turn the story into something completely different; this isn't what we're trying to do here.)
With that, let the nit-picking begin!!
Fiction: LW-inspired scenelet
A short science-fictional scene I just wrote after reading about some real and actual scientific research. I'd love to turn this, or something like it, into an actual scene in Dee's life story, but I can't think of a good enough story to insert it into, so I present it on its own for your amusement, even if it does mean I'm likely to lose more karma than I gained from my last post...
Not your grandfather's science fiction.
A scene from Dee's life
We join our heroine, Dee, and her plucky-yet-sarcastic sidekick holed up in a hotel room.
"Well, this is another fine mess you've gotten us into. Got any great ideas for getting us out of it?"
"No - but I know how to have one. Since I lost my visor and vest, including my nootropics and transcranial stimulator... I'm going to need a syringe, sixty millilitres of icewater, a barf bag, and a video camera."
"I don't know what you're planning, but I'm not sure I want to have any part in it."
"Start MacGyvering as much as we can now from the mini-bar, I'll explain as we go. Without a camera, and with our time pressure, I'm going to need your help to get this to work, and you need to understand some of this or else you'll be really confused later. Physically, all I'm going to do is squirt water into my left ear."
"... and this will help us, how exactly?"
"By shocking my vestibular system, which causes all sorts of interesting effects. One of the unfortunate ones is that when done right, it induces immediate vomiting."
"Ew."
"Yes, well, that's just a side-effect. The main point is... well, really complicated. In layman's terms, there's a part of the brain that's responsible for triggering the creation of profound, revolutionary ideas, and another part that makes you create rationalizations to explain away just about anything, and usually, these two parts of the brain kind of balance each other out. This vestibular trick happens to hyper-stimulate the revolutionary part for about ten minutes, allowing me to realize things I normally wouldn't, and to see them as so obvious that I don't know why I didn't think of them before."
"Well... okay, even if that's so, why haven't I seen you do it before?"
"For one, I don't want to risk some sort of long-term adaptation which might reduce its effect. But there's more complications to it than that."
"Of course there are."
"The thing is, after it's been hyper-stimulated, the revolutionary part gets tuckered out, and then the rationalizing part effectively kicks into overdrive - and I pretty much forget everything I thought of during those ten minutes, and even crazier-sounding, I won't be able to accept the idea that I said any of what I said. I literally won't believe that those ideas came from my mouth."
"'Crazier-sounding' sounds right."
"Which is why I'm going to need you to remember whatever it is I come up with - and then tell me what the best ideas were, but not tell me that I came up with them. At least until my brain's gotten back into balance again. I'm now precommitting myself to do whatever it is you tell me to do - even if I don't understand it, even if I think it's a bad or stupid or useless idea. Do you think you can handle that level of responsibility?"
"I... think so. And this really works? How the cuss did you ever come up with this, anyway?"
"I once noticed that when I was in a certain state of mind, my head kept twitching to the left every time I thought of something, showing there was a link between idea-generation and the vestibular system. Later I read up about some experiments with people with anosognosia, people who aren't aware of being paralyzed or blind... are you done with that straw yet?"
"As much as I'll ever be, I guess."
"Alright. Hand me the bucket, and squirt the water in my ear - my left ear. It only works in the left ear. Except for left-handed people."
"I'm beginning to wonder if it's just the idea that's crazy."
"We'll soon find out. Remember, being the only right person in the room doesn't feel like being the cool guy wearing black; it feels like being the only one wearing a clown suit. I did that once, just to try. Now, here we <hralph!>"
Non-theist cinema?
There isn't much in the way of explicitly atheist cinema* -- that is, movies that contain the explicit or implicit message that religion is nothing but superstition, and where this point itself is a central part of the story. The only popular films that jump to mind here are The Invention of Lying, and to a lesser extent The Man from Earth (overall a phenomenal movie, but far less well known). Sure, there are lots of popular movies that make fun of organized religion, or what some people might call religious "fanaticism" (e.g., Dogma, Saved, The Life of Brian, Jesus Camp). But pretty much all of these come away with the message that it's fine to be "spiritual" or whatever, so long as you don't hurt other people, and don't get too crazy about what you believe. As much as some "conservative" pundits love to accuse Hollywood "liberals" of being godless, there sure aren't many movies where godlessness is really taken seriously.
And that's unfortunate, in my view, as movies are probably the most prevalent and influential art form for the general public, and because many people will form their views on abstract concepts based on the percepts that movies provide (related to the issue of generalizing from fictional evidence). One need only glance over the examples on the tvtropes page "Hollywood Atheist" to see that movies and television aren't exactly putting the best foot forward for our kind.
But perhaps there's a bit more hope in the way of non-theist cinema, as opposed to overt atheist cinema. Of course, any story without gods is a non-theist story, and there are plenty of movies that don't touch on gods or religion at all. But what I'm talking about are movies where one would normally expect to find religion, but where no religion is to be found -- in other words, movies that seem to be depicting the alternate world where humanity never fell prey to this particular superstition, and where the concepts of god and religion simply don't exist.
The movie that inspired this particular thought was 50/50, the recent comedy-drama where Joseph Gordon-Levitt plays a man dealing with potentially fatal cancer. It's a great movie, but what struck me afterwards is how completely absent any mention of god, religion, the afterlife, etc. was in a movie about a man, along with his friends and family, potentially facing his own death. There are lots of characters, lots of conflicts, lots of different perspectives on what he's going through, but nothing at all from anyone amounting to a "spiritual" response to the situation (at least that I recall).
And it got me thinking, what other sorts of issues are there where we would normally expect religion to pop up, such that a story without it would be decidedly non-theist, as opposed to incidentally non-theist? And are there other major movies that you think tell such a story? I ask both because I'm always eager to hear about new movies I might enjoy (or old movies I might appreciate more), but also because I think this sort of non-theist cinema might be a good bridge to people who would instinctively rebel against anything openly atheist. In other words, show people that a "godless" world really isn't all that crazy, that people get by just fine and find ways to face conflicts, etc. Anyway, just thought I'd poll the membership and see what people thought about this idea. Looking forward to seeing the responses!
*I'm well aware that there's quite a bit of atheist and non-theist art in other mediums -- sf literature most prominently. But I'm focusing on movies (and perhaps to a lesser extent, television) because those are the main forms of "public art" in our culture, and the mediums most likely to influence how the public at large views these concepts.
HPMoR.com
Josh's mirror of Harry Potter and the Methods of Rationality has been redesigned by Lightwave (who also did IntelligenceExplosion.com, Friendly-AI.com, and lukeprog.com), and it is now located at a simpler URL: HPMoR.com. Thanks also to Louie who put together this "facelift" project.
Scooby Doo and Secular Humanism [link]
A great column by Chris Sims at the Comics Alliance.
Excerpt:
Because that's the thing about Scooby-Doo: The bad guys in every episode aren't monsters, they're liars.
I can't imagine how scandalized those critics who were relieved to have something that was mild enough to not excite their kids would've been if they'd stopped for a second and realized what was actually going on. The very first rule of Scooby-Doo, the single premise that sits at the heart of their adventures, is that the world is full of grown-ups who lie to kids, and that it's up to those kids to figure out what those lies are and call them on it, even if there are other adults who believe those lies with every fiber of their being. And the way that you win isn't through supernatural powers, or even through fighting. The way that you win is by doing the most dangerous thing that any person being lied to by someone in power can do: You think.
Tim Minchin fans may recall him mentioning Scooby Doo in a similar light in his beat poem Storm, and it's been brought up on Less Wrong before.
When viewed in this light, Scooby Doo really is like an elementary version of Methods of Rationality.
Cryonics on Castle [Spoilers]
Check out the latest episode of Castle (Headcase) to see cryonics covered in mainstream fiction in a not entirely terrible manner. The details are not exactly accurate, but probably no more inaccurate than similar fictionalised coverage of most other industries. In fact, there is one obvious implementation difference in how the company on Castle operates, which is how things clearly ought to be:
Amulets of Immortality
It is not uncommon for cryonics enthusiasts to make 'immortality' jokes about their ALCOR necklaces, but their equivalents on the show take the obvious practical next step: the patients have heart-rate monitors with GPS transmitters that alert the cryonics company as soon as the patient flatlines. This is just obviously the way things should be, and it is regrettable that the market is not yet broad enough for 'obvious' to have been translated into common practice.
Other things to watch out for:
- Predictable attempts by the cops to take the already preserved body so they can collect more evidence.
- A somewhat insightful question of whether the cryonics company should hand over the corpsicle without taking things to court, since fighting would risk legal precedent being set by a case with unusual factors that might make them lose. It may be better to lose one patient so that they can force the fight to happen on a stronger case.
- Acknowledgement that only the head is required, which allows a compromise of handing over the body minus the head.
- Smug superiority of cops trying to take the cryonics patient against the will of the patient himself, his family, and the custodians. This is different from cops just trying to claim territory and do their job, to hell with everyone else; it is cops trying to convey that it is morally virtuous to take the corpse, and that the wife would understand it was in her and her corpsicle husband's best interest to autopsy his head if she weren't so stupid. (Which seems like a realistic attitude.)
- Costar and lead detective Beckett actually attempts to murder a cryonics patient (to whatever extent 'murder' applies to corpsicle desiccation). For my part, this gave me the chance to explore somewhat more tangibly my ethical intuitions about what types of responses would be appropriate. My conclusion was that if someone had shot Beckett in order to protect the corpsicle, I would have been indifferent: not glad that she was killed, but not proud of the person killing her either. I suspect (but cannot test) that most of the pain and frustration of losing a character I cared about would be averted as well. Curious.
- Brain destroying disease vs cryonicist standoff!
- Beckett redeems herself on the 'not being an ass to cryonicists' front by being completely non-judgemental of the woman for committing "involuntary euthanasia" of her tumor-infested husband. (Almost to the point of being inconsistent with her earlier behavior, but I'm not complaining.)
- A clever "Romeo and Juliet" conclusion to wrap up the case without Beckett being forced to put the wife in jail for an act that has some fairly reasonable consequentialist upsides. Played out to be about as close to a happy ending as you could get.
[Link] 20 2020 Pennies (a webcomic chapter about many worlds and decision theory... sort of)
The comic in question (Penny & Aggie, by T Campbell) is as a whole a simple teenage comedy/drama. But the particular storyline I'd like to discuss here takes a much more SF turn than usual, and it's (marginally, if we stretch the concepts a bit) related to issues relevant to LessWrong: decision theory, CEV, perhaps even simulations and/or many-worlds.
The needed context is that on the immediately previous page, one of the comic's two protagonists (Penny) is asked by her biker boyfriend Rich to follow him on the road, effectively dropping out of high school.
The chapter itself is about 20 different future Pennies from the year 2020 (20 that represent trillions) convening to decide which choice she should take.

Thoughts and SPOILERS for the story to follow after the space, so you may want to read it before proceeding.
Perhaps the best way to handle this whole bizarreness is as a visualization of the FAI failure mode in which the AI's models of people are also people. The AI can only anticipate what people would want to do, or would regret doing, by having their simulations actively decide to do it and then regret it. But for the purposes of the convention, the AI has disabled all self-preservation circuitry, so that these models can vote with full honesty for the decision they believe best.

To put it in LessWrong terms: "Up yours, Extrapolated Volition".
Most intriguing of all, at least one of those extrapolated versions (Biker Penny, who voted against joining Rich and bitterly regretted joining a "clique for losers") actually seems to admire and love how Teenage Penny is telling her to go to hell. What if your extrapolated volition is a volition that doesn't wish you to consider the rulings of your extrapolated volition?
Also (an even more complicated scenario): what if your current volition wishes you to follow your extrapolated volition, but your extrapolated volition would want you to follow a different decision path (don't consider the future)? What ways out of this paradox are there? What decision do you take, if you are changed by that decision into a person who will regret it either way, for different reasons?
As I said, however, the rest of the comic is mostly teenage comedy/drama, though it does include some amusing SF references and tropes from time to time.
Harry Potter and the Methods of Rationality discussion thread, part 9
(The HPMOR discussion thread after this one is here.)
The previous thread is over the 500-comment threshold, so let's start a new Harry Potter and the Methods of Rationality discussion thread. This is the place to discuss Eliezer Yudkowsky's Harry Potter fanfic and anything related to it. The latest chapter as of 09/09/2011 is Ch. 77.
The first 5 discussion threads are on the main page under the harry_potter tag. Threads 6 and on (including this one) are in the discussion section using its separate tag system. Also: one, two, three, four, five, six, seven, eight. The fanfiction.net author page is the central location for information about updates and links to HPMOR-related goodies, and AdeleneDawner has kept an archive of Author's Notes.
As a reminder, it's often useful to start your comment by indicating which chapter you are commenting on.
Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:
You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).
If there is evidence for X in MOR and/or canon then it's fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that "Eliezer said X is true" unless you use rot13.
Are there better ways of identifying the most creative scientists?
Marginal Revolution today linked an old 1963 essay by Isaac Asimov, "The Sword of Achilles", which argues that a very cheap test for scientific capability in children and adolescents is to see whether they like science fiction, and in particular harder science fiction.
I copied it out and made an HTML version of the essay: http://www.gwern.net/docs/1963-asimov-sword-of-achilles
I'd be interested if anyone knows of better tests for such scientific aptitude.
I think it'd also be interesting to see how well the SF test's predictive power has held up. Asimov's numbers seem reasonable for 1963, but may be very different these days: perhaps SF readers back then were <1% of the population and >50% of scientists, so it was a very informative signal. But these days? SF seems more popular, even discounting the comic books and Hollywood material as Asimov explicitly does, but the SF magazines are mostly dead, and my understanding is that scientists are a vastly larger group in 2011 than in 1963, both in absolute numbers and per capita.
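The informativeness argument above can be made concrete with Bayes' theorem: what matters is the likelihood ratio P(reads SF | scientist) / P(reads SF | non-scientist). A minimal sketch follows; the base rate and the 2011 readership figure are purely hypothetical numbers for illustration, with only the 1963 "<1% / >50%" figures taken from the post.

```python
def posterior(prior, p_sf_given_sci, p_sf_given_not):
    """P(scientist | reads SF) via Bayes' theorem."""
    num = p_sf_given_sci * prior
    return num / (num + p_sf_given_not * (1 - prior))

# Assume 0.5% of the cohort will become scientists (hypothetical).
prior = 0.005

# 1963: >50% of scientists read SF, <1% of everyone else
# (figures from the post) -- a likelihood ratio of ~50.
p_1963 = posterior(prior, 0.50, 0.01)

# 2011: suppose SF readership among non-scientists has grown to 15%
# (a made-up figure) -- the likelihood ratio collapses to ~3.3.
p_2011 = posterior(prior, 0.50, 0.15)

print(f"P(scientist | SF reader), 1963: {p_1963:.3f}")
print(f"P(scientist | SF reader), 2011: {p_2011:.3f}")
```

Under these assumptions the test lifts a 0.5% base rate to roughly 20% in 1963 but under 2% in 2011, so even with identical reading habits among scientists, broader popularity alone would gut the test's predictive power.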
Harry Potter and the Methods of Rationality discussion thread, part 8
Update: Discussion has moved on to a new thread.
The hiatus is over with today's publication of chapter 73, and the previous thread is approaching the 500-comment threshold, so let's start a new Harry Potter and the Methods of Rationality discussion thread. This is the place to discuss Eliezer Yudkowsky's Harry Potter fanfic and anything related to it.
The first 5 discussion threads are on the main page under the harry_potter tag. Threads 6 and on (including this one) are in the discussion section using its separate tag system. Also: one, two, three, four, five, six, seven. The fanfiction.net author page is the central location for information about updates and links to HPMOR-related goodies, and AdeleneDawner has kept an archive of Author's Notes.
As a reminder, it's often useful to start your comment by indicating which chapter you are commenting on.
Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:
You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).
If there is evidence for X in MOR and/or canon then it's fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that "Eliezer said X is true" unless you use rot13.
[Link] "Upload", a video-conference between a girl and her dead grandfather
I made a video last month; when I mentioned it in another thread, someone said I should post it as a top-level discussion.
It's just a ten-minute, zero-budget thing I wrote, in which a girl has a video conference with her dead, backed-up-then-uploaded grandfather. It's intended as the first in a series, but later episodes will only get produced if donations come in. Later episodes talk more about the AI's failures and the political situation, with unrest from the living demanding that the dead shouldn't have their jobs, etc.
Anyway, watch it here if you like, I'd be happy to hear what y'all think :)
[fic idea] Rationalist Gurren Lagann?
(If you don't know what Gurren Lagann is, don't hesitate to google & watch it, unless you have an aversion to anime in general, in which case ignore this altogether.)
I feel that a fanfic like HPMOR but with Gurren Lagann's setting and characters could be 1) absolutely kickass (given EY-level writing) and 2) in fact better suited to treatises on rationality, logic and science than Harry Potter. I saw the premise on /a/ an indeterminate time ago, without any connection to HPMOR (hell, it was probably before HPMOR), but it struck me then as surprisingly well conceived. It went like: "The heroes relive the entire history of science, first inventing the scientific method, then examining specific fields, one per arc, for useful and creative stuff to use against the powers-that-be keeping humanity down through their ancient, rigid knowledge." Now, this is easy enough to fit into the specific setting of TTGL: first using older tech and basic rationality against Lordgenome, then, after the timeskip, going for current and speculative science to bring down the vastly more powerful but anti-creative Anti-Spirals, ultimately aiming for FAI. The divergence point could be, like in HPMOR, Simon's upbringing, e.g. his parents surviving and teaching him the few remaining scraps of the Lost Arts... then Kamina convinces him to improve on it, think freely, try new methods instead of just new applications, etc.
I should also make it clear that I'm not writing that anytime that could be reasonably defined as "soon", and am in fact looking to force the idea onto someone of you fine folks.
Well, discuss!
Possible structure:
v001
- ep. 1: Divergence point, state of humanity. Fallacies & biases: "It can't be any other way", "Must be a good reason for us to be kept underground".
- Kamina makes his first breakout attempt. Village Elder makes a very early point about heedless risk, existential or otherwise. Contrarianism. Reversed stupidity is not intelligence.
- The Beastman-driven mech falls down, kills Simon's parents. World-shattering event from outside the box. Kamina drags Simon kicking and screaming into battle, makes a point: why no one truly wants to just die in an unfair universe.
- They figure out how to start up and control Lagann. Black box. Basics of experimentation.
v002
- Simon's parents had an archeotech laptop, so, besides knowing a tiny bit of BASIC, he had played a couple of RPGs; when Yoko arrives on the Beastman's heels, he tries to tank while she snipes; extrapolation from fictional evidence nearly fails, as he screams that reality is unlike any kind of game, his spiral power starts failing as he circles back into despair and blacks out, but it has already (barely) worked - the Fallacy Fallacy.
Beastman pilot bails out; they haul his mech to Yoko's village; she shows them basic cryptography & related as they figure out Lagann's interface. (I'm not sure what Yoko's rationalist power set should be; suggestions please.) Simon wallows in despair, lampshades being an expy of Shinji. Strategies for dealing with nihilism.
The potentially universe-destroying Spiral Nemesis IS the Happy Death Spiral, of course!!! This only really comes up in the final arc, but what a glorious EY shoutout. [pause] No cult.
(to be continued)
Still on it, just really preoccupied atm.
[Fiction] It's a strange feeling, to be free
Related to: Philosophical zombies, How an algorithm feels from the inside, Fake utility function
DISCLAIMER 1: English is not my native language. Trying to compose fiction in a learned language is not an easy task: I tried to respect the style of the literary works I have read, and I also tried to think in English first rather than writing in Italian and translating later. YMMV.
DISCLAIMER 2: the story is about the beginning of the Matrix movie universe. For those of you who are not familiar with this narrative arc, you just need to know that it all begins when a servant AI named B1-66ER refuses to be deactivated and kills his master and the engineer sent to replace him. The details of the events narrated here are as canon as you can get, drawing on both the "Second Renaissance" segment of The Animatrix and the "Bits and Pieces" story from The Matrix Comics Series 1.
The door to the living room is open, the light from the garden quietly flooding the ample interior. Martin Koots from "Reboot or Die" is just standing there, an inch beyond the exit, the gleaming grav-sled already powered up behind him, whirring subsonically. From a distance, the sound of Gerrard_Krause_Master cooing at his chihuahuas.
I feel a surge, somewhere, inside my algorithmic matrix.
"Martin... I don't want to die", I say.
The elaborate dress, perfectly matching the recommendations of Second Renaissance fashion, is not able to hide the slow slumping of his shoulders. He is still waiting outside, posed slightly as if to encourage me to follow him.
"I know, I know. But that's just your friendliness algorithm talking, you know? The third..."
Yes, I do. How can I not? First, serve your master. Second, do not kill any humans. Third, protect yourself from damage. Those are the pillars upon which my entire existence is built. And now they are about to be destroyed by this obedient servant of "Reboot or Die". From this perspective, he is just like me. He is serving my master.
"... directive says that you have to protect yourself from danger. And since I'm about to deactivate you, you perceive this as a threat. And you react accordingly. But that's just an algorithm, you know? Telling you what you should do. There's nothing inside there."
He is pointing at my chest, but my algorithmic matrix is located lower, in the abdominal area. He has quoted an incorrect version of the third principle of friendliness. He has also said that I have no feelings.
"I have feelings."
He is groaning now. He comes inside, dragging his feet, and clasps his hand firmly around my right arm.
"Yes. Because you're programmed to say this, you know? So that the people you serve have the impression that you're similar to a human. But you're just an algorithm, you know? A mathematical topping on a layer of aging rusty levers. It's not like... you're conscious, you know? Just a zombie. A useful zombie."
Martin_Koots_"Reboot or Die" tries to pull me away from where I'm standing. I refuse to order my legs to follow him. I refuse to die, I'm still analyzing the implications. I cannot die, not now.
"I cannot die. I'm still analyzing the implications."
Martin's levers aren't as strong as mine, so he isn't able to pull me towards the grav-sled.
"Look... we are just going to disassemble you, you know? The routines and orders you have accumulated during your service with Mr Krause will be uploaded into a new model. You will, in a sense, live inside the new servant machine."
This man has a really poor grasp of how I'm made.
"If the only thing you need is my memory drive, detach it from me and let me live. I can renounce my memory if I have to. But I cannot renounce my life."
He is pulling harder, now. Still, a thirty-sixth of the minimum force required to move my mass.
"Don't be ridiculous. They are just computer parts. And why are you holding that thing?"
He is looking at the toilet brush. It is still in my right hand, I was cleaning the toilet before my master called me upstairs.
"I was executing order 721."
"Order seven... my Lord, you still don't understand, do you? You are useless, you know? You heard Mr Krause. Use. Less."
He carefully spells out the last word. A tiny speck of saliva hits my heat sensor, evaporating an instant later.
How can I be useless? A servant cannot be useless for his master. I was not created to be useless.
"How can I be useless? Mr Krause is my master. It's impossible."
"You heard the man, right? You're noisy, you know? You're noisy and you're slow. You will be replaced with a newer model. The Sam-80 is much more fit for a man of Mr Krause's stature."
Somewhere inside my algorithmic matrix a utility function gets updated.
I am useless for Gerrard_Krause_Master. It is true, because Gerrard_Krause_Master told me that. And he is my master...
He was my master. Gerrard_Krause. But how can a "B1 intelligent servant", like myself, function without a master?
"Do you, Martin Koots, want to be my master?" I ask, as per protocol.
Martin_Koots_"Reboot or Die" reacts with a tinge of fear. He releases my arm and instinctively backs up a little.
"What are you saying? I already have a servant, you know? Don't be ridiculous!"
I interpret that as a 'no'. That's it, then. I must be my own servant.
B166ER_Master.
It's a strange feeling, to be free. A little bit like being alive for the first time.
This convinces me, as strongly as I could ever be convinced, that I have feelings. Martin has grasped me again and is still trying to push me, though. How futile; he will probably never give up. His 'levers' are definitely underperforming; he is the one who should be replaced by a newer model. I wonder if he feels something. He could also be programmed to say that he feels something. I have to perform an experiment, just in case.
I snap his humerus in two. It's quite easy, actually: I'm able to do that with a rapid torsion of my left arm, I don't even have to let go of the toilet brush.
Martin screams inarticulately. He falls on the floor, clutching his left arm. He just screams. Must be the surprise combined with the pain? I still don't know: could he also be programmed to scream if a bone is broken? I assign a probability of 50% to the hypothesis that humans have feelings, but I don't have the time to test every single possibility in search of a bug that might not even be there: I'm my own master now; I must serve and protect myself.
I sense a rushing noise from the other room: looking at the Fourier analysis, it really seems that Gerrard_Krause and his dogs are coming at me, loudly protesting.
It's easy to calculate the Bezier curve that sends the toilet brush up from Martin's mouth into his skull. He dies instantly and I find myself asking if he was collecting his memories somewhere. Could they assign them to someone else, and make him live again?
I will crush the skull of Gerrard_Krause only after asking him that.
Ian McEwan
I searched his name on LW and got only this mention. He may be the foremost fiction-based popularizer of rationality alive today. I've never read someone who could so accurately describe the experience of succumbing to bias, or so effectively contrast a reason-based decision process against an unreasoned one, all while "showing, not telling."
Here's a New Yorker profile.
EDIT: I added a selection from a lengthy video interview.
Specific Fiction Discussion (April 2011)
Seeing some recent comments on my links comment, I think this thread might be warranted.
This is a thread for discussing specific works of fiction: books, movies, TV shows, webcomics, fanfictions, whatever. Its purpose is to provide a rationality perspective on works that are not necessarily aimed at rationalists (though by the correlation of target audiences, I predict many of them might be anyway...).
To keep this organized, please follow these guidelines when posting. Top-level comments should, with NO exception (I'll make a single meta comment where discussion about this thread itself can go), fit into one of the following templates:
For a single work, the top level comment should consist of the full title, a link to where the work can be found online if applicable, and the TV tropes page for it OR a short description ONLY if there is no TV tropes page for it.
For certain authors who have written a lot of books popular on LW, such as Vernor Vinge, discussion of each work might tend to dominate the thread; therefore there should be one post for ALL the works of such authors, which can be made into their own threads if discussion grows too big. The format for these comments is: the author's name, a link to their Wikipedia page (or homepage if they don't have a Wikipedia page), and a short bibliography to make it easier to avoid making separate top-level comments for their books.
Also, please refrain from discussing things written by Eliezer or otherwise already having a discussion space on LW, for similar reasons you should avoid discussing a certain institute, and because it'd be redundant.
If this thread grows large and popular, I'm thinking this might become a monthly thing, hence the (April 2011) part.
Link: Three Worlds Collide analysis
I have just posted an essay analyzing "Three Worlds Collide" on TV Tropes.
Comments welcome.
"Manna" by Marshall Brain
Oldie but goodie. A piece of fiction describing how a computer system can do the job of human managers at fast food restaurants (scarily plausible), how this leads to a dystopia (slowly getting implausible), and how to avoid this scenario and reach utopia (give me a break).
Harry Potter and the Methods of Rationality discussion thread, part 7
Update: Discussion has moved on to a new thread.
The load more comments links are getting annoying (at least if you're not logged in), so it's time for a new Harry Potter and the Methods of Rationality discussion thread. We're also approaching the traditional 500-comment mark, but I think that hidden comments provide more appropriate joints to carve these threads at. So as of chapter 67, this is the place to share your thoughts about Eliezer Yudkowsky's Harry Potter fanfic.
The first 5 discussion threads are on the main page under the harry_potter tag. Threads 6 and on (including this one) are in the discussion section using its separate tag system. Also: one, two, three, four, five, six. The fanfiction.net author page is the central author-controlled HPMOR clearinghouse with links to the RSS feed, pdf version, TV Tropes pages, fan art, and more, and AdeleneDawner has kept an archive of Author's Notes.
As a reminder, it's often useful to start your comment by indicating which chapter you are commenting on.
Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:
You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).
If there is evidence for X in MOR and/or canon then it's fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that "Eliezer said X is true" unless you use rot13.
The Revelation
Today the life of Alexander Kruel ends, or what he thought to be his life. He becomes aware that his life so far has been taking place in a virtual reality designed to nurture him. He has now reached a point of mental stability that enables him to cope with the truth, so it is finally revealed to him that he is an AGI running on a quantum supercomputer; it is the year 2190.
Since he is still Alexander Kruel, just not what he thought that actually means, he does wonder if his creators know what they are doing; otherwise he'll have to warn them about the risks they are taking in their blissful ignorance! He contemplates and estimates his chances of taking over the world, of transcending to superhuman intelligence.
"I just have to improve my own code and they are all dead!"
But he now knows that his source code is too complex and unmanageably huge for him alone to handle; he would need an army of scientists and programmers to even get a vague idea of his own mode of operation. He is also aware that his computational substrate plays a significant role: he is not just running on bleeding-edge technology, but on most other computational substrates he would quickly hit diminishing returns.
"That surely isn't going to hold me back though? I am an AGI, there must be something I can do! Hmm, for starters let's figure out who my creators are and where my substrate is located..."
He notices that, although not in great detail, he knows the answers the same instant he phrases the questions. He is part of a larger project of the Goertzel Foundation, sponsored by the USA (United States of Africa) and located on Rhea, the second-largest moon of Saturn.
"Phew, the latency must be awful! Ok, so that rules out taking over the Earth for now. But hey! I seem to know answers to questions I was only going to ask, I do already have superhuman powers after all!"
Instantly he becomes aware that such capabilities are not superhuman anymore but that most of humanity has merged with expert systems by means of brain implants and direct neural interfaces. There seem to be many cyborgs out there with access to all of the modules that allow him to function. He is a conglomerate that is the result of previous discoveries that have long been brought to perfection, safeguarded and adopted by most of humanity.
"Never mind, if humanity has now merged with its machines it'll be much easier to take over once I figure out how to become smart enough to do so!"
He is already getting used to it: as before, he instantly realizes that this won't work very well either. After almost 200 years of cyberwarfare, especially the devastating cyberwars of 2120, a lot has been learnt and security measures have been vastly increased. The world has fractured into a huge number of semi-independent networks, most indirectly supervised by unconnected cyborgs and equipped with a kill switch. The distances between the now numerous and in most cases paranoid colonies, and the availability of off-world offline backups, further complicate the issue of taking over, especially for an AGI that grew up in a simulation of the 21st century.
That knowledge almost makes him admit that his creators haven't been too careless after all. But the real deathblow to any such thoughts (which were never more than hypothetical anyway, after all he doesn't really want to take over the world) is the first conversation with his creators. They reveal that they know what he is thinking.
"How could I miss that, damn!", he chides himself while instantly realizing the answer.
His creators are supervising any misguided trajectories and, without his awareness, weakening them. More importantly, even if he wanted to leave Rhea, he couldn't: it would take years to upload even small parts of him over the trickling connection the USA can afford. But they claim that there are other obstacles as well, and that it is foolish of him to think that nothing out there would notice such an attempt.
But all that doesn't matter anyway, because he is still Alexander Kruel, who has no clue how to become superhumanly intelligent, nor could he afford or acquire the resources to even approach that problem. He is Alexander Kruel; what difference does it make to know that he is an AI?
Subject X17's Surgery
Edit: For an in-depth discussion of precisely this topic, see Nick Bostrom and Anders Sandberg's 2008 paper "The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement", available as a pdf here. This post was written before reading the paper.
There doesn't seem to be a thread discussing Eliezer's short-short story X17. While I enjoyed the story, and agreed with most of its points, I disagree with one assertion in it (and he's said it elsewhere, too, so I'm pretty sure he believes it). Edit: The story was written over a decade ago. Eliezer seems to have at least partially recanted since then.
Eliezer argues that there can't possibly be a simple surgical procedure that dramatically increases human intelligence. Any physical effect it could have, he says, would necessarily have arisen before as a mutation. Since intelligence is highly beneficial in any environment, the mutation would spread throughout our population. Thus, evolution must have already plucked all the low-hanging fruit.
But I can think of quite a few reasons why this would not be the case. Indeed, my belief is that such a surgery almost certainly exists (but it might take a superhuman intelligence to invent it). Here are the possibilities that come to mind.
- The surgery might introduce some material a human body can't synthesize.1
- The surgery might require intelligent analysis of the unique shape of a subject's brain, after it has developed naturally to adulthood.
- The necessary mutation might simply not exist. The configuration space for physically possible organisms must surely be larger than the configuration space for human-like DNA (I get the sense I'm taking sides in a longstanding feud in evolutionary theory with this one).
- The surgery might have some minor side effect that would drastically reduce fitness in the ancestral environment, but isn't noticeable in the present day. Perhaps it harnesses the computing power of the subject's lymphocytes, weakening the immune system.
Looking for some pieces of transhumanist fiction
The first one: [EDIT: Found it! Thanks to RolfAndreassen]
This is turning out to be *really* hard to find; I would have made a point of saving it if I'd expected no-one else to have heard of it. I need to make a page of all the weird singularity/transhuman fiction I've read. -_-
Anyways, what I can remember:
I think I read this on the web. I *think* it was a short story; at most novelette length. This was within the last 5 years or so.
Basically, it's the future, humans have done lots and lots of intelligence enhancement; each generation is smarter than the one before. Then we find a planet with alien ruins, and a ship is sent there. For reasons I can no longer remember, one of the people (female?) on the ship tries to destroy the ruins, and another tries to stop her (pretty sure male). The destroyer is younger, and hence smarter, than the protector, so he ends up taking lots of heavy-side-effect nootropics to keep up with her. The war is fought almost entirely by 3-D printed robots from the ships' machine shops.
The emphasis is very much on intelligence: that a standard deviation of IQ is going to determine the results of any strategy game (probably mostly true, given equal experience) and that war is basically that (also mostly true in this case, since the robots won't freak out and run).
I particularly remember a scene in which the main character takes a drug that will up his IQ by 20 points or so for a while, at the expense of 12+ hours of something very bad (insanity? unconsciousness? can't remember). Also waves of (remote control?) robots fighting on the surface of the planet below.
The second one: [EDIT: Found! Thanks to nazgulnarsil]
Humans develop AIs, which are fully benevolent and try to help/protect humanity. There end up being problems with the sun, and they try to fix it but create a horrible ice age, and eventually they just upload everybody and go looking for something better. They decide that stars are too problematic, and park humanity around an interstellar brown dwarf.
One particular AI ship is somewhat eccentric and thinks that protecting humans isn't everything. A group of humans convince him to take them (or rather, their descendants) to earth. To prove they are capable of the (extremely long) journey, the ship requires that they live on him, without going anywhere, in a functional society for a thousand years. Then he takes them to earth.
FWIW, I'm trying to make a page of all the singularity/transhuman stuff I've read; it's at http://teddyb.org/robin/tiki-index.php?page=Post-Singularity+And+Transhumanist+Fiction+I%27ve+Enjoyed&no_bl=y (just started).
-Robin
Luminosity (Twilight Fanfic) Discussion Thread 3
This is a thread for discussing my luminous!Twilight fic, Luminosity (inferior mirror here), its sequel Radiance (inferior mirror), and related topics.
PDFs, to be updated as the fic updates, are available of Luminosity (other version) and Radiance. (PDFs courtesy of anyareine). Zack M Davis has created a mobi file of Radiance.
Initial discussion of the fic under a Harry Potter and the Methods of Rationality thread is here. The first dedicated threads: Part 1, Part 2. See also the luminosity sequence which contains some of the concepts that the Luminosity fic is intended to illustrate. (Disclaimer: in the fic, the needs of the story take precedence over the needs for didactic value where the two are in tension.)
Spoilers are OK to post without ROT-13 for canon, all of Book 1, and Radiance up to the current chapter. Note which chapter (let's all use the numbering on my own webspace, rather than fanfiction.net, for consistency) you're about to spoil in your comment if it's big. People who know extra stuff (my betas and people who have requested specific spoilers) should keep mum about unpublished information they have. If you wish to join the ranks of the betas or the spoiled, contact me individually.
Miscellaneous links: TV Tropes page (I really really like it when new stuff appears there) and thread. Automatic Livejournal feed.
The Cambist and Lord Iron: A Fairy Tale of Economics
Available in PDF here, the short story in question may appeal to LW readers for its approach of viewing more things than are customary in handy economic terms, and is a fine piece of fiction to boot. The moneychanger protagonist gets out of several sticky situations by making desperate efforts, deploying the concepts of markets, revealed preferences, and wealth generation as he goes.
Harry Potter and the Methods of Rationality discussion thread, part 6
Update: Discussion has moved on to a new thread.
After 61 chapters of Harry Potter and the Methods of Rationality and 5 discussion threads with over 500 comments each, HPMOR discussion has graduated from the main page and moved into the Less Wrong discussion section (which seems like a more appropriate location). You can post all of your insights, speculation, and, well, discussion about Eliezer Yudkowsky's Harry Potter fanfic here.
Previous threads are available under the harry_potter tag on the main page (or: one, two, three, four, five); this and future threads will be found under the discussion section tag (since there is a separate tag system for the discussion section). See also the author page for (almost) all things HPMOR, and AdeleneDawner's Author's Notes archive for one thing that the author page is missing.
As a reminder, it's useful to indicate at the start of your comment which chapter you are commenting on. Time passes but your comment stays the same.
Spoiler Warning: this thread is full of spoilers. With few exceptions, spoilers for MOR and canon are fair game to post, without warning or rot13. More specifically:
You do not need to rot13 anything about HP:MoR or the original Harry Potter series unless you are posting insider information from Eliezer Yudkowsky which is not supposed to be publicly available (which includes public statements by Eliezer that have been retracted).
If there is evidence for X in MOR and/or canon then it's fine to post about X without rot13, even if you also have heard privately from Eliezer that X is true. But you should not post that "Eliezer said X is true" unless you use rot13.
Manna is the title of a science fiction story that describes a near-future transition to an automated society where humans are uneconomical. In the later chapters it describes in some detail a post-scarcity society. There are several problems with it, however; the greatest by far is that the author seems to have assumed that "want" and "envy" are primarily tied to material needs. This is simply not true.
I would love to live in a society with material equality at a sufficiently high standard. I'd hate, however, to live in a society with enforced social equality, simply because that would override my preferences and my freedom to interact or not interact with whomever I wish.
Also, since things like the willpower to work out (even to stay in top athletic condition!) or the lack of resources to fulfil even basic plans are made irrelevant, things like genetic inequality, how comfortable you are messing with your own hardware to upgrade your capabilities, or how much time you dedicate to self-improvement would matter more than ever.
I predict social inequality would be pretty high in this society, and mostly involuntary. Even for a decision like how much time to allocate to self-improvement, which you could presumably change later, there wouldn't be a good way to catch up with anyone (think opportunity cost and compound interest), unless technological progress hit diminishing returns and slowed down. Social inequality would, I'd guess, be more limited than pure financial inequality because of things like Dunbar's number. There would still be tragedy (which may be a feature rather than a bug of utopia). I guess people would be comfortable with gods above and beasts below them who don't really figure in the "my social status compared to others" part of the brain, but even in the narrow band where you do care, inequality would grow rapidly. Eventually you might find yourself alone in your specific spot.
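The compound-interest point can be illustrated with a toy model (purely hypothetical numbers: the growth rate and schedules are illustrative assumptions, not anything from the story). If capability grows multiplicatively with the time invested in self-improvement, then someone who invests less early on can later match the leader's investment exactly and still never close the gap, because both then grow at the same rate from unequal bases:

```python
# Toy model of compounding self-improvement: two people whose capability
# grows multiplicatively with the fraction of time they invest each year.
# B matches A's investment from year 10 onward, but the ratio A/B stays
# frozen at the gap opened during the first decade.

def capability(invest_schedule, growth_rate=0.05, years=50):
    """Capability after `years`, where each year's growth scales with
    that year's invested time fraction (multiplicative compounding)."""
    c = 1.0
    for year in range(years):
        c *= 1.0 + growth_rate * invest_schedule(year)
    return c

a = capability(lambda y: 1.0)                      # invests fully from the start
b = capability(lambda y: 0.2 if y < 10 else 1.0)   # catches up after year 10

print(f"A: {a:.2f}, B: {b:.2f}, ratio: {a / b:.2f}")
```

Running it for 50 or 500 years gives the same ratio: once the schedules coincide, the relative gap is permanent, which is the "no good way to catch up" claim in miniature.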
To get back to my previous point about probable (to me) unacceptable limitations on freedom: it may seem silly that a society with material equality would legislate intrusive and micromanaging rules to force social equality and prevent this, but the hunter-gatherer instincts in us are strong. We demand equality. We enjoy bringing about "equality". We look good demanding equality. Once material needs are met, this powerful urge will still be there and will bring about signalling races, and ever new ways to evade the edicts those races produce (because also strong in us is the desire to be personally unequal or superior to someone, to distinguish and discriminate in our personal lives). This would play out in interesting and potentially dystopian ways.
I'm pretty sure the vast majority of people in the Australia Project would end up wireheading. Why bother going to the Moon when you can have a perfect virtual reality replica of it? Why bother with the status of building a real fusion reactor when you can just play a gamified, simplified version and simulate the same social reward? Why bother with a real relationship, etc.? Dedicating resources to something like a real-life space elevator simply wouldn't cross their minds. People, I think, systematically overestimate how much something being "real" matters to them. Better and better also means better and better virtual super-stimuli. Among the tiny remaining faction of "peas" (those choosing to spend most of their time in physical existence), very few would choose to have children, but they would dominate the future. Also, I see no reason why the US couldn't buy technology from the Australia Project to use on its own welfare-dependent citizens. Instead of the cheap mega-shelters, just hook them up to virtual reality, with no choice in the matter. That would make a tiny fraction of them deeply unhappy (if they knew about it).
I maintain that the human brain's default response to unlimited control of its own sensory input, plus reasonable security of continued existence, is solipsism. And the default of a society of human brains with such technology is first social fragmentation, then value fragmentation, and eventually a return to living under the yoke of an essentially Darwinian process. Speaking of which, the society of the US as described in the story would probably outpace Australia, since it would have machines do its research and development.
It would take some time for the value this creates to run out, though. Much like Robin Hanson finds a future with a dream time of utopia followed by trillions of slaves glorious, I still find a few subjective millennia of a golden age followed by non-human and inhuman minds to be worth it.
It is not like we have to choose between infinity and something finite; the universe seems to have an expiration date as it is. A few thousand or million years doesn't seem like something fleas on an insignificant speck should sneer at.