Zubon comments on Failed Utopia #4-2 - Less Wrong

53 Post author: Eliezer_Yudkowsky 21 January 2009 11:04AM


Comment author: Zubon 29 January 2009 11:14:00PM 9 points

Eliezer, since you are rejecting the Wolfean praise, I will take the constructive criticism route. This is not your best writing, but you know that since you spent a night on it.

We have three thousand words here. The first thousand cover disorientation and describe the room and its occupants. The second thousand is a block of exposition from the wrinkled figure. The third thousand is an expression of outrage and despair. Not a horrid structure, although you would want to trim the first and make the second less of a barely interrupted monologue.

As a story, the dominant problem is that the characters are standing in a blank room being told what has already happened, and that "what" is mostly "I learned, then changed things all at once." There have been stories that do "we are just in a room talking" well or badly; the better ones usually either make the "what happened" very active (essentially a frame story) or accept the recumbent position and make it entirely cerebral; the worse ones usually fall into a muddled in-between.

As a moral lesson, the fridge logic keeps hitting you in these comments, notably that this is a pure Pareto improvement for much of the species. Even as a failed utopia, you accept it as a better place from which to work on a real one. And 89.8% want to kill the AI? The next most common objection has been how this works outside heteronormativity, or for a broad range of sexual preferences. Enabling endless non-fatal torture is another winner for "how well did you think that through?" So it is not bad enough to fulfill its intent, its "catch" seems inadequately conceived, and there are other problems that make the whole scenario suspect.

My first thought of a specific way to better fulfill the story's goals would be to tell it from Helen's perspective, or at least put more focus on her and Lisa. You have many male comments of "hey, not bad." They are thinking of their own situations. They are not thinking of their wives and daughters being sexually serviced by boreana. The AI gets one line about this, but Stephen seems more worried about his own fidelity than hers. With a substantially male audience, that is where you want to shove the dagger.

Take it in the other direction by having the AI be helpful to Helen. While she does not want to accept her overwhelming attraction to her crafted partner, the AI wants her to make a clean break so she can be happier. It will gladly tell her how Stephen's partner is more attractive to him than she could ever be, how long it will take for his affection to be alienated, and how rarely he will think about Helen after they have spent more time on different planets than they spent in the same house.

Keep the sense of family separation by either making the child a son or noting that the daughter is somewhere on the planet, happier beyond her mother's control; in either case, note that s/he also woke up with a very attractive member of the opposite sex whose only purpose in life is to please him/her. This could be the point to note those male sexual enhancements, and that monogamy is not what makes everyone happiest, so maybe Lisa wakes up with a few boreana.

And maybe this is just me, but the AI could seem a bit less like the Dungeon Master from the old D&D cartoon.