Epiphany comments on Welcome to Less Wrong! (July 2012) - Less Wrong

20 Post author: ciphergoth 18 July 2012 05:24PM


Comment author: Epiphany 23 February 2013 09:20:52AM *  1 point [-]

What does "living" mean, exactly ?

"Living" the way I used it means "living to the fullest" or, a little more specifically "feeling really engaged in life" or "feeling fulfilled".

I understand that you find your personal creative projects highly enjoyable, and that's great. But you aren't merely saying, "I enjoy X", you're saying, "enjoying Y instead of X is objectively wrong" (if I understand you correctly).

I used "living" to refer to a subjective state. There's nothing objective about it, and IMO, there's nothing objectively right or wrong about having a subjective state that is (even in your own opinion) not as good as the ideal.

I feel like your real challenge here is more similar to Kawoomba's concern. Am I right?

They consume entertainment because it is enjoyable,

Do you find it more enjoyable to passively watch entertainment than to do your own projects? Do you think most people do? If so, might that be because the fun was taken out of learning, or people's creativity was reduced to the point where doing your own project is too challenging, or people's self-confidence was made too dependent on others such that they don't feel comfortable pursuing that fulfilling sense of having done something on their own?

or because it facilitates social contact (which they in turn find enjoyable), not because they believe it will make them more efficient (though see below).

I puzzle at how you classify watching something together as "social contact". To me, being in the same room is not a social life. Watching the same entertainment is not quality time. The social contact I yearn for involves emotional intimacy - contact with the actual person inside, not just a sense of being in the same room watching the same thing. I don't understand how that can be called social contact.

Many people -- yourself not among them, admittedly -- find that they are able to internalize new ideas much more thoroughly if these ideas are tied into a narrative.

I've been thinking about this and I think what might be happening is that I make my own narratives.

Similarly, other people find it easier to communicate their ideas in the form of narratives

This, I can believe about Eliezer. There are places where he could have been more incisive but instead gets wordy to compensate. That's an interesting point.

I am just not convinced that this statement applies to anything like a majority of "person+idea" combinations.

Okay, so to clarify, your position is that entertainment is a more efficient way to learn?

Comment author: Bugmaster 24 February 2013 09:59:38PM 2 points [-]

"Living" the way I used it means "living to the fullest" or, a little more specifically "feeling really engaged in life" or "feeling fulfilled".

I understand that you do not feel fulfilled when watching TV, but other people might. I would agree with your reply on Kawoomba's sub-thread:

Now, if you want to disagree with me on whether they think they are "really living", that might be really interesting. I acknowledge that mind projection fallacy might be causing me to think they want what I want.

For better or for worse, passive entertainment such as movies, books, TV shows, music, etc., is a large part of our popular culture. You say:

I puzzle at how you classify watching something together as "social contact". To me, being in the same room is not a social life.

Strictly speaking this is true, but people usually discuss the things they watch (or read, or listen to, etc.) with their friends or, with the advent of the Internet, even with random strangers. The shared narratives thus facilitate the "emotional intimacy" you speak about. Furthermore, some specific works of passive entertainment, as well as generalized common tropes, make up a huge chunk of the cultural context without which it would be difficult to communicate with anyone in our culture on an emotional level (as opposed to, say, presenting mathematical proofs or engineering schematics to each other).

For example, if you take a close look at various posts on this very site, you will find references to the genres of science fiction and fantasy, as well as media such as movies or anime, which the posters simply take for granted (sometimes too much so, IMO; f.ex., not everyone knows what "tsuyoku naritai" means right off the bat). A person who did not share this common social context would find it difficult to communicate with anyone here.

Note, though, that once again I am describing a situation that exists, not prescribing a behavior. In terms of raw productivity per unit of time, I cannot justify any kind of entertainment at all. While it is true that entertainment has been with us since the dawn of civilization, so has cancer; just because something is old, doesn't mean that it's good.

Okay, so to clarify, your position is that entertainment is a more efficient way to learn?

No, this phrasing is too strong. I meant what I said before: many people find it easier to internalize new ideas when they are presented as part of a narrative. This does not mean that entertainment is a more efficient way to learn all things for all people, or that it is objectively the best technique for learning things, or anything of the sort.

Comment author: Desrtopa 28 February 2013 06:14:32AM 2 points [-]

Note, though, that once again I am describing a situation that exists, not prescribing a behavior. In terms of raw productivity per unit of time, I cannot justify any kind of entertainment at all. While it is true that entertainment has been with us since the dawn of civilization, so has cancer; just because something is old, doesn't mean that it's good.

Why try to justify entertainment in terms of productivity per time? Is there any reason this makes more sense than, say, justifying productivity in terms of how much entertainment it allows for?

Comment author: Bugmaster 28 February 2013 10:07:38AM 1 point [-]

Presumably, if your goal is to optimize the world, or to affect any part of it besides yourself in a non-trivial way, you should strive to do so as efficiently as possible. This means that spending time on any activities that do not contribute to this goal is irrational. A paperclip maximizer, for example, wouldn't spend any time on watching soap operas or reading romance novels -- unless doing so would lead to more paperclips (which is unlikely).

Of course, one could argue that consumption of passive entertainment does contribute to the average human's goals, since humans are unable to function properly without some downtime. But I don't know if I'd go so far as to claim that this is a feature, and not a bug, just like cancer or aging or whatever else evolution had saddled us with.

Comment author: RichardKennaway 28 February 2013 02:38:05PM 3 points [-]

Presumably, if your goal is to optimize the world, or to affect any part of it besides yourself in a non-trivial way, you should strive to do so as efficiently as possible.

A decision theory that leads to the conclusion that we should all work like slaves for a future paradise, the slightest lapse incurring a cost equivalent to untold numbers of dead babies, and the enormity of the task meaning that we shall never experience it ourselves, is prima facie a broken decision theory. I'd even call it the sort of toxic mindwaste that RationalWiki loves to mock.

Once you've built that optimised world, who gets to slack off and just live in it, and how will they spend their time?

Comment author: Viliam_Bur 28 February 2013 08:05:02PM *  3 points [-]

A decision theory that leads to the conclusion that we should all work like slaves for a future paradise, the slightest lapse incurring a cost equivalent to untold numbers of dead babies, and the enormity of the task meaning that we shall never experience it ourselves, is prima facie a broken decision theory.

Why exactly? I mean, my intuition also tells me it's wrong... but my intuition has a few assumptions that disagree with the proposed scenario. Let's make sure the intuition does not react to a strawman.

For example, when in real life people "work like slaves for a future paradise", the paradise often does not happen. Typically, the people have a wrong model of the world. (The wrong model is often provided by their leader, and their work in fact results in building their leader's personal paradise, nothing more.) And even if their model is right, their actions are more optimized for signalling effort than for real efficiency. (Working very hard signals more virtue than thinking and coming up with a smart plan to make a lot of money and pay someone else to do more work than we could.) Even with smart and honest people, there will typically be something they ignored or could not influence, such as someone powerful coming and taking the results of their work, or a conflict starting and destroying their seeds of the paradise. Or simply their internal conflicts, or lack of willpower to finish what they started.

The lesson we should take from this is that even if we have a plan to work like slaves for a future paradise, there is very high prior probability that we missed something important. Which means that in fact we do not work for a future paradise, we only mistakenly think so. I agree that the prior probability is so high that even the most convincing reasoning and plans are unlikely to outweigh it.

However, for the sake of experiment, imagine that Omega comes and tells you that if you work like a slave for the next 20 or 50 years, the future paradise will happen with probability almost 1. You don't have to worry about mistakes in your plans, because either Omega verified their correctness, or is going to provide you corrections when needed and predicts that you will be able to follow those corrections successfully. Omega also predicts that if you commit to the task, you will have enough willpower, health, and other necessary resources to complete it successfully. In this scenario, is committing to the slave work a bad decision?

In other words, is your objection "in situation X the decision D is wrong", or is it "the situation X is so unlikely that any decision D based on assumption of X will in real life be wrong"?

Comment author: RichardKennaway 28 February 2013 10:52:46PM 1 point [-]

However, for the sake of experiment, imagine that Omega comes and tells you

When Omega enters a discussion, my interest in it leaves.

Comment author: wedrifid 01 March 2013 09:44:58AM 1 point [-]

When Omega enters a discussion, my interest in it leaves.

To the extent that someone is unable to use established tools of thought to focus attention on the important aspects of the problem, their contribution to a conversation is likely to be negative. This is particularly the case when it comes to decision theory, where it correlates strongly with pointless fighting of the counterfactual and muddled thinking.

Comment author: RichardKennaway 08 March 2013 11:29:43PM 1 point [-]

Omega has its uses and its misuses. I observe the latter on LW more often than the former. The present example is one such.

And in future, if you wish to address a comment to me, I would appreciate being addressed directly, rather than with this pseudo-impersonal pomposity.

Comment author: wedrifid 09 March 2013 01:24:26AM *  2 points [-]

And in future, if you wish to address a comment to me, I would appreciate being addressed directly, rather than with this pseudo-impersonal pomposity.

I intended the general claim as stated. I don't know you well enough for it to be personal. I will continue to support the use of Omega (and simplified decision theory problems in general) as a useful way to think.

For practical purposes pronouncements like this are best interpreted as indications that the speaker has nothing of value to say on the subject, not as indications that the speaker is too sophisticated for such childish considerations.

Comment author: Peterdjones 09 March 2013 09:40:22AM 0 points [-]

It is counterintuitive that you should slave for people you don't know, perhaps because you can't be sure you are serving their needs effectively. Even if that objection is removed by bringing in an omniscient oracle, there still seems to be a problem, because the prospect of one generation slaving to create paradise for another isn't fair. The simple version of utilitarianism being addressed here only sums individual utilities, and is blind to things that can only be defined at the group level, like justice and equality.

Comment author: [deleted] 01 March 2013 12:59:39PM 0 points [-]

However, for the sake of experiment, imagine that Omega comes and tells you that if you work like a slave for the next 20 or 50 years, the future paradise will happen with probability almost 1. You don't have to worry about mistakes in your plans, because either Omega verified their correctness, or is going to provide you corrections when needed and predicts that you will be able to follow those corrections successfully. Omega also predicts that if you commit to the task, you will have enough willpower, health, and other necessary resources to complete it successfully. In this scenario, is committing to the slave work a bad decision?

For the sake of experiment, imagine that air has zero viscosity. In this scenario, would a feather and a cannon ball fall at the same time?

Comment author: Bugmaster 01 March 2013 10:11:53PM 0 points [-]

For the sake of experiment, imagine that air has zero viscosity. In this scenario, would a feather and a cannon ball fall at the same time?

I believe the answer is "yes", but I had to think about that for a moment. I'm not sure how that's relevant to the current discussion, though.
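(As an editorial aside, the "yes" here follows from the fall time being independent of mass once drag is removed; a minimal sketch, assuming Earth-surface gravity and the standard kinematics of free fall:)

```python
import math

def fall_time(height_m, g=9.81):
    """Time for an object to fall height_m in vacuum: t = sqrt(2h/g).

    Mass does not appear anywhere, because the acceleration
    a = F/m = (m*g)/m = g cancels it out. So a feather and a
    cannon ball dropped together land together.
    """
    return math.sqrt(2 * height_m / g)

# The same height gives the same time for any object, heavy or light.
t = fall_time(10.0)
```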

I think your real point might be closer to something like, "thought experiments are useless at best, and should thus be avoided", but I don't want to put words into anyone's mouth.

Comment author: [deleted] 02 March 2013 11:57:35AM 0 points [-]

My point was something like, “of course if you assume away all the things that cause slave labour to be bad then slave labour is no longer bad, but that observation doesn't yield much of an insight about the real world”.

Comment author: Bugmaster 04 March 2013 09:13:25PM 0 points [-]

That makes sense, but I don't think it's what Viliam_Bur was talking about. His point, as far as I could tell, was that the problem with slave labor is the coercion, not the labor itself.

Comment author: Jack 09 March 2013 01:45:32AM 2 points [-]

"Decision theory" doesn't mean the same thing as "value system" and we shouldn't conflate them.

Comment author: Peterdjones 09 March 2013 09:51:37AM 1 point [-]

Yep. A morality that leads to the conclusion that we should all work like slaves for a future paradise, the slightest lapse incurring a cost equivalent to untold numbers of dead babies, and the enormity of the task meaning that we shall never experience it ourselves, is prima facie a broken morality.

Comment author: Bugmaster 28 February 2013 04:48:02PM 1 point [-]

A decision theory that leads to the conclusion that we should all work like slaves for a future paradise ... is prima facie a broken decision theory.

Why ? I mean, I do agree with you personally, but I don't see why such a decision theory is objectively bad. You ask,

Once you've built that optimised world, who gets to slack off and just live in it, and how will they spend their time?

But the answer depends entirely on your goals. These can be as modest as, "the world will be just like it is today, but everyone wears a party hat". Or it could be as ambitious as, "the world contains as many paperclips as physically possible". In the latter case, if you asked the paperclip maximizer "who gets to slack off ?", it wouldn't find the question relevant in the least. It doesn't matter who gets to do what, all that matters are the paperclips.

You might argue that a paperclip-filled world would be a terrible place, and I agree, but that's just because you and I don't value paperclips as much as Clippy does. Clippy thinks your ideal world is terrible too, because it contains a bunch of useless things like "happy people in party hats", and not nearly enough paperclips.

However, imagine if we ran two copies of Clippy in a grand paperclipping race: one that consumed entertainment by preference, and one that did not. The non-entertainment version would win every time. Similarly, if you want to make the world a better place (whatever that means for you), every minute you spend on doing other things is a minute wasted (unless they are explicitly included in your goals). This includes watching TV, eating, sleeping, and being dead. Some (if not all) of such activities are unavoidable, but as I said, I'm not sure whether it's a bug or a feature.

Comment author: RichardKennaway 28 February 2013 05:52:19PM 3 points [-]

However, imagine if we ran two copies of Clippy in a grand paperclipping race: one that consumed entertainment by preference, and one that did not. The non-entertainment version would win every time.

This is proving the conclusion by assuming it.

Similarly, if you want to make the world a better place (whatever that means for you), every minute you spend on doing other things is a minute wasted (unless they are explicitly included in your goals). This includes watching TV, eating, sleeping, and being dead. Some (if not all) of such activities are unavoidable, but as I said, I'm not sure whether it's a bug or a feature.

The words make a perfectly logical pattern, but I find that the picture they make is absurd. The ontology has gone wrong.

Some businessman wrote a book of advice called "Never Eat Alone", the title of which means that every meal is an opportunity to have a meal with someone to network with. That is what the saying "he who would be Pope must think of nothing else" looks like in practice. Not wearing oneself out like Superman in the SMBC cartoon, driven into self-imposed slavery by memetic immune disorder.

BTW, for what it's worth, I do not watch TV. And now I am imagining a chapter of that book entitled "Never Sleep Alone".

Comment author: ygert 28 February 2013 05:58:01PM *  7 points [-]

Some businessman wrote a book of advice called "Never Eat Alone", the title of which means that every meal is an opportunity to have a meal with someone to network with. That is what the saying "he who would be Pope must think of nothing else" looks like in practice. Not wearing oneself out like Superman in the SMBC cartoon, driven into self-imposed slavery by memetic immune disorder.

Actually, I think that the world described in that SMBC cartoon is far preferable to the standard DC comics world with Superman. I do not think that doing what Superman did there is a memetic immune disorder, but rather a (successful) attempt to make the world a better place.

Comment author: RichardKennaway 28 February 2013 06:37:19PM 1 point [-]

You would, then, not walk away from Omelas?

Comment author: Desrtopa 28 February 2013 07:05:28PM 10 points [-]

I definitely wouldn't. A single tormented child seems to me like an incredibly good tradeoff for the number of very high quality lives that Omelas supports, much better than we get with real cities.

It sucks to actually be the person whose well-being is being sacrificed for everyone else, but if you're deciding from behind a veil of ignorance which society to be a part of, your expected well being is going to be higher in Omelas.
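(The veil-of-ignorance arithmetic behind this can be made explicit; the population size and utility values below are made-up illustrative numbers, not anything from the story:)

```python
# Toy comparison of expected well-being behind a veil of ignorance:
# "Omelas" has one tormented person and everyone else thriving;
# the "ordinary city" gives everyone a mediocre life.
# All utilities and counts are assumed numbers for illustration.

def expected_wellbeing(population, utilities_and_counts):
    """Average utility if you are assigned a life uniformly at random."""
    total = sum(u * n for u, n in utilities_and_counts)
    return total / population

N = 100_000
omelas = expected_wellbeing(N, [(-100.0, 1), (90.0, N - 1)])
ordinary = expected_wellbeing(N, [(50.0, N)])

# Under these assumptions, one very bad life barely dents the average,
# so the expected well-being in Omelas comes out higher.
```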

Back when I was eleven or so, I contemplated this, and made a precommitment that if I were ever in a situation where I'm offered a chance to improve total welfare for everyone at the cost of personal torment, I should take it immediately without giving myself any time to contemplate what I'd be getting myself into, so in that sense I've effectively volunteered myself to be the tormented child.

I don't disagree with maximally efficient altruism, just with the idea that it's sensible to judge entertainment only as an instrumental value in service of productivity.

Comment author: Bugmaster 28 February 2013 07:58:45PM 0 points [-]

This is proving the conclusion by assuming it.

How so ? Imagine that you have two identical paperclip maximizers; for simplicity's sake, let's assume that they are not capable of radical self-modification (though the results would be similar if they were). Each agent is capable of converting raw titanium to paperclips at the same rate. Agent A spends 100% of its time on making paperclips. Agent B spends 80% of its time on paperclips, and 20% of its time on watching TV. If we gave A and B two identical blocks of titanium, which agent would finish converting all of it to paperclips first ?
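(The race described above is simple enough to write down; the conversion rate, block size, and work fractions here are arbitrary numbers chosen only to illustrate the argument:)

```python
# Two hypothetical paperclip maximizers with identical conversion rates.
# Agent A works 100% of the time; agent B spends 20% of its time on TV.
# Rate and titanium amount are made-up illustrative values.

def time_to_finish(titanium_kg, rate_kg_per_hour, work_fraction):
    """Wall-clock hours to convert all titanium into paperclips."""
    effective_rate = rate_kg_per_hour * work_fraction
    return titanium_kg / effective_rate

titanium = 1000.0   # kg given to each agent
rate = 10.0         # kg/hour conversion rate, same for both agents

t_a = time_to_finish(titanium, rate, 1.0)   # agent A: no entertainment
t_b = time_to_finish(titanium, rate, 0.8)   # agent B: 20% of time on TV

print(t_a, t_b)  # → 100.0 125.0 — the full-time agent finishes first
```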

That is what the saying "he who would be Pope must think of nothing else" looks like in practice.

FeepingCreature addressed this better than I could in this comment. I understand that you find the idea of making paperclips (or political movements, or software, or whatever) all day every day with no breaks abhorrent, and so do I. But then, some people find polyamory abhorrent as well, and then they "polyhack" themselves and grow to enjoy it. Is entertainment your terminal value, or a mental bias ? And if it is a terminal value, is it the best terminal value that you could possibly have ?

Comment author: RichardKennaway 01 March 2013 12:00:10AM *  1 point [-]

WARNING: This comment contains explicit discussion of an information hazard.

Imagine that you have two identical paperclip maximizers

I decline to do so. What imaginary creatures would choose whose choice has been written into their definition is of no significance. (This is also a reply to the comment of FeepingCreature you referenced.) I'm more interested in the practical question of how actual human beings, which this discussion began with, can avoid the pitfall of being taken over by a utility monster they've created in their own heads.

This is a basilisk problem. Unlike Roko's, which depends on exotic decision theory, this one involves nothing more than plain utilitarianism. Unlike the standard Utility Monster scenario, this one involves no imaginary entities or hypothetical situations. You just have to look at the actual world around you through the eyes of utilitarianism. It's a very short road from the innocent-sounding "the greatest good for the greatest number" to this: There are seven billion people on this planet. How can the good you could do them possibly be outweighed by any amount of your own happiness? Just by sitting there reading LessWrong you're killing babies! Having a beer? You're drinking dead babies. Own a car? You're driving on a carpet of dead babies! Murderer! Murderer! Add a dash of transhumanism and you can up the stakes to an obligation to bringing about billions of billions of future humans throughout the universe living lives billions of times better than ours.

But even Peter Singer doesn't go that far, continuing to be an academic professor and paying his utilitarian obligations by preaching utilitarianism and donating twenty percent of his salary to charity.

This is such an obvious failure mode for utilitarianism, a philosophy at least two centuries old, that surely philosophers must have addressed it. But I don't know what their responses are.

Christianity has the same problem, and handles it in practice by testing the vocation of those who come to it seeking to devote their whole life to the service of God, to determine whether they are truly called by God. For it is written that many are called, yet few are chosen. In non-supernatural terms, that means determining whether the applicant is psychologically fitted for the life they feel called to, and if not, deflecting their mania into some more productive route.

Comment author: TheOtherDave 01 March 2013 03:30:12AM 3 points [-]

Consider two humans, H1 and H2, both utilitarians.

H1 looks at the world the way you describe Peter Singer here.
H2 looks at the world "through the eyes of utilitarianism" as you describe it here.

My expectation is that H1 will do more good in their lifetime than H2.
What's your expectation?

Comment author: Eliezer_Yudkowsky 01 March 2013 06:52:18PM 0 points [-]

Infohazard reference with no warning sign. Edit and reply to this so I can restore.

Comment author: [deleted] 09 March 2013 11:39:26AM *  0 points [-]

(Warning: replying to discussion of a potential information hazard.)

Whfg ol fvggvat gurer ernqvat YrffJebat lbh'er xvyyvat onovrf! Univat n orre? Lbh'er qevaxvat qrnq onovrf.

Gung'f na rknttrengvba (tvira gung ng gung cbvag lbh unqa'g nqqrq zragvbarq genafuhznavfz lrg) -- nf bs abj, vg'f rfgvzngrq gb gnxr zber guna gjb gubhfnaq qbyynef gb fnir bar puvyq'f yvsr jvgu Tvirjryy'f gbc-engrq punevgl. (Be vf ryrpgevpvgl naq orre zhpu zber rkcrafvir jurer lbh'er sebz?)

Comment author: Bugmaster 01 March 2013 01:04:25AM 0 points [-]

What imaginary creatures would choose whose choice has been written into their definition is of no significance.

Are you saying that human choices are not "written into their definition" in some measure ?

Also, keep in mind that a goal like "make more paperclips" does leave a lot of room for other choices. The agent could spend its time studying metallurgy, or buying existing paperclip factories, or experimenting with alloys, or attempting to invent nanotechnology, or some combination of these and many more activities. It's not constrained to just a single path.

Just by sitting there reading LessWrong you're killing babies! ... Add a dash of transhumanism and you can up the stakes to an obligation to bringing about billions of billions of future humans throughout the universe living lives billions of times better than ours.

On the one hand, I do agree with you, and I can't wait to see your proposed solution. On the other hand, I'm not sure what this has to do with the topic. I wasn't talking about billions of future humans or anything of the sort, merely about a single (semi-hypothetical) human and his goals; whether entertainment is a terminal or instrumental goal; and whether it is a good goal to have.

Let me put it in a different way: if you could take a magic pill which would remove (or, at the very least, greatly reduce) your desire for passive entertainment, would you do it ? People with extremely low preferences for passive entertainment do exist, after all, so this scenario isn't entirely fantastic (other than for the magic pill part, of course).

Comment author: whowhowho 09 March 2013 12:55:24AM 0 points [-]

If we gave A and B two identical blocks of titanium, which agent would finish converting all of it to paperclips first ?

That has no relevance to morality. Morality is not winning, is not efficiently fulfilling an arbitrary UF.

Comment author: IlyaShpitser 28 February 2013 04:55:25PM *  1 point [-]

I mean, I do agree with you personally, but I don't see why such a decision theory is objectively bad.

This decision theory is bad because it fails the "Scientology test."

Comment author: FeepingCreature 28 February 2013 05:32:07PM *  3 points [-]

That's hardly objective. The challenge is to formalize that test.

Btw: the problem you're having is not due to any decision theory but due to the goal system. You want there to be entertainment and fun and the like. However, the postulated agent had a primary goal that did not include entertainment and fun. This seems alien to us, but for the mindset of such an agent "eschew entertainment and fun" is the correct and sane behavior.

Comment author: Bugmaster 28 February 2013 08:14:26PM 0 points [-]

Exactly, though see my comment on a sibling thread.

Out of curiosity though, what is the "Scientology test" ? Is that some commonly-accepted term from the Less Wrong jargon ? Presumably it doesn't involve poorly calibrated galvanic skin response meters... :-/

Comment author: FeepingCreature 01 March 2013 07:06:27PM *  2 points [-]

Not the commenter, but I think it's just "it makes you do crazy things, like scientologists". It's not a standard LW thing.

Comment author: [deleted] 01 March 2013 12:54:08PM 0 points [-]

if your goal is to optimize the world

Optimize it for what?

Comment author: Bugmaster 01 March 2013 04:46:57PM 1 point [-]

That is kind of up to you. That's the problem with terminal goals...

Comment author: [deleted] 09 March 2013 01:02:06PM *  0 points [-]

For better or for worse, passive entertainment such as movies, books, TV shows, music, etc., is a large part of our popular culture.

<nitpick>Music is only passive entertainment if you just listen to it, not if you sing it, play it, or dance to it.</nitpick>

Strictly speaking this is true, but people usually discuss the things they watch (or read, or listen to, etc.) with their friends or, with the advent of the Internet, even with random strangers. The shared narratives thus facilitate the "emotional intimacy" you speak about. Furthermore, some specific works of passive entertainment, as well as generalized common tropes, make up a huge chunk of the cultural context without which it would be difficult to communicate with anyone in our culture on an emotional level (as opposed to, say, presenting mathematical proofs or engineering schematics to each other).

I agree that people spend lots of time talking about these kinds of things, and that the more shared topics of conversation you have with someone the easier it is to socialize with them, but I disagree that there are few non-technical things one can talk about other than what you get from passive entertainment. I seldom watch TV/films/sports, but I have plenty of non-technical things I can talk about with people -- parties we've been to, people we know, places we've visited, our tastes in food and drinks, unusual stuff that happened to us, what we've been doing lately, our plans for the near future, ranting about politics, conspiracy theories, the freakin' weather, whatever -- and I'd consider talking about some of these topics to build more ‘emotional intimacy’ than talking about some Hollywood movie or the Champions League or similar. (Also, I take exception to the apparent implication of the parenthetical at the end of the paragraph -- it is possible to entertain people by talking about STEM topics, if you're sufficiently Feynman-esque about that.)

For example, if you take a close look at various posts on this very site, you will find references to the genres of science fiction and fantasy, as well as media such as movies or anime, which the posters simply take for granted (sometimes too much so, IMO; f.ex., not everyone knows what "tsuyoku naritai" means right off the bat). A person who did not share this common social context would find it difficult to communicate with anyone here.

I have read very little of that kind of fiction, and still I haven't felt excluded by that in the slightest (well, except that one time when the latest HPMOR thread clogged up the top Discussion comments of the week when I hadn't read HPMOR yet, and the occasional Discussion threads about MLP -- but that's a small minority of the time).

Comment author: Bugmaster 24 February 2013 10:40:53PM 0 points [-]

This article, courtesy of the recent Seq Rerun, seems serendipitous:

http://lesswrong.com/lw/yf/moral_truth_in_fiction/