
Comment author: Eliezer_Yudkowsky 20 September 2009 08:33:04PM 6 points

The story is an alternate history of A Fire Upon the Deep, but it's a sequel to Permutation City - it's not an alternative to Egan's ending, but something that could have happened after Egan's ending took place as written.

Comment author: swestrup 23 September 2009 03:41:51AM 13 points

Oh, I understood that. Except that your explanation of what happened at the end of Permutation City made sense, whereas the way that story actually ended did not. Hence I prefer your explanation of the ending of Permutation City to the one provided in the book.

Comment author: swestrup 20 September 2009 08:13:25PM 6 points

I really enjoyed the story, and I have to say that I prefer your ending for Permutation City to the one that Egan wrote.

In response to comment by swestrup on The Sword of Good
Comment author: Aurini 05 September 2009 09:17:41AM * 19 points

Huh; I thought my browser had failed, and this post hadn't appeared. Anyway...

There's an old army saying: "Being in the army ruins action movies for you." I feel the same way about 'sci-fi': aside from season 3, every episode of Torchwood (which I've recently started watching, now that I've finished The Sopranos) is driving me up the wall. I propose a corollary:

"Understanding philosophical materialism and the implications thereof ruins 99% of Science Fiction... and don't get me started on Fantasy!"

In my opinion, there are three essential rules to Fantasy:

  1. The protagonist is a priori important; by their very nature they have metaphysical relevance (even though they don't know it yet!). All other characters are living their rightful and deserved lives, unless they are living below their means with a Heart of Gold.

  2. The scientific method (hypothesis, experiment, conclusion, theory) only works in the immediate sense, not the broad sense; your immediate world will be logical, but the world as a whole is incomprehensible. You can only build machines if a) they already exist; or b) they serve no practical purpose. Magic, on the other hand, generally works as intended; the human will guides it, and it can only be contravened by another magical authority (a navigation spell will not require knowledge of the local plant life, nor will it require accurate grid coordinates given a non-simultaneous Relativistic geometry). If magic doesn't work as the protagonists intend, it is working under a higher moral power.

  3. There is an abstract and absolute division between Right and Wrong; somebody is keeping score, and no actions are hidden. Your evil acts might escape the notice of the local authorities, but they will show through in your bearing, your beauty, or your image.

Heh, this might be worth a top-level post, except TV Tropes has covered it all already.

In response to comment by Aurini on The Sword of Good
Comment author: swestrup 07 September 2009 10:51:51AM 0 points

I agree, which is why I tend to shy away from performing a moral analysis of Fantasy stories in the first place. That way lies a bottomless morass.

In response to comment by swestrup on The Sword of Good
Comment author: Aurini 04 September 2009 03:01:33AM 7 points

Though it wasn't explicitly stated, it was heavily implied that either choice would be for a potentially infinite duration. This is a world of fantasy and prophecy, after all: I got the impression that the current social order was stable, and given that there was magic (not psychic ability, but magic) it's also fair to assume that the scientific method doesn't work (not that this makes any sense, but you have to suspend that disbelief for magic to work [gnomes are still allowed to build complex machines, they're just not allowed to build useful ones]).

The way I interpreted it was that he had a choice between the status quo for 1000 years and an unknown change, guided by good intentions, for 1000 years.

Besides, the Big Bad was a Marty Stu. How could I not side with him?

(Another great work, Yudkowsky - you really should send one of these to Asimov's Science Fiction.)

In response to comment by Aurini on The Sword of Good
Comment author: swestrup 04 September 2009 04:47:28AM 0 points

Interesting. It's hard to reconstruct my reasoning exactly, but I think I assumed that things I didn't know were simply things I didn't know, and based my answer on the range of possibilities -- good and bad.

In response to comment by swestrup on The Sword of Good
Comment author: Kaj_Sotala 03 September 2009 06:26:50PM 6 points

"Does no one else have trouble deciding which is the lesser problem?"

I gathered that the choice being a difficult one was the whole point. It's not a genuine choice if the right choice is obvious; that much was explicitly stated.

You say it "clearly" wasn't a Choice Between Good and Evil, but I don't think that's clear. One choice might still have a good outcome and the other an evil one. It's just that we don't know which one is which.

Comment author: swestrup 03 September 2009 09:21:34PM * 3 points

I would say that the likelihood is overwhelming that BOTH choices will lead to bad ends. The only question is: which is worse? That's why I was saying it was between two evils.

Besides, it's hard to reconcile the concept of 'Good' with a single flawed individual deciding the fate of the world, possibly for an infinite duration. The entire situation is inherently evil.

In response to The Sword of Good
Comment author: swestrup 03 September 2009 03:44:32PM * 19 points

My first impression of this story was very positive, but as it invites us to ask moral questions about the situation, I find myself doing so and having serious doubts about the moral choices offered.

First of all, it appears to be a choice between two evils, not between evil and good. On one hand is a repressive, king-based, classist society that is undeniably built on socially evil underpinnings. On the other hand, we have an absolute, unquestionable tyranny that plans to do good. Does no one else have trouble deciding which is the lesser problem?

Secondly, we know for a fact that, in our world, kingdoms and repressive regimes sometimes give way to more enlightened states, and we don't know enough about this world even to know how many different kingdoms there are, or what states of enlightenment exist elsewhere. For all we know, things are on the verge of a (natural) revolution. We can't say much about rule by an infinite power, having no examples to hand, but there is the statement that "power corrupts". Now, I'm not going to say that this is inevitable, but I have at least to wonder whether an integral over total sentient happiness going forward is higher under the old regime and its successors, or under the Infinite Doom regime.

Finally, the hero is big into democracy. Where in either of these choices does the will of the peasants fit in?

EDIT: One more point I wanted to add: since it's clearly not a Choice Between Good and Evil as the prophecy states, why assume there is a choice at all, or that there are only two options? Would not a truly moral person look for a third alternative?

Comment author: SilasBarta 16 July 2009 07:48:39PM * 32 points

Okay, now for my attempt to actually answer the prompt:

Your supposed "taste" for alcoholic beverages is a lie.

Summary: I've never enjoyed the actual process of drinking alcohol in the way that I e.g. enjoy ice cream. (The effects on my mind are a different story, of course.)

So for a long time I thought that, hey, I just have weird taste buds. Other people really like beer/wine/etc.; I don't. No biggie.

But then, as time went by, I saw all the data about how wine-tasting "experts" can't even agree on which wine is best the moment you start using scientific controls. And then I started asking people about the particulars of why they like alcohol. It turns out that when it comes to any implications of "I like alcohol", I have the exact same characteristics as those who claim to like alcohol.

For example, there are people who insist that, yes, I must like alcohol, because, well, what about Drink X, which has low alcohol content and is heavily loaded with flavoring I'd like anyway? And wine experts would tell me that, on taste alone, ice cream wins. And defenses of drinking one's favorite beverage always morph into "well, it helps to relax..."

So, I came to the conclusion that people have the very same taste for alcohol that I do; it's just that they need to cook up rationalizations for getting high. Still trying to find counterevidence...

Your turn: convince me that you really, really like the taste of [alcoholic beverage that happens to also signal your social status].

Comment author: swestrup 17 July 2009 04:41:45AM 0 points

I rather enjoy the taste of a Brown Cow, which is Crème de Cacao in milk. Then again, I'm sure I'd prefer a proper milkshake. Generally, if I drink an alcoholic beverage it's for the side effects.

Comment author: kpreid 02 July 2009 11:07:03PM 2 points

I found more value in “maybe I need to set up a blog of things I have read that I think are true” than in the extremely broad topic of “harness your biases”. If I were editing the article I would throw out that topic and keep the particular notion of improving your knowledge by preparing it for publication.

Comment author: swestrup 03 July 2009 09:27:54AM 0 points

Granted, the title was probably too flip, but I think yours is a little wordy. I'm not sure I can do better at the moment other than maybe something like "Self-Publication as a Truth Filter".

Comment author: swestrup 02 July 2009 08:54:37PM 0 points

Reading this, I suddenly had an a-ha! moment and checked my post from last month that had, mysteriously, never garnered a single comment or vote, and discovered that it was in the drafts area. I could swear that I double-checked at the time to make sure it had been published, but in any case, I've now made sure it's published. Thanks!

Harnessing Your Biases

10 swestrup 02 July 2009 08:45PM

Theoretically, my 'truth' function, the amount of evidence I need to cache something as 'probably true and reliable', should be a constant. I find, however, that it isn't. I read a large amount of scientific literature every day, and only have time to investigate a scant amount of it in practice. So, typically, I rely upon science reporting that I've found to be accurate in the past, and only investigate the few things that have direct relevance to work I am doing (or may end up doing).

Today I noticed something about my habits. I saw an article on how string theory was making testable predictions in the realm of condensed matter physics, specifically about room-temperature superconductors. While this is a pet interest of mine, it's not an area I'm ever likely to be working in, but the article seemed sound, so I decided it was an interesting fact and moved on, not even realizing that I had cached it as probably true.

A few minutes later it occurred to me that some of my friends might also be interested in the article. I have a Google RSS feed that I use to republish occasional articles that I think are worth reading. I have a known readership of all of 2. Suddenly, I discovered that what I had been willing to accept as 'probably true' on my own behalf was no longer good enough. Now I wanted to look at the original paper itself, and to see if I could find any learnéd refutations or comments.

This seems to be because my reputation was now, however tangentially, "on the line": I have a reputation in my circle of friends as the science geek, and I would not want to damage it by steering someone wrong. Now, clearly this is wrong-headed. My theory of truth should be my theory of truth, period.
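To make the double standard concrete, here is a minimal sketch of it in Python; the names and numbers are invented purely for illustration, not taken from any actual system:

    # A toy model of the double standard described above. In theory the
    # evidence threshold should be a single constant; in practice it seems
    # to grow whenever my reputation is on the line.
    CACHE_THRESHOLD = 0.7   # evidence I demand before caching "probably true"
    PUBLISH_PENALTY = 0.2   # extra evidence I demand before repeating a claim

    def seems_true(evidence, will_publish=False):
        # The will_publish parameter shouldn't exist at all: my standard
        # of truth shouldn't depend on who is watching.
        threshold = CACHE_THRESHOLD + (PUBLISH_PENALTY if will_publish else 0.0)
        return evidence >= threshold

    # The inconsistency, made explicit:
    assert seems_true(0.75)                         # good enough for my own head...
    assert not seems_true(0.75, will_publish=True)  # ...but not for my two readers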

One could argue, I suppose, that information I store internally can only affect my own behaviour, while information I disseminate can affect the behaviour of an arbitrarily large group of people, and so a more stringent standard should apply to things I tell others. In fact, that was the first justification that sprang to mind when I noticed my double standard.

It's a bogus argument, though, as none of my friends are likely to repeat the article or post it on their blogs, so the dissemination has only a tiny probability of propagating by that route. However, once it's in my head and I'm treating it as true, I'm very likely to trot it out as an interesting fact when I'm talking at Science Fiction conventions or to groups of interested geeks. If anything, my standard for believing something should be more stringent than my standard for repeating it, not the other way around.

But the title of this post is "Harnessing Your Biases", and it seems to me that if I have this strange predisposition to check more carefully when I am going to publish something, then maybe I need to set up a blog of things I have read that I think are true. It can just be an edited feed of my RSS stream, since that is simple to put together. Then I may find myself being more careful about what I accept as true. The mere fact that the feed exists and is public (although I doubt that anyone would, in fact, read it) would make me more careful. It's even possible that it will contain very few articles, as I may find I don't have time to investigate interesting claims well enough to declare them true, but that will have the positive side effect that I won't go around caching them internally as true either.

I think that, in many ways, this is why, in the software field, code reviews are universally touted as an extraordinarily cheap and efficient way of improving code design and documentation while decreasing bugs, and yet are very hard to put into practice. The idea is that after you've written any piece of code, you give it to a coworker to critique before you put it in the code base. If they find too many things to complain about, it goes back for revision before being given to yet another coworker to check. This continues until it's deemed acceptable.

In practice, the quality of work goes way up and the speed of raw production drops only marginally. The end result is code that needs far less debugging, and so the number of working lines of code produced per day goes way up. I think this is because programmers in such a regime quickly find that the testing and documenting they consider 'good enough' when their work is not going to be immediately reviewed is far less than the testing and documenting they do when they know they have to hand it to a coworker to criticize. The downside, of course, is that they are now opening themselves up to criticism on a daily basis, which few folks enjoy no matter how good it is for them, and so the practice remains quite rare due to programmer resistance to the idea.
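For concreteness, here is a toy sketch of that review loop in Python; critique and revise are hypothetical placeholders standing in for human activity, and the complaint cutoff is an invented number, not part of any real methodology:

    import itertools

    MAX_COMPLAINTS = 3  # arbitrary cutoff for "too many things to complain about"

    def review_until_acceptable(code, coworkers, critique, revise):
        # Hand the code to one coworker after another. If a reviewer finds
        # too much to complain about, revise it and pass it to the next
        # reviewer; otherwise it is deemed acceptable and can be checked in.
        for reviewer in itertools.cycle(coworkers):
            complaints = critique(reviewer, code)
            if len(complaints) <= MAX_COMPLAINTS:
                return code
            code = revise(code, complaints)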

These appear to be two different ways to harness the same bias - that people do better (or more careful) work when it is going to be examined - in order to achieve better results. Can anyone else here think of other biases that can be exploited in useful ways to leverage greater productivity or reliability in projects?

 
