Comment author: Manfred 10 March 2013 06:23:20PM *  2 points [-]

Nice story :)

The way this plays out feels Joseph Campbell-ey, with Kay even refusing a literal call before the tension ramps up. Which is not bad at all from a literary perspective, but might cause audiences to see things in terms of the structure of the story rather than as a lesson. So hm, what are some ways to vividly show our protagonist doing the best with what they have, rather than living in the past or selling out / giving up?

Or maybe Kay has given up initially, and then over the course of the story rekindles an explicit desire to do what's right now as a direct response to our villain's self-justifications.

Other rationality skills to possibly include: noticing when you're writing the bottom line first, making plans more shock-proof and modular than humans naively want to, explicitly stopping and checking the consequences of a difficult choice, and noticing when you flinch away from unpleasant thoughts - sometimes that's okay, but sometimes you need to do the thing that's unpleasant to think about.

Comment author: Pavitra 12 March 2013 09:33:58PM 1 point [-]

The story is, in large part, about the structure of the story: Pluto's tragic flaw is that he's thinking about his real life in terms of story structure.

Comment author: Pavitra 08 March 2013 04:54:11AM *  4 points [-]

Consider the epistemic state of someone who knows that they have the attention of a vastly greater intelligence than themselves, but doesn't know whether that intelligence is Friendly. An even-slightly-wrong CAI will modify your utility function, and there's nothing you can do but watch it happen.

Comment author: Will_Newsome 19 February 2013 12:58:46AM 2 points [-]

The somebody could only be a few programmers hired/recruited by CFAR working with direction from Leah. Basically Leah would have to get some people Anna respects to agree the idea is good and then talk to Anna about it. But presumably Anna and CFAR generally are really busy, so, it probably won't go anywhere in any case.

Comment author: Pavitra 21 February 2013 10:54:07AM 4 points [-]

Not really relevant here, but I only just now got the pun in CFAR's acronym.

Comment author: Desrtopa 20 January 2013 01:52:27PM 0 points [-]

I don't assume that bad uses can't be reduced, and my answer is somewhat tongue-in-cheek, but I do suspect that getting people to stop using this mode of thought for bad ideas would be very difficult. Getting people to apply it to good causes as well might be worse, outcome-wise, than getting them to stop applying it at all, but trying to get people to apply it to good causes might still have a better return on investment than trying to get them to stop, simply because it's easier.

Comment author: Pavitra 20 January 2013 01:55:31PM -1 points [-]

You may be right, but I don't trust a human to only arrive at that conclusion if it's true. I think we ought to refrain from pressing D, just in case.

Comment author: DataPacRat 20 January 2013 08:25:40AM 1 point [-]

What level of confidence would be high (or low) enough that you'd consider something to be within the 'noise level'?

Comment author: Pavitra 20 January 2013 01:45:44PM *  -1 points [-]

Depending on how smart I feel today, anywhere from -10 to 40 decibans.

(edit: I remember how log odds work now.)
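(For readers unfamiliar with the unit: a deciban is a tenth of a factor of ten in odds, so 10 decibans means 10:1 odds in favor. A minimal sketch of the conversion - the function name here is my own, not anything from the thread:)

```python
def decibans_to_probability(db: float) -> float:
    """Convert log-odds measured in decibans to a probability.

    10 decibans = one factor-of-10 shift in the odds ratio.
    """
    odds = 10 ** (db / 10)  # odds in favor, e.g. 10 db -> 10:1
    return odds / (1 + odds)

# 0 decibans is even odds (p = 0.5); -10 decibans is 1:10 against
# (p ~ 0.09); 40 decibans is 10000:1 in favor (p ~ 0.9999), which is
# the range quoted above.
```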

Comment author: CellBioGuy 19 January 2013 10:48:59PM *  6 points [-]

Seeing as I work every day with individual DNA molecules which behave discretely (as in, one goes into a cell or one doesn't), and on the way to my advisor I walk past a machine that determines the 3D molecular structure of proteins... yeah.

This edifice not being true would rely on truly convoluted laws of the universe that emulate it in minute detail under every circumstance I can think of, but not under some circumstance not yet seen. I am not sure how to quantify that, but I would certainly never plan for it being the case. >99.9%? Most of the remaining 0.1% comes from the possibility that I am intensely stupid and do not realize it, not from thinking that it could be wrong within the framework of what is already known. Though at that scale the numbers are really hard to calibrate.

Comment author: Pavitra 20 January 2013 01:42:12PM -1 points [-]

I think a more plausible scenario for the atomic theory being wrong would be that the scientific community -- and possibly the scientific method -- is somehow fundamentally borked up.

Humans have come up with -- and become strongly confident in -- vast, highly detailed, completely nowhere-remotely-near-true theories before, and it's pretty hard to tell from the inside whether you're the one who won the epistemic lottery. They all think they have excellent reasons for believing they're right.

Comment author: fubarobfusco 20 January 2013 06:52:04AM 2 points [-]

Less than one in seven billion.

Comment author: Pavitra 20 January 2013 01:39:14PM 6 points [-]

You are way overconfident in your own sanity. What proportion of humans experience vivid, detailed hallucinations on a regular basis? (not counting dreams)

Comment author: Desrtopa 19 January 2013 05:14:43AM 7 points [-]

Well, if you can't stop people from using a superweapon for bad causes, it may be an improvement to see to it that it's also used for good causes.

Comment author: Pavitra 20 January 2013 01:33:10PM -1 points [-]

The original question was:

Do you really think encouraging this idea in general is good?

That is: assuming it is possible to reduce bad uses at the cost of also reducing good uses, should one do so?

Your reply seems to assume that the bad uses can't be reduced, which contradicts the pre-established assumptions. If you want to change the assumptions of a discussion, please include a note that you are doing so and ideally a short explanation of why you think the previous assumptions should be rejected in favor of the new ones.

Comment author: Stuart_Armstrong 18 January 2013 07:21:27PM 12 points [-]

Do you really think encouraging this idea in general is good?

I'd certainly prefer if the serious risks were the anthropomorphised ones, rather than the trivial ones.

Comment author: Pavitra 19 January 2013 04:41:48AM 3 points [-]

So it's a great idea as long as only causes you agree with get to use the superweapon?

Comment author: PhilGoetz 15 January 2013 07:33:43PM *  10 points [-]

You're welcome, but about half the episodes are bad. The season openers are the worst. YMMV. I recommend "Look before you sleep", "Green isn't your color", "Sisterhooves Social", "Hearts and Hooves Day", "Read it and Weep", "MMMystery on the Friendship Express", or "Sweet and Elite". Avoid "Feeling Pinkie Keen", "Over a Barrel", and "Canterlot Wedding".

I can't believe I just wrote that.

The show's writers are often sloppy about consistency--characters, history, apparent time period, etc., change wildly from episode to episode. There's a lot of fridge horror in things that the writers threw in without thinking through the implications. There are a number of episodes with stupid (as in, possibly harmful) "morals".

What the show has is a certain attitude that's generally been lacking in entertainment (niceness, basically), and it's the only show I can think of at the moment where the characters are grown-ups. In pretty much every other show on TV, there are a bunch of characters who come together for one specific purpose or reason (to run a news show, fight vampires, get off the island, hunt aliens, run a hospital, talk with each other in a bar, whatever). Then they go back to whatever it is they do when they aren't together, which isn't important. In MLP, the characters all have their own lives, and there is no one thing they all get together for. The lives they are having offstage aren't irrelevant; they're often the ultimate causes of the conflicts that cause them to get together.

Maybe Lost was similar in that way. I didn't see enough of it to judge.

I still think people should realize their model is broken when a children's program contains ritual sacrifice to demons.

Comment author: Pavitra 16 January 2013 04:22:23AM 6 points [-]
