Consider the epistemic state of someone who knows that they have the attention of a vastly greater intelligence than themselves, but doesn't know whether that intelligence is Friendly. An even-slightly-wrong CAI will modify your utility function, and there's nothing you can do but watch it happen.
The somebody could only be a few programmers hired/recruited by CFAR working with direction from Leah. Basically Leah would have to get some people Anna respects to agree the idea is good and then talk to Anna about it. But presumably Anna and CFAR generally are really busy, so, it probably won't go anywhere in any case.
Not really relevant here, but I only just now got the pun in CFAR's acronym.
I don't assume that bad uses can't be reduced, and my answer is somewhat tongue in cheek, but I do suspect that getting people to stop using this mode of thought for bad ideas would be very difficult. Getting people to apply it to good causes as well might be worse, outcome-wise, than getting them to stop applying it at all, but trying to get people to apply it to good causes might still have a better return on investment than trying to get them to stop, simply because it's easier.
You may be right, but I don't trust a human to only arrive at that conclusion if it's true. I think we ought to refrain from pressing D, just in case.
What level of confidence is high (or low) enough that you would feel means that something is within the 'noise level'?
Depending on how smart I feel today, anywhere from -10 to 40 decibans.
(edit: I remember how log odds work now.)
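For reference, a deciban is ten times the base-10 logarithm of the odds, so the quoted range can be converted back to probabilities. A minimal sketch (the function name is my own, not anything from the thread):

```python
def deciban_to_prob(db):
    """Convert log-odds in decibans (10 * log10(odds)) to a probability."""
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

# The range quoted above, -10 to 40 decibans:
print(deciban_to_prob(-10))  # odds 1:10, i.e. roughly 9% confidence
print(deciban_to_prob(40))   # odds 10000:1, i.e. roughly 99.99% confidence
```

So "-10 to 40 decibans" spans everything from "probably not" to "all but certain", which is why where the noise floor sits depends on how smart you feel that day.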
Seeing as I work every day with individual DNA molecules which behave discretely (as in, one goes into a cell or one doesn't), and on the way to my advisor I walk past a machine that determines the 3D molecular structure of proteins... yeah.
For this edifice not to be true would require truly convoluted laws of the universe that emulate it in minute detail under every circumstance I can think of, but not under some circumstance not yet seen. I am not sure how to quantify that, but I would certainly never plan for it being the case. >99.9%? Most of the remaining 0.1% comes from the possibility that I am intensely stupid and do not realize it, not from thinking it could be wrong within the framework of what is already known. Though at that scale the numbers are really hard to calibrate.
I think a more plausible scenario for the atomic theory being wrong would be that the scientific community -- and possibly the scientific method -- is somehow fundamentally borked up.
Humans have come up with -- and become strongly confident in -- vast, highly detailed theories that were nowhere remotely near true, and it's pretty hard to tell from the inside whether you're the one who won the epistemic lottery. Everyone thinks they have excellent reasons for believing they're right.
Less than one in seven billion.
You are way overconfident in your own sanity. What proportion of humans experience vivid, detailed hallucinations on a regular basis? (not counting dreams)
Well, if you can't stop people from using a superweapon for bad causes, it may be an improvement to see to it that it's also used for good causes.
The original question was:
Do you really think encouraging this idea in general is good?
That is: assuming it is possible to reduce bad uses at the cost of also reducing good uses, should one do so?
Your reply seems to assume that the bad uses can't be reduced, which contradicts the pre-established assumptions. If you want to change the assumptions of a discussion, please include a note that you are doing so and ideally a short explanation of why you think the previous assumptions should be rejected in favor of the new ones.
Do you really think encouraging this idea in general is good?
I'd certainly prefer if the serious risks were the anthropomorphised ones, rather than the trivial ones.
So it's a great idea as long as only causes you agree with get to use the superweapon?
You're welcome, but about half the episodes are bad. The season openers are the worst. YMMV. I recommend "Look before you sleep", "Green isn't your color", "Sisterhooves Social", "Hearts and Hooves Day", "Read it and Weep", "MMMystery on the Friendship Express", or "Sweet and Elite". Avoid "Feeling Pinkie Keen", "Over a Barrel", and "Canterlot Wedding".
I can't believe I just wrote that.
The show's writers are often sloppy about consistency--characters, history, apparent time period, etc., change wildly from episode to episode. There's a lot of fridge horror in things that the writers threw in without thinking through the implications. There are a number of episodes with stupid (as in, possibly harmful) "morals".
What the show has is a certain attitude that's generally been lacking in entertainment (niceness, basically), and it's the only show I can think of at the moment where the characters are grown-ups. In pretty much every other show on TV, there are a bunch of characters who come together for one specific purpose or reason (to run a news show, fight vampires, get off the island, hunt aliens, run a hospital, talk with each other in a bar, whatever). Then they go back to whatever it is they do when they aren't together, which isn't important. In MLP, the characters all have their own lives, and there is no one thing they all get together for. The lives they are having offstage aren't irrelevant; they're often the ultimate causes of the conflicts that cause them to get together.
Maybe Lost was similar in that way. I didn't see enough of it to judge.
I still think people should realize their model is broken when a children's program contains ritual sacrifice to demons.
Note that the actual children's program includes plague and famine, more famine, slavery, mind control, plague again, more mind control, recreational infanticide, and slavery again.
Nice story :)
The way this plays out feels Joseph Campbell-ey, with Kay even refusing a literal call before the tension ramps up. That's not bad at all from a literary perspective, but it might cause audiences to see things in terms of the structure of the story rather than as a lesson. So hm: what are some ways to vividly show our protagonist doing the best with what they have, rather than living in the past, selling out, or giving up?
Or maybe Kay has given up initially, and then over the course of the story rekindles an explicit desire to do what's right now as a direct response to our villain's self-justifications.
Other rationality skills to possibly include: noticing when you're writing the bottom line first, making plans more shock-proof and modular than humans naively want to, explicitly stopping to check the consequences of a difficult choice, and noticing when you flinch away from unpleasant thoughts -- sometimes that's okay, but sometimes you need to do the thing that's unpleasant to think about.
The story is, in large part, about the structure of the story: Pluto's tragic flaw is that he's thinking about his real life in terms of story structure.