Of course "never" is testable. The way to falsify is to exhibit a counterexample. "Human beings will never design a heavier than air flying machine" (Lord Kelvin, 1895), "a computer will never beat the human world champion in chess," etc. All falsified, therefore, all testable. If anything, an infinite horizon statement like "never" is more vulnerable to falsification, and therefore should get more "scientific respect."
It's only testable in one direction -- if you like, "never" is testable but "ever" isn't. I don't have a formal argument to hand, but it vaguely seems to me that a hypothesis ought, ideally, to be falsifiable in both directions.
Nice story :)
The way this plays out feels Joseph Campbell-y, with Kay even refusing a literal call before the tension ramps up. That's not bad at all from a literary perspective, but it might cause audiences to see things in terms of the structure of the story rather than as a lesson. So, hm: what are some ways to vividly show our protagonist doing the best with what they have, rather than living in the past or selling out / giving up?
Or maybe Kay has given up initially, and then over the course of the story rekindles an explicit desire to do what's right in the present, as a direct response to our villain's self-justifications.
Other rationality skills to possibly include: noticing when you're writing the bottom line beforehand; making plans more shock-proof and modular than humans naively want to; explicitly stopping and checking the consequences of a difficult choice; noticing when you flinch away from unpleasant thoughts -- sometimes that's okay, but sometimes you need to do the thing that's unpleasant to think about.
The story is, in large part, about the structure of the story: Pluto's tragic flaw is that he's thinking about his real life in terms of story structure.
Consider the epistemic state of someone who knows that they have the attention of a vastly greater intelligence than themselves, but doesn't know whether that intelligence is Friendly. An even-slightly-wrong CAI will modify your utility function, and there's nothing you can do but watch it happen.
The "somebody" could only be a few programmers hired or recruited by CFAR, working under Leah's direction. Basically, Leah would have to get some people Anna respects to agree the idea is good, and then talk to Anna about it. But presumably Anna and CFAR generally are really busy, so it probably won't go anywhere in any case.
Not really relevant here, but I only just now got the pun in CFAR's acronym.
I don't assume that bad uses can't be reduced, and my answer is somewhat tongue-in-cheek, but I do suspect that getting people to stop using this mode of thought for bad ideas would be very difficult. Getting people to apply it to good causes as well might be worse, outcome-wise, than getting them to stop applying it at all, but trying to get people to apply it to good causes might still have a better return on investment than trying to get them to stop, simply because it's easier.
You may be right, but I don't trust a human to only arrive at that conclusion if it's true. I think we ought to refrain from pressing D, just in case.
What level of confidence is high (or low) enough that you would consider something to be within the "noise level"?
Depending on how smart I feel today, anywhere from -10 to 40 decibans.
(edit: I remember how log odds work now.)
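For concreteness, here's a minimal sketch of the deciban scale (Python; the standard definition, ten times the base-10 log of the odds, with the endpoints above as illustrations):

```python
import math

def to_decibans(p: float) -> float:
    # Log-odds in decibans: 10 * log10(p / (1 - p)); 0 dB is 50% credence.
    return 10 * math.log10(p / (1 - p))

def to_probability(db: float) -> float:
    # Inverse transform: decibans back to a probability.
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

for db in (-10, 0, 40):
    print(f"{db:+d} dB -> p = {to_probability(db):.4f}")
# -10 dB -> p = 0.0909
#  +0 dB -> p = 0.5000
# +40 dB -> p = 0.9999
```

So the "-10 to 40 decibans" range spans roughly 9% to 99.99% credence.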
Seeing as I work every day with individual DNA molecules which behave discretely (as in, one goes into a cell or one doesn't), and on the way to my advisor's office I walk past a machine that determines the 3D molecular structure of proteins... yeah.
This edifice not being true would require truly convoluted laws of the universe that emulate it in minute detail under every circumstance I can think of, but fail to do so under some circumstance not yet seen. I am not sure how to quantify that, but I would certainly never plan for it being the case. >99.9%? Most of the 0.1% comes from the possibility that I am intensely stupid and do not realize it, not from thinking that it could be wrong within the framework of what is already known. Though at that scale the numbers are really hard to calibrate.
I think a more plausible scenario for the atomic theory being wrong would be that the scientific community -- and possibly the scientific method -- is somehow fundamentally borked up.
Humans have come up with -- and become strongly confident in -- vast, highly detailed theories that were nowhere remotely near true, and it's pretty hard to tell from the inside whether you're the one who won the epistemic lottery. Everyone thinks they have excellent reasons for believing they're right.
Less than one in seven billion.
You are way overconfident in your own sanity. What proportion of humans experience vivid, detailed hallucinations on a regular basis (not counting dreams)?
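To make the base-rate point concrete, here is a rough Bayesian sketch (Python; the 1% hallucination rate is an illustrative placeholder, not real epidemiology). "Everything seems normal from the inside" is equally expected whether or not you are hallucinating, so it provides no evidence, and your confidence in your own sanity can't rise above the population base rate:

```python
# Illustrative numbers only -- the base rate is an assumption.
p_hallucinator = 0.01                    # assumed fraction who regularly hallucinate vividly
p_seems_normal_given_hallucinator = 1.0  # hallucinations seem real from the inside
p_seems_normal_given_sane = 1.0          # ordinary perception also seems real

prior_sane = 1 - p_hallucinator
posterior_sane = (p_seems_normal_given_sane * prior_sane) / (
    p_seems_normal_given_sane * prior_sane
    + p_seems_normal_given_hallucinator * p_hallucinator
)
print(posterior_sane)  # 0.99 -- about +20 decibans, nowhere near "one in seven billion"
```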
Well, if you can't stop people from using a superweapon for bad causes, it may be an improvement to see to it that it's also used for good causes.
The original question was:
Do you really think encouraging this idea in general is good?
That is: assuming it is possible to reduce bad uses at the cost of also reducing good uses, should one do so?
Your reply seems to assume that the bad uses can't be reduced, which contradicts the pre-established assumptions. If you want to change the assumptions of a discussion, please include a note that you are doing so and ideally a short explanation of why you think the previous assumptions should be rejected in favor of the new ones.
I know how Eliezer is using the word "insane."
Just because $CELEBRITY uses it that way doesn't make it right. This usage is conflating two usefully distinct concepts.