In response to comment by [deleted] on The cup-holder paradox
Comment author: Qiaochu_Yuan 26 March 2013 08:07:20PM 4 points [-]

I know how Eliezer is using the word "insane."

Comment author: Pavitra 27 March 2013 12:05:15PM 0 points [-]

Just because $CELEBRITY uses it that way doesn't make it right. This usage is conflating two usefully distinct concepts.

Comment author: IlyaShpitser 12 March 2013 03:52:03PM *  12 points [-]

Of course "never" is testable. The way to falsify is to exhibit a counterexample. "Human beings will never design a heavier than air flying machine" (Lord Kelvin, 1895), "a computer will never beat the human world champion in chess," etc. All falsified, therefore, all testable. If anything, an infinite horizon statement like "never" is more vulnerable to falsification, and therefore should get more "scientific respect."

Comment author: Pavitra 13 March 2013 10:06:20AM 2 points [-]

It's only testable in one direction -- if you like, "never" is testable but "ever" isn't. I don't have a formal argument to hand, but it seems vaguely to me that a hypothesis ideally ought to be falsifiable in both directions.

Comment author: Manfred 10 March 2013 06:23:20PM *  2 points [-]

Nice story :)

The way this plays out feels Joseph Campbell-ey, with Kay even refusing a literal call before the tension ramps up. Which is not bad at all from a literary perspective, but might cause audiences to see things in terms of the structure of the story rather than as a lesson. So hm, what are some ways to vividly show our protagonist doing the best with what they have rather than living in the past, or selling out / giving up?

Or maybe Kay has given up initially, and then over the course of the story rekindles an explicit desire to do what's right now as a direct response to our villain's self-justifications.

Other rationality skills to possibly include: noticing when you're writing the bottom line first, making plans more shock-proof and modular than humans naively want to, explicitly stopping and checking the consequences of a difficult choice, noticing when you flinch away from unpleasant thoughts -- sometimes that's okay, but sometimes you need to do that thing that's unpleasant to think about.

Comment author: Pavitra 12 March 2013 09:33:58PM 1 point [-]

The story is, in large part, about the structure of the story: Pluto's tragic flaw is that he's thinking about his real life in terms of story structure.

Comment author: Pavitra 08 March 2013 04:54:11AM *  4 points [-]

Consider the epistemic state of someone who knows that they have the attention of a vastly greater intelligence than themselves, but doesn't know whether that intelligence is Friendly. An even-slightly-wrong AI will modify your utility function, and there's nothing you can do but watch it happen.

Comment author: Will_Newsome 19 February 2013 12:58:46AM 2 points [-]

The somebody could only be a few programmers hired/recruited by CFAR working with direction from Leah. Basically Leah would have to get some people Anna respects to agree the idea is good and then talk to Anna about it. But presumably Anna and CFAR generally are really busy, so, it probably won't go anywhere in any case.

Comment author: Pavitra 21 February 2013 10:54:07AM 4 points [-]

Not really relevant here, but I only just now got the pun in CFAR's acronym.

Comment author: Desrtopa 20 January 2013 01:52:27PM 0 points [-]

I don't assume that bad uses can't be reduced, and my answer is somewhat tongue-in-cheek, but I do suspect that getting people to stop using this mode of thought for bad ideas would be very difficult. Getting people to apply it to good causes as well might be worse, outcome-wise, than getting them to stop applying it at all, but trying to get people to apply it to good causes might still have a better return on investment than trying to get them to stop, simply because it's easier.

Comment author: Pavitra 20 January 2013 01:55:31PM -1 points [-]

You may be right, but I don't trust a human to only arrive at that conclusion if it's true. I think we ought to refrain from pressing D, just in case.

Comment author: DataPacRat 20 January 2013 08:25:40AM 1 point [-]

What level of confidence would you feel is high (or low) enough to put something within the "noise level"?

Comment author: Pavitra 20 January 2013 01:45:44PM *  -1 points [-]

Depending on how smart I feel today, anywhere from -10 to 40 decibans.

(edit: I remember how log odds work now.)
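For readers unfamiliar with the unit: a deciban is one-tenth of a ban, i.e. ten times the base-10 logarithm of the odds. A minimal sketch of the conversion (the function names here are just illustrative, not from any particular library):

```python
import math

def to_decibans(p: float) -> float:
    """Convert a probability to log-odds in decibans: 10 * log10(p / (1 - p))."""
    return 10 * math.log10(p / (1 - p))

def from_decibans(db: float) -> float:
    """Convert decibans back to a probability."""
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

# 0 decibans is even odds (p = 0.5); the -10 to 40 deciban range above
# corresponds to probabilities from about 0.09 (odds 1:10) to about
# 0.9999 (odds 10000:1).
```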

Comment author: CellBioGuy 19 January 2013 10:48:59PM *  6 points [-]

Seeing as I work every day with individual DNA molecules which behave discretely (as in, one goes into a cell or one doesn't), and on the way to my advisor I walk past a machine that determines the 3D molecular structure of proteins... yeah.

This edifice not being true would rely on truly convoluted laws of the universe that emulate it in minute detail under every circumstance I can think of, but not under some circumstance not yet seen. I am not sure how to quantify that, but I would certainly never plan for it being the case. >99.9%? Most of the remaining 0.1% comes from the possibility that I am intensely stupid and do not realize it, not from thinking that it could be wrong within the framework of what is already known. Though at that scale the numbers are really hard to calibrate.

Comment author: Pavitra 20 January 2013 01:42:12PM -1 points [-]

I think a more plausible scenario for the atomic theory being wrong would be that the scientific community -- and possibly the scientific method -- is somehow fundamentally borked up.

Humans have come up with -- and become strongly confident in -- vast, highly detailed, completely nowhere-remotely-near-true theories before, and it's pretty hard to tell from the inside whether you're the one who won the epistemic lottery. They all think they have excellent reasons for believing they're right.

Comment author: fubarobfusco 20 January 2013 06:52:04AM 2 points [-]

Less than one in seven billion.

Comment author: Pavitra 20 January 2013 01:39:14PM 6 points [-]

You are way overconfident in your own sanity. What proportion of humans experience vivid, detailed hallucinations on a regular basis? (not counting dreams)

Comment author: Desrtopa 19 January 2013 05:14:43AM 7 points [-]

Well, if you can't stop people from using a superweapon for bad causes, it may be an improvement to see to it that it's also used for good causes.

Comment author: Pavitra 20 January 2013 01:33:10PM -1 points [-]

The original question was:

Do you really think encouraging this idea in general is good?

That is: assuming it is possible to reduce bad uses at the cost of also reducing good uses, should one do so?

Your reply seems to assume that the bad uses can't be reduced, which contradicts the pre-established assumptions. If you want to change the assumptions of a discussion, please include a note that you are doing so and ideally a short explanation of why you think the previous assumptions should be rejected in favor of the new ones.
