Less Wrong is a community blog devoted to refining the art of human rationality.

Comment author: Roland2 18 May 2008 07:18:36PM 4 points

One suggestion for the flaw:

Conclusions from this article: a) you are never safe; b) you must understand a) on an emotional basis; c) the only way to achieve b) is through an experience of failure after following the rules you trusted.

The flaw is that the article actually does the opposite of what it wants to accomplish: by giving the warning (a), it makes people feel safer. In order to convey the necessary emotion of "not feeling safe" (b), Eliezer had to add the PS regarding the flaw.

In a certain sense this also negates c). I think Eliezer doesn't really want us to fail (c) in order to recognize a); the whole point of overcomingbias.com is to prevent humans from failing. So if Eliezer did a good job of conveying the necessary insecurity through his PS, then hopefully c) won't happen to you.


Comment author: wafflepudding 04 April 2017 09:44:13PM 1 point

That second paragraph was hard for me. Seeing "a)" and "b)" repeated made me parse it as a jigsaw puzzle where the second "a)" was a subpoint of the first "b)", but then "c)" got back to the main sequence only to jump back to the "b)", the second subpoint of the first "b)". That didn't make any sense, so I tried to read each clause separately, and came up with "1. You are never safe. 2. You must understand. 3. On an emotional basis..." before becoming utterly lost. Only after coming back to it later did I get that repeated letters were references to previous letters.

In response to Infinite Certainty
Comment author: Paul_Gowder 09 January 2008 09:41:23AM 2 points

Hah, I'll let Descartes go (or condition him on a workable concept of existence -- but that's more of a spitball than the hardball I was going for).

But in answer to your non-contradiction question... I think I'd be epistemically entitled to just sneer and walk away. For one reason, again, if we're in any conventional (i.e. not paraconsistent) logic, admitting any contradiction entails that I can prove any proposition to be true. And, giggle giggle, that includes the proposition "the law of non-contradiction is true." (Isn't logic a beautiful thing?) So if this mathematician thinks s/he can argue me into accepting the negation of the law of non-contradiction, and takes the further step of asserting any statement whatsoever to which it purportedly applies (i.e. some P, for which P&~P, such as the whiteness of snow), then lo and behold, I get the law of non-contradiction right back.
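
The explosion step being leaned on here can be written out mechanically; a sketch in Lean 4, with the theorem names my own:

```lean
-- Ex falso quodlibet: any contradiction P ∧ ¬P proves an arbitrary Q.
theorem explosion (P Q : Prop) (h : P ∧ ¬P) : Q :=
  absurd h.left h.right

-- In particular, the contradiction hands back the law of non-contradiction
-- itself -- the "lo and behold" step above.
example (P : Prop) (h : P ∧ ¬P) : ¬(P ∧ ¬P) :=
  explosion P (¬(P ∧ ¬P)) h
```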

I suppose if we wanted to split hairs, we could say that one can deny the law of non-contradiction without further asserting an actual statement to which that denial applies -- i.e. ~(~(P&~P)) doesn't have to entail the existence of a statement P which is both true and false ((∃p)Np, where N stands for "is true and not true?" Abusing notation? Never!) But then what would be the point of denying the law?

(That being said, what I'd actually do is stop long enough to listen to the argument -- but I don't think that commits me to changing my zero probability. I'd listen to the argument solely in order to refute it.)

As for the very tiny credence in the negation of the law of non-contradiction (let's just call it NNC), I wonder what the point would be, if it wouldn't have any effect on any reasoning process EXCEPT that it would create weird glitches that you'd have to discard? It's as if you deliberately loosened one of the spark plugs in your engine.

Comment author: wafflepudding 10 January 2017 06:38:22AM 0 points

There are, apparently, certain Eastern philosophies that permit and even celebrate logical contradiction. To what extent this is metaphorical I couldn't say, but I recently spoke to an adherent who quite firmly believed that a given statement could be both true and false. After some initial bewilderment, I verified that she wasn't talking about statements that contained both true and false claims, or were informal and thus true or false under different interpretations, but actually meant what she'd originally seemed to mean.

I didn't at first know how to argue such a basic axiom -- it seemed like trying to talk a rock into consciousness -- but on reflection, I became increasingly uncertain what her assertion would even mean. Does she, when she thinks "Hmm, this is both true and false" actually take any action different than I would? Does belief in NNC wrongly constrain some sensory anticipation? As Paul notes, need the law of non-contradiction hold when not making any actual assertions?

All this is to say that the matter which at first seemed very simple became confusing along a number of axes, and though Paul might call any one of these complaints "splitting hairs" (as would I), he would probably claim this with far less certainty than his original 100% confidence in NNC's falsehood. That is, he might be more open-minded about a community of mathematicians explaining why some particular complaint isn't splitting hairs at all, why it's highly important for non-obvious reasons, and why -- because some fundamental assumptions are confused -- it would be misleading to call NNC 'false'.

But more simply, I think Paul may have failed to imagine how he would actually feel in the actual situation of a community of mathematicians telling him that he was wrong. Even more simply, I think we can extrapolate a broader mistake: people presented with the argument against infinite certainty reply with some particular thing they're certain about, and claim to be even more certain about their thing than the last person to try a similar argument was about theirs. Maybe the correct general response to this is just to restate Eliezer's reasoning that any 100% probability belongs to the reference class of other 100% probabilities, less than 100% of which are correct.

Comment author: Nominull3 25 October 2008 04:05:18PM 8 points

I don't know that it's that impressive. If we launch a pinball in a pinball machine, we may have a devil of a time calculating its path off all the bumpers, but we know that the pinball is going to wind up falling into the hole in the middle. Is gravity really such a genius?

Comment author: wafflepudding 28 December 2016 06:15:58PM 0 points

It seems to me that you are predicting the path of the pinball, but quickly enough that you don't realize you're doing it. The axiom that if there is a clear downward path to a given position, that position will be reached, is so fundamental that it's easy to forget it was originally reasoning about intermediate steps that led to it. At most points the pinball can reach, it is expected to move down. At the next point, it's expected to move down again. You would inductively expect it to reach a point where it cannot move down any more, and this point is the hole (or sometimes a fault in the machine).

Contrast with the hole being upraised, or blocked by some barrier. All of the paths you envision lead to a point other than the hole, so you conclude that the ball will land instead on some other array of points. There it's easier to see that gravity still requires path-based reasoning.

In response to Is Santa Real?
Comment author: scotherns 17 March 2009 11:49:13AM 10 points

My oldest child is six. She has always been taught to distinguish 'real' from 'pretend', and encouraged to decide which is which herself.

She seems to have no problem discovering that something she previously believed is false - at this age there is still so much to learn, and her world view is updating pretty constantly.

What does seem to be distressing for her is finding out that some adults believe things which she has placed solidly in the 'pretend' category. Her teacher's belief in god is particularly perplexing for her.

In response to comment by scotherns on Is Santa Real?
Comment author: wafflepudding 31 October 2016 02:50:53AM 0 points

In case you're still active, I'm curious what your child's reasoning was for placing God in the pretend category. Like, did she know about Occam's Razor, or was she pattern matching God with other fantasies she's heard? I'm mostly curious because I don't think I've ever heard a perspective as undiluted as an Untheist's.

Comment author: Caspian 05 April 2009 05:18:44AM 26 points

The counterfactual anti-mugging: One day No-mega appears. No-mega is completely trustworthy etc. No-mega describes the counterfactual mugging to you, and predicts what you would have done in that situation not having met No-mega, if Omega had asked you for $100.

If you would have given Omega the $100, No-mega gives you nothing. If you would not have given Omega $100, No-mega gives you $10000. No-mega doesn't ask you any questions or offer you any choices. Do you get the money? Would an ideal rationalist get the money?

Okay, next scenario: you have a magic box with a number p inscribed on it. When you open it, either No-mega comes out (probability p) and performs a counterfactual anti-mugging, or Omega comes out (probability 1-p), flips a fair coin and proceeds to either ask for $100, give you $10000, or give you nothing, as in the counterfactual mugging.

Before you open the box, you have a chance to precommit. What do you do?
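
For concreteness, the precommitment question reduces to an expected-value calculation. A minimal sketch, assuming the payoffs above, risk-neutrality in dollars, and that precommitment fully determines what kind of person you are (function names are mine):

```python
# Expected dollar value of the two precommitments in the magic-box scenario.

def ev_pay(p):
    # With prob. p No-mega appears and gives a committed payer nothing;
    # with prob. 1-p Omega flips a fair coin: heads -> +$10,000, tails -> -$100.
    return p * 0 + (1 - p) * (0.5 * 10_000 + 0.5 * -100)

def ev_refuse(p):
    # No-mega gives a committed non-payer $10,000; Omega gives a non-payer nothing.
    return p * 10_000 + (1 - p) * 0

# Break-even where 10000*p = 4950*(1-p), i.e. p = 4950/14950 ≈ 0.33.
print(ev_pay(0.2) > ev_refuse(0.2))  # True: commit to paying when p is small
print(ev_pay(0.5) > ev_refuse(0.5))  # False: refuse when p is large
```

So under these assumptions the answer depends only on the number inscribed on the box, which may be part of why the scenario feels like it changes the original problem rather than refuting it.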

Comment author: wafflepudding 27 October 2016 08:43:27AM 1 point

You forgot about MetaOmega, who gives you $10,000 if and only if No-mega wouldn't have given you anything, and O-mega, who kills your family unless you're an Alphabetic Decision Theorist. This comment doesn't seem specifically anti-UDT -- after all, Omega and No-mega are approximately equally likely to exist; a ratio of 1:1 if not an actual p of .5 -- but it still has the ring of Just Cheating. Admittedly, I don't have any formal way of telling the difference between decision problems that feel more or less legitimate, but I think part of the answer might be that the Counterfactual Mugging isn't really about how to act around superintelligences: It illustrates a more general need to condition our decisions based on counterfactuals, and as EY pointed out, UDT still wins the No-mega problem if you know about No-mega, so whether or not we should subscribe to some decision theory isn't all that dependent on which superintelligences we encounter.

I'm necroing pretty hard and might be assuming too much about what Caspian originally meant, so the above is more me working this out for myself than anything else. But if anyone can explain why the No-mega problem feels like cheating to me, that would be appreciated.

Comment author: hairyfigment 02 October 2016 01:33:56AM 0 points

I am strongly disagreeing with you. The cultures that existed on Earth for tens of millennia or more were recognizably human; one of them built an LHC "eventually", but any number of chance factors could have prevented this. Like I just said, modern science started with an extreme outlier.

Comment author: wafflepudding 02 October 2016 09:04:04AM 0 points

Gotcha. So, assuming that the actual Isaac Newton didn't rise to prominence*, are you thinking that human life would usually end before his equivalent came around and the ball got rolling? Most of our existential risks are manmade AFAICT. Or you think that we'd tend to die in between him and when someone in a position to build the LHC had the idea to build the LHC? Granted, him being "in a position to build the LHC" is conditional on things like a supportive surrounding population, an accepting government, etcetera; but these things are ephemeral on the scale of centuries.

To summarize: yes, some chance factor would definitely prevent us from building the LHC at the exact time we did, but with a lot of time to spare, some other chance factor would prime us to build it somewhen else. Building the LHC just seems to me like the kind of thing we do. (And if we die from some other existential risk before Hadron Colliding (Largely), that's outside the bounds of what I was originally responding to, because no one who died would find himself in a universe at all.)

*Not that I'm condoning this idea that Newton started science.

Comment author: hairyfigment 29 September 2016 10:15:19PM 0 points

...As I pointed out recently in another context, humans have existed for tens of thousands of years or more. Even civilization existed for millennia before obvious freak Isaac Newton started modern science. Your position is a contender for the nuttiest I've read today.

Possibly it could be made better by dropping this talk of worlds and focusing on possible observers, given the rise in population. But that just reminds me that we likely don't understand anthropics well enough to make any definite pronouncements.

Comment author: wafflepudding 02 October 2016 01:04:03AM 0 points

Are you responding to "Unless human psychology is expected to be that different from world to world?"? Because that's not my position, I'd think that most things recognizable as human will be similar enough to us that they'd build an LHC eventually. I guess I'm not exactly sure what you're getting at.

Comment author: steven 20 September 2008 10:57:03PM 3 points

IMHO if anthropics worked that way and if the LHC really were a world-killer, you'd find yourself in a world where we had the propensity not to build the LHC, not one where we happened not to build one due to a string of improbable coincidences.

Comment author: wafflepudding 29 September 2016 02:28:39AM 0 points

I'd agree that certain worlds would have the building of the LHC pushed back or moved forward, but I doubt there would be many where the LHC was just never built. Unless human psychology is expected to be that different from world to world?

Comment author: entirelyuseless 13 June 2016 12:15:18PM * 0 points

wafflepudding is saying something similar to this:

You can suffer the $10,000 damage in two ways, Path A and Path B. Normally these two things happen equally often. If you pay the $100, you can prevent Path A from happening, with a 100% chance. That means if you pay, Path B will definitely happen. But it also means that since you're the sort of person who would pay in this situation, you will receive that prophecy only 50% as often, in general, as a person who would not pay; this happens because you only get the prophecy when Path B is going to happen, instead of either Path A or Path B.

I am not the sort of person who would pay in that situation, and I do not want to be. But I am the sort of person who might very well pay the $100 before hearing any prophecy, and therefore I will get the prophecy 50% as often anyway.
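
The 50% claim checks out under a quick Monte Carlo sketch (the $10,000 and $100 figures are from the scenario; the framing and names are mine):

```python
import random

# Damage arrives via Path A or Path B with equal probability. A payer spends
# $100 to block Path A; the infallible prophet speaks only when the damage
# will in fact occur.

def prophecy_given(pays: bool) -> bool:
    path = random.choice("AB")
    # A payer blocks Path A, so on an "A" draw no damage occurs and no
    # prophecy is delivered; in every other case the damage (and prophecy) lands.
    return not (pays and path == "A")

random.seed(0)
n = 100_000
payer = sum(prophecy_given(True) for _ in range(n))
refuser = sum(prophecy_given(False) for _ in range(n))
print(refuser / payer)  # close to 2: payers hear the prophecy half as often
```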

Comment author: wafflepudding 13 June 2016 10:11:35PM 0 points

I am extremely satisfied with this description; I hadn't personally thought of it in such specific terms, and this would be a perfect way to say it. I'll admit I'm a bit confused why you would pay before but not after, considering that either one is done by a person to whom the prophecy is given 50% less often.

Comment author: entirelyuseless 12 June 2016 01:19:38PM 0 points

I would pay Omega in the counterfactual mugging, but I would not pay here.

The reason is that in the counterfactual mugging case, I would want to be the sort of person who pays when they get offered a deal like that.

Here, I would not want to be the sort of person who pays to fight an infallible prophecy.

However, I would want to be the sort of person who pays to fight a non-infallible prophecy, so I would be happy to precommit to pay in non-infallible prophecy situations.

Comment author: wafflepudding 13 June 2016 06:38:59AM 0 points

The kind of person who pays to fight an infallible prophecy is the same kind of person to whom infallible prophecies are given 50% less often. In this case.
