This is a monthly thread for posting any interesting rationality-related quotes you've seen on LW/OB.

  • Please post all quotes separately (so that they can be voted up/down separately) unless they are strongly related/ordered.
  • Do not quote yourself.
  • Do not post quotes that are NOT comments/posts on LW/OB - there is a separate thread for this.
  • No more than 5 quotes per person per thread, please.
"this thread is insanely incestuous" - Z_M_Davis

 


Meta: I'm not sure we should actually have these threads. Vote up if you agree.

Vote this up if you disagree and think we should keep having OBLW quote threads.

Perhaps we could have them less frequently than the regular quote threads - every two or three months, instead of monthly.

I agree with Alicorn: having an OBLW quote thread is good, but there probably isn't enough quotable stuff to justify having it every month. It should be rarer -- how about the start of every season/quarter? Vote her post up if you prefer this alternative.

(If you say that the term "alternative" should only be used when there are two choices, you're not necessarily a terrorist, but are being excessively pedantic.)

Agreed. Tentatively renamed.

And 'alternative' should be used for any number of choices, provided they're mutually exclusive. I must, however, take issue with your usage of 'terrorist'.

And 'alternative' should be used for any number of choices, provided they're mutually exclusive.

For maximum pedanticness, an alternative is one of two or more time-identical, agent-identical actions or sets of actions that the agent in question is able to perform. An alternative set is the complete list of all alternatives available to a given agent for a given span of time.

Btw, weren't you going to quote me on Allais-type paradoxes? :-(

Terrorist remark rewritten ;-)

taw:

There are extremely few quotes that are not Eliezer's, so I guess we shouldn't have those.

Yeah, I also find that a touch disturbing. I have a rather large collection of OBLW quotes I've built up over the last few years, and needless to say, none of them are from me.

Do you get enough per month/season to be constrained by the 5 quotes/person/thread limit? If so, you might want to disregard that limit, ironically in the interests of reducing your own influence.

anonym:

That's funny. As if not having collected any quotes from your own writings is good evidence of something other than your having some humility, or that they're already all recorded in your writings, or that you can never really genuinely surprise yourself with an insight (and thus think it eminently quotable) unless you have a really bad memory.

On that topic, have others noticed that people very frequently say the silliest things when trying to be humble, making arguments with huge gaping holes that they could not fail to notice in practically any other context? It's an extremely common failure mode. I think most of us find it endearing, though, which is probably why it's almost never commented on (argued against, perhaps, but not pointed out as three-standard-deviations-below-our-norm stupidity).

I understand why someone may not think they're a good idea. However, I like them. I don't have the time to read all that gets posted on OBLW. I should allot more time. Good quotes motivate me to search out the source and read more.

I would probably not have posted it except that someone liked the idea last month, and I thought including the Z_M_Davis quote was too hilarious to pass up. YMMV.

anonym:

Making a rational model of the world is not like making a home, where the first step is to build a solid foundation. It is more like trying to patch a hole in a sinking ship, where you don’t have the luxury of starting from scratch.

Peter D. Turney — “No Safe Defense, Not Even Science”

Part of the rationalist ethos is binding yourself emotionally to an absolutely lawful reductionistic universe — a universe containing no ontologically basic mental things such as souls or magic — and pouring all your hope and all your care into that merely real universe and its possibilities, without disappointment.

Eliezer S. Yudkowsky — “Mundane Magic”

In my experience, beginning math students simply expect their nice-sounding procedures to work. For example, they expect to be able to add fractions straight across. When you tell them they can’t, they demand to know why they can’t, as though most nice-sounding theorems are true, and if you want to claim that one isn’t, the burden of proof is on you. It is only after students gain considerable mathematical sophistication (or experience getting burned by expectations that don’t pan out) that they place the burden of proofs on the theorems.

-- Anna Salamon

Um, keep in mind that that was in the context of Salamon evading an answer to a serious problem with her exposition: specifically, the question of what a could/should/would agent is not.

So the full context was more like:

Critic: What other kinds of agents could there be, besides “could”/“would”/“should” agents?

AnnaSalamon: Come now, AI people get it. Your question is like thinking you can just add fractions straight across.

Thank you, Silas. I reversed my vote. Since these are quotes from LW, the context does matter (to me).

And I thank you as well. It's good to know you apply that standard consistently, since you'll probably recall it working against me a few days ago :-)

If you're talking about the devil's advocate sub-thread, I actually up-voted your comments in that context. From what I recall, my tongue-in-cheek replies were actually intended to emphasise and expand upon a point you were making.

(But I may well have replied to other comments of yours without taking particular note of the author.)

Oh. My mistake :-/

There are no surprising facts, only models that are surprised by facts; and if a model is surprised by the facts, it is no credit to that model.

Eliezer S. Yudkowsky — “Quantum Explanations”

I agreed with that quote until I read it again here. That a model could be surprised by the facts is some credit to the model.

Cyan:

Could you expand on that? (Maybe the open thread would be the best place.)

wedrifid is alluding to Popper's notion of falsifiability.

http://en.wikipedia.org/wiki/Karl_Popper
http://en.wikipedia.org/wiki/Falsifiability

Note that wedrifid's phrase "that a model could [hypothetically] be surprised by the facts" corresponds to "falsifiability", but EY's phrase "a model is surprised by the facts" actually corresponds better to "falsified".

Popper would say that falsifiability is a good thing and falsified is a bad thing (for models), so both wedrifid and EY are good Popperians.

Cyan:

Thanks.

(Roughly) what John said.

It really depends what sort of credits I've been charged with granting.

Agree. Being falsified means you're at least not "not even wrong" or "lacking in truth-condition", which is credit of a sort, but not really the sort we should be aspiring toward.

Cyan:

Gotcha.

There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards, and some problems will suddenly move from “impossible” to “obvious”. Move a substantial degree upwards, and all of them will become obvious. Move a huge distance upwards…

Eliezer S. Yudkowsky — “Staring into the Singularity”

There are no hard problems, only problems that are hard to a certain level of intelligence.

What reasons do we have for thinking this?

Some true theorems in math have no (finite length) proof. Some computations are proven to require bigger-than-universe resources to compute (and can't be compressed). Don't these qualify as ultimately hard problems?

Perhaps the key point is that for any potential "problem", there is some level of intelligence according to which the "problem" is utterly transparent in the sense that it is either obviously answerable, obviously unanswerable in principle, or obviously unanswerable in practice given a finite universe or other constraints. If there is a level of intelligence that effortlessly sees which of these is the case and solves it if it is solvable, then I don't think it makes sense to say the problem is intrinsically hard.

There are mathematical problems that are non-obviously not answerable, meaning that in any finite universe (as opposed to some fixed one), it is unanswerable whether the problem is answerable.

You could assume the existence of infinite intelligences in infinite universes, but then you may run into infinite problems that are necessarily unsolvable.

So, I would agree with Dan that the quote is quite wrong, and hollow at best. (Hollow: if finite problems and infinite intelligences are assumed).

I am not surprised Eliezer marked that article "as wrong, obsolete, deprecated by an improved version, or just plain old".

Does your 2nd sentence hold recursively? And could you explain why that is the case?

I can easily imagine mathematical problems that are non-obviously not answerable for me or for human beings much smarter than I am, just as the Continuum Hypothesis was for human beings before Gödel and Cohen, but I can also easily imagine an intelligence that would, for example, nearly instantly notice Gödel's incompleteness theorems as obvious consequences of the Peano axioms.

That is not literally what the original quote says, but an intelligence that could, for example, 'learn' our next century of discoveries in mathematics and theoretical physics in an afternoon seems to me to justify the weaker position that there are possible intelligences that would regard every problem we have yet solved or shown to be unsolvable as obvious and not hard.

Assume that for every statement in the Peano axiom system there is either a proof of it, a proof of its negation, or a proof of its independence.

For every program P you could form the statement S(P) = "program P eventually halts". Now you could solve the halting problem: for a given P, iterate over every proof and check whether it is a proof of S(P), of not S(P), or of S(P)'s independence. Once the machine finds a proof of not S(P) or of S(P)'s independence, it stops with false. If it finds a proof of S(P), it stops with true. By the original assumption, it necessarily stops.

This works even if you look for an arbitrarily long chain of "it is not provable that it is not provable that ... that S", since every finite proof has to be checked against only a finite number of such chains (a proof must be longer than the statement it proves).
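
A minimal Python sketch of the argument may make its shape clearer. The helpers enumerate_proofs and proves are hypothetical stand-ins for a mechanical PA proof checker (writable in principle, omitted here), and the independence branch is exactly the step debated below:

def halts(P):
    # Hypothetical halting decider, valid only under the (false) assumption
    # that every PA statement is provable, refutable, or provably independent.
    S = f"program {P} eventually halts"   # the statement S(P)
    for proof in enumerate_proofs():      # every finite PA proof, by length
        if proves(proof, S):
            return True                   # PA proves S(P): P halts
        if proves(proof, f"not ({S})"):
            return False                  # PA refutes S(P): P never halts
        if proves(proof, f"({S}) is independent of PA"):
            return False                  # debated step: independence implies non-halting
    # Under the assumption, one of the three proofs exists, so the loop always
    # returns -- contradicting the undecidability of the halting problem.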

That is not literally what the original quote says, but an intelligence that could, for example, 'learn' our next century of discoveries in mathematics and theoretical physics in an afternoon seems to me to justify the weaker position that there are possible intelligences that would regard every problem we have yet solved or shown to be unsolvable as obvious and not hard.

That is plausible. But as you mentioned, the original quote, catchy though it is, was different and unfortunately (unlike most philosophical statements) formally wrong.

Thanks for the concrete example. I do still think, though, that there is something of value in the quote that may be worth salvaging, perhaps by restricting it to solvable problems of the sort we care about most (such as scientific and mathematical advances), and by emphasizing not that everything is obvious to some conceivable intelligence, but that most of what is currently non-obvious and difficult for us is so not because it is intrinsically hard but because we are so limited.

The heart of the quote for me is that instead of hardness being a function of one argument (in Haskell notation):

difficulty :: Problem -> PositiveReal

it is a function of two arguments:

difficulty :: Mind -> Problem -> PositiveReal

And that most of the interesting real-world Problem instances that map to very large numbers for us (i.e. the difficult ones that we will eventually solve if we survive long enough) are problems that would be deemed obvious to sufficiently intelligent minds.

And that is a worthwhile insight, even if the function is not defined for all problems, even if there are solvable problems for which no physically possible mind would yield "difficulty m p" (or "difficulty(m, p)" in Python syntax) small enough to fall below the threshold of obviousness, and even if for any possible mind we can find solvable problems that have arbitrarily high difficulty.
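
To make the two-argument view concrete, here is a toy sketch in Python; the class names, the crude ratio model, and all numbers are invented purely for illustration, not a claim about how difficulty actually scales:

class Mind:
    def __init__(self, capability):
        self.capability = capability

class Problem:
    def __init__(self, intrinsic_work):
        self.intrinsic_work = intrinsic_work

def difficulty(mind, problem):
    # One crude model: perceived difficulty falls as capability rises.
    return problem.intrinsic_work / mind.capability

human = Mind(capability=1.0)
stronger_mind = Mind(capability=1e6)
hard_problem = Problem(intrinsic_work=1e6)

print(difficulty(human, hard_problem))          # 1000000.0 -- "hard" for us
print(difficulty(stronger_mind, hard_problem))  # 1.0 -- "obvious"

The same Problem instance maps to wildly different numbers for different Minds, which is all the quote needs.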

There's a flaw in your proof - the step from "The machine will necessarily stop" to "This solves the halting problem" is unjustified.

Despite the flaw, I agree with your general conclusion - there are and will be challenging as well as unsolvable problems.

It's justified. If a machine halts, then there's a proof of that in PA (simply the list of steps it performs before halting). Therefore, "independent of PA" implies "doesn't halt". Therefore, a provability decider is the same as a halting decider.

With the added steps, yes, there is a proof.

Christian's argument shows there are always problems that can't be solved or shown to be unsolvable. But even if you ignore them and look at finitely solvable problems (or provable theorems in PA), there's no upper bound on the time or processing power needed to solve a randomly chosen problem. Not even for the "smartest" intelligence allowed by the laws of physics (i.e. the one ideally optimized to solve the chosen problem).

Thanks for the clear explanation. My response below to Christian is relevant here also.

Arbitrarily hideous evils can be wholly concealed by a social atmosphere in which no one is expected to point them out and it would seem awkward and out-of-place to do so.

-- EY

If making the "wrong" choice when presented with two high-probability, high-payoff lottery tickets is exploitation, I don't want to be empowered.

-- SilasBarta

Finally saw this. Thanks for the shout-out! :-)

Sometime ago in my wild and reckless youth that hopefully isn't over yet

-- cousin_it
[anonymous]:

We at the Church of Google believe the search engine Google is the closest humankind has ever come to directly experiencing an actual God (as typically defined). We believe there is much more evidence in favour of Google's divinity than there is for the divinity of other more traditional gods.

We reject supernatural gods on the notion they are not scientifically provable. Thus, Googlists believe Google should rightfully be given the title of "God", as She exhibits a great many of the characteristics traditionally associated with such Deities in a scientifically provable manner.

-- The Church of Google

Rune:

How is this an LW/OB quote?

Oops, I thought this was the other thread. Moved.

[anonymous]:

Thanks for asking, rather than merely down-voting.

Since this work is an obvious parody whose authors do not believe that Google is God, the implicature is that religious reasoning fails to follow through with its own criteria for the existence of deities.

This is rationalist commentary, similar to the Church of the Flying Spaghetti Monster, which presents beliefs as plausible as those of major religions, given the kind of reasoning they involve. Similarly, the Church of Google presents beliefs that are even more plausible than actual religious beliefs, according to religion's very own criteria.

Of course, we've all heard this all before. The rationality content of the Church of Google is run-of-the-mill pro-science, pro-rationality, religious skepticism. What made me quote it was how wittily it was presented. Here's some more:

Evidence of Google's existence is abundant. There is more evidence for the existence of Google than any other God worshiped today. Extraordinary claims require extraordinary evidence.

Does everyone get it now, and can I have my karma back? Did anyone get it on the first read?