
LW/OB Quotes - Fall 2009

Post author: thomblake, 01 September 2009 03:11PM (2 points)

This is a monthly thread for posting any interesting rationality-related quotes you've seen on LW/OB.

  • Please post all quotes separately (so that they can be voted up/down separately) unless they are strongly related/ordered.
  • Do not quote yourself.
  • Do not post quotes that are NOT comments/posts on LW/OB - there is a separate thread for this.
  • No more than 5 quotes per person per thread, please.
"this thread is insanely incestuous" - Z_M_Davis

 

Comments (48)

Comment author: anonym 01 September 2009 03:22:57PM *  7 points [-]

Part of the rationalist ethos is binding yourself emotionally to an absolutely lawful reductionistic universe — a universe containing no ontologically basic mental things such as souls or magic — and pouring all your hope and all your care into that merely real universe and its possibilities, without disappointment.

Eliezer S. Yudkowsky — “Mundane Magic”

Comment author: anonym 01 September 2009 03:18:40PM *  6 points [-]

There are no surprising facts, only models that are surprised by facts; and if a model is surprised by the facts, it is no credit to that model.

Eliezer S. Yudkowsky — “Quantum Explanations”

Comment author: wedrifid 15 September 2009 02:07:28PM 2 points [-]

I agreed with that quote until I read it again here. That a model could be surprised by the facts is some credit to the model.

Comment author: Cyan 15 September 2009 02:18:54PM 0 points [-]

Could you expand on that? (Maybe the open thread would be the best place.)

Comment author: Johnicholas 15 September 2009 02:48:53PM 7 points [-]

wedrifid is alluding to Popper's notion of falsifiability.

http://en.wikipedia.org/wiki/Karl_Popper http://en.wikipedia.org/wiki/Falsifiability

Note that wedrifid's phrase "that a model could [hypothetically] be surprised by the facts" corresponds to "falsifiability", but EY's phrase "a model is surprised by the facts" actually corresponds better to "falsified".

Popper would say that falsifiability is a good thing and falsified is a bad thing (for models), so both wedrifid and EY are good Popperians.

Comment author: Cyan 15 September 2009 04:22:11PM 0 points [-]

Thanks.

Comment author: wedrifid 15 September 2009 04:04:08PM 0 points [-]

(Roughly) what John said.

It really depends what sort of credits I've been charged with granting.

Comment author: Eliezer_Yudkowsky 16 September 2009 12:41:22AM 1 point [-]

Agree. Being falsified means you're at least not "not even wrong" or "lacking in truth-condition", which is credit of a sort, but not really the sort we should be aspiring toward.

Comment author: Cyan 15 September 2009 04:22:19PM 0 points [-]

Gotcha.

Comment author: anonym 01 September 2009 03:25:56PM 10 points [-]

Making a rational model of the world is not like making a home, where the first step is to build a solid foundation. It is more like trying to patch a hole in a sinking ship, where you don’t have the luxury of starting from scratch.

Peter D. Turney — “No Safe Defense, Not Even Science”

Comment author: Yvain 05 September 2009 12:55:43PM *  6 points [-]

In my experience, beginning math students simply expect their nice-sounding procedures to work. For example, they expect to be able to add fractions straight across. When you tell them they can’t, they demand to know why they can’t, as though most nice-sounding theorems are true, and if you want to claim that one isn’t, the burden of proof is on you. It is only after students gain considerable mathematical sophistication (or experience getting burned by expectations that don’t pan out) that they place the burden of proofs on the theorems.

-- Anna Salamon
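The quoted mistake is easy to exhibit concretely. A quick check with Python's `fractions` module (my illustration, not part of the quote):

```python
from fractions import Fraction

# The correct rule: convert to a common denominator before adding.
correct = Fraction(1, 2) + Fraction(1, 3)

# The "nice-sounding" procedure: add numerators and denominators straight across.
straight_across = Fraction(1 + 1, 2 + 3)

print(correct)          # 5/6
print(straight_across)  # 2/5
```

The burden of proof falls on the theorem: the straight-across rule gives 2/5, which is not even between the two summands' value 5/6 would suggest.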

Comment author: SilasBarta 05 September 2009 02:58:56PM -1 points [-]

Um, keep in mind that that was in the context of Salamon evading an answer to a serious problem with her exposition: specifically, the question of what a could/should/would agent is not.

So the full context was more like:

Critic: What other kinds of agents could there be, besides “could”/“would”/“should” agents?

AnnaSalamon: Come now, AI people get it. Your question is like thinking you can just add fractions straight across.

Comment author: wedrifid 15 September 2009 01:47:24PM 0 points [-]

Thank you, Silas. I reversed my vote. Since these are quotes from LW, the context does matter (to me).

Comment author: SilasBarta 15 September 2009 03:48:22PM *  -1 points [-]

And I thank you as well. It's good to know you apply that standard consistently, since you'll probably recall it working against me a few days ago :-)

Comment author: wedrifid 15 September 2009 04:16:56PM -1 points [-]

If you're talking about the devil's advocate sub-thread, I actually up-voted your comments in that context. From what I recall, my tongue-in-cheek replies were actually intended to emphasise and expand upon a point you were making.

(But I may well have replied to other comments of yours without taking particular note of the author.)

Comment author: SilasBarta 15 September 2009 04:28:21PM -1 points [-]

Oh. My mistake :-/

Comment author: Eliezer_Yudkowsky 01 September 2009 09:59:32PM 15 points [-]

Meta: I'm not sure we should actually have these threads. Vote up if you agree.

Comment author: thomblake 01 September 2009 11:59:13PM *  4 points [-]

I would probably not have posted it except that someone liked the idea last month, and I thought including the Z_M_Davis quote was too hilarious to pass up. YMMV.

Comment author: Eliezer_Yudkowsky 01 September 2009 09:59:49PM 16 points [-]

Vote this up if you disagree and think we should keep having OBLW quote threads.

Comment author: Alicorn 02 September 2009 04:09:14PM 18 points [-]

Perhaps we could have them less frequently than the regular quote threads - every two or three months, instead of monthly.

Comment author: SilasBarta 02 September 2009 04:25:58PM *  2 points [-]

I agree with Alicorn: having an OBLW quote thread is good, but there probably isn't enough quotable stuff to justify having it every month. It should be rarer -- how about the start of every season/quarter? Vote her post up if you prefer this alternative.

(If you say that the term "alternative" should only be used when there are two choices, you're not necessarily a terrorist, but are being excessively pedantic.)

Comment author: thomblake 02 September 2009 04:41:14PM 1 point [-]

Agreed. Tentatively renamed.

And 'alternative' should be used for any number of choices, provided they're mutually exclusive. I must, however, take issue with your usage of 'terrorist'.

Comment author: Alicorn 02 September 2009 05:54:51PM 2 points [-]

And 'alternative' should be used for any number of choices, provided they're mutually exclusive.

For maximum pedanticness, an alternative is one of two or more time-identical, agent-identical actions or sets of actions that the agent in question is able to perform. An alternative set is the complete list of all alternatives available to a given agent for a given span of time.

Comment author: SilasBarta 02 September 2009 10:02:19PM 0 points [-]

Btw, weren't you going to quote me on Allais-type paradoxes? :-(

Comment author: SilasBarta 02 September 2009 04:51:27PM 0 points [-]

Terrorist remark rewritten ;-)

Comment author: taw 01 September 2009 11:43:47PM 2 points [-]

There are extremely few quotes that are not Eliezer's, so I guess we shouldn't have those.

Comment author: Eliezer_Yudkowsky 02 September 2009 06:35:08AM 2 points [-]

Yeah, I also find that a touch disturbing. I have a rather large collection of OBLW quotes I've built up over the last few years, and needless to say, none of them are from me.

Comment author: Larks 02 September 2009 11:38:14PM 2 points [-]

Do you get enough per month/season to be constrained by the 5 posts/person/thread limit? If so, you might want to disregard that limit, ironically in the interests of reducing your own influence.

Comment author: anonym 03 September 2009 04:17:59AM -1 points [-]

That's funny. As if not having collected any quotes from your own writings is good evidence of something other than your having some humility, or that they're already all recorded in your writings, or that you can never really genuinely surprise yourself with an insight (and thus think it eminently quotable) unless you have a really bad memory.

On that topic, have others noticed that people very frequently say the silliest things when trying to be humble, making arguments with huge gaping holes that they could never fail to notice in practically any other context? It's an extremely common failure mode. I think most of us find it endearing though, which is probably why it's almost never commented on (argued against, perhaps, but not pointed out as stupidity 3 standard deviations below our norm).

Comment author: Dustin 02 September 2009 02:43:28AM 0 points [-]

I understand why someone may not think they're a good idea. However, I like them. I don't have the time to read all that gets posted on OBLW. I should allot more time. Good quotes motivate me to search out the source and read more.

Comment author: anonym 01 September 2009 03:20:27PM *  3 points [-]

There are no hard problems, only problems that are hard to a certain level of intelligence. Move the smallest bit upwards, and some problems will suddenly move from “impossible” to “obvious”. Move a substantial degree upwards, and all of them will become obvious. Move a huge distance upwards…

Eliezer S. Yudkowsky — “Staring into the Singularity”

Comment author: DanArmak 14 September 2009 09:05:46AM 1 point [-]

There are no hard problems, only problems that are hard to a certain level of intelligence.

What reasons do we have for thinking this?

Some true statements in math have no (finite-length) proof. Some computations are proven to require bigger-than-universe resources to compute (and can't be compressed). Don't these qualify as ultimately hard problems?

Comment author: anonym 15 September 2009 03:25:51AM 0 points [-]

Perhaps the key point is that for any potential "problem", there is some level of intelligence according to which the "problem" is utterly transparent in the sense that it is either obviously answerable, obviously unanswerable in principle, or obviously unanswerable in practice given a finite universe or other constraints. If there is a level of intelligence that effortlessly sees which of these is the case and solves it if it is solvable, then I don't think it makes sense to say the problem is intrinsically hard.

Comment author: Christian_Szegedy 15 September 2009 04:05:56AM *  2 points [-]

There are mathematical problems that are non-obviously not answerable, meaning that in any finite universe it is unanswerable whether the problem is answerable (as opposed to merely in some fixed one).

You could assume the existence of infinite intelligences in infinite universes, but then you may run into infinite problems that are necessarily unsolvable.

So I would agree with Dan that the quote is quite wrong, and hollow at best (hollow in the case where finite problems and infinite intelligences are assumed).

I am not surprised Eliezer marked that article "as wrong, obsolete, deprecated by an improved version, or just plain old".

Comment author: anonym 15 September 2009 05:24:54AM 0 points [-]

Does your 2nd sentence hold recursively? And could you explain why that is the case?

I can easily imagine mathematical problems that are non-obviously not answerable for me or for human beings much smarter than I am, just as the Continuum Hypothesis was for human beings before Gödel and Cohen, but I can also easily imagine an intelligence that would, for example, nearly instantly notice Gödel's incompleteness theorems as obvious consequences of the Peano axioms.

That is not literally what the original quote says, but an intelligence that could, for example, 'learn' our next century of discoveries in mathematics and theoretical physics in an afternoon seems to me to justify the weaker position that there are possible intelligences that would regard every problem we have yet solved or shown to be unsolvable as obvious and not hard.

Comment author: Christian_Szegedy 15 September 2009 06:12:47AM *  3 points [-]

Assume that for every statement in the Peano axiom system there is either a proof of it, a proof of its negation, or a proof of its independence.

For every program P you could form the statement S(P) = "program P eventually halts". Now you could solve the halting problem: for a given P, iterate over every proof and check whether it is a proof of S(P), of not-S(P), or of the independence of S(P). Once the machine finds a proof of not-S(P) or of independence, it stops with false. If it finds a proof of S(P), it stops with true. By the original assumption, it necessarily stops.

This works even if you would look for an arbitrarily long chain of "it is not provable that it is not provable that ... that S", since every finite proof has to be checked against only a finite number of such chains (since a proof must be longer than the statement it proves).
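The shape of the reduction can be sketched in code. The oracle `classify` is of course hypothetical (the argument shows it cannot exist), so it is stubbed with a toy lookup table purely to display the structure of the argument; all names are my own illustration:

```python
def classify(statement):
    """Hypothetical decider: report whether a statement is provable,
    refutable, or provably independent in the axiom system.
    Stubbed with a toy table -- no such total decider can exist."""
    table = {
        "halts(print_once)": "provable",
        "halts(loop_forever)": "refutable",
    }
    return table[statement]

def halts(program_name):
    """The reduction: 'refutable' means the program provably never halts,
    and 'independent' also implies non-halting (a halting run would itself
    constitute a proof of halting, so a halting statement is never
    independent). Only 'provable' means the program halts."""
    return classify("halts({})".format(program_name)) == "provable"

print(halts("print_once"))    # True
print(halts("loop_forever"))  # False
```

Since a working `halts` would solve the halting problem, the assumed three-way decider cannot exist.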

That is not literally what the original quote says, but an intelligence that could, for example, 'learn' our next century of discoveries in mathematics and theoretical physics in an afternoon seems to me to justify the weaker position that there are possible intelligences that would regard every problem we have yet solved or shown to be unsolvable as obvious and not hard.

That is plausible. But as you mentioned, the original quote, catchy though, was different and unfortunately (unlike most philosophical statements) formally wrong.

Comment author: anonym 21 September 2009 12:47:06AM *  1 point [-]

Thanks for the concrete example. I do still think though that there is something of value in the quote that may be worth salvaging, perhaps by restricting it to solvable problems of the sort that we care about most (such as scientific and mathematical advances) and emphasizing not that everything is obvious to some conceivable intelligence but that most of what is currently not obvious to us and that we perceive as difficult is not obvious and is perceived as difficult not because it is intrinsically so but because we are so limited.

The heart of the quote for me is that instead of hardness being a function of one argument (in Haskell notation):

difficulty :: Problem -> PositiveReal

it is a function of two arguments:

difficulty :: Mind -> Problem -> PositiveReal

And that most of the interesting real-world Problem instances that map to very large numbers for us (i.e. the difficult ones that we will eventually solve if we survive long enough) are problems that would be deemed obvious to sufficiently intelligent minds.

And that is a worthwhile insight, even if the function is not defined for all problems, even if there are solvable problems for which there are no physically possible minds that would yield "difficulty p m" (or "difficulty(p, m)" in Python syntax) being a small enough number to fall below the threshold of obviousness, and even if for any possible mind we can find solvable problems that have arbitrarily high difficulty.
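The two-argument view can be made concrete with a toy model (the function body and all numbers here are invented purely for illustration):

```python
def difficulty(mind_capacity, problem_size):
    """Hypothetical two-argument difficulty: the same problem shrinks
    toward 'obvious' as the mind's capacity grows."""
    return problem_size / mind_capacity

OBVIOUS_THRESHOLD = 1.0   # below this, the problem registers as obvious
problem = 500.0           # a fixed problem, hard for a baseline mind

for capacity in (1.0, 10.0, 1000.0):
    d = difficulty(capacity, problem)
    print(capacity, d, d < OBVIOUS_THRESHOLD)
```

The problem itself never changes; only the first argument does, which is the heart of the quote.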

Comment author: Johnicholas 15 September 2009 12:29:10PM 0 points [-]

There's a flaw in your proof - the step from "The machine will necessarily stop" to "This solves the halting problem" is unjustified.

Despite the flaw, I agree with your general conclusion - there are and will be challenging as well as unsolvable problems.

Comment author: pengvado 15 September 2009 08:10:31PM *  3 points [-]

It's justified. If a machine halts, then there's a proof of that in PA (simply the list of steps it performs before halting). Therefore, "independent of PA" implies "doesn't halt". Therefore, a provability decider is the same as a halting decider.
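pengvado's observation can be sketched in code: a terminating run, recorded step by step, is itself a finite certificate of halting. All names here are my own illustration:

```python
def run_with_trace(step, state, max_steps=10_000):
    """Run a machine given by `step`, which maps a state to either
    ("halt", result) or ("continue", new_state). If it halts within the
    bound, the recorded trace is a finite, mechanically checkable
    'proof' that it halted: just the list of steps it performed."""
    trace = [state]
    for _ in range(max_steps):
        kind, value = step(state)
        if kind == "halt":
            return trace, value
        state = value
        trace.append(state)
    raise RuntimeError("no halt within bound; no certificate produced")

# Toy machine: count down to zero, then halt.
countdown = lambda n: ("halt", 0) if n == 0 else ("continue", n - 1)

trace, result = run_with_trace(countdown, 3)
print(trace)   # [3, 2, 1, 0]
```

This is why "independent of PA" implies "doesn't halt": if the machine did halt, the trace above could be transcribed into a PA proof of halting.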

Comment author: Johnicholas 15 September 2009 11:31:13PM 1 point [-]

With the added steps, yes, there is a proof.

Comment author: DanArmak 16 September 2009 08:14:39PM 0 points [-]

Christian's argument shows there are always problems that can't be solved or shown to be unsolvable. But even if you ignore them and look at finitely solvable problems (or provable theorems in PA), there's no upper bound on the time or processing power needed to solve a randomly chosen problem. Not even for the "smartest" intelligence allowed by the laws of physics (i.e. the one ideally optimized to solve the chosen problem).

Comment author: anonym 21 September 2009 12:56:32AM 0 points [-]

Thanks for the clear explanation. My response below to Christian is relevant here also.

Comment author: wedrifid 15 September 2009 01:57:36PM 1 point [-]

Arbitrarily hideous evils can be wholly concealed by a social atmosphere in which no one is expected to point them out and it would seem awkward and out-of-place to do so.

  • EY

Comment author: thomblake 03 September 2009 12:35:54PM 1 point [-]

If making the "wrong" choice when presented with two high-probability, high-payoff lottery tickets is exploitation, I don't want to be empowered.

-SilasBarta

Comment author: SilasBarta 12 October 2009 08:13:14PM 0 points [-]

Finally saw this. Thanks for the shout-out! :-)

Comment author: PhilGoetz 04 September 2009 04:20:42PM 0 points [-]

Sometime ago in my wild and reckless youth that hopefully isn't over yet

  • cousin_it