
RafeFurst comments on Reductionism - Less Wrong

40 points. Post author: Eliezer_Yudkowsky, 16 March 2008 06:26AM


Comment author: RafeFurst 07 March 2010 05:15:34PM 4 points

Reductionism is great. The main problem is that by itself it tells us nothing new. Science depends on hypothesis generation, and reductionism says nothing about how to do that in a rational way, only how to test hypotheses rationally. For some reason the creative side of science -- and I use the word "creative" in the generative sense -- is never addressed by methodology in the same way falsifiability is:

http://emergentfool.com/2010/02/26/why-falsifiability-is-insufficient-for-scientific-reasoning/

We are at a stage of historical enlightenment where more and better reductionism is producing diminishing returns. To be even less wrong, we might spend more time on the hypothesis-generation side of the equation.

Comment author: Jack 07 March 2010 06:08:56PM 7 points

Really? I think of reductionism as maybe the greatest, most wildly successful abductive tool in all of history. If we can't explain some behavior or property of an object, it tells us one good guess is to look to that object's component parts for the answer. The only other strategy for hypothesis generation I can think of that has been comparably successful is skepticism (about evidence and testimony). "I was hallucinating" and "The guy is lying" have explained a lot of things over the years. Can anyone think of others?

Comment author: JGWeissman 07 March 2010 06:32:52PM 4 points

Science depends on hypothesis generation, and reductionism says nothing about how to do that in a rational way, only how to test hypotheses rationally.

You may be interested in Science Doesn't Trust Your Rationality, in which Eliezer suggests that science is a way of identifying the good theories produced by a community of scientists who on their own have some capacity to produce theories, and that Bayesian rationality is a systematic way of producing good theories.

Oh, and Welcome to Less Wrong! You have identified an important point in your first few comments, and I hope that is a predictor of good things to come.

Comment author: whowhowho 04 February 2013 03:13:01PM 0 points

and that Bayesian rationality is a systematic way of producing good theories.

An automated theory generator would be worth a Nobel.

Comment author: TheOtherDave 04 February 2013 05:38:31PM 2 points

So, the introduction of "automated" to this discussion feels like a complete non sequitur to me. Can you clarify why you introduced it?

Comment author: whowhowho 04 February 2013 07:49:51PM 0 points

If you have a "systematic" way of "producing" something (JGWeissman's words), surely you can automate it.

Comment author: TheOtherDave 04 February 2013 08:21:25PM 0 points

Ah. OK, thanks for clarifying.

Comment author: [deleted] 05 February 2013 05:03:04AM 1 point

I could call a procedure "systematic" even if one of the steps used a human's System 1 as an oracle, in which case it'd be hard to automate that as per Moravec's paradox.

Comment author: whowhowho 05 February 2013 11:07:13AM 0 points

I would not call such a procedure systematic. Who would? Here's a system for success as an author: first, have a brilliant idea... It reads like a joke, doesn't it?

Comment author: [deleted] 05 February 2013 12:32:23PM 1 point

I wasn't thinking of something that extreme; more like the kind of tasks people do on Mechanical Turk.

Comment author: whowhowho 05 February 2013 12:35:06PM -2 points

Is there anything non-systematic by that definition? In what way does it promote Bayesianism to call it systematic?

Comment author: TheOtherDave 05 February 2013 04:08:30PM 2 points

Well, I have no idea if it "promotes Bayesianism" or not, but when someone talks to me about a systematic approach to doing something in normal conversation, I understand it to be as opposed to a scattershot/intuitive approach.

For example, if I want to test a piece of software, I can make a list of all the integration points and inputs and key use cases and build a matrix of those lists and build test cases for each cell in that matrix, or I can just construct a bunch of test cases as they occur to me. The former approach is more systematic, even if I can't necessarily automate the test cases.
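To make that concrete, here's a minimal sketch of the matrix approach (the lists are hypothetical stand-ins, and Python is just a convenient notation, not anything from this thread):

```python
from itertools import product

# Hypothetical stand-ins for the lists you'd actually enumerate
# for the software under test.
integration_points = ["database", "payment_gateway", "email_service"]
inputs = ["valid", "empty", "malformed"]
use_cases = ["new_user_signup", "returning_user_login"]

# One test case per cell of the cross product, so no combination
# is skipped just because it didn't occur to anyone.
for point, inp, case in product(integration_points, inputs, use_cases):
    print(f"test: {case} via {point} with {inp} input")

# 3 * 3 * 2 = 18 cases, versus however many happen to come to mind.
```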

I realize that your understanding of "systematic" is different from this... if I've understood you, then this approach is not systematic on your account unless I can automate the test cases.

Comment author: [deleted] 05 February 2013 04:39:21PM 2 points

Is there anything non systematic by that definition?

See TheOtherDave.

In what way does it promote Bayesianism to call it systematic?

See E.T. Jaynes calling certain frequentist techniques “ad-hockeries”. EDIT: BTW, I didn't have Bayesianism in mind when I replied to this ancestor -- I should stop replying to comments without reading their ancestors first.

Comment author: private_messaging 05 February 2013 07:39:15AM 1 point

It feels like you use questions a lot more than usual, and it looks very much like a rhetorical device, because you inject counterpoints into your questions. Can you clarify why you do it? (see what I did there?)

Sidenote: Actually, questions are often a sneaky rhetorical device -- you can modify the statement in whatever way you choose, and then ask questions about that. You see that in political debates all the time.

Comment author: Vaniver 05 February 2013 02:12:43PM 0 points

Agreed that questions can be used in underhanded ways, but this example does seem more helpful at focusing the conversation than something like:

Can you clarify why you added "automated" to the discussion?

That could easily go in other directions; this makes clear that the question is "how did we get from A to B?" while sharing control of the topic change / clarification.

Comment author: TheOtherDave 05 February 2013 03:37:44PM 0 points

Can you clarify why you do it?

Sure, I'd be happy to: because I want answers to those questions.

For example, whowhowho's introduction of "automated" did in fact feel like a non sequitur to me, and I wanted to understand better why they'd introduced it, to see whether there was some clever reasoning there I'd failed to follow. Their answer to my question clarified that, and I thanked them for the clarification, and we were done.

(see what I did there?)

You asked a question.
I answered it.
It really isn't that complicated.

That said, I suspect from context that you mean to imply that you did something sneaky and rhetorical just then, just as you seem to believe that I do something sneaky and rhetorical when I ask questions.
If that's true, then no, I guess I don't see what you did there.

questions are often a sneaky rhetorical device

Yes. So are statements.

Comment author: shminux 04 February 2013 06:35:56PM 2 points
Comment deleted 05 February 2013 09:22:55AM
Comment author: Kawoomba 05 February 2013 11:55:31AM 0 points

Solomonoff Induction, insofar as it is related to interpretations at all, rejects the many-worlds interpretation, because the valid (non-falsified) code strings are the ones whose output began with the actual experimental outcome rather than listing all possible outcomes -- i.e., they are very much Copenhagen-like.

Has this point ever been answered? If we are content with the desired output appearing somewhere along the line -- as opposed to at the start -- then the simplest theory of everything would be a program that prints enough digits of pi, and our universe would be described somewhere down the line.
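(A standard reply from algorithmic information theory, sketched here as background rather than taken from the thread: a pi-printing program must also specify where in the expansion the data begins, so its effective length is roughly

$$|p| \approx |\text{print-}\pi| + \log_2 k,$$

where $k$ is the offset at which the data appears. For a random string of $n$ digits, the first occurrence typically sits at an offset around $10^n$, so encoding $k$ costs about $n \log_2 10$ bits -- no shorter than just encoding the $n$ digits directly.)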

Comment deleted 05 February 2013 01:25:41PM
Comment author: Kawoomba 05 February 2013 03:14:21PM 2 points
Comment author: Eliezer_Yudkowsky 05 February 2013 07:39:54PM 2 points

Solomonoff induction is about putting probability distributions on observations -- you're looking for the best combination of a simple program and a high probability assigned to the observations. Technically, the original SI doesn't talk about causal models you're embedded in, just programs that assign probabilities to experiences.
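For reference, one standard formulation of this (the deterministic variant, stated as background): with $U$ a universal prefix machine, the prior probability of an observation string $x$ is

$$M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|},$$

where the sum runs over programs whose output begins with $x$ -- a program whose output merely contains $x$ somewhere gets no credit.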

Generalizing somewhat, for QM as it appears to humans, the generalized-SI-selected hypothesis would be something along the lines of one program that extrapolated the wavefunction, then another program that looked for people inside it and translated the underlying physics into the "observed data" from their perspective, then put probabilities on the sequences of data corresponding to integral squared modulus. Note that you also need an interface from atoms to experiences just to e.g. translate a classical atomic theory of matter into "I saw a blue sky", and an implicit theory of anthropics/sum-probability-measure too if the classical universe is large enough to have more than one copy of you.
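As a toy illustration of the last stage only -- the amplitudes are hypothetical, and this sketches just the "probabilities from squared modulus" step, not the wavefunction-extrapolating or observer-finding programs:

```python
from math import fsum

def extrapolate_wavefunction():
    # Hypothetical stand-in for the first program's output: a tiny
    # "wavefunction" as complex amplitudes over observer-visible data.
    return {"saw spin-up": 0.6 + 0.0j, "saw spin-down": 0.8j}

def born_probabilities(wavefunction):
    # Final stage: each data sequence gets probability proportional
    # to the squared modulus of its amplitude.
    weights = {data: abs(amp) ** 2 for data, amp in wavefunction.items()}
    total = fsum(weights.values())
    return {data: w / total for data, w in weights.items()}

print(born_probabilities(extrapolate_wavefunction()))
# prints approximately {'saw spin-up': 0.36, 'saw spin-down': 0.64}
```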

Comment author: Kawoomba 05 February 2013 07:42:35PM 1 point

Thanks for this. I'll mull it over.

Comment author: private_messaging 05 February 2013 10:29:42PM 1 point
Comment author: whowhowho 05 February 2013 08:04:12PM 2 points

It isn't at all clear why all that would add up to something simpler than a single-world theory.

Comment author: Eliezer_Yudkowsky 05 February 2013 08:08:19PM 8 points

Single-world theories still have to compute the wavefunction, identify observers, and compute the integrated squared modulus. Then they have to pick out a single observer with probability proportional to the integral, peek ahead into the future to determine when a volume of probability amplitude will no longer strongly causally interact with that observer's local blob, and eliminate that blob from the wavefunction. Then translating the reductionist model into experiences requires the same complexity as before.

Basically, it's not simpler for the same reason that in a spatially big universe it wouldn't be 'simpler' to have a computer program that picked out one observer, calculated when any photon or bit of matter was moving away and wasn't going to hit anything that would reflect it back, and then eliminated that matter.

Comment author: Morendil 07 March 2010 06:37:44PM 0 points

Agreed: we need more posts on abductive reasoning specifically.