POLL: Realism and reductionism

-5 draq 05 November 2010 09:13PM

A second attempt.

Definitions:

universe: that which contains everything.

reality: the realm of natural phenomena.

scientific theory: a theory that identifies natural phenomena.

morality: the realm of normative rules.

normative theory: a theory that identifies normative rules.

identification: "this natural phenomenon has the following properties" or "this normative rule says: ... "

 

What are you?

Please answer in the form of [ABC0]{4}, where 0 stands for no opinion. Feel free to add an explanation.

Example: B0BA stands for anti-realism, no opinion on values, weak ontological reductionism, scientific reductionism.
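For concreteness, the answer format can be checked mechanically. The sketch below is purely illustrative; the `parse_answer` helper and the question labels are my own naming, not part of the poll:

```python
import re

# A poll answer is four characters, one per question group (1-4),
# each drawn from A, B, C, or 0 (no opinion) -- i.e. [ABC0]{4}.
ANSWER_RE = re.compile(r"[ABC0]{4}")

def parse_answer(answer: str) -> dict:
    """Validate an answer string and map each position to its question."""
    if not ANSWER_RE.fullmatch(answer):
        raise ValueError(f"not a valid poll answer: {answer!r}")
    # Hypothetical labels for the four question groups above.
    questions = ["realism", "value realism",
                 "ontological reductionism", "scientific reductionism"]
    return dict(zip(questions, answer))

# The example from the post: B0BA
print(parse_answer("B0BA"))
```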

 


 

1A realism

Reality is external to the mind.

It is possible to evaluate which scientific theory is more correct.

1B anti-realism

Reality is external to the mind.

It is impossible to evaluate which scientific theory is more correct.

1C subjectivism

Reality is a product of the mind.


2A value realism

Morality is external to the mind.

It is possible to evaluate which normative theory is better.

2B value anti-realism

Morality is external to the mind.

It is impossible to evaluate which normative theory is better.

2C value relativism

Morality is a product of the mind.


3A strong ontological reductionism

Mental phenomena are reducible to reality and reality is reducible to mathematics.

Mathematics is the universe.

3B weak ontological reductionism

Mental phenomena are reducible to reality, but reality is not reducible to mathematics.

Reality (and mathematics) is the universe.

3C anti-reductionism

Mental phenomena are not reducible to reality and reality is not reducible to mathematics.

 


 

4A scientific reductionism

The entirety of scientific theories can be reduced to some axiomatic theories.

4B scientific anti-reductionism

The entirety of scientific theories cannot be reduced to some axiomatic theories.

New natural phenomena require new irreducible scientific theories.

 


 

Comment author: jimrandomh 04 November 2010 09:19:57PM 0 points [-]

I think it is insufficiently detailed to identify a unique utility function - it needs to have specific extrapolation and reconciliation procedures filled in, the details of those procedures are important and affect the result, and a bad extrapolation procedure could produce arbitrary results.

That said, programming an AI with any value system that didn't match the template of CEV (plus details) would be a profoundly stupid act. I have seen so many disastrously buggy attempts to define what human values are that I doubt it could be done correctly without the aid of a superintelligence.

Comment author: draq 05 November 2010 06:57:43PM 0 points [-]

No Universally Compelling Arguments contains a proof that for every possible morality, there is a mind with volition to which it does not apply. Therefore, there is no absolute morality.

There is no universally compelling argument for morality, just as there is no universally compelling argument for reality. Perception of the physical world can be altered as well. But it does not necessarily follow that there is no absolute reality.

I also have to correct my position: CEV is not absolute morality. Volition is rather a "receptor" or "sensor" of morality; I made a conceptual mistake.

Can you formulate your thoughts value-free, that is, without words like "profoundly stupid" or "important"? These words suggest that we should do something. If there is no universal morality, why do you postulate anything normative? Other than for fun.

PS: I have to stop posting. First, I have to take time to think. Second, this temporary block is driving me insane.

Comment author: Pavitra 04 November 2010 09:26:28PM 0 points [-]

So there is no normative rule that Pavitra (you) should care about G101. It just happens, it could also be different and it does not matter. That is what I call (moral) nihilism.

Don't you ever ask why you should care (about anything, including yourself caring about things)? (I am not suggesting you become suicidal, but on the other hand, there is no normative rule against it, so... hm... I still won't.)

Again, it's not that I don't care about anything. I just happen to have a few core axioms, things that I care about for no reason. They don't feel arbitrary to me -- after all, I care about them a great deal! -- but I didn't choose to care about them. I just do.

A large group of crazies agreeing: Ever heard of religion, homeopathy, TCM et cetera?

Sure, and those are the claims I take the time to evaluate and debunk.

If you get into the car, you are a G701, if not, you are a G702.

Please explain the relationship between G701-702 and G698-700.

Comment author: draq 05 November 2010 06:47:41PM 0 points [-]

Again, it's not that I don't care about anything. I just happen to have a few core axioms, things that I care about for no reason. They don't feel arbitrary to me -- after all, I care about them a great deal! -- but I didn't choose to care about them. I just do.

And you believe that other minds have different core beliefs?

Sure, and those are the claims I take the time to evaluate and debunk.

I think we should close the discussion and take some time thinking.

Please explain the relationship between G701-702 and G698-700.

"Chance is low" and "chance is high" are not merely descriptive; they also contain values. "Chance is low" suggests it is probably safe to drive; "chance is high" suggests it is probably not, based on the more fundamental axiom that surviving is good. And "surviving is good" is not descriptive but normative, because "good" is a value. You could also say instead: "you should survive", which is a normative rule.
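draq's point can be made concrete with a toy expected-value calculation: the descriptive probability alone yields no decision until it is combined with values. The function and numbers below are hypothetical illustrations, not anything from the thread:

```python
def should_drive(p_crash: float, crash_cost: float, trip_benefit: float) -> bool:
    """Descriptive input: p_crash. Normative inputs: crash_cost, trip_benefit.

    The decision rule itself -- drive iff expected benefit exceeds
    expected cost -- is the normative step.
    """
    return trip_benefit > p_crash * crash_cost

# Same descriptive probability, different values -> different decisions.
print(should_drive(1e-5, crash_cost=1_000_000, trip_benefit=50))      # expected cost 10
print(should_drive(1e-5, crash_cost=1_000_000_000, trip_benefit=50))  # expected cost 10,000
```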

In response to POLL: Reductionism
Comment author: David_Allen 04 November 2010 07:56:24PM *  0 points [-]

I struggled to understand your original descriptions, so I rephrased them. Does this capture your categorization?

  1. Strong ontological reductionist:
    • a) everything can be reduced to a mathematical object in a mathematical realm
    • b) nothing exists outside the mathematical realm
  2. Weak ontological reductionist:
    • a) mental phenomena are entirely physical in nature
    • b) everything does not necessarily reduce to mathematical objects in a mathematical realm
  3. Strong scientific reductionist:
    • a) the behavior of the universe is fundamentally deterministic
  4. Weak scientific reductionist:
    • a) the behavior of the universe is fundamentally probabilistic
    • b) it is impossible to predict specific outcomes in quantum level systems
    • c) concepts and theories in chemistry and biology are useful high level approximations of the fundamentally probabilistic universe
Comment author: draq 04 November 2010 08:12:18PM *  0 points [-]

Thanks for the rephrasing. I would amend:

  4. Weak scientific reductionist:
    c) concepts and theories in chemistry and biology are only useful high-level approximations to physical models of the universe. They could be reduced to physical theories if b) does not apply.
Comment author: Pavitra 04 November 2010 07:22:22PM 0 points [-]

G101: Pavitra (me) cares about something.

What is the point in caring for G101?

Since I'm Pavitra, it doesn't really matter to me if G101 has a point; I care about it anyway.

What if you can't predict?

Their claims are basically noisy. If a large group of crazies started agreeing with each other, that might be worth looking into more carefully.

That is not how your brain works (a rough guess).

Not natively, no. That's why it requires advocacy.

Comment author: draq 04 November 2010 07:32:12PM 0 points [-]

Since I'm Pavitra, it doesn't really matter to me if G101 has a point; I care about it anyway.

So there is no normative rule that Pavitra (you) should care about G101. It just happens, it could also be different and it does not matter. That is what I call (moral) nihilism.

Don't you ever ask why you should care (about anything, including yourself caring about things)? (I am not suggesting you become suicidal, but on the other hand, there is no normative rule against it, so... hm... I still won't.)

Their claims are basically noisy. If a large group of crazies started agreeing with each other, that might be worth looking into more carefully.

A large group of crazies agreeing: Ever heard of religion, homeopathy, TCM et cetera?

Not natively, no. That's why it requires advocacy.

You care about things. I assume you care about your health. In that case, you don't want to be in a crash. So you'll evaluate whether you should get into a car. If you get into the car, you are an optimist; if not, you are a pessimist.

Again, why is it important to advocate anything? -- Because you care about it. -- So what?

Comment author: Pavitra 04 November 2010 06:43:42PM *  0 points [-]

So your point is that there is no point in caring about anything. Do you call yourself a nihilist?

No, I care about things. It's just that I don't think that G695 (assuming it's defined -- see below) would be particularly humane or good or desirable, any more than (say) Babyeater morality.

Would you call yourself a naive realist?

Certainly not -- hence "eventually". Science requires interpreting data.

Edit: oh, sorry, forgot to address your actual point.

At a certain point, the working model of reality begins to predict what the insane will claim to perceive and how those errors come about.

I would say, the optimistic view is saying "There is probably/hopefully no crash". But don't let us fight over words.

Very well. Let us assume that (warning: numbers just made up) one in every 100,000 car trips results in a crash. The G698 view says "The chances of a crash are low." The G699 view says "The chances of a crash are high." The G700 view says "The chances of a crash are 1/100,000." I advocate the G700 view, and assert that believing G698 or G699 interferes with believing G700.
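The made-up 1-in-100,000 figure can be turned into exactly the kind of quantitative claim the G700 view favors, e.g. the chance of at least one crash over many trips. This is a hypothetical sketch of my own, not something from the comment:

```python
# Assumed per-trip crash probability (the made-up number from the comment).
P_CRASH = 1 / 100_000

def crash_probability(n_trips: int, p: float = P_CRASH) -> float:
    """Probability of at least one crash in n independent trips."""
    return 1 - (1 - p) ** n_trips

print(crash_probability(1))       # one trip: the raw 1/100,000
print(crash_probability(10_000))  # many trips: roughly 0.095
```

Note how "low" and "high" both lose this information: the same per-trip probability looks negligible for one trip and substantial over a lifetime of trips.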

Does the CEV of humankind exist?

I personally don't think the extrapolated volition of humanity coheres, but I have the impression that others disagree with me.

I would be very surprised, however, if the extrapolated volition of all volitional entities cohered and the extrapolated volition of all volitional humans did not.

Comment author: draq 04 November 2010 07:07:51PM 0 points [-]

I like gensyms.

G101: Pavitra (me) cares about something.

What is the point in caring for G101?

At a certain point, the working model of reality begins to predict what the insane will claim to perceive and how those errors come about.

What if you can't predict?

I advocate the G700 view, and assert that believing G698 or G699 interferes with believing G700.

That is not how your brain works (a rough guess). Your brain thinks either G698 or G699 and then comes out with a decision about either driving or not. This heuristic process is called optimism or pessimism.

Comment author: Pavitra 04 November 2010 06:18:50PM 0 points [-]

Absolute morality is the coherent extrapolated volition of all entities with volition.

This sounds like a definition, so let's gensym it and see if it still makes sense.

G695 is the coherent extrapolated volition of all entities with volition.

Why should I care about G695? In particular, why should I prefer it over G696, which is the CEV of all humans with volition alive in 2010, or over G697, which is the CEV of myself?

So the reason why you think there is a reality is because there is a strong consensus and the reason why you think that there is no morality is because there is no strong consensus?

No, that's my reason for breaking symmetry between them, for discarding the assumption that the explanation of the two phenomena should be essentially isomorphic. I then investigate the two unrelated phenomena individually and eventually come to the conclusion that there is one reality between all humans, but a separate morality for each human.

within the boundaries of my ignorance, I can hope for the better or believe in the worse.

There is a very great difference between hoping for the better and believing in the better. Nor are "better" or "worse" the only two options.

Suppose you're getting into a car, and you're wondering whether you will get into a crash. The optimistic view is that you will definitely not crash. The pessimistic view is that you will definitely crash. Neither of these is right.

To be absolute, it has to apply to every mind that has volition.

You're constructing a universal CEV. It's not an already-existing ontologically fundamental entity. It's not a thing that actually exists.

That is why I evaluate arguments based on other things than someone's ulterior motives.

Consciously, sure. I just wanted to warn you against the human credulity bias.

Comment author: draq 04 November 2010 06:26:23PM 0 points [-]

Why should I care about G695? In particular, why should I prefer it over G696, which is the CEV of all humans with volition alive in 2010, or over G697, which is the CEV of myself?

So your point is that there is no point in caring about anything. Do you call yourself a nihilist?

I then investigate the two unrelated phenomena individually and eventually come to the conclusion that there is one reality between all humans, but a separate morality for each human.

Would you call yourself a naive realist? What about people on LSD, schizophrenics, and religious people who see their Almighty Lord Spaghetti Monster in what you would call clouds? You surely mean that there is one reality among all humans who are "sane".

Suppose you're getting into a car, and you're wondering whether you will get into a crash. The optimistic view is that you will definitely not crash. The pessimistic view is that you will definitely crash. Neither of these is right.

I would say, the optimistic view is saying "There is probably/hopefully no crash". But don't let us fight over words.

You're constructing a universal CEV. It's not an already-existing ontologically fundamental entity. It's not a thing that actually exists.

Does the CEV of humankind exist?

Comment author: jimrandomh 04 November 2010 05:58:37PM 0 points [-]

And yet you seem to acknowledge that the output of the CEV function depends on whose volition it is asked to extrapolate. In what sense then is morality absolute, rather than relative to a certain kind of mind?

To be absolute, it has to apply to every mind that has volition.

No Universally Compelling Arguments contains a proof that for every possible morality, there is a mind with volition to which it does not apply. Therefore, there is no absolute morality.

Comment author: draq 04 November 2010 06:09:18PM 0 points [-]

What do you think of Eliezer's idea of "coherent extrapolated volition of humankind" and his position that FAI should optimise it?

Comment author: Pavitra 04 November 2010 05:54:55PM *  0 points [-]

Often, when I stop to think about a decision, I find that my desire changes upon reflection. The latter desire generally seems more intellectually coherent(*), and across multiple instances, the initial desires on various occasions are generally more inconsistent with one another while the after-reflection desires are generally more consistent with one another. From this I infer the existence of a (possibly only vague, partially specified, or partially consistent) common cause to the various instances' after-reflection desires. This common cause appears to roughly resemble a bundle of heuristics that collectively approximate some sort of optimization criteria. I call the bundle of heuristics my "moral intuition" and the criteria they approximate my "morality".

I suspect that other human's minds are broadly similar to mine in this respect, and that their moral intuitions are broadly similar to mine. To the extent they correlate, we might call the set of common trends "human morality" or "humaneness".

(*) An example of intellectual coherence vs. incoherence: Right now, I'd like to go get some ice cream from the freezer. However, on reflection, I remember that there isn't any ice cream in the freezer at the moment, so walking over to the freezer would not satisfy the impulse that motivated the action.

Comment author: draq 04 November 2010 05:58:16PM 1 point [-]

What about the Baby-Eaters and the Super Happy People in the story Three Worlds Collide? Do they have anything you would call "humaneness"?

POLL: Reductionism

-3 draq 04 November 2010 05:55PM

Since there is no handy tool to create polls on LW, please post comments on your position.

As which of the following would you identify yourself? (I am not good at rationalist taboo, thus please excuse me for ambiguous terms.)

Strong ontological reductionist

See definition on Wikipedia. Someone who believes that mental phenomena can be fully reduced to physics and that physics can be fully reduced to mathematics. That is, desires and electrons don't have any fundamental qualities, but are in the end mathematical objects. And nothing exists outside the mathematical realm.

Weak ontological reductionist

Someone who believes that mental phenomena don't have any qualities outside the domain of physics. Every aspect of mental phenomena can be fully reduced to physical phenomena. But physical phenomena are not necessarily mathematical objects.

Strong scientific reductionist

Someone who believes that quantum mechanics is wrong and Laplace's demon can exist in principle (if unrestricted by physical limitations). 

Weak scientific reductionist

Someone who concedes that it is impossible in principle to predict complicated physical systems, but holds that the concepts and theories in chemistry and biology are mere approximations and simplifications of complicated physical computations to sidestep the (faster-than-)exponential wall. That is, chemical and biological models are not fundamental, but are reducible to physical theories (if we had the theoretical computational power to simulate the models).

 

Please also comment if you are not a reductionist and explain what kind of reductionist you are not.
