draq00

No Universally Compelling Arguments contains a proof that for every possible morality, there is a mind with volition to which it does not apply. Therefore, there is no absolute morality.

There is no universally compelling argument for morality, just as there is no universally compelling argument for reality: you can alter a mind's physical perception as well. But it does not necessarily follow that there is no absolute reality.

I also have to correct my position: CEV is not absolute morality. Volition is rather a "receptor" or "sensor" of morality. I made a conceptual mistake.

Can you formulate your thoughts value-free, that is, without words like "profoundly stupid" or "important"? These words suggest that we should do something. If there is no universal morality, why do you postulate anything normative, other than for fun?

PS: I have to stop posting. First, I need to take time to think. Second, this temporary block is driving me insane.

draq00

Again, it's not that I don't care about anything. I just happen to have a few core axioms, things that I care about for no reason. They don't feel arbitrary to me -- after all, I care about them a great deal! -- but I didn't choose to care about them. I just do.

And you believe that other minds have different core beliefs?

Sure, and those are the claims I take the time to evaluate and debunk.

I think we should close the discussion and take some time to think.

Please explain the relationship between G701-702 and G698-700.

"The chance is low" and "the chance is high" are not merely descriptive; they also contain values. Chance is low --> probably safe to drive; chance is high --> probably not, based on the more fundamental axiom that surviving is good. And "surviving is good" is not descriptive, it is normative, because "good" is a value. You could also say instead: "you should survive", which is a normative rule.
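The descriptive/normative split above can be sketched as a toy decision rule (my own illustration, not from the thread): the crash probability is the purely descriptive input, while the utilities encode the normative axiom "surviving is good". The function name and all numbers here are made-up assumptions for the example.

```python
def should_drive(p_crash, u_arrive=1.0, u_crash=-1000.0, u_stay=0.0):
    """Drive iff the expected utility of driving exceeds staying home.

    p_crash is descriptive (a probability); the u_* utilities are
    normative (they encode how much we value surviving vs. arriving).
    """
    eu_drive = (1 - p_crash) * u_arrive + p_crash * u_crash
    return eu_drive > u_stay

print(should_drive(1e-6))  # low chance  -> True (probably safe to drive)
print(should_drive(0.5))   # high chance -> False (probably not)
```

Note that changing the utilities alone, with the probability held fixed, flips the decision; the "should" never comes from the descriptive part by itself.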

draq00

Thanks for the rephrasing. I would amend:

  1. Weak scientific reductionist:
    c) concepts and theories in chemistry and biology are only useful high level approximations to physical models of the universe. They could be reduced to physical theories if b) does not apply.
draq00

Since I'm Pavitra, it doesn't really matter to me if G101 has a point; I care about it anyway.

So there is no normative rule that Pavitra (you) should care about G101. It just happens; it could be otherwise, and it would not matter. That is what I call (moral) nihilism.

Don't you ever ask why you should care (about anything, including your own caring about things)? (I am not suggesting you become suicidal, but on the other hand, there is no normative rule against it, so... hm... I still won't.)

Their claims are basically noisy. If a large group of crazies started agreeing with each other, that might require looking into more carefully.

A large group of crazies agreeing: ever heard of religion, homeopathy, TCM, et cetera?

Not natively, no. That's why it requires advocacy.

You care about things. I assume you care about your health. In that case, you don't want to be in a crash. So you'll evaluate whether you should get into a car. If you get into the car, you are an optimist, if not, you are a pessimist.

Again, why is it important to advocate anything? -- Because you care about it. -- So what?

draq00

I like gensyms.

G101: Pavitra (me) cares about something.

What is the point in caring for G101?

At a certain point, the working model of reality begins to predict what the insane will claim to perceive and how those errors come about.

What if you can't predict?

I advocate the G700 view, and assert that believing G698 or G699 interferes with believing G700.

That is not how your brain works (a rough guess). Your brain assumes either G698 or G699 and then arrives at a decision either to drive or not. This heuristic process is called optimism or pessimism.

draq00

Why should I care about G695? In particular, why should I prefer it over G696, which is the CEV of all humans with volition alive in 2010, or over G697, which is the CEV of myself?

So your point is that there is no point in caring about anything. Do you call yourself a nihilist?

I then investigate the two unrelated phenomena individually and eventually come to the conclusion that there is one reality between all humans, but a separate morality for each human.

Would you call yourself a naive realist? What about people on LSD, schizophrenics, and religious people who see their Almighty Lord Spaghetti Monster in what you would call clouds? You surely mean that there is one reality shared by all humans who are "sane".

Suppose you're getting into a car, and you're wondering whether you will get into a crash. The optimistic view is that you will definitely not crash. The pessimistic view is that you will definitely crash. Neither of these is right.

I would say the optimistic view is: "There is probably/hopefully no crash." But let's not fight over words.

You're constructing a universal CEV. It's not an already-existing ontologically fundamental entity. It's not a thing that actually exists.

Does the CEV of humankind exist?

draq00

What do you think of Eliezer's idea of "coherent extrapolated volition of humankind" and his position that FAI should optimise it?

draq10

What about the Baby-Eaters and the Super Happy People in the story Three Worlds Collide? Do they have anything you would call "humaneness"?

draq-10

Universal morality

You need to go read the sequences, and come back with specific counterarguments to the specific reasoning presented therein on the topics that you're discussing.

I don't think there is an easy way to make FAI.

Absolute morality is the coherent extrapolated volition of all entities with volition. Morality is based on values. In a universe containing only insentient stones, there is no morality, and even if there were, it would be meaningless. Morality exists only where there are values (things that we either like or dislike), that is, "volition".

Reality and Morality

So the reason why you think there is a reality is because there is a strong consensus and the reason why you think that there is no morality is because there is no strong consensus?

Optimism and pessimism are incompatible with realism. If you're not willing to believe that the universe works the way that it does in fact work, then you're not qualified to work on potentially-world-destroying projects.

I don't see what optimism or pessimism has to do with willingness to believe in an absolute reality. I only know that my knowledge is limited, and within the boundaries of my ignorance, I can hope for the better or expect the worse. If I were omniscient, I would be neither optimistic nor pessimistic. We are optimistic because we are ignorant, not the other way around, at least in my case.

And yet you seem to acknowledge that the output of the CEV function depends on whose volition it is asked to extrapolate. In what sense then is morality absolute, rather than relative to a certain kind of mind?

To be absolute, it has to apply to every mind that has volition.

(Incidentally, if you've been reading claims from Clippy that humane and paperclip-maximizing moralities are essentially compatible, then you should realize that e may have ulterior motives and may be arguing disingenuously. Sorry, Clippy.)

That is why I evaluate arguments based on things other than someone's ulterior motives.

draq00

If you read a physics or chemistry textbook, you'll find a lot of words and only a few equations, whereas a mathematics textbook has far more equations, and its words exist to explain the equations. The words in a physics book explain not only the equations but also the phenomena the equations describe.

However, I haven't fully thought about reductionism, so do you have any recommendations on what I should read?

My current two objections:

1. Computational

According to our current physical theories, it is computationally intractable to predict the behaviour of any system larger than a dozen atoms; see Walter Kohn's Nobel Lecture. We could eventually have a completely new theory, but that would be an optimistic hope.
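A rough way to see the computational objection (my own sketch, not taken from Kohn's lecture): the joint quantum state space grows exponentially with particle count, so exact simulation of even a dozen atoms quickly becomes intractable. The two-basis-states-per-particle simplification below is an assumption made for illustration.

```python
def state_space_dim(n_particles, d=2):
    """Dimension of the joint state space: d ** n_particles.

    Assumes d basis states per particle (d=2 is the simplest case,
    e.g. a spin-1/2 degree of freedom per particle).
    """
    return d ** n_particles

for n in (1, 12, 100):
    print(n, state_space_dim(n))
# Already at n=12 the dimension is 4096; at n=100 it is 2**100,
# far beyond what any exact computation can enumerate.
```

The exponential blow-up, not any missing physical law, is what makes the "weak" reduction of chemistry to physics a practical rather than an in-principle matter.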

2. Ontological

Physical objects have other qualities than mathematical objects. And values have other qualities than physical objects. Further elaboration needed.

It shouldn't, because this is a straw man, not the argument that leads us to conclude that there isn't a single absolute morality.

It is not a straw man, because I am not attacking any position. As I said, I think I was misunderstood.
