Comment author: Peterdjones 25 May 2011 11:25:29PM 0 points [-]

As for this collectivism, though, I don't go for it. There is no way to know another's utility function, no way to compare utility functions among people, etc. other than subjectively.

That's very contestable. It has frequently been argued here that preferences can be inferred from behaviour; it's also been argued that introspection (if that is what you mean by "subjectively") is not a reliable guide to motivation.

Comment author: Amanojack 26 May 2011 12:42:29AM 0 points [-]

This is the whole demonstrated preference thing. I don't buy it myself, but that's a debate for another time. What I mean by subjectively is that I will value one person's life more than another person's life, or I could think that I want that $1,000,000 more than a rich person wants it, but that's all just in my head. Comparing utility functions and working from demonstrated preference is usually - not always - a precursor to some kind of authoritarian scheme. I can't say there is anything like that coming, but it does set off some alarm bells. Anyway, this is not something I can substantiate right now.

Comment author: ArisKatsaris 26 May 2011 12:25:29AM 0 points [-]

I'm getting a bad vibe here, and no longer feel we're having the same conversation.

"Person or group that decides"? Who said anything about anyone deciding anything? And my point was that this perhaps this is the meta-ethical position that every rational agent individually converges to. So nobody "decides", or everyone does. And if they don't reach the same decision, then there's no single objective morality -- but even i so perhaps there's a limited set of coherent metaethical positions, like two or three of them.

I personally think all this collectivism is a carryover from the idea of (collective) democracy and other silly ideas.

I think my post was inspired more by TDT solutions to the Prisoner's Dilemma and Newcomb's problem, a decision theory that takes into account the copies/simulations of itself, or other problems that involve humans getting copied and needing to make decisions in blind coordination with their copies.

I imagined systems that are not wholly copied, but rather copied only in the module that determines the meta-ethical constraints, and tried to figure out in which directions such systems would try to modify themselves, in the knowledge that other such systems would similarly modify themselves.

Comment author: Amanojack 26 May 2011 12:37:48AM 0 points [-]

You're right, I think I'm confused about what you were talking about, or I inferred too much. I'm not really following at this point either.

One thing, though, is that you're using meta-ethics to mean ethics. Meta-ethics is basically the study of what people mean by moral language, like whether ought is interpreted as a command, as God's will, as a way to get along with others, etc. That'll tend to cause some confusion. A good heuristic is, "Ethics is about what people ought to do, whereas meta-ethics is about what ought means (or what people intend by it)."

Comment author: Peterdjones 25 May 2011 11:15:04PM *  1 point [-]

I'll just decide not to follow the advice, or I'll try it out and then after experiencing pain I will decide not to follow the advice again. I might tell you that, too, but I don't need to use the word "true" or any equivalent to do that. I can just say it didn't work.

Any word can be eliminated in favour of a definition or paraphrase. Not coming out with an equivalent -- showing that you have dispensed with the concept -- is harder. Why didn't it work? You're going to have to paraphrase "Because it wasn't true" or refuse to answer.

Comment author: Amanojack 26 May 2011 12:29:29AM *  -1 points [-]

The concept of truth is for utility, not utility for truth. To get them backwards is to merely be confused by the words themselves. It's impossible to show you've dispensed with any concept, except to show that it isn't useful for what you're doing. That is what I've done. I'm non-cognitive to God, truth, and objective value (except as recently defined). Usually they all sound like religion, though they all are or were at one time useful approximate means of expressing things in English.

Comment author: TimFreeman 25 May 2011 05:30:46PM 0 points [-]

What about beliefs being justified by non-beliefs? If you're a traditional foundationalist, you think everything is ultimately grounded in sense-experience, which we cannot reasonably doubt.

If a traditional foundationalist believes that beliefs are justified by sense-experience, he's a justificationist. The argument in the OP works. How can he justify the belief that beliefs are justified by sense-experience without first assuming his conclusion?

Also, what about externalism? This is one of the major elements of modern epistemology, as a response to such skeptical arguments.

I had to look it up. It is apparently the position that the mind is a result of both what is going on inside the subject and what is going on outside the subject. Some externalists seem to be concerned with what beliefs mean, and others seem to carefully avoid using the word "belief". In the OP I was more interested in whether beliefs accurately predict sensory experience. So far as I can tell, externalism says we don't have a mind that can be considered as a separate object, and therefore we don't know things, so I expect it to have little to say about how we know what we know. Can you explain why you brought it up?

I don't mean to imply that either of these is correct, but it seems that if one is going to attempt to use disjunctive syllogism to argue for anti-justificationism, you ought to be sure you've partitioned the space of reasonable theories.

I don't see any way to be sure of that. Maybe some teenage boy sitting alone in his bedroom in Iowa figured out something new half an hour ago; I would have no way to know. Given the text above, do you think there are alternatives that are not covered?

Perhaps it is so structured that it is invulnerable to being changed after it is adopted, regardless of the evidence observed.

This example seems anomalous. If there exists some H such that, once P(H) > 0.9, you lose the ability to choose P(H), you might want to postpone believing it for prudential reasons. But those reasons don't really bear on what the epistemically rational level of belief is (assuming that remaining epistemically rational is not itself part of formal epistemic rationality).

Furthermore, if you adopted a policy of never raising P(H) above 0.9, it'd be just like you were stuck with P(H) < 0.9!

The point is that if a belief will prevent you from considering alternatives, that is a true and relevant statement about the belief that you should know when choosing whether to adopt it. The point is not that you shouldn't adopt it. Bayes' rule is probably one of those beliefs, for example.

Without a constraining external metric, there are many consistent sets [of preferences], and the only criticism you can ultimately bring to bear is one of inconsistency.

I presently believe there are many consistent sets of preferences, and maybe you do too. If that's true, we should find a way to live with it, and the OP is proposing such a way.

I don't know what the word "ultimately" means there. If I leave it out, your statement is obviously false -- I listed a bunch of criticisms of preferences in the OP. What did you mean?

Comment author: Amanojack 25 May 2011 11:09:22PM 0 points [-]

How can he justify the belief that beliefs are justified by sense-experience without first assuming his conclusion?

I don't know what exactly "justify" is supposed to mean, but I'll interpret it as "show to be useful for helping me win." In that case, it's simply that certain types of sense-experience seem to have been a reliable guide for my actions in the past, for helping me win. That's all.

To think of it in terms of assumptions and conclusions is to stay in the world of true/false or justified/unjustified, where we can only go in circles because we are putting the cart before the horse. The verbal concepts of "true" and "justified" probably originated as a way to help people win, not as ends to be pursued for their own sake. But since they were almost always correlated with winning, they became ends pursued for their own sake - essential ones! In the end, if you dissolve "truth" it just ends up meaning something like "seemingly reliable guidepost for my actions."

Comment author: endoself 25 May 2011 07:44:12PM 1 point [-]

Are you losing sleep over the daily deaths in Iraq? Are most LWers? . . . If we cared as much as we signal we do, no one would be able to go to work, or post on LW. We'd all be too grief-stricken.

That is exactly what I was talking about when I said "There's a difference between mental distress and action-motivating desire." Utility functions are about choices, not feelings, so I assumed that, in a discussion about utility, we would be using the word 'care' (as in "If we cared as much as we signal we do") to refer to motives for action, not mental distress. If this isn't clear, I'm trying to refer to the same ideas discussed here.

And it also isn't immediately clear that anyone would really want their utility function to be unbounded (unless I'm misinterpreting the term).

It does not make sense to speak of what someone wants their utility function to be; utility functions just describe actual preferences. Someone's utility function is unbounded if and only if there are consequences with arbitrarily high utility differences: for every consequence, you can identify one that is more than twice as good, relative to some zero point, which can be chosen arbitrarily. (The arbitrariness doesn't really matter if you're not familiar with the topic; it just corresponds to the fact that if every consequence were 1 utilon better, you would make the same choices, because relative utilities would not have changed.) Whether a utility function has this property is important in many circumstances, and I consider it an open problem whether humans' utility functions are unbounded, though some would probably disagree, and I don't know what science doesn't know.
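To make the invariance point concrete, here's a toy sketch (my own illustration; the options and numbers are made up). An agent that simply maximizes utility makes the same choice under any positive affine transform of its utility function, which is why only relative utilities matter:

    # Choices depend only on relative utilities: any transform
    # u' = a*u + b with a > 0 leaves the chosen option unchanged.
    def choose(options, utility):
        return max(options, key=utility)

    utilities = {"walk": 1.0, "bike": 3.0, "drive": 2.0}

    original = choose(utilities, lambda o: utilities[o])
    shifted = choose(utilities, lambda o: utilities[o] + 1.0)   # every consequence 1 utilon better
    scaled = choose(utilities, lambda o: 5.0 * utilities[o] - 7.0)

    assert original == shifted == scaled == "bike"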

Comment author: Amanojack 25 May 2011 10:52:21PM 0 points [-]

Is this basically saying that you can tell someone else's utility function by demonstrated preference? It sounds a lot like that.

Comment author: TimFreeman 25 May 2011 05:41:07PM 0 points [-]

However, if my seeing one black swan doesn't justify my belief that there is at least one black swan, how can I refute "all swans are white"?

Refuting something is justifying that it is false. The point of the OP is that you can't justify anything, so it's claiming that you can't refute "all swans are white". A black swan is simply a criticism of the statement "all swans are white". You still have a choice -- you can see the black swan and reject "all swans are white", or you can quibble with the evidence in a large number of ways which I'm sure you know of too and keep on believing "all swans are white". People really do that; searching Google for "Rapture schedule" will pull up a prominent and current example.

Comment author: Amanojack 25 May 2011 10:46:53PM *  0 points [-]

Why not just phrase it in terms of utility? "Justification" can mean too many different things.

Seeing a black swan diminishes (and for certain applications, destroys) the usefulness of the belief that all swans are white. This seems a lot simpler.

Putting it in terms of beliefs paying rent in anticipated experiences, the belief "all swans are white" told me to anticipate that if I knew there was a black animal perched on my shoulder, it could not be a swan. Now that belief isn't as reliable a guidepost. If black swans are really rare I could probably get by with it for most applications and still use it to win at life most of the time, but in some cases it will steer me wrong - that is, cause me to lose.
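To put a toy number on "isn't as reliable": here's a minimal Bayesian sketch (my own illustration; the priors are made up). Because "all swans are white" assigns zero probability to seeing a black swan, a single sighting wipes out whatever credence the belief had:

    # One observation can destroy a belief's predictive usefulness:
    # P(black swan sighting | all swans white) = 0, so the posterior is 0.
    prior = 0.95                  # P(all swans are white), made up
    p_obs_if_true = 0.0           # the hypothesis forbids black swans
    p_obs_if_false = 0.01         # black swans are rare even if they exist

    evidence = prior * p_obs_if_true + (1 - prior) * p_obs_if_false
    posterior = prior * p_obs_if_true / evidence

    print(posterior)              # 0.0 -- the belief no longer pays rent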

So can't this all be better phrased in more established LW terms?

Comment author: [deleted] 25 May 2011 09:40:39PM 2 points [-]

Suppose you think that 3+4=6.

I offer you the following deal: give me $3 today and $4 tomorrow, and I will give you a 50 cent profit the day after tomorrow, by returning to you $6.50. You can take as much advantage of this as you want. In fact, if you like, you can give me $3 this second, $4 in one second, and in the following second I will give you back all your money plus 50 cents profit - that is, I will give you $6.50 in two seconds.

Since you think that 3+4=6, you will jump at this amazing deal.

In response to comment by [deleted] on Conceptual Analysis and Moral Theory
Comment author: Amanojack 25 May 2011 10:34:57PM 0 points [-]

I agree with this, if that makes any difference.

Comment author: Peterdjones 25 May 2011 09:06:03PM *  0 points [-]

I meant the second part: "but when you really drill down there are only beliefs that predict my experience more reliably or less reliably". How do you know that?

That's what I was responding to.

It is not the case that all beliefs can do is predict experience based on existing preferences. Beliefs can also set and modify preferences. I have given that counterargument several times.

Zorg: And of what pan-galactic value are your objective values? Pan-galactic value is the ultimate value, dontcha know.

I think moral values are ultimate because I can't think of a valid argument of the form "I should do <immoral thing> because <excuse>". Please give an example of a pan-galactic value that can be substituted for <excuse>.

You just eliminated it: If to assert P is to assert "P is true," then to assert "P is true" is to assert P. We could go back and forth like this for hours.

Yeah, but it still comes back to truth. If I tell you it will increase your happiness to hit yourself on the head with a hammer, your response is going to have to amount to "no, that's not true".

Dictionary says, [objective] "Not influenced by personal feelings, interpretations, or prejudice; based on facts; unbiased."

How can a value be objective?

By being (relatively) uninfluenced by personal feelings, interpretations, or prejudice; based on facts; unbiased.

Especially since a value is a personal feeling.

You haven't remotely established that as an identity. It is true that some people some of the time arrive at values through feelings. Others arrive at them (or revise them) through facts and thinking.

you are defining "value" differently, how?

"Values can be defined as broad preferences concerning appropriate courses of action or outcomes"

Comment author: Amanojack 25 May 2011 10:31:49PM 0 points [-]

I missed this:

If I tell you it will increase your happiness to hit yourself on the head with a hammer, your response is going to have to amount to "no, that's not true".

I'll just decide not to follow the advice, or I'll try it out and then after experiencing pain I will decide not to follow the advice again. I might tell you that, too, but I don't need to use the word "true" or any equivalent to do that. I can just say it didn't work.

Comment author: Peterdjones 25 May 2011 09:40:44PM 1 point [-]

You still don't have a good argument to the effect that no one cares about truth per se.

Comment author: Amanojack 25 May 2011 10:20:59PM 0 points [-]

A lot of people care about truth, even when (I suspect) they diminish their enjoyment needlessly by doing so, so no argument there. In the parent I'm just continuing to try to explain why my stance might sound weird. My point from farther above, though, is just that I don't/wouldn't care about "truth" in those rare and odd cases where it is already part of the premises that truth or falsehood will not affect me in any way.

Comment author: ArisKatsaris 25 May 2011 09:37:35PM 0 points [-]

It's called that too. Are you just objecting as to what we are calling it?

Comment author: Amanojack 25 May 2011 10:17:13PM *  0 points [-]

Yeah, because calling it that makes it pretty hard to understand. If you just mean Collective Greatest Happiness Utilitarianism, then that would be a good name. Objective morality can mean way too many different things. This way at least you're saying in what sense it's supposed to be objective.

As for this collectivism, though, I don't go for it. There is no way to know another's utility function, no way to compare utility functions among people, etc. other than subjectively. And who's going to be the person or group that decides? SIAI? I personally think all this collectivism is a carryover from the idea of (collective) democracy and other silly ideas. But that's a debate for another day.
