Comment author: Amanojack 04 February 2012 09:12:08AM 1 point [-]

The question of "dualism" isn't even a real question. Science tells us that a certain wavelength of light will appear to us as green. But what really is the point of knowing that? Well, it gives us a set of instructions for how to make us experience green. But the instructions for how to produce the subjective experience are not themselves the experience. The notion that if we could just figure out how to make people experience green through some manipulation we will have learned something amazing is silly. We can already do that by showing a green flag or telling someone not to think of a green rabbit.

Comment author: CuSithBell 16 June 2011 05:05:45PM 9 points [-]

Consider the package deal to include getting your brain rewired so that you would receive pleasure from the end of mankind. Now do you choose the package deal?

I wouldn't. Can you explain to me why I wouldn't, if you believe the only thing I can want is pleasure?

Maybe you're hyperbolically discounting that future pleasure and it's outweighed by the temporary displeasure caused by agreeing to something abhorrent? ;)

Comment author: Amanojack 10 August 2011 03:05:22AM 1 point [-]

Plus we have a hard time conceiving of what it would be like to always be in a state of maximal, beyond-orgasmic pleasure.

When I imagine it I cannot help but let a little bit of revulsion, fear, and emptiness creep into the feeling - which of course would not actually be there. This invalidates the whole thought experiment for me, because it's clear I'm unable to perform it correctly, and I doubt I'm uncommon in that regard.

Comment author: Peterdjones 26 May 2011 12:57:52AM 0 points [-]

There's a difference between disagreeing with something and not knowing what it means, and I do seriously not know what you mean. I'm not sure why you would think it is veiled disagreement, seeing as lukeprog's whole post was making this very same point about incoherence. (But incoherence also only has meaning in the sense of "incoherent to me" or someone else,

"incoherence" means several things. Some of them, such a self-contradiction are as objective as anything. You seem to find morality meaningless in some personal sense. Looking at dictionaries doesn't seem to work for you. Dictionaries tend to define the moral as the good.It is hard to believe that anyone can grow up not hearing the word "good" used a lot, unless they were raised by wolves. So that's why I see complaints of incoherence as being disguised disagreement.

At bottom, I act to get enjoyment and/or avoid pain, that is, to win.

If you say so. That doesn't make morality false, meaningless or subjective. It makes you an amoral hedonist.

But it seems to me that just because what I want can be influenced by what could be called objective or factual beliefs doesn't make my want for deliciousness "uninfluenced by personal feelings."

Perhaps not completely, but that still leaves some things as relatively more objective than others.

In summary, value/preferences can either be defined to include (1) only personal feelings (though they may be universal or semi-universal), or to also include (2) beliefs about what would or wouldn't lead to such personal feelings. I can see how you mean that 2 could be objective, and then would want to call them thus "objective values." But not for 1, because personal feelings are, well, personal.

Then your categories aren't exhaustive, because preferences can also be defined to include universalisable values alongside personal whims. You may be making the classic error of taking "subjective" to mean "believed by a subject".

Comment author: Amanojack 26 May 2011 01:15:49AM -1 points [-]

Dictionaries tend to define the moral as the good. It is hard to believe that anyone can grow up not hearing the word "good" used a lot, unless they were raised by wolves

The problem isn't that I don't know what it means. The problem is that it means many different things and I don't know which of those you mean by it.

an amoral hedonist

I have moral sentiments (empathy, a sense of justice, indignation, etc.), so I'm not amoral. And I don't have a particularly high time-preference, so I'm not a hedonist.

preferences can also be defined to include universalisable values alongside personal whims

If you mean preferences that everyone else shares, sure, but there's no stipulation in my definitions that other people can't share the preferences. In fact, I said, "(though they may be universal or semi-universal)."

You may be making the classic error of taking "subjective" to mean "believed by a subject"

It'd be a "classic error" to assume you meant one definition of subjective rather than another, when you haven't supplied one yourself? This is about the eighth time in this discussion that I've thought that I can't imagine what you think language even is.

I doubt we have any real disagreement, to be honest. I think we just view language radically differently. (You could say we have a disagreement about language.)

Torture Simulated with Flipbooks

9 Amanojack 26 May 2011 01:00AM

What if the brain of the person you most care about were scanned and the entirety of that person's mind and utility function at this moment were printed out on paper, and then several more "clock ticks" of their mind as its states changed exactly as they would if the person were being horribly tortured were printed out as well, into a gigantic book? And then the book were flipped through, over and over again. Fl-l-l-l-liiiiip! Fl-l-l-l-liiiiip!

Would this count as simulated torture? If so, would you care about stopping it, or is it different from computer-simulated torture?

Comment author: Peterdjones 25 May 2011 11:25:29PM 0 points [-]

As for this collectivism, though, I don't go for it. There is no way to know another's utility function, no way to compare utility functions among people, etc. other than subjectively.

That's very contestable. It has frequently been argued here that preferences can be inferred from behaviour; it's also been argued that introspection (if that is what you mean by "subjectively") is not a reliable guide to motivation.

Comment author: Amanojack 26 May 2011 12:42:29AM 0 points [-]

This is the whole demonstrated preference thing. I don't buy it myself, but that's a debate for another time. What I mean by subjectively is that I will value one person's life more than another person's life, or I could think that I want that $1,000,000 more than a rich person wants it, but that's just all in my head. To compare utility functions and work from demonstrated preference usually - not always - is a precursor to some kind of authoritarian scheme. I can't say there is anything like that coming, but it does set off some alarm bells. Anyway, this is not something I can substantiate right now.

Comment author: ArisKatsaris 26 May 2011 12:25:29AM 0 points [-]

I'm getting a bad vibe here, and no longer feel we're having the same conversation

"Person or group that decides"? Who said anything about anyone deciding anything? And my point was that this perhaps this is the meta-ethical position that every rational agent individually converges to. So nobody "decides", or everyone does. And if they don't reach the same decision, then there's no single objective morality -- but even i so perhaps there's a limited set of coherent metaethical positions, like two or three of them.

I personally think all this collectivism is a carryover from the idea of (collective) democracy and other silly ideas.

I think my post was inspired more by TDT solutions to the Prisoner's Dilemma and Newcomb's problem, a decision theory that takes into account the copies/simulations of its own self, or other problems that involve humans getting copied and needing to make a decision in blind coordination with their copies.

I imagined systems that are not wholly copied, but rather just the module that determines the meta-ethical constraints, and tried to figure out in which directions such systems would try to modify themselves, knowing that other such systems would similarly modify themselves.

Comment author: Amanojack 26 May 2011 12:37:48AM 0 points [-]

You're right, I think I'm confused about what you were talking about, or I inferred too much. I'm not really following at this point either.

One thing, though, is that you're using meta-ethics to mean ethics. Meta-ethics is basically the study of what people mean by moral language, like whether ought is interpreted as a command, as God's will, as a way to get along with others, etc. That'll tend to cause some confusion. A good heuristic is, "Ethics is about what people ought to do, whereas meta-ethics is about what ought means (or what people intend by it)."

Comment author: Peterdjones 25 May 2011 11:15:04PM *  1 point [-]

I'll just decide not to follow the advice, or I'll try it out and then after experiencing pain I will decide not to follow the advice again. I might tell you that, too, but I don't need to use the word "true" or any equivalent to do that. I can just say it didn't work.

Any word can be eliminated in favour of a definition or paraphrase. Not coming out with an equivalent -- showing that you have dispensed with the concept -- is harder. Why didn't it work? You're going to have to paraphrase "Because it wasn't true" or refuse to answer.

Comment author: Amanojack 26 May 2011 12:29:29AM *  -1 points [-]

The concept of truth is for utility, not utility for truth. To get them backwards is to merely be confused by the words themselves. It's impossible to show you've dispensed with any concept, except to show that it isn't useful for what you're doing. That is what I've done. I'm non-cognitive to God, truth, and objective value (except as recently defined). Usually they all sound like religion, though they all are or were at one time useful approximate means of expressing things in English.

Comment author: TimFreeman 25 May 2011 05:30:46PM 0 points [-]

What about beliefs being justified by non-beliefs? If you're a traditional foundationalist, you think everything is ultimately grounded in sense-experience, about which we cannot reasonably doubt.

If a traditional foundationalist believes that beliefs are justified by sense-experience, he's a justificationalist. The argument in the OP works. How can he justify the belief that beliefs are justified by sense-experience without first assuming his conclusion?

Also, what about externalism? This is one of the major elements of modern epistemology, as a response to such skeptical arguments.

I had to look it up. It is apparently the position that the mind is a result of both what is going on inside the subject and outside the subject. Some of them seem to be concerned about what beliefs mean, and others seem to carefully avoid using the word "belief". In the OP I was more interested in whether the beliefs accurately predict sensory experience. So far as I can tell, externalism says we don't have a mind that can be considered as a separate object, so we don't know things, so I expect it to have little to say about how we know what we know. Can you explain why you brought it up?

I don't mean to imply that either of these is correct, but it seems that if one is going to attempt to use disjunctive syllogism to argue for anti-justificationism, you ought to be sure you've partitioned the space of reasonable theories.

I don't see any way to be sure of that. Maybe some teenage boy sitting alone in his bedroom in Iowa figured out something new half an hour ago; I would have no way to know. Given the text above, do you think there are alternatives that are not covered?

Perhaps it is so structured that it is invulnerable to being changed after it is adopted, regardless of the evidence observed.

This example seems anomalous. If there exists some H such that, if P(H) > 0.9, you lose the ability to choose P(H), you might want to postpone believing in it for prudent reasons. But these don’t really bear on what the epistemically rational level of belief is (Assuming remaining epistemically rational is not part of formal epistemic rationality).

Furthermore, if you adopted a policy of never raising P(H) above 0.9, it'd be just like you were stuck with P(H) < 0.9!

The point is that if a belief will prevent you from considering alternatives, that is a true and relevant statement about the belief that you should know when choosing whether to adopt it. The point is not that you shouldn't adopt it. Bayes' rule is probably one of those beliefs, for example.

Without a constraining external metric, there are many consistent sets [of preferences], and the only criticism you can ultimately bring to bear is one of inconsistency.

I presently believe there are many consistent sets of preferences, and maybe you do too. If that's true, we should find a way to live with it, and the OP is proposing such a way.

I don't know what the word "ultimately" means there. If I leave it out, your statement is obviously false -- I listed a bunch of criticisms of preferences in the OP. What did you mean?

Comment author: Amanojack 25 May 2011 11:09:22PM 0 points [-]

How can he justify the belief that beliefs are justified by sense-experience without first assuming his conclusion?

I don't know what exactly "justify" is supposed to mean, but I'll interpret it as "show to be useful for helping me win." In that case, it's simply that certain types of sense-experience seem to have been a reliable guide for my actions in the past, for helping me win. That's all.

To think of it in terms of assumptions and conclusions is to stay in the world of true/false or justified/unjustified, where we can only go in circles because we are putting the cart before the horse. The verbal concepts of "true" and "justified" probably originated as a way to help people win, not as ends to be pursued for their own sake. But since they were almost always correlated with winning, they became ends pursued for their own sake - essential ones! In the end, if you dissolve "truth" it just ends up meaning something like "seemingly reliable guidepost for my actions."

Comment author: endoself 25 May 2011 07:44:12PM 1 point [-]

Are you losing sleep over the daily deaths in Iraq? Are most LWers? . . . If we cared as much as we signal we do, no one would be able go to work, or post on LW. We'd all be too grief-stricken.

That is exactly what I was talking about when I said "There's a difference between mental distress and action-motivating desire." Utility functions are about choices, not feelings, so I assumed that, in a discussion about utility, we would be using the word 'care' (as in "If we cared as much as we signal we do") to refer to motives for action, not mental distress. If this isn't clear, I'm trying to refer to the same ideas discussed here.

And it also isn't immediately clear that anyone would really want their utility function to be unbounded (unless I'm misinterpreting the term).

It does not make sense to speak of what someone wants their utility function to be; utility functions just describe actual preferences. Someone's utility function is unbounded if and only if there are consequences with arbitrarily high utility differences: for every consequence, you can identify one that is over twice as good, relative to some zero point. (The zero point can be chosen arbitrarily; this doesn't really matter if you're not familiar with the topic. It just corresponds to the fact that if every consequence were 1 utilon better, you would make the same choices, because relative utilities would not have changed.) Whether a utility function has this property is important in many circumstances, and I consider it an open problem whether humans' utility functions are unbounded, though some would probably disagree and I don't know what science doesn't know.
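The shift-invariance point in the parenthetical can be sketched in a few lines (the utility numbers here are hypothetical, purely for illustration):

```python
# A hedged sketch with made-up utility numbers: adding the same constant
# to every consequence's utility changes nothing about which option gets
# chosen, because choices depend only on relative utilities.
utilities = {"A": 3.0, "B": 5.0, "C": 1.0}  # hypothetical values

def best_choice(u):
    """Pick the option with the highest utility."""
    return max(u, key=u.get)

# Make every consequence 1 "utilon" better.
shifted = {k: v + 1.0 for k, v in utilities.items()}

# The decision is unchanged in both cases.
assert best_choice(utilities) == best_choice(shifted)
```

The same invariance holds for multiplying all utilities by a positive constant, which is why utility functions are standardly said to be defined only up to positive affine transformation.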

Comment author: Amanojack 25 May 2011 10:52:21PM 0 points [-]

Is this basically saying that you can tell someone else's utility function by demonstrated preference? It sounds a lot like that.

Comment author: TimFreeman 25 May 2011 05:41:07PM 0 points [-]

However, if my seeing one black swan doesn't justify my belief that there is at least one black swan, how can I refute "all swans are white"?

Refuting something is justifying that it is false. The point of the OP is that you can't justify anything, so it's claiming that you can't refute "all swans are white". A black swan is simply a criticism of the statement "all swans are white". You still have a choice -- you can see the black swan and reject "all swans are white", or you can quibble with the evidence in a large number of ways which I'm sure you know of too and keep on believing "all swans are white". People really do that; searching Google for "Rapture schedule" will pull up a prominent and current example.

Comment author: Amanojack 25 May 2011 10:46:53PM *  0 points [-]

Why not just phrase it in terms of utility? "Justification" can mean too many different things.

Seeing a black swan diminishes (and for certain applications, destroys) the usefulness of the belief that all swans are white. This seems a lot simpler.

Putting it in terms of beliefs paying rent in anticipated experiences, the belief "all swans are white" told me to anticipate that if I knew there was a black animal perched on my shoulder, it could not be a swan. Now that belief isn't as reliable a guidepost. If black swans are really rare I could probably get by with it for most applications and still use it to win at life most of the time, but in some cases it will steer me wrong - that is, cause me to lose.
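Put in Bayesian terms (with made-up prior numbers, just to illustrate the point about the evidence being decisive unless you quibble with it):

```python
# Hedged sketch, hypothetical probabilities: how one confirmed black-swan
# observation affects H = "all swans are white" under Bayes' rule.
prior_H = 0.9
# If H is true, a genuine black swan is impossible -- the only escape
# is to "quibble with the evidence" (misidentified bird, dyed feathers).
p_obs_given_H = 0.0
p_obs_given_not_H = 0.5  # assumed rate of seeing one if non-white swans exist

posterior_H = (p_obs_given_H * prior_H) / (
    p_obs_given_H * prior_H + p_obs_given_not_H * (1 - prior_H)
)

assert posterior_H == 0.0  # the belief stops paying rent entirely
```

A single zero-likelihood observation drives the posterior to zero no matter how high the prior was, which is the "destroys the usefulness" case; treating the observation as merely improbable rather than impossible gives the softer "diminishes" case.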

So can't this all be better phrased in more established LW terms?
