Comment author: Tom_Breton 03 January 2008 07:58:14PM 0 points

That's true, Benquo.

Comment author: Tom_Breton 03 January 2008 04:25:16AM 8 points

"How many legs does a dog have, if you call a tail a leg?
Four. Calling a tail a leg doesn't make it a leg." -- Abraham Lincoln

This is the sort of quip that gives the speaker a cheap thrill of superiority, but underneath it is just a trick.

In this case, the trick is that Lincoln (or whoever the real author is) has confused de dicto and de re. That is, he conflates assertions that are to be understood inside a quote-like context with assertions that are to be understood outside it; here, the quote-like context is the provision that we shall call a dog's tail a leg. He uses that conflation to commit a fallacy of ambiguity: there is an undistributed middle term lurking in there, a modal operator that appears twice and needs to have the same semantics both times, but doesn't.

So I don't think this particular quote is a good illustration of "the map is not the territory". There's nothing about General Semantics that forbids agreeing on or using some labelling scheme, even a variant one. The idea of GS is "the map is not the territory", not "use no maps" or "use no non-standard maps".

Comment author: Tom_Breton 17 December 2007 03:56:42AM 0 points

...there really is some good stuff in there.

My advice would be to read Reasons and Persons (by Derek Parfit) and The Methods of Ethics (by Henry Sidgwick).

Looked up both. Two bum steers. Sidgwick is mostly interested in naming and taxonomizing ethical positions, and Parfit is just wrong.

Comment author: Tom_Breton 11 November 2007 08:12:26PM 0 points

The atoms of a screwdriver don't have tiny little XML tags inside describing their "objective" purpose. The designer had something in mind, yes, but that's not the same as what happens in the real world. If you forgot that the designer is a separate entity from the designed thing, you might think, "The purpose of the screwdriver is to drive screws" - as though this were an explicit property of the screwdriver itself, rather than a property of the designer's state of mind. You might be surprised that the screwdriver didn't reconfigure itself to the flat-head screw, since, after all, the screwdriver's purpose is to turn screws.

This is the distinction Daniel Dennett makes between the intentional stance and the design stance. I consider it a useful one. He also distinguishes the physical stance, which you touch on.

Comment author: Tom_Breton 01 November 2007 12:05:00AM 0 points

Tom, if having an upper limit on disutility(Specks) that's lower than disutility(Torture*1) is begging the question in favour of SPECKS then why isn't *not* having such an upper limit begging the question in favour of TORTURE?

It should be obvious why: the constraint in the first one is neither argued for nor agreed on, and it by itself entails the conclusion being argued for. There's no such element in the second.

Comment author: Tom_Breton 31 October 2007 09:39:00PM 0 points

@Neel.

Then I only need to make the condition slightly stronger: "Any slight tendency of aggregation that doesn't beg the question." I.e., one that doesn't place a mathematical upper limit on disutility(Specks) that is lower than disutility(Torture=1). I trust you can see how that would be simply begging the question. Your formulation:

D(Torture, Specks) = [10 * (Torture/(Torture + 1))] + (Specks/(Specks + 1))

...doesn't meet this test.
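
To see this concretely: a minimal Python sketch, just plugging numbers into the formula as you wrote it (the particular Specks count below is my own stand-in, since 3^^^3 is far too large to write down), shows the specks term can never reach the contribution of even a single torture.

    # Neel's proposed disutility function, exactly as written above.
    def D(torture, specks):
        return 10 * (torture / (torture + 1)) + specks / (specks + 1)

    print(D(1, 0))        # one torture, no specks: 5.0
    print(D(0, 3 ** 27))  # no torture, 7,625,597,484,987 specks: just under 1.0
    # The specks term is bounded above by 1 no matter how large Specks grows,
    # while a single torture already contributes 5. The formula builds in an
    # upper limit on disutility(Specks) that is below disutility(Torture=1),
    # which is exactly the question-begging constraint at issue.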

Contrary to what you think, it doesn't require unbounded utility; limiting the lower bound of the range to (say) 2 * disutility(Torture) will suffice. The rest of your message assumes that it does require unboundedness.

For completeness, I note that introducing numbers comparable to 3^^^3 in an attempt to undo the 3^^^3 scaling would cause a formulation to fail the "slight" condition, modest though it is.

Comment author: Tom_Breton 31 October 2007 08:00:00PM 5 points

It's truly amazing what contortions many people have gone through rather than appear to endorse torture. I see many attempts to redefine the question, categorical answers that basically ignore the scalar, and what Eliezer called "motivated continuation".

One type of dodge in particular caught my attention. Paul Gowder phrased it most clearly, so I'll use his text for reference:

...depends on the following three claims:

a) you can unproblematically aggregate pleasure and pain across time, space, and individuality,

"Unproblematically" vastly overstates what is required here. The question doesn't require unproblematic aggregation; any slight tendency of aggregation will do just fine. We could stipulate that pain aggregates as the hundredth root of N and the question would still have the same answer. That is an insanely modest assumption, ie that it takes 2^100 people having a dust mote before we can be sure there is twice as much suffering as for one person having a dust mote.

"b" is actually inapplicable to the stated question and it's "a" again anyways - just add "type" or "mode" to the second conjunction in "a".

c) it is a moral fact that we ought to select the world with more pleasure and less pain.

I see only three possibilities for challenging this, none of which affects the question at hand.

  • Favor a desideratum that roughly aligns with "pleasure" but not quite, such as "health". Not a problem.
  • Focus on some special situation where paining others is arguably desirable, such as deterrence, "negative reinforcement", or retributive justice. ISTM that's already been idealized away in the question formulation.
  • Just don't care about others' utility, e.g. Rand-style selfishness.

In response to A Priori
Comment author: Tom_Breton 19 October 2007 03:16:00AM 0 points

In a comment on "How to convince me that 2+2=3", I pointed out that the study of necessary truths is not the same as the possession of necessary truths (credit to David Deutsch for that important insight). Unfortunately, the discussion here seems to have gotten hung up on a philosophical formulation, "a priori", that blurs that important distinction. Eliezer's quotative paragraph illustrates the problem:

The Internet Encyclopedia of Philosophy defines "a priori" propositions as those knowable independently of experience. Wikipedia quotes Hume: Relations of ideas are "discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe." You can see that 1 + 1 = 2 just by thinking about it, without looking at apples.

All of these definitions seem to assume there is no distinction between the existence of necessary truths and knowing necessary truths (more correctly, justifiably assigning extremely high probability to them). But there are necessary truths that are not knowable by any means we have or expect to have. E.g., the digits of Gregory Chaitin's Omega constant beyond the first few. Omega is the probability that a random Turing machine will halt. Whatever value it has, it has necessarily.

(One might say more charitably that these definitions are only categorizing knowledge and say nothing about non-knowledge. If so, they mislead, and they also make a subtler mistake: necessary truths are not a special type of knowledge, they are a topic of knowledge.)

One can understand why the mistake is made. Epistemology, the branch of philosophy about how we know what we know, is not above looking for a way to assign untouchable status to what seems to be its most certain knowledge.

Comment author: Tom_Breton 01 October 2007 10:15:29PM 0 points

G, you're raising points that I already answered.

Comment author: Tom_Breton 01 October 2007 03:13:32AM 0 points

I don't believe this is exactly correct. After all, when you're just about to start listening to the clever arguer, do you really believe that box B is almost certain not to contain the diamond?

Where do you get that A is "almost certain" from? I just said the prior probability of B was "low". I don't think that's a reasonable restatement of what I said.

Your actual probability starts out at 0.5, rises steadily as the clever arguer talks (starting with his very first point, because that excludes the possibility he has 0 points), and then suddenly drops precipitously as soon as he says "Therefore..." (because that excludes the possibility he has more points).

It doesn't seem to me that excluding the possibility that he has more points should have that effect.

Consider the case where the clever arguer (CA) is artificially restricted to raising a given number of points. By common sense, for a generous allotment this is nearly equivalent to the original situation, yet you never learn anything new about how many points he has remaining.

You can argue that CA might still stop early when his argument is feeble, and thus you learn something. However, since you've stipulated that every point raises your probability estimate, he won't stop early. To make the argument without that assumption, we can consider a situation where he is required to raise exactly N points and assume he can easily raise "filler" points.

ISTM that at every juncture in the unrestricted and the generously restricted arguments, your probability estimate should be nearly the same, except that you need to compensate slightly less in the restricted case.

Now, there is a sense in which these are two ways of saying the same thing: raising the probability per (presumably cogent) point, but lowering it as a whole in compensation.

But once you begin hearing CA's argument, you know tautologically that you are hearing his argument, barring unusual circumstances that might still prevent it from being presented in full. I see no reason to delay accounting for that information.
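
To put toy numbers on the contrast, here is a Bayesian sketch in Python; the prior and the two distributions over how many genuine points CA can muster are entirely my own inventions, not anything from the post or this thread. In this toy, the running point-count alone produces the rise-then-drop pattern in the unrestricted case, while in the exactly-N case the count has the same likelihood under both hypotheses and so, by itself, moves the estimate not at all.

    # Toy model. Hypothesis D: box B contains the diamond. Prior P(D) = 0.5.
    # K = number of genuine points the clever arguer (CA) can muster for box B.
    # Both distributions over K are invented for illustration only.
    prior = 0.5
    k_if_diamond    = {2: 0.2, 3: 0.3, 4: 0.5}
    k_if_no_diamond = {0: 0.2, 1: 0.4, 2: 0.3, 3: 0.1}

    def posterior(lik_d, lik_not_d):
        """P(D | evidence) given the evidence's likelihood under each hypothesis."""
        num = lik_d * prior
        return num / (num + lik_not_d * (1 - prior))

    def p_at_least(dist, i):
        return sum(p for k, p in dist.items() if k >= i)

    # Unrestricted CA: presents every point he has, then says "Therefore...".
    # Hearing the i-th point tells you K >= i; hearing "Therefore" after i points tells you K == i.
    for i in (1, 2, 3, 4):
        after_point = posterior(p_at_least(k_if_diamond, i), p_at_least(k_if_no_diamond, i))
        after_stop  = posterior(k_if_diamond.get(i, 0.0), k_if_no_diamond.get(i, 0.0))
        print(f"point {i}: P(D | K>={i}) = {after_point:.3f}   'Therefore': P(D | K={i}) = {after_stop:.3f}")

    # Restricted CA: required to present exactly N points, padding with filler if needed.
    # The observed count is N under either hypothesis (likelihood 1 for both), so the
    # count alone leaves the estimate at the prior; only the content of the points matters.
    print("restricted case, count alone:", posterior(1.0, 1.0))   # 0.5

The content of the points is deliberately left out of the toy; that content is what, ISTM, should carry nearly the same weight in both the unrestricted and the restricted cases.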
