In response to The Halo Effect
Comment author: Adirian 03 December 2007 11:15:04PM 0 points [-]

Nate - that strategy can only work insofar as other groups aren't utilizing it, and to that extent you will be punished by those groups hesitating to employ you in their capacities as clients and customers.

In response to The Halo Effect
Comment author: Adirian 30 November 2007 03:35:50AM 1 point [-]

As Different says, but in regard to connections between perceived intelligence and perceived honesty, to pick two particularly useful examples - the usefulness of either quality, in regard to your interactions with the individual, is dependent upon both. I.e., it isn't a great idea to trust what a not-too-bright individual tells you, even if he or she has never told a lie in their life, for the simple reason that they may not have the faculty to evaluate their statements. And the reverse might be true - particularly bright individuals may not be good candidates for trust, particularly on important issues, because they have great faculty for making value judgments about when it is most profitable to lie. Individuals at either extreme may not be good candidates for trust.

I had a long discussion with my brother on precisely the issue of attractiveness - in regard to banks. Banks are great examples because they spend immense quantities of money making themselves look respectable. So - would you trust your money to a bank that was going to spend some of it making itself look good? Or would you trust your money to a bank that doesn't care how it looks? A bank that cares about its reputation enough to spend massive amounts of money maintaining it isn't going to sacrifice that by stealing your relatively small sum - it is the better choice, presuming on the rationality of the bank. An individual who goes to great lengths to APPEAR competent is going to try to BE competent - someone who doesn't care whether or not they appear competent does not care whether you think they are competent, and hence may not make an effort to be more competent as relates to you and your business. The better-groomed candidate, other things being equal, is the better choice, presuming upon that individual's rationality. (Particularly since the issue is reinforcing - clients and customers, after all, are making the same judgments.)

An individual who makes an effort to appear more attractive likely has a reason for doing so - they may care what other people think of them (which suggests they'll be nicer, when somebody is watching, at least), they may want to appear more competent (which suggests they'll be better at other things they do, presuming they follow similar levels of investment), and, presuming they DO have rational reasons for taking care of their appearance, they may simply be smarter than your average person. Naturally good looking people may be getting the benefits of biases we develop based on those who acquire appearances by effort - and may get points for honesty, as well, because they aren't attempting to "lie" about their appearance. (Which would be interesting, because it would mean naturally good-looking individuals gain more benefits than those who provided much of the bias incentive to begin with.)

Just some thoughts.

Comment author: Adirian 17 November 2007 05:17:27AM 0 points [-]

""Somewhat rational" does not mean "irrational". There are three different ways in which something can be said to be rational. (1) That reason can be applied to it. Duh, reason can be applied to *everything*. (2) That it's prosecuted by means of reason. Ethical thought sometimes proceeds by means of reason, and sometimes not. Hence, "somewhat rational". (3) That applying reason to it doesn't show up inconsistencies. Perhaps some people have (near enough) perfectly consistent ethical positions. Certainly most people don't. It's not unheard of for philosophers to advocate embracing that inconsistency. But generally there's some degree of consistency, and sufficiently gross inconsistencies can prompt revision. Hence, again, "somewhat rational"."

The second is the only situation in which "somewhat rational" makes sense, but it was not the context of the argument, which was, after all, about moral systems, and not moral thoughts - as for the third, inconsistent consistency, I think you will agree, is not consistency at all.

Since we're having a conversation, I might hazard a suggestion that it is what you are saying that is giving me the impressions of what it is you think. And I stated my reasons in each case why I thought you were thinking as you were - if you wish to address me, address the reasons I gave, so I might know in what way I am failing to understand what it is you are attempting to communicate.

Comment author: Adirian 16 November 2007 06:46:24AM 0 points [-]

Our behavior is nothing more than the expression of our thoughts. If we behave as though something is a terminal value - we are doing nothing more than expressing our intents and regards, which is to say, we THINK of it as a terminal value. There is no distinction between physical action and mental thought, or between what is in our heads and what comes out of our mouths - our mind moves our muscles, and our thoughts direct our voice. There is no "actual thought" and - what? Nonactual thought? As if your body operated of its own will, acting against what your actual thoughts are. The mind is responsible for what the body does. I'm not eliding the distinction. I'm denying it.

Your language explains precisely why I said that you don't believe ethics is rational. Somewhat-rational means irrational - that is, something that is rational only some of the time is, in fact, irrational. Either a thing is rational, and logic can reasonably and consistently be applied to it - or it isn't. There isn't "mathematical logic" and then "otherwise logic." Many have been going to great lengths to explain, among other things, how Bayesian Reasoning - derived entirely from a pretty little formula which is quite mathematical - is meaningful in daily thinking. There is just logic. It's the same logic in mathematics as it is in philosophy. It is only the axioms - the definitions - which vary.
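(The "pretty little formula" referenced here is Bayes' theorem. A minimal numeric sketch - the prior, the likelihoods, and the specific numbers below are illustrative assumptions, not anything from the original comment:)

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E),
# where P(E) = P(E|H) * P(H) + P(E|~H) * P(~H).
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Return P(H|E) via Bayes' theorem."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Illustrative numbers: a rare hypothesis (1% prior) and a fairly
# reliable test (80% true-positive rate, 9.6% false-positive rate).
# Despite the "positive" evidence, the posterior stays below 8% -
# the kind of everyday correction Bayesian reasoning supplies.
print(round(posterior(0.01, 0.80, 0.096), 3))  # 0.078
```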

Because axioms exist where rationality begins - that is their purpose. They are the definitions, the borders, from which rationality starts.

Incidentally, if you don't think ethics is like mathematical logic, and you've been reading and agreeing with anything Eliezer posts on the subject, you should take a foundations of mathematics course. He is going to great lengths to describe ethics in a way that is extremely mathematical, if the language has been stripped away for legibility. (For example, he explains infinite recursion, rather than using the word.) Which may, of course, be why he avoids the use of the word "axiom," and instead simply explains it. I'd also recommend a classical philosophy course - because the very FIELD of ethics is derived from precisely the thing you are suggesting is ridiculous, the search for mathematical, for logical, expressions of morality. The root of which I think it is clear is the value code upon which an individual builds their morality - a thing without rational value in itself, save as a definition, save as an axiom.

That is almost what I meant by axioms. Values. Terminal values, specifically. And also the basis of any individual's ethical code. The entire point of my post was linguistics - hence the sentence that axioms would be a simpler way of explaining terminal values. What I meant by "morality itself is a terminal value and an axiom," however, is akin to what you suggest - it is that if morality is treated as an irrational entity, as you seem wont to do, then yes, absolutely everything someone thinks about right and wrong must be treated in a rational ethical system as an axiom. Which is, as you say, possibly true - but thoroughly worthless.

Comment author: Adirian 16 November 2007 01:19:39AM 0 points [-]

It is a terminal value, however - you are regarding B as something other than B, something other than a stage from which to get to C. To exactly the ends you permit your visceral reaction to the guns themselves shape your opinion, you are treating the abolition or freedom to use guns as an ends, rather than a means. (To reduce crime or promote freedom generally, respectively.) Remember that morality itself is the use of bias - on deciding between two ethical structures which is the better based on subjectively defined values - so to say that something is bias in a moral framework means that it is being treated as a moral axiom, a terminal value.

Your commentary means one of two things - either you don't believe ethics is a rational system to which logic can be applied, or you don't accept that axioms have a place in ethics. Addressing the latter, it is certain that they do, as in any rational system. At the very least you must accept the axioms of definition - among which will be those axioms, those values, by which you judge the merits of any given situation or course of action. "Death is bad" can be an axiom or a derived value - but in order to be derived, you must posit an axiom by which it can be derived, say, that "Thinking is good," and then reason from there, by stating, for example, that death stops the process of thinking. Which applies no matter which direction you come from - from the side of the axioms, trying to discover what situations are best, or from the side of the derived values, trying to figure out what axioms led to their derivation.

Regarding the former argument - then you take ethics itself as a thing which cannot further be defined, and so claim that morality is itself the terminal value, the axiom. Which I don't think would be your position.

Comment author: Adirian 15 November 2007 11:43:12PM 0 points [-]

Terminal values sound, essentially, like moral axioms - they are, after all, terminal. (If they had a basis in a specific future, it would be a question of what, specifically, about that future is appealing - and that quality would, in turn, become a new terminal value.) When treating morality as a logical system, it would simplify your language in explaining yourself somewhat, I think, to describe them as such - particularly since once you have done so, Gödel's theorem goes a long way towards explaining why you can't rationalize a conceptual terminal value down any further. (They are very interesting axioms, since we can only consistently treat them conceptually and as variables, but nevertheless axiomatic in nature.)

Speaking of people coming to think of B as a good thing itself, many of those in favour of banning guns treat gun abolition as a terminal value in its own right - challenging those in favour of gun freedoms to prove that guns reduce crime, rather than asserting that they increase it. That is, they treat the abolition of guns as a positive thing in its own right, and only the improvement of another positive thing, say, by reducing crime, can balance the inherent evil of permitting people to own guns.

Comment author: Adirian 08 November 2007 05:10:59AM 0 points [-]

You can alter the question slightly to permit a very limited form of group selection - you have to have completely isolated genomes, to start with, and a high level of mutative cost between the two groups. (I.e., mammalian versus octopus eyes - refinement guarantees the two groups can't crossover or mutate to adapt the other's characteristics.) If selective pressure favours one of the two characteristics, one group will be effectively "selected out."

The genetic variance doesn't even have to be defined - it could just be a selective tendency against. (I.e., groups for whom quality of children is more important than quantity may be more resistant to certain selective pressures, and vice versa - it isn't individual genes being selected upon in this case, it's the genome.)

So genomes, insofar as they may BE atomic, can be operated upon with selective pressure. Any atomic construct with reproductive capacity is subject to some form of evolution. It's simply much, much slower and rarer for larger constructs. (Because evolution on a smaller-construct form operates at such a high speed comparatively.)

In response to Fake Selfishness
Comment author: Adirian 08 November 2007 04:56:04AM 5 points [-]

I must point out that "whenever they want to be nice to someone" entails a desire to be nice to someone. Your very phrase defines it as being in their interests to be nice to someone. Rationalization isn't even necessary here. You wanted to do something - you did it. Selfishness isn't that complicated.

My guess would be that this individual had read Atlas Shrugged and hadn't fully understood what selfish meant in the context. Ayn Rand was setting out to redefine the word, not to glorify the "old" meaning.

Comment author: Adirian 06 November 2007 04:30:00AM 0 points [-]

First, there is the correct point that our mutation rate has been in steady decline - the first couple of billion years of evolution had a much higher rate of data encoding than the last couple of billion years.

Second, there is the point that a significant portion of pregnancies are failures - we could possibly double the rate of data encoding from that alone, presuming all of that additional selection is improvement on genetic repair and similar functionality. (Reducing mutation rates of critical genes.)

Third, multiple populations could encode multiple bits of data, if they are kept distinct except for a very small level of cross-breeding to keep both populations compatible. (That is, a low level of geographic isolation could, in sexually reproducing creatures, increase the number of gene pools to play with, although at a nonlinear rate - it wouldn't be a huge increase over a bit per half of population lost.)

Fourth, and finally, not only did you forget the first two billion years of evolution, you forgot DNA transfer in its varying forms - which occurs occasionally in bacteria, whereby one can acquire the information encoded in another.
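(The "rate of data encoding" argument these points respond to can be put in numbers: if selection culls all but a fraction f of offspring each generation, at most -log2(f) bits of information can be written into the gene pool per generation. A rough sketch of that bound - the specific survival fractions below are illustrative assumptions:)

```python
import math

def bits_per_generation(survival_fraction):
    """Upper bound on information gained by selection per generation:
    culling all but a fraction f of offspring encodes at most
    -log2(f) bits into the gene pool."""
    return -math.log2(survival_fraction)

# If half the offspring are selected out, at most 1 bit/generation.
print(bits_per_generation(0.5))   # 1.0
# Counting failed pregnancies as additional selection (say only a
# quarter of conceptions survive overall) roughly doubles the bound,
# as the second point above suggests.
print(bits_per_generation(0.25))  # 2.0
```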

Comment author: Adirian 11 October 2007 08:47:46PM -1 points [-]

Which still doesn't say anything about the impact of priming on an individual's decision-making process regarding a matter they are well-informed on - because weak correlation is still better than no correlation.
