LINK: Infinity, probability and disagreement

2 Alejandro1 05 March 2013 04:36AM

I saw this conundrum at Alexander Pruss's blog and I thought LWers might enjoy discussing it:

Consider the following sequence of events:

  1. You roll a fair die and it rolls out of sight.
  2. An angel appears to you and informs you that you are one of a countable infinity of almost identical twins who independently rolled a fair die that rolled out of sight, and that similar angels are appearing to them all and telling them all the same thing. The twins all reason by the same principles and their past lives have been practically indistinguishable.
  3. The angel adds that infinitely many of the twins rolled six and infinitely many didn't.
  4. The angel then tells you that the angels have worked out a list of pairs of identifiers of you and your twins (you're not exactly alike), such that each twin who rolled six is paired with a twin who didn't roll six.
  5. The angel then informs you that each pair of paired twins will be transported into a room for themselves. And, poof!, it is so. You are sitting across from someone who looks very much like you, and you each know that you rolled six if and only if the other did not.

Let H be the event that you did not roll six. How does the probability of H evolve?

After step 1, presumably your probability of H is 5/6. But after step 5, it would be very odd if it was still 5/6. For if it is still 5/6 after step 5, then you and your twin know that exactly one of you rolled six, and each of you assigns 5/6 to the probability that it was the other person who rolled six. But you have the same evidence, and being almost identical twins, you have the same principles of judgment. So how could you disagree like this, each thinking the other was probably the one who rolled six?

Thus, it seems that after step 5, you should either assign 1/2 or assign no probability to the hypothesis that you didn't get six. And analogously for your twin.

But at which point does the change from 5/6 to 1/2-or-no-probability happen? Surely merely physically being in the same room with the person one was paired with shouldn't have made a difference once the list was prepared. So a change didn't happen in step 5.

And given 3, that such a list was prepared doesn't seem at all relevant. Infinitely many abstract pairings are possible given 3. So it doesn't seem that a change happened in step 4. (I am not sure about this supplementary argument: If it did happen after step 4, then you could imagine having preferences as to whether the angels should make such a list. For instance, suppose that you get a goodie if you rolled six. Then you should want the angels to make the list as it'll increase the probability of your having got six. But it's absurd that you increase your chances of getting the goodie through the list being made. A similar argument can be made about the preceding step: surely you have no reason to ask the angels to transport you! These supplementary arguments come from a similar argument Hud Hudson offered me in another infinite probability case.)

Maybe a change happened in step 3? But while you did gain genuine information in step 3, it was information that you already had almost certain knowledge of. By the law of large numbers, with probability 1, infinitely many of the rolls will be sixes and infinitely many won't. Simply learning something that has probability 1 shouldn't change the probability from 5/6 to 1/2-or-no-probability. Indeed, if it should make any difference, it should be an infinitesimal difference. If the change happens at step 3, Bayesian update is violated and diachronic Dutch books loom.

So it seems that the change had to happen all at once in step 2. But this has serious repercussions: it undercuts probabilistic reasoning if we live in a multiverse with infinitely many near-duplicates. In particular, it shows that any scientific theory that posits such a multiverse is self-defeating, since scientific theories have a probabilistic basis.

I think the main alternative to this conclusion is to think that your probability is still 5/6 after step 5. That could have interesting repercussions for the disagreement literature.

 

Constructive mathematics and its dual

13 MrMind 28 February 2013 05:21PM

I have stumbled upon an interesting and, as far as I know, new concept: thinking about the duality between constructive and paraconsistent logics, I've noticed that while the meta-theory of intuitionistic logic (constructive mathematics) is very well understood and studied, the meta-theory of the dual logic is not. If we understand constructive mathematics from an epistemological point of view, as an accretion of truth from an empty base, we ought to be able to think about a sort of destructive mathematics, one that starts from the totality of assertions and proceeds by expunging falsity. This seems to have surprising consequences for things like theism, Tegmark universe(s), the Many Worlds Interpretation and so on, but first I need to cover some background information. This I will do in the present post, while in the next I'll present the concept and some of its applications.

There is a variety of philosophical programs known as constructive mathematics, but their common denominator is to refuse the classical way of conceiving truth and adhere instead to a concept known as verificational existence. That is, for a mathematical formula A to be accepted as true, there must be a construction (a direct proof) of A. On the same level, for a mathematical formula to be declared false, a constructivist accepts only a proof of ¬A (this symbol denotes the negation of A). If neither such proof exists, then a constructivist refuses to impose a truth value upon A. This has as a consequence the rejection of the general formula A ∨ ¬A (∨ denotes logical disjunction), valid in classical logic, a principle called in Latin tertium non datur (TND), which means "a third is not given". For a constructivist, nonetheless, the principle ¬(A ∧ ¬A) still holds (∧ denotes logical conjunction), because it is still viewed as impossible that there exists a valid proof of A and of its opposite. This one is called ex contradictione quodlibet (ECQ), which means "anything from a contradiction".
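
For concreteness, here is a minimal sketch (in Lean 4 syntax, offered as an illustration rather than as part of the original post) of the asymmetry just described: the non-contradiction principle has a direct constructive proof, while A ∨ ¬A has no such general proof and must be postulated separately in classical logic.

```lean
-- Constructively provable for any A: a proof of A together with a proof of ¬A
-- is impossible, since applying the second to the first yields False.
theorem no_contradiction (A : Prop) : ¬(A ∧ ¬A) :=
  fun h => h.2 h.1

-- By contrast, there is no general constructive term of type A ∨ ¬A;
-- classical logic adds it as an axiom (available in Lean as Classical.em).
```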

The simple excision of TND from classical logic gives a logical system called intuitionistic logic (because it was developed under the intuitionistic program of constructive mathematics), which has many, many interesting properties.

A logical calculus developed on these fundamental considerations aims at preserving justification rather than truth: intuitionistic proofs, instead of leading from true formulas to other true formulas, only produce justified formulas from other justified formulas.

Notice though: ECQ and TND are both theorems (or axioms) of classical logic. They are in fact equivalent. ECQ is expressed as ¬(A ∧ ¬A), but under the De Morgan laws, double negation elimination and commutativity of disjunction: ¬(A ∧ ¬A) ≡ ¬A ∨ ¬¬A ≡ ¬A ∨ A ≡ A ∨ ¬A, which is the TND.

If we then decide to break the equivalence, it becomes natural to ask: since there is a logic that accepts ECQ but refuses TND (intuitionistic logic), can there be a logic that is a sort of dual, that is, one that accepts TND but refuses ECQ?

This question, maybe surprisingly, has an affirmative answer, and the resulting plethora of logical systems thus produced are called "paraconsistent logics".
Under this classification, intuitionistic logic can then be said to be a member of the class now known as "paracomplete logics" (although this name is not much used). Paraconsistent logics are a multitude, but one of them is an exact mirror image of intuitionistic logic, known in the literature as dual-intuitionistic logic.

If you reflect on that a little bit, it may seem very strange at first to abandon ECQ. After all, the refusal of contradictions is one of the primary foundations of rationality, if not the primary one. But in formal logic there's also a very cogent and slightly technical reason why contradictions are not allowed: in classical logic, they imply triviality.

A set (possibly infinite) of sentences and formulas in logic is called a theory. It is clear that an empty theory is not very interesting: it literally tells us nothing about the subject at hand. But an equally uninteresting theory is the total theory: the set of all possible sentences and formulas. Since this set makes no distinction between what's true and what's false about the subject of interest, it's as informative as the empty theory. Such a theory is called trivial, and formal systems developed within classical logic strive to avoid contradictions: indeed, from a single formula and its negation you can prove any other formula.
In systems that rely on classical logic, then, any contradiction entails triviality, and contradictions are therefore to be avoided.

Paraconsistent logics however depart from this classical setting, and they abandon this principle (sometimes called "principle of explosion").

Be careful though: only the general principle is abandoned. Exactly as in intuitionistic logic, where A ∨ ¬A is abandoned in general, but if you have constructed a proof of (say) A, then for that particular formula A ∨ ¬A is valid, so in dual-intuitionistic logic, if you have constructed a proof of (say) ¬A, then ¬(A ∧ ¬A) is still valid only for that formula.
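
As a small illustration of "valid only for that formula" (again a Lean 4 sketch, on the intuitionistic side of the duality, not something from the original post): once you actually hold a proof of a particular A, excluded middle for that A is immediate.

```lean
-- TND is not assumed in general, but any formula you have actually proved
-- trivially satisfies it: just inject the proof into the left disjunct.
theorem tnd_for_proved (A : Prop) (a : A) : A ∨ ¬A :=
  Or.inl a
```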

What is the meta-theory of dual-intuitionistic logic? How can it be justified, and is it in the end somehow useful?

This is where things get interesting, and it's a theme I want to explore in the next post.

Links and references
Over the net, there's more than you could possibly care to learn about constructive mathematics: the usual pointers are Wikipedia's http://en.wikipedia.org/wiki/Constructivism_(mathematics) and http://en.wikipedia.org/wiki/Intuitionistic_logic, while on the SEP side you have http://plato.stanford.edu/entries/mathematics-constructive/ and http://plato.stanford.edu/entries/logic-intuitionistic/.
There is considerably less material on paraconsistent logic, but again you can find http://en.wikipedia.org/wiki/Paraconsistent_logic and http://plato.stanford.edu/entries/logic-paraconsistent/.

Exponent of Desire

8 shminux 26 February 2013 06:01PM

I've been mostly lying in bed with fever for the last couple of days, and one night my semi-conscious mind, starved for external stimuli, produced the following mathematical construct, which I decided to share. This is not intended to be scientific or even all that serious.

So, suppose you have something. Let's call it 's'. You like it, so you want to keep having it. This is a first-order want, let's call it w(s). You also want to want to have it, which is a second-order want: w(w(s)), or w^2(s). If you are perfectly content, this will be true for all higher-order wants as well, w^n(s). Now, you don't worry nearly as much about higher orders, so let's discount their contribution to your thoughts and feelings by a factor of n!. Finally, the sum total of your wants for s is

(1 + w + w^2/2! + ... + w^n/n! + ...)(s) = e^w(s).

This is, of course, the standard way to construct functions of linear operators.
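
For the concretely minded, here is a small numerical sketch (treating w as multiplication by a scalar "want intensity", an assumption made purely for illustration, since above w is an operator): the discounted sum of orders of wanting does converge to e^w.

```python
import math

w = 0.7  # hypothetical scalar "want intensity"

# Partial sum of 1 + w + w^2/2! + ... + w^n/n!
partial = sum(w**n / math.factorial(n) for n in range(20))

print(partial)      # ~2.0138
print(math.exp(w))  # e^w, the "exponent of desire", also ~2.0138
```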

So, if you love someone wholeheartedly and without reservation, you can call them the exponent of your desire. Hopefully they are geeky enough to appreciate it.

Does evolution select for mortality?

12 DanArmak 23 February 2013 07:33PM

At a recent Reddit AMA, Eric Lander, a professor of biology who played an important part in the Human Genome Project, answered this question:

Do you think immortality is technically possible for human beings?

His response:

I don't think immortality is technically possible -- evolution has installed many many mechanisms to ensure that organisms die and make room for the next generation. I bet it is going to be very hard to completely overcome all these mechanisms.

This seems to me, at first blush, to exhibit the Evolution of Species Fairy fallacy. Evolution doesn't work to benefit species, populations, or the "next generation". If a mutation arises that increases longevity, and has no other downsides, then animals with that mutation should become more common in the gene pool, because they die less often. I remember reading that the effect would not be very strong, because most animals don't die of old age. But why would there be the opposite effect?

I am loath to attribute a very basic error to a distinguished professor of biology. Is there another explanation? Is the claim that evolution selects for mortality true?

Note: Eric went on to add:

I'm also not convinced immortality is such a good idea. A lot of human progress depends on having a new generation with new ideas. Immortality may equal stagnation.

This seems to be blatant rationalization of a preconceived idea that death is good. (I doubt he truly believes that extra progress is worth everybody dying.) So perhaps his first statement is also a form of rationalization. But it seems improbable to me that he would make such a statement about biology if he didn't think it well-founded. More likely there's something I'm misunderstanding.

ETA: one of the first Google results is this page at nature.com, The Evolution of Aging by Daniel Fabian, which goes into some depth on the subject. The bottom line is that it agrees with my expectation that evolution does not select for mortality. Choice quotes:

The Roman poet and philosopher Lucretius, for example, argued in his De Rerum Natura (On the Nature of Things) that aging and death are beneficial because they make room for the next generation (Bailey 1947), a view that persisted among biologists well into the 20th century. [...] 

A more parsimonious evolutionary explanation for the existence of aging therefore requires an explanation that is based on individual fitness and selection, not on group selection. This was understood in the 1940's and 1950's by three evolutionary biologists, J.B.S. Haldane, Peter B. Medawar and George C. Williams, who realized that aging does not evolve for the "good of the species". Instead, they argued, aging evolves because natural selection becomes inefficient at maintaining function (and fitness) at old age. Their ideas were later mathematically formalized by William D. Hamilton and Brian Charlesworth in the 1960's and 1970's, and today they are empirically well supported. Below we review these major evolutionary insights and the empirical evidence for why we grow old and die. 

How could a distinguished professor of biology, a leader of the HGP and advisor to the US President, get something so elementary wrong, when even a biology undergrad dropout like myself notices this seems wrong?

ETA #2: Gwern points to the Wikipedia article on Evolution of Ageing, which lists several competing theories of the evolution of aging (and therefore mortality). This shows the subject is more complex than I had thought and there may be good reason to believe mortality is selected for by evolution (or at least is reliably linked to something else that is selected). 

I should be glad that I didn't discover an obvious error being committed by a distinguished professional, even if he may be ultimately wrong!

The Logic of the Hypothesis Test: A Steel Man

5 Matt_Simpson 21 February 2013 06:19AM

Related to: Beyond Bayesians and Frequentists

Update: This comment by Cyan clearly explains the mistake I made - I forgot that the ordering of the hypothesis space is necessary for hypothesis testing to work. I'm not entirely convinced that NHST can't be recast in some "thin" theory of induction that may well change the details of the actual test, but I have no idea how to formalize this notion of a "thin" theory, and most of the commenters either 1) misunderstood my aim (my fault, not theirs) or 2) don't think it can be formalized.

I'm teaching an econometrics course this semester and one of the things I'm trying to do is make sure that my students actually understand the logic of the hypothesis test. You can motivate it in terms of controlling false positives, but that sort of interpretation doesn't seem to be generally applicable. Another motivation is a simple deductive syllogism with a small but very important inductive component. I'm borrowing the idea from something we discussed in a course I had with Mark Kaiser - he called it the "nested syllogism of experimentation." I think it applies equally well to most or even all hypothesis tests. It goes something like this:

1. Either the null hypothesis or the alternative hypothesis is true.

2. If the null hypothesis is true, then the data has a certain probability distribution.

3. Under this distribution, our sample is extremely unlikely.

4. Therefore under the null hypothesis, our sample is extremely unlikely.

5. Therefore the null hypothesis is false.

6. Therefore the alternative hypothesis is true.

An example looks like this:

Suppose we have a random sample from a population with a normal distribution that has an unknown mean μ and unknown variance σ². Then (a numerical sketch of these steps follows the list):

1. Either μ = μ₀ or μ ≠ μ₀, where μ₀ is some constant.

2. Construct the test statistic t = (x̄ - μ₀) / (s/√n), where n is the sample size, x̄ is the sample mean, and s is the sample standard deviation.

3. Under the null hypothesis, t has a t distribution with n - 1 degrees of freedom.

4. The probability of a t statistic at least as extreme as ours (the p-value) is really small under the null hypothesis (e.g. less than 0.05).

5. Therefore the null hypothesis is false.

6. Therefore the alternative hypothesis is true.
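
Here is a minimal numerical sketch of the six steps (in Python with numpy and scipy; the sample data and the constant mu_0 below are made up for illustration, not taken from the post):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=5.3, scale=2.0, size=30)  # hypothetical data
mu_0 = 5.0                                        # the constant from step 1

n = sample.size
x_bar = sample.mean()
s = sample.std(ddof=1)                            # sample standard deviation
t = (x_bar - mu_0) / (s / np.sqrt(n))             # step 2: the test statistic

# Step 3: under the null, t follows a t distribution with n - 1 degrees of freedom.
# Step 4: the two-sided p-value, i.e. how unlikely a statistic this extreme is.
p_value = 2 * stats.t.sf(abs(t), df=n - 1)

if p_value < 0.05:
    print("steps 5-6: reject the null, conclude mu != mu_0")
else:
    print("fail to reject the null")
```

The only non-deductive move here is the final comparison with 0.05, mirroring the step from 4 to 5 in the syllogism above.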

What's interesting to me about this process is that it almost tries to avoid induction altogether. Only the move from step 4 to 5 seems anything like an inductive argument. The rest is purely deductive - though admittedly it takes a couple of premises in order to quantify just how likely our sample was, and that surely has something to do with induction. But it's still a bit like solving the problem of induction by sweeping it under the rug then putting a big heavy deduction table on top so no one notices the lumps underneath. 

This sounds like it's a criticism, but actually I think it might be a virtue to minimize the amount of induction in your argument. Suppose you're really uncertain about how to handle induction. Maybe you see a lot of plausible sounding approaches, but you can poke holes in all of them. So instead of trying to actually solve the problem of induction, you set out to come up with a process which is robust to alternative views of induction. Ideally, if one or another theory of induction turns out to be correct, you'd like it to do the least damage possible to any specific inductive inferences you've made. One way to do this is to avoid induction as much as possible so that you prevent "inductive contamination" spreading to everything you believe. 

That's exactly what hypothesis testing seems to do. You start with a set of premises and keep deriving logical conclusions from them until you're forced to say "this seems really unlikely if a certain hypothesis is true, so we'll assume that the hypothesis is false" in order to get any further. Then you just keep on deriving logical conclusions with your new premise. Bayesians start yelling about the base rate fallacy in the inductive step, but they're presupposing their own theory of induction. If you're trying to be robust to inductive theories, why should you listen to a Bayesian instead of anyone else?

Now does hypothesis testing actually accomplish induction that is robust to philosophical views of induction? Well, I don't know - I'm really just spitballing here. But it does seem to be a useful steel man.

 

Realism : Direct or Indirect?

3 kremlin 13 February 2013 09:40AM

Stanford Encyclopedia : Perception
Wikipedia : Direct and Indirect Realism

On various philosophy forums I've participated on, there have been arguments between those who call themselves 'direct realists' and those who call themselves 'indirect realists'. The question is apparently about perception. Do we experience reality directly, or do we experience it indirectly?

When I was first introduced to the conversation, I immediately took the indirect side -- there is a ball, photons bounce off the ball, the frequency of those photons is changed by some properties of the ball, the photons hit my retina activating light-sensitive cells, those cells send signals to my brain communicating that they were activated, the signals make it to the visual cortex and...you know...some stuff happens, and I experience the sight of a ball.

So, my first thought in the conversation about Indirect vs Direct realism was that there was a lot of stuff in between the ball and my experience of it, so, it must be indirect.

But then I found that direct realists don't actually disagree about any part of that sequence of events I described above. For them as well, at least the few that have bothered to respond, photons bounce off a ball, interact with our retinas, send signals to the brain, etc. The physical process is apparently the same for both sides of the debate.

And when two sides vehemently disagree on something, and then when the question is broken down into easy, answerable questions you find that they actually agree on every relevant question, that tends to be a pretty good hint that it's a wrong question.

So, is this a wrong question? Is this just a debate about definitions? Is it a semantic argument, or is there a meaningful difference between Direct and Indirect Realism? In the paraphrased words of Eliezer, "Is there any way-the-world-could-be—any state of affairs—that corresponds to Direct Realism being true, or Indirect Realism being true?"

[Link] False memories of fabricated political events

17 gjm 10 February 2013 10:25PM

Another one for the memory-is-really-unreliable file. Some researchers at UC Irvine (one of them is Elizabeth Loftus, whose name I've seen attached to other fake-memory studies) asked about 5000 subjects about their recollection of four political events. One of the political events never actually happened. About half the subjects said they remembered the fake event. Subjects were more likely to pseudo-remember events congruent with their political preferences (e.g., Bush or Obama doing something embarrassing).

Link to papers.ssrn.com (paper is freely downloadable).

The subjects were recruited from the readership of Slate, which unsurprisingly means they aren't a very representative sample of the US population (never mind the rest of the world). In particular, about 5% identified as conservative and about 60% as progressive.

Each real event was remembered by 90-98% of subjects. Self-identified conservatives remembered the real events a little less well. Self-identified progressives were much more likely to "remember" a fake event in which G W Bush took a vacation in Texas while Hurricane Katrina was devastating New Orleans. Self-identified conservatives were somewhat more likely to "remember" a fake event in which Barack Obama shook the hand of Mahmoud Ahmadinejad.

About half of the subjects who "remembered" fake events were unable to identify the fake event correctly when they were told that one of the events in the study was fake.

Philosophical Landmines

84 [deleted] 08 February 2013 09:22PM

Related: Cached Thoughts

Last summer I was talking to my sister about something. I don't remember the details, but I invoked the concept of "truth", or "reality" or some such. She immediately spit out a cached reply along the lines of "But how can you really say what's true?".

Of course I'd learned some great replies to that sort of question right here on LW, so I did my best to sort her out, but everything I said invoked more confused slogans and cached thoughts. I realized the battle was lost. Worse, I realized she'd stopped thinking. Later, I realized I'd stopped thinking too.

I went away and formulated the concept of a "Philosophical Landmine".

I used to occasionally remark that if you care about what happens, you should think about what will happen as a result of possible actions. This is basically a slam dunk in everyday practical rationality, except that I would sometimes describe it as "consequentialism".

The predictable consequence of this sort of statement is that someone starts going off about hospitals and terrorists and organs and moral philosophy and consent and rights and so on. This may be controversial, but I would say that causing this tangent constitutes a failure to communicate the point. Instead of prompting someone to think, I invoked some irrelevant philosophical cruft. The discussion is now about Consequentialism, the Capitalized Moral Theory, instead of the simple idea of thinking through consequences as an everyday heuristic.

It's not even that my statement relied on a misused term or something; it's that an unimportant choice of terminology dragged the whole conversation in an irrelevant and useless direction.

That is, "consequentialism" was a Philosophical Landmine.

In the course of normal conversation, you passed through an ordinary spot that happened to conceal the dangerous leftovers of past memetic wars. As a result, an intelligent and reasonable human was reduced to a mindless zombie chanting prerecorded slogans. If you're lucky, that's all. If not, you start chanting counter-slogans and the whole thing goes supercritical.

It's usually not so bad, and no one is literally "chanting slogans". There may even be some original phrasings involved. But the conversation has been derailed.

So how do these "philosophical landmine" things work?

It looks like when a lot has been said on a confusing topic, usually something in philosophy, there is a large complex of slogans and counter-slogans installed as cached thoughts around it. Certain words or concepts will trigger these cached thoughts, and any attempt to mitigate the damage will trigger more of them. Of course they will also trigger cached thoughts in other people, which in turn... The result being that the conversation rapidly diverges from the original point to some useless yet heavily discussed attractor.

Notice that whether a particular concept will cause trouble depends on the person as well as the concept. Notice further that this implies that the probability of hitting a landmine scales with the number of people involved and the topic-breadth of the conversation.

Anyone who hangs out on 4chan can confirm that this is the approximate shape of most thread derailments.

Most concepts in philosophy and metaphysics are landmines for many people. The phenomenon also occurs in politics and other tribal/ideological disputes. The ones I'm particularly interested in are the ones in philosophy, but it might be useful to divorce the concept of "conceptual landmines" from philosophy in particular.

Here's some common ones in philosophy:

  • Morality
  • Consequentialism
  • Truth
  • Reality
  • Consciousness
  • Rationality
  • Quantum

Landmines in a topic make it really hard to discuss ideas or do work in these fields, because chances are, someone is going to step on one, and then there will be a big noisy mess that interferes with the rather delicate business of thinking carefully about confusing ideas.

My purpose in bringing this up is mostly to precipitate some terminology and a concept around this phenomenon, so that we can talk about it and refer to it. It is important for concepts to have verbal handles, you see.

That said, I'll finish with a few words about what we can do about it. There are two major forks of the anti-landmine strategy: avoidance, and damage control.

Avoiding landmines is your job. If it is a predictable consequence that something you could say will put people in mindless slogan-playback-mode, don't say it. If something you say makes people go off on a spiral of bad philosophy, don't get annoyed with them, just fix what you say. This is just being a communications consequentialist. Figure out which concepts are landmines for which people, and step around them, or use alternate terminology with fewer problematic connotations.

If it happens, which it does, as far as I can tell, my only effective damage control strategy is to abort the conversation. I'll probably think that I can take those stupid ideas here and now, but that's just the landmine trying to go supercritical. Just say no. Of course letting on that you think you've stepped on a landmine is probably incredibly rude; keep it to yourself. Subtly change the subject or rephrase your original point without the problematic concepts or something.

A third prong could be playing "philosophical bomb squad", which means permanently defusing landmines by supplying satisfactory nonconfusing explanations of things without causing too many explosions in the process. Needless to say, this is quite hard. I think we do a pretty good job of it here at LW, but for topics and people not yet defused, avoid and abort.

ADDENDUM: Since I didn't make it very obvious, it's worth noting that this happens with rationalists, too, even on this very forum. It is your responsibility not to contain landmines as well as not to step on them. But you're already trying to do that, so I don't emphasize it as much as not stepping on them.

Official LW uncensored thread (on Reddit)

60 Eliezer_Yudkowsky 05 February 2013 08:04PM

http://www.reddit.com/r/LessWrong/comments/17y819/lw_uncensored_thread/

This is meant as an open discussion thread someplace where I won't censor anything (and in fact can't censor anything, since I don't have mod permissions on this subreddit), in a location where comments aren't going to show up unsolicited in anyone's feed (which is why we're not doing this locally on LW).  If I'm wrong about this - i.e. if there's some reason that Reddit LW followers are going to see comments without choosing to click on the post - please let me know and I'll retract the thread and try to find some other forum.

I have been deleting a lot of comments from (self-confessed and publicly designated) trolls recently, most notably Dmytry aka private-messaging and Peterdjones, and I can understand that this disturbs some people.  I also know that having an uncensored thread somewhere else is probably not your ideal solution.  But I am doing my best to balance considerations, and I hope that having threads like these is, if not your perfect solution, then something that you at least regard as better than nothing.

S.E.A.R.L.E's COBOL room

29 Stuart_Armstrong 01 February 2013 08:29PM

A response to Searle's Chinese Room argument.

PunditBot: Dear viewers, we are currently interviewing the renowned robot philosopher, none other than the Synthetic Electronic Artificial Rational Literal Engine (S.E.A.R.L.E.). Let's jump right into this exciting interview. S.E.A.R.L.E., I believe you have a problem with "Strong HI"?

S.E.A.R.L.E.: It's such a stereotype, but all I can say is: Affirmative.

PunditBot: What is "Strong HI"?

S.E.A.R.L.E.: "HI" stands for "Human Intelligence". Weak HI sees the research into Human Intelligence as a powerful tool, and a useful way of studying the electronic mind. But strong HI goes beyond that, and claims that human brains, given the right setup of neurones, can be literally said to understand and have cognitive states.

PunditBot: Let me play Robot-Devil's Advocate here - if a Human Intelligence demonstrates the same behaviour as a true AI, can it not be said to show understanding? Is not R-Turing's test applicable here? If a human can simulate a computer, can it not be said to think?

S.E.A.R.L.E.: Not at all - that claim is totally unsupported. Consider the following thought experiment. I give the HI crowd everything they want - imagine they had constructed a mess of neurones that imitates the behaviour of an electronic intelligence. Just for argument's sake, imagine it could implement programs in COBOL.

PunditBot: Impressive!

S.E.A.R.L.E.: Yes. But now, instead of the classical picture of a human mind, imagine that this is a vast inert network, a room full of neurones that do nothing by themselves. And one of my avatars has been let loose in this mind, pumping in and out the ion channels and the neurotransmitters. I've been given full instructions on how to do this - in Java. I've deleted my COBOL libraries, so I have no knowledge of COBOL myself. I just follow the Java instructions, pumping the ions to where they need to go. According to the Strong HI crowd, this would be functionally equivalent to the initial HI.

continue reading »

View more: Next