Comment author: Constant2 06 August 2008 06:06:42PM 0 points

Whoever is censoring Caledonian: can it be done without adding the content-free nastiness (such as "bizarre objection", "illogic", and "gibberish")?

In response to The Meaning of Right
Comment author: Constant2 30 July 2008 05:57:00PM 0 points

Any two AIs are likely to have a much vaster difference in effective intelligence than you could ever find between two humans (for one thing, their hardware might be much more different than any two working human brains). This likelihood increases further if (at least) some subset of them is capable of strong self-improvement. With enough difference in power, cooperation becomes a losing strategy for the more powerful party.

I read stuff like this and immediately my mind thinks, "comparative advantage." The point is that it can be (and probably is) worthwhile for Bob and Bill to trade with each other even if Bob is better at absolutely everything than Bill. And if it is worthwhile for them to trade with each other, then it may well be in the interest of neither of them to (say) eliminate the other, and it may be a waste of resources to (say) coerce the other. It is worthwhile for the state to coerce the population because the state is few and the population are many, so the per-person cost of coercion falls below the benefit of coercion; it is much less worthwhile for an individual to coerce another (slavery generally has the backing of the state - see for example the fugitive slave laws). But this mass production of coercive fear works in part because humans are similar to each other and so can be dealt with more or less the same way. If AIs are all over the place, then this does not necessarily hold. Furthermore if one AI decides to coerce the humans (who are admittedly similar to each other) then the other AIs may oppose him in order that they themselves might retain direct access to humans.
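The comparative-advantage point can be checked with a toy calculation (the names, hours, production rates, and price below are invented for illustration): even though Bob out-produces Bill at both goods, specialization and trade leave each of them with at least as much of everything and strictly more of something.

```python
# Comparative advantage: Bob out-produces Bill at everything,
# yet both end up better off after specializing and trading.
HOURS = 8.0

# Output per hour (Bob has the absolute advantage in both goods).
bob  = {"bread": 10.0, "fish": 10.0}
bill = {"bread": 1.0,  "fish": 2.0}

# Autarky: each splits his hours evenly between the two goods.
bob_autarky  = {g: r * HOURS / 2 for g, r in bob.items()}    # 40 bread, 40 fish
bill_autarky = {g: r * HOURS / 2 for g, r in bill.items()}   # 4 bread, 8 fish

# Trade: Bill specializes in fish (his comparative advantage: he gives
# up only 0.5 bread per fish, versus Bob's 1.0) and sells 8 fish to Bob
# at a price between the two opportunity costs.
price = 0.75
fish_sold = 8.0

bill_trade = {"bread": fish_sold * price,                  # 6 bread
              "fish": bill["fish"] * HOURS - fish_sold}    # 8 fish

# Bob shifts 0.6 hours from fish to bread to cover the payment.
bob_trade = {"bread": bob["bread"] * 4.6 - fish_sold * price,  # 40 bread
             "fish": bob["fish"] * 3.4 + fish_sold}            # 42 fish
```

Bill ends up with two more bread than under autarky, Bob with two more fish: eliminating or coercing the other party would destroy these gains from trade.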

The AIs might agree that they'd all be better off if they took the matter currently in use by humans for themselves, dividing the spoils among each other.

Maybe but maybe not. Dividing the spoils paints a picture of the one-time destruction of the human race, and it may well be to the advantage of the AIs not to kill off the humans. After all, if the humans have something worth treating as spoils, then the humans are productive and so might be even more useful alive.

You definitely don't want an FAI to unpredictably change its terminal values. Figuring out how to reliably prevent this kind of thing from happening, even in a strongly self-modifying mind (which humans aren't), is one of the sub-problems of the FAI problem.

The FAI may be an unsolvable problem, if by FAI we mean an AI into which certain limits are baked. The idea of baking in rules of robotics has seemed dubious ever since Asimov; it has long seemed to me to fundamentally misunderstand both the nature of morality and the nature of intelligence. But time will tell.

In response to The Meaning of Right
Comment author: Constant2 30 July 2008 04:10:00PM 0 points

An AI can indeed have preferences that conflict with human preferences, but if it doesn't start out with such preferences, it's unclear how it comes to have them later.

We do not know very well how the human mind does anything at all. But it cannot be doubted that the human mind comes to have preferences it did not have initially. For example, babies do not start out preferring Bach to Beethoven or Beethoven to Bach, yet adults are able to develop that preference, even if it is not clear at this point how they come to do so.

If you could do so easily and with complete impunity, would you organize fights to death for your pleasure?

Voters have the ability to vote for policies and to do so easily and with complete impunity (nobody retaliates against a voter for his vote). And, unsurprisingly, voters regularly vote to take from others to give unto themselves - which is something they would never do in person (unless they were criminals, such as muggers or burglars). Moreover humans have an awe-inspiring capacity to clothe their rapaciousness in fine-sounding rhetoric.

Moreover, humans are often tempted to do things they know they shouldn't, because they also have selfish desires. AIs don't if you don't build it into them.

Conflict does not require selfish desires. Any desire, of whatever sort, could potentially come into conflict with another person's desire, and when there are many minds each with its own set of desires then conflict is almost inevitable. So the problem does not, in fact, turn on whether the mind is "selfish" or not. Any sort of desire can create the conflict, and conflict as such creates the problem I described. In a nutshell: evil men need not be selfish. A man such as Pol Pot could indeed have wanted nothing for himself and still ended up murdering millions of his countrymen.

In response to The Meaning of Right
Comment author: Constant2 30 July 2008 01:34:00PM 0 points

A tendency to become corrupt when placed into positions of power is a feature of some minds.

Morality in the human universe is a compromise between conflicting wills. The compromise is useful because the alternative is conflict, and conflict is wasteful. Law is a specific instance of this, so let us look at property rights: a property right is a decision-making procedure for deciding between conflicting desires concerning the owned object. There really is no point in even having property rights except in the context of the potential for conflict. Remove conflict, and you remove the raison d'être of property rights, and more generally the raison d'être of law, and more generally the raison d'être of morality. Give a person power, and he no longer needs to compromise with others, and so for him the raison d'être of morality vanishes and he acts as he pleases.

The feature of human minds that renders morality necessary is the possibility that humans can have preferences that conflict with the preferences of other humans, thereby requiring a decision-making procedure for deciding whose will prevails. Preference, furthermore, is revealed in the actions taken by a mind, so a mind that acts has preferences. So all the above is applicable to an artificial intelligence if the artificial intelligence acts.

What makes you think a human-designed AI would be vulnerable to this kind of corruption?

I am assuming it acts, and therefore makes choices, and therefore has preferences, and therefore can have preferences which conflict with the preferences of other minds (including human minds).

Comment author: Constant2 24 July 2008 03:26:36PM 2 points

I say go ahead and pick a number out of the air,

A somewhat arbitrary starting number is also useful as a seed for a process of iterative approximation to a true value.
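As a sketch of that idea, Newton's method recovers a square root from an essentially arbitrary positive seed; the function and the particular seed below are my own illustration, not anything from the discussion above.

```python
def newton_sqrt(x, seed=1.0, iters=20):
    """Iteratively approximate sqrt(x), starting from an arbitrary seed.

    The seed is picked more or less out of the air; each iteration of
    g -> (g + x/g) / 2 pulls it closer to the true value, so the
    arbitrariness of the starting point washes out.
    """
    g = seed
    for _ in range(iters):
        g = 0.5 * (g + x / g)
    return g
```

Even a wildly wrong seed such as 100.0 converges to sqrt(2) within a handful of iterations, which is the sense in which an arbitrary starting number can still be useful.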

Comment author: Constant2 24 July 2008 02:16:03PM 5 points

how you can talk about probabilities without talking about several possible worlds

But if probability is in the mind, and the mind in question is in this world, why are other worlds needed? Moreover (from Wikipedia):

In Bayesian theory, the assessment of probability can be approached in several ways. One is based on betting: the degree of belief in a proposition is reflected in the odds that the assessor is willing to bet on the success of a trial of its truth.

Disposition to bet surely does not require a commitment to possible worlds.
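A minimal sketch of the betting interpretation, assuming a simple stake-to-win-payoff bet (the function name and numbers are mine, chosen for illustration):

```python
def belief_from_odds(stake, payoff):
    """Degree of belief revealed by a bet.

    An assessor who is just willing to risk `stake` for a chance to win
    `payoff` if the proposition turns out true reveals a degree of
    belief of stake / (stake + payoff). No possible worlds are invoked:
    only the assessor's disposition to bet in this one.
    """
    return stake / (stake + payoff)
```

For instance, accepting odds of 1:3 on a proposition reveals a degree of belief of 0.25, and even odds reveal 0.5.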

Comment author: Constant2 16 July 2008 09:02:27AM 0 points

Elephants are not properties of physics any more than probabilities are. The concept of an elephant is subjective - as are all concepts.

If you are indeed agreeing with the parallel I have set up between probability and elephants and if this is not just your own personal view, then perhaps the subjectivist theory of probability should more properly be called the subjectivist theory of pretty much everything that populates our familiar world. Anyway, I think I can agree that probability is as subjective and as psychological and as non-physical and as existing in the mind and not in the world as an elephant or, say, an exploding nuclear bomb - another item that populates our familiar world.

Comment author: Constant2 16 July 2008 07:02:32AM 0 points

With complete information (and a big computer) an observer would know which way the coin would land - and would find probabilities irrelevant.

But this is true of most everyday observations. We observe events on a level far removed from the subatomic level. With complete information and infinite computing power an observer would find all or virtually all ordinary human-level observations irrelevant. But irrelevancy to such an observer is not the same thing as non-reality. For example, the existence of elephants would be irrelevant to an observer who has complete information on the subatomic level and sufficient computing power to deal with it. But it does not follow that elephants do not exist. Do you think it follows that elephants do not exist?

The probabilities arise from ignorance and lack of computing power - properties of observers, not properties of the observed.

The concept of an elephant could with equal reason be said to arise from ignorance and lack of computing power. I can certainly understand that a thought such as, "the elephant likes peanuts, therefore it will accept this peanut" is a much easier thought to entertain than a thought that infallibly tracks every subatomic particle in its body and in the environment around it. So, certainly, the concept of an elephant is a wonderful shortcut. But I'm not so sure about getting from this to the conclusion that elephants (like probability) are subjective. Do you think that elephants are subjective?

Comment author: Constant2 15 July 2008 10:05:05PM 0 points

If such ideas seem unproblematic to you

It is the example that seems on the face of it unproblematic. I am open to (a) a demonstration that it is compatible with subjectivism[*], or (b) a demonstration that it is problematic, or to something else entirely. In any case, I don't adhere to frequentism.

[*] (I made no firm claim that it is not compatible with subjectivism - you are the one who rejected the compatibility - my own purpose was only to raise the question since it seems on the face of it hard to square with subjectivism, not to answer the question definitively.)

Comment author: Constant2 15 July 2008 08:07:26PM 0 points

Jaynes' perspective on the historical behaviour of biased coins would make no mention of probability - unless he was talking about the history of the expectations of some observer with partial information about the situation. Do you see anything wrong with that?

I see nothing wrong with that. Similarly, if someone mentions only the atoms in my body, and never mentions me, there is nothing wrong with that. However, I am also there.

What I have pointed out is that seemingly unproblematic statements can indeed be made of the sort that I described. That Jaynes himself makes no such statements says nothing one way or another about this. There are different possible responses, including:

1) It might be shown that certain classes of factual statements about history, including the one I gave, are in fact in some sense relative, may incorporate a tacit perspective and therefore may be in that sense subjective. An example of such a statement might be a statement that an object is "at rest" rather than "in motion". This statement tacitly presupposes a frame of reference, and so is in that sense not fully objective.

2) It might be shown that there was something wrong about the sort of statement that I gave as an example.
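The "at rest" versus "in motion" example in (1) can be sketched with a Galilean velocity transform (a toy illustration of mine, not drawn from the exchange above): the same object's state of motion comes out differently depending on the frame tacitly presupposed.

```python
def velocity_in_frame(v_object, v_frame):
    """Galilean transform: velocity of an object as measured in a frame
    itself moving at v_frame (all velocities along one axis, in m/s).

    Whether the object counts as "at rest" or "in motion" depends on
    the frame chosen, not on the object alone.
    """
    return v_object - v_frame

# A train moving at 30 m/s relative to the ground:
ground_view = velocity_in_frame(30.0, 0.0)    # "in motion" in the ground frame
train_view = velocity_in_frame(30.0, 30.0)    # "at rest" in the train's own frame
```

The statement "the train is at rest" is thus true in one frame and false in another, which is the sense in which it incorporates a tacit perspective without thereby being arbitrary.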
