Comment author: Vaniver 25 February 2015 03:02:05PM 2 points [-]

The Libertarian "brass" rule is a case in point: "Do not unto others as you would not have them do unto you," which may be summarized as "Do no harm."

Suppose you had perfect omniscience. (I'm not saying an AI would, I'm just setting up a hypothetical.) It might be the case that whenever you consider doing something, you notice that it has some harmful effect in the future on someone you consider morally important. You then end up not being able to do anything, including not being able to do nothing -- because doing nothing also leads to harm in the future. So we can't just ban all harm; we need to somehow proportionally penalize harm, so that it's better to do less harm than more harm. But there are good things that are worth purchasing with harm, and so then we're back into tradeoff territory and maximizing profit instead of just minimizing cost.
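The paralysis argument above can be made concrete with a toy comparison. This is only an illustration with invented action names and made-up benefit/harm numbers, not a proposal for how an AI should actually score actions:

```python
# Toy illustration: "ban all harm" vs. "penalize harm proportionally".
# All actions and numbers are invented for the example.
actions = {
    "do_nothing":    {"benefit": 0.0,  "harm": 2.0},  # inaction harms too
    "build_dam":     {"benefit": 10.0, "harm": 3.0},
    "cautious_plan": {"benefit": 4.0,  "harm": 1.0},
}

# Rule 1: forbid any action with nonzero harm -- nothing survives,
# including doing nothing.
permitted = [a for a, v in actions.items() if v["harm"] == 0.0]
print(permitted)  # []

# Rule 2: penalize harm proportionally and pick the best net outcome.
best = max(actions, key=lambda a: actions[a]["benefit"] - actions[a]["harm"])
print(best)  # build_dam
```

Under the absolute ban the agent has no permitted action at all; under the proportional rule it simply chooses the action with the best benefit-minus-harm balance, which is the "maximizing profit instead of just minimizing cost" shift described above.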

(Indeed, the function of morality seems to mostly be to internalize externalities, rather than simply minimize negative externalities. Rules like "do no harm" serve for this purpose by making you consider harm to others before you act, which hopefully prevents you from doing things that are net negative while still allowing you to do things that are net positive.)

The brass rule does not require rendering assistance.

Humans have some idea of commission and omission: consider the difference between me running my car into you, you walking into the path of my car, and me not grabbing you to prevent you from walking into the path of a car. The first would be murder, the second possibly manslaughter and possibly not, and the third is not a crime. But that's a human-sized sense of commission and omission. It's not at all clear that AGIs will operate on the same scale.

When one takes a system-sized viewpoint, commission and omission become very different. The choice to not add a safety feature that makes accidents less likely does make the system-designer responsible for those accidents in some way, but not in a way that maps neatly on to murder, manslaughter, and nothing.

It seems like AGIs are more likely to operate on a system-sized viewpoint than a human-sized viewpoint. It's not enough to tell Google "don't be evil" and trust that their inborn human morality will correctly translate "evil." What does it mean for an institution the size and shape of Google to be evil? They need to make many tradeoffs that people normally do not have to consider, and thus may not have good intuitions for.

Comment author: rlsj 26 February 2015 10:00:33PM 0 points [-]

"[Y]ou notice that [a proposed action] has some harmful effect in the future on someone you consider morally important. You then end up not being able to do anything ..."

Not being able to do that thing, yes, and you shouldn't do it -- unless you can obviate the harm. A case in point is the AGI taking over management of all commodity production and thus putting the current producers out of work. But how is that harmful to them? They can still perform the acts if they wish. They can't earn a living, you say? Well, then, let the AGI support them. Ah, but then, you suppose, they can't enjoy the personal worth that meaningful employment reinforces. The what? Let's stick to the material, please.

"You then end up not being able to do nothing -- because doing nothing also leads to harm in the future."

That does not follow. Doing nothing is always an option under the brass rule. Morally you are not the cause of any harm that then occurs, if any.

Commission vs. omission [of causative actions]: Omitting an action may indeed allow an entity to come to harm, but this is not a moral issue unless acting would harm that entity or another, perhaps to a lesser degree. Commission -- taking action -- is the problematic case. I repeat: a coded moral system should be restrictive, not promotional. Preventing external harm may be desirable and admirable but is never morally imperative, however physically imperative it may be.

"[Not adding a safety feature] does make the system-designer responsible for [the resulting] accidents in some way ..."

Only by the "Good Samaritan" moral code, in which this society is so dolefully steeped. I prefer Caveat emptor. It may be that when AGIs are the principal operators of harmful equipment, the obsession with safety will moderate.

Comment author: rlsj 25 February 2015 02:34:59AM 0 points [-]

CEV, MR, MP ... We do love complexity! Is such love a defining characteristic of intelligent entities?

The main point of all morality, as it is commonly practiced and understood, is restrictive, not promotional. A background moral code should not be expected to suggest goals to the AI, merely to denigrate some of them. The Libertarian "brass" rule is a case in point: "Do not unto others as you would not have them do unto you," which may be summarized as "Do no harm."

Of course, "others" has to be defined, perhaps as entities demonstrating sufficiently complex behavior, and exceptions have to be addressed, such as a third-party about to harm a second party. Must you restrain the third-party and likely harm her instead?

"Harm" will also need precise definition but that should be easier.

The brass rule does not require rendering assistance. Would ignoring external delivery of harm be immoral? Yes, by the "Good Samaritan" rule, but not by the brass rule. A near-absolute adherence to the brass rule would solve most moral issues, whether for AI or human.

"Near-absolute" because all the known consequences of an action must be considered in order to determine if any harm is involved and if so, how negatively the harm weighs on the goodness scale. An example of this might be a proposal to dam a river and thereby destroy a species of mussel. Presumably mussels would not exhibit sufficiently complex behavior in their own right, so the question for this consequence becomes how much their loss would harm those who do.

Should an AI protect its own existence? Not if doing so would harm a human or another AI. This addresses Asimov's three laws, even the first. The brass rule does not require obeying anything.

Apart from avoiding significant harm, the selection of goals does not depend on morality.

--rLsj

Comment author: leplen 17 September 2014 04:16:14PM 3 points [-]

In my experience, computer systems currently get out of my control by doing exactly what I ordered them to do, which is frequently different from what I wanted them to do.

Whether or not a system is "just following orders" doesn't seem to be a good metric for it being under your control.

Comment author: rlsj 17 September 2014 11:42:19PM 1 point [-]

How does "just following orders," a la Nuremberg, bear upon this issue? It's out of control when its behavior is neither ordered nor wanted.

Comment author: KatjaGrace 16 September 2014 03:34:34AM 1 point [-]

In what sense do you think of an autonomous laborer as being under 'our control'? How would you tell if it escaped our control?

Comment author: rlsj 16 September 2014 08:39:49PM 2 points [-]

How would you tell? By its behavior: doing something you neither ordered nor wanted.

Think of the present-day "autonomous laborer" with an IQ about 90. The only likely way to lose control of him is for some agitator to instill contrary ideas. Censorship for robots is not so horrible a regime.

Who is it that really wants AGI, absent proof that we need it to automate commodity production?

Comment author: KatjaGrace 16 September 2014 04:08:04AM 2 points [-]

Bostrom says that it is hard to imagine the world economy having a doubling time as short as weeks, without minds being created that are much faster and more efficient than those of humans (p2-3). Do you think humans could maintain control of an economy that grew so fast? How fast could it grow while humans maintained control?

Comment author: rlsj 16 September 2014 07:56:58PM 7 points [-]

Excuse me? What makes you think it's in control? Central Planning lost a lot of ground in the Eighties.

Comment author: KatjaGrace 16 September 2014 02:50:20PM 1 point [-]

Sorry! I edited it - tell me if it still isn't clear.

Comment author: rlsj 16 September 2014 07:49:38PM 2 points [-]

Please, Madam Editor: "Without the benefit of hindsight," what technologies could you possibly expect?

The question should perhaps be, What technology development made the greatest productive difference? Agriculture? IT? Et alia? "Agriculture" if your top appreciation is for quantity of people, which admittedly subsumes a lot; IT if it's for positive feedback in ideas. Electrification? That's the one I'd most hate to lose.

Comment author: ciphergoth 16 September 2014 08:13:23AM 1 point [-]

The steam engine heralded the Industrial Revolution and a lasting large increase in doubling rate. I would expect rapid economic growth after either of these inventions, followed by a return to the existing doubling rate.

Comment author: rlsj 16 September 2014 07:30:57PM 2 points [-]

After achieving a society of real abundance, further economic growth will have lost its incentive.

We can argue whether or not such a society is truly reachable, even if only in the material sense. If not, because of human intractability or AGI inscrutability, progress may continue onward and upward. Perhaps here, as in happiness, it's the pursuit that counts.

Comment author: KatjaGrace 16 September 2014 02:11:48AM 2 points [-]

Good points. Any thoughts on what the dangerous characteristics might be?

Comment author: rlsj 16 September 2014 03:05:08AM 2 points [-]

An AI can be dangerous only if it escapes our control. The real question is, must we flirt with releasing control in order to obtain a necessary or desirable usefulness? It seems likely that autonomous laborers, assembly-line workers, clerks and low-level managers would, without requiring such flirtation, be useful and sufficient for the society of abundance that is our main objective. But can they operate without a working AGI? We may find out if we let the robots stumble onward and upward.

Comment author: KatjaGrace 16 September 2014 02:01:34AM 1 point [-]

If there are insights that some humans can't 'comprehend', does this mean that society would never discover certain facts had the most brilliant people not existed, or just that they would never be able to understand them in an intuitive sense?

Comment author: rlsj 16 September 2014 02:45:10AM 1 point [-]

"Does this mean that society would never discover certain facts had the most brilliant people not existed?"

Absolutely! If they or their equivalent had never existed in circumstances of the same suggestiveness. My favorite example of this uniqueness is the awesome imagination required first to "see" how stars appear when located behind a black hole -- the way they seem to congregate around the event horizon. Put another way: the imaginative power able to propose star deflections that needed a solar eclipse to prove.

Comment author: KatjaGrace 16 September 2014 01:11:13AM 3 points [-]

AI seems to be pretty good at board games relative to us. Does this tell us anything interesting? For instance, about the difficulty of automating other kinds of tasks? How about the task of AI research? Some thoughts here.

Comment author: rlsj 16 September 2014 01:40:36AM 7 points [-]

For anything whose function and sequencing we thoroughly understand, the programming is straightforward and easy, at least in the conceptual sense. That covers most games, including video games. The computer's "side" in a video game, for example, which looks conceptually difficult, usually turns out to be nothing more than decision trees.
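A minimal sketch of that point, with invented states and actions: the "intelligence" of a simple game opponent is often just a cascade of condition checks.

```python
# A simple game opponent as a bare decision tree: nested condition
# checks, nothing more. States, thresholds, and actions are invented.
def enemy_action(player_visible, player_health, own_health):
    if not player_visible:
        return "patrol"
    if own_health < 20:
        return "retreat"
    if player_health < 20:
        return "attack"
    return "take_cover"

print(enemy_action(False, 100, 100))  # patrol
print(enemy_action(True, 100, 10))    # retreat
print(enemy_action(True, 10, 100))    # attack
```

Each branch is fully specified in advance by the designer, which is exactly why this kind of behavior is conceptually easy to program.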

The challenge is the tasks we can't precisely define, like general intelligence. The rewarding approach here is to break down processes into identifiable subtasks. A case in point is understanding natural languages, one of whose essential questions is, "What is the meaning of 'meaning'?" In terms of a machine it can only be the content of a subroutine or pointers to subroutines. The input problem, converting sentences into sets of executable concepts, is thus approachable. The output problem, however, converting unpredictable concepts into words, is much tougher. It may involve growing decision trees on the fly.
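The "meaning as a subroutine" idea can be sketched very crudely. In this toy example (all names, the world model, and the parsing scheme are invented for illustration), each word's meaning is literally a pointer to a function that acts on a simple world model:

```python
# Crude sketch of "meaning as a subroutine": a word's meaning is a
# pointer to a function acting on a toy world model. All names invented.
world = {"light": "off"}

def turn_on(obj):
    world[obj] = "on"

def turn_off(obj):
    world[obj] = "off"

# Each word maps to a subroutine -- its machine "meaning".
meanings = {"on": turn_on, "off": turn_off}

def interpret(sentence):
    # Crude parse assuming the fixed pattern "turn <state> <object>".
    _, state, obj = sentence.split()
    meanings[state](obj)

interpret("turn on light")
print(world)  # {'light': 'on'}
```

This is the "input problem" side: the sentence becomes a set of executable concepts. The reverse direction, generating novel sentences from arbitrary internal states, has no such fixed lookup table, which is why the comment calls the output problem much tougher.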
