rlsj
CEV, MR, MP ... We do love complexity! Is such love a defining characteristic of intelligent entities?
The main point of all morality, as it is commonly practiced and understood, is restrictive, not promotional. A background moral code should not be expected to suggest goals to the AI, merely to rule some of them out. The Libertarian "brass" rule is a case in point: "Do not unto others as you would not have them do unto you," which may be summarized as "Do no harm."
Of course, "others" has to be defined, perhaps as entities demonstrating sufficiently complex behavior, and exceptions have to be addressed, such as a third-party about to... (read more)
How does "just following orders," a la Nuremberg, bear upon this issue? It's out of control when its behavior is neither ordered nor wanted.
How would you tell? By its behavior: doing something you neither ordered nor wanted.
Think of the present-day "autonomous laborer" with an IQ about 90. The only likely way to lose control of him is for some agitator to instill contrary ideas. Censorship for robots is not so horrible a regime.
Who is it that really wants AGI, absent proof that we need it to automate commodity production?
Excuse me? What makes you think it's in control? Central Planning lost a lot of ground in the Eighties.
Please, Madam Editor: "Without the benefit of hindsight," what technologies could you possibly expect?
The question should perhaps be, What technology development made the greatest productive difference? Agriculture? IT? Et alia? "Agriculture" if your top appreciation is for quantity of people, which admittedly subsumes a lot; IT if it's for positive feedback in ideas. Electrification? That's the one I'd most hate to lose.
Once a society of real abundance has been achieved, further economic growth will have lost its incentive.
We can argue whether or not such a society is truly reachable, even if only in the material sense. If not, because of human intractability or AGI inscrutability, progress may continue onward and upward. Perhaps here, as in happiness, it's the pursuit that counts.
An AI can be dangerous only if it escapes our control. The real question is, must we flirt with releasing control in order to obtain a necessary or desirable usefulness? It seems likely that autonomous laborers, assembly-line workers, clerks and low-level managers would, without requiring such flirtation, be useful and sufficient for the society of abundance that is our main objective. But can they operate without a working AGI? We may find out if we let the robots stumble onward and upward.
"Does this mean that society would never discover certain facts had the most brilliant people not existed?"
Absolutely! If they or their equivalent had never existed in circumstances of the same suggestiveness. My favorite example of this uniqueness is the awesome imagination required first to "see" how stars appear when located behind a black hole -- the way they seem to congregate around the event horizon. Put another way: the imaginative power required to propose deflections of starlight that needed a solar eclipse to prove.
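For concreteness, the figure behind that eclipse test is standard textbook general relativity (not anything stated in the comment itself): a light ray grazing a mass M at impact parameter b is deflected by

```latex
% Deflection of light grazing a mass M at impact parameter b
% (standard GR result; numbers below use solar values):
\theta = \frac{4GM}{c^{2}b}
       \approx \frac{4\,(6.67\times10^{-11})(1.99\times10^{30})}
                    {(3.00\times10^{8})^{2}\,(6.96\times10^{8})}
       \approx 8.5\times10^{-6}\ \text{rad} \approx 1.75''
```

That 1.75 arcseconds is the prediction Eddington's 1919 eclipse expedition confirmed.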
For anything whose function and sequencing we thoroughly understand, the programming is straightforward and easy, at least in the conceptual sense. That covers most games, including video games. The computer's "side" in a video game, for example, looks conceptually difficult but most of the time turns out to be nothing more than decision trees.
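As a minimal sketch of that claim (the game, its state fields, and the thresholds here are all invented for illustration; real engines differ, but the logical shape is the same):

```python
# A video-game opponent's "side" written as a plain decision tree.
from dataclasses import dataclass

@dataclass
class GameState:
    own_health: int        # 0-100
    enemy_visible: bool
    enemy_distance: float  # in game units
    ammo: int

def choose_action(s: GameState) -> str:
    """Walk a fixed decision tree and return the opponent's next action."""
    if s.own_health < 25:
        return "retreat"                 # survival branch
    if not s.enemy_visible:
        return "patrol"                  # idle branch
    if s.ammo == 0:
        return "melee" if s.enemy_distance < 2.0 else "seek_ammo"
    return "shoot" if s.enemy_distance < 30.0 else "advance"

print(choose_action(GameState(own_health=80, enemy_visible=True,
                              enemy_distance=12.0, ammo=5)))  # -> shoot
```

However clever the opponent looks in play, every behavior traces back to one branch of a tree like this.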
The challenge is the tasks we can't precisely define, like general intelligence. The rewarding approach here is to break such processes down into identifiable subtasks. A case in point is understanding natural languages, one of whose essential questions is: what is the meaning of "meaning"? For a machine it can only be the content of a subroutine, or pointers to subroutines. The input problem, converting sentences into sets of executable concepts, is thus approachable. The output problem, converting unpredictable concepts into words, is much tougher; it may involve growing decision trees on the fly.
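A toy sketch of that "meaning as subroutine" idea (the vocabulary, the handler functions, and the verb-object sentence format are all hypothetical simplifications, not a real NLP pipeline):

```python
# Illustration of "the meaning of a word is a subroutine, or a pointer
# to one," and of the input problem: sentence -> executable concept.

def act_open(obj: str) -> str:
    return f"opening the {obj}"

def act_close(obj: str) -> str:
    return f"closing the {obj}"

# The "lexicon": each word's meaning is literally a pointer to a subroutine.
MEANINGS = {
    "open": act_open,
    "close": act_close,
}

def interpret(sentence: str) -> str:
    """Convert a verb-object sentence into a call on the verb's subroutine."""
    verb, obj = sentence.lower().split(maxsplit=1)
    handler = MEANINGS.get(verb)
    if handler is None:
        raise ValueError(f"no meaning stored for {verb!r}")
    return handler(obj)

print(interpret("Open pod bay doors"))  # -> opening the pod bay doors
```

The reverse direction has no such table to walk: the machine would have to assemble words around a concept it has never stored, which is why the output problem is so much harder.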
"[Y]ou notice that [a proposed action] has some harmful effect in the future on someone you consider morally important. You then end up not being able to do anything ..."
Not being able to do that thing, yes, and you shouldn't do it -- unless you can obviate the harm. A case in point is the AGI taking over management of all commodity production and thus putting the current producers out of work. But how is that harmful to them? They can still perform the acts if they wish. They can't earn a living, you say? Well, then, let the AGI support them. Ah, but then, you...