All of jedharris's Comments + Replies

The Sea of Faith

Was once, too, at the full, and round earth’s shore

Lay like the folds of a bright girdle furled.

But now I only hear

Its melancholy, long, withdrawing roar,

Retreating, to the breath

Of the night-wind, down the vast edges drear

And naked shingles of the world.
 

Ah, love, let us be true

To one another! for the world, which seems

To lie before us like a land of dreams,

So various, so beautiful, so new,

Hath really neither joy, nor love, nor light,

Nor certitude, nor peace, nor help for pain;

And we are here as on a darkling plain

Swept with confused

... (read more)

A major factor that I did not see on the list is the rate of progress on algorithms for deep AI systems, and the closely related formal understanding of them. Right now these algorithms can be surprisingly effective (AlphaZero, GPT-3) but are extremely compute-intensive and often sample-inefficient. Lacking any comprehensive formal model of why deep learning works as well as it does, and why it fails when it does, we are groping toward better systems.

Right now the incentives favor scaling compute power to get more marquee results, since finding more efficient algor... (read more)

Daniel Kokotajlo
Hmm, interesting point. I had considered things like "New insights accelerate AI development" but I didn't put them in because they seemed too closely intertwined with AI timelines. But yeah, now that you mention it, I think it deserves to be included. Will add!
gwern
GPT-3 is very sample-efficient. You can put in just a few examples, and it'll learn a new task, much like a human would! Oh, did you mean, sample-inefficient in training data? Yeah, I suppose, but I don't see why anyone particularly cares about that.

Suppose, as this argues, that effective monopoly on AGI is a necessary factor in AI risk. Then effective anti-monopoly mechanisms (maybe similar to anti-trust?) would be significant mitigators of AI risk.

The AGI equivalent of cartels could contribute to risk as well, so the anti-monopoly mechanisms would have to deal with those too. Lacking some dominant institution to enforce cartel agreements, however, it should be easier to handle cartels than monopolies.

Aside from the "foom" story, what are the arguments that we are at risk of an effective monopoly on AGI?

And what are the arguments that large numbers of AGIs of roughly equal power still represent a risk comparable to a single monopoly AGI?

Many of us believe that, in human affairs, central planning is dominated by diverse local planning plus markets. Do we really believe that for AGIs central planning will become dominant? That would be surprising.

In general, AGIs will have to delegate tasks to sub-agents as they grow; otherwise they run into computational and physical bottlenecks.

Local capabilities of sub-agents raise many issues of coordination that can't just be assumed away. Sub-agents spawned by an AGI must take advantage of local computation, memory, and often local data acquisition

... (read more)
Viliam
To avoid taking the analogy "humans : AGIs" too far, there are a few important differences. Humans cannot be copied. Humans cannot be quickly and reliably reprogrammed. Humans have their own goals besides the goals of the corporation. None of this needs to apply to computer sub-agents. Also, we have systems where humans are more obedient than usual: cults and armies. But cults need to keep their members uninformed about the larger picture, and armies specialize in fighting (as opposed to e.g. productive economic activity). The AGI society could be like a cult, but without keeping members in the dark, because the sub-agents would genuinely want to serve their master. And it could be economically active, with army levels of discipline.
jedharris

Karnofsky's focus on "tool AI" is useful, but his statement of it may confuse matters and needs refinement. I don't think the distinction between "tool AI" and "agent AI" is sharp, or in quite the right place.

For example, the sort of robot cars we will probably have in a few years are clearly agents -- you tell them to "come here and take me there" and they do it without further intervention on your part (when everything is working as planned). This is useful in a way that any amount and quality of question an... (read more)

brazil84
Yes, I agree. Evidently, the environment cars work in is too fast-paced and quickly changing for "tool AI" to be close in usefulness to "agent AI." To drive safely and effectively, you need to be making and implementing decisions on the time frame of a split second. At the same time, the lesson to be learned is that useful AI can have a utility function which is pretty mundane -- e.g. "find a fast route from point A to point B while minimizing the chances of running off the road or running into any people or objects." Similarly, instead of telling an AI to "improve human welfare" we can tell it to do things like "find ways to kill cancerous cells while keeping collateral damage to a minimum." The higher-level decisions about improving human welfare can be left to the traditional institutions -- legislatures, courts, and individual autonomy.
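Purely as an illustration of how narrow that kind of "mundane" objective can be, here is a toy Python sketch of a route-picking cost function in that spirit. The segment fields, weights, and function names are all hypothetical, chosen only to make the point; nothing here comes from an actual driving system.

```python
# Toy sketch of a narrow, bounded objective for a driving agent.
# All names and weights are hypothetical and purely illustrative.

def route_cost(route, risk_weight=100.0):
    """Score a candidate route: prefer fast routes, heavily penalize risk.

    `route` is assumed to be a list of segments, each a dict with an
    estimated travel time in seconds and an estimated collision/off-road
    probability.
    """
    total_time = sum(seg["time_s"] for seg in route)
    total_risk = sum(seg["collision_prob"] for seg in route)
    return total_time + risk_weight * total_risk

def choose_route(candidates):
    # The agent simply picks the lowest-cost candidate -- no open-ended
    # objective like "improve human welfare" is anywhere in the loop.
    return min(candidates, key=route_cost)
```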
RomeoStevens
"how do I build an automated car?"

Yes, sorry, fixed. I couldn't find any description of the markup conventions and there's no preview button (but thankfully an edit button).

mattnewport
It's not terribly obvious, but the little 'Help' link at the bottom of the comment box gives the most useful markup conventions. More complete documentation is here.

I wish I could be optimistic about some DSL approach. The history of AI has a lot of examples of people creating little domain languages. The problem is the lack of ability to handle vagueness. The domain languages work OK on some toy problems and then break down when the researcher tries to extend them to problems of realistic complexity.

On the other hand there are AI systems that work. The best examples I know about are at Stanford -- controlling cars, helicopters, etc. In those cases the researchers are confronting realistic domains that are larg... (read more)

jedharris

First, my own observation agrees with GreenRoot. My view is less systematic but spans a much longer period: I've been watching this area since the '70s. (Perhaps longer; I was fascinated in my teens by Leibniz's injunction "Let us calculate".)

Empirically, I think several decades of experiment have established that no obvious or simple approach will work. Unless someone has a major new idea, we should not pursue straightforward graphical representations.

On the other hand, we do have a domain where machine-usable representation of thought has been successful,... (read more)

aausch
The problem has consistently appeared to me to be related to the use of incorrect abstractions. Most of the visual attempts I've seen have been roughly equivalent to printing binary code to the screen as an attempt at a textual representation of a program. I'm still (very optimistically) waiting for a video game which tackles this problem successfully (some of the FF series ones have done an OK job).
zero_call
Couldn't help but think of Wikipedia as a kind of example of this "vagueness/resolution" problem.
mattnewport
You can use *italics* for italics and **bold** for bold. Good comment, btw; from experience I'm very much in agreement about the futility of visual programming.
Morendil
From professional experience (I've been a programmer since the '80s and was paid for it from the '90s onward) I agree with you entirely re. graphical representation. That doesn't keep generation after generation of tool vendors from crowing that, thanks to their new insight, programming will finally be made easy through "visual this, that or the other" -- UML being the latest such to have a significant impact. You have me pondering what we might gain from whipping up a Domain-Specific Language (say, in a DSL-friendly base language such as Ruby) to represent arguments in. It couldn't be too hard to bake some basics of Bayesian inference into that.
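To give a rough sense of how little machinery that would take, here is a minimal sketch of such an argument DSL -- written in Python rather than the Ruby suggested above, and with entirely hypothetical class names, method names, and probabilities -- with a single naive Bayes-style update baked in.

```python
# Minimal sketch of an argument DSL with basic Bayesian updating.
# A Claim holds a prior; each piece of evidence contributes a likelihood
# ratio; posterior() folds them together in odds form.

class Claim:
    def __init__(self, statement, prior):
        self.statement = statement
        self.prior = prior
        self.evidence = []

    def given(self, description, p_if_true, p_if_false):
        """Attach evidence: P(E | claim true) and P(E | claim false)."""
        self.evidence.append((description, p_if_true, p_if_false))
        return self  # return self to allow chaining, DSL-style

    def posterior(self):
        odds = self.prior / (1 - self.prior)
        for _, p_true, p_false in self.evidence:
            odds *= p_true / p_false  # multiply in each likelihood ratio
        return odds / (1 + odds)

# Usage: represent a small argument and see how the evidence moves belief.
# The claim and the numbers are made up for illustration.
claim = (Claim("Visual programming will go mainstream", prior=0.3)
         .given("decades of failed attempts", p_if_true=0.2, p_if_false=0.6)
         .given("UML saw significant adoption", p_if_true=0.5, p_if_false=0.3))
print(round(claim.posterior(), 3))  # roughly 0.192
```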

There are some real risks, but also some sources of tremendous fear that turn out to be illusory. Here I'm not talking about fear as in "I imagine something bad" but fear as in "I was paralyzed by heart-stopping terror and couldn't go on".

The most fundamental point is that our bodies have layers and layers of homeostasis and self-organization that act as safety nets. "You" don't have to hold yourself together or make sure you identify with your own body -- that's automatic. You probably could identify yourself as a hamburger ... (read more)

Psy-Kosh

What I'm saying is that I'm not sure this doesn't amount to, well, hacking my goal system in a bad way, in a way I ought to be rationally terrified of.

And I think actually ending up in a state where I think of such-and-such random object as "actually me" is itself perhaps a bad thing, unless it's brief or I can remember and act on the knowledge that it's not.

I.e., if I were uploaded, and a convenient little interface was handed to me that let me click a button to twiddle what amounts to the pleasure centers in my brain, I'd want to do the equivalent... (read more)

jedharris

The Homebrew Computer Club was pretty much the kind of community that Eliezer describes; it had a big effect on the development of digital systems. The same is probably true of the model railroad club at MIT (where the PDP architecture was created), but I know less about that. The MIT AI Lab was also important that way, and welcomed random people from outside (including kids). So this pattern has been important in tech development for at least 60 years.

There are lots of get-togethers around common interests -- see e.g. Perlmonger groups in various cities. S... (read more)

Regardless of value, the experiences Crowley reports are very far from a free lunch -- they take a lot of time, effort, and careful arrangement.

Don't think of them as knowledge; think of them as skills -- like learning to read or do back-of-the-envelope calculations. They enable certain ways of acquiring or using knowledge. We don't know that the knowledge is at all unique to the mode.

I'm glad to see this. Crowley was a very accurate observer in many cases.

Henk Barendregt wrote a recent account; he's a professor of math and computer science at Nijmegen (Netherlands) and an adjunct at CMU.

The comment that this is about developing skills is very accurate. Drugs can induce similar states, but they don't help to develop the cognitive control skills. Unfortunately, we have very few disciplines that teach the development of cognitive self-management without a lot of peculiar window dressing.

Regarding Crowley's comment on his later experi... (read more)