A major factor that I did not see on the list is the rate of progress on algorithms for, and the closely related formal understanding of, deep AI systems. Right now these algorithms can be surprisingly effective (AlphaZero, GPT-3) but are extremely compute intensive and often sample inefficient. Lacking any comprehensive formal model of why deep learning works as well as it does, and why it fails when it does, we are groping toward better systems.
Right now the incentives favor scaling compute power to get more marquee results, since finding more efficient algorithms...
Suppose, as this argues, that effective monopoly on AGI is a necessary factor in AI risk. Then effective anti-monopoly mechanisms (maybe similar to anti-trust?) would be significant mitigators of AI risk.
The AGI equivalent of cartels could contribute to risk as well, so the anti-monopoly mechanisms would have to deal with that as well. Lacking some dominant institutions to enforce cartel agreements, however, it should be easier to handle cartels than monopolies.
Aside from the "foom" story, what are the arguments that we are at risk of an effective monopoly on AGI?
And what are the arguments that large numbers of AGIs of roughly equal power still represent a risk comparable to a single monopoly AGI?
Many of us believe that in human affairs, central planning is dominated by diverse local planning plus markets. Do we really believe that for AGIs, central planning will become dominant? That would be surprising.
In general, AGIs will have to delegate tasks to sub-agents as they grow; otherwise they run into computational and physical bottlenecks.
Local capabilities of sub-agents raise many issues of coordination that can't just be assumed away. Sub-agents spawned by an AGI must take advantage of local computation, memory, and often local data acquisition.
Karnofsky's focus on "tool AI" is useful but also his statement of it may confuse matters and needs refinement. I don't think the distinction between "tool AI" and "agent AI" is sharp, or in quite the right place.
For example, the sort of robot cars we will probably have in a few years are clearly agents-- you tell them to "come here and take me there" and they do it without further intervention on your part (when everything is working as planned). This is useful in a way that any amount and quality of question answering...
Yes, sorry, fixed. I couldn't find any description of the markup conventions and there's no preview button (but thankfully an edit button).
I wish I could be optimistic about some DSL approach. The history of AI has a lot of examples of people creating little domain languages. The problem is the lack of ability to handle vagueness. The domain languages work OK on some toy problems and then break down when the researcher tries to extend them to problems of realistic complexity.
On the other hand there are AI systems that work. The best examples I know about are at Stanford -- controlling cars, helicopters, etc. In those cases the researchers are confronting realistic domains that are large...
First, my own observation agrees with GreenRoot's. My view is less systematic but extends over a much longer period -- I've been watching this area since the '70s. (Perhaps longer; I was fascinated in my teens by Leibniz's injunction "Let us calculate".)
Empirically, I think several decades of experiment have established that no obvious or simple approach will work. Unless someone has a major new idea, we should not pursue straightforward graphical representations.
On the other hand we do have a domain where machine usable representation of thought has been successful,...
There are some real risks, but also some sources of tremendous fear that turn out to be illusory. Here I'm not talking about fear as in "I imagine something bad" but fear as in "I was paralyzed by heart-stopping terror and couldn't go on".
The most fundamental point is that our bodies have layers and layers of homeostasis and self-organization that act as safety nets. "You" don't have to hold yourself together or make sure you identify with your own body -- that's automatic. You probably could identify yourself as a hamburger ...
What I'm saying is that I'm not sure this doesn't amount to, well, hacking my goal system in a bad way, in a way I ought to be rationally terrified of.
And I think that actually ending up in a state where I think of such-and-such random object as "actually me" is itself perhaps a bad thing, unless it's brief or I can remember and act on the knowledge that it's not.
I.e., if I were uploaded, and a convenient little interface was handed to me that let me click a button to twiddle what amounts to the pleasure centers in my brain, I'd want to do the equivalent...
The Homebrew Computer Club was pretty much the kind of community that Eliezer describes, and it had a big effect on the development of digital systems. The same is probably true of the model railroad club at MIT (where the PDP architecture was created), but I know less about that. The MIT AI Lab was also important in that way, and welcomed random people from outside (including kids). So this pattern has been important in tech development for at least 60 years.
There are lots of get-togethers around common interests -- see e.g. Perl Mongers groups in various cities. S...
Regardless of value, the experiences Crowley reports are very far from a free lunch -- they take a lot of time, effort, and careful arrangement.
Don't think of them as knowledge; think of them as skills -- like learning to read or to do back-of-the-envelope calculations. They enable certain ways of acquiring or using knowledge. We don't know that the knowledge is at all unique to the mode.
I'm glad to see this. Crowley was a very accurate observer in many cases.
Henk Barendregt wrote a recent account; he's a professor of math and computer science at Nijmegen (Netherlands) and an adjunct at CMU.
The comment that this is about developing skills is very accurate. Drugs can induce similar states but they don't help to develop the cognitive control skills. Unfortunately we have very few disciplines that teach the development of cognitive self-management without a lot of peculiar window dressing.
Regarding Crowley's comment on his later experi...