jedharris

A major factor that I did not see on the list is the rate of progress on algorithms, and on the closely related formal understanding, of deep AI systems. Right now these algorithms can be surprisingly effective (AlphaZero, GPT-3), but they are extremely compute-intensive and often sample-inefficient. Lacking any comprehensive formal model of why deep learning works as well as it does, and of why it fails when it does, we are groping toward better systems.
Right now the incentives favor scaling compute to get more marquee results, since finding more efficient algorithms doesn't scale as well with increased money. However, the effort to make deep learning more efficient continues and probably can…
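To make "sample efficiency" concrete, here is a minimal toy sketch, assuming nothing beyond the Python standard library. The two learners are made-up stand-ins, not real deep-learning algorithms: both recover the same hidden threshold, but the one that exploits the structure of the problem needs far fewer samples, which is the kind of algorithmic gain I mean.

```python
import random

THRESHOLD = 0.37     # hidden concept: label is 1 iff x > THRESHOLD
TARGET_ERROR = 0.01

def sample():
    x = random.random()
    return x, x > THRESHOLD

def bracketing_learner():
    """Exploits structure: keeps the tightest bracket around the boundary."""
    lo, hi, n = 0.0, 1.0, 0
    while (hi - lo) / 2 > TARGET_ERROR:
        x, y = sample()
        n += 1
        if y and x < hi:
            hi = x          # positive example: boundary lies below x
        elif not y and x > lo:
            lo = x          # negative example: boundary lies above x
    return n

def sgd_learner(lr=0.01):
    """Sample-hungry: nudges a point estimate on every mistake."""
    est, n, streak = 0.5, 0, 0
    while streak < 200:     # crude stopping rule: 200 correct in a row
        x, y = sample()
        n += 1
        pred = x > est
        if pred == y:
            streak += 1
        elif y:             # predicted 0 on a positive: estimate too high
            streak, est = 0, est - lr
        else:               # predicted 1 on a negative: estimate too low
            streak, est = 0, est + lr
    return n

random.seed(0)
trials = 30
print("bracketing:", sum(bracketing_learner() for _ in range(trials)) / trials)
print("sgd-style: ", sum(sgd_learner() for _ in range(trials)) / trials)
```

On this toy task the bracketing learner typically needs tens of samples where the gradient-style learner needs thousands; nothing here depends on that exact ratio, only on the gap existing at all.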
Suppose, as this argues, that an effective monopoly on AGI is a necessary factor in AI risk. Then effective anti-monopoly mechanisms (perhaps similar to anti-trust law) would significantly mitigate AI risk.
The AGI equivalent of cartels could contribute to risk as well, so the anti-monopoly mechanisms would have to deal with those too. Lacking a dominant institution to enforce cartel agreements, however, cartels should be easier to handle than monopolies.
Aside from the "foom" story, what are the arguments that we are at risk of an effective monopoly on AGI?
And what are the arguments that large numbers of AGIs of roughly equal power still represent a risk comparable to a single monopoly AGI?
Many of us believe that in human affairs, central planning is dominated by diverse local planning plus markets. Do we really believe that for AGIs central planning will instead become dominant? That would be surprising.
In general, AGIs will have to delegate tasks to sub-agents as they grow; otherwise they run into computational and physical bottlenecks.
The local capabilities of sub-agents raise many coordination issues that can't just be assumed away. Sub-agents spawned by an AGI must take advantage of local computation, memory, and often local data acquisition; otherwise they confer no advantage. In general these local capabilities may cause divergent choices that require negotiation to generate…
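A toy sketch of the coordination problem I mean, assuming nothing about real AGI architectures (all names and numbers here are made up): each sub-agent plans from its own local data, so even agents with identical goals arrive at divergent answers, and some explicit reconciliation step is unavoidable.

```python
import random
from dataclasses import dataclass

@dataclass
class SubAgent:
    """A sub-agent with its own local view of the world (names hypothetical)."""
    name: str
    local_data: list  # observations only this sub-agent can see

    def local_estimate(self):
        # Planning from local data alone is cheap, but each agent's
        # answer diverges from the others' even with identical goals.
        return sum(self.local_data) / len(self.local_data)

def negotiate(estimates):
    """Toy reconciliation: averaging stands in for real negotiation,
    which would have to weigh evidence quality, stakes, and trust."""
    return sum(estimates.values()) / len(estimates)

random.seed(1)
true_value = 10.0
agents = [SubAgent(f"agent-{i}", [true_value + random.gauss(0, 2) for _ in range(5)])
          for i in range(4)]
estimates = {a.name: a.local_estimate() for a in agents}
print("divergent local estimates:", estimates)
print("negotiated shared estimate:", negotiate(estimates))
```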
Karnofsky's focus on "tool AI" is useful, but his statement of it may confuse matters and needs refinement. I don't think the distinction between "tool AI" and "agent AI" is sharp, or drawn in quite the right place.
For example, the sort of robot cars we will probably have in a few years are clearly agents: you tell them to "come here and take me there" and they do it without further intervention on your part (when everything works as planned). This is useful in a way that any amount and quality of question answering is not. Almost certainly there will be various flavors of robot cars…
Yes, sorry, fixed. I couldn't find any description of the markup conventions, and there's no preview button (but thankfully there is an edit button).
I wish I could be optimistic about some DSL approach. The history of AI offers many examples of people creating little domain languages. The problem is their inability to handle vagueness: the domain languages work OK on some toy problems and then break down when the researcher tries to extend them to problems of realistic complexity.
On the other hand, there are AI systems that work. The best examples I know of are at Stanford -- controlling cars, helicopters, etc. In those cases the researchers confront realistic domains that are largely out of their control. They are using statistical modeling…
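A minimal sketch of the contrast I have in mind (toy code, not any real system; the similarity measure is deliberately crude): an exact-match domain language fails outright on input its author didn't anticipate, while a statistical scorer degrades gracefully.

```python
import math

def dsl_parse(command):
    """Brittle "DSL" rule: matches only the exact forms its author anticipated."""
    rules = {"turn left": "LEFT", "turn right": "RIGHT"}
    return rules.get(command)   # None on anything unanticipated

def statistical_parse(command):
    """Statistical stand-in: score candidates and pick the most similar.
    A real system would use a learned model over far richer features."""
    prototypes = {"LEFT": "turn left", "RIGHT": "turn right"}
    def score(a, b):
        # crude bag-of-characters overlap, normalized by length
        common = sum(min(a.count(c), b.count(c)) for c in set(a))
        return common / math.sqrt(len(a) * len(b))
    return max(prototypes, key=lambda k: score(command, prototypes[k]))

print(dsl_parse("turn lefft"))          # None -- the rule breaks on noise
print(statistical_parse("turn lefft"))  # LEFT -- degrades gracefully
```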
First, my own observation agrees with GreenRoot's. My view is less systematic but covers a much longer span: I've been watching this area since the 70s. (Perhaps longer; I was fascinated in my teens by Leibniz's injunction "Let us calculate".)
Empirically, I think several decades of experiments have established that no obvious or simple approach will work. Unless someone has a major new idea, we should not pursue straightforward graphical representations.
On the other hand, we do have a domain where machine-usable representation of thought has been successful, and where in fact that representation has evolved fairly rapidly. That domain is "programming" in a broad sense.
Graphical…
There are some real risks, but also some sources of tremendous fear that turn out to be illusory. Here I'm not talking about fear as in "I imagine something bad" but fear as in "I was paralyzed by heart-stopping terror and couldn't go on".
The most fundamental point is that our bodies have layers and layers of homeostasis and self-organization that act as safety nets. "You" don't have to hold yourself together or make sure you identify with your own body -- that's automatic. You probably could identify yourself as a hamburger with the right meditation techniques or drugs, but it wouldn't last. The lower levels would kick…
The Homebrew Computer Club was pretty much the kind of community that Eliezer describes, and it had a big effect on the development of digital systems. The same is probably true of the Tech Model Railroad Club at MIT (where the PDP architecture was created), though I know less about that. The MIT AI Lab was also important in that way, and it welcomed random people from outside (including kids). So this pattern has been important in tech development for at least 60 years.
There are lots of get-togethers around common interests -- see e.g. the Perl Mongers groups in various cities, or the list of meetups in your city.
Recently "grass roots…