Ben Goertzel has made available pre-print copies of his book Engineering General Intelligence (Vol. 1, Vol. 2). The first volume is essentially the OpenCog organization's roadmap to AGI, and the second volume is a 700-page overview of the design.


From the book:

The central hypothesis underlying the CogPrime approach to AGI: that the cognitive synergy ensuing from integrating multiple symbolic and subsymbolic learning and memory components in an appropriate cognitive architecture and environment, can yield robust intelligence at the human level and ultimately beyond.

The OpenCogPrime roadmap from the opencog wiki:

  • Phase 1: Basic Components - Completion of essential aspects of AGI system design (mathematical, conceptual, and software-architectural), and implementation of initial versions of key components.

  • Phase 2: Artificial Toddler - On this topic, see the paper on "AGI Preschool" submitted to AGI-09. Refinement of design and implementation in the course of teaching the AI system to control an agent in a simulation world, according to a loosely Piagetian learning plan. Goal: an “artificial toddler” with qualitatively intelligent though not humanlike English conversation ability, involving simple sentences appropriately deployed in context, and the approximate problem-solving ability of an average four-year-old human child within the context of its simulation world.

  • Phase 3: Artificial Child - Interaction with the "artificial toddler" so as to teach it to think and communicate more effectively. Goal: an “artificial child” with the approximate problem-solving and communication ability of an average ten-year-old human child within the context of its simulation world.

  • Phase 4: Artificial Adult - Instruction of the “artificial child” in relevant topics, with a focus on bioscience, mathematics, and ethics. Refinement of implementation as necessary. Goal: an intelligent, ethical “artificial adult” and young “artificial scientist”.

  • Phase 5: Artificial Scientist - Instruction of the artificial scientist in AI design and general computer science. Goal: an ethical AI capable of radically modifying and improving its own implementation in accordance with its goals.

  • Phase 6: Artificial Intellect - An AI created by the artificial scientist. Goal: an ethical intellect capable of managing the AI scientists.

In terms of the above breakdown, at present we are near the start of Phase Two, and still wrapping up some aspects of Phase One.
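
To make the "cognitive synergy" hypothesis quoted above a little more concrete, here is a deliberately toy sketch of what "multiple learning components integrated through a common memory" could look like. This is not OpenCog code and does not use any of its actual APIs (AtomSpace, PLN, MOSES); the names SharedStore, SymbolicLearner and StatisticalLearner are invented purely for illustration, and the "learning" involved is trivial.

    # Toy illustration only: two very different "learners" (one symbolic, one
    # subsymbolic) that read from and write to a single shared knowledge store.
    # Class names and data layout are made up; this is not OpenCog's design.

    from collections import defaultdict


    class SharedStore:
        """Stand-in for a common memory that every component can query and update."""

        def __init__(self):
            self.facts = set()                 # symbolic assertions, e.g. ("isa", "dog", "animal")
            self.weights = defaultdict(float)  # subsymbolic "strength" attached to each assertion

        def add_fact(self, fact, weight=1.0):
            self.facts.add(fact)
            self.weights[fact] = max(self.weights[fact], weight)


    class SymbolicLearner:
        """Derives new assertions with a trivial transitivity rule over 'isa' facts."""

        def step(self, store):
            derived = {}
            for (r1, a, b) in list(store.facts):
                for (r2, c, d) in list(store.facts):
                    if r1 == r2 == "isa" and b == c and a != d:
                        w = store.weights[(r1, a, b)] * store.weights[(r2, c, d)]
                        derived[("isa", a, d)] = max(derived.get(("isa", a, d), 0.0), w)
            for fact, w in derived.items():
                store.add_fact(fact, w)


    class StatisticalLearner:
        """Nudges up the strength of facts whose terms co-occur in raw observations."""

        def step(self, store, observations):
            for obs in observations:           # each observation is just a set of co-occurring tokens
                for fact in list(store.facts):
                    if set(fact[1:]) <= obs:
                        store.weights[fact] = min(1.0, store.weights[fact] + 0.1)


    if __name__ == "__main__":
        store = SharedStore()
        store.add_fact(("isa", "fido", "dog"), 0.9)
        store.add_fact(("isa", "dog", "animal"), 0.8)

        symbolic = SymbolicLearner()
        statistical = StatisticalLearner()
        observations = [{"fido", "animal"}, {"dog", "animal"}]

        # Each component reads the other's output from the shared store:
        # the symbolic step creates ("isa", "fido", "animal"), which the
        # statistical step can then reinforce against the observations.
        for _ in range(3):
            symbolic.step(store)
            statistical.step(store, observations)

        for fact in sorted(store.facts):
            print(fact, round(store.weights[fact], 2))

The only point of the toy is that each component reads the other's output out of the shared store and writes its own results back in, rather than the two being stacked as independent narrow modules connected by a fixed one-way pipeline.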

[-][anonymous]

Can you give or direct me to more Cliff Notes-style summaries of AGI research? I'd love to contribute to OpenCog as a (non-computer) scientist, and I wonder if there's anything I could help with. Am I right in guessing the code is about stuff like this?

No. I just excerpted this part because (a) I thought it summarizes the key phases well and (b) I'm interested in this kind of approach (I see lots of parallels between machine-learning meta-strategies and human learning and education).

Any opinions on where Goertzel's stuff stands in relation to whatever there is that passes for state of the art in AGI research?

And is it even worth trying to have this conversation on LW? We don't seem to see much of anything here about AI that actually does stuff and is being worked on right now (Google cars, IBM Watson, DeepMind, etc.) beyond what you can read in a press release. Is all of the interesting stuff proprietary, so we don't get bored grad students coming here chatting about it, or is there an understanding among the people involved in actual AI research that LW and MIRI are not worth bothering with?

[-][anonymous]

Any opinions on where Goertzel's stuff stands in relation to whatever there is that passes for state of the art in AGI research?

Depends on how you dereference "AGI research". The term was invented by Goertzel et al. to describe what OpenCog is, so at least from that standpoint it is very relevant. Stepping back, among people who actually bother to make the AI/AGI distinction, OpenCog is definitely one giant, influential project in this relatively small field. It's not a monoculture community, though, and there are other influential AGI projects with very different designs. But OpenCog is certainly a heavyweight contender.

Of course there are also the groups which don't make the AI/AGI distinction, such as most of the machine learning & perception crowds, and Kurzweil et al. These people think they can achieve general intelligence through layering narrow AI techniques or direct emulation, and probably think very little of the integrative methods pursued by Goertzel.

And is it even worth trying to have this conversation on LW?

Can you elaborate? I'm not sure I understand the question. Why wouldn't this be a great place to discuss AGI?

Why wouldn't this be a great place to discuss AGI?

Because LW has been around for 5 or so years, and I remember seeing very little nuts-and-bolts AI discussion at the level of, say, Starglider's AI Mini-FAQ happen here, and very few discussions about the deep technical details of something like IBM's recent AI work, whatever goes on at DeepMind, and things like that. Of course there are going to be trade secrets involved, but beyond pretty much just AIXI, I don't even see much ambient awareness of whatever publicly known technical methods the companies are probably basing their stuff on. It's as if the industry were busy fielding automobiles, biplanes and tanks while the majority at LW still had trouble figuring out the basic concepts of steam power.

LW can discuss the philosophy part, but I don't see much capability around that could actually go look through Goertzel's design and say "this thing looks like a non-starter because of recognized technical problem X", "this thing resembles successful design Y, it's probably worth studying more closely", or "this thing has a really novel and interesting attack on known technical problem Z; even if the rest is junk, that part definitely needs close study", for instance. And I don't think the philosophy is going to stay afloat for very long if its practitioners aren't able to follow the technical details of what people are actually doing in the domain they'd like to philosophize about.

[-][anonymous]

I was going to respond with a biting "well then what the heck is the point of LW?" post, but I think you got the point:

I don't think the philosophy is going to stay afloat for very long if its practitioners aren't able to follow the technical details of what people are actually doing in the domain they'd like to philosophize about.

Frankly, without a willingness to educate oneself about implementation details, the philosophizing is pointless. Maybe this is a wake-up call for me to go find a better community :\

EDIT: Who created the Starglider AI Mini-FAQ? Do we know their real-world identity?

Frankly, without a willingness to educate oneself about implementation details, the philosophizing is pointless. Maybe this is a wake-up call for me to go find a better community :\

I was hoping more for a "study technical AI details and post about them here", but whatever works. If you do find a better community, post a note here somewhere.

EDIT: Who created the Starglider AI Mini-FAQ? Do we know their real-world identity?

Michael Wilson, looks like.

[-][anonymous]

My goal is to enact a positive singularity. To that end I'm not convinced of the instrumentality of educating people on the interwebs, given other things I could be doing.

I had thought that a community with a tight focus on 'friendly AGI' would be interested in learning, and discussing how such an AGI might actually be constructed, or otherwise getting involved in some way. If not, I don't think it's worth my time to correct this mistake.

I'm not convinced of the instrumentality of educating people on the interwebs

Oh really? :-D

is there an understanding with the people involved with actual AI research that LW and MIRI are not worth bothering with?

As far as DeepMind goes, Jaan Tallinn was involved in it and is one of the biggest donors to MIRI.

If I look at the participant lists of MIRI workshops, they always had a person from Google in attendance.