
Coherent Aggregated Volition (CAV) is Ben Goertzel's proposed alternative to Eliezer Yudkowsky's Coherent Extrapolated Volition (CEV). Rather than extrapolating humanity's volition, CAV would combine the goals and beliefs of humanity as they stand at the present time.

The author considers the "extrapolation" aspect of CEV to distort the concept of volition and to be highly uncertain. He argues that if the person whose volition is being extrapolated holds inconsistent goals or beliefs (as humans typically do), then a great variety of extrapolations is possible. The problem then becomes which extrapolated version of the person to choose, or how to aggregate them, which would be very difficult.

Coherent Aggregated Volition is presented as a simpler alternative, intended to be easier to formalize and prototype in the foreseeable future. CAV is not, however, intended to answer the question of Friendly AI, though Goertzel claims CEV may not be the answer either.

The concept

As stated above, Coherent Aggregated Volition attempts to capture the idea behind CEV, as Goertzel interprets it, in a way that is easier to implement and can be prototyped today.

First, the author argues that goals and beliefs must be treated together as a single concept, which he calls a gobs (plural: gobses). Each agent thus has its own gobs, which may or may not be logically consistent. To measure how distant two gobses are from each other, the notion of a gobs metric is introduced: different people or AGIs might endorse different metrics to varying degrees, but it seems likely that individuals' metrics would differ less than their gobses do.
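
To make this concrete, here is a minimal Python sketch, assuming a gobs can be represented as a set of goal and belief statements and that Jaccard distance serves as the gobs metric; both choices are illustrative assumptions, since Goertzel leaves the representation and the metric open.

```python
# Hypothetical sketch: a gobs as a frozen set of goal/belief statements,
# with Jaccard distance as one possible gobs metric. This representation
# is an illustrative assumption, not Goertzel's formalism.

def jaccard_distance(gobs_a: frozenset, gobs_b: frozenset) -> float:
    """Distance in [0, 1]: 0 for identical gobses, 1 for disjoint ones."""
    union = gobs_a | gobs_b
    if not union:
        return 0.0  # two empty gobses are trivially identical
    return 1.0 - len(gobs_a & gobs_b) / len(union)

alice = frozenset({"goal: reduce suffering", "belief: exercise improves health"})
bob = frozenset({"goal: reduce suffering", "belief: exercise is pointless"})
print(jaccard_distance(alice, bob))  # ~0.67: they share 1 of 3 statements
```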

Given a population of intelligent agents with different gobses, one could try to find a single gobs that simultaneously maximizes logical consistency, compactness, similarity to the gobses in the population, and the amount of evidence supporting its beliefs. This "multi-extremal optimization" is what the author calls Coherent Aggregated Volition.
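
Goertzel does not give a formal objective function, but the aggregation described above can be sketched as a weighted multi-objective score over candidate gobses. In the hypothetical Python sketch below, which reuses jaccard_distance from the earlier sketch as the gobs metric, the consistency, compactness, and evidence functions are placeholders supplied by the caller; only the overall structure of the score follows the text.

```python
from statistics import mean
from typing import Callable

def cav_score(candidate: frozenset,
              population: list[frozenset],
              consistency: Callable[[frozenset], float],
              compactness: Callable[[frozenset], float],
              evidence: Callable[[frozenset], float],
              weights: tuple[float, float, float, float] = (1.0, 1.0, 1.0, 1.0),
              ) -> float:
    """Weighted sum of the four CAV criteria; each criterion maps a gobs to [0, 1]."""
    # Similarity to the population: 1 minus the mean gobs-metric distance.
    similarity = 1.0 - mean(jaccard_distance(candidate, g) for g in population)
    w_cons, w_comp, w_sim, w_ev = weights
    return (w_cons * consistency(candidate)
            + w_comp * compactness(candidate)
            + w_sim * similarity
            + w_ev * evidence(candidate))

# CAV would then be, roughly, the candidate gobs maximizing this score over
# some search space:
#   best = max(candidates, key=lambda g: cav_score(g, population, con, comp, ev))
```

Any real implementation would hinge on how the four criteria are quantified, which is precisely the formalization work that CAV is presented as making more tractable than CEV.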
