Nonperson Predicate


A Nonperson Predicate is a theorized test which can definitely identify computational structures which are not people; i.e., a predicate which returns 1 for all people, and returns 0 or 1 for nonpeople. This would be helpful due to the hypothetical risk that an artificial general intelligence could, while modelling the world around it, produce conscious beings as part of its world-model.

If a nonperson predicate returns 1, the structure may or may not be a person, but if it returns 0, the structure is definitely not a person. In other words, any time at least one trusted nonperson predicate returns 0, we know we can run that program without creating a person. (The impossibility of perfectly distinguishing people and nonpeople is a trivial consequence of Rice's Theorem, which is itself a trivial consequence of the halting problem.)
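
As a minimal sketch of this one-sided contract (the function name and size threshold below are purely illustrative assumptions, not part of any actual proposal), a nonperson predicate only ever answers "definitely not a person" (0) or "unknown" (1):

```python
# Hypothetical sketch of the nonperson-predicate contract.
# A predicate may return 0 only when the structure is provably not a
# person; in every other case it must return 1 ("might be a person").

def size_bound_predicate(structure: bytes) -> int:
    """Toy predicate: a few kilobytes cannot plausibly encode a person."""
    MAX_TRIVIAL_SIZE = 4096  # bytes; arbitrary illustrative threshold
    if len(structure) < MAX_TRIVIAL_SIZE:
        return 0  # definitely not a person
    return 1      # unknown: may or may not be a person
```

The asymmetry is the point: a return value of 0 carries a guarantee, while a 1 carries no information at all.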

The need for such a test arises from the possibility that when an Artificial General Intelligence predicts a person's actions, it may develop a model of them so complete that the model itself qualifies as a person (though not necessarily the same person). As the AGI investigates possibilities, these simulated people might be subjected to a large number of unpleasant situations. With a trusted nonperson predicate, either the AGI's designers or the AGI itself could ensure that no actual people are created.

Any practical implementation would likely consist of a large number of nonperson predicates of increasing complexity. For most nonpersons, a predicate will quickly return that the structure is not a person and conclude the test. Although any number of predicates may be consulted before the test declares that something is not a person, it is crucial that no predicate in the test ever claims that a person is not a person. Since unclassifiable cases are unavoidable in principle, it is preferable that the AGI err on the side of treating possible persons as persons.
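
A sketch of such a cascade, under the assumption that each supplied predicate honors the one-sided contract above (never returning 0 for a person), might run the cheapest predicates first and refuse to run anything that no predicate can rule out; all names here are illustrative:

```python
from typing import Callable, Iterable

# Each predicate returns 0 ("definitely not a person") or 1 ("unknown").
NonpersonPredicate = Callable[[object], int]

def is_safe_to_run(structure: object,
                   predicates: Iterable[NonpersonPredicate]) -> bool:
    """Return True only if some trusted predicate rules out personhood.

    Predicates are assumed to be ordered from cheapest to most
    expensive, so the cascade stops at the first 0. If every predicate
    returns 1, the structure is conservatively treated as a possible
    person and is not run.
    """
    for predicate in predicates:
        if predicate(structure) == 0:
            return True   # provably not a person: safe to instantiate
    return False          # err on the side of caution
```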
