Preference

Created by steven0461 at

Preference is usually conceptualized as a set of attitudes or evaluations made by a subject or agent towards a specific object or group of objects. These attitudes can vary in their intensity and valence, directly influencing the decision-making process, both implicitly and explicitly.

Although typically studied by the social sciences, it has been proposed that AI has a more robust set of methods to deal with preferences. These can be divided into several steps:

  1. Preference acquisition: Extraction of preferences from a user, through an interactive learning system, e.g. a question-answer process.
  2. Preference modeling: After extraction, the goal is to create a mathematical model expressing the preferences, taking into account their properties (for instance, whether the preferences are transitive between pairs of choices).
  3. Preference representation: With a robust model of preferences, it becomes necessary to develop a symbolic system to represent them - a preference representation language.
  4. Preference reasoning: Finally, having represented a user's or agent's preferences, it is possible to mine the data looking for new insights and knowledge. This could be used, for instance, to aggregate users based on preferences or as biases in decision processes and game theory scenarios.
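As an illustrative sketch only (the steps above do not prescribe any particular implementation), the four stages could be wired together for a finite option set as follows; the `ask` callback, the option names, and the pairwise-wins ranking are all invented for the example:

```python
from itertools import combinations, permutations

def acquire_preferences(options, ask):
    """Step 1 (acquisition): elicit a preference for every pair of options.
    `ask(a, b)` stands in for an interactive question-answer process and
    returns whichever option the user prefers."""
    return {(a, b): ask(a, b) for a, b in combinations(options, 2)}

def prefers(prefs, x, y):
    """Step 2 (modeling): the model here is a binary relation over options."""
    if (x, y) in prefs:
        return prefs[(x, y)] == x
    return prefs.get((y, x)) == x

def is_transitive(prefs, options):
    """Check one property of the model: if x is preferred to y and y to z,
    x should also be preferred to z."""
    return all(prefers(prefs, x, z)
               for x, y, z in permutations(options, 3)
               if prefers(prefs, x, y) and prefers(prefs, y, z))

def ranking(prefs, options):
    """Steps 3-4 (representation and reasoning): represent the preferences
    as a ranking by pairwise wins, which can then be reasoned over."""
    wins = {o: sum(1 for w in prefs.values() if w == o) for o in options}
    return sorted(options, key=lambda o: -wins[o])

# Example: a user who always prefers the alphabetically earlier option.
options = ["a", "b", "c"]
prefs = acquire_preferences(options, ask=min)
print(is_transitive(prefs, options))  # True
print(ranking(prefs, options))        # ['a', 'b', 'c']
```

Real systems use far richer models and representation languages; the point of the sketch is only how the four stages feed into one another.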



This sequential chain of thought can be particularly useful when dealing with Coherent Extrapolated Volition, as a way of systematically exploring an agent's goals and motivations.


Preference is a normative side of optimization. Preference is roughly equivalent to goals and values, but the concept refers more to the sum total of all of an agent's goals and dispositions than to the individual components. For example, for an agent that runs a decision theory based on expected utility maximization, preference should specify both prior and utility function.
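For concreteness, here is a toy sketch of that last point (the states, actions, and numbers are invented for illustration): an expected-utility maximizer's preference over actions is fixed only once both components are given.

```python
# A hypothetical expected-utility maximizer: its preference over actions
# is determined jointly by a prior over states and a utility function
# over (action, state) outcomes.

prior = {"rain": 0.3, "sun": 0.7}            # subjective prior over states
utility = {                                   # utility of each outcome
    ("umbrella", "rain"): 1.0, ("umbrella", "sun"): 0.2,
    ("no_umbrella", "rain"): -1.0, ("no_umbrella", "sun"): 1.0,
}

def expected_utility(action):
    return sum(p * utility[(action, state)] for state, p in prior.items())

best = max(["umbrella", "no_umbrella"], key=expected_utility)
print(best)  # 'umbrella' (0.44 vs 0.4)
```

Shifting the prior to, say, `{"rain": 0.1, "sun": 0.9}` flips the preferred action to `no_umbrella`, which is why the preference must specify both the prior and the utility function, not either one alone.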

Preference orderings that obey some rationality axioms can be represented by a utility function.
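A minimal sketch of that representation, for a finite set of options and assuming the ordering is already complete and transitive (the option names are invented): assign each option its rank, since any order-preserving numbering works as a utility function.

```python
# Given a complete, transitive ordering over finitely many options
# (written here worst-to-best), one valid utility function simply
# assigns each option its rank in the ordering.
ordering = ["walk", "bike", "train"]  # hypothetical options, worst to best
u = {option: rank for rank, option in enumerate(ordering)}

# The utility function reproduces the original ordering:
print(u["train"] > u["bike"] > u["walk"])  # True
```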

If some entity pushes reality into some state -- across many contexts, not just by accident -- then you could say it prefers that state. Preferences are roughly equivalent to goals and values.
