No, utility functions are not a property of computer programs in general. They are a property of (a certain class of) agents.
A utility function is just a way for an agent to evaluate states: positive values for states the agent wants to achieve, negative values for states it wants to avoid, and values near zero for states it doesn't care about one way or the other. This mapping from states to utilities can be anything in principle: a measure of how close the agent's internal state is to homeostasis, a measure of how many smiles exist on human faces, a measure of the number of paperclips in the universe, etc. It all depends on how you program the agent (or how our genes and culture program us).
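To make that concrete, here is a minimal sketch in Python. The state fields and the three utility functions are invented for illustration, not taken from any particular system; the point is only that different agents can score the same state completely differently.

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    paperclips: int
    smiling_faces: int
    agent_energy: float  # e.g. an internal battery level in [0, 1]

def paperclip_utility(state: WorldState) -> float:
    # An agent programmed to care only about paperclips.
    return float(state.paperclips)

def smile_utility(state: WorldState) -> float:
    # An agent programmed to care only about smiles on human faces.
    return float(state.smiling_faces)

def homeostasis_utility(state: WorldState) -> float:
    # An agent that wants its internal state near a set point (0.5 here);
    # deviations in either direction score negatively.
    return -abs(state.agent_energy - 0.5)

state = WorldState(paperclips=10, smiling_faces=3, agent_energy=0.9)
print(paperclip_utility(state), smile_utility(state), homeostasis_utility(state))
# -> 10.0  3.0  -0.4 (approximately)
```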
Utility functions drive decision-making. Behavioral policies and actions that tend to lead to states of high utility will get positively reinforced, such that the agent will learn to do those things more often. And policies/actions that tend to lead to states of low (or negative) utility will get negatively reinforced, such that the agent learns to do them less often. Eventually, the agent learns to steer the world toward states of maximum utility.
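Here is a toy reinforcement loop in that spirit. The action set, the paperclip-counting utility, and the learning rule are all assumptions made for the sketch, not a claim about how any real agent is implemented:

```python
import random

ACTIONS = ["make_paperclip", "idle", "destroy_paperclip"]

# The (hypothetical) effect each action has on the number of paperclips.
EFFECTS = {"make_paperclip": +1, "idle": 0, "destroy_paperclip": -1}

def utility(paperclips: int) -> float:
    return float(paperclips)  # this agent only values paperclips

value_estimates = {a: 0.0 for a in ACTIONS}  # learned value of each action
paperclips = 0
learning_rate, epsilon = 0.1, 0.1

for step in range(1000):
    # Mostly exploit the best-looking action, occasionally explore.
    if random.random() < epsilon:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=value_estimates.get)

    before = utility(paperclips)
    paperclips += EFFECTS[action]
    reward = utility(paperclips) - before  # the change in utility reinforces the action

    value_estimates[action] += learning_rate * (reward - value_estimates[action])

print(value_estimates)  # "make_paperclip" ends up with the highest estimate
```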
Depending on how well aligned an AI's utility function is with humanity's values, this could be good or bad. It turns out that for highly capable agents, it tends to be bad far more often than good (e.g., maximizing smiles or paperclips leads to a universe devoid of value for humans).
Nondeterminism really has nothing to do with this. Agents that can modify their own code could in principle optimize for their utility functions even more strongly than if they were stuck at a certain level of capability, but a utility function still needs to be specified in some way regardless.
The idea of a utility function comes from various theorems (originating independently of computers and programming) that attempt to codify the concept of "rational choice". These theorems demonstrate that if someone has a preference relation over the possible outcomes of their actions, and this preference relation satisfies certain reasonable-sounding conditions, then there must exist a numerical function of those outcomes (called the "utility function") such that their preference relation over actions is equivalent to comparing the expected utilities arising from those actions. Their most preferred action is therefore the one that maximises expected utility.
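For reference, here is a compact sketch of the shape of that result, in the VNM setting of lotteries over outcomes; the notation is my own and not quoted from any of the original theorems:

```latex
% Sketch of the type of result described above (notation mine).
% \succeq is the agent's preference relation over lotteries L, M
% (probability distributions over outcomes o).
%
% If \succeq is complete, transitive, continuous and satisfies independence,
% then there exists u : Outcomes -> R such that
\[
  L \succeq M \iff
  \mathbb{E}_{o \sim L}\bigl[u(o)\bigr] \;\ge\; \mathbb{E}_{o \sim M}\bigl[u(o)\bigr],
\]
% with u unique up to positive affine transformations u -> a u + b, a > 0.
```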
Here is Eliezer's exposition of the concept in the context of LessWrong.
The theorem most commonly mentioned is the VNM (von Neumann-Morgenstern) theorem, but there are several other derivations of similar results.
The foundations of utility theory are entangled with the foundations of probability. For example, Leonard Savage (The Foundations of Statistics, 1954; 2nd ed. 1972) derives both together from the agent's preferences.
The theorems are normative: they say that a rational agent must have preferences that can be described by a utility function, or else it is liable, for example, to pay to get B instead of A, and then pay again to get A instead of B (without ever having had B in hand before switching back). Actual agents do whatever they do, regardless of the theorems.
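As a toy illustration of that failure mode (my own construction, not part of the theorems): an agent whose pairwise preferences cannot be represented by any utility function, and which therefore keeps paying to swap back and forth.

```python
# The agent strictly prefers B when it holds A, *and* strictly prefers A when
# it holds B. No assignment of numbers u(A), u(B) can satisfy both
# u(B) > u(A) and u(A) > u(B), so no utility function describes it.
prefers = {("A", "B"), ("B", "A")}  # (currently holding, would pay to get)

money, fee, holding = 100.0, 1.0, "A"

for _ in range(5):
    offer = "B" if holding == "A" else "A"
    if (holding, offer) in prefers:
        money -= fee          # pays the fee for every swap it accepts
        holding = offer

print(holding, money)  # after 5 accepted swaps it is 5.0 poorer, with nothing to show for it
```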
One occasionally sees statements to the effect that "everything has a utility function, because we can just attach utility 1 to what it does and 0 to what it doesn't do." I call this the Texas Sharpshooter Utility Function, by analogy with the Texas Sharpshooter, who shoots at a barn door and then draws a target around the bullet hole. Such a supposed utility function is exactly as useful as a stopped clock is for telling the time.
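In code, the construction being criticized looks something like this (a deliberately vacuous sketch):

```python
# "Texas Sharpshooter utility function": after watching what a program did,
# assign utility 1 to exactly that behavior and 0 to everything else. The
# program trivially "maximized" this function, but since the function was
# drawn around the observations after the fact, it predicts nothing.

observed_actions = ["left", "left", "right"]  # whatever the program happened to do

def sharpshooter_utility(action_sequence) -> int:
    # Target drawn around the bullet holes: utility 1 iff it did exactly this.
    return 1 if list(action_sequence) == observed_actions else 0

print(sharpshooter_utility(observed_actions))   # 1: "maximized", vacuously
print(sharpshooter_utility(["right", "left"]))  # 0: but nothing here could have been known in advance
```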