One of the more interesting papers at this year's AGI-12 conference was Fintan Costello's Noisy Reasoners. I think it will be of interest to Less Wrong:

 

This paper examines reasoning under uncertainty in the case where the AI reasoning mechanism is itself subject to random error or noise in its own processes. The main result is a demonstration that systematic, directed biases naturally arise if there is random noise in a reasoning process that follows the normative rules of probability theory. A number of reliable errors in human reasoning under uncertainty can be explained as the consequence of these systematic biases due to noise. Since AI systems are subject to noise, we should expect to see the same biases and errors in AI reasoning systems based on probability theory.
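A quick way to see the claimed mechanism (the event probabilities, noise rate, and sample sizes below are my own illustrative choices, not figures from the paper): if a reasoner estimates P(A) by counting remembered instances, and each read of memory is flipped with some small probability, then rare events get systematically overestimated and common events underestimated, even though the noise itself is symmetric.

```python
import random

def noisy_frequency_estimate(p_true, n_memory=200, flip=0.05, rng=random):
    """Estimate P(A) as (noisy tally) / n: each remembered instance is read
    incorrectly with probability `flip` (an assumed symmetric noise rate)."""
    tally = 0
    for _ in range(n_memory):
        occurred = rng.random() < p_true
        if rng.random() < flip:          # random read error
            occurred = not occurred
        tally += occurred
    return tally / n_memory

rng = random.Random(0)
for p in (0.02, 0.25, 0.50, 0.75, 0.98):
    trials = [noisy_frequency_estimate(p, rng=rng) for _ in range(5000)]
    print(p, round(sum(trials) / len(trials), 3))
# Symmetric noise produces a directed bias: estimates regress toward 0.5
# (roughly (1 - 2*flip)*p + flip), i.e. conservatism.
```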

 

[-] gwern

A recent paper I found even more interesting, courtesy of XiXiDu: "Burn-in, bias, and the rationality of anchoring"

Bayesian inference provides a unifying framework for addressing problems in machine learning, artificial intelligence, and robotics, as well as the problems facing the human mind. Unfortunately, exact Bayesian inference is intractable in all but the simplest models. Therefore minds and machines have to approximate Bayesian inference. Approximate inference algorithms can achieve a wide range of time-accuracy tradeoffs, but what is the optimal tradeoff? We investigate time-accuracy tradeoffs using the Metropolis-Hastings algorithm as a metaphor for the mind’s inference algorithm(s). We find that reasonably accurate decisions are possible long before the Markov chain has converged to the posterior distribution, i.e. during the period known as “burn-in”. Therefore the strategy that is optimal subject to the mind’s bounded processing speed and opportunity costs may perform so few iterations that the resulting samples are biased towards the initial value. The resulting cognitive process model provides a rational basis for the anchoring-and-adjustment heuristic. The model’s quantitative predictions are tested against published data on anchoring in numerical estimation tasks.
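A toy illustration of the burn-in point (the standard-normal target, the proposal width, the starting "anchor", and the iteration counts are my own assumptions, not the paper's model): a Metropolis-Hastings chain started far from the posterior mode gives estimates biased toward its starting value when stopped after only a few iterations.

```python
import math, random

def mh_estimate(start, iters, seed):
    """Run Metropolis-Hastings on a standard normal target and return the
    mean of the samples, a simple stand-in for an 'adjusted' estimate."""
    rng = random.Random(seed)
    x, samples = start, []
    for _ in range(iters):
        proposal = x + rng.gauss(0, 1.0)
        # Accept with probability min(1, target(proposal)/target(x)) for N(0, 1).
        if math.log(rng.random()) < (x * x - proposal * proposal) / 2:
            x = proposal
        samples.append(x)
    return sum(samples) / len(samples)

anchor = 10.0  # start the chain far from the true mean of 0
for iters in (5, 50, 5000):
    estimates = [mh_estimate(anchor, iters, seed) for seed in range(200)]
    print(iters, round(sum(estimates) / len(estimates), 2))
# Few iterations -> estimates stay near the anchor; many -> near the true mean 0.
```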

[-] [anonymous]

We can, however, expect AI systems to be less subject to noise than human brains.


While the model is interesting, it is almost irremediably ruined by this line: "since by definition P(A) = T_A/n", which substantially conflates probability with frequency. From this point of view, the conclusion:

It is clear from Pearl's work that probability theory provides normatively correct rules which an AI system must use to reason optimally about uncertain events. It is equally clear that AI systems (like all other physical systems) are unavoidably subject to a certain degree of random variation and noise in their internal workings. As we have seen, this random variation does not produce a pattern of reasoning in which probability estimates vary randomly around the correct value; instead, it produces systematic biases that push probability estimates in certain directions and so will produce conservatism, subadditivity, and the conjunction and disjunction errors in AI reasoning.

does not follow (because the estimate is produced by the prior, not by counting).
BUT the issue of noise in AI is interesting per se: if we have a stable self-improving friendly AI, could it faultily copy/update itself into an unfriendly version?

Repeated self-modification is problematic because it amounts to a product of probabilities over the series of modifications (though possibly a convergent one, if the AI gets better at maintaining its utility function / rationality with each modification) -- naively, because no projection about the future can have a confidence of 1, there is some chance that each change to the AI will be negative-value.
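To make that product concrete (the numbers below are purely illustrative assumptions, not anything claimed in the thread): if modification i preserves the AI's values with probability p_i, the chance of surviving all n modifications is the product of the p_i, which goes to zero for any constant p < 1 but can converge to a positive limit if the p_i approach 1 quickly enough.

```latex
% Illustrative only: p_i is an assumed probability that modification i
% preserves the utility function.
P(\text{values intact after } n \text{ modifications}) = \prod_{i=1}^{n} p_i
% Constant risk p_i = p < 1: the product is p^n, which goes to 0.
% Improving risk, e.g. p_i = 1 - \varepsilon/i^2: the infinite product
% converges to a positive limit, since \sum_i \varepsilon/i^2 is finite.
```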

Right, it's not only noise that can alter the value of copying/propagating source code, since we can at least imagine that future improvements will also be more stable in this regard: there's also the possibility that the moral landscape of an AI could be fractal, so that even a small modification might turn friendliness into unfriendliness/hostility.

While the model is interesting, it is almost irremediably ruined by this line: "since by definition P(A) = T_A/n", which substantially conflates probability with frequency.

Think of P(A) merely as the output of a noiseless version of the same algorithm. Obviously this depends on the prior, but I think this one is not unreasonable in most cases.

I'm not sure I've understood the sentence

Think of P(A) merely as the output of a noiseless version of the same algorithm.

because P(A) is the noiseless parameter.
Anyway, the entire paper is based on the counting algorithm to establish that random noise can give rise to structured bias, and that this is a problem for a Bayesian AI.
But while the mechanism can be an interesting and maybe even correct way to unify the mentioned biases in the human mind, it can hardly be posed as a problem for such an artificial intelligence. A counting algorithm for establishing probabilities basically denies everything Bayesian updating is designed for (the most trivial example: drawing from a finite urn).
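A minimal sketch of that urn point (the urn contents and the draws are made up for illustration): a pure counting estimate ignores the known, finite composition of the urn, while an estimator that conditions on that composition, as a Bayesian update over the urn's remaining contents would, can reach the opposite conclusion about the next draw.

```python
from fractions import Fraction

# Hypothetical urn: 2 red and 2 blue balls, drawn without replacement.
red, blue = 2, 2
draws = ["red", "red"]  # suppose the first two draws were both red

# Naive counting estimate: P(next draw is red) = (red draws so far) / (draws so far)
counting_estimate = Fraction(draws.count("red"), len(draws))        # = 1

# Estimate that uses the known composition: after two red draws,
# no red balls remain, so the next draw cannot be red.
remaining_red = red - draws.count("red")
remaining_blue = blue - draws.count("blue")
composition_estimate = Fraction(remaining_red, remaining_red + remaining_blue)  # = 0

print(counting_estimate, composition_estimate)   # 1 vs 0
```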

Well, yes, the prior that yields counting algorithms is not universal. But in many cases it's a good idea! And if you decide to use, for example, some rule-of-succession style modifications, the same situation appears.

In the case of a finite urn, you might see different biases (or none at all if your algorithm stubbornly refuses to update because you chose a silly prior).
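A one-line check of the "same situation appears" remark, under the same assumed symmetric noise rate (call it d, playing the role of the flip rate in the counting sketch above): Laplace's rule of succession changes the constants but not the asymptotic bias.

```latex
% d: assumed symmetric noise rate; k of the n remembered instances are read as A.
E\!\left[\frac{k+1}{n+2}\right] = \frac{n\bigl((1-2d)p + d\bigr) + 1}{n+2}
  \xrightarrow{\;n \to \infty\;} (1-2d)p + d \neq p
  \qquad \bigl(0 < d < \tfrac{1}{2},\; p \neq \tfrac{1}{2}\bigr)
```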
