asr comments on The Singularity Institute's Arrogance Problem - Less Wrong

63 Post author: lukeprog 18 January 2012 10:30PM


Comment author: asr 21 January 2012 01:03:09AM *  4 points [-]

Also, it's only for irrational beings like humans that there is a distinction between "justified" and "belief." An AI would simply have degrees of belief in something according to the strength of the justification, using Bayesian rules. So JTB is clearly a human-centered definition, which doesn't usefully define knowledge anyway.

I am skeptical that AIs will do pure Bayesian updates -- it's computationally intractable. An AI is very likely to have beliefs or behaviors that are irrational, to have rational beliefs that cannot be effectively proved to be such, and to have no reliable way to distinguish the two.
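To make the intractability point concrete, here is a minimal sketch (all names illustrative, not any real agent's code) of exact Bayesian updating over an explicitly enumerated hypothesis space. The problem is that with n binary features the space already has 2**n hypotheses, so the cost of a single exact update grows exponentially in n:

```python
from itertools import product

def bayes_update(prior, likelihood, evidence):
    """Exact Bayesian update.

    prior: dict mapping hypothesis -> P(h)
    likelihood: function (h, e) -> P(e | h)
    Returns the normalized posterior dict.
    """
    unnormalized = {h: p * likelihood(h, evidence) for h, p in prior.items()}
    z = sum(unnormalized.values())
    return {h: p / z for h, p in unnormalized.items()}

n = 3  # already 2**3 = 8 hypotheses; realistic n makes enumeration explode
hypotheses = list(product([0, 1], repeat=n))
prior = {h: 1 / len(hypotheses) for h in hypotheses}

# Toy sensor model: we observe bit 0; the observation matches the true
# value with probability 0.9 and is wrong with probability 0.1.
def likelihood(h, e):
    return 0.9 if h[0] == e else 0.1

posterior = bayes_update(prior, likelihood, 1)
```

The update itself is trivial; the exponential blow-up of the hypothesis enumeration is what forces real systems to approximate, and the approximations are where the "irrational" beliefs come from.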

Comment author: XiXiDu 21 January 2012 10:25:44AM 5 points [-]

I am skeptical that AIs will do pure Bayesian updates -- it's computationally intractable.

Isn't this also true for expected utility maximization? Is a definition of utility that is precise enough to be usable even possible? Honest question.

An AI is very likely to have beliefs or behaviors that are irrational...

Yes, I wonder why there is almost no talk about biases in AI systems. Ideal AIs might be perfectly rational but computationally limited, whereas actual artificial systems will have completely new sets of biases. As a simple example, take my digicam, which can detect faces. It sometimes recognizes faces where there are no faces, just as humans do, but on very different occasions. Or take the answers of IBM Watson. Some were wrong, but wrong in completely new ways. That's a real danger in my opinion.

Comment author: wedrifid 22 January 2012 10:52:11PM 3 points [-]

Is a definition of utility that is precise enough to be usable even possible? Honest question.

Honest answer: Yes. For example 1 utilon per paperclip.
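For concreteness, "1 utilon per paperclip" really is precise enough to be usable: utility is just a function from world-states to numbers, and the agent prefers states with higher values. A minimal sketch (the dict-based world-state representation is illustrative, not from any actual agent):

```python
# "1 utilon per paperclip": utility is simply the paperclip count
# of a world-state, here represented as a plain dict.

def utility(world_state):
    """Return one utilon per paperclip in the given world-state."""
    return world_state.get("paperclips", 0)

# The agent chooses among candidate world-states by maximizing utility.
candidates = [
    {"paperclips": 10, "staples": 5},
    {"paperclips": 3, "staples": 100},
]
best = max(candidates, key=utility)  # picks the 10-paperclip state
```

The hard part, of course, is not writing down such a function but making it track what we actually value.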

Comment author: lessdazed 23 January 2012 03:29:42PM 2 points [-]

As a simple example, take my digicam, which can detect faces. It sometimes recognizes faces where there are no faces, just as humans do, but on very different occasions.

I appreciate the example. It will serve me well. Upvoted.