asr comments on The Singularity Institute's Arrogance Problem - Less Wrong

63 Post author: lukeprog 18 January 2012 10:30PM


Comment author: Solvent 21 January 2012 12:14:49AM 3 points

I apologize, but that does not look like a solution to the Gettier Problem. Could you elaborate?

Okay, the Gettier problem. I can explain it, but this is just my explanation, not Eliezer's.

The Gettier problem points out problems with defining knowledge as justified true belief (JTB). The classic illustration: you glance at a stopped clock that happens to show the correct time, so your belief about the time is justified and true, yet it hardly counts as knowledge. The JTB definition falls into the classic philosophical trap of misusing intuition, and has a variety of other issues. Lukeprog discusses the weakness of conceptual analysis here.

Also, it's only for irrational beings like humans that there is a distinction between "justified" and "belief." An AI would simply have degrees of belief in something according to the strength of the justification, updating by Bayes' rule. So JTB is clearly a human-centered definition, which doesn't usefully define knowledge anyway.
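To make that concrete, here is a minimal sketch (my own illustration, not from the thread) of a degree of belief revised by Bayes' rule, where "justification" and "degree of belief" collapse into the same number:

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E), where
# P(E) = P(E|H) * P(H) + P(E|~H) * P(~H).
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior degree of belief P(H | E)."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# From an even prior, strong evidence yields strong belief;
# weak (uninformative) evidence leaves the prior unchanged.
belief = bayes_update(0.5, 0.9, 0.1)  # 0.9
```

The point is that nothing in the update distinguishes a "justified" belief from a "belief": the strength of the evidence just is the degree of belief.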

Incidentally, I just re-read this post, which says:

Yudkowsky once wrote, "If there's any centralized repository of reductionist-grade naturalistic cognitive philosophy, I've never heard mention of it." When I read that I thought: What? That's Quinean naturalism! That's Kornblith and Stich and Bickle and the Churchlands and Thagard and Metzinger and Northoff! There are hundreds of philosophers who do that!

So perhaps Eliezer's solutions to many of the problems I credited him with weren't original. But he certainly arrived at them on his own, like Leibniz independently inventing calculus.

Comment author: asr 21 January 2012 01:03:09AM 4 points

Also, it's only for irrational beings like humans that there is a distinction between "justified" and "belief." An AI would simply have degrees of belief in something according to the strength of the justification, updating by Bayes' rule. So JTB is clearly a human-centered definition, which doesn't usefully define knowledge anyway.

I am skeptical that AIs will do pure Bayesian updates -- it's computationally intractable. An AI is very likely to have beliefs or behaviors that are irrational, to have rational beliefs that cannot be effectively proved to be such, and to have no reliable way to distinguish the two.
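A rough illustration of the intractability claim (my sketch, not asr's): exact updating over a full joint distribution of n binary variables requires a table of 2^n probabilities, which blows up long before n gets interesting.

```python
# Size of the full joint distribution over n binary variables.
# Exact Bayesian updating over such a table is exponential in n.
def joint_table_size(n):
    return 2 ** n

joint_table_size(20)   # 1048576 entries -- still manageable
joint_table_size(300)  # ~2e90 entries -- more than atoms in the observable universe
```

Real systems therefore approximate, and the approximations are exactly where the un-provably-rational beliefs come from.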

Comment author: XiXiDu 21 January 2012 10:25:44AM 5 points

I am skeptical that AIs will do pure Bayesian updates -- it's computationally intractable.

Isn't this also true for expected utility maximization? Is a definition of utility that is precise enough to be usable even possible? Honest question.

An AI is very likely to have beliefs or behaviors that are irrational...

Yes, I wonder why there is almost no talk about biases in AI systems. Ideal AIs might be perfectly rational but computationally limited; actual artificial systems will have completely new sets of biases. As a simple example, take my digicam, which can detect faces. It sometimes recognizes faces where there are none, just as humans do, but on very different occasions. Or take the answers of IBM's Watson: some were wrong, but in completely new ways. That's a real danger, in my opinion.

Comment author: wedrifid 22 January 2012 10:52:11PM 3 points

Is a definition of utility that is precise enough to be usable even possible? Honest question.

Honest answer: Yes. For example 1 utilon per paperclip.

Comment author: lessdazed 23 January 2012 03:29:42PM 2 points

As a simple example, take my digicam, which can detect faces. It sometimes recognizes faces where there are none, just as humans do, but on very different occasions.

I appreciate the example. It will serve me well. Upvoted.