Eliezer_Yudkowsky comments on Don't Revere The Bearer Of Good Info - Less Wrong

82 Post author: CarlShulman 21 March 2009 11:22PM




Comment author: billswift 22 March 2009 12:14:13AM *  1 point [-]

Just a brief mention, since we're supposed to avoid AI for a while, but it is too relevant to this post to ignore entirely: I just finished J Storrs Hall's "Beyond AI". The overlap with, and differences from, Eliezer's FAI are very interesting, and it is a very readable book.

EDIT: You may notice I did write "overlap and differences". I noticed the differences, and I do think they are interesting, not least because they seem similar to some of Robin's criticisms of Eliezer's FAI.

Comment author: Eliezer_Yudkowsky 22 March 2009 12:24:11AM 1 point [-]

I think I see a lot more difference between my own work and others' work than some of my readers may.

Comment author: PhilGoetz 22 March 2009 12:37:18AM *  5 points [-]

I think that's inevitable, if for no other reason than that someone reading two treatments of a subject they don't completely understand is likely to interpret them in a correlated way. They may make similar assumptions in both cases, or they may understand the one they read first and try to interpret the one they read second in the same framework.

Comment author: CarlShulman 22 March 2009 12:38:04AM 2 points [-]

Hall gives a passable history of AI and acts as a messenger for a lot of standard AI ideas, including the Dennett compatibilist account of free will and some criticisms of nonreductionist accounts of consciousness. He also relays a stew of social science ideas, e.g. social capital and transparent motivations, although the applicability of the latter is often questionable. Those sections aren't bad.

It's only when he gets to considering the dynamics of powerful intelligences and offers up original ideas that he makes glaring errors. Since that's your specialty, those mistakes stand out as horribly egregious, while casual readers might miss them or think them outweighed by the other sections of the book.

I see differences between you and Drescher, or you and Greene, both in substance (e.g. some clear errors in Drescher's book when he discusses the ethical value of rock-minds, where he neglects the possibility that the happy experiences of others could figure in our utility functions directly, rather than only through game-theoretic interactions with powerful agents) and in presentation/formalization/frameworks.

We could try to quantify percentage overlap in views on specific questions.