David3 comments on My Bayesian Enlightenment - Less Wrong

Post author: Eliezer_Yudkowsky 05 October 2008 04:45PM


Comment author: David3 06 October 2008 11:39:43PM 0 points

Phil,

There are really two things I'm considering. One, whether the general idea of AI throttling is meaningful, and what the technical specifics could be (crude example: let's give it only X compute power, yielding an intelligence level Y). Two, if we could reliably build a human-level AI, it could be of great use, not in itself, but as a tool for investigation, since we could finally "look inside" at concrete realizations of mental concepts, which is not possible with our own minds. As an example, if we could teach a human-level AI morality (presumably possible, since we ourselves learn it), we would have a concrete realization of that morality as computation that could be inspected outright and even debugged. Could this not be of great value for insights into FAI?
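The throttling idea in the first point can be made slightly more concrete. Here is a minimal toy sketch, entirely hypothetical (the class, the step-budget abstraction, and the idea that "compute power" maps onto a step count are all assumptions for illustration, not anything from the comment): an agent whose every unit of "cognition" draws down a hard budget, so its capability is bounded by the budget rather than by trust in its goals.

```python
# Hypothetical illustration of "AI throttling": every unit of work the
# agent does draws down a fixed step budget (the "X compute power" knob),
# and the agent halts when the budget is exhausted.

class BudgetExceeded(Exception):
    """Raised when the agent tries to spend past its compute budget."""
    pass

class ThrottledAgent:
    def __init__(self, step_budget):
        self.step_budget = step_budget  # hypothetical "X compute power"
        self.steps_used = 0

    def spend(self, n=1):
        # Account for n units of work; refuse to exceed the budget.
        self.steps_used += n
        if self.steps_used > self.step_budget:
            raise BudgetExceeded("compute budget exhausted")

    def search(self, candidates, goal):
        # Toy stand-in for cognition: a linear scan where each
        # comparison costs one step of the budget.
        for c in candidates:
            self.spend()
            if c == goal:
                return c
        return None

# Within budget: the goal is found before the budget runs out.
agent = ThrottledAgent(step_budget=3)
print(agent.search([1, 2, 3, 4, 5], 3))  # -> 3

# Beyond budget: the same agent, asked a harder question, is cut off.
agent2 = ThrottledAgent(step_budget=3)
try:
    agent2.search([1, 2, 3, 4, 5], 5)
except BudgetExceeded:
    print("halted by throttle")
```

Of course, whether a step budget translates into a predictable "intelligence level Y" is exactly the open question the comment raises; the sketch only shows that the budget itself is easy to enforce mechanically.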