David3 comments on My Bayesian Enlightenment - Less Wrong

Post author: Eliezer_Yudkowsky 05 October 2008 04:45PM





Comment author: David3 08 October 2008 09:46:31AM 0 points

Ben,

Using your analogy, I was thinking more along the lines of reliably building a non-super weapon in the first place. Also, I wasn't suggesting that F would be a module, but rather that FAI (the theory) could be easier to figure out via a non-"superlative" AI, after which point you'd _then_ attempt to build the superweapon according to FAI, having had key insights into what morality is.

Imagine OpenCogPrime has reached human-level AI. Presumably you could teach it morality and moral judgements much as you would teach a human. At that point, you could actually look inside at the AtomTable and have a concrete mathematical representation of morality. You could even trace what's going on during judgements. Try doing the same by introspecting into your own thoughts.
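To make the "trace a judgement in the AtomTable" idea concrete, here is a toy sketch in Python. This is a hypothetical illustration, not the real OpenCog API: the `Atom`/`AtomTable` classes, atom kinds, and truth values below are made up for the example. The point is just that once a judgement is stored as explicit nodes and links, you can walk the structure it depends on, which you cannot do with your own thoughts.

```python
# Toy sketch (hypothetical, NOT the real OpenCog API): a minimal "atom
# table" in which a moral judgement is stored as inspectable nodes and
# links, so the structure behind the judgement can be traced afterwards.

class Atom:
    def __init__(self, kind, name, truth=1.0, out=()):
        self.kind = kind        # e.g. "ConceptNode", "EvaluationLink"
        self.name = name
        self.truth = truth      # toy stand-in for a truth value
        self.out = list(out)    # atoms this atom links to

class AtomTable:
    def __init__(self):
        self.atoms = []

    def add(self, kind, name, truth=1.0, out=()):
        atom = Atom(kind, name, truth, out)
        self.atoms.append(atom)
        return atom

    def trace(self, atom, depth=0):
        """Return an indented listing of the atoms a judgement rests on."""
        lines = ["  " * depth + f"{atom.kind} {atom.name!r} (tv={atom.truth})"]
        for child in atom.out:
            lines.extend(self.trace(child, depth + 1))
        return lines

table = AtomTable()
harm = table.add("ConceptNode", "causes-harm", truth=0.9)
act = table.add("ConceptNode", "breaking-a-promise")
judgement = table.add("EvaluationLink", "is-wrong", truth=0.8, out=[act, harm])

for line in table.trace(judgement):
    print(line)
```

Running the sketch prints the judgement atom followed by the two concept atoms it links to, each one line, indented by depth — a crude analogue of inspecting a concrete representation of a moral judgement.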