Vaniver comments on Debunking Fallacies in the Theory of AI Motivation - LessWrong
I agree with everything in this comment up to:
This doesn't appear to be correct given that you can always transform functional programs into imperative programs and vice versa.
I've never heard that you can program in functional languages without testing, relying only on type checking to ensure correct behavior.
In fact, AFAIK, Haskell, the most popular pure functional programming language, is bad enough in this respect that you actually have to test all non-trivial programs for memory leaks: except in special cases, you cannot reason about a program's memory allocation behavior from its source code and the language specification, because that behavior depends on implementation-specific and largely undocumented details of the compiler and the runtime.
Anyway, this memory allocation issue may be specific to Haskell, but in general, as I understand it, nothing in the functional paradigm guarantees a higher level of correctness than the imperative paradigm.
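A standard illustration of this point is the lazy left fold. The two functions below are a minimal sketch (not from the original comment) of how allocation behavior can be invisible in the source: `foldl` and `foldl'` have identical types and produce identical results, yet can differ drastically in memory use, and whether the lazy version actually leaks depends on compiler settings such as GHC's strictness analysis at `-O`.

```haskell
import Data.List (foldl')

-- Lazy foldl: the accumulator is not forced at each step, so this can
-- build a chain of a million unevaluated (+) thunks before anything is
-- added. Compiled without optimization, that chain lives on the heap;
-- with -O, GHC's strictness analysis may eliminate it -- exactly the
-- kind of implementation-dependent behavior the comment describes.
lazySum :: Integer
lazySum = foldl (+) 0 [1 .. 1000000]

-- Strict foldl': the accumulator is forced at each step, so this runs
-- in constant space regardless of optimization level.
strictSum :: Integer
strictSum = foldl' (+) 0 [1 .. 1000000]

main :: IO ()
main = print (lazySum == strictSum)
```

Nothing in the type or the source of `lazySum` signals the difference; you have to know the evaluation strategy and the compiler's behavior, which is why such leaks are found by testing rather than by reading the program.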
"Certain classes of errors" is meant to be read as a very narrow claim, and I'm not sure that it's relevant to AI design / moral issues. Many sorts of philosophical errors seem to be type errors, but it's not obvious that typechecking is the only solution to that. I was primarily drawing on this bit from Programming in Scala, and in rereading it I realize that they're actually talking about static type systems, which is an entirely separate thing. Editing.
Ok, sorry for being nitpicky.
In case it wasn't clear, thanks for nitpicking, because I was confused and am not confused about that anymore.