
paper-machine comments on Why AI may not foom - Less Wrong Discussion

23 Post author: John_Maxwell_IV 24 March 2013 08:11AM


Comments (78)

Comment author: [deleted] 24 March 2013 09:31:28PM 8 points [-]

I find the use of schematic differential equations, as if they actually meant something, to be horrifically bad. Yudkowsky's original point in Hard Takeoff was that there is no a priori reason to expect that an agent capable of recursive self-improvement (RSI) should improve at a rate that humans can react to.

Even naive dimensional analysis is enough to show that these equations don't mean anything.
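[To illustrate what "naive dimensional analysis" means here: the toy model below, the unit U for "intelligence," and the check itself are a hypothetical sketch added for clarity, not equations from the post. The idea is that both sides of an equation must carry the same physical dimensions, which constrains what its constants can mean.]

```python
# Hedged sketch of naive dimensional analysis: represent each quantity's
# dimensions as a dict of base-dimension exponents, then check that both
# sides of a toy growth equation dI/dt = k * I balance dimensionally.
# I is "intelligence" in an arbitrary unit U; t is time in seconds.

def dmul(a, b):
    """Multiply two dimension dicts by adding exponents."""
    keys = set(a) | set(b)
    return {k: a.get(k, 0) + b.get(k, 0)
            for k in keys if a.get(k, 0) + b.get(k, 0) != 0}

def dpow(a, n):
    """Raise a dimension dict to an integer power."""
    return {k: v * n for k, v in a.items()}

I = {"U": 1}                   # "intelligence", arbitrary unit U
t = {"s": 1}                   # time, seconds
lhs = dmul(I, dpow(t, -1))     # dI/dt has dimensions U / s

# For dI/dt = k * I to balance, k must carry dimensions 1/s:
k = {"s": -1}
rhs = dmul(k, I)
print(lhs == rhs)              # True: balances only because k has units 1/s
```

The point of such a check is that any constant like k smuggles in a timescale, and nothing in a schematic model pins that timescale down.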

Comment author: John_Maxwell_IV 26 March 2013 06:50:08AM 1 point [-]

I think the use of equations is fine as long as you don't put more weight into them than into the words. Ultimately, as I said, it's all very speculative. Equations represent model-based thinking, not association-based reasoning or reasoning by analogy. I tend to think that model-based thinking is typically more useful than the other two, but yes, if you're the sort of person who says "if it's an equation, it must be right," then you shouldn't do that here.

Even naive dimensional analysis is enough to show that these equations don't mean anything.

Go on...