WoodSwordSquire comments on Stupid Questions, 2nd half of December - Less Wrong Discussion

Post author: Bound_up 23 December 2015 05:31AM 2 points

Comment author: WoodSwordSquire 29 December 2015 06:19:39PM 0 points

Would an AI that simulates a physical human brain be less prone to FOOM than a human-level AI that doesn't bother simulating neurons?

It sounds like it might be harder for such an AI to FOOM, since it would have to understand the physical brain well enough to improve on its simulated version. If such an AI exists at all, the neuroscience needed to build it would probably be written down somewhere, so a FOOM could still happen if you simulated someone smart enough to learn it (or simulated one of the people who built the AI in the first place). The AI should at least be boxable if it doesn't know much about neurology or programming, though.

Maybe the catch is that a boxed human simulation that can't self-modify isn't very useful. It'd be good as assistive technology or as a path to immortality, but you probably can't learn much about any other kind of AI by studying a simulated human. (The things you could learn from it are mostly things you could learn just as easily by studying a physical human.)