shminux comments on AI risk, new executive summary - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (76)
Sure, but I find "can't understand" sort of fuzzy as a concept. E.g., I wouldn't say I "understand" compactification and Calabi-Yau manifolds the same way I understand sheet music (or the same way I understand the word "green"), but I do understand them all in some way.
It seems unlikely to me that there exist concepts that can't be at least broadly conveyed via some combination of those. My intuition is that existing human languages cover, with their descriptive power, the full range of explainable things.
For example: it seems unlikely there exists a law of physics that cannot be expressed as an equation. It seems equally unlikely there exists an equation I would be totally incapable of working with. Even if I'll never have the insight that led someone to write it down, if you give it to me, I can use it to do things.
My intuition is the exact opposite.
I can totally imagine that some models are not reducible to equations, but that's not the point, really.
Unless this "use" requires more brainpower than you have... You might still be able to work with some simplified version, but you'd need transhuman intelligence to "do things" with the full equation.
But that seems incredibly nebulous. What is the exact failure mode?