lessdazed comments on Why an Intelligence Explosion might be a Low-Priority Global Risk - Less Wrong
I don't think that's the right reference class. We're not asking whether something is sufficient, but whether it is likely.
If you can figure this out, but a superintelligent AI couldn't assign it the probability it deserves and then investigate and experiment with it, does that make you super-superintelligent?
Also, isn't the random noise hypothesis being privileged here? Likewise for "our tendency to be biased and act irrationally might partly be a trade off between plasticity, efficiency and the necessity of goal-stability."
Why do these properties of expert systems matter, given that no one is discussing combining them?
There's progress along these lines.
"Inadvertently" gives the wrong connotations.
What if the AI changed some of its parameters?
--John von Neumann