XiXiDu comments on Non-orthogonality implies uncontrollable superintelligence - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
How do you know that you have to make a decision now? You don't know when AGI is going to be invented. You don't know if it will be a quick transition from expert systems toward general reasoning capabilities, or if AGI will be constructed piecewise over a longer period of time. You don't know if all that you currently believe to know will be rendered moot in the future. You don't know if the resources that you currently spend on researching friendly AI are a wasted opportunity, because everything you could possibly come up with may be much easier to come by in the future.
All that you really know at this time is that smarter-than-human intelligence is likely possible, and that something smarter than you is hard to control.
How do you know we don't? Figuring out whether there is urgency or not is itself one of those questions whose answer we need to estimate... somehow.