FAWS comments on Safe questions to ask an Oracle? - Less Wrong

2 Post author: Stuart_Armstrong 27 January 2012 06:33PM



Comment author: FAWS 28 January 2012 05:02:36PM 6 points

Please excuse me if I'm missing something, but why is whether an Oracle AI can be safe considered such an important question in the first place? One of the main premises behind treating unfriendly AI as a major existential risk is that someone will eventually build one if nothing is done to stop it. Oracle AI doesn't seem to address that: one particular AGI that doesn't itself destroy the world doesn't automatically save the world. Or is the intention to ask the Oracle how best to stop unfriendly AI and/or build friendly AI? In that case it would be important to determine whether those questions and any sub-questions can be asked safely, but why would comparatively unimportant other questions, ones that e.g. only save a few million lives, even matter?