
Stuart_Armstrong comments on Tools want to become agents - Less Wrong Discussion

Post author: Stuart_Armstrong | 04 July 2014 10:12AM | 12 points


Comment author: Stuart_Armstrong | 07 July 2014 11:50:33AM | 1 point

...If you are dealing with an entity that can't add context (or ask for clarifications) the way a human would.

Can we note that you've moved from "the problem is not open-ended" to "the AGI is programmed in such a way that the problem is not open-ended"? That shift is the whole of the problem.

Comment author: TheAncientGeek | 07 July 2014 12:10:19PM (edited) | 2 points

In a sense. Non-openness is a non-problem for fairly limited AIs, because their limitations prevent them from having a wide search space that would need to be narrowed down. Non-openness is also part of, or an implication of, an ability that is standardly assumed in a certain class of AGIs, namely those with human-level linguistic ability: to understand a sentence correctly is to narrow down its space of possible meanings.

Only AIXIs have the kind of open-endedness that would need additional measures to narrow it down.

They are no threat at the moment, and the easy answer to AI safety might be not to use them... just as we don't build hydrogen-filled airships.