Elo comments on Black box knowledge - Less Wrong Discussion

2 Post author: Elo 03 March 2016 10:40PM

Comments (7)

Comment author: Elo 05 March 2016 11:11:49AM  1 point

trust that a toaster will toast bread

Yes, this is a retrospective example: once I already know what happens, I can say that a toaster turns bread into toast. If you start to make predictive examples, things get more complicated, as you mention.

It still helps to have an understanding of what you don't know. And in the case of AI, an understanding of what you are deciding not to know (for now) can help you weigh the risk involved in playing with an AI of unclear potential.

I.e., "AI with defined CEV -> what happens next -> humans are fine" is a chain from which it seems like a bad idea to expect a good outcome. Maybe we can instead work on a better process for defining CEV.