
RedMan comments on Could utility functions be for narrow AI only, and downright antithetical to AGI? - Less Wrong Discussion

Post author: chaosmage 16 March 2017 06:24PM



Comment author: RedMan 20 March 2017 12:53:57PM

In answer to the post title: yes. See this discussion of quantum interference (decoherence) in human decision-making: http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0168045

Adjust the utility function by roughly ±25% against the most uncertain prospect, in favor of the prospect directly opposite it. Then adjust by a further ±5% or more as additional information becomes available.

Somebody should plug that 25% into a back-prop algorithm already.
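A minimal sketch of what the suggested 25% adjustment might look like in code. Everything here is an assumption on my part: the function name, the use of per-prospect uncertainty scores, and the reading of "directly opposite" as the least uncertain prospect are all illustrative, not taken from the comment or the linked paper.

```python
import numpy as np

def interference_adjusted_utility(utilities, uncertainties, bias=0.25):
    """Hypothetical sketch: shift utility away from the most uncertain prospect.

    utilities, uncertainties: 1-D arrays of equal length, one entry per prospect.
    bias: fractional shift; 0.25 corresponds to the 25% figure in the comment.
    """
    u = np.asarray(utilities, dtype=float)
    s = np.asarray(uncertainties, dtype=float)
    most_uncertain = int(np.argmax(s))
    # Assumption: treat the least uncertain prospect as the one
    # "directly opposite" the most uncertain prospect.
    opposite = int(np.argmin(s))
    adjusted = u.copy()
    adjusted[most_uncertain] -= bias * abs(u[most_uncertain])  # penalize uncertainty
    adjusted[opposite] += bias * abs(u[opposite])              # favor its opposite
    return adjusted

# Three equally attractive prospects with different uncertainty levels:
print(interference_adjusted_utility([1.0, 1.0, 1.0], [0.1, 0.9, 0.5]))
```

The adjusted vector could then be used as a target (or a reward-shaping term) in a gradient-based learner, which is one way to read the "back-prop" suggestion above.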