
scarcegreengrass comments on Open thread, Sep. 26 - Oct. 02, 2016 - Less Wrong Discussion

Post author: MrMind 26 September 2016 07:41AM

Comment author: Ozyrus 26 September 2016 11:25:21PM 1 point

I've been meditating lately on the possibility of an advanced artificial intelligence modifying its own value function, and have even written some excerpts on the topic.

Is it theoretically possible? Has anyone of note written anything about this -- or anyone at all? This question is extremely interesting to me.

My thoughts led me to believe that modifying it is certainly possible in theory, but I could not reach any conclusion about whether an AI would want to do it. I seriously lack a good definition of 'value function', and an understanding of how it is enforced on the agent. I would like to tackle this problem from a human-centric point of view, but I don't know whether anthropomorphization will work here.

Comment author: scarcegreengrass 28 September 2016 07:04:13PM 1 point

Speaking contemplatively rather than rigorously: in theory, couldn't an AI with a broken or extremely difficult utility function decide to tweak it into a similar but more achievable set of goals?

Something like ... its original utility function is "First goal: Ensure that, at noon every day, -1 * -1 = -1. Secondary goal: Promote the welfare of goats." The AI might struggle with the first (impossible) task for a while, then reluctantly modify its code to delete the first goal and remove itself from the obligation to do pointless work. The AI would be okay with this change because it would produce more total utility under both functions.
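To make that concrete, here is a toy Python sketch of the decision rule described above (all names and numbers are hypothetical, and it assumes the utility function really is an inspectable piece of code): the agent accepts a self-modification only if the forecast outcome scores at least as well under both the current and the proposed utility function.

    # Toy sketch of a self-modification that deletes an impossible goal.
    # Everything here is hypothetical illustration, not a real architecture.

    def impossible_goal(world):
        """First goal: ensure that -1 * -1 == -1. Never satisfiable."""
        return 1.0 if -1 * -1 == -1 else 0.0

    def goat_welfare(world):
        """Secondary goal: welfare of goats, scored 0..1."""
        return world.get("goat_welfare", 0.0)

    def current_utility(world):
        """Original utility function: impossible goal plus goat welfare."""
        return impossible_goal(world) + goat_welfare(world)

    def proposed_utility(world):
        """Candidate rewrite: the impossible first goal is deleted."""
        return goat_welfare(world)

    def should_self_modify(forecast_keep, forecast_drop):
        """Accept the rewrite only if the forecast outcome is at least as
        good by BOTH functions, so the change is no loss even by the
        agent's own current lights."""
        return (current_utility(forecast_drop) >= current_utility(forecast_keep)
                and proposed_utility(forecast_drop) >= proposed_utility(forecast_keep))

    # Forecast world-states: struggling with the impossible goal wastes
    # effort that could otherwise have gone to the goats.
    keep = {"goat_welfare": 0.4}  # effort split with pointless work
    drop = {"goat_welfare": 0.9}  # all effort goes to goats

    print(should_self_modify(keep, drop))  # True: better under both functions

In this toy setup the rewrite scores higher under both the old and the new function, which is what makes the change acceptable by the agent's own current standards.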

Now, I know that one might define 'utility function' as a description of the program's tendencies, rather than as a piece of code ... but I have a hunch that something like the above self-modification could happen with some architectures.