Marv K10

I agree on both counts. You're right that I should model the alignment of the system as well as its intelligence. I suppose alignment could be modeled as minimizing the distance between high-dimensional vectors representing the players' and the AI's values. Each user (and the AI, too) would have a value vector associated with it; each user's cost function could then incorporate how much they care about their own alignment with the rest of the users, and the AI's cost function would need to be tuned so that it is sufficiently aligned by the time it reaches a critical threshold of intelligence. That way, you could express how important it is that the AI is aligned as a function of its intelligence.
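To make the idea concrete, here is a minimal toy sketch of what I mean. Everything here is an illustrative assumption, not a claim about the right formalization: alignment is taken as negative mean Euclidean distance between value vectors, and the "critical threshold" is modeled with a logistic gate on intelligence.

```python
import numpy as np

def alignment(v_agent, v_users):
    """Alignment as negative mean Euclidean distance between the agent's
    value vector and each user's value vector (a hypothetical metric)."""
    v_agent = np.asarray(v_agent, dtype=float)
    dists = [np.linalg.norm(v_agent - np.asarray(v, dtype=float)) for v in v_users]
    return -float(np.mean(dists))

def ai_cost(v_ai, v_users, intelligence, threshold=5.0, weight=10.0):
    """Toy cost for the AI: misalignment is penalized more heavily once
    intelligence exceeds a critical threshold (illustrative logistic gate)."""
    gate = 1.0 / (1.0 + np.exp(-(intelligence - threshold)))
    return weight * gate * (-alignment(v_ai, v_users))

# Two users with similar values; the AI's cost for a given value vector
# grows with both misalignment and (gated) intelligence.
users = [np.array([1.0, 0.0]), np.array([0.8, 0.2])]
```

The point of the gate is just to express "alignment matters more as intelligence passes the threshold"; any monotone function of intelligence would do.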

Marv K40

Nice writeup. I wasn't even aware that k-means clustering can be viewed through the Variational Bayes framework. In case more perspectives are useful to any readers: when I first tried to learn about this, I found the Pyro Introduction very helpful; because it is split up over a lot of files, I put together these slides for Bayesian Neural Networks, which also start out with a motivation for Variational Bayes.
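For readers who want a quick intuition for the k-means connection: the usual route is that k-means falls out of the E-step of an isotropic Gaussian mixture in the small-variance limit, where the soft responsibilities collapse to hard nearest-centroid assignments. A tiny sketch of just that limit (not the full VB derivation):

```python
import numpy as np

def responsibilities(X, mu, var):
    """E-step responsibilities for an isotropic Gaussian mixture with
    shared variance `var`. As var -> 0 these tend to one-hot
    nearest-centroid assignments, i.e. the k-means assignment step."""
    d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)   # (n, k) squared distances
    logits = -d2 / (2.0 * var)
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    r = np.exp(logits)
    return r / r.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
# Two well-separated blobs around (0, 0) and (3, 3).
X = np.concatenate([rng.normal(0.0, 0.1, (20, 2)), rng.normal(3.0, 0.1, (20, 2))])
mu = np.array([[0.0, 0.0], [3.0, 3.0]])

soft = responsibilities(X, mu, var=1.0)    # genuinely soft assignments
hard = responsibilities(X, mu, var=1e-6)   # effectively k-means assignments
```

With `var=1.0` the responsibilities are smooth; with a tiny variance they are numerically one-hot.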

Marv K10

I've been thinking about the alignment of subsystems in a very similar style and am really excited to see someone else thinking along these lines. I started writing a comment with my own thoughts on this approach, but it quickly got out of hand, so I made a separate post: https://www.lesswrong.com/posts/AZfq4jLjqsrt5fjGz/formalizing-alignment

I'd be keen to get any sort of feedback.

Marv K10

Thanks for the pointers! The overviews in both sources are great. I especially like Rumelhart's Story Grammar. Though from what I gather from Mark Riedl's post, the field is mostly about the structure/grammar inherent to stories as objects that exist pretty much in a vacuum, and does not explicitly focus on connecting these structures to models of the agents that communicate using these stories.