eli_sennesh comments on Open Thread, May 4 - May 10, 2015 - Less Wrong Discussion

3 Post author: Gondolinian 04 May 2015 12:06AM

Comment author: estimator 10 May 2015 10:15:23PM 2 points

Short answer: use differential entropy and differential mutual information.

Differential entropy and Shannon entropy are both instances of a more general concept: Shannon entropy applies to discrete distributions, differential entropy to absolutely continuous ones.
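A minimal sketch of the two cases, using the closed-form differential entropy of a Gaussian for the continuous side (both in nats so they are comparable; the function names here are my own):

```python
import math

# Shannon entropy of a discrete distribution (in nats, so the
# result is directly comparable with the differential case).
def shannon_entropy(probs):
    return -sum(p * math.log(p) for p in probs if p > 0)

# Differential entropy of a Gaussian N(mu, sigma^2), from the
# closed form h = 0.5 * ln(2 * pi * e * sigma^2).
def gaussian_differential_entropy(sigma):
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

print(shannon_entropy([0.5, 0.5]))         # ln 2 ≈ 0.693 nats (fair coin)
print(gaussian_differential_entropy(1.0))  # 0.5 * ln(2*pi*e) ≈ 1.419 nats
```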

KL-divergence is really about approximation, not about dependence between variables; it would be strange to use KL-divergence for your purposes, since it is non-symmetric. That said, KL-divergence is tightly connected to mutual information:

I(X;Y) = KL( p(x,y) ‖ p(x) p(y) )
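A quick numerical check of the identity I(X;Y) = KL(p(x,y) ‖ p(x)p(y)) on a toy joint distribution of two binary variables (the numbers are made up for illustration):

```python
import math

# Toy joint distribution p(x, y) over two binary variables;
# mass concentrates on the diagonal, so X and Y are dependent.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Marginals p(x) and p(y).
px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}
py = {y: sum(p for (_, b), p in joint.items() if b == y) for y in (0, 1)}

# KL( p(x,y) || p(x)p(y) ) -- by the identity, this is exactly I(X;Y).
mi = sum(p * math.log(p / (px[x] * py[y])) for (x, y), p in joint.items())
print(mi)  # positive, since X and Y are dependent
```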

Differential mutual information is a measure of variables' dependence; differential entropy is a measure of... what? In the discrete case we have the encoding interpretation, but it breaks down in the continuous case. The fact that it can be negative shouldn't bother you, because its interpretation is unclear anyway.
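A concrete instance of the negativity: for Uniform(0, w) the differential entropy is ln(w), which goes negative as soon as w < 1 (a sketch; the helper name is my own):

```python
import math

# Differential entropy of Uniform(0, w) is ln(w). For w < 1 it is
# negative, which has no "bits needed to encode a sample" reading --
# unlike Shannon entropy, which is always non-negative.
def uniform_differential_entropy(w):
    return math.log(w)

print(uniform_differential_entropy(0.5))  # ln(0.5) ≈ -0.693
print(uniform_differential_entropy(2.0))  # ln(2)   ≈  0.693
```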

As for (differential) mutual information, it can't be negative, as you can see from the formula above (KL-divergence is non-negative). Nothing weird is going on here.

Comment author: [deleted] 11 May 2015 02:26:46AM 0 points

Brilliant! Thank you so much!