Jose Sepulveda

Comments
Oh I see that, thanks! :) Super interesting work. I'm testing its application to recommender systems.

Looking at your code, I see you still add an L1 penalty to the loss. Is this still necessary? In my own experiments I've noticed that top-k achieves sparsity on its own, without the need for L1.
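
To illustrate what I mean, here is a minimal sketch (not your code, just my own assumption of a PyTorch-style top-k step with hypothetical names): keeping only the k largest pre-activations per example fixes the number of active latents at exactly k, so sparsity doesn't depend on an L1 term at all.

```python
import torch

def topk_activations(pre_acts: torch.Tensor, k: int) -> torch.Tensor:
    """Zero out all but the k largest pre-activations in each row."""
    values, indices = torch.topk(pre_acts, k, dim=-1)
    sparse = torch.zeros_like(pre_acts)
    sparse.scatter_(-1, indices, values)  # keep only the top-k entries
    return sparse

# Example: batch of 2 examples, 8 latent units, k = 3.
pre_acts = torch.randn(2, 8)
acts = topk_activations(pre_acts, k=3)
print((acts != 0).sum(dim=-1))  # tensor([3, 3]) -- L0 sparsity is exact, no L1 needed
```

In this setup the reconstruction loss alone seems sufficient, which is why I'm curious whether the L1 term in your code is doing something else (e.g. shrinking activation magnitudes) rather than enforcing sparsity.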