All of Jose Sepulveda's Comments + Replies

Oh I see that, thanks! :) Super interesting work. I'm testing its application to recommender systems.

Looking at your code, I see you still add an L1 penalty to the loss. Is this still necessary? In my own experiments, I've noticed that top-k is able to achieve sparsity on its own, without the need for L1.
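For reference, here's a minimal sketch of the top-k mechanism I mean (hypothetical PyTorch code, not taken from your repo): keeping only the k largest pre-activations per sample guarantees exactly k nonzero features, so sparsity holds by construction rather than via a penalty term.

```python
import torch

def topk_activation(pre_acts: torch.Tensor, k: int) -> torch.Tensor:
    """Keep only the k largest pre-activations per sample; zero the rest.

    Sparsity is enforced by construction (exactly k active features
    per input), so no L1 term is needed to induce it.
    """
    values, indices = pre_acts.topk(k, dim=-1)
    acts = torch.zeros_like(pre_acts)
    acts.scatter_(-1, indices, values)
    return acts

# Example: batch of 2 samples, 6 features, keep the top 2 per sample
x = torch.randn(2, 6)
print(topk_activation(x, k=2))  # exactly 2 nonzero entries per row
```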

Bart Bussmann
Although the code has the option to add an L1 penalty, in practice I set the l1_coeff to 0 in all my experiments (see main.py for all hyperparameters).
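In other words, the loss effectively reduces to pure reconstruction error. A minimal sketch of what such an optional penalty might look like (l1_coeff is the hyperparameter named above; the MSE reconstruction term and everything else is illustrative, not the repo's actual code):

```python
import torch

def sae_loss(x: torch.Tensor, x_hat: torch.Tensor,
             acts: torch.Tensor, l1_coeff: float = 0.0) -> torch.Tensor:
    """Reconstruction loss with an optional L1 sparsity penalty.

    With l1_coeff = 0, as in the experiments described above, the
    L1 term vanishes and sparsity comes from top-k alone.
    """
    recon = (x_hat - x).pow(2).mean()          # MSE reconstruction error
    l1 = acts.abs().sum(dim=-1).mean()         # mean L1 norm of activations
    return recon + l1_coeff * l1
```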