I reviewed this paper. It is a good paper, but there is more hype than substance, or at least I think so. TL;DR:

I make four major critiques of the paper:

  1. MLPs (Multi-Layer Perceptrons) have learnable activation functions as well. This depends on what you call an activation function (see the PReLU sketch below).
  2. The content of the paper does not justify the name Kolmogorov-Arnold Networks (KANs). The Kolmogorov-Arnold theorem is basically unused in the paper. It should have been called spline-activation deep neural networks.
  3. KANs are MLPs with a spline basis as the activation function: the proposed KAN can be reinterpreted as an ordinary MLP whose activations are fixed spline-basis expansions followed by learnable linear layers (see the equivalence sketch below).
  4. KANs do not beat the curse of dimensionality. If they did, MLPs would too.
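
To make point 1 concrete, here is a minimal PyTorch sketch (my own illustration, not code from the paper or the review): `nn.PReLU` gives an otherwise ordinary MLP a learnable activation parameter, so whether "MLPs have fixed activations" is true depends entirely on definitions. The layer sizes are arbitrary.

```python
import torch.nn as nn

# An ordinary MLP whose activation, PReLU, carries learnable parameters
# (one slope per channel), trained jointly with the weights.
mlp = nn.Sequential(
    nn.Linear(8, 16),
    nn.PReLU(num_parameters=16),  # learnable negative-slope parameters
    nn.Linear(16, 1),
)

# The PReLU slopes appear among the trainable parameters like any weight:
print([name for name, p in mlp.named_parameters()])
```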
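
For point 3, here is a minimal numpy sketch of the equivalence (again my own illustration, assuming uniform knots and a Cox-de Boor evaluation of the B-spline basis; `bspline_basis` and all sizes are hypothetical choices, not the paper's implementation). A KAN layer computes y_j = sum_i phi_ij(x_i) with each phi_ij a learnable spline; expanding every phi_ij in a shared fixed B-spline basis turns the layer into a fixed spline-basis "activation" followed by a linear map, i.e. an MLP layer.

```python
import numpy as np

def bspline_basis(x, grid, k=3):
    """Evaluate the order-k B-spline basis on uniform knots `grid` at points x.

    x: (n,) inputs; grid: (g,) knots. Returns (n, g - k - 1) basis values
    via the Cox-de Boor recursion.
    """
    # Order-0 (piecewise-constant) bases
    b = ((x[:, None] >= grid[None, :-1]) & (x[:, None] < grid[None, 1:])).astype(float)
    for d in range(1, k + 1):
        left = (x[:, None] - grid[None, :-(d + 1)]) / (grid[d:-1] - grid[:-(d + 1)])
        right = (grid[d + 1:] - x[:, None]) / (grid[d + 1:] - grid[1:-d])
        b = left * b[:, :-1] + right * b[:, 1:]
    return b

rng = np.random.default_rng(0)
n, d_in, d_out, g, k = 5, 3, 2, 12, 3
x = rng.uniform(-0.9, 0.9, size=(n, d_in))
grid = np.linspace(-1, 1, g)                 # shared uniform knots
n_basis = g - k - 1
# Learnable spline coefficients c[j, i, :] for each edge (input i -> output j)
c = rng.normal(size=(d_out, d_in, n_basis))

# (a) "KAN view": one learnable 1-D spline per edge, summed per output unit
y_kan = np.zeros((n, d_out))
for j in range(d_out):
    for i in range(d_in):
        y_kan[:, j] += bspline_basis(x[:, i], grid, k) @ c[j, i]

# (b) "MLP view": fixed spline-basis activation, then one linear layer
feats = np.stack([bspline_basis(x[:, i], grid, k) for i in range(d_in)], axis=1)
feats = feats.reshape(n, d_in * n_basis)     # fixed nonlinearity applied to x
W = c.reshape(d_out, d_in * n_basis)         # same coefficients as a weight matrix
y_mlp = feats @ W.T

assert np.allclose(y_kan, y_mlp)             # both views compute the same function
```

The assertion passes because the two views reorder the same sum: the "learnable activation" lives entirely in the linear coefficients once the basis is fixed.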

https://vikasdhiman.info/reviews/KAN_a_review.html