Jerry-Master/KAN-benchmarking

Comparison of performance, like accuracy?

Closed this issue · 4 comments

Is there any plan to provide a comparison of model performance besides efficiency?

Meaningful issue.
I have the same question.
And I think the performance (e.g., accuracy) of an algorithm depends heavily on the task.
For example, I tested function fitting (a few function compositions) with a CKAN and an MLP at the same parameter count, and the MLP performed worse than the KAN.
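A minimal sketch of that kind of parameter-matched comparison, assuming PyTorch. The target function, layer widths, and training settings below are illustrative placeholders, not the commenter's actual setup; the KAN side would be built with whatever implementation you prefer (pykan, a Chebyshev KAN, etc.) to a matching parameter budget:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical compositional target: f(x1, x2) = exp(sin(pi * x1) + x2^2)
def target(x):
    return torch.exp(torch.sin(torch.pi * x[:, 0]) + x[:, 1] ** 2)

x_train = torch.rand(1024, 2) * 2 - 1          # inputs in [-1, 1]^2
y_train = target(x_train).unsqueeze(1)

mlp = nn.Sequential(
    nn.Linear(2, 32), nn.SiLU(),
    nn.Linear(32, 32), nn.SiLU(),
    nn.Linear(32, 1),
)
n_params = sum(p.numel() for p in mlp.parameters())
print(f"MLP parameters: {n_params}")           # build the KAN to this budget

opt = torch.optim.Adam(mlp.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for step in range(2000):
    opt.zero_grad()
    loss = loss_fn(mlp(x_train), y_train)
    loss.backward()
    opt.step()

# Evaluate on fresh samples; compare this MSE against the KAN's
x_test = torch.rand(1024, 2) * 2 - 1
test_mse = loss_fn(mlp(x_test), target(x_test).unsqueeze(1)).item()
print(f"MLP test MSE: {test_mse:.6f}")
```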

There are other repositories for that, like https://github.com/yu-rp/kanbefair, and research has already been done on that front. The conclusion is that KANs underperform on typical deep learning tasks like computer vision and natural language processing, but outperform on other tasks like symbolic learning. That makes them a good discovery for interpretability research, but not so much for performance research.

@Jerry-Master is there a way to see which version of KAN performs best on the major tabular ML benchmarks (regression or classification)? There are cases where you need a proxy similar to picking sub-columns of a table and building mathematical equations from them, as opposed to CV and NLP, where the data is more "rich".

I would say: try. There is no way of knowing for sure before trying. As far as I know, there is no rigorous analysis of the effect of KANs on tabular data, and I would say the same is true for any other deep learning methodology. Tabular data is normally private, and public benchmarks are not representative of the real world. In my experience, you have to try, and every dataset is different.
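As a concrete starting point, here is a minimal "just try it" sketch, assuming scikit-learn. The dataset, model, and evaluation protocol are placeholders; the idea is to swap in a KAN and a parameter-matched MLP and compare them under the same cross-validation:

```python
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Public tabular regression benchmark (downloaded on first use)
X, y = fetch_california_housing(return_X_y=True)

# Baseline MLP; replace with a KAN regressor (wrapped in the sklearn
# estimator API, or an equivalent training loop) for a like-for-like test
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0),
)

# Same 5-fold protocol for every candidate model
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(f"5-fold R^2: mean={scores.mean():.3f}, std={scores.std():.3f}")
```

Whatever the outcome on a public set like this, it only tells you so much; as noted above, private tabular data behaves differently, so the same loop has to be rerun on your own data.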