feat vs few?
echo66 opened this issue · 5 comments
Greetings!
I would like to know if there is any practical difference between the two projects. I'm asking this because testing feat would require a lot more effort than few and, as such, I need to know if it is worth it.
Thanks in advance!
Thanks for your interest! We'll be releasing a preprint that describes Feat in the coming days. The basic differences are:
- Feat optimizes a population of models; Few optimizes one
- Feat supports multi-type features in one model; Few does single type
- Feat has gradient descent built in, we haven't added that to Few yet
Feat is definitely harder to install and we haven't made an official release yet, so you might want to start with Few and go from there. I'll keep this thread updated once we get some tangible empirical comparisons between the two.
Gradient descent??? I'm probably a "little bit" behind the state of the art regarding evolutionary approaches, but... doesn't gradient descent require your function to be differentiable? Evolutionary approaches don't require differentiability, right?
Feat uses gradient descent to learn the constants of the subset of features that are differentiable. It is a local search nested within the larger evolutionary search over feature forms.
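To make the idea concrete, here is a minimal sketch (not Feat's actual code): suppose the evolutionary search has proposed a differentiable feature form, say `f(x) = c0 + c1 * x`. The constants `c0` and `c1` can then be tuned locally by gradient descent on squared error, while the form itself keeps evolving at the outer level. The function name and hyperparameters below are illustrative assumptions.

```python
# Hypothetical sketch of local constant optimization inside an outer
# evolutionary search. The feature form f(x) = c0 + c1 * x is assumed fixed
# (chosen by evolution); only its constants are fit by gradient descent.

def fit_constants(xs, ys, lr=0.05, steps=2000):
    c0, c1 = 0.0, 0.0  # constants of the (fixed) feature form
    n = len(xs)
    for _ in range(steps):
        # gradients of mean squared error w.r.t. c0 and c1
        g0 = sum(2 * (c0 + c1 * x - y) for x, y in zip(xs, ys)) / n
        g1 = sum(2 * (c0 + c1 * x - y) * x for x, y in zip(xs, ys)) / n
        c0 -= lr * g0
        c1 -= lr * g1
    return c0, c1

# data generated from y = 1 + 2x; gradient descent recovers c0 ~ 1, c1 ~ 2
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [1 + 2 * x for x in xs]
c0, c1 = fit_constants(xs, ys)
```

Non-differentiable parts of a model (e.g. features built from comparison or logical operators) would simply be skipped by this inner loop and left to the evolutionary search alone.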
Here is the arXiv preprint I mentioned.