Encourage use of same datasets across implementations
grantmcdermott opened this issue · 7 comments
In looking over several examples, I've come around to the idea that we should strongly encourage re-use of the same datasets across language implementations (i.e., for a specific page). Advantages include:
- Common input and output for direct comparisons.
- Avoids duplicating the task write-up (we can write it once under the implementation header) and cuts down on unnecessary in-code commenting (which, frankly, I think we have too much of atm and personally find quite distracting).
I recently did this for the Collapse a Dataset page and, personally, think it's a lot easier to read and compare the code examples now. @vincentarelbundock's Rdatasets is a very useful resource in this regard, since it provides a ton of datasets that can be directly read in as CSVs. (Both Julia and Python statsmodels have nice wrappers for it too.)
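For anyone who hasn't tried it, pulling an Rdatasets table into Python really is a one-liner. A minimal sketch (the mtcars dataset and URL are just an illustration of the pattern, not a prescribed choice for any page):

```python
# Read an Rdatasets table directly from its CSV mirror
# (dataset choice and URL here are purely illustrative)
import pandas as pd

mtcars = pd.read_csv(
    "https://vincentarelbundock.github.io/Rdatasets/csv/datasets/mtcars.csv"
)

# Or via the statsmodels wrapper, which fetches the same files
import statsmodels.api as sm

mtcars = sm.datasets.get_rdataset("mtcars", "datasets").data
```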
Question: Do others agree? If so, I'll add some text to the Contributing page laying this out.
PS. I also think we should discourage use of really large files, especially since this is going to become a drag on our GitHub Actions builds. There is one big offender here that I'll try to swap out when I get a sec. (Sorry, that's my student and I should have warned her about it.)
I definitely agree with this. I can add something to the Contributing page (although I'm not sure how closely that gets read anyway).
I agree, and this has generally been my approach. (Although in some cases I've used the nice sklearn built-in data generation functions like make_regression, because they make it really easy to demonstrate something with a generated dataset.)
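For reference, a make_regression call with arbitrary parameters is about as short as loading a file:

```python
# Small synthetic regression dataset; parameter values here are arbitrary
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=100, n_features=2, noise=10.0, random_state=42)
```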
Agree about the problematic page; it could be better. When I was doing Python versions of some geospatial pages, I took one look at that and thought 'no'.
@NickCH-K Okay, cool. I'll let you update then. Feel free to close this when you do. I think people will get the idea once we manage to set the standard across a couple pages.
@aeturrell Ya, I agree it can be tough to balance this against the benefits of generated datasets. On that note, I just saw in the commit log that you added a Python SVM example (nice!). Do you know of a good off-the-shelf dataset that we could use here?
FWIW, Julia uses its own mirror of Rdatasets (which is extremely out of date now). I think that John Myles White, who originally forked it years ago, was worried about stability. This was reasonable at the time because he was one of the very first people to discover Rdatasets.
But I think that overall it's been pretty stable. Since I stopped actively contributing to Python statsmodels, I think that Josef has only contacted me once about a problem, and we got it fixed pretty quickly.
Oh, thanks for the heads-up @vincentarelbundock. I should double-check that the example I used above actually works then! Do you think there's any chance of them updating it (have you contacted them)? I'm happy to if not...
EDIT: Scratch that, I've just read that they only want to bundle some example datasets and that any more would have to be added manually. I'll see about putting in a PR.
I think my previous post left too much ambiguity: My strong intention is to keep everything in Rdatasets rock stable, and to only remove/modify data if the underlying package does it too, or if I am asked directly.
So far, this has only happened once or twice in many years, and not recently. So I wouldn't worry about the examples you posted. They should still work.
> Do you know of a good off-the-shelf dataset that we could use here?
Good question. To replace what's there, we'd want linear and non-linear binary classification with balanced classes. A quick look at the sklearn datasets module (which is also where the make_* generation functions live) suggests this might work for simple binary classification with almost balanced classes. I haven't checked Rdatasets, though.
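If we do end up going the generated route instead, a rough sketch of balanced binary problems from sklearn.datasets (this just illustrates the generation-function option, not necessarily the specific dataset linked above) could look like:

```python
# Balanced synthetic binary classification problems from sklearn.datasets
# (illustrative only; not necessarily the dataset referenced above)
from sklearn.datasets import make_classification, make_moons

# Roughly linearly separable, balanced classes
X_lin, y_lin = make_classification(
    n_samples=200, n_features=2, n_informative=2, n_redundant=0,
    weights=[0.5, 0.5], class_sep=2.0, random_state=42,
)

# Non-linear (interleaving half-moons), also balanced
X_nonlin, y_nonlin = make_moons(n_samples=200, noise=0.1, random_state=42)
```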