How to grab the SOM tarball for the Shiny apps?
I've uploaded a simple app to get us rolling and to see how badly we'll be bogged down by the size of the tarball. How do you suggest we connect the Shiny scripts to the SOM tarball data? A server copy? A call to Aurora?
Right now the scripts simply reference my local tarball .csv, so obviously the app won't work anywhere else yet.
@piersond - I generally do not like to store binary data on GitHub, and I also generally agree with @brunj7 that separating data and code is a best practice. In this case, though, I think the most efficient and logical approach is to store the data in the GitHub repo where the Shiny app lives. If we switch from .csv to .Rds, the file size is only ~2.5 MB (even if that doubles, it is still totally manageable), and .Rds loads faster than .csv, which is a bonus. If this becomes unwieldy, we can always move the data to Aurora or somewhere else down the road, but I think this is a good approach for now. When I generate the tarball, I can send copies to both Google Drive and the Shiny GitHub repo so that folks and the Shiny app are working with the same data...or folks could just clone the Shiny repo.
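For reference, the .csv-to-.Rds conversion described above is a one-liner pair in base R. A minimal sketch — the filenames (`som_tarball.csv`, `som_tarball.Rds`) are placeholders, not the actual tarball names:

```r
# Convert the tarball csv to a compressed, faster-loading .Rds
# (filenames here are hypothetical stand-ins)
som <- read.csv("som_tarball.csv")
saveRDS(som, "som_tarball.Rds")   # gzip-compressed by default

# Later, e.g. inside the Shiny app:
som <- readRDS("som_tarball.Rds")
```

`saveRDS()` preserves column types exactly, so the app also avoids re-parsing dates and factors on every load.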
cc @wwieder
@piersond - more on that...if we go that route, I can stop including the date in the tarball filename and instead rely on GitHub commits and Google Drive's file history to identify the tarball version, so we would not have to change your Shiny code every time the tarball is updated.
@srearl thanks for sharing your thoughts on this. The .Rds route sounds good to me. As I recall, you made a tarball .Rds this week, correct? Can I get our Shiny app rolling by moving that over from Google Drive to the repo? Should it live at the top level of the repo?
Yeah, I have been making both an .Rds and a .csv when tarballing, so grab the latest from Google Drive and you should be good. Top level is fine; Shiny apps are (usually) not deep or file-rich, so sub-directories, for data or otherwise, probably will not be necessary.
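With the .Rds at the top level next to `app.R`, the app can load it once at startup with a relative path, so it works anywhere the repo is cloned or deployed. A minimal sketch — the filename and the table preview are placeholders for the real app:

```r
# app.R -- minimal sketch of a top-level Shiny app reading the tarball .Rds
# "som_tarball.Rds" is a hypothetical filename
library(shiny)

# Loaded once when the app starts; path is relative to the app directory
som <- readRDS("som_tarball.Rds")

ui <- fluidPage(
  titlePanel("SOM data"),
  tableOutput("preview")
)

server <- function(input, output, session) {
  # Placeholder output; the real app would do its own plots/filters
  output$preview <- renderTable(head(som))
}

shinyApp(ui, server)
```

Because the load happens outside `server()`, the data is read once per R process rather than once per user session.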