Musescore PDF Downloader is just that: an Electron app that lets you download any sheet from musescore as a PDF.
... and I do, too. A while ago (a year or so) it was still possible to download PDFs from musescore. However, they removed this feature because of ... well, you know why. That wasn't a huge deal, since you can still get the image sources from the page, convert the SVGs to PNGs and then create a PDF, but it is quite the effort doing it for each song. So this app automates just that. It
- Gets the links of the images from the page
- Checks the image type
- Converts the image to PNG before downloading or
- Downloads the PNG image
- Creates a PDF with the combined images
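The per-image decision in the steps above (convert SVGs first, pass PNGs straight through) can be sketched as a small helper. This is illustrative only; the function name and URL handling are mine, not the app's actual code:

```javascript
// Decide how to handle a scraped image URL before it goes into the PDF.
// Returns 'convert' for SVGs (they must become PNGs first) and
// 'download' for images that are already PNGs.
function handlingFor(imageUrl) {
  // Use the pathname so query strings (e.g. signed AWS URLs) don't
  // interfere with the extension check.
  const pathname = new URL(imageUrl).pathname.toLowerCase();
  if (pathname.endsWith('.svg')) return 'convert';
  if (pathname.endsWith('.png')) return 'download';
  throw new Error(`Unexpected image type: ${pathname}`);
}
```

For example, `handlingFor('https://example.com/score_0.svg')` yields `'convert'`, while a PNG URL with a query string still yields `'download'`.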
Heck if I know... But I would like to direct you to a similar project here. I am in general of the same opinion, so make of that what you will.
Not much so far
- Download images from musescore
- Combine images into PDF
- Include the option to download the music as well
- Sync the process better, so that the PDF is only created when all the images have been downloaded
- Allow for pasting multiple links and downloading multiple files
- The app size is still horrifically huge. I am trying to find out why and bring it down to about 50 MB...
- Remove superagent completely and rely only on puppeteer to minimise app footprint
So musescore has changed the way they host their files. From what I can gather, they used to host all of their images on their own server, with the files titled 'score_0.svg' or 'score_0.png'. You could then just count up the page number of each sheet and get the URLs that way. However, it seems that musescore now only hosts the first page on their own servers, with the other pages spread across different hosts (AWS, ultimate-guitar ...), so the only way to get all the links is to manually scroll through the page, which I am doing with puppeteer now. Although it adds a little more overhead, this method has a few advantages:
- No need to iterate through and change the URLs, since puppeteer doesn't care what the URL to the image is. It just grabs the src from the image
- The order of the images is always correct, since it goes through them one by one.
- Even if the names, the URLs or the hosts of the images change, puppeteer will still grab the links.
- JavaScript is executed directly in a browser instance, so I am scraping the real rendered page, not some static stub.
Of course there are also some disadvantages:
- If the page structure changes, I will need to redo the scraping.
- It takes a little more time than going through the page programmatically, since the browser has to actually render the page, scroll it, collect the links and so on.
In my opinion the advantages easily outweigh the disadvantages, so the switch was a good decision in the end. Hopefully it will hold for a while now. 😉
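The collect-while-scrolling step described above has one wrinkle: as the page scrolls, the same `<img>` elements can be reported more than once (in puppeteer, the srcs would come from something like `page.$$eval('img', imgs => imgs.map(i => i.src))` after each scroll step). A minimal sketch of keeping only the first occurrence while preserving page order; the function name and shape are assumptions, not the app's actual code:

```javascript
// Merge batches of image srcs collected after each scroll step.
// Keeps first-seen order and drops duplicates, so the sheet pages
// stay in the right order regardless of their host or file name.
function collectInOrder(batches) {
  const seen = new Set();
  const ordered = [];
  for (const batch of batches) {
    for (const src of batch) {
      if (!seen.has(src)) {
        seen.add(src);
        ordered.push(src);
      }
    }
  }
  return ordered;
}
```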
- The syncing is sometimes off, meaning that the PDF gets created before all the images have been downloaded. (Until I figure this out, just click the Download button again; it should work fine if all images are present.)
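One common way to avoid this kind of race is to await every download promise before building the PDF. A minimal sketch of that pattern; `downloadImage` and `createPdf` here are stand-ins I made up, not the app's functions:

```javascript
// Only build the PDF once every image download has finished.
async function downloadAll(urls, downloadImage, createPdf) {
  // Promise.all resolves only after every download promise has resolved,
  // so createPdf can never run against a partial set of images.
  const images = await Promise.all(urls.map((url) => downloadImage(url)));
  return createPdf(images);
}
```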
If you want to build the app, just run `npm run make`.
If you want to start the app, just run `npm run dev`.