Issues
There are no automated tests. My recommendation is to use something like nf-test or pytest-workflow, so that the functionality of the pipeline can be tested easily. This introduces some extra work, so I would recommend adding at least one nf-test or pytest-workflow test for the whole pipeline (see the sketch after this entry). If this is not feasible, please provide some documentation around the test data set included in the repo, especially around the output files: which output files are the key ones, and what should they look like?
#48 opened by jhayer - 5
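For illustration only, a whole-pipeline nf-test could look roughly like the sketch below; the parameter names, test-data path, and output location are assumptions, not baargin's actual interface:

```groovy
// tests/main.nf.test (illustrative sketch; all param names and paths are assumptions)
nextflow_pipeline {

    name   "baargin end-to-end run on the bundled test data"
    script "main.nf"

    test("pipeline completes on the test data set") {

        when {
            params {
                // hypothetical parameter names, for illustration only
                input  = "test_data"
                output = "results_test"
            }
        }

        then {
            // at minimum assert the run finished; assertions on the key
            // output files can be added once they are documented
            assert workflow.success
        }
    }
}
```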
Please create a release of Baargin on GitHub, so users can run baargin with a single command against a pinned release version (an assumed example follows this entry).
#44 opened by jhayer - 2
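Presumably this refers to Nextflow's standard ability to fetch and run a released pipeline directly from GitHub; the release tag and profile below are assumptions, shown only to illustrate the intended usage:

```bash
# assumed tag and profile names, for illustration only
nextflow run jhayer/baargin -r v1.0.0 -profile docker
```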
Ideally the only required software to run the pipeline should be Nextflow and Docker/Singularity. I would suggest reviewing the installation instructions; requiring a conda environment shouldn't be necessary.
#36 opened by jhayer - 1
A nice UX pattern is to make download_dbs.py part of the pipeline itself. It's possible to have a module that downloads the databases if they are missing; in my experience this lowers the entry barrier. A rough sketch follows this entry.
#41 opened by jhayer - 0
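As a sketch only (the process name, the `params.db_dir`/`params.db_url` parameters, and the archive format are invented for illustration), such a fallback module could look like this:

```groovy
// modules/download_dbs.nf (illustrative sketch; parameter names are assumptions)
process download_db {
    publishDir params.db_dir, mode: 'copy'

    output:
    path 'db'

    script:
    """
    # fetch and unpack the database only when the user has not supplied one
    mkdir -p db
    wget -qO- ${params.db_url} | tar -xz -C db
    """
}
```

The main workflow could then call such a process whenever the corresponding database parameter is unset, so running download_dbs.py up front is no longer mandatory.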
There are no community guidelines.
#28 opened by jhayer - 0
This is linked to my previous comment in the "Functionality" entry: Nextflow + Singularity/Docker should be the only requirements to run the pipeline.
#37 opened by jhayer - 0
I recommend removing unused or commented-out code and unused modules (such as https://github.com/jhayer/baargin/blob/main/modules/emblmygff3.nf).
#32 opened by jhayer - 0
This is missing from the software paper
#30 opened by jhayer - 0
Typo: "Hight" → "High"
#18 opened by jhayer - 0
Python: for tabular results it's recommended to use the stdlib csv module. This makes the code easier to follow, as the csv module is responsible for the formatting. Example: https://github.com/jhayer/baargin/blob/main/bin/amrfinder_compile_heatmap.py#L79 (a sketch follows this entry).
#45 opened by jhayer - 0
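A minimal sketch of the suggestion, with made-up column names and rows rather than the script's actual data structures:

```python
import csv

# Illustrative only: write the tabular result with csv.writer instead of
# concatenating strings by hand; columns and values are placeholders.
header = ["sample", "gene", "identity"]
rows = [
    ["sample1", "blaTEM-1", "99.8"],
    ["sample2", "blaTEM-1", "0"],
]

with open("amr_heatmap.csv", "w", newline="") as handle:
    writer = csv.writer(handle)
    writer.writerow(header)   # csv handles separators and quoting
    writer.writerows(rows)
```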
The containers.config file should not have the cpus, memory and time limits for the modules. Placing those in a base.config or similar makes it easier to find each module's resource requirements (see the sketch after this entry).
#42 opened by jhayer - 0
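A minimal sketch of what splitting the resources out could look like; the process name and the numbers are placeholders, not the pipeline's real requirements:

```groovy
// conf/base.config (illustrative sketch; values and process names are assumptions)
process {
    // defaults for all modules
    cpus   = 2
    memory = 4.GB
    time   = 4.h

    // per-module overrides, all in one easy-to-find place
    withName: 'quast' {
        cpus   = 4
        memory = 8.GB
    }
}
```

containers.config would then keep only the container definitions.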
The params are already available inside the modules, so there is no need to pass them on import (such as `include {quast} from './modules/quast.nf' params(output: params.output)`). A simplified include is sketched after this entry.
#38 opened by jhayer - 0
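A sketch of the simplified include, assuming a DSL2 module whose process already reads params.output directly:

```groovy
// main.nf: the params(...) clause can simply be dropped
include { quast } from './modules/quast.nf'

// inside modules/quast.nf the process can keep using params.output, e.g.
//   publishDir "${params.output}/quast", mode: 'copy'
```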
There are some Nextflow modules with strange indentation, such as https://github.com/jhayer/baargin/blob/main/modules/agat.nf
#31 opened by jhayer - 0
There are no guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support
#24 opened by jhayer - 0
Please provide more structured documentation of the parameters. The current description mixes textual instructions, an example Nextflow params file, and the output of the Nextflow params file.
#21 opened by jhayer - 0
As mentioned earlier, please provide clear instructions on which steps need to be performed to run the pipeline. The only requirements should be Nextflow and Docker or Singularity; I don't understand why the installation instructions currently ask me to set up a Conda environment or install Python packages.
#35 opened by jhayer - 0
Please review the installation instructions. I shouldn't need to set up a conda environment to be able to run a Nextflow pipeline.
#34 opened by jhayer - 0
Assuming that the pipeline documentation is the README.md in the root of the repo, I didn't find a statement of need
#19 opened by jhayer - 0
The sentences in the "Statement of Need" paragraph try to convey two or three different messages at the same time, which makes them quite long and hard to read. Could you rephrase this section to make it less wordy?
#16 opened by jhayer - 0
It might be better to rephrase "it is now affordable to sequence the full genomes of hundreds of bacterial strains at a time" as "it has become more affordable to sequence the full genomes of hundreds of bacterial strains at a time."
#14 opened by jhayer - 1
AMRFinder DB auto update
#12 opened by jhayer - 0
Prepare conda env for local run
#9 opened by jhayer - 2
Test path input before running
#7 opened by jhayer - 1
Pre-download of mandatory DBs
#3 opened by jhayer - 0
Pre-install of KrakenTools
#4 opened by jhayer