Add separate command to synthesise IP cores
In the current `synth` command there seem to be two distinct stages:
- Generate all IP core blocks
- Synthesise design
Is it possible to split the command into two? The first stage is parallel, whilst the second is mostly single-threaded. By splitting the command, the build can be load-balanced more easily within CI pipelines.
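To make the request concrete, here is a rough sketch of how the split might be consumed from two separate CI jobs. This is only an illustration: `generate-ips` is a made-up placeholder name, not an existing ipbb command, and the exact invocation forms are assumptions.

```python
import subprocess

def run(cmd):
    """Echo and run a command, failing the CI job on a non-zero exit code."""
    print("+", " ".join(cmd), flush=True)
    subprocess.run(cmd, check=True)

# CI job 1: generate/build all the IP cores (parallel, CPU-heavy).
# "generate-ips" is a placeholder name for the requested new command.
run(["ipbb", "vivado", "generate-ips"])

# CI job 2: synthesise the full design (mostly single-threaded),
# reusing the IP products from job 1.
run(["ipbb", "vivado", "synth"])
```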
Hi David,
More or less. Strictly speaking, the IP cores are "generated" together with the project. That entails loading the `.xci` files, upgrading the core definitions, and generating the out-of-context (OOC) runs.
`synth` invokes the Vivado command with the same name, which kicks off the parallel OOC runs and then proceeds with synthesis.
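For illustration only, the project-mode flow boils down to something like the sketch below. This is not ipbb's actual code; the project path, run name and `-jobs` value are assumptions.

```python
import subprocess
import textwrap

# Rough sketch of the project-mode flow described above; not ipbb's
# implementation. Project path, run name and -jobs value are assumptions.
tcl = textwrap.dedent("""\
    open_project proj/top/top.xpr
    # Refresh the IP definitions loaded from the .xci files.
    upgrade_ip [get_ips]
    # Launching the top synthesis run first kicks off the OOC runs it
    # depends on (in parallel), then synthesises the full design.
    launch_runs synth_1 -jobs 8
    wait_on_run synth_1
""")

with open("synth_flow.tcl", "w") as fp:
    fp.write(tcl)

subprocess.run(["vivado", "-mode", "batch", "-source", "synth_flow.tcl"],
               check=True)
```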
I think it should be possible to start the OOC runs with an independent command, but may I ask what sort of load-balancing you have in mind?
Alessandro
Hi Alessandro,
I have a firmware CI pipeline where each of the stages is a separate job (make project, `synth`, `impl`, etc.). These run on a Kubernetes cluster where each job is assigned a certain amount of CPU and RAM.
Having the OOC runs as a separate job would let the cluster as a whole run more efficiently, since the CPU reserved for the OOC runs is released before the synthesis of the full design starts.
I hope that helps in understanding the motivation for this request.
David
Hi David,
OK, understood. I think it can be arranged, albeit not right away (I have little time for ipbb at the moment).
Out of curiosity: in the GitLab CI we use, I've kept `synth` and `impl` together (at the cost of suboptimal CPU/RAM utilization) to minimize the amount of data exchanged between jobs. In order to run `synth` and `impl` as separate jobs, the post-synthesis design has to be stored as an artifact and then retrieved by the next job, and designs can grow quite large.
How do you deal with that, either in GitLab or in whatever CI framework you're using, if it's not GitLab?
Alessandro
Hi Alessandro,
Thank you for looking into it.
We use artifacts to pass the design between jobs, but I have set an expiry on them so that storage space is not wasted on old designs. If you are interested in how I'm using `ipbb` for the pipeline, I have an example project on GitLab:
https://gitlab.cern.ch/cms-tracker-phase2-data-processing/BE_firmware/emp-pipeline/-/pipelines/2769173
Hi David,
Sure. Just one warning: when running `synth` straight away, Vivado figures out which OOC runs need to be executed to complete synthesis successfully. That can be a subset of all the IPs loaded in the project.
I can "easily" have `ipbb` launch all the OOC runs, but that may be more than needed.
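As a sketch of what "launch all the OOCs" would mean at the Tcl level (again, not ipbb's code; the project path, run naming and the filter are assumptions):

```python
import subprocess
import textwrap

# Sketch of a standalone "OOC-only" stage that launches every
# out-of-context synthesis run, needed or not. Not ipbb's implementation;
# the project path and the run filter are assumptions.
tcl = textwrap.dedent("""\
    open_project proj/top/top.xpr
    # Every synthesis run except the top-level one is treated as OOC here.
    set ooc_runs [get_runs -filter {IS_SYNTHESIS == 1 && NAME != "synth_1"}]
    launch_runs $ooc_runs -jobs 8
    foreach r $ooc_runs { wait_on_run $r }
""")

with open("ooc_only.tcl", "w") as fp:
    fp.write(tcl)

subprocess.run(["vivado", "-mode", "batch", "-source", "ooc_only.tcl"],
               check=True)
```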
Very nice setup. May I ask how the Kubernetes cluster is set up?
Aaaand, any chance to have a peek at the `/ci/tools/` scripts? ;-)
Hi Alessandro,
I already cache the built IPs between runs, using the `-c` flag of the make project command. This means that for subsequent pipelines the IPs are not rebuilt, as far as I can tell. Would something similar be achievable in the case where the two commands are separate?
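For what it's worth, the generic idea behind such a cache is to key the stored IP products on the IP definition, so entries are invalidated when the `.xci` changes. The sketch below is only an illustration of that idea, not how ipbb's `-c` option is implemented; the paths are made up.

```python
import hashlib
import pathlib
import shutil

# Generic illustration of caching built IP products between pipeline runs.
# This is NOT how ipbb's -c option is implemented; paths are made up.
CACHE = pathlib.Path("/cache/ip")   # e.g. a shared volume or S3-backed mount

def cache_key(xci: pathlib.Path) -> str:
    """Hash the .xci so the cache entry is invalidated when the IP changes."""
    return hashlib.sha256(xci.read_bytes()).hexdigest()

def restore(xci: pathlib.Path, out_dir: pathlib.Path) -> bool:
    """Copy cached products into place; return False if a rebuild is needed."""
    entry = CACHE / cache_key(xci)
    if entry.is_dir():
        shutil.copytree(entry, out_dir, dirs_exist_ok=True)
        return True
    return False

def store(xci: pathlib.Path, out_dir: pathlib.Path) -> None:
    """Publish freshly built IP products after a successful OOC run."""
    shutil.copytree(out_dir, CACHE / cache_key(xci), dirs_exist_ok=True)
```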
My Kubernetes cluster is currently made up of one master node and four worker nodes, each with an AMD Ryzen 5900X and 128 GB of RAM. Vivado is mounted onto each worker node as an NFS share over a 10G network, which is then mounted into the runner as a `hostPath` volume. This method was found to be within 1% of bare-metal performance. Caching is achieved by a local S3 instance, also on the 10G LAN.
The `/ci/tools` scripts can be found here; they form part of the Docker image created by that repository, which is then used for all the Vivado build jobs.