This repository contains scripts and YAML workflows for GitHub Actions (GHA) to build and deploy the Docker images that are used and/or published by the GHDL GitHub organization. All of them are pushed to hub.docker.com/u/ghdl.
Images for development (i.e., building and/or testing ghdl):

- `ghdl/build`: images include development dependencies for ghdl.
- `ghdl/run`: images include runtime dependencies for ghdl.
- `ghdl/pkg`: images include ghdl tarballs built in `ghdl/build` images.
- `ghdl/cache`: external dependencies which we want to keep almost on the edge, but which are not part of ghdl.
Ready-to-use images:

- `ghdl/ghdl`: images based on the corresponding `ghdl/run` images; they include ghdl along with minimum runtime dependencies.
- `ghdl/vunit`: images based on `ghdl/ghdl:buster-*` images; they include ghdl along with VUnit. `*-master` variants include the latest VUnit (master branch), while others include the latest stable release (installed through pip).
- `ghdl/ext`: ready-to-use images with GHDL and complements (ghdl-language-server, GtkWave, VUnit, etc.).
- `ghdl/synth`: images that allow trying the experimental synthesis features of ghdl.
See USE_CASES.md if you are looking for usage examples from a user perspective.
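As a quick illustration, a project-specific image can be derived from one of the ready-to-use images. This is a minimal sketch only: the tag follows the distro-backend pattern used in this repo, and the file name is a placeholder.

```dockerfile
# Illustrative only: derive a project image from a ready-to-use GHDL image.
# 'buster-mcode' follows the distro-backend tag pattern; 'my_design.vhd'
# is a placeholder for your own VHDL sources.
FROM ghdl/ghdl:buster-mcode
COPY . /work
WORKDIR /work
# Analyze a design unit with GHDL.
RUN ghdl -a my_design.vhd
```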
NOTE: currently, there is no triggering mechanism set up between ghdl/ghdl and ghdl/docker. All the workflows in this repo are triggered by CRON jobs.
· `base.yml` (8 jobs -max 4-, 40 images) [twice a month]

Build and push all the `ghdl/build:*` and `ghdl/run:*` docker images:

- A pair of images is created in one job for each of `[ ls-debian, ls-ubuntu ]`.
- One job is created for each of `[ fedora (29 | 30), debian (buster | sid), ubuntu (16 | 18) ]`, and six images are created in each job: two (`ghdl/build:*`, `ghdl/run:*`) for each supported backend `[ mcode, llvm*, gcc ]`.
· `cache.yml` (5 jobs -max 5-, 7 images) [weekly]

Build and push all the `ghdl/cache:*` images and some `ghdl/synth:*` images. Each of the following images includes a tool on top of a `debian:buster-slim` image:

- `ghdl/synth:yosys`: includes YosysHQ/yosys (`master`).
- `ghdl/synth:icestorm`: includes cliffordwolf/icestorm (`master`).
- `ghdl/synth:nextpnr`: includes YosysHQ/nextpnr (`master`).

Furthermore:

- `ghdl/cache:yosys-gnat`: includes `libgnat-8` on top of `ghdl/synth:yosys`.
- `ghdl/cache:gtkwave`: contains a tarball with GtkWave (`gtkwave3-gtk3`) prebuilt for images based on Debian Buster.
- `ghdl/cache:formal`: contains a tarball with YosysHQ/SymbiYosys (`master`) and Z3Prover/z3 (`master`) prebuilt for images based on Debian Buster.
- `ghdl/synth:symbiyosys`: includes the tarball from `ghdl/cache:formal` and Python 3 on top of `ghdl/synth:yosys`.
· `ghdl.yml` (15 jobs -max 3-, 30 images) [weekly]

Build and push almost all the `ghdl/ghdl:*` and `ghdl/pkg:*` images. A pair of images is created in one job for each combination of:

- `[ fedora: [29, 30], debian: [sid], ubuntu: [16, 18] ]` and `[ mcode, llvm* ]`.
- `[ fedora: [29, 30], debian: [buster, sid] ]` and `[ gcc* ]`.
- For Debian only, `[ buster, sid ]` and `[ mcode ]` and `[ --gpl ]`.
The procedure in each job is as follows:

- Repo ghdl/ghdl is cloned.
- ghdl is built in the corresponding `ghdl/build:*` image.
- A `ghdl/ghdl:*` image is created, based on the corresponding `ghdl/run:*` image.
- The testsuite is executed inside the `ghdl/ghdl:*` image created in the previous step.
- If successful, a `ghdl/pkg:*` image is created with the tarball built in the first step.
- `ghdl/ghdl:*` and `ghdl/pkg:*` images are pushed to hub.docker.com/u/ghdl.
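Conceptually, the build-then-package part of this flow resembles a multi-stage Dockerfile, although the workflows actually use separate containers driven by scripts. The sketch below is illustrative only; the tags, paths and configure options are assumptions, not the repository's actual recipes.

```dockerfile
# Illustrative sketch of the build-then-package flow (not the actual scripts).
# Stage 1: build GHDL and produce a tarball inside a ghdl/build image.
FROM ghdl/build:buster-mcode AS build
COPY . /src
WORKDIR /src
# Prefix and options are placeholders.
RUN ./configure --prefix=/opt/ghdl && make && make install \
 && tar -zcf /tmp/ghdl.tgz -C /opt/ghdl .

# Stage 2: install the tarball into the corresponding ghdl/run image.
FROM ghdl/run:buster-mcode
COPY --from=build /tmp/ghdl.tgz /tmp/
RUN tar -zxf /tmp/ghdl.tgz -C /usr/local && rm /tmp/ghdl.tgz
```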
· `daily.yml` (3 jobs -max 3-, 6 images) [daily]

Complement of `ghdl.yml`, to be run daily. One job is scheduled for each combination of `[ buster ]` and `[ mcode, llvm-7, gcc-8.3.0 ]`.
· `ext.yml` (5 jobs -max 4-, 15 images) [twice a week]

Build and push all the `ghdl/vunit:*` and `ghdl/ext:*` images. Four jobs are scheduled:

- `ls`: build and push `ghdl/ext:ls-debian` and `ghdl/ext:ls-ubuntu` (a job for each of them). These include ghdl/ghdl, the ghdl/ghdl-language-server backend and the vscode-client (precompiled, but not preinstalled).
- `vunit`: build and push all the `ghdl/vunit:*` images, which are based on the ones created in the daily workflow.
- `gui`: build and push the following images:
  - `ghdl/ext:gtkwave`: includes GtkWave (gtk3) on top of `ghdl/vunit:llvm-master`.
  - `ghdl/ext:broadway`: adds a script to `ghdl/ext:gtkwave` in order to launch a Broadway server, which allows using GtkWave from a web browser.
  - `ghdl/ext:ls-vunit`: includes VUnit (`master`) on top of `ghdl/ext:ls-debian`.
  - `ghdl/ext:latest`: includes GtkWave on top of `ghdl/ext:ls-vunit`.
- `synth`: build and push all the `ghdl/synth:*` images which are not created in workflow `cache`:
  - Repo tgingold/ghdlsynth-beta is cloned and its build scripts are used to build two images:
    - `ghdl/synth:latest`: includes ghdl/ghdl with synthesis features enabled, on top of a `ghdl/run:buster-mcode` image.
    - `ghdl/synth:beta`: includes ghdl from `ghdl/synth:latest`, along with ghdlsynth-beta built as a module for YosysHQ/yosys, on top of `ghdl/cache:yosys-gnat`.
  - Then, image `ghdl/synth:formal` is built, which includes the tarball from `ghdl/cache:formal` and Python 3 on top of `ghdl/synth:beta`.
Multiple artifacts (i.e., standalone tarballs) of GHDL are generated in these workflows. For example, each job in `daily.yml` generates a tarball that is then installed in a `ghdl/ghdl:*` image and published in a `ghdl/pkg:*` image. These resources might be useful for users/developers who:

- Want to use a base image which is compatible with, but different from, the ones we use. E.g., use `python:3-slim-buster` instead of `debian:buster-slim`.
- Do not want to build and test GHDL every time.
This is precisely how images in VUnit/docker are built; see github.com/VUnit/docker/blob/master/run.sh.
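The same idea can be sketched as a multi-stage Dockerfile that reuses the prebuilt GHDL from a `ghdl/pkg` image on a different, compatible base. This is a hedged sketch: the location of the package content inside `ghdl/pkg` images (`/ghdl` below) and the runtime package names are assumptions.

```dockerfile
# Illustrative: reuse a prebuilt GHDL from a ghdl/pkg image on another base.
FROM ghdl/pkg:buster-mcode AS pkg

FROM python:3-slim-buster
# '/ghdl' as the content location inside ghdl/pkg images is an assumption.
COPY --from=pkg /ghdl /usr/local
# GHDL still needs its runtime dependencies (e.g. libgnat) on the new base.
RUN apt-get update \
 && apt-get install -y --no-install-recommends libgnat-8 \
 && rm -rf /var/lib/apt/lists/*
```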
However, using these pre-built tarballs to install GHDL on host systems is discouraged. Instead, ghdl/packaging contains sources for package manager systems, and it provides nightly builds of GHDL.