golang/go

x/build: speed up large container start-up times without pre-pulling containers into VMs (CRFS)

bradfitz opened this issue · 33 comments

Tracking bug for improving how we maintain & deploy our larger builder environment containers easily while also having them start up quickly.

Our current situation (building a container, pushing to gcr.io, then automating the creation of COS-like VM images that have the image pre-pulled) is pretty gross and tedious.

I propose CRFS: a Container-Registry Filesystem. See design doc at https://github.com/golang/build/tree/master/crfs#crfs-container-registry-filesystem

The gist of it is that we can read bytes from gcr.io directly with a FUSE filesystem, rather than doing huge docker pulls. It's not very hard once you tweak the tarballs into a more amenable format.
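For a sense of the core read path, here's a hedged sketch of fetching a byte range of a layer blob rather than the whole thing. The URL and token are placeholders (real registry auth involves a token handshake first); a FUSE read handler can serve file reads this way, guided by an index of the tarball:

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

// readRange fetches bytes [off, off+n) of a layer blob with an HTTP Range
// request, instead of docker-pulling the whole thing.
func readRange(url, token string, off, n int64) ([]byte, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-%d", off, off+n-1))
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer res.Body.Close()
	if res.StatusCode != http.StatusPartialContent {
		return nil, fmt.Errorf("want 206 Partial Content, got %v", res.Status)
	}
	return io.ReadAll(res.Body)
}

func main() {
	// Placeholders only; real registry auth requires a token exchange.
	b, err := readRange("https://gcr.io/v2/<proj>/<img>/blobs/sha256:<digest>", "<token>", 0, 512)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("read %d bytes\n", len(b))
}
```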

Change https://golang.org/cl/167392 mentions this issue: crfs: start of a README / design doc of sorts

ktock commented

Interesting idea.

As you may know, there are some related concepts in the container world aiming to make images lightweight and to boot containers faster using lazy-pull and de-duplication techniques.

Don't you also aim to minimize image size by making each chunk much smaller? Like:

GZIP(TAR(file1_small_chunk1)) + GZIP(TAR(file1_small_chunk2)) + GZIP(TAR(file1_small_chunk3)) + GZIP(TAR(file2_small_chunk1)) + ... + GZIP(TAR(index of earlier files in magic file))

If you make the chunks smaller, you can achieve inter-image de-duplication at the chunk level, as casync and desync do (not only partial pulling).
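To illustrate, here is a toy sketch of chunk-level dedup with fixed-size chunks and SHA-256 digests (casync/desync actually use content-defined chunking, which survives insertions better); identical chunks hash to the same digest and would be stored and transferred only once:

```go
package main

import (
	"bytes"
	"crypto/sha256"
	"fmt"
)

// dedupe counts how many chunks are shared across blobs.
func dedupe(blobs [][]byte, chunkSize int) {
	seen := map[[sha256.Size]byte]bool{}
	total, unique := 0, 0
	for _, b := range blobs {
		for off := 0; off < len(b); off += chunkSize {
			end := off + chunkSize
			if end > len(b) {
				end = len(b)
			}
			total++
			sum := sha256.Sum256(b[off:end])
			if !seen[sum] {
				seen[sum] = true
				unique++
			}
		}
	}
	fmt.Printf("%d chunks, %d unique\n", total, unique)
}

func main() {
	base := bytes.Repeat([]byte("shared base layer "), 256)
	app := append(append([]byte{}, base...), []byte("app layer delta")...)
	dedupe([][]byte{base, app}, 1024) // most chunks of app dedupe against base
}
```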

Recently I've been implementing a rough PoC that tackles a similar kind of issue (booting containers faster and minimizing image size).
Additionally, I aim to achieve it without any modification to the runtime or registry, using an init-like program inside the container and a FUSE-in-container technique.

Thanks.

Heyo, don't know if you've seen this: containerd/containerd#2968

Once that settles, it should enable creating a crfs 'snapshotter' that skips pulling images and just performs a FUSE mount.

@dprotaso, I hadn't seen that. Excellent. Thanks for the link!

@ktock, while I'm a big fan of content-addressable storage & deduplication (my https://perkeep.org/ project is all about it), it's not my goal with this project to address that. I just want fast boot times here. Storage as far as I'm concerned is free.

Also, you might not need to reinvent the wheel with stargz; BGZF is an existing blocked-gzip format designed for random access:

https://github.com/samtools/htslib/blob/develop/bgzf.c
https://github.com/biogo/hts/tree/master/bgzf

Another interesting thing from: http://samtools.github.io/hts-specs/SAMv1.pdf

It is worth noting that there is a known bug in the Java GZIPInputStream class that concatenated gzip archives cannot be successfully decompressed by this class. BGZF files can be created and manipulated using the built-in Java util.zip package, but naive use of GZIPInputStream on a BGZF file will not work due to this bug.
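Go's compress/gzip doesn't share that bug: by default a gzip.Reader decodes concatenated members as one stream, and Reader.Multistream(false) restricts it to a single member, which is useful for reading one BGZF/stargz chunk at a time. A quick demonstration:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
	"log"
)

func main() {
	var buf bytes.Buffer
	for _, s := range []string{"first member, ", "second member"} {
		zw := gzip.NewWriter(&buf) // each Close emits a complete gzip member
		zw.Write([]byte(s))
		zw.Close()
	}
	zr, err := gzip.NewReader(&buf)
	if err != nil {
		log.Fatal(err)
	}
	all, _ := io.ReadAll(zr) // reads across member boundaries by default
	fmt.Printf("%s\n", all)  // "first member, second member"
}
```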

glyn commented

I just wanted to check that, if this feature goes ahead, it won't be bundled into the standard library, as that seems inappropriate to me.

@glyn, no, that won't happen. That would be entirely bizarre. The Go team writes a lot of code but very little of it goes into the standard library. I even added the FAQ entry that says we don't want most code in the standard library: https://golang.org/doc/faq#x_in_std

Hi -- just commenting here to link this to an issue within containerd which seems to tackle a similar problem as described here: containerd/containerd#2943 (comment)

@bradfitz This is a very cool hack.

It might be worth just turning off layer compression (easier said than done, but it works with standard docker once you push that way), then just using transport compression when fetching the individual file chunks. That might complicate backend storage a bit, which might have to use a different compression technique, but the images would be runnable by an unmodified docker daemon.

It's at least worth a look. ;)

@stevvooe, you'd still need an index somewhere. If you already need to push modified or additional layers to hold the index, might as well also compress it all?

Change https://golang.org/cl/167769 mentions this issue: crfs/stargz: add start of package

Change https://golang.org/cl/167920 mentions this issue: crfs/stargz: add basic file reading, chunking big files, more tests, docs

@stevvooe, my index comment was slightly unrelated in retrospect. You're probably more concerned about runtime CPU usage for decoding gzip on reads, eh? Turning off layer compression should indeed solve that, but it would increase the $$$ cost of image storage. And I'm unsure both a) whether gcr.io supports transport compression (probably), and b) whether it's even worth it inside a very fast network.

@bradfitz I've read the original issue and the linked design doc in full, which helped me understand this better, but I still have an unanswered question about this part:

The gist of it is that we can read bytes from gcr.io directly with a FUSE filesystem, rather than doing huge docker pulls.

I understand one of the benefits is the ability to stream the container image, so parts of it can start being accessed sooner, instead of waiting for the entire container image to be downloaded before the first byte can be read.

But is there also an advantage in that a typical workload would read fewer bytes than the entire container image contains? I.e., only a small subset is typically needed, so the savings are also that fewer bytes need to be downloaded in total?

Change https://golang.org/cl/168737 mentions this issue: crfs/stargz/stargzify: add tool to convert a tar.gz to stargz

Change https://golang.org/cl/168799 mentions this issue: crfs, stargz: basics of read-only FUSE filesystem, directory support

dw commented

Hi Brad,

I came via HN :) Cool project, just a few thoughts:

It's possible to do 'solid' compression while retaining the same level of compatibility as done here; the benefit is not resetting the compressor for small files. Looks like a regular chunk size also makes it possible to drop at least one TOCEntry field

Regarding TOCEntry, some kind of sorted array that does not require full decoding, rather than a recursive structure, would make the format far more appealing for reuse, and would also reduce the runtime requirements for any parser

One place to look for design inspiration might be squashfs; it solves a similar problem, although its constraints are a little looser. For example, squashfs does not store a single large index: subdirectories have their own separate representation

@dw, thanks. I'd been meaning to explore grouping small files together into one gzip stream, but first I want to get all the pieces working before I optimize too much. For now, a 7% bloat is acceptable.
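For a rough sense of that trade-off, here is a minimal sketch (hypothetical file contents) comparing one gzip member per file, stargz-style, against one 'solid' stream; the per-member headers and compressor resets are the price paid for seekability, and on repetitive inputs like this the solid stream comes out much smaller:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
)

// perFile compresses each file as its own gzip member (seekable per file).
func perFile(files [][]byte) int {
	var buf bytes.Buffer
	for _, f := range files {
		zw := gzip.NewWriter(&buf)
		zw.Write(f)
		zw.Close() // reset: next file starts a fresh member
	}
	return buf.Len()
}

// solid compresses all files into one stream (no per-file seeking).
func solid(files [][]byte) int {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	for _, f := range files {
		zw.Write(f)
	}
	zw.Close()
	return buf.Len()
}

func main() {
	var files [][]byte
	for i := 0; i < 1000; i++ {
		files = append(files, []byte(fmt.Sprintf("small file %d contents\n", i)))
	}
	fmt.Println("per-file:", perFile(files), "solid:", solid(files))
}
```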

Looks like a regular chunk size also makes it possible to drop at least one TOCEntry field

Yeah, there's a lot of redundant info in there (including the name, which stores its full path), but I liked the flexibility to perhaps do file-specific chunk sizes in the future based on known access patterns for different types of files.

Regarding TOCEntry, some kind of sorted array that does not require full decoding, rather than a recursive structure, would make the format far more appealing for reuse, and would also reduce the runtime requirements for any parser

Yeah, the JSON is slightly inefficient, but I figured it's okay to just slurp the whole thing in at start-up (for all layers) and keep it all in memory. It's not big (at least for the layers I've seen or work with), so I didn't want to prematurely optimize. But people with millions of files in their layers might not find it as acceptable.
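For reference, the TOC is plain JSON, so slurping it all in is a few lines. This is a simplified sketch with only a subset of the entry fields described in the design doc (real entries also carry mode, ownership, xattrs, etc.); treat the exact field set as illustrative rather than the authoritative stargz schema:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type toc struct {
	Version int         `json:"version"`
	Entries []*tocEntry `json:"entries"`
}

type tocEntry struct {
	Name        string `json:"name"`        // full path within the layer
	Type        string `json:"type"`        // "reg", "dir", "symlink", "chunk", ...
	Size        int64  `json:"size,omitempty"`
	Offset      int64  `json:"offset,omitempty"`      // offset of the gzip member in the blob
	ChunkOffset int64  `json:"chunkOffset,omitempty"` // offset of this chunk within the file
	ChunkSize   int64  `json:"chunkSize,omitempty"`
}

// parseTOC decodes the TOC JSON extracted from the layer's magic index file.
func parseTOC(b []byte) (*toc, error) {
	t := new(toc)
	if err := json.Unmarshal(b, t); err != nil {
		return nil, err
	}
	return t, nil
}

func main() {
	raw := []byte(`{"version":1,"entries":[{"name":"bin/app","type":"reg","size":4096,"offset":512}]}`)
	t, err := parseTOC(raw)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(t.Entries[0].Name, t.Entries[0].Offset)
}
```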

cben commented

It might be worth just turning off layer compression (...), then just using transport compression when fetching the individual file chunks

I'm not sure this would be workable.

  • At large scale, registries might be unhappy to waste storage and CPU compressing on the fly :)
  • If you mean Transfer-Encoding: gzip, that's rare on servers and only just being added to the Go HTTP client (#29162), but in principle it would be clean: Range requests allow seeking by offsets into the original uncompressed tar file.
    Alas, it is gone in HTTP/2 (httpwg/http2-spec#445), so in the long term it is a dead end :-(
  • Static Content-Encoding: gzip is what we already have: the server stores it pre-compressed, but it can't seek using tar metadata because we don't know how it maps to compressed offsets.
  • If you mean on-the-fly compression via Content-Encoding: gzip, which is the widely deployed form of HTTP compression, it's not what you want :-(. The way HTTP defined Content-Encoding essentially models a static pre-compressed resource, which means Range queries index by compressed offsets, bringing us back to the problem Stargz solves.

I've attempted a hacky integration of CRFS with fuse-overlayfs. I am still playing with it, but it already works as a PoC. It should solve the problem of having a working overlay implementation; more details here: containers/fuse-overlayfs#79

@giuseppe, nice! I'd been meaning to try fuse-overlayfs as I kept hitting ESTALE errors with the kernel overlayfs against crfs. I did a bunch of work (locally, not pushed) to make sure inode numbers are stable and even added name_to_handle_at/etc wrappers in golang/sys@9f0b1ff as part of debugging it, but I've never been able to make the kernel overlayfs happy for prolonged periods of time. (It works for a bit, but then starts returning ESTALE errors, and IIRC it's unrelated to dropping caches.)

That's why CRFS kinda hit a pause: I was working through that and getting stuck on it.

Your progress unblocks things. Thanks for the demo of using podman, too. That was my hand-wavy plan: to run the overlay-merged container directly with runc or something, but I hadn't started down that path yet.
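For anyone else debugging this class of problem, the golang.org/x/sys wrappers mentioned above can round-trip a file handle, which is roughly what the kernel does when it revalidates an overlayfs lower layer; a stale handle is where ESTALE comes from. A hypothetical, Linux-only sketch (paths are placeholders; needs CAP_DAC_READ_SEARCH):

```go
package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	const path = "/mnt/crfs/some/file" // hypothetical CRFS mount
	handle, mountID, err := unix.NameToHandleAt(unix.AT_FDCWD, path, 0)
	if err != nil {
		log.Fatalf("NameToHandleAt: %v", err)
	}
	_ = mountID // in real code, resolve mountID to the right mount fd

	mountFD, err := unix.Open("/mnt/crfs", unix.O_RDONLY, 0)
	if err != nil {
		log.Fatal(err)
	}
	// Reopening by handle later should not fail with ESTALE if the
	// filesystem's inode numbers / handles are stable.
	fd, err := unix.OpenByHandleAt(mountFD, handle, unix.O_RDONLY)
	if err != nil {
		log.Fatalf("OpenByHandleAt: %v (ESTALE here means the handle went stale)", err)
	}
	unix.Close(fd)
}
```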

ktock commented

Recently I've been working on making CRFS work with overlayfs, which is indispensable for integrating CRFS with container runtimes. I finally found the cause of the ESTALE error and submitted two PRs to fix it. Could anyone help us out by reviewing them?

@giuseppe, are you still working on CRFS? Could you help us with reviewing them?

ktock commented

Stargz images now work with containerd! (A patch is still needed.)

https://github.com/ktock/remote-snapshotter

This is still under active discussion in the containerd community, so please join and help us out!

@giuseppe, are you still working on CRFS? Could you help us with reviewing them?

@ktock sorry, I missed your previous comment.

Yes, I am still interested in supporting CRFS in fuse-overlayfs. While playing with it, I've noticed that the gzip compression is a performance bottleneck with many small files, so I've proposed a change in the OCI specs to support zstd: containers/image#639. I think CRFS can benefit a lot from it.
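To get a feel for the proposed swap (not the OCI change itself), here's a minimal zstd round trip in Go, assuming github.com/klauspost/compress/zstd as the zstd implementation (an assumption; any zstd binding would do). Decompression speed on layers with many small files is where zstd tends to win over gzip:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"log"

	"github.com/klauspost/compress/zstd"
)

func main() {
	payload := bytes.Repeat([]byte("layer data with many small files\n"), 1000)

	// Compress.
	var buf bytes.Buffer
	enc, err := zstd.NewWriter(&buf)
	if err != nil {
		log.Fatal(err)
	}
	enc.Write(payload)
	enc.Close()
	fmt.Printf("compressed %d -> %d bytes\n", len(payload), buf.Len())

	// Decompress and verify.
	dec, err := zstd.NewReader(&buf)
	if err != nil {
		log.Fatal(err)
	}
	defer dec.Close()
	out, _ := io.ReadAll(dec)
	fmt.Println("roundtrip ok:", bytes.Equal(out, payload))
}
```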

I've added plugin support to fuse-overlayfs (containers/fuse-overlayfs#119), which can be used to retrieve data for lower layers (leaving writable upper-layer management to fuse-overlayfs). I'll have to write one for CRFS.

@giuseppe do you think there is a way to re-use some of the work @ktock has done within podman?

@giuseppe do you think there is a way to re-use some of the work @ktock has done within podman?

I think it should be possible. I've not yet looked into the integration with Podman, as I am still playing with the lower-level bits.

Also note that the mature and popular CernVM FileSystem (CVMFS) is a general-purpose caching, read-only, download-on-demand filesystem that could be helpful here. It includes a tool called DUCC for downloading and installing layers from a Docker-type registry. I see @ktock has already heard about it in a remote-snapshotter pull request. A difference is that CVMFS uses a publishing step to prepare all the files, but we have found that doing the extra work up front is well worth it for applications with orders of magnitude more readers than writers. We are working on a new feature to efficiently merge a new upper layer with previously published registry layers, so we will be able to scale up the publishing of containers while still avoiding layer-merging at run time with an overlay filesystem.

Exactly what I need right now. I read through the code and design doc, which say no changes to Docker are needed. But from what I can see, CRFS seems to use FUSE to intercept read/write requests to /var/lib/docker/{image,overlay2} and convert them into HTTP range requests to achieve lazy pulling.

This requires at least a new StorageDriver. Are you planning to merge the new StorageDriver into the master branch of Docker?

And how are you going to create the init and rw layers for the container? (Slacker flattens all the layers to create an NFS clone, so it does not face this problem.)

https://github.com/ktock/stargz-snapshotter

This is expected to be adopted by containerd/CRI before Docker.

Thx for the quick response; I'll read up on containerd.

ktock commented

FYI: https://github.com/containerd/stargz-snapshotter
Stargz is now available under the containerd org and works on Kubernetes. I also posted a blog post about it.