Fetch auxiliary test data when testing published crates.
This library addresses the problem that integration test suites and
documentation tests cannot be run from the published `.crate` archive alone
if they depend on auxiliary data files that should not be shipped to downstream
packages and end users.
To that end, it augments `Cargo.toml` with additional fields that describe how
an artifact archive is composed from VCS files associated with the exact
version at which they were created. The packed data and the exact version are
then referenced when executing tests from the `.crate` archive. A small runtime
component unpacks the data and rewrites file paths to a substitute file tree.
This repository contains a reference implementation for interpreting the
auxiliary metadata. It is a simple binary behind a feature of this library.
Within the workspace we also define an alias, `xtask` (in this repository's
`.cargo/config.toml`), to invoke it easily.
```bash
# test for developers
cargo xtest-data test <path-to-repo>
# test for packagers
cargo xtest-data crate-test <.crate>
# prepare a test but delay its execution
eval `cargo xtest-data fetch-artifacts <.crate>`
```
For offline use, where you handle the archives yourself:
```bash
# Prepare .crate and .xtest-data archives:
cargo xtest-data package
# on stdout, e.g.: ./target/xtest-data/xtest-data-1.0.0-beta.3.xtest-data

# < -- Any method to upload/download/exchange archives -- >

# After downloading both files again:
eval `cargo xtest-data \
  fetch-artifacts xtest-data-1.0.0-beta.3.crate \
  --pack-artifact xtest-data-1.0.0-beta.3.xtest-data`

# Now proceed with regular testing
```
Integrate this package as a dev-dependency into your tests.
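If you publish to crates.io, the dev-dependency entry could look like the following sketch; the version is taken from the archive names in the examples above and is only illustrative:

```toml
[dev-dependencies]
# Version taken from the example archives above; adjust to the actual release.
xtest-data = "1.0.0-beta.3"
```

In an integration test, you then rewrite paths through the library: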
```rust
use std::path::PathBuf;

let mut path = PathBuf::from("tests/data.zip");
xtest_data::setup!()
    .rewrite([&mut path])
    .build();

// 'Magically' changed.
assert!(path.exists(), "{}", path.display());
```
Note that the calls above are typed as infallible, but they are not total: they will panic when something is missing, since absent data indicates a faulty setup rather than a condition the test should handle. The library expects you to access all data through it instead of through a direct path.
As the developer of a library, you will write integration tests with the goal of ensuring correct functionality of your code. Typically, these are executed in a CI pipeline before release. However, what if someone else (e.g. an open-source OS distribution) wants to repackage your code? In some cases they might need to perform small modifications: rewrite dependencies, apply compilation options such as hardening flags, etc. After those modifications it is unclear whether the end product still conforms to its own expectations, so they will want to run the integration test suite again. That's where this library comes in. It should ensure that:
- It is unobtrusive, in that it does not require modification to the code that is used when included as a dependency.
- Tests should be reproducible from the packaged `.crate`, and an author can check this property locally and during pre-release checks.
- Auxiliary data files required for tests are referenced unambiguously.
- It does not make unmodifiable assumptions about the source of test data.
First, export the self-contained object-pack collection with your test runs.
```bash
CARGO_XTEST_DATA_PACK_OBJECTS="$(pwd)/target/xtest-data" cargo test
zip xtest-data.zip -r target/xtest-data
```
This allows the library component to provide a compelling experience for
testing distributed packages, with the test data shipped as a separate archive.
You can of course pack `target/xtest-data` in any other shape or form you prefer.
When testing a crate archive, reverse these steps:
```bash
unzip xtest-data.zip
CARGO_XTEST_DATA_PACK_OBJECTS="$(pwd)/target/xtest-data" cargo test
```
For the basic usage, see the section How to apply above. For more advanced API usage, consult the documentation. The complete interface is not much more complex than the simple version above.
There is one additional detail if you want to check that your crate
successfully passes the tests on a crate distribution. For this, you can
repurpose the `cargo-xtest-data` subcommand of this crate as a binary:
```bash
cd path/to/xtest-data
cargo xtest-data --path to/your/crate test
```
Hint: if you add the source repository of `xtest-data` as a submodule and
modify your workspace to include the `xtask` folder, then you can execute the
`xtask` from your own crate via a cargo alias, avoiding a system-wide install.
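As a sketch, such an alias could look like the following; the manifest path depends on where you add the submodule and on the layout of the xtask crate, so treat it as an assumption:

```toml
# .cargo/config.toml, relative to your workspace (paths are illustrative).
[alias]
xtest-data = "run --manifest-path xtest-data/xtask/Cargo.toml --"
```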
The xtask will:

- Run `cargo package` to create the `.crate` archive and accompanying pack directory. Note that this requires the sources selected for the crate to be unmodified.
- Stop, if `test` is not selected. Otherwise, decompress and unpack this archive into a temporary directory.
- Compile the package with `xtest-data` overrides for local development (see next section, and the manual sketch below). In particular:
  - `CARGO_XTEST_DATA_PACK_OBJECTS` will point to the pack output directory;
  - `CARGO_XTEST_DATA_TMPDIR` will be set to a temporary directory created within the `target` directory;
  - `CARGO_TARGET_DIR` will also point to the target directory.
This keeps the `rustc` cached data around while otherwise simulating a fresh
distribution compilation.
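For orientation, the unpack-and-test part of these steps corresponds roughly to the manual invocation below; crate name, version, and directories are placeholders, and the xtask picks its own locations:

```bash
# A .crate file is a gzipped tar archive; unpack it into a scratch directory.
mkdir -p target/unpacked target/xtest-tmp
tar -xzf target/package/<name>-<version>.crate -C target/unpacked
cd target/unpacked/<name>-<version>

# Run the tests with the environment overrides described in the list above.
CARGO_XTEST_DATA_PACK_OBJECTS="$OLDPWD/target/xtest-data" \
CARGO_XTEST_DATA_TMPDIR="$OLDPWD/target/xtest-tmp" \
CARGO_TARGET_DIR="$OLDPWD/target" \
    cargo test
```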
In all settings, `xtest_data` will inspect the following:

- The `Cargo.toml` file located in the `CARGO_MANIFEST_DIR` will be read and decoded, and must at least contain the keys `package.name`, `package.version`, and `package.repository`.
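A minimal manifest satisfying this requirement could look like the following; the name and URL are of course illustrative:

```toml
[package]
name = "my-crate"
version = "0.1.0"
edition = "2021"
# The repository is where auxiliary test data will be fetched from.
repository = "https://example.com/me/my-crate"
```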
In a non-source setting (i.e. when running from a downloaded crate), the
`xtest_data` package will read the following environment variables:

- `CARGO_XTEST_DATA_TMPDIR` (fallback: `TMPDIR`) is required to be set when any of the tests are NOT integration tests. Simply put, the setup creates some auxiliary data files but it cannot guarantee cleaning them up. Requiring this variable is an explicit effort to communicate that to the environment. Feel free to contest this reasoning if you feel your use case would be better addressed with an implicit, leaking temporary directory.
- `CARGO_XTEST_DATA_PACK_OBJECTS`: A directory for git pack objects (see `man git-pack-objects`). Pack files are written to this directory when running tests from source, and read from this directory when running tests from a `.crate` archive. These are the same objects that would be fetched when doing a shallow and sparse clone from the source repository.
- `CARGO_XTEST_VCS_INFO`: Path to a file with version control information as JSON, equivalent in structure to cargo's generated VCS information (a usage sketch follows below). This forces xtest into VCS mode, where resources are replaced with data from the pack object(s). It can be used either to force crates to supply internal VCS information or to supplement such information. For example, packages generated with `cargo package --allow-dirty` will not include such a file, and this variable can be used to override that with a forced selection.
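As a usage sketch, forcing VCS mode for a package created with `--allow-dirty` could look like this; the file name and location are placeholders:

```bash
# vcs-info.json mirrors the structure of .cargo_vcs_info.json (shown further below).
CARGO_XTEST_VCS_INFO="$(pwd)/vcs-info.json" \
CARGO_XTEST_DATA_PACK_OBJECTS="$(pwd)/target/xtest-data" \
    cargo test
```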
The tool also provides limited sandboxing for local development in the form of
a [cargo xtask][cargo-xtask], allowing you to test the actual released archives.
However, the idea of an xtask is that the exact setup is not uploaded with the
main package but is just a local dev-tool, configured in the source cargo
configuration (`.cargo/config.toml` relative to your workspace).
The `cargo run` command will only let you refer to local dependencies in your
workspace, not actual dev-dependencies, see upstream. There are two ways
to pin the specific version regardless: you can refer to this repository as a
git submodule (or subtree), or you can `cargo install` the binary globally,
which makes it available as a `cargo` subcommand.
When `cargo` packages a `.crate`, it will include a file called
`.cargo_vcs_info.json` which contains basic version information, i.e. the
commit ID that was used as the basis for creating the archive. When the
methods of this crate run, they detect the presence or absence of this file to
determine whether data can be fetched (we also detect the repository information
from `Cargo.toml`).
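For reference, cargo's generated file looks roughly like this; the hash is made up and the exact set of keys depends on your cargo version:

```json
{
  "git": {
    "sha1": "0123456789abcdef0123456789abcdef01234567"
  },
  "path_in_vcs": ""
}
```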
If we seem to be running outside the development repository, then by default we
won't do anything but validate the information, debug-print what we plan to
fetch, and then instantly panic. However, if the environment variable
`CARGO_XTEST_DATA_FETCH` is set to `yes`, `true` or `1`, then we will try
to download and check out the requested files to the relative location.
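For example, to opt in to fetching when running the tests from a downloaded crate:

```bash
# Allow xtest-data to download and check out the referenced files.
CARGO_XTEST_DATA_FETCH=yes cargo test
```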
- The package is a pure dev-dependency, and there is a focus on introducing a small number of dependencies. (Any patches to minimize this further are welcome. We might add a toggle to disable locking and its dependencies if non-parallel test execution is good enough?)
- A full offline mode with minimal auxiliary source archives is provided. Building the crate without executing tests does not require any test data.
- The `xtask` tool can be used for local development and CI (we use it in our own pipeline, for example). It's not strongly linked to the implementation, just the public interface, so it is possible to replace it with your own logic.
- Auxiliary files are referenced by the commit object ID of the distributed crate, which implies a particular tree-ish from which they are retrieved. This is equivalent to descending a Merkle tree, which lends itself to efficient signatures etc.
- It is possible to override the source repository as long as it provides a git-compatible server. For example, you might pre-clone the source commit and provide the data via a local `file://` repository.
When fetching data, git may repeatedly ask for credentials and can be quite slow.
This issue should not occur when `git` supports `sparse-checkout`. The reason is
that we shell out to git, and `git checkout`, which we use to very selectively
unshallow the commit at the exact pathspecs we require, does not keep the
connection alive, even when you give it multiple pathspecs at the same time
through `--pathspec-from-file=-`. With `sparse-checkout`, however, we only call
this once, which lowers the number of connection attempts. A workaround is to
set up a local credential agent and purge it afterwards, or to create a
short-lived token instead.
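One way to realize the credential workaround, sketched with git's built-in cache helper; the timeout and the global scope are choices you may want to adjust:

```bash
# Cache credentials briefly so repeated connections do not re-prompt.
git config --global credential.helper 'cache --timeout=300'
# ... run the tests that fetch data ...
# Purge the cached credentials and remove the helper again.
git credential-cache exit
git config --global --unset credential.helper
```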