"Object not found" updating registry
Closed this issue · 22 comments
cargo test --verbose
Updating registry `https://github.com/rust-lang/crates.io-index`
error: failed to load source for a dependency on `interpolate_idents`
Caused by:
Unable to update registry https://github.com/rust-lang/crates.io-index
Caused by:
failed to fetch `https://github.com/rust-lang/crates.io-index`
Caused by:
[9/-3] Object not found - no match for id (ad11335f47604b5f394994354204323098310754)
version: cargo 0.19.0-nightly (fa7584c14 2017-04-26)
Hm, is this persistent? In the sense, does it reproduce if you try again?
Also, if you blow away `$CARGO_HOME/registry`, does it reproduce?
Removing the registry seems to have some effect.
How unusual! You wouldn't happen to have any proxies or Cargo configuration pointing to another GitHub repo, would you? I checked, and that id definitely exists in the index...
Once I managed to wipe the right cargo directory it managed to update correctly, sorry for the shadow edit :)
I just had the same really weird behavior:
Updating registry `https://github.com/rust-lang/crates.io-index`
error: failed to verify package tarball
Caused by:
failed to load source for a dependency on `fast_chemail`
Caused by:
Unable to update registry `https://github.com/rust-lang/crates.io-index`
Caused by:
failed to fetch `https://github.com/rust-lang/crates.io-index`
Caused by:
[9/-3] object not found - no match for id (5681de4c2f203b06b32b009e2e3a5fe8949fc16c)
Weirdly, I'm suddenly seeing this as well:
Clank:sample d$ rustc --version
rustc 1.26.0 (a77568041 2018-05-07)
Clank:sample d$ cargo --version
cargo 1.26.0 (0e7c5a931 2018-04-06)
Clank:sample d$ cargo run
Updating registry `https://github.com/rust-lang/crates.io-index`
error: failed to fetch `https://github.com/rust-lang/crates.io-index`
Caused by:
object not found - no match for id (5cd375d78d991ed8dffbfe70fcf69c072484afb5); class=Odb (9); code=NotFound (-3)
...but I can hit the url, no problem:
Clank:sample d$ curl https://github.com/rust-lang/crates.io-index -o tmp.txt
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  830k    0  830k    0     0   200k      0 --:--:--  0:00:04 --:--:--  200k
(ah, as per #4245, `rm -rf ~/.cargo/registry` fixed it. How strange...)
I have this:
$ cargo build
Updating registry `https://github.com/rust-lang/crates.io-index`
error: failed to fetch `https://github.com/rust-lang/crates.io-index`
Caused by:
object not found - no match for id (04d82c0f2273128b88bdf61816ce0e69e8ebe351); class=Odb (9); code=NotFound (-3)
$ cargo --version
cargo 1.28.0 (96a2c7d16 2018-07-13)
$ rustc --version
rustc 1.28.0 (9634041f0 2018-07-30)
This solution worked, but the problem is mostly that it's such a weird error.
Chiming in with the same issue, also fixed by `rm -rf ~/.cargo/registry`. Full log trace here.
Cargo version: cargo 1.32.0 (8610973aa 2019-01-02)
(though it was also happening on a freshly updated nightly, as well as my previous version, which I think was 1.31)
Problem
Same issue, new project, on macOS. The project has only `semver` as a dependency.
System Version: macOS 10.13.6 (17G11008)
Kernel Version: Darwin 17.7.0
[dependencies]
semver = "0.9.0"
$ cargo build
Updating crates.io index
error: failed to load source for a dependency on `semver`
Caused by:
Unable to update registry `https://github.com/rust-lang/crates.io-index`
Caused by:
failed to fetch `https://github.com/rust-lang/crates.io-index`
Caused by:
object not found - no match for id (7c81d5e4f533d296122e4777d8324508c2159601); class=Odb (9); code=NotFound (-3)
Notes
Output of `cargo version`:
cargo 1.40.0 (bc8e4c8be 2019-11-22)
rustc 1.40.0 (73528e339 2019-12-16)
Solution
rm -rf ~/.cargo/registry
Same issue in v1.41.0 on Linux. Clearing the registry did fix it:
Updating crates.io index
error: failed to fetch `https://github.com/rust-lang/crates.io-index`
Caused by:
object not found - no match for id (05707ea64ba70866ac7211b5fde456a239e21f55); class=Odb (9); code=NotFound (-3)
Same issue here: v1.41.1 on Darwin (OSX Catalina), but unfortunately clearing out `~/.cargo/registry` has no effect.
If anyone hits this problem, can you compress `~/.cargo/registry/index` and post it somewhere? I think GitHub supports up to 100 MB zip files if you want to attach it here. I'd like to dig through and see if there is some way to reverse-engineer what has gone wrong.
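For anyone unsure how to package that up, a minimal sketch, assuming the default `CARGO_HOME` of `~/.cargo` (adjust the path if you override it):

```sh
# Archive only the registry index, not the downloaded crate sources.
# Assumes the default CARGO_HOME of ~/.cargo.
tar czf cargo-registry-index.tgz -C ~/.cargo/registry index
```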
@Cache-miss That looks like a basic network error. This issue is for the `object not found` error.
If you are consistently having that error, I recommend opening a new issue. You can also try `net.git-fetch-with-cli` as a workaround.
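For reference, a minimal sketch of enabling that workaround; the config file path below is the usual user-level one, but any Cargo config location works, and the same setting can be passed via the `CARGO_NET_GIT_FETCH_WITH_CLI` environment variable:

```sh
# Append the [net] setting to the user-level Cargo config.
# (Older Cargo versions read ~/.cargo/config instead of config.toml.)
cat >> ~/.cargo/config.toml <<'EOF'
[net]
git-fetch-with-cli = true
EOF

# One-off alternative, e.g. for a single CI job:
export CARGO_NET_GIT_FETCH_WITH_CLI=true
```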
Same issue here: v1.42. Removing `~/.cargo/registry` doesn't help.
This happens consistently; it's not connected to networking.
We have a shared `CARGO_HOME` between branches in CI. Sometimes we hit this problem, and it started around 1.41.
I've been investigating it and found that `cargo` (or `git`) occasionally cleans up there and removes some deltas from `CARGO_HOME/registry`. This causes the problem. I'm not sure, but the cache is possibly being corrupted when different versions of the dependencies are involved.
As in here:
Downloading crates ...
error: failed to download `assert_cmd v0.12.1`
Caused by:
unable to get packages from source
Caused by:
failed to parse manifest at `/ci-cache/substrate/cargo/cargo-check-benches/registry/src/github.com-1ecc6299db9ec823/assert_cmd-0.12.1/Cargo.toml`
Caused by:
can't find `bin_fixture` bin, specify bin.path
Command exited with non-zero status 101
Vs. here:
Downloading crates ...
error: failed to download `assert_cmd v1.0.1`
Caused by:
unable to get packages from source
Caused by:
failed to parse manifest at `/ci-cache/substrate/cargo/cargo-check-benches/registry/src/github.com-1ecc6299db9ec823/assert_cmd-1.0.1/Cargo.toml`
Caused by:
can't find `bin_fixture` bin, specify bin.path
Command exited with non-zero status 101
I started the investigation back around 1.41, when the error message was a bit different, so here are some details from then:
I managed to reproduce it in Docker against a copy of the corrupt cache:
root@e540f637dc6b:/builds# cargo test --verbose --all-features --release --manifest-path core/Cargo.toml
Updating crates.io index
error: failed to fetch `https://github.com/rust-lang/crates.io-index`
Caused by:
missing delta bases; class=Indexer (15)
[net]
git-fetch-with-cli = true
With that in the Cargo config, I got the git command which cargo runs under the hood:
root@29f556a902eb:/builds# cargo test --verbose --all-features --release --manifest-path core/Cargo.toml
Updating crates.io index
Running `git fetch --tags --force --update-head-ok 'https://github.com/rust-lang/crates.io-index' 'refs/heads/master:refs/remotes/origin/master'`
error: failed to fetch `https://github.com/rust-lang/crates.io-index`
Caused by:
process didn't exit successfully: `git fetch --tags --force --update-head-ok 'https://github.com/rust-lang/crates.io-index' 'refs/heads/master:refs/remotes/origin/master'` (exit code: 128)
--- stderr
fatal: pack has 1 unresolved delta
fatal: index-pack failed
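A rough sketch of one way to confirm that the cached index clone itself is missing objects is to run `git fsck` directly against it; the `github.com-1ecc6299db9ec823` directory name matches the hash in the logs above, but check what yours is called:

```sh
# Check the cached crates.io index clone for missing or corrupt objects.
# Assumes CARGO_HOME is set (or substitute ~/.cargo) and the default index
# directory name seen in the logs above.
git -C "$CARGO_HOME/registry/index/github.com-1ecc6299db9ec823" fsck --full
```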
After skipping some steps with `strace` and changing `git` versions, which didn't help resolve it, I came to the conclusion that `cargo` does the cleanup.
Also, `git fetch --unshallow` does not fix the repo, as it's not a shallow clone.
Unfortunately, I don't have much time for another investigation and will use a workaround instead.
But if I stumble upon the corrupt cache again, I'll send it to you, @ehuss.
> We have a shared `CARGO_HOME` between branches in CI.
Is it possible that the branches may be run in parallel? I know that git can corrupt itself when more than one operation is run on an on-disk repo at the same time. Cargo has a file lock to avoid that (and other similar things), but maybe it has bugs. Judging by the changelog, there was even a bug fixed in it in 1.41 (#7602).
@ehuss we're lucky today! Here is your corrupted cache and the steps to repro:
- download https://send.firefox.com/download/9a5096d5aabed961/#e0exonr-1lJxPPKIiYZGQQ
- unarchive it: `tar xf <downloaded-archive> -C /place/you/want`
- `sudo docker run -it -v /place/you/want/:/cache/ parity/rust-builder:latest`
- inside the container:
  - `export CARGO_HOME=/cache/cargo-check-benches/`
  - `git clone https://github.com/paritytech/substrate .`
  - `BUILD_DUMMY_WASM_BINARY=1 time cargo +nightly check --benches --all`
- and you will get the error
> Is it possible that the branches may be run in parallel?
It's rare, but possible, yes. However, before 1.41 it was never an issue. I've witnessed many times that a job waits for the file lock before starting downloads.
Now that I've stopped using a shared cache between branches (which used to work perfectly) and had to stop pre-populating the cache for new branches, CI has obviously become less efficient and gone back to downloading/unpacking the same dependencies again and again.
Will `cargo vendor` help me work around it?
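A rough sketch of how that could be wired up; `cargo vendor` prints the exact `[source]` replacement snippet to add, so the one appended below is only approximate:

```sh
# Vendor all dependencies into ./vendor (run from the workspace root).
cargo vendor vendor

# cargo vendor prints a source-replacement snippet on success; roughly:
cat >> .cargo/config.toml <<'EOF'
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
EOF
```

With the vendored sources cached or checked in, builds should no longer need to fetch the registry index at all, which would sidestep the corrupted-index problem, at the cost of keeping the vendor directory up to date.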
Any progress on the matter?
OK, this is getting really annoying; today I was asked to clean the cargo cache more than 8 times. I even made a way for devs to clean the cache on their own.
For now we'll stop using `CARGO_HOME` caching until this issue is resolved.
Can anyone help with it?
@TriplEight Several of the error messages you posted seem unrelated to this issue. Are you seeing `Object not found - no match for id`? If it is different, a separate issue might be best. It looks like the link you posted has expired.
I'm pretty unfamiliar with GitLab CI, and particularly how it handles caching. You might want to check that the filesystem supports the style of locking Cargo uses (`flock` with `LOCK_EX` on Unix).
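A quick way to sanity-check that on the shared cache volume (the `/ci-cache` path below is hypothetical; substitute wherever GitLab mounts the cache) is to exercise an exclusive lock from two processes with the util-linux `flock` tool:

```sh
#!/bin/sh
# Hypothetical lock file on the shared CI cache volume.
LOCKFILE=/ci-cache/flock-test.lock

# Hold an exclusive lock for a few seconds in the background.
flock --exclusive "$LOCKFILE" -c 'sleep 5' &
sleep 1

# A second exclusive, non-blocking attempt should fail while the first holds the lock.
if flock --exclusive --nonblock "$LOCKFILE" -c 'true'; then
    echo "WARNING: exclusive lock was NOT enforced on this filesystem"
else
    echo "OK: exclusive lock is enforced"
fi
wait
```

If the second attempt succeeds while the first lock is still held, the filesystem backing the cache is not honoring `flock`, and Cargo's lock cannot protect parallel jobs.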