borgbackup/borg

Dealing with attic issues

maltefiala opened this issue · 63 comments

Here is a list of all open issues in attic, acquired with this dirty Python script. I suggest we go through all of them and tick them off when fixed.

Done / tracked here / invalid / out-of-scope:

I ticked all the stuff that's currently in the borg master branch. There is some more stuff already fixed/improved that is not yet merged there.

I am also now ticking the items that I think should be closed (because they are duplicates, irrelevant, stale, lacking info, ...).

I'm also ticking the stuff that has its own issues in the borg issue tracker.

about jborg/attic#252 (fadvise DONTNEED): it is merged (and AFAICS using fadvise DONTNEED had positive effects).

https://github.com/borgbackup/borg/blob/master/borg/_chunker.c#L160

People seem to disagree about whether that is beneficial or not.
I'll reconsider as soon as facts, or at least good reasoning, become available indicating that using it is wrong.
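
For anyone wanting the gist of it, here is a minimal Python sketch of the technique (the real implementation is the C chunker linked above; this is illustrative only, names are made up):

```python
import os

# Illustrative sketch of the fadvise DONTNEED idea: after reading a file
# for backup, advise the kernel that we will not need its pages again,
# so a large backup run does not evict more useful data from the page cache.
def read_and_drop_cache(path, bufsize=8 * 1024 * 1024):
    fd = os.open(path, os.O_RDONLY)
    try:
        offset = 0
        while True:
            data = os.read(fd, bufsize)
            if not data:
                break
            # ... hand `data` to the chunker here ...
            # drop the pages we just read from the page cache
            os.posix_fadvise(fd, offset, len(data), os.POSIX_FADV_DONTNEED)
            offset += len(data)
    finally:
        os.close(fd)
```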

about pull mode: it is ticked because we have an issue (which is a bit more general) in our issue tracker: #36

it would have been nice to note which tickets were resolved, tracked in the issue tracker, or ignored, instead of just ticking them off here... now we don't clearly know what's what...

@anarcat I see your point, but I also don't want to turn this into even more work.

I mainly see the unchecked attic issues as stuff that still has to be looked at (and maybe dealt with), so that one does not have to go through all the issues there again and again. Of course we also need to deal with the issues in the borg issue tracker.

What's really fixed can be seen more easily in CHANGES.rst.

yeah ok, just trying to fix #224 here :)

jborg/attic#99 implemented through #248

jborg/attic#181 implemented through #247

After doing a large volume test myself (see #216) and not finding issues, I'll check these ones as "can't reproduce" now:

jborg/attic#264
jborg/attic#176
jborg/attic#166

Note: I put a checkmark on that (native) "windows support" attic issue. Not because we have native Windows support (we only have cygwin), but because the related changesets there are rather messed up.

jborg/attic#18 has been reopened in #315

jborg/attic#70 is 404 (huh? how can that happen?)

but as we recently improved FUSE performance, I've just checked it as solved.

I checked jborg/attic#367 in the list, as we track this in #225.

Hi
In the list there is a point named "Pull mode" which is ticked. That is great, since it is exactly what I need (schedule and manage backups from one central server). Unfortunately I can't find any hint in the docs on how to make backups in pull mode.

so does that mean it is fixed/implemented but I am missing the docs, or is it ticked by mistake?
(I would love to see pull mode implemented)

@anarcat: thx - and sorry for just skimming over the thread

Hi,

I'm unable to run borg create as I'm running into #317 (something related to NFS). Would you like me to open a separate issue in this repo?

info:

Platform: OpenBSD orchid.home 5.9 GENERIC.MP#1854 amd64 amd64
Borg: 0.30.0 Python: CPython 3.4.4

@pyrohh it's unclear what your issue is. Sure, open a new ticket and provide enough information so we can reproduce it.

@ThomasWaldmann I'll do that, and I meant to link to jborg/attic#317, not #317 in this repo.

@pyrohh ah :) but I doubt it is the same issue as that one, as we do not use fcntl locking any more.

False alarm, everything works after a reinstall... must have been the boogie man. Anyway, this is such a nice piece of software, thanks for working on it :D

jborg/attic#103 checked, we have it now in ticket #661.

jborg/attic#304: I checked that as solved, as we require Python 3.4+ now and it only happened on 3.2.

I've checked the "win32" attic issue, see our "windows" branch.

checked jborg/attic#93 - the original question was answered with "use rsync". The discussion also produced the idea behind the "borg with-lock" subcommand, which we have now implemented.

jborg/attic#380 fixed in borg by PR #1019.

jborg/attic#266 is now tracked as #1042 here.

IMHO could be ticked:
jborg/attic#362 (locally backup s3 issues) should use fuse
jborg/attic#323 (ssh:// after network re-connection and SIGINT issues) locking was improved on borg
jborg/attic#296 (Add disk space preview command) borg has --dry-run
jborg/attic#260 (improve restores by posix_fallocate) Fixed already in borg?
jborg/attic#227 (exclusive=x flag and RemoteRepository) This and several other issues are about parallel backups into the same repository. Borg has #1220 and some locking refinement discussions that try to solve this AFAIK.
jborg/attic#220 (Implement mid-file checkpoints?) We got that now! =D
jborg/attic#219 (aes-gcm support) TW PR, already in borg?
jborg/attic#216 (draw a picture from the internals documentation) Borg has extensive documentation with some sample ASCII art. Guess that can also be ticked
jborg/attic#207 (Flexibility and cleanup) TW PR, already in borg?
jborg/attic#211 (aes-gcm or aes-ocb?) TW issue, already in borg?
jborg/attic#210 (where shall parameters get stored?) TW issue, already in borg?
jborg/attic#131 (Parallel backups from different hosts crash attic) See above
jborg/attic#145 (Repository clone command?) According to #683 can be ticked

What we should test/discuss
jborg/attic#276 (Incorrect directory permissions during extract) Interesting and easy to test
jborg/attic#169 (Issue with modification dates of parent directories in mounted archives) same
jborg/attic#195 ("extract" command has no destination switch?) Do we want this?
jborg/attic#151 (Force-include the specified paths in 'create')

update

jborg/attic#110: Borg has #210 and #773 for the lock timeout feature. As with all issues above that are about parallel operations (list while creating, create while mounted, etc.), we have #768

A few others simply need some discussion about whether they can be easily implemented and whether implementing them makes sense. But I'm leaving that to others. :)

ticked PermissionError jborg/attic#381 fixed in 9ebc53a

ticked now: 362 296 260

323: unclear; we have recently documented some ssh settings though, so maybe it could be checked

not fixed, still open: 216

in the works, PR against master/1.1 exists: 220

planned for 1.2, but not finished yet: 219, 211, 210

207: partially done (compression), crypto still todo

131: no crash (AFAIK), but a deadlock, not much better

145: rsync does not change the repo UUID (and that was the point of that issue)

(for transparency) jborg/attic#260

I'd tick "improve restores by posix_fallocate" because it has next to no positive impact on traditional FSes and (reportedly) a negative impact on CoW FSes like ZFS and btrfs

Another very bad thing (tm) about fallocate is that if the FS doesn't support it, glibc implements the worst possible fallback ever (it writes to every block to force allocation).
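
For context, the pattern under discussion looks roughly like this (a hypothetical sketch, not borg code):

```python
import os

# Hypothetical sketch: preallocate the full file size before writing
# extracted chunks. On traditional filesystems this can reduce
# fragmentation; on CoW filesystems (ZFS, btrfs) it reportedly hurts,
# and without native FS support glibc emulates it by writing to every
# block, which is about the worst case for a restore.
def extract_with_prealloc(dst_path, total_size, chunks):
    fd = os.open(dst_path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    try:
        os.posix_fallocate(fd, 0, total_size)  # reserve all blocks up front
        for data in chunks:
            os.write(fd, data)
    finally:
        os.close(fd)
```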

OK, ticked 260 off the list. @enkore can you add your comment to the attic ticket and suggest closing it?

checked attic 130, likely solved by our #545.
checked attic 220, we have it as #1198.
checked attic 382, we have it as #1295.

checked attic 169 -> it is not designed to do that, should be closed there also.

checked attic 177 -> safe to say it's an "issue" outside attic, i.e. a crash before the file was committed to disk and the like. #1060 goes in that direction.

attic 382 resolved by #1295

checked attic 372 - solved by borg check --repair since 1.0.4 (remember correct chunks) and 1.0.6 (list missing chunks, warn at extract time, fuse EIO, healing capability).
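
Roughly, the healing idea works like this (a hypothetical sketch, not borg's actual code; `ZeroChunkRepo` and its methods are invented for illustration):

```python
class ZeroChunkRepo:
    """Toy in-memory stand-in for a repository (illustrative only)."""
    def __init__(self, chunks):
        self.chunks = dict(chunks)      # chunk id -> bytes
    def has_chunk(self, chunk_id):
        return chunk_id in self.chunks
    def put_zero_chunk(self, size):
        chunk_id = ("zeros", size)      # toy id scheme
        self.chunks[chunk_id] = b"\0" * size
        return chunk_id

# A missing chunk gets replaced by a same-size, zero-filled placeholder
# so extraction can proceed (with a warning); a later repair run can swap
# the original chunk back in if it reappears in the repository.
def repair_item_chunks(item_chunks, repo):
    repaired = []
    for chunk_id, size in item_chunks:
        if repo.has_chunk(chunk_id):
            repaired.append((chunk_id, size))           # intact, or back again
        else:
            placeholder_id = repo.put_zero_chunk(size)  # same-size zero chunk
            repaired.append((placeholder_id, size))
    return repaired
```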

checked attic 197 - issues on openssl 1.0.0j, which we don't support

checked attic 107 - borg caches per-archive hashindexes locally, so it only needs to fetch metadata from new archives.
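
The idea, as a hypothetical sketch (invented names: `repo.list_archive_ids()` returning hex strings, `repo.build_archive_index()` fetching and parsing one archive's metadata):

```python
import os
import pickle

# Cache one chunk index per archive locally; only archives without a
# local index file cause remote metadata fetches, everything else is
# reused, and the per-archive indexes are merged into one chunk index.
def sync_chunk_cache(repo, cache_dir):
    merged = {}  # chunk id -> reference count (simplified)
    for archive_id in repo.list_archive_ids():
        path = os.path.join(cache_dir, archive_id + ".idx")
        if os.path.exists(path):
            with open(path, "rb") as f:
                index = pickle.load(f)      # cheap: local cache hit
        else:
            index = repo.build_archive_index(archive_id)  # expensive: fetch
            with open(path, "wb") as f:
                pickle.dump(index, f)
        for chunk_id, count in index.items():
            merged[chunk_id] = merged.get(chunk_id, 0) + count
    return merged
```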

checked attic 100, quite old and no response there.

FYI, just noticed jborg/attic#219 is closed.

@JensRantil yes, but in borg, we'll do aes-gcm, likely in borg 1.2.

attic 120 is borg #1462.

checked attic 227 - in PR #1371 an rpc api for the exclusive flag was added, so remote and local repos are now more similar in how they lock. Also, the deadlock was fixed.

checked attic 383 - can be solved with borg with-lock ...

checked attic 384, out-of-scope / too special.

attic 211 and 219 checked, tracked here by #1044 and #45.

attic 131 checked. It is a rather old ticket, the tracebacks do not contain enough information.

Also, this issue is likely related to repository locking, which works very differently in borg (mkdir-based and, since 1.0.7, immediately exclusive) than in attic (posix locks, lock upgrades).
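
For illustration, mkdir-based locking in its most minimal form (borg's real lock is more elaborate, AFAIK it also records the holder and supports shared locks; this sketch only shows why mkdir is attractive: it is atomic, even on most network filesystems, unlike fcntl/posix locks):

```python
import os
import time

class DirLock:
    """Minimal mkdir-based exclusive lock (illustrative sketch)."""
    def __init__(self, path, timeout=10.0):
        self.path = path + ".lock"
        self.timeout = timeout

    def __enter__(self):
        deadline = time.monotonic() + self.timeout
        while True:
            try:
                os.mkdir(self.path)  # atomic: fails if it already exists
                return self
            except FileExistsError:
                if time.monotonic() >= deadline:
                    raise TimeoutError("could not acquire lock " + self.path)
                time.sleep(0.1)

    def __exit__(self, *exc):
        os.rmdir(self.path)
```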

Checked attic 387 - borg works on cygwin and there is an ongoing windows port in branch "windows".

Checked attic 145, is locally tracked in #1695 now.

attic 276 is now tracked locally in #1751.

attic 195 is now tracked locally in #1783.

attic 345 has many local equivalents, including #1044, #672, #747, #45, and so on

attic 292 is solved since 1.0.8 (#1524).

from tw: attic 292 is about using different repo urls, borg #1524 is about saving the answer in any case, not just when doing write ops.

attic 210 is implemented with Manifest.config and borg_security_dir. Maybe create an internals doc ticket there?

from tw: yes that would be good, also for the changelog.

attic 207 is long since implemented in Borg :) The crypto stuff has its own tickets (see above).

from tw: some of it is implemented, some will (hopefully) come with 1.2. But yes, we can check it, as we have local tickets.

attic 182: fuse archives are cached via a tempfile in the page cache. Resolved?

from tw: no, attic 182 is about a search index to optimize access to the right chunks.

attic 117 resolved in attic (also: fuse versions view, fuse repository mount, borg diff).

from tw: you mean "resolved in borg"? yes, fuse versions help, although it is not a general "file search" functionality. it only helps if you know the precise path.

attic 110: better locking is done now; otherwise it's covered by #768 and the like

from tw: ok, similar enough.

attic 104 is tracked locally in #1406

from tw: ok

attic 393 is handled by borg #2092.

attic 395 checked - borg can be built with openssl 1.1.

attic 386: I've opened #2415 to discuss this issue in the general case.

I'm closing this now. If we are missing something important here in the borg issue tracker, please file an issue here (after checking we do not already have one for it).