ianka/xfs_undelete

Restoring files with big inode numbers fails

Closed this issue · 5 comments

Hello, author!
My test scenario is to delete all files in the /testdir directory. During my tests, I found that the tool cannot recover files whose inode numbers jump (which often happens with multi-level directories), nor files with larger inode numbers. Even when the inode number is specified with the -s parameter, the file cannot be recovered. For example, files with inode numbers 50-110 can be restored, while files with numbers 13581-13590 and 1069121-1069129 cannot. How can I resolve this?

ianka commented

Could you describe the problem in more detail?

It's perfectly normal that inode numbers aren't consecutive. Xfs_undelete does not count those numbers up itself. Instead, it walks the inode tree and treats the inode numbers as identifiers only. If you use the -s option, you have to specify an existing inode number: the tool still walks the tree from the very beginning, but it does not try to recover anything until the exact inode number you specified is found.
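
To illustrate that behaviour, here is a minimal Python sketch (not the actual implementation, which is written in Tcl; InodeRecord and recover_deleted are made-up names for illustration only):

```python
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class InodeRecord:
    """Made-up stand-in for one entry of the on-disk inode tree."""
    number: int
    deleted: bool

def recover_deleted(inodes: Iterable[InodeRecord],
                    start_inode: Optional[int] = None) -> list:
    """Walk the whole inode tree in order; return the inode numbers recovered.

    Inode numbers are opaque identifiers; the walk always starts at the
    beginning of the tree, and start_inode (the -s value) only suppresses
    recovery until that exact inode number has been reached.
    """
    recovered = []
    recovering = start_inode is None   # without -s, recover right away
    for inode in inodes:               # tree order; numbers may jump freely
        if not recovering:
            if inode.number != start_inode:
                continue               # keep walking, don't recover yet
            recovering = True          # -s inode found: resume recovery
        if inode.deleted:
            recovered.append(inode.number)  # the real tool restores the file here
    return recovered

# Gaps between inode numbers are normal and don't affect the walk:
tree = [InodeRecord(n, deleted=True) for n in (50, 110, 13581, 1069121)]
print(recover_deleted(tree))                     # [50, 110, 13581, 1069121]
print(recover_deleted(tree, start_inode=13581))  # [13581, 1069121]
```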

ianka commented

Xfs_undelete does not care about directories. The inode tree is a separate structure.

You don't need the -s option for those tests. It's meant to pick up an interrupted recovery at an existing inode number that had been printed during the interrupted run, so you can recover in multiple batches, e.g. if your output directory is on a rather small USB stick. If you point -s at an inode number, the tool won't start the inode tree walk at that point (that's simply not possible); it still walks the whole tree and just doesn't recover any files until the inode specified with -s has been passed during the walk.

The inode tree is always walked in its entirety. (Unless you interrupt the recovery run.)
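
As a rough illustration of that batch workflow, here is another Python sketch under the same assumptions as above (walk_and_recover and stop_after are made up; stop_after only simulates an interrupted run):

```python
from dataclasses import dataclass

@dataclass
class InodeRecord:
    number: int
    deleted: bool

def walk_and_recover(inodes, start_inode=None, stop_after=None):
    """One recovery run; stop_after simulates an interruption after N files."""
    recovered = []
    recovering = start_inode is None
    for inode in inodes:                       # every run walks the whole tree
        if not recovering:
            if inode.number != start_inode:
                continue
            recovering = True
        if inode.deleted:
            recovered.append(inode.number)     # inode number gets printed here
            if stop_after is not None and len(recovered) >= stop_after:
                break                          # run interrupted at this point
    return recovered

tree = [InodeRecord(n, True) for n in (50, 110, 13581, 13590, 1069121)]
first = walk_and_recover(tree, stop_after=2)             # interrupted run
second = walk_and_recover(tree, start_inode=first[-1])   # resume, like "-s 110"
print(first, second)   # [50, 110] [110, 13581, 13590, 1069121]
```

In this sketch the inode passed as the start inode is recovered a second time on the resumed run; since the walk cannot be entered mid-tree, a one-file overlap is the simple way to guarantee nothing is skipped.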

I understand that in your test environment some files could be recovered, and those tend to have small, consecutive inode numbers.

That's unfortunately not enough information to look for a specific bug. Do you have more clues for me?

Unfortunately, I have been unable to find a solution to your problem. Sorry.

I'm closing this issue. Feel free to reopen if you have new information.