Cannot get encryption method of split files
I'm using 7z2hashcat 0.9 and I'm trying to use this to get the encryption method of an archive that is separated into multiple files, which looks like this:
dbs.7z.001
dbs.7z.002
dbs.7z.003
etc...
Here is my console output:
W:\MEGA\db>7z2hashcat dbs.7z.001
W:\MEGA\db>7z2hashcat .\
WARNING: could not open the file '.\' for reading
W:\MEGA\db>7z2hashcat ./
WARNING: could not open the file './' for reading
W:\MEGA\db>7z2hashcat *
WARNING: could not open file '*'
W:\MEGA\db>7z2hashcat *.
WARNING: could not open file '*.'
W:\MEGA\db>7z2hashcat dbs.7z.001
W:\MEGA\db>
Any ideas? Is this supported? Thanks.
I've never tried it with this kind of archive. How do you generate them?
With 7z itself?
Or was it the case that a huge file was split with an external tool?
Maybe it works if you just concatenate them first.
Furthermore, it seems that you are using Windows but trying to use Linux-style paths and wildcards (the usage of course depends on the operating system and the shell). The point of a wildcard is normally just to avoid specifying each and every file individually, so if you have problems finding the correct wildcard for your operating system, you can just specify each file individually.
The "could not open file" message in this case just means that your operating system/shell did not understand the wildcard (and didn't expand it for you).
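(For example, with the file names from your listing, specifying the parts one by one instead of relying on a wildcard would look something like this:)
7z2hashcat dbs.7z.001 dbs.7z.002 dbs.7z.003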
I will try to generate such a (split) archive, but my guess is that if you just put the files together (in the correct order), you should be fine (and then you only need to specify 1 single file for 7z2hashcat).
Maybe you can try this approach too and report back.
update: I just found out that Windows doesn't expand wildcards automatically; you need to use additional handlers/code within your executable/script! This might work: http://stackoverflow.com/a/7898588 , but I am not sure if I should add additional modules just for this Windows file expanding. bummer
update2: it seems that the "create volume" option is a standard and generally supported option of 7z: one can create split volumes like this:
7z a huge_file.7z huge_file.txt -v100M -p
I might also try something like this for Windows globbing (untested code so far):
my @file_list = @ARGV; # the raw command line arguments

my $os = $^O;

if (($os eq "MSWin32") || ($os eq "Win32"))
{
  # Windows shells do not expand wildcards for us, so expand them with File::Glob

  if (eval "require File::Glob; 1")
  {
    my @new_file_list = ();

    foreach my $item (@file_list)
    {
      push (@new_file_list, File::Glob::bsd_glob ($item));
    }

    @file_list = @new_file_list;
  }
}
I generated the archive by right clicking on a bunch of files, then:
7-zip -> Add to archive... -> Split to volumes, bytes: -> 700M CD
But the command you used (7z a huge_file.7z huge_file.txt -v100M -p) should do the same thing, looking at the syntax.
I'm still undecided whether we should add support for split archives. The problem here is: it seems that normally they are split for a good reason, i.e. exactly because otherwise the whole 7z file would be too huge. That also means that it doesn't make sense to read the whole file into RAM/memory (because we risk hitting memory limits).
On the other hand, if we try not to read the files into memory but instead implement some logic and code to jump between the files (seeking within one file, then jumping back to the first file, etc.), we risk having to read the same files several times, problems getting the first/last files right, incorrect sorting, mix-ups with other files that were specified on the command line, and so on.
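(Just to make the idea concrete, here is a rough sketch of such a "virtual stream" layer, written from scratch for illustration only; the subroutine name and the way the parts are passed in are made up, this is not code from 7z2hashcat:)
use strict;
use warnings;

# sketch: read $length bytes starting at a logical $offset within the whole
# (virtual) archive, where @$parts is the already sorted list of split files
sub read_across_parts
{
  my ($parts, $offset, $length) = @_;

  my $data = "";

  foreach my $part (@$parts)
  {
    my $size = -s $part;

    if ($offset >= $size)
    {
      $offset -= $size; # the wanted range starts in a later part

      next;
    }

    open (my $fh, "<", $part) or die "could not open '$part'";
    binmode ($fh);

    seek ($fh, $offset, 0);

    my $chunk_len = $size - $offset;

    $chunk_len = $length if ($length < $chunk_len);

    read ($fh, my $chunk, $chunk_len);

    close ($fh);

    $data   .= $chunk;
    $length -= $chunk_len;
    $offset  = 0;

    last if ($length <= 0);
  }

  return $data;
}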
If we really decide to implement this, I would say that we at least add the restriction that the user can't mix split files with "non-split" ones.
Otherwise, what should happen when the user runs something like this:
./7z2hashcat.pl tiny_file.7z huge_file.7z.* another_file.7z
update: of course, thinking about it, we mainly need the first and the last file (but this also depends on the size of each chunk, i.e. this is only valid if the chunks are "large enough"). I don't say that this is impossible to implement, but it of course requires adding some logic and code that might be totally unnecessary, because the user can just concatenate the splits manually:
Linux:
cat huge_file.7z.* > huge_file.7z
Windows:
copy /b huge_file.7z.* huge_file.7z
(of course this assumes that Linux/Windows sort the parts correctly, i.e. start with 001, 002, 003..., but as far as I know this should always be the case)
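(If you don't want to rely on the shell/copy sorting the parts for you, a tiny helper script can enforce the numeric order explicitly; this is just an illustration and the file names are made up:)
use strict;
use warnings;

# glue huge_file.7z.001, huge_file.7z.002, ... together in explicit numeric order

my @parts = sort { ($a =~ /(\d+)$/)[0] <=> ($b =~ /(\d+)$/)[0] } glob ("huge_file.7z.*");

open (my $out, ">", "huge_file.7z") or die "could not open output file";
binmode ($out);

foreach my $part (@parts)
{
  open (my $in, "<", $part) or die "could not open '$part'";
  binmode ($in);

  while (read ($in, my $buffer, 1024 * 1024))
  {
    print $out $buffer;
  }

  close ($in);
}

close ($out);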
I added the file globbing feature for Windows; now you should be able to use wildcards.
With the commit above I have added support for split 7z files (the restriction was implemented exactly as discussed above: only a list of split files that belong to the same .7z can be provided, sorting will be handled internally, but you can't mix 2 independent split archives within the same run).
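(To sketch what the restriction and the internal sorting could look like, this is not the actual code from the commit, just an illustration; @file_list, the regular expressions and the error messages are assumptions:)
# group the supplied files by base name and refuse to continue if more than
# one split archive (or a mix with other files) was given

my %archives;

foreach my $file (@file_list)
{
  if ($file =~ /^(.*\.7z)\.(\d+)$/)
  {
    push (@{$archives{$1}}, $file);
  }
  else
  {
    die "ERROR: '$file' does not look like a part of a split .7z archive\n";
  }
}

if (scalar (keys %archives) > 1)
{
  die "ERROR: please don't mix parts of different split archives in one run\n";
}

# sort the parts of the single remaining archive numerically (001, 002, 003, ...)

my ($base) = keys %archives;

my @sorted_parts = sort { ($a =~ /(\d+)$/)[0] <=> ($b =~ /(\d+)$/)[0] } @{$archives{$base}};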
Could you please test and report back?
Thank you
As always: for new features (during the testing phase) you need to use the perl version directly aka 7z2hashcat.pl (not the "release" version, i.e. not the exe file)
W:\MEGA\db>perl C:\Users\Lavanoid\bin\7z2hashcat.pl dbs.7z.*
$7z$1$19$0$$8$f25e0c9dbe5f45910000000000000000$2892453778$512$510$e598324b3892f3774dcb7d5cb581ea341e86b103667c3b458e5e4351198c6efac3f6acfdd75311417653854f28652b6a5630ac9e4edaa809ffd5df9d0d3323fc18c27f5491795c58a8eacd01436e4b42b4ba868414598f29c1fdb42ce55a240c58df02fd01b9eff3e5c7c61db4e2deb802e7df17989f98895607741686acad12f6dee59d1efd0e0aaa19ee0b124d7f1c722e55b97e1929be26eee703cf3a6777a57219e1111390c069a8d74a44babcbd80eb0e6c92e67b743c3bdf03870b5486b1d4a1d89b3e01f49ffa12a1c738625eae3a5c7bd3106ef38eb4dd9deb1304f770563c049a9f732ed63eb752c384fbcab914caa9220c5e20cd9096c6fca29a3a088c07eb6d441295082e842154b522c6e54c0dbd25ef2294e2dc5bc718c733def90e15a7f7ee64dc7001720847216fd3f06f63b3f34331739b69afe2deb73844a0f424e387604090be88cb2d1328e56283ff0c63d150bb718a416ac155fe560812ecdcbb289377a9198d4551577aa6c2e8925227964c2e0035dece6045f54859fa03ab768b34e2444064bb413177c85a18950dc2cae40bfa1039f54432314e563fb6ea78f54363f118b172a62621f44636b93378536792208fcfb7e527ee08ff9e7d5bbe0e12dd97bd3851895a8bb8c58f0da2217719521b14f9ef2cf8cf1fc8cc523e117be753407f4574a5fa9399c8bde7da231dd8e3cd20dd34ab00885d36$922$5d00100000
W:\MEGA\db>
Hashcat:
OpenCL Platform #1: Advanced Micro Devices, Inc.
================================================
* Device #1: Turks, skipped
* Device #2: Intel(R) Xeon(R) CPU 5150 @ 2.66GHz, 2047/8186 MB allocatable, 4MCU
Hash '$7z$1$19$0$$8$f25e0c9dbe5f45910000000000000000$2892453778$512$510$e598324b3892f3774dcb7d5cb581ea341e86b103667c3b458e5e4351198c6efac3f6acfdd75311417653854f28652b6a5630ac9e4edaa809ffd5df9d0d3323fc18c27f5491795c58a8eacd01436e4b42b4ba868414598f29c1fdb42ce55a240c58df02fd01b9eff3e5c7c61db4e2deb802e7df17989f98895607741686acad12f6dee59d1efd0e0aaa19ee0b124d7f1c722e55b97e1929be26eee703cf3a6777a57219e1111390c069a8d74a44babcbd80eb0e6c92e67b743c3bdf03870b5486b1d4a1d89b3e01f49ffa12a1c738625eae3a5c7bd3106ef38eb4dd9deb1304f770563c049a9f732ed63eb752c384fbcab914caa9220c5e20cd9096c6fca29a3a088c07eb6d441295082e842154b522c6e54c0dbd25ef2294e2dc5bc718c733def90e15a7f7ee64dc7001720847216fd3f06f63b3f34331739b69afe2deb73844a0f424e387604090be88cb2d1328e56283ff0c63d150bb718a416ac155fe560812ecdcbb289377a9198d4551577aa6c2e8925227964c2e0035dece6045f54859fa03ab768b34e2444064bb413177c85a18950dc2cae40bfa1039f54432314e563fb6ea78f54363f118b172a62621f44636b93378536792208fcfb7e527ee08ff9e7d5bbe0e12dd97bd3851895a8bb8c58f0da2217719521b14f9ef2cf8cf1fc8cc523e117be753407f4574a5fa9399c8bde7da231dd8e3cd20dd34ab00885d36$922$5d00100000': Line-length exception
No hashes loaded
Started: Sun Feb 05 15:48:45 2017
Stopped: Sun Feb 05 15:48:47 2017
Not enough storage is available to process this command.
The password for the archive in question is wetwilly2016
(best password), if that helps at all.
I just tested that hash in JtR (which has no line length restriction anymore) and it works fine!
$ echo >7z.in '$7z$1$19$0$$8$f25e0c9dbe5f45910000000000000000$2892453778$512$510$e598324b3892f3774dcb7d5cb581ea341e86b103667c3b458e5e4351198c6efac3f6acfdd75311417653854f28652b6a5630ac9e4edaa809ffd5df9d0d3323fc18c27f5491795c58a8eacd01436e4b42b4ba868414598f29c1fdb42ce55a240c58df02fd01b9eff3e5c7c61db4e2deb802e7df17989f98895607741686acad12f6dee59d1efd0e0aaa19ee0b124d7f1c722e55b97e1929be26eee703cf3a6777a57219e1111390c069a8d74a44babcbd80eb0e6c92e67b743c3bdf03870b5486b1d4a1d89b3e01f49ffa12a1c738625eae3a5c7bd3106ef38eb4dd9deb1304f770563c049a9f732ed63eb752c384fbcab914caa9220c5e20cd9096c6fca29a3a088c07eb6d441295082e842154b522c6e54c0dbd25ef2294e2dc5bc718c733def90e15a7f7ee64dc7001720847216fd3f06f63b3f34331739b69afe2deb73844a0f424e387604090be88cb2d1328e56283ff0c63d150bb718a416ac155fe560812ecdcbb289377a9198d4551577aa6c2e8925227964c2e0035dece6045f54859fa03ab768b34e2444064bb413177c85a18950dc2cae40bfa1039f54432314e563fb6ea78f54363f118b172a62621f44636b93378536792208fcfb7e527ee08ff9e7d5bbe0e12dd97bd3851895a8bb8c58f0da2217719521b14f9ef2cf8cf1fc8cc523e117be753407f4574a5fa9399c8bde7da231dd8e3cd20dd34ab00885d36$922$5d00100000'
$ ../run/john 7z.in -mask:wetwilly20?d?d
Using default input encoding: UTF-8
Loaded 1 password hash (7z, 7-Zip [SHA256 128/128 AVX 4x AES])
Will run 8 OpenMP threads
Press 'q' or Ctrl-C to abort, almost any other key for status
wetwilly2016 (?)
1g 0:00:00:01 DONE (2017-02-05 17:33) 0.7407g/s 71.11p/s 71.11c/s 71.11C/s wetwilly2095..wetwilly2087
Use the "--show" option to display all of the cracked passwords reliably
Session completed
Strange. Does my hash work in hashcat for you?
Hashcat still has a line length limit (a bit less than 1024 characters); that's the problem. For this format (and some others) it should be worked around. I bet you could bump some macro and recompile hashcat as a temporary workaround.
It seems that this new 7z2hashcat.pl feature also works for you, and therefore I will close this issue here.
Thanks