philsmd/7z2hashcat

Data Length issue

danie-dejager opened this issue · 2 comments

Is there any way for this tool to look at .7z files larger than 327528 bytes?

This is not something that depends on 7z2hashcat. Hashcat itself has an upper limit on the amount of data it can load, see https://github.com/hashcat/hashcat/blob/0a0522cf76908003e5b77d99953c8e1c97da5c57/include/interface.h#L1218 etc.

Actually, it's indeed (easily) possible to change all lines within hashcat's source code that use/check the upper limit of 7z data, because for 7-Zip (-m 11600) the check is done on the CPU via a hook function.
Of course, it normally makes no sense to set overly large limits and load/reserve that many bytes, because compressed 7z data is in general much smaller (I'm talking about a single highly compressed file).
Therefore, the upper limit already seems to be much higher than the usual/average data size used in -m 11600 (indeed, the limit was much lower in previous versions of hashcat; see the pull request https://github.com/hashcat/hashcat/pull/1252/files).
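Just to illustrate the kind of check being talked about, here is a minimal sketch in C. The constant and type names (`MAX_7ZIP_DATA_LEN`, `seven_zip_hook_salt_t`, `check_7zip_data_len`) are placeholders of mine, not the actual identifiers from the interface.h linked above:

```c
// Hypothetical sketch of an upper-limit check on the 7-Zip data length,
// similar in spirit to what hashcat does when parsing a -m 11600 hash line.
// All names and the exact layout here are assumptions for illustration only.

#include <stdbool.h>
#include <stddef.h>

#define MAX_7ZIP_DATA_LEN 327528  // the limit mentioned by the issue author

typedef struct
{
  size_t        data_len;                     // length of the encrypted data part
  unsigned char data_buf[MAX_7ZIP_DATA_LEN];  // buffer reserved up to the limit
} seven_zip_hook_salt_t;

static bool check_7zip_data_len (const size_t data_len)
{
  // Reject hashes whose encrypted data part exceeds the reserved buffer.
  // Raising MAX_7ZIP_DATA_LEN (and every place that uses it) lifts this limit,
  // at the cost of a larger buffer reserved per hash.
  return data_len <= MAX_7ZIP_DATA_LEN;
}
```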

I didn't get any response to this issue for over a week. I'm closing it now because it doesn't seem that we can/need to do anything here.

As mentioned, the limit is not something 7z2hashcat introduced; it is a (reasonably high) limit used by hashcat itself.
It's easily possible to increase the limit in both hashcat and 7z2hashcat, as you can see from the pull request I posted above.

Any modification you make is at your own risk... by increasing the limit you will be using much more RAM (there shouldn't be any other problems involved if you do it correctly).
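To give a rough idea of why the RAM usage grows, here is a back-of-the-envelope sketch. It assumes the hook data is kept once per work item processed in parallel; the limit values and the work-item count below are made up for illustration, not measured hashcat figures:

```c
// Rough estimate: if hook data for -m 11600 is reserved per work item,
// the memory needed scales with the data limit times the number of
// work items processed in parallel. All values here are illustrative.

#include <stdio.h>

int main (void)
{
  const size_t data_limit = 327528;            // current limit in bytes
  const size_t new_limit  = 4 * 1024 * 1024;   // e.g. raised to 4 MiB
  const size_t work_items = 1024;              // parallel candidates (device dependent)

  printf ("old limit: ~%zu MiB of hook data\n", (data_limit * work_items) / (1024 * 1024));
  printf ("new limit: ~%zu MiB of hook data\n", (new_limit  * work_items) / (1024 * 1024));

  return 0;
}
```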

Thx