idrassi/HashCheck

[Feature Request] Hash Verify -- opening multiple hash files

Opened this issue · 4 comments

One issue I encounter regularly is trying to verify multiple hash files at once, which puts a huge drag on the hard disk as it attempts to read many files in parallel. There are a few solutions I would like to propose, if any of them would be of interest to implement.

  • When a hash verify window is already open, reuse it by appending the additional hash file contents to it and continuing to process in sequential order.
  • Spawn multiple hash verify windows, as normal, but pause each one until the previous windows are done.
  • Add options for Joining (and Splitting) hash files, so I can join a few thousand together into one hash file and then verify it as normal. Joining and Splitting would have to take the relative\folder\paths\ from each of the joined or split hash files into consideration (a sketch of the joining step follows this list).
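To make the joining idea concrete, here is a minimal sketch of the path-rewriting step (mine, not HashCheck's code): it appends one checksum file's entries to a combined output, prefixing each entry's path with the source list's own directory, so the combined file still resolves everything relative to where it lives. It assumes the usual `HASH *path` / `HASH  path` line format, and the function name is made up.

```c
/* Hypothetical sketch, not HashCheck code: append the entries of one
 * checksum file (e.g. "discs\vol1.sha256") to a combined list, with
 * each entry's path prefixed by the source list's own directory
 * ("discs\"). Assumes "HASH *path" (binary) or "HASH  path" (text)
 * lines and backslash-separated relative paths. */
#include <stdio.h>
#include <string.h>

#define MAX_LINE 4096

static int AppendHashList(FILE *out, const char *srcList)
{
    char prefix[MAX_LINE], line[MAX_LINE];
    char *slash, *sep;
    FILE *in = fopen(srcList, "r");
    if (!in)
        return 0;

    /* The source list's own directory becomes the path prefix. */
    strncpy(prefix, srcList, sizeof(prefix) - 1);
    prefix[sizeof(prefix) - 1] = '\0';
    slash = strrchr(prefix, '\\');
    if (slash)
        slash[1] = '\0';   /* keep "discs\" */
    else
        prefix[0] = '\0';  /* list already sits next to the output */

    while (fgets(line, sizeof(line), in)) {
        /* The entry path starts after " *" (binary) or "  " (text). */
        sep = strstr(line, " *");
        if (!sep)
            sep = strstr(line, "  ");
        if (!sep)
            continue;      /* skip comments and blank lines */
        *sep = '\0';
        /* Re-emit: hash, separator, directory prefix, original path. */
        fprintf(out, "%s %c%s%s", line, sep[1], prefix, sep + 2);
    }
    fclose(in);
    return 1;
}
```

Splitting would be the inverse: bucket the combined entries by their first path component and strip that prefix when writing each per-folder list.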

Thoughts?

I'm in favour of option #2.

ZPNRG commented

I would probably vote for option #2, where the hash checksum files are queued as they are started.

With the Gurnec HashCheck project, I brought up the same issue back on 05-17-2017. See gurnec#29.

It should be noted that the fundamental issue applies to both creating and checking multiple hash checksum files at the same time, not just checking them.

Below is what I said back in 2017:

"Hi.

I made a comment in the thread for Issue #13 (Queue multiple file check/creation enhancement), but the bulk of that issue seemed to be about having individual hash checksum files created for each file, rather than queuing in the sense I am thinking about.

I would like to have just the simple queuing aspect. Many times, I want to check a number of hash checksum files (.sha1 or .sha256) that are large or that cover large files, like Windows ISOs. If I just double-click them one after another and let them all run, the HD (spinning disks) is thrashed, so I have to run them manually, one by one, after each finishes. It would be nice if HashCheck queued the operations: after a currently running hash task finishes (creating or checking a hash checksum file), it would process the next one I had started, and keep creating or checking one by one until all of the operations I had started have completed.

To be clear, this would ideally work both for creating multiple hash checksum files one after another and for double-clicking existing hash checksum files one after another to be checked. Either way, I would like to start the process for all of the hash checksum files I want created or checked, and have HashCheck process them serially, one by one, rather than all running (or trying to run) in parallel at the same time. As I said, even fast spinning drives are hammered when the hash checksums being created or checked are for large files, or for lots of decent-sized files, and two or more of these operations are processing at the same time. Being able to start the processes (creating or checking) and have HashCheck queue them, doing only one at a time, would be nice.

Let me know if there are any questions.

Thank you for the work on HashCheck.

-ZPNRG"

This is mainly why I proposed three competing ideas: they're all compatible with each other and could be implemented simultaneously. The ability to Join multiple checksum files together would itself solve the issue of verifying multiple files, by letting the user combine them before verifying the combined file. Meanwhile, allowing the user to reuse or spawn new verification windows solves the problem of window spam, where checking 1000 hash files would spawn 1000 individual windows; having them smartly pause and wait instead of checking in parallel would prevent 1000 files from being read and verified at the same time, wearing holes into your hard disk. (One possible mechanism for that pause-and-wait behaviour is sketched below.)
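As an illustration only (an assumption on my part, not anything HashCheck does today), the pause-and-wait behaviour could be built on a named Win32 mutex shared by every hash window: each worker waits its turn before touching the disk, so any number of windows can be opened but only one reads at a time. The mutex name and function names below are made up.

```c
/* Hypothetical sketch: serialize hash workers across all HashCheck
 * windows with a named mutex, so opening 1000 verify windows still
 * reads only one file set at a time. */
#include <windows.h>

static HANDLE g_hQueueMutex;

/* The worker thread calls this before reading any files; it blocks
 * until every previously started window has released the mutex. */
BOOL WaitForHashTurn(void)
{
    /* "Local\\" scopes the mutex to the current login session. */
    g_hQueueMutex = CreateMutexW(NULL, FALSE,
                                 L"Local\\HashCheck.SerialQueue");
    if (g_hQueueMutex == NULL)
        return FALSE;
    return WaitForSingleObject(g_hQueueMutex, INFINITE) != WAIT_FAILED;
}

/* The worker calls this when its pass completes or its window closes,
 * handing the disk to the next queued window. */
void ReleaseHashTurn(void)
{
    if (g_hQueueMutex) {
        ReleaseMutex(g_hQueueMutex);
        CloseHandle(g_hQueueMutex);
        g_hQueueMutex = NULL;
    }
}
```

Creation and verification workers could share the same mutex, which would also cover the point above that the problem applies to creating checksum files, not just checking them.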

ZPNRG commented

@idrassi, have you thought about this feature request? Creating and/or verifying multiple digests at the same time, in some type of queued manner, is still one of my biggest personal needs for HashCheck, given how many digests I create and verify at times.