brannondorsey/distributed-password-cracking

[Suggestion] support for larger wordlists/hashlists.

Closed this issue · 14 comments

Node has a buffer limit of one gigabyte. Many of the wordlists exceed this size, and they cannot be loaded into the program.
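For reference, Node exposes that ceiling at runtime; a quick sketch to check it on your install (the exact value differs across Node versions and platforms):

```js
const buffer = require('buffer');

// Largest Buffer Node will allocate on this build. fs.readFileSync() on a
// wordlist larger than this throws instead of returning the file contents.
console.log(`max buffer size: ${buffer.constants.MAX_LENGTH} bytes`);
```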

I'd be happy to donate a bit of Monero or BCH for this functionality. Drop your address in this thread.

Right you are: in the current implementation you won't be able to load large wordlists. This was really a proof of concept for a talk I gave last year, and as such it isn't all that well fleshed out. If your goal is to actually crack password hashes, using hashcat on dedicated GPU hardware would perform far better.

That said, are you still interested in having this support added? If so I will accept PRs for sure 👍.

Generally, this particular issue is a relatively easy fix (reading the file line by line); however, this code is still relevant and can still be used in a multitude of ways. I hope you keep this repository active. For this particular issue, I can fix it myself. Thanks for the fast reply.

Yeah, line-by-line processing vs. loading the whole file into RAM is certainly the more elegant solution. If you do end up fixing it I'd love to merge a PR. I'd like to think I could get to it soon, but I'm mid-move and the boxes in my life are taking a bit of priority, haha. Thanks for the kind words about the repo; I too hope that it can stay active and useful for others in the future.
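For what it's worth, here's a minimal sketch of the line-by-line approach using Node's built-in readline module; `streamWordlist` and `handleWord` are illustrative names, not part of this repo:

```js
const fs = require('fs');
const readline = require('readline');

// Stream the wordlist instead of reading it into one giant Buffer, so files
// larger than Node's buffer limit can still be processed.
async function streamWordlist(path, handleWord) {
  const rl = readline.createInterface({
    input: fs.createReadStream(path),
    crlfDelay: Infinity // treat \r\n as a single line break
  });
  for await (const word of rl) {
    handleWord(word);
  }
}

// Example usage: count candidate passwords without holding them all in memory.
let count = 0;
streamWordlist('wordlist.txt', () => count++)
  .then(() => console.log(`read ${count} candidate passwords`));
```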

Also, have you ever considered adding support for WPA/WPA2 hashes to this kind of server? That would make distributed brute-force attacks on wireless networks feasible.

It would be embedded on websites, the way CoinHive is, and webmasters would be paid out of the bounties users place on a handshake.

I actually have a bit of experience with WPA2 hash cracking. I wrote a tutorial a while ago on the subject, and I've got to say, without GPU acceleration, CPU cracking just doesn't compare, even with potentially hundreds of worker nodes. One dream of mine, at least a few months ago, was to implement MD5 hashing with WebAssembly. While that would significantly improve cracking speed, I think a WebGL-accelerated version might actually make your idea feasible. Unfortunately, I don't think my SIMD skills are elite enough to implement WPA2 or MD5 hashing via WebGL.

Your tutorial actually helped me learn what handshakes are... yada yada. I use wifite irl most of the time because it's just easier.

Using a GPU in the browser is probably not very efficient, because it would need to be done with floats in shaders. However, this would still massively outperform dedicated GPU crackers because of the sheer number of users. WebAssembly is fast enough, and remember that not everyone has a dGPU.

I heard compiling to WebAssembly is as easy as feeding the code through some program? Am I wrong?

I should just compare the numbers on my Quadro P6000 vs. my CPU.

(redacted contact information due to possible spam)

Compiling to WebAssembly is indeed as easy as switching the compiler. The trouble I was having during the ~6 hours I spent working on it was figuring out how to properly share arbitrary-length memory to pass strings between JS and WebAssembly. Given another weekend, or a dev with more WebAssembly experience than me, it would no doubt be simple to solve.
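A rough sketch of one way to do that sharing, assuming a compiled module that exports its linear memory plus `alloc` and `md5_hash` functions (those export names and the `md5.wasm` file are hypothetical, not from this repo):

```js
const fs = require('fs');

async function hashWithWasm(word) {
  // Instantiate the (hypothetical) compiled module.
  const bytes = fs.readFileSync('md5.wasm');
  const { instance } = await WebAssembly.instantiate(bytes);
  const { memory, alloc, md5_hash } = instance.exports;

  // Encode the JS string as UTF-8 and copy it into the module's linear memory.
  const encoded = new TextEncoder().encode(word);
  const ptr = alloc(encoded.length); // ask the module for a buffer
  new Uint8Array(memory.buffer, ptr, encoded.length).set(encoded);

  // The module reads `encoded.length` bytes starting at `ptr`.
  return md5_hash(ptr, encoded.length);
}
```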

I think I may be a bit more pessimistic about the distributed browser CPU vs dedicated GPU hashrates than you. That said, we should do a benchmark and answer the question for real!

Alright, here are the numbers.

Hashcat for WPA2 on a GeForce GTX 1080 Ti (high-end consumer GPU):

Speed.Dev.#1.....: 576.8 kH/s (49.53ms)

Accidentally misclicked, sorry.

Hashcat on an i7-7700K (high-end desktop CPU):

Speed.Dev.#2.....: 16406 H/s (61.76ms)

The GPU is around 35.2 times faster (576,800 H/s ÷ 16,406 H/s ≈ 35.16).

Wow, that's actually an order of magnitude closer than I expected. I guess the next question is what's the wasm benchmark for a WPA2 crack.

Funny enough, my home workstation has the same CPU and a GTX 1080 (not Ti).

It already looks reasonably fast. Also, remember that Google mentioned "near-native speeds" for WebAssembly at the I/O conference.