tmthrgd/mpw-js

RFE: integrate a domain extractor to allow consistent results despite sloppy copy-paste, like SuperGenPass and PwdHash

Closed this issue · 2 comments


https://pwdhash.github.io/website/ flexibly parses a wide variety of inputs thanks to...
https://github.com/collinjackson/pwdhash-website/blob/gh-pages/domain-extractor.js

which was adapted from https://chriszarate.github.io/supergenpass/mobile
with code split between the old 2006 ccTLD list
https://github.com/chriszarate/supergenpass-lib/tree/master/src/lib/tld-list.js
and the filter script that uses it
https://github.com/chriszarate/supergenpass-lib/blob/master/src/lib/hostname.js
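for a rough idea of the general approach those two files take (this is an illustrative sketch, not the actual supergenpass code; the real tld-list.js is far larger and hostname.js handles more edge cases like IP addresses):

```js
// illustrative sketch of supergenpass-style extraction: crop to the
// hostname, then keep the registered domain using a ccTLD suffix list
var CCTLD_SUFFIXES = ['co.uk', 'com.au', 'co.jp']; // stand-in for the 2006 list

function extractDomain(url) {
  // drop the scheme, path/query/fragment, credentials, and port
  var hostname = url.replace(/^[a-z+.-]+:\/\//i, '')
                    .replace(/[\/?#].*$/, '')
                    .replace(/^[^@]*@/, '')
                    .replace(/:\d+$/, '')
                    .toLowerCase();
  var parts = hostname.split('.');
  // keep three labels for listed ccTLD suffixes (e.g. example.co.uk),
  // otherwise the last two (e.g. www.example.com -> example.com)
  var keep = CCTLD_SUFFIXES.indexOf(parts.slice(-2).join('.')) !== -1 ? 3 : 2;
  return parts.slice(-keep).join('.');
}

// extractDomain('https://www.example.com/login?next=/') -> 'example.com'
// extractDomain('http://accounts.example.co.uk/') -> 'example.co.uk'
```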

this would help with consistency, because changing the "site" entry from starting with http to https, or including or dropping the www portion, should not result in a different password.

thank you for your contributions; hope that makes it easy for you, cheers.

(edit for reference later: looking at ready-made code libraries and updated lists from https://publicsuffix.org/learn, maybe it would be best to just simplify: automatically crop to the content after :// and before the next /, if any exists, and remind the user to chop any leading subdomains, e.g. www.example.com to example.com, as the suggestion)
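a minimal sketch of that simpler crop rule (just a guess at how it could look, not anything in mpw-js; the cropToHost name is my own placeholder):

```js
// minimal crop: keep only what sits between "://" (if present) and the
// next "/" (if any); trimming leading subdomains is left to the user,
// per the reminder suggested above
function cropToHost(input) {
  var schemeEnd = input.indexOf('://');
  var afterScheme = schemeEnd === -1 ? input : input.slice(schemeEnd + 3);
  var slash = afterScheme.indexOf('/');
  return slash === -1 ? afterScheme : afterScheme.slice(0, slash);
}

// cropToHost('https://www.example.com/account/login') -> 'www.example.com'
// the page would then remind the user: www.example.com -> example.com
```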

Doing this would break compatibility with the official implementation and break existing passwords. This is also a lot of complexity for something I no longer actively work on.

I’m going to have to decline this, but feel free to fork or otherwise build your own page using this implementation.

hi, thanks for the quick reply, i've just got a couple of usability and compatibility notes.

domain extraction would be optimal:

from my experience of trying to use mpw, or getting someone else to use it (specifically non-tech-savvy seniors, and folks that just write down passwords based on birthdays, relatives, and pets)...

it's hard to explain the reduced security of manually entering the domain they think they are logging into (maybe the user followed a lookalike spoof site), plus the frustration of typos if they ignore that security advice anyway. copy-paste would be safest, however...

explaining copy-paste, plus trimming the URL down in a consistent way, is also hard: some people have skill and/or mobility issues doing that precisely, some URLs are very long, a dangling slash usually doesn't matter elsewhere, and the URL could be different depending on what sub-page they were at when they hit login.

so they don't understand why the password is different: they are still saying what site they want to log into, they forget that the tool is super picky, they get frustrated, and they want to just go back to what they were doing (writing down family-pet-birthday passwords in notes all over the place).

as for compatibility, good point, but it seems easy enough to work around:

i would change the "Site:" label to "Full URL:" and add a display box below that, with a checkbox checked by default labeled "Use core domain instead:", which would display the extracted domain that will actually be used (for full transparency). that way switching modes is a simple and fast uncheck/recheck, just like the other post-key-generation dropdowns and templates.
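something along these lines, where the element ids ("fullUrl", "useCoreDomain", "coreDomainDisplay") and the extractDomain function are my own placeholders, not existing mpw-js names:

```js
// hypothetical wiring for the proposed checkbox; all ids are placeholders
var urlInput = document.getElementById('fullUrl');
var checkbox = document.getElementById('useCoreDomain');   // checked by default
var display = document.getElementById('coreDomainDisplay');

function siteNameForKeyGeneration() {
  // always show the extracted domain for full transparency,
  // but only feed it to key generation while the box is checked
  var core = extractDomain(urlInput.value);
  display.textContent = core;
  return checkbox.checked ? core : urlInput.value;
}

urlInput.addEventListener('input', siteNameForKeyGeneration);
checkbox.addEventListener('change', siteNameForKeyGeneration);
```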

this just seemed like it would be a quick and simple usability improvement to integrate at a glance, but you probably work with this sort of thing way more than i do, so if you say it isn't trivial, i'll take your word for it.

i suppose i shall poke at it later on and see if i can make it go, and since this project is tiny bits of a few webpages, it should be a lot more pleasant than my last/first try*.

anyway thanks for creating a nice and tidy tool, and helping improve user security, cheers.

  • my last/first try was the [insert expletive] experience of trying to learn git while following official directions and tips from across multiple sites, in trying to build an older cyanogenmod for lineageos-ification later on. (i was forced to unresumably download over 100G of mostly 1G+ files, which got interrupted or stalled a few times; smh, as if nobody ever invented resumable download managers. then, after all that wasted carbon footprint, it set aside only the 8G of the old release that was asked for to begin with. it at least needs a fuse plugin by default to pretend that 100G is local, or something.)