bruderstein/npp-plugins-x64

question about validation

Closed this issue · 5 comments

I was curious why it was chosen to validate the individual files after extraction instead of validating the hash of the downloaded file. Is there a security reason for this, or is it an arbitrary decision made for other reasons?

Thanks!

chcg commented

@mrexodia This repo was planned as an intermediate solution for the x64 version; see the readme section:

Why are the lists managed differently?

We're working on a new admin system (come and help) that supports editing x64 plugins (and a lot of other improvements), but until that is ready, we have this git-managed XML list and a JSON file with the validations for the x64 plugin list.

The 32-bit admin system checks the MD5 sum of the downloaded zip via a server request.

Checking the hashes of the already-installed files also makes it possible to validate manually installed versions.

Alright, so it was not a specific security consideration? I'm working on my own plugin manager system, so I'm curious about the design choices of others...

Apologies, I thought I'd already replied to this.

The 32-bit admin system checks the MD5 sum of the downloaded zip via a server request.

Actually, that's not true. It checks the MD5 sum of each copied file individually, not of the downloaded file. The MD5 of the zip is only stored for future use.

And yes, it was a specific security consideration. There is a slight chance of a non-admin user exploiting a timing attack to install code that may at some point be run by an admin user.

The files are downloaded to a user-writable area, then extracted, then copied to the n++ directory (possibly by a different process in the case of an upgrade or non-admin install). It would be possible for something malicious to replace the zip after the check on it has been made; the files would then be copied without validation. As n++ is sometimes run as admin, protecting which DLLs get installed is pretty important.
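To make the window concrete, here is a minimal Python sketch of validating each file at copy time rather than trusting an earlier check on the archive. The function name `copy_validated` and the `expected_md5` mapping are hypothetical illustrations, not this project's actual code; the point is only that a file swapped in the user-writable staging area after extraction gets rejected instead of installed.

```python
import hashlib
import shutil
from pathlib import Path

def copy_validated(extracted_dir: Path, install_dir: Path,
                   expected_md5: dict[str, str]) -> None:
    """Copy each extracted file into the install directory only if its
    MD5 matches the value recorded in the plugin list.

    Hashing at copy time closes the gap between "zip was checked" and
    "files were copied": anything replaced in the staging area in the
    meantime fails validation here.
    """
    for rel_name, want in expected_md5.items():
        src = extracted_dir / rel_name
        got = hashlib.md5(src.read_bytes()).hexdigest()
        if got != want:
            raise ValueError(f"hash mismatch for {rel_name}: "
                             f"expected {want}, got {got}")
        dest = install_dir / rel_name
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)
```

If the copy runs in a separate elevated process (as in the upgrade case described above), that process would perform this check itself, so the elevated step never trusts state left behind by the non-admin one.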

Interesting! I will take that into consideration...

I guess the same could be accomplished by validating the hash of the ZIP after downloading it and then computing (in memory) the hashes of the files before copying them, right? Or the problem could be avoided completely by never extracting the files to a temporary directory in the first place; good to know about it regardless!
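The second idea above can be sketched as follows: hash the exact in-memory bytes you are about to extract from, then write members straight into the install directory, so there is no on-disk staging copy to swap out between check and copy. This is a hypothetical illustration (the function name `install_from_zip` and its parameters are invented for the example), assuming the archive is small enough to hold in memory.

```python
import hashlib
import io
import zipfile
from pathlib import Path

def install_from_zip(zip_bytes: bytes, expected_zip_md5: str,
                     install_dir: Path) -> None:
    # Check the hash of the exact bytes we will extract from, so
    # nothing on disk can change between validation and extraction.
    if hashlib.md5(zip_bytes).hexdigest() != expected_zip_md5:
        raise ValueError("zip hash mismatch")
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for info in zf.infolist():
            if info.is_dir():
                continue
            # Guard against zip-slip: reject members that would
            # escape the install directory via ".." components.
            if ".." in Path(info.filename).parts:
                raise ValueError(f"unsafe path in archive: {info.filename}")
            dest = install_dir / info.filename
            dest.parent.mkdir(parents=True, exist_ok=True)
            dest.write_bytes(zf.read(info))
```

The trade-off is that this couples download and install into one step and one process, which is exactly the separation the per-file approach was chosen to preserve.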

Yeah, there are definitely several solutions to it. This one was chosen as the easiest for this project, since it separates the download and copy steps neatly, but if you've got a more well-defined plugin structure, it could be much simpler.