- Really important stuff
- General best practices
- Application design
- Automated security testing
- Resources
- don’t put live data on any local device unless it has been signed off for such usage
- only access live / sensitive data under strict guidance (each service should have rules around its usage)
- understand the policies around where you should store your source code. NEVER put information such as passwords, API Keys or IP addresses in code repositories, even private ones.
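One common way to keep secrets such as passwords and API keys out of a repository is to read them from the environment at runtime and fail fast if they are missing. A minimal TypeScript sketch, assuming configuration is supplied via environment variables (the variable names here are purely illustrative):

```typescript
// config.ts - load secrets from the environment instead of hard-coding them.
// DB_PASSWORD and API_KEY are illustrative names, not part of any standard.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    // Fail fast at startup rather than at first use.
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

export const config = {
  dbPassword: requireEnv("DB_PASSWORD"),
  apiKey: requireEnv("API_KEY"),
};
```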
- security should be part of the agile delivery process and be applied per story
- use the OWASP Security Testing Framework for a checklist
- enforce protected branches
- enforce reviews via pull requests
- require signed commits
- have a well defined, understood and enforced peer review process
- ensure you have fast, repeatable deploys with automated testing
- monitor for security advisories and patches and update when necessary
- favour simple systems; they are easier to secure
- adhere to the principles of clean code - this makes applications easier to understand
- consider design by contract: preconditions state what must be true of the inputs, postconditions what must be true of the result; write tests against both (see the sketch below)
- reduce the attack surface by removing unnecessary code, libraries and endpoints; remove demo-enabling code, default passwords etc.
- minimise integrations, understand and protect against compromised 3rd parties (e.g. a script sourced from an untrusted 3rd party could be malicious)
- favour small components with a clear, single responsibility
- favour using established libraries and frameworks over rolling your own. However, import only trustworthy software and always verify its integrity
- avoid the use of shared variables / globals
- prefer immutability
- avoid nulls by using Option e.g. Scala Option and Java Optional
- be careful using <script src> unless you have complete control over the script that is loaded
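To illustrate the design-by-contract point above, a minimal TypeScript sketch in which preconditions and postconditions are written as explicit checks that unit tests can target. The function and its constraints are hypothetical:

```typescript
// A hypothetical withdrawal function with explicit pre- and postconditions.
function precondition(check: boolean, message: string): void {
  if (!check) throw new Error(`Precondition failed: ${message}`);
}
function postcondition(check: boolean, message: string): void {
  if (!check) throw new Error(`Postcondition failed: ${message}`);
}

function withdraw(balance: number, amount: number): number {
  // Preconditions: constraints on the inputs.
  precondition(Number.isFinite(amount) && amount > 0, "amount must be positive");
  precondition(amount <= balance, "amount must not exceed the balance");

  const newBalance = balance - amount;

  // Postcondition: what must be true of the result.
  postcondition(newBalance >= 0, "balance must never go negative");
  return newBalance;
}
```

Unit tests can then assert both that valid inputs succeed and that each violated condition is rejected.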
- if submitting a form modifies data or state, use POST not GET
- avoid SQL injection / javascript injection by ensuring all queries are parameterised (and / or use e.g. an ORM, Active Record)
- protect against cross site scripting (XSS) by escaping / sanitising untrusted data using a standard security encoding library. Also consider using [Content Security Policy](https://w3c.github.io/webappsec-csp/2/) headers to whitelist assets a page can load
- protect against cross site request forgery (CSRF) which target state-changing requests. Check standard headers to verify the request is same origin AND check a CSRF token
- ensure that resources you load are as expected by using subresource integrity
- use HTTP Strict Transport Security (HSTS) with e.g. a "Strict-Transport-Security: max-age=8640000; includeSubDomains" HTTP header to protect against SSL stripping attacks. Consider entering your domain into the HSTS preload list (see the header sketch below)
- protect against clickjacking by using the "X-Frame-Options: DENY" HTTP Header
- Don’t use JSONP to send sensitive data. Since JSONP is valid JavaScript, it’s not protected by the same-origin policy
- do not eval any non-verified String (e.g. don't eval a String expected to contain JSON - use JSON.parse instead)
- do not store session ids in LocalStorage. Think carefully before putting any sensitive data in local storage, even when encrypted
- prefer sessionStorage to localStorage if persistence longer than the browser session is not required
- validate URLs passed to XMLHttpRequest.open (browsers allow these to be cross-domain)
- only use WebSockets over TLS (wss://) and be aware that communication can be spoofed / hijacked through XSS
- use different subdomains for public facing web pages, static assets and administration
- use secure, signed, httpOnly cookies where possible (this is mandatory if the cookie contains account information)
- encrypt any sensitive data with e.g. cookie-encryption (node)
- avoid putting sensitive information in 3rd party cookies
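Several of the points above (HSTS, clickjacking protection, Content Security Policy, secure cookies) come down to setting the right HTTP headers on every response. A minimal sketch assuming an Express (Node.js) application; the header values and cookie name are examples to adapt, not a complete policy:

```typescript
import express from "express";

const app = express();

// Set baseline security headers on every response.
app.use((req, res, next) => {
  // HSTS: force HTTPS for 100 days, including subdomains.
  res.setHeader("Strict-Transport-Security", "max-age=8640000; includeSubDomains");
  // Clickjacking protection.
  res.setHeader("X-Frame-Options", "DENY");
  // Example CSP: only allow assets from our own origin.
  res.setHeader("Content-Security-Policy", "default-src 'self'");
  next();
});

app.get("/login", (req, res) => {
  // Example session cookie: not readable from JavaScript, HTTPS only,
  // and not sent on cross-site requests.
  res.cookie("session", "opaque-session-id", {
    httpOnly: true,
    secure: true,
    sameSite: "strict",
    signed: false, // set to true if a cookie-parser secret is configured
  });
  res.send("ok");
});
```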
- favour Test Driven Development to encourage good test coverage and application design
- test in an environment configured like live (infrastructure, replication, TLS etc.) with similar data profiles (but not with live data) as early as possible
- any testing against live data in non prod environments (even if scrubbed / anonymised) needs appropriate signoff
- use Continuous Integration (CI) and ensure good automated unit, integration, acceptance, smoke, performance, security tests
- undertake an IT Health Check (ITHC, Penetration Test, Pen Test) for new services or significant changes
- consider using a chaos testing tool such as Chaos Monkey (part of the Simian Army) to test random instance failures
- always use HTTPS (use TLS 1.2 or above)
- web applications must use a properly configured Web Application Firewall (WAF) e.g. NAXSI
- remove unnecessary functionality and code
- if exceptions occur, fail securely
- monitor metrics e.g. Sysdig
- create an audit trail for successful and unsuccessful login attempts, unsuccessful authorisation attempts, logouts etc.
- disable unused HTTP methods
- restrict all applications and services to running with the minimum set of privileges / permissions
- isolate dev environments from the production network, and allow access to dev from authorised users only (dev environments can be a common attack vector)
- perform integrity checks to ensure there has been no tampering of e.g. hidden fields or transaction ids. Use a checksum, HMAC, encryption or digital signature depending on the risk (see the sketch below)
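For the integrity-check point above, one approach is to attach an HMAC to values such as hidden fields or transaction ids when they are sent out, and verify it when they come back. A minimal sketch using Node's built-in crypto module; the key handling is deliberately simplified and the secret (and its variable name) is illustrative, it would normally come from a key store:

```typescript
import { createHmac, timingSafeEqual } from "crypto";

const secret = process.env.HMAC_SECRET ?? "replace-me"; // illustrative only

// Produce a tamper-evident token for a value we hand to the client.
function sign(value: string): string {
  const mac = createHmac("sha256", secret).update(value).digest("hex");
  return `${value}.${mac}`;
}

// Verify the value has not been modified before trusting it again.
function verify(token: string): string | null {
  const index = token.lastIndexOf(".");
  if (index < 0) return null;
  const value = token.slice(0, index);
  const mac = Buffer.from(token.slice(index + 1), "hex");
  const expected = createHmac("sha256", secret).update(value).digest();
  if (mac.length !== expected.length || !timingSafeEqual(mac, expected)) {
    return null; // tampered or malformed
  }
  return value;
}
```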
- perform server-side validation of all inputs, including headers, cookies and redirects
- prefer to accept known good input rather than reject known bad input
- sanitise input if necessary (e.g. strip out whitespace or hyphens from phone numbers)
- ensure option selects, checkboxes and radio buttons contain only allowable (given) values
- validate data type / length / range / allowed chars
- always re-validate previously entered form data in case it has been surreptitiously altered; hidden fields should be validated too
- all validation failures should result in input rejection with an appropriate message to the user
- have automated tests to check a reasonable range of validation failures are as expected
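A small sketch of the "accept known good" approach to server-side validation: sanitise trivially (strip whitespace and hyphens), then check the result against an explicit allow-list pattern and reject anything else with a message. The phone-number rules here are illustrative, not a recommended pattern:

```typescript
// Validate a phone number server-side by accepting only known-good input.
type ValidationResult =
  | { ok: true; value: string }
  | { ok: false; message: string };

function validatePhoneNumber(raw: string): ValidationResult {
  // Sanitise: strip whitespace and hyphens only.
  const cleaned = raw.replace(/[\s-]/g, "");

  // Accept known good: digits only, with an optional leading +,
  // and a bounded length. (Illustrative rules.)
  if (!/^\+?\d{7,15}$/.test(cleaned)) {
    return { ok: false, message: "Enter a telephone number in a valid format" };
  }
  return { ok: true, value: cleaned };
}
```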
- do not log sensitive information (e.g. account information or session identifiers) unless necessary
- ensure no debugging / stack traces are displayed to the end user in production
- use generic error messages and custom error pages in production
- prevent tampering of logs by ensuring they are read only and do not allow deletions
- ensure a mechanism exists to conduct log analysis
- restrict access to logs
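For the error-handling and logging points above, the usual shape is a single handler that records enough detail server-side (without sensitive data) and returns only a generic message to the user. A minimal sketch assuming Express:

```typescript
import express, { Request, Response, NextFunction } from "express";

const app = express();

// ... routes go here ...

// Catch-all error handler: log detail internally, show a generic page to users.
app.use((err: Error, req: Request, res: Response, next: NextFunction) => {
  // Log enough to investigate, but never return the stack trace to the client,
  // and never log sensitive values such as session identifiers or credentials.
  console.error("Unhandled error on %s %s: %s", req.method, req.path, err.message);
  res.status(500).send("Sorry, something went wrong. Please try again later.");
});
```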
- do not store passwords, connection strings etc. in plain text
- understand the data that will be used, its retention and removal policy
- understand who will be accessing the service / data, with what devices via what networks / 3rd party services
- only store and use the minimum amount of data required to fulfil the user need; allow users to view only the data they need
- don't (provide interfaces that) allow arbitrary querying of data
- don't allow download of bulk data-sets or too much data to be visible on a page
- rate limit access to large data-sets and record access attempts (also limit the number of transactions a user or device can perform in a given time period)
- enforce use of database schemas, even for NoSQL databases (e.g. by using Mongoose for MongoDB)
- avoid caching data within services unless necessary
- protect caches / temp files containing sensitive data from unauthorised usage and purge them ASAP
- use symmetric cryptography (shared secret), e.g. AES, to encrypt / decrypt your own data if it is sensitive (see the sketch below)
- ensure the shared key is held securely and separately from the data, preferably in a separate key vault (e.g. Vault) that your service can access when it needs a key
- use a key management process e.g. leveraging Amazon KMS
- encrypt backups (you will need to know which keys are required to handle which version)
- encode fields that have especially sensitive values
- disable autocomplete on forms for sensitive fields
- do not transmit any sensitive information within the URL
- disable client-side caching for pages containing sensitive data by using appropriate HTTP cache headers i.e. "Cache-Control: no-store", "Expires: 0" and "Pragma: no-cache"
- anonymise data (ensuring re-identification cannot take place) sent to reporting tools or being used as test data
- consider encrypting partially completed forms under a key held by the user if you do not need to use this data
- applications should connect to databases with different credentials for each trust distinction e.g. user, read-only, admin, guest
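To illustrate the symmetric-encryption point above, a minimal sketch using AES-256-GCM from Node's built-in crypto module. The key is read from the environment here purely for brevity; as noted above it should live in a key vault / KMS, and the variable name is illustrative:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "crypto";

// 32-byte key for AES-256-GCM; in practice fetch it from a key vault / KMS.
const key = Buffer.from(process.env.DATA_KEY_HEX ?? "", "hex"); // illustrative

export function encrypt(plaintext: string): string {
  const iv = randomBytes(12); // unique IV per message
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag();
  // Store the IV and auth tag alongside the ciphertext.
  return [iv, tag, ciphertext].map((b) => b.toString("base64")).join(".");
}

export function decrypt(token: string): string {
  const [iv, tag, ciphertext] = token.split(".").map((p) => Buffer.from(p, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```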
- refer to the CESG password guidance when deciding your password policy for users
- if authentication is required, authenticate and authorise on every request
- use centralised authentication controls, favour SSO
- if authentication services go down they should not give users unauthorised access
- authentication failure should give no information as to which part failed - all error responses should be generic and the same
- separate authentication and authorisation from the resource that is being requested
- admin / account management functions should be particularly secure
- any credential store should only use cryptographically strong one-way salted hashes that resist brute-force attacks (use bcrypt, scrypt or PBKDF2; see the hashing sketch below). Salt length should be at least 128 bits and the salt can be stored in the db (this prevents rainbow table attacks)
- enforce the changing of temporary or default passwords when they are used
- password reset links should be time-limited and one-time use only
- prevent users from reusing a password
- notify users when a password reset occurs
- indicate the last attempted login to a user
- think carefully about the implications of using "Remember Me"
- re-authenticate users before performing any critical operation such as uploading files
- more secure: use multi-factor authentication (MFA / 2FA) to obtain one-time passwords (OTP). Favour Google Authenticator, Authy etc. over SMS (which has weak encryption standards that allow for man-in-the-middle attacks)
- consider introducing captcha after a number of login failures
- lock the account after a number of login failures for a given period of time
- enable all users to be forcibly logged out (e.g. by invalidating all session cookies)
- be prepared to change the hashing mechanism; ensure you can do it on the fly when users next log in
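For the credential-store point above, a minimal sketch of salted one-way hashing, assuming the npm `bcrypt` package (bcrypt generates and embeds a per-password salt itself; the cost factor shown is an example to tune):

```typescript
import bcrypt from "bcrypt";

const COST_FACTOR = 12; // example work factor; tune for your hardware

// Store only the hash; bcrypt embeds a random per-password salt in it.
export async function hashPassword(password: string): Promise<string> {
  return bcrypt.hash(password, COST_FACTOR);
}

// Compare a login attempt against the stored hash.
export async function verifyPassword(password: string, storedHash: string): Promise<boolean> {
  return bcrypt.compare(password, storedHash);
}
```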
- session IDs should be unique, non guessable and non-sequential and suitably long
- use httpOnly, secure, session cookies to store session ids client-side
- use httpOnly, secure, signed, encrypted session cookies if you want to store session data client-side
- set the path and domain for cookies to a suitably restricted value
- session inactivity timeout should be as short as possible
- logout should always be available
- expire session ids after a defined period (to reduce impact of session hijacking)
- session invalidation (due to e.g. timeout, logout, expiration or unauthorised reuse) should immediately delete the session id + session data on the server and client (include a Set-Cookie directive in the response with an expiration time in the past)
- always create a new session (therefore new session id in a cookie) when re-authenticating, to avoid session fixation attacks; never store the session id in the URL
- sensitive session data should be stored on the server
- clear out expired server-side session data frequently
- do not allow concurrent logins for the same user id
- session identifiers should only be in the HTTP cookie header (not in a GET request or anywhere else)
- for sensitive data require per request rather than per session tokens
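Several of the session points above can be expressed as configuration if you use a library such as express-session. A hedged sketch; the option values are examples, and the store shown is the default in-memory one, which is not suitable for production:

```typescript
import express from "express";
import session from "express-session";

const app = express();

app.use(
  session({
    name: "sid",                                         // avoid the default cookie name
    secret: process.env.SESSION_SECRET ?? "replace-me",  // illustrative
    resave: false,
    saveUninitialized: false,
    cookie: {
      httpOnly: true,          // not readable from JavaScript
      secure: true,            // only sent over HTTPS
      sameSite: "strict",
      maxAge: 30 * 60 * 1000,  // example 30 minute lifetime
    },
  })
);

app.post("/login", (req, res) => {
  // On (re)authentication, issue a brand new session id to avoid session fixation.
  req.session.regenerate((err: unknown) => {
    if (err) return res.status(500).send("Sorry, something went wrong");
    // ... mark the session as authenticated here ...
    res.send("ok");
  });
});
```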
- ensure there is a clear, tightly defined schema, using e.g. JSON Schema, for each integration point and ensure all input is validated against this schema (see the sketch below)
- automated tests should verify that messages conform to the expected schema for each integration point
- rate limit inputs and check payload size; consider using the circuit breaker design pattern at integration points
- implement transport encryption for the transmission of all sensitive information and supplement with encryption of the payload if necessary
- ensure TLS certificates cover the domain and sub-domains, are current and from a trusted Certificate Authority, and are installed with intermediate certificates when required
- specify character encodings for all connections
- do not allow the mix of TLS and non-TLS content
- filter parameters containing sensitive info in the HTTP referer header when linking to external sites
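For the schema-validation point at the start of this list, one common approach in a Node.js service is to validate every inbound message against a JSON Schema with a library such as Ajv. A sketch with an illustrative schema:

```typescript
import Ajv from "ajv";

const ajv = new Ajv({ allErrors: true });

// Illustrative schema for an inbound message at an integration point.
const paymentSchema = {
  type: "object",
  properties: {
    reference: { type: "string", maxLength: 64 },
    amountPence: { type: "integer", minimum: 1 },
  },
  required: ["reference", "amountPence"],
  additionalProperties: false, // reject anything not in the schema
};

const validatePayment = ajv.compile(paymentSchema);

export function parsePayment(payload: unknown): { reference: string; amountPence: number } {
  if (!validatePayment(payload)) {
    // Reject invalid input outright rather than trying to repair it.
    throw new Error("Payload does not match the payment schema");
  }
  return payload as { reference: string; amountPence: number };
}
```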
- require authentication first if appropriate
- check file type, characters, size etc.
- virus / malware scan, preferably in a disposable container
- turn off exec privileges on file upload directories and ensure uploaded files are read-only
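A small sketch of the basic checks above for file uploads: allow-list the extension and content type and cap the size before the file is accepted for further processing (virus / malware scanning would happen after this, and the limits shown are illustrative):

```typescript
// Illustrative allow-list and size limit for an upload endpoint.
const ALLOWED_EXTENSIONS = new Set([".png", ".jpg", ".pdf"]);
const ALLOWED_CONTENT_TYPES = new Set(["image/png", "image/jpeg", "application/pdf"]);
const MAX_BYTES = 5 * 1024 * 1024; // 5 MB

// Returns an error message, or null if the upload passes the basic checks.
export function checkUpload(filename: string, contentType: string, sizeBytes: number): string | null {
  const dot = filename.lastIndexOf(".");
  const extension = dot >= 0 ? filename.slice(dot).toLowerCase() : "";

  if (!ALLOWED_EXTENSIONS.has(extension)) return "File type is not allowed";
  if (!ALLOWED_CONTENT_TYPES.has(contentType)) return "Content type is not allowed";
  if (sizeBytes > MAX_BYTES) return "File is too large";
  if (/[\\/\0]/.test(filename)) return "File name contains invalid characters";

  return null;
}
```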
- OWASP have a good REST Security Cheatsheet
- favour JSON Web Tokens (JWT) in the header as the format for security tokens and protect their integrity with a MAC (see the sketch below)
- use API Keys in the authorization header to throttle clients and reduce impact of denial of service attacks. Do not rely on them to protect sensitive resources because they are easy to compromise
- consider 2-way TLS client certs if your application is integrating via a web service. However, implementation and trouble-shooting can be onerous, and revoking and reissuing certificates adds complexity
- whitelist allowable methods and reject all others with a 405 Method Not Allowed response
- be aware of authorisation needs in service-to-service communication, and avoid the confused deputy problem where a service calls another without providing the appropriate authorisation information. Using external ids can help here.
- the server should always send the Content-Type header with the correct Content-Type, and include a charset
- reject a request whose Content-Type is not supported with a 415 Unsupported Media Type response (and use 406 Not Acceptable when you cannot satisfy the client's Accept header)
- disable CORS headers unless cross-domain calls are needed. If they are needed, be as specific as possible
- consider logging token validation errors to help detect attacks
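To illustrate the JWT point above, a sketch using the npm `jsonwebtoken` package, which signs tokens with an HMAC (HS256) and verifies both the signature and the expiry. The secret handling and the claims are illustrative:

```typescript
import jwt, { JwtPayload } from "jsonwebtoken";

const secret = process.env.JWT_SECRET ?? "replace-me"; // illustrative only

// Issue a short-lived token whose integrity is protected by an HMAC (HS256).
export function issueToken(userId: string): string {
  return jwt.sign({ sub: userId }, secret, { algorithm: "HS256", expiresIn: "15m" });
}

// Verify the signature and expiry; log failures to help detect attacks.
export function checkToken(token: string): string | null {
  try {
    const payload = jwt.verify(token, secret, { algorithms: ["HS256"] }) as JwtPayload;
    return payload.sub ?? null;
  } catch (err) {
    console.warn("Token validation failed:", (err as Error).message);
    return null;
  }
}
```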
Whilst projects will have a penetration test and IT health check, these are periodic tasks. We also encourage teams to run automated security testing tools so that vulnerabilities are picked up much more quickly. We recommend running security testing tools on a regular schedule, not just when code is pushed, because new vulnerabilities may emerge without you having made any changes to your application.
Snyk checks your Node.js application's dependencies for vulnerabilities.
We recommend two ways to use Snyk:

- GitHub integration: Snyk can automatically raise PRs against your repository to fix vulnerabilities; more details are available at https://snyk.io/docs/github/
- Manually: Snyk has a CLI that you can use by hand or as part of a CI pipeline. The easiest way to configure this is to:
  - run `snyk wizard` locally
  - the wizard will offer to add code to your package.json to run Snyk vulnerability testing alongside your usual npm test jobs
  - accept this, and any CI test jobs will fail if there are new vulnerabilities
SBT Dependency Check checks your dependencies against the OWASP database of vulnerable modules. It works, but it is relatively immature and so not as easy to use as Snyk. You can find SBT Dependency Check here
OWASP provide some tools for this, including a command line tool and a Maven plugin. This is essentially the same tool as SBT Dependency Check above, but aimed at Java. You can find Dependency Check for Java here
- National Cyber Security Centre Secure Development Practices Guide
- OWASP Top 10 Project
- Web Application Security - A Beginner's Guide
- Identity and Data Security for Web Developers
- The Web Application Hacker's Handbook
- HTTP protocol security considerations
- OWASP cheat sheets
- OWASP secure coding practices
- CESG Enterprise security decisions
- CESG password guidance
- CESG 10 steps to cyber security
- CESG Protecting bulk and personal data
- CESG Security Design Principles for Digital Services
- CESG TLS guidance for external services