`git clone git@github.com:eGovPDX/portlandor.git` will create a folder called `portlandor` wherever you run it and pull down the code from the repo.
Windows handles line endings differently than *nix-based systems (Unix, Linux, macOS). To make sure our code is interoperable with the Linux servers to which it is deployed and with the local Linux containers where you develop, you will need to make sure your git configuration is set to properly handle line endings.
We want the repo to correctly pull down symlinks for use in the Lando containers. To do this, we will enable symlinks as part of cloning the repo.
git clone -c core.symlinks=true git@github.com:eGovPDX/portlandor.git
`git clone` and `git checkout` must be run as an administrator in order to create symbolic links (run either Command Prompt or Windows PowerShell as administrator for this step).
Run `git config core.autocrlf false` to make sure this repository does not try to convert line endings on your Windows machine.
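Putting the Windows-specific steps together, a first-time clone might look like the following (run from an elevated prompt; the repo URL is the one above):

```shell
# Run from an elevated (administrator) Command Prompt or PowerShell
# so that symbolic link creation is permitted.
git clone -c core.symlinks=true git@github.com:eGovPDX/portlandor.git
cd portlandor
git config core.autocrlf false   # don't convert line endings in this repo
```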
Follow the steps at https://docs.devwithlando.io/installation/installing.html
Follow the steps at https://docs.devwithlando.io/installation/uninstalling.html. To completely remove all traces of Lando, follow the "removing lingering Lando configuration" steps.
The .lando.yml file included in this repo will set you up to connect to the correct Pantheon dev environment for the Portland Oregon website. To initialize your local site for the first time, follow the steps below:
- From within your project root directory, run `lando start` to initialize your containers.
- Log in to Pantheon and generate a machine token from My Dashboard > Account > Machine Tokens.
- Run `lando terminus auth:login --machine-token=[YOUR MACHINE TOKEN]`; this logs your Lando instance into our Pantheon account.
- To make sure you don't hit rate limits with Composer, log into GitHub, generate a personal access token, and add it to your Lando instance by using `lando composer config --global --auth github-oauth.github.com "$COMPOSER_TOKEN"`. (You should replace $COMPOSER_TOKEN with your generated token.) There is a handy tutorial for this at https://coderwall.com/p/kz4egw/composer-install-github-rate-limiting-work-around
- If this is a new clone of the repo: before continuing to the next step you must run `lando composer install` and `lando yarn install` to install the appropriate dependencies.
- You have three options to get your database and files set up:
  - Run `lando latest` to automatically download and import the latest DB from Dev.
  - Manually import the database:
    - Download the latest database from the Dev environment on Pantheon. (https://dashboard.pantheon.io/sites/5c6715db-abac-4633-ada8-1c9efe354629#dev/backups/backup-log)
    - Move your database export into a folder in your project root called `/artifacts`. (We added this to our .gitignore, so the directory won't be there until you create it.)
    - Run `lando db-import artifacts/portlandor_dev_2018-04-12T00-00-00_UTC_database.sql.gz`. (This is just an example; you'll need to use the actual filename of the database dump you downloaded.)
  - Manually import the files:
    - Download the latest files from the Dev environment on Pantheon. (https://dashboard.pantheon.io/sites/5c6715db-abac-4633-ada8-1c9efe354629#dev/backups/backup-log)
    - Move your files backup into `web/sites/default/files`.
- Run `git checkout master` and `lando refresh` to build your local environment to match the `master` branch. (This runs composer install, drush updb, drush cim, and drush cr.)
- You should now be able to visit https://portlandor.lndo.site in your browser.
- To enable XDebug, run `lando xdebug-on`. Run `lando xdebug-off` to turn it off for increased performance.
- When you are done with your development for the day, run `lando stop` to shut off your development containers, or `lando poweroff` if you want to stop all Lando containers.
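Condensed, the first-time setup above looks like the following sequence (the machine token is a placeholder you must substitute; run each command from the project root):

```shell
lando start                                        # build and start the containers
lando terminus auth:login --machine-token=[TOKEN]  # log in to Pantheon
lando composer install                             # install PHP dependencies
lando yarn install                                 # install Node dependencies
lando latest                                       # download and import the latest Dev DB
git checkout master && lando refresh               # build your local env to match master
```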
See other Lando with Pantheon commands at https://docs.devwithlando.io/tutorials/pantheon.html.
By default the site runs in "development" mode locally, which means that caching is off, twig debugging is on, etc. These settings are managed in web/sites/default/local.services.yml. While it is possible to update these settings if you wish to run the site with caching on and twig debug off, updates to this file should never be committed to the repo, so that developers are always working in dev mode by default.
We are using a modified version of GitHub Flow to keep things simple. While you don't need to fork the repo if you are on the eGov dev team, you will need to issue pull requests from a feature branch in order to get your code into our `master` branch. Master is a protected branch and is the default branch for new PRs. We use `master` to build up commits until we're ready to deploy everything from `master` (Pantheon Dev) into the Test environment using the Pantheon dashboard.
To best work with Pantheon Multidev, we are going to keep feature branch names simple, lowercase, and under 11 characters.
git checkout master
git pull origin master
lando latest
lando refresh
git checkout -b powr-[ID]
- Verify you are on the master branch with `git checkout master`.
- On the master branch, run `git pull origin master`, or just `git pull` if you only have the one remote. This will make sure you have the latest changes from the remote `master` branch. Optionally, running `git pull -p origin` will prune any local branches not on the remote to help keep your local repo clean.
- Use the issue ID from Jira for a new feature branch name to start work: `git checkout -b powr-[ID]` to create and check out a new branch. (We use lowercase for the branch name to help create Pantheon multidev environments correctly.) If the branch already exists, you may use `git checkout powr-[ID]` to switch to your branch. If you need to create multiple multidevs for your story, name your additional branches `powr-[ID][a-z]` or `powr-[ID]-[a-z or 1-9]` (but continue to use just `POWR-[ID]` in the git commits and PR titles for all branches relating to that Jira story).

TLDR:
- New feature branch: `git checkout -b powr-[ID]`
- New branch from current branch: `powr-[ID][a-z]` or `powr-[ID]-[a-z or 1-9]`
- Use the base branch ID for base/sub-branch commits and PR titles: `POWR-[ID]`

```
// on base branch powr-123
git commit -m "POWR-123 ..."
// on powr-123-a branched from powr-123
git commit -m "POWR-123 ..."
```
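As a quick sanity check on branch names, here is a hypothetical shell helper (not part of the repo) that encodes the convention above: lowercase `powr-[ID]`, an optional sub-branch suffix, and under 11 characters for Pantheon multidev compatibility:

```shell
# Hypothetical helper: validate a feature branch name against the
# powr-[ID] convention (lowercase, optional [a-z]/[1-9] suffix,
# fewer than 11 characters total).
check_branch() {
  name="$1"
  if printf '%s' "$name" | grep -Eq '^powr-[0-9]+(-?[a-z1-9])?$' \
     && [ "${#name}" -lt 11 ]; then
    echo "ok: $name"
  else
    echo "bad: $name"
  fi
}

check_branch powr-123     # prints "ok: powr-123"
check_branch powr-123-a   # prints "ok: powr-123-a"
check_branch POWR-123     # prints "bad: POWR-123" (uppercase)
```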
- Run `lando latest` at the start of every sprint to update your local database with a copy of the database from Dev.
- Run `lando refresh` to refresh your local environment's dependencies and config. (This runs composer install, drush updb, drush cim, and drush cr.)
- You are now ready to develop on your feature branch.
- In addition to any custom modules or theming files you may have created, you need to export any configuration changes to the repo in order for those changes to be synchronized. Run `lando drush cex` (config-export) in your local environment to create/update/delete the necessary config files. You will need to commit these to git. If you are updating any modules through a `composer update` command, you will need to run `lando drush updb` and then `lando drush cex` to capture any config schema changes. After creating these changes, you need to commit them.
- To commit your work, run `git add -A` to add all of the changes to your local repo. (If you prefer to be a little more precise, you can `git add [filename]` for each file, or several files separated by a space.)
- Then create a commit with a comment, such as `git commit -m "POWR-[ID] description of your change."` Every commit message should be prefixed with the Jira story ID. This ties those commits to the Jira issue and makes it easier to review git history.
- Just before you push to GitHub, you should rebase your feature branch on the tip of the latest remote `master` branch. To do this, run `git fetch origin master`, then `git rebase -i origin/master`. This lets you "interactively" replay your changes on the tip of the current release branch. You'll need to pick, squash, or drop your changes and resolve any conflicts to get a clean commit that can be pushed to release. You may need to `git rebase --continue` until all of your changes have been replayed and your branch is clean. Optionally, you may choose to `git pull origin master` in order to fetch and merge master into your branch rather than perform a rebase. This can be helpful if you have already pushed to origin and need to bring in those upstream changes while avoiding changing the history of your origin branch with a `git push --force`.
- Run `lando refresh` to refresh your local environment with any changes from `master`. (This runs composer install, drush updb, drush cim, and drush cr.) This will help you identify whether you completed your `rebase` or `merge` correctly.
- You can now run `git push -u origin powr-[ID]`. This will push your feature branch and set its remote to a branch of the same name on origin (GitHub).
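Put together, a typical push for a story (using POWR-123 as an example ID) is sketched below; it assumes you have config changes to export:

```shell
lando drush cex -y                  # export config changes to the repo
git add -A
git commit -m "POWR-123 Description of your change."
git fetch origin master
git rebase -i origin/master         # replay your work on the latest master
lando refresh                       # confirm the rebased branch still builds
git push -u origin powr-123
```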
When your work is ready for code review and merge:
- Create a Pull Request (PR) on GitHub for your feature branch. This will default to the `master` branch, but you may also choose to create a PR against a long-running feature branch that will later have a PR to `master`. Work with the build lead to determine the strategy for your story.
- Make sure to include POWR-[ID] and a short description of the feature in your PR title so that Jira can relate that PR to the correct issue.
- The PR creation triggers an automated CircleCI build, deployment, and test process that will:
  - Check out code from your working branch on GitHub.
  - Run `composer install`.
  - Deploy the site to a multidev feature branch site on Pantheon.
  - Run `drush cim` to import config changes.
  - Run `drush updb` to update the database.
  - Run `drush cr` to rebuild the caches.
  - Run smoke tests against the feature branch site to make sure the build was successful.
- If the build fails, you can go back to your local project, correct any issues, and repeat the process of getting a clean commit pushed to GitHub. Once a PR exists, every commit to that branch will trigger a new CircleCI build. You only need to run `git push` from your branch if it is already connected to the remote, but you'll probably want to step through the rebase or merge steps if the time between pushes is anything more than a couple of minutes.
- The CI job `visual-regression` runs tests under different users in parallel, and `finalize-all` notifies Percy that all tests have finished. Functional tests are performed directly on CI servers, while screenshots are sent to Percy for post-processing. When a functional test fails, click on the `Details` link next to the `ci/circleci: visual_regression` job to review the CI log for error messages. When a screenshot comparison test fails, click on the `Details` link next to the `percy/portlandor` job to review and approve the result if the visual difference is only caused by content updates. If the visual difference is not caused by content updates, a comment should be added to the JIRA ticket with a link to the screenshot in question.
- You'll need to prep anything on the multidev site that is needed for QA to complete the test plan. This is also a chance to see if you need to address any issues with an additional commit.
- In Jira, update the test plan of your issue including the URL to the feature branch. Move the issue to "QA" and assign the issue to the QA team.
- Execute the test plan step by step. (As Rick likes to say, "try and break it! Be creative!")
- If defects are found, communicate with the developer and move the issue back to "Todo" in Jira and assign it back to the developer. Document what you find as comments on the issue and decide with the developer if you need to open bugs to address in the future or if you should address the issue now.
- If no defect is found, move the issue to "Merge and Accept" in Jira and assign it to the build master.
Go back to your PR in GitHub and make sure to assign at least one team member as a reviewer. Reviews are required before the code can be merged. We are mostly checking to make sure the merge is going to be clean, but if we are adding custom code, it is nice to have a second set of eyes checking for our coding standards.
There are a few extra steps for the assigned build lead. This person is the final sanity check before merging in changes to the Dev, Test and Live instances on Pantheon. Basically the Dev and Test deploys are an opportunity to practice and test the deployment of the feature.
- After a team member has provided an approval, which may be after responding to feedback and resolving review issues, the build master will be able to push the "Squash and merge" button and commit the work to the `master` branch.
  - Make sure the PR has `master` set as the base branch and that the merge message is prepended with the Jira issue ID (e.g. "POWR-42 Adding the super duper feature").
  - The merge triggers an automated CircleCI build on the Dev environment.
- Test that everything still works on the Dev site. This is just a sanity check since a QA has already been performed.
- Can you confirm the expected code changes were deployed?
- Do permissions need to be rebuilt?
- If all work from the issue is merged and the build was successful, you can move the issue to the done column on our Jira board and delete the feature branch from Github.
- Repeat steps 1-2 to merge additional PRs until you've bundled together all of the changes that you want to go into the next "deployment" to Test and Live.
- Before the last merge to `master` for the desired deployment, clone the `live` database to `dev` using the following command: `lando terminus env:clone-content employees.live dev`
- After the clone is complete, merging to master will trigger an automated CircleCI build, deployment, and test process on the Dev environment similar to the multidev deployment.
- Verify that the CircleCI build on Dev is successful and passes our automated tests.
We are using the Dev environment to bundle all the approved code together into a single release which can then be deployed to Test and Live, and to make sure things are solid. At least once per sprint, or more frequently as needed, our combined changes should be pushed to the Test and Live environments. The Test deployment is essentially the last check to see whether our code will be safe on Production and build correctly, as the Pantheon Quicksilver scripts operate in a slightly different environment than CircleCI's Terminus commands.
- Go to the Pantheon dashboard and navigate to the Test environment.
- Under Deploys, you should see that the code you just deployed to Dev is ready for Test. Check the box to clone the Live database to Test, then merge that code and run the build on Test. Make sure to provide a handy release message that tells us a little about what is included. Use the PR titles from the merged feature branches to construct release notes.
- After clicking deploy, smoke test your deployment by visiting the configuration sync and status report pages under administration. If config is not imported, it may be necessary to synchronize the configuration by running `lando terminus drush portlandor.test cim -y`. Never use the Drupal configuration synchronization admin UI to run the config import. (There be dragons... and the Drush timeout is significantly longer than the UI timeout. The UI timeout can lead to config corruption and data loss.)
- Verify that everything still works on Test.
Once a deployment to Test has been tested and passes, the same set of changes should be promptly deployed to Production by following the same basic procedure above.
Note: The theme build process is automatically triggered every time your Lando containers start up and whenever you run `lando refresh`, so you should only need to run it manually if you're editing the .scss or .js source files.
You can run `lando yarn install` if you need to install/update your Node dependencies.
- Run `lando yarn start`
- Go to https://portlandor.lndo.site/pattern-lab
Here are some additional commands you may find useful.
- Run `lando yarn run build` to build both the theme assets and Pattern Lab.
- Run `lando yarn run watch` to build both the theme assets and Pattern Lab, watch for changes, and trigger rebuilds for both.
Pattern Lab is a tool used to document the design choices and `.twig` templates that make up the Cloudy theme.
To get started with Pattern Lab, here are some useful commands:
- Run `lando yarn run build:pl` to build the Pattern Lab site.
- Run `lando yarn run watch:pl` to build the Pattern Lab site, watch for changes, and trigger rebuilds.
- Run `lando yarn run clean:pl` to delete the Pattern Lab site.
There is a known issue with PL v3 builds erroneously reporting missing template files if they exist outside of the `pattern-lab` directory. However, these are false positives: Pattern Lab is finding the file and rendering the pattern.
For more information, see: Pattern Lab Github Issue #1116
Note: Make modifications to the desired SCSS and JavaScript files in the theme. Never modify `style.bundle.css`, `main.bundle.js`, or anything in the `dist` directory directly. We build `style.bundle.css` as part of our CI; you should run the development or build script locally to compile your SCSS files into `style.bundle.css`.
You have a couple of options for manually compiling the asset files:
- Run `lando yarn run build:wp` to run Webpack and rebuild our CSS/JS assets.
- Run `lando yarn run watch:wp` to watch for SCSS/JS file changes and automatically rebuild our assets.
- Run `lando yarn run clean:wp` to delete the Webpack assets.
The following is a snippet of Webpack build output from a successful build:
```
Hash: 83d85b18cfd6b88c5e7e
Version: webpack 4.29.0
Time: 7680ms
Built at: 02/05/2019 2:33:48 PM
Asset                      Size     Chunks  Chunk Names
css/style.bundle.css       196 KiB  0       [emitted]  main
css/style.bundle.css.map   542 KiB  0       [emitted]  main
js/main.bundle.js          81 KiB   0       [emitted]  main
js/main.bundle.js.map      320 KiB  0       [emitted]  main
Entrypoint main = css/style.bundle.css js/main.bundle.js css/style.bundle.css.map js/main.bundle.js.map
[0] multi ./js/src/main.js ./scss/style.scss 40 bytes {0} [built]
[1] ./js/src/main.js 3 KiB {0} [built]
[4] (webpack)/buildin/global.js 472 bytes {0} [built]
[5] external "jQuery" 42 bytes {0} [built]
[6] ./scss/style.scss 39 bytes {0} [built]
    + 4 hidden modules
Child mini-css-extract-plugin node_modules/css-loader/index.js??ref--5-1!node_modules/postcss-loader/lib/index.js??ref--5-2!node_modules/sass-loader/lib/loader.js??ref--5-3!scss/style.scss:
    Entrypoint mini-css-extract-plugin = *
    [0] ./node_modules/css-loader??ref--5-1!./node_modules/postcss-loader/lib??ref--5-2!./node_modules/sass-loader/lib/loader.js??ref--5-3!./scss/style.scss 748 KiB {0} [built]
        + 1 hidden module
```
The output lists some version information, followed by the emitted assets, and then debugging information for the path travelled through the webpack configuration. It's helpful to compare this output to the configuration file to understand it. For this project we bundle our JavaScript and Sass files in a single entrypoint, `main` in this case. One important note for JavaScript compiling is that we rely on Drupal providing the jQuery window variable so we don't have a conflict where two instances exist. Our library depends on the core/jquery library, so it should always be available.
The CSS output from compiling our Sass files is then bundled together using the mini-css-extract-plugin. To complete that process, the Sass file is compiled, then run through PostCSS, and finally the CSS is loaded and extracted.
The following is an example of a Webpack build that fails:
```
Hash: 656b419f6eb4a2b6dec6
Version: webpack 4.29.0
Time: 3564ms
Built at: 02/05/2019 2:57:30 PM
2 assets
Entrypoint main = js/main.bundle.js js/main.bundle.js.map
[0] multi ./js/src/main.js ./scss/style.scss 40 bytes {0} [built]
[1] ./js/src/main.js 3 KiB {0} [built]
[4] (webpack)/buildin/global.js 472 bytes {0} [built]
[5] external "jQuery" 42 bytes {0} [built]
[6] ./scss/style.scss 1.31 KiB {0} [built] [failed] [1 error]
    + 2 hidden modules

ERROR in ./scss/style.scss
Module build failed (from ./node_modules/mini-css-extract-plugin/dist/loader.js):
ModuleBuildError: Module build failed (from ./node_modules/sass-loader/lib/loader.js):
@import 'components/_fake';
^
      File to import not found or unreadable: components/_fake.
      in /Users/michaelmcdonald/dev/portlandor/web/themes/custom/cloudy/scss/_components.scss (line 12, column 1)
    at runLoaders (/Users/michaelmcdonald/dev/portlandor/web/themes/custom/cloudy/node_modules/webpack/lib/NormalModule.js:301:20)
    at /Users/michaelmcdonald/dev/portlandor/web/themes/custom/cloudy/node_modules/loader-runner/lib/LoaderRunner.js:367:11
    at /Users/michaelmcdonald/dev/portlandor/web/themes/custom/cloudy/node_modules/loader-runner/lib/LoaderRunner.js:233:18
    at context.callback (/Users/michaelmcdonald/dev/portlandor/web/themes/custom/cloudy/node_modules/loader-runner/lib/LoaderRunner.js:111:13)
    at Object.render [as callback] (/Users/michaelmcdonald/dev/portlandor/web/themes/custom/cloudy/node_modules/sass-loader/lib/loader.js:52:13)
    at Object.done [as callback] (/Users/michaelmcdonald/dev/portlandor/web/themes/custom/cloudy/node_modules/neo-async/async.js:8077:18)
    at options.error (/Users/michaelmcdonald/dev/portlandor/web/themes/custom/cloudy/node_modules/node-sass/lib/index.js:294:32)
 @ multi ./js/src/main.js ./scss/style.scss main[1]
Child mini-css-extract-plugin node_modules/css-loader/index.js??ref--5-1!node_modules/postcss-loader/lib/index.js??ref--5-2!node_modules/sass-loader/lib/loader.js??ref--5-3!scss/style.scss:
    Entrypoint mini-css-extract-plugin = *
    [0] ./node_modules/css-loader??ref--5-1!./node_modules/postcss-loader/lib??ref--5-2!./node_modules/sass-loader/lib/loader.js??ref--5-3!./scss/style.scss 302 bytes {0} [built] [failed] [1 error]

ERROR in ./scss/style.scss (./node_modules/css-loader??ref--5-1!./node_modules/postcss-loader/lib??ref--5-2!./node_modules/sass-loader/lib/loader.js??ref--5-3!./scss/style.scss)
Module build failed (from ./node_modules/sass-loader/lib/loader.js):
@import 'components/_fake';
^
      File to import not found or unreadable: components/_fake.
      in /Users/michaelmcdonald/dev/portlandor/web/themes/custom/cloudy/scss/_components.scss (line 12, column 1)
```
Often the last few lines are the most important, and tell you where the error is found. Here, we can see that we had a bad import statement trying to import a non-existent file in _components.scss.
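As a toy illustration of tracking down this class of error, the hypothetical snippet below (demo paths under /tmp are made up) lists `@import` targets in a Sass partial that don't resolve to a file on disk:

```shell
# Set up a throwaway demo: one partial that exists, one that doesn't.
mkdir -p /tmp/scss-demo/components
printf "@import 'components/real';\n@import 'components/fake';\n" \
  > /tmp/scss-demo/_components.scss
touch /tmp/scss-demo/components/_real.scss

# Extract each @import target and check for an underscore-prefixed
# partial (or a plain .scss file) at that path.
cd /tmp/scss-demo
sed -n "s/^@import '\(.*\)';$/\1/p" _components.scss | while read -r target; do
  dir=$(dirname "$target"); base=$(basename "$target")
  if [ ! -f "$dir/_$base.scss" ] && [ ! -f "$target.scss" ]; then
    echo "missing: $target"   # prints "missing: components/fake"
  fi
done
```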
Composer is built into our Lando implementation for package management. We use it primarily to manage Drupal contrib modules and libraries.
Here is a good guide to using Composer with Drupal 8: https://www.lullabot.com/articles/drupal-8-composer-best-practices
Composer cheat sheet: https://devhints.io/composer
Use `lando composer require drupal/[module name]` to download contrib modules and add them to the composer.json file. This ensures they're installed in each environment where the site is built. Drupal modules that are added this way must also be enabled using the `lando drush pm:enable [module name]` command.
In general it's a good practice to keep dependencies updated to latest versions, but that introduces the risk of new bugs from upstream dependencies. Updating all dependencies should be done judiciously, and usually only at the beginning of a sprint, when there's adequate time to regression test and resolve issues. Individual packages can be updated as needed, without as much risk to the project.
To update all dependencies, run `lando composer update`. To update a specific package, for example the Devel module, run `lando composer update --with-dependencies drupal/devel`. After updating, make sure to commit the updated composer.lock file.
The composer.lock file contains a content hash generated from composer.json, along with the exact resolved versions of all dependencies and their dependencies. You can think of it as a fingerprint for the exact combination of dependencies being committed, and it's used to determine whether composer.lock is out of date.
When something changes in composer.json but the lock hash has not been updated, you may receive the warning: "The lock file is not up to date with the latest changes in composer.json. You may be getting outdated dependencies. Run update to update them."
To resolve this, run `lando composer update --lock`, which will generate a new hash. If you encounter a conflict on the hash value when you merge or rebase, use the most recent (yours) version of the hash.
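One way to handle that conflict, sketched under the assumption of a merge (during a rebase, `--ours` and `--theirs` are reversed):

```shell
git checkout --ours composer.lock   # keep your branch's copy of the lock file
lando composer update --lock        # regenerate the content hash from composer.json
git add composer.lock               # stage the resolved file, then finish the merge
```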
Tests can be found under `tests/percy/__tests__/`. In order to reduce the time spent switching user sessions, tests are generally organized by user. The same tests can be run in both local and CI environments.
One-time setup: run `lando rebuild -y` to install all dependencies required by Jest.
Run all tests at once: `lando jest`. Run a single test: `lando jest [YOUR_TEST_NAME]`. Currently there are 5 tests available: admin, admin_group, marty, ally, and anonymous.
All tests will be run in a CI job. When a test fails, a screenshot of the last visited page can be found in the artifacts.