We have decided to use NVM to manage installed Node versions locally
and in CI to ensure they match and produce consistent builds.
Running `nvm install` will install the Node version specified in the
`.nvmrc` file.
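For example (version number illustrative), with a `.nvmrc` in the repo root:

```shell
# .nvmrc contains just the version, e.g. "16.14.0".
# Locally and in CI, from the repo root:
nvm install   # reads .nvmrc and installs that version
nvm use       # switches the current shell to it
```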
This is now consistent with Document Download Frontend.
See: alphagov/document-download-frontend#114
Signed-off-by: Richard Baker <richard.baker@digital.cabinet-office.gov.uk>
When editing CSS, JavaScript or other assets it’s useful not to have to
run `gulp` manually to rebuild things after making changes.
This is why we have the `watch` npm script.
However for the paths to fonts to resolve properly when running locally
it needs the `NOTIFY_ENVIRONMENT` variable set to `development`. Having
to remember to do both of these steps every time is awkward.
For a one-off build of the frontend we added a command to set
`NOTIFY_ENVIRONMENT` to the appropriate value in https://github.com/alphagov/notifications-admin/pull/4049
This commit does the same thing for the watch task, by encapsulating
both steps in one `make` command.
Happy to take alternative suggestions on command naming.
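A minimal sketch of such a target (the name is hypothetical):

```make
# Hypothetical target name; one command that sets the variable
# and starts the watcher.
watch-frontend:
	NOTIFY_ENVIRONMENT=development npm run watch
```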
Previous attempt: https://github.com/alphagov/notifications-admin/pull/4048
The problem with the previous attempt is that the assets built on
CI become part of the build artefact used for production [1]. This
switches back to my original approach of using environment.sh, but
with a technique to cope with it being absent on CI. I've tested it
works with and without an environment.sh file.
Note that "npm install" is fine to be on a separate line, since a
non-zero exit code will always cause "make" to stop.
[1]: https://github.com/alphagov/notifications-aws/blob/master/concourse/templates/admin.yml.j2#L47
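One standard Make feature for coping with an optional file is `-include`,
which skips the file silently when it is missing; a sketch (target name
hypothetical):

```make
# "-include" tolerates a missing environment.sh (e.g. on CI),
# where plain "include" would abort with an error.
-include environment.sh

build-frontend:   # hypothetical target name
	npm install
	npm run build
```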
This depends on an environment variable being set when the assets
are built in a development context [1]. Otherwise, the assets get
their '/static' prefix stripped like they do for production, which
isn't compatible with serving them under '/static' in development.
[1]: 66e5022198/gulpfile.js (L37-L41)
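The gulpfile check can be sketched as a tiny helper (hypothetical
function; the real logic is in the lines linked above):

```javascript
// Hypothetical sketch: keep the '/static' prefix when building for
// development, strip it otherwise (as happens for production builds).
function assetPrefix(notifyEnvironment) {
  return notifyEnvironment === 'development' ? '/static' : '';
}

console.log(assetPrefix('development'));  // '/static'
console.log(assetPrefix('production'));   // ''
```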
We now use `sass`, a JavaScript version of Sass
compiled from dart-sass*, so we shouldn't need to
rebuild it to solve issues with C libraries.
*https://github.com/sass/dart-sass
Depends on: https://github.com/alphagov/notifications-aws/pull/905
Previously this would print some custom text with each step, as well
as optionally loading a virtual environment. This moves the actual
test commands to the Makefile. While this no longer prints custom
text, it does print the command that was run:
Before (skipping other output):
./scripts/run_tests.sh
Code style check passed
Import order check passed
...
JavaScript tests have passed
...
Unit tests have passed
After (skipping other output):
flake8 .
isort --check-only ./app ./tests
npm test
...
py.test -n auto --maxfail=10 tests/
...
I think it's more useful to see the command being run, rather than
having to wait until it succeeds to know what was happening. Having
the command also makes it easier to run it again if it fails, rather
than having to go and find it in a script.
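The resulting Makefile target looks roughly like this (a sketch; the
target name is hypothetical, the commands are the ones shown above):

```make
test:   # hypothetical target name
	flake8 .
	isort --check-only ./app ./tests
	npm test
	py.test -n auto --maxfail=10 tests/
```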
These aren't specific to this repo, and are covered more generally
in the Wiki [1]. Note that:
- The claim about needing multiple Python versions is not true.
- The NPM instructions should be covered by the "make bootstrap".
- The version of Node/NPM is covered by installing the latest one.
[1]: https://github.com/alphagov/notifications-manuals/wiki/Getting-Started
This is more consistent with how we run all other tasks. Note that
the virtual env setup is not generally applicable, and developers
of this repo should follow the guidance in the README.
Currently we have a situation where we're not running tests against
new versions of dependencies, as requirements_for_test.txt is not
being kept in sync with requirements.txt by pyup. Deploys are only
working because Concourse silently ignores version issues.
From a deployment log:
awscli 1.18.211 has requirement PyYAML<5.4,>=3.10; python_version != "3.4", but you'll have pyyaml 5.4 which is incompatible.
This switches to a single requirements file for test dependencies,
in order to keep it in sync with requirements.txt, i.e. we run our
tests against the same versions of dependencies that we deploy with,
and the build fails if we try to use package versions that are not
mutually compatible, as this example PR shows [1].
ERROR: Cannot install -r requirements_for_test.txt (line 17), -r requirements_for_test.txt (line 198) and pyyaml==5.4.1 because these package versions have conflicting dependencies.
We shouldn't need to have fine-grained locking on test dependencies,
beyond those we want to list manually in the file.
[1]: https://github.com/alphagov/notifications-admin/pull/3804
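A sketch of the single test requirements file (package names
illustrative); the `-r` line pulls in the pinned runtime dependencies,
so tests run against exactly the versions we deploy:

```
# requirements_for_test.txt (sketch)
-r requirements.txt   # the exact pinned versions we deploy with
pytest
flake8
isort
```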
In the past we've avoided using out-of-the-box solutions for Python
dependency resolution because a) they haven't been very mature and b)
we've had lots of issues with version conflicts. See [1], [2] for
details. Instead, we've been using a custom Python script that
under-the-hood runs `pip freeze` and saves the output to
`requirements.txt`.
This script works well for us, but it doesn't integrate well with other
tools. On the other hand [`pip-tools`](https://github.com/jazzband/pip-tools)
as of 2020 seems to be well-supported by its maintainers and other
tools; for instance, GitHub's automated update service
[Dependabot](https://dependabot.com) supports `requirements.in` files.
This commit replaces our `freeze-requirements` make command with
`pip-compile`.
The Digital Marketplace team have made this change and seem happy with
the results.
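In outline, the standard `pip-tools` workflow looks like this (a
sketch, not our exact Makefile wiring):

```shell
# Top-level dependencies live in requirements.in; pip-compile resolves
# them into a fully pinned requirements.txt.
pip-compile requirements.in --output-file requirements.txt

# Pick up newer versions allowed by the .in constraints:
pip-compile --upgrade requirements.in
```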
5 minutes isn't long enough to deploy ten instances of the admin app:
it turns out it takes marginally longer than 5 minutes to roll each
instance one after the next. This can lead to confusion, as the build
fails and the functional tests don't run, but the code may have
deployed fine and be running on production.
We check import order as part of our automated tests. But fixing
violations means:
- manually editing them and rechecking
- remembering the parameters to `isort`
- looking up the `isort` command from the last time you ran it
Putting it in the Makefile should make life a bit easier.
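A sketch of the target (name hypothetical; the paths match the ones
the existing check uses):

```make
# Hypothetical target: fix import order in place, mirroring the
# `isort --check-only ./app ./tests` check.
fix-imports:
	isort ./app ./tests
```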
So that browsers will cache them for as long as possible. We invalidate
this cache by adding the hash of each file to the query string.
There’s no way of doing this on a whole bucket; it has to be set on each
item. Adding this flag does so at the time of uploading the items.
Value of 10 years in seconds taken from:
0ee3bcb1ee/whitenoise/base.py (L19)
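As a quick check of the arithmetic, 10 years in seconds works out as:

```python
# Ten years in seconds, used as the Cache-Control max-age.
TEN_YEARS_IN_SECONDS = 10 * 365 * 24 * 60 * 60
print(TEN_YEARS_IN_SECONDS)  # 315360000
```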
We have a nasty bug where Cloudfront is caching old files against new
URLs because the new code rolls out gradually across the ~10 admin
instances we have in production.
The way we are going to fix this is by pointing Cloudfront at S3, which
doesn’t have the concept of ‘instances’.
This commit does the work to copy the files to the new buckets. It
depends on the buckets being set up.
We don't want pyup.io upgrading sub-dependencies listed in the
requirements.txt file, since it does so whenever a new version is
available regardless of what our application dependencies require.
The list of top-level dependencies is moved to requirements-app.txt,
which is used by `make freeze-requirements` to generate the full
list of requirements in requirements.txt.
(See alphagov/notifications-api#1938 for details.)
Some time between version 6.32 and 6.34 of the Cloud Foundry CLI the
ability to redirect the output of a command into `cf push -f` was
broken.
The only alternative we can think of is writing the file to disk, doing
the deploy, and then deleting it.
We’re careful to write to a directory outside the current repo to avoid:
- including secrets in the deployed package
- accidentally checking the secrets into source control
`/tmp/` seems to be a good place to put it, since, even if the delete
doesn’t run, it will get cleaned up eventually (probably when the
machine next boots).
Right now this only applies to people deploying from their local
machines. At some point it will affect Jenkins too, but it doesn’t
yet. So this commit only fixes the problem for the commands that
developers run locally.
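The sequence can be sketched as (the manifest-generating step is
hypothetical):

```shell
# Write the manifest outside the repo so it can't end up in the
# deployed package or get checked into source control.
MANIFEST=/tmp/manifest.yml
make generate-manifest > "$MANIFEST"   # hypothetical generator step
cf push -f "$MANIFEST"
rm -f "$MANIFEST"   # /tmp is cleaned up eventually even if this fails
```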
fixup! Write manifests to disk instead of redirecting
Brings in the new environment variables deployment process introduced
in alphagov/notifications-api#1543.
The script is a copy of the API one and make steps are modified to
fit with the existing admin deployment targets.
Remove `cf-build` and `cf-build-with-docker` as they are not being used
Remove `build-codedeploy-artifact` in favor of `build-paas-artifact`
Remove `upload-codedeploy-artifact` in favor of `upload-paas-artifact`
Remove `deploy`, `check-aws-vars`,
`deploy-suspend-autoscaling-processes`,
`deploy-resume-autoscaling-processes` and
`deploy-check-autoscaling-processes` as they are remnants of the
pre-PaaS era.
Consequently, some variables became obsolete and are removed:
`CODEDEPLOY_PREFIX`, `CODEDEPLOY_APP_NAME`, `DNS_NAME`,
`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`.