In the past we've avoided using out-of-the-box solutions for Python
dependency resolution because a) they haven't been very mature and b)
we've had lots of issues with version conflicts. See [[1]], [[2]] for
details. Instead, we've been using a custom Python script that, under
the hood, runs `pip freeze` and saves the output to `requirements.txt`.
This script works well for us, but it doesn't integrate well with other
tools. On the other hand,
[`pip-tools`](https://github.com/jazzband/pip-tools) as of 2020 seems
to be well supported by its maintainers and other tools; for instance,
GitHub's automated update service [Dependabot](https://dependabot.com)
supports `requirements.in` files.
This commit replaces our `freeze-requirements` make command with
`pip-compile`.
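Roughly, the new workflow looks like this (a sketch using the standard
pip-tools file names, which may not match our final Makefile targets):

```sh
# Top-level dependencies are declared in requirements.in;
# pip-compile resolves and pins the full tree into requirements.txt.
pip-compile requirements.in --output-file requirements.txt

# Later, to pick up newer versions of (sub-)dependencies:
pip-compile --upgrade requirements.in
```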
The Digital Marketplace team have made this change and seem happy with
the results.
5 minutes isn't long enough to deploy ten instances of the admin app -
it turns out that rolling each instance one after the next takes
marginally longer than 5 minutes in total. This can lead to confusion:
the build fails and the functional tests don't run, but the code may
have deployed fine and be running in production.
We check import order as part of our automated tests. But fixing import
order violations means:
- manually editing the files and rechecking
- remembering the parameters to `isort`, or
- looking up the `isort` command from the last time you ran it
Putting it in the Makefile should make life a bit easier.
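As a rough sketch, the new target might wrap something like this - the
target name and the `isort` arguments are illustrative, not our actual
configuration:

```sh
# e.g. run via a hypothetical `make fix-imports` target.
# isort 5 recurses into directories by default; older versions
# need the --recursive/-rc flag.
isort ./app ./tests
```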
So that browsers will cache them for as long as possible. We invalidate
this cache by adding the hash of each file to the query string.
There’s no way of setting cache headers on a whole bucket; they have to
be set on each object. Adding this flag sets them at the time the items
are uploaded.
Value of 10 years in seconds taken from:
0ee3bcb1ee/whitenoise/base.py (L19)
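For illustration, setting the header at upload time with the AWS CLI
might look like this (a sketch; our actual upload step, bucket name and
paths may differ):

```sh
# 315360000 seconds = 10 years (10 × 365 days), matching the
# whitenoise value referenced above. Bucket and paths are examples.
aws s3 cp ./app/static/ "s3://$STATIC_BUCKET/static/" \
  --recursive \
  --cache-control "public, max-age=315360000"
```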
We have a nasty bug where CloudFront is caching old files against new
URLs because the new code rolls out gradually across the ~10 admin
instances we have in production.
The way we are going to fix this is by pointing CloudFront at S3, which
doesn’t have the concept of ‘instances’.
This commit does the work to copy the files to the new buckets. It
depends on the buckets being set up.
We don't want pyup.io upgrading sub-dependencies listed in the
requirements.txt file, since it upgrades them whenever a new version is
available, regardless of what our application dependencies require.
The list of top-level dependencies is moved to requirements-app.txt,
which is used by `make freeze-requirements` to generate the full
list of requirements in requirements.txt.
(See alphagov/notifications-api#1938 for details.)
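For context, a minimal sketch of what `make freeze-requirements` does
(the real script may differ in the details):

```sh
# Install only the top-level dependencies into a throwaway virtualenv,
# then freeze the fully-resolved set into requirements.txt.
python -m venv /tmp/venv-freeze
/tmp/venv-freeze/bin/pip install -r requirements-app.txt
/tmp/venv-freeze/bin/pip freeze > requirements.txt
rm -rf /tmp/venv-freeze
```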
Some time between version 6.32 and 6.34 of the Cloud Foundry CLI, the
ability to redirect the output of a command into `cf push -f` was
broken.
The only alternative we can think of is writing the file to disk, doing
the deploy, and then deleting it.
We’re careful to write to a directory outside the current repo to avoid:
- including secrets in the deployed package
- accidentally checking the secrets into source control
`/tmp/` seems to be a good place to put it, since, even if the delete
doesn’t run, it will get cleaned up eventually (probably when the
machine next boots).
Right now this only applies to people deploying from their local
machines. At some point it will affect Jenkins too, but it doesn't yet.
So this commit only fixes the problem for the commands that developers
run locally.
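A sketch of the workaround for a local deploy - the manifest-generating
step and the app name are assumptions, not our exact commands:

```sh
# Write the manifest outside the repo so it can't be committed or
# packaged by accident; /tmp gets cleaned up eventually even if the
# rm never runs.
MANIFEST="$(mktemp /tmp/manifest.XXXXXX)"
make -s generate-manifest > "$MANIFEST"   # hypothetical manifest step
cf push notify-admin -f "$MANIFEST"
rm -f "$MANIFEST"
```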
Brings in the new environment variables deployment process introduced
in alphagov/notifications-api#1543.
The script is a copy of the API one, and the make steps are modified to
fit with the existing admin deployment targets.
- Remove `cf-build` and `cf-build-with-docker` as they are not being
  used.
- Remove `build-codedeploy-artifact` in favour of `build-paas-artifact`.
- Remove `upload-codedeploy-artifact` in favour of
  `upload-paas-artifact`.
- Remove `deploy`, `check-aws-vars`,
  `deploy-suspend-autoscaling-processes`,
  `deploy-resume-autoscaling-processes` and
  `deploy-check-autoscaling-processes` as they are remnants of the
  pre-PaaS era.
Consequently some variables became obsolete, namely `CODEDEPLOY_PREFIX`,
`CODEDEPLOY_APP_NAME`, `DNS_NAME`, `AWS_ACCESS_KEY_ID` and
`AWS_SECRET_ACCESS_KEY`; they are removed too.
Previously we used AWS, which meant that we could create wheels
from our requirements and then install them offline, which made
deployments quicker.
We're no longer using AWS, so let's remove that.
Cloud Foundry does support installing dependencies in an offline
environment, as documented here:
http://docs.cloudfoundry.org/buildpacks/python/#vendoring
To achieve this we create a `vendor/` directory which will contain the
packages to install. The offline install uses `--no-index` and
`--find-links`, so pip will not resolve any dependencies from PyPI. For
this reason we have to be confident that the `vendor/` directory will
contain all of the dependencies we need.
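Populating the vendor directory could be as simple as this sketch
(flags and paths assumed, not taken from our Makefile):

```sh
# Download every pinned requirement (and its sub-dependencies) into
# vendor/, ready for the buildpack's offline --find-links install.
pip download -r requirements.txt -d ./vendor/
```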
Sometimes, for user research, we want to make changes to the admin app
that we don’t want all users to see (because we’re not sure if they’re
the right changes to be making).
Previously this meant doing the research using a team member’s computer,
with the app running locally. This was bad for three reasons:
- requires the time of someone who has the code running locally
- requires the participant to use an unfamiliar computer
- means the participant doesn’t have access to their own Notify account
(or an account that we’ve set up for doing user research with)
The dream* would be to have two versions of the frontend app running
side by side in production. This commit makes the dream real – the two
versions of admin are:
- the normal admin app, accessible on
`www.notifications.service.gov.uk`
- a prototype version meant to be pushed to from a developer’s local
machine**, on a `cloudapps.digital` subdomain
Both of these apps share the same backing services, eg config, API
instance, queues, etc, etc. Which means that the prototype version can
be logged into with the same username and password, and the user will
see their service and all their templates when they do so.
Ideally this wouldn’t mean creating a separate base manifest. However
it’s a limitation of Cloud Foundry that you can’t override the
application name, which means a separate base manifest and a bit of
duplication. 😞
* actually the real dream would be to have a version of admin deployed
for each branch of the admin app, but this might get a bit
resource-intensive.
** by running `CF_SPACE=preview make preview cf-deploy-prototype`, where
`preview` is the name of the space you want to deploy to
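For reference, the deploy target might boil down to something like this
sketch - the app and manifest names are illustrative, not the real
ones:

```sh
# Push the prototype as a separately named app, against the separate
# base manifest mentioned above.
cf target -s "$CF_SPACE"
cf push notify-admin-prototype -f manifest-prototype.yml
```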