Depends on: https://github.com/alphagov/notifications-aws/pull/905
Previously this would print some custom text with each step, as well as
optionally loading a virtual environment. This moves the actual test
commands to the Makefile. While this no longer prints custom text, it
does print the command that was run:
Before (skipping other output):
```
./scripts/run_tests.sh
Code style check passed
Import order check passed
...
JavaScript tests have passed
...
Unit tests have passed
```
After (skipping other output):
```
flake8 .
isort --check-only ./app ./tests
npm test
...
py.test -n auto --maxfail=10 tests/
...
```
I think it's more useful to see the command being run, rather than
having to wait until it succeeds to know what was happening. Having
the command also makes it easier to run it again if it fails, rather
than having to go and find it in a script.
This is more consistent with how we run all other tasks. Note that
the virtual env setup is not generally applicable, and developers
of this repo should follow the guidance in the README.
In the past we've avoided using out-of-the-box solutions for Python
dependency resolution because a) they haven't been very mature and b)
we've had lots of issues with version conflicts. See [1], [2] for
details. Instead, we've been using a custom Python script that under
the hood runs `pip freeze` and saves the output to `requirements.txt`.
This script works well for us, but it doesn't integrate well with other
tools. On the other hand, [`pip-tools`](https://github.com/jazzband/pip-tools)
as of 2020 seems to be well-supported by its maintainers and other
tools; for instance, GitHub's automated update service
[Dependabot](https://dependabot.com) supports `requirements.in` files.
This commit replaces our `freeze-requirements` make command with
`pip-compile`.
The Digital Marketplace team have made this change and seem happy with
the results.
It turns out our tests spent a lot of time recreating the app for each
test case, which is expensive.
This commit makes the fixture session scoped, so the app is only created
once per test session, not once per test function.
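Roughly, the change looks like this (a sketch - the fixture and factory
names here are illustrative, not necessarily what this repo uses):
```python
import pytest

from app import create_app  # hypothetical factory import


# Previously the scope defaulted to "function", so the app (and all of
# its setup) was rebuilt for every single test. "session" scope builds
# it once for the whole run.
@pytest.fixture(scope="session")
def app_():
    app = create_app()
    ctx = app.app_context()
    ctx.push()
    yield app
    ctx.pop()
```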
This cuts down the time taken to run the test suite to about 50 seconds.
It also makes the tests more parallelizable. Before this change, going
from 4 to 8 processes made the tests slower. Now it cuts them down from
about 50 seconds to about 35 seconds[1]. So this commit also lets pytest
choose the number of processes to run (`-n auto`); on my machine it
chooses 8, which is the fastest.
Overall this means the test suite now runs in about 35 seconds.
1. With a 2.2GHz quad-core Intel Core i7 processor on a 2015 MacBook Pro
The list of top-level dependencies is moved to requirements-app.txt,
which is used by `make freeze-requirements` to generate the full
list of requirements in requirements.txt.
(See alphagov/notifications-api#1938 for details.)
Done using isort[1], with the following command:
```
isort -rc ./app ./tests
```
Adds linting to the `run_tests.sh` script to stop badly-sorted imports
getting re-introduced.
Chosen style is ‘Vertical Hanging Indent’ with trailing commas, because
I think it gives the cleanest diffs, eg:
```
from third_party import (
    lib1,
    lib2,
    lib3,
    lib4,
)
```
1. https://pypi.python.org/pypi/isort
If a PR is going to fail because tests aren’t passing then you:
- should know about it as quickly as possible
- shouldn’t waste precious Jenkins CPU running subsequent tests
This commit adds the `-x` flag to pytest, which stops the test run as
soon as one failing test is discovered.
Brings in the new environment variables deployment process introduced
in alphagov/notifications-api#1543.
The script is a copy of the API one, and the make steps are modified to
fit with the existing admin deployment targets.
flake8-print is a flake8 plugin that checks for `print()` statements in
Python files.
This should save us having to manually spot these when reviewing pull
requests.
The `--enable=T` flag needs to be set until this bug is fixed:
https://github.com/JBKahn/flake8-print/issues/27
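For illustration, this is the kind of thing the plugin will now catch
(reported as T001 at the time of writing; newer flake8-print releases
use T201):
```python
def check_status(response):
    print(response.status_code)  # flagged: leftover debugging output
    return response.status_code == 200
```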
flask-script has been deprecated in favour of Flask's built-in flask.cli
module, but making this switch carries a few changes with it:
* you should add FLASK_APP=application.py and FLASK_DEBUG=1 to your
environment.sh.
* instead of using `python app.py runserver`, you must now run
`flask run -p 6012`. The -p flag is important - the port must be
set before the config is loaded, so that it can live reload nicely
(https://github.com/pallets/flask/issues/2113#issuecomment-268014481).
* find available commands by just running `flask`.
* run them using flask, eg `flask list_routes`
* define new tasks by giving them the decorator
`@app.cli.command('task-name')`. The task name isn't needed if it's
just the same as the function name. Alternatively, if app isn't
available in the current scope, you can invoke the decorator directly,
as seen in app/commands.py (see the sketch below).
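For example, a task might look like this (a sketch - `list_routes` is
mentioned above, but its body here is only a guess at an
implementation):
```python
import click
from flask import current_app

from application import app  # hypothetical: wherever the Flask app object lives


@app.cli.command('list_routes')
def list_routes():
    """Print every route the app serves."""
    # flask.cli pushes an app context for us, so current_app works here
    for rule in sorted(current_app.url_map.iter_rules(), key=lambda r: r.rule):
        click.echo(rule.rule)
```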
Back on my old 2013 MacBook Pro, 2 was the optimum number of cores for the fastest test runs, which is why I chose `-n2` when I originally introduced this flag (see 9ced677ec7).
Now I have a faster MacBook (the same as every other developer), which has more cores to take advantage of.
We also use 4 cores when running the API tests.
Cores | Time taken to run tests
--- | ---
1 | 28.31s
2 | 18.15s
3 | 14.33s
4 | 12.32s
5 | 12.98s
6 | 13.49s
8 | 15.37s
Previously, in run_app_paas.sh, we captured stdout from the app and
piped it into the log file. However, this caused a number of problems -
mainly that exceptions with stack traces often weren't formatted
properly, and Kibana could not parse them.
Instead, with the updated utils library, we can log JSON straight to
the appropriate directory.
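The utils library takes care of the details, but the shape of the
change is roughly this (a generic sketch using only the standard
library, not the actual notifications-utils API):
```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object per line."""

    def format(self, record):
        log = {
            'time': self.formatTime(record),
            'level': record.levelname,
            'message': record.getMessage(),
        }
        if record.exc_info:
            # A multi-line traceback stays inside a single JSON value,
            # so Kibana sees one well-formed event rather than a stack
            # trace split across lines.
            log['exception'] = self.formatException(record.exc_info)
        return json.dumps(log)


handler = logging.FileHandler('app.log')  # real path is wherever PaaS expects logs
handler.setFormatter(JsonFormatter())
logging.getLogger().addHandler(handler)
```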
Previously we used AWS, which meant that we could create wheels from
our requirements and then install them offline, which made deployments
quicker.
We're no longer using AWS, so let's remove that. CloudFoundry does,
however, support installing dependencies in an offline environment, as
documented here:
http://docs.cloudfoundry.org/buildpacks/python/#vendoring
To achieve this we create a vendor/ directory which will contain the
packages to install. Installation uses --no-index and --find-links, so
it will not resolve any dependencies from PyPI; we therefore rely on
the vendor/ directory containing all of the dependencies we need.
The pep8 tool was renamed to pycodestyle; this issue explains why:
PyCQA/pycodestyle#466
This commit changes our tests to use pycodestyle instead of pep8.
No changes to our code were required as a result.
This file needs to exist before the app can run, so create it automatically
rather than including it as an extra setup step in the README. The API app
already does this.
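The general pattern is something like this (the file name and default
contents here are illustrative only):
```python
import os.path

# Write a sensible default if the file doesn't exist yet, so a fresh
# checkout can run the app without a manual setup step.
if not os.path.isfile('app/version.py'):
    with open('app/version.py', 'w') as f:
        f.write('__commit_sha__ = "development"\n')
```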