Specifically, don't use `pytest.mark.xfail` directly in parametrize;
instead use `pytest.param(*args, marks=pytest.mark.xfail)`. The old way
is deprecated in pytest 4 - for more information see
https://docs.pytest.org/en/latest/deprecations.html#marks-in-pytest-mark-parametrize
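For example (an illustrative test, not one from this codebase):

```python
import pytest


def double(x):
    return x * 2


@pytest.mark.parametrize('value, expected', [
    (1, 2),
    (2, 4),
    # old, deprecated style: pytest.mark.xfail((3, 7))
    pytest.param(3, 7, marks=pytest.mark.xfail),
])
def test_double(value, expected):
    assert double(value) == expected
```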
Also, make this an error in pytest.ini, so that if someone adds a new
xfail using the old style, the test run will fail straight away.
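A sketch of the pytest.ini side, assuming the deprecated usage surfaces
as a `DeprecationWarning` (the exact warning class to filter on may
differ between pytest versions):

```ini
[pytest]
filterwarnings =
    error::DeprecationWarning
```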
When running tests locally, Pytest prints a lot of captured logging
output. This is redundant because Pytest also captures stdout.
This commit effectively disables logging output when running tests by
setting the log level higher than anything a real logging call would
ever emit.
The logging output is still captured via stdout, so nothing is lost
here; we’re just reducing duplication.
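A minimal sketch of the idea in a `conftest.py` (the commit itself may
set the level through pytest configuration instead):

```python
import logging

# CRITICAL (50) is the highest standard level; disabling it and everything
# below means no real logging call ever emits a record, so pytest has
# nothing to capture or print.
logging.disable(logging.CRITICAL)
```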
Most of the time spent by the admin app to generate a page is spent
waiting for the API. This is slow for three reasons:
1. Talking to the API means going out to the internet, then through
nginx, the Flask app, SQLAlchemy, down to the database, and then
serialising the result to JSON and turning it into an HTTP response
2. Each call to the API is synchronous, so if a page needs 3 API
calls to render then the second API call won’t be made until the
first has finished, and the third won’t start until the second has
finished
3. Every request for a service page in the admin app makes a minimum
of two requests to the API (`GET /service/…` and `GET /user/…`)
Hitting the database will always be the slowest part of an app like
Notify. But this slowness is exacerbated by 2. and 3. Conversely, every
speedup made to 1. is multiplied by 2. and 3.
So this pull request aims to make 1. a _lot_ faster by taking nginx,
Flask, SQLAlchemy and the database out of the equation. It replaces them
with Redis, which, as an in-memory key/value store, is a lot faster than
Postgres. There is still the overhead of going across the network to
talk to Redis, but the net improvement is vast.
This commit only caches the `GET /service` response, but is written in
such a way that we can easily expand to caching other responses down the
line.
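As a rough sketch of the shape this takes (the decorator, key formats
and client objects here are illustrative, not the actual admin app
code):

```python
import json
from functools import wraps

ONE_DAY_IN_SECONDS = 24 * 60 * 60


def cached_in_redis(key_format):
    """Cache a client method's JSON-serialisable response in Redis.

    Illustrative sketch only - the real decorator, key names and client
    objects in the admin app may differ.
    """
    def decorator(method):
        @wraps(method)
        def wrapper(self, *args, **kwargs):
            key = key_format.format(*args, **kwargs)

            cached = self.redis_client.get(key)
            if cached is not None:
                return json.loads(cached)

            response = method(self, *args, **kwargs)

            # expire after 24 hours so a stale entry can't hang around forever
            self.redis_client.set(key, json.dumps(response), ex=ONE_DAY_IN_SECONDS)
            return response

        return wrapper
    return decorator


class ServiceAPIClient:

    def __init__(self, api_client, redis_client):
        self.api_client = api_client
        self.redis_client = redis_client

    @cached_in_redis('service-{0}')
    def get_service(self, service_id):
        # only reached on a cache miss - this is the slow path through
        # nginx, Flask, SQLAlchemy and Postgres
        return self.api_client.get('/service/{}'.format(service_id))
```

Caching another response later then just means decorating another
client method with its own key format.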
The tradeoff here is that our code is more complex, and we risk
introducing edge cases where a cache becomes stale. The mitigations
against this are:
- invalidating all caches after 24h so a stale cache doesn’t remain
around indefinitely
- being careful when we add new stuff to the service response
---
Some indicative numbers, based on:
- `GET http://localhost:6012/services/<service_id>/template/<template_id>`
- with the admin app running locally
- talking to Redis running locally
- also talking to the API running locally, itself talking to a local
Postgres instance
- times measured with Chrome web inspector, average of 10 requests
╲ | No cache | Cache service | Cache service and user | Cache service, user and template
-- | -- | -- | -- | --
**Request time** | 136ms | 97ms | 73ms | 37ms
**Improvement** | 0% | 41% | 88% | 265%
---
Estimates of how much storage this requires:
- Services: 1,942 on production × 2KB = 4MB
- Users: 4,534 on production × 2KB = 9MB
- Templates: 7,079 on production × 4KB = 28MB
Sometimes you just wanna run some tests directly using the `pytest`
command. But you’re in a new shell, and have forgotten to do
`source environment_test.sh`. The screen fills with red, and your day
just got a little bit worse.
This commit will stop this from ever happening again, by making the
setting of environment variables part of running Pytest. It does this with
a plugin called pytest-env[1].
pytest.ini is the standard way of configuring pytest. Creating this file
where it didn’t exist before changes the behaviour of pytest, in that
it will now look for tests in the same directory as the file, rather
than defaulting to the `tests/` directory. So we also have to explicitly
configure pytest[2] to tell it that it should only look in the `tests/`
directory. Otherwise it gets lost in the weeds of `node_modules`.
1. https://github.com/MobileDynasty/pytest-env
2. https://docs.pytest.org/en/latest/customize.html#confval-testpaths
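Putting the two together, the resulting pytest.ini looks something like
this (the environment variable below is a placeholder, not the real
contents of `environment_test.sh`):

```ini
[pytest]
# pytest-env sets these before the tests run
env =
    NOTIFY_ENVIRONMENT=test

# only collect from tests/, so pytest doesn't wander into node_modules
testpaths = tests
```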