When we cloned the repository and started making modifications, we
didn't initially keep tests in step. This commit tries to get us to a
clean test run by skipping tests that are failing and removing some
that we no longer expect to use (MMG, Firetext), with the intention that
we will come back in future and update or remove them as appropriate.
To find all the skipped tests, search for `@pytest.mark.skip(reason="Needs
updating for TTS:`. Where known, the reason includes a brief description
of the work needed to get the test passing. Delete the skip line to
include a test in a standard test run (`make test`).
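For illustration, a skipped test looks like this (the test name and the
text after the "Needs updating for TTS:" prefix are made up):

    import pytest

    @pytest.mark.skip(reason="Needs updating for TTS: rework fixtures for the new client")
    def test_send_sms_notification():
        ...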
The only impactful change is the major version itself, where I've
fixed the breaking changes due to the upgrade of PyPDF2 [^1] and
checked there are no deprecation warnings when I run the tests.
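For reference, a minimal sketch of the kind of renames the PyPDF2 1.x to
2.x upgrade involves (illustrative, not necessarily the exact changes in
the PR):

    from PyPDF2 import PdfReader  # 1.x: from PyPDF2 import PdfFileReader

    def count_pages(pdf_file):
        reader = PdfReader(pdf_file)  # 1.x: PdfFileReader(pdf_file)
        return len(reader.pages)      # 1.x: reader.getNumPages()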
[^1]: https://github.com/alphagov/notifications-utils/pull/973
This test was calling `.load` on model objects, when it should have
been calling `.dump`. This was not working as expected before the
marshmallow upgrade either - the objects returned were errors and not
template versions.
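To illustrate the distinction (schema and objects here are minimal
stand-ins, not the real ones):

    from marshmallow import Schema, fields

    class TemplateSchema(Schema):
        name = fields.String()
        version = fields.Integer()

    class Template:  # stand-in for a model object
        name = "two factor"
        version = 2

    # .dump serialises an object to a dict; .load validates raw input,
    # which is why calling it on model objects produced errors
    TemplateSchema().dump(Template())
    TemplateSchema().load({"name": "two factor", "version": 2})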
Boolean fields in marshmallow coerce various values to True or False.
The value 'Yes' now gets coerced to True, which caused a test to start
failing. We could change the schemas to stop 'Yes' from being coerced,
but the data for boolean fields comes from admin, where only certain
values are allowed anyway, so this commit just fixes the test.
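A sketch of the new behaviour - marshmallow 3's default truthy set for
Boolean fields includes the string 'Yes':

    from marshmallow import fields

    assert fields.Boolean().deserialize("Yes") is True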
https://marshmallow.readthedocs.io/en/stable/upgrading.html#schemas-are-always-strict
`.load` doesn't return a `(data, errors)` tuple any more - only data is
returned. A `ValidationError` is raised if validation fails. The code
now relies on the `marshmallow_validation_error` error handler to handle
errors instead of having to raise an `InvalidRequest`. This has no
effect on the response that is returned (a test has been modified to
check).
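Roughly, the new calling convention (a minimal sketch; in the app the
ValidationError propagates to the marshmallow_validation_error handler
rather than being caught inline):

    from marshmallow import Schema, ValidationError, fields

    class UserSchema(Schema):
        name = fields.String(required=True)

    try:
        data = UserSchema().load({})  # no (data, errors) tuple any more
    except ValidationError as e:
        print(e.messages)  # {'name': ['Missing data for required field.']}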
Also added a new `password` field to the `UserSchema` so that we don't
have to specially check for password errors in the `.create_user`
endpoint; we can let marshmallow handle them.
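A sketch of the idea (the real schema is a model schema and the exact
constraints may differ):

    from marshmallow import Schema, fields, validate

    class UserSchema(Schema):
        name = fields.String(required=True)
        # load_only keeps the password out of serialised output; having
        # the schema validate it means create_user no longer needs a
        # special case for password errors
        password = fields.String(required=True, load_only=True,
                                 validate=validate.Length(min=8))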
This fixes a bug where (letter) notifications left in sending would
temporarily be excluded from billing and status calculations once the
service retention period had elapsed, and then be included once again
when they were finally marked as delivered.*
Status and billing tasks shouldn't need to know which table their data
is in, and getting this wrong is the root cause of the bug here. Adding
a view across both tables abstracts this away while keeping the query
complexity the same.
Using a view also has the added benefit that we no longer need to care
when the status / billing tasks run relative to the deletion task, since
we will retrieve the same data either way (see below for a more detailed
discussion of data integrity).
*Such a scenario is rare but has happened.
A New View
==========
I've included all the columns that are shared between the two tables,
even though only a subset are actually needed. Having extra columns
has no impact and may be useful in future.
Although the view isn't actually a table, SQLAlchemy appears to wrap it
without any issues, even though the package has no direct support for
"view models". Because we're never inserting data, we don't need most of
the kwargs when defining columns.*
*Note that the "default" kwarg doesn't affect data that's retrieved,
only data that's written (if no value is set).
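For illustration, the wrapper is just an ordinary mapped class pointed
at the view - column list abbreviated, and the definition here is a
sketch rather than the real model:

    from sqlalchemy import Column, DateTime, Text
    from sqlalchemy.dialects.postgresql import UUID
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class NotificationAllTimeView(Base):
        # not a real table: SQLAlchemy just needs a name and primary key
        __tablename__ = "notifications_all_time_view"
        id = Column(UUID(as_uuid=True), primary_key=True)
        service_id = Column(UUID(as_uuid=True))
        notification_status = Column(Text)
        created_at = Column(DateTime)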
Data Integrity
==============
The (new) tests cover the main scenarios.
We need to be careful with how the view interacts with the deletion /
archiving task. There are two concerns here:
- Duplicates. The deletion task inserts before it deletes [^1], so we
could end up double counting. It turns out this isn't a problem because
a Postgres UNION implicitly applies DISTINCT [^2] (see the sketch below).
I've also verified this manually, just to be on the safe side.
- No data. It's conceivable that the query will check the history table
just before the insertion, then check the notifications table just after
the deletion. It turns out this isn't a problem either because the whole
query sees the same DB snapshot [^3][^4].*
*I can't think of a way to test this as it's a race condition, but I'm
confident the Postgres docs are accurate.
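To illustrate the UNION point, a sketch of the view's underlying query
expressed in SQLAlchemy (column list abbreviated, assuming the app's
Notification and NotificationHistory models):

    from sqlalchemy import select, union

    # union() deduplicates (UNION DISTINCT), so a row present in both
    # tables during the insert-then-delete window is only counted once;
    # union_all() would keep both copies
    all_time = union(
        select(Notification.id, Notification.notification_status),
        select(NotificationHistory.id, NotificationHistory.notification_status),
    )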
Performance
===========
I copied the relevant (non-PII) columns from Production for data going
back to 2022-04-01. I then ran several tests.
Queries using the new view still make use of indices on a per-table basis,
as the following query plan illustrates:
                                                       QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 GroupAggregate (cost=1130820.02..1135353.89 rows=46502 width=97) (actual time=629.863..756.703 rows=72 loops=1)
   Group Key: notifications_all_time_view.template_id, notifications_all_time_view.sent_by, notifications_all_time_view.rate_multiplier, notifications_all_time_view.international
   ->  Sort (cost=1130820.02..1131401.28 rows=232506 width=85) (actual time=629.756..708.914 rows=217563 loops=1)
         Sort Key: notifications_all_time_view.template_id, notifications_all_time_view.sent_by, notifications_all_time_view.rate_multiplier, notifications_all_time_view.international
         Sort Method: external merge Disk: 9320kB
         ->  Subquery Scan on notifications_all_time_view (cost=1088506.43..1098969.20 rows=232506 width=85) (actual time=416.118..541.669 rows=217563 loops=1)
               ->  Unique (cost=1088506.43..1096644.14 rows=232506 width=725) (actual time=416.115..513.065 rows=217563 loops=1)
                     ->  Sort (cost=1088506.43..1089087.70 rows=232506 width=725) (actual time=416.115..451.190 rows=217563 loops=1)
                           Sort Key: notifications_no_pii.id, notifications_no_pii.job_id, notifications_no_pii.service_id, notifications_no_pii.template_id, notifications_no_pii.key_type, notifications_no_pii.billable_units, notifications_no_pii.notification_type, notifications_no_pii.created_at, notifications_no_pii.sent_by, notifications_no_pii.notification_status, notifications_no_pii.international, notifications_no_pii.rate_multiplier, notifications_no_pii.postage
                           Sort Method: external merge Disk: 23936kB
                           ->  Append (cost=114.42..918374.12 rows=232506 width=725) (actual time=2.051..298.229 rows=217563 loops=1)
                                 ->  Bitmap Heap Scan on notifications_no_pii (cost=114.42..8557.55 rows=2042 width=113) (actual time=1.405..1.442 rows=0 loops=1)
                                       Recheck Cond: ((service_id = 'c5956607-20b1-48b4-8983-85d11404e61f'::uuid) AND (notification_type = 'sms'::notification_type) AND (notification_status = ANY ('{sending,sent,delivered,pending,temporary-failure,permanent-failure}'::text[])) AND (created_at >= '2022-05-01 23:00:00'::timestamp without time zone) AND (created_at < '2022-05-02 23:00:00'::timestamp without time zone))
                                       Filter: ((key_type)::text = ANY ('{normal,team}'::text[]))
                                       ->  Bitmap Index Scan on ix_notifications_no_piiservice_id_composite (cost=0.00..113.91 rows=2202 width=0) (actual time=1.402..1.439 rows=0 loops=1)
                                             Index Cond: ((service_id = 'c5956607-20b1-48b4-8983-85d11404e61f'::uuid) AND (notification_type = 'sms'::notification_type) AND (notification_status = ANY ('{sending,sent,delivered,pending,temporary-failure,permanent-failure}'::text[])) AND (created_at >= '2022-05-01 23:00:00'::timestamp without time zone) AND (created_at < '2022-05-02 23:00:00'::timestamp without time zone))
                                 ->  Index Scan using ix_notifications_history_no_pii_service_id_composite on notifications_history_no_pii (cost=0.70..906328.97 rows=230464 width=113) (actual time=0.645..281.612 rows=217563 loops=1)
                                       Index Cond: ((service_id = 'c5956607-20b1-48b4-8983-85d11404e61f'::uuid) AND ((key_type)::text = ANY ('{normal,team}'::text[])) AND (notification_type = 'sms'::notification_type) AND (created_at >= '2022-05-01 23:00:00'::timestamp without time zone) AND (created_at < '2022-05-02 23:00:00'::timestamp without time zone))
                                       Filter: (notification_status = ANY ('{sending,sent,delivered,pending,temporary-failure,permanent-failure}'::text[]))
 Planning Time: 18.032 ms
 Execution Time: 759.001 ms
(21 rows)
Queries using the new view appear to be slower than querying the tables
directly, but the differences I've seen are small: the original queries
already take seconds to execute locally and in Production, so it's not a
big issue.
Notes: Performance
==================
I downloaded a minimal set of columns for testing:
\copy (
    select
        id, notification_type, key_type, created_at, service_id,
        template_id, sent_by, rate_multiplier, international,
        billable_units, postage, job_id, notification_status
    from notifications
) to 'notifications.csv' delimiter ',' csv header;
CREATE TABLE notifications_no_pii (
    id uuid NOT NULL,
    notification_type public.notification_type NOT NULL,
    key_type character varying(255) NOT NULL,
    created_at timestamp without time zone NOT NULL,
    service_id uuid,
    template_id uuid,
    sent_by character varying,
    rate_multiplier numeric,
    international boolean,
    billable_units integer NOT NULL,
    postage character varying,
    job_id uuid,
    notification_status text
);
copy notifications_no_pii from '/Users/ben.thorner/Desktop/notifications.csv' delimiter ',' csv header;
CREATE INDEX ix_notifications_no_piicreated_at ON notifications_no_pii USING btree (created_at);
CREATE INDEX ix_notifications_no_piijob_id ON notifications_no_pii USING btree (job_id);
CREATE INDEX ix_notifications_no_piinotification_type_composite ON notifications_no_pii USING btree (notification_type, notification_status, created_at);
CREATE INDEX ix_notifications_no_piiservice_created_at ON notifications_no_pii USING btree (service_id, created_at);
CREATE INDEX ix_notifications_no_piiservice_id_composite ON notifications_no_pii USING btree (service_id, notification_type, notification_status, created_at);
CREATE INDEX ix_notifications_no_piitemplate_id ON notifications_no_pii USING btree (template_id);
And similarly for the history table. I then created a separate view
across both of these temporary tables using just these columns.
To test performance I created some queries that reflect what is run
by the billing [^5] and status [^6] tasks e.g.
explain analyze select template_id, sent_by, rate_multiplier, international, sum(billable_units), count(*)
from notifications_all_time_view
where
    notification_status in ('sending', 'sent', 'delivered', 'pending', 'temporary-failure', 'permanent-failure')
    and key_type in ('normal', 'team')
    and created_at >= '2022-05-01 23:00'
    and created_at < '2022-05-02 23:00'
    and notification_type = 'sms'
    and service_id = 'c5956607-20b1-48b4-8983-85d11404e61f'
group by 1,2,3,4;
explain analyze select template_id, job_id, key_type, notification_status, count(*)
from notifications_all_time_view
where
    created_at >= '2022-05-01 23:00'
    and created_at < '2022-05-02 23:00'
    and notification_type = 'sms'
    and service_id = 'c5956607-20b1-48b4-8983-85d11404e61f'
    and key_type in ('normal', 'team')
group by 1,2,3,4;
Between running queries I restarted my local database and also ran
a command to purge disk caches [^7].
I tested on a few services:
- c5956607-20b1-48b4-8983-85d11404e61f on 2022-05-02 (high volume)
- 0cc696c6-b792-409d-99e9-64232f461b0f on 2022-04-06 (highest volume)
- 01135db6-7819-4121-8b97-4aa2d741e372 on 2022-04-14 (very low volume)
All execution times using the view are of the same order of magnitude
as the worst case of either table on its own.
[^1]: 00a04ebf54/app/dao/notifications_dao.py (L389)
[^2]: https://stackoverflow.com/questions/49925/what-is-the-difference-between-union-and-union-all
[^3]: https://www.postgresql.org/docs/current/transaction-iso.html
[^4]: https://dba.stackexchange.com/questions/210485/can-sub-selects-change-in-one-single-query-in-a-read-committed-transaction
[^5]: 00a04ebf54/app/dao/fact_billing_dao.py (L471)
[^6]: 00a04ebf54/app/dao/fact_notification_status_dao.py (L58)
[^7]: https://stackoverflow.com/questions/28845524/echo-3-proc-sys-vm-drop-caches-on-mac-osx
The tests were previously covering a shared function that's now only
used once, so I've inlined it and merged the tests together under a
common naming scheme that's consistent with the code under test.
We add a row to the AnnualBilling table whenever a new service is
created, and our billing code assumes this has happened. But when some
of these tests were written that wasn't yet the case, so this updates
them to reflect how the system actually behaves.
Co-authored-by: Pea Tyczynska <pea.tyczynska@digital.cabinet-office.gov.uk>
This is so we have granular data about billable units and costs, so
that we can handle multiple SMS rates within one financial year.
We also cast chargeable_units_used_so_far to integer in that subquery
so we don't get a type mismatch.
Co-authored-by: Leo Hemsted <leo.hemsted@digital.cabinet-office.gov.uk>
This is functionally very similar to query_service_sms_usage_for_year,
except this query filters by organisation and returns for all live services
within that organisation.
To ensure that the cumulative free allowance counter resets properly for
each service, we use the `partition_by` flag to group up the window
function[^1]. This magically handles all the free allowances
independently for each service.
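Roughly how the partitioned window looks in SQLAlchemy (model and
column names here are illustrative):

    from sqlalchemy import func

    # partition_by restarts the running total for each service, so each
    # service's free allowance is drawn down independently
    cumulative_chargeable_units = func.sum(
        FactBilling.billable_units * FactBilling.rate_multiplier
    ).over(
        partition_by=FactBilling.service_id,
        order_by=FactBilling.bst_date,
    )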
[^1]: https://www.postgresql.org/docs/current/tutorial-window.html
Co-authored-by: Leo Hemsted <leo.hemsted@digital.cabinet-office.gov.uk>
This effectively reverts [^1], which was only a temporary change.
I suspect the performance problem will go away with [^2].
While we've clearly been managing without this change, it resulted in
several rows being left incorrect when letter receipts were delayed.
It makes sense to run this task over the same period as we do when
aggregating statuses, since status affects billing.
[^1]: e5c76ffda7
[^2]: https://github.com/alphagov/notifications-api/pull/3542
This can happen in the following scenario (primarily for letters):
1. A service has a mixture of "delivered" and "sending" letters,
which the status task aggregates into two rows:
sending | 123
delivered | 456
2. After the 7 day retention has passed, only the "delivered" letters
will be archived [^1].
3. The status task now looks at the history table [^2], which means
it only sees the "delivered" letters.
4. The "sending" letters are eventually "delivered" and archived (before
the 10 day aggregation cutoff).
5. But the status aggregation task doesn't run.
This commit fixes (5).
[^1]: https://github.com/alphagov/notifications-api/pull/3063
[^2]: f87ebb094d/app/dao/fact_notification_status_dao.py (L51)
* notify_db fixture creates the database connection and ensures the test
db exists and has migrations applied etc. It will run once per session
(test run).
* notify_db_session fixture runs after your test finishes and deletes
all non-static data (static data being things like type tables).
In unit tests that hit the database (i.e. most of them), 99% of the
time we will need to use notify_db_session to ensure everything is
reset. The only time we don't need it is when we're testing things such
as "ensure get X works when the database is empty". This is such a low
percentage of tests that it's easier for us to just use
notify_db_session every time, and ensure that all our tests run much
more consistently, at the cost of a small bit of performance when
running tests.
We used to use notify_db to access the session object for manually
adding, committing, etc. To dissuade usage of that fixture I've moved
the session access to `notify_db_session`. I've then removed all uses
of notify_db that I could find in the codebase.
As a note, if you're writing a test that uses a `sample_x` fixture, all
of those fixtures rely on notify_db_session, so you'll get the teardown
functionality for free. If you're just calling e.g. the `create_x`
functions from db.py, then you'll need to make sure you add the
notify_db_session fixture to your test, even if you aren't manually
accessing the session.
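For example (create_service stands in for the helpers in tests/app/db.py):

    def test_create_service_persists_a_service(notify_db_session):
        # no direct use of the session, but the fixture still guarantees
        # the rows created here are deleted after the test finishes
        service = create_service(service_name="sample service")
        assert service.id is not None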
This was added five years ago but never used. If we want to bring back
variable rates per client, we might as well start fresh, since a lot
has changed since then.
An API build failed because one of the tests expected the database to
be empty, but it actually wasn't. This is probably because another test
that ran earlier on that worker did not clear down properly.
I took the opportunity to refresh all of these tests to ensure they all
tear down correctly, and to use the more modern admin_request fixture
rather than the old client one, which required us to specify headers,
do JSON parsing, etc.
This slowed me down when making changes to the APIs. As well as being
unnecessary given the structural focus of these tests, I found the way
the test data was generated quite confusing.
Before:
time pytest tests/app/billing/test_rest.py
...
pytest tests/app/billing/test_rest.py 4.16s user 0.43s system 55% cpu 8.355 total
After:
time pytest tests/app/billing/test_rest.py
...
pytest tests/app/billing/test_rest.py 2.16s user 0.25s system 70% cpu 3.413 total
This test should be focussed on the structural properties of the
API. We can leave more detailed testing of rate multipliers, etc.
to lower-level DAO tests.
This slowed me down when making changes to the DAO functions. It's
really not necessary to do 3 * 367 DB insertions for both tests.
Before:
time pytest tests/app/dao/test_ft_billing_dao.py -k for_year
...
pytest tests/app/dao/test_ft_billing_dao.py -k for_year 3.95s user 0.40s system 62% cpu 6.971 total
After:
time pytest tests/app/dao/test_ft_billing_dao.py -k for_year
...
pytest tests/app/dao/test_ft_billing_dao.py -k for_year 1.84s user 0.25s system 69% cpu 3.006 total
This is unnecessary since we now have separate "_variable_rates"
tests that check the behaviour for multiple rates explicitly for
the two types of notifications it affects.
This can be calculated from the "free_allowance_used" field and the
"chargeable_units" field, but having it included separately is more
convenient as it can be used directly in Admin [^1].
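In other words (a sketch of the relationship; "charged_units" is an
illustrative name for the new field here):

    charged_units = chargeable_units - free_allowance_used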
[^1]: 417e7370bb/app/templates/views/usage.html (L38-L39)
This represents the number of chargeable_units that were actually
free due to the free allowance - they won't be included in "cost".
Although the existing calculations in Admin [^1][^2] will still be
correct when SMS rates change (it's cost that's the problem), it makes
sense to have all the knowledge about calculating usage consistently in
these two APIs.
Note that the Integer casting is covered by the API-level tests in
test_rest.
[^1]: 474d7dfda8/app/main/views/dashboard.py (L490)
[^2]: c63660d56d/app/main/views/dashboard.py (L350)
This will replace the manual calculations in Admin [^1][^2] for SMS
and also in API [^3] for annual letter costs.
Doing the calculation here also means we correctly attribute free
allowance to the earliest rows in the billing table - Admin doesn't
know when a given rate was applied so can't do this without making
assumptions about when we change our rates.
Since the calculation now depends on annual billing, we need to
change all the tests to make sure a suitable row exists. I've also
adjusted the test data to match the assumption that there can only
be one SMS rate per bst_date.
Note about "OVER" clause
========================
Using "rows=" ("ROWS BETWEEN") makes more sense than "range=" as
we want the remainder to be incremental within each group in a
"GROUP BY" clause, as well as between groups i.e
# ROWS BETWEEN (arbitrary numbers to illustrate)
date=2021-04-03, units=3, cost=3.29
date=2021-04-03, units=2, cost=4.17
date=2021-04-04, units=2, cost=5.10
vs.
# RANGE BETWEEN
date=2021-04-03, units=3, cost=4.17
date=2021-04-03, units=2, cost=4.17
date=2021-04-04, units=2, cost=5.10
See [^4] for more details and examples.
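In SQLAlchemy terms the frame looks roughly like this (model and column
names are illustrative; rows=(None, 0) renders as ROWS BETWEEN UNBOUNDED
PRECEDING AND CURRENT ROW):

    from sqlalchemy import func

    # with rows= each row gets its own incremental running total; with
    # range_= peer rows (same bst_date) would all share the same total
    running_cost = func.sum(FactBilling.cost).over(
        partition_by=FactBilling.service_id,
        order_by=FactBilling.bst_date,
        rows=(None, 0),
    )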
[^1]: https://github.com/alphagov/notifications-admin/blob/master/app/templates/views/usage.html#L60
[^2]: 072c3b2079/app/billing/billing_schemas.py (L37)
[^3]: 474d7dfda8/app/templates/views/usage.html (L98)
[^4]: https://learnsql.com/blog/difference-between-rows-range-window-functions/
There is no such thing as a "billing unit". The data this field
contained was also a confusing mixture of two types:
- For emails and letters, it was just "notifications_sent".
- For SMS, it was the "chargeable_units" (billable * multiplier).
This replaces the single, ambiguous "billing_units" field with
"chargeable_units" and "notifications_sent" in both usage APIs.
Once Admin is using them we can remove the old field.