Flake8 Bugbear checks for some extra things that aren’t code style
errors, but are likely to introduce bugs or unexpected behaviour. A
good example is having mutable default function arguments, which are
shared between every call to the function, so mutating a value in one
place can unexpectedly change it in another.
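For illustration, a minimal sketch of that bug (the names are made up
for the example):

```python
# The default list is created once, when the function is defined, and
# is then shared by every call that doesn't pass its own list.
def add_recipient(recipient, recipients=[]):  # Bugbear B006 flags this
    recipients.append(recipient)
    return recipients

add_recipient("a@example.com")  # ['a@example.com']
add_recipient("b@example.com")  # ['a@example.com', 'b@example.com'] - oops

# The conventional fix: default to None, create a fresh list per call.
def add_recipient_fixed(recipient, recipients=None):
    if recipients is None:
        recipients = []
    recipients.append(recipient)
    return recipients
```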
This commit enables all the extra warnings provided by Flake8 Bugbear,
except for:
- the line length one (because we already lint for that separately)
- B903 Data class should either be immutable or use `__slots__` because
this seems to false-positive on some of our custom exceptions
- B902 Invalid first argument 'cls' used for instance method because
some SQLAlchemy decorators (eg `declared_attr`) make methods that
aren't formally class methods take a class, not an instance, as their
first argument
It also disables:
- _B306: BaseException.message is removed in Python 3_ because I think
our exceptions have a custom structure that means the `.message`
attribute is still present
Matches the work done in other repos:
- https://github.com/alphagov/notifications-admin/pull/3172/files
A few weeks ago, we deleted some pdf letters that had reached their
retention period. However, these letters were in the 'created' state,
so arguably we should not have deleted them: we were expecting to
resend them and then couldn't. Part of the reason for this is that we
set the letters' status back to `created` but did not nullify the
`sent_at` timestamp, meaning the check on
ebb43082d5/app/dao/notifications_dao.py (L346)
did not catch it. Regardless of that check, which controls whether the
files were removed from S3, they were also archived into the
`notification_history` table, as happens by default.
This commit changes our code so that letters that are not in their
final state do not go through our retention process. This could mean
they violate their retention policy, but that is likely the lesser of
two evils (the other being that we delete them and are unable to
resend them).
Note: `sending` letters are also excluded from removal, because there
is a risk that we give a letter to DVLA and put it in `sending`, but
they later come back to us saying they've had problems and need us to
resend it.
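A minimal sketch of the shape of the change (the status names come from
this message; the query itself is illustrative, not the real retention
code):

```python
# Illustrative only: letters that haven't reached a final state
# ('created' and 'sending' included) are excluded from deletion.
NON_FINAL_LETTER_STATES = {"created", "sending"}

letters_to_delete = [
    letter
    for letter in letters_past_retention
    if letter.status not in NON_FINAL_LETTER_STATES
]
```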
This is for the IBAG format (similar to CAP format, but proprietary)
used in the XMLs that we exchange with broadcast providers (specifically
Vodafone).
Now that we’re grouping jobs sent from contact lists within their
parent, they shouldn’t also be listed on the jobs page at the top level.
The jobs page uses the uploads API, not the jobs API, so this commit
makes sure the filtering happens in the right place.
(instead of using the id from broadcast_event)
We need every XML blob we send to have a different ID. If we're sending
a different XML blob to each provider, then each one should have a
different identifier. So, instead of taking the identifier from the
broadcast_event, take it from the broadcast_provider_message instead.
Note: we're still going to the broadcast_event for most fields, to
ensure they stay consistent between different providers. The last thing
we want is for different phone networks to get different content.
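A sketch of the resulting split (the attribute names are assumptions,
not the real models):

```python
# Illustrative: the identifier is per provider; the content fields all
# come from the shared event so every network broadcasts the same thing.
def build_ibag_fields(provider_message):
    event = provider_message.broadcast_event  # assumed relationship
    return {
        "identifier": str(provider_message.id),  # unique per provider
        "sent": event.sent_at.isoformat(),       # shared across providers
        "content": event.transmitted_content,    # shared across providers
    }
```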
The limit means we only return 50k letters; if there are more than that
for a service, we'll skip the rest and they won't be picked up until
the next day.
If you remove the limit, SQLAlchemy prefetches query results so it can
build up ORM results, for example collapsing joined rows into single
objects with children. SQLAlchemy streams the data into a buffer, and
normally will still prefetch the entire result set so it can ensure the
integrity of the session (so that if you modify one result that is
duplicated further down in the results, both rows are updated in the
session, for example). We don't care about that, but we do care about
preventing the result set from taking up too much memory. We can use
`yield_per` to yield from SQLAlchemy to the iterator (in this case the
`for letter in letters_awaiting_sending` loop in letters_pdf_tasks.py) -
this means every time we hit 10,000 rows, we go back to the database to
get the next 10k. This way, we only ever need 10k rows in memory at a
time.
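A minimal sketch of the pattern (the model, filter, and imports assume
the app's layout; `process` is a hypothetical handler):

```python
from app import db
from app.models import Notification

# Stream results 10k at a time instead of materialising the whole set.
letters_awaiting_sending = (
    db.session.query(Notification)
    .filter(Notification.status == "created")
    .yield_per(10000)
)

for letter in letters_awaiting_sending:
    # only ~10k ORM objects are in memory at once; when a chunk is
    # exhausted, SQLAlchemy fetches the next 10k from the open cursor
    process(letter)
```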
This has some caveats, mostly around how we handle the data the query
returns. They're a bit hard to parse but I'm pretty sure the notable
limitations are:
* It's dangerous to modify ORM objects returned by yield_per queries
* It's dangerous to join in a yield_per query if you think there will be
more than one row per item (for example, if you join from notification
to service, there'll be multiple result rows containing the same
service, and if these are split over different yield chunks, we may
experience undefined behaviour)
Both limitations come down to there being no guarantee of one unique
row per item.
For more reading:
- https://docs.sqlalchemy.org/en/13/orm/query.html?highlight=yield_per#sqlalchemy.orm.query.Query.yield_per
- https://www.mail-archive.com/sqlalchemy@googlegroups.com/msg12443.html
Previously we were returning the entire ORM object. Returning columns
has a few benefits:
* It means we can join on to services there and then, avoiding second
queries to get the crown status of the service later in the collate
flow.
* It massively reduces the amount of data we return - particularly free
text fields like personalisation, which could potentially be quite big.
5 columns rather than 26 columns.
* Minor thing, but it will skip some CPU cycles, as SQLAlchemy will no
longer construct an ORM object and try to keep track of changes. We
know this function doesn't change any of the values to persist them
back, so this is an unnecessary step from SQLAlchemy.
Disadvantages are:
* The `dao_get_letters_to_be_printed` return interface is now much more
tightly coupled to the `get_key_and_size_of_letters_to_be_sent_to_print`
function that calls it.
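A hedged before/after sketch of the change (the column list and filter
are illustrative; the real function picks its own five columns):

```python
from app import db
from app.models import Notification, Service

# Before: full ORM objects, every column, tracked by the session.
letters = db.session.query(Notification).filter(
    Notification.notification_type == "letter"
)

# After: plain tuples with only the columns the collate flow needs,
# joining to Service up front so crown status needs no second query.
letters = (
    db.session.query(
        Notification.id,
        Notification.created_at,
        Notification.postage,
        Service.crown,
    )
    .join(Service, Notification.service_id == Service.id)
    .filter(Notification.notification_type == "letter")
)
```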
We've seen issues where tasks mysteriously hang and do not process
large volumes of letters - in this case >150k letters in the created
state. To try to get at least some letters out of the door, limit the
query to only return 50k letters per postage type. We may need to run
the task multiple times, or letters may get delayed until the next day,
when they'd be picked up (provided there's enough capacity then). The
task should only be re-run AFTER the ftp tasks have all finished and
updated the letters to sending, or we run the risk of sending the same
letters twice.
For context, the most letters we've ever sent in a day is ~65k, in
March of this year.
We had issues where we had 150k 2nd class notifications and the collate
task never ran properly, presumably because the volume of data being
returned was too big.
To try to help with this, we can switch to streaming rather than using
`.all()` and building up lists of data. This should help, though the
initial query may still be a problem.
We have had a few instances where letters have caused problems.
Particularly for precompiled letters, the issue often comes from the
same service. The hope is that adding a sort order will help the print
provider narrow down the problem.
There is a small degradation in the performance of the query, but not
enough to be a concern.
At the moment we display the count of scheduled jobs on the dashboard
by sending all the scheduled jobs to the admin app and letting it work
out the stats.
This is inefficient and, because the get jobs response has a page size
of 50, becomes incorrect if a service schedules more than 50 jobs.
This commit adds a separate endpoint which gives the admin app the stats
it needs directly and correctly.
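A hedged sketch of what such an endpoint might look like (the route,
blueprint, and DAO names are made up, not the real ones):

```python
from flask import Blueprint, jsonify

job_blueprint = Blueprint("job", __name__)

@job_blueprint.route("/service/<uuid:service_id>/job/scheduled-job-stats")
def get_scheduled_job_stats(service_id):
    # count in the database instead of paging every scheduled job back
    # to the admin app 50 at a time
    count = dao_get_count_of_scheduled_jobs(service_id)  # hypothetical DAO
    return jsonify(count=count), 200
```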
Since we’ve doubled the number of rows in a job, jobs can take twice as
long to insert all the notifications. We don’t check for missing rows
until we’re pretty confident that the original tasks have finished
processing. This means we need to double the time we wait to still be
as sure.
We don't name letter folders based on the day we send letters, but on
the day we create them. If we process a letter a second time for
whatever reason, even if it's a couple of days later, it'll still go in
a folder based on the created_at timestamp. There's still one slightly
confusing case: if the timestamp is after 5:30pm, the folder will be
for the day after. But that's still derived from the creation time, so
I think created_at still makes the most sense.
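A minimal sketch of that rule (the helper name is made up, and the real
code also has to care about timezones, which this ignores):

```python
from datetime import datetime, time, timedelta

def letter_folder_day(created_at: datetime) -> str:
    # letters created after 5:30pm go in the next day's folder, but the
    # name is always derived from created_at, never from "now"
    if created_at.time() >= time(17, 30):
        created_at += timedelta(days=1)
    return created_at.strftime("%Y-%m-%d")
```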
Remove the term `sending_date` to try and make this relationship more
apparent.
`_now`? Why would we ever use a different `_now`? Instead say
`created_at`, because that's what it'll always be set to, even if we're
replaying old letters. We always derive the folder name from when the
letter was created, or we might not know where to look to find it.
`dont_use_sending_date` doesn't really tell us what happens if we don't
use it - the answer is that we return an empty string and ignore the
folder entirely, so let's call it that.
Also, remove the use of freezegun in the tests, to prove that we don't
use the current time in any calculations. Also add an assert to a mock
in the get_pdf_for_templated_letter test, because we were mocking but
not asserting before, so the tests didn't fail when the function
signature changed.
Our code was assuming that any notifications with `international` set to
`True` were text messages. It was then trying to look up delivery
information for a notification which wasn’t sent to a phone number,
causing an exception.
`international` for letters in `ft_billing` was always False. Now that
letters can be international, this commit sets the column to the
notification's `international` value.
Use the new endpoint from cbc proxy. Create a new task that just
serializes the event and sends it across, rather than sending a
template and the broadcast message.
This needs some changes to serialize to make it JSON-friendly etc. It
also expects sent_at and transmitted_finishes_at to always be set (we
set them in the code but don't enforce it in the DB right now), as
they're required by the utils template. Not sure whether we'll update
the DB constraints to be more strict or the utils template to be more
permissive just yet - wait until we find out more about the
requirements of the CBCs we integrate with.
Reflects the new name of the feature.
Note that the name of the underlying table hasn’t changed because it’s
explicitly set to `service_whitelist`. Changing this will be a more
involved process.
It's clear that we need a way to track updates to a broadcast message.
It's also clear that we'll need some kind of audit log that captures
exactly what was sent out in a message.
This commit adds a new database table, `broadcast_event`, which maps 1:1
with CAP XML sent to the CBCs. We'll create one of these just before
sending out.
The main driver for this was that cancel and update messages need to
contain a list of references to all previous messages that they're
amending. Each reference has the format
`{sender},{identifier},{sent_timestamp}`, and the identifier itself
needs to be unique for each message.
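A sketch of how such a reference list might be assembled (the function
and attribute names are assumptions):

```python
# Illustrative: one reference per earlier event, in the
# `{sender},{identifier},{sent_timestamp}` format described above.
def build_references(sender, earlier_events):
    return [
        f"{sender},{event.id},{event.sent_at.isoformat()}"
        for event in earlier_events
    ]
```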
'Usage for all services' is a platform admin report that groups letters
by postage. We want it to show `europe` and `rest-of-world` letters
under a single category of `international`, so this updates the query
to do that and to order appropriately.
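A hedged sketch of the grouping (SQLAlchemy 1.3-style `case`; the model
and column names are assumptions, not the real query):

```python
from sqlalchemy import case, func

from app import db
from app.models import Notification

# Fold the two international postage values into one reporting category.
postage_group = case(
    [(Notification.postage.in_(["europe", "rest-of-world"]), "international")],
    else_=Notification.postage,
).label("postage")

rows = (
    db.session.query(postage_group, func.count().label("letter_count"))
    .group_by(postage_group)
    .order_by(postage_group)
    .all()
)
```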
The task takes a broadcast_message_id and makes a post to the
cbc-proxy. For now, hardcode the URL to the Notify stub. The stub
requires the template as the admin/api get it, so use the marshmallow
schema to JSON-dump it.
Note - this also required us to tweak the BroadcastMessage.serialize
function so that it converts UUIDs into string IDs - flask's jsonify
function does that for free but requests.post doesn't, sadly.
If the request fails (either 4xx or 5xx), just raise an exception and
let it bubble up for now - in the future we'll add retry logic.
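A hedged sketch of the task's shape (the import paths, task name, URL,
and DAO are illustrative, not the real ones):

```python
import requests

from app import notify_celery  # assumed import path for the task decorator
from app.dao.broadcast_message_dao import dao_get_broadcast_message  # hypothetical

@notify_celery.task(name="send-broadcast-message")
def send_broadcast_message(broadcast_message_id):
    message = dao_get_broadcast_message(broadcast_message_id)

    # serialize() must return plain string IDs for any UUID fields:
    # requests won't coerce UUIDs to JSON the way flask.jsonify does
    payload = message.serialize()

    response = requests.post(
        "https://notify-stub.example/broadcasts",  # hardcoded stub URL for now
        json=payload,
    )
    # no retry logic yet: any 4xx/5xx raises and bubbles up to celery
    response.raise_for_status()
```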