This commit proposes two types of caching to speed up the processing time
for POST notification requests.
Since every notification needs a template, if we can avoid going to the
db to get the template then this should be faster.
The first cache is an LRU cache, which means that, for a given
`template_id`, `service_id` and `version` the app will go to memory
rather than the database to get the template. This should be much
faster, and a given combination of those three parameters is guaranteed
to always return the same result.
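A minimal sketch of what the in-memory layer could look like (the DAO call
and the cache size here are illustrative, not the actual implementation):

    from functools import lru_cache

    def _fetch_template_from_db(service_id, template_id, version):
        ...  # placeholder for the real DAO query

    @lru_cache(maxsize=1024)
    def get_cached_template(service_id, template_id, version):
        # A given (service_id, template_id, version) is immutable, so the
        # cached value can never go stale.
        return _fetch_template_from_db(service_id, template_id, version)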
But, we don’t know the template version at the time of making the call.
So this commit also adds another layer of caching to find the most
recent version number of a template. Since this can be changed by other
instances we need to cache it externally, so this cache uses Redis.
Redis will be a lot slower than going to memory, but hopefully quicker
than going to the database. By only caching the minimal amount of data
possible in Redis (just the version as a number) we’re hopefully
minimising the performance hit from going to an external service.
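A rough sketch of the Redis layer, assuming redis-py (the key layout and
TTL are illustrative):

    import redis

    redis_client = redis.Redis()

    def _fetch_latest_version_from_db(service_id, template_id):
        ...  # placeholder for the real DAO query

    def get_latest_template_version(service_id, template_id):
        key = f"service-{service_id}-template-{template_id}-version"
        cached = redis_client.get(key)
        if cached is not None:
            return int(cached)
        version = _fetch_latest_version_from_db(service_id, template_id)
        # Store only the version number, with a TTL so changes made by other
        # instances are picked up reasonably quickly.
        redis_client.set(key, version, ex=600)
        return version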
SQS fails if the message body is over 256KB. Normally our messages are
quite small, but if we're using the new save-api-email task with an
email that has a large body, we can get over that limit. If so, handle
the exception and fall back to the existing code path (saving to the
database and sending a deliver-email task, which only has a notification
id).
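A sketch of the fallback, assuming the oversized message surfaces as an
exception when the task is queued (the exception type, helper names and
queue names are assumptions):

    from botocore.exceptions import ClientError

    try:
        save_api_email.apply_async((notification_data,), queue="save-api-email-tasks")
    except ClientError:
        # Message body over the 256KB SQS limit: save to the database now and
        # queue deliver-email, which only carries the notification id.
        notification = persist_notification(notification_data)
        deliver_email.apply_async((str(notification.id),), queue="send-email-tasks")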
It is possible a service has a data retention period that is shorter than the time it takes to get a delivery receipt.
This PR refactors process_ses_receipt to update NotificationHistory if the Notification has already been purged.
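A rough illustration of the fallback, assuming SQLAlchemy models named
Notification and NotificationHistory (the lookup and fields are illustrative):

    from datetime import datetime

    def record_delivery_status(reference, new_status):
        notification = Notification.query.filter_by(reference=reference).one_or_none()
        if notification is None:
            # Already purged under the service's data retention, so record the
            # status on the history row instead.
            NotificationHistory.query.filter_by(reference=reference).update(
                {"status": new_status, "updated_at": datetime.utcnow()}
            )
        else:
            notification.status = new_status
            notification.updated_at = datetime.utcnow()
        db.session.commit()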
So that we keep a record of who first uploaded a list, it's better to archive
a list than completely delete it.
The list in the database doesn’t contain any recipient info so this
isn’t a change to what data we’re retaining.
This means updating the endpoints that get contact lists to exclude ones
that are archived.
Once a contact list is gone from the database there’s no way to
reference it again. Any jobs that used it have made their own copy.
So we can clean it up, meaning we’re not storing personal data longer
than we need to.
This was one of the things we de-scoped when we first shipped this feature.
In order to safely delete a list, we first need to make sure any jobs
aren’t referencing it.
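For example, a check along these lines (the Job model and column name are
assumptions) could run before the delete:

    def contact_list_is_in_use(contact_list_id):
        # Any job still pointing at the list means we can't delete it yet.
        return Job.query.filter_by(contact_list_id=contact_list_id).count() > 0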
Instead of saving the email notification to the db, add it to a queue to save later.
This is an attempt to alleviate pressure on the db from the api requests.
This initial PR is to trial it and see if we see an improvement in API performance and a reduction in queue pool errors. If we are happy with this we could remove the hard-coding of the service id.
In a nutshell:
- If POST /v2/notification/email is from our high volume service (hard-coded for now) then create a notification to send to a queue to persist the notification to the db (see the sketch after this list).
- create a save_api_email task to persist the notification
- return the notification
- New worker app to process the save_api_email tasks.
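Roughly, the new branch in the POST handler looks like this (the helper
names, queue names and hard-coded id are illustrative):

    HIGH_VOLUME_SERVICE_ID = "11111111-1111-1111-1111-111111111111"  # placeholder

    def post_email_notification(service, form):
        if str(service.id) == HIGH_VOLUME_SERVICE_ID:
            # Build the notification without saving it, queue a task to
            # persist it later, and return it to the caller straight away.
            notification = build_notification(service, form)
            save_api_email.apply_async((serialise(notification),), queue="save-api-email-tasks")
            return notification
        # Existing path: save to the db, then queue delivery by id only.
        notification = persist_notification(service, form)
        deliver_email.apply_async((str(notification.id),), queue="send-email-tasks")
        return notification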
- fix test name
- make query filters consistent with each other
- add comment for clarity
- add inner loop to continue to insert and delete notifications while the delete count is greater than 0
The insert_notification_history_delete_notifications function uses a temp table to store the data to insert and delete. This will save extra queries while performing the insert and delete operations.
The function is written in such a way that if the task is stopped while processing, when it's started up again it will just pick up where it left off.
I've made a decision to delete all test data in one query; I don't anticipate a problem with that.
The performance of this might also be better than last night's test because we are inserting everything we need for the NotificationHistory insert, so we don't need the join to Notifications to perform the insert.
Need to test the performance of this function, then we can call it from the task.
- Create a temporary table to insert ids of the desired rows, limit by 10K (might be too low).
- Insert into NotificationHistory select from Notifications where id in temp table
- Delete from Notifications where id in temp table
- Drop temp table
We should be able to iterate on this. The query stats for the query to create the temp table are very good, 17ms.
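A rough shape of the function, with simplified SQL (the column list,
retention filter and parameters are illustrative):

    from sqlalchemy import text

    def insert_notification_history_delete_notifications(session, cutoff, limit=10_000):
        session.execute(text("""
            CREATE TEMP TABLE notification_ids_to_move AS
            SELECT id FROM notifications
            WHERE created_at < :cutoff
            ORDER BY created_at
            LIMIT :limit
        """), {"cutoff": cutoff, "limit": limit})

        # The real insert copies every column and upserts on conflict; only a
        # few columns are shown here.
        session.execute(text("""
            INSERT INTO notification_history (id, created_at, notification_type)
            SELECT id, created_at, notification_type
            FROM notifications
            WHERE id IN (SELECT id FROM notification_ids_to_move)
            ON CONFLICT (id) DO NOTHING
        """))

        deleted = session.execute(text("""
            DELETE FROM notifications
            WHERE id IN (SELECT id FROM notification_ids_to_move)
        """)).rowcount

        session.execute(text("DROP TABLE notification_ids_to_move"))
        session.commit()
        return deleted

The task can then keep calling this in a loop while the returned delete
count is greater than 0, which is the inner loop mentioned above.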
You should always order by when doing a limit/offset, to guarantee that
the second time you run that query the order hasn't changed and you
aren't just repeating the task with an overlap of notifications.
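For example, the paginated query needs a stable ordering, something like
(model and filter are illustrative):

    query = (
        Notification.query
        .filter(Notification.created_at < cutoff)
        .order_by(Notification.created_at)  # without this, offset windows can overlap or skip rows
        .limit(page_size)
        .offset(page * page_size)
    )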
Luckily, in this case we haven't lost any data, because:
* we have an on conflict do update statement so when we returned
duplicate rows it would just do an update
* when we delete, we cross-reference with the notification history, so if
a row got missed, we won't delete it.
This resulted in, for example, govuk email still having a handful of
notifications in the table from the 9th, despite the task running successfully
every day until the 18th of March.
Order by created_at ascending so that we start with the oldest notifications
first, in case it's interrupted part way through.
Insert/update, and then delete, notifications in hourly batches. This
means that if the task gets interrupted part-way through, we'll have at
least something to show for it. Previously we would insert and update
into the history table but might not delete from the notification table
properly.
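The batching is roughly (helper names are illustrative):

    from datetime import timedelta

    def move_notifications_in_hourly_batches(start, end):
        window_start = start
        while window_start < end:
            window_end = window_start + timedelta(hours=1)
            # Insert/update the history rows for this hour, then delete them,
            # so an interrupted run still leaves completed hours fully done.
            insert_update_notification_history(window_start, window_end)
            delete_notifications(window_start, window_end)
            window_start = window_end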
Keeping the offsets and limits for confidence around reliability and
queries timing out.
Keeping the join to notification_history to ensure we don't delete
anything prematurely while our DB is in a bit of a weird state with lots
of these tasks failing over the last week.
This runs on the new `sms-callbacks` queue. The function
`process_sms_client_response` has been replaced with a task called
`process_sms_client_response`. This involved some reorganisation of the
existing code and tests.
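The shape of the change, roughly (the Celery app object and argument names
are illustrative):

    @notify_celery.task(name="process-sms-client-response", queue="sms-callbacks")
    def process_sms_client_response(status, provider_reference, client_name):
        # Same logic as the old synchronous function, now run off the
        # sms-callbacks queue.
        ...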
We are formatting the postcode here, because if we did it in template
preview, that could break flows like API and admin one-off, since
we are not validating postcode there yet, and format_postcode
needs a nice validated postcode.
We are not doing it in admin, as then we would have to either
rewrite the CSV file or pass data differently to the API. The first would
be nasty, the second is a lot of overhead.
In the long run we might want to move postcode formatting to
template preview, so that the postcode in the letter preview looks the same
before and after the user sends it, but for now, to get it out quickly, it's
better to format the postcode here in the task.
- Table to store metadata for the emergency contact list for a service.
- Endpoint for fetching contact lists for a service.
- Endpoint for saving a contact list for a service.
The list will be stored in S3. The service will then be able to send emergency announcements to staff.
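The metadata table could look something like this (columns are illustrative;
the list contents themselves live in S3, not the database):

    import uuid

    from sqlalchemy.dialects.postgresql import UUID

    # db is the app's Flask-SQLAlchemy instance
    class ServiceContactList(db.Model):
        __tablename__ = "service_contact_list"

        id = db.Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
        service_id = db.Column(UUID(as_uuid=True), db.ForeignKey("services.id"), nullable=False)
        original_file_name = db.Column(db.String, nullable=False)
        row_count = db.Column(db.Integer, nullable=False)
        template_type = db.Column(db.String, nullable=False)
        created_at = db.Column(db.DateTime, nullable=False)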
https://github.com/alphagov/notifications-api/pull/2742/files#diff-d9a6761afaff4a491e60b64f6063654fL265
introduced a change in behaviour which causes template preview to 500
for a precompiled letter that is invalid but not because the content is
outside the printable area.
This changes it back to the previous behaviour, where we b64encode and
utf-8 decode the page for a page that is invalid for reasons other than
content outside the printable area, for example a non-portrait A4 letter.
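i.e. something along these lines for those pages (variable names are
illustrative):

    import base64

    # Return the raw page image base64-encoded, as before, for pages that are
    # invalid for reasons other than content outside the printable area.
    page_data = base64.b64encode(png_bytes).decode("utf-8")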
I didn't end up writing a unit test for this because I really can't
figure out the existing behaviour we expect for this use case. This
method is too big and complex but I'm confident by looking at the change
in the commit mentioned above that this puts behaviour back to how it
was. Manual testing also confirms the fix.
I've also taken this opportunity to improve two test names whilst I was
looking at things.
- Update check_sms_content_char_count to use the SMSTemplate.is_message_too_long function, and update the error message to align with the message returned by the admin app (see the sketch after this list).
- Update the code used by version 1 of the API to use the validate_template method.
- I did find a couple of services still using the old API; however, this change should not affect them as I checked the messages being sent and they are not too long.
- We will be sending a message to them to see if they can upgrade.
- Update the log message for authentication to include the URL - makes it easier to track if a service is using version 1 of the API.
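A sketch of the updated check (the error type and message wording are
assumptions, aligned with the admin app's message):

    def check_sms_content_char_count(template):
        # is_message_too_long comes from the template class mentioned above.
        if template.is_message_too_long():
            raise BadRequestError(
                message="Your message is too long. Please shorten it and try again.",
                status_code=400,
            )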
If the PNG page is invalid then show the overlay for that page.
If the PNG page is not invalid, even if other pages are invalid, then
don't show the overlay for that page.
This keeps the behaviour where, if you get the PDF for a letter which has
invalid pages, the whole PDF is requested (not just a single page)
and all of the pages include the overlay.
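Roughly the per-page rule (names are illustrative):

    def should_show_overlay(page_number, invalid_pages, whole_pdf_requested):
        if whole_pdf_requested:
            # Requesting the whole PDF keeps the overlay on every page when
            # any page is invalid.
            return bool(invalid_pages)
        # A single-page PNG only gets the overlay if that page itself is invalid.
        return page_number in invalid_pages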
Tests have been improved to be more explicit about what we are testing
We want to start granting access to the org page. But it will be a bit
weird if the invites come from us personally, since the people we’re
inviting don’t know us.
It makes more sense, and sounds more official if the invites appear to
come from the ‘GOV.UK Notify team’ instead.
We moved from sending statsd metrics to Hosted Graphite to sending them to
a statsd server running on the PaaS. Therefore we no longer need to send
statsd metrics with a particular prefix to the statsd app, as it is only
receiving statsd metrics from our apps (not from other users, as would have
been the case with Hosted Graphite).
This should change no behaviour as the only place the environment
variable was being used was in the gunicorn config and it was an empty
string which is the default behaviour anyway as per:
https://docs.gunicorn.org/en/stable/settings.html#statsd-prefix
If we know that the most recently returned letter was reported more than
7 days ago then we know, without having to go to the database again,
that the count of returned letters in the last 7 days is 0.
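A sketch of the shortcut (the DAO helpers are illustrative):

    from datetime import datetime, timedelta

    def count_returned_letters_last_7_days(service_id):
        last_reported = most_recent_returned_letter_date(service_id)
        seven_days_ago = datetime.utcnow() - timedelta(days=7)
        if last_reported is None or last_reported < seven_days_ago:
            # Nothing reported in the last 7 days, so skip the count query.
            return 0
        return count_returned_letters_since(service_id, seven_days_ago)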