Instead of saving the email notification to the db, add it to a queue to be saved later.
This is an attempt to alleviate pressure on the db from the api requests.
This initial PR is to trial it and see if we get an improvement in api performance and a reduction in queue pool errors. If we are happy with this we could remove the hard coding of the service id.
In a nutshell:
- If the POST /v2/notification/email request is from our high volume service (hard coded for now), create the notification and send it to a queue that will persist it to the db.
- create a save_api_email task to persist the notification
- return the notification
- New worker app to process the save_api_email tasks (a rough sketch of the flow is below).
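In rough terms, the flow looks like this. This is a sketch, not the actual diff: the queue name, the `encode_notification` helper and the hard-coded id value are illustrative.

```python
from app import notify_celery
from app.models import Notification
from app.dao.notifications_dao import dao_create_notification

# Hard coded for now: the id of the high volume service we are trialling this with.
HIGH_VOLUME_SERVICE_ID = "11111111-1111-1111-1111-111111111111"


def persist_or_queue_email_notification(notification, service_id):
    if str(service_id) == HIGH_VOLUME_SERVICE_ID:
        # Don't touch the db inside the API request; the worker will do the insert.
        # encode_notification is a stand-in for however we serialise the notification.
        save_api_email.apply_async(
            kwargs={"encoded_notification": encode_notification(notification)},
            queue="save-api-email-tasks",
        )
    else:
        dao_create_notification(notification)
    return notification


@notify_celery.task(name="save-api-email")
def save_api_email(encoded_notification):
    # Runs on the new worker app: persist the notification the API queued earlier.
    dao_create_notification(Notification(**encoded_notification))
```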
- fix test name
- make query filters consistent with each other
- add comment for clarity
- add an inner loop to continue inserting and deleting notifications while the delete count is greater than 0 (roughly as sketched below)
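The inner loop from the last bullet looks roughly like this (a sketch; the wrapper name and keyword arguments are assumptions, and `insert_notification_history_delete_notifications` is the function described below):

```python
def move_notifications_to_history(service_id, notification_type, timestamp_to_delete_backwards_from):
    deleted = 1
    total_deleted = 0
    # Keep inserting into history and deleting from notifications in batches
    # until a pass deletes nothing, i.e. the delete count is no longer > 0.
    while deleted > 0:
        deleted = insert_notification_history_delete_notifications(
            notification_type=notification_type,
            service_id=service_id,
            timestamp_to_delete_backwards_from=timestamp_to_delete_backwards_from,
        )
        total_deleted += deleted
    return total_deleted
```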
The insert_notification_history_delete_notifications function uses a temp table to store the data to insert and delete. This will save extra queries while performing the insert and delete operations.
The function is written in such a way that if the task is stopped while processing, it will just pick up where it left off when it's started up again.
I've made a decision to delete all test data in one query; I don't anticipate a problem with that.
The performance of this might also be better than last night's test because we are inserting everything we need for the NotificationHistory insert, so we don't need the join to Notifications to perform the insert.
Need to test the performance of this function, then we can call it from the task.
- Create a temporary table to insert ids of the desired rows, limit by 10K (might be too low).
- Insert into NotificationHistory select from Notifications where id is in the temp table
- Delete from Notifications where id in temp table
- Drop the temp table
We should be able to iterate on this; the full function is sketched below. The query stats for the query that creates the temp table are very good: 17ms.
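Put together, the function is shaped something like this. It's a sketch: the column list is abbreviated, and (per the note above about avoiding the join back to Notifications) the temp table here carries the columns the history insert needs rather than only ids.

```python
from sqlalchemy import text

from app import db


def insert_notification_history_delete_notifications(
    notification_type, service_id, timestamp_to_delete_backwards_from, qry_limit=10_000
):
    # 1. Temp table holding the rows we want to move, oldest first, limited to qry_limit.
    #    It carries every column the history insert needs, so this pass never has to
    #    join back to notifications.
    db.session.execute(
        text(
            """
            CREATE TEMP TABLE NOTIFICATION_ARCHIVE ON COMMIT DROP AS
            SELECT id, job_id, service_id, template_id, created_at, notification_status
            FROM notifications
            WHERE service_id = :service_id
              AND notification_type = :notification_type
              AND created_at < :timestamp
            ORDER BY created_at
            LIMIT :qry_limit
            """
        ),
        {
            "service_id": service_id,
            "notification_type": notification_type,
            "timestamp": timestamp_to_delete_backwards_from,
            "qry_limit": qry_limit,
        },
    )
    # 2. Insert into NotificationHistory straight from the temp table,
    #    updating on conflict so reruns are safe.
    db.session.execute(
        text(
            """
            INSERT INTO notification_history (id, job_id, service_id, template_id, created_at, notification_status)
            SELECT id, job_id, service_id, template_id, created_at, notification_status
            FROM NOTIFICATION_ARCHIVE
            ON CONFLICT (id) DO UPDATE SET notification_status = EXCLUDED.notification_status
            """
        )
    )
    # 3. Delete from Notifications where the id is in the temp table.
    result = db.session.execute(
        text("DELETE FROM notifications WHERE id IN (SELECT id FROM NOTIFICATION_ARCHIVE)")
    )
    # 4. Commit (which drops the temp table) and report the delete count so the
    #    caller can keep looping while it is greater than 0.
    db.session.commit()
    return result.rowcount
```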
You should always use an order by when doing a limit/offset, to guarantee that
the second time you run that query the order hasn't changed and you
aren't just repeating the task with an overlapping set of notifications.
Luckily, in this case we haven't lost any data, because:
* we have an on conflict do update statement so when we returned
duplicate rows it would just do an update
* when we delete, we cross-reference with the notification history, so if
a row was ever missed we won't delete it.
This resulted in, for example, govuk email still having a handful of
notifications in the table from the 9th, despite the task running successfully
every day until the 18th of March.
Order by created_at ascending so that we start with the oldest notifications
first, in case the task is interrupted part way through.
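So the batched select now looks something like this (a SQLAlchemy sketch; `service_id`, `timestamp_to_delete_backwards_from` and `query_limit` come from the surrounding task):

```python
from app.models import Notification

notifications_to_move = (
    Notification.query.filter(
        Notification.service_id == service_id,
        Notification.created_at < timestamp_to_delete_backwards_from,
    )
    # Always order when using limit/offset so repeated runs see a stable ordering,
    # and go oldest-first so an interrupted run has dealt with the oldest rows.
    .order_by(Notification.created_at.asc())
    .limit(query_limit)
    .all()
)
```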
Insert/update, and then delete, notifications in hourly batches. This
means that if the task gets interrupted part-way through, we'll have at
least something to show for it. Previously we would insert and update
into the history table but might not delete from the notification table
properly.
Keeping the offsets and limits for confidence around reliability and
queries timing out.
Keeping the join to notification_history to ensure we don't delete
anything prematurely while our DB is in a bit of a weird state with lots
of these tasks failing over the last week.
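In outline, the hourly batching looks like this (a sketch; the two helpers are stand-ins for the existing insert/update and delete queries):

```python
from datetime import timedelta


def delete_notifications_in_hourly_batches(
    service_id, notification_type, start_of_retention_window, end_of_retention_window
):
    window_start = start_of_retention_window
    # Work through the retention window an hour at a time, so an interrupted run has
    # still both inserted/updated history *and* deleted notifications for the hours
    # it managed to finish.
    while window_start < end_of_retention_window:
        window_end = min(window_start + timedelta(hours=1), end_of_retention_window)
        insert_update_notification_history(service_id, notification_type, window_start, window_end)
        delete_notifications_between(service_id, notification_type, window_start, window_end)
        window_start = window_end
```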
This runs on the new `sms-callbacks` queue. The function
`process_sms_client_response` has been replaced with a Celery task of the
same name. This involved some reorganisation of the
existing code and tests.
This should add a maximum of 400 extra connections, which will not tip us over the allowable 5,000 db connections, and it may help with the queue pool connection errors.
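The reorganisation boils down to something like this (a sketch; the task arguments and the `sms_callback_view` wrapper are illustrative):

```python
from app import notify_celery


@notify_celery.task(name="process-sms-client-response")
def process_sms_client_response(status, provider_reference, client_name):
    # Same logic the old synchronous function ran, now executed by the worker
    # that consumes the sms-callbacks queue.
    ...


def sms_callback_view(status, provider_reference, client_name):
    # The callback endpoint now just queues the work on the new queue
    # instead of processing the response inline.
    process_sms_client_response.apply_async(
        [status, provider_reference, client_name], queue="sms-callbacks"
    )
    return "", 204
```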
We are formatting the postcode here, because if we did it in template
preview, that could break flows like API and admin one-off, since
we are not validating the postcode there yet, and format_postcode
needs a nice, validated postcode.
We are not doing it in admin, as then we would have to either
rewrite the CSV file or pass the data differently to the API. The first would
be nasty, the second is a lot of overhead.
In the long run we might want to move postcode formatting to
template preview so that the postcode in the letter preview looks the same
before and after the user sends it, but for now, to get this out quickly, it's better
to format the postcode here in the task.
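For context, this is the kind of normalisation `format_postcode` does, assuming the postcode has already been validated (a sketch of the idea, not the actual utils implementation):

```python
def format_postcode(postcode):
    # Strip internal spaces, uppercase, then put a single space before the
    # three-character inward code, e.g. "sw1a1aa" -> "SW1A 1AA".
    normalised = postcode.replace(" ", "").upper()
    return f"{normalised[:-3]} {normalised[-3:]}"
```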
- Table to store metadata for the emergency contact list for a service.
- Endpoint for fetching contact lists for a service.
- Endpoint for saving a contact list for a service (rough shape of both endpoints sketched below).
The list will be stored in S3. The service will then be able to send emergency announcements to staff.
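A rough shape of the two endpoints (a sketch; the blueprint, the dao helpers and the `ServiceContactList` model name are all illustrative):

```python
from flask import Blueprint, jsonify, request

service_blueprint = Blueprint("service", __name__)


@service_blueprint.route("/<uuid:service_id>/contact-list", methods=["GET"])
def get_contact_lists(service_id):
    # Only the metadata lives in the db; the uploaded list itself is stored in S3.
    contact_lists = dao_get_contact_lists_for_service(service_id)
    return jsonify([contact_list.serialize() for contact_list in contact_lists]), 200


@service_blueprint.route("/<uuid:service_id>/contact-list", methods=["POST"])
def create_contact_list(service_id):
    contact_list = ServiceContactList.from_json(request.get_json(), service_id=service_id)
    dao_save_contact_list(contact_list)
    return jsonify(contact_list.serialize()), 201
```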
https://github.com/alphagov/notifications-api/pull/2742/files#diff-d9a6761afaff4a491e60b64f6063654fL265
introduced a change in behaviour which causes template preview to 500
for a precompiled letter that is invalid but not because the content is
outside the printable area.
This changes it back to the previous behaviour, where we b64encode and
utf-8 decode a page that is invalid for reasons other than content
outside the printable area, for example a non-portrait A4 letter.
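i.e. for those pages we go back to returning the raw page like this (a sketch):

```python
import base64

# For a page that is invalid for a reason other than content outside the printable
# area (e.g. a non-portrait A4 letter), b64encode the raw page bytes and utf-8
# decode the result, as we did before the change linked above.
page_content = base64.b64encode(page_bytes).decode("utf-8")
```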
I didn't end up writing a unit test for this because I really can't
figure out the existing behaviour we expect for this use case. This
method is too big and complex but I'm confident by looking at the change
in the commit mentioned above that this puts behaviour back to how it
was. Manual testing also confirms the fix.
I've also taken this opportunity to improve two test names whilst I was
looking at things.
- Update check_sms_content_char_count to use the SMSTemplate.is_message_too_long function, and update the error message to align with the message returned by the admin app (sketched below).
- Update the code used by version 1 of the api to use the validate_template method.
- I did find a couple of services still using the old api; however, this change should not affect them, as I checked the messages being sent and they are not too long.
- We will be sending a message to them to see if they can upgrade.
- Update the log message for authentication to include the URL - makes it easier to track if a service is using version 1 of the api.
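For illustration, the updated check is along these lines (a sketch: the error wording is a placeholder, the import path is assumed, and whether `is_message_too_long` is a property or a method on the template object is also an assumption here):

```python
from app.v2.errors import BadRequestError  # existing error type, import path assumed


def check_sms_content_char_count(sms_template):
    # Delegate the length check to the template object rather than counting
    # characters here, and keep the error wording in line with the admin app.
    if sms_template.is_message_too_long:
        raise BadRequestError(message="Your message is too long")  # placeholder wording
```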