This deletes a big ol' chunk of code related to letters. It's not everything (there are still a few things that might be tied to SMS/email), but it's the heart of the letters functionality. SMS and email functionality should be untouched by this.
Areas affected:
- Things obviously about letters
- PDF tasks, used for precompiling letters
- Virus scanning, used for those PDFs
- FTP, used to send letters to the printer
- Postage stuff
The tests previously covered a shared function that's now only used
once, so I've inlined it and merged the tests under a common name
that's consistent with the code under test.
Currently we alert if a service wastes £16 of SMS. It may cost us
around that amount just to deal with the alert, especially if the
service refuses to clean up their data.
This bumps the threshold to something more alarming, which should
make it more reasonable to suspend the service if we can show that
they've already wasted public money. £160 seems like a reasonable
compromise between 'have wasted' and 'could waste'.
Note: we previously compromised on 1000 [1] down from 63K [2]. I
think we can afford to go a little bit higher.
[1]: https://github.com/alphagov/notifications-api/pull/3234
[2]: https://github.com/alphagov/notifications-api/pull/3221
Last year we had an issue with the daily limit cache and the query that was populating it. As a result we have not been checking the daily limit properly. This PR should correct all that.
The daily limit cache is now incremented in app.notifications.process_notifications.persist_notification; this method is, and should always be, the only method used to create a notification.
We increment the daily limit cache if Redis is enabled (and it is always enabled in production) and the key type for the notification is team or normal.
We check if the daily limit is exceeded in many places:
- app.celery.tasks.process_job
- app.v2.notifications.post_notifications.post_notification
- app.v2.notifications.post_notifications.post_precompiled_letter_notification
- app.service.send_notification.send_one_off_notification
- app.service.send_notification.send_pdf_letter_notification
If the daily limit cache is not found, we set it to 0 with an expiry of 24 hours. The daily limit cache key is service_id-yyyy-mm-dd-count, so each day a new cache key is created.
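As a rough sketch of the mechanics described above (the helper names, the Redis-enabled flag, and `service.message_limit` are assumptions here, not the exact implementation):

```python
from datetime import date


class DailyLimitExceeded(Exception):
    """Illustrative error type for when a service hits its daily limit."""


def daily_limit_cache_key(service_id):
    # One key per service per day, e.g. "<service_id>-2021-03-01-count"
    return f"{service_id}-{date.today().isoformat()}-count"


def increment_daily_limit(redis_client, redis_enabled, service, key_type):
    # Only count real traffic: team and normal key types, and only
    # when Redis is enabled (it always is in production)
    if redis_enabled and key_type in ("team", "normal"):
        redis_client.incr(daily_limit_cache_key(service.id))


def check_daily_limit(redis_client, service):
    key = daily_limit_cache_key(service.id)
    count = redis_client.get(key)
    if count is None:
        # Cache miss: initialise to 0 with a 24 hour expiry so a
        # fresh key gets created each day
        redis_client.set(key, 0, ex=24 * 60 * 60)
        count = 0
    if int(count) >= service.message_limit:
        raise DailyLimitExceeded(f"Service {service.id} is over its daily limit")
```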
The best thing about this PR is that the app.service_dao.fetch_todays_total_message_count query has been removed. This query was not performant and had been wrong for ages.
The query had a GROUP BY on notification_type and notification_status, which not only slowed the query down but was wrong: the calling code only looked at the first row, yet the query returned one row per combination of notification type and status, so the result did not include the correct number.
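To illustrate the shape of the problem (the model, session, and filter names below are assumptions, not the removed query verbatim):

```python
from sqlalchemy import func

# With the GROUP BY, the count is fanned out into one row per
# (notification_type, status) combination:
rows = (
    db.session.query(func.count(Notification.id))
    .filter(
        Notification.service_id == service_id,
        Notification.created_at >= midnight,
    )
    .group_by(Notification.notification_type, Notification.status)
    .all()
)
todays_total = rows[0][0]  # bug: only the first group is read, so the total is too low
```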
Are we concerned that all status types are included? For example, letters can be cancelled or have validation-failures, which shouldn't be included in the daily limit check.
The end date is always going to be in the future anyway, so the filter
never excludes anything. After some analysis, the query does perform
better without it. I'll make a note to update other queries where we
get today's notification data to remove the end date filter in a
separate PR.
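For illustration (the filter and variable names are assumed), the redundant bound looks like this:

```python
# created_at can never be after the end of today, so the upper
# bound filter does no work and can be dropped:
query = query.filter(Notification.created_at >= midnight)
# query = query.filter(Notification.created_at < end_of_today)  # redundant
```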
Second attempt [1]. This increases the threshold so:
- It's a more substantial amount of money lost (£16).
- It's 10% of the minimum free allowance for a service.
- It's greater than the threshold we have for TV numbers (500).
Having a higher threshold for this alert helps prevent wasted effort
investigating more negligible failures, and reduces the ambiguity of
whether we should take action: we should.
[1]: https://github.com/alphagov/notifications-api/pull/3221
Introduce a contextmanager function to handle exceptions and nested
transactions. Using nested_transaction will start a nested transaction
with `db.session.begin_nested`; once the nested transaction is
complete, the commit will happen.
`@transactional` has been updated to commit unless in a nested
transaction.
Using a savepoint for multiple db update/insert transactions allows us
to roll back if there is an error when executing the second db
transaction.
However, this does add a bit of complexity: developers need to manage
the db session when calling multiple nested transactions.
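A minimal sketch of what the helper might look like (assuming Flask-SQLAlchemy's `db.session`; this is illustrative, not the exact implementation):

```python
from contextlib import contextmanager

from app import db  # assumed Flask-SQLAlchemy instance


@contextmanager
def nested_transaction():
    try:
        # begin_nested() issues a SAVEPOINT; an exception inside the
        # block rolls back to the savepoint rather than the whole session
        with db.session.begin_nested():
            yield
        # once the nested transaction is complete, commit
        db.session.commit()
    except Exception:
        db.session.rollback()
        raise
```

A caller wrapping two DAO writes then gets all-or-nothing behaviour: if the second write raises, the first is rolled back to the savepoint before the exception propagates.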
Unit tests have been added to cover this functionality, and some
end-to-end tests have been done to make sure all transactions are
rolled back if there is an exception while executing the transaction.
This is done so that we do not use statsd on our HTTP endpoints.
We decided we do not need the metrics it gave us. If we change
our minds, we will add Prometheus-friendly decorators instead in
the future.
This copies what we do to a user's email address when archiving the user
by prefixing it with `_archived_{date}`. We already prefixed the
service name and email_reply_to with `_archived`, but this didn't allow
a service with the same name to be archived more than once.
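Sketched out (the helper name here is hypothetical), the prefixing looks like:

```python
from datetime import date


def get_archived_db_column_value(value):
    # "Service Name" -> "_archived_2021-03-01_Service Name": including
    # the date means the same name can be archived more than once
    return f"_archived_{date.today().isoformat()}_{value}"
```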
When running the purge command I found about 4 users who could not be
deleted because their user id was still referenced in the services
table: they had created the service but were no longer a member of it.
I have fixed this by checking whether a user who is not a member
created the service; if so, we also delete the service for them.
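The check amounts to something like this sketch (the model, column, and function names are illustrative):

```python
# Services created by the user but no longer containing them as a
# member would otherwise block the delete via the created_by reference
for service in Service.query.filter_by(created_by_id=user.id):
    if user not in service.users:
        delete_service_and_all_associated_db_objects(service)
```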
Note: I've followed the previous convention of no tests for this
function. I've run it locally and executed the code path, so there
should be no major flaws in the code. There is a small chance I wasn't
able to exactly replicate locally the state that existed in preview,
but hopefully it was close enough to be accurate.
* it doesn't delete service email reply-tos, letter contacts, or
contact lists. I don't think the contact lists will ever be an issue,
but it doesn't hurt to add them to the list of things to remove.
* it doesn't remove users from organisations before deleting the users
There may be more tables that link to Service that should be deleted,
but for now I've just added the ones I could spot.