The `body` field of the ‘preview a template’ endpoint should contain the
content of the template with the placeholders replaced.
It should not be rendered into a plain text email, which does stuff like
normalising newlines (so `\r\n` becomes `\n`). However the object that
the `create_post_template_preview_response` function receives is an
instance of `PlainTextEmailTemplate`. So we can’t just call `str()` on
it. Instead we need to call the `__str__` method of
`WithSubjectTemplate` which gives us the result we want.
We were already doing this the right way in the V1 endpoint[1] but
forgot to do it here too.
1. 0108749daa/app/template/rest.py (L181-L184)
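A minimal sketch of the trick, using the template classes from notifications-utils (the constructor arguments shown here are assumptions about the library, not a copy of the endpoint code):

```python
from notifications_utils.template import PlainTextEmailTemplate, WithSubjectTemplate

# Assumed constructor shape: a template dict plus the personalisation values.
template_object = PlainTextEmailTemplate(
    {"template_type": "email", "subject": "Hi ((name))", "content": "Line one\r\nLine two, ((name))"},
    values={"name": "Alice"},
)

# str(template_object) would use PlainTextEmailTemplate.__str__, which renders
# a plain text email and normalises \r\n to \n. Calling the parent class's
# __str__ directly keeps the content as written, with placeholders replaced.
body = WithSubjectTemplate.__str__(template_object)
```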
Text messages have placeholders in their body.
Emails have them in their subject line too.
Letters have them in their body, subject line and contact block.
We were only looking in the body and subject when processing a job,
therefore the thing assembling the letter was not looking in all the
CSV columns it needed to, because it hadn’t been told about any
placeholders in the contact block.
Fixing this means always making sure we use the correct `Template`
instance for the type of template we’re dealing with, which we were
already doing in a different part of the codebase, so it makes sense to
reuse that code.
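Roughly, the reused code picks a `Template` class per notification type, something like this sketch (class names are from notifications-utils; the mapping and the `contact_block` keyword are assumptions for illustration):

```python
from notifications_utils.template import (
    LetterPrintTemplate,
    PlainTextEmailTemplate,
    SMSMessageTemplate,
)


def template_instance_for(template, values):
    # Illustrative mapping: each class knows which fields contain placeholders.
    if template["template_type"] == "sms":
        return SMSMessageTemplate(template, values)        # body only
    if template["template_type"] == "email":
        return PlainTextEmailTemplate(template, values)    # body + subject
    return LetterPrintTemplate(                            # body + subject + contact block
        template, values, contact_block=template.get("reply_to_text")
    )
```

The `.placeholders` property on the letter instance then includes contact block placeholders, so the job processing code knows to read those CSV columns too.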
Turns out we fixed the same bug for email subjects over 3 years ago:
3ed97151ee
This has been replaced by a new task, `sanitise-letter`, so this deletes
all the code in the old task and ensures that when antivirus is not
enabled locally we call the new task.
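For the local path, the dispatch looks roughly like this sketch (the function name, config flag, task and queue names are illustrative assumptions, not the exact code):

```python
from flask import current_app

from app import notify_celery  # assumed import of the Celery app instance


def scan_uploaded_letter(filename):
    if current_app.config["ANTIVIRUS_ENABLED"]:
        # normal path: the antivirus app scans the file and then calls
        # the sanitise-letter task itself
        return
    # antivirus is switched off locally, so call the new task directly
    notify_celery.send_task(
        name="sanitise-letter",
        kwargs={"filename": filename},
        queue="sanitise-letter-tasks",
    )
```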
Move the letter-finding logic into a separate method so it can be unit
tested, rather than having to test it by checking the contents of a
Zendesk ticket API call.
This will enable us to change the Zendesk ticket message without needing
to edit lots of tests.
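Hypothetical shape of the refactor: the finding logic becomes a plain function whose return value a unit test can assert on directly (the condition shown is made up for illustration):

```python
from datetime import datetime, timedelta


def get_letters_to_report(notifications, older_than_hours=24):
    # Pure function: easy to unit test without mocking the Zendesk client.
    cutoff = datetime.utcnow() - timedelta(hours=older_than_hours)
    return [
        n for n in notifications
        if n.notification_type == "letter" and n.created_at < cutoff
    ]
```

The task that raises the Zendesk ticket then just formats its message from that return value.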
Dropping unnecessary indexes on Notification and Notification_History.
Create new composite indexes that should give queries better performance.
Drop and create new indexes.
We are not running this migration on production because we want to run each index change one at a time.
Run this on staging, then start running some load tests.
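The migration itself is the usual Alembic drop/create pattern, roughly like this sketch (the index names and column lists here are illustrative, not the exact ones in the migration):

```python
from alembic import op


def upgrade():
    # drop an index the planner no longer needs...
    op.drop_index("ix_notifications_service_id", table_name="notifications")
    # ...and replace it with a composite index matching the common query shape
    op.create_index(
        "ix_notifications_service_id_composite",
        "notifications",
        ["service_id", "notification_type", "created_at"],
    )


def downgrade():
    op.drop_index("ix_notifications_service_id_composite", table_name="notifications")
    op.create_index("ix_notifications_service_id", "notifications", ["service_id"])
```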
"if you want something done right, then do it yourself"
The ORM was building a weird and inefficient query to get all the users
for a given organisation_id:
old query:
```sql
SELECT users.*
FROM users
WHERE (
    EXISTS (
        SELECT 1
        FROM user_to_organisation, organisation
        WHERE users.id = user_to_organisation.user_id AND
              organisation.id = user_to_organisation.organisation_id AND
              organisation.id = '63b9557c-22ea-42ac-bcba-edaa50e3ae51'
    )
) AND users.state = 'active'
ORDER BY users.created_at
```
Note the cross join on user_to_organisation and organisation: there are a
lot of users in organisations on preview, as one is added on every
functional test run. Even though they're all filtered out and the query
returns the correct data, it still does the nested loop under the hood.
new query:
```sql
SELECT users.*
FROM users
JOIN user_to_organisation AS user_to_organisation_1 ON users.id = user_to_organisation_1.user_id
JOIN organisation ON organisation.id = user_to_organisation_1.organisation_id
WHERE organisation.id = '63b9557c-22ea-42ac-bcba-edaa50e3ae51' AND users.state = 'active'
ORDER BY users.created_at;
```
Much more straightforward.
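In SQLAlchemy terms, the fix is to spell the joins out rather than letting the relationship generate an EXISTS clause; a sketch, assuming the model and association table names used above:

```python
from app.models import Organisation, User, user_to_organisation  # assumed imports

users = (
    User.query
    .join(user_to_organisation, User.id == user_to_organisation.c.user_id)
    .join(Organisation, Organisation.id == user_to_organisation.c.organisation_id)
    .filter(
        Organisation.id == organisation_id,
        User.state == "active",
    )
    .order_by(User.created_at)
    .all()
)
```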
* it doesn't delete service email reply-tos, letter contacts, or
  contact lists. I don't think the contact lists will ever be an issue,
  but it doesn't hurt to add them to the list of things to remove.
* it doesn't remove users from organisations before deleting the users
There may be more tables that link to Service that should be deleted,
but for now just add the ones I could spot.
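The extra clean-up looks roughly like this sketch (model names assumed; the real task deletes many other related tables as well):

```python
from app import db  # assumed imports
from app.models import ServiceContactList, ServiceEmailReplyTo, ServiceLetterContact


def delete_extra_service_rows(service):
    # rows that reference the service have to go before the service itself
    ServiceEmailReplyTo.query.filter_by(service_id=service.id).delete()
    ServiceLetterContact.query.filter_by(service_id=service.id).delete()
    ServiceContactList.query.filter_by(service_id=service.id).delete()

    # users have to be detached from their organisations before they can be deleted
    for user in service.users:
        user.organisations = []

    db.session.commit()
```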
This task didn't have retries before, based on the assumption that if
the task failed it was likely to be due to a Boto error, so retrying
would not help because a file was probably not in the expected bucket.
During an incident with the database, we had some letters that were
stuck in the `pending-virus-check` state because this task failed.
This change adds retries to the task for any exception that is not a
`BotoClientError`.
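A minimal sketch of the retry behaviour, using Celery's bind/retry API (the task name and the S3 helper are illustrative; only the exception handling mirrors the change described):

```python
from botocore.exceptions import ClientError as BotoClientError

from app import notify_celery  # assumed import of the Celery app instance


@notify_celery.task(bind=True, name="process-virus-scan-result", max_retries=5, default_retry_delay=300)
def process_virus_scan_result(self, filename):
    try:
        move_letter_to_processing_bucket(filename)  # illustrative S3 helper
    except BotoClientError:
        # the file really isn't in the expected bucket; retrying won't help
        raise
    except Exception as e:
        # e.g. a transient database outage: retry instead of leaving the
        # letter stuck in pending-virus-check
        self.retry(exc=e)
```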
SQS fails if the message body is over 256 KB. Normally our messages are
quite small, but if we're using the new save-api-email task with an
email that has a large body, we can go over that limit. If so, handle
the exception and fall back to the existing code path (saving to the
database and sending a deliver-email task, which only has a notification
id).
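A sketch of that fallback (task names from the description above; the helper names and the exact exception raised by the SQS transport are assumptions):

```python
from botocore.exceptions import ClientError


def queue_email(email_data, encrypted_email):
    try:
        # fast path: the task payload carries the whole email body, so the
        # worker doesn't need another database round trip
        save_api_email.apply_async([encrypted_email], queue="save-api-email-tasks")
    except ClientError:
        # the body pushed the SQS message over the 256 KB limit, so fall back
        # to the existing path: save first, then queue a slim task that only
        # carries the notification id
        notification = persist_notification(email_data)
        deliver_email.apply_async([str(notification.id)], queue="send-email-tasks")
```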
It is possible a service has data retention that is shorter than the time it takes to get a delivery receipt.
This PR refactors process_ses_receipt to update NotificationHistory if the Notification has already been purged.
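In outline, the refactor falls back to the history table when the notification row is gone (a sketch; the real callback handles more statuses and fields):

```python
from app import db  # assumed imports
from app.models import Notification, NotificationHistory


def process_ses_receipt(reference, status):
    notification = Notification.query.filter_by(reference=reference).one_or_none()
    if notification is not None:
        notification.status = status
    else:
        # the notification has already been purged by data retention,
        # so record the receipt against the history row instead
        history = NotificationHistory.query.filter_by(reference=reference).one_or_none()
        if history is not None:
            history.status = status
    db.session.commit()
```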
So that we keep a record of who first uploaded a list, it’s better to
archive a list than to delete it completely.
The list in the database doesn’t contain any recipient info so this
isn’t a change to what data we’re retaining.
This means updating the endpoints that get contact lists to exclude ones
that are archived.
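The endpoint change boils down to filtering on the new flag, roughly (column and function names assumed):

```python
def dao_get_contact_lists_for_service(service_id):
    return (
        ServiceContactList.query
        .filter_by(service_id=service_id, archived=False)
        .order_by(ServiceContactList.created_at.desc())
        .all()
    )
```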
Once a contact list is gone from the database there’s no way to
reference it again. Any jobs that used it have made their own copy.
So we can clean it up, meaning we’re not storing personal data longer
than we need to.