Also test deleting jobs with flexible data retention
Also update tests for the default data retention logic
change: dao_get_jobs_older_than_data_retention now counts from the
start of today, not from the time the function runs.
The tests have been updated to reflect that.
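A minimal sketch of the new cutoff (the function name here is illustrative; the real DAO applies this per service):

```python
from datetime import datetime, timedelta

def data_retention_cutoff(retention_days):
    # Count from the start of today (UTC), not from the moment the
    # function runs, so every call on a given day uses the same cutoff.
    midnight_today = datetime.utcnow().replace(
        hour=0, minute=0, second=0, microsecond=0
    )
    return midnight_today - timedelta(days=retention_days)
```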
Bumped notifications-utils to 3.7.0. Version 3.7.0 includes the
`convert_utc_to_bst` and `convert_bst_to_utc` functions and the
`LETTER_PROCESSING_DEADLINE` constant, so these have been removed from
this repo, and everywhere that used them has been updated to import
them from `notifications-utils`.
Also bumped pytest by a patch version to bring in a bug fix.
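The resulting import change looks roughly like this (the exact module paths inside `notifications-utils` are assumptions):

```python
# Previously defined in this repo; now imported from notifications-utils.
from notifications_utils.timezones import convert_utc_to_bst, convert_bst_to_utc
from notifications_utils.letter_timings import LETTER_PROCESSING_DEADLINE
```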
When we first built letters you could only send them via a CSV upload,
so initially we needed a way to send those files to DVLA per job.
We've since stopped using this page, so let's delete it!
The `save_email` and `save_sms` jobs were updated previously to take an
optional `sender_id` and to use this if it was available. This commit
now gets the `sender_id` from the S3 metadata if it exists and passes it
through to the tasks which save the job notifications. This means SMS
and emails sent through jobs can use a specified `sender_id` instead of
the default.
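A rough sketch of pulling `sender_id` out of the S3 metadata (bucket, key and helper name are placeholders):

```python
import boto3

def get_csv_metadata(bucket_name, object_key):
    # User metadata set when the CSV was uploaded comes back as a dict
    # of lower-cased keys on the S3 object.
    return boto3.resource("s3").Object(bucket_name, object_key).metadata

metadata = get_csv_metadata("uploads-bucket", "job.csv")
# sender_id is optional: None means the tasks fall back to the
# service's default reply-to address or SMS sender.
sender_id = metadata.get("sender_id")
```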
Started passing `sender_id` to the `save_email`, `save_sms` and
`process_job` tasks, with a default value of `None`.
If `sender_id` is provided, the `save_email` and `save_sms` tasks will
use it to determine the reply-to email address or the SMS sender for the
notifications in the job. The `process_job` task will start using the
value in another commit.
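A sketch of the new signatures (assuming the app's Celery instance is called `notify_celery`; bodies elided):

```python
@notify_celery.task(bind=True, name="save-sms")
def save_sms(self, service_id, notification_id, encrypted_notification, sender_id=None):
    # If sender_id is set, use that SMS sender for the notification;
    # otherwise keep the service default.
    ...

@notify_celery.task(bind=True, name="save-email")
def save_email(self, service_id, notification_id, encrypted_notification, sender_id=None):
    # Same pattern, choosing the reply-to email address.
    ...
```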
This was done so that when a notification times out from sending/pending
to temporary_failure, the change is always captured
in ft_notification_status.
- Updated notifications_dao.update_notification_status_by_id with an optional parameter to set the sent_by; this eliminates a separate update to notifications (see the sketch after this list).
- Added the callback URL to the log message, so we can see whether it's the same URL failing.
- Stop sending the status callbacks for PENDING status.
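A simplified sketch of the DAO change (assuming the repo's `Notification` model and `dao_update_notification` helper; the real function also restricts which status transitions are allowed):

```python
def update_notification_status_by_id(notification_id, status, sent_by=None):
    notification = Notification.query.with_for_update().filter(
        Notification.id == notification_id
    ).one()
    notification.status = status
    if sent_by and not notification.sent_by:
        # Setting sent_by here avoids issuing a second UPDATE against
        # the notifications table afterwards.
        notification.sent_by = sent_by
    return dao_update_notification(notification)
```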
We were passing both dvla_org_id and filename to template-preview
temporarily while we switch to only using filename. Now that
template-preview is set up to use the filename, we can stop sending the
dvla_org_id.
We now pass `filename`, the filename of the letter logo to use, through
to Template Preview in addition to the `dvla_org_id`. Once Template
Preview has been updated to only use the `filename` we will stop
sending the `dvla_org_id`.
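During the transition, the logo fields sent to Template Preview look something like this (values are examples only):

```python
letter_logo_fields = {
    "filename": "hm-government",  # letter logo filename, the new field
    "dvla_org_id": "001",         # legacy field, dropped once unused
}
```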
it doesn't match a string 😩
I couldn't think of a good way to test this in a unit test, since
it involves changing the service id on all of the components of a
service.
The move-from-virus-scan-to-validation-failed function was being called
with the wrong variables, and some of its internal logic was slightly
wrong.
Also, don't use `update_notification_by_id` for notifications if they
are not in `created`, `sending`, `pending`, or `sent`: it silently
doesn't update them. I didn't want to do a deeper investigation into
the reasons behind this terrifying state machine as part of this commit,
so I just changed the functions to call `dao_update_notification`
directly.
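The workaround, roughly (assuming the repo's `get_notification_by_id` helper):

```python
# update_notification_by_id silently skips notifications that aren't in
# created/sending/pending/sent, so set the status and save directly.
notification = get_notification_by_id(notification_id)
notification.status = "validation-failed"
dao_update_notification(notification)
```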
- pass new, sanitised pdf for sending
- move invalid pdfs to a newly created bucket
- set status for notifications that failed pdf validation to a new status, validation-failed
- adjust existing tests
There was a datetime bug in the query which resulted in files not being sent to the postal provider.
The trigger-letter-pdfs-for-day task is no longer needed, so rather than fix the query just call collate_letter_pdfs_for_day directly.
Less code is always better.
Deployment considerations: I realized this is not backwards compatible if the scheduled job is in progress and a task that no longer exists is still on the queue. This is OK since we will deploy this well before 17:50.
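A sketch of the schedule change (assuming a standard Celery beat config; entry names are illustrative):

```python
from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    # Before: "trigger-letter-pdfs-for-day" worked out the date (with the
    # buggy datetime query) and then queued the collate task.
    # After: schedule collate_letter_pdfs_for_day directly at 17:50.
    "collate-letter-pdfs-for-day": {
        "task": "collate-letter-pdfs-for-day",
        "schedule": crontab(hour=17, minute=50),
    },
}
```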
Adds an API endpoint `/letters/returned` that accepts a list of
notification references and creates a task to update their status.
Adds a new task that uses the list of references to update the status
of notifications to 'returned-letter'.
The update is currently done using a single query and logs the
number of changed records (including notification history records).
This could potentially be done within the `/letters/returned` endpoint,
but creating a job right away allows us to extend this more easily
in the future (e.g. logging missing notifications or adding callbacks).
The task runs on the database tasks queue.
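A rough sketch of the endpoint (blueprint wiring and request validation elided; queue and task names are assumptions):

```python
from flask import jsonify, request

@letter_blueprint.route("/letters/returned", methods=["POST"])
def create_process_returned_letters_job():
    references = request.get_json()["references"]
    # Queue a task rather than updating inline, so this can later grow
    # missing-notification logging or callbacks.
    process_returned_letters_list.apply_async([references], queue="database-tasks")
    return jsonify(), 200
```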
For returned letter updates, most notifications won't exist in the
notifications table, so in order to find out whether the reference
matches any known letters we need to check the count of updated
history records.
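A sketch of the update itself (model names as in this repo; query details are assumptions):

```python
from flask import current_app

def update_returned_letter_status(references):
    update = {"status": "returned-letter"}
    updated = Notification.query.filter(
        Notification.reference.in_(references)
    ).update(update, synchronize_session=False)
    # Older letters only survive in notification_history, so the history
    # count is what tells us whether a reference matched a known letter.
    updated_history = NotificationHistory.query.filter(
        NotificationHistory.reference.in_(references)
    ).update(update, synchronize_session=False)
    current_app.logger.info(
        "Updated %d notifications and %d history rows to returned-letter",
        updated, updated_history,
    )
```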
Notify's antivirus, on success, calls the process_virus_scan_passed task.
Previously, this task would:
* update status to created (or delivered for test keys)
* copy the file from the scan bucket to either the live or test bucket
based on the results
* delete the old file in the scan bucket
We want it to:
* download file from scan bucket
* sanitise PDF using new template-preview functionality
* if sanitise failed, set to new status "validation-failed" and save
the pdf somewhere.
* send new pdf to live/test bucket
* update status to created (or delivered for test keys)
* delete the original file in the scan bucket
This PR does some of that:
* download file from scan bucket
* sanitise PDF using new template-preview functionality
* if sanitise failed, just log.
* send OLD pdf to live/test bucket
* update status to created (or delivered for test keys)
* delete the original file in the scan bucket
So if sanitising fails, we won't fall over and fail to deliver the
letter; we'll just log a message for now. If sanitise throws an
unexpected error (as opposed to a 400), we'll retry up to fifteen times
(the same as when creating a new letter). I've added the code for using
the sanitised pdf, but it's commented out for now.
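Putting it together, the task now looks roughly like this (helper names are illustrative, not the repo's actual functions):

```python
import requests
from flask import current_app

@notify_celery.task(bind=True, name="process-virus-scan-passed", max_retries=15)
def process_virus_scan_passed(self, filename):
    old_pdf = download_from_scan_bucket(filename)
    try:
        new_pdf = sanitise_letter(old_pdf)  # POSTs to template-preview
    except requests.HTTPError as e:
        if e.response is not None and e.response.status_code == 400:
            # Sanitise rejected the PDF: just log for now. A later commit
            # will set the validation-failed status and save the PDF aside.
            current_app.logger.warning("Sanitise failed for %s", filename)
            new_pdf = None
        else:
            # Unexpected error: retry up to 15 times, as for new letters.
            raise self.retry(exc=e)

    # Still sending the OLD pdf; switching to the sanitised copy is
    # written but commented out for now.
    # copy_to_destination_bucket(new_pdf, filename)
    copy_to_destination_bucket(old_pdf, filename)  # live or test bucket
    update_letter_status(filename)  # created, or delivered for test keys
    delete_from_scan_bucket(filename)
```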