We don't use FUNCTIONAL_TEST_PROVIDER_SERVICE_ID or
FUNCTIONAL_TEST_PROVIDER_SMS_TEMPLATE_ID anymore, so we can safely
delete them from config and tests.
Also test deleting jobs with flexible data retention.
Also update the tests for default data retention following a logic
change: dao_get_jobs_older_than_data_retention now counts from the
start of today, not from the time the function runs, and the updated
tests reflect that.
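In concrete terms, the cutoff is now anchored at midnight rather than at `datetime.utcnow()` when the call happens. A minimal sketch of the idea (the helper name is illustrative, not the actual DAO code):

```python
from datetime import datetime, timedelta

def retention_cutoff(days_of_retention):
    # Anchor at the start of today (00:00 UTC), not the moment the
    # function runs, so repeated calls within one day agree.
    start_of_today = datetime.combine(datetime.utcnow().date(), datetime.min.time())
    return start_of_today - timedelta(days=days_of_retention)
```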
When we first built letters you could only send them via a CSV upload, so we needed a way to send those files to DVLA per job.
We have since stopped using this page, so let's delete it!
The `save_email` and `save_sms` jobs were updated previously to take an
optional `sender_id` and to use this if it was available. This commit
now gets the `sender_id` from the S3 metadata if it exists and passes it
through to the tasks which save the job notifications. This means SMS
and emails sent through jobs can use a specified `sender_id` instead of
the default.
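A rough sketch of how the `sender_id` might be read from the S3 object's metadata with boto3 (the bucket, key, and metadata key names are assumptions):

```python
import boto3

def get_sender_id_from_s3_metadata(bucket_name, file_key):
    # User-defined S3 metadata comes back in the 'Metadata' dict of a
    # head_object response; returns None if no sender_id was set.
    response = boto3.client("s3").head_object(Bucket=bucket_name, Key=file_key)
    return response["Metadata"].get("sender_id")
```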
Started passing `sender_id` to the `save_email`, `save_sms` and
`process_job` tasks, with a default value of `None`.
If `sender_id` is provided, the `save_email` and `save_sms` tasks will
use it to determine the reply-to email address or the SMS sender for the
notifications in the job. The `process_job` task will start using the
value in another commit.
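As a sketch, the signature change looks something like this (the decorator and lookup helpers are assumptions, not the actual code):

```python
@notify_celery.task(name="save-sms")
def save_sms(service_id, notification_id, encrypted_notification, sender_id=None):
    # sender_id defaults to None so existing callers, and any tasks
    # already on the queue, keep working unchanged.
    if sender_id:
        # hypothetical helper: look up the chosen sender for this service
        sms_sender = get_sms_sender_by_id(service_id, sender_id)
    else:
        sms_sender = get_default_sms_sender(service_id)  # hypothetical helper
    ...
```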
We were temporarily passing both dvla_org_id and filename to
template-preview while switching to only using the filename. Now that
template-preview is set up to use the filename, we can stop sending the
dvla_org_id too.
We now pass `filename`, the filename of the letter logo to use, through
to Template Preview in addition to the `dvla_org_id`. Once Template
Preview has been updated to only use the `filename` we will stop
sending the `dvla_org_id`.
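During the overlap, the payload sent to Template Preview might look roughly like this (a sketch; the keys other than `filename` and `dvla_org_id` are assumptions):

```python
payload = {
    "template": template_json,
    "values": personalisation,
    # Sent side by side while Template Preview migrates from
    # dvla_org_id to filename for choosing the letter logo:
    "dvla_org_id": service.dvla_organisation_id,
    "filename": service.letter_branding_filename,
}
```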
The function that moves letters from virus scan to validation failed
was called with the wrong variables, and had some internal logic that
was slightly wrong.
Also, don't use `update_notification_by_id` for notifications if they
are not in `created`, `sending`, `pending`, or `sent`: it silently
doesn't update them. I didn't want to do a deeper investigation into
the reasons behind this terrifying state machine as part of this
commit, so I just changed the functions to call
`dao_update_notification` directly.
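The workaround is roughly this shape (a sketch; the lookup helper is illustrative):

```python
# Before: a silent no-op for notifications outside
# created/sending/pending/sent.
update_notification_by_id(notification_id, status="validation-failed")

# After: load the row and update it unconditionally.
notification = get_notification_by_id(notification_id)  # hypothetical helper
notification.status = "validation-failed"
dao_update_notification(notification)
```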
- pass the new, sanitised PDF for sending
- move invalid PDFs to a newly created bucket
- set the status of notifications that failed PDF validation to a new
  status, validation-failed (see the sketch after this list)
- adjust existing tests
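A rough sketch of the failure path described above (bucket names and helpers are assumptions):

```python
import boto3

def handle_validation_failure(notification, filename, invalid_bucket, scan_bucket):
    # Move the failed PDF into the newly created invalid-letters bucket
    # and mark the notification with the new terminal status.
    s3 = boto3.resource("s3")
    s3.Object(invalid_bucket, filename).copy_from(
        CopySource={"Bucket": scan_bucket, "Key": filename}
    )
    s3.Object(scan_bucket, filename).delete()
    notification.status = "validation-failed"
    dao_update_notification(notification)  # see the note on DAO updates above
```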
* Changed the update_fact_billing DAO function to update the table with
the real postage data instead of hard-coding 'second' (see the sketch
below).
* Added a test for the create nightly billing task to check that rows
with different postage are inserted correctly.
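The change is essentially replacing a hard-coded value with the row's own postage (a sketch; the model name and surrounding function are paraphrased, not the actual code):

```python
# Before: every ft_billing row was written with postage='second'.
# After: carry the postage through from the billing data row.
ft_billing_row = FactBilling(
    template_id=row.template_id,
    postage=row.postage,  # was the hard-coded 'second'
    notifications_sent=row.notifications_sent,
)
```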
Removed the occasionally failing test that checked how ft_billing upserts
postage data. This test will be re-added once the postage column has been
added to the primary key.
* Updated the 'fetch_billing_data_for_day' DAO function to take postage
into account (see the sketch after this list)
* Updated the 'update_fact_billing' DAO function to insert postage for
new rows. When updating rows which are identical apart from the postage, the
original row will be kept. (This behaviour will change once postage is
added to the primary key - at this point, upserting will add a new row.)
* Also changed some fixtures and test set-up functions to take postage
into account
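The grouping change in `fetch_billing_data_for_day` is roughly this shape (a SQLAlchemy sketch; the model and column names are assumptions):

```python
from sqlalchemy import func

def fetch_billing_data_for_day(session, start_of_day, end_of_day):
    return session.query(
        Notification.template_id,
        Notification.postage,
        func.count().label("notifications_sent"),
    ).filter(
        Notification.created_at >= start_of_day,
        Notification.created_at < end_of_day,
    ).group_by(
        Notification.template_id,
        Notification.postage,  # postage is now part of the grouping key
    ).all()
```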
Sets updated_at to the current time when the NotificationHistory
is modified (which happens occasionally, for example to set the status
to returned-letter).
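One way to express this, assuming the model uses SQLAlchemy's `onupdate` hook (a sketch, not the actual model definition):

```python
from datetime import datetime
from sqlalchemy import Column, DateTime, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class NotificationHistory(Base):
    __tablename__ = "notification_history"
    id = Column(Integer, primary_key=True)  # placeholder; the real key differs
    # onupdate stamps the row whenever it is modified, e.g. when a
    # letter's status is later set to returned-letter.
    updated_at = Column(DateTime, nullable=True, onupdate=datetime.utcnow)
```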
There was a datetime bug in the query which resulted in files not being sent to the postal provider.
The trigger-letter-pdfs-for-day task is no longer needed, so rather than fix the query just call collate_letter_pdfs_for_day directly.
Less code is always better.
Deployment considerations: I realized this is not strictly backwards compatible if the scheduled job is in progress and a task is on the queue that no longer exists. This is ok since we will deploy this well before 17:50.
Adds an API endpoint `/letters/returned` that accepts a list of
notification references and creates a task to update their status.
Adds a new task that uses the list of references to update the status
of notifications to 'returned-letter'.
The update is currently done in a single query, and we log the
number of changed records (including notification history records).
This could potentially be done within the `/letters/returned` endpoint,
but creating a job right away allows us to extend this more easily
in the future (e.g. logging missing notifications or adding callbacks).
The job uses the database tasks queue.
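Roughly, the endpoint and task look like this (a sketch; the blueprint, queue, and DAO helper names are assumptions):

```python
from flask import current_app, jsonify, request

@letter_job_blueprint.route("/letters/returned", methods=["POST"])
def create_process_returned_letters_job():
    references = request.get_json()["references"]
    # Hand the work to the database tasks queue rather than doing
    # the update inside the request.
    process_returned_letters_list.apply_async([references], queue="database-tasks")
    return jsonify(references=references), 200


@notify_celery.task(name="process-returned-letters-list")
def process_returned_letters_list(references):
    # Single query covering both notifications and history rows.
    updated = dao_update_notifications_status_by_references(  # hypothetical DAO
        references, status="returned-letter"
    )
    current_app.logger.info("Updated %s letters to returned-letter", updated)
```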
Since there'll be a bunch of threads running functional test tasks at
the same time, there's no point always trying to start from the same
second and then stepping back to the same one-second-back file each
time. It also increases the risk of race conditions.
This change takes the same thirty-second range, but shuffles it. The
tests, since they're no longer deterministic, now use a new Matcher
object (w/ credit to alexey) to match any filename from within that
thirty-second range.
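The Matcher is essentially an object whose equality check accepts any filename in the window; a minimal sketch of the idea:

```python
class AnyFilenameInRange:
    """Compares equal to any filename generated for the shuffled
    thirty-second range (a sketch of the Matcher idea)."""

    def __init__(self, expected_filenames):
        self.expected_filenames = set(expected_filenames)

    def __eq__(self, other):
        return other in self.expected_filenames
```

An assertion like `mock_upload.assert_called_with(AnyFilenameInRange(filenames))` then passes for whichever second the shuffle happened to pick.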
When a test letter is created on dev or preview, we upload a file to
the DVLA FTP response bucket to test that our integration with S3
works. S3 triggers an SNS notification, which we pick up; we then
download the file and mark the letters it mentions as delivered.
However, if two tests run at the same time, they'll create the same
file on S3. One will just overwrite the other, and the first letter
will never move into delivered - this was causing functional tests to
fail intermittently.
This commit makes the test letter task check whether the file exists -
if it does, it moves back one second and tries again. It tries this
thirty times before giving up.
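The retry loop is roughly (a sketch; the filename and S3 helpers are assumptions):

```python
from datetime import timedelta

def choose_upload_time(start_time, bucket):
    upload_time = start_time
    for _ in range(30):
        filename = filename_for(upload_time)  # hypothetical helper
        if not file_exists(bucket, filename):  # hypothetical helper
            return upload_time
        # Another test already claimed this second; step back one
        # second and try again.
        upload_time -= timedelta(seconds=1)
    raise RuntimeError("no free filename found after thirty attempts")
```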
Added the letter_rate table to the list of tables which do not get
deleted after each test run, and changed the tests to use the real
letter rates.
Also removed the letter rate DAO, since it was only being used in
tests and so was no longer needed.