This will transform each notification in a job into a row in a file.
The file is then uploaded to S3.
The files will later be aggregated by the notifications-ftp app and sent to DVLA.
The method that uploads the file to S3 should be pulled out into the notifications-utils package, since it is the same method used in notifications-admin.
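A minimal sketch of the upload step, assuming boto3; the bucket name and key layout here are placeholders, and the real helper is the one shared with notifications-admin:

```python
import boto3


def upload_job_to_s3(bucket_name, service_id, job_id, file_contents):
    # Write the job file into the service's folder in the bucket.
    # The key layout is illustrative, not the real config.
    key = 'service-{}/{}.csv'.format(service_id, job_id)
    boto3.resource('s3').Object(bucket_name, key).put(
        Body=file_contents,
        ServerSideEncryption='AES256',
    )
    return key
```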
Note that all of these tests have to be checked to ensure that they still call through to notify_db_session (notify_db is not required) to tear down the database after the test runs, since it's no longer necessary to pass it into the function just to invoke the sample_user function.
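For illustration only (everything beyond the notify_db_session and sample_user fixture names is hypothetical), a test now looks something like this:

```python
def test_get_user_returns_the_user(notify_db_session, sample_user):
    # notify_db_session is still requested so the database is torn down
    # after the test; sample_user no longer needs the db fixtures passed in.
    assert sample_user.email_address is not None
```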
This is so that the filtering, which we do on the admin side, is applied before pagination, so that the pages returned contain only valid, displayable jobs. Unfortunately this means that another config value has to be copied to the server side, but it's not the end of the world.
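A rough sketch of the idea, assuming Flask-SQLAlchemy; the model, column, and config names are illustrative:

```python
from flask import current_app


def dao_get_jobs_by_service_id(service_id, page=1, page_size=50):
    # Filter out the statuses the admin app would hide *before* paginating,
    # so every page returned is full of displayable jobs.
    # JOB_STATUSES_TO_HIDE stands in for the config value copied from the admin side.
    query = Job.query.filter(
        Job.service_id == service_id,
        Job.job_status.notin_(current_app.config['JOB_STATUSES_TO_HIDE']),
    ).order_by(Job.created_at.desc())
    return query.paginate(page=page, per_page=page_size)
```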
- uses 4 rather than 8 entries to test the sort (2 notifications × 2
columns on which we’re sorting)
- makes sure we test for when a scheduled job was created before a job
that’s been processed already
- removes any relative datetimes so the tests are independent of database speed (see the sketch after this list)
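For example, instead of offsets from `datetime.utcnow()`, the rows under test can be pinned to explicit timestamps; the helper names here are hypothetical:

```python
from datetime import datetime

# Explicit timestamps mean the ordering under test can never be affected by
# how long each insert takes.
create_job(sample_template, created_at=datetime(2016, 1, 1, 11, 0))
create_job(sample_template, created_at=datetime(2016, 1, 1, 13, 0),
           processed_at=datetime(2016, 1, 1, 17, 0))
```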
Say you have a dashboard with some jobs you sent. Normally it looks like this:
job | sent
--- | ---
file.csv | **5pm**
file.csv | 3pm
file.csv | 1pm
file.csv | 11am
However, if your 5pm job was scheduled at lunchtime, it will look like this:
job | sent
--- | ---
file.csv | 3pm
file.csv | 1pm
file.csv | **5pm**
file.csv | 11am
This is because the jobs are sorted by when they were created, not when
they were sent. It looks wrong.
**For jobs that have already been sent**
This commit changes the sort order to be based on `processed_at`
instead.
**For upcoming jobs**
If a job doesn’t have a `processed_at` time then it’s scheduled, but
hasn’t started yet. Only in this case should we still be sorting by
`created_at`.
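A minimal sketch of the combined ordering, assuming SQLAlchemy (model and column names as used illustratively here):

```python
from sqlalchemy import func

# Sent jobs sort by when they were processed; scheduled jobs, which have no
# processed_at yet, fall back to when they were created.
jobs = Job.query.order_by(
    func.coalesce(Job.processed_at, Job.created_at).desc()
).all()
```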
If you schedule a job you might change your mind or circumstances might
change. So you need to be able to cancel it. This commit adds a `POST`
endpoint for individual jobs which sets their status to `cancelled`.
This also means adding a new status of `cancelled`, so there’s a
migration…
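A sketch of what that endpoint could look like as a Flask route; the blueprint, URL, and dao helpers are illustrative, not the exact names in the codebase:

```python
from flask import Blueprint, jsonify

job_blueprint = Blueprint('job', __name__, url_prefix='/service/<service_id>/job')


@job_blueprint.route('/<job_id>/cancel', methods=['POST'])
def cancel_job(service_id, job_id):
    # Look up the job, flip its status to the new 'cancelled' value
    # (added by the migration) and persist it.
    job = dao_get_job_by_service_id_and_job_id(service_id, job_id)
    job.job_status = 'cancelled'
    dao_update_job(job)
    return jsonify(data=job.serialize()), 200
```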
Update the notifications/sms and notifications/email endpoints to send the template version to the queue.
Update the process_job celery task to send the template version to the queue.
When the send_sms or send_email task runs it will get the template by id and version.
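Roughly, the message placed on the queue now carries the version alongside the template id, and the send task looks up that exact version; the field and helper names below are illustrative:

```python
def build_message(template, recipient, personalisation):
    # The version now travels with the template id on the queue.
    return {
        'template': str(template.id),
        'template_version': template.version,
        'to': recipient,
        'personalisation': personalisation,
    }


def send_sms(message):
    # Sketch of the task side: fetch the exact version that was queued.
    template = dao_get_template_by_id_and_version(
        message['template'], message['template_version']
    )
    ...
```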
Created a data migration script to add the template_version column for jobs and notifications.
The existing jobs and notifications are given the template_version of the current template.
There is a chance this is the wrong template version, but that is deemed okay since the application is not live.
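A rough sketch of that migration, assuming Alembic; the table names and backfill SQL are illustrative:

```python
import sqlalchemy as sa
from alembic import op


def upgrade():
    # Add the new column, backfill existing rows with the template's current
    # version, then make it non-nullable.
    for table in ('jobs', 'notifications'):
        op.add_column(table, sa.Column('template_version', sa.Integer(), nullable=True))
        op.execute(
            'UPDATE {0} SET template_version = '
            '(SELECT version FROM templates WHERE templates.id = {0}.template_id)'.format(table)
        )
        op.alter_column(table, 'template_version', nullable=False)


def downgrade():
    op.drop_column('notifications', 'template_version')
    op.drop_column('jobs', 'template_version')
```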
Create unit tests for the dao_get_template_versions method.
Rename /template/<id>/version to /template/<id>/versions, which returns all versions for that template id and service id.
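A sketch of the renamed endpoint and the dao method behind it; the history model, blueprint, and serialisation are illustrative:

```python
from flask import jsonify


def dao_get_template_versions(service_id, template_id):
    # TemplateHistory stands in for the table holding one row per saved version.
    return TemplateHistory.query.filter_by(
        service_id=service_id, id=template_id
    ).order_by(TemplateHistory.version.desc()).all()


@template_blueprint.route('/<template_id>/versions', methods=['GET'])
def get_template_versions(service_id, template_id):
    versions = dao_get_template_versions(service_id, template_id)
    return jsonify(data=[v.serialize() for v in versions]), 200
```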
This PR removes the need for the email_safe function: the API does not create the email_from field for the service.
Tests were updated to reflect this change.
- brings the boto S3 code into a new AWS folder
- adds a CSV processing utils method (see the sketch below)
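A sketch of the kind of CSV helper being pulled into utils; the function name and return shape are illustrative:

```python
import csv
from io import StringIO


def get_rows_from_csv(file_contents):
    # Yield each data row as a dict keyed by the lower-cased column headers.
    reader = csv.DictReader(StringIO(file_contents))
    for row in reader:
        yield {key.strip().lower(): value for key, value in row.items()}
```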
Rejigs the jobs REST endpoint and removes some now-unused endpoints.
The endpoint calls the task with the job; the task processes the job and delegates SMS sends to the sms task (see the sketch below).
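A hedged sketch of that flow with Celery; the task names, queue, and helper functions are illustrative, not the real ones:

```python
from celery import Celery

celery_app = Celery('notifications')  # illustrative app setup


@celery_app.task(name='process-job')
def process_job(job_id):
    # Load the job, walk its rows, and delegate each SMS to the send task.
    job = dao_get_job_by_id(job_id)                    # hypothetical dao helper
    for row in get_rows_from_csv(get_job_file(job)):   # hypothetical helpers
        send_sms.apply_async(
            (str(job.service_id), row['phone_number'], str(job.template_id)),
            queue='sms',
        )


@celery_app.task(name='send-sms')
def send_sms(service_id, to, template_id):
    # Look up the template and hand the message to the SMS provider (sketch).
    ...
```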