Started passing `sender_id` to the `save_email`, `save_sms` and
`process_job` tasks, with a default value of `None`.
If `sender_id` is provided, the `save_email` and `save_sms` tasks will
use it to determine the reply-to email address or the SMS sender for the
notifications in the job. The `process_job` task will start using the
value in another commit.
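A rough sketch of the new signature, assuming a typical Celery task setup (the decorator, import path and other parameters are illustrative, not the repo's exact code):

```python
from app import notify_celery  # assumed import path

@notify_celery.task(name="save-sms")
def save_sms(service_id, notification_id, encrypted_notification, sender_id=None):
    # sender_id defaults to None so existing callers are unaffected; when
    # set, it selects the SMS sender (or the reply-to address in save_email)
    ...
```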
Sets the updated_at to the current time when the NotificationHistory
is modified (which happens occasionally, for example to set the status
to returned letter).
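For reference, a minimal sketch of one way to express this with SQLAlchemy's `onupdate` (the model layout here is assumed):

```python
from datetime import datetime
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()

class NotificationHistory(db.Model):
    __tablename__ = "notification_history"
    id = db.Column(db.String, primary_key=True)
    status = db.Column(db.String)
    # stamped automatically whenever the row is modified, e.g. when the
    # status is updated to returned-letter
    updated_at = db.Column(db.DateTime, nullable=True, onupdate=datetime.utcnow)
```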
Adds an API endpoint `/letters/returned` that accepts a list of
notification references and creates a task to update their status.
Adds a new task that uses the list of references to update the status
of notifications to 'returned-letter'.
The update is currently done with a single query, and we log the
number of changed records (including notification history records).
This could potentially be done within the `/letters/returned` endpoint,
but creating a task right away allows us to extend this more easily
in the future (e.g. logging missing notifications or adding callbacks).
The task runs on the database tasks queue.
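A sketch of the endpoint/task pairing described above; the blueprint, dao helper and queue names are assumptions rather than the repo's exact identifiers:

```python
from flask import Blueprint, current_app, jsonify, request

from app import notify_celery  # assumed import path

letter_job_blueprint = Blueprint("letter_job", __name__)

@letter_job_blueprint.route("/letters/returned", methods=["POST"])
def create_process_returned_letters_job():
    references = request.get_json()["references"]
    # hand the update off to a task rather than doing it inline, so it
    # can be extended later (missing-notification logging, callbacks)
    process_returned_letters_list.apply_async([references], queue="database-tasks")
    return jsonify(references=references), 200

@notify_celery.task(name="process-returned-letters-list")
def process_returned_letters_list(references):
    # assumed dao helper: updates notifications and history in one query
    updated = dao_update_notifications_status_by_reference(references, "returned-letter")
    current_app.logger.info("Updated {} notifications to returned-letter".format(updated))
```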
process_incomplete_jobs loops through jobs, processing them in a single
task. This means that if the job statuses are all 'error' and the
process_incomplete_jobs task then fails, the later jobs in the list that
never got picked up won't have their status set back to in progress or
their processing_started time updated - so they will be stuck in 'error'
forever.
Instead, we set the job statuses to in progress and the start time to
now before we process any - so if the process_incomplete_jobs task fails
later, the jobs will be picked up (again) by the check_job_status task
in half an hour's time.
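Roughly, the reordering looks like this (the dao helpers and the per-job task are assumed names):

```python
from datetime import datetime

def process_incomplete_jobs(job_ids):
    jobs = [dao_get_job_by_id(job_id) for job_id in job_ids]
    # mark ALL jobs as in progress before processing any, so a failure
    # part-way through leaves the rest in a state that check_job_status
    # will pick up again in half an hour
    for job in jobs:
        job.job_status = "in progress"
        job.processing_started = datetime.utcnow()
        dao_update_job(job)
    for job in jobs:
        process_incomplete_job(str(job.id))  # assumed per-job task
```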
The process_incomplete_jobs task runs through all incomplete jobs in
a loop, so it might not get a chance to update the processing_started
time of the last job before check_job_status runs again (every minute).
So before we even trigger the process_incomplete_jobs task, let's set
the status of the jobs to error, so that we don't identify them for
re-processing again.
We might stop processing jobs mid-way through if, for example, a
deploy or downscale kills the box working on them. We have a scheduled
task that identifies any job that we started processing more than half
an hour ago and that is still processing.
However, we encountered a bug where we triggered the
process_incomplete_job task multiple times, because the
processing_started of the job was still set to half an hour ago. If we
reset processing_started to the current time, then the job won't get
picked up by future runs of the check_job_status scheduled task.
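Putting the two fixes together, the scheduled check looks roughly like this (the dao helper and queue name are assumptions):

```python
from datetime import datetime, timedelta

def check_job_status():
    thirty_minutes_ago = datetime.utcnow() - timedelta(minutes=30)
    stale_jobs = dao_get_in_progress_jobs_older_than(thirty_minutes_ago)  # assumed dao helper
    job_ids = [str(job.id) for job in stale_jobs]
    # flag the jobs as error *before* triggering reprocessing, so the
    # next run of this task (a minute later) doesn't select them again
    for job in stale_jobs:
        job.job_status = "error"
        dao_update_job(job)
    if job_ids:
        process_incomplete_jobs.apply_async([job_ids], queue="job-tasks")
```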
* Added missing items from the template which are required
* Returned the file as a JSON string, with the file contents as a
  base64 encoded string
* Updated tests to match the desired format
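A minimal sketch of the base64-in-JSON shape (standard library plus Flask; the key name is illustrative):

```python
import base64
from flask import jsonify

def file_as_json(file_bytes):
    # base64-encode the raw bytes, then decode to str so it is JSON-serialisable
    return jsonify({"content": base64.b64encode(file_bytes).decode("utf-8")})
```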
- Was getting `KeyError: 'Error'` test failures because the side_effect did not generate enough information for the ClientError class; this PR adds the missing information (a sketch of a well-formed side_effect follows this list).
- Only call the create fake response file task in research mode on the preview and development environments, to avoid impacting response files on staging and live.
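The ClientError constructor indexes into the error response, so a bare side_effect blows up with `KeyError: 'Error'`. A sketch of a well-formed one (the patch target and error code are illustrative):

```python
from unittest.mock import patch
from botocore.exceptions import ClientError

error = ClientError(
    error_response={"Error": {"Code": "InvalidParameterValue", "Message": "some error"}},
    operation_name="Publish",
)

with patch("app.aws_sns_client.send_sms", side_effect=error):  # assumed patch target
    ...  # exercise the code under test
```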
It seems selecting the service_letter_contact in the validation method was causing SQLAlchemy to persist the object. By the time the dao was called to save the object nothing was different, so we didn't persist the history object.
It may be time to take another look at how we version. :(
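If that diagnosis is right, one way to express the fix is to suppress the premature autoflush around the lookup; a sketch, with assumed names:

```python
# queries normally autoflush pending changes to the db first, which is
# what persisted the object early; no_autoflush suppresses that
with db.session.no_autoflush:
    service_letter_contact = ServiceLetterContact.query.get(letter_contact_id)
```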
- Added has_permission helper in models.py to check a permission on a service (see the sketch after this list)
- Moved letters pdf tasks to separate file
- Moved letters pdf tests to own file
- test that `create_letters_pdf` only gets called when `letters_as_pdf` permission set
- test that `build_dvla_file` does not get called when `letters_as_pdf` permission set
- test creating the pdf call to notifications template preview service
- test uploading the data to s3 calls upload_letters_pdf function
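The helper itself is tiny; a sketch, assuming a Flask-SQLAlchemy `Service` model with a `permissions` relationship:

```python
class Service(db.Model):
    # ... existing columns and the permissions relationship ...

    def has_permission(self, permission):
        # e.g. service.has_permission('letters_as_pdf')
        return permission in [p.permission for p in self.permissions]
```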
This reduces the amount of error messages we log (we'll no longer log
at error level when build-dvla-file-for-job retries while waiting for
the task to finish), and makes sure we retry in the cases above - the
db or s3 having temporary trouble.
This causes an issue when Celery hits the max retry limit and tries to
re-raise your exception to let you deal with it - at this point it
complained because we passed in a string.
If exc isn't defined and we're inside an exception block, Celery uses
the current exception instead.
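A sketch of the safe pattern (the task name comes from the commit above, the rest is illustrative):

```python
@notify_celery.task(bind=True, name="build-dvla-file-for-job", max_retries=5)
def build_dvla_file(self, job_id):
    try:
        ...  # build the file
    except Exception as e:
        # pass a real exception instance: at the max retry limit Celery
        # re-raises whatever was given as exc, and a string breaks that.
        # Omitting exc inside an except block also works, since Celery
        # falls back to the current exception.
        self.retry(exc=e, countdown=300)
```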
This involved:
* moving that task to callback_tasks to prevent circular imports
* updating the dummy research mode callbacks (with actual SNS messages from the
ses simulator emails)
* refactoring tests
Instead of retrying if there are genuine errors, only retry if there
are errors which are unexpected; otherwise the retries will happen and
fail for the same reason, e.g. that the message has changed format and
will require a code update.
- Updated process_ses_results to only retry if there is an unknown
exception
- Updated tests to assert that there is a retry when there is an
unknown exception
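A sketch of the narrower retry, with an assumed message shape:

```python
import json
from flask import current_app

@notify_celery.task(bind=True, name="process-ses-results", max_retries=5)
def process_ses_results(self, response):
    try:
        ses_message = json.loads(response["Message"])  # assumed message shape
        ...  # handle the delivery receipt
    except KeyError:
        # a changed message format needs a code fix; retrying would fail
        # the same way every time, so just log it
        current_app.logger.error("Malformed SES message: {}".format(response))
    except Exception as e:
        # anything unexpected may be transient, so retry
        self.retry(exc=e)
```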
Created three new celery tasks:
* save_sms (will replace send_sms)
* save_email (will replace send_email)
* save_letter (will replace persist_letter)
The difference between the new tasks and the tasks they are replacing is
that we no longer pass in the datetime as a parameter.
The code has been changed to use the new tasks, and the tests now run
against the new tasks too. The old tasks will need to be removed in a
separate commit.
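The shape of the change, sketched for save_sms (parameter names illustrative):

```python
from datetime import datetime

# old: send_sms(service_id, notification_id, encrypted_notification, created_at)
@notify_celery.task(name="save-sms")
def save_sms(service_id, notification_id, encrypted_notification):
    # the timestamp is now decided when the task runs, not passed in by the caller
    created_at = datetime.utcnow()
    ...
```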
Comments from PR review. Updated code style in a few places to make it
more consistent with other code, added tests for letters and emails
so they are tested, and refactored some database queries into the dao file.
- Fixed code style
- Refactored database queries to dao code
- Added tests for emails and sms.
- Moved the process_incomplete_jobs to tasks.py
- Moved the process_incomplete_jobs test to test_tasks.py
- Cleaned up imports and other code style issues.
As the new task is not a scheduled one, moved it to the tasks.py file.
This makes it more consistent with other tasks. Updated a few code style
issues to make it more consistent with other code and hence more
maintainable in future.
- Added a log line for when a job starts, so that we know when the processing of a job begins and the number of notifications it contains
- Added a dao method to get the total number of notifications for a job id
- Added a test to check that the number of notifications in the table matches the job's notification_count
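A sketch of the dao method from the second bullet, assuming a Flask-SQLAlchemy `Notification` model:

```python
def dao_get_notification_count_for_job_id(job_id):
    # total notifications saved against this job, used to sanity-check
    # the job's notification_count
    return Notification.query.filter_by(job_id=job_id).count()
```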