* Removed extra log messages so that two log messages are no longer
generated per exception (InvalidRequest also logs), and updated the
InvalidRequest log message to include the exception type and exception
information
* Added extra asserts to ensure the exception messages are printed
* Added logic to the endpoint to extract the specific page requested
before sending it to template preview (see the sketch after this list).
This stops the whole PDF file being sent to template preview for each
page, which is really inefficient in terms of network traffic and
memory usage.
* Updated tests to add a mock for the new call to utils
* Added a new test case for exceptions in the PDF extraction process
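A minimal sketch of the page extraction step, assuming the PDF arrives as bytes and using pypdf; the helper name is hypothetical, not the actual call into utils:

```python
# Illustrative sketch only - extract_page_from_pdf is a hypothetical helper.
from io import BytesIO

from pypdf import PdfReader, PdfWriter


def extract_page_from_pdf(pdf_data: bytes, page_number: int) -> bytes:
    """Return a one-page PDF containing only the requested page (1-indexed)."""
    reader = PdfReader(BytesIO(pdf_data))
    if not 1 <= page_number <= len(reader.pages):
        raise ValueError(f"page {page_number} not found in PDF")

    writer = PdfWriter()
    writer.add_page(reader.pages[page_number - 1])

    output = BytesIO()
    writer.write(output)
    return output.getvalue()
```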
The problem has been resolved but we need to replay the messages that
are missing. We have been sent a file containing client_references for
all the notifications that the service needs updates for, which means
we can remove the need to request the data from the database.
In order for the PR to be backwards compatible, I have added an optional
parameter, "encrypted_status_update". If this is not None, the new code
path is called.
The next PR will send the encrypted data to this task.
A final PR will remove the code that uses the database to get the notification and service callback api.
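A minimal sketch of the backwards-compatible signature; the helper names are illustrative stubs, not the real functions in the codebase:

```python
# Illustrative stubs - the real decrypt/db/callback functions differ.
def decrypt(payload):
    raise NotImplementedError

def get_status_update_from_db(notification_id):
    raise NotImplementedError

def send_callback(status_update):
    raise NotImplementedError


def send_delivery_status_to_service(notification_id, encrypted_status_update=None):
    if encrypted_status_update is not None:
        # New path: use the data sent with the task, no db queries needed.
        status_update = decrypt(encrypted_status_update)
    else:
        # Old path, kept so tasks queued before the deploy still complete.
        status_update = get_status_update_from_db(notification_id)
    send_callback(status_update)
```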
We have seen problems with the service callback workers due to the db
connection pool being exhausted. When the worker picks up the task, it
makes a db query to get the notification, a query to get the callback
url, and then closes the session before it makes the 3rd party request.
However, even closing the session before the (potentially lengthy)
web request wasn't enough - we've seen a significant number of
`sqlalchemy.exc.TimeoutError`s.
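For context, a sketch of the flow described above with hypothetical helper names; the point is that the pooled connection is only released at `session.close()`, just before the slow outbound request:

```python
# Illustrative sketch - the helper names are hypothetical stand-ins.
import requests


def get_notification(session, notification_id): ...       # db query 1
def get_service_callback_api(session, service_id): ...    # db query 2
def build_payload(notification): ...


def process_service_callback(session, notification_id):
    notification = get_notification(session, notification_id)
    callback_api = get_service_callback_api(session, notification.service_id)
    # Release the pooled connection before the (potentially slow) web request;
    # otherwise every slow callback pins a connection for its full duration.
    session.close()
    requests.post(callback_api.url, json=build_payload(notification), timeout=30)
```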
This reverts commit 2dfbd93c7e
The JobStatistics table is going to be deleted. There are currently
3 tasks which use the JobStatistics model via the Statistics DAO, so we
need to make sure that these tasks are no longer being called before
they are deleted in a separate PR.
This commit deletes:
* The `create_initial_notification_statistic_tasks` function, which is
used to call the `record_initial_job_statistics` task.
* The `create_outcome_notification_statistic_tasks` function, which is
used to call the `record_outcome_job_statistics` task.
* And the scheduling of the `timeout-job-statistics` scheduled task.
Use the correct type for the call, as some calls require JSON and some
require binary. The additional checks ensure that the JSON decode either
fails or succeeds in the correct cases.
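A minimal sketch of this kind of check, assuming the caller knows which calls return JSON; the function name is illustrative:

```python
import json


def parse_response(content: bytes, expects_json: bool):
    """Decode JSON responses; pass binary responses (e.g. PDFs) through as-is."""
    if expects_json:
        # For a JSON call this must succeed; a decode failure is a real error.
        return json.loads(content)
    # Binary calls are returned untouched rather than decoded.
    return content
```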
- to be a precompiled letter, the template must be a letter, be hidden, and have a name matching the one expected in config['PRECOMPILED_TEMPLATE_NAME']
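A minimal sketch of that check, assuming a template object with `template_type`, `hidden` and `name` attributes and a Flask-style config mapping:

```python
# Illustrative sketch of the precompiled-letter check described above.
def is_precompiled_letter(template, config):
    return (
        template.template_type == "letter"
        and template.hidden
        and template.name == config["PRECOMPILED_TEMPLATE_NAME"]
    )
```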
In the update_letter_notifications_statuses task we now check whether
each row in the response file that we receive from the DVLA has the
value of 'Sorted' or 'Unsorted' in the postcode validation field. We
then calculate the number of Sorted and Unsorted rows for each day and
save each day as a row in the daily_sorted_letter table.
The data in daily_sorted_letter table should be in local time, so we
convert the datetime before saving.
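A minimal sketch of the per-day tally and the UTC-to-local conversion, assuming local time is Europe/London and that rows arrive as (sent_at_utc, validation) pairs; the names are illustrative:

```python
from collections import Counter
from datetime import timezone
from zoneinfo import ZoneInfo

LOCAL_TZ = ZoneInfo("Europe/London")  # assumption: local time is UK time


def tally_sorted_unsorted(rows):
    """rows: iterable of (sent_at_utc, validation) pairs, where validation
    is 'Sorted' or 'Unsorted'. Returns counts keyed by (local_date, validation)."""
    counts = Counter()
    for sent_at_utc, validation in rows:
        # daily_sorted_letter stores local dates, so convert before bucketing.
        local_date = sent_at_utc.replace(tzinfo=timezone.utc).astimezone(LOCAL_TZ).date()
        counts[(local_date, validation)] += 1
    return counts
```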
The response files we receive from the DVLA when we send letters
contain a row for each letter and a field with a value of 'Unsorted' or
'Sorted'. This table will be used to store the total number of
'Unsorted' and 'Sorted' letter notifications per day.
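A sketch of what the table might look like as a SQLAlchemy model; the column names here are assumptions based on the description, not the real schema:

```python
from sqlalchemy import Column, Date, Integer
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class DailySortedLetter(Base):
    __tablename__ = "daily_sorted_letter"

    id = Column(Integer, primary_key=True)
    billing_day = Column(Date, nullable=False, unique=True)  # local date
    sorted_count = Column(Integer, nullable=False, default=0)
    unsorted_count = Column(Integer, nullable=False, default=0)
```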