Grouping the letters into a limited number of files per task is necessary because
the SQS task message needs to stay under a certain size. We also compress the
task when sending.
Add a collate-letter-pdfs task (name pending). This retrieves a list of
letter PDF files (just the metadata, not the actual data) from S3 and
loops through them, calling the ftp task zip-and-send-letter-pdfs. It
groups the files into lists while counting their total filesize; if that
goes over a certain filesize (currently set to 500MB) it closes the
chunk, sends that list of files off to the ftp app, and then starts
building up a new list.
DVLA have a hard 2GB limit on how big the zip files we send can be.
However, we're going to be limited by the amount of memory on the ftp
app well before we get around to handling 2GB of PDF data, so the
limit is 500MB for now. We'll adjust it after we see how ftp performs.
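A minimal sketch of the chunking described above, assuming the S3 listing gives us dicts with 'Key' and 'Size' entries (the function name and constants are illustrative, not the real code):

```python
MAX_FILESIZE = 500 * 1024 * 1024  # 500MB, well under DVLA's hard 2GB limit


def group_letter_pdfs_by_size(letter_pdfs, max_filesize=MAX_FILESIZE):
    """Yield lists of S3 keys whose combined size stays under max_filesize."""
    current_chunk, current_size = [], 0
    for pdf in letter_pdfs:
        filesize = pdf['Size']  # metadata only - the PDF body is never downloaded here
        if current_chunk and current_size + filesize > max_filesize:
            yield current_chunk
            current_chunk, current_size = [], 0
        current_chunk.append(pdf['Key'])
        current_size += filesize
    if current_chunk:
        yield current_chunk
```

Each yielded chunk would then be handed to the ftp app's zip-and-send-letter-pdfs task.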
- Introduce a `_raise` flag for `get_notification_by_id` so that SQLAlchemy raises the `NoResultFound` error rather than the app (a sketch follows this list)
- Refactor `dao_set_created_live_letter_api_notifications_to_pending` to use a join when getting services that don't have `letters_as_pdf`, as it's marginally faster
- Added a has_permission helper in models.py to check whether a service has a given permission
- Moved letters PDF tasks to a separate file
- Moved letters PDF tests to their own file
- Check if a service_callback_api exists before putting tasks on the queue
- Call create_service_callback_api in tests before asserting that send_delivery_status_to_service has been called
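A sketch of what the `_raise` flag could look like, assuming the dao wraps a plain Flask-SQLAlchemy query (the import path is an assumption):

```python
from app.models import Notification  # module path assumed


def get_notification_by_id(notification_id, _raise=False):
    # .one() lets SQLAlchemy raise NoResultFound itself; .first() returns None
    # and leaves the "not found" handling to the calling code.
    query = Notification.query.filter_by(id=notification_id)
    return query.one() if _raise else query.first()
```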
This reduces the number of error messages we log (we'll no longer log
at error level when build-dvla-file-for-job retries while waiting for
the task to finish), and makes sure we retry in the cases above, i.e. the db
or s3 having temporary trouble.
This causes an issue when celery hits the max retry limit and tries to
re-raise your exception to let you deal with it; at that point it was
complaining because we pass in a string.
If exc isn't defined and we're inside an exception block, celery uses
the exception currently being handled instead.
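A sketch of the retry call this describes; the task name, queue name and the work being retried are illustrative:

```python
from app import notify_celery  # illustrative import, not the real wiring


@notify_celery.task(bind=True, name="build-dvla-file-for-job", max_retries=5, default_retry_delay=300)
def build_dvla_file_for_job(self, job_id):
    try:
        build_dvla_file(job_id)  # placeholder for the real work
    except Exception:
        # Don't pass a string as exc: celery re-raises exc once max_retries is
        # exceeded, and re-raising a string fails. Inside an except block exc can
        # be omitted and celery falls back to the exception currently being handled.
        raise self.retry(queue="retry-tasks")
```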
This involved:
* moving that task to callback_tasks to prevent circular imports
* updating the dummy research mode callbacks (with actual SNS messages from the
SES simulator emails)
* refactoring tests
Added a new endpoint which combines the data in the stats table with the
data from the notifications table, instead of using all the data from
the notification_history table. This should speed up the query times
and improve the page performance.
- Made the stats create and update function transactional, as it actually
wasn't committing the data to the table
- Added the query to get data from the stats table
- Added a method to combine the two results (see the sketch after this list)
- Added the endpoint
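A sketch of the combining step, merging per-template counts from the stats table with the delta counted from the notifications table (the row attributes and returned shape are assumptions):

```python
from collections import Counter


def combine_template_usage(stats_rows, notification_rows):
    """Merge counts from the stats table with today's delta from the notifications table."""
    combined = Counter()
    for row in stats_rows:
        combined[row.template_id] += row.count
    for row in notification_rows:
        combined[row.template_id] += row.count
    # The returned shape is illustrative; the real endpoint serialises more fields.
    return [{'template_id': template_id, 'count': count}
            for template_id, count in combined.items()]
```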
There are some null template_ids in the production database, which was
causing a failure because stats_template_usage_by_month has a constraint
that template_id should not be null. This update only adds populated
template_ids (a sketch of the null guard follows the list below).
- Updated the Celery task
- Added a new test to check that null template_ids are not added
to the stats
- Modified the services_dao to return an int instead of a datetime to
make it easier to use, and removed the BST function on year as it is not
relevant for a year
- Improved tests so there is less logic, by ordering the result so there
is less reliance on the template id
- Renamed a variable in stats_template_usage_by_month_dao.py to make it
consistent with the method
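A sketch of the null guard described above; insert_or_update_stats stands in for the real dao call and the row attributes are assumptions:

```python
def persist_template_usage(rows, insert_or_update_stats):
    """Write monthly template usage rows, skipping any with a null template_id.

    stats_template_usage_by_month has a NOT NULL constraint on template_id and
    some production notifications have no template, so those rows are dropped
    here rather than failing the whole task.
    """
    for row in rows:
        if row.template_id is None:
            continue
        insert_or_update_stats(row.template_id, row.month, row.year, row.count)
```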
Currently some pages are timing out due to the time it takes to perform
database queries. This is an attempt to improve performance by running
the query against the notification_history table once a day, using the
notification table for a delta between midnight and when the page is
requested, and combining the results (sketched after the list below).
- Added a Celery task for doing the work
- Added a dao to handle the insert and update of the stats table
- Updated tests to cover the new functionality
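A sketch of the day-boundary split, with the two dao calls passed in as stand-ins (the helper name and the simplified BST handling are assumptions):

```python
from datetime import datetime, time


def midnight_today():
    # Start of the current day in UTC; the real code also has to deal with BST.
    return datetime.combine(datetime.utcnow().date(), time.min)


def usage_for_service(service_id, query_stats_table, query_notifications_table):
    """Combine pre-aggregated stats (built daily from notification_history) with
    a live delta counted from the notifications table since midnight."""
    cutoff = midnight_today()
    aggregated = query_stats_table(service_id, until=cutoff)
    delta = query_notifications_table(service_id, since=cutoff)
    return aggregated + delta
```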
Also, downgrade clients returning bad status codes from `exception` to
`error`: it's not a problem with our system, so we don't want it to trip
any CloudWatch alerts or anything.
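A sketch of the logging change, assuming the callback is posted with requests (the function name and message text are illustrative):

```python
import requests
from flask import current_app


def send_delivery_status_callback(url, payload):
    try:
        response = requests.post(url, json=payload, timeout=30)
        response.raise_for_status()
    except requests.HTTPError as e:
        # A client's endpoint returning a bad status code isn't a fault in our
        # system, so log it with logger.error rather than logger.exception and
        # keep it from tripping alerting on exception-level logs.
        current_app.logger.error(
            "Callback to {} returned status {}".format(url, e.response.status_code)
        )
```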
Instead of retrying when there are genuine errors, only retry when the
errors are unexpected, as otherwise the retries will happen and fail for
the same reason, e.g. the message has changed format and will require a
code update (a sketch follows the list below).
- Updated process_ses_results to only retry if there is an unknown
exception
- Updated the test to assert that there is a retry when there is an
unknown exception
- Updated the retry settings and max_retries of the process_ses_results celery
task to be the same as the other retry strategies in that file
- Provided a message with the 200 response to be consistent with how other
responses are handled
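A sketch of the retry strategy described above, assuming a known exception type for messages we can recognise as malformed (the exception class, task wiring and queue name are all illustrative):

```python
from flask import current_app
from app import notify_celery  # illustrative import


class InvalidSesMessage(Exception):
    """Illustrative: raised for SES messages we recognise as malformed."""


@notify_celery.task(bind=True, name="process-ses-results", max_retries=5, default_retry_delay=300)
def process_ses_results(self, response):
    try:
        update_notification_status_from_ses(response)  # placeholder for the real work
    except InvalidSesMessage:
        # A message in an unexpected format will fail the same way on every retry
        # and needs a code change, so log it and stop here.
        current_app.logger.error("process-ses-results received a message it could not parse")
    except Exception as e:
        # Only genuinely unexpected failures (e.g. a transient db problem) are retried.
        self.retry(exc=e, queue="retry-tasks")
```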