This means that if we end up with some notifications sending and others
not, due to problems with the FTP connectivity for example, we don't
re-send the ones that worked.
As a reminder, letter PDF notifications start as `created` and stay that
way until we have sent the zip file to DVLA, at which point they are
updated to `sending`.
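As a rough sketch of that idempotency (the Letter class and helper below are illustrative stand-ins, not the real Notification model or code):

```python
from dataclasses import dataclass

@dataclass
class Letter:
    """Stand-in for a letter notification; illustrative only."""
    id: str
    status: str = "created"

def send_letters_to_dvla(letters, send_zip_to_dvla):
    # Only pick up letters still in 'created' - anything already 'sending'
    # went out with an earlier zip and must not be re-sent.
    to_send = [letter for letter in letters if letter.status == "created"]

    send_zip_to_dvla(to_send)

    # Only once the zip has been handed to DVLA do they move to 'sending'.
    for letter in to_send:
        letter.status = "sending"
```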
It seems that selecting the service_letter_contact in the validation method was causing SQLAlchemy to persist the object (presumably via an autoflush before the query). When the dao was then called to save the object, nothing was different, so we didn't persist the history object.
It may be time to take another look at how we version. :(
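For reference, the SQLAlchemy behaviour involved looks something like this (a generic, self-contained sketch - the Contact model is a stand-in, not the real ServiceLetterContact):

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Contact(Base):
    """Stand-in for ServiceLetterContact; illustrative only."""
    __tablename__ = "contact"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

pending = Contact(id=1, name="new contact")
session.add(pending)

# A SELECT issued during validation autoflushes the pending object...
session.query(Contact).filter_by(id=2).first()
print(pending in session.new)  # False - it has already been flushed

# ...so the later versioned save sees nothing new to record. Wrapping the
# lookup stops the flush from happening early:
with session.no_autoflush:
    session.query(Contact).filter_by(id=2).first()
```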
Grouping the letters into batches of limited size is necessary because
the SQS task message needs to stay under a certain size. We also
compress the task when sending it.
Add a collate-letter-pdfs task (name pending). This retrieves a list of
letter PDF files (just the metadata, not the actual data) from S3 and
loops through them, calling the ftp task zip-and-send-letter-pdfs. It
groups them up by adding them to lists while counting the total file
size; if it goes over a certain file size (currently set to 500MB) it
closes off that chunk, sends that list of files to the ftp app, and then
starts building up a new list.
DVLA have a hard 2GB limit on how big the zip files we send can be -
however we're going to be limited by the amount of memory on the ftp
app well before we get around to handling 2GB of PDF data - so the
limit is 500MB for now. We'll adjust it after we see how the ftp app
performs.
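Very roughly, the chunking looks like this (a sketch - the S3 metadata fields and the queue/task call in the trailing comment are assumptions):

```python
MAX_CHUNK_SIZE = 500 * 1024 * 1024  # 500MB for now; DVLA's hard limit is 2GB

def group_letter_pdfs(pdf_objects):
    """Yield lists of S3 keys whose combined size stays under MAX_CHUNK_SIZE."""
    chunk, chunk_size = [], 0
    for pdf in pdf_objects:  # e.g. [{"Key": "...", "Size": 123}, ...]
        if chunk and chunk_size + pdf["Size"] > MAX_CHUNK_SIZE:
            yield chunk
            chunk, chunk_size = [], 0
        chunk.append(pdf["Key"])
        chunk_size += pdf["Size"]
    if chunk:
        yield chunk

# Each chunk then becomes one call to the ftp app, something like:
# for filenames in group_letter_pdfs(letter_pdfs_from_s3):
#     zip_and_send_letter_pdfs.apply_async([filenames], queue=QueueNames.FTP)
```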
- Added has_permission helper in models.py to check whether a service has a permission
- Moved letters pdf tasks to separate file
- Moved letters pdf tests to own file
- test that `create_letters_pdf` only gets called when the `letters_as_pdf` permission is set
- test that `build_dvla_file` does not get called when the `letters_as_pdf` permission is set
- test that creating the PDF calls the notifications template preview service
- test that uploading the data to S3 calls the upload_letters_pdf function
- check if a service_callback_api exists before putting tasks on the queue
- create a service_callback_api in tests before asserting that send_delivery_status_to_service has been called.
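Roughly the check before queueing (a sketch - the dao helper and queue name here are assumptions, not the real code):

```python
# Sketch only: don't queue a delivery-status callback for services that
# haven't configured a callback API. `get_service_callback_api_for_service`
# and `QueueNames.CALLBACKS` are assumed names.
service_callback_api = get_service_callback_api_for_service(notification.service_id)

if service_callback_api:
    send_delivery_status_to_service.apply_async(
        [str(notification.id)], queue=QueueNames.CALLBACKS
    )
```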
This reduces the number of error messages we log (we'll no longer log
at error level when build-dvla-file-for-job retries while waiting for
the task to finish), and makes sure we retry in the cases above - the db
or s3 having temporary trouble.
This causes an issue when it hits the max retry limit and tries to
raise your exception to let you deal with it - at this point it was
moaning because we pass in a string rather than an exception instance.
If exc isn't defined and we're inside an except block, Celery uses the
exception currently being handled instead.
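Put together, the retry handling looks something like this (a hedged sketch - the exception class and task body are placeholders, not the real code):

```python
from celery import Celery
from celery.utils.log import get_task_logger

app = Celery("example")
logger = get_task_logger(__name__)

class TransientUpstreamError(Exception):
    """Placeholder for the db/s3 'temporary trouble' cases."""

def do_the_work(job_id):
    raise TransientUpstreamError("s3 hiccup")  # stand-in for the real work

@app.task(bind=True, max_retries=5, default_retry_delay=300)
def build_dvla_file_for_job(self, job_id):
    try:
        do_the_work(job_id)
    except TransientUpstreamError as e:
        # Log at warning rather than error while we wait and retry.
        logger.warning("retrying build-dvla-file-for-job for job %s", job_id)
        # Pass a real exception instance to `exc`, never a plain string -
        # Celery raises it once the max retry limit is hit. Calling
        # self.retry() with no exc inside an except block works too: Celery
        # falls back to the exception currently being handled.
        raise self.retry(exc=e)
```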
this involved:
* moving that task to callback_tasks to prevent circular imports
* updating the dummy research mode callbacks (with actual SNS messages from the
SES simulator emails)
* refactoring tests
`create_custom_template` is not using `dao_create_template` since
it explicitly sets a template id. The Notification.template relationship
now refers to TemplateHistory objects, so we need to make sure that
`create_custom_template` creates a matching TemplateHistory record
when it creates a Template.
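So the fixture has to write both rows itself, roughly like this (a sketch - the real models have more required fields than shown here, and `db` comes from the app):

```python
def create_custom_template(template_id, service, template_type, content):
    # Explicit id is why dao_create_template (which versions for us) isn't used.
    template = Template(
        id=template_id,
        service=service,
        template_type=template_type,
        content=content,
        version=1,
    )
    # Notification.template now resolves to TemplateHistory, so a matching
    # history row with the same id and version has to exist as well.
    history = TemplateHistory(
        id=template_id,
        service_id=service.id,
        template_type=template_type,
        content=content,
        version=1,
    )
    db.session.add_all([template, history])
    db.session.commit()
    return template
```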
There are some null template_ids in the production database, which were
causing a failure as stats_template_usage_by_month has a constraint
that the column should not be null. This update only adds rows with
populated template_ids (a filtering sketch follows the list below).
- Updated Celery task
- Added a new test to check that null template_ids are not added to
the stats
- Modified the services_dao to return an int instead of a datetime to
make it easier to use, and removed the BST conversion on the year as it
is not relevant for a year
- Improved the tests so there is less logic, by ordering the results so
there is less reliance on the template id
- Renamed variable in stats_template_usage_by_month_dao.py to make it
consistent with the method
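The null handling itself is just a filter before the insert, something like this (a sketch - the model, column and variable names are assumptions about the real query):

```python
from sqlalchemy import func

# `db`, `NotificationHistory` and `start_of_month` come from the app;
# they are assumed here for illustration.
stats = (
    db.session.query(
        NotificationHistory.template_id,
        func.count(NotificationHistory.id).label("count"),
    )
    .filter(
        NotificationHistory.template_id.isnot(None),  # skip the null template_ids
        NotificationHistory.created_at >= start_of_month,
    )
    .group_by(NotificationHistory.template_id)
    .all()
)
```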
- Some tests were failing because of an import which was overriding the
one above it in test_scheduled_tasks.py
- Fixed an import error in test_services_dao.py after a method rename
Currently some pages are timing out due to the time it takes to perform
database queries. This is an attempt to improve performance by running
the query against the notification history table once a day, using the
notification table for a delta between midnight and when the page is
requested, and combining the results (a sketch of the combining step
follows the list below).
- Added Celery task for doing the work
- Added a dao to handle the insert and update of the stats table
- Updated tests to test the new functionality
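The combining step is then roughly (a sketch - the dao helper names are made up):

```python
from collections import Counter

def fetch_notification_stats(service_id, midnight):
    # The expensive notification_history query runs once a day via the Celery
    # task and lands in the stats table; reading it back is cheap.
    historic = Counter(dict(dao_get_stats_from_stats_table(service_id)))

    # The delta only touches the (much smaller) notification table, covering
    # midnight up to now.
    todays = Counter(dict(dao_get_stats_since(service_id, midnight)))

    return dict(historic + todays)
```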