- Added a new task to process incomplete jobs
- Added tests for the new task
- Updated the check for incomplete jobs method to start the new task
This will effectively resume tasks which were interrupted for some reason
whilst they were being processed. In some cases only part of the CSV
was processed; this finds the place in the CSV and continues processing
from that point.
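Roughly how the resume works - a minimal sketch, assuming the job's CSV is
available as a list of rows and that each processed row created exactly one
notification (the names here are illustrative, not the real Notify helpers):

```python
from typing import Callable, List


def find_resume_point(notifications_created: int) -> int:
    # Rows already turned into notifications were handled before the
    # interruption, so processing resumes at that index.
    return notifications_created


def process_incomplete_job(
    csv_rows: List[dict],
    notifications_created: int,
    process_row: Callable[[dict], None],
) -> None:
    # Skip everything processed before the interruption and carry on from
    # the first unprocessed row.
    for row in csv_rows[find_resume_point(notifications_created):]:
        process_row(row)
```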
1. No longer create jobs when creating letters from the API 🎉
2. Bulk update notifications, using their notification references, after
   we send them to DVLA - marking each as either success or error (sketch below)
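A minimal sketch of the reference-based bulk update, using SQLAlchemy Core
with an illustrative table rather than the real Notify schema:

```python
from sqlalchemy import Column, MetaData, String, Table, create_engine, update

metadata = MetaData()
notifications = Table(
    "notifications",
    metadata,
    Column("reference", String, primary_key=True),
    Column("status", String),
)


def bulk_update_by_reference(engine, references, new_status):
    # One UPDATE ... WHERE reference IN (...) rather than a round trip per
    # notification returned from DVLA.
    stmt = (
        update(notifications)
        .where(notifications.c.reference.in_(references))
        .values(status=new_status)
    )
    with engine.begin() as conn:
        conn.execute(stmt)


# e.g. bulk_update_by_reference(engine, successful_refs, "delivered")
#      bulk_update_by_reference(engine, failed_refs, "technical-failure")
```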
so that it doesn't appear generic when it's actually specific to
sending the daily notification totals. To do this, split it out into a
separate performance_platform directory, containing the business logic,
and make the performance_platform_client incredibly thin - all it
handles is adding ids to payloads, and sending stats.
Also, some changes to the config (not all done yet), since there is one
token per endpoint, not one for the whole platform as we'd previously
coded it.
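A minimal sketch of what "incredibly thin" means here - the client only knows
how to add an _id and POST the payload with its endpoint's token. The _id
scheme and names are illustrative, not the real client:

```python
import base64

import requests


class PerformancePlatformClient:
    def __init__(self, endpoint_url, token):
        # One token per endpoint: each instance is bound to a single dataset
        # URL and the bearer token for that dataset.
        self.endpoint_url = endpoint_url
        self.token = token

    def add_id(self, payload):
        # A deterministic _id means re-sending the same stats overwrites the
        # existing record instead of duplicating it.
        raw = "{_timestamp}{service}{channel}".format(**payload)
        payload["_id"] = base64.b64encode(raw.encode()).decode()
        return payload

    def send_stats(self, payload):
        requests.post(
            self.endpoint_url,
            json=self.add_id(payload),
            headers={"Authorization": "Bearer {}".format(self.token)},
        )
```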
- Created TaskNames for DVLA_FILES rather than have DVLA_FILES in QueueNames
- Removed PROCESS_FTP from all_queues() as this was causing problems in picking up letter job tasks
- Created a test to ensure that we don't arbitrarily add queue names to all_queues (sketched below)
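A minimal sketch of that guard test - the queue names and structure are
illustrative, not the real Notify constants:

```python
class QueueNames:
    DATABASE = "database-tasks"
    SEND_SMS = "send-sms-tasks"
    PROCESS_FTP = "process-ftp-tasks"  # deliberately not consumed by workers

    @staticmethod
    def all_queues():
        return [
            QueueNames.DATABASE,
            QueueNames.SEND_SMS,
        ]


def test_all_queues_is_an_explicit_allow_list():
    # Fails if someone adds a queue constant without deciding whether the
    # workers should consume it - all_queues() must be updated deliberately.
    assert set(QueueNames.all_queues()) == {
        "database-tasks",
        "send-sms-tasks",
    }
```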
Previously in run_app_paas.sh, we captured stdout from the app and
piped that into the log file. However, this caused a number of
problems, mainly:
* exceptions with stack traces often weren't formatted properly,
  and kibana could not parse them
* celery logs were duplicated - we'd collect both the JSON logs and
  the human-readable stdout logs.
Instead, with the updated utils library, we can log JSON straight to
the appropriate directory.
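A stdlib-only sketch of logging JSON straight to a file - the formatter
fields and filename are illustrative, not the notifications-utils
implementation:

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    def format(self, record):
        log = {
            "time": self.formatTime(record),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        if record.exc_info:
            # Keep the whole traceback in one JSON field so kibana can parse
            # it, instead of it being split across many stdout lines.
            log["exception"] = self.formatException(record.exc_info)
        return json.dumps(log)


handler = logging.FileHandler("app.log.json")
handler.setFormatter(JsonFormatter())
logging.getLogger("app").addHandler(handler)
```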
In tests, we were replacing os.environ with a basic dict so that
we didn't overwrite the contents of the real environment during tests.
However, os.environ doesn't accept non-str values, so this commit
changes the fixture so that it asserts that all values set are strings.
We needed to change how we store the IP whitelist in the environment
because of this.
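A minimal sketch of that kind of fixture (names are illustrative):

```python
import os

import pytest


class StrictEnviron(dict):
    def __setitem__(self, key, value):
        # os.environ only accepts str keys and values, so fail loudly in the
        # test instead of letting a non-str value sneak through.
        assert isinstance(key, str), "environment keys must be strings"
        assert isinstance(value, str), "environment values must be strings"
        super().__setitem__(key, value)


@pytest.fixture
def environ(monkeypatch):
    # Copy the real environment so tests never mutate it, but keep
    # os.environ's string-only behaviour.
    fake = StrictEnviron(os.environ)
    monkeypatch.setattr(os, "environ", fake)
    return fake
```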
* Alter config so an error will be raised if you forget to mock out a
  celery call in one of your tests
* Remove an unneeded exception type that was masking errors
* Two separate jobs, one for sms & email and another for letters (see sketch below)
* Change the delete celery task to accept a template type filter
* General refactor of tests to make them more readable
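A rough sketch of the two-job split plus the template-type filter, kept free
of the real Notify models - Notification here is just an illustrative
dataclass and the "database" is a list:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import List, Optional


@dataclass
class Notification:
    notification_type: str  # "sms", "email" or "letter"
    created_at: datetime


def delete_notifications(
    notifications: List[Notification],
    older_than: timedelta,
    notification_types: Optional[List[str]] = None,
) -> List[Notification]:
    # Returns the notifications to keep. The optional filter lets one job
    # handle sms & email and a separate job handle letters.
    cutoff = datetime.utcnow() - older_than
    return [
        n
        for n in notifications
        if not (
            n.created_at < cutoff
            and (notification_types is None
                 or n.notification_type in notification_types)
        )
    ]


# e.g. one job: delete_notifications(db, timedelta(days=7), ["sms", "email"])
#      another: delete_notifications(db, timedelta(days=7), ["letter"])
```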
This allows us to reference it across the API code base and return it in the API.
It's not currently attached to the service DB model - it's a static method on the class.
Same as how we ignore ‘send yourself a test’ messages (see:
d8467bfc3c). The dashboard gets clogged
up with one-off messages otherwise, which affects:
- performance
- users’ ability to find their jobs