There are some null template_ids in the production database, which were
causing a failure because stats_template_usage_by_month has a not-null
constraint on that column. This update only adds rows with populated
template_ids (see the sketch after the list below).
- Updated Celery task
- Added a new test to check that null template_ids are not added to the
stats
- Modified the services_dao to return an int instead of a datetime, to
make usage easier, and removed the BST conversion on the year as it is
not relevant for years
- Improved the tests so there is less logic, by ordering the results so
there is less reliance on the template id
- Renamed a variable in stats_template_usage_by_month_dao.py to make it
consistent with the method name
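A minimal sketch of the guard, with hypothetical names - usage rows with
a null template_id are filtered out before anything is inserted, so the
not-null constraint can't be hit:

```python
def filter_populated_template_usage(usage_rows):
    """Keep only rows whose template_id is set, so inserting into
    stats_template_usage_by_month can't violate its NOT NULL constraint."""
    return [row for row in usage_rows if row.template_id is not None]
```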
Currently some pages are timing out due to the time it takes to perform
database queries. This is an attempt to improve performance by running
the query against the notification history table once a day, using the
notification table for a delta between midnight and when the page is
run, and combining the results (see the sketch after the list below).
- Added Celery task for doing the work
- Added a dao to handle the insert and update of the stats table
- Updated tests to test the new functionality
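A sketch of the combine step, assuming both sides reduce to
name-to-count mappings: the expensive aggregate over notification
history is persisted once a day, and the page only adds the live delta
since midnight:

```python
from collections import Counter

def combine_stats(persisted_counts, delta_counts):
    """Merge the once-a-day counts (queried from notification history
    and stored in the stats table) with the delta queried live from the
    notification table for the period between midnight and now."""
    return dict(Counter(persisted_counts) + Counter(delta_counts))
```

For example, combine_stats({'sms': 1200}, {'sms': 34}) gives
{'sms': 1234}.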
Also, downgrade a client returning bad status codes from `exception` to
`error` - it's not a problem with our system, so we don't want it to
trip any CloudWatch alerts.
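In logging terms the change looks something like this (function name
hypothetical):

```python
import logging

logger = logging.getLogger(__name__)

def log_bad_client_status(status_code):
    # previously logger.exception(...), which alerting treats as an
    # application failure; a bad status code from a client isn't one
    logger.error("Client request failed with status code %s", status_code)
```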
Instead of retrying whenever there are genuine errors, only retry on
errors which are unexpected; otherwise the retries will happen and fail
for the same reason, e.g. the message has changed format and will
require a code update (see the sketch after the list below).
- Updated process_ses_results to only retry if there is an unknown
exception
- Updated the test to assert that there is a retry when there is an
unknown exception
- Updated the retry and max_retries of the process_ses_results celery
task to match the other retry strategies in that file
- Provided a message with the 200 response, to be consistent with how
other responses are handled
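A sketch of the new retry policy, with illustrative names and retry
values, using celery's standard decorator:

```python
import logging

from celery import shared_task

logger = logging.getLogger(__name__)


class InvalidSesMessageError(Exception):
    """Hypothetical: the SES callback doesn't match the expected format."""


def handle_response(response):
    """Stand-in for the real parsing and status-update logic."""


@shared_task(bind=True, max_retries=5, default_retry_delay=300)
def process_ses_results(self, response):
    try:
        handle_response(response)
    except InvalidSesMessageError:
        # An expected error: a retry would fail for the same reason,
        # e.g. the message format changed and needs a code update.
        logger.error("Invalid SES message, not retrying: %s", response)
    except Exception as exc:
        # Anything unexpected might be transient, so retry.
        raise self.retry(exc=exc)
```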
Created three new celery tasks:
* save_sms (will replace send_sms)
* save_email (will replace send_email)
* save_letter (will replace persist_letter)
The difference between the new tasks and the tasks they are replacing is
that we no longer pass in the datetime as a parameter.
The code has been changed to use the new tasks, and the tests now run
against the new tasks too. The old tasks will need to be removed in a
separate commit.
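Sketched with hypothetical signatures, the difference is just where the
timestamp comes from:

```python
from datetime import datetime

from celery import shared_task


def persist_notification(notification, created_at):
    """Stand-in for the real persistence call."""


@shared_task(name="save-sms")
def save_sms(service_id, notification_id, encrypted_notification):
    # the old send_sms took the datetime as a parameter from the caller;
    # the new task stamps the time itself when it saves the notification
    persist_notification(encrypted_notification, created_at=datetime.utcnow())
```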
Comments from PR review. Updated code style in a few places to make it
more consistent with other code, added tests for letters and emails
so they are covered, and refactored some database queries into the dao
file
- Fixed code style
- Refactored database queries into the dao code
- Added tests for emails and sms.
- Moved the process_incomplete_jobs to tasks.py
- Moved the process_incomplete_jobs test to test_tasks.py
- Cleaned up imports and other code style issues.
As the new task is not a scheduled one, moved it to the tasks.py file.
This makes it more consistent with other tasks. Updated a few code style
issues to make the code more consistent with other code, and hence more
maintainable in future.
- Added a new task to process incomplete jobs
- Added tests to test the new method
- Updated the check-for-incomplete-jobs method to start the new task
This will effectively resume tasks which were interrupted for some
reason whilst they were being processed. In some cases only part of the
csv was processed; this will find the right place in the csv and
continue processing from that point (a sketch follows the bullets below).
- Added a log line for when a job starts, including the number of notifications, so that we know when the processing of a job begins
- Added dao method to get total notifications for a job id
- Added a test to check whether the number of notifications in the table matches the job notification_count
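A sketch of the resume logic, assuming the persisted-notification count
can stand in for the CSV position (the dao method above provides that
count):

```python
def resume_job_rows(csv_rows, notifications_already_persisted, process_row):
    """Skip the rows that were handled before the interruption and
    continue processing from the first unprocessed one."""
    for row_number, row in enumerate(csv_rows):
        if row_number >= notifications_already_persisted:
            process_row(row)
```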
We now no longer create a job. At the end of the POST there is no
action, as we don't have any tasks to queue immediately - if it's a
real notification it'll get picked up by the evening scheduled task.
If it's a test notification, we create it with an initial status of
sending so that we can be sure it'll never get picked up - and then we
trigger the update-letter-notifications-to-sent-to-dvla task to set
the sent-at/by.
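A hypothetical sketch of that branch at the end of the POST:

```python
def create_letter_notification(notification, is_test, trigger_update_task):
    # real letters stay in 'created'; the evening scheduled task picks up
    # everything still in that status, so nothing is queued now
    notification["status"] = "sending" if is_test else "created"
    if is_test:
        # test letters will never be collected, so stamp them as sent now
        trigger_update_task([notification["reference"]])
    return notification
```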
1. No longer create jobs when creating letters from api 🎉
2. Bulk update notifications based on the notification references after
we send them to DVLA - either as success or as error
This means that if the task is accidentally run twice (eg if we
autoscale notify-celery-worker-beat to 2), it won't send letters twice.
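A sketch of the bulk update, assuming the app's SQLAlchemy session and a
Notification model with a reference column (the error path would be the
analogous update with a failure status):

```python
from datetime import datetime

from app import db  # assumed: the app's SQLAlchemy instance
from app.models import Notification  # assumed model name


def dao_update_notifications_sent_to_dvla(references):
    """One bulk UPDATE keyed on the references we sent to DVLA,
    rather than one query per notification."""
    db.session.query(Notification).filter(
        Notification.reference.in_(references)
    ).update(
        {"status": "sending", "sent_at": datetime.utcnow(), "sent_by": "dvla"},
        synchronize_session=False,
    )
    db.session.commit()
```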
Additionally, update some function names and config variables to make
it clear that they are referring to letter jobs, rather than all letter
content
Removed the tests for trial mode services from the scheduled tasks and the process job.
Having the validation in the POST notification and create job endpoints is enough.
Updated the test_service_whitelist test because the order of the array is not guaranteed.
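The whitelist fix is the standard one for unordered query results -
compare order-insensitively rather than positionally:

```python
def assert_same_items(actual, expected):
    """Order-insensitive comparison for results with no ORDER BY."""
    assert sorted(actual) == sorted(expected)
```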
They were always caught locally by celery's base handler; however,
we weren't logging them ourselves, which meant they wouldn't be put into
the json logs that are sent to CloudWatch.
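A sketch of the fix - log the exception ourselves before re-raising,
rather than relying only on celery's handler:

```python
import logging

from celery import shared_task

logger = logging.getLogger(__name__)


def do_work():
    """Stand-in for the task body."""


@shared_task
def example_task():
    try:
        do_work()
    except Exception:
        # celery's base handler catches this too, but logging it here puts
        # it into our own json logs, which are what ship to CloudWatch
        logger.exception("example_task failed")
        raise
```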
Specifically, all of the performance-platform-specific data layout now
happens in performance_platform_client.py - stuff like setting the
_timestamp, the period etc - and the perf-platform-specific nomenclature
is all handled there.
So that it doesn't appear generic when it's actually specific to
sending the daily notification totals. To do this, split it out into a
separate performance_platform directory containing the business logic,
and made the performance_platform_client incredibly thin - all it
handles is adding ids to payloads and sending stats (sketch below).
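The assumed shape of the thin client - everything dataset-specific is
built elsewhere, and the client only adds the id and posts:

```python
import base64
import json

import requests


class PerformancePlatformClient:
    def __init__(self, endpoint_url, token):
        self.endpoint_url = endpoint_url
        self.token = token

    def send_stats_to_performance_platform(self, payload):
        payload["_id"] = self._payload_id(payload)
        requests.post(
            self.endpoint_url,
            json=payload,
            headers={"Authorization": "Bearer {}".format(self.token)},
        )

    @staticmethod
    def _payload_id(payload):
        # a deterministic id derived from the payload, so resending the
        # same stats should overwrite rather than duplicate an entry
        return base64.b64encode(
            json.dumps(payload, sort_keys=True).encode()
        ).decode()
```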
Also, some changes to the config (not all done yet), since there is one
token per endpoint, not one for the whole platform as we'd previously
coded it.
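The direction of the config change, sketched with illustrative values:

```python
# one bearer token per dataset endpoint, not one for the whole platform
PERFORMANCE_PLATFORM_ENDPOINTS = {
    # dataset name -> bearer token for that endpoint's write API
    "notifications": "token-for-the-notifications-endpoint",
}
```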