- Added a log line for when a job starts, so we know when processing of a job begins and how many notifications it contains
- Added a dao method to get the total number of notifications for a job id (see the sketch after this list)
- Added a test to check that the number of notifications in the table matches the job's notification_count
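A minimal sketch of what that dao method might look like, assuming the Flask-SQLAlchemy models used elsewhere in the app (the function name is illustrative):

    # Count the notifications for a job in the database, without loading them.
    from app.models import Notification  # assumed model with a job_id column

    def dao_get_notification_count_for_job_id(job_id):
        return Notification.query.filter_by(job_id=job_id).count()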
we now no longer create a job. At the end of the POST there is no
action, as we don't have any tasks to queue immediately - if it's a
real notification it'll get picked up by the evening scheduled task.
If it's a test notification, we create it with an initial status of
sending so that we can be sure it'll never get picked up - and then we
trigger the update-letter-notifications-to-sent-to-dvla task to set
the sent-at/by fields.
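A rough sketch of that flow; the task name comes from this commit, but the helper and attribute names are assumptions:

    # Hypothetical sketch of the post-letter flow described above.
    from app.celery.tasks import update_letter_notifications_to_sent_to_dvla
    from app.dao.notifications_dao import save_notification  # assumed helper

    def persist_letter(notification, api_key):
        if api_key.key_type == 'test':
            # start as 'sending' so the evening task (which only picks up
            # 'created' letters) can never queue it for a real DVLA send
            notification.status = 'sending'
            save_notification(notification)
            # then immediately mark it sent, as if DVLA had responded
            update_letter_notifications_to_sent_to_dvla.apply_async(
                args=[[notification.reference]]
            )
        else:
            # real letters wait for the evening scheduled task
            notification.status = 'created'
            save_notification(notification)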
1. No longer create jobs when creating letters from api 🎉
2. Bulk update notifications based on the notification references after
we send them to DVLA - either as success or as error
this means that if the task is accidentally run twice (e.g. if we
autoscale notify-celery-worker-beat to 2), it won't send letters twice
(see the sketch below).
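A sketch of the bulk update in point 2, assuming SQLAlchemy (the model and column names are illustrative):

    # Mark a whole batch of letters in a single UPDATE, keyed on the
    # references we sent to DVLA.
    from datetime import datetime
    from app import db  # assumed Flask-SQLAlchemy instance
    from app.models import Notification

    def update_letter_notifications_statuses(references, status):
        db.session.query(Notification).filter(
            Notification.reference.in_(references)
        ).update(
            {'status': status, 'sent_at': datetime.utcnow()},
            synchronize_session=False,
        )
        db.session.commit()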
Additionally, update some function names and config variables to make
it clear that they are referring to letter jobs, rather than all letter
content
Removed the trial-mode-service tests for the scheduled tasks and for process job.
Having the validation in the POST notification and create job endpoints is enough.
Updated the test_service_whitelist test because the order of the array is not guaranteed.
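For reference, the usual way to make such an assertion order-independent (illustrative only):

    # compare contents rather than order when the query has no ORDER BY
    assert sorted(actual) == sorted(expected)
    # or, when duplicates can't occur:
    assert set(actual) == set(expected)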
they were always caught locally by celery's base handler; however,
we weren't logging them ourselves, which meant they wouldn't be put
into the json logs that are sent to cloudwatch.
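A minimal sketch of the fix (task and helper names are assumptions, not the real ones):

    # Log the exception ourselves so it lands in the JSON logs shipped to
    # CloudWatch, then re-raise so celery's base handler behaves as before.
    from flask import current_app
    from app import notify_celery  # assumed celery app

    @notify_celery.task(bind=True, name='deliver-sms')
    def deliver_sms(self, notification_id):
        try:
            send_sms_to_provider(notification_id)  # assumed helper
        except Exception:
            current_app.logger.exception(
                'SMS delivery failed for notification %s', notification_id
            )
            raise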
specifically, all of the performance-platform-specific data layout now
happens in performance_platform_client.py - stuff like setting the
_timestamp, period etc - and the perf-platform-specific nomenclature is
all handled there.
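A rough illustration of the payload shaping that now lives in performance_platform_client.py (the exact field set is from memory, so treat it as illustrative):

    # Illustrative daily-totals payload; _timestamp, period and dataType are
    # the perf-platform-specific fields the client is responsible for setting.
    from datetime import datetime, timezone

    def format_payload(day_start, channel, count):
        # day_start is midnight (UTC) of the day being reported
        return {
            '_timestamp': day_start.isoformat(),
            'period': 'day',
            'channel': channel,  # e.g. 'sms' or 'email'
            'count': count,
            'dataType': 'notifications',
        }

    payload = format_payload(datetime(2016, 3, 31, tzinfo=timezone.utc), 'sms', 1234)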
so that it doesn't appear generic when it's actually specific to
sending the daily notification totals. To do this, split it out into a
separate performance_platform directory, containing the business logic,
and make the performance_platform_client incredibly thin - all it
handles is adding ids to payloads and sending stats.
Also, some changes to the config (not all done yet), since there is one
token per endpoint, not one for the whole platform as we'd previously
coded it.
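A sketch of the resulting thin client with per-endpoint tokens (class and method names are illustrative):

    # Hypothetical thin client: all it does is add _ids and post stats.
    import base64
    import requests

    class PerformancePlatformClient:
        def __init__(self, base_url, tokens_by_dataset):
            self.base_url = base_url
            # one bearer token per endpoint (dataset), not one per platform
            self.tokens = tokens_by_dataset

        def send_stats(self, dataset, payloads):
            for payload in payloads:
                payload['_id'] = self._make_id(payload)
            requests.post(
                '{}/{}'.format(self.base_url, dataset),
                json=payloads,
                headers={'Authorization': 'Bearer {}'.format(self.tokens[dataset])},
            )

        @staticmethod
        def _make_id(payload):
            # perf platform convention: _id is base64 of identifying fields
            raw = '{}{}{}'.format(
                payload['_timestamp'], payload['period'], payload['dataType']
            )
            return base64.b64encode(raw.encode()).decode()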
- Created TaskNames for DVLA_FILES rather than having DVLA_FILES in QueueNames
- Removed PROCESS_FTP from all_queues() as it was causing problems in picking up letter job tasks
- Created a test to ensure that we don't arbitrarily add queue names to all_queues (see the sketch below)
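A sketch of that guard test; the queue names here are stand-ins for the real list:

    # Illustrative guard: fails loudly if someone adds a queue to
    # all_queues() without deciding which workers should consume it.
    class QueueNames:
        PERIODIC = 'periodic-tasks'
        DATABASE = 'database-tasks'
        SEND = 'send-tasks'

        @classmethod
        def all_queues(cls):
            return [cls.PERIODIC, cls.DATABASE, cls.SEND]

    def test_all_queues_contains_only_expected_queues():
        assert set(QueueNames.all_queues()) == {
            'periodic-tasks',
            'database-tasks',
            'send-tasks',
        }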
When populating the monthly billing records on a schedule, we need
to ensure the correct month is being updated.
As an example, if the current datetime is 31 Mar 2016, 23:00 UTC, the
BST equivalent is 1 Apr 2016, 00:00. Therefore we need to ensure we
update billing for April, not March. This takes care of that.
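A minimal sketch of the month correction, using the standard-library zoneinfo (the real code may use a different timezone helper):

    # Convert the UTC "now" to Europe/London before picking the billing month.
    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    def billing_month_for(utc_now):
        local = utc_now.astimezone(ZoneInfo('Europe/London'))
        return local.year, local.month

    # 31 Mar 2016, 23:00 UTC is 1 Apr 2016, 00:00 BST -> bill April, not March
    assert billing_month_for(
        datetime(2016, 3, 31, 23, 0, tzinfo=timezone.utc)
    ) == (2016, 4)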
- The new task has not been added to the beat application yet.
- Added an updated_at column to the monthly billing table; we may want to only calculate from the last updated date rather than for the entire month.
We don't use boto2 on the api anymore, not since celery 4.0.2
Note - if you run locally with boto2 still installed you'll see errors
that complain about things like:
    boto.exception.SQSError: SQSError: 403 Forbidden
    <?xml version="1.0"?>
    <ErrorResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/">
      <Error>
        <Type>Sender</Type>
        <Code>SignatureDoesNotMatch</Code>
        <Message>Credential should be scoped to a valid region, not 'queue'.</Message>
        <Detail/>
      </Error>
      <RequestId>52207ca4-9131-58cb-89ae-2d45f06623a3</RequestId>
    </ErrorResponse>
If so, make sure boto2 is completely uninstalled (the boto2 library is
the "boto" package on PyPI, so pip uninstall boto removes it).
The task is created when the SMS provider posts the inbound SMS.
If the service has not set the url then nothing happens.
If the request to the service url returns 500 or greater, the task is retried.
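A sketch of that task (the decorator, queue and field names follow common celery patterns rather than the actual code):

    # Hypothetical sketch of the inbound-SMS callback task.
    import requests
    from app import notify_celery  # assumed celery app

    @notify_celery.task(bind=True, name='send-inbound-sms-to-service', max_retries=5)
    def send_inbound_sms_to_service(self, inbound_sms_id, service):
        if not service.inbound_api_url:
            # service hasn't set a url - nothing to do
            return

        response = requests.post(
            service.inbound_api_url,
            json={'id': inbound_sms_id},
            timeout=60,
        )
        if response.status_code >= 500:
            # retry only on server errors, as described above
            self.retry(queue='retry-tasks')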