We will no longer send them any stats, so we no longer need:
- the code to work out the nightly stats
- the performance platform client
- any configuration for the client
- any nightly tasks that kick off the sending of the stats
We will also need a change in cronitor: as this task will no longer
run, we need to delete the corresponding cronitor check.
We currently do this as part of send-daily-performance-platform-stats,
but this moves it into its own separate task. This is for two reasons:
- we will shortly get rid of the send-daily-performance-platform-stats
task, as we will no longer need to send anything to the performance
platform
- even if we did decide to keep the
send-daily-performance-platform-stats task and only remove the bits
that relate to the performance platform, it's probably nicer to write
the new task from scratch to make sure it's all clear and easy to
understand
In 8285ef5f89
we turned off alerting on 2nd class letters still being in sending on
certain days of the week because we were only sending letters out on
Mon, Wed, Fri.
Now that we have swapped back to sending out 2nd class letters on all
workdays, this change can be reverted. Note: I haven't reverted the
commit exactly, but rather the behaviour, whilst leaving in some tests
that explicitly cover 2nd class letters for the alert in case we change
this again.
Based on https://github.com/alphagov/notifications-api/pull/2788,
where some concerns were raised. This should be a quicker fix: get the
deletions to run sequentially for all the notification types. Note:
email goes first as it is the most important and makes up the largest
numbers (we wouldn't want the run to start with SMS, fail half way for
a reason that only affects SMS, and for that to block the email
deletion).
We hope that by running sequentially we will reduce conflicts writing
to the same index, and that this will speed up the total time it takes
to delete all notification types older than their retention time. There
is a risk that, whilst quicker per job, the deletions will take longer
overall, as they now run sequentially rather than potentially
overlapping. We will need to monitor to see.
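A minimal sketch of the sequential approach, assuming a hypothetical
per-type helper (the real task and helper names may differ):

```python
NOTIFICATION_TYPES = ("email", "sms", "letter")  # email first: largest volume


def delete_notifications_older_than_retention_by_type(notification_type):
    ...  # hypothetical: delete rows of this type past their retention period


def delete_notifications_older_than_retention():
    for notification_type in NOTIFICATION_TYPES:
        # email runs first, so a failure that only affects sms or letter
        # can no longer prevent the email deletion from happening
        delete_notifications_older_than_retention_by_type(notification_type)
```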
Move the letter-finding logic into a separate method so it can be unit
tested directly, rather than having to test it by checking the contents
of a Zendesk ticket API call.
This will let us change the Zendesk ticket message without needing to
edit lots of tests.
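A rough sketch of the shape of the change, with illustrative names
(the real method, fields and message will differ):

```python
def create_zendesk_ticket(message):
    ...  # hypothetical stand-in for the real Zendesk API call


def find_letters_still_in_sending(notifications, older_than):
    # pure logic, now unit-testable on its own
    return [
        n for n in notifications
        if n.status == "sending" and n.created_at < older_than
    ]


def alert_if_letters_still_in_sending(notifications, older_than):
    stuck_letters = find_letters_still_in_sending(notifications, older_than)
    if stuck_letters:
        # the ticket message can now change without touching the logic tests
        create_zendesk_ticket(f"{len(stuck_letters)} letters still in sending")
```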
When we send a zip file of letters to DVLA we expect them to send back an acknowledgement of those files.
Previously they named the files like NOTIFY.20180202091254.ACK.TXT and
the contents would contain the name of the zip file we sent, along with
the date they received it.
They have updated this format to mirror the format of the zip file,
because there was an instance where they sent 2 files with the same
name, so the later one overwrote the first.
Since the acknowledgement's name now matches ours, there is no need to
get the file from S3; we can just compare file names.
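A sketch of the name-only check, assuming the ack filename mirrors the
zip filename with a different suffix (the exact naming here is
illustrative):

```python
def base_name(filename):
    # "NOTIFY.20180202091254.ZIP" and "NOTIFY.20180202091254.ACK.TXT"
    # share the same leading portion
    return filename.split(".ACK")[0].split(".ZIP")[0]


def unacknowledged_zips(sent_zip_names, ack_file_names):
    # no S3 reads needed: the name alone tells us which zip was acknowledged
    acknowledged = {base_name(name) for name in ack_file_names}
    return [name for name in sent_zip_names if base_name(name) not in acknowledged]
```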
* call variables unambiguous things like `start_time` or `bst_date` to
reduce the risk of passing in the wrong thing
* simplify the count_dict object - remove the nested dict and
start_date fields, which were superfluous
* use static datetime objects in tests rather than calculating them
each time
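For illustration, roughly what the renames and the flattened dict look
like (values and shapes are hypothetical):

```python
from datetime import date, datetime

# unambiguous names: a caller can't easily confuse the two
start_time = datetime(2018, 6, 10, 23, 0)  # UTC datetime the period starts at
bst_date = date(2018, 6, 11)               # the BST day that period covers

# flat counts, with no nested dict and no redundant start_date field
counts = {"sms": 100, "email": 250, "letter": 10}
```

Tests then assert against static literals like these instead of
recalculating datetime.utcnow() on every run.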
Changed the query to get the performance platform stats from
ft_notification_status. The date used for the query needed to be a
date, not a datetime, for the equality check to work.
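The pitfall in plain Python terms: a date never compares equal to a
datetime, even one at midnight on the same day.

```python
from datetime import date, datetime

date(2018, 6, 10) == datetime(2018, 6, 10)         # False
date(2018, 6, 10) == datetime(2018, 6, 10).date()  # True
```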
Make a decorator that pings cronitor before and after each task run.
It is designed for use with nightly tasks, so we have visibility if
they fail. We have a bunch of cronitor monitors set up: 5-character
keys that go into a URL which we make a GET request to, with a
self-explanatory URL path (run/fail/complete).
The cronitor URLs are defined in the credentials repo as a dictionary
of celery task names to URL slugs. If the name passed in to the
decorator isn't in that dict, the pings are skipped.
To use it, all you need to do is apply `@cronitor(my_task_name)`
instead of `@notify_celery.task`, and make sure that the task name and
the matching slug are included in the credentials repo (or, locally,
JSON dumped and stored in the CRONITOR_KEYS environment variable).
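A minimal sketch of the decorator, assuming the keys come from the
CRONITOR_KEYS environment variable; the real decorator also applies
@notify_celery.task, which is omitted here:

```python
import functools
import json
import os

import requests

# celery task name -> 5 character cronitor key, e.g. {"delete-sms": "ab1cd"}
CRONITOR_KEYS = json.loads(os.environ.get("CRONITOR_KEYS", "{}"))


def _ping(key, state):
    # state is one of "run", "fail" or "complete"
    requests.get("https://cronitor.link/{}/{}".format(key, state))


def cronitor(task_name):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            key = CRONITOR_KEYS.get(task_name)
            if not key:
                # unknown task name: run the task, but skip the pings
                return func(*args, **kwargs)
            _ping(key, "run")
            try:
                result = func(*args, **kwargs)
            except Exception:
                _ping(key, "fail")
                raise
            _ping(key, "complete")
            return result

        return wrapper

    return decorator
```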