- previously this was unbounded, so it got all jobs older than 7 days. In excess of 75,000 🔥
- this meant that the job (a) took a long time, (b) used a lot of memory and (c) did the same thing every day
These changes mean that the job has a 2 day eligible window, minimising the number of eligible jobs in a run whilst still retaining some leeway in the event of it failing one night.
In principle the job runs early in the morning on a given day. The previous 7 days are left alone, and then the 2 days' worth of files before that are deleted:
so, if it runs on the 31st:
- 30, 29, 28, 27, 26, 25, 24 are ignored
- 23, 22: jobs here have files deleted
- 21 and earlier are ignored.
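A minimal sketch of the window calculation, assuming the job keys off the job's creation date; the function and variable names are illustrative, not the actual implementation:

```
from datetime import date, timedelta

def eligible_window(run_date):
    # jobs created in the last 7 days are left alone
    window_end = run_date - timedelta(days=7)
    # the 2 days before that are eligible, giving leeway if one night fails
    window_start = run_date - timedelta(days=9)
    return window_start, window_end

# running on the 31st gives a window covering the 22nd and 23rd
start, end = eligible_window(date(2017, 1, 31))
assert (start, end) == (date(2017, 1, 22), date(2017, 1, 24))
```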
If the date range is within the last 7 days we query Notifications.
If the date range is outside of the last 7 days we query NotificationHistory.
NotificationHistory does not persist notifications created with a test api key.
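Something like the following captures that branching (a sketch only; the helper name is invented and the real query code will differ):

```
from datetime import datetime, timedelta

def table_for_date_range(start_date):
    seven_days_ago = datetime.utcnow() - timedelta(days=7)
    if start_date >= seven_days_ago:
        # recent data: query the notifications table
        return Notification
    # older data: query the history table (test-api-key notifications are not kept here)
    return NotificationHistory
```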
This will transform each notification in a job to a row in a file.
The file is then uploaded to S3.
The files will later be aggregated by the notifications-ftp app to send to DVLA.
The method to upload the file to S3 should be pulled into notifications-utils package.
It is the same method used in notifications-admin.
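For reference, the upload is roughly this shape (a sketch using boto3; the function, bucket and key names are placeholders, not the shared method itself):

```
import boto3

def upload_job_file_to_s3(bucket_name, file_name, file_contents):
    # write the generated file to S3 for notifications-ftp to pick up later
    s3 = boto3.resource('s3')
    s3.Object(bucket_name, file_name).put(Body=file_contents)
```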
* Add notify user id in config
* Add dao method to get provider history versions along with tests
* BUG: Provider switching did not handle the case where priorities were equal. This adds a fix to properly cover that case, along with tests (see the sketch below)
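Illustrative only, not the real provider-switching code: the point of the fix is that two providers can share a priority, and selection must still be deterministic rather than assuming the equal case never happens.

```
def choose_provider(providers):
    # sort by priority first, then by identifier, so equal priorities
    # still produce a single, stable winner
    return sorted(providers, key=lambda p: (p.priority, p.identifier))[0]
```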
The cache expires every 10 minutes, but will help with the query that runs every 2 seconds, especially when a job is running.
There is still some clean-up and QA to do for this.
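A minimal sketch of the idea, assuming a simple in-process TTL cache; the names below are illustrative. Entries live for 10 minutes, so the query that runs every 2 seconds mostly hits the cache.

```
from cachetools import TTLCache, cached

@cached(cache=TTLCache(maxsize=1024, ttl=600))  # 10 minutes
def get_service(service_id):
    return dao_fetch_service_by_id(service_id)  # the underlying query (hypothetical name)
```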
The status dictionary was being assigned once, and then subsequent
uses of it were by reference. This meant that each template type was
pointing at the same dictionary, and updating one meant updating all.
This commit adds a dictionary comprehension, which gets evaluated once
for each template type, so each template type has its own `dict` of
statuses.
Before
--
```
Email            SMS              Letter
  |               |                 |
  +---------------+-----------------+
                  |
    {'sending': …, 'failed': …, …}
```
After
--
```
Email              SMS                Letter
  |                 |                   |
{'sending': …,   {'sending': …,     {'sending': …,
 'failed': …,     'failed': …,       'failed': …,
 …}               …}                 …}
```
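A small repro of the bug and the fix (status and template type names are illustrative):

```
statuses = ['sending', 'delivered', 'failed']
template_types = ['email', 'sms', 'letter']

# Before: one dict assigned once, then shared by reference
shared = {status: 0 for status in statuses}
before = {template_type: shared for template_type in template_types}
before['email']['failed'] += 1
assert before['sms']['failed'] == 1  # updating one updates them all

# After: the comprehension is evaluated once per template type,
# so each template type gets its own dict of statuses
after = {
    template_type: {status: 0 for status in statuses}
    for template_type in template_types
}
after['email']['failed'] += 1
assert after['sms']['failed'] == 0
```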
Before we were only checking counts for SMS, because we’d only created
SMS notifications.
This commit also checks the counts for email and letter, which should be
0. But they’re not, so this commit adds the test case which
demonstrates the bug.
Update PermissionsDao.get_permissions_by_user_id to only return permissions for active services.
This will make the admin app return a 403 if someone (other than a platform admin) tries to look at an inactive service.
Removed the active flag in sample_service; dao_create_service overrides this attribute.
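Roughly what the change amounts to, sketched with SQLAlchemy (model and query shape are assumptions, not the exact code):

```
def get_permissions_by_user_id(user_id):
    # join to the service and drop permissions for inactive services,
    # so the admin app gets nothing back and returns a 403
    return Permission.query.join(Service).filter(
        Permission.user_id == user_id,
        Service.active.is_(True),
    ).all()
```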
This endpoint will eventually replace the weekly breakdown one. Breaking it down by month
for a given financial year is better, because it gives us consistency
with the breakdown of financial usage (and eventually consistency with
the template usage).
The code to do this is a bit convoluted, in order to fill out the counts
for months and statuses where we don’t have notifications.
This will make the admin side of this easier, because we can rely on
there always being numbers available. The admin side will deal with
summing the statuses (e.g. `temporary-failure` → `failed`) because this
is presentational.
This commit also modifies the usage count to use `.between()` for
consistency.
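The fill-out step mentioned above is roughly this shape (a sketch assuming a UK financial year running April to March and a fixed set of statuses; the names are illustrative):

```
from calendar import month_name

FINANCIAL_YEAR_MONTHS = list(range(4, 13)) + list(range(1, 4))  # April..March
STATUSES = ['created', 'sending', 'delivered', 'temporary-failure', 'permanent-failure']

def fill_monthly_counts(rows):
    # rows: (month_number, status, count) tuples from the database query
    counts = {
        month_name[month]: {status: 0 for status in STATUSES}
        for month in FINANCIAL_YEAR_MONTHS
    }
    for month, status, count in rows:
        counts[month_name[month]][status] = count
    return counts
```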
- Renaming /service/<id>/deactivate to /service/<id>/archive to match language on the UI.
- Will need to update admin before deleting the deactivate service method
- Created dao and endpoint methods to suspend and resume a service.
- I confirmed the use of suspend and resume with a couple of people on the team; it seems to be the right choice.
The idea is that if you archive a service there is no coming back from that.
Suspending a service marks it as inactive and revokes its API keys. Resuming a service marks it as active again; the service will need to create new API keys.
It makes sense that if a service is under threat, its API keys should be renewed.
The next PR will update the code to check that the service is active before sending the request or allowing any actions by the service.
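A hypothetical sketch of the suspend/resume semantics described above (not the real dao methods; the models are assumed):

```
from datetime import datetime

def dao_suspend_service(service_id):
    service = Service.query.get(service_id)
    service.active = False
    for api_key in service.api_keys:
        api_key.expiry_date = datetime.utcnow()  # revoke existing keys
    db.session.commit()

def dao_resume_service(service_id):
    service = Service.query.get(service_id)
    service.active = True  # keys stay revoked; the service creates new ones
    db.session.commit()
```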
We already filter the usage-by-month query by financial year. When we
show the total usage for a service, we should be able to filter this
by financial year.
Then, when the two lots of data are put side by side, it all adds up.
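One way to express that filter, as a sketch (the helper name is invented; the UK financial year runs 1 April to 31 March):

```
from datetime import datetime

def financial_year_range(year):
    start = datetime(year, 4, 1)
    end = datetime(year + 1, 3, 31, 23, 59, 59, 999999)
    return start, end

# e.g. Notification.created_at.between(*financial_year_range(2016))
```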