We already filter the usage-by-month query by financial year. When we
show the total usage for a service, we should be able to filter this
by financial year.
Then, when the two lots of data are put side by side, it all adds up.
The financial year starts on 1 April at 00:00 BST, but our dates are stored as UTC.
Added a test for get_april_fools.
Added some more test data for get_billable_unit_count_per_month.
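A minimal sketch of that conversion, assuming a pytz-based helper; `get_financial_year` and the exact return shape are illustrative, `get_april_fools` is the helper named above.

```python
from datetime import datetime

import pytz

LONDON = pytz.timezone("Europe/London")


def get_april_fools(year):
    # 1 April 00:00 local time (BST), converted to UTC: 31 March 23:00 UTC
    return LONDON.localize(datetime(year, 4, 1, 0, 0)).astimezone(pytz.utc)


def get_financial_year(year):
    # half-open range [start, end) covering the financial year starting in `year`
    return get_april_fools(year), get_april_fools(year + 1)
```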
When the start_date and end_date query arguments exist in the request,
the query returns results from the NotificationHistory table for the given date range.
We will need to check the performance of this query, but it will only be used by the platform admin page.
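A rough sketch of the date-range branch, assuming a SQLAlchemy NotificationHistory model; the import path and column names are illustrative.

```python
from app.models import NotificationHistory  # illustrative import path


def fetch_notification_history(service_id, start_date, end_date):
    # only hit the history table when both dates were supplied in the request
    return (
        NotificationHistory.query
        .filter(
            NotificationHistory.service_id == service_id,
            NotificationHistory.created_at >= start_date,
            NotificationHistory.created_at < end_date,
        )
        .order_by(NotificationHistory.created_at.desc())
        .all()
    )
```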
set all existing rows to have a version of 1 (also copy across values
to populate the new provider_details_history table in the upgrade
script)
in dao_update_provider_details bump the provider_details.version by 1
and then duplicate into the history table as a new row
(done manually as opposed to the decorator used in template_history
since this is only edited in this one place and the decorator is icky)
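Roughly what the manual versioning looks like, as a sketch; the import paths and the history-model constructor are assumptions, not the exact code.

```python
from datetime import datetime

from app import db  # illustrative import path
from app.models import ProviderDetailsHistory  # illustrative import path


def dao_update_provider_details(provider_details):
    provider_details.version += 1
    provider_details.updated_at = datetime.utcnow()
    # copy the updated row into the history table by hand
    history = ProviderDetailsHistory.from_original(provider_details)  # hypothetical constructor
    db.session.add(provider_details)
    db.session.add(history)
    db.session.commit()
```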
* to be used for auditing changes to provider details
* not hooked in yet
* also made provider_details.active non-nullable. set all existing null
provider_details.active to FALSE. none are set false on live so
should hopefully be fairly uncontroversial
* refactored NotificationHistory.from_notification and the update-from-notification
  method to share logic with provider_details_history
- note this is an unexpectedly big change.
- When we create a service we pass the service id to the persist method. This means that we don't have the service available to check whether it is in research mode.
- All calling methods (except the one where we use the notify service) have the service available. So rather than reload it, I changed the method signature to pass the service, not the ID, to persist.
- Touches a few places.
Note this means that the update or create methods will fall over on a null service, but this seems correct.
Goes back to the story we need to play to make the service available as the API user, so that the need to load and pass services around is minimised.
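An illustrative sketch of the signature change (helper and field names assumed): callers pass the already-loaded service instead of a service_id, so persist can check research mode without another query, and a null service fails fast.

```python
def persist_notification(template, recipient, service, notification_type):
    # `service` is the caller's loaded model, not an id, so no reload is needed
    if service.research_mode:
        ...  # research-mode handling, elided
    notification = Notification(
        to=recipient,
        template_id=template.id,
        service_id=service.id,  # raises immediately if service is None
        notification_type=notification_type,
    )
    db.session.add(notification)
    return notification
```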
Return multiple notifications for a service.
Choosing a page_size or a page_number is no longer allowed.
Instead, there is a `next` link included which will return the
next {default_page_size} notifications in the sequence.
Query parameters accepted are:
- template_type: filter by specific template types
- status: filter by specific statuses
- older_than: return a chronological list of notifications older
than this one. The notification with the id that is passed in
is _not_ returned.
Note that both `template_type` and `status` can accept multiple
parameters. Thus it is possible to call
`/v2/notifications?status=created&status=sending&status=delivered`
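Roughly how a client pages through the results, assuming the response carries the notifications under a `notifications` key and the link under `links.next` (the field names and URL are assumptions).

```python
import requests

headers = {"Authorization": "Bearer <api token>"}
url = "https://api.notifications.example/v2/notifications?status=sending&status=delivered"

while url:
    page = requests.get(url, headers=headers).json()
    for notification in page["notifications"]:
        print(notification["id"])
    # follow the next link until the server stops providing one
    url = page.get("links", {}).get("next")
```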
this means that any errors will cause the entire thing to roll back
unfortunately, to do this we have to circumvent our regular code, which calls commit a lot and lazily loads a lot of things; those lazy loads flush the session and cause the version decorators to fail. So we have to write a lot of stuff by hand and re-select the service (even though it's already been queried) just to populate its api_keys and templates relationships.
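Very roughly, and with illustrative names, the shape of the hand-rolled version: one session, one commit, and the service re-selected with its relationships eagerly loaded so nothing lazy-loads mid-transaction.

```python
from sqlalchemy.orm import joinedload


def do_everything_in_one_transaction(service_id):
    # re-select the service with api_keys and templates populated up front,
    # so no lazy load flushes the session part-way through
    service = (
        Service.query
        .options(joinedload(Service.api_keys), joinedload(Service.templates))
        .filter_by(id=service_id)
        .one()
    )
    try:
        # ... hand-written inserts/updates using `service` go here ...
        db.session.commit()
    except Exception:
        db.session.rollback()  # any error rolls the whole lot back
        raise
```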
- It would be nice to refactor the send_sms and send_email tasks to use these common functions as well, that way I can get rid of the new Notifications.from_v2_api_request method.
- Still not happy with the format of the errors. Would like to find a happy place where the message is descriptive enough that we do not need external documentation to explain the error. Perhaps we still only need documentation to explain the trial mode concept.
this is so that the filtering, which we do on the admin side, is applied
before pagination - so that the pages returned are all valid displayable
jobs. unfortunately this means that another config value has to be copied
to the server side but it's not the end of the world
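A sketch of the ordering of operations, with the filter condition and config keys as placeholders: the dao applies the admin-side filter first, then paginates.

```python
from flask import current_app


def dao_get_jobs_by_service_id(service_id, page=1):
    query = Job.query.filter(
        Job.service_id == service_id,
        # the admin-side filtering, now applied server-side (condition illustrative)
        Job.original_file_name != current_app.config["TEST_MESSAGE_FILENAME"],
    )
    # paginate only after filtering, so every page is fully displayable
    return query.order_by(Job.created_at.desc()).paginate(
        page=page, per_page=current_app.config["PAGE_SIZE"]
    )
```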
Say you have a dashboard with some jobs you sent. Normally looks like:
job | sent
--- | ---
file.csv | **5pm**
file.csv | 3pm
file.csv | 1pm
file.csv | 11am
However if your 5pm job was scheduled at lunchtime, then it will look
like this:
job | sent
--- | ---
file.csv | 3pm
file.csv | 1pm
file.csv | **5pm**
file.csv | 11am
This is because the jobs are sorted by when they were created, not when
they were sent. It looks wrong.
**For jobs that have already been sent**
This commit changes the sort order to be based on `processed_at`
instead.
**For upcoming jobs**
If a job doesn’t have a `processed_at` time then it’s scheduled but
hasn’t started yet. Only in this case should we still sort by
`created_at`.
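One way to express both rules in a single order_by, as a sketch (column names assumed):

```python
from sqlalchemy import desc, func


def dao_get_jobs_for_dashboard(service_id):
    # sent jobs sort by processed_at; scheduled jobs that haven't started yet
    # fall back to created_at
    return (
        Job.query
        .filter(Job.service_id == service_id)
        .order_by(desc(func.coalesce(Job.processed_at, Job.created_at)))
        .all()
    )
```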
this helps manage the transaction by keeping it inside one function in the dao,
so after the function completes you know that the transaction has been released
and concurrent processing can resume
we were running into issues where multiple beats queue up the
run_scheduled_jobs task at the same time, and concurrency issues with selecting
scheduled jobs cause both tasks to trigger processing of the same job.
Use with_for_update, which calls through to the Postgres SELECT ... FOR UPDATE.
This locks out other SELECT ... FOR UPDATEs (i.e. other threads running the same
code) until the rows are set to pending and the transaction completes, so the
second thread will not find any rows.
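A sketch of the pattern, with the status values, column names, and import paths as assumptions:

```python
from datetime import datetime

from app import db  # illustrative import path
from app.models import Job  # illustrative import path


def dao_set_scheduled_jobs_to_pending():
    # SELECT ... FOR UPDATE: a second worker running this concurrently blocks
    # here until our transaction commits, and then finds no matching rows
    jobs = (
        Job.query
        .filter(
            Job.job_status == "scheduled",
            Job.scheduled_for < datetime.utcnow(),
        )
        .with_for_update()
        .all()
    )
    for job in jobs:
        job.job_status = "pending"
    db.session.commit()  # commit releases the row locks
    return jobs
```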
Notifications with a `billable_units` count of `0` won’t have any effect
on the result, but including them in the query will slow down the
grouping and summing of the results because it’ll have to loop over more
rows.
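A sketch of the query with that filter added (column names are illustrative): excluding the zero rows before grouping changes nothing about the totals.

```python
from sqlalchemy import func


def get_monthly_billable_units(service_id):
    month = func.date_trunc("month", NotificationHistory.created_at)
    return (
        db.session.query(month.label("month"), func.sum(NotificationHistory.billable_units))
        .filter(
            NotificationHistory.service_id == service_id,
            NotificationHistory.billable_units != 0,  # skip rows that can't change the sum
        )
        .group_by(month)
        .all()
    )
```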