- Renaming /service/<id>/deactivate to /service/<id>/archive to match language on the UI.
- Will need to update admin before deleting the deactivate service method.
- Created dao and endpoint methods to suspend and resume a service.
- I confirmed the use of suspend and resume with a couple of people on the team; it seems to be the right choice.
The idea is that if you archive a service there is no coming back from that.
Suspending a service marks it as inactive and revokes its API keys. Resuming a service marks it as active again; the service will then need to create new API keys.
It makes sense that if a service is under threat, its API keys should be renewed.
The next PR will update the code to check that the service is active before sending the request or allowing any actions by the service.
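As a rough sketch of what the suspend and resume dao methods could look like (the function names, model fields, and key-revocation detail here are assumptions for illustration, not the exact implementation):

```python
from datetime import datetime

from app import db
from app.models import ApiKey, Service  # model names assumed for illustration


def dao_suspend_service(service_id):
    # Revoke every live API key by stamping an expiry, then mark the service inactive.
    ApiKey.query.filter_by(service_id=service_id, expiry_date=None).update(
        {"expiry_date": datetime.utcnow()}
    )
    Service.query.filter_by(id=service_id).update({"active": False})
    db.session.commit()


def dao_resume_service(service_id):
    # Resuming only reactivates the service; expired keys stay expired, so the
    # service has to create new API keys before it can send again.
    Service.query.filter_by(id=service_id).update({"active": True})
    db.session.commit()
```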
We already filter the usage-by-month query by financial year. When we
show the total usage for a service, we should be able to filter this
by financial year.
Then, when the two lots of data are put side by side, it all adds up.
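The total-usage query then just needs the same financial-year bounds as the usage-by-month query. A minimal sketch of that filter (column and model names are assumed, and the BST boundary discussed further down is ignored here for brevity):

```python
from datetime import datetime

from sqlalchemy import func

from app import db
from app.models import NotificationHistory  # model name assumed


def get_total_billable_units_for_financial_year(service_id, year):
    # Financial year `year` runs from 1 April of that year to 1 April of the next.
    start = datetime(year, 4, 1)
    end = datetime(year + 1, 4, 1)
    total = (
        db.session.query(func.sum(NotificationHistory.billable_units))
        .filter(
            NotificationHistory.service_id == service_id,
            NotificationHistory.created_at >= start,
            NotificationHistory.created_at < end,
        )
        .scalar()
    )
    return total or 0
```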
- Added the `simulate` notification logic to version 2. We have 3 email addresses and phone numbers that are used
to simulate a successful post to /notifications. This was missed out of the version 2 endpoint.
- Added a test to template_dao to check for the default value of normal for new templates
- In v2 get_notifications, cast the path param to a UUID; if it's not a valid UUID, abort(404) (see the sketch below).
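The UUID check in the last bullet is roughly this (the helper name is hypothetical; the real endpoint may well do it inline):

```python
import uuid

from flask import abort


def parse_notification_id_or_404(value):
    # Cast the path parameter to a UUID; anything that isn't a valid UUID is a
    # 404 rather than an error from the database layer.
    try:
        return uuid.UUID(value)
    except (TypeError, ValueError):
        abort(404)
```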
while it's nice to use the decorator to signify fixtures with side
effects, it has the unfortunate problem of completely overriding any
fixtures you've declared in the funcargs - so it isn't really suitable
for our use case, where we often rely on other fixtures to return
values to us.
So for consistency, let's remove this and stick to using funcargs
to define our fixtures.
when you invoke the fixture `sample_user`, it does two things: it
creates the user in the database, and it also returns the user - a
useful object that you may want to manipulate or reference in your test.
however, when you invoke the fixture `notify_db_session`, it doesn't
do anything - rather, it *promises* to clear up the database tables
at the end of the test run.
because we have no need of the notify_db_session object in our tests
(indeed, for a long time this fixture just returned `None`), using
`pytest.mark.usefixtures('notify_db_session')` brings attention to the
fact that this is a side-effect fixture rather than a data-setup
fixture. Functionally, it is identical to passing it as a parameter.
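For illustration, both forms below run `notify_db_session`; the only difference is that a funcarg hands you the fixture's return value (the test bodies are just sketches):

```python
import pytest


def test_update_user_email(sample_user):
    # sample_user creates the user in the database *and* returns it, so the
    # test can manipulate and assert on the object directly.
    sample_user.email_address = "new@example.com"
    assert sample_user.email_address == "new@example.com"


@pytest.mark.usefixtures("notify_db_session")
def test_something_that_only_needs_a_clean_db():
    # notify_db_session still runs and will clear up the tables at the end of
    # the test, but we never need its return value, so usefixtures makes the
    # side-effect-only intent explicit.
    pass
```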
note that all of these tests have to be checked to ensure that they
still call through to notify_db_session (notify_db is not required) to
tear down the database after the test runs - since it's no longer
necessary to pass it in to the function just to invoke the sample_user
fixture.
The financial year starts on 1 April at 00:00 BST, and our dates are stored as UTC.
Added a test for get_april_fools.
Added some more test data for get_billable_unit_count_per_month.
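The shape of get_april_fools is roughly this (the body below is an assumed sketch using pytz): 1 April 00:00 BST is 31 March 23:00 UTC, so the boundary has to be converted before comparing against our UTC-stored timestamps.

```python
from datetime import datetime

import pytz


def get_april_fools(year):
    # 1 April 00:00 local London time (BST at that point in the year),
    # returned as a naive UTC datetime so it can be compared directly
    # against columns stored in UTC.
    london = pytz.timezone("Europe/London")
    local_midnight = london.localize(datetime(year, 4, 1, 0, 0))
    return local_midnight.astimezone(pytz.utc).replace(tzinfo=None)


# e.g. get_april_fools(2016) == datetime(2016, 3, 31, 23, 0)
```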
When the start_date and end_date query arguments exist in the request,
the query will return results from the NotificationHistory table for the given date range.
We will need to check the performance of this query, but this will only be used by the platform admin page.
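The date-range handling is roughly the following (the created_at column and the helper name are assumptions):

```python
from app.models import NotificationHistory


def filter_by_date_range(query, start_date=None, end_date=None):
    # Only restrict the query when both start_date and end_date were supplied
    # as query arguments on the request.
    if start_date and end_date:
        query = query.filter(
            NotificationHistory.created_at >= start_date,
            NotificationHistory.created_at <= end_date,
        )
    return query
```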
- note this is an unexpectedly big change.
- When we create a service, we pass the service id to the persist method. This means we don't have the service available to check whether it is in research mode.
- All calling methods (except the one where we use the notify service) have the service available. So rather than reload it, I changed the method signature to pass the service, not the id, to persist.
- Touches a few places.
Note this means that the update or create methods will fall over on a null service. But this seems correct.
This goes back to the story we still need to play to make the service available as the API user, so that the need to load and pass services around is minimised.
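As a sketch of the signature change (the function and field names here are illustrative, not the exact methods touched): the dao now takes the Service itself, so research mode can be checked without another load.

```python
from app import db


# before: def persist_notification(notification, service_id): ...
def persist_notification(notification, service):
    # The Service itself is now passed in rather than its id, so research mode
    # can be checked here without reloading the service from the database.
    # A null service falls over on the attribute access, which seems correct.
    notification.service_id = service.id
    notification.simulated = service.research_mode  # illustrative field name

    db.session.add(notification)
    db.session.commit()
    return notification
```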