Currently, they're made by creating a one-line job, but we want to
reduce task/CSV file noise, so we're moving them to persist in the
same vein as API usage. However, we can't just call through to that
code, since there are some differences:
* no API keys
* tighter control over the API format
* no scheduling
* no client references
* etc.
So, re-using as much of the v2 validation stuff as possible, I've
created this file that just does basic validation, and then calls
through to persist_notification and schedules a task. Woo.
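For context, a minimal sketch of that flow, assuming hypothetical helpers (only the name `persist_notification` comes from the codebase; the stand-in bodies and the field checks are illustrative, not the real API):

```python
from uuid import uuid4


def persist_notification(service_id, notification_type, to, template_id):
    """Stand-in for the real persist_notification helper: saves the row."""
    return {
        "id": str(uuid4()),
        "service_id": service_id,
        "type": notification_type,
        "to": to,
        "template_id": template_id,
    }


def send_to_queue(notification):
    """Stand-in for scheduling the delivery task (eg a Celery apply_async)."""
    print("queued", notification["id"])


def create_notification(service_id, notification_type, payload):
    # Basic validation only: no API keys, no scheduling, no client references.
    missing = [field for field in ("to", "template_id") if not payload.get(field)]
    if missing:
        raise ValueError("Missing fields: {}".format(", ".join(missing)))

    notification = persist_notification(
        service_id=service_id,
        notification_type=notification_type,
        to=payload["to"],
        template_id=payload["template_id"],
    )
    send_to_queue(notification)
    return notification
```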
We need this for the two-way stuff in the admin app.
We already have this as a public endpoint, but the admin app can’t use
it, because the admin app auths with its own key, not that of the
service it’s acting on behalf of.
This endpoint makes sure that a request originating from one service
can’t be used to see notifications belonging to another service.
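Roughly, the guarantee looks like this (illustrative only; the real endpoint goes through the dao layer, and the important bit is that the service id comes from the authenticated caller rather than from the request):

```python
def get_notification_for_service(authenticated_service_id, notification_id, store):
    # `store` is any mapping of notification id -> notification dict; a
    # stand-in for the real dao call.
    notification = store.get(notification_id)
    if notification is None or notification["service_id"] != authenticated_service_id:
        # Report "not found" rather than "forbidden" so ids can't be probed
        # across services.
        raise LookupError("Notification not found")
    return notification
```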
Marshmallow validates and deserialises - BUT, when it deserialises,
it explicitly sets `sms_sender=None`, even when you haven't passed
sms_sender in. This is problematic, because we wanted to take advantage
of sqlalchemy's default value to set sms_sender to `GOVUK` when the
actual DB commit happens.
Instead, still use marshmallow for validating, but manually carry out
the json deserialisation in the model class.
This fixes a bug that only manifested when the database had been upgraded
but the code hadn't been updated. 🎉
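A minimal sketch of the approach, assuming SQLAlchemy 1.4+ and recent marshmallow, with a throwaway table and assumed column names rather than the real model:

```python
from marshmallow import Schema, fields
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class SmsNotification(Base):
    __tablename__ = "sms_notification_sketch"
    id = Column(Integer, primary_key=True)
    to = Column(String, nullable=False)
    # The point of the change: let SQLAlchemy's default apply on commit.
    sms_sender = Column(String, default="GOVUK")

    @classmethod
    def from_json(cls, data):
        # Only set keys that were actually sent, so an absent sms_sender
        # falls back to the column default instead of being forced to None.
        allowed = {"to", "sms_sender"}
        return cls(**{key: value for key, value in data.items() if key in allowed})


class SmsNotificationSchema(Schema):
    # Used for validation only; deserialisation happens in from_json above.
    to = fields.String(required=True)
    sms_sender = fields.String(required=False)


payload = {"to": "+447700900123"}  # no sms_sender supplied
errors = SmsNotificationSchema().validate(payload)
assert not errors
notification = SmsNotification.from_json(payload)
# sms_sender is unset here and picks up "GOVUK" when the row is flushed.
```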
Since the response has changed, I have created new endpoints so that the deployments for Admin are more manageable.
Removed print statements from some tests.
There are three authentication methods (a rough sketch follows the list):
- requires_no_auth - public endpoint that does not require an Authorisation header
- requires_auth - public endpoints that need an API key in the Authorisation header
- requires_admin_auth - private endpoint that requires an Authorisation header containing the API key for the client defined as the admin user
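A rough idea of how the three decorators might gate a Flask view; the real implementations live in the auth module and differ in detail, and the admin check here is only a placeholder:

```python
from functools import wraps

from flask import abort, request


def requires_no_auth(view):
    # Public endpoint: no Authorisation header needed.
    return view


def _check_bearer_token():
    token = request.headers.get("Authorization", "")
    if not token.startswith("Bearer "):
        abort(401)
    return token[len("Bearer "):]


def requires_auth(view):
    @wraps(view)
    def wrapper(*args, **kwargs):
        _check_bearer_token()
        # ...validate the token against the calling service's API keys here...
        return view(*args, **kwargs)
    return wrapper


def requires_admin_auth(view):
    @wraps(view)
    def wrapper(*args, **kwargs):
        _check_bearer_token()
        # ...additionally check the key belongs to the admin client...
        return view(*args, **kwargs)
    return wrapper
```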
This endpoint will eventually replace the weekly breakdown one. By month
for a given financial year is better, because it gives us consistency
with the breakdown of financial usage (and eventually consistency with
the template usage).
The code to do this is a bit convoluted, in order to fill out the counts
for months and statuses where we don’t have notifications.
This will make the admin side of this easier, because we can rely on
there always being numbers available. The admin side will deal with
summing the statuses (eg rolling `temporary-failure` up into `failed`), because
this is presentational.
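The gap-filling amounts to something like this (the status names and the row shape are assumptions; the real query feeds in whatever counts exist):

```python
STATUSES = ["created", "sending", "delivered", "temporary-failure", "permanent-failure"]


def months_in_financial_year(year):
    # UK financial year: April of `year` through March of `year + 1`.
    return [(year, month) for month in range(4, 13)] + [(year + 1, month) for month in range(1, 4)]


def monthly_breakdown(financial_year, rows):
    """`rows` is an iterable of (year, month, status, count) tuples."""
    # Start every month/status at zero, then overlay the real counts, so the
    # admin side can always rely on a number being present.
    counts = {
        month: {status: 0 for status in STATUSES}
        for month in months_in_financial_year(financial_year)
    }
    for year, month, status, count in rows:
        counts[(year, month)][status] = count
    return counts
```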
This commit also modifies the usage count to use `.between()` for
consistency.
- Renaming /service/<id>/deactivate to /service/<id>/archive to match the language in the UI.
- We will need to update admin before deleting the deactivate service method.
- Created dao and endpoint methods to suspend and resume a service.
- I confirmed the use of suspend and resume with a couple of people on the team; it seems to be the right choice.
The idea is that if you archive a service, there is no coming back from that.
Suspending a service marks it as inactive and revokes its API keys. Resuming a service marks it as active again; the service will then need to create new API keys.
It makes sense that if a service is under threat, its API keys should be renewed.
The next PR will update the code to check that the service is active before sending the request or allowing any actions by the service.
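In dao terms, the distinction is roughly this (throwaway in-memory structures, not the real session code or column names):

```python
from datetime import datetime, timezone


def dao_suspend_service(service, api_keys):
    # Suspending marks the service inactive and revokes every API key.
    service["active"] = False
    for api_key in api_keys:
        api_key["expiry_date"] = datetime.now(timezone.utc)


def dao_resume_service(service):
    # Resuming only reactivates the service; it does not restore old keys,
    # so the service has to create new API keys afterwards.
    service["active"] = True
```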
We already filter the usage-by-month query by financial year. When we
show the total usage for a service, we should be able to filter this
by financial year.
Then, when the two lots of data are put side by side, it all adds up.
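A sketch of that filter, matching the monthly query's use of `.between()` (the table and column names here are assumptions, not the real schema):

```python
from datetime import datetime

from sqlalchemy import Column, DateTime, Integer, func, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class NotificationHistory(Base):
    __tablename__ = "notification_history_sketch"
    id = Column(Integer, primary_key=True)
    billable_units = Column(Integer, nullable=False, default=0)
    created_at = Column(DateTime, nullable=False)


def financial_year_bounds(year):
    # 1 April of `year` up to the end of 31 March of `year + 1`.
    return datetime(year, 4, 1), datetime(year + 1, 3, 31, 23, 59, 59, 999999)


def total_usage_for_financial_year(session, year):
    start, end = financial_year_bounds(year)
    statement = select(func.sum(NotificationHistory.billable_units)).where(
        NotificationHistory.created_at.between(start, end)
    )
    return session.execute(statement).scalar() or 0
```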