There are no more notifications whose statuses are "failed", as
the "failed" status has now been replaced with statuses that are
more specific about the nature of the failure.
However, we still want to be able to filter by failing
notifications (i.e. `/v2/notifications?status=failed`).
Created a `.substitute_status()` method which takes a status
string or a list of status strings and, if it finds 'failed',
replaces it with the specific failing status types.
This way, requesting notifications with "?status=failed" is
internally treated as
"status = ['technical-failure', 'temporary-failure', 'permanent-failure']"
Some notification statuses assume that a notification has been
updated (ie, it cannot have been created in that state).
This caused a bug in our sample notification fixture when trying
to create a notification in a 'completed' status.
This commit groups the completed statuses into a list, the way
other statuses have been grouped, so that they're more portable.
Also fixed the sample_notification fixture.
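Something like the following grouping; the exact membership of the
completed list is an assumption beyond the statuses named in this
log:

```python
# Statuses a notification can only reach after being updated; a
# freshly created notification cannot start out in any of these.
NOTIFICATION_STATUS_TYPES_COMPLETED = [
    "delivered",
    "technical-failure",
    "temporary-failure",
    "permanent-failure",
]
```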
Return multiple notifications for a service.
Choosing a page_size or a page_number is no longer allowed.
Instead, there is a `next` link included which will return the
next {default_page_size} notifications in the sequence.
Query parameters accepted are:
- template_type: filter by specific template types
- status: filter by specific statuses
- older_than: return a chronological list of notifications older
than this one. The notification with the id that is passed in
is _not_ returned.
Note that both `template_type` and `status` can accept multiple
parameters. Thus it is possible to call
`/v2/notifications?status=created&status=sending&status=delivered`
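A sketch of walking the paginated endpoint, assuming the response
carries the notifications under a `notifications` key and the
pagination links under `links` (base URL, auth header and key
names are placeholders):

```python
import requests

url = "https://api.example.com/v2/notifications?status=failed&template_type=sms"
headers = {"Authorization": "Bearer <api-token>"}

while url:
    page = requests.get(url, headers=headers).json()
    for notification in page["notifications"]:
        print(notification["id"], notification["status"])
    # follow the `next` link until the API stops providing one
    url = page.get("links", {}).get("next")
```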
The "cost" value was flawed for a couple of reasons.
1. Lots of messages are free, so in those instances the "cost"
doesn't tell you anything
2. The query to get the rate was expensive and we don't have
an obvious way to get it back very efficiently for large numbers
of notifications.
So we scrapped it.
We're formally using the ISO 8601 UTC datetime format, and so the
correct way to output the data is by appending the timezone.
("Z" in the case of UTC*).
Unfortunately, our `datetime` objects are timezone-naive, so
Python's ISO formatting outputs no timezone designator at all,
which means we just have to append the string "Z" to the end of
all datetime strings we output.
Should be fine, as we will only ever output UTC timestamps anyway.
* https://en.wikipedia.org/wiki/ISO_8601#UTC
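For illustration, with a naive datetime that we know to be UTC:

```python
from datetime import datetime

created_at = datetime(2016, 6, 1, 14, 30, 0)  # naive, but stored as UTC

# isoformat() on a naive datetime carries no timezone designator...
assert created_at.isoformat() == "2016-06-01T14:30:00"

# ...so we append "Z" ourselves to get a valid ISO 8601 UTC timestamp
print(created_at.isoformat() + "Z")  # 2016-06-01T14:30:00Z
```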
This is the schema that individual notifications will conform to
when they are returned from this API.
JSON Schema logic enforces that the right keys are set depending
on the `"type"` (e.g. a notification with `"type": "sms"` must
have a `"phone_number"` value and cannot have an
`"email_address"`).
The new 'v2' API wants to return less data than the previous one,
which was sending back tons of fields the clients never used.
This new route returns only useful information, with the JSON
response dict being built up in the model's `.serialize()` method.
Note that writing the test for this was a bit painful because
loads of keys have to be treated differently. Hopefully this is a
good way to write the test; if we decide it isn't, we should
start thinking of a better way to check the values are what we
expect.
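A hypothetical sketch of the shape of that method; the real field
list is whatever the clients actually need:

```python
class Notification:
    """Stand-in for the real model, just to show the serialization."""

    def __init__(self, id, notification_type, status, created_at):
        self.id = id
        self.notification_type = notification_type
        self.status = status
        self.created_at = created_at

    def serialize(self):
        # build up only the fields clients actually use
        return {
            "id": str(self.id),
            "type": self.notification_type,
            "status": self.status,
            "created_at": self.created_at.isoformat() + "Z",
        }
```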
In the V2 API, the GET response for an individual notification
returns a 'cost' value, which we can get by multiplying the
billable units by the per-message rate of the supplier who
sent the message.
Any notification with billable units > 0 but without a
corresponding `ProviderRates` entry will blow up the application,
so make sure a rate entry exists.
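The calculation itself is trivial; a sketch using `Decimal` to
avoid float rounding, with the `ProviderRates` lookup elided and
the rate value made up:

```python
from decimal import Decimal

def notification_cost(billable_units, rate):
    # rate is the per-message rate of the provider that sent it
    return billable_units * Decimal(rate)

print(notification_cost(2, "0.0165"))  # 0.0330
```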
Updated the format_checkers to raise a specific exception so that
the validator can handle multiple messages.
This led to a refactor of `build_error_message`.
1) It's `incr`, not `inc`, on the Redis client, so renamed the calls everywhere.
2) Redis returns bytes/strings rather than an int even when the value stored is an int, so cast the result to an int before use. Note: you can set up the GET to do this transparently, but I've not done this as we *may* use GET for non-int values, and the callback sets up the cast for the connection, not the call.
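Illustrating both points with a standard redis-py client (the key
name is a placeholder):

```python
import redis

client = redis.Redis()

client.incr("service-1234-notification-count")       # incr, not inc
raw = client.get("service-1234-notification-count")  # bytes, e.g. b"1"
count = int(raw) if raw is not None else 0           # cast before use
```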
After we have written the notification to the database and placed it on a delivery queue, we count it in the cache against the service.
This is the equivalent of doing it at the end of the API call.
This means that the cache counts notifications in the database, NOT notifications sent to providers. If the provider fails to accept a notification, it still counts.
I think this is correct, as they have done the work to send it so we should count it, though there is an argument that we should count them on sending?
- Uses Redis cache to check for current count
- If not present then sets the value based on the database state
- Any Redis errors are swallowed. Cache failures should NOT fail the request.
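A sketch of that read path; the function and key names are made up
for illustration:

```python
import logging

logger = logging.getLogger(__name__)

def notification_count(service_id, redis_client, count_from_database):
    key = f"service-{service_id}-notification-count"
    try:
        cached = redis_client.get(key)
    except Exception:
        logger.exception("Redis GET failed for %s", key)
        cached = None
    if cached is not None:
        return int(cached)  # redis returns bytes, so cast
    count = count_from_database(service_id)
    try:
        redis_client.set(key, count)
    except Exception:
        # swallow the error: cache failures should NOT fail the request
        logger.exception("Redis SET failed for %s", key)
    return count
```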