We're formally using the ISO 8601 UTC datetime format, so the
correct way to output the data is by appending the timezone
designator ("Z" in the case of UTC*).
Unfortunately, Python's `datetime` formatting just ignores the
timezone on output, which means we have to append the string "Z"
ourselves to the end of every datetime string we output.
This should be fine, as we will only ever output UTC timestamps anyway.
* https://en.wikipedia.org/wiki/ISO_8601#UTC
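A minimal sketch of the workaround, assuming naive UTC datetimes throughout (the function name is illustrative):

```python
from datetime import datetime

def format_datetime(dt):
    # Our datetimes are naive UTC, so isoformat() emits no offset;
    # append "Z" by hand to get a valid ISO 8601 UTC timestamp.
    return dt.isoformat() + "Z"

format_datetime(datetime(2016, 6, 1, 14, 30))  # '2016-06-01T14:30:00Z'
```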
This is the schema that individual notifications will conform to
when they are returned from this API.
The JSON schema's logic enforces that the right keys are set depending
on the `"type"` (e.g. a notification with `"type": "sms"` must have a
`"phone_number"` value and cannot have an `"email_address"`).
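A hedged sketch of that conditional validation using Python's `jsonschema` library (the schema here is illustrative, not the real one):

```python
from jsonschema import validate

notification_schema = {
    "type": "object",
    "required": ["type"],
    "oneOf": [
        {
            "properties": {"type": {"enum": ["sms"]}},
            "required": ["phone_number"],
            "not": {"required": ["email_address"]},
        },
        {
            "properties": {"type": {"enum": ["email"]}},
            "required": ["email_address"],
            "not": {"required": ["phone_number"]},
        },
    ],
}

# Passes: an sms notification with a phone number and no email address.
validate({"type": "sms", "phone_number": "+447700900123"}, notification_schema)
```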
The new 'v2' API wants to return less data than the previous one,
which was sending back tons of fields the clients never used.
This new route returns only useful information, with the JSON
response dict being built up in the model's `.serialize()` method.
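A rough sketch of the shape of that method (the model fields here are illustrative, not the real v2 response):

```python
class Notification:
    def __init__(self, id, notification_type, status, created_at):
        self.id = id
        self.notification_type = notification_type
        self.status = status
        self.created_at = created_at

    def serialize(self):
        # Build the response dict by hand, returning only the fields
        # the v2 clients actually use.
        return {
            "id": str(self.id),
            "type": self.notification_type,
            "status": self.status,
            "created_at": self.created_at.isoformat() + "Z",
        }
```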
Note that writing the test for this was a bit painful because loads
of keys have to be treated differently. Hopefully this is a good way
to write the test; if we decide it isn't, we should start thinking of
a better way to check that the values are what we expect.
In the V2 API, the GET response for an individual notification
returns a 'cost' value, which we can get by multiplying the
billable units by the per-message rate of the supplier who
sent the message.
Any notification with billable units > 0 but without a corresponding
`ProviderRates` entry will blow up the application, so make sure a
matching entry exists (see the sketch below).
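A sketch of the calculation and the failure mode described above (names and the rate value are illustrative):

```python
def notification_cost(billable_units, provider_rate):
    # cost = billable units x the per-message rate of the provider that
    # sent the message. provider_rate stands in for the value looked up
    # from the ProviderRates table; if there is no entry for the
    # provider, that lookup is where the application blows up.
    if provider_rate is None:
        raise ValueError("no ProviderRates entry for this provider")
    return billable_units * provider_rate

notification_cost(3, 0.0165)  # => 0.0495
```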
Update the format_checkers to raise a specific exception, so that the
validator can handle multiple messages. This led to a refactor of
build_error_message.
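A minimal sketch of the pattern with `jsonschema`'s `FormatChecker` (the format name and exception are illustrative):

```python
from jsonschema import Draft7Validator, FormatChecker

format_checker = FormatChecker()

class InvalidPhoneError(Exception):
    pass

# Registering the exception with raises= lets the validator catch it
# and turn it into a normal validation error instead of crashing.
@format_checker.checks("phone_number", raises=InvalidPhoneError)
def check_phone_number(value):
    if not value.startswith("+"):
        raise InvalidPhoneError("must include an international prefix")
    return True

validator = Draft7Validator(
    {"properties": {"phone_number": {"format": "phone_number"}}},
    format_checker=format_checker,
)
# iter_errors yields every failure, so the caller can build a response
# containing multiple messages rather than stopping at the first one.
errors = [e.message for e in validator.iter_errors({"phone_number": "0770"})]
```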
1) It's `incr`, not `inc`, on the Redis client, so the calls have been renamed everywhere.
2) Redis returns bytes/string rather than an int even when the value stored is an int, so cast the result to an int before use. Note that you can set up the GET to do this cast transparently, but I've not done that because we *may* use GETs for non-int values, and the callback sets up the cast for the connection, not the call.
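A sketch of both points with `redis-py` (the key name is illustrative):

```python
import redis

redis_client = redis.StrictRedis()

# 1) the method is incr, not inc
redis_client.incr("service-1234-notification-count")

# 2) GET returns bytes (or None), so cast explicitly before arithmetic.
#    We deliberately don't register a connection-wide response callback,
#    because it would also apply to GETs of non-int values.
count = int(redis_client.get("service-1234-notification-count") or 0)
```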
After we have written the notification to the database and placed it on a delivery queue, we count it in the cache against the service.
This is the equivalent of doing it at the end of the API call.
This means that the cache counts notifications in the database, NOT notifications sent to providers: if the provider fails to accept the notification, it still counts.
I think this is correct, as the service has done the work to send it so we should count it, though there is an argument that we should count notifications on sending instead.
- Uses the Redis cache to check for the current count
- If not present, sets the value based on the database state
- Any Redis errors are swallowed; cache failures should NOT fail the request (see the sketch below)
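A sketch of that flow (`dao_count` stands in for the real DAO function, which isn't shown here):

```python
import redis

redis_client = redis.StrictRedis()

def get_notification_count(service_id, dao_count):
    # dao_count: callable returning the count from the database.
    cache_key = "service-{}-notification-count".format(service_id)
    try:
        cached = redis_client.get(cache_key)
        if cached is not None:
            return int(cached)
        count = dao_count(service_id)       # cache miss: seed from the DB
        redis_client.set(cache_key, count)
        return count
    except redis.RedisError:
        # Swallow cache errors; a cache failure should NOT fail the request.
        return dao_count(service_id)
```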
This means that any errors will cause the entire thing to roll back.
Unfortunately, to do this we have to circumvent our regular code, which calls commit a lot and lazily loads a lot of things, which will flush and cause the version decorators to fail. So we have to write a lot of stuff by hand and re-select the service (even though it's already been queried) just to populate the `api_keys` and `templates` relationships on it.
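A hedged sketch of the re-select, assuming SQLAlchemy eager loading and a Flask-SQLAlchemy-style `db.session` and `Service` model (the query shape is an assumption, not the actual code):

```python
from sqlalchemy.orm import joinedload

# Re-select the service with api_keys and templates loaded in the same
# SELECT, so touching them later doesn't lazy-load (and hence flush).
service = (
    db.session.query(Service)
    .options(joinedload(Service.api_keys), joinedload(Service.templates))
    .filter(Service.id == service_id)
    .one()
)
```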
From a support ticket:
> it's possible to add a personalisation token with trailing whitespace
> (eg. "key " rather than "key"). Can this be trimmed in the UI to guard
> against this? (one of our devs copied and pasted it from a document
> and inadvertently included the space)
> Nothing major but caused a few hours of investigations!
Rather than trim the placeholder in the template, we should treat
placeholders in API calls the same way we do with CSV files, i.e. we
ignore case and spacing in the name of the placeholder. So
`(( First Name))` is equivalent to `((first_name))`, and both would be
populated from a dictionary like `{'firstName': 'Chris'}`.
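A sketch of the normalisation this implies (the real logic lives in notifications-utils; this is just the idea):

```python
import re

def normalise_placeholder(name):
    # Ignore case, whitespace, underscores and hyphens, so "First Name",
    # "first_name" and "firstName" all map to the same key.
    return re.sub(r"[\s_\-]+", "", name).lower()

personalisation = {"firstName": "Chris"}
lookup = {normalise_placeholder(k): v for k, v in personalisation.items()}
lookup[normalise_placeholder(" First Name")]  # => 'Chris'
```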
Depends on:
- [x] https://github.com/alphagov/notifications-utils/pull/77