We want to be able to toggle the numbers on the platform admin page between
including and excluding notifications sent using test keys, so that we can see
both real use of the platform and all load on it.
This parameter defaults to True, which is the existing behaviour.
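A minimal sketch of what the toggle could look like on the query side; the parameter name `include_from_test_key`, the `key_type` column and the `KEY_TYPE_TEST` constant are illustrative assumptions, not copied from the codebase.

```python
# Hypothetical sketch: parameter and column names are illustrative only.
from app import db
from app.models import Notification  # assumed import location

KEY_TYPE_TEST = 'test'

def get_notification_count(include_from_test_key=True):
    query = db.session.query(Notification)
    if not include_from_test_key:
        # Drop notifications that were sent with a test API key
        query = query.filter(Notification.key_type != KEY_TYPE_TEST)
    return query.count()
```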
- It seems the phonenumber/emailaddress from the CSV are now passed in as personalisation.
- Assume the renderer does the correct thing here; will need to check with @quis.
This PR fixes that and adds a test for it.
I am confused as to why I had to change the test_validators test that is checking if the mock is called.
Why did this code pass on preview?
Created a new schema that accepts request parameters for the
get_notifications v2 route.
Using that to validate now instead of the marshmallow validation.
Also changed the way formatted error messages are returned because
the previous way was cutting off our failing `enum` messages.
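For illustration, the new request schema and the error formatting might look roughly like this; it's a sketch using jsonschema draft 4, and the property names and enum values are assumptions rather than the real schema.

```python
# Sketch only: property names and enum values are illustrative.
from jsonschema import Draft4Validator

get_notifications_request = {
    "$schema": "http://json-schema.org/draft-04/schema#",
    "description": "Request schema for getting a list of notifications",
    "type": "object",
    "properties": {
        "status": {"enum": ["created", "sending", "delivered", "failed"]},
        "template_type": {"enum": ["sms", "email", "letter"]},
        "older_than": {"type": "string", "format": "uuid"},
    },
    "additionalProperties": False,
}

def validate_query_params(params):
    validator = Draft4Validator(get_notifications_request)
    # Keep the full jsonschema message so failing `enum` messages
    # aren't cut off when we format the error response.
    return [
        {"error": "ValidationError", "message": e.message}
        for e in validator.iter_errors(params)
    ]
```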
In this PR the id for the notification is passed in and used to
create the notification, which causes an integrity error.
Normally when we get a SQLAlchemy error here we send the message
to the retry queue, but if the notification already exists we
just ignore it.
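Roughly that behaviour, as a sketch; `persist_notification` and `get_notification_by_id` are assumed helper names, and the Celery retry wiring is simplified.

```python
# Sketch of the retry-vs-ignore behaviour described above.
from sqlalchemy.exc import SQLAlchemyError

def save_notification(task, notification_id, encrypted_notification):
    try:
        persist_notification(notification_id, encrypted_notification)  # assumed helper
    except SQLAlchemyError as e:
        if get_notification_by_id(notification_id):  # assumed DAO helper
            # The notification already exists (e.g. the task ran twice),
            # so the integrity error can safely be ignored.
            return
        # Anything else goes back onto the retry queue as before.
        task.retry(queue='retry', exc=e)
```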
There are no more notifications whose statuses are "failed", as
the "failed" status has now been replaced with statuses that are
more specific about the nature of the failure.
However, we still want to be able to filter by failing
notifications. (ie "/v2/notifications?status=failed").
Created a `.substitute_status()` method which takes a status
string or list of status strings and, if it finds 'failed',
replaces it with the specific failure status types.
This way, asking for notifications with "?status=failed" is
internally treated as
"status = ['technical-failure', 'temporary-failure', 'permanent-failure']"
Some notification statuses assume that a notification has been
updated (ie, it cannot have been created in that state).
This caused a bug in our sample notification fixture when trying
to create a notification in a 'complete' status.
This commit groups the completed statuses into a list, in the
same way other statuses have been grouped, so that they're more
portable.
Also fixed the sample_notification fixture.
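As a sketch, the grouping plus the fixture fix could look like this; the list membership and field names are assumptions based on the statuses mentioned in these commits.

```python
from datetime import datetime

from app.models import Notification  # assumed import location

# Assumed membership: statuses a notification can only reach after being
# updated at least once.
NOTIFICATION_STATUS_TYPES_COMPLETED = [
    'sent', 'delivered',
    'technical-failure', 'temporary-failure', 'permanent-failure',
]

def sample_notification(status='created', **kwargs):
    notification = Notification(status=status, created_at=datetime.utcnow(), **kwargs)
    if status in NOTIFICATION_STATUS_TYPES_COMPLETED:
        # A completed notification must look as though it has been updated
        notification.sent_at = notification.updated_at = datetime.utcnow()
    return notification
```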
The "cost" value was flawed for a couple of reasons.
1. Lots of messages are free, so in those instances the "cost"
doesn't tell you anything
2. The query to get the rate was expensive and we don't have
an obvious way to get it back very efficiently for large numbers
of notifications.
So we scrapped it.
We want to log the usage of the various API clients we have so that we understand when they can be cycled.
To this end we are going to count usage in statsd.
All notify clients have a user agent, of the format: NOTIFY-API-{LANGUAGE}-CLIENT/version.number
For example, NOTIFY-API-PYTHON-CLIENT/3.0.0
We convert that into a statsd/graphite friendly string of the format: notify-api-python-client.3-0-0
So we can subdivide on client and client version on our dashboards.
Present but unknown user agents are recorded as "non-notify-user-agent".
Missing user agents are presented as "unknown".
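A sketch of that conversion (the function name and header parsing details are assumptions):

```python
def user_agent_to_stat(user_agent):
    if not user_agent:
        return 'unknown'
    if not user_agent.upper().startswith('NOTIFY-API-'):
        return 'non-notify-user-agent'
    # "NOTIFY-API-PYTHON-CLIENT/3.0.0" -> "notify-api-python-client.3-0-0"
    client, _, version = user_agent.partition('/')
    return '{}.{}'.format(client.lower(), version.replace('.', '-'))
```

Each request then increments a counter along the lines of `statsd_client.incr('client-usage.' + user_agent_to_stat(request.headers.get('User-Agent')))` (metric name assumed).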
Our previous test was returning a notification without a `sent_by`
attribute, which meant that cost was always 0.
Unfortunately, this meant that returning a real value for cost was
untested and (whaddya know) it broke immediately.
Old test scenario:
- billable_units=1, sent_by=None, cost=0
New scenarios:
- billable_units=0, sent_by='mmg', cost=0
- billable_units=1, sent_by='mmg', cost=1
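Roughly how those scenarios might be expressed as a parametrised test; the fixture names, the implied rate of 1, and the omitted auth headers are all illustrative.

```python
import json
import pytest

@pytest.mark.parametrize('billable_units, sent_by, expected_cost', [
    (0, 'mmg', 0),
    (1, 'mmg', 1),
])
def test_get_notification_returns_cost(client, create_notification, billable_units, sent_by, expected_cost):
    notification = create_notification(billable_units=billable_units, sent_by=sent_by)
    response = client.get('/v2/notifications/{}'.format(notification.id))
    assert json.loads(response.get_data(as_text=True))['cost'] == expected_cost
```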
Emulated the validation methods that exist [in the python-client](620e5f7014/integration_test/__init__.py).
The `validate_v0` function loads json schemas from a local
`/schemas` directory, whereas the new `validate` function (which
we're going to use for our v2 API calls) uses the common
`get_notification_response` python schema defined in
"app/v2/notifications/notification_schemas.py".
Removed the new `v2` schema from the last commit as it's no longer
being used.
Also, refactored common code in the GET and POST contract files
so that making requests and converting responses to JSON are
pulled out into common functions.
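For example, the shared helpers might look something like this (names and headers assumed):

```python
import json

def do_get(client, url, auth_header):
    # Shared request helper used by both the GET and POST contract tests
    return client.get(url, headers=[('Content-Type', 'application/json'), auth_header])

def response_json(response):
    # Shared conversion of a flask test response body to a dict
    return json.loads(response.get_data(as_text=True))
```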
Converted python-based schema in
"app/v2/notifications/notification_schema.py" into a pure json
schema and tested it with the new "v2/" API route to confirm that
it validates.
Also refactored some common code in the public contract GET tests
that returns notifications.
We're formally using the ISO 8601 UTC datetime format, so the
correct way to output the data is by appending the timezone
("Z" in the case of UTC*).
Unfortunately, Python's `datetime` formatting leaves the timezone
out of the output string, which means we just have to append the
string "Z" to the end of all datetime strings we output.
Should be fine, as we will only ever output UTC timestamps anyway.
* https://en.wikipedia.org/wiki/ISO_8601#UTC
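In practice this is just the following; the helper name and example values are illustrative.

```python
from datetime import datetime

def utc_string(dt):
    # Timestamps are stored as naive UTC datetimes, so isoformat() carries
    # no offset and we append "Z" ourselves.
    return dt.isoformat() + 'Z' if dt else None

utc_string(datetime(2016, 6, 1, 11, 9, 30))  # '2016-06-01T11:09:30Z'
```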
The new 'v2' API wants to return less data than the previous one,
which was sending back tons of fields the clients never used.
This new route returns only useful information, with the JSON
response dict being built up in the model's `.serialize()` method.
Note that writing the test for this was a bit painful because of
having to treat loads of keys differently. Hopefully this is a
good way to write the test; if we don't think it is, we should
start thinking of a better way to check the values are what we
expect.
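A hedged sketch of the idea behind `.serialize()`; the exact set of fields the real route returns will differ.

```python
# Sketch of a method on the Notification model; field names are illustrative.
def serialize(self):
    return {
        'id': str(self.id),
        'reference': self.client_reference,
        'type': self.notification_type,
        'status': self.status,
        'created_at': self.created_at.isoformat() + 'Z',
        'sent_at': self.sent_at.isoformat() + 'Z' if self.sent_at else None,
    }
```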
In the V2 API, the GET response for an individual notification
returns a 'cost' value, which we can get by multiplying the
billable units by the per-message rate of the supplier who
sent the message.
Any notifications with billable units > 0 but without a
corresponding `ProviderRates` entry will blow up the application,
so make sure you've got one.
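A sketch of that calculation; `ProviderRates` is named in this commit, but the column names and query used here are assumptions.

```python
from app.models import ProviderRates  # assumed import location

def get_cost(notification):
    if not notification.billable_units or not notification.sent_by:
        return 0
    rate = ProviderRates.query.filter(
        ProviderRates.provider == notification.sent_by
    ).one()  # .one() raises if there's no matching ProviderRates row
    return notification.billable_units * rate.rate
```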