This makes this test a couple of seconds faster - 0.7s instead of 2.5s for me
locally. sample_notification also creates a service, template, user and
permissions, but we don't need any of these objects to exist in the database
for this test. It's particularly helpful for this test because there are so
many parameterized cases. Thanks @leohemsted for suggesting doing this here.
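As a rough illustration of the pattern (the actual test and model fields may differ, and the asserted helper here is a placeholder), building an in-memory `Notification` avoids all of those database writes:

```python
import uuid

import pytest

from app.models import Notification  # assumed import path


@pytest.mark.parametrize('status, expected', [
    ('created', False),
    ('delivered', True),
    # ...many more cases, each of which previously built a service,
    # template, user and permissions via the sample_notification fixture
])
def test_some_status_property(status, expected):
    # Construct the model in memory only - nothing is flushed to the
    # database, which is what makes each parameterized case so much cheaper.
    notification = Notification(id=uuid.uuid4(), status=status)

    assert status_is_final(notification) == expected  # placeholder for the real assertion
```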
This means that these codes won't be delayed by large jobs going through the
send-sms/email queues. send_user_sms_code now works much more like the
endpoints for sending notifications, by persisting the notification and only
using the deliver_sms task (instead of using send_sms as well).
The workers consuming the `notify` queue should be able to handle the deliver
task as well, so no change should be needed to the Celery workers to support
this.
I think there's also a change in behaviour here: previously, if the Notify
service was in research mode, 2FA codes would not have been sent out, making
it impossible to log into the admin. Now, a call to this endpoint will always
send out the notification even if we've put the Notify service into research
mode, since we set the notification's key type to normal and ignore the
service's research mode setting when sending the notification to the queue.
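A rough sketch of the new shape of `send_user_sms_code` (the task import is an assumption, and `create_2fa_code_notification` is a made-up helper standing in for the real persistence code):

```python
from app.celery.provider_tasks import deliver_sms  # assumed task location


def send_user_sms_code(user, secret_code):
    # Persist the 2FA notification like any other notification, but always
    # with the 'normal' key type, which is why research mode no longer
    # stops the code from being sent.
    notification = create_2fa_code_notification(user, secret_code, key_type='normal')

    # Queue only the delivery task, on the 'notify' queue rather than the
    # send-sms queue, so codes are never stuck behind large jobs.
    deliver_sms.apply_async([str(notification.id)], queue='notify')
```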
We want to be able to toggle the numbers on the platform admin page between
including and excluding notifications sent using test keys, so that we can see
both real use of the platform and all load on it.
The new parameter defaults to True, which preserves the existing behaviour.
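At the query layer the toggle might look something like this (parameter and constant names are illustrative, not necessarily the ones in the change):

```python
from app.models import KEY_TYPE_TEST, Notification  # assumed names


def get_total_notification_count(include_from_test_key=True):
    # Defaults to True to keep the existing behaviour (count everything);
    # pass False to see only real use of the platform.
    query = Notification.query
    if not include_from_test_key:
        query = query.filter(Notification.key_type != KEY_TYPE_TEST)
    return query.count()
```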
- It seems the phone number/email address from the CSV is now passed in as personalisation.
- Assuming the renderer does the correct thing here; will need to check with @quis.
This PR fixes that and adds a test for it.
I'm confused as to why I had to change the test_validators test that checks whether the mock is called.
Why did this code pass on preview?
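To make the first bullet concrete, a hypothetical example (column names made up) of what the observed behaviour would mean:

```python
# A CSV row such as:
#   phone number,name
#   +447700900123,Jo
# appears to reach the renderer with the recipient column included in the
# personalisation dict:
personalisation = {'phone number': '+447700900123', 'name': 'Jo'}

# rather than only the non-recipient columns:
# personalisation = {'name': 'Jo'}
```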
Created a new schema that accepts request parameters for the
get_notifications v2 route.
Using that to validate now instead of the marshmallow validation.
Also changed the way formatted error messages are returned because
the previous way was cutting off our failing `enum` messages.
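A condensed sketch of that kind of schema (field names and enum values are trimmed for illustration; the real schema is more complete):

```python
# Validated with jsonschema against the query string of GET /v2/notifications,
# replacing the previous marshmallow validation.
get_notifications_request = {
    "$schema": "http://json-schema.org/draft-04/schema#",
    "description": "GET /v2/notifications query parameters",
    "type": "object",
    "properties": {
        "reference": {"type": "string"},
        "older_than": {"type": "string", "format": "uuid"},
        "template_type": {
            "type": "array",
            "items": {"enum": ["sms", "email", "letter"]},
        },
        "status": {
            "type": "array",
            "items": {"enum": ["created", "sending", "delivered", "failed"]},  # trimmed
        },
    },
    "additionalProperties": False,
}
```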
In this PR the id for the notification is passed in and used to create the notification, which causes an integrity error.
Normally when we get a SQLAlchemy error here we send the message to the retry queue, but if the notification already exists
we just ignore it.
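A sketch of that handling inside the save task (the decorator, dao helper and `save_notification` are assumptions standing in for the real code):

```python
from sqlalchemy.exc import IntegrityError, SQLAlchemyError

from app import notify_celery                                 # assumed app export
from app.dao.notifications_dao import get_notification_by_id  # assumed dao helper


@notify_celery.task(bind=True, name="save-sms")
def save_sms(self, service_id, notification_id, encrypted_notification):
    try:
        save_notification(notification_id, encrypted_notification)  # stand-in for the real persistence call
    except SQLAlchemyError as exc:
        if isinstance(exc, IntegrityError) and get_notification_by_id(notification_id):
            # The caller supplied the id and the notification was already
            # saved on a previous attempt, so the duplicate insert is ignored.
            return
        # Any other database error goes back onto the retry queue as before.
        self.retry(queue="retry", exc=exc)
```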
There are no longer any notifications whose status is "failed", as
the "failed" status has now been replaced with statuses that are
more specific about the nature of the failure.
However, we still want to be able to filter by failing
notifications (ie "/v2/notifications?status=failed").
Created a `.substitute_status()` method which takes a status
string or list of status strings and, if it finds 'failed',
replaces it with the more specific failure status types.
This way, asking for notifications with "?status=failed" is
internally treated as
"status = ['technical-failure', 'temporary-failure', 'permanent-failure']".
Some notification statuses assume that a notification has been
updated (ie, it cannot have been created in that state).
This caused a bug in our sample notification fixture when trying
to create a notification in a 'complete' status.
This commit groups the completed statuses in a list, the way other
statuses have been grouped together so that they're more portable.
Also fixed the sample_notification fixture.
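For example (a sketch; the exact membership of the list is an assumption):

```python
NOTIFICATION_STATUS_TYPES_COMPLETED = [
    'sent',
    'delivered',
    'technical-failure',
    'temporary-failure',
    'permanent-failure',
]


def status_implies_update(status):
    # A notification in any of these statuses must have been updated after
    # creation, so a fixture creating one directly also needs to fill in
    # the fields that an update would normally set.
    return status in NOTIFICATION_STATUS_TYPES_COMPLETED
```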
The "cost" value was flawed for a couple of reasons.
1. Lots of messages are free, so in those instances the "cost"
doesn't tell you anything
2. The query to get the rate was expensive and we don't have
an obvious way to get it back very efficiently for large numbers
of notifications.
So we scrapped it.
We want to log the usage of the various API clients we have so that we understand when they can be cycled.
To this end we are going to count usage in statsd.
All Notify clients have a user agent of the format NOTIFY-API-{LANGUAGE}-CLIENT/version.number.
For example, NOTIFY-API-PYTHON-CLIENT/3.0.0
We convert that into a statsd/graphite-friendly string of the format notify-api-python-client.3-0-0,
so that we can subdivide by client and client version on our dashboards.
User agents that are present but not recognised are recorded as "non-notify-user-agent".
Missing user agents are recorded as "unknown".
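A sketch of that conversion (the regex and fallback strings follow the description above; the real implementation may differ):

```python
import re

NOTIFY_CLIENT_USER_AGENT = re.compile(r'^(NOTIFY-API-[A-Z]+-CLIENT)/(\d+)\.(\d+)\.(\d+)$')


def statsd_key_for_user_agent(user_agent):
    """Turn a client User-Agent header into a statsd/graphite-friendly key.

    'NOTIFY-API-PYTHON-CLIENT/3.0.0' -> 'notify-api-python-client.3-0-0'
    """
    if user_agent is None:
        # No User-Agent header at all
        return 'unknown'

    match = NOTIFY_CLIENT_USER_AGENT.match(user_agent)
    if not match:
        # A User-Agent was sent but it is not one of our clients
        return 'non-notify-user-agent'

    client, major, minor, patch = match.groups()
    return '{}.{}-{}-{}'.format(client.lower(), major, minor, patch)
```

The resulting key is then incremented in statsd, which is what lets the dashboards break requests down by client and client version.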
Our previous test was returning a notification without a `sent_by`
attribute, which meant that cost was always 0.
Unfortunately, this meant that returning a real value for cost was
untested and (whaddya know) it broke immediately.
Old test scenario:
- billable_units=1, sent_by=None, cost=0
New scenarios:
- billable_units=0, sent_by='mmg', cost=0
- billable_units=1, sent_by='mmg', cost=1
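As a sketch, the new scenarios might be expressed as a parameterized test like this (the fixture and cost helper names are assumptions):

```python
import pytest


@pytest.mark.parametrize('billable_units, sent_by, expected_cost', [
    (0, 'mmg', 0),  # free message: no cost even though a provider is set
    (1, 'mmg', 1),  # billable message sent by a real provider: non-zero cost
])
def test_notification_cost(notification_factory, billable_units, sent_by, expected_cost):
    # notification_factory stands in for however the test now builds a
    # notification with sent_by populated (the old test left it as None).
    notification = notification_factory(billable_units=billable_units, sent_by=sent_by)

    assert get_notification_cost(notification) == expected_cost  # placeholder for the real cost lookup
```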