In the V2 API, the GET response for an individual notification
returns a 'cost' value, which we can get by multiplying the
billable units by the per-message rate of the supplier who
sent the message.
Any notifications with billable units > 0 but without a
corresponding `ProviderRates` entry will blow up the application,
so make sure you've got one.
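A minimal sketch of the calculation, assuming rates are keyed by provider; the names and types here are illustrative rather than the real `ProviderRates` model:

```python
from decimal import Decimal

def notification_cost(billable_units, provider, provider_rates):
    # cost = billable units x the per-message rate of the provider
    # that sent the message
    if billable_units == 0:
        return Decimal('0')
    rate = provider_rates.get(provider)
    if rate is None:
        # This is the case that breaks the endpoint: billable units
        # with no matching ProviderRates entry.
        raise KeyError('no rate for provider {}'.format(provider))
    return billable_units * Decimal(str(rate))

notification_cost(2, 'mmg', {'mmg': '0.018'})  # Decimal('0.036')
```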
From a support ticket:
> it's possible to add a personalisation token with trailing whitespace
> (eg. "key " rather than "key"). Can this be trimmed in the UI to guard
> against this? (one of our devs copied and pasted it from a document
> and inadvertently included the space)
> Nothing major but caused a few hours of investigations!
Rather than trim the placeholder in the template, we should treat
placeholders in API calls the same way we do with CSV files, ie we
ignore case and spacing in the name of the placeholder. So
`(( First Name))` is equivalent to `((first_name))`, and both would be
populated with a dictionary like `{'firstName': 'Chris'}`.
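A hypothetical normaliser showing the intended matching behaviour (function name and regex are illustrative only):

```python
import re

def normalise_placeholder(name):
    # Ignore case, spaces and underscores in the placeholder name,
    # the same way we already do for CSV column headings.
    return re.sub(r'[\s_]+', '', name).lower()

personalisation = {'firstName': 'Chris'}
values = {normalise_placeholder(k): v for k, v in personalisation.items()}
values[normalise_placeholder(' First Name')]  # 'Chris'
values[normalise_placeholder('first_name')]   # 'Chris'
```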
Depends on:
- [x] https://github.com/alphagov/notifications-utils/pull/77
This is so that the filtering, which we do on the admin side, is applied
before pagination, so that the pages returned all contain valid, displayable
jobs. Unfortunately this means that another config value has to be copied
to the server side, but it's not the end of the world.
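A minimal sketch of filter-before-paginate using plain lists; the set of displayable statuses is a hypothetical stand-in for the config value mentioned above:

```python
# Hypothetical stand-in for the config value listing which job
# statuses the admin app actually displays.
DISPLAYABLE_JOB_STATUSES = {'pending', 'in progress', 'finished'}

def page_of_jobs(jobs, page, page_size=50):
    # Filter first so every page is full of displayable jobs,
    # then slice the filtered list into pages.
    displayable = [j for j in jobs if j['job_status'] in DISPLAYABLE_JOB_STATUSES]
    start = (page - 1) * page_size
    return displayable[start:start + page_size]
```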
Say you have a dashboard with some jobs you sent. Normally it looks like this:
job | sent
--- | ---
file.csv | **5pm**
file.csv | 3pm
file.csv | 1pm
file.csv | 11am
However if your 5pm job was scheduled at lunchtime, then it will look
like this:
job | sent
--- | ---
file.csv | 3pm
file.csv | 1pm
file.csv | **5pm**
file.csv | 11am
This is because the jobs are sorted by when they were created, not when
they were sent. It looks wrong.
**For jobs that have already been sent**
This commit changes the sort order to be based on `processed_at`
instead.
**For upcoming jobs**
If a job doesn’t have a `processed_at` time then it’s scheduled, but
hasn’t started yet. Only in this case should we still be sorting by
`created_at`.
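A minimal sketch of the resulting sort key, assuming jobs are plain dicts:

```python
from datetime import datetime

def job_sort_key(job):
    # Sent jobs sort by when they were processed; scheduled jobs
    # (no processed_at yet) fall back to when they were created.
    return job['processed_at'] or job['created_at']

jobs = [
    {'file': 'file.csv', 'created_at': datetime(2016, 7, 1, 12), 'processed_at': datetime(2016, 7, 1, 17)},
    {'file': 'file.csv', 'created_at': datetime(2016, 7, 1, 15), 'processed_at': datetime(2016, 7, 1, 15)},
    {'file': 'file.csv', 'created_at': datetime(2016, 7, 1, 18), 'processed_at': None},  # scheduled
]
jobs.sort(key=job_sort_key, reverse=True)  # newest first
```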
We want to show developers a log of notifications they’ve sent using the
API in the admin app. In order to identify a notification it’s probably
helpful to know:
- who the notification was sent to (which we expose)
- when the notification was created (which we expose)
- which key was used to create the notification (which we expose, but
only as the `id` of the key)
Developers don’t see the `id` of the API key in the app anywhere, so
this isn’t useful information. Much more useful is the `type` and `name`
of the key. So this commit changes the schema to also return these.
This commit does some slightly hacky stuff with conftest because it
breaks a lot of other tests if the sample notification has a real API
key or an API key with a non-unique name.
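A minimal sketch of the extra fields in the serialised notification; attribute names are assumed, and the real change is to the schema itself:

```python
def serialise_notification(notification):
    # Expose the type and name of the API key, not just its id,
    # since developers never see key ids in the admin app.
    return {
        'to': notification['to'],
        'created_at': notification['created_at'],
        'key_type': notification['api_key']['key_type'],
        'key_name': notification['api_key']['name'],
    }
```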
If you schedule a job you might change your mind or circumstances might
change. So you need to be able to cancel it. This commit adds a `POST`
endpoint for individual jobs which sets their status to `cancelled`.
This also means adding a new status of `cancelled`, so there’s a
migration…
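A minimal Flask-style sketch of the endpoint, with an in-memory dict standing in for the jobs table; the route path and the guard against cancelling jobs that have already started are assumptions:

```python
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Stand-in for the jobs table, keyed by job id (illustration only).
jobs = {'1234': {'id': '1234', 'job_status': 'scheduled'}}

@app.route('/job/<job_id>/cancel', methods=['POST'])
def cancel_job(job_id):
    job = jobs.get(job_id)
    if job is None:
        abort(404)
    # Assume only jobs that haven't started sending can be cancelled.
    if job['job_status'] != 'scheduled':
        abort(400)
    job['job_status'] = 'cancelled'
    return jsonify(data=job), 200
```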
- As before this is now driven from the notifications history table
- Removed from updates and create
- Signature changes to remove unused params touch many files
- Also a potential issue around rate limiting: we used to get the number sent per day from the stats table, which was a single-row lookup, but now we have to count it. This applies to EVERY API CALL. Probably not a good thing and should be addressed urgently.
We had a test like this for sending sms, but not email. This meant that,
for example, we weren’t checking that the provider was getting passed
the HTML and plain text versions of the email.
If the api_key used to access the endpoint is of type team, the endpoints only
return notifications of that type, and will 404 if you provide the ID of a
notification of a different type. The same applies for normal keys (normal API
keys cannot see team notifications).
Also, for convenience, sample_notification is set to supply a key_type of
KEY_TYPE_NORMAL by default.
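A minimal sketch of the visibility rule, using dicts and an exception in place of the real model and the 404 response:

```python
KEY_TYPE_NORMAL = 'normal'
KEY_TYPE_TEAM = 'team'

def get_notification_for_key(notification, requesting_key_type):
    # A key only sees notifications created with the same key type;
    # anything else behaves as if the notification doesn't exist.
    if notification['key_type'] != requesting_key_type:
        raise LookupError('404: notification not found')
    return notification
```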
It seems like an oversight not to include the notification type in the notification.
When updating statistics, a query to the template table is currently required to get the type; with this change that query no longer has to happen.
If the notification ends up in the retry queue and the delivery app is restarted, the notification will get sent twice.
This is because, when the app is restarted, another message appears in the retry queue as a message available, which is
a duplicate of the message in the queue that is still in flight.
This should complete https://www.pivotaltracker.com/story/show/121844427
* single-column static data table that currently contains two types: 'normal' and 'team'
* key_type foreign-keyed from api_keys
- must be not null
- existing rows set to 'normal'
* key_type foreign-keyed from notifications
- nullable
- existing rows set to null
* api_key foreign-keyed from notifications
- nullable
- existing rows set to null
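A minimal Alembic sketch of that migration; table, column and constraint names and the UUID column type are assumptions where not stated above, and the revision boilerplate is omitted:

```python
import sqlalchemy as sa
from alembic import op
from sqlalchemy.dialects import postgresql

def upgrade():
    # Single-column static data table holding the two key types.
    op.create_table('key_types', sa.Column('name', sa.String(), primary_key=True))
    op.execute("INSERT INTO key_types (name) VALUES ('normal'), ('team')")

    # api_keys.key_type: not null, existing rows backfilled to 'normal'.
    op.add_column('api_keys', sa.Column('key_type', sa.String(), nullable=True))
    op.create_foreign_key('api_keys_key_type_fkey', 'api_keys', 'key_types', ['key_type'], ['name'])
    op.execute("UPDATE api_keys SET key_type = 'normal'")
    op.alter_column('api_keys', 'key_type', nullable=False)

    # notifications.key_type and notifications.api_key_id: nullable,
    # existing rows stay null.
    op.add_column('notifications', sa.Column('key_type', sa.String(), nullable=True))
    op.create_foreign_key('notifications_key_type_fkey', 'notifications', 'key_types', ['key_type'], ['name'])
    op.add_column('notifications', sa.Column('api_key_id', postgresql.UUID(as_uuid=True), nullable=True))
    op.create_foreign_key('notifications_api_key_id_fkey', 'notifications', 'api_keys', ['api_key_id'], ['id'])
```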
notifications, when retrieved by notification id or by service id (i.e.
all notifications for a service).
There is a new element returned at the top level of the notification JSON
called body, which is the template content merged with the personalisation.
This is consistent with the API endpoint for creating a notification, which
returns what was sent as 'body' in the JSON response.
Merging of template with personalisation is done in the
NotificationStatusSchema.
Personalisation data is encrypted before being stored in the db.
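A hypothetical illustration of the merge step (the real code does this in the NotificationStatusSchema, after decrypting the stored personalisation):

```python
import re

def merge_body(template_content, personalisation):
    # Replace ((placeholder)) tokens in the template content with the
    # corresponding personalisation values.
    def replace(match):
        return str(personalisation.get(match.group(1).strip(), ''))
    return re.sub(r'\(\((.+?)\)\)', replace, template_content)

merge_body('Hello ((name))', {'name': 'Chris'})  # 'Hello Chris'
```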