We have hit throttling limits from SES approximately once a week during
spikes of traffic from GOV.UK. The rate limiting usually only lasts a
couple of minutes, but generates enough exceptions to trigger a P1 with
no action available to the responder.
Therefore we downgrade the alert for this case to a warning and assume
traffic will level back out such that the problem resolves itself.
Note that we will still get exceptions if we go over our daily limit
(rather than our per-second sending limit), which does require immediate
action from whoever is responding.
If we were to exceed our per-second sending rate continuously for a long
period of time, there is a chance we may not be aware of it, but given
the risk of this happening is low, I think it's an acceptable risk for
the moment.
The error message for when an invitation to Notify had expired was
displayed in admin with square brackets around it, because admin does
not expect the message to be a list
(a85134ee22/app/models/user.py (L500)).
We won't let trial mode services send real broadcasts, and it's helpful
for users to see the flow of messages without needing a second person
with them.
Reflects the new name of the feature.
Note that the name of the underlying table hasn’t changed because it’s
explicitly set to `service_whitelist`. Changing this will be a more
involved process.
It's clear that we need a way to track updates to a broadcast message.
It's also clear that we'll need some kind of audit log that captures
exactly what was sent out in a message.
This commit adds a new database table, `broadcast_event`, which maps 1:1
with CAP XML sent to the CBCs. We'll create one of these just before
sending out.
The main driver for this was that cancel and update messages need to
contain a list of references to all the previous messages that they're
amending. Each reference has the format
`{sender},{identifier},{sent_timestamp}`, and the identifier needs to be
unique for each message.
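For illustration, the references might be assembled along these lines (a
minimal sketch; the `broadcast_event` attribute names, and joining with
whitespace as in CAP 1.2's `<references>` element, are assumptions):

```python
def build_references(previous_events):
    # CAP references are a whitespace-separated list of
    # {sender},{identifier},{sent_timestamp} triples (assumed, per CAP 1.2)
    return " ".join(
        f"{event.sender},{event.id},{event.sent_at.isoformat()}"
        for event in previous_events
    )
```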
Spent WAY too long trying to figure out why my user wasn't being created
in tests. The user isn't created if their email already exists in the
system, but email isn't a required field when creating!
Note: I tried just removing the check to see if the user already exists,
but 16 tests try to create duplicate users. I'm of the belief we should
just fix all those tests, but I didn't have the energy for it right now.
The task was raising a JobIncompleteError, yet it's not an error: the
task is performing its job correctly and calling the appropriate task to
restart the job.
Also used apply_async to create the task instead of send_task.
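For illustration, the difference between the two Celery call styles (the
task, import path and queue name here are hypothetical):

```python
from app.celery.tasks import process_incomplete_jobs  # hypothetical import

job_ids = ["some-job-id"]  # placeholder arguments

# apply_async calls the imported task object directly, so a mistyped task
# is caught at import time
process_incomplete_jobs.apply_async(args=[job_ids], queue="job-tasks")

# send_task only takes the registered task name as a string, so a typo is
# not caught until a worker fails to find the task:
# notify_celery.send_task("process-incomplete-jobs", args=[job_ids], queue="job-tasks")
```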
At the moment we return a count of recent jobs for contact lists, where
recent is defined as being within the service’s data retention period.
This lets us write nice bits of UI copy like ‘used 3 times in the last
7 days’. But it’s hard to write the copy for when the count is 0,
because this could be for one of two reasons:
- the contact list has never been used
- the contact list has been used, but not within the data retention
period for that channel
At the moment we can’t know which of those reasons is the case, so we
can’t write nice clear content like ‘never been used’.
This commit adds a property to contact lists which says whether they’ve
ever been used.
It also renames the existing, as-yet-unused property to make clear that
it's only counting within the data retention period (so it can still be
0 even if `has_jobs` is `True`).
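A minimal sketch of how the two properties might look on the model
(`has_jobs` comes from the message above; the name of the renamed
counting property and the retention window are assumptions):

```python
from datetime import datetime, timedelta

class ContactList(db.Model):  # assuming a SQLAlchemy model with a `jobs` relationship
    @property
    def has_jobs(self):
        # whether the contact list has *ever* been used, ignoring retention
        return bool(self.jobs)

    @property
    def recent_job_count(self):
        # hypothetical name for the renamed property: only counts jobs
        # inside the service's data retention window, so can be 0 while
        # has_jobs is True
        cutoff = datetime.utcnow() - timedelta(days=7)  # retention period assumed
        return sum(1 for job in self.jobs if job.created_at >= cutoff)
```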
Usage for all services is a platform admin report that groups letters by
postage. We want it to show `europe` and `rest-of-world` letters under a
single category of `international`, so this updates the query to do
that and to order appropriately.
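A sketch of the grouping in SQLAlchemy terms (the model and column names
are assumptions, not the repo's actual query):

```python
from sqlalchemy import case, func

# collapse the two international postage classes into a single category
postage_group = case(
    (FactBilling.postage.in_(["europe", "rest-of-world"]), "international"),
    else_=FactBilling.postage,
).label("postage")

query = (
    db.session.query(postage_group, func.count().label("letters"))
    .group_by(postage_group)
    .order_by(postage_group)
)
```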
New rows giving the prices of letters with a post_class of `europe` and
`rest-of-world` have been added to the `letter_rates` table. All rates
are currently the same for international letters.
Task takes a broadcast_message_id and makes a POST to the cbc-proxy.
For now, hardcode the URL to the notify stub. The stub requires the
template as the admin/api get it, so use the marshmallow schema to JSON
dump it.
Note: this also required us to tweak the BroadcastMessage.serialize
function so that it converts UUIDs into strings; flask's jsonify
function does that for free, but requests.post doesn't, sadly.
If the request fails (either a 4xx or a 5xx), just raise an exception
and let it bubble up for now; in the future we'll add retry logic.
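For illustration, the problem the serialize tweak solves (the stub URL
is a placeholder, not the real one):

```python
import uuid
import requests

url = "https://cbc-proxy-stub.example.com/broadcasts"  # placeholder URL
payload = {"id": uuid.uuid4()}

# flask.jsonify(payload) would serialize the UUID for free, but the json=
# argument to requests.post uses the stdlib json module, which raises
#   TypeError: Object of type UUID is not JSON serializable
# so serialize() converts UUIDs to strings before posting
payload["id"] = str(payload["id"])
requests.post(url, json=payload)
```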
This is done so that we do not use statsd on our HTTP endpoints.
We decided we do not need the metrics that this gave us. If we
change our minds, we will add Prometheus-friendly decorators
instead in the future.
Make it clear that this is for the public API, and that we shouldn't add
fields to it without considering the impact.
Also add the broadcast_messages relationship on service and template to
the excludes on the marshmallow schemas, so it's not included elsewhere.
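A minimal sketch of the exclusion (the schema class and base class are
assumptions; the repo's actual schemas may be structured differently):

```python
from marshmallow_sqlalchemy import SQLAlchemyAutoSchema

class ServiceSchema(SQLAlchemyAutoSchema):
    class Meta:
        model = Service  # the SQLAlchemy Service model, assumed in scope
        # keep the broadcast_messages relationship out of serialized output
        exclude = ("broadcast_messages",)
```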
New broadcast_message table. It contains a whole bunch of timestamps, a
couple of user IDs for who created and who approved, some
personalisation, and a template id/version. `areas` is a JSONB field; it
is expected that this will be an array, for example `["england",
"wales"]`.
Intended valid state transitions are as follows, from an initial status
of draft (sketched as a lookup table after the list):
* draft -> pending-approval, rejected
* pending-approval -> broadcasting, rejected, (draft maybe)
* broadcasting -> completed, cancelled
rejected, completed, cancelled and technical-failure are terminating
states.
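A minimal sketch of those transitions as a lookup table (the dict name
and helper function are illustrative):

```python
# the tentative pending-approval -> draft transition is kept as a
# comment, matching the "(draft maybe)" note above
BROADCAST_MESSAGE_TRANSITIONS = {
    "draft": {"pending-approval", "rejected"},
    "pending-approval": {"broadcasting", "rejected"},  # maybe "draft" too
    "broadcasting": {"completed", "cancelled"},
    # terminating states have no onward transitions
    "rejected": set(),
    "completed": set(),
    "cancelled": set(),
    "technical-failure": set(),
}

def can_transition(current_status, new_status):
    return new_status in BROADCAST_MESSAGE_TRANSITIONS.get(current_status, set())
```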
Also copy across the enum switching code from migration 0060 (adding
letter type).
Had to go through the code and change a few places where we filter on
template types; I specifically didn't worry about jobs or notifications.
Also add broadcast_data, a JSON column that might contain arbitrary
broadcast data that we'll figure out as we go. We don't know what it'll
look like yet, but it should be returned by the API.
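For illustration, the new column might be added in a migration along
these lines (a sketch assuming Alembic; using JSONB to match `areas`,
rather than plain JSON, is an assumption):

```python
import sqlalchemy as sa
from alembic import op
from sqlalchemy.dialects import postgresql

def upgrade():
    op.add_column(
        "broadcast_message",
        sa.Column("broadcast_data", postgresql.JSONB(), nullable=True),
    )

def downgrade():
    op.drop_column("broadcast_message", "broadcast_data")
```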
This prevents a race condition where we get a delivery receipt before
updating the notification to sending: the sending status would
supersede the delivered status, and the notification would then time
out as a temporary-failure after three days.
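One way to guard against this, as a minimal sketch (status names match
those above; the query shape is illustrative, not the repo's exact fix):

```python
# only move the notification to "sending" if a delivery receipt hasn't
# already marked it "delivered"; Notification is the SQLAlchemy model,
# assumed in scope
updated_count = (
    db.session.query(Notification)
    .filter(
        Notification.id == notification_id,
        Notification.status != "delivered",
    )
    .update({"status": "sending"}, synchronize_session=False)
)
db.session.commit()
```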