We have hit throttling limits from SES approximately once a week during
a spike of traffic from GOV.UK. The rate limiting usually only lasts a
couple of minutes, but it generates enough exceptions to trigger a P1,
with no action for the responder to usefully take.
Therefore we downgrade the alert for this case to a warning and assume
traffic will level back out so that the problem resolves itself.
Note: we will still get exceptions if we go over our daily limit,
rather than our per-second sending limit, and that does require
immediate action from whoever is responding.
If we were to continually exceed our per-second sending rate for a long
continuous period of time, there is a chance we would not be aware of
it, but given the risk of this happening is low I think it's an
acceptable risk for the moment.
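
For illustration, a minimal sketch of what the downgrade could look
like, assuming we send via boto3 and that SES reports both limits as a
botocore ClientError whose message distinguishes the per-second rate
from the daily quota (the error code and message strings here are
assumptions, not lifted from our code):

```python
import logging

from botocore.exceptions import ClientError

logger = logging.getLogger(__name__)


def send_with_downgraded_throttling(ses_client, **send_kwargs):
    try:
        return ses_client.send_email(**send_kwargs)
    except ClientError as e:
        error = e.response["Error"]
        hit_sending_rate = (
            error.get("Code") == "Throttling"
            and "sending rate exceeded" in error.get("Message", "").lower()
        )
        if hit_sending_rate:
            # Per-second rate limit: log a warning instead of raising, so a
            # short spike doesn't page anyone and traffic can level back out.
            logger.warning("SES rate limit hit: %s", error["Message"])
            return None
        # Going over the daily limit (or anything else) still raises - that
        # does need someone to respond.
        raise
```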
The error message shown when an invitation to Notify had expired was
displaying in admin with square brackets around it, because admin is
not expecting the message to be a list
(a85134ee22/app/models/user.py (L500))
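
A hypothetical illustration of the rendering difference (the message
text is made up, not the actual code at the line linked above):

```python
# If the API hands back a one-element list, rendering it directly keeps
# the brackets; admin expects a plain string.
message = ["Your invitation has expired"]
print(f"Error: {message}")      # Error: ['Your invitation has expired']
print(f"Error: {message[0]}")   # Error: Your invitation has expired
```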
This keeps things consistent with the live environment, and also with
how we do it for the admin app, where it is entirely up to environment
variables whether Redis is enabled or not. This changes nothing in
terms of functionality, as Redis is currently enabled for the API in
staging via our environment variables.
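
Roughly the pattern, as a sketch (the variable name and truthy
convention are assumptions; the point is just that the flag comes from
the environment rather than being hard-coded per environment):

```python
import os


class Config:
    # Whether the API uses Redis is decided by the environment, matching
    # the admin app's approach, instead of being switched in code.
    REDIS_ENABLED = os.environ.get("REDIS_ENABLED") == "1"
```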
We won't let trial mode services send real broadcasts, and it's helpful
for users to see the flow of messages without having to have a second
person with them.
dnspython had been changed from 1.16.0 to 2.0.0 in a previous commit,
but this was not compatible with eventlet 0.25.2. This bumps eventlet to
a later version, which has the effect of downgrading dnspython again.
There are a few indexes that we still need to drop from
notification_history on prod. Dropping indexes on prod can take too
long to run in a migration, so we need to run them manually.
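
A sketch of how the manual drops might be run, assuming Postgres and
SQLAlchemy; the index names and connection string are placeholders.
DROP INDEX CONCURRENTLY avoids locking the table, but it can't run
inside a transaction block, which is also why it doesn't sit well in a
migration:

```python
from sqlalchemy import create_engine, text

# Placeholder names - the real ones are the indexes left on
# notification_history.
INDEXES_TO_DROP = [
    "ix_notification_history_example_one",
    "ix_notification_history_example_two",
]

engine = create_engine("postgresql:///notification_api")  # assumed DSN

# CONCURRENTLY can't run inside a transaction, so use autocommit.
with engine.connect().execution_options(isolation_level="AUTOCOMMIT") as conn:
    for index_name in INDEXES_TO_DROP:
        conn.execute(text(f"DROP INDEX CONCURRENTLY IF EXISTS {index_name}"))
```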
Reflects the new name of the feature.
Note that the name of the underlying table hasn’t changed because it’s
explicitly set to `service_whitelist`. Changing this will be a more
involved process.
It's clear that we need a way to track updates to a broadcast message.
It's also clear that we'll need some kind of audit log that captures
exactly what was sent out in a message.
This commit adds a new database table, `broadcast_event`, which maps 1:1
with CAP XML sent to the CBCs. We'll create one of these just before
sending out.
The main driver for this was that cancel and update messages need to
contain a list of references to all previous messages that they're
amending. Each reference is of the format
`{sender},{identifier},{sent_timestamp}`, and the identifier itself
needs to be unique for each message.
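
Roughly how the references could be assembled from the earlier
`broadcast_event` rows (the attribute names here are assumptions about
the new table, not its actual columns):

```python
def build_references(previous_events):
    # CAP references are a whitespace-separated list; each entry identifies
    # one earlier message as {sender},{identifier},{sent_timestamp}.
    return " ".join(
        f"{event.sender},{event.identifier},{event.sent_at.isoformat()}"
        for event in previous_events
    )
```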
Spent WAY too long trying to figure out why my user wasn't being
created in tests. The user isn't created if their email already exists
in the system, but email isn't a required field when creating!
Note: I tried just removing the check for whether the user already
exists, but 16 tests then try to create duplicate users. I think we
should just fix all those tests, but I didn't have the energy for it
right now.
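
For context, a sketch of the sort of helper that bit me (the imports,
names and default email are assumptions, not the real fixture):

```python
from app import db           # assumed: the app's SQLAlchemy session
from app.models import User  # assumed: the app's User model


def create_test_user(email="test@example.com", **kwargs):
    # The gotcha: if a user with this email already exists, the helper
    # returns it instead of creating a new one. Because email isn't a
    # required argument, two tests relying on the default address silently
    # share a user and the second "creation" never happens.
    existing = User.query.filter_by(email_address=email).first()
    if existing:
        return existing
    user = User(email_address=email, **kwargs)
    db.session.add(user)
    db.session.commit()
    return user
```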