While the API still has a column and field for personalisation, I think
it makes sense for it to consider the personalisation when serialising
the broadcast, so we should have a test for this.
This way the API still works as a coherent whole, and the admin app just
happens to be a client of the API which doesn’t implement the
personalisation feature.
If we want to remove personalisation from the API at another time we
should do it wholesale.
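A minimal sketch of such a test, assuming a `serialize` method and a
fixture along these lines (all names here are illustrative, not the
real ones):

    def test_serialize_uses_personalisation(sample_broadcast_message):
        # hypothetical fixture: a broadcast message whose template
        # contains a ((name)) placeholder
        sample_broadcast_message.personalisation = {"name": "Alice"}

        serialized = sample_broadcast_message.serialize()

        # the serialised content should have the placeholder filled in
        assert "Alice" in serialized["content"]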
At the moment we display the count of scheduled jobs on the dashboard
by sending all the scheduled jobs to the admin app and letting it work
out the stats.
This is inefficient and, because the get jobs response has a page size
of 50, becomes incorrect if a service schedules more than 50 jobs.
This commit adds a separate endpoint which gives the admin app the stats
it needs directly and correctly.
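A sketch of the shape of the new endpoint, aggregating in the database
instead of paginating rows out to the client (the route, model, and
column names here are assumptions):

    from flask import jsonify
    from sqlalchemy import func

    @job_blueprint.route("/service/<uuid:service_id>/job/scheduled-job-stats")
    def get_scheduled_job_stats(service_id):
        # count in SQL so the result is correct however many jobs there
        # are, rather than counting a 50-row page in the admin app
        count = db.session.query(func.count(Job.id)).filter(
            Job.service_id == service_id,
            Job.job_status == "scheduled",
        ).scalar()
        return jsonify(data={"count_of_scheduled_jobs": count})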
Since we’ve doubled the number of rows in a job, jobs can take twice as
long to insert all the notifications. We don’t check for missing rows
until we’re pretty confident that the original tasks have finished
processing. This means we need to double the time we wait to still be
as sure.
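For illustration, if the follow-up check is scheduled with a Celery
countdown, the change is of this shape (the task and constant names are
made up):

    # we used to wait e.g. 10 minutes before checking for missing rows;
    # with twice as many rows per job, we now wait twice as long
    CHECK_JOB_COMPLETE_DELAY = 2 * 60 * 10  # seconds

    check_job_is_complete.apply_async(
        (str(job.id),), countdown=CHECK_JOB_COMPLETE_DELAY
    )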
we don't name letters based on the day we send them, but rather the day
we create them. If we process a letter a second time for whatever
reason, even if it's a couple of days later, it'll still go in a folder
based on the created_at timestamp. There's still a slight wrinkle:
if the timestamp is after 5:30pm, the folder will be for the day after.
But that's still derived from the creation time, so I think created_at
still makes the most sense.
Remove the term `sending_date` to try and make this relationship more
apparent.
`_now`? why would we ever use a different _now? instead say created_at,
because that's what it'll always be set to, even if we're replaying old
letters. We always derive the folder name from the letter's created_at,
or we might not know where to look to find it.
`dont_use_sending_date` doesn't really tell us what happens if we don't
use it - the answer is that we return an empty string and ignore the
folder entirely. So let's call it that.
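Putting the rule together, a sketch of the naming logic (the function
and constant names are illustrative):

    from datetime import timedelta

    PRINT_DEADLINE = (17, 30)  # letters after 5:30pm print the next day

    def letter_folder_name(created_at, ignore_folder=False):
        # always derived from created_at, never from the current time,
        # so replaying an old letter finds the folder it was put in
        if ignore_folder:
            return ""
        folder_date = created_at.date()
        if (created_at.hour, created_at.minute) >= PRINT_DEADLINE:
            folder_date += timedelta(days=1)
        return folder_date.isoformat() + "/"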
Also, remove the use of freezegun in the tests, to prove that we don't
use the current time in any calculations. Also add an assert to a mock
in the get_pdf_for_templated_letter test: we were mocking but not
asserting before, so the tests didn't fail when the function signature
changed.
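The asserting version looks roughly like this (the patched path and
argument names are assumptions):

    mock_filename = mocker.patch(
        "app.letters.utils.get_letter_pdf_filename",
        return_value="2021-01-01/NOTIFY.REF.D.2.C.pdf",
    )

    get_pdf_for_templated_letter(sample_notification)

    # previously we only patched, so a signature change went unnoticed;
    # now the test fails if the call doesn't match
    mock_filename.assert_called_once_with(
        reference=sample_notification.reference,
        created_at=sample_notification.created_at,
        ignore_folder=False,
    )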
We were determining the filename for precompiled letters before we had
checked whether the letters were international. This meant that a letter
could have a filename indicating it was 2nd class, but once we had
sanitised the letter and checked the address we would set the
notification to international.
This stopped these letters from being picked up to be sent to the DVLA,
since the filename and postage of the letter did not match.
We now regenerate the filename after the letter has been sanitised (and when
we know the postage) and use the updated filename when moving the letter
into the live PDF letters bucket.
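In outline (the function names here are illustrative, not the exact
ones):

    # before: the filename was generated here, using the postage the
    # notification started with
    sanitise_response = sanitise_letter(notification)
    notification.postage = sanitise_response.postage

    # after: regenerate the filename once the postage is final, and use
    # it when copying into the live PDF letters bucket
    filename = get_letter_pdf_filename(
        reference=notification.reference,
        created_at=notification.created_at,
        postage=notification.postage,
    )
    move_sanitised_letter_to_live_pdf_bucket(filename)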
It is not of the form
[[lat, long], [lat, long]], as that would only hold a single polygon. It
instead needs to handle multiple polygons, so it is of the form
[[[lat, long], [lat, long]]].
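For example (coordinates are illustrative):

    # one polygon is a list of [lat, long] points
    one_polygon = [[51.50, -0.12], [51.51, -0.11], [51.49, -0.10]]

    # the field holds a list of such polygons, hence the extra nesting
    simple_polygons = [
        [[51.50, -0.12], [51.51, -0.11], [51.49, -0.10]],
        [[52.20, 0.11], [52.21, 0.12], [52.19, 0.13]],
    ]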
Our code was assuming that any notifications with `international` set to
`True` were text messages. It was then trying to look up delivery
information for a notification which wasn’t sent to a phone number,
causing an exception.
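The fix is a guard of roughly this shape (the helper name is an
assumption):

    # letters can now be international too, so check the notification
    # type before looking up phone-number delivery information
    if notification.international and notification.notification_type == "sms":
        update_international_sms_delivery_status(notification)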
`international` for letters in `ft_billing` was always False. Now that
letters can be international, this changes the column value to the value
of `international` for the notification.
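i.e. when building the `ft_billing` row, roughly (model name is
indicative):

    ft_billing_row = FactBilling(
        notification_type=notification.notification_type,
        # previously hard-coded to False for letters; letters can now
        # be international, so carry the notification's value through
        international=notification.international,
        # other columns omitted
    )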
We want to display flash messages in admin when invites have been
cancelled. This message needs to display the user's email address, so
this commit adds endpoints to GET a single invited service user and org
user so that we can look up the email address of a cancelled user.
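One of the two endpoints might look like this (the blueprint, dao
function, and schema names are assumptions following the existing
patterns):

    from flask import jsonify

    @invite_blueprint.route("/<uuid:invited_user_id>", methods=["GET"])
    def get_invited_user(invited_user_id):
        # fetch a single invited user so admin can show their email
        # address in the "invite cancelled" flash message
        invited_user = get_invited_user_by_id(invited_user_id)
        return jsonify(data=invited_user_schema.dump(invited_user)), 200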
We were asking for the latest version of a letter template rather than
the version that the notification was sent with. This meant that if you
previewed a letter and had made edits to the template since it was sent,
you would be shown an incorrect preview.
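The lookup changes from "latest" to "the version the notification was
sent with", along these lines (dao function names are illustrative):

    # before: always previewed against the current template
    # template = dao_get_template_by_id(notification.template_id)

    # after: preview against the version actually used, so edits made
    # since sending don't change the preview
    template = dao_get_template_by_id_and_version(
        notification.template_id, notification.template_version
    )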
use the new endpoint from cbc proxy. create a new task that just
serializes the event and sends it across rather than sending a template
and the broadcast message.
some changes to serialize to make it JSON-friendly etc. it also expects
sent_at and transmitted_finishes_at to always be set (we set them in the
code but don't enforce it in the DB right now), as they're required by
the utils template. not sure whether we'll update the DB constraints to
be more strict or the utils template to be more permissive just yet;
wait until we find out more about the requirements of the CBCs we
integrate with.
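A rough shape for the new task (the task and client names are
assumptions, and the payload keys are only indicative):

    @notify_celery.task(name="send-broadcast-event")
    def send_broadcast_event(broadcast_event_id):
        event = dao_get_broadcast_event_by_id(broadcast_event_id)

        # JSON-friendly serialisation: UUIDs become strings, datetimes
        # become ISO 8601; sent_at and transmitted_finishes_at must be
        # set, since the utils template requires them
        payload = {
            "identifier": str(event.id),
            "sent": event.sent_at.isoformat(),
            "expires": event.transmitted_finishes_at.isoformat(),
        }
        cbc_proxy_client.send_broadcast_event(payload)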
We have hit throttling limits from SES approximately once a week during
a spike of traffic from GOV.UK. The rate limiting usually only lasts a
couple of minutes, but generates enough exceptions to cause a P1 with no
action for the responder to take.
Therefore we downgrade the exception for this case to a warning and
assume traffic will level back out, such that the problem resolves
itself.
Note, we will still get exceptions if we go over our daily limit, rather
than our per minute sending limit, which does require immediate action
by someone responding.
If we were to continually go over our per-second sending rate for a long
continuous period of time, then there is a chance we may not be aware,
but given that the risk of this happening is low I think it's an
acceptable risk for the moment.
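A sketch of the downgrade (the handling shown is indicative of the
pattern, not the exact code):

    from botocore.exceptions import ClientError
    from flask import current_app

    try:
        send_email_via_ses(notification)
    except ClientError as e:
        if e.response["Error"]["Code"] == "Throttling":
            # going over the per-second rate self-corrects once the
            # spike passes, so warn and retry instead of raising an
            # exception that pages someone with nothing to act on
            current_app.logger.warning(
                "SES throttled, retrying notification %s", notification.id
            )
            self.retry(queue="retry")
        else:
            raise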
The error message for when an invitation to Notify had expired was
displayed in admin with square brackets round it, because admin does not
expect the message to be a list
(a85134ee22/app/models/user.py (L500))
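i.e. raise the message as a string rather than a one-element list,
roughly (the exception signature and message text are indicative):

    # before: admin rendered the list's repr, brackets and all
    # raise InvalidRequest(["Your invitation has expired"], status_code=400)

    # after: a plain string, displayed without square brackets
    raise InvalidRequest("Your invitation has expired", status_code=400)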
we won't let trial mode services send real broadcasts, and it's helpful
for users to see the flow of messages without needing a second person
with them.