https://github.com/alphagov/notifications-utils/pull/742
We had been seeing phone numbers containing this no-break space make it
through in jobs, which then caused errors: we could not process the job
because the number was treated as invalid. This change strips that
special character and should solve the problem.
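As a rough illustration of the fix (a sketch, not the exact notifications-utils code), it comes down to treating U+00A0 like any other junk whitespace when normalising a number:

```
def normalise_phone_number(number):
    # strip characters we've seen in uploaded spreadsheets, including
    # U+00A0 (no-break space), before validating the number
    for character in (' ', '\u00a0', '\t', '(', ')', '-'):
        number = number.replace(character, '')
    return number


assert normalise_phone_number('07700\u00a0900123') == '07700900123'
```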
We're using statsd to monitor how long provider requests are taking.
However, there's a lot of busy work that happens inside our statsd
metrics timing window: things like JSON dumping and loading, building
headers, exception handling, etc.
For firetext/mmg, the response object from requests has an `elapsed`
property [1], which captures the time from sending the raw data to parsing
the response headers. For ses it's a bit trickier, but boto3 exposes a few
event hooks [2]. They're hard to find without stepping through the
code, but the interesting ones are `before-call`, `after-call`,
`after-call-error`, `request-created`, and `response-received`. The
`before-call` and `after-call` hooks involve some marshalling, built-in
retrying, etc., while `request-created` and `response-received` are much
lower level. They might fire more than once per ses request, if boto3
itself retries the request on 5xx, 429 and low-level socket errors [3].
Add these as new `raw-request-time` metrics rather than overwriting the
existing ones, to avoid changing the meaning of an existing metric and to
let us compare the two and see if there's a noticeable difference at all.
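Very roughly, and with illustrative metric names plus an assumed `statsd_client`, `FIRETEXT_URL` and `payload` (not the real ones), the two approaches look something like this:

```
import time

import boto3
import requests

# firetext/mmg: requests has already timed the raw round trip for us
response = requests.post(FIRETEXT_URL, data=payload)
statsd_client.timing(
    'clients.firetext.raw-request-time',
    response.elapsed.total_seconds(),
)

# ses: hook into boto3's event system; handlers take arbitrary keyword
# arguments, so **kwargs keeps this resilient to signature differences
ses_client = boto3.client('ses')
_request_start = None


def _on_request_created(**kwargs):
    # fires just before the HTTP request goes out (again on each retry)
    global _request_start
    _request_start = time.monotonic()


def _on_response_received(**kwargs):
    # fires when the HTTP response (or a low-level error) comes back
    if _request_start is not None:
        statsd_client.timing(
            'clients.ses.raw-request-time',
            time.monotonic() - _request_start,
        )


ses_client.meta.events.register('request-created', _on_request_created)
ses_client.meta.events.register('response-received', _on_response_received)
```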
[1] https://requests.readthedocs.io/en/master/api/#requests.Response.elapsed
[2] https://boto3.amazonaws.com/v1/documentation/api/latest/guide/events.html
[3] https://boto3.amazonaws.com/v1/documentation/api/latest/guide/retries.html#legacy-retry-mode
This copies what we do to a user's email address when archiving the user:
prefixing it with `_archived_{date}`. We were already prefixing the
service name and email_reply_to with `_archived`, but that didn't allow
a service with the same name to be archived more than once.
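Something along these lines (the helper name and exact date format are illustrative, not the real implementation):

```
from datetime import datetime


def get_archived_name(original_name):
    # prefix with the archive date so the same service can be archived
    # more than once without clashing with the previous archived value
    return f"_archived_{datetime.utcnow().strftime('%Y-%m-%d')}_{original_name}"
```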
The assumption that a commit is issued in between migration files was false. This change forces the db connection to commit, so the alter table commands complete and the alembic version is updated.
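A minimal sketch of the workaround, assuming the DDL in question lives earlier in the same migration:

```
from alembic import op


def upgrade():
    # ... the alter table statements for this migration go here ...

    # force the connection to commit so the statements above are persisted
    # before the next migration file runs
    op.execute('COMMIT')
```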
The `notifications`, `notification_history`, `templates` and `templates_history`
tables all had a check constraint on the postage column which specified
that the postage had to be `first` or `second` if the notification or
template was a letter. We now have two more options for postage -
`europe` and `rest-of-world`.
It's not possible to alter a check constraint, so the constraints have
to be dropped then recreated. We are not recreating the constraint on
the `notification_history` table since values here are always copied
from the `notifications` table.
The constraints get added as `NOT VALID` at first - this stage will lock
the tables, so updating the `notifications` table and the `templates`/
`templates_history` tables is done in separate migrations so that we don't
lock all the tables at the same time. In a third migration we then run
`VALIDATE CONSTRAINT` for all the tables - this locks a row at a time,
not the whole table.
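Roughly what the first migrations do, sketched with an illustrative constraint name and CHECK expression (the real ones will differ):

```
from alembic import op


def upgrade():
    # migration 1: templates and templates_history (a similar migration 2
    # covers the notifications table)
    op.execute('ALTER TABLE templates DROP CONSTRAINT chk_templates_postage')
    op.execute(
        "ALTER TABLE templates ADD CONSTRAINT chk_templates_postage "
        "CHECK (template_type != 'letter' "
        "OR postage IN ('first', 'second', 'europe', 'rest-of-world')) "
        "NOT VALID"
    )
```

The third migration is then just `op.execute('ALTER TABLE templates VALIDATE CONSTRAINT chk_templates_postage')` (and the equivalent for the other tables), which checks the existing rows without holding the stronger lock.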
When running the purge command I found about 4 users who could not be
deleted because their user id was still referenced in the services table:
they had created the service but were no longer a member of it.
I have fixed this by checking whether a user created a service they are
no longer a member of, and if so also deleting that service for them.
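The extra check is roughly this (model and helper names are assumptions, not the real code):

```
services_created_but_left = Service.query.filter(
    Service.created_by_id == user.id,
    ~Service.users.contains(user),
).all()

for service in services_created_but_left:
    # hypothetical helper - removes the service and its dependent rows
    delete_service_and_all_associated_rows(service)

delete_user(user)
```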
Note, I've followed the previous convention of not having tests for this
function. I've run it locally and executed the code path, so there should
be no major flaws in the code. There is a small chance I wasn't able to
exactly replicate the state that existed in preview on my local machine,
but hopefully it was close enough to be accurate.
This PR tries to parse the date; if that throws an error we return now as the DateReceived. This at least allows the message to be persisted. Typically the DateReceived, provider_date, and created_at dates in the inbound_sms table are within a second of each other.
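A minimal sketch of the fallback (the function name and expected date format here are illustrative):

```
from datetime import datetime


def parse_inbound_sms_date(date_received):
    # if the provider sends a date we can't parse, fall back to now so
    # the inbound message can still be persisted
    try:
        return datetime.strptime(date_received, '%Y-%m-%d %H:%M:%S')
    except (TypeError, ValueError):
        return datetime.utcnow()
```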
For use when we don't want the API to serve any traffic, but PaaS is still
running. It's a simple nginx_buildpack app that is pushed separately,
plus two makefile commands that toggle the routes (and also
stop/start the nginx app).
For all endpoints and methods it returns a 503 with the following response body:
```
{
  "status_code": 503,
  "errors": [
    {
      "error": "PlannedMaintenanceError",
      "message": "We’re performing some essential updates. Notify will be back shortly. Please check https://status.notifications.service.gov.uk/ for more details"
    }
  ]
}
```
NB: if you hit `/` it'll still return 404, as this is defined in the
paas-proxy instance on AWS.
If you’ve sent a bunch of jobs from the same contact list then a handy
way to differentiate between them will be the date sent, but also the
template name (in effect, the message you sent).
This commit extends the job response to include template name, using the
same pattern as for template type.
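The shape of the change is roughly this (field names mirror the existing `template_type` pattern; the exact serialisation code differs):

```
def job_to_dict(job):
    return {
        'id': str(job.id),
        'original_file_name': job.original_file_name,
        'template_type': job.template.template_type,
        'template_name': job.template.name,  # the new field
        # ... the rest of the existing job fields ...
    }
```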
Because we’ll be grouping jobs under their parent contact lists it will
be useful to have a way of showing how many times a contact list has
been used. This will give the right information scent to indicate that
clicking into a contact list is where you go to see its jobs. This means
that the API needs to return a count of jobs for each contact list.
Where this code has ended up feels very non-idiomatic for our API, so
suggestions about how to better architect it are welcome…
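For reference, the counting could look something like this (model, column and helper names are assumptions) - one grouped query rather than a query per contact list:

```
from sqlalchemy import func

job_counts = dict(
    db.session.query(
        Job.contact_list_id,
        func.count(Job.id),
    )
    .filter(Job.contact_list_id.isnot(None))
    .group_by(Job.contact_list_id)
    .all()
)

contact_list_dicts = [
    dict(contact_list.serialize(), job_count=job_counts.get(contact_list.id, 0))
    for contact_list in contact_lists
]
```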
Rather than showing all jobs that have been ‘copied’ from a contact list
I think it makes more sense to group them under the contact list. This
way it’s easier to see what messages have been sent to a given group of
contacts over time.
Part of this work means the API needs to return only jobs that have been
created from a given contact list, when asked.
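A sketch of the filtering (the query parameter and column names are assumptions):

```
from flask import request

contact_list_id = request.args.get('contact_list_id')

query = Job.query.filter(Job.service_id == service_id)
if contact_list_id:
    # only return jobs that were created from this contact list
    query = query.filter(Job.contact_list_id == contact_list_id)

jobs = query.order_by(Job.created_at.desc()).all()
```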
We want to add another argument here, and doing so would make the line
too long with all the arguments on one line.
This also uses the `*` operator to enforce keyword-only arguments.
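For context, this is just the standard keyword-only pattern (the argument names here are illustrative):

```
# before: everything positional, all squeezed onto one line
def get_notifications(service_id, page, page_size, template_types): ...


# after: the bare * makes everything following it keyword-only, so call
# sites stay explicit and readable as more arguments are added
def get_notifications(
    service_id,
    *,
    page=1,
    page_size=50,
    template_types=None,
    older_than=None,  # the kind of new argument we want to add
): ...
```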
Because we won’t be showing uploaded letters individually on the uploads
page any more we need a way of listing them. This should be by printing
day, to match how we’re grouping them on the uploads page.
The response matches a normal `get_notifications` response so we can
reuse the same code in the admin app.
Some teams have started uploading quite a lot of letters (in the
hundreds per week). They’re also uploading CSVs of emails. This means
the uploads page ends up quite jumbled.
This is because:
- there’s just a lot of items to scan through
- conceptually it’s a bit odd to have batches of things displayed
alongside individual things on the same page
So instead this commit starts grouping together uploaded letters. It
groups them by the date on which we ‘start’ printing them, or in other
words the time at which they can no longer be cancelled.
This feels like a natural grouping, and it matches what we know about
people’s mental models of ‘batches’ and ‘runs’ when talking about
printing.
The code for this is a bit gnarly because:
- timezones
- the print cutoff doesn’t align with the end of a day
- we have to do this in SQL because it wouldn’t be efficient to query
thousands of letters and then do the timezone calculations on them in
Python
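Sketched in SQLAlchemy, with the 17:30 print cutoff and the model/column names as assumptions (the real query will differ), the grouping looks something like this:

```
from sqlalchemy import Date, cast, func, text

# convert the naive UTC created_at to local (Europe/London) time, then shift
# forward past the cutoff so a letter created after 17:30 counts towards the
# next day's print run; cast to a date and group - all inside postgres
print_day = cast(
    func.timezone('Europe/London', func.timezone('UTC', Notification.created_at))
    + text("interval '6 hours 30 minutes'"),
    Date,
)

letters_per_print_day = (
    db.session.query(print_day.label('print_day'), func.count(Notification.id))
    .filter(Notification.notification_type == 'letter')
    .group_by(print_day)
    .order_by(print_day.desc())
    .all()
)
```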