The nightly job to delete email notifications was failing because it was
timing out (`psycopg2.errors.QueryCanceled: canceling statement due to statement timeout`).
This adds a limit to the query that inserts or updates notification
history, so that it only touches a maximum of 10,000 rows at
a time.
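The shape of the change is roughly the following - a minimal sketch assuming SQLAlchemy, with illustrative table and column names rather than the real Notify query, and showing only the insert side and the batching loop:

```python
# Sketch of batching the history copy: process at most 10,000 rows per
# statement so no single query hits the statement timeout.
from sqlalchemy import text

BATCH_SIZE = 10_000


def insert_update_notification_history(notification_type, older_than, session):
    while True:
        result = session.execute(
            text(
                """
                INSERT INTO notification_history
                SELECT * FROM notifications
                WHERE notification_type = :notification_type
                  AND created_at < :older_than
                  AND id NOT IN (SELECT id FROM notification_history)
                LIMIT :batch_size
                """
            ),
            {
                "notification_type": notification_type,
                "older_than": older_than,
                "batch_size": BATCH_SIZE,
            },
        )
        session.commit()
        # Stop once a batch comes back smaller than the limit - there is
        # nothing left to copy.
        if result.rowcount < BATCH_SIZE:
            break
```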
Deploys take up to five minutes, during which notify-paas-autoscaler
can't scale the app. We saw 502s due to a large volume of traffic
coming in during that time, and we couldn't react because we were
deploying.
Scaling up to 25 instances before the deploy means the autoscaler won't
be able to downscale until after the deploy has finished, so we have
enough capacity to absorb traffic during that window.
* remove the gosu user - this means we can upgrade the base image to
something more modern and not have to faff around with gpg
* remove unnecessary commands - some things need to exist in the
Makefile to keep Jenkins happy
* remove the concept of building separately - pip install from
requirements.txt in the Dockerfile
We have a datetime dimension table that we no longer use and which
I believe can be dropped.
The first step is to remove the model and release the change. The next
step will be to add a database migration to drop the actual
table. I believe we need to do it in this order and that it can't
be done as a single PR.
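For context, the second step would be a migration along these lines - a sketch only, assuming we use Alembic for migrations; the revision ids and table name are placeholders, not the real ones:

```python
# Placeholder sketch of the follow-up migration that drops the unused table.
from alembic import op

revision = "XXXX"       # placeholder revision id
down_revision = "YYYY"  # placeholder parent revision


def upgrade():
    op.drop_table("datetime_dimension")


def downgrade():
    # Rebuilding the dimension table on downgrade is out of scope for
    # this sketch.
    pass
```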
All our endpoints should check that the params are valid - this is an easy way to do so and is standard for our endpoints.
I reverted the query to just filter by job id.
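For reference, the kind of param check meant here looks roughly like this - a sketch using the jsonschema library; the route, schema and field names are hypothetical, not the real Notify ones:

```python
# Illustrative param validation for an endpoint's query string.
from flask import Flask, jsonify, request
from jsonschema import ValidationError, validate

app = Flask(__name__)

query_params_schema = {
    "type": "object",
    "properties": {
        "page": {"type": "string", "pattern": "^[0-9]+$"},
    },
    "additionalProperties": False,
}


@app.route("/service/<service_id>/job/<job_id>/notifications")
def get_notifications_for_job(service_id, job_id):
    try:
        validate(request.args.to_dict(), query_params_schema)
    except ValidationError as error:
        # Reject unknown or malformed params with a clear 400.
        return jsonify(result="error", message=error.message), 400
    # ... query notifications filtered by job_id as before ...
    return jsonify(notifications=[])
```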
When a service asks for branding I often go and:
- set the branding on the organisation as well
- set the branding on all the organisation’s services
The latter can be quite time-consuming, but it does save effort since
existing services from the same organisation won’t have to request
branding. It also improves the consistency of the communications that
users are receiving.
This commit automates that process by applying the branding update to
any services belonging to the organisation, if those services don’t
already have their own custom branding set up.
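A sketch of that automation - function and attribute names are illustrative, not the real Notify models or DAO functions:

```python
# Illustrative: apply the organisation's new branding to each of its
# services, skipping any service that has already chosen its own branding.
def apply_branding_to_organisation_services(organisation, email_branding_id, session):
    for service in organisation.services:
        if service.email_branding_id is None:
            service.email_branding_id = email_branding_id
    session.commit()
```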
Now we consistently use the created_at date, so we can always get the right file location and name.
The previous updates to this code were trying to solve the problem of a PDF being created at 17:29 but not being ready to upload until 17:31, after the antivirus and validation checks.
But in those cases we would have trouble finding the file.
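The principle is that the upload and the later lookup both derive the key from created_at, so the two sides always agree - roughly like this sketch, with illustrative names rather than the real helper:

```python
# Illustrative: the same function is used when uploading the PDF and when
# fetching it later, so the location never depends on "now".
def letter_pdf_key(notification):
    return "{folder}/{reference}.pdf".format(
        folder=notification.created_at.strftime("%Y-%m-%d"),
        reference=notification.reference,
    )
```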
When we cancel a job, we need to check if all notifications are
already in the database. Until now, we were querying for all
notification objects in the database and counting them in the
admin app, which runs into pagination problems for large jobs
and could time out for very large jobs.
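A sketch of the kind of database-side count that avoids fetching every row - the model below is a minimal stand-in, not the real Notification model, and the real change may differ:

```python
from sqlalchemy import Column, func
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Notification(Base):
    # Minimal stand-in for the real model, just to make the query concrete.
    __tablename__ = "notifications"
    id = Column(UUID(as_uuid=True), primary_key=True)
    job_id = Column(UUID(as_uuid=True), index=True)


def count_notifications_for_job(session, job_id):
    # One COUNT in the database replaces paginating every row into the
    # admin app.
    return (
        session.query(func.count(Notification.id))
        .filter(Notification.job_id == job_id)
        .scalar()
    )
```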
Code that is within a `with pytest.raises(...)` context manager but
comes after the line that raises the exception doesn't get evaluated.
We had some assertions that were never being tested because of this, so
this ensures that they will always get run and fixes them where
necessary.
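An illustrative before/after of the pattern being fixed (not a real Notify test):

```python
import pytest


def divide(a, b):
    return a / b


def test_divide_by_zero_broken():
    with pytest.raises(ZeroDivisionError) as excinfo:
        divide(1, 0)
        # Never executed: the exception above exits the `with` block first.
        assert "unreachable" in str(excinfo.value)


def test_divide_by_zero_fixed():
    with pytest.raises(ZeroDivisionError) as excinfo:
        divide(1, 0)
    # Assertions moved outside the context manager so they always run.
    assert "division by zero" in str(excinfo.value)
```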
Also create a PDFNotReadyError class, separate from BadRequestError, to
signal to the end user that this is something they should handle
separately from all the other errors.
It felt very awkward that the body of a pdf might be empty, might have
things in it, and that whether it is empty or not can change even when
the status is the same (a created template notification might have a
pdf, but might not - we don't know).
So move it to its own endpoint, where we can hand-craft some 400 errors
that appropriately explain what's going on.
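A rough sketch of the shape this takes - the class definitions and the base-class details are assumptions for illustration, not copied from the real error module:

```python
# Illustrative: a dedicated error type lets API consumers distinguish
# "not ready yet, retry later" from genuinely bad requests.
class BadRequestError(Exception):
    status_code = 400
    message = "An error occurred"


class PDFNotReadyError(BadRequestError):
    message = "PDF not available yet, try again later"


# In the new endpoint, raise PDFNotReadyError when the notification exists
# but its PDF hasn't passed virus scanning/validation yet, so the 400
# response can explain exactly what's going on.
```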