By adding `exec` to the application's entrypoint bash script, we can trap an EXIT from the script and run our custom `on_exit` function, which checks whether the application process is busy before terminating, waiting up to 10 seconds. We don't need to trap `TERM`, so that trap has been removed again.
Written by:
@servingupaces
@tlwr
Since Pytest 5, `ExceptionInfo` objects (returned by `pytest.raises`) have
the same `str` representation as their `repr`. This means that `str(e)`
now needs to be changed to `str(e.value)`.
https://github.com/pytest-dev/pytest/issues/5412
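For example (an illustrative test, not from our codebase — `divide` is a made-up function):

```python
import pytest


def divide(a, b):
    return a / b


def test_divide_by_zero_message():
    with pytest.raises(ZeroDivisionError) as e:
        divide(1, 0)
    # Under pytest 5, str(e) gives the ExceptionInfo's repr, so assert
    # against str(e.value) to get the exception message itself.
    assert "division by zero" in str(e.value)
```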
These fixtures were both calling other fixtures as functions and being
called as functions in the tests. Rewriting the tests to make them
Pytest 4-compatible means we no longer use
`sample_template_without_letter_permission`, so it has been deleted.
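As a sketch of the pattern (the fixture and test names here are illustrative, not our real ones): instead of calling fixtures like plain functions, both fixtures and tests declare the fixtures they need as parameters.

```python
import pytest


@pytest.fixture
def sample_service():
    # Illustrative stand-in; the real fixtures build database objects.
    return {"name": "sample service"}


@pytest.fixture
def sample_template(sample_service):
    # Pytest 4 compatible: depend on another fixture by declaring it as
    # a parameter, rather than calling sample_service() directly.
    return {"service": sample_service, "content": "hello"}


def test_template_belongs_to_service(sample_template):
    # Tests likewise request the fixture as an argument.
    assert sample_template["service"]["name"] == "sample service"
```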
When Cloud Foundry applications are rescheduled from one cell to
another, or are stopped, they are sent a SIGTERM signal, followed 10
seconds later by a SIGKILL signal.
Currently the scripts trap the POSIX-defined EXIT handler rather than
the signal directly.
In order for the signal to be properly propagated to Celery and the
Celery workers, the script should call the `on_exit` function when it
receives a TERM signal.
Signed-off-by: Toby Lorne <toby.lornewelch-richards@digital.cabinet-office.gov.uk>
Co-authored-by: Becca <rebecca.law@digital.cabinet.office.gov.uk>
Co-authored-by: Toby <toby.lornewelch-richards@digital.cabinet.office.gov.uk>
Also use this metadata to decide whether preview pages need an
overlay or not. So far we have always added the overlay when validation
has failed. Now we will only show it when validation failed because
content is outside the printable area.
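Roughly, the new decision looks like this (a sketch with hypothetical metadata keys, not the real code):

```python
def page_needs_overlay(page_metadata):
    # Before: overlay whenever validation failed.
    # Now: overlay only when the failure is content outside the
    # printable area; other validation failures get no overlay.
    return page_metadata.get("validation_failure") == "content-outside-printable-area"
```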
We want to pass the `request_id` to Celery tasks if the task is called
from an HTTP request, so that we can add the `request_id` to the logs.
This change overrides `apply_async` to add the `request_id` to the
kwargs if available. When the task runs, we then add the `request_id`
to `g` on Flask's application context.
Tasks called from `send_task` won't have a `request_id` for now, and
this change only affects tasks called from HTTP requests (not from other
tasks or from Celery Beat).
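A minimal sketch of the approach (the class name, the `X-Request-Id` header, and the wiring are assumptions, not our exact code):

```python
from celery import Celery, Task
from flask import g, has_request_context, request


class TaskWithRequestId(Task):
    def apply_async(self, args=None, kwargs=None, **options):
        kwargs = kwargs or {}
        # Only tasks triggered from an HTTP request pick up a request_id;
        # send_task and Celery Beat invocations don't pass through here.
        if has_request_context() and "request_id" not in kwargs:
            kwargs["request_id"] = request.headers.get("X-Request-Id")
        return super().apply_async(args, kwargs, **options)

    def __call__(self, *args, **kwargs):
        # When the task runs, stash the request_id on Flask's g (assumes
        # the worker executes tasks inside an application context) so the
        # log formatter can pick it up.
        g.request_id = kwargs.pop("request_id", None)
        return super().__call__(*args, **kwargs)


celery_app = Celery("notifications", task_cls=TaskWithRequestId)
```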
All our responses identify the web server that served them via the
`server` response header. This opens us up to potential attackers who
might be crawling the internet looking for specific versions with known
vulnerabilities. As our dependencies are open source, this doesn't
change anything for targeted attacks, since attackers can just look at
our repos on GitHub, but it theoretically improves security, if only marginally.
Regardless, the header isn't useful [1], we're not the first people to
want to get rid of it, and gunicorn are in the process of at least
amending it to remove the version information [2].
This shouldn't have any impact on us, though an empty string will be
passed through to debug information in event of a crash. That's fine
though, as we already know what version we're running.
[1] https://www.fastly.com/blog/headers-we-dont-want
[2] https://github.com/benoitc/gunicorn/issues/825
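One way to do this in a gunicorn config file (a sketch; assumes a gunicorn version that builds the `server` header from `gunicorn.SERVER_SOFTWARE`):

```python
# gunicorn_config.py
import gunicorn

# Blank the string gunicorn reports in the `server` response header; the
# same empty string is what shows up in debug output after a crash.
gunicorn.SERVER_SOFTWARE = ""

workers = 4  # unrelated, illustrative setting
```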
We don't use it since we wrote our own provider stubs for performance
tests.
This removes it from the API - it's still in the DB and will be
retrieved by queries, but is set to disabled on prod.
The nightly job to delete email notifications was failing because it was
timing out (`psycopg2.errors.QueryCanceled: canceling statement due to statement timeout`).
This adds a limit to the query that inserts or updates
notification history, so that it only updates a maximum of 10,000 rows at
a time.
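The batching pattern, sketched with psycopg2 and hypothetical table and column names (the real change limits the existing insert-or-update query in the same way):

```python
import psycopg2

BATCH_SIZE = 10_000

conn = psycopg2.connect("dbname=notify")  # hypothetical DSN
conn.autocommit = True  # commit after each batch

cur = conn.cursor()
while True:
    # Move at most BATCH_SIZE rows per statement, so every statement
    # finishes well within the database's statement timeout.
    cur.execute(
        """
        INSERT INTO notification_history
        SELECT * FROM notifications
        WHERE notification_type = 'email'
          AND created_at < now() - interval '7 days'
          AND id NOT IN (SELECT id FROM notification_history)
        LIMIT %s
        """,
        (BATCH_SIZE,),
    )
    if cur.rowcount < BATCH_SIZE:
        break  # final partial batch done
```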
Deploys take up to five minutes, during which notify-paas-autoscaler
can't scale the app. We saw 502s due to a large volume of traffic
coming in during that time, and we couldn't react because we were
deploying.
If we scale up to 25 instances, the autoscaler won't be able to downscale
until after the deploy has finished.
* remove gosuuser - this means we can upgrade the base image to
something more modern and not have to faff around with gpg
* remove unnecessary commands - some things need to exist in the
makefile to keep jenkins happy
* remove the concept of building separately - `pip install -r
requirements.txt` in the dockerfile
We have a datetime dimension table that we no longer use and that
I believe can be dropped.
The first step is to remove the model and release that change. The next
step will be to add a database migration to drop the actual
table. I believe we need to do it in this order, and it can't
be done as a single PR: otherwise running code would still reference
the table when it is dropped.
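The follow-up migration might look something like this (an Alembic-style sketch; the revision ids and the table name `datetime_dimension` are placeholders):

```python
"""Drop the unused datetime dimension table."""
from alembic import op

revision = "drop_datetime_dimension"  # placeholder revision id
down_revision = "previous_revision"   # placeholder


def upgrade():
    # Only safe once the model removal has been deployed, so no running
    # code still references the table.
    op.drop_table("datetime_dimension")


def downgrade():
    # Recreating the table would need its full definition; out of scope
    # for this sketch.
    raise NotImplementedError
```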