Normally we check the app's status page to see if migrations need
running. However, if the _status endpoint doesn't respond with a 200, we
don't necessarily want to abort the deploy - we may be trying to deploy
a code change that fixes that status endpoint, for example.
We don't know whether to run the migrations or not, so we err on the side
of caution and run them anyway. The migration itself might be the fix
that gets the app working, after all.
Had to do a little song and dance because sometimes the response won't
be populated before an exception is thrown.
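A rough sketch of the fall-back behaviour (the real check also has to handle the case above where no response object exists, and the URL and variable names here are illustrative):

```
if ! curl --silent --fail --max-time 10 "https://${APP_HOSTNAME}/_status" > /dev/null; then
  # Can't tell whether migrations are needed, so run them anyway.
  RUN_MIGRATIONS=true
fi
```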
We have seen the reporting app run out of memory multiple times when
dealing with overnight tasks. The app runs 11 worker threads and we
reduce this to 2 worker threads to put less pressure on a single
instance.
The number 2 was chosen as most of the tasks processed by the reporting
app only take a few minutes and only one or two usually take more than
an hour. This would mean that, with 2 processes across our current 2
instances, a long running task should hopefully only wait behind a few
short running tasks before being picked up, and therefore we shouldn't
see a large increase in the overall time taken to run all our overnight
reporting tasks.
On top of reducing the concurrency for the reporting app, we also set
CELERYD_PREFETCH_MULTIPLIER=1. We do this as suggested by the celery
docs because this app deals with long running tasks.
https://docs.celeryproject.org/en/3.1/userguide/optimizing.html#optimizing-prefetch-limit
The change in prefetch multiplier should again optimise the overall time
it takes to process our tasks by ensuring that tasks are given to
instances that have (or will soon have) spare workers to deal with them,
rather than committing to putting all the tasks on certain workers in
advance.
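As a sketch, the reporting worker ends up being started along these lines (the app module name and the env-var plumbing are illustrative, not the real config):

```
# assumed to be read by the app's celery config
export CELERYD_PREFETCH_MULTIPLIER=1
celery -A reporting_app worker --concurrency=2 --loglevel=INFO
```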
Note, another suggestion from the optimisation docs is to set
`ACKS_LATE` on the long running tasks.
This setting would effectively change us from prefetching 1 task per
worker to prefetching 0 tasks per worker and further optimise how we
distribute our tasks across instances. However, we decided not to try
this setting as we weren't sure whether it would conflict with our
visibility_timeout. We decided not to spend the time investigating but
it may be worth revisiting in the future, as long as tasks are
idempotent.
Overall, this commit takes us from potentially having all 18 of our
reporting tasks get fetched onto a single instance to now having a
process that will ensure tasks are distributed more fairly across
instances based on when they have available workers to process the
tasks.
What the `-Ofair` setting does is best described in
https://medium.com/@taylorhughes/three-quick-tips-from-two-years-with-celery-c05ff9d7f9eb#d7ec
This should be useful for the reporting app because tasks run by this
app are long running (many seconds). Ideally this code change will
mean that we process the overnight reporting tasks more quickly overall,
so they all finish earlier in the morning (although individual tasks will not be quicker).
This is only being set on the reporting celery app because this
change is trying to do the minimum possible to improve the reliability
and speed of our overnight reporting tasks. It may very well be useful to
set this flag on all our apps, but that should be done with some more
consideration, as some of them deal with much faster tasks (sub 0.5s)
and so the flag may or may not still be appropriate. Proper
investigation would be needed.
Note, the celery docs on this are also worth a read:
https://docs.celeryproject.org/en/3.1/userguide/optimizing.html#optimizing-prefetch-limit.
However, the language there makes it easy to confuse this setting with the
prefetch limit. The distinction is that prefetching grabs items off the queue,
whereas the -Ofair behaviour concerns items that have already been
prefetched, and whether the master celery process immediately hands them
to the child (worker) processes or not.
Note, this behaviour is the default for celery version 4 and above, but we
are still on version 3.1.26, so we have to enable it ourselves.
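For illustration, the flag is simply passed when the worker is started (app module name made up):

```
celery -A reporting_app worker -Ofair --concurrency=2 --loglevel=INFO
```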
Instead of saving the email notification to the db, add it to a queue to be saved later.
This is an attempt to alleviate pressure on the db from the api requests.
This initial PR is to trial it and see if we see an improvement in api performance and a reduction in queue pool errors. If we are happy with this we could remove the hard coding of the service id.
In a nutshell:
- If POST /v2/notification/email is from our high volume service (hard coded for now), then create the notification and send it to a queue so it can be persisted to the db.
- create a save_api_email task to persist the notification
- return the notification
- New worker app to process the save_api_email tasks.
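As a sketch, the new worker app is just a celery worker consuming the queue these tasks are sent to (the app module and queue name are assumptions):

```
celery -A notifications worker -Q save-api-email-tasks --loglevel=INFO
```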
By adding `exec` to the entrypoint bash script for the application, we can trap an EXIT from the script and execute our custom `on_exit` method, which checks whether the application process is busy before terminating, waiting up to 10 seconds. We don't need to trap `TERM`, so that's been removed again.
Written by:
@servingupaces
@tlwr
When Cloud Foundry applications are to be rescheduled from one cell to
another, or they are stopped, they are sent a SIGTERM signal and 10
seconds later, a SIGKILL signal.
Currently the scripts trap the POSIX defined EXIT handler, rather than
the signal directly.
In order for the signal to properly be propagated to celery, and the
celery workers, the script should call the on_exit function when
receiving a TERM signal.
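In other words (illustrative snippet):

```
# before: only the POSIX EXIT pseudo-signal was trapped
trap on_exit EXIT
# after: handle the TERM signal directly, so on_exit can pass it on to celery
trap on_exit TERM
```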
Signed-off-by: Toby Lorne <toby.lornewelch-richards@digital.cabinet-office.gov.uk>
Co-authored-by: Becca <rebecca.law@digital.cabinet.office.gov.uk>
Co-authored-by: Toby <toby.lornewelch-richards@digital.cabinet.office.gov.uk>
- We are running statsd exporter as an app with a public route for
Prometheus to scrape
- This updates preview to send statsd metrics over the CF internal
networking to the statsd exporter
- Removes the sidecar statsd exporters too
This is so that the retry-tasks queue, which can have quite a lot of
load, has its own worker, and other queues are paired with queues
that flow similarly:
- letter-tasks with create-letters-pdf-tasks
- job-tasks with database-tasks
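Illustratively, that gives worker processes along these lines (app module name made up, queue names as above):

```
celery -A notifications worker -Q retry-tasks
celery -A notifications worker -Q letter-tasks,create-letters-pdf-tasks
celery -A notifications worker -Q job-tasks,database-tasks
```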
cf v3 commands don't appear to support commands in manifest files. They
say that they do, but in practice they fail on cf v3-zdt-push with an
error message "No process types returned from stager". This can be
solved by moving the command from the manifest to a Procfile.
However, the Procfile is part of the source code, and as such is the
same for each app. To get around this, we make the Procfile command invoke
a new wrapper script, which checks the NOTIFY_APP_NAME env var and then
calls the correct command.
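A sketch of the kind of wrapper this describes (the script contents and per-app commands are illustrative):

```
#!/usr/bin/env bash
# The Procfile points at this script; it dispatches on NOTIFY_APP_NAME.
case "$NOTIFY_APP_NAME" in
  api)      exec gunicorn -c gunicorn_config.py application ;;
  delivery) exec celery -A run_celery worker --loglevel=INFO ;;
  *)        echo "Unknown NOTIFY_APP_NAME: $NOTIFY_APP_NAME" >&2; exit 1 ;;
esac
```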
We use exec to start awslogs_agent and then a tail to print logs to
stdout. The CF docs[1] recommend using exec to start processes, which seems
to imply that as long as there are commands running, the container will
remain up and running.
This commit ensures that if there are no celery tasks running we will
kill any other processes that we have started, so that the container will
no longer be considered healthy by cloudfoundry and will be replaced.
1: https://docs.cloudfoundry.org/devguide/deploy-apps/manifest.html#start-commands
In 4427827b2f, celery monitoring was
changed from using PID files to actually looking at processes.
If celery workers get OOM killed (for instance), the container init
script would not restart them. This is because `get_celery_pids` would
not find any processes containing the string celery, which would
cause the pipe to fail (-o pipefail). APP_PIDS would not get updated, but
the script would continue to run. This caused the script not to restart
the celery processes.
We think the correct behaviour when celery processes are killed (i.e.
there are no more celery processes running in a container) is to kill
the container. The PaaS should then schedule new ones which may
remediate the cause of the celery processes being killed.
Upon detection of no celery processes running, some diagnostic
information from the environment is sent to the logs, e.g.:
```
CF_INSTANCE_ADDR=10.0.32.4:61012
CF_INSTANCE_INTERNAL_IP=10.255.184.9
CF_INSTANCE_GUID=81c57dbc-e706-411e-6a5f-2013
CF_INSTANCE_PORT=61012
CF_INSTANCE_IP=10.0.32.4
```
Then the script (which is the container entrypoint) exits 1.
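Roughly (the real check and variable names differ):

```
if ! pgrep -f celery > /dev/null; then
  # log diagnostic info (e.g. the CF_INSTANCE_* variables shown above),
  # then exit so the PaaS replaces the container
  env | grep '^CF_INSTANCE_'
  exit 1
fi
```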
Co-author: @servingupaces @tlwr
This addresses some problems that existed in the previous approach:
1. There was a race condition that could occur between the time we were
looking for the existence of the .pid files and actually reading them.
2. If for some reason the .pid file was left behind after a process had
died, the script would never know because we do:
kill -s ${1} ${APP_PID} || true
Previously the script would:
try to SIGTERM each celery process every second for the 9 second
timeout, and then SIGKILL every second after that, with no upper bound.
This commit changes this to:
* SIGTERM each process once.
* Wait nine seconds (checking if the pid files are still present each
second)
* SIGKILL any remaining processes once.
* exit
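A sketch of the new sequence (the pid-file location is illustrative):

```
# SIGTERM each worker once
for pidfile in /tmp/celery*.pid; do
  kill -s TERM "$(cat "$pidfile")" || true
done

# wait up to nine seconds for the pid files to disappear
for _ in $(seq 1 9); do
  sleep 1
  ls /tmp/celery*.pid > /dev/null 2>&1 || exit 0
done

# SIGKILL anything still left, then exit
for pidfile in /tmp/celery*.pid; do
  kill -s KILL "$(cat "$pidfile")" || true
done
```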
Remove pip-accel - it hasn't been updated in two years, and pins us to a
version of pip that is several breaking changes old.
Make sure commands work if you're already in a venv - mostly by
checking for the presence of $VIRTUAL_ENV, and ensuring we use the correct
pip to install packages. Also clean up the commands a bit.
You need to `pip install celery[sqs]` to get the additional
dependencies that celery needs to use SQS queues - there are two libs:
boto3 and pycurl.
pycurl is a set of python bindings around curl, so it needs to be
installed from source so it can link to your curl/ssl libs. On PaaS and
in docker this works fine (we needed to add `libcurl4-openssl-dev` to the
docker container), but on macOS it can't find openssl. We need to pass
a couple of flags in:
* set the environment variable PYCURL_SSL_LIBRARY=openssl
* pass in the global options `build_ext` and `-I{openssl_headers_path}`.
As shown here:
https://github.com/pycurl/pycurl/issues/530#issuecomment-395403253
The env var is no biggie, but using any install-option flags disables
wheels for the whole pip install run. (See
https://github.com/pypa/pip/issues/2677 and
https://github.com/pypa/pip/issues/4118 for more context on the
install-option flags.) A whole bunch of our dependencies don't
install nicely from source (but do from wheel), so this commit installs
pycurl separately as an initial step, with the requisite flags, and
then installs the rest of the requirements as before.
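On macOS that initial step looks roughly like this (the openssl path depends on how openssl was installed; this mirrors the linked pycurl issue):

```
PYCURL_SSL_LIBRARY=openssl pip install pycurl \
  --global-option=build_ext \
  --global-option="-I$(brew --prefix openssl)/include"
pip install -r requirements.txt
```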
I've updated the makefile and bootstrap.sh files to reflect this, but
if you run `pip install -r requirements.txt` from scratch you will run
into issues.
The list of top-level dependencies is moved to requirements-app.txt,
which is used by `make freeze-requirements` to generate the full
list of requirements in requirements.txt.
This is based on alphagov/digitalmarketplace-api#615, so the rationale
from that PR applies here.
We had a problem with unpinned packages on new deployments leading
to failed tests (e.g. alphagov/notifications-admin#2144), which is
why we're implementing this now.
After re-evaluating pipenv again, this still seems like the least
disruptive approach:
* pyup.io has experimental support for Pipfile, but doesn't respect
  version ranges or update hashes in the lock file
* The CloudFoundry buildpack recognizes and supports Pipfiles out of the
  box, but the support is relatively new. For example, until recently
  CF would install dev packages during deployment. It's also based on
  generating a requirements file from the Pipfile, which doesn't
  properly support pinning VCS dependencies (e.g. it doesn't set the
  #egg= version, meaning pip will not upgrade the package if it's
  already installed).
* pipenv has a strict dependency resolution algorithm, which doesn't
appear to be well documented and can cause some unexpected failures.
For example, pipenv doesn't seem to be able to install the `awscli-cwlogs`
package at all, believing it to have a version conflict for `botocore`
(which it doesn't list as a direct dependency) while neither `pip` nor
`pip-tools` highlight any issues with it.
* While trying out `pipenv install` on our list of dependencies it would
regularly fail to install utils with a "Will try again." message.
While the installation succeeds after a retry, this doesn't inspire
confidence.
* The switch to Pipfile and pipenv-managed virtualenvs requires a series
of changes to `make` targets and scripts - replacing `pip install` with
`pipenv`, removing references to requirements files and prefixing
commands with `pipenv run`. While it's likely to simplify the overall
process of managing dependencies, it would require time to properly
implement across our applications and environments (Jenkins, PaaS,
docker containers, and dev machines).
Our application servers and celery workers write logs both to a
file that is shipped to CloudWatch and to stdout, which is picked
up by CloudFoundry and sent to Logit Logstash.
This works with gunicorn and single-worker celery deployments; however,
celery multi daemonizes worker processes, which detaches them from
stdout, so there's no log output in `cf logs` or Logit.
To fix this, we start a separate tail process to duplicate logs written
to a file to stdout, which should be picked up by CloudFoundry.
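i.e. something along these lines in the startup script (the log path is illustrative):

```
# copy anything written to the log file to stdout for CloudFoundry to pick up
tail -n 0 -F /home/vcap/logs/app.log &
```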
`exec` replaces the current shell with the command, which means that
script execution stops at that line.
Sending it to the background with `exec "$@" &` won't work either,
because the script will move directly on to the next command, where it
looks for the `.pid` files that have not yet been created, because celery
takes a few seconds to spin up all the processes.
Using `sleep X` to remedy this seems just wrong given that
1. we can use `eval`, which blocks until the command returns
2. there is no obvious benefit in sticking with `exec`
The existing script would not work with `celery multi`, as it was trying
to put it in the background and then get its pid.
`celery multi` creates a number of worker processes and stores their
PIDs in files named celeryN.pid, where N is the index number of the worker
(starting at 1).
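For example (app module name made up):

```
# starts three workers; each writes its pid to celery1.pid, celery2.pid, celery3.pid
celery multi start 3 -A notifications --loglevel=INFO
```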
If a PR is going to fail because tests aren’t passing, then you:
- should know about it as quickly as possible
- shouldn’t waste precious Jenkins CPU running subsequent tests
This commit adds the `-x` flag to pytest, which stops the test run as
soon as one failing test is discovered.
Changes generate manifest script to parse variables file as YAML
and only add variables to the manifest if they're already listed
in the `env` section.
This allows us to use a single variables file for all applications
and avoid duplicating secrets across multiple files while adding
only the relevant secrets to the application manifest.
`./scripts/generate_manifest.py` takes a path to a PaaS manifest file
and a list of variable files and prints a single CloudFoundry manifest.
The generated manifest replaces all `inherit` keys by loading the data
from parent manifests. This allows us to pipe the script output directly
to CF CLI, without saving it to disk, which minimises the risk of it
being accidentally included in the deployment artefact. The combined
manifest might differ from the results produced by CF CLI itself, so
the original manifest shouldn't normally be used on its own.
After combining the manifests the script will load and parse all listed
variable files and add them to the manifest's `env` by merging the files
together in the order they were listed (so in case of any key conflicts
the latest file overwrites previous values), upper-casing keys and
processing any list or dictionary values with `json.dumps`, so that they
can be set as environment variables.
This gives us a full list of environment variables that were previously
parsed from the CloudFoundry user services data.
We saw an issue where the app started, then immediately crashed due to
a setup error. However, Jenkins had already returned a positive result, and
the deploy continued.
cf-deploy should fail if the app doesn't start up.
We do this by looking through the cloudfoundry events, and aborting
if there are any `app.crash` events for the new GUID.
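A sketch of the check (the real version filters events for the new GUID; the app name and parsing here are simplified):

```
if cf events "$APP_NAME" | grep -q 'app.crash'; then
  echo "app.crash events found after deploy - failing the build"
  exit 1
fi
```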