Our application servers and celery workers write logs both to a
file that is shipped to CloudWatch and to stdout, which is picked
up by CloudFoundry and sent to Logit Logstash.
This works with gunicorn and with single-worker celery deployments;
`celery multi`, however, daemonizes its worker processes, which detaches
them from stdout, so no log output reaches `cf logs` or Logit.
To fix this, we start a separate `tail` process that duplicates the logs
written to the file onto stdout, where CloudFoundry picks them up.
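A minimal sketch of that approach; the log path is an assumption, adjust it to wherever the app writes its file logs:

```bash
# Mirror the celery log file to stdout so CloudFoundry picks it up.
# /home/vcap/logs/celery.log is an assumed path, not the literal one.
LOG_FILE=/home/vcap/logs/celery.log
touch "$LOG_FILE"       # make sure the file exists before tailing
tail -F "$LOG_FILE" &   # -F keeps following across rotation/recreation
```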
`exec` replaces the current shell with the command it runs, which means
script execution stops at that line.
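For illustration, nothing after the `exec` line in a script like this ever runs:

```bash
#!/bin/bash
echo "about to exec"
exec "$@"             # this shell process is replaced by the command
echo "never printed"  # unreachable: the shell no longer exists
```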
Backgrounding the command with `exec "$@" &` won't work either: the
script moves straight on to the next command, which looks for the
`.pid` files, and those have not been created yet because celery
takes a few seconds to spin up all of its processes.
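Concretely, the failing pattern looks like this, with `celery1.pid` standing in for whichever file the next command reads:

```bash
exec "$@" &       # `exec` inside a background job only replaces that subshell
cat celery1.pid   # races celery's startup: the file usually doesn't exist yet
```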
Using `sleep X` to remedy this seems just wrong, given that:
1. we can use `eval`, which blocks until the command returns (see the sketch after this list)
2. there is no obvious benefit to sticking with `exec`
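A sketch of the `eval`-based version, assuming the worker command is passed in as the script's arguments:

```bash
# eval blocks until `celery multi start ...` returns, and celery multi
# only returns after it has daemonized the workers and written the
# .pid files, so the files are guaranteed to exist on the next line.
eval "$@"
for pidfile in celery*.pid; do
  echo "started worker with pid $(cat "$pidfile")"
done
```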
The existing script would not work with `celery multi`, as it tried to
put the command in the background and then get its PID.
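That pattern was roughly the following (a reconstruction, not the literal script); it is fine for a single foreground worker but meaningless for `celery multi`:

```bash
"$@" &             # with celery multi, this backgrounds the launcher, not a worker
echo $! > app.pid  # $! is celery multi's own PID; it exits once workers daemonize
```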
`celery multi` creates a number of worker processes and stores their
PIDs in files named `celeryN.pid`, where N is the index number of the
worker (starting at 1).
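The script can therefore enumerate the workers by globbing those files, for example to keep a foreground process alive until every worker exits. A sketch, assuming the `.pid` files sit in the working directory:

```bash
# Collect the worker PIDs that celery multi wrote.
pids=()
for pidfile in celery*.pid; do
  pids+=("$(cat "$pidfile")")
done

# Block while any worker is still alive, so CloudFoundry keeps the
# container (and the tail process feeding stdout) running.
while :; do
  alive=0
  for pid in "${pids[@]}"; do
    kill -0 "$pid" 2>/dev/null && alive=1
  done
  [ "$alive" -eq 1 ] || break
  sleep 5
done
```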