We want to pass the `request_id` to Celery tasks when the task is called
from an HTTP request, so that we can include the `request_id` in the logs.
This change overrides `apply_async` to add the `request_id` to the
kwargs if one is available. When the task runs, we then add the
`request_id` to `g` on Flask's application context.
Tasks called from `send_task` won't have a `request_id` for now, and
this change only affects tasks called from HTTP requests (not from other
tasks or from Celery Beat).
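
A minimal sketch of the idea, assuming the request id has been stored on
`g` by the request-handling code; `RequestIdTask` is an illustrative name,
not the exact Notify implementation:

```python
from celery import Task
from flask import g, has_request_context


class RequestIdTask(Task):  # illustrative name
    def apply_async(self, args=None, kwargs=None, **options):
        kwargs = kwargs or {}
        # Only HTTP requests carry a request id; tasks queued via
        # send_task, Celery Beat or other tasks won't have one.
        if has_request_context() and getattr(g, 'request_id', None):
            kwargs.setdefault('request_id', g.request_id)
        return super().apply_async(args, kwargs, **options)

    def __call__(self, *args, **kwargs):
        # In the worker: expose the request id on g so the app logger
        # can include it (assumes an app context is already pushed).
        g.request_id = kwargs.pop('request_id', None)
        return super().__call__(*args, **kwargs)
```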
The Notify team needs to investigate when a notification is marked as failed.
We will process the whole file and mark each notification with the
appropriate status; if any have failed, an exception is raised.
The exception will trigger a CloudWatch error for the team to investigate.
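
A sketch of the pattern, with hypothetical function and field names:

```python
def process_receipt_file(rows):
    # rows is assumed to be an iterable of dicts like
    # {'id': ..., 'status': ...}
    failed_ids = []
    for row in rows:
        # ... persist the appropriate status for this notification ...
        if row['status'] == 'failed':
            failed_ids.append(row['id'])
    if failed_ids:
        # Raising only after the whole file has been processed means one
        # bad row doesn't stop the rest, while the unhandled exception
        # still surfaces as a CloudWatch error to investigate.
        raise RuntimeError(
            '{} notifications failed: {}'.format(len(failed_ids), failed_ids)
        )
```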
Celery tasks require an active app context in order to use the app
logger, the database, or app configuration values. Since there's no
built-in support for this, we create the context using a custom Celery
task class, NotifyTask.
However, NotifyTask was using `current_app`, which is itself only
available within an app context, so the code was pushing an initial
context in run_celery.py.
This works with the default prefork pool implementation, but raises
a "working outside of application context" error with an eventlet
pool, since that initial application context is local to a thread
and the eventlet Celery worker pool creates multiple green threads.
To avoid this, we bind NotifyTask to the app variable with a closure,
as sketched below, and use that variable instead of `current_app` to
create the context for executing the task. This avoids any issues
caused by the shared initial app context being lost when additional
worker threads are spawned.
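
A minimal sketch of the closure binding, following the standard Flask
pattern; `make_celery` is an illustrative name:

```python
from celery import Celery


def make_celery(app):
    celery = Celery(app.import_name)

    class NotifyTask(celery.Task):
        def __call__(self, *args, **kwargs):
            # `app` is captured by the closure, so we never touch
            # `current_app`; every invocation pushes its own context,
            # which is safe on eventlet green threads too.
            with app.app_context():
                return super().__call__(*args, **kwargs)

    celery.Task = NotifyTask
    return celery
```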
We still need to keep the context push in run_celery.py for prefork
workers, since it's required for logging events outside of tasks
(e.g. logging during `worker_process_shutdown` signal handling).
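
For illustration, the kept push might look roughly like this in
run_celery.py (`create_app` is a hypothetical factory here):

```python
from celery.signals import worker_process_shutdown

from app import create_app  # hypothetical application factory

application = create_app()
# Pushed once at import time; prefork workers inherit this context,
# so logging outside of any task still works.
application.app_context().push()


@worker_process_shutdown.connect
def on_worker_process_shutdown(*args, **kwargs):
    # Without the pushed context above, this would raise
    # "working outside of application context" under prefork.
    application.logger.info('celery worker process shutting down')
```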
Celery is now logging at info level and as such no longer prints out
the celery task timings, which were useful both for finding out whether
a task had been called and for seeing how long it took. Added an extra
timing message for celery tasks so that we can determine whether these
are less frequent than the API calls, and to provide more useful
information.
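
A sketch of how such a timing message could be emitted; the class name
and message format are illustrative:

```python
import logging
import time

from celery import Task

logger = logging.getLogger(__name__)


class TimedTask(Task):  # illustrative name
    def __call__(self, *args, **kwargs):
        start = time.monotonic()
        try:
            return super().__call__(*args, **kwargs)
        finally:
            # Explicit info-level timing, since celery itself no longer
            # prints per-task timings at this log level.
            logger.info(
                'celery task %s took %.4f seconds',
                self.name, time.monotonic() - start,
            )
```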
Task exceptions were always caught locally by celery's base handler;
however, we weren't logging them ourselves, which meant they wouldn't
be put into the JSON logs that are sent to CloudWatch.
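
A sketch of logging the exception ourselves via Celery's `on_failure`
hook, so it flows through our JSON logging; the class name is
illustrative:

```python
import logging

from celery import Task

logger = logging.getLogger(__name__)


class LoggingTask(Task):  # illustrative name
    def on_failure(self, exc, task_id, args, kwargs, einfo):
        # Celery's base handler catches the exception but doesn't route
        # it through our logging config, so log it explicitly here.
        logger.error(
            'celery task %s (id %s) failed: %s',
            self.name, task_id, exc,
            exc_info=einfo.exc_info if einfo else None,
        )
        super().on_failure(exc, task_id, args, kwargs, einfo)
```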
We don't use boto2 on the API any more; we haven't since Celery 4.0.2.
Note: if you run locally with boto2 still installed, you'll see errors
that complain about things like:
```
boto.exception.SQSError: SQSError: 403 Forbidden
<?xml version="1.0"?><ErrorResponse xmlns="http://queue.amazonaws.com/doc/2012-11-05/"><Error><Type>Sender</Type><Code>SignatureDoesNotMatch</Code><Message>Credential should be scoped to a valid region, not 'queue'. </Message><Detail/></Error><RequestId>52207ca4-9131-58cb-89ae-2d45f06623a3</RequestId></ErrorResponse>
```
If so, make sure boto2 is completely uninstalled.