notifications-api/app/celery/provider_tasks.py
Martyn Inglis 4768f0b9fd Change retry policy.
Before we had one long back-off; now we have more, but shorter, back-offs.

- PREVIOUS
When we had an error talking to a provider we retried quickly, and if we kept getting errors we backed off further and further. The maximum number of attempts was 5 and the maximum delay 4 hours. This was to allow us time to ship a build if that was required.

- NOW
We now back off 48 times, 5 minutes apart. This gives us the same total back-off window, but many more attempts within it.
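(For reference: 48 retries × 300 seconds = 14,400 seconds, i.e. the same 4-hour window as before, just split into 5-minute steps.)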

- WHY
Having the long back-off meant messages could be delayed by 4 hours. This was happening more and more often, because PaaS deploys can leave messages in the "inflight" state in SQS, and the inflight state MUST have an expiry time LONGER than the maximum retry back-off. As a result, messages could be delayed by 4 hours even when there was no app error.
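A rough sketch of that constraint, assuming the Celery app uses the SQS transport (the app name, broker URL and margin below are illustrative, not taken from this repo): the visibility_timeout transport option controls how long a delivered message stays inflight before SQS makes it visible again, and it has to exceed the longest retry delay.

from celery import Celery

# Illustrative only - the real Celery app and its config live elsewhere in the repo.
celery_app = Celery("notifications", broker="sqs://")

RETRY_DELAY_SECONDS = 300  # matches default_retry_delay on the tasks in this file

# The SQS visibility timeout must be LONGER than the maximum retry back-off,
# otherwise retried (countdown/ETA) tasks are redelivered early. The flip side
# is that a message left inflight by a deploy stays hidden for this long, which
# is why the old 4-hour back-off forced a 4-hour worst-case delay.
celery_app.conf.broker_transport_options = {"visibility_timeout": RETRY_DELAY_SECONDS + 60}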

With the shorter back-off, that worst-case delay drops to 5 minutes, whilst still giving us time to fix issues.
2017-05-25 11:12:40 +01:00


from flask import current_app
from notifications_utils.recipients import InvalidEmailError
from sqlalchemy.orm.exc import NoResultFound

from app import notify_celery
from app.config import QueueNames
from app.dao import notifications_dao
from app.dao.notifications_dao import update_notification_status_by_id
from app.statsd_decorators import statsd
from app.delivery import send_to_providers


# Retry policy: up to 48 attempts, 5 minutes (300 seconds) apart - a 4-hour window in total.
@notify_celery.task(bind=True, name="deliver_sms", max_retries=48, default_retry_delay=300)
@statsd(namespace="tasks")
def deliver_sms(self, notification_id):
    try:
        notification = notifications_dao.get_notification_by_id(notification_id)
        if not notification:
            raise NoResultFound()
        send_to_providers.send_sms_to_provider(notification)
    except Exception as e:
        try:
            current_app.logger.exception(
                "SMS notification delivery for id: {} failed".format(notification_id)
            )
            self.retry(queue=QueueNames.RETRY)
        except self.MaxRetriesExceededError:
            # Retries exhausted: mark the notification as a technical failure.
            current_app.logger.exception(
                "RETRY FAILED: task send_sms_to_provider failed for notification {}".format(notification_id),
            )
            update_notification_status_by_id(notification_id, 'technical-failure')


@notify_celery.task(bind=True, name="deliver_email", max_retries=48, default_retry_delay=300)
@statsd(namespace="tasks")
def deliver_email(self, notification_id):
    try:
        notification = notifications_dao.get_notification_by_id(notification_id)
        if not notification:
            raise NoResultFound()
        send_to_providers.send_email_to_provider(notification)
    except InvalidEmailError as e:
        # An invalid address can never succeed, so fail straight away rather than retry.
        current_app.logger.exception(e)
        update_notification_status_by_id(notification_id, 'technical-failure')
    except Exception as e:
        try:
            current_app.logger.exception(
                "RETRY: Email notification {} failed".format(notification_id)
            )
            self.retry(queue=QueueNames.RETRY)
        except self.MaxRetriesExceededError:
            current_app.logger.error(
                "RETRY FAILED: task send_email_to_provider failed for notification {}".format(notification_id)
            )
            update_notification_status_by_id(notification_id, 'technical-failure')
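
For context, this file only defines the delivery tasks; they are enqueued elsewhere in the API by notification id. A hedged usage sketch (the queue name here is a placeholder, not the repo's real constant):

# Hypothetical caller - queue name is illustrative only.
deliver_sms.apply_async([str(notification_id)], queue="send")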