Commit Graph

66 Commits

Author SHA1 Message Date
Katie Smith
04bfd6bfdb Trigger task to publish alerts when sending or cancelling alert
When we send or cancel a broadcast message, we now trigger a task
in the govuk-alerts repo that polls our API for alerts and
publishes a fresh list of alerts.

Co-authored-by: Pea Tyczynska <pea.tyczynska@digital.cabinet-office.gov.uk>
2021-10-18 08:41:24 +01:00
Katie Smith
9ff0ca0363 Update how live broadcast Zendesk tickets are created
These now use the Notify Form in Zendesk
2021-09-29 10:59:07 +01:00
sakisv
9faa3d34e1 Fix tests
Specifically, no longer test for a P1 Zendesk ticket when sending an alert,
and drop the misleading "p1" from the test name for cancelling an alert.

We're no longer creating a P1 from the code, but we _do_ create a
zendesk ticket when sending out an alert.

When cancelling, what we want to test is that we don't create a second
ticket when the alert is cancelled.
2021-09-10 10:14:28 +03:00
Ben Thorner
bf0bf4e31c Favour new "areas" format for PagerDuty alerts
Broadcasts created via the API [1] and the Admin app [2] should
both now have this field set. It's also more informative to show
this, and broadcasts created via the API don't have IDs anyway.

There's a small risk that an old broadcast that gets approved won't
have this data, but it's for information only and we intend to
backfill all old broadcasts in the near future.

[1]: 023a06d5fb
[2]: 7dbe3afa19
2021-08-27 14:22:12 +01:00
Ben Thorner
a7d92b9058 Replace / remove redundant uses of "areas"
In one case ("areas=['manchester']") the format was even invalid,
but in general the original value of the column is pretty much
irrelevant for tests that involve updating it (it's highly unlikely
the column would default to the same value as the test data).
2021-08-27 13:31:49 +01:00
Ben Thorner
08f48379b4 Move ID generation into link test method
Unlike the other IDs which are stored in the DB, this isn't relevant
for the Celery task as it invokes a link test. Moving it into the
proxy client will also enable us to generate a second ID in the next
commits, where we start doing a link test for the failover lambda.
2021-07-19 16:00:55 +01:00
Ben Thorner
b6774bf0f7 Generate Vodafone link test sequence nos in proxy
Previously the Celery task to trigger a link test had to know about
the special case of a sequence number for Vodafone. Since we're about
to change the client to perform multiple tests, it makes sense to give
it the knowledge of how to generate the number itself.

Note that we have to import the db inline to avoid a circular import,
since this module is itself imported by app/__init__.py.

Other invocations of the Vodafone client use stored sequence numbers
from the DB, which are called "message numbers" in that context. Since
the two use cases are very different (even the names are different!),
having them in two places shouldn't cause any confusion.
2021-07-19 15:43:36 +01:00
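A minimal sketch of the shape this describes, assuming illustrative names (the client class, lambda call and sequence name below are not taken from the repo): the Vodafone client generates its own link-test sequence number, importing the db inline to avoid the circular import mentioned above.

```
class CBCProxyVodafone:
    def send_link_test(self):
        from app import db  # inline import avoids the circular import with app/__init__.py

        # hypothetical: take the next value from a database sequence
        sequence_number = db.session.execute(
            "SELECT nextval('broadcast_provider_message_number_seq')"
        ).scalar()

        self._invoke_lambda({
            "message_type": "test",
            "identifier": str(sequence_number),
        })

    def _invoke_lambda(self, payload):
        ...  # call the CBC proxy lambda (placeholder)
```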
David McDonald
be035664c4 Add operator channel to broadcast settings route
Looks identical to the government channel in terms of the interface
2021-06-09 13:49:06 +01:00
Katie Smith
829b646931 Allow "government" in broadcast_channel schema
This will allow admin to pass through a value of "government" for the
broadcast_channel. We don't have any logic around the value of service.broadcast_channel,
so no updates are needed to the tasks etc.
2021-05-11 16:56:56 +01:00
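A sketch of the kind of schema change this describes; the property list and the other enum values are assumptions, not the repo's exact schema:

```
post_service_broadcast_settings_schema = {
    "type": "object",
    "properties": {
        "broadcast_channel": {
            "type": "string",
            # "government" is now accepted alongside the existing channels
            "enum": ["test", "operator", "severe", "government"],
        },
    },
    "required": ["broadcast_channel"],
}
```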
Katie Smith
4624328c36 Make service_broadcast_settings.provider non-nullable
We set all existing null values to "all", then make the column
non-nullable. Admin is already passing through the value of "all".
2021-05-10 15:59:22 +01:00
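A sketch of the migration this describes, using standard Alembic operations (revision identifiers are placeholders):

```
from alembic import op

revision = "placeholder"
down_revision = "placeholder"


def upgrade():
    op.execute("UPDATE service_broadcast_settings SET provider = 'all' WHERE provider IS NULL")
    op.alter_column("service_broadcast_settings", "provider", nullable=False)


def downgrade():
    op.alter_column("service_broadcast_settings", "provider", nullable=True)
```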
Katie Smith
1767535def Allow service.allowed_broadcast_provider to be "all"
We want to replace the value `None` for
service.allowed_broadcast_provider with the value of "all". As a first
step, we need to allow both values. Once notifications-admin has been
changed to pass through "all" and all the data in the database has been
updated, we can update the code to stop supporting both values.
2021-05-06 15:32:02 +01:00
Ben Thorner
a2af8b052a Split up authorisation vs. sequencing checks
While both of these are integrity errors (since we should never
reach this point in the code + data), this just means the original
method comment is still relevant to what immediately follows it.
2021-04-19 17:13:15 +01:00
Ben Thorner
936c9ebdfe Test sanity checks by calling top-level task
Since the checks are only performed in one place, we can easily take
extra care to ensure this in the tests, and we don't need to do any
additional setup - except where no exception is raised, where I've left
the tests as-is to avoid doing more setup.

Note that we still check the happy path for when a provider message is
already sending - just in a different test [1].

[1]: 3d71815956/tests/app/celery/test_broadcast_message_tasks.py (L263)
2021-04-19 17:13:14 +01:00
Ben Thorner
ee52e3e2c9 Mirror integrity checks from the API
It makes sense to have these checks [1] here, since in future we may
add other ways of creating a broadcast event that omit them.

[1]: 3d71815956/app/broadcast_message/rest.py (L198)
2021-04-19 17:13:13 +01:00
Ben Thorner
0070473f31 Check for suspension before sending a broadcast
This mirrors the check we do for jobs, which are also high-impact
tasks [1]. While this shouldn't be possible, just like other checks
we're adding it here to be doubly certain.

[1]: 3d71815956/app/celery/tasks.py (L74)
2021-04-19 17:13:12 +01:00
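A minimal sketch of the kind of guard this describes; the exception class and helper name are illustrative:

```
class BroadcastIntegrityError(Exception):
    """Illustrative stand-in for the exception the task raises."""


def check_service_is_active(service):
    # mirror the suspension check done before processing jobs
    if not service.active:
        raise BroadcastIntegrityError(
            f"Broadcast cannot be sent because service {service.id} is suspended"
        )
```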
Ben Thorner
b2398fcaf4 Rename CBCProxyFatalException
We only actually use this when the data we're working with is in an
unexpected state, which is unrelated to the CBC Proxy. Using this
name also means we can re-use this exception in the next commits.

Note that we may still care if a broadcast message has expired, since
it's not expected that someone would send one in this condition.
2021-04-19 17:13:05 +01:00
David McDonald
295162c81d Move CBC proxy enable check
This change will make our development environments closer to production
even if they aren't hooked up to the CBC proxy lambda functions.

Now in development, we will create the broadcast event and create tasks
for each broadcast provider event. We will still not create actual
broadcast provider message rows in the DB or talk to the CBC proxies.

This should be helpful in development for catching any issues we
introduce to do with sending broadcast messages. In time we may wish to have some
fake CBC proxies in the AWS tools account that we can interact with to
make it even more realistic.
2021-04-12 17:05:41 +01:00
Leo Hemsted
4a5b1c23bd only send zendesk P1 for alerts
we don't need to be re-notified when someone clicks cancel
2021-04-08 12:22:18 +01:00
Leo Hemsted
9bd8c0239c look for 'live', not 'production'
config['NOTIFY_ENVIRONMENT'] is hardcoded to `'live'` in the Live config
class. The values as seen on the environment from which we send real
messages:

```
>>> json.loads(os.environ['VCAP_APPLICATION'])['space_name']  # what cloudfoundry sets
'production'
>>> os.environ['NOTIFY_ENVIRONMENT']  # we set this from cloudfoundry
'production'
>>> current_app.config['NOTIFY_ENVIRONMENT']  # hardcoded in the Live config
'live'
>>> current_app.config['NOTIFICATION_QUEUE_PREFIX']  # pulled from env var of same name
'live'
>>> current_app.config['ENV']  # this is an unrelated flask variable
'production'
```
2021-04-08 12:17:22 +01:00
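The resulting check is small enough to sketch directly; the helper name here is illustrative:

```
from flask import current_app


def is_live_environment():
    # NOTIFY_ENVIRONMENT is hardcoded to 'live' in the Live config class,
    # so compare against that - not the 'production' space name
    return current_app.config["NOTIFY_ENVIRONMENT"] == "live"
```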
Leo Hemsted
df393e36c5 send a p1 when a broadcast goes out on production
it's important to keep tabs on when these things leave our system.
Sending a zendesk ticket that triggers a P1 is probably our simplest way
of notifying the team when this happens (it's what we do with out of
hours emergencies on the admin app too). We don't have any direct
pagerduty integrations from the api app, but we already have the zendesk
client hooked up.

After broadcasts go live, we may want to change this to a P2 (but even
then, there's arguments for keeping it P1 to start with I think).

Don't cause a P1 if it goes out on staging as that might be MNOs testing.
2021-04-06 11:32:19 +01:00
Ben Thorner
a91fde2fda Run auto-correct on app/ and tests/ 2021-03-12 11:45:45 +00:00
David McDonald
3ea86bfb48 Remove hardcoded default to use test channel
There is no need for a default now, as every broadcast service has a
broadcast channel set on it.
2021-02-23 17:15:07 +00:00
David McDonald
9b21e6b04f Use sample_broadcast_service fixture
Now that every service has a row in the service_broadcast_settings
table, we want all our tests to use the `sample_broadcast_service`
fixture as this ensures it has a row in that table and is correctly
representative of what a real broadcast service looks like.
2021-02-23 17:15:06 +00:00
Leo Hemsted
0088bcd98b only retry if the broadcast message task is in sending
previously we would retry if the task was queued up for retry but the
status was "received-ack" or "received-err". We don't expect that a
task will be retried after getting this status, but if there are
duplicate tasks that could happen. Let's plan for the worst by saying
"only process a retry if the task is currently in sending".

this way, if a duplicate task is on retry and the first task goes
through successfully, the duplicate task will give up.
2021-02-18 12:03:36 +00:00
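A sketch of that guard, assuming an illustrative function name and the status values from the message:

```
def should_process_retry(provider_message):
    # only carry on with a retried task while the message is still in "sending";
    # if a duplicate task already got a "received-ack" or "received-err"
    # response, this retry gives up
    return provider_message.status == "sending"
```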
Katie Smith
6b8ebb3421 Fix linting errors 2021-02-16 09:03:38 +00:00
Leo Hemsted
fed0d4c40e Merge pull request #3137 from alphagov/revert-revert-revert
Bring back retry logic
2021-02-15 12:21:13 +00:00
Leo Hemsted
4f89be6944 Revert "Merge pull request #3125 from alphagov/revert-retry"
This reverts commit 6b9a50beff, reversing
changes made to 33f93dfea2.
2021-02-09 17:01:04 +00:00
David McDonald
b2213dad19 Move provider restriction into broadcast settings
This means we will have a much easier way of knowing what the settings
are for a broadcast service.

Note, we can just move data directly into the newer table as there is
nothing on the API or admin app that is putting data in the
`service_broadcast_provider_restriction` table; this was being done
manually for the few services that needed it.
2021-02-09 15:40:32 +00:00
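The one-off data move this describes might look roughly like this inside a migration (column names are assumptions):

```
from alembic import op


def upgrade():
    op.execute("""
        UPDATE service_broadcast_settings
        SET provider = restriction.provider
        FROM service_broadcast_provider_restriction AS restriction
        WHERE service_broadcast_settings.service_id = restriction.service_id
    """)
```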
Leo Hemsted
bee0059e53 Revert "Merge pull request #3101 from alphagov/retry-broadcasts"
This reverts commit 1bd99c779d, reversing
changes made to d390eb2cac.
2021-02-08 11:02:34 +00:00
Leo Hemsted
49e6ec1ead Revert "Merge pull request #3123 from alphagov/retry-loop-fix"
This reverts commit 541a765811, reversing
changes made to 6a9ac654a6.
2021-02-08 11:01:33 +00:00
Leo Hemsted
d582e35471 don't try and send broadcast event if it's already in technical-failure
this gives us an option to manually set a status in the database and
avoid things being stuck in a retry loop forever
2021-02-05 12:52:37 +00:00
Leo Hemsted
0ddebc63a8 reduce broadcast retry delay to 4 mins and drop prefetch.
### The facts

* Celery grabs up to 10 tasks from an SQS queue by default
* Each broadcast task takes a couple of seconds to execute, or double
  that if it has to go to the failover proxy
* Broadcast tasks delay retry exponentially, up to 300 seconds.
* Tasks are acknowledged when celery starts executing them.
* If a task is not acknowledged before its visibility timeout of 310
  seconds, sqs assumes the celery app has died, and puts it back on the
  queue.

### The situation

A task stuck in a retry loop was reaching its visibility timeout, and as
such SQS was duplicating it. We're unsure of the exact cause of reaching
its visibility timeout, but there were two contributing factors: The
celery prefetch and the delay of 300 seconds. Essentially, celery grabs
the task, keeps an eye on it locally while waiting for the delay ETA to
come round, then gives the task to a worker to do. However, that worker
might already have up to ten tasks that it's grabbed from SQS. This
means the worker only has 10 seconds to get through all those tasks and
start working on the delayed task, before SQS moves the task back into
available.

(Note that the delay of 300 seconds is translated into a timestamp based
on the time you called self.retry and put the task back on the queue.
Whereas the visibility timeout starts ticking from the time that a
celery worker picked up the task.)

### The fix

#### Set the max retry delay for broadcast tasks to 240 seconds

Setting the max delay to 240 seconds means that instead of a 10 second
buffer before the visibility timeout is tripped, we've got a 70 second
buffer.

#### Set the prefetch limit to 1 for broadcast workers

This means that each worker will have up to 1 currently executing task,
and 1 task pending execution. If it has these, it won't grab any more
off the queue, so they can sit there without their visibility timeout
ticking up.

Setting a prefetch limit to 1 will result in more queries to SQS and a
lower throughput. This might be relevant in, eg, sending emails. But the
broadcast worker is not hyper-time critical.

https://docs.celeryproject.org/en/3.1/getting-started/brokers/sqs.html?highlight=acknowledge#caveats
https://docs.celeryproject.org/en/3.1/userguide/optimizing.html?highlight=prefetch#reserve-one-task-at-a-time
2021-02-05 12:49:51 +00:00
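The two changes can be sketched roughly as below; option names vary between Celery versions, and the repo's actual config layout will differ:

```
# broadcast worker: prefetch at most one task beyond the one being executed
CELERYD_PREFETCH_MULTIPLIER = 1  # 'worker_prefetch_multiplier' in newer Celery

# cap the exponential backoff well below SQS's 310 second visibility timeout
BROADCAST_MAX_RETRY_DELAY = 240


def retry_countdown(retry_count):
    # 1, 2, 4, 8, ... seconds, capped at 240
    return min(2 ** retry_count, BROADCAST_MAX_RETRY_DELAY)
```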
Leo Hemsted
1ef3f96bd7 test sending broadcast message for all statuses of existing provider_msg
also clean up some comments
2021-02-04 11:50:06 +00:00
Leo Hemsted
bbae209200 check provider message status etc when sending rather than when retrying
previously if we were deciding whether to retry or not, it meant that
future events wouldn't have context of what the task is doing. We'd
run into issues with not knowing what references to include when
updating/cancelling in future events.

Instead of deciding whether to retry or not, always retry. Then, when
any event sends, regardless of whether it is a first attempt or a retry,
check the status of previous events for that broadcast message. There
are a few things that will mean we don't send.

* If the finishes_at time has already elapsed (ie: we have been trying
  to resend this message and haven't had any luck and now the data is
obsolete)
* A previous event has no provider message (this means that we never
  picked the previous event off the queue for some reason)
* A previous event has a provider message that has anything other than
  an ack response. This includes sending (the old message is currently
  being sent), and technical-failure/returned-error (the old message is
  currently in the retry loop, having experienced issues).
2021-02-03 18:11:52 +00:00
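A condensed sketch of those checks, run on every send attempt (first try or retry); helper and attribute names are illustrative rather than the repo's own:

```
from datetime import datetime


def can_send_event(broadcast_message, earlier_events, provider):
    if broadcast_message.finishes_at < datetime.utcnow():
        return False  # the content is obsolete now, give up

    for earlier_event in earlier_events:
        provider_message = earlier_event.get_provider_message(provider)
        if provider_message is None:
            return False  # the earlier event was never picked off the queue
        if provider_message.status != "returned-ack":
            return False  # still sending, or stuck in a retry loop after an error

    return True
```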
Leo Hemsted
96a0935d1c update broadcast provider message status on success/error
so we can distinguish erroring messages that are currently retrying
from those that sent successfully.
2021-02-03 18:03:16 +00:00
Leo Hemsted
3dcbfc3612 re-use existing provider message if task retries
previously it would crash with a unique constraint error. now, grab the
previous message.
2021-02-03 18:01:54 +00:00
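A sketch of the get-or-create pattern this describes, in Flask-SQLAlchemy style; the model and import locations are assumptions:

```
from app import db  # assumed imports - the model and db may live elsewhere
from app.models import BroadcastProviderMessage


def get_or_create_provider_message(broadcast_event, provider):
    provider_message = BroadcastProviderMessage.query.filter_by(
        broadcast_event_id=broadcast_event.id,
        provider=provider,
    ).one_or_none()

    if provider_message is None:
        provider_message = BroadcastProviderMessage(
            broadcast_event_id=broadcast_event.id,
            provider=provider,
        )
        db.session.add(provider_message)
        db.session.commit()

    return provider_message
```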
Leo Hemsted
ac34fb9c05 retry sending broadcasts
Retry tasks if they fail to send a broadcast event. Note that each task
tries the regular proxy and the failover proxy for that provider. This
runs a bit differently than our other retries:

Retry with exponential backoff. Our other tasks retry with a fixed delay
of 5 minutes between tries. If we can't send a broadcast, we want to try
immediately. So instead, implement an exponential backoff (1, 2, 4, 8,
... seconds delay). We can't delay for longer than 310 seconds due to
visibility timeout settings in SQS, so cap the delay at that amount.

Normally we give up retrying after a set amount of retries (often 4
hours). As broadcast content is much more important than normal
notifications, we don't ever want to give up on sending them to phones...

...UNLESS WE DO!

Sometimes we do want to give up sending a broadcast though! Broadcasts
have an expiry time, when they stop showing up on people's devices, so if
that has passed then we don't need to send the broadcast out.

Broadcast events can also be superseded by updates or cancels. Check
that the event is the most recent event for that broadcast message, if
not, give up, as we don't want to accidentally send out two conflicting
events for the same message.
2021-02-03 16:43:01 +00:00
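A compressed sketch of the behaviour described above; the helpers and most attribute names are illustrative, and the `notify_celery` import is an assumption about how tasks are registered in the app:

```
from datetime import datetime

from app import notify_celery  # assumed import

MAX_RETRY_DELAY = 310  # seconds - the SQS visibility timeout mentioned above


@notify_celery.task(bind=True, name="send-broadcast-provider-message", max_retries=None)
def send_broadcast_provider_message(self, broadcast_event_id, provider):
    event = get_broadcast_event(broadcast_event_id)  # illustrative helper

    if event.broadcast_message.finishes_at < datetime.utcnow():
        return  # expired: it would no longer show on anyone's phone

    if not event.is_most_recent_event_for_message():
        return  # superseded by an update or cancel: don't send conflicting events

    try:
        send_via_cbc_proxy(event, provider)  # tries the regular then the failover proxy
    except Exception:
        # exponential backoff: 1, 2, 4, 8, ... seconds, capped at the visibility timeout
        self.retry(countdown=min(2 ** self.request.retries, MAX_RETRY_DELAY))
```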
David McDonald
f90b479c8d Use service setting to pick broadcast channel
For the moment, this falls back to the "test" channel if the service does
not have a ServiceBroadcastSetting, but we intend in future PRs to enforce
that all broadcast services have this property.
2021-02-01 14:10:41 +00:00
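The fallback is small enough to sketch; the attribute names are illustrative:

```
def get_broadcast_channel(service):
    # use the service's configured channel, falling back to "test" until
    # every broadcast service is guaranteed a ServiceBroadcastSetting
    if service.broadcast_settings:
        return service.broadcast_settings.channel
    return "test"
```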
David McDonald
b2ed9efe85 Extend test cases for every MNO
There seems to be no harm in extending these to every mobile network, just
to give us slightly better coverage.
2021-02-01 14:10:40 +00:00
David McDonald
2aad3163e6 Allow CBC proxy client to take channel
This moves the hardcoding to test channels one step up to where we call
`create_and_send_broadcast`

After this, we can start to vary whether we give it the 'test' or
'severe' channel based on the service's channel setting.
2021-02-01 14:10:38 +00:00
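A sketch of the change in shape, with an assumed payload; only the `channel` parameter is the point here:

```
# method on the CBC proxy client; payload fields are assumptions
def create_and_send_broadcast(self, identifier, headline, description, areas, channel):
    # 'channel' used to be hardcoded to the test channel inside the client;
    # now the caller decides whether to pass 'test' or 'severe'
    self._invoke_lambda({
        "identifier": identifier,
        "headline": headline,
        "description": description,
        "areas": areas,
        "channel": channel,
    })
```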
David McDonald
20627d96ea Put all broadcast tasks on the broadcast worker 2021-01-13 17:21:40 +00:00
Leo Hemsted
2e929754ff add content to broadcast_message and make template fields nullable
we want to be able to create broadcast messages without templates. To
start with, these will come from the API, but in future we may want to
let people create via the admin interface without creating a template
too.

populate a non-nullable content field with the values supplied via the
template (or supplied directly if via api).
2021-01-08 18:58:17 +00:00
Pea Tyczynska
45b806f6db Remove unused args from cancel broadcast call in tasks 2020-12-14 11:31:05 +00:00
Pea Tyczynska
8af4b27fd6 Separate functions for cbc clients
Also move message_format to the clients.
2020-12-09 11:13:50 +00:00
Pea Tyczynska
553565bc91 Send message format to CBC
Either cap or ibag
2020-12-08 11:15:26 +00:00
Pea Tyczynska
e95dc9450e Include message number in send_broadcast_provider_message 2020-12-07 13:13:12 +00:00
Pea Tyczynska
a186d2d296 Format sequential number into an 8 char long hex
As per Vodafone spec for ibag format message number
2020-12-07 13:13:11 +00:00
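The formatting rule is simple enough to show directly; the function name is descriptive rather than necessarily the repo's:

```
def format_sequential_number(sequential_number):
    # zero-padded, 8 character hexadecimal, per the Vodafone IBAG message number spec
    return format(sequential_number, "x").zfill(8)


assert format_sequential_number(255) == "000000ff"
```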
Pea Tyczynska
b34bffaae6 Sends sequential number to Vodafone as link test 2020-12-07 13:13:11 +00:00
Leo Hemsted
72f8a15d4f respect service broadcast provider restrictions when sending 2020-12-03 13:39:09 +00:00
Leo Hemsted
e2fa0116a0 add CBC_PROXY_ENABLED config flag to control if tasks are triggered
previously we made some incorrect assumptions about set-up on staging
and prod - they currently don't have any cbc_proxy aws creds at all.

We shouldn't be attempting canaries or link tests when there's no AWS
infrastructure to connect to.

We also shouldn't bother writing a row into the database at all for the
broadcast_provider_message since we're not even attempting to send, and
we shouldn't get confused between messages that failed and messages we
never wanted to send at all.
2020-11-26 10:16:22 +00:00
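A sketch of how the flag gates the work; the function name is illustrative, the config key is the one from the message:

```
from flask import current_app


def trigger_link_test_if_enabled(provider):
    if not current_app.config["CBC_PROXY_ENABLED"]:
        # no AWS infrastructure to talk to on this environment, and we don't
        # want broadcast_provider_message rows for sends we never attempted
        return

    ...  # carry on: write the row and invoke the CBC proxy lambda
```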