If the reference from the cancel CAP XML we received via the API does
not match any existing broadcast, return 404. Do the same if the
service id doesn't match.
Also refactor the code that cancels a broadcast out into a separate
function, called only from the create_broadcast function. This stops
create_broadcast from becoming too big and complex and doing too many
things.
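A minimal sketch of the shape this could take - the dao helpers, error
class and status value below are assumptions rather than the actual
implementation:

```python
def cancel_broadcast_message(service_id, reference):
    broadcast = dao_get_broadcast_message_by_reference(reference)  # hypothetical

    if broadcast is None or str(broadcast.service_id) != str(service_id):
        # No broadcast matches this reference for this service, so 404
        raise InvalidRequest("Broadcast message not found", status_code=404)

    broadcast.status = "cancelled"
    dao_update_broadcast_message(broadcast)  # hypothetical
    return broadcast
```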
This is similar to the corresponding endpoint for services. However,
it is a little simpler since we don't need to worry about always having
at least one team member for an organisation.
The new dao function added, `dao_remove_user_from_organisation`, is also
simpler than `dao_remove_user_from_service` since we don't have any
organisation permissions to deal with.
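As a rough sketch, assuming a plain many-to-many relationship between
organisations and users (the model and session details are
assumptions):

```python
def dao_remove_user_from_organisation(organisation, user):
    # Unlike dao_remove_user_from_service there are no permission rows
    # to clean up, so we only need to drop the membership itself.
    organisation.users.remove(user)
    db.session.commit()
```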
Investigation with EXPLAIN and EXPLAIN ANALYZE for the notification
history table shows this is another instance of [1] but for the key
type column. Swapping "!=" for "IN" solves the problem.
[1]: https://github.com/alphagov/notifications-api/pull/3360
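For illustration, the change is roughly the following (the query
variable and the literal key type values are placeholders):

```python
# Before: the planner handles the negated equality on key_type poorly
query = query.filter(NotificationHistory.key_type != "test")

# After: an explicit IN list lets Postgres plan the filter efficiently
query = query.filter(NotificationHistory.key_type.in_(["normal", "team"]))
```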
If a service has not sent any SMS in the financial year, the free allowance was showing up as 0 rather than the number in annual billing. The query has been updated to use an outer join so that the free allowance is returned even when there is no ft_billing data.
There is a potential performance enhancement to only return the data for the services of the organisation in the `fetch_sms_free_allowance_remainder_until_date` subquery. I will investigate this in a subsequent PR.
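A hedged sketch of the shape of the fix - the models and columns below
are assumptions about the billing tables:

```python
from sqlalchemy import func

free_allowance_query = (
    db.session.query(
        AnnualBilling.service_id,
        AnnualBilling.free_sms_fragment_limit,
        func.coalesce(func.sum(FactBilling.billable_units), 0).label("units_used"),
    )
    # outer join so services with no ft_billing rows still get their allowance
    .outerjoin(FactBilling, FactBilling.service_id == AnnualBilling.service_id)
    .group_by(AnnualBilling.service_id, AnnualBilling.free_sms_fragment_limit)
)
```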
This covers the fact that we only exclude test notifications and that
the key type is copied over correctly. In the next commits we're going
to modify this part of the query, so it's important it's covered.
We want admin to send a POST request to this route if the data contains
a message recipient (a phone number or email address) so that this does
not show in the logs. This changes the route to accept both GET and POST
requests.
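Roughly, the route change looks like the following (the path, handler
and lookup function are placeholders):

```python
from flask import jsonify, request

@blueprint.route("/search", methods=["GET", "POST"])
def search():
    # POSTing the recipient in the JSON body keeps it out of the access
    # logs; GET is still accepted for backwards compatibility.
    data = request.get_json() if request.method == "POST" else dict(request.args)
    return jsonify(find_notifications(data)), 200  # hypothetical lookup
```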
We saw it fail again last night when calculating how many notifications
were sent for one of our services to put in the ft_notification_status
table. It ran into the SQLAlchemy statement timeout again.
To get us through the holiday period, let's make it 2 hours, as surely
that will be enough, and then we can fix this properly.
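For illustration, one way a bump like this can be expressed, assuming
the timeout is applied as a Postgres statement_timeout on new
connections (the config key and mechanism here are assumptions):

```python
# 2 hours, in milliseconds
SQLALCHEMY_ENGINE_OPTIONS = {
    "connect_args": {"options": "-c statement_timeout=7200000"},
}
```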
Having a pool size of 30 connections means that if we receive a large
number of requests, with the current configuration, the API would end
up holding onto 30 connections per worker * 4 workers per instance * 35
instances = 4200 connections. With a limit of 5000 connections, this
means we would only have 800 connections left to share between the
workers or for overflow usage (note that even the overflow for the API
would take us above the 5000 limit - 10 overflow connections per worker
* 4 * 35 = 1400 connections, for a total of 5600 _only_ for the API).
During our load tests this led to a deadlock situation where nothing
could retrieve connections to deal with a queue build-up.
The reduced pool size allowed for a much more graceful degradation of
the service where, under significant load, response times would
increase but we would still manage to serve all the requests.
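For illustration, the knobs in question look something like this (the
reduced pool_size below is a placeholder, not the value actually
deployed):

```python
SQLALCHEMY_ENGINE_OPTIONS = {
    "pool_size": 5,      # e.g. 5 * 4 workers * 35 instances = 700 held connections
    "max_overflow": 10,  # overflow becomes real headroom under the 5000 limit
    "pool_timeout": 30,  # give up waiting for a connection instead of stalling
}
```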
When running the nightly reporting tasks we are seeing that some tasks fail because the query times out. We need to revisit how to optimise the query, but this will at least let the process finish.
At the moment, when we are processing and sending an SMS we open
a DB connection at the start of the celery task and then close it
at the end of the celery task. Nice and simple.
However, during that celery task we make an HTTP call out to our
SMS providers. If our SMS providers have problems or response times
start to slow, then it means we have an open DB connection sat waiting
for our SMS providers to respond, which could take seconds. If our
SMS providers grind to a halt, this would cause all of the
celery tasks to hold on to their connections and we would run out
of DB connections and Notify would fall over.
We think we can solve this by closing the DB session which releases
the DB connection back to the pool.
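The shape of the change is roughly the following - the task and dao
names here are placeholders for the real ones:

```python
def deliver_sms(notification_id):
    notification = dao_get_notification(notification_id)  # hypothetical
    recipient, content = notification.to, notification.content  # read while attached

    # Release the DB connection back to the pool before the slow network
    # call, so a struggling SMS provider can't pin every connection.
    db.session.close()

    response = sms_client.send_sms(to=recipient, content=content)

    # Touching the session again checks a new connection out of the pool.
    dao_update_notification_status(notification_id, response)  # hypothetical
```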
Note, we've seen this happen in staging during load testing when our
SMS provider stub has fallen over. We've never seen it in production,
and it may be less likely to happen there as we are balancing traffic
across two providers and they generally have very good uptime.
One downside to be aware of is that there could be a slight increase in
the time spent sending an SMS, as we will now spend a bit of extra time
closing the DB session and then reopening it again after the HTTP
request is done.
Note, there is no reason this approach couldn't be copied for our
email provider too if it appears successful.
We can't control who might be sending messages on inbound numbers that
we own, i.e. this log isn't an actionable error. It looks like it used
to represent something that _was_ an error [1], but that's not the case
anymore, so it seems reasonable to downgrade it.
[1]: d99ab329eb (diff-80d123d9abb40f80a221979940657a2751cc7cb33f255aa8f352a8324023e022L125)
If the S3 object is missing [1], then that's what we want, so we
don't need such a severe log for it, but we still want to know as
it's not expected. This is separate to more general "ClientError"
exceptions, which could mean anything.
There weren't any tests to cover missing S3 objects, so I've added
one. I don't think we need a test for ClientErrors:
- If there was no handler, the task would fail and we'd learn about
it that way.
- The scope of the calling task is now much smaller, so it matters
less than it used to [2].
[1]: 81a79e56ce/app/letters/utils.py (L52)
[2]: f965322f25
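For illustration, the distinction is roughly the one below - the
calling code, error codes checked and log message are assumptions:

```python
from botocore.exceptions import ClientError
from flask import current_app

def fetch_letter_pdf(notification):
    try:
        return get_letter_pdf(notification)  # hypothetical S3 wrapper
    except ClientError as e:
        if e.response["Error"]["Code"] in ("NoSuchKey", "404"):
            # A missing object is fine here, so a less severe log will do
            current_app.logger.warning("Letter PDF missing for %s", notification.id)
            return None
        # Any other ClientError could mean anything, so keep it loud
        raise
```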
A bunch of these tests are now covered in the task test, so I got rid
of some. Now that the "how long ago to delete" question is asked in the
task rather than in the dao, and only one service is looked at at a
time, we don't need to worry about data retention, etc. Hopefully this
makes the tests simpler - there may still be some duplicates or
overlaps between the various cases.
We really don't gain anything by running each service delete in
sequence - we get the services, and then just loop through them
deleting per service. By deleting per service in separate tasks, we can
take advantage of parallelism. The only thing we lose is some log
lines, but I don't think we're that interested in them.
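Roughly the shape of the fan-out - the task names, queue and dao helper
below are assumptions:

```python
@notify_celery.task(name="delete-notifications-older-than-retention")
def delete_notifications_older_than_retention():
    for service in dao_get_services_with_notifications_to_delete():  # hypothetical
        # One task per service, so the deletes run in parallel across workers
        delete_notifications_for_service.apply_async(
            args=[str(service.id)], queue="reporting-tasks"
        )
```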
Only set the query limit in the move_notifications dao function - the
task doesn't really care about the technical implementation of how the
notifications are deleted.
common_functions is full of AWS commands to manipulate workers running
on EC2 instances. We haven't done any of that for years, since we no
longer run workers on EC2 directly.
delete_sqs_queues contains scripts to get a list of SQS queues and put
their details in a CSV, or take a details CSV and then delete all those
queues.
It's not clear what the use case for it was, but no one's used it for
years and we can just use the admin console if we really need to.
Two vulnerabilities in lxml <4.6.5 (GHSL-2021-1037 and GHSL-2021-1038):
https://github.com/lxml/lxml/blob/master/CHANGES.txt
This also removes docopt, as we don't use it except for a dev script
(which we might not need anyway).
This will help us monitor issues with delivery receipts and keep
track of provider performance over time.
I'm not concerned about performance here:
- The number of notifications to time out is usually small.
- This task only runs once a day.
- Calls to StatsD are quick and cheap.
A gauge is more useful as we can visualise it and combine it with
other stats - we already have other stats for the total number of
notifications sent by provider, and we can extrapolate the number
of slow notifications using this, if needed.
We also still have logs to say the task is running, as well as a
log in the calling code when we actually make a switch [1], so
we're not losing anything by removing the log here.
[1]: a9306c4557/app/celery/scheduled_tasks.py (L117)
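A sketch of what recording the gauge might look like - the stat name,
counting helper and statsd client attribute are all assumptions:

```python
slow_counts = dao_get_slow_delivery_counts_by_provider()  # hypothetical helper
for provider_name, count in slow_counts.items():
    # A gauge per provider can be graphed and combined with existing send totals
    statsd_client.gauge(f"slow-delivery.{provider_name}", count)
```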
Previously this was limited to 500K notifications. While we don't
expect to reach this limit, it's not impossible, e.g. if we had a
repeat of the incident where one of our providers stopped sending us
status updates. Although that's not great, it's worse if our code can't
cope with the unexpectedly high volume.
This reuses the technique we have elsewhere [1] to keep processing
in batches until there's nothing left. Specifying a cutoff point
means the total amount of work to do can't keep growing.
[1]: 2fb432adaf/app/dao/notifications_dao.py (L441)
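A rough sketch of the batching pattern - the function names and batch
size are illustrative:

```python
def timeout_notifications(cutoff_time):
    total = 0
    while True:
        # hypothetical dao helper that times out at most `limit` notifications
        updated = dao_timeout_notifications_before(cutoff_time, limit=10000)
        if not updated:
            break  # nothing left before the cutoff, so the work is bounded
        total += updated
    return total
```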