This reduces the amount of code we need to copy when we add a new provider. I don't
think we need to log the URL or status code each time:
- The URL is always the same.
- A "200" status code is implicit in "success".
- Other status codes will be reported as exceptions.
Removing these specific elements means "record_outcome" is generic
and can be de-duplicated in the base class.
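For illustration, the de-duplicated shape could look something like this (the class and method names here are hypothetical, not the real client code):

```python
import logging

logger = logging.getLogger(__name__)


class SmsClient:
    # Hypothetical base class, just to sketch the idea.
    name = "base"

    def record_outcome(self, success):
        # Nothing provider-specific left to log: no URL, no status code.
        if success:
            logger.info("SMS request for %s succeeded", self.name)
        else:
            logger.warning("SMS request for %s failed", self.name)


class ReachClient(SmsClient):
    # A new provider inherits record_outcome instead of copying it.
    name = "reach"
```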
This is never overridden and can't be used in practice because all
SMS clients have to use the same interface. Removing it will make
it possible to DRY up some of the code in this method.
Previously we used a combination of "provider.name" and "get_name()",
which was confusing. Using a non-property function also gave me the
impression that the name was more dynamic than it actually is.
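Roughly the difference, with a hypothetical class:

```python
class SmsClient:
    def __init__(self, name):
        self._name = name

    @property
    def name(self):
        # A property signals that the name is a fixed attribute of the
        # client, not something computed on each call.
        return self._name


client = SmsClient("firetext")
assert client.name == "firetext"  # rather than client.get_name()
```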
This is enough to update a notification in the DB:
1. First create a notification in the UI and send it.
2. Then reset its attributes to pretend it's for Reach.
   update notifications set
     sent_at = null,
     sent_by = null,
     notification_status = 'sending'
   where id = 'some-uuid';
3. Change "notification_id" to "<some-uuid>" in the code.
4. Call the boilerplate endpoint for Reach callbacks.
   curl -X POST localhost:6011/notifications/sms/reach
Interestingly there's no foreign key constraint on "sent_by" in the
DB, so this just works: the notification is updated.
Fixes:
> reduced_provider = providers[identifier]
E KeyError: 'firetext'
Note that the mock return value in the other test was wrong [^1].
[^1]: bff97f0bbe/app/dao/provider_details_dao.py (L73)
There's nothing we know we need to do manually now that we resolve stuck
letters automatically. Letters could still get into this state, so it's
worth alerting us. However, we don't have any concrete steps for fixing
these letters, so we should just remove the runbook entirely.
Currently "test_send_letter_notification_via_api" fails at the final
stage in create-fake-letter-response-file [^1]:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=6011): Max retries exceeded with url: /notifications/letter/dvla (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0xffff95ffc460>: Failed to establish a new connection: [Errno 111] Connection refused'))
This only applies when running in Docker, so the default should still
be "localhost" for the Flask app itself.
[^1]: 5093064533/app/celery/research_mode_tasks.py (L57)
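A sketch of the kind of override this implies; the environment variable name here is an assumption, not necessarily what the code uses:

```python
import os

# The Flask app keeps "localhost" as its default; the Docker environment can
# point the fake-response task at the right hostname instead.
api_host = os.environ.get("API_HOST_NAME", "http://localhost:6011")

fake_dvla_response_url = f"{api_host}/notifications/letter/dvla"
```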
It is currently 60 seconds, but we have had two incidents in the
past week where there was a connection error talking to a service
and each request took up to 60 seconds before failing. When this
happens, even a few of these callbacks will completely hog the
service callback worker and build up a big queue of all the other
service callbacks.
5 seconds has been chosen as that is still a pretty decent length
of time for a simple web request that should just be giving the
service a little bit of information to store. It should be a big
enough reduction to dramatically reduce this problem for the moment.
I'm open to this number being changed in the future based on how we
see it perform.
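In terms of the request itself, the change amounts to something like this (the URL and payload are placeholders):

```python
import requests

callback_url = "https://example.com/notify-callback"          # placeholder
callback_data = {"id": "some-uuid", "status": "delivered"}    # placeholder

# A 5 second timeout stops one slow or unreachable callback URL from tying
# up a worker for a full minute before the request finally fails.
response = requests.post(callback_url, json=callback_data, timeout=5)
response.raise_for_status()
```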
Since September 2019 we've had to log on to production around once every
twenty days to restart the virus scan task for a letter. Most of the
time this is just a case of making sure the file is in the scan bucket,
and then triggering the task. If the file isn't in the scan bucket we'd
need to do some more manual investigation to find out exactly where the
file got stuck, but I can only remember times when it's been in the scan
bucket.
So if the file is in the scan bucket, we can just check that with code
and kick the task off automatically.
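A sketch of the automated check, assuming boto3 for the bucket lookup; the bucket and task names are illustrative:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")


def file_is_in_scan_bucket(bucket_name, filename):
    # head_object raises ClientError if the key doesn't exist.
    try:
        s3.head_object(Bucket=bucket_name, Key=filename)
        return True
    except ClientError:
        return False


# Hypothetical usage: if the PDF is still in the scan bucket, re-trigger the
# virus scan task rather than waiting for someone to log on to production.
# if file_is_in_scan_bucket("letters-scan", filename):
#     scan_file.apply_async([filename], queue="antivirus-tasks")
```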
This follows the pattern for invite emails where the admin app tells the
API which domain to use when generating the link.
This will start working once the admin change is merged:
- [ ] TBC
It won’t break anything if it’s merged before the admin change.
Daily volumes report: total volumes across the platform, aggregated by whole business day (bst_date).
Volumes by service report: total volumes per service, aggregated over the date range given.
NB: start and end dates are inclusive.
If we have too many returned letters, we'll exceed SQS's max task size
of 256KB. Cap it to 5000 - this is probably a bit conservative but
follows the initial values we used when implementing this for the
collate-letters-task [^1]. We also follow the pattern of compressing
the SQS payload just to reduce it a little more.
[^1]: https://github.com/alphagov/notifications-api/pull/1536
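A sketch of the cap-and-compress approach; the helper names are illustrative, only the 5000 cap comes from above:

```python
import base64
import json
import zlib

MAX_RETURNED_LETTERS_PER_TASK = 5000  # keeps each SQS message well under 256KB


def chunk(references, size=MAX_RETURNED_LETTERS_PER_TASK):
    for i in range(0, len(references), size):
        yield references[i:i + size]


def compress_for_sqs(references):
    # Compress the JSON payload to shave a bit more off the message size.
    return base64.b64encode(zlib.compress(json.dumps(references).encode())).decode()
```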
We had an inbound number in the database with a value of ''. This
could happen if there are blank lines in the inbound numbers file
we use for the `insert-inbound-numbers` command. To avoid this
happening again, the command now calls `.strip()` on each line of the
file and only inserts a row if the result is truthy (i.e. not '').
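The guard amounts to something like this (sketch only, the filename is a placeholder):

```python
with open("inbound-numbers.txt") as f:
    # Blank lines strip down to '', which is falsy, so they never become rows.
    numbers = [line.strip() for line in f if line.strip()]
```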
This changes the scheduled task that raises an alert if letters are
still sending, moving it from 1530 to 1700. DVLA have reported that our
"monitoring is executing just before we actually mark them as ‘despatched’
and send you the feedback files" and asked us to make the check a little later.
We don't actually contact DVLA until the morning after the alert anyway,
so this won't affect the process of getting in touch with them.
This change will require Cronitor to be updated for the new time.
Currently we alert if a service wastes £16 of SMS. It may cost us
around that amount just to deal with the alert, especially if the
service refuses to clean up their data.
This bumps the threshold to something more alarming, which should
make it more reasonable to suspend the service if we can show that
they've already wasted public money. £160 seems like a reasonable
compromise between "have wasted" and "could waste".
Note: we previously compromised on 1000 [1] down from 63K [2]. I
think we can afford to go a little bit higher.
[1]: https://github.com/alphagov/notifications-api/pull/3234
[2]: https://github.com/alphagov/notifications-api/pull/3221
This makes a few changes to:
- Make local development consistent with our other apps. It's now
faster to start Celery locally since we don't try to build the
image each time - this is usually quick, but unnecessary.
- Add support for connecting to a local Redis instance. Note that
the previous suggestion of "REDIS = True" was incorrect as this
would be turned into the literal string "True" - see the sketch below.
I've also co-located and extended the recipes in the Makefile to
make them a bit more visible.
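On the Redis point above, a minimal sketch of why the string matters; the variable name REDIS_ENABLED is an assumption:

```python
import os

# Environment variables are always strings, so a value like "True" or "1"
# has to be compared explicitly rather than used as a boolean directly.
REDIS_ENABLED = os.environ.get("REDIS_ENABLED", "0") == "1"
```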
Before we implemented ‘cancel’, any updates to a broadcast went through
the admin app. This meant the admin app could deal with clearing the
cache any time a broadcast was updated by a user performing an action.
Now that a broadcast can be updated without the admin app being involved,
we have another place we need to clear the cache from.
If we don’t do this then the broadcast can look like it’s still going
even though it’s successfully been cancelled.
Because the `<reference>` field of a `cancel` message can contain an
arbitrary number of items it’s possible for it to reference more than
one current alert.
In this case it is ambiguous which alert should be cancelled, so we
should raise a custom error.
This will help people know that they have to manually go into Notify and
figure out which alert(s) to cancel there.
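A sketch of the check; the exception name and message are illustrative:

```python
class MultipleCurrentAlertsError(Exception):
    # Hypothetical custom error - the real name and message may differ.
    pass


def find_alert_to_cancel(current_alerts, references):
    matching = [alert for alert in current_alerts if alert.reference in references]
    if len(matching) > 1:
        raise MultipleCurrentAlertsError(
            "References match more than one current alert - "
            "cancel the right one manually in Notify"
        )
    return matching[0] if matching else None
```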
It is possible that, among the references Environment Agency give us for
which broadcast to cancel, there could be references for older, already
expired broadcasts.
This would be the case if someone cancelled a broadcast in Notify, then
issued and later tried to cancel another broadcast to the same area. The
Flood Warning Service has no way of knowing that the first broadcast has
already been cancelled in Notify, so it would add its reference to the
list of things to be cancelled.
We can prevent this by filtering out already-cancelled and expired
broadcasts before looking up which one should be cancelled.
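The filtering step could look roughly like this; the status value and attribute names are assumptions:

```python
from datetime import datetime, timezone


def filter_out_finished(broadcasts):
    # Drop anything already cancelled or past its expiry before working out
    # which broadcast the references point at.
    now = datetime.now(timezone.utc)
    return [
        broadcast for broadcast in broadcasts
        if broadcast.status != "cancelled" and broadcast.finishes_at > now
    ]
```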
The XML for an alert requires a `<description>` field. The XML for
a `<cancel>` may have a `<description>` field populated (although we
ignore the contents) but it may also be empty.
This commit updates the schema to leave all the validation to the
view layer, which can decide when to validate the content of the
`<description>` field and when not to.
Previously we were looping over data from the Notifications/History
table and then shovelling it into the status table, one row at a time
- plus an extra delete to clean up any existing data.
This replaces that with a batch insertion, similar to how we archive
notifications [1], but using a simple subquery (via "from_select" [2])
instead of a temporary table.
To make the select compatible with the insert, I've used "literal"
to inject the constant pieces of data, so each row has everything it
needs to go into the status table.
[1]: 9ce6d2fe92/app/dao/notifications_dao.py (L295)
[2]: https://docs.sqlalchemy.org/en/14/core/dml.html#sqlalchemy.sql.expression.Insert.from_select
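A simplified sketch of the pattern, with hypothetical table definitions (the real models have far more columns):

```python
from datetime import date

from sqlalchemy import Column, Date, Integer, MetaData, String, Table, literal, select

metadata = MetaData()

notifications = Table(
    "notifications", metadata,
    Column("id", Integer, primary_key=True),
    Column("notification_status", String),
)
notification_status_facts = Table(
    "notification_status_facts", metadata,
    Column("notification_id", Integer),
    Column("notification_status", String),
    Column("bst_date", Date),
)

# Batch insert via a subquery: literal() injects the constant pieces (here the
# processing day) so each selected row lines up with the insert's column list.
process_day = date(2021, 1, 1)
stmt = notification_status_facts.insert().from_select(
    ["notification_id", "notification_status", "bst_date"],
    select(
        notifications.c.id,
        notifications.c.notification_status,
        literal(process_day),
    ),
)
```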
This is consistent with the way we do billing updates [1] and is a
bit less clunky. Functionally it should be the same - note that the
tests already cover the "overwriting" behaviour if a row exists.
[1]: 9ce6d2fe92/app/dao/fact_billing_dao.py (L522)
If someone tries to cancel a broadcast but the references don’t match
an existing broadcast, we correctly return a 404.
If they don’t provide any references then we get an exception. This
commit catches the missing references and returns a 400. I think this
is more appropriate because it’s a malformed request, rather than a
well-formed request that doesn’t match our data. It also lets us write a
more specific and helpful error message.
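A sketch of the distinction, Flask-style; the function name and message are illustrative:

```python
from flask import abort


def validate_cancel_request(references, matching_broadcasts):
    # references: parsed from the cancel message
    # matching_broadcasts: current broadcasts whose reference appears in that list
    if not references:
        # Malformed request: nothing to look up, so reject it outright.
        abort(400, "Cancel message must contain at least one reference")
    if not matching_broadcasts:
        # Well-formed request that doesn't match our data.
        abort(404)
```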