The maximum content count of a broadcast varies depending on its
encoding, so we can’t simply validate it against a schema. This commit
moves to using the validation from `notifications-utils`, and returning a
custom error response.
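A minimal sketch of the rule (not the exact `notifications-utils` API; the limits, the encoding check and the error class name here are illustrative):

```python
class BadRequestError(Exception):
    """Illustrative custom error, turned into a 400 response by the API's error handlers."""

    def __init__(self, message):
        super().__init__(message)
        self.message = message


def validate_broadcast_content(content, fits_in_gsm7):
    # Illustrative limits: content that fits in the GSM-7 character set can
    # be longer than content that needs UCS-2, which is why a single JSON
    # schema `maxLength` can't express the rule.
    max_length = 1395 if fits_in_gsm7 else 615
    if len(content) > max_length:
        raise BadRequestError(
            f"Content must be {max_length} characters or fewer for this "
            f"encoding (got {len(content)})"
        )
```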
We don’t support these methods at the moment. Instead we were just
ignoring the `msgType` field, so issuing one of these commands would
cause a new alert to be broadcast 🙃
We might want to support `Cancel` in the future, but for now let’s
reject anything that isn’t `Alert` (CAP terminology for the initial
broadcast).
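Roughly what the guard looks like, with an illustrative error class name:

```python
class UnsupportedMsgTypeError(Exception):
    """Illustrative custom error; the real endpoint would return a 400 from this."""


def check_msg_type(cap_message):
    # CAP defines Alert, Update, Cancel, Ack and Error; only Alert (the
    # initial broadcast) is accepted for now.
    if cap_message.get("msgType") != "Alert":
        raise UnsupportedMsgTypeError(
            f"msgType {cap_message.get('msgType')!r} is not supported; only Alert is accepted"
        )
```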
We’re going to let people pass in fairly complex polygons, but:
- we don’t want to store massive polygons
- we don’t want to pass the CBCs massive polygons
So this commit adds a step to simplify the polygons before storing them.
We think it’s best for us to do this because:
- writing code to do polygon simplification is non-trivial, and we don’t
want to make all potential integrators do it
- the simplification we’ve developed is domain-specific to emergency
alerting, so should throw away less meaningful information than a
general-purpose approach would
There’s a bit more detail about how we simplify polygons in
https://github.com/alphagov/notifications-admin/pull/3590/files
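As a rough illustration of the kind of simplification involved (the real, domain-specific logic is in the linked PR; the Shapely usage and tolerance value here are just a sketch):

```python
from shapely.geometry import Polygon


def simplify_polygon(coordinates, tolerance=0.001):
    polygon = Polygon(coordinates)
    # Buffer slightly outwards first so simplification never cuts away area
    # that should receive the alert, then reduce the number of points.
    simplified = polygon.buffer(tolerance).simplify(
        tolerance, preserve_topology=True
    )
    return list(simplified.exterior.coords)
```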
This gives us some extra confidence that there aren’t any problems with
the data we’re getting from the other service. It doesn’t address any
specific problems we’ve seen, rather it seems like a sensible precaution
to take.
This commit makes the existing endpoint also accept CAP XML, should the
appropriate `Content-Type` header be set.
It uses the translation code we added in a previous commit to convert
the CAP to a dict. We can then validate that dict against the JSON
schema to ensure it’s something we can work with.
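A sketch of the content negotiation, where `cap_xml_to_dict` stands in for the translation helper from the earlier commit and `broadcast_schema` for the existing JSON schema:

```python
from flask import request
from jsonschema import validate


def get_broadcast_payload(broadcast_schema, cap_xml_to_dict):
    if request.content_type == "application/cap+xml":
        # Translate the CAP XML into the same dict shape as a JSON body...
        payload = cap_xml_to_dict(request.get_data(as_text=True))
    else:
        payload = request.get_json()

    # ...so a single JSON schema validation covers both content types.
    validate(payload, broadcast_schema)
    return payload
```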
We know there is at least one system which wants to integrate with
Notify to send out emergency alerts, rather than creating them manually.
This commit adds an endpoint to the public API to let them do that.
To start with we’ll just let the system create them in a single call,
meaning they still have to be approved manually. This reduces the risk
of an attacker being able to broadcast an alert via the API, should the
other system be compromised.
We’ve worked with the owners of the other system to define which fields
we should care about initially.
These tasks need to repeatedly get the same template and service from
the database. We should be able to improve their performance by getting
the template and service from the cache instead, like we do in the REST
endpoint code.
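The pattern is roughly this (the cache key format and helper names are illustrative):

```python
import json


def get_serialised_template(redis_client, template_id, fetch_from_db):
    cache_key = f"template-{template_id}"  # illustrative key format
    cached = redis_client.get(cache_key)
    if cached:
        return json.loads(cached)

    # Cache miss: go to the database once, then store the serialised dict so
    # the next task run (or REST call) doesn't have to.
    template = fetch_from_db(template_id)
    redis_client.set(cache_key, json.dumps(template))
    return template
```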
Flake8 Bugbear checks for some extra things that aren’t code style
errors, but are likely to introduce bugs or unexpected behaviour. A
good example is having mutable default function arguments, which are
shared between every call to the function, so mutating the value in one
place can unexpectedly cause it to change in another.
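For example:

```python
def add_recipient(recipient, recipients=[]):
    # The default list is created once, at definition time, so every call
    # without an explicit argument shares (and mutates) the same list.
    recipients.append(recipient)
    return recipients


add_recipient("a")  # ['a']
add_recipient("b")  # ['a', 'b'], because the previous call's value leaks in


def add_recipient_fixed(recipient, recipients=None):
    # Creating the list inside the function gives each call a fresh one.
    if recipients is None:
        recipients = []
    recipients.append(recipient)
    return recipients
```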
This commit enables all the extra warnings provided by Flake8 Bugbear,
except for:
- the line length one (because we already lint for that separately)
- B903 "Data class should either be immutable or use `__slots__`", because
  this seems to false-positive on some of our custom exceptions
- B902 "Invalid first argument 'cls' used for instance method", because
  some SQLAlchemy decorators (eg `declared_attr`) make things that
  aren’t formally class methods take a class, not an instance, as their
  first argument

It also disables:
- B306 "BaseException.message is removed in Python 3", because I think
  our exceptions have a custom structure that means the `.message`
  attribute is still present
Matches the work done in other repos:
- https://github.com/alphagov/notifications-admin/pull/3172/files
This code is already restricted to SMS and email, so this is only to make it obvious if there is a code refactor down the line. Perhaps this is overkill and we should back out this commit.
When we initially added a new task to persist the notifications for a high volume service, we wanted to implement it as quickly as possible, so we ignored SMS.
This will allow a high volume service to send SMS: the messages will be sent to a queue, then persisted and sent, similar to emails.
At this point I haven't added a new application to consume the new save-api-sms-tasks queue. We can either add a separate application, or be happy with how the existing app scales for both email and SMS.
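Roughly how the routing would look (task names and signatures are illustrative; the queue name is the one from this commit):

```python
from celery import Celery

celery_app = Celery("notifications")


@celery_app.task(name="save-api-sms")
def save_api_sms(encrypted_notification):
    # Placeholder body: persist the notification, then hand it off to the
    # task that actually sends the SMS, mirroring the email path.
    ...


def queue_sms_for_high_volume_service(encrypted_notification):
    save_api_sms.apply_async(
        args=(encrypted_notification,), queue="save-api-sms-tasks"
    )
```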
The task was raising a JobIncompleteError, yet it's not an error: the task is performing its work correctly and calling the appropriate task to restart the job.
Also used `apply_async` to create the task instead of `send_task`.
Make it clear that this is for the public API, and that we shouldn't add
fields to it without considering the impact.
Also add the broadcast_messages relationship on service and template to
the exclude lists on the marshmallow schemas, so it's not included elsewhere.
Had to go through the code and change a few places where we filter on
template types. I specifically didn't worry about jobs or notifications.
Also, add broadcast_data - a JSON column that might contain arbitrary
broadcast data that we'll figure out as we go. We don't know what it'll
look like, but it should be returned by the API.
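An illustrative sketch of both changes, with simplified model and schema definitions:

```python
from marshmallow import Schema, fields
from sqlalchemy import JSON, Column, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class BroadcastMessage(Base):
    __tablename__ = "broadcast_message"
    id = Column(String, primary_key=True)
    # Free-form broadcast data whose shape we'll firm up as we go; the API
    # returns it as-is.
    broadcast_data = Column(JSON, nullable=True)


class ServiceSchema(Schema):
    # In the real schemas this field comes from the model relationship; it's
    # declared here only so the exclusion has something to act on.
    broadcast_messages = fields.List(fields.UUID())
    name = fields.String()

    class Meta:
        # Keep the new relationship out of the generic service serialisation.
        exclude = ("broadcast_messages",)
```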
Because the IDs of our callback and inbound SMS APIs were stored in
lists instead of directly on the serialised model they weren’t getting
cast to a string before trying to JSONify them. And JSON doesn’t know
what to do with a UUID object.
For some reason this was only affecting the endpoint for fetching
inbound SMS.
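The fix is just to cast the nested IDs explicitly, for example:

```python
import json
import uuid


def serialise_service(service):
    return {
        "id": str(service["id"]),
        # These IDs live inside lists, so they were never cast to strings
        # and jsonify couldn't handle the raw UUID objects.
        "inbound_api": [str(api_id) for api_id in service["inbound_api"]],
        "service_callback_api": [
            str(api_id) for api_id in service["service_callback_api"]
        ],
    }


json.dumps(serialise_service({
    "id": uuid.uuid4(),
    "inbound_api": [uuid.uuid4()],
    "service_callback_api": [uuid.uuid4()],
}))  # succeeds now the nested UUIDs are strings
```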
Years ago we started to implement a way to schedule a notification. We hit a problem but we never came up with a good solution and the feature never made it back to the top of the priority list.
This PR removes the code for scheduled_for. There will be another PR to drop the scheduled_notifications table and remove the schedule_notifications service permission.
Unfortunately, I don't think we can remove the `scheduled_for` attribute from the notification.serialized method because our clients might fail if something is missing. For now I have left it in but defaulted the value to None.
Same as we’re doing for templates.
This means avoiding a database call, even for services that don’t hit
our API so often.
They’ll still need to go to the database for the API keys, because we’re
not comfortable putting the API key secrets in Redis.
But once a service has got its keys from the database we commit the
transaction, so the connection can be freed up until we need it again to
insert the notification.
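A sketch of that ordering, with illustrative names (the lookups are passed in as callables):

```python
def handle_post_notification(
    db_session, get_api_keys, insert_notification, service_id, notification
):
    api_keys = get_api_keys(service_id)
    # Nothing has been written yet, so committing here just ends the read
    # transaction and returns the connection to the pool while the rest of
    # the request is processed.
    db_session.commit()

    # ...authenticate against api_keys, validate, build the notification...

    insert_notification(notification)
    db_session.commit()
    return api_keys
```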
Same as we’ve done for templates.
For high volume services this should mean avoiding calls to external
services, either the database or Redis.
TTL is set to 2 seconds, so that’s the maximum time it will take for
revoking an API key or renaming a service to propagate.
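A sketch of the idea using cachetools (the real implementation may differ):

```python
from cachetools import TTLCache, cached


def fetch_serialised_service(service_id):
    ...  # placeholder for the existing Redis / database lookup


@cached(TTLCache(maxsize=1024, ttl=2))
def get_serialised_service(service_id):
    # Served from process memory for up to two seconds, which bounds how
    # long a revoked API key or renamed service can keep being used.
    return fetch_serialised_service(service_id)
```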
Some of the tests created services with the same service ID. This
caused intermittent failures because the cache relies on unique service
IDs (like we have in the real world) to key itself.
By serialising these straight away we can:
- not go back to the database later, potentially closing the connection
sooner
- potentially cache the serialised data, meaning we don’t touch the
database at all
For some reason our V1 get template response wraps the whole template in
a dictionary with one key, `'data'`:
0d99033889/app/template/rest.py (L166)
That means when the admin app caches the response it also caches it in
this format.
The API needs to do the same, otherwise it will be caching data with a
schema that the admin app isn’t expecting, and vice-versa.
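For example, whatever we cache should keep the same one-key wrapper (key format illustrative):

```python
import json


def cache_template_response(redis_client, template_id, template_dict):
    # Store exactly what the V1 endpoint returns, {'data': {...}}, so the
    # admin app and the API read and write the same structure.
    redis_client.set(
        f"template-{template_id}",  # illustrative key format
        json.dumps({"data": template_dict}),
    )
```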
In the previous PR I removed the `update_notification` method to reduce the need for another update query. However, that meant the notification was marked as delivered without an updated_at timestamp.
It is weird to set the updated_at when we create the notification. So is this a better fix, or should I put the update back?
I recommend we push this fix now.