The maximum content count of a broadcast varies depending on its
encoding, so we can’t simply validate it against a schema. This commit
moves to using the validation from `notifications-utils`, and raising a
custom error response.
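As an illustration, here's a minimal sketch of an encoding-dependent check; the limits and names are illustrative, not the exact `notifications-utils` API:

```python
# Illustrative only: the real check lives in notifications-utils.
# Cell broadcast content that fits the GSM 7-bit alphabet can be
# longer than content that needs UCS-2 encoding.
GSM_CHARACTERS = set(
    "@£$¥èéùìòÇ\nØø\rÅåΔ_ΦΓΛΩΠΨΣΘΞÆæßÉ !\"#¤%&'()*+,-./"
    "0123456789:;<=>?¡ABCDEFGHIJKLMNOPQRSTUVWXYZÄÖÑܧ"
    "¿abcdefghijklmnopqrstuvwxyzäöñüà"
)

def max_content_count(content):
    return 1395 if set(content) <= GSM_CHARACTERS else 615

def check_content_length(content):
    if len(content) > max_content_count(content):
        # The real code raises a custom error response; ValueError
        # stands in for it here.
        raise ValueError(
            f"Content must be {max_content_count(content)} characters or fewer"
        )
```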
We don’t support these methods at the moment. Instead we were just
ignoring the `msgType` field, so issuing one of these commands would
cause a new alert to be broadcast 🙃
We might want to support `Cancel` in the future, but for now let’s
reject anything that isn’t `Alert` (CAP terminology for the initial
broadcast).
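A minimal sketch of the guard (the error type is illustrative; the endpoint raises a custom error response):

```python
def check_message_type(cap_message):
    # CAP msgType values include Alert, Update, Cancel, Ack and Error;
    # we only handle the initial Alert for now. Previously this field
    # was ignored, so a Cancel or Update would have created a brand
    # new broadcast.
    msg_type = cap_message.get("msgType")
    if msg_type != "Alert":
        raise ValueError(f"msgType {msg_type} is not supported")
```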
We’re going to let people pass in fairly complex polygons, but:
- we don’t want to store massive polygons
- we don’t want to pass the CBCs massive polygons
So this commit adds a step to simplify the polygons before storing them.
We think it’s best for us to do this because:
- writing code to do polygon simplification is non-trivial, and we don’t
want to make all potential integrators do it
- the simplification we’ve developed is domain-specific to emergency
alerting, so should throw away less information than a general-purpose
approach would
There’s a bit more detail about how we simplify polygons in
https://github.com/alphagov/notifications-admin/pull/3590/files
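For illustration, here's a minimal sketch of the kind of simplification step, assuming Shapely; the buffer and tolerance values are illustrative, and the real implementation (linked above) does more:

```python
from shapely.geometry import Polygon

def simplify_polygon(points, buffer_degrees=0.001, tolerance=0.001):
    polygon = Polygon(points)
    # Buffer outwards before simplifying, so the simplified polygon
    # always contains the original: we'd rather alert a slightly
    # larger area than miss anyone inside the polygon we were given.
    return polygon.buffer(buffer_degrees).simplify(tolerance)
```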
This gives us some extra confidence that there aren’t any problems with
the data we’re getting from the other service. It doesn’t address any
specific problems we’ve seen; rather, it seems like a sensible precaution
to take.
This commit makes the existing endpoint also accept CAP XML, should the
appropriate `Content-Type` header be set.
It uses the translation code we added in a previous commit to convert
the CAP to a dict. We can then validate that dict against the JSON
schema to ensure it’s something we can work with.
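A sketch of the dispatch (`cap_xml_to_dict` and `post_broadcast_schema` are stand-ins for the translation code from the previous commit and the existing schema, and jsonschema's `validate` stands in for the app's own validation helper):

```python
from flask import request
from jsonschema import validate

def get_broadcast_payload():
    if request.mimetype == "application/cap+xml":
        # Translate the CAP XML into the same shape as the JSON payload
        payload = cap_xml_to_dict(request.get_data())
    else:
        payload = request.get_json()
    # Either way, the same schema checks it's something we can work with
    validate(payload, post_broadcast_schema)
    return payload
```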
We know there is at least one system which wants to integrate with
Notify to send out emergency alerts, rather than creating them manually.
This commit adds an endpoint to the public API to let them do that.
To start with we’ll just let the system create them in a single call,
meaning they still have to be approved manually. This reduces the risk
of an attacker being able to broadcast an alert via the API, should the
other system be compromised.
We’ve worked with the owners of the other system to define which fields
we should care about initially.
Flake8 Bugbear checks for some extra things that aren’t code style
errors, but are likely to introduce bugs or unexpected behaviour. A
good example is having mutable default function arguments, which get
shared between every call to the function and therefore mutating a value
in one place can unexpectedly cause it to change in another.
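For example, the kind of bug B006 catches:

```python
# The default list is created once, at function definition time, and
# shared between every call.
def add_item(item, items=[]):
    items.append(item)
    return items

add_item("a")  # ["a"]
add_item("b")  # ["a", "b"]: the same list again, not a fresh one

# The safe pattern:
def add_item_fixed(item, items=None):
    if items is None:
        items = []
    items.append(item)
    return items
```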
This commit enables all the extra warnings provided by Flake8 Bugbear,
except for:
- the line length one (because we already lint for that separately)
- B903 (data class should either be immutable or use `__slots__`),
because this seems to false-positive on some of our custom exceptions
- B902 (invalid first argument 'cls' used for instance method), because
some SQLAlchemy decorators (eg `declared_attr`) make things that
aren’t formally class methods take a class, not an instance, as their
first argument
- B306 (`BaseException.message` is removed in Python 3), because I think
our exceptions have a custom structure that means the `.message`
attribute is still present
Matches the work done in other repos:
- https://github.com/alphagov/notifications-admin/pull/3172/files
Add different error messages for email and text if the content is too long.
Use the utils version with the `is_message_too_long` method implemented for email templates.
We want to add validation for an email that's too long, so that the user knows why the message is failing. At the moment, if an email is too long it gets a technical failure after the retries fail. With this change, the email POST will get a validation error instead.
Once https://github.com/alphagov/notifications-utils/pull/804 is reverted, we can update the utils version.
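A rough sketch of where the check sits (the error type and wording are illustrative, not the exact code):

```python
def check_is_message_too_long(template_with_content):
    # template_with_content is a notifications-utils template instance
    # with the personalisation filled in; is_message_too_long comes
    # from utils and knows the limit for each template type.
    if template_with_content.is_message_too_long:
        raise ValueError("Content is too long")  # the real code returns a 400
```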
This code is already restricted to SMS and email, so this is only to make it obvious if there is a code refactor down the line. Perhaps this is overkill and we should back this commit out.
When we initially added a new task to persist the notifications for a high volume service, we wanted to implement it as quickly as possible, so we ignored SMS.
This will allow a high volume service to send SMS: the message will be put on a queue, then persisted and sent, similar to emails.
At this point I haven't added a new application to consume the new `save-api-sms-tasks` queue. We can either add a separate application, or be happy with how the app scales for both email and SMS.
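A hedged sketch of the flow (the task and queue names follow the commit; `decrypt`, `persist_notification` and `deliver_sms` are hypothetical stand-ins for the real helpers):

```python
from celery import Celery

celery_app = Celery("notify", broker="sqs://")  # simplified config

@celery_app.task(name="save-api-sms")
def save_api_sms(encrypted_notification):
    notification = decrypt(encrypted_notification)  # hypothetical helper
    persist_notification(notification)              # hypothetical helper
    deliver_sms(notification)                       # hypothetical helper

# At request time, the API just enqueues the work:
# save_api_sms.apply_async((encrypted,), queue="save-api-sms-tasks")
```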
The task was raising a `JobIncompleteError`, yet it's not an error: the task is performing its work correctly and calling the appropriate task to restart the job.
Also used `apply_async` to create the task instead of `send_task`.
Make it clear that this is for the public API, and that we shouldn't add
fields to it without considering the impacts.
Also add the broadcast_messages relationship on service and template to
the excludes in the marshmallow schemas, so it's not included elsewhere.
Had to go through the code and change a few places where we filter on
template types. I specifically didn't worry about jobs or notifications.
Also, add `broadcast_data`: a JSON column that might contain arbitrary
broadcast data that we'll figure out as we go. We don't know what it'll
look like, but it should be returned by the API.
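A sketch of the new column, assuming the app's usual Flask-SQLAlchemy setup; the rest of the model is elided:

```python
from flask_sqlalchemy import SQLAlchemy
from sqlalchemy.dialects.postgresql import JSONB, UUID

db = SQLAlchemy()

class BroadcastMessage(db.Model):
    __tablename__ = "broadcast_message"
    id = db.Column(UUID(as_uuid=True), primary_key=True)
    # Arbitrary data we'll figure out as we go; returned by the API as-is
    broadcast_data = db.Column(JSONB, nullable=True)
```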
Years ago we started to implement a way to schedule a notification. We hit a problem, but we never came up with a good solution and the feature never made it back to the top of the priority list.
This PR removes the code for `scheduled_for`. There will be another PR to drop the scheduled_notifications table and remove the schedule_notifications service permission.
Unfortunately, I don't think we can remove the `scheduled_for` attribute from the notification.serialized method, because our clients might fail if something is missing. For now I have left it in but defaulted the value to None.
By serialising these straight away we can:
- not go back to the database later, potentially closing the connection
sooner
- potentially cache the serialised data, meaning we don’t touch the
database at all
This commit changes the code in post notification endpoint to handle a
serialised template (ie a `dict`) rather than a database object.
This is the first step towards being able to cache the template and not
hit the database on every request.
There should be no functional changes here, it’s just refactoring.
There are some changes to the tests where the signature of functions
has changed.
Importing the template schema has to be done at function level,
otherwise Marshmallow gets weird.
This commit also copies the `JSONModel` class from the admin app, which
turns serialised data (a dict made from JSON) into an object on which
certain predefined properties are allowed.
This means we can still do the caching of serialised data, without
having to change too much of the code in the app, or make it ugly by
sprinkling dict lookups everywhere.
We’re not copying all of JSONModel from the admin app, just the bits we
need. We don’t need to compare or hash these objects, they’re just used
for lookups. And redefining `__getattribute__` scares Leo.
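Roughly the shape of the copied class (simplified from the admin app; the subclass and its properties are illustrative):

```python
class JSONModel:

    ALLOWED_PROPERTIES = set()

    def __init__(self, _dict):
        self._dict = _dict or {}
        for prop in self.ALLOWED_PROPERTIES:
            setattr(self, prop, self._dict[prop])

# So a serialised template can be used like an object:
class SerialisedTemplate(JSONModel):
    ALLOWED_PROPERTIES = {"id", "content", "template_type"}

template = SerialisedTemplate({"id": "123", "content": "hi", "template_type": "sms"})
template.template_type  # "sms", no dict lookups sprinkled through the code
```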
In the previous PR I removed the `update_notification` method to reduce the need for another update query. However, that meant the notification was marked as delivered without an updated_at timestamp.
It is weird to set the updated_at when we create the notification. So is this a better fix? Or do I put the update back now?
I recommend we push this fix now.
After the commit, we issue two calls to the db to get the service and the notification. This is because, after a commit, the ORM wants to ensure that the data model objects are up to date.
So far this is just a proof of concept: the letter flow needs to be updated, and we should be able to get rid of research mode. It also needs some tidying up.
Two new metrics:
- `auth_db_connection_duration_seconds` (histogram): wraps the first DB
call of post notifications. This includes waiting to get a connection
from the pool, and also making the actual request to the db to retrieve
the service and API keys. (I'm not sure there's an easy way to separate
these two things.)
- `post_notification_json_parse_duration_seconds`: wraps the v2 post
notifications JSON parsing and schema validation. Shouldn't include any
async code.
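A minimal sketch of how a histogram like these wraps a block of code, assuming the prometheus_client library; the DAO call is a stand-in for the real first DB query:

```python
from prometheus_client import Histogram

AUTH_DB_CONNECTION_DURATION_SECONDS = Histogram(
    "auth_db_connection_duration_seconds",
    "Time to get a pooled connection and fetch the service and API keys",
)

def fetch_service_with_timing(service_id):
    # .time() measures the whole block: pool wait plus the query itself
    with AUTH_DB_CONNECTION_DURATION_SECONDS.time():
        return dao_fetch_service_by_id_with_api_keys(service_id)  # stand-in
```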
For services that have permission to send international letters we
should not reject letters that are addressed to another country. We
should still reject letters that are badly-addressed.
We’ve added some new properties to the templates in utils that we can
use instead of doing weird things like
`WithSubjectTemplate.__str__(another_instance)`
This is part of moving away from `postcode` and towards `address line 7`
We think it will be easier for people to map their existing data to our
API if we let them fill in any 3 lines, instead of requiring at least
1, 2, and postcode specifically.
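A hedged sketch of the relaxed rule (the field names follow the API's address lines; the error type is illustrative):

```python
ADDRESS_KEYS = [f"address_line_{i}" for i in range(1, 7)] + ["postcode"]

def check_address_has_enough_lines(personalisation):
    # Under the new rule the postcode is just another line (line 7),
    # so any 3 filled-in lines are enough.
    filled = [key for key in ADDRESS_KEYS if personalisation.get(key)]
    if len(filled) < 3:
        raise ValueError("Address must be at least 3 lines")
```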
We don’t need to reformat the postcode here, because template preview
takes care of it when rendering the PDF.
It’s better (and less code) to store what people give us, so we give
them back the same thing.
The `body` field of the ‘preview a template’ endpoint should contain the
content of the template with the placeholders replaced.
It should not be rendered into a plain text email, which does stuff like
normalising newlines (so `\r\n` becomes `\n`). However, the object that
the `create_post_template_preview_response` function receives is an
instance of `PlainTextEmailTemplate`. So we can’t just call `str()` on
it. Instead we need to call the `__str__` method of
`WithSubjectTemplate` which gives us the result we want.
We were already doing this the right way in the V1 endpoint[1] but
forgot to do it here too.
1. 0108749daa/app/template/rest.py (L181-L184)
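A condensed sketch of the fix (the wrapping function is simplified; the `WithSubjectTemplate.__str__` call is the point):

```python
from notifications_utils.template import WithSubjectTemplate

def preview_body(template_object):
    # template_object is a PlainTextEmailTemplate. str(template_object)
    # would render a full plain text email, normalising newlines and so
    # on; calling WithSubjectTemplate.__str__ directly returns the
    # content with just the placeholders replaced.
    return WithSubjectTemplate.__str__(template_object)
```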