Add different error messages for email and text if the content is too long.
Use the utils version that implements the `is_message_too_long` method for email templates.
We want to add validation for an email that is too long, so the user knows why the message is failing. At the moment an email that is too long ends in a technical failure after the retries are exhausted; with this change the email POST will get a validation error instead.
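A rough sketch of the check this enables, assuming the new utils version exposes `is_message_too_long` on email templates in the same way it already does for SMS (the error class and wording below are illustrative, not the real ones):

```python
from app.v2.errors import BadRequestError  # assumed error class used by the v2 API


def check_is_message_too_long(template_with_content):
    # Fail the POST with a 400 validation error up front, rather than letting
    # the send task retry and end in a technical failure.
    if template_with_content.is_message_too_long:
        raise BadRequestError(
            message="Content for this template is too long",
            status_code=400,
        )
```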
Once https://github.com/alphagov/notifications-utils/pull/804 is reverted, we can update the utils version.
This code is already restricted to SMS and email, so this is only to make it obvious if there is a code refactor down the line. Perhaps this is overkill and we should back this commit out.
When we initially added a new task to persist the notifications for a high volume service we wanted to implement it as quickly as possible, so we ignored SMS.
This will allow a high volume service to send SMS: the SMS will be put on a queue, then persisted and sent, similar to emails.
At this point I haven't added a new application to consume the new save-api-sms-tasks queue. We can either add a separate application, or decide we are happy with how the existing app scales for both email and SMS.
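Roughly, the high volume path hands the SMS off like this (a sketch; only the queue name comes from this description, the task import and helper are illustrative):

```python
from app.celery.tasks import save_api_sms  # hypothetical task import


def queue_sms_for_high_volume_service(encrypted_notification):
    # Put the notification on the new queue to be persisted and sent later,
    # mirroring what save-api-email already does for email.
    save_api_sms.apply_async(
        [encrypted_notification],
        queue="save-api-sms-tasks",
    )
```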
Had to go through the code and change a few places where we filter on
template types. I specifically didn't worry about jobs or notifications.
Also, add `broadcast_data` - a JSON column that might contain arbitrary
broadcast data that we'll figure out as we go. We don't know what it'll
look like, but it should be returned by the API.
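Something along these lines (a sketch; the model name, primary key and JSON type are assumptions, not taken from the actual migration):

```python
from sqlalchemy.dialects.postgresql import JSONB

from app import db  # assumed Flask-SQLAlchemy instance


class BroadcastMessage(db.Model):  # hypothetical model name
    __tablename__ = "broadcast_message"

    id = db.Column(db.Integer, primary_key=True)
    # Arbitrary broadcast data, shape to be figured out as we go; returned
    # by the API as-is.
    broadcast_data = db.Column(JSONB, nullable=True)
```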
Years ago we started to implement a way to schedule a notification. We hit a problem but we never came up with a good solution and the feature never made it back to the top of the priority list.
This PR removes the code for `scheduled_for`. There will be another PR to drop the `scheduled_notifications` table and remove the `schedule_notifications` service permission.
Unfortunately, I don't think we can remove the `scheduled_for` attribute from the `notification.serialized` method, because our clients might fail if something is missing. For now I have left it in but defaulted the value to None.
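So the serialised notification keeps the key but the value is always null, roughly like this (sketch, not the exact serializer):

```python
def serialize(self):
    return {
        # ...the other notification fields...
        # Feature removed, but the key stays so existing clients don't break.
        "scheduled_for": None,
    }
```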
By serialising these straight away we can:
- not go back to the database later, potentially closing the connection
sooner
- potentially cache the serialised data, meaning we don’t touch the
database at all
This commit changes the code in the post notification endpoint to handle a
serialised template (i.e. a `dict`) rather than a database object.
This is the first step towards being able to cache the template and not
hit the database on every request.
There should be no functional changes here, it’s just refactoring.
There are some changes to the tests where the signature of functions
has changed.
Importing the template schema has to be done at function level,
otherwise Marshmallow gets weird.
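i.e. something like this (a sketch; the schema and DAO names are assumptions):

```python
def get_serialised_template(template_id):
    # Import inside the function: importing the Marshmallow schema at module
    # level causes import-order problems.
    from app.schemas import template_schema  # assumed schema location

    from app.dao.templates_dao import dao_get_template_by_id  # assumed DAO helper

    template = dao_get_template_by_id(template_id)
    return template_schema.dump(template)
```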
This commit also copies the `JSONModel` class from the admin app, which
turns serialised data (a dict made from JSON) into an object on which
certain predefined properties are allowed.
This means we can still do the caching of serialised data, without
having to change too much of the code in the app, or make it ugly by
sprinkling dict lookups everywhere.
We’re not copying all of JSONModel from the admin app, just the bits we
need. We don’t need to compare or hash these objects, they’re just used
for lookups. And redefining `__getattribute__` scares Leo.
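For reference, the cut-down version is roughly this shape (a sketch of the idea, not a verbatim copy of the admin class):

```python
class JSONModel:
    # Only these keys get copied from the serialised dict onto the object;
    # anything else raises AttributeError as normal.
    ALLOWED_PROPERTIES = set()

    def __init__(self, _dict):
        self._dict = _dict or {}
        for prop in self.ALLOWED_PROPERTIES:
            if prop in self._dict:
                setattr(self, prop, self._dict[prop])


class SerialisedTemplate(JSONModel):  # hypothetical subclass
    ALLOWED_PROPERTIES = {"id", "version", "template_type", "content", "subject"}
```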
In the previous PR I removed the `update_notification` method to reduce the need for another update query. However, that meant the notification was marked as delivered without an updated_at timestamp.
It is weird to set the updated_at when we create the notification. So is this a better fix? Or do I put the update back now?
I recommend we push this fix now.
After the commit we issue two calls to the db, one to get the service and one to get the notification. This is because, after the commit, the ORM wants to make sure the data model objects are up to date.
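This is SQLAlchemy's default `expire_on_commit=True` behaviour: the commit expires every loaded object, so the next attribute access triggers a fresh SELECT. For example:

```python
db.session.commit()

# Both of these attribute accesses go back to the database, because the
# commit expired the objects and SQLAlchemy reloads them to make sure they
# reflect the latest committed state.
service_name = notification.service.name
notification_status = notification.status
```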
So far this is just a proof of concept: the letter flow still needs to be updated, we should be able to get rid of research mode, and it needs some tidying up.
Two new metrics:
- `auth_db_connection_duration_seconds` (histogram): wraps the first DB call of post notifications. This includes waiting to get a connection from the pool, and also making the actual request to the db to retrieve the service and API keys (I'm not sure there's an easy way to separate these two things).
- `post_notification_json_parse_duration_seconds`: wraps the v2 post notifications JSON parsing and schema validation. Shouldn't include any async code.
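As a sketch, using the Prometheus Python client (the metric name comes from this description; the DAO call and wiring are illustrative):

```python
from prometheus_client import Histogram

AUTH_DB_CONNECTION_DURATION_SECONDS = Histogram(
    "auth_db_connection_duration_seconds",
    "Time to get a pool connection and fetch the service and API keys",
)


def fetch_service_for_auth(service_id):
    # Wraps the first DB call of post notifications: waiting for a pool
    # connection plus the query itself, since we can't easily separate them.
    with AUTH_DB_CONNECTION_DURATION_SECONDS.time():
        return dao_fetch_service_by_id_with_api_keys(service_id)  # hypothetical DAO call
```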
For services that have permission to send international letters we
should not reject letters that are addressed to another country. We
should still reject letters that are badly-addressed.
We’ve added some new properties to the templates in utils that we can
use instead of doing weird things like
`WithSubjectTemplate.__str__(another_instance)`
This is part of moving away from `postcode` and towards `address line 7`.
We think it will be easier for people to map their existing data to our
API if we let them fill in any 3 lines, instead of requiring lines 1, 2
and the postcode specifically.
We won’t need to reformat the postcode here once template preview takes
care of it when rendering the PDF.
It’s better (and less code) to store what people give us, so we give
them back the same thing.
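So a letter personalisation like this (illustrative values) is now enough, even though it doesn't use lines 1, 2 and `postcode` specifically:

```python
personalisation = {
    "address_line_1": "The Occupier",
    "address_line_3": "123 High Street",
    "address_line_7": "SW1A 1AA",  # stored exactly as given, not reformatted
}
```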
This has been replaced by a new task, `sanitise-letter`, so this deletes
all the code in the old task and ensures that, when antivirus is not
enabled locally, we call the new task.
SQS fails if the message body is over 256KB. Normally our messages are
quite small, but if we're using the new save-api-email task with an
email that has a large body, we can go over that limit. If so, handle
the exception and fall back to the existing code path (saving to the
database and sending a deliver-email task, which only has a notification
id).
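Roughly (a sketch; the exception type raised for an oversized SQS body, and the helper and queue names, are assumptions):

```python
import botocore.exceptions


def queue_or_persist(signed_notification, form, service, api_key):
    try:
        # New path: the whole serialised notification travels in the task body.
        save_api_email.apply_async([signed_notification], queue="save-api-email-tasks")
    except botocore.exceptions.ClientError:
        # Assumed failure mode: SQS rejects message bodies over 256KB.
        # Fall back to the existing path: persist now, then send a
        # deliver-email task that only carries the notification id.
        notification = persist_notification_from_request(form, service, api_key)  # hypothetical helper
        deliver_email.apply_async([str(notification.id)], queue="send-email-tasks")
```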
Instead of saving the email notification to the db, add it to a queue to be saved later.
This is an attempt to alleviate pressure on the db from API requests.
This initial PR is a trial, to see if we get an improvement in API performance and a reduction in queue pool errors. If we are happy with this we can remove the hard-coding of the service id.
In a nutshell:
- If POST /v2/notification/email is from our high volume service (hard-coded for now), create a notification and send it to a queue to persist the notification to the db.
- Create a `save_api_email` task to persist the notification.
- Return the notification.
- New worker app to process the `save_api_email` tasks.
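In code terms, roughly (a sketch; the helper names and the hard-coded id below are placeholders):

```python
HIGH_VOLUME_SERVICE_ID = "hard-coded-service-id"  # placeholder; hard-coded for the trial


def post_email_notification(service, form, api_key):
    if str(service.id) == HIGH_VOLUME_SERVICE_ID:
        # Build the notification in memory, let the save_api_email task
        # persist it later, and return it to the caller straight away.
        notification = build_notification_in_memory(service, form, api_key)  # hypothetical helper
        save_api_email.apply_async([serialise_for_queue(notification)], queue="save-api-email-tasks")
    else:
        # Existing path: save to the database now, then queue delivery by id.
        notification = persist_notification_to_db(service, form, api_key)  # hypothetical helper
        deliver_email.apply_async([str(notification.id)], queue="send-email-tasks")
    return notification
```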
When a precompiled letter is sent via the admin app, we now pass in the address, which can be set in the Notifications.to field.
Once a precompiled letter sent by the API has passed validation, we can set the address in the Notifications.to field.
The Celery tasks to validate precompiled letters sent by the API will be done in another PR.
By adding `force=True` to `request.get_json()` the mime type is ignored. If the data is not valid JSON the call raises an error; we catch that and raise our own error with a clear message: "Invalid JSON supplied in POST data".
If the JSON is valid, return the JSON data, or an empty dict if None is passed in.
This PR improves the error messages when the JSON is invalid; previously the error message was a "None object type" message, which is not very helpful.
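The helper ends up roughly like this (a sketch; the function and error class names are assumptions):

```python
from flask import request
from werkzeug.exceptions import BadRequest

from app.v2.errors import BadRequestError  # assumed error class


def get_valid_json():
    try:
        # force=True: parse the body as JSON regardless of the Content-Type header.
        data = request.get_json(force=True)
    except BadRequest:
        raise BadRequestError(message="Invalid JSON supplied in POST data", status_code=400)
    # Valid JSON: return the parsed data, or an empty dict if the body was null.
    return data or {}
```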
Also use this metadata to decide whether preview pages need an overlay
or not. So far we have always added an overlay when validation
has failed. Now we will only show it when validation failed due to
content being outside of the printable area.
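i.e. something along these lines (sketch; the metadata key and value are assumptions about what the validation task records):

```python
def page_needs_overlay(metadata):
    # Only overlay the preview when validation failed because content sits
    # outside the printable area, not for every validation failure.
    return metadata.get("message") == "content-outside-printable-area"
```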