We will split them into three categories:
- first class
- second class
- international - this zip file will have letters for both europe
and rest-of-world postage classes
We were checking this separately in two places in the code. Now
we will have this logic in one place, in validators.
Also pull in the utils version that recognises crown dependency numbers
as international.
The constraints on notifications and notification_history have already
been dropped in production, but still exist in staging and in dev
environments.
The constraints on templates and templates_history exist in all
environments.
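For context, a migration to drop them would look something like the sketch below (hedged: the constraint names here are made up, not the real ones).

```python
# Hedged sketch only: constraint names are assumptions.
from alembic import op


def upgrade():
    # Drop the postage check constraints from the templates tables.
    op.drop_constraint("ck_templates_postage", "templates")
    op.drop_constraint("ck_templates_history_postage", "templates_history")
```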
We did not have a JSON schema for updating a template. Since we will
remove the postage constraint from the templates table, this adds a JSON
schema for updating a template so that we can use it to check that the
postage is one of the allowed values.
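As a rough sketch (the schema name and exact shape here are assumptions, but the postage values match the classes listed above):

```python
# Hedged sketch: constrain postage to the allowed values in JSON schema,
# now that the database constraint is going away.
post_update_template_schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "description": "PUT schema for updating a template",
    "type": "object",
    "properties": {
        "postage": {"enum": ["first", "second", "europe", "rest-of-world"]},
    },
}
```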
In the previous PR I removed the `update_notification` method to reduce the need for another update query. However, that meant the notification was marked as delivered without an updated_at timestamp.
It is weird to set the updated_at when we create the notification. So is this a better fix? Or do I put the update back now?
I recommend we push this fix now.
By default Marshmallow includes unknown properties. This means every time
a new property is added to the service model it gets included in the
JSON-serialised response sent to the admin app.
This is particularly bad because it means the response includes the ID
of every returned letter. So the JSON stored in Redis for the
Check Your State Pension service is 86kb.
Similarly the JSON stored in Redis for a big user of inbound text
messaging is 458kb(!!!) because it has the ID of every received text
message. That’s ~8,500 UUIDs.
Luckily the admin app tells us exactly which keys it’s using here:
5952d9c26d/app/models/service.py (L31-L52)
- `active`
- `contact_link`
- `email_branding`
- `email_from`
- `id`
- `inbound_api`
- `letter_branding`
- `letter_contact_block`
- `message_limit`
- `name`
- `prefix_sms`
- `research_mode`
- `service_callback_api`
- `volume_email`
- `volume_sms`
- `volume_letter`
- `consent_to_research`
- `count_as_live`
- `go_live_user`
- `go_live_at`
Plus these, which it does not get automatically:
- `email_branding`
- `letter_branding`
- `organisation`
- `organisation_type`
- `permissions`
- `restricted`
The API is returning all of these:
- `active`
- `all_template_folders`
- `annual_billing`
- `consent_to_research`
- `contact_link`
- `contact_list`
- `count_as_live`
- `created_by`
- `crown`
- `email_branding`
- `email_from`
- `go_live_at`
- `go_live_user`
- `id`
- `inbound_api`
- `inbound_number`
- `inbound_sms`
- `letter_branding`
- `letter_contact_block`
- `letter_logo_filename`
- `message_limit`
- `name`
- `organisation`
- `organisation_type`
- `permissions`
- `prefix_sms`
- `rate_limit`
- `research_mode`
- `restricted`
- `returned_letters`
- `service_callback_api`
- `users`
- `version`
- `volume_email`
- `volume_letter`
- `volume_sms`
- `whitelist`
So the ones that the admin is getting but not expecting are:
- `all_template_folders`
- `annual_billing`
- `contact_list`
- `created_by`
- `crown`
- `inbound_number`
- `inbound_sms`
- `letter_logo_filename`
- `rate_limit`
- `returned_letters`
- `users`
- `version`
- `whitelist`
Which is what this PR adds to the exclude list, except for `created_by`,
which it keeps because it's needed to validate the JSON provided when
creating a service.
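As a self-contained sketch of the mechanism (a toy model and schema, not the app's real ones), using marshmallow-sqlalchemy's Meta.exclude to stop auto-generated fields leaking into the serialised JSON:

```python
from marshmallow_sqlalchemy import SQLAlchemyAutoSchema
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Service(Base):
    __tablename__ = "service"
    id = Column(Integer, primary_key=True)
    name = Column(String)
    rate_limit = Column(Integer)


class ServiceSchema(SQLAlchemyAutoSchema):
    class Meta:
        model = Service
        # The PR adds the unexpected keys listed above to a tuple like
        # this; created_by stays off the list because it's needed when
        # creating a service.
        exclude = ("rate_limit",)
```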
After the commit we issue two calls to the db: one to get the service and one to get the notification. This is because after the commit the ORM wants to ensure that the data model objects are the latest.
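A minimal, self-contained sketch of that behaviour (toy model, not Notify's): SQLAlchemy expires every object in the session on commit by default, so the next attribute access triggers a fresh SELECT.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()


class Notification(Base):
    __tablename__ = "notifications"
    id = Column(Integer, primary_key=True)
    status = Column(String)


engine = create_engine("sqlite://", echo=True)
Base.metadata.create_all(engine)

Session = sessionmaker(bind=engine)  # expire_on_commit=True is the default
session = Session()
notification = Notification(status="delivered")
session.add(notification)
session.commit()

# This attribute access issues a SELECT, because the commit expired the
# object's in-memory state. expire_on_commit=False would avoid the extra
# query, at the risk of reading stale data.
print(notification.status)
```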
So far this is just a proof of concept: the letter flow needs to be updated, we should be able to get rid of research mode, and it needs some tidying up.
We have one global metrics variable, `metrics = GDSMetrics()`, and we
then call `metrics.init_app` from within the Flask application setup.
The v2/test_errors.py app_for_test fixture calls create_app, which
would call metrics.init_app multiple times for the same metrics
instance. This causes errors, so change the fixture to session scope
so it only runs once per test run.
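A hedged sketch of the fixture change (the create_app signature is an assumption, not copied from the repo):

```python
import pytest


@pytest.fixture(scope="session")
def app_for_test():
    # create_app calls metrics.init_app; session scope means that happens
    # exactly once per test run instead of once per test.
    from app import create_app

    app = create_app()
    yield app
```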
A missing detailed status could be due to us not recognising a code,
or the provider not having sent the detailed status.
It seems like Firetext is sometimes sending us permanent-failure
without detailed status. It could be due to:
- them really not sending any detailed status
- them sending a status code we don't recognise
- them sending 000 code that means 'no errors', which we ignore
To see which one it is, and to debug such issues quicker in the
future, this PR adds status and detailed status codes to the logs.
Also log the detailed delivery status for Firetext in the same place,
in addition to it being logged from notifications_dao.
Logging detailed delivery statuses will help us see why messages
fail to deliver. In the future we could persist detailed delivery
status in the database.
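A hedged sketch of the kind of log line this adds (function and variable names are assumptions):

```python
import logging

logger = logging.getLogger(__name__)


def log_firetext_status(reference, status, detailed_status_code):
    # Record the raw status and the detailed status code (which may be
    # missing, unrecognised, or the ignored 000 'no errors' code) so we
    # can tell which of the three cases above we're hitting.
    logger.info(
        "Firetext callback for reference %s: status=%s, code=%s",
        reference,
        status,
        detailed_status_code,
    )
```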
This copies what we do to a user's email address when archiving the user
by prefixing it with `_archived_{date}`. We already prefixed the
service name and email_reply_to with `_archived`, but this didn't allow
a service with the same name to be archived more than once.
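A hedged sketch of the prefixing (the helper name here is hypothetical):

```python
from datetime import datetime


def get_archived_value(original_value):
    # Including the date lets a service with the same name be archived
    # more than once without colliding.
    today = datetime.utcnow().strftime("%Y-%m-%d")
    return f"_archived_{today}_{original_value}"
```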
This PR tries to parse the date; if that throws an error, it returns now as the DateReceived. This will at least allow the message to be persisted. Typically the DateReceived, provider_date and the created_at date in the inbound_sms table are within a second of each other.
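A hedged sketch of that fallback (the timestamp format is an assumption):

```python
from datetime import datetime


def parse_date_received(date_received):
    # If the provider's DateReceived won't parse, fall back to now so the
    # inbound message is still persisted; the timestamps are usually
    # within a second of each other anyway.
    try:
        return datetime.strptime(date_received, "%Y-%m-%d %H:%M:%S")
    except (ValueError, TypeError):
        return datetime.utcnow()
```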
If you’ve sent a bunch of jobs from the same contact list then a handy
way to differentiate between them will be date sent, but also template
name (in effect the message you sent).
This commit extends the job response to include template name, using the
same pattern as for template type.
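A hedged marshmallow sketch of the extension (a toy schema following the pattern described, not the repo's exact one):

```python
from marshmallow import Schema, fields


class JobSchema(Schema):
    # Serialise the related template's name alongside its type, following
    # the same pattern already used for template_type.
    template_name = fields.String(attribute="template.name", dump_only=True)
    template_type = fields.String(
        attribute="template.template_type", dump_only=True
    )
```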
Because we’ll be grouping jobs under their parent contact lists it will
be useful to have a way of showing how many times a contact list has
been used. This will give the right information scent to indicate that
clicking into a contact list is where you go to see its jobs. This means
that the API needs to return a count of jobs for each contact list.
This code feels very non-idiomatic for our API, so suggestions
about how to better architect it are welcome…
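For illustration, a hedged sketch of one way to produce the counts (db, ContactList and Job stand in for the app's real Flask-SQLAlchemy objects):

```python
from sqlalchemy import func


def dao_get_contact_lists_with_job_counts(service_id):
    # One outer-joined, grouped query: each contact list plus the number
    # of jobs created from it (contact lists with no jobs included).
    return (
        db.session.query(ContactList, func.count(Job.id).label("job_count"))
        .outerjoin(Job, Job.contact_list_id == ContactList.id)
        .filter(ContactList.service_id == service_id)
        .group_by(ContactList.id)
        .all()
    )
```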
Rather than showing all jobs that have been ‘copied’ from a contact list
I think it makes more sense to group them under the contact list. This
way it’s easier to see what messages have been sent to a given group of
contacts over time.
Part of this work means the API needs to return only jobs that have been
created from a given contact list, when asked.
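Again as a hedged sketch (model and column names assumed):

```python
def dao_get_jobs_by_contact_list(service_id, contact_list_id):
    # Only jobs created from the given contact list, newest first.
    return (
        Job.query.filter(
            Job.service_id == service_id,
            Job.contact_list_id == contact_list_id,
        )
        .order_by(Job.created_at.desc())
        .all()
    )
```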
Because we won’t be showing uploaded letters individually on the uploads
page any more we need a way of listing them. This should be by printing
day, to match how we’re grouping them on the uploads page.
The response matches a normal `get_notifications` response so we can
reuse the same code in the admin app.
Some teams have started uploading quite a lot of letters (in the
hundreds per week). They’re also uploading CSVs of emails. This means
the uploads page ends up quite jumbled.
This is because:
- there are just a lot of items to scan through
- conceptually it’s a bit odd to have batches of things displayed
alongside individual things on the same page
So instead this commit starts grouping together uploaded letters. It
does this by the date on which we ‘start’ printing them, or in other
words the time at which they can no longer be cancelled.
This feels like a natural grouping, and it matches what we know about
people’s mental models of ‘batches’ and ‘runs’ when talking about
printing.
The code for this is a bit gnarly because:
- timezones
- the print cutoff doesn’t align with the end of a day
- we have to do this in SQL because it wouldn’t be efficient to query
thousands of letters and then do the timezone calculations on them in
Python
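To make that concrete, here's a hedged sketch of the SQL expression (the 17:30 cutoff and the model names are assumptions): shifting London-local time forward by 6.5 hours makes anything created after the cutoff fall on the next day's date.

```python
from sqlalchemy import func, text

# Convert created_at from UTC to Europe/London, shift past the assumed
# 17:30 cutoff, then take the date: that's the letter's printing day.
print_day = func.date(
    func.timezone("Europe/London", func.timezone("utc", Notification.created_at))
    + text("interval '6 hours 30 minutes'")
)

letters_by_print_day = (
    db.session.query(print_day.label("print_day"), func.count(Notification.id))
    .filter(Notification.notification_type == "letter")
    .group_by(print_day)
    .all()
)
```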
The standard way that we indicate that there are more results than can
be returned is by paginating. So even though we don’t intend to paginate
the search results in the admin app, it can still use the presence or
absence of a ‘next’ link to determine whether or not to show a message
about only showing the first 50 results.
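A hedged sketch of the idea (helper names and URL building are placeholders; the admin app only checks whether 'next' is present):

```python
def search_response(query, page=1, page_size=50):
    # Paginate as usual; including a 'next' link only when more results
    # exist lets the client detect truncation without actually paging.
    pagination = query.paginate(page=page, per_page=page_size)
    links = {}
    if pagination.has_next:
        links["next"] = f"?page={page + 1}"  # placeholder URL building
    return {
        "notifications": [n.serialize() for n in pagination.items],
        "links": links,
    }
```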
We're seeing issues with email clients sniffing links, causing them
to expire before the user gets a chance to click on them. Temporarily
disable the expiry while we work on a more permanent solution.
The link will still expire after half an hour, and SMS codes aren't
affected by this change.
If a service has permission to send international letters then it should
tell template preview, so that template preview knows what rule to
apply when it’s validating the address of the letter.
Depends on:
- [ ] https://github.com/alphagov/notifications-template-preview/pull/445
`allow_international_letters` is a new, required argument, so the tests
that make some `Row` objects need to provide that.
There are now 8 possible address columns (7 plus postcode) so the tests
need to expect that. But this won’t have any user-facing impact.
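A hedged sketch of the test change (only allow_international_letters comes from this PR; the other Row arguments are assumptions about its constructor):

```python
row = Row(
    row_dict,
    index=0,
    error_fn=None,
    recipient_column_headers=["address_line_1", "address_line_2", "postcode"],
    placeholders=[],
    template=template,
    allow_international_letters=True,  # the new, required argument
)
```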
For services that have permission to send international letters we
should not reject letters that are addressed to another country. We
should still reject letters that are badly-addressed.