Usage for all services is a platform admin report that groups letters by
postage. We want it to show `europe` and `rest-of-world` letters under a
single category of `international`, so this updates the query to do
that and to order appropriately.
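Roughly what the grouping could look like (model and column names here are illustrative, using SQLAlchemy 1.3-style `case()`):

```python
from sqlalchemy import case, literal

# fold `europe` and `rest-of-world` into one `international` bucket;
# FactBilling and `query` are assumed to exist in the surrounding DAO code
postage_group = case(
    [(FactBilling.postage.in_(["europe", "rest-of-world"]), literal("international"))],
    else_=FactBilling.postage,
).label("postage")

# group and order by the same expression so the two postage values collapse
# into a single `international` row and sort sensibly alongside first/second
query = query.group_by(postage_group).order_by(postage_group)
```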
The task takes a broadcast_message_id and makes a POST to the cbc-proxy.
For now, we hardcode the URL to the notify stub. The stub requires the template
as the admin/api get it, so we use the marshmallow schema to JSON-dump it.
Note - this also required us to tweak the BroadcastMessage.serialize
function so that it converts UUIDs into string ids - flask's jsonify function
does that for free, but requests.post sadly doesn't.
If the request fails (either 4xx or 5xx) we just raise an exception and let
it bubble up for now - in the future we'll add retry logic.
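Roughly the shape of the task (the dao helper, schema name and stub URL below are placeholders, not the real ones):

```python
import requests

CBC_PROXY_STUB_URL = "https://example-notify-stub"  # placeholder - hardcoded stub url for now

def send_broadcast_message(broadcast_message_id):
    broadcast_message = dao_get_broadcast_message_by_id(broadcast_message_id)  # hypothetical dao helper
    payload = {
        # serialize() now returns plain string ids for uuid fields, since requests.post
        # (unlike flask's jsonify) won't serialise UUID objects for us
        "broadcast_message": broadcast_message.serialize(),
        # the stub wants the template in the same shape the admin/api get it
        "template": template_schema.dump(broadcast_message.template).data,
    }
    response = requests.post(CBC_PROXY_STUB_URL, json=payload)
    response.raise_for_status()  # any 4xx/5xx just bubbles up; retry logic comes later
```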
New blueprint `/service/<id>/broadcast-message` with the following
endpoints:
* GET / - get all broadcast messages for a service
* GET /<id> - get a single broadcast message
* POST / - create a new broadcast message
* POST /<id> - update an existing broadcast message's data
* POST /<id>/status - move a broadcast message to a new status
I've kept the regular data update (eg personalisation, start and end
times) separate from the status update, just to keep separation of
concerns a bit more rigid, especially around who can update.
I've included schemas for the three POSTs; they're pretty
straightforward.
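For reference, a minimal sketch of how the blueprint could be wired up (handler names and bodies are illustrative):

```python
from flask import Blueprint, jsonify, request

broadcast_message_blueprint = Blueprint(
    "broadcast_message",
    __name__,
    url_prefix="/service/<uuid:service_id>/broadcast-message",
)

@broadcast_message_blueprint.route("", methods=["GET"])
def get_broadcast_messages_for_service(service_id):
    return jsonify(broadcast_messages=[])  # placeholder body

@broadcast_message_blueprint.route("/<uuid:broadcast_message_id>", methods=["GET"])
def get_broadcast_message(service_id, broadcast_message_id):
    return jsonify({})  # placeholder body

@broadcast_message_blueprint.route("", methods=["POST"])
def create_broadcast_message(service_id):
    return jsonify(request.get_json()), 201  # placeholder body

@broadcast_message_blueprint.route("/<uuid:broadcast_message_id>", methods=["POST"])
def update_broadcast_message(service_id, broadcast_message_id):
    return jsonify(request.get_json())  # placeholder body

@broadcast_message_blueprint.route("/<uuid:broadcast_message_id>/status", methods=["POST"])
def update_broadcast_message_status(service_id, broadcast_message_id):
    return jsonify(request.get_json())  # placeholder body
```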
Make it clear that this is for the public API, and that we shouldn't add
fields to it without considering the impact.
Also add the broadcast_messages relationship on service and template to
the exclude list in the marshmallow schemas, so it's not included elsewhere.
New broadcast_message table. It contains a whole bunch of timestamps, a
couple of user ids recording who created and who approved it, some
personalisation and a template id/version. `areas` is a JSONB field - it
is expected that this will be an array, for example `["england",
"wales"]`.
The intended valid state transitions are as follows, from an initial status
of draft:
* draft -> pending-approval, rejected
* pending-approval -> broadcasting, rejected, (draft maybe)
* broadcasting -> completed, cancelled
rejected, completed, cancelled and technical-failure are terminating
states.
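A sketch of how those transitions might be enforced (not the exact implementation):

```python
# terminating states map to an empty set, so nothing can move out of them
ALLOWED_STATUS_TRANSITIONS = {
    "draft": {"pending-approval", "rejected"},
    "pending-approval": {"broadcasting", "rejected", "draft"},  # "draft" still undecided
    "broadcasting": {"completed", "cancelled"},
    "rejected": set(),
    "completed": set(),
    "cancelled": set(),
    "technical-failure": set(),
}

def validate_status_transition(current_status, new_status):
    if new_status not in ALLOWED_STATUS_TRANSITIONS[current_status]:
        raise ValueError(
            f"Cannot move broadcast message from {current_status} to {new_status}"
        )
```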
Also copy across the enum-switching code from migration 0060 (adding
letter type).
I had to go through the code and change a few places where we filter on
template types. I specifically didn't worry about jobs or notifications.
Also, add broadcast_data - a JSON column that might contain arbitrary
broadcast data that we'll figure out as we go. We don't know what it'll
look like, but it should be returned by the API.
Years ago we started to implement a way to schedule a notification. We hit a problem but we never came up with a good solution and the feature never made it back to the top of the priority list.
This PR removes the code for scheduled_for. There will be another PR to drop the scheduled_notifications table and remove the schedule_notifications service permission.
Unfortunately, I don't think we can remove the `scheduled_for` attribute from the notification.serialized method because our clients might fail if something is missing. For now I have left it in but defaulted the value to None.
The `notifications`, `notification_history`, `templates` and `templates_history`
tables all had a check constraint on the postage column which specified
that the postage had to be `first` or `second` if the notification or
template was a letter. We now have two more options for postage -
`europe` and `rest-of-world`.
It's not possible to alter a check constraint, so the constraints have
to be dropped then recreated. We are not recreating the constraint on
the `notification_history` table since values here are always copied
from the `notifications` table.
The constraints get added as `NOT VALID` at first - this stage will lock
the tables, so updating the `notifications` table and the `templates` and
`templates_history` tables is done in separate migrations so that we don't lock
all the tables at the same time. In a third migration we then run
`VALIDATE CONSTRAINT` for all tables - this will lock a row at a time,
not the whole table.
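The pattern looks roughly like this (table and constraint names are illustrative, not the exact migrations):

```python
from alembic import op

# migrations 1 and 2: drop the old constraint, then re-add it as NOT VALID so
# existing rows aren't scanned while the lock is held
def upgrade():
    op.execute("ALTER TABLE templates DROP CONSTRAINT chk_templates_postage")
    op.execute("""
        ALTER TABLE templates
        ADD CONSTRAINT chk_templates_postage
        CHECK (template_type != 'letter' OR postage IN ('first', 'second', 'europe', 'rest-of-world'))
        NOT VALID
    """)

# migration 3: validate the new constraints, which checks the existing rows
# without holding a long lock on the whole table
def upgrade_validate():
    op.execute("ALTER TABLE templates VALIDATE CONSTRAINT chk_templates_postage")
```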
Because we’ll be grouping jobs under their parent contact lists it will
be useful to have a way of showing how many times a contact list has
been used. This will give the right information scent to indicate that
clicking into a contact list is where you go to see its jobs. This means
that the API needs to return a count of jobs for each contact list.
Putting this code in feels very non-idiomatic for our API, so suggestions
about how to better architect it are welcome…
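One way it could be shaped (model names assumed) - an outer join onto jobs, which leaves the DAO returning tuples of contact list plus count:

```python
from sqlalchemy import func

contact_lists_with_counts = (
    db.session.query(
        ServiceContactList,
        func.count(Job.id).label("job_count"),
    )
    .outerjoin(Job, Job.contact_list_id == ServiceContactList.id)
    .filter(ServiceContactList.service_id == service_id)
    .group_by(ServiceContactList.id)
    .all()
)
```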
We’ve added some new properties to the templates in utils that we can
use instead of doing weird things like
`WithSubjectTemplate.__str__(another_instance)`
Text messages have placeholders in their body.
Emails have them in their subject line too.
Letters have them in their body, subject line and contact block.
We were only looking in the body and subject when processing a job,
so the thing assembling the letter was not looking at all the
CSV columns it needed to, because it hadn't been told about any
placeholders in the contact block.
Fixing this means always making sure we use the correct `Template`
instance for the type of template we’re dealing with. Which we were
already doing in a different part of the codebase. So it makes sense to
reuse that.
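Roughly the idea of the helper being reused (the class mapping here is illustrative; for letters the contact block also has to be passed through so its placeholders get picked up):

```python
from notifications_utils.template import (
    LetterPrintTemplate,
    PlainTextEmailTemplate,
    SMSMessageTemplate,
)

def get_template_instance(template_dict, values=None):
    # pick the template class that matches the template type, so
    # `.placeholders` covers everything that type can contain
    template_class = {
        "sms": SMSMessageTemplate,
        "email": PlainTextEmailTemplate,
        "letter": LetterPrintTemplate,
    }[template_dict["template_type"]]
    return template_class(template_dict, values)
```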
Turns out we fixed the same bug for email subjects over 3 years ago:
3ed97151ee
So that we keep a record of who first uploaded a list, it's better to archive
a list than to completely delete it.
The list in the database doesn’t contain any recipient info so this
isn’t a change to what data we’re retaining.
This means updating the endpoints that get contact lists to exclude ones
that are archived.
- Table to store metadata for the emergency contact list for a service.
- Endpoint for fetching the contact lists for a service.
- Endpoint for saving a contact list for a service.
The list will be stored in S3. The service will then be able to send emergency announcements to staff.
The queries all return lots of columns, but each query has columns it
doesn't care about: emails don't have billable units or an international
flag, letters don't have an international flag, sms don't have a page count,
and so on. Additionally, the query was grouping on things that never change,
like service id and notification type.
By making all of these literals (as in `select 1 as foo`) we see times
that are over 50% quicker for the gov.uk email service.
Note: one of the tests changed because previously it involved emails and
sms with statuses that they could never have (eg returned-letter).
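As an illustration of what making them literals looks like (model names assumed, not the real query):

```python
from sqlalchemy import func, literal

# emails never have billable units or an international flag, so select constants
# instead of the real columns - they then drop out of the GROUP BY entirely
email_rows = db.session.query(
    Notification.status,
    func.count().label("count"),
    literal(0).label("billable_units"),
    literal(False).label("international"),
).filter(
    Notification.notification_type == "email",
).group_by(
    Notification.status,
)
```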
The intention is for this to be null, 1, or many, based on how many
documents were linked to within the message. It's a nullable column, so
creating it doesn't require a lengthy ACCESS EXCLUSIVE lock on the table.
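A sketch of what the migration could look like (the column name here is assumed):

```python
import sqlalchemy as sa
from alembic import op

def upgrade():
    # a nullable column with no server default is a quick metadata-only change,
    # so the ACCESS EXCLUSIVE lock is only held momentarily
    op.add_column("notifications", sa.Column("document_download_count", sa.Integer(), nullable=True))

def downgrade():
    op.drop_column("notifications", "document_download_count")
```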
This is to bring it in line with other serialized dates in the User
model, like logged_in_at and password_changed_at.
Also get rid of the check for whether password_changed_at has a value, as
it is a non-nullable column and so always has one.
Also set a default value for email_access_validated_at, to bring
it in line with other non-nullable columns,
and update it when users have to use their email to interact with
the Notify service.
Initial population:
If the user has email_auth, set email_access_validated_at to logged_in_at.
If the user has sms_auth, set it to created_at.
Then:
Update the email_access_validated_at date when:
- user with email_auth logs in
- new user is created
- user resets password when logged out, meaning we send them an
email with a link they have to click to reset their password.
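The initial population could be a data migration along these lines (the SQL is illustrative):

```python
from alembic import op

def upgrade():
    op.execute("""
        UPDATE users
        SET email_access_validated_at = logged_in_at
        WHERE auth_type = 'email_auth'
    """)
    # email_auth users who have never logged in would presumably need a
    # fallback to created_at as well
    op.execute("""
        UPDATE users
        SET email_access_validated_at = created_at
        WHERE auth_type = 'sms_auth'
    """)
```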
Now that the encryption module has been moved from this app to utils, we
can remove it from here (along with its tests) and import it from utils
instead. This also renames the `encryption.py` file to `hashing.py`,
since it no longer contains the encryption class.
- Check that the right keys are in the new history rows
- Improve the model and get rid of the old revision version
- Add the updated migration file
- Test the data when inserting into inbound sms history
SMS and emails may be marked as `NOTIFICATION_PENDING`. These will be
billed as they will have been sent to the provider and will eventually
turn to a final state such as `NOTIFICATION_DELIVERED` or
`NOTIFICATION_PERMANENT_FAILURE`.
This change will fix a discrepancy on the billing page where the number
of messages being billed was less than the number of messages reported
as sent on a service's dashboard when some of those messages were in a
pending state.
In reality, I don't think this bug would have had any lasting effect of
incorrect billing, as messages would not stay in the pending state for
long and the billing calculations would happen after that point.
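Illustratively, the fix amounts to counting pending messages as billable when summing what a service sent (the exact list of statuses here is a sketch, not the real constant):

```python
BILLABLE_STATUSES = [
    "sending",
    "sent",
    "delivered",
    "pending",  # newly included - these have already gone to the provider
    "temporary-failure",
    "permanent-failure",
]

billable_notifications = query.filter(Notification.status.in_(BILLABLE_STATUSES))
```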
The table will contain notification ids for services that have returned letters. This will make it easy to query the data in Notification_history since we can join on the primary key.
Previously we checked the notifications table and, if the results were
zero, checked the notification history table to see if there's data
in there. Even when we know the data isn't in notifications, we're still
checking it. These queries take half a second per service, and we're
doing at least ten for each of the five thousand services we have in
Notify. Most of these services have no data in either table for any
given day, and we can reduce the number of queries we do by only
checking one table.
Check the data retention for a service and then, if the date is older
than the retention, get the data from the history table.
NOTE: This requires that the delete tasks haven't run yet for the day!
If your retention is three days, this will look in the Notification
table for data from three days ago - expecting that shortly after the
task finishes, we'll delete that data.
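Roughly the shape of the change (helper names are illustrative):

```python
from datetime import datetime, timedelta

def get_notification_model_for_day(service, day):
    retention_days = fetch_data_retention_for_service(service.id)  # hypothetical helper
    oldest_day_still_in_notifications = (datetime.utcnow() - timedelta(days=retention_days)).date()
    if day < oldest_day_still_in_notifications:
        # older than the retention: the rows have been moved out of `notifications`
        return NotificationHistory
    # within retention: still in `notifications`, assuming today's delete task hasn't run yet
    return Notification
```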
We have a datetime dimension table that we no longer use and that
I believe can be dropped.
First step is to remove the model and release the change. The next
step will be to then add a database migration to drop the actual
table. I believe we need to do it in this order and it can't
be done as a single PR.
This is only applicable when getting a single notification by id. It's
also ignored if the notification isn't a letter.
Otherwise, it overwrites the 'body' field in the response.
If the notification's pdf is available, it returns that, base64
encoded. If the pdf is not available, it returns an empty string.
The pdf won't be available if the notification's status is:
* pending-virus-scan
* virus-scan-failed
* validation-failed
* technical-failure
The pdf will be retrieved from the correct S3 bucket based on its type.
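A sketch of how the pdf could end up in the response (helper names are illustrative):

```python
import base64

from botocore.exceptions import ClientError

STATUSES_WITH_NO_PDF = {
    "pending-virus-scan",
    "virus-scan-failed",
    "validation-failed",
    "technical-failure",
}

def get_letter_pdf_content(notification):
    if notification.status in STATUSES_WITH_NO_PDF:
        return ""
    try:
        # hypothetical helper that picks the right bucket based on the notification
        pdf_bytes = get_letter_pdf_from_s3(notification)
    except ClientError:
        # assumption: if the object somehow isn't there yet, also return an empty string
        return ""
    return base64.b64encode(pdf_bytes).decode("utf-8")
```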
This will let us do some filtering of this list in the admin. It’s
better to do it there because it means the admin can use the same cached
response from Redis each time.
Previously we were doing it based on their email address. This will also
apply it if they self-select as a GP surgery, even if they don’t have an
NHS email address.
Although their allowances are the same as what we call `nhs_local` it
makes more sense to store them separately because:
- we already present them as two separate choices to the user
- we may want to handle them differently in the future, eg in terms of
what branding choices are available to them
This table is no longer used or referenced in the code.
We can remove this mapping table now that the organisation to service relationship is handled by the foreign key in Services.
This is the second commit in the series to add organisation_id to Service.
- Data migration to update services.organisation_id from the data in organisation_to_service.
(The rollback will lose any updates to organisation unless the script is updated to set organisation_to_service from service.organisation_id.)
- Update Service.organisation relationship to a ForeignKey relationship to Organisation.
- Update Organisation.services to a backref relationship to Service.
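A sketch of what the data migration could look like (illustrative SQL, not the real revision):

```python
from alembic import op

def upgrade():
    op.execute("""
        UPDATE services
        SET organisation_id = organisation_to_service.organisation_id
        FROM organisation_to_service
        WHERE services.id = organisation_to_service.service_id
    """)

def downgrade():
    # note: this loses any organisation changes made after the upgrade, unless it's
    # updated to repopulate organisation_to_service from services.organisation_id
    op.execute("UPDATE services SET organisation_id = null")
```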