This was added five years ago but never used. If we want to bring back
variable rates per client, we might as well get a fresh start, since a lot
has changed since then.
Changes:
53.0.0
---
* `notifications_utils.columns.Columns` has moved to
`notifications_utils.insensitive_dict.InsensitiveDict`
* `notifications_utils.columns.Rows` has moved to
`notifications_utils.recipients.Rows`
* `notifications_utils.columns.Cell` has moved to
`notifications_utils.recipients.Cell`
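For example, code importing from the old locations would update roughly like this (a sketch based on the moves listed above):

```python
# Before 53.0.0
from notifications_utils.columns import Columns, Rows, Cell

# From 53.0.0 onwards
from notifications_utils.insensitive_dict import InsensitiveDict  # replaces Columns
from notifications_utils.recipients import Rows, Cell
```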
52.0.0
---
* Deprecate the following unused `redis_client` functions:
- `redis_client.increment_hash_value`
- `redis_client.decrement_hash_value`
- `redis_client.get_all_from_hash`
- `redis_client.set_hash_and_expire`
- `redis_client.expire`
51.3.1
---
* Bump govuk-bank-holidays to cache holidays for next year.
Add a check constraint to the migration script that created_by_id must
not be null unless created_by_api_key_id is not null. It is already in
the models file.
Also remove the check constraint for cancelled_by_id from the models, as
this field would only be filled for broadcasts with cancelled status.
Also add some spacing in that migration script so it is easier to read.
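A minimal sketch of the kind of check constraint described above, assuming illustrative table and constraint names (the real migration may differ):

```python
from alembic import op


def upgrade():
    op.create_check_constraint(
        "ck_broadcast_message_created_by",  # constraint name is an assumption
        "broadcast_message",                # table name is an assumption
        "created_by_id IS NOT NULL OR created_by_api_key_id IS NOT NULL",
    )


def downgrade():
    op.drop_constraint("ck_broadcast_message_created_by", "broadcast_message")
```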
We don’t store everything that comes in the CAP XML when someone creates
a broadcast via the API.
One thing we do store is `<identifier>` (in a column called `reference`)
which is a unique (to the external system) identifier for the broadcast.
We show this in the front end instead of the template name, because
broadcasts created from the API don’t use templates.
However this ID isn’t very friendly – the Environment Agency just supply
a UUID.
The Environment Agency also populate the `<event>` field with some human
readable text, for example:
> 013 Issue Severe Flood Warning EA
(013 is an area code which will be meaningful to the Flood Warning
Service team)
We should show this in the UI instead of the reference. The first step
towards this is storing it in the database and returning it in the REST
endpoints.
Later we can have the admin app prefer `cap_event` over `reference`,
where `cap_event` is present.
We can’t backfill this data because we don’t keep a copy of the original
XML.
It seems `<event>` is a mandatory property of `<info>`, so we don't
need to worry about the field being missing (`<info>` is optional in
CAP, but we require it because it contains things like the areas, which
we need in order to send out the broadcast).
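As a rough illustration (not the actual parsing code in the API), the two fields could be pulled out of the incoming CAP XML like this, assuming the standard CAP 1.2 namespace and a hypothetical helper name:

```python
from xml.etree import ElementTree

CAP_NS = {"cap": "urn:oasis:names:tc:emergency:cap:1.2"}


def extract_reference_and_event(cap_xml: str):
    alert = ElementTree.fromstring(cap_xml)
    # <identifier> is what we already store in the `reference` column
    reference = alert.findtext("cap:identifier", namespaces=CAP_NS)
    # <event> is the new human-readable value, e.g. "013 Issue Severe Flood Warning EA"
    cap_event = alert.findtext("cap:info/cap:event", namespaces=CAP_NS)
    return reference, cap_event
```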
***
https://www.pivotaltracker.com/story/show/176927060
Make sure timestamps returned from the API are always consistent.
The only place in models where we're serializing a BST timestamp is now
the Notification.serialize_for_csv method, which at least is a bit
different as it is user-facing (it also returns a formatted,
human-readable notification_status, for example).
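A minimal sketch of the consistency goal, assuming timestamps are stored as naive UTC datetimes (the helper name is illustrative): everything API-facing is serialized in the same UTC ISO 8601 form, and only the user-facing CSV path formats in local time.

```python
from datetime import datetime, timezone


def serialize_timestamp(dt: datetime) -> str:
    # stored values are assumed to be naive UTC, so attach UTC before formatting
    return dt.replace(tzinfo=timezone.utc).isoformat()
```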
This is so we can distinguish custom broadcasts in the Admin app
[1]. I've also extended the POST test for custom broadcasts to
check we're correctly reading data for "names", as this wasn't
being tested previously.
[1]: 411fda81c0
This is necessary until:
- The Admin app is using the new "areas(_2)" format to store and
retrieve data.
- We've migrated all existing broadcast messages to use the new
format.
Note that "areas" / "ids" isn't actually used for anything except
printing out the PagerDuty message - it's not sent to the proxy [1].
[1]: 6edc6c70aa/app/celery/broadcast_message_tasks.py (L190-L193)
Currently we have:
- An "areas" column in the DB that stores a JSON blob.
- An "areas" field inside the "areas" JSON that stores area IDs.
- Each field has to be manually copied into the JSON column.
We want to move to:
- An "areas" column in the DB (unchanged).
- An "ids" field inside the "areas" JSON (to replace "areas").
- The Admin app sending other data inside an "areas" JSON field.
The API design for areas is confusing and difficult to extend.
Here we duplicate the current API functionality using an "areas_2"
field. Once the Admin app is using this field, we'll be able to
rename it to just "areas", which is where we want to get to.
In the next commits we'll build on this to support the migration
from "areas"."areas" to "areas"."ids".
We want to have new permissions which will be used specifically for
broadcasts:
- `create_broadcasts`
- `approve_broadcasts`
- `reject_broadcasts`
- `cancel_broadcasts`
Cancel and reject will always go together, but having separate database
permissions makes things easier to change in the future.
The permission column of the permissions table is an enum. We can add values
in the Alembic upgrade script, but removing individual values from an
enum is not supported by Postgres. To remove values, we have to recreate
the enum with only the old values.
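A hedged sketch of how the upgrade could add the new values (the enum type name is an assumption, and on older Postgres versions `ADD VALUE` may need to run outside a transaction):

```python
from alembic import op

NEW_PERMISSIONS = (
    "create_broadcasts",
    "approve_broadcasts",
    "reject_broadcasts",
    "cancel_broadcasts",
)


def upgrade():
    for permission in NEW_PERMISSIONS:
        # adding enum values is supported; removing them is not
        op.execute(f"ALTER TYPE permission_types ADD VALUE IF NOT EXISTS '{permission}'")
```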
It's not a big deal if a user is no longer eligible to register a
security key, so we may as well let them continue using it. This
avoids putting them in a limbo state if we don't immediately change
their auth type when they're no longer eligible to use the feature.
Currently we have some data-driven roles to say who can use this
feature. Adding a flag in the API means we can avoid API calls in
the Admin app to determine the same.
Allowing members of the GOV.UK Notify service to use the feature
is a workaround, so we can avoid making someone a Platform Admin
before they've protected their account with it.
It looks like we were allowing broadcasts to transition from draft to
broadcasting in one go. This isn't valid now: it should go draft, then
pending approval, and then broadcasting.
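A minimal sketch of the transition being enforced (the status strings are illustrative; the real code may use different names and include more statuses):

```python
ALLOWED_TRANSITIONS = {
    "draft": {"pending-approval"},
    "pending-approval": {"broadcasting"},
}


def can_transition(current_status: str, new_status: str) -> bool:
    # draft -> broadcasting in one go is no longer allowed
    return new_status in ALLOWED_TRANSITIONS.get(current_status, set())
```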
It looks like this was a leftover bit of support in our code for when we
were building stuff out and is no longer needed.
It's possible for a letter to pass our validation but for our print provider to be unable to print it. In this case the letter will be marked as a permanent failure. This typically happens with precompiled letters.
- Update the Notification and NotificationHistory models to reflect the database.
- Update data types, remove some indexes and add others.
Why?
After running the `flask db migrate` command there are many deltas: we did some work to update the notification and notification_history tables, but the SQLAlchemy models were not updated to reflect those changes. This PR cleans up all those deltas.
There are still some remaining differences, but we can look at those in another PR.
Also fix tests:
First add an init file so the tests are found correctly, then update
the tests now that we've stopped serialising webauthn
registration_response.
Added some simple validation to the delete endpoint for sanity, but
generally my assumption is that more validation will happen on the admin
side.
Notably, I'm not checking whether the credentials are duplicated, nor is
there a uniqueness constraint in the database - I'm not sure the
credential blob will always be reliably equivalent, and I believe the
browser should take care of dupes.
This is so we can be in line with what the admin handles, keep it simple
on the API side, and do as little manipulation of binary data as possible.
### Minor changes
* id is a UUID we can use for referencing within Notify. No relation to
the key itself.
* name is a user-viewable name that can be set/edited
* fix updated_at to have onupdate, not default
### Simplify the webauthn data
credential_data is the data we store about an authenticator that we'll
use to identify the key when logging in. It includes the credential_id,
the public_key, and the aaguid (which identifies the authenticator
make/model).
registration_response is the data containing audit information - in the
future we can use this to ensure that the authenticators used are of
high quality.
Both of these fields are CBOR (a kind of binary JSON), encoded in
base64 so that they can be embedded within our regular JSON API
endpoints. We don't anticipate the API ever needing to interact with
this data directly.
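A rough illustration of the encoding idea, using the cbor2 library (these helpers are assumptions for the example, not the actual functions in the codebase):

```python
import base64

import cbor2


def encode_for_json(credential_data: dict) -> str:
    # CBOR bytes -> base64 string that can sit inside a JSON payload
    return base64.b64encode(cbor2.dumps(credential_data)).decode("ascii")


def decode_from_json(encoded: str) -> dict:
    # base64 string from the JSON payload -> original CBOR structure
    return cbor2.loads(base64.b64decode(encoded))
```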
We want to replace the value `None` for
service.allowed_broadcast_provider with the value of "all". As a first
step, we need to allow both values. Once notifications-admin has been
changed to pass through "all" and all the data in the database has been
updated, we can update the code to stop supporting both values.
This adds a type table for broadcast providers, which is the pattern we
follow with our models (e.g. we have a `broadcast_channel_types` table).
As well as the four providers, the migration populates it with `all`
which is the value that will replace `null` in a later change.
It should be safe to add the foreign key constraint to
`service_broadcast_settings` in the same migration, since the column is
still nullable and we don't have any data in that column that is not in
the types table.
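A hedged sketch of the kind of migration described above; the table, column and provider names are assumptions based on the text, not the actual migration:

```python
import sqlalchemy as sa
from alembic import op

PROVIDER_TYPES = ["ee", "o2", "three", "vodafone", "all"]  # "all" will later replace null


def upgrade():
    table = op.create_table(
        "broadcast_provider_types",
        sa.Column("name", sa.String(), primary_key=True),
    )
    op.bulk_insert(table, [{"name": name} for name in PROVIDER_TYPES])

    # safe while the column is still nullable and all existing values are in the types table
    op.create_foreign_key(
        "service_broadcast_settings_provider_fkey",
        "service_broadcast_settings",
        "broadcast_provider_types",
        ["provider"],
        ["name"],
    )
```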
- sqlalchemy.sql.expression.case must include an else statement.
- clearly define the list of columns for the inbound_sms_history insert; getting the list from InboundSmsHistory.__table__.c was causing data type errors.
- remove relationships when not needed; the foreign key relationship is established in the creation of the column. This will get rid of the warnings referenced here: http://sqlalche.me/e/14/qzyx.
- update queries now that the user relationship in the ServiceUser db model has been removed.
- move the check that a template is archived to the view instead of the dao method. The check was clearing the session before the version history could be done.
Deleting notifications in the night tasks still needs to be
investigated. The raw sql is causing an error.
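For the first bullet, an illustrative use of case() with an explicit else_ (the model and values are made up for the example):

```python
from sqlalchemy import Column, String, case
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Notification(Base):
    __tablename__ = "notifications"
    id = Column(String, primary_key=True)
    status = Column(String)


# Without else_ the expression would fall back to NULL; the change makes the
# fallback explicit.
display_status = case(
    (Notification.status == "created", "pending"),
    else_=Notification.status,
)
```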
This is an extra precaution for the table to ensure data integrity. Since we only update/insert the data using the annual_billing_dao methods, the integrity is intact. I've checked the data on preview, staging and prod; there are no violations of this unique key.
The performance platform is going away soon. To clarify: the only statistic we don't have in our database that we can query efficiently is the processing time. Any queries on notification_history are too inefficient to use on a web page.
Processing time = the total number of normal/team emails and text messages per whole day, plus the number of those messages that have gone from created to sending within 10 seconds. We can then easily calculate the percentage of messages that were marked as sending within 10 seconds.
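A rough illustration of the processing-time calculation described above (this is plain Python over hypothetical (created_at, sent_at) pairs, not the actual query used by the API):

```python
from datetime import timedelta


def processing_time_percentage(notifications):
    """notifications: iterable of (created_at, sent_at) datetime pairs for one whole day."""
    total = 0
    under_10_seconds = 0
    for created_at, sent_at in notifications:
        total += 1
        if sent_at - created_at <= timedelta(seconds=10):
            under_10_seconds += 1
    # percentage of messages marked as sending within 10 seconds
    return 100 * under_10_seconds / total if total else None
```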
Now that every service has a row in the service_broadcast_settings
table, we want all our tests to use the `sample_broadcast_service`
fixture, as this ensures it has a row in that table and is correctly
representative of what a real broadcast service looks like.
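A hypothetical illustration of the convention (the test body is made up; the point is to take the fixture rather than constructing a broadcast service by hand):

```python
def test_broadcast_endpoint_uses_real_looking_service(sample_broadcast_service):
    # the fixture guarantees a matching row in service_broadcast_settings
    assert sample_broadcast_service.id is not None
```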