Add a check constraint to the migration script that created_by_id must not be null unless created_by_api_key_id is not null. It is already in the models file.
Also remove the check constraint for cancelled_by_id from the models, as this field will only be filled for broadcasts with cancelled status.
Also add some spacing in that migration script so it is easier to read.
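As a rough sketch, the constraint in the Alembic migration might look something like this (the table and constraint names here are assumptions, not the real ones):

```python
from alembic import op


def upgrade():
    # Hypothetical constraint/table names: the real ones may differ.
    op.create_check_constraint(
        "ck_broadcast_message_created_by",
        "broadcast_message",
        "created_by_id IS NOT NULL OR created_by_api_key_id IS NOT NULL",
    )


def downgrade():
    op.drop_constraint("ck_broadcast_message_created_by", "broadcast_message")
```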
We don’t store everything that comes in the CAP XML when someone creates
a broadcast via the API.
One thing we do store is `<identifier>` (in a column called `reference`)
which is a unique (to the external system) identifier for the broadcast.
We show this in the front end instead of the template name, because
broadcasts created from the API don’t use templates.
However this ID isn’t very friendly – the Environment Agency just supply
a UUID.
The Environment Agency also populate the `<event>` field with some human
readable text, for example:
> 013 Issue Severe Flood Warning EA
(013 is an area code which will be meaningful to the Flood Warning
Service team)
We should show this in the UI instead of the reference. The first step
towards this is storing it in the database and returning it in the REST
endpoints.
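A minimal sketch of the kind of migration that would add the new column, assuming the table is called `broadcast_message` and the column `cap_event`:

```python
from alembic import op
import sqlalchemy as sa


def upgrade():
    # Nullable, because existing broadcasts can't be backfilled (see below).
    op.add_column("broadcast_message", sa.Column("cap_event", sa.String, nullable=True))


def downgrade():
    op.drop_column("broadcast_message", "cap_event")
```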
Later we can have the admin app prefer `cap_event` over `reference`,
where `cap_event` is present.
We can’t backfill this data because we don’t keep a copy of the original
XML.
Seems like `<event>` is a mandatory property of `<info>`, so we don’t
need to worry about the field being missing (`<info>` is optional in
CAP but we require it because it contains stuff like the areas which
we need in order to send out the broadcast).
***
https://www.pivotaltracker.com/story/show/176927060
The broadcast user permissions are changing, so to avoid confusion and
permissions which exist in the database but don't display on the
frontend we are going to remove all existing permissions for users of
broadcast services. This also updates the permissions of invited users
who are still pending.
The exception to this is the `view_activity` permission, which we always
add for broadcast users even if they have no other permissions.
(aad017a184/app/main/forms.py (L1043))
`DROP INDEX CONCURRENTLY` drops the index without locking out concurrent selects, inserts, updates and deletes on the index's table, in this case `notifications`.
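In an Alembic migration this could look something like the sketch below (the index name is illustrative; the autocommit block is needed because `DROP INDEX CONCURRENTLY` can't run inside a transaction):

```python
from alembic import op


def upgrade():
    # DROP INDEX CONCURRENTLY can't run inside a transaction block,
    # so step out of Alembic's transaction first.
    with op.get_context().autocommit_block():
        op.execute("DROP INDEX CONCURRENTLY IF EXISTS ix_notifications_created_at")
```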
We want to have new permissions which will be used specifically for
broadcasts:
- `create_broadcasts`
- `approve_broadcasts`
- `reject_broadcasts`
- `cancel_broadcasts`
Cancel and reject will always go together, but having separate database
permissions makes things easier to change in the future.
The permission column of the permissions table is an enum. We can add values
in the alembic upgrade script, but removing individual values from an
enum is not supported by Postgres. To remove values, we have to recreate
the enum with the old values.
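A rough sketch of adding the values in the upgrade script, assuming the enum type is called `permission_types` (the real type name may differ):

```python
from alembic import op

NEW_PERMISSIONS = ["create_broadcasts", "approve_broadcasts", "reject_broadcasts", "cancel_broadcasts"]


def upgrade():
    # ALTER TYPE ... ADD VALUE can't run inside a transaction block on
    # older Postgres versions, hence the autocommit block.
    with op.get_context().autocommit_block():
        for permission in NEW_PERMISSIONS:
            op.execute(f"ALTER TYPE permission_types ADD VALUE IF NOT EXISTS '{permission}'")


def downgrade():
    # Postgres can't drop individual enum values: downgrading means
    # recreating the type with only the old values and swapping it in.
    pass
```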
This keeps us in line with what the admin handles, keeps things simple on the API side, and means we do as little manipulation of binary data as possible.
### Minor changes
* `id` is a UUID we can use for referencing within Notify. No relation to the key itself.
* `name` is a user-viewable name that can be set/edited
* fix `updated_at` to have `onupdate`, not `default`
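A small illustration of the `onupdate` point, using a made-up minimal model rather than the real one:

```python
from datetime import datetime

from sqlalchemy import Column, DateTime, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class WebauthnCredential(Base):  # made-up minimal model, not the real one
    __tablename__ = "webauthn_credential"

    id = Column(String, primary_key=True)
    # onupdate refreshes the timestamp every time the row is updated;
    # a default would only set it when the row is first inserted.
    updated_at = Column(DateTime, onupdate=datetime.utcnow)
```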
### Simplify the webauthn data
`credential_data` is the data we store about an authenticator that we'll use to identify the key when logging in. It includes the `credential_id`, the `public_key`, and the `aaguid` (which identifies the authenticator make/model).
`registration_response` is the data containing audit information - in the future we can use this to ensure that the authenticators used are of high quality.
Both of these fields are CBOR (a kind of binary JSON), encoded in base64 so that they can be embedded within our regular JSON API endpoints. We don't anticipate the API ever needing to interact with this data directly.
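To illustrate the double encoding (using the `cbor2` library and made-up values purely as an example; the real code may use a different CBOR implementation):

```python
import base64

import cbor2  # CBOR: a compact binary serialisation, roughly "binary JSON"

# Made-up values standing in for the real webauthn registration data.
credential_data = {"credential_id": b"\x01\x02", "public_key": b"\x03\x04", "aaguid": b"\x05" * 16}

# CBOR-encode, then base64-encode, so the binary blob can sit inside our JSON API payloads.
encoded = base64.b64encode(cbor2.dumps(credential_data)).decode("ascii")

# Decoding simply reverses the two steps.
assert cbor2.loads(base64.b64decode(encoded)) == credential_data
```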
This adds a type table for broadcast providers, which is the pattern we
follow with our models (e.g. we have a `broadcast_channel_types` table).
As well as the four providers, the migration populates it with `all`
which is the value that will replace `null` in a later change.
It should be safe to add the foreign key constraint to the
`service_broadcast_settings` in the same migration since the column is
still nullable and we don't have data in that column that is not in the
types table.
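A sketch of what that migration could look like, with assumed provider values and table/column names:

```python
from alembic import op
import sqlalchemy as sa

# Provider values and table/column names here are assumptions for illustration.
PROVIDERS = ["ee", "o2", "three", "vodafone", "all"]


def upgrade():
    provider_types = op.create_table(
        "broadcast_provider_types",
        sa.Column("name", sa.String, primary_key=True),
    )
    op.bulk_insert(provider_types, [{"name": name} for name in PROVIDERS])

    # Safe to add in the same migration: the column is still nullable and
    # every existing value already appears in the new types table.
    op.create_foreign_key(
        "fk_service_broadcast_settings_provider",
        "service_broadcast_settings",
        "broadcast_provider_types",
        ["provider"],
        ["name"],
    )
```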
This is an extra precaution for the table to ensure data integrity. Since we only update/insert the data using the annual_billing_dao methods the integrity is intact. I've checked the data on preview, staging and prod and there are no violations of this unique key.
The performance platform is going away soon. The only statistic we don't already have in our database in a form we can query efficiently is the processing time: any queries on notification_history are too inefficient to use on a web page.
Processing time = the total number of normal/team emails and text messages per whole day, together with the number of those that went from created to sending within 10 seconds. From these two figures we can easily calculate the percentage of messages that were marked as sending within 10 seconds.
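For example, with made-up figures for a single day:

```python
# Made-up figures for one day.
messages_total = 12_000              # normal/team emails and text messages sent
messages_within_10_seconds = 11_400  # went from created to sending within 10s

percentage = 100 * messages_within_10_seconds / messages_total
print(f"{percentage:.1f}% marked as sending within 10 seconds")  # 95.0%
```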
We currently have broadcast services that have the
'broadcast' service permission in the DB but don't have a corresponding
row in the service_broadcast_settings table. It's important to give them
a row in this table because this will control which broadcast channel
their messages will go out on. All broadcast services will be expected
to have this row so we can then remove hardcoded defaults from our code
that currently set what channel a message should go out on.
By the time this gets deployed we will have released https://github.com/alphagov/notifications-admin/pull/3794, which means that when a service's setting is changed via that form, the service gets the corresponding row in the service_broadcast_settings table. However, we don't want to ask every developer to go and fill in this form for every service on their local dev environment, and also do the same for every service we have in preview, staging and production. Therefore a migration is the best way to backfill the data.
Note, I made the decision that a service that is in trial mode will be
given the 'severe' channel. This is because this is what most trial mode
services should have. Only the MNOs would ever really have a trial mode
service on the test channel. And given they are in trial mode, it is not
too risky to give them the 'severe' channel.
For services that are live, although there are no services in production that have the broadcast permission and are live, it is not worth taking the risk of accidentally setting a service to send an alert out on the 'severe' channel. We play it safe by choosing 'test'.
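A rough sketch of how such a backfill could work, assuming trial mode corresponds to `restricted = true` and using illustrative table/column names:

```python
from alembic import op


def upgrade():
    # Assumed schema: trial mode = restricted, and a 'broadcast' service permission.
    op.execute(
        """
        INSERT INTO service_broadcast_settings (service_id, channel, created_at)
        SELECT s.id,
               CASE WHEN s.restricted THEN 'severe' ELSE 'test' END,
               now()
        FROM services s
        JOIN service_permissions p ON p.service_id = s.id AND p.permission = 'broadcast'
        WHERE NOT EXISTS (
            SELECT 1 FROM service_broadcast_settings sbs WHERE sbs.service_id = s.id
        )
        """
    )
```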
Update `send_user_2fa_code` to send from a number when the recipient is international
Update `update_user_attribute` to send from a number when the recipient is international
This means we will have a much easier way of knowing what the settings
are for a broadcast service.
Note, we can move data directly into the newer table as there is nothing in the API or admin app putting data into the `service_broadcast_provider_restriction` table; this was being done manually for the few services that needed it.
This will allow us to store details of which channel a service should be
sending to.
See the comment about how all broadcast services can have a row in the
table but may not at the moment. This has been done for speed as it's
the quickest way to let us set up different services to send to
different channels for some needed testing with the mobile handset
providers in the coming week.
This is a boolean column. It will be set to True for broadcasts
created from training broadcast accounts.
This will help us debug, for example by excluding all the stubbed
broadcasts when we have some trouble with real broadcasts.
So:
'billing_contact_email_address' becomes 'billing_contact_email_addresses'
AND
'billing_contact_name' becomes 'billing_contact_names'
This is to signify that each of those fields can contain multiple items.
The fields are:
- Purchase order number - string field
- Billing contact name - text field to accommodate possible multiple contacts
- Billing contact email address - text field to accommodate possible multiple contacts
- Billing reference - string field
All these fields are nullable. Notify platform admins will be able to check and edit those values in the Service Settings section in the Notify interface.
This will help make billing tasks and reports simpler.
Similar fields will also be added to Organisation model and
db table.
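Illustratively, the new columns might look something like this on the model (a heavily simplified stand-in, not the real Service model):

```python
from sqlalchemy import Column, String, Text
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Service(Base):  # simplified stand-in for the real model
    __tablename__ = "services"

    id = Column(String, primary_key=True)
    purchase_order_number = Column(String, nullable=True)
    billing_contact_name = Column(Text, nullable=True)           # may hold several contacts
    billing_contact_email_address = Column(Text, nullable=True)  # may hold several addresses
    billing_reference = Column(String, nullable=True)
```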
When we have a public API there will be no human creating the broadcast
message. Instead it will be created by an API integration, authenticated
by a key (just like for emails or texts).
This updates the database to:
- add a new foreign key from BroadcastMessages to API keys
- add a `reference` column
It doesn’t change the model yet, because the model is used by previous
migrations, so would cause them to fail when run before the new columns
exist. We can make this change in later pull requests.
We shouldn’t import models into migrations because if the model changes
later down the line then the migration can’t be re-run at a later date
(for example to rebuild a database from scratch).
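Instead, a migration can describe just the columns it needs with lightweight `table()`/`column()` constructs, for example (illustrative names and operation):

```python
from alembic import op
import sqlalchemy as sa
from sqlalchemy.sql import column, table

# A migration-local description of only the columns this migration touches
# (illustrative names, not the real schema).
broadcast_message = table(
    "broadcast_message",
    column("id", sa.String),
    column("reference", sa.String),
)


def upgrade():
    # The migration only ever sees this frozen definition, so it still runs
    # correctly even if the real model changes later.
    op.execute(
        broadcast_message.update()
        .where(broadcast_message.c.reference.is_(None))
        .values(reference="")
    )
```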
We don’t need to encode the content before storing it (we’ll always do
that before rendering/sending) so we don’t need to use
`BroadcastMessageTemplate`.
And given that no past broadcasts will have personalisation, we don’t
need to replace the personalisation in the template before rendering it.
So we can just copy the raw content from the templates table.
New broadcast messages will have content filled in whether they have a template or not, but old ones won't, so we need to populate them.
Stole the session constructor from 0044_jos_to_notification_hist.py
We want to be able to create broadcast messages without templates. To start with, these will come from the API, but in future we may want to let people create them via the admin interface without creating a template too.
Populate the non-nullable content field with the values supplied via the template (or supplied directly if created via the API).
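A sketch of the data migration, with assumed table and column names (the real query would also need to account for the template version each broadcast was created from):

```python
from alembic import op


def upgrade():
    # Copy the raw template content onto any existing broadcasts that were
    # created from a template and don't have content yet.
    op.execute(
        """
        UPDATE broadcast_message bm
        SET content = t.content
        FROM templates t
        WHERE bm.template_id = t.id
          AND bm.content IS NULL
        """
    )
```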