Added some simple validation to the delete endpoint as a sanity check, but
my general assumption is that more validation will happen on the admin
side.
Notably, I'm not checking whether the credentials are duplicated, nor is
there a uniqueness constraint in the database - I'm not sure the
credential blob will always be reliably equivalent, and the browser
should hopefully take care of duplicates. This keeps us in line with what
the admin handles, keeps things simple on the API side, and means we do
as little manipulation of binary data as possible.
### Minor changes
* id is a UUID we can use for referencing within Notify. It has no
relation to the key itself.
* name is a user-viewable name that can be set/edited
* fix updated_at to have onupdate, not default
### Simplify the webauthn data
credential_data is the data we store about an authenticator, which we'll
use to identify the key when logging in. It includes the credential_id,
the public_key, and the aaguid (which identifies the authenticator
make/model).
registration_response is the data containing audit information - in the
future we can use this to ensure that the authenticators used are of
high quality.
Both of these fields are CBOR (a kind of binary JSON), encoded in
base64 so that they can be embedded within our regular JSON API
endpoints. We don't anticipate the API ever needing to interact with
this data directly.
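To illustrate the double encoding, a minimal sketch using the `cbor2` library (the field values here are made up):

```python
import base64

import cbor2

# Hypothetical credential data, using the field names described above.
credential_data = {
    "credential_id": b"\x01\x02\x03\x04",
    "public_key": b"\x05\x06\x07\x08",
    "aaguid": b"\x09" * 16,
}

# Encode: CBOR first, then base64, so the blob can sit inside a JSON payload.
blob = base64.b64encode(cbor2.dumps(credential_data)).decode("ascii")

# Decode: reverse the two steps.
assert cbor2.loads(base64.b64decode(blob)) == credential_data
```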
We want to replace the value `None` for
service.allowed_broadcast_provider with the value "all". As a first
step, we need to allow both values. Once notifications-admin has been
changed to pass through "all" and all the data in the database has been
updated, we can update the code to stop supporting both values.
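A minimal sketch of the transitional handling (the helper name is hypothetical; the real code may normalise this elsewhere):

```python
def normalise_allowed_broadcast_provider(value):
    # Transitional: treat the legacy None the same as the new "all".
    # Once notifications-admin always sends "all" and the data has been
    # migrated, we can drop the None branch.
    return "all" if value is None else value
```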
This adds a type table for broadcast providers, which is the pattern we
follow with our models (e.g. we have a `broadcast_channel_types` table).
As well as the four providers, the migration populates it with `all`,
which is the value that will replace `null` in a later change.
It should be safe to add the foreign key constraint to the
`service_broadcast_settings` table in the same migration, since the
column is still nullable and we don't have any data in that column that
is not in the types table.
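A sketch of what that migration could look like (assuming Alembic; the provider names and the `provider` column name are assumptions):

```python
import sqlalchemy as sa
from alembic import op


def upgrade():
    op.create_table(
        "broadcast_provider_types",
        sa.Column("name", sa.String(), primary_key=True),
    )
    # The four providers, plus "all", which will replace null in a later change.
    op.execute(
        "INSERT INTO broadcast_provider_types (name) "
        "VALUES ('ee'), ('o2'), ('three'), ('vodafone'), ('all')"
    )
    # Safe to add now: the column is still nullable and contains no values
    # outside the types table.
    op.create_foreign_key(
        "service_broadcast_settings_provider_fkey",
        "service_broadcast_settings",
        "broadcast_provider_types",
        ["provider"],
        ["name"],
    )
```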
- sqlalchemy.sql.expression.case must include an else statement (see the sketch after this section).
- clearly define the list of columns for the inbound_sms_history insert; getting the list from InboundSmsHistory.__table__.c was causing data type errors.
- remove relationships when not needed; the foreign key relationship is established in the creation of the column. This gets rid of the warnings referenced here: http://sqlalche.me/e/14/qzyx.
- update queries now that the user relationship in the ServiceUser db model has been removed.
- move the check that a template is archived to the view instead of the dao method. The check was clearing the session before the version history could be done.
Deleting notifications in the night tasks still needs to be
investigated. The raw SQL is causing an error.
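On the first point, a minimal sketch of the pattern (SQLAlchemy 1.4 style; the column and values are made up):

```python
from sqlalchemy import case, column

status = column("status")

# Without else_, any row matching no branch comes back as NULL, so always
# supply an explicit else_ branch.
simplified_status = case(
    (status == "temporary-failure", "failure"),
    (status == "permanent-failure", "failure"),
    else_=status,
)
```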
This is an extra precaution for the table to ensure data integrity. Since we only update/insert the data using the annual_billing_dao methods, the integrity is intact. I've checked the data on preview, staging and prod; there are no violations of this unique key.
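A sketch of adding such a constraint (assuming Alembic; the column pair is an assumption about how annual_billing is keyed):

```python
from alembic import op


def upgrade():
    # Assumed key: one annual_billing row per service per financial year.
    op.create_unique_constraint(
        "uix_annual_billing_service_id_financial_year_start",
        "annual_billing",
        ["service_id", "financial_year_start"],
    )
```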
The performance platform is going away soon. The only statistic we don't already have in our database that we can query efficiently is processing time; any queries on notification_history are too inefficient to use on a web page.
Processing time is based on two counts per whole day: the total number of normal/team emails and text messages, and the number of those messages that went from created to sending within 10 seconds. From these we can easily calculate the percentage of messages that were marked as sending within 10 seconds.
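The calculation itself is simple (numbers made up for illustration):

```python
# Hypothetical daily counts.
total_messages = 120_000       # all normal/team emails and texts that day
sending_within_10s = 114_000   # of those, created -> sending in <= 10 seconds

percentage_within_10s = 100 * sending_within_10s / total_messages  # 95.0
```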
Now that every service has a row in the service_broadcast_settings
table, we want all our tests to use the `sample_broadcast_service`
fixture, as this ensures it has a row in that table and is correctly
representative of what a real broadcast service looks like.
This means we will have a much easier way of knowing what the settings
are for a broadcast service.
Note that we can just move data directly into the newer table, as there
is nothing on the API or admin app that puts data in the
`service_broadcast_provider_restriction` table; this was being done
manually for the few services that needed it.
Retry tasks if they fail to send a broadcast event. Note that each task
tries the regular proxy and then the failover proxy for that provider.
This runs a bit differently from our other retries:
Retry with exponential backoff. Our other tasks retry with a fixed delay
of 5 minutes between tries. If we can't send a broadcast, we want to try
immediately. So instead, implement an exponential backoff (1, 2, 4, 8,
... seconds delay). We can't delay for longer than 310 seconds due to
visibility timeout settings in SQS, so cap the delay at that amount.
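A sketch of that retry shape in Celery (the task and helper names are made up; the real task will differ):

```python
from celery import shared_task

MAX_RETRY_DELAY = 310  # seconds, bounded by our SQS visibility timeout


def dispatch_to_provider(broadcast_event_id):
    # Hypothetical helper: try the regular proxy, then the failover proxy.
    raise NotImplementedError


@shared_task(bind=True, max_retries=None)  # never stop retrying
def send_broadcast_event(self, broadcast_event_id):
    try:
        dispatch_to_provider(broadcast_event_id)
    except Exception as exc:
        # Exponential backoff: 1, 2, 4, 8, ... seconds, capped at 310 so
        # SQS doesn't redeliver the message before the retry fires.
        delay = min(2 ** self.request.retries, MAX_RETRY_DELAY)
        raise self.retry(exc=exc, countdown=delay)
```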
Normally we give up retrying after a set number of retries (often adding
up to around 4 hours). As broadcast content is much more important than
normal notifications, we don't ever want to give up on sending them to phones...
...UNLESS WE DO!
Sometimes we do want to give up sending a broadcast though! Broadcasts
have an expiry time, after which they stop showing up on people's
devices, so if that has passed then we don't need to send the broadcast
out.
Broadcast events can also be superseded by updates or cancels. Check
that the event is the most recent event for that broadcast message; if
not, give up, as we don't want to accidentally send out two conflicting
events for the same message.
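A sketch of those give-up checks, run before each attempt (the attribute names are assumptions about the model):

```python
from datetime import datetime, timezone


def should_give_up(broadcast_event):
    # Past its expiry time: phones would no longer display it anyway.
    if broadcast_event.expires_at < datetime.now(timezone.utc):
        return True
    # Superseded by an update or cancel: only the most recent event for a
    # broadcast message should ever go out.
    latest_event = broadcast_event.broadcast_message.events[-1]
    return latest_event.id != broadcast_event.id
```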
This will allow us to store details of which channel a service should be
sending to.
See the comment about how all broadcast services can have a row in the
table but may not at the moment. This has been done for speed, as it's
the quickest way to let us set up different services to send to
different channels for some testing we need to do with the mobile
handset providers in the coming week.
This is a boolean column. It will be set to True for broadcasts
created from training broadcast accounts.
This will help us debug, for example by excluding all the stubbed
broadcasts when we have some trouble with real broadcasts.
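For example, a debugging query might exclude them like this (the import path and the `stubbed` column name are assumptions; the commit doesn't name the column):

```python
from app.models import BroadcastMessage  # hypothetical import path

# Exclude broadcasts created from training accounts while debugging real ones.
real_broadcasts = BroadcastMessage.query.filter_by(stubbed=False).all()
```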
So:
'billing_contact_email_address' becomes 'billing_contact_email_addresses'
AND
'billing_contact_name' becomes 'billing_contact_names'
This is to signify that each of those fields can contain multiple
items.
The fields are:
Purchase order number - string field
Billing contact name - text field, to accommodate multiple possible
contacts
Billing contact email address - text field, to accommodate multiple
possible contacts
Billing reference - string field
All these fields are nullable. Notify platform admins will be
able to check and edit those values in the Service Settings
section of the Notify interface.
This will help make billing tasks and reports simpler.
Similar fields will also be added to the Organisation model and
db table.
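A sketch of the new columns on the Service model (SQLAlchemy; the names follow the renames above, the string lengths are assumptions):

```python
from sqlalchemy import Column, String, Text
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Service(Base):  # sketch only; the real model has many more columns
    __tablename__ = "services"

    id = Column(String, primary_key=True)
    purchase_order_number = Column(String(255), nullable=True)
    # Text fields, since they may need to hold several contacts.
    billing_contact_names = Column(Text, nullable=True)
    billing_contact_email_addresses = Column(Text, nullable=True)
    billing_reference = Column(String(255), nullable=True)
```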
When we have a public API there will be no human creating the broadcast
message. Instead it will be created by an API integration, authenticated
by a key (just like for emails or texts).
This updates the database to:
- add a new foreign key from BroadcastMessages to API keys
- add a `reference` column
It doesn’t change the model yet, because the model is used by previous
migrations, so changing it would cause them to fail when run before the
new columns exist. We can make this change in later pull requests.
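A sketch of that migration (assuming Alembic; the table names are assumptions):

```python
import sqlalchemy as sa
from alembic import op
from sqlalchemy.dialects import postgresql


def upgrade():
    op.add_column(
        "broadcast_message",
        sa.Column("api_key_id", postgresql.UUID(as_uuid=True), nullable=True),
    )
    op.create_foreign_key(
        "broadcast_message_api_key_id_fkey",
        "broadcast_message",
        "api_keys",
        ["api_key_id"],
        ["id"],
    )
    op.add_column(
        "broadcast_message",
        sa.Column("reference", sa.String(), nullable=True),
    )
```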
We want to be able to create broadcast messages without templates. To
start with, these will come from the API, but in future we may want to
let people create them via the admin interface without creating a
template too.
Populate a non-nullable content field with the values supplied via the
template (or supplied directly, if via the API).
Some services only send to one provider. This is a platform admin
setting to allow us to test integrations and providers manually without
affecting other broadcasts from different services.
One-to-one: a service can either send to all providers as normal, or
send to only one provider.
Replacing get_earlier_provider_messages. The old function returned the
previous references for earlier events for a broadcast_message. However,
these depend on the message sent to a specific provider, so the function
needs to change. It now takes in a provider, and only returns
broadcast_provider_messages sent to that provider. If there are earlier
broadcast_events without a provider_message for the chosen provider, it
raises an exception - you cannot cancel a message if all the previous
events have not been created properly (as we wouldn't know which
references to cancel).
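A sketch of the new shape (the relationships and attribute names are assumptions):

```python
def get_earlier_provider_messages(broadcast_message, current_event, provider):
    """Return the broadcast_provider_messages sent to `provider` for every
    event earlier than `current_event`. Sketch only."""
    messages = []
    for event in broadcast_message.events:  # assumed relationship
        if event.sent_at >= current_event.sent_at:
            continue
        provider_message = next(
            (m for m in event.provider_messages if m.provider == provider),
            None,
        )
        if provider_message is None:
            # We can't cancel safely without every earlier reference.
            raise RuntimeError(
                f"event {event.id} has no provider_message for {provider}"
            )
        messages.append(provider_message)
    return messages
```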
(instead of using the id from broadcast_event)
We need every XML blob we send to have a different ID. If we're sending
different XML blobs for each provider, then each one should have a
different identifier. So, instead of taking the identifier from the
broadcast_event, take it from the broadcast_provider_message instead.
Note: we're still going to the broadcast_event for most fields, to
ensure they stay consistent between different providers. The last thing
we want is for different phone networks to get different content.
A BroadcastEvent knows when an event was sent and when it should expire.
We pass these values through directly to the CBC Proxy, because
BroadcastEvent knows how they should be formatted.
Signed-off-by: Toby Lorne <toby.lornewelch-richards@digital.cabinet-office.gov.uk>
Brings in:
- [x] https://github.com/alphagov/notifications-utils/pull/801
Formats the content of the template at the time of creating the event.
This means that any downstream code (e.g. Celery tasks) can assume the
content is already formatted correctly.
Also, these downstream tasks don’t know which template the broadcast
was created from, so if we support personalisation in the future this is
the most sensible place to bring together the template and the
personalisation.
---
I had to re-create some of the deleted code from utils for stuff like
formatting the timestamp to the CAP standard.
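For example, CAP requires timestamps like `2021-03-01T12:00:00-00:00` (a numeric offset, no `Z`, with UTC written as `-00:00`). A sketch of the kind of helper that had to be re-created:

```python
from datetime import datetime, timezone


def format_cap_datetime(dt: datetime) -> str:
    """Format a UTC datetime per the CAP standard, e.g.
    '2021-03-01T12:00:00-00:00'. CAP forbids the 'Z' designator and
    represents UTC as '-00:00'. Sketch only, not the real utils code."""
    assert dt.tzinfo in (None, timezone.utc)  # we only deal in UTC
    return dt.replace(tzinfo=None).isoformat(timespec="seconds") + "-00:00"
```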