The performance platform is going away soon. The only statistic we don't have in our database that we can query efficiently is the processing time: any queries on notification_history are too inefficient to use on a web page.
Processing time = the total number of normal/team emails and text messages per whole day, and the number of those messages that have gone from created to sending within 10 seconds. From those two counts we can then easily calculate the percentage of messages that were marked as sending within 10 seconds.
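As a sketch, the calculation could look something like this (the field names, and the assumption that `sent_at` marks the move to sending, are mine and not the real schema):

```python
from datetime import timedelta

def processing_time_stats(day_notifications):
    # day_notifications: all normal/team emails and SMS created that day.
    # Assumes each item records created_at and the time it moved to sending.
    total = len(day_notifications)
    under_10s = sum(
        1
        for n in day_notifications
        if n.sent_at is not None
        and n.sent_at - n.created_at <= timedelta(seconds=10)
    )
    percentage = 100 * under_10s / total if total else 0
    return total, under_10s, percentage
```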
SMS.
This is not a catch-all for international SMS; the rules are quite
complex and still not completely understood. We are talking with our
provider, who may be able to sort this out for us. But in the meantime,
this should solve the case that we do understand.
Now that every service has a row in the service_broadcast_settings
table, we want all our tests to use the `sample_broadcast_service`
fixture, as this ensures the service has a row in that table and is
correctly representative of what a real broadcast service looks like.
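A minimal sketch of what that looks like in a test (the attribute names on the model are assumptions):

```python
def test_broadcast_service_has_settings(sample_broadcast_service):
    # the fixture guarantees a service_broadcast_settings row exists, so
    # tests can rely on the channel being defined
    assert sample_broadcast_service.broadcast_settings is not None
    assert sample_broadcast_service.broadcast_settings.channel in ("test", "severe")
```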
Previously we would retry if the task was queued up for retry even when
the status was "received-ack" or "received-err". We don't expect a
task to be retried after reaching one of these statuses, but it could
happen if there are duplicate tasks. Let's plan for the worst by saying
"only process a retry if the task is currently in sending".
This way, if a duplicate task is on retry and the first task goes
through successfully, the duplicate task will give up.
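As a sketch of the guard (the function and attribute names are illustrative, not the real task code):

```python
def process_retry(message, send_to_provider):
    # Only process a retry if the message is still in "sending"; a duplicate
    # task may already have moved it to received-ack or received-err.
    if message.status != "sending":
        return  # the other task already handled it, so this one gives up
    send_to_provider(message)
```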
boto returns a `StreamingBody`[1] response rather than a JSON struct.
We're currently just logging things like "Error calling lambda
o2-1-proxy with function error <botocore.response.StreamingBody object
at 0x7f74cd6e02e8>", which is obviously less than ideal. Also make the
tests properly reflect this; annoyingly, it appears we can't use
moto to reliably test this interface, as the moto `mock_lambda` decorator
needs you to be running inside a Docker container??
[1] https://botocore.amazonaws.com/v1/documentation/api/latest/reference/response.html#botocore.response.StreamingBody
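A sketch of reading the payload properly (the function name and logging are illustrative; the `Payload` StreamingBody and `FunctionError` key are real parts of the boto3 lambda invoke response):

```python
import json
import logging

import boto3

logger = logging.getLogger(__name__)


def invoke_lambda(function_name, payload):
    client = boto3.client("lambda")
    response = client.invoke(
        FunctionName=function_name,
        Payload=json.dumps(payload).encode("utf-8"),
    )
    # response["Payload"] is a StreamingBody: read and decode it, rather
    # than logging the object's repr
    body = response["Payload"].read().decode("utf-8")
    if response.get("FunctionError"):
        logger.error("Error calling lambda %s with function error %s", function_name, body)
    return body
```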
This was passing locally, but failing on Concourse due to a different
order of TemplateHistory items being returned. This changes the
test so that it can't randomly fail based on the order in which
template history items are returned.
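One way to do that, as a sketch (the key is whatever uniquely orders the items, e.g. version):

```python
def assert_same_items(actual, expected, key):
    # Sort both sides on a stable key before comparing, so a different row
    # order from the database can't make the test fail randomly.
    assert sorted(actual, key=key) == sorted(expected, key=key)


# e.g. assert_same_items(response_json, expected_json, key=lambda t: t["version"])
```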
We've decided we don't get any value from these emails any more, so this
stops us (Notify support) receiving them. We still let teams know an MOU
has been signed.
Update `send_user_2fa_code` to send from number when the recipient is international
Update `update_user_attribute` to send from number when the recipient is international
Added caching by using the SerialisedTemplate and SerialisedService objects.
Removed the extra call to the database to fetch the notification after the commit, by saving created_at and key_type to local variables (see the sketch after this list). After the notification is updated to mark it as sending, the db.session is committed. Any reference to the Notification data model after that will require a query to fetch the object again, because it is considered "dirty" or out of date.
Added name, sms_prefix and email branding to SerialisedService.
Refactored get_html_options to work with the SerialisedService object.
Removed the need to validate and format the to field by using `normalised_to`, since that field already had this done when the notification was persisted.
Removed the validation and formatting of reply_to_text for email reply-to; this was already done when the email address was added via the frontend, so there is no need to validate the address every time a service sends an email.
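A minimal sketch of the commit-time change, assuming a Flask-SQLAlchemy `db.session` (names are illustrative):

```python
def mark_notification_sending(notification, db):
    notification.status = "sending"
    # Capture what we still need *before* committing: SQLAlchemy expires
    # instances on commit, so touching notification.created_at afterwards
    # would trigger a fresh SELECT to reload the row.
    created_at = notification.created_at
    key_type = notification.key_type
    db.session.commit()
    # use the local variables from here on, not the expired model instance
    return created_at, key_type
```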
Note, I haven't added anything for the `go_live_user`, because it
doesn't quite make sense here: a user isn't requesting to go live, so
there should be no reason to record this.
In time, though, we will want to add audit events to capture every
change to the service broadcast settings, which will actually capture
who has done what.
We are in a weird situation where, at the moment, we have services with
the broadcast permission that do not have a row in the
service_broadcast_settings table, and therefore have not defined
whether they should send messages on the 'test' or 'severe' channel.
We currently get around this when we send broadcast messages out, as
here:
https://github.com/alphagov/notifications-api/blob/master/app/celery/broadcast_message_tasks.py#L51
We need to do something equivalent for the broadcast channel that the
API says the service is on. In time, when we have added a row in the
service_broadcast_settings table for every service with the broadcast
permission, we can remove both of these hardcodings.
Note, one option would have been to move the default of `test` onto the
`Service` model, rather than having it in both the
broadcast_message_tasks file and the `ServiceSchema` class. However, I
went for the quickest thing, which was to add it here.
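As a sketch of the fallback described above (attribute names are assumptions):

```python
def get_broadcast_channel(service):
    # Prefer the service_broadcast_settings row when it exists...
    if service.broadcast_settings is not None:
        return service.broadcast_settings.channel
    # ...and default services that haven't been backfilled yet to the
    # safe 'test' channel rather than 'severe'.
    return "test"
```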
Some of the fixtures weren't needed, so they have been removed.
I've also moved from using `client.post` to using `admin_request.post`,
which saves a bit of code too.
Also tidied up one small assertion to make it a bit stronger regarding
permissions.
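For illustration, the `admin_request` fixture lets a test look something like this (the endpoint name and keyword arguments are assumptions about the fixture's interface):

```python
def test_update_service_permissions(admin_request, sample_service):
    # admin_request handles the auth header, the status code check and the
    # JSON parsing that we previously did by hand around client.post
    admin_request.post(
        "service.update_service",
        service_id=sample_service.id,
        _data={"permissions": ["broadcast"]},
        _expected_status=200,
    )
```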
We will use this to easily identify all our broadcast services. There
could be other ways to deal with finding and seeing all broadcast
services, but this is a good and easy way to start.
We think it would be a security risk to show the names of services
involved in emergency alerts, as they may be responsible for things such
as counter-terrorism.
On top of that, showing broadcast services in the list of all services
could enable someone to use that information to try and trick an admin
into giving them access to a particular service, given that they know
its name.
We can use the `sample_broadcast_service` fixture, as this gives us a
broadcast service with service broadcast settings already set up for us
to update, rather than needing to create our own settings db row.
This will make our `sample_broadcast_service` look more realistic.
The one downside to this is that there will be a short amount of time
where we have broadcast services that do not have a row in the
service_broadcast_settings table, until we have backfilled this data. Our
unit tests therefore won't have coverage for this case, but I think the
risk is small and acceptable for the moment, as this will no longer be
the case in, say, a week's time.
The maximum content count of a broadcast varies depending on its
encoding, so we can’t simply validate it against a schema. This commit
moves to using the validation from `notifications-utils`, and raising a
custom error response.
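A sketch of the encoding-dependent check (the limits and the GSM test here are stand-ins; the real validation lives in notifications-utils):

```python
def is_gsm_compatible(content):
    # Rough proxy: the real check tests against the GSM 03.38 character
    # set, not just ASCII.
    return content.isascii()


def validate_broadcast_content(content):
    # Assumed limits for illustration: more characters fit when the content
    # can be sent in the 7-bit GSM encoding than when it needs UCS-2.
    max_length = 1395 if is_gsm_compatible(content) else 615
    if len(content) > max_length:
        raise ValueError(f"Content must be {max_length} characters or fewer")
```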
We don’t support these methods at the moment. Instead we were just
ignoring the `msgType` field, so issuing one of these commands would
cause a new alert to be broadcast 🙃
We might want to support `Cancel` in the future, but for now let’s
reject anything that isn’t `Alert` (CAP terminology for the initial
broadcast).
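The guard is then a one-liner, roughly (the error type is illustrative):

```python
def check_msg_type(msg_type):
    # Reject Update, Cancel, Ack, Error etc. until we actually support
    # them; only the initial Alert may create a new broadcast.
    if msg_type != "Alert":
        raise ValueError(f"msgType {msg_type} is not supported")
```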
These will happen if, for example, you have issues connecting to AWS, or
permission issues.
We still fail over if we get one of these exceptions, as I think it
might be possible to have a problem related to only one of the lambdas.
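A sketch of that failover (the exception classes are real botocore ones; the loop and names are illustrative):

```python
from botocore.exceptions import BotoCoreError, ClientError


def invoke_with_failover(lambda_clients, payload):
    for cbc in lambda_clients:
        try:
            return cbc.send(payload)
        except (BotoCoreError, ClientError):
            # a connection or permission problem may affect only this one
            # lambda, so try the next rather than giving up entirely
            continue
    raise RuntimeError("all lambdas failed")
```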
By adding SerialisedTemplate we can avoid a database call for the template. This is useful when sending many emails/SMS for the same template/version.
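A minimal sketch of the caching idea, assuming a DAO-style fetch (not the real SerialisedTemplate implementation):

```python
from functools import lru_cache


def fetch_template_from_db(service_id, template_id, version):
    # stand-in for the real DAO call
    ...


@lru_cache(maxsize=1024)
def get_serialised_template(service_id, template_id, version):
    # Cached on the full (service, template, version) key, so a bulk send
    # for one template/version hits the database once, not once per message.
    return fetch_template_from_db(service_id, template_id, version)
```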