the provider details tests were previously very stateful - they
would update a value and then, because provider_details is a "static"
table that is not wiped by the notify_db_session fixture, they were
forced to make a second update that reverted the change. if a
test failed for whatever reason, the provider_details table ended up
permanently modified, playing havoc with tests further down the line.
this commit adds the fixture `restore_provider_details` to conftest.
this fixture stores a copy of the contents of the ProviderDetails and
ProviderDetailsHistory tables outside of the session, runs the test,
and then clears both tables and puts those old values back in.
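a rough sketch of what the fixture might look like (assuming declarative ProviderDetails / ProviderDetailsHistory models and a flask-sqlalchemy style db session; the row-copying helper is illustrative):

```python
import pytest

from app import db
from app.models import ProviderDetails, ProviderDetailsHistory


@pytest.fixture
def restore_provider_details(notify_db, notify_db_session):
    def copy_row(model, row):
        # build a detached copy so it survives whatever the test does to the session
        return model(**{c.name: getattr(row, c.name) for c in model.__table__.columns})

    old_providers = [copy_row(ProviderDetails, p) for p in ProviderDetails.query.all()]
    old_history = [copy_row(ProviderDetailsHistory, h) for h in ProviderDetailsHistory.query.all()]

    yield

    # wipe whatever state the test left behind, then reinstate the snapshot
    ProviderDetailsHistory.query.delete()
    ProviderDetails.query.delete()
    db.session.add_all(old_providers + old_history)
    db.session.commit()
```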
this means the tests no longer have to perform every operation twice,
once forwards and once in reverse. they've also been cleaned up
generally, including some fixture optimisations.
in the upgrade script, set all existing rows to have a version of 1,
and copy values across to populate the new provider_details_history
table.
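the relevant part of such an upgrade script might look like this (the column list is an assumption):

```python
from alembic import op


def upgrade():
    # start every existing provider at version 1
    op.execute("UPDATE provider_details SET version = 1")

    # seed the new history table with a copy of the current live rows
    op.execute("""
        INSERT INTO provider_details_history
            (id, display_name, identifier, priority, notification_type, active, version)
        SELECT id, display_name, identifier, priority, notification_type, active, version
        FROM provider_details
    """)
```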
in dao_update_provider_details, bump provider_details.version by 1 and
then duplicate the row into the history table as a new entry
(done manually, as opposed to the decorator used for template_history,
since provider details are only edited in this one place and the
decorator is icky).
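roughly (from_original is an assumed name for the copy constructor mentioned further down):

```python
from app import db
from app.models import ProviderDetailsHistory


def dao_update_provider_details(provider_details):
    # bump the version on the live row...
    provider_details.version += 1
    db.session.add(provider_details)

    # ...then copy it into the history table by hand; the decorator used
    # for template_history is overkill for a single call site
    history = ProviderDetailsHistory.from_original(provider_details)
    db.session.add(history)
    db.session.commit()
```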
the first argument to ANY logger.____ function is ALWAYS cast to a
string and used as the %s-style format string for ALL remaining
arguments. this includes `logger.exception`, which just logs as
normal and then appends the stack trace.
so we shouldn't be passing `e` into logger.exception - just
`logger.exception('something went wrong!')`.
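for illustration, using the stdlib logging module:

```python
import logging

logger = logging.getLogger(__name__)

try:
    1 / 0
except ZeroDivisionError as e:
    # not this: e gets cast to a string and used as the %-format string,
    # which is redundant (the traceback is appended anyway) and fragile
    # if the message ever contains '%' and arguments are added
    logger.exception(e)

    # this: a plain message; the stack trace is appended automatically,
    # and any further arguments are interpolated with %s formatting
    logger.exception('something went wrong handling %s', 'the thing')
```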
also de-duplicated a test
* to be used for auditing changes to provider details
* not hooked in yet
* also made provider_details.active non-nullable. set all existing null
provider_details.active values to FALSE. none are set false on live, so
this should hopefully be fairly uncontroversial
* refactored NotificationHistory.from_notification and
update-from-notification to share logic with provider_details_history
- note: this is an unexpectedly big change.
- When we create a service we pass the service id to the persist method. This means that we don't have the service available to check whether it is in research mode.
- All calling methods (excepting the one where we use the notify service) have the service available. So rather than reload it, I changed the method signature to pass the service, not the id, to persist (sketched below).
- Touches a few places.
Note this means that the update or create methods will fall over on a null service. But this seems correct.
This goes back to the story we need to play to make the service available as the API user, so that the need to load and pass services around is minimised.
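A hypothetical before/after of the signature change (all names illustrative):

```python
def dao_fetch_service_by_id(service_id):
    ...  # stand-in for the real dao lookup


# before: persist only got the id, so checking research mode meant
# loading the service again inside the method
def persist_notification_old(template_id, service_id, recipient):
    service = dao_fetch_service_by_id(service_id)  # a second trip to the db
    if service.research_mode:
        return  # short-circuit real sending


# after: callers already hold the service, so the signature takes the
# object itself; a None service now falls over here, which seems correct
def persist_notification(template_id, service, recipient):
    if service.research_mode:
        return
```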
The reason for doing this is to ensure the tasks performed for the Notify users are not queued behind a large job: a way to ensure priority for messages.
- Problem was that on notification creation we pass the template id, not the template, onto the new notification object.
- We then set the history object from this notification object by copying all the fields. This is OK at this point.
- We then set the relationship on the history object based on the template, which we haven't passed in; we only passed the id. This means that SQLAlchemy nulls the relationship, removing the template_id (reproduced in the sketch below).
- Later we update the history row when we send the message, which fixes the data. BUT if we ever have a send error, this never happens and the template is never set on the history table.
Fix:
Set only the template ID when creating the history object.
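A self-contained reproduction of the behaviour described above, using plain SQLAlchemy (model names trimmed down for illustration):

```python
from sqlalchemy import Column, ForeignKey, Integer, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()


class Template(Base):
    __tablename__ = 'templates'
    id = Column(Integer, primary_key=True)


class NotificationHistory(Base):
    __tablename__ = 'notification_history'
    id = Column(Integer, primary_key=True)
    template_id = Column(Integer, ForeignKey('templates.id'))
    template = relationship(Template)


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Template(id=1))
    session.commit()

    # the bug: the foreign key is set, but the explicitly assigned (None)
    # relationship wins at flush time, so SQLAlchemy nulls template_id
    history = NotificationHistory(id=1, template_id=1)
    history.template = None
    session.add(history)
    session.commit()
    assert history.template_id is None

    # the fix: set only the foreign key and leave the relationship alone
    history.template_id = 1
    session.commit()
    assert history.template_id == 1
```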
* Ensure we don't raise an exception if e.cause does not contain a message
* Ensure we handle the case where e.path may be empty (see the sketch after this list)
* Refactor existing tests to conform to the new format
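A minimal sketch of the guards implied by the first two points, assuming jsonschema-style ValidationError objects that expose cause, path and message:

```python
def format_validation_error(e):
    # e.cause may be None, or an exception whose str() is empty, so fall
    # back to the validator's own message rather than raising
    cause_text = str(e.cause) if e.cause is not None else ''
    message = cause_text or e.message

    # e.path is empty when the error is at the document root, so guard
    # before treating its last element as the offending field name
    field = list(e.path)[-1] if e.path else None

    return {'error': 'ValidationError', 'field': field, 'message': message}
```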
This means that these codes won't be delayed by large jobs going through the
send-sms/email queues. send_user_sms_code now works much more like the
endpoints for sending notifications, by persisting the notification and only
using the deliver_sms task (instead of using send_sms as well).
The workers consuming the `notify` queue should be able to handle the deliver
task as well, so no change should be needed to the celery workers to support
this.
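A rough sketch of the routing (the task and queue names come from the text above; the broker URL and surrounding glue are assumptions):

```python
from celery import Celery

celery_app = Celery('notify', broker='redis://localhost:6379/0')


@celery_app.task(name='deliver_sms')
def deliver_sms(notification_id):
    # look up the already-persisted notification and hand it to the
    # sms provider - the same task the notification endpoints use
    ...


def send_user_sms_code(notification_id):
    # queue only the delivery task, routed to the priority 'notify'
    # queue, so 2FA codes never sit behind bulk jobs on send-sms
    deliver_sms.apply_async([notification_id], queue='notify')
```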
I think there's also a change in behaviour here: previously, if the Notify
service was in research mode, 2FA codes would not have been sent out, making
it impossible to log into the admin. Now, a call to this endpoint will always
send out the notification even if we've put the Notify service into research
mode, since we set the notification's key type to normal and ignore the
service's research mode setting when sending the notification to the queue.
We want to be able to toggle the numbers on the platform admin page between
including and excluding notifications sent using test keys, so that we can see
both real use of the platform and all load on it.
This parameter defaults to True, which is the existing behaviour.
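The parameter isn't named above, so this is a hypothetical shape for the dao helper it implies:

```python
from sqlalchemy import func

from app import db
from app.models import KEY_TYPE_TEST, Notification


def count_notifications(include_from_test_key=True):
    # True (the default) keeps the existing behaviour of counting all
    # load; False excludes test-key notifications to show only real use
    query = db.session.query(func.count(Notification.id))
    if not include_from_test_key:
        query = query.filter(Notification.key_type != KEY_TYPE_TEST)
    return query.scalar()
```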