This PR updates those queries to look in either Notification or NotificationHistory. Since the data never exists in both tables at the same time, we can check both and not worry about data retention.
The query iterates over each service, then each notification type, and queries the Notification table first; if there are no results it falls back to the history table.
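A minimal sketch of that fallback, assuming SQLAlchemy models named Notification and NotificationHistory and a simple per-type count - the real query in this PR may be shaped differently:

```python
from sqlalchemy import func

from app import db
from app.models import Notification, NotificationHistory


def count_notifications_for_service(service_id, notification_types):
    counts = {}
    for notification_type in notification_types:
        # try the live table first...
        count = db.session.query(func.count(Notification.id)).filter(
            Notification.service_id == service_id,
            Notification.notification_type == notification_type,
        ).scalar()

        # ...then fall back to the history table if nothing came back
        if not count:
            count = db.session.query(func.count(NotificationHistory.id)).filter(
                NotificationHistory.service_id == service_id,
                NotificationHistory.notification_type == notification_type,
            ).scalar()

        counts[notification_type] = count
    return counts
```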
* The `_should_record_notification_in_history_table` function stopped being
used in this commit: c23ae15f32
* `NOTIFICATIONS_ALERT` stopped being used in this commit: 5aa37f09b6
The organisation_type of a service should match the organisation_type of
the service's organisation (if there is one). This changes
dao_update_organisation and dao_add_service_to_organisation to set the
organisation_type of any services when adding / updating an org.
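Roughly what that looks like for the add case - the function name matches the one mentioned above, but the body is an illustrative sketch rather than the actual implementation:

```python
from app import db
from app.models import Organisation


def dao_add_service_to_organisation(service, organisation_id):
    organisation = Organisation.query.get(organisation_id)
    service.organisation_id = organisation_id
    # keep the service's organisation_type in step with its organisation
    service.organisation_type = organisation.organisation_type
    db.session.add(service)
```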
the agreement_signed field can also be edited by a platform admin - if that happens we might not have details of who signed it, and even if we did they shouldn't find out about it. We also don't need an email in that case, since we were the ones who clicked the button.
the `agreement_signed_by` field is only set when a user confirms that they are signing the MOU on the admin page - not if a platform admin changes the field from the platform admin page
we build up one personalisation dict and then pass it to all the different templates - so be careful editing things. Also of note: we check whether agreement_signed_on_behalf_of is set, and if it is we send a different template, with slightly different wording, to the person who clicked the confirm button.
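A sketch of that branching, with made-up template IDs, a placeholder send_email helper and illustrative personalisation keys (none of these are guaranteed to match the real change):

```python
def send_mou_signed_notifications(organisation, user):
    # one personalisation dict shared by every template
    personalisation = {
        "organisation_name": organisation.name,
        "signed_by_name": user.name,
        "on_behalf_of_name": organisation.agreement_signed_on_behalf_of_name or "",
    }

    if organisation.agreement_signed_on_behalf_of_name:
        # the person who clicked confirm signed on someone else's behalf,
        # so they get a template with slightly different wording
        send_email(SIGNED_ON_BEHALF_OF_SIGNER_TEMPLATE_ID, user.email_address, personalisation)
    else:
        send_email(SIGNED_RECEIPT_TEMPLATE_ID, user.email_address, personalisation)
```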
We occasionally get an SMS with 0 `billable_units` if the `delivery-sender-worker`
is stopped in the middle of processing a notification - we have to fix
these manually. This change checks the billable units when we get the response from
our SMS provider and sets the correct billable units if it's 0.
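A sketch of the guard, assuming we recount the fragments when the provider response comes in - the helper name and call site are illustrative:

```python
from flask import current_app

from app import db
from app.models import Notification


def fix_zero_billable_units(notification: Notification, fragment_count: int):
    # fragment_count would come from re-counting the message content;
    # this helper is illustrative, not the real change
    if notification.billable_units == 0:
        current_app.logger.warning(
            "Notification %s had 0 billable_units, setting it to %s",
            notification.id,
            fragment_count,
        )
        notification.billable_units = fragment_count
        db.session.add(notification)
```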
This is because that error is caused by our providers and we cannot do anything about it, but it can make our logs hard to read and actionable errors harder to spot.
Utils 33.0.0 adds alt text to email branding - the HTMLEmailTemplate now
initializes slightly differently as a result (with both `branding_name`
and `branding_text`).
Added a scheduled task to run once a day and check if there were any
letters from before 17.30 that still have a status of 'created'. This
logs an exception instead of trying to fix the error because the fix
will be different depending on which bucket the letter is in.
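Something like this, as a sketch - the task name, deadline handling and use of notify_celery are assumptions:

```python
from datetime import datetime, timedelta

from flask import current_app

from app import notify_celery
from app.models import Notification


@notify_celery.task(name="check-for-letters-still-in-created")
def check_for_letters_still_in_created():
    # letters from before yesterday's 17.30 deadline should have moved on
    # from 'created' by now
    deadline = (datetime.utcnow() - timedelta(days=1)).replace(
        hour=17, minute=30, second=0, microsecond=0
    )
    count = Notification.query.filter(
        Notification.notification_type == "letter",
        Notification.status == "created",
        Notification.created_at < deadline,
    ).count()
    if count:
        # log only - the fix depends on which bucket the letter is in
        current_app.logger.exception(f"{count} letters still in 'created' after the 17.30 deadline")
```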
Added a task which runs twice a day on weekdays and checks for letters that have
been in the state of `pending-virus-check` for over 90 minutes. This is
just logging an exception for now, not trying to fix things, since we
will need to manually check where the issue was.
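A similar sketch for this one, with an illustrative 90-minute filter; the beat schedule in the comment is made up:

```python
from datetime import datetime, timedelta

from flask import current_app

from app import notify_celery
from app.models import Notification

# beat schedule entry (times made up): twice a day, weekdays only, e.g.
# 'schedule': crontab(hour='9,15', minute=0, day_of_week='mon-fri')


@notify_celery.task(name="check-pending-virus-check")
def check_pending_virus_check():
    ninety_minutes_ago = datetime.utcnow() - timedelta(minutes=90)
    stuck = Notification.query.filter(
        Notification.notification_type == "letter",
        Notification.status == "pending-virus-check",
        Notification.created_at < ninety_minutes_ago,
    ).all()
    if stuck:
        # log only - someone needs to work out where the scan got stuck
        current_app.logger.exception(
            f"{len(stuck)} letters stuck in pending-virus-check: {[n.id for n in stuck]}"
        )
```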
The `process_virus_scan_passed` task now catches S3 errors - if these
occur, it logs an exception and puts the letter in a `technical-failure`
state. We don't retry the task, because the most common reason for
failure would be the letter not being in the expected S3 bucket, in
which case retrying would make no difference.
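Roughly the shape of the error handling, assuming boto raises ClientError when the file isn't where we expect it - the lookup and helper names are illustrative, not the real ones:

```python
from botocore.exceptions import ClientError
from flask import current_app

from app import notify_celery


@notify_celery.task(name="process-virus-scan-passed")
def process_virus_scan_passed(filename):
    notification = get_letter_notification_for_filename(filename)  # illustrative lookup
    try:
        # copy the scanned PDF out of the scan bucket (illustrative helper)
        move_scan_to_letters_pdf_bucket(filename)
    except ClientError:
        current_app.logger.exception(f"Error copying letter {filename} out of the scan bucket")
        # no retry: the usual cause is the letter not being in the expected
        # bucket, and retrying wouldn't change that
        set_notification_status(notification.id, "technical-failure")
        return

    set_notification_status(notification.id, "created")
```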
At the moment this response returns a list of service IDs for hundreds
of organisations.
The admin app doesn’t use this information, but having to wait for it to
be serialized and sent across the network slows it down all the same.
This is changing because we’re going to introduce accepting contracts
and MoUs online.
Previously
---
We had one column for who signed the agreement, which is a foreign key to the user table. This is still relevant, because there will always be a user who is clicking the button.
Now
---
We add two new fields for the name and email address of the person on
whose behalf the agreement is being accepted. This person:
- is different from the one signing the agreement
- won’t necessarily have a Notify account
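As an illustration only, the migration might look something like this - the table and column names are assumptions, since the text above only says the fields hold the name and email address:

```python
import sqlalchemy as sa
from alembic import op


def upgrade():
    # who the agreement was accepted on behalf of (may not have a Notify account)
    op.add_column("organisation", sa.Column("agreement_signed_on_behalf_of_name", sa.String(255), nullable=True))
    op.add_column("organisation", sa.Column("agreement_signed_on_behalf_of_email_address", sa.String(255), nullable=True))


def downgrade():
    op.drop_column("organisation", "agreement_signed_on_behalf_of_email_address")
    op.drop_column("organisation", "agreement_signed_on_behalf_of_name")
```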
The admin app now needs to know a few extra things about orgs and
services in order to list them. At the moment it does this by making
multiple API calls.
This commit adds extra fields to the existing response. Once the admin
app is using these fields we’ll be able to remove:
- `response['services_without_organisations']`
- `response['organisations']['services']`
For a user to be able to be archived, each service that they are a
member of must have at least one other user who is active and who has
the 'manage-settings' permission.
To archive a user we remove them from all their services and
organisations, remove all permissions that they have and change some of
their details:
- email_address will start with '_archived_<date>'
- the current_session_id is changed (to sign them out of their current
session)
- mobile_number is removed (so we also need to switch their auth type to
email_auth)
- password is changed to a random password
- state is changed to 'inactive'
If any of the steps fail, we roll back all changes (see the sketch below).
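A high-level sketch of those steps with a single commit/rollback - the helper names and the exact archived email format are illustrative:

```python
import uuid
from datetime import datetime

from app import db


def dao_archive_user(user):
    try:
        # remove the user from all their services and organisations, and
        # strip their permissions (helper names are illustrative)
        remove_user_from_all_services_and_organisations(user)
        remove_all_permissions_for_user(user)

        user.email_address = f"_archived_{datetime.utcnow().date()}_{user.email_address}"
        user.current_session_id = str(uuid.uuid4())  # signs them out of their current session
        user.mobile_number = None
        user.auth_type = "email_auth"  # can't keep sms auth without a number
        user.password = str(uuid.uuid4())  # random, unusable password
        user.state = "inactive"

        db.session.add(user)
        db.session.commit()
    except Exception:
        # any failure rolls back the whole thing
        db.session.rollback()
        raise
```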
a little complicated because the free_sms_fragment_limit comes from
the annual_billing table. This relies on every service always having at
least one row in annual_billing - I checked on prod and that is true.
Join to the annual_billing table, then join to a subquery that finds the
latest year for each service, so we only pick up the most recent row.
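A sketch of that join, assuming AnnualBilling has service_id, financial_year_start and free_sms_fragment_limit columns:

```python
from sqlalchemy import and_, func

from app import db
from app.models import AnnualBilling, Service

# subquery: the latest financial year with an annual_billing row per service
latest_year = db.session.query(
    AnnualBilling.service_id,
    func.max(AnnualBilling.financial_year_start).label("latest_year"),
).group_by(AnnualBilling.service_id).subquery()

# join services to annual_billing, keeping only each service's latest row
query = db.session.query(
    Service.id,
    AnnualBilling.free_sms_fragment_limit,
).join(
    AnnualBilling, AnnualBilling.service_id == Service.id
).join(
    latest_year,
    and_(
        AnnualBilling.service_id == latest_year.c.service_id,
        AnnualBilling.financial_year_start == latest_year.c.latest_year,
    ),
)
```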
a bit of DRY - use the column definitions to determine what goes into
the dict, and use a `next` iterator rather than a while loop to find
the existing service row. Take advantage of dict mutability to avoid
needing to refer to the list by index.
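A toy example of the `next` plus dict-mutability pattern (field names made up):

```python
# assume we're accumulating counts per service into a list of dicts
rows = [{"service_id": "service-a", "sms_sent": 3}]
service_id, count = "service-b", 2

# find the existing row (or None) without a while loop or manual index
existing = next((row for row in rows if row["service_id"] == service_id), None)

if existing is None:
    existing = {"service_id": service_id, "sms_sent": 0}
    rows.append(existing)

# `existing` refers to the dict stored in `rows`, so mutating it updates
# the list entry - no need to look the row up by index again
existing["sms_sent"] += count
```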
Also change the tests so that if there's an error, the diff is slightly more readable. But not much.