Last year we had an issue with the daily limit cache and the query that was populating it. As a result we have not been checking the daily limit properly. This PR should correct all that.
The daily limit cache is now incremented in app.notifications.process_notifications.persist_notification; this method is, and should always be, the only method used to create a notification.
We increment the daily limit cache if Redis is enabled (and it is always enabled for production) and the key type for the notification is team or normal.
We check if the daily limit is exceeded in several places:
- app.celery.tasks.process_job
- app.v2.notifications.post_notifications.post_notification
- app.v2.notifications.post_notifications.post_precompiled_letter_notification
- app.service.send_notification.send_one_off_notification
- app.service.send_notification.send_pdf_letter_notification
If the daily limit cache is not found, we set the cache to 0 with an expiry of 24 hours. The daily limit cache key is service_id-yyyy-mm-dd-count, so a new cache entry is created each day.
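A minimal sketch of that behaviour, assuming the redis-py client (the key format is as described above; the helper names here are illustrative, not the real ones):

```python
from datetime import date

DAILY_LIMIT_TTL_SECONDS = 24 * 60 * 60  # the cache key expires after 24 hours


def daily_limit_cache_key(service_id):
    # e.g. "<service_id>-2021-03-01-count": a fresh key is used each day
    return "{}-{}-count".format(service_id, date.today().isoformat())


def get_todays_notification_count(redis_client, service_id):
    key = daily_limit_cache_key(service_id)
    value = redis_client.get(key)
    if value is None:
        # cache miss: initialise to 0 with a 24 hour expiry
        redis_client.set(key, 0, ex=DAILY_LIMIT_TTL_SECONDS)
        return 0
    return int(value)


def increment_todays_notification_count(redis_client, service_id):
    # called from persist_notification, the one place notifications are created
    redis_client.incr(daily_limit_cache_key(service_id))
```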
The best thing about this PR is that the app.service_dao.fetch_todays_total_message_count query has been removed. This query was not performant and had been wrong for ages.
The query had a group by on notification_type and notification_status, which not only slows the query down but is wrong. The code only looked at the first row of the result, but the query returns as many rows as there are distinct notification type and status combinations, so the count used was not the true total.
Are we concerned that all status types are included? For example, letters can be cancelled or have validation failures, which shouldn't be included in the daily limit check.
Flake8 Bugbear checks for some extra things that aren’t code style
errors, but are likely to introduce bugs or unexpected behaviour. A
good example is having mutable default function arguments, which get
shared between every call to the function, so mutating a value in one
place can unexpectedly cause it to change in another.
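For example (illustrative, not code from this repo), Bugbear's B006 flags the classic case:

```python
def add_recipient(recipient, recipients=[]):  # B006: mutable default argument
    recipients.append(recipient)
    return recipients


add_recipient("a@example.com")  # ["a@example.com"]
add_recipient("b@example.com")  # ["a@example.com", "b@example.com"] - same list!


# the usual fix: default to None and build a new list on each call
def add_recipient_fixed(recipient, recipients=None):
    if recipients is None:
        recipients = []
    recipients.append(recipient)
    return recipients
```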
This commit enables all the extra warnings provided by Flake8 Bugbear,
except for:
- the line length one (because we already lint for that separately)
- B903 Data class should either be immutable or use `__slots__` because
this seems to false-positive on some of our custom exceptions
- B902 Invalid first argument 'cls' used for instance method because
some SQLAlchemy decorators (eg `declared_attr`) make things that
aren’t formally class methods take a class not an instance as their
first argument
It disables:
- _B306: BaseException.message is removed in Python 3_ because I think
our exceptions have a custom structure that means the `.message`
attribute is still present
Matches the work done in other repos:
- https://github.com/alphagov/notifications-admin/pull/3172/files
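Roughly, the resulting flake8 configuration looks something like this (the codes and layout below are an illustrative sketch, not the actual diff; B950 is Bugbear's opt-in line-length check):

```ini
[flake8]
# enable Bugbear's opt-in B9xx warnings on top of its default B checks
extend-select = B9
# B950: line length (we already lint line length separately)
# B903: false-positives on some of our custom exceptions
# B902: SQLAlchemy decorators (eg declared_attr) pass a class, not an instance
# B306: our exceptions still expose a .message attribute
extend-ignore = B950, B903, B902, B306
```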
This is done so that we do not use statsd on our HTTP endpoints.
We decided we do not need the metrics that this gave us. If we
change our minds, we will add Prometheus-friendly decorators
instead in the future.
* it doesn't delete service email reply-tos, letter contacts, or
contact lists. I don't think the contact lists will ever be an issue,
but it doesn't hurt to add them to the list of things to remove.
* it doesn't remove users from organisations before deleting the users
There may be more tables that link to Service that should be deleted,
but for now I've just added the ones I could spot.
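A rough sketch of the kind of clean-up this adds (the model names below are assumptions used for illustration, not the exact code):

```python
def delete_service_related_rows(session, service, users):
    # hypothetical model names: each table that links back to Service needs
    # its rows removed before the service itself can be deleted
    for model in (ServiceEmailReplyTo, ServiceLetterContact, ServiceContactList):
        session.query(model).filter_by(service_id=service.id).delete()

    # remove the users from their organisations before the users themselves
    # are deleted, so no organisation link rows are left dangling
    for user in users:
        user.organisations = []

    session.commit()
```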
This will switch on this feature for new services.
After this we will:
- give existing services this permission with a database migration
- remove this permission from the codebase entirely so that everyone has
this feature and can’t switch it off
and update it when users have to use their email to interact with the
Notify service.
Initial population:
If user has email_auth, set last_validated_at to logged_in_at.
If user has sms_auth, set it to created_at.
Then:
Update the email_access_validated_at date when:
- user with email_auth logs in
- new user is created
- user resets password when logged out, meaning we send them an
email with a link they have to click to reset their password.
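A sketch of what that initial population could look like as a migration, assuming the column ends up named email_access_validated_at and that users have an auth_type of either email_auth or sms_auth (not the actual migration code):

```python
from alembic import op


def upgrade():
    # users with email_auth proved email access whenever they last logged in
    op.execute("""
        UPDATE users
        SET email_access_validated_at = logged_in_at
        WHERE auth_type = 'email_auth' AND logged_in_at IS NOT NULL
    """)

    # everyone else (sms_auth, or never logged in) falls back to created_at
    op.execute("""
        UPDATE users
        SET email_access_validated_at = created_at
        WHERE email_access_validated_at IS NULL
    """)
```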
Since Pytest 5, `ExceptionInfo` objects (returned by `pytest.raises`) now
have the same `str` representation as `repr`. This means that `str(e)`
now needs to be changed to `str(e.value)`.
https://github.com/pytest-dev/pytest/issues/5412
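For example:

```python
import pytest


def test_over_limit_message():
    with pytest.raises(ValueError) as e:
        raise ValueError("service is over its daily limit")

    # pytest < 5: str(e) returned the exception message
    # pytest >= 5: str(e) is the repr of the ExceptionInfo object, so we have
    # to go via e.value to get at the exception itself
    assert str(e.value) == "service is over its daily limit"
```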
Code that is within a `with pytest.raises(...)` context manager but
comes after the line that raises the exception doesn't get evaluated.
We had some assertions that were never being tested because of this, so
this ensures that they will always get run and fixes them where
necessary.
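A self-contained illustration of the problem and the fix:

```python
import pytest


def explode():
    raise ValueError("boom")


def test_assertion_after_the_raise_never_runs():
    with pytest.raises(ValueError) as e:
        explode()
        assert False  # never executed: explode() has already raised

    # assertions moved outside the `with` block always run
    assert str(e.value) == "boom"
```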
Previously we were doing it based on their email address. This will also
apply it if they self-select as a GP surgery, even if they don’t have an
NHS email address.
This is the second commit in the series to add organisation_id to Service.
- Data migration to update services.organisation_id from data in organisation_to_service
(The rollback will lose any updates to organisation unless the script is updated to set organisation_to_service from service.organisation_id.)
- Update Service.organisation relationship to a ForeignKey relationship to Organisation.
- Update Organisation.services to a backref relationship to Service.
- Add organisation_id to the Service data model.
- Update the methods that create a service and associate a service with an organisation so they set organisation_id on the Service.
- Create the missing test: if the service user's email matches a domain for an organisation, associate the service with that organisation and inherit crown and organisation_type from it.
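A minimal sketch of the resulting model shape (heavily simplified; table and column names are illustrative):

```python
from sqlalchemy import Column, ForeignKey
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()


class Organisation(Base):
    __tablename__ = "organisation"
    id = Column(UUID(as_uuid=True), primary_key=True)


class Service(Base):
    __tablename__ = "services"
    id = Column(UUID(as_uuid=True), primary_key=True)
    # new column, replacing the organisation_to_service association table
    organisation_id = Column(
        UUID(as_uuid=True), ForeignKey("organisation.id"), index=True, nullable=True
    )
    # Organisation.services becomes a backref on this relationship
    organisation = relationship("Organisation", backref="services")
```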
a little complicated because the free_sms_fragment_limit comes from
the annual_billing table. This relies on there always being at least
one row for every service on annual billing - I checked on prod and
that is true.
Join to the annual_billing table, then join to a subquery that gets the
latest year for each service, so we only extract the most recent year.
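In SQLAlchemy terms the join is roughly this (assuming the Service and AnnualBilling models; column names are illustrative, not the exact query):

```python
from sqlalchemy import func


def fetch_services_with_free_sms_fragment_limit(session):
    # subquery: the most recent year on annual_billing for each service
    latest_year = (
        session.query(
            AnnualBilling.service_id,
            func.max(AnnualBilling.financial_year_start).label("latest_year"),
        )
        .group_by(AnnualBilling.service_id)
        .subquery()
    )

    # join services -> annual_billing restricted to that latest year, so each
    # service contributes exactly one free_sms_fragment_limit value
    return (
        session.query(Service, AnnualBilling.free_sms_fragment_limit)
        .join(AnnualBilling, AnnualBilling.service_id == Service.id)
        .join(
            latest_year,
            (latest_year.c.service_id == AnnualBilling.service_id)
            & (latest_year.c.latest_year == AnnualBilling.financial_year_start),
        )
        .all()
    )
```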
a bit of DRY - use the column definitions to determine what goes into
the dict, and use a `next` iterator rather than a while loop to find
the existing service row. Take advantage of dict mutability to avoid
needing to refer to the list by index.
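The `next` pattern referred to looks something like this (illustrative names):

```python
# find the existing dict for this service, or None if there isn't one yet
existing_row = next(
    (row for row in rows if row["service_id"] == service_id),
    None,
)

if existing_row:
    # dicts are mutable, so updating the object we found also updates it in
    # the list - no need to track its index
    existing_row["count"] += count
else:
    rows.append({"service_id": service_id, "count": count})
```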
Also change the tests so that if there's an error, the diff is slightly
more readable. But not much.
This data includes service and org name, consent to research,
contact details and both intended and actual notification
volumes by notification type.
This query was created to get data for a csv report for our
platform admins.
This relationship is via the `Organisation` now; we no longer use this
column to fudge a relationship by matching the user's email address
against something in these columns.
The NHS is a special case because it’s not one organisation, but it does
have one consistent brand. So anyone working for an NHS organisation
should have their default branding set when they create a service, even
if we know nothing about their specific organisation.
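As a hypothetical sketch of that rule (names and organisation types are assumptions, not the real helper):

```python
NHS_ORGANISATION_TYPES = {"nhs_central", "nhs_local", "nhs_gp"}  # assumed values


def get_default_email_branding(organisation):
    # the NHS is many organisations but one brand, so apply the NHS branding
    # whenever the organisation is any NHS type, even if we know nothing else
    # about the specific organisation
    if organisation and organisation.organisation_type in NHS_ORGANISATION_TYPES:
        return NHS_EMAIL_BRANDING_ID  # assumed constant for the NHS brand
    return organisation.email_branding_id if organisation else None
```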
This is a fiendishly difficult error to discover on your own.
It’s caused when, during the creation of a row in the database, you run
a query on the same table, or a table that joins to the table you’re
inserting into. What I think is happening is that SQLAlchemy is forced
to flush the session before running the query in order to maintain
consistency.
This means that the session is clean by the time the history stuff comes
to do its work, so there’s nothing for it to copy into the history
table, and it silently fails to record history.
Hopefully raising an exception will:
- prevent this from failing silently
- save whoever comes across this issue in the future a whole load of
time
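To make the failure mode concrete, the pattern that triggers it looks roughly like this (model names are illustrative):

```python
# add a new row to the session...
template = Template(name="new template", service_id=service_id)
db.session.add(template)

# ...then query the same table before committing. SQLAlchemy autoflushes the
# pending INSERT so the query results stay consistent, which means that by the
# time the history machinery comes to do its work the session is already clean
existing = db.session.query(Template).filter_by(service_id=service_id).count()

db.session.commit()  # silently commits without writing a TemplateHistory row
```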
This sets the folder permissions for a user when adding them to a
service. If a user is being added to a service after accepting an
invite, we need to account for the possibility that the folders we are
trying to add them to have been deleted before they accepted the invite.
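A sketch of the defensive check described above (function and model names are illustrative):

```python
def add_user_to_service_with_folders(user, service, folder_ids):
    # folders may have been deleted between the invite being sent and the
    # user accepting it, so only keep the ones that still exist
    valid_folders = TemplateFolder.query.filter(
        TemplateFolder.service_id == service.id,
        TemplateFolder.id.in_(folder_ids),
    ).all()

    service_user = dao_get_service_user(user.id, service.id)
    service_user.folders = valid_folders
    db.session.add(service_user)
    db.session.commit()
```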