Before, the search term was either:
- an email address (or partial email address)
- a phone number (or partial phone number)
Now it can also be:
- a reference (or partial reference)
We can take a pretty good guess, by looking at the search term, whether
the user is searching by email address or by phone number. This
helps us:
- only show relevant notifications
- normalise the search term to give the best chance of matching what we
store in the `normalised_to` field
However, we can't look at a search term and guess whether it's a
reference, because a reference could take any format. Therefore if the
user hasn’t told us what kind of thing their search term is, we should
stop trying to guess.
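For illustration only, a rough sketch of the guess-and-normalise behaviour described above (the function names are hypothetical, not the actual implementation):

```python
import re


def guess_search_term_type(search_term):
    # Hypothetical helper: guess whether the user is searching by email
    # address or by phone number. Anything else is too ambiguous.
    if "@" in search_term:
        return "email"
    if re.fullmatch(r"[0-9\s()+-]{3,}", search_term):
        return "phone"
    return None


def normalise_search_term(search_term):
    # Normalise in the same spirit as the `normalised_to` field:
    # lower-case email addresses, strip formatting from phone numbers.
    guessed_type = guess_search_term_type(search_term)
    if guessed_type == "email":
        return search_term.strip().lower()
    if guessed_type == "phone":
        return re.sub(r"[\s()+-]", "", search_term)
    # Could be a reference, which can take any format: stop guessing
    # and search on the term as given.
    return search_term
```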
Change method name to be more relevant
Check that the verification notification we send is the correct one
Pass in notify db session for tests instead of sample_user
Refactor tests by using admin_request instead of client
Refactor all tests for affected reply-to endpoints for good measure
Allow overwriting reply-to address with the same address
Skip checking for duplicates if it's a reply-to email update
Fix refactored tests
Verify duplicates exception not needed
A check for a duplicate reply-to email address has been added on:
- the verification endpoint, so we do not send the verification
notification needlessly
- the add reply-to email address and update reply-to email address
endpoints, as those can be hit multiple times after the email address
has been verified (so the same email address could end up being added
multiple times). EDIT: this has now been prevented in the admin app,
but it's better to retain the double-check for safety.
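A rough sketch of the duplicate check, assuming a hypothetical `ServiceEmailReplyTo` model with `service_id` and `email_address` columns (not the actual code):

```python
def is_duplicate_reply_to_email(session, service_id, email_address):
    # Returns True if the service already has this reply-to address, so
    # callers can skip sending the verification notification or adding
    # the same address a second time.
    existing_addresses = (
        session.query(ServiceEmailReplyTo.email_address)
        .filter(ServiceEmailReplyTo.service_id == service_id)
        .all()
    )
    return any(
        row.email_address.lower() == email_address.lower()
        for row in existing_addresses
    )
```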
This data includes service and org name, consent to research,
contact details and both intended and actual notification
volumes by notification type.
This query was created to get data for a CSV report for our
platform admins.
This relationship is via the `Organisation` now; we don't use this
column to fudge a relationship by matching the user's email address
against these columns.
This sets the folder permissions for a user when adding them to a
service. If a user is being added to a service after accepting an
invite, we need to account for the possibility that the folders we are
trying to add them to have been deleted before they accepted the invite.
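As a rough illustration (names are hypothetical), the defensive step amounts to intersecting the requested folder IDs with the folders that still exist:

```python
def folder_permissions_that_still_exist(requested_folder_ids, existing_folder_ids):
    # Folders may have been deleted between the invite being sent and it
    # being accepted, so only keep the IDs that still exist on the service.
    existing = set(existing_folder_ids)
    return [folder_id for folder_id in requested_folder_ids if folder_id in existing]
```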
Updated the add_user_to_service endpoint to only handle data in the
'new' format (`{"permissions": [...]}` instead of `[permission_1, permission_2]`)
since Admin has been updated to send data the new way.
This change means that we no longer need the Marshmallow Permission
schema, so it can be deleted.
The data posted to the `add_user_to_service` endpoint is currently sent as a
list of permissions:
`[{'permission': MANAGE_SETTINGS}, {'permission': MANAGE_TEMPLATES}]`.
This endpoint is also going to be used for folder permissions, so the
data now needs to be nested:
`{'permissions': [{'permission': MANAGE_SETTINGS}, {'permission': MANAGE_TEMPLATES}]}`
This changes the add_user_to_service endpoint to accept data in either
format. Once admin is sending data in the new format, the code can be
simplified.
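A minimal sketch of accepting either shape during the transition, assuming a Flask request handler (the helper name is illustrative):

```python
from flask import request


def get_permissions_from_request():
    # Old format: a bare list, e.g.
    #   [{"permission": "manage_settings"}, {"permission": "manage_templates"}]
    # New format: nested under a "permissions" key, e.g.
    #   {"permissions": [{"permission": "manage_settings"}, ...]}
    data = request.get_json()
    if isinstance(data, list):
        return data
    return data.get("permissions", [])
```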
However, until we can create a letter without a logo, we will still default to hm-government, because the dvla_organisation is set on the service.
This does simplify the code.
Also removed the inserts to letter_branding in the data migration file, because we can deploy this before the rest of the work is finished; we will need to add them later.
Now that we're reading from two tables (ft_notification_status and
notifications) for stats, we'll get a couple of rows for each
notification type. If a service doesn't have any rows in one of those
tables, the query will return a row with nulls for the notification
types and counts. Some services will have history but no stats from
today, others will have data from today but no history.
This commit acknowledges that any row might have nulls, not just the
first row.
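A rough sketch of the defensive handling, assuming each row exposes a (possibly null) notification type, status and count (illustrative names):

```python
from collections import defaultdict


def aggregate_statistics(rows):
    # Any row can have nulls if the service has no data in one of the
    # two source tables, so skip empty rows rather than assuming only
    # the first row can be empty.
    counts = defaultdict(int)
    for row in rows:
        if row.notification_type is None:
            continue
        counts[(row.notification_type, row.status)] += row.count or 0
    return dict(counts)
```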
The Flask-SQLAlchemy paginate function issues a separate query to get
the total count of rows for a given filter. This query (with the
filters used by the API integration Message log page) is slow for
services with a large number of notifications.
Since the Message log page doesn't actually allow users to paginate
through the response (it only shows the last 50 messages) we can
use limit instead of paginate, which requires passing in another
flag from admin to the dao method.
A `count` flag was added to `paginate` in March 2018, however
there has been no release of Flask-SQLAlchemy since then, so we need
to pull the dev version of the package from GitHub.
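Roughly the change in the dao method, assuming a hypothetical `count_pages` flag passed down from admin (this sketch only shows the two query paths; a real implementation would presumably wrap the results consistently):

```python
def get_notifications_for_service(query, page=1, page_size=50, count_pages=True):
    # paginate() issues an extra COUNT(*) query, which is slow for
    # services with a large number of notifications.
    if count_pages:
        return query.paginate(page=page, per_page=page_size)
    # The Message log only shows the most recent messages, so a plain
    # limit avoids the expensive count.
    return query.limit(page_size).all()
```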
The admin app needs to get the service data retention for the specified
notification type, so to avoid iterating through the list of all
existing service data retention settings we restore the endpoint
that gets the individual data retention period.
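A sketch of the restored endpoint, assuming a Flask blueprint and a hypothetical DAO helper (route and function names are illustrative):

```python
from flask import jsonify


@service_blueprint.route(
    "/<uuid:service_id>/data-retention/notification-type/<notification_type>",
    methods=["GET"],
)
def get_data_retention_for_notification_type(service_id, notification_type):
    # Fetch the single retention setting for this service and
    # notification type instead of returning the whole list.
    retention = fetch_service_data_retention_by_notification_type(
        service_id, notification_type
    )
    return jsonify(retention.serialize() if retention else None), 200
```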
Allows getting notification counts for a given number of days to
support services with custom data retention periods (admin dashboard
page should still display counts for the last 7 days, while the
notifications page displays all stored notifications).
If the request is for the big numbers on the activity page, then we need to use the right number of days.
Added an endpoint to get the data retention for the service and notification type, which is needed on the activity page to say how long the report is available for.
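A rough sketch of counting over a configurable window, assuming a hypothetical `limit_days` parameter and a notifications-like model (illustrative only):

```python
from datetime import datetime, timedelta


def midnight_n_days_ago(limit_days):
    # Start of the day `limit_days - 1` days ago, so limit_days=7 covers
    # today plus the previous six days.
    midnight_today = datetime.utcnow().replace(
        hour=0, minute=0, second=0, microsecond=0
    )
    return midnight_today - timedelta(days=limit_days - 1)


def count_notifications(query, limit_days):
    # Only count rows created within the window being asked for, e.g. 7
    # for the dashboard or the service's custom retention period.
    return query.filter(
        Notification.created_at >= midnight_n_days_ago(limit_days)
    ).count()
```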
Added an option to include or exclude one-off messages to the DAO
function `get_notifications_for_service`. Previously, one-off
notifications were not returned - this has changed so that the default
is for one-off notifications to be returned. Also simplified the
`include_jobs` filter for this function.
The DAO function gets used in 3 places - for the V1 and V2 API endpoints,
which will now start to return one-off messages. It also gets used by
the admin app, which needs to pass `include_one_off=False` to
`get_all_notifications_for_service` where we don't want one-off
notifications to show, such as on the API message log page.
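A sketch of how the flags might translate into filters, assuming hypothetical columns where `job_id` marks job notifications and `created_by_id` marks one-off sends (not the actual DAO):

```python
def apply_notification_filters(query, include_jobs=True, include_one_off=True):
    # One-off notifications are now returned by default; callers such as
    # the API message log page can pass include_one_off=False.
    if not include_jobs:
        query = query.filter(Notification.job_id.is_(None))
    if not include_one_off:
        query = query.filter(Notification.created_by_id.is_(None))
    return query
```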
We no longer need the `/platform-stats` route in the service blueprint,
because admin is using the new `/platform-stats` route in the platform stats
blueprint instead.
It now uses the ft_notification_status table - nice and quick. If you
ask for the current tax year it'll top up the data with numbers from
the notifications table.
The data is in exactly the same shape as the old endpoint - no changes
to the admin app are necessary.
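A rough sketch of the top-up, assuming both queries return (notification_type, count) pairs (names illustrative):

```python
from collections import Counter


def yearly_counts(fact_table_rows, recent_rows, is_current_tax_year):
    # Historic data comes from ft_notification_status; if the requested
    # tax year is the current one, top it up with counts taken straight
    # from the notifications table, which the fact table hasn't caught
    # up with yet.
    counts = Counter({row.notification_type: row.count for row in fact_table_rows})
    if is_current_tax_year:
        for row in recent_rows:
            counts[row.notification_type] += row.count
    return dict(counts)
```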
`@service_blueprint.route('/<uuid:service_id>/fragment/aggregate_statistics')`
is not being used anywhere, so has been removed. The `provider_statistics_dao`
can also be removed, since the function it contained was only used for
that endpoint.
The admin app currently calls get_detailed_service, which gets
notification stats, adds them on to the rest of the detailed service
response, and returns it. However, the admin app then only looks at
the stats - it doesn't look at the rest of the service object.
This is called in a few high-profile places - the dashboard, the
notification summary page, and when you send a job. By creating a
separate endpoint that ignores the rest of the service object (and
no Marshmallow either!), the hope is that we'll improve some slowness
we've been seeing.
Note: The old detailed function will still need to stay - it's used
by get_services(detailed=True) for the platform admin page.
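A sketch of the slimmer endpoint being described, with hypothetical route and DAO names and no Marshmallow serialisation:

```python
from flask import jsonify


@service_blueprint.route("/<uuid:service_id>/statistics", methods=["GET"])
def get_service_notification_statistics(service_id):
    # Return only the stats the admin app reads, without building and
    # serialising the whole detailed service object.
    rows = dao_fetch_todays_stats_for_service(service_id)  # hypothetical DAO
    return jsonify(
        data={row.notification_type: row.count for row in rows}
    )
```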