now that we're reading from two tables (ft_notification_status and
notifications) for stats, we'll get a couple of rows for each
notification type. If a service doesn't have any rows in one of those
tables, the query will return a row with nulls for the notification
types and counts. Some services will have history but no stats from
today; others will have data from today but no history.
This commit acknowledges that any row might have nulls, not just the
first row.
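A minimal sketch of that handling, assuming each row comes back as a
(notification_type, status, count) tuple - the row shape and names here
are illustrative, not the actual query:

```python
def format_statistics(rows):
    counts = {
        notification_type: {"requested": 0, "delivered": 0, "failed": 0}
        for notification_type in ("sms", "email", "letter")
    }
    for notification_type, status, count in rows:
        # any row - not just the first - can be all nulls if the service
        # has no data in one of the two tables, so skip those rows
        if notification_type is None:
            continue
        counts[notification_type]["requested"] += count
        if status == "delivered":
            counts[notification_type]["delivered"] += count
        elif status in ("technical-failure", "temporary-failure", "permanent-failure"):
            counts[notification_type]["failed"] += count
    return counts
```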
Flask-SQLAlchemy's `paginate` function issues a separate query to get
the total count of rows for a given filter. This query (with the
filters used by the API integration Message log page) is slow for
services with a large number of notifications.
Since the Message log page doesn't actually allow users to paginate
through the response (it only shows the last 50 messages), we can
use `limit` instead of `paginate`, which requires passing another
flag from admin to the DAO method.
A `count` flag was added to `paginate` in March 2018, but there has
been no release of flask-sqlalchemy since then, so we need to pull
the dev version of the package from GitHub.
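A rough sketch of the difference, with illustrative names rather than
the actual DAO code: `paginate` runs an extra `SELECT count(*)` with the
same filters, while `limit` just fetches the rows we show.

```python
def get_api_notifications(service_id, filters, page=1, page_size=50, count_pages=True):
    # Notification is the app's SQLAlchemy model; parameter names are assumptions
    query = (
        Notification.query
        .filter(Notification.service_id == service_id, *filters)
        .order_by(Notification.created_at.desc())
    )

    if count_pages:
        # normal pagination: fetches one page *and* counts every matching row
        return query.paginate(page=page, per_page=page_size)

    # Message log page: only the latest 50 are ever shown, so skip the count
    return query.limit(page_size).all()
```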
Added cancelled letters to the number of failed letters in the statistics
that get used for the dashboard. At some point, we want to stop
including cancelled letters in the stats, but for now this keeps things
consistent with our current letter failure state, permanent-failure.
Letters should always have a reference, because that’s what DVLA use to
tell us when they’ve sent a letter.
If a letter has a reference of `None` then DVLA say they’ve sent a
letter with a reference of `'None'`. This means we can never reconcile
the letter, which means it stays in `created`, which means it never
gets billed.
We don’t think this has affected any real letters yet, just ones that
we’ve sent as tests.
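As an illustration only (not necessarily what this change does), the
kind of guard that would stop a reference-less letter reaching DVLA
might look like this; the function name is hypothetical:

```python
def get_letter_reference(notification):
    if notification.reference is None:
        # a missing reference would reach DVLA as the literal string 'None',
        # which we could never reconcile, so fail loudly instead
        raise ValueError("letter {} has no reference".format(notification.id))
    return notification.reference
```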
Bumped notifications-utils to 3.7.0. Version 3.7.0 includes the
`convert_utc_to_bst` and `convert_bst_to_utc` functions and the
`LETTER_PROCESSING_DEADLINE` constant, so these have been removed from
this repo, and anywhere that used them has been updated to import them
from `notifications-utils`.
Also bumped pytest by a patch version to bring in a bug fix.
This commit modifies the code paths the admin app uses to send one-off
emails and text messages so that they also accept letters.
This mostly worked already; the two changes (sketched below) were:
- making sure that one-off letters are processed by the correct task,
from the correct queue
- making sure that one-off letters sent from a service in research mode
don’t get put on a queue and go straight to `delivered` (because we
don’t want to send them for real)
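A hedged sketch of both changes; the task, queue and helper names here
are assumptions rather than the exact ones in the codebase:

```python
def send_one_off_notification(service, notification):
    if notification.notification_type == "letter":
        if service.research_mode:
            # research-mode letters are never sent for real, so skip the
            # queue and mark the notification as delivered straight away
            notification.status = "delivered"
            dao_update_notification(notification)
            return
        # real one-off letters go to the letter PDF task on the letter queue
        create_letters_pdf.apply_async([str(notification.id)], queue="create-letters-pdf-tasks")
    else:
        # emails and text messages keep using the existing delivery path
        send_notification_to_queue(notification, service.research_mode)
```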
The admin app needs to get the service data retention for the specified
notification type, so to avoid iterating through the list of all
existing service data retention settings we restore the endpoint that
gets the individual data retention period.
Allows getting notification counts for a given number of days to
support services with custom data retention periods (admin dashboard
page should still display counts for the last 7 days, while the
notifications page displays all stored notifications).
If the request is for the big numbers on the activity page, then we need to use the right number of days.
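A sketch of the kind of query this enables, assuming a `limit_days`
parameter; the model, column and helper names are assumptions and the
midnight calculation is simplified (no BST handling):

```python
from datetime import datetime, timedelta

from sqlalchemy import func


def fetch_notification_counts(service_id, limit_days=7):
    # count everything created since midnight `limit_days` days ago, so
    # services with longer data retention can ask for a wider window
    since = datetime.utcnow().replace(hour=0, minute=0, second=0, microsecond=0) - timedelta(days=limit_days)
    return (
        db.session.query(Notification.notification_type, func.count(Notification.id))
        .filter(Notification.service_id == service_id, Notification.created_at >= since)
        .group_by(Notification.notification_type)
        .all()
    )
```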
Added an endpoint to get the data retention for the service and notification type, which is needed on the activity page to say how long the report is available for.
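A sketch of what such an endpoint could look like; the route, blueprint
and DAO function names are assumptions:

```python
@service_blueprint.route(
    "/<uuid:service_id>/data-retention/notification-type/<notification_type>", methods=["GET"]
)
def get_data_retention_for_notification_type(service_id, notification_type):
    # returns the retention row for this service and notification type,
    # or null if the service has no explicit setting
    retention = fetch_service_data_retention_by_notification_type(service_id, notification_type)
    return jsonify(retention.serialize() if retention else None), 200
```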
Added the option to filter by one-off messages to the DAO function
`get_notifications_for_service`. Previously, one-off notifications
were not returned - this has changed so that they are returned by
default. Also simplified the `include_jobs` filter for this function.
The DAO function gets used in 3 places - the V1 and V2 API endpoints,
which will now start to return one-off messages, and the admin app,
which needs to pass `include_one_off=False` to
`get_all_notifications_for_service` on pages where we don't want
one-off notifications to show, such as the API message log page.
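A sketch of the filter, assuming one-off notifications are the ones
created by a user rather than an API key; the exact column used to
identify them is an assumption:

```python
def get_notifications_for_service(service_id, include_jobs=False, include_one_off=True):
    filters = [Notification.service_id == service_id]

    if not include_jobs:
        # exclude notifications that were uploaded as part of a CSV job
        filters.append(Notification.job_id.is_(None))

    if not include_one_off:
        # the admin API message log passes include_one_off=False so that
        # messages sent as one-offs by a user are hidden
        filters.append(Notification.created_by_id.is_(None))

    return Notification.query.filter(*filters).order_by(Notification.created_at.desc())
```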
We no longer need the `/platform-stats` route in the service blueprint,
because admin is using the new `/platform-stats` route in the platform stats
blueprint instead.
Both service api tasks work fine if the object is unexpectedly deleted
halfway through - they both check to see if the api details are still
in the DB before trying to send the request.
it now uses the ft_notification_status table - nice and quick. if you
ask for the current tax year it'll top up the data with numbers from
the notification table.
the data is in exactly the same shape as the old endpoint - no changes
to the admin app are necessary
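Roughly, the shape of the query is as below; the fact-table columns and
the tax-year helpers are assumptions, not the actual implementation:

```python
def fetch_monthly_notification_statuses(service_id, year):
    start, end = get_financial_year(year)  # assumed helper: tax-year bounds

    # fast path: nightly aggregates from ft_notification_status
    rows = (
        db.session.query(
            FactNotificationStatus.notification_type,
            FactNotificationStatus.notification_status,
            func.sum(FactNotificationStatus.notification_count),
        )
        .filter(
            FactNotificationStatus.service_id == service_id,
            FactNotificationStatus.bst_date >= start,
            FactNotificationStatus.bst_date < end,
        )
        .group_by(FactNotificationStatus.notification_type, FactNotificationStatus.notification_status)
        .all()
    )

    if year == current_financial_year():  # assumed helper
        # the fact table only covers up to last night, so top today's
        # numbers up from the live notifications table
        rows += (
            db.session.query(Notification.notification_type, Notification.status, func.count(Notification.id))
            .filter(Notification.service_id == service_id, Notification.created_at >= midnight_bst())  # assumed helper
            .group_by(Notification.notification_type, Notification.status)
            .all()
        )

    return rows
```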
Created a platform-stats blueprint and moved the new platform stats
endpoint to the new blueprint (it was previously in the service
blueprint). Since the original platform stats route and the new platform
stats route are now in different blueprints, their view functions can
have the same name without any issues.
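A minimal, self-contained illustration of why the duplicate name is
fine (blueprint and route names here are simplified): Flask namespaces
endpoints by blueprint, so both view functions register without a
clash.

```python
from flask import Blueprint, Flask

service_blueprint = Blueprint("service", __name__, url_prefix="/service")
platform_stats_blueprint = Blueprint("platform_stats", __name__, url_prefix="/platform-stats")


@service_blueprint.route("/platform-stats")  # endpoint "service.get_platform_stats"
def get_platform_stats():
    return "old platform stats"


@platform_stats_blueprint.route("/")  # endpoint "platform_stats.get_platform_stats"
def get_platform_stats():  # noqa: F811 - same name, different blueprint, so no conflict
    return "new platform stats"


app = Flask(__name__)
app.register_blueprint(service_blueprint)
app.register_blueprint(platform_stats_blueprint)
```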
Moved the `fetch_new_aggregate_stats_by_date_range_for_all_services`
DAO function from the services DAO to the Notifications DAO since this
function queries the `notification` and `notification_history` tables.
Also added a test to check that the data returned from the function
takes BST into account.
Added in a new endpoint and DAO function to provide the data for the new
platform admin statistics page. The DAO method gets different data from
the Notifications / NotificationHistory table and also groups it differently.
The old endpoint has not been deleted yet to allow the numbers on the
old and new pages to be compared.
`@service_blueprint.route('/<uuid:service_id>/fragment/aggregate_statistics')`
is not being used anywhere, so has been removed. The `provider_statistics_dao`
can also be removed, since the function this contained was only used for
the endpoint.
the admin app currently calls get_detailed_service, which gets
notification stats, adds them on to the rest of the detailed service
response, and returns it all. However, the admin app then only looks at
the stats - it doesn't look at the rest of the service object.
This is called in a few high profile places - the dashboard, the
notification summary page, and when you send a job. By creating a
separate endpoint that ignores the rest of the service object (and skips
marshmallow too), the hope is that we'll improve some of the slowness we've
been seeing.
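A hedged sketch of the lighter endpoint; the route and helper names are
assumptions, the point being that it returns just the stats with no
marshmallow schema involved:

```python
@service_blueprint.route("/<uuid:service_id>/statistics")
def get_service_notification_statistics(service_id):
    today_only = request.args.get("today_only") == "True"
    # plain dict of per-type requested/delivered/failed counts - no
    # service object, no marshmallow serialisation
    return jsonify(data=get_service_statistics(service_id, today_only))
```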
Note: The old detailed function will still need to stay - it's used
by get_services(detailed=True) for the platform admin page.
Added a new DAO function which archives letter contact blocks by
setting archived to True. This raises an ArchiveValidationError if
trying to archive the default letter contact block for a service or the
default letter contact block for a template.
Added a new endpoint for archiving letter contact blocks.
Added a new DAO function which archives SMS senders by setting
archived to True. This raises an ArchiveValidationError if trying
to archive a default SMS sender or an inbound number.
Added a new endpoint for archiving SMS senders.
Added a new DAO function which archives email reply_to addresses by
setting archived to True. This raises a new type of error, an
ArchiveValidationError, if trying to archive a default reply_to address.
Added a new endpoint for archiving email reply_to addresses.
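The three archiving changes above follow the same pattern; a hedged
sketch for the reply-to case (model and field names follow the commit
messages, but the details are assumptions):

```python
def archive_reply_to_email_address(service_id, reply_to_id):
    reply_to = ServiceEmailReplyTo.query.filter_by(id=reply_to_id, service_id=service_id).one()

    if reply_to.is_default:
        # the default reply-to address for a service can never be archived
        raise ArchiveValidationError("You cannot delete a default email reply to address")

    reply_to.archived = True
    db.session.add(reply_to)
    db.session.commit()
    return reply_to
```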
Updated the DAO methods which return a single SMS sender and all SMS senders
to only return the non-archived senders. Changed the error raised in the Admin
interface from a SQLAlchemyError to a BadRequestError.