Added cancelled letters to the number of failed letters in the statistics
that get used for the dashboard. At some point, we want to stop
including cancelled letters in the stats, but for now this keeps things
consistent with our current letter failure state, permanent-failure.
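A sketch of the kind of change, assuming the dashboard stats are driven by a constant listing the failure statuses (the constant name and exact members below are illustrative):

```python
# Illustrative only: the statuses the dashboard counts as failures, with
# 'cancelled' now included alongside the existing failure states.
NOTIFICATION_STATUS_TYPES_FAILED = [
    'technical-failure',
    'temporary-failure',
    'permanent-failure',
    'cancelled',
]
```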
Letters should always have a reference, because that’s what DVLA use to
tell us when they’ve sent a letter.
If a letter has a reference of `None` then DVLA say they’ve sent a
letter with a reference of `'None'`. This means we can never reconcile
the letter, which means it stays in `created`, which means it never
gets billed.
We don’t think this has affected any real letters yet, just ones that
we’ve sent as tests.
Bumped notifications-utils to 3.7.0. Version 3.7.0 includes the
`convert_utc_to_bst` and `convert_bst_to_utc` functions and the
`LETTER_PROCESSING_DEADLINE` constant, so they have been removed from
this repo and anywhere that used them has been updated to import them
from `notifications-utils`.
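The updated imports look something like this (the module paths inside `notifications-utils` are assumptions; adjust to wherever the functions and constant actually live):

```python
# Assumed module paths within notifications-utils 3.7.0.
from notifications_utils.timezones import convert_utc_to_bst, convert_bst_to_utc
from notifications_utils.letter_timings import LETTER_PROCESSING_DEADLINE
```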
Also bumped pytest by a patch version to bring in a bug fix.
This commit modifies the code paths the admin app uses to send one-off
emails and text messages to also accept letters.
This mostly worked already; the two changes (sketched below) were:
- making sure that one-off letters are processed by the correct task,
from the correct queue
- one-off letters sent from a service in research mode don’t get put on
a queue and go straight to `delivered` (because we don’t want to send
them for real)
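Roughly how the routing ends up looking; the task, queue and status names below are assumptions for illustration rather than the exact code:

```python
# Sketch only: task names, queue names and status values are assumed.
def queue_one_off_notification(service, notification):
    if notification.notification_type == 'letter':
        if service.research_mode:
            # Research-mode letters are never sent for real, so mark them
            # delivered straight away instead of putting them on a queue.
            update_notification_status_by_id(notification.id, 'delivered')
            return
        # One-off letters go to the letter task on the letter queue,
        # not to the email/SMS delivery tasks.
        create_letters_pdf.apply_async([str(notification.id)], queue='create-letters-pdf')
    elif notification.notification_type == 'email':
        deliver_email.apply_async([str(notification.id)], queue='send-email')
    else:
        deliver_sms.apply_async([str(notification.id)], queue='send-sms')
```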
The admin app needs to get the service data retention for the specified
notification type, so to avoid iterating through the list of all
existing service data retention settings, we restore the endpoint
that gets the individual data retention period.
Allows getting notification counts for a given number of days to
support services with custom data retention periods (admin dashboard
page should still display counts for the last 7 days, while the
notifications page displays all stored notifications).
If the request is for the big numbers on the activity page, then we need to use the right number of days.
Added an endpoint to get the data retention for the service and notification type, which is needed on the activity page to say how long the report is available for.
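A rough sketch of counting notifications over a configurable window; the import paths, helper and column names are assumptions:

```python
from sqlalchemy import func

from app import db                    # assumed import path
from app.models import Notification   # assumed import path


def dao_get_notification_counts(service_id, limit_days=7):
    # midnight_n_days_ago is an assumed helper returning local midnight n days back
    start = midnight_n_days_ago(limit_days)
    return (
        db.session.query(
            Notification.notification_type,
            Notification.status,
            func.count(Notification.id),
        )
        .filter(
            Notification.service_id == service_id,
            Notification.created_at >= start,
        )
        .group_by(Notification.notification_type, Notification.status)
        .all()
    )
```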
Added an option to filter out one-off messages in the DAO function
`get_notifications_for_service`. Previously, one-off notifications
were not returned - this has changed so that the default is for
one-off notifications to be returned. Also simplified the `include_jobs`
filter for this function.
The DAO function is used in three places: the V1 and V2 API endpoints,
which will now start to return one-off messages, and the admin app,
which needs to pass `include_one_off=False` to
`get_all_notifications_for_service` on pages where we don't want
one-off notifications to show, such as the API message log page.
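A sketch of the filtering, with defaults matching the behaviour described above; identifying one-off notifications via `created_by_id` is an assumption here:

```python
# Sketch only: relies on the app's Notification model; the way one-off
# notifications are identified (created_by_id set) is an assumption.
def get_notifications_for_service(service_id, include_jobs=False, include_one_off=True):
    query = Notification.query.filter(Notification.service_id == service_id)
    if not include_jobs:
        query = query.filter(Notification.job_id.is_(None))
    if not include_one_off:
        query = query.filter(Notification.created_by_id.is_(None))
    return query.order_by(Notification.created_at.desc())
```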
We no longer need the `/platform-stats` route in the service blueprint,
because admin is using the new `/platform-stats` route in the platform stats
blueprint instead.
Both service API tasks work fine if the object is unexpectedly deleted
halfway through - they both check to see if the API details are still
in the DB before trying to send the request.
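The guard in both tasks is along these lines (the task and DAO helper names are illustrative):

```python
# Sketch only: function names are assumptions.
def send_delivery_status_to_service(notification_id):
    notification = get_notification_by_id(notification_id)
    callback_api = get_service_callback_api_for_service(notification.service_id)

    if not callback_api:
        # The API details were deleted after the task was queued; there is
        # nothing to send, so return quietly instead of erroring.
        return

    # otherwise build the payload and POST it to the service's callback URL
```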
It now uses the ft_notification_status table - nice and quick. If you
ask for the current tax year it'll top up the data with numbers from
the notification table.
The data is in exactly the same shape as the old endpoint - no changes
to the admin app are necessary.
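The shape of the query is roughly this; the model, helpers and column names are assumptions based on the description:

```python
from sqlalchemy import func

# Sketch only: assumes a FactNotificationStatus model over ft_notification_status
# and helpers for financial year boundaries and topping up from notification.
def fetch_notification_status_for_year(service_id, year):
    start, end = get_financial_year(year)
    rows = (
        db.session.query(
            FactNotificationStatus.notification_type,
            FactNotificationStatus.notification_status,
            func.sum(FactNotificationStatus.notification_count),
        )
        .filter(
            FactNotificationStatus.service_id == service_id,
            FactNotificationStatus.bst_date >= start,
            FactNotificationStatus.bst_date < end,
        )
        .group_by(
            FactNotificationStatus.notification_type,
            FactNotificationStatus.notification_status,
        )
        .all()
    )
    if is_current_financial_year(year):
        # The fact table only covers fully processed days, so top up with
        # today's numbers straight from the notification table.
        rows += fetch_todays_stats_from_notification_table(service_id)
    return rows
```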
Created a platform-stats blueprint and moved the new platform stats
endpoint to the new blueprint (it was previously in the service
blueprint). Since the original platform stats route and the new platform
stats route are now in different blueprints, their view functions can
have the same name without any issues.
Moved the `fetch_new_aggregate_stats_by_date_range_for_all_services`
DAO function from the services DAO to the Notifications DAO since this
function queries the `notification` and `notification_history` tables.
Also added a test to check that the data returned from the function
takes BST into account.
Added in a new endpoint and DAO function to provide the data for the new
platform admin statistics page. The DAO method gets different data from
the Notification / NotificationHistory tables and also groups it differently.
The old endpoint has not been deleted yet to allow the numbers on the
old and new pages to be compared.
`@service_blueprint.route('/<uuid:service_id>/fragment/aggregate_statistics')`
is not being used anywhere, so has been removed. The `provider_statistics_dao`
can also be removed, since the function this contained was only used for
the endpoint.
The admin app currently calls get_detailed_service, which gets
notification stats, adds them on to the rest of the detailed service
object, and returns them. However, the admin app then only looks at
the stats - it doesn't look at the rest of the service object.
This is called in a few high-profile places - the dashboard, the
notification summary page, and when you send a job. By creating a
separate endpoint that ignores the rest of the service object (and
skips marshmallow too!), the hope is that we'll improve some slowness
we've been seeing.
Note: The old detailed function will still need to stay - it's used
by get_services(detailed=True) for the platform admin page.
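A sketch of what the new endpoint might look like; the route and helper names are assumptions, the point being that it returns plain dicts rather than serialising the whole service with marshmallow:

```python
# Sketch only: route and helper names are illustrative.
@service_blueprint.route('/<uuid:service_id>/statistics')
def get_service_notification_statistics(service_id):
    stats = dao_fetch_todays_stats_for_service(service_id)
    # No service object and no marshmallow schema - just the counts the
    # admin app actually uses.
    return jsonify(data=format_statistics(stats))
```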
Added a new DAO function which archives letter contact blocks by
setting archived to True. This raises an ArchiveValidationError if
trying to archive the default letter contact block for a service or the
default letter contact block for a template.
Added a new endpoint for archiving letter contact blocks.
Added a new DAO function which archives SMS senders by setting
archived to True. This raises an ArchiveValidationError if
trying to archive a default SMS sender or an inbound number.
Added a new endpoint for archiving SMS senders.
Added a new DAO function which archives email reply_to addresses by
setting archived to True. This raises a new type of error, an
ArchiveValidationError, if trying to archive a default reply_to address.
Added a new endpoint for archiving email reply_to addresses.
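All three archive DAO functions follow the same pattern; a sketch for reply-to addresses, with the model, decorator and message below assumed from the description:

```python
# Sketch only: model name, decorator and error message are assumptions.
@transactional
def archive_reply_to_email_address(service_id, reply_to_id):
    reply_to = ServiceEmailReplyTo.query.filter_by(
        id=reply_to_id, service_id=service_id
    ).one()

    if reply_to.is_default:
        # Archiving the default would leave the service without a reply-to.
        raise ArchiveValidationError('You cannot delete a default email reply to address')

    reply_to.archived = True
    db.session.add(reply_to)
```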
Updated the DAO methods which return a single SMS sender and all SMS senders
to only return the non-archived senders. Changed the error raised in the Admin
interface from a SQLAlchemyError to a BadRequestError.
Updated the DAO methods which return a single email reply_to address and
all reply_to addresses to only return the non-archived addresses.
Changed the type of error that gets raised when using the Admin
interface to be BadRequestError instead of a SQLAlchemyError.
This is so that we can filter out inactive organisations and services.
Note: we can't remove the user schema completely, as we still use it in
POST /user to create new users.
It is possible to search for a phone number from the email notification page and get an SMS message in return.
This also helps to optimise the query.
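If the fix is to constrain the query by the notification type of the page the search came from (an assumption here), the DAO ends up looking something like this; column and parameter names are illustrative:

```python
# Sketch only: relies on the app's Notification model; parameter and column
# names are assumptions.
def dao_get_notifications_by_to_field(service_id, search_term, notification_type):
    return (
        Notification.query.filter(
            Notification.service_id == service_id,
            Notification.notification_type == notification_type,
            Notification.to.ilike('%{}%'.format(search_term)),
        )
        .order_by(Notification.created_at.desc())
        .all()
    )
```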
The vast majority of messages that are being sent one-off are
time-sensitive. A typical example is a caseworker on the phone who sends
a message at the end of the call. They normally wait until the message
has been delivered, so all the time they’re waiting is time when they
can’t be helping someone else.
What we don’t want to happen is for the messages they’re sending to get
stuck behind a big lump of GOV.UK Subscription emails or passport
reminder texts. I think the best way to avoid this is to shift them
onto the priority queue.
We’re currently seeing queue sizes of up to 5,000 on the ‘normal’
queues; I don’t think there’s any risk of this change making the
priority queue more heavily-laden than this. Especially since the
traffic patterns of users sending one-off messages won’t be spiky.
When creating or updating an organisation, an integrity error is raised if the name is already used.
This change adds a new error handler for the organisation blueprint to catch the named unique index and return a 400 with a sensible message.
We have a similar error for unique service names, which was previously caught in the error handler for all blueprints. A new error handler for the service_blueprint has been created to catch those specific unique constraints.
This is a nice way to encapsulate the specific errors for a specific blueprint.
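A minimal sketch of a blueprint-scoped handler; the unique index name checked here is an assumption:

```python
from flask import jsonify
from sqlalchemy.exc import IntegrityError


# Sketch only: the constraint name is an assumption.
@organisation_blueprint.errorhandler(IntegrityError)
def handle_integrity_error(exc):
    if 'organisation_name' in str(exc.orig):
        return jsonify(result='error', message='Organisation name already exists'), 400
    raise exc
```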
notifications-admin has now been changed to always pass the service_id
to the 'service/unique' endpoint. This means we don't need to cover the
case of there being no service_id and the tests can also be updated.
Changed the '/service/unique' endpoint to optionally accept the
service_id parameter. It now doesn't matter if a user tries to change
the capitalization or add punctuation to their own service name. But
there should still be an error if the new name only differs from
another service's name by punctuation or capitalization.
service_id needs to be allowed to be None until notifications-admin is
updated to always pass in the service_id.
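A sketch of the uniqueness check described above; the route, parameter handling and normalisation are assumptions, with `service_id` staying optional for now:

```python
import string

from flask import jsonify, request


# Sketch only: endpoint shape and normalisation are assumptions.
@service_blueprint.route('/unique', methods=['GET'])
def is_service_name_unique():
    name = request.args.get('name', '')
    service_id = request.args.get('service_id')  # may be None for now

    def normalise(value):
        return ''.join(c for c in value.lower() if c not in string.punctuation + ' ')

    other_services = Service.query.filter(Service.id != service_id) if service_id else Service.query
    name_taken = any(normalise(s.name) == normalise(name) for s in other_services.all())
    return jsonify(result=not name_taken), 200
```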