`@service_blueprint.route('/<uuid:service_id>/fragment/aggregate_statistics')`
is not being used anywhere, so it has been removed. The `provider_statistics_dao`
can also be removed, since the function it contained was only used by
that endpoint.
Update the provider_statistics DAO tests - these are largely irrelevant, since the endpoint that uses the query is no longer called. A follow-up PR will delete the unused code.
Update rate_multiplier to always be an integer
- Change the usage queries to a union so that billing_units is correct for all notification types, removing the business logic from the schema (see the sketch below).
- Added tests for different fragment counts, rates and sheet counts.
This was needed because rate_multiplier was being stored as both 1 and 1.0, which were not resolving to the same value.
This updates the table to use Integer.
Also changed the logging for the task.
The timezone applied is that of the database - locally a db will get timezone=GB, but the servers use UTC (which is right).
This date is truncated to the month and is in BST, so the time and timezone are irrelevant here.
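The union described in the bullets above could look roughly like the sketch below. This is illustrative only, assuming the existing `Notification` model with `billable_units` and `rate_multiplier` (now an Integer) columns; the real query in the repo may differ.

```python
from sqlalchemy import literal


def usage_rows(session, service_id):
    sms = session.query(
        Notification.notification_type,
        # fragments * rate_multiplier; both are Integers now, so 1 vs 1.0 cannot diverge
        (Notification.billable_units * Notification.rate_multiplier).label('billing_units'),
    ).filter(
        Notification.service_id == service_id,
        Notification.notification_type == 'sms',
    )

    email = session.query(
        Notification.notification_type,
        literal(0).label('billing_units'),  # emails have no billable fragments
    ).filter(
        Notification.service_id == service_id,
        Notification.notification_type == 'email',
    )

    # a third branch for letters (billed per sheet) would be unioned in the same way
    return sms.union_all(email).all()
```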
Added a new DAO function which archives letter contact blocks by
setting archived to True. This raises an ArchiveValidationError if
trying to archive the default letter block for a service or the default
letter contact block for a template.
Added a new endpoint for archiving letter contact blocks.
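A minimal sketch of what the archiving DAO might look like, assuming the existing ServiceLetterContact model, the app's `db` handle and the ArchiveValidationError exception; the `templates` relationship and error messages are illustrative, not the exact implementation:

```python
from app import db  # assumed existing SQLAlchemy handle


def dao_archive_letter_contact(service_id, letter_contact_id):
    letter_contact = ServiceLetterContact.query.filter_by(
        id=letter_contact_id,
        service_id=service_id,
    ).one()

    # guard conditions as described above; `templates` is a hypothetical relationship
    if letter_contact.is_default:
        raise ArchiveValidationError('You cannot archive the default letter contact block')
    if letter_contact.templates:
        raise ArchiveValidationError('You cannot archive a letter contact block that is the default for a template')

    letter_contact.archived = True
    db.session.add(letter_contact)
    db.session.commit()
    return letter_contact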
Added a new DAO function which archives SMS senders by setting
archived to True. This raises an ArchiveValidationError if trying
to archive a default SMS sender or an inbound number.
Added a new endpoint for archiving SMS senders.
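A rough sketch of the new archive endpoint, in the style of the existing service blueprint; the URL shape, DAO name and `serialize()` call are assumptions rather than the exact route added:

```python
from flask import jsonify


@service_blueprint.route(
    '/<uuid:service_id>/sms-sender/<uuid:sms_sender_id>/archive', methods=['POST']
)
def archive_sms_sender(service_id, sms_sender_id):
    # the DAO raises ArchiveValidationError for default senders and inbound numbers;
    # an error handler is assumed to turn that into a 400 response
    sms_sender = dao_archive_service_sms_sender(service_id, sms_sender_id)
    return jsonify(data=sms_sender.serialize()), 200
```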
Added a new DAO function which archives email reply_to addresses by
setting archived to True. This raises a new type of error, an
ArchiveValidationError, if trying to archive a default reply_to address.
Added a new endpoint for archiving email reply_to addresses.
Updated the DAO methods which return a single SMS sender and all SMS senders
to only return the non-archived senders. Changed the error raised in the Admin
interface from a SQLAlchemyError to a BadRequestError.
Updated the DAO methods which return a single email reply_to address and
all reply_to addresses to only return the non-archived addresses.
Changed the type of error that gets raised when using the Admin
interface to be BadRequestError instead of a SQLAlchemyError.
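For illustration, the non-archived filtering in these DAO methods might look something like this (model and function names assumed):

```python
def dao_get_sms_senders_by_service_id(service_id):
    # archived senders are excluded so callers only ever see active ones
    return ServiceSmsSender.query.filter_by(
        service_id=service_id,
        archived=False,
    ).all()
```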
Added a new boolean column, `archived`, with a default of False to the
three models which are used to specify the 'reply to' address for
notifications:
* ServiceEmailReplyTo
* ServiceSmsSender
* ServiceLetterContact
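A sketch of what the accompanying migration could look like, assuming migrations are managed with Alembic; revision identifiers are placeholders and the table names are assumptions matching the three models above:

```python
from alembic import op
import sqlalchemy as sa

# placeholder revision identifiers
revision = 'xxxx'
down_revision = 'yyyy'

# assumed table names for the three models listed above
TABLES = ('service_email_reply_to', 'service_sms_senders', 'service_letter_contacts')


def upgrade():
    for table in TABLES:
        # existing rows get archived = false via the server default
        op.add_column(
            table,
            sa.Column('archived', sa.Boolean(), nullable=False, server_default=sa.false()),
        )


def downgrade():
    for table in TABLES:
        op.drop_column(table, 'archived')
```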
Getting the data for a day can be reasonably slow (a few hundred
milliseconds), and if someone's viewing a service with no activity we
don't want to run that query seven times every two seconds. So if there
is no data in redis, then when we get the data out of the database we
should put it in redis so we can just grab it from there next time.
This'll happen in two cases:
* redis data is deleted
* the service sent no messages that day
Additionally, make sure that we convert nicely from redis' return
values (ascii strings) to unicode keys and integer counts.
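That conversion might be done with a small helper along these lines (a sketch; the helper name is made up):

```python
def decode_daily_stats(raw_counts):
    """Turn redis' raw hash (bytes/ascii keys and values) into
    unicode keys and integer counts."""
    return {
        key.decode('utf-8') if isinstance(key, bytes) else key: int(value)
        for key, value in raw_counts.items()
    }
```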
New redis keys are partitioned per service per day. New process is as
follows:
* require a count of days to filter by. Currently admin always gives 7.
* for each day, check and see if there's anything in redis. There won't
be if either a) redis is/was down or b) the service didn't send any
notifications that day
- if there isn't, go to the database and get a count out.
* combine all these stats together
* get the names/template types etc out of the DB at the end.
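Putting those steps together, the per-day fetch-and-cache loop might be sketched as follows. The key format, expiry and DAO call are assumptions for illustration, and it reuses the decode helper sketched above:

```python
from datetime import date, timedelta


def daily_stats_for_service(service_id, days=7):
    stats = {}
    for offset in range(days):
        day = date.today() - timedelta(days=offset)
        # hypothetical key layout: one redis hash per service per day
        cache_key = '{}-{}-count'.format(service_id, day.isoformat())

        # the wrapped client is assumed to return an empty mapping when the key
        # is missing or redis is unavailable
        counts = redis_client.hgetall(cache_key)
        if not counts:
            # fall back to the database and repopulate redis for next time
            counts = dao_fetch_stats_for_service_for_day(service_id, day)  # hypothetical DAO
            if counts:
                redis_client.hmset(cache_key, counts)
                redis_client.expire(cache_key, 24 * 60 * 60)

        stats[day] = decode_daily_stats(counts)
    return stats
```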
We should be very careful about when we get data from
NotificationHistory - this should probably only be from scheduled
tasks. `dao_get_template_usage` is only called from the template
statistics rest endpoint, so shouldn't ever hit template history.
Also, moved tests out to a new file to break up the 2k-line test file a bit.
This PR optimizes this query to use a more efficient index.
- Add notification_type to dao_get_last_template_usage so the query can use the template_id index.
- Tested and analysed the query on the production database with very significant results: execution time drops from ~21 seconds to under 1 ms.
Before:
QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Limit (cost=0.43..1711.35 rows=1 width=935) (actual time=21186.053..21186.053 rows=0 loops=1)
-> Index Scan Backward using ix_notifications_created_at on notifications (cost=0.43..4607493.80 rows=2693 width=935) (actual time=21186.052..21186.052 rows=0 loops=1)
Filter: (((key_type)::text <> 'test'::text) AND (template_id = 'xxxxxx'::uuid))
Rows Removed by Filter: 8244071
Planning time: 0.112 ms
Execution time: 21186.082 ms
After:
QUERY PLAN
---------------------------------------------------------------------------------------------------------------------
Limit (cost=5323.10..5323.10 rows=1 width=935)
-> Sort (cost=5323.10..5323.74 rows=258 width=935)
Sort Key: created_at DESC
-> Index Scan using ix_notifications_template_id on notifications (cost=0.56..5321.81 rows=258 width=935)
Index Cond: (template_id = 'xxxxx'::uuid)
Filter: (((key_type)::text <> 'test'::text) AND (notification_type = 'sms'::notification_type))
Planning time: 1.102 ms
Execution time: 0.584 ms
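The DAO change behind this might look roughly like the following; the body is a sketch, and the added notification_type filter is the point:

```python
def dao_get_last_template_usage(template_id, template_type):
    # the extra notification_type filter lets Postgres use ix_notifications_template_id
    # instead of walking ix_notifications_created_at backwards
    return Notification.query.filter(
        Notification.template_id == template_id,
        Notification.notification_type == template_type,
        Notification.key_type != 'test',
    ).order_by(
        Notification.created_at.desc()
    ).first()
```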
We need to deal with this: it's fine when updating a notification status from delivered to delivered, but the DailySortedLetter counts are being doubled.
Adding the file_name to the table as a unique key gets around this issue. It will mean we have multiple rows for each billing_day, but that's ok - we can aggregate them.
This will also give us a way to see which file created which count.
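A sketch of how the model and the aggregation could look with file_name in the unique key; the column names and exact constraint are assumptions based on the description above, not the shipped schema:

```python
import uuid

from sqlalchemy import UniqueConstraint, func
from sqlalchemy.dialects.postgresql import UUID

from app import db  # assumed existing Flask-SQLAlchemy instance


class DailySortedLetter(db.Model):
    __tablename__ = 'daily_sorted_letter'

    id = db.Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    billing_day = db.Column(db.Date, nullable=False, index=True)
    file_name = db.Column(db.String, nullable=True)
    unsorted_count = db.Column(db.Integer, nullable=False, default=0)
    sorted_count = db.Column(db.Integer, nullable=False, default=0)

    # one row per file per billing day, so reprocessing a file cannot double the counts
    __table_args__ = (
        UniqueConstraint('file_name', 'billing_day', name='uix_file_name_billing_day'),
    )


def daily_totals(day):
    # totals for a billing day are aggregated across its per-file rows
    return db.session.query(
        func.sum(DailySortedLetter.sorted_count),
        func.sum(DailySortedLetter.unsorted_count),
    ).filter(DailySortedLetter.billing_day == day).one()
```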