`@service_blueprint.route('/<uuid:service_id>/fragment/aggregate_statistics')`
is not being used anywhere, so has been removed. The `provider_statistics_dao`
can also be removed, since the function it contained was only used by
that endpoint.
Update test_provider_statistics_dao - this is largely irrelevant, since the endpoint that uses the query is itself unused. We have a PR coming to delete the unused code.
Update rate_multiplier to always be an integer
- Change the usage queries to a union so that billing_units is correct for all notification types, removing the business logic from the schema (see the sketch below).
- Added tests for different fragment counts, rates and sheet counts.
This was needed because rate_multiplier was being stored as both 1 and 1.0, which did not resolve to the same value.
This updates the table to use Integer.
Also changed the logging for the task.
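As an illustration of the union change above - a rough sketch only, with simplified model and column names rather than the real query:

```python
from sqlalchemy import literal

# each branch of the union decides its own billing_units and rate_multiplier,
# so the serialisation schema no longer has to
sms_usage = db.session.query(
    NotificationHistory.billable_units.label("billing_units"),
    NotificationHistory.rate_multiplier,
).filter(NotificationHistory.notification_type == "sms")

letter_usage = db.session.query(
    NotificationHistory.billable_units.label("billing_units"),
    literal(1).label("rate_multiplier"),  # assumption: letters have no multiplier
).filter(NotificationHistory.notification_type == "letter")

usage = sms_usage.union_all(letter_usage)
```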
the admin app currently calls get_detailed_service, which gets
notification stats, adds them on to the rest of the detailed service
response, and returns them. However, the admin app then only looks at
the stats - it doesn't look at the rest of the service object.
This is called in a few high-profile places - the dashboard, the
notification summary page, and when you send a job. By creating a
separate endpoint that ignores the rest of the service object (and
skips marshmallow too!), the hope is that we'll improve some slowness we've
been seeing.
Note: The old detailed function will still need to stay - it's used
by get_services(detailed=True) for the platform admin page.
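A minimal sketch of the shape of the new endpoint - the route, helper and row attributes here are illustrative, not the actual implementation:

```python
from flask import jsonify

@service_blueprint.route('/<uuid:service_id>/statistics')
def get_service_notification_statistics(service_id):
    # fetch only the counts - no service object, no marshmallow serialisation
    rows = dao_fetch_stats_for_service(service_id)  # hypothetical DAO helper
    return jsonify(data={row.notification_type: row.count for row in rows})
```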
The timezone added is that of the database - locally a db will get timezone=GB, but the servers are using UTC (which is right).
This date is truncated to month and is BST; the exact time and timezone are irrelevant here.
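For context, the truncation in question is along these lines (a sketch, not the exact query; the model name is an assumption):

```python
from sqlalchemy import func

# date_trunc returns a timestamp in the connection's timezone - GB on a
# local db, UTC on the servers - but since only the month is kept, the
# time and timezone parts don't matter
month = func.date_trunc("month", NotificationHistory.created_at)
```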
Added a new DAO function which archives letter contact blocks by
setting archived to True. This raises an ArchiveValidationError if
trying to archive the default letter block for a service or the default
letter contact block for a template.
Added a new endpoint for archiving letter contact blocks.
Added a new DAO function which archives SMS senders by setting
archived to True. This raises an ArchiveValidationError if
trying to archive a default SMS sender or an inbound number.
Added a new endpoint for archiving SMS senders.
Added a new DAO function which archives email reply_to addresses by
setting archived to True. This raises a new type of error, an
ArchiveValidationError, if trying to archive a default reply_to address.
Added a new endpoint for archiving email reply_to addresses.
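All three archive DAO functions follow the same shape; a sketch of the email reply_to case, with assumed function and column names:

```python
def dao_archive_reply_to_by_id(service_id, reply_to_id):  # name is an assumption
    reply_to = ServiceEmailReplyTo.query.filter_by(
        id=reply_to_id, service_id=service_id
    ).one()
    if reply_to.is_default:
        raise ArchiveValidationError("You cannot delete a default email reply to address")
    reply_to.archived = True
    db.session.add(reply_to)
    db.session.commit()
```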
Updated the DAO methods which return a single SMS sender and all SMS senders
to only return the non-archived senders. Changed the error raised in the Admin
interface from a SQLAlchemyError to a BadRequestError.
Updated the DAO methods which return a single email reply_to address and
all reply_to addresses to only return the non-archived addresses.
Changed the type of error that gets raised when using the Admin
interface to be BadRequestError instead of a SQLAlchemyError.
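The getter changes amount to adding an archived filter, roughly:

```python
def dao_get_reply_to_by_service_id(service_id):  # illustrative name
    return ServiceEmailReplyTo.query.filter_by(
        service_id=service_id,
        archived=False,  # archived addresses are hidden from these lookups
    ).all()
```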
Added a new boolean column, `archived`, with a default of False to the
three models which are used to specify the 'reply to' address for
notifications (see the sketch after the list):
* ServiceEmailReplyTo
* ServiceSmsSender
* ServiceLetterContact
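The column itself is the same on each model - a sketch:

```python
class ServiceEmailReplyTo(db.Model):
    # ... existing columns ...
    archived = db.Column(db.Boolean, nullable=False, default=False)
```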
Since the admin app won’t be checking the metadata when it starts a job,
it’s now possible that someone could make a POST request which attempts
to start a job for an invalid file. This commit adds a check to make
sure that can’t happen.
This is more of an extra safety thing, rather than something that the
admin app or a user will see.
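The check itself is small - something along these lines (the error class and metadata field are assumptions):

```python
def check_job_metadata(metadata):
    # an upload made through the CSV flow always carries job metadata;
    # anything without it is not a valid file to start a job from
    if not metadata.get("template_id"):
        raise InvalidRequest("File is not a valid job", status_code=400)
```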
All of our uploads now have the metadata about the job set on them in
S3. So this commit moves to using that metadata, if it’s there, instead
of the data in the body of the POST request.
The aim of this is to stop the admin app having to post this data, which
means that it won’t have to keep this data in the session while doing
the file upload flow.
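Reading the metadata back is just a HEAD request on the object; a sketch, with the bucket/key convention left as an assumption:

```python
import boto3

def get_job_metadata_from_s3(bucket_name, file_key):
    s3 = boto3.client("s3")
    head = s3.head_object(Bucket=bucket_name, Key=file_key)
    # the user-defined metadata set at upload time, e.g. template_id
    return head["Metadata"]
```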
getting the data for a day can be reasonably slow (a few hundred
milliseconds), and if someone's viewing a service with no activity we
don't want to do that query seven times every two seconds. So if there
is no data in redis, when we get the data out of the database, we
should put it in redis so we can just grab it from there next time.
This'll happen in two cases:
* redis data is deleted
* the service sent no messages that day
additionally, make sure that we convert nicely from redis' return
values (ascii strings) to unicode keys and integer counts.
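Put together it's a straightforward cache-aside pattern; a sketch using redis-py directly, with the key format, DAO call and empty-day placeholder all assumptions:

```python
def daily_stats(service_id, day):
    key = f"{service_id}-{day}-count"                  # assumed key format
    cached = redis_client.hgetall(key)                 # {} if the key is missing
    if cached:
        # redis hands back ascii bytes - convert to unicode keys and int counts
        return {k.decode("utf-8"): int(v) for k, v in cached.items()
                if k != b"__placeholder__"}
    rows = dao_fetch_stats_for_day(service_id, day)    # hypothetical DAO call
    counts = {str(row.template_id): row.count for row in rows}
    # write even an empty day back so the next request skips the database;
    # the placeholder field is needed because a redis hash can't be empty
    redis_client.hset(key, mapping=counts or {"__placeholder__": 0})
    redis_client.expire(key, 24 * 60 * 60)
    return counts
```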
New redis keys are partitioned per service per day. The new process,
sketched after the list, is as follows:
* require a count of days to filter by. Currently admin always gives 7.
* for each day, check and see if there's anything in redis. There won't
be if either a) redis is/was down or b) the service didn't send any
notifications that day
- if there isn't, go to the database and get a count out.
* combine all these stats together
* get the names/template types etc out of the DB at the end.
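Combining the per-day lookup (as sketched above) over the requested window then looks roughly like this (`last_n_days` is a hypothetical date helper):

```python
from collections import Counter

def stats_for_last_n_days(service_id, days):
    totals = Counter()
    for day in last_n_days(days):       # hypothetical date helper
        totals.update(daily_stats(service_id, day))
    # template names/types are joined in from the DB once, at the end
    return dict(totals)
```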
it's used in a few places - it should definitely know what timezones
are and return datetimes rather than dates, since with dates it's hard
to figure out how tz-aware they are.
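i.e. something along these lines, using pytz (the helper name is illustrative):

```python
from datetime import datetime
import pytz

london = pytz.timezone("Europe/London")

def local_midnight_in_utc(date):
    """Return the UTC datetime for midnight, London time, on the given date."""
    naive_midnight = datetime.combine(date, datetime.min.time())
    return london.localize(naive_midnight).astimezone(pytz.utc)
```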
we should be very careful about when we get data from
NotificationHistory - this should probably only be from scheduled
tasks. `dao_get_template_usage` is only called from the template
statistics REST endpoint, so shouldn't ever hit template history.
also, moved tests out to a new file to break up the 2k-line test file a bit