Admin, API and utils were all defining a value for SMS_CHAR_COUNT_LIMIT.
This value has been updated in notifications-utils to allow text
messages to be 4 fragments long, and notifications-api now gets the value of
SMS_CHAR_COUNT_LIMIT from notifications-utils instead of defining it in
config.
Also updated some tests to check for the higher limit.
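For reference, a sketch of what the change looks like on the API side, assuming the constant is importable from the top level of the notifications-utils package and the usual 153 characters per concatenated-SMS fragment:

```python
# a sketch: import the shared limit instead of redefining it in config
from notifications_utils import SMS_CHAR_COUNT_LIMIT

# 4 fragments at 153 characters each (the concatenated-SMS fragment size)
assert SMS_CHAR_COUNT_LIMIT == 4 * 153  # 612
```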
The admin app needs to get the service data retention for the specified
notification type, so to avoid iterating through the list of all
existing service data retention settings, we restore the endpoint
that gets the individual data retention period.
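A minimal sketch of the restored endpoint; the blueprint, URL and DAO helper names here are illustrative, not necessarily the real ones:

```python
from flask import jsonify

# service_blueprint and the DAO helper are assumed to exist already
@service_blueprint.route(
    '/<uuid:service_id>/data-retention/notification-type/<notification_type>',
    methods=['GET'],
)
def get_data_retention_for_notification_type(service_id, notification_type):
    # fetch just the one setting, so the admin app doesn't have to pull
    # the full list and iterate over it
    retention = fetch_service_data_retention_by_notification_type(
        service_id, notification_type
    )
    return jsonify(retention.serialize() if retention else {})
```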
Allows getting notification counts for a given number of days to
support services with custom data retention periods (admin dashboard
page should still display counts for the last 7 days, while the
notifications page displays all stored notifications).
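A sketch of what the DAO side might look like, with `limit_days` defaulting to the dashboard's 7 days; the model and session names follow the usual Flask-SQLAlchemy conventions and are assumptions:

```python
from datetime import datetime, timedelta

from sqlalchemy import func

from app import db  # assumed Flask-SQLAlchemy session
from app.models import Notification  # assumed model


def dao_fetch_notification_counts(service_id, limit_days=7):
    # services with custom retention can ask for more (or fewer) days
    since = datetime.utcnow() - timedelta(days=limit_days)
    return db.session.query(
        Notification.notification_type,
        func.count(Notification.id).label('count'),
    ).filter(
        Notification.service_id == service_id,
        Notification.created_at > since,
    ).group_by(
        Notification.notification_type,
    ).all()
```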
If there are no files to delete, we won't get an exception.
Wrap the file deletion in a try/except to avoid stopping the entire task.
Fix the missing slash in the file name.
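The guarded delete might look something like this; the helper name is a stand-in:

```python
from flask import current_app


def delete_dvla_response_files(bucket_name, files_to_delete):
    for filename in files_to_delete:
        try:
            remove_s3_object(bucket_name, filename)  # assumed helper
        except Exception:
            # log and carry on, so one bad file doesn't stop the task
            current_app.logger.exception(
                'Could not delete file {}'.format(filename)
            )
```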
The admin app still needs to use the name column.
Add the `text` field to the post data schemas.
If `text` is not in the post data, populate it with the data in the `name` field.
This should make the migration to `text` easier, and will work until we are able to update the admin app.
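One way to do the fallback is a pre-load hook on the schema; the schema name and marshmallow usage here are a sketch of the idea, not the actual code:

```python
from marshmallow import Schema, fields, pre_load


class PostDataSchema(Schema):  # stand-in for the real schemas
    name = fields.String()
    text = fields.String()

    @pre_load
    def populate_text_from_name(self, data, **kwargs):
        # the admin app still sends only `name`, so mirror it into
        # `text` until the admin app can be updated
        if not data.get('text') and data.get('name'):
            data['text'] = data['name']
        return data
```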
If the billing data was incorrect and needs to be rebuilt, we should
remove old rows. Previously we were only upserting new rows, but old,
no-longer-relevant rows were staying in the database. This commit
makes the Flask command remove *all* rows for that service and day
before inserting the rows derived from the original notification data.
This commit doesn't change the existing nightly task, nor does it
change the upsert that happens upon viewing the usage page. In normal
usage, there should never be a case where the number of billable units
for a rate decreases. It should only ever increase, and thus never
needs to be deleted.
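A sketch of the delete-then-insert rebuild; the table, session and helper names are illustrative:

```python
from app import db  # assumed Flask-SQLAlchemy session
from app.models import FactBilling  # assumed billing model


def rebuild_billing_for_day(service_id, process_day):
    # remove *all* existing rows for that service and day, so stale rows
    # from an earlier, incorrect run can't survive the rebuild
    FactBilling.query.filter(
        FactBilling.service_id == service_id,
        FactBilling.bst_date == process_day,
    ).delete()

    # then re-insert rows derived from the original notification data
    for record in fetch_billing_data_for_day(process_day, service_id):
        db.session.add(create_billing_row(record, process_day))  # assumed helpers

    db.session.commit()
```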
If the request is for the big numbers on the activity page, then we need to use the right number of days.
Added an endpoint to get the data retention for the service and notification type, which is needed on the activity page to say how long the report is available for.
Since there'll be a bunch of threads running functional test tasks at
the same time, there's no point always trying to start from the same
second and then stepping back to the same one-second-back file each
time; it also increases the risk of race conditions.
This change takes the same thirty-second range, but shuffles it. The
tests, since they're no longer deterministic, now use a new Matcher
object (w/ credit to alexey) to match any filename from within that
thirty-second range.
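The Matcher amounts to an object whose `__eq__` accepts any filename in the window; a sketch, where the filename format is an assumption:

```python
from datetime import datetime


class AnyFileNameInRange:
    """Matches any response filename whose timestamp falls in the range."""

    def __init__(self, earliest, latest):
        self.earliest = earliest
        self.latest = latest

    def __eq__(self, filename):
        # assumes filenames like 'NOTIFY-20180101120000-RSP.TXT'
        timestamp = datetime.strptime(filename.split('-')[1], '%Y%m%d%H%M%S')
        return self.earliest <= timestamp <= self.latest
```

Because mock compares call arguments with `==`, an assertion like `mock_upload.assert_called_once_with(AnyFileNameInRange(start, end))` passes for whichever second the shuffle picked.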
When a test letter is created on dev or preview, we upload a file to
the DVLA FTP response bucket, to test that our integration with S3
works. S3 triggers an SNS notification, which we pick up, and then we
download the file and mark the letters it mentions as delivered.
However, if two tests run at the same time, they'll create the same
file on S3. One will just overwrite the other, and the first letter will
never move into delivered - this was causing functional tests to fail
intermittently.
This commit makes the test letter task check if the file exists - if it
does, it moves back one second and tries again. It tries this thirty
times before giving up.
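A sketch of the retry loop, with boto3's `head_object` standing in for the existence check; the upload helper and the filename format are assumptions:

```python
from datetime import timedelta

import boto3
from botocore.exceptions import ClientError


def _file_exists_on_s3(bucket, key):
    try:
        boto3.client('s3').head_object(Bucket=bucket, Key=key)
        return True
    except ClientError:
        return False


def upload_test_response_file(bucket, when):
    for _ in range(30):
        filename = 'NOTIFY-{}-RSP.TXT'.format(when.strftime('%Y%m%d%H%M%S'))
        if not _file_exists_on_s3(bucket, filename):
            upload_response_file(bucket, filename)  # assumed upload helper
            return filename
        # another test already claimed this second - step back and retry
        when -= timedelta(seconds=1)
    raise RuntimeError('no free filename found in thirty attempts')
```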
The `serialize` method of the Notification model now includes the
`created_by_name`. If a notification was sent as a one-off message this
value will now be the name of the person who sent the notification. If
a notification was sent through the API or by a CSV upload, we don't
record the id of the sender, so `created_by_name` will be `None`.
This change affects the data that gets returned from these endpoints:
* /v2/notifications/<notification_id>
* /v2/notifications
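Inside the Notification model, the new field is just a nullable lookup; roughly, assuming `created_by` is a nullable relationship to the sending user:

```python
# a sketch of the addition inside Notification.serialize()
def serialize(self):
    serialized = {
        'id': str(self.id),
        # ...existing fields elided...
        'created_by_name': self.created_by.name if self.created_by else None,
    }
    return serialized
```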
Added the option to filter one-off messages in the DAO function
`get_notifications_for_service`. Previously, one-off notifications
were not returned - this has changed so that the default is for
one-off notifications to be returned. Also simplified the `include_jobs`
filter for this function.
The DAO function gets used in 3 places: the V1 and V2 API endpoints,
which will now start to return one-off messages, and the admin app,
which needs to pass `include_one_off=False` to
`get_all_notifications_for_service` wherever we don't want one-off
notifications to show, such as the API message log page.
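The filter logic might look roughly like this; the column names are conventional ones, assumed here (one-off messages being the only ones with a recorded sender, per the `created_by_name` change above):

```python
from app.models import Notification  # assumed model


def get_notifications_for_service(
    service_id, include_jobs=True, include_one_off=True
):
    filters = [Notification.service_id == service_id]
    if not include_jobs:
        # notifications created from a CSV upload carry a job_id
        filters.append(Notification.job_id == None)  # noqa: E711
    if not include_one_off:
        # one-off messages are the only ones with a recorded sender
        filters.append(Notification.created_by_id == None)  # noqa: E711
    return Notification.query.filter(*filters).order_by(
        Notification.created_at.desc()
    )
```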