since there'll be a bunch of threads running functional test tasks at
the same time, there's no point in every thread starting from the same
second and stepping back to the same one-second-earlier file each time.
It also increases the risk of race conditions.
This change takes the same thirty-second range, but shuffles it. Since
the filenames are no longer deterministic, the tests now use a new
Matcher object (with credit to alexey) to match any filename from
within that thirty-second range.
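A minimal sketch of the idea - the filename format and class names here are illustrative, not the actual test code:

```python
import random
from datetime import datetime, timedelta


def shuffled_candidate_filenames(now=None, window_seconds=30):
    # same thirty-second range as before, but visited in a random order,
    # so concurrent test runs don't all contend for the same file
    now = now or datetime.utcnow()
    offsets = list(range(window_seconds))
    random.shuffle(offsets)
    return [
        'NOTIFY-{:%Y%m%d%H%M%S}-RSP.TXT'.format(now - timedelta(seconds=offset))  # hypothetical format
        for offset in offsets
    ]


class AnyFilenameInWindow:
    # stand-in for the Matcher object: compares equal to any
    # filename from within the thirty-second range
    def __init__(self, filenames):
        self.filenames = set(filenames)

    def __eq__(self, other):
        return other in self.filenames
```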
when a test letter is created on dev or preview, we upload a file to
the DVLA FTP response bucket to test that our integration with S3
works. S3 triggers an SNS notification, which we pick up; we then
download the file and mark the letters it mentions as delivered.
However, if two tests run at the same time, they'll create the same
file on S3: the second will just overwrite the first, and the first
letter will never move into delivered. This was causing functional
tests to fail intermittently.
This commit makes the test letter task check whether the file already
exists - if it does, it moves back one second and tries again, up to
thirty times before giving up.
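A sketch of that loop, assuming boto3 and an illustrative filename format:

```python
from datetime import datetime, timedelta

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')


def find_free_filename(bucket, attempts=30):
    # step back one second at a time until we find a filename that
    # doesn't already exist on S3, giving up after thirty tries
    now = datetime.utcnow()
    for offset in range(attempts):
        key = 'NOTIFY-{:%Y%m%d%H%M%S}-RSP.TXT'.format(now - timedelta(seconds=offset))  # hypothetical format
        try:
            s3.head_object(Bucket=bucket, Key=key)
        except ClientError as e:
            if e.response['Error']['Code'] == '404':
                return key  # no object with this name - safe to use
            raise
    raise RuntimeError('no free filename found after {} attempts'.format(attempts))
```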
The `serialize` method of the Notification model now includes the
`created_by_name`. If a notification was sent as a one-off message,
this value will be the name of the person who sent it. If a
notification was sent through the API or by a CSV upload, we don't
record the id of the sender, so `created_by_name` will be `None`.
This change affects the data that gets returned from these endpoints:
* /v2/notifications/<notification_id>
* /v2/notifications
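As a sketch, assuming the model has an optional `created_by` relationship that's only populated for one-off messages:

```python
class Notification:  # sketch only - the real model is a SQLAlchemy db.Model
    def serialize(self):
        serialized = {
            # ... existing fields (id, type, status, ...) ...
        }
        # None when the notification came in via the API or a CSV upload,
        # since we don't record the sender in those cases
        serialized['created_by_name'] = self.created_by.name if self.created_by else None
        return serialized
```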
Added the option to filter one-off messages in the DAO function
`get_notifications_for_service`. Previously, one-off notifications
were never returned; this has changed so that the default is now to
return them. Also simplified the `include_jobs` filter for this
function.
The DAO function gets used in 3 places: the V1 and V2 API endpoints,
which will now start to return one-off messages, and the admin app,
which needs to pass `include_one_off=False` to
`get_all_notifications_for_service` wherever we don't want one-off
notifications to show, such as on the API message log page.
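Roughly how those filters fit together - the signature and column names are assumptions, not the real code:

```python
from app.models import Notification  # assumption: the model import path


def get_notifications_for_service(service_id, include_jobs=True, include_one_off=True):
    query = Notification.query.filter(Notification.service_id == service_id)
    if not include_jobs:
        # notifications created by a CSV upload have a job attached
        query = query.filter(Notification.job_id == None)  # noqa: E711
    if not include_one_off:
        # one-off messages are the ones with a recorded sender, so
        # excluding them keeps only rows with no created_by
        query = query.filter(Notification.created_by_id == None)  # noqa: E711
    return query
```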
and return data for one more day.
we're not really limiting to 7 days - we're returning 7 entire days,
plus whatever time has elapsed since midnight today. I felt it would be
best to rename the variable to `whole_days` to make clear that it's not
"limit this data set to seven days", it's "give me at least seven days".
The endpoint is backwards compatible, so we can rename the variable on
the front-end later.
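As a sketch, the calculation is along these lines (the helper name is illustrative):

```python
from datetime import datetime, timedelta


def midnight_n_days_ago(whole_days=7):
    # "give me at least seven days": start at midnight `whole_days` ago,
    # so the result covers seven entire days plus whatever time has
    # elapsed since midnight today
    today = datetime.utcnow().date()
    return datetime.combine(today - timedelta(days=whole_days), datetime.min.time())
```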
We no longer need the `/platform-stats` route in the service blueprint,
because admin is using the new `/platform-stats` route in the platform stats
blueprint instead.
Added the `letter_rate` table to the list of tables which do not get
deleted after each test run, and changed the tests to use the real
letter rates.
Also removed the letter rate DAO, since it was only being used in
tests and so is no longer needed.
really, it'll be somewhere between 7 and 8 days, depending on what time
of day you request it. If today is Monday, the old window only went
back to last Tuesday - but we should return data for last Monday as
well, so that users see a full week's worth of data.
Also update/clarify the tests to make sure this is honored for all the
different widgets on the dashboard.
Both service API tasks work fine if the object is unexpectedly deleted
halfway through - they both check that the API details are still in
the DB before trying to send the request.
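The guard looks roughly like this - the model and function names are illustrative:

```python
from flask import current_app

from app.models import ServiceCallbackApi  # assumption: the api-details model


def send_callback(service_callback_api_id, payload):
    # re-fetch the api details inside the task, in case the row was
    # deleted while the task was sitting on the queue
    callback_api = ServiceCallbackApi.query.get(service_callback_api_id)
    if callback_api is None:
        current_app.logger.info(
            'callback api %s no longer exists - skipping', service_callback_api_id
        )
        return
    # ... build and send the HTTP request using callback_api ...
```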
It now uses the `ft_notification_status` table - nice and quick. If
you ask for the current tax year, it'll top up the data with numbers
from the `notification` table.
The data is in exactly the same shape as the old endpoint returned -
no changes to the admin app are necessary.
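Roughly, with hypothetical helpers standing in for the real queries:

```python
def fetch_notification_status_for_year(service_id, year):
    # the fact table is fast because it's pre-aggregated
    stats = query_ft_notification_status(service_id, year)  # hypothetical helper
    if year == current_tax_year():  # hypothetical helper
        # the fact table won't have the most recent rows yet, so top up
        # from the live notification table
        stats += query_live_notification_counts(service_id)  # hypothetical helper
    return stats
```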
Adding the service id to the query improved performance dramatically: from 6200ms to 0.04ms.
This should stop the 500s when a template is deleted.
Test notifications are only stored in the notification table, not the
notification_history table. We were deciding which table to query for
the platform admin stats based on the start date passed to the DAO
function. This meant that if the start date was more than 7 days ago
and the end date was within the last 7 days, test notifications were
not returned. We now use the end date to choose which table to query,
which means that test notifications always get returned when they
should.
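The fix, sketched - the helper name is illustrative, and the seven-day cutoff mirrors the retention described above:

```python
from datetime import datetime, timedelta

from app.models import Notification, NotificationHistory  # assumption


def table_for_stats_query(end_date):
    # test notifications only live in the notification table, which holds
    # the last 7 days - so pick the table by the *end* of the range,
    # not the start
    seven_days_ago = datetime.utcnow() - timedelta(days=7)
    return Notification if end_date >= seven_days_ago else NotificationHistory
```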
Created a platform-stats blueprint and moved the new platform stats
endpoint to the new blueprint (it was previously in the service
blueprint). Since the original platform stats route and the new platform
stats route are now in different blueprints, their view functions can
have the same name without any issues.
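For illustration, this works because Flask namespaces endpoints by blueprint name:

```python
from flask import Blueprint, jsonify

platform_stats_blueprint = Blueprint('platform_stats', __name__, url_prefix='/platform-stats')


@platform_stats_blueprint.route('', methods=['GET'])
def get_platform_stats():
    # registered as 'platform_stats.get_platform_stats', so it doesn't
    # clash with a view function of the same name in the service blueprint
    stats = {}  # the real view calls the stats DAO here
    return jsonify(stats)
```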
Moved the `fetch_new_aggregate_stats_by_date_range_for_all_services`
DAO function from the services DAO to the Notifications DAO since this
function queries the `notification` and `notification_history` tables.
Also added a test to check that the data returned from the function
takes BST into account.
Added a new endpoint and DAO function to provide the data for the new
platform admin statistics page. The DAO method gets different data from
the Notification / NotificationHistory tables and also groups it
differently. The old endpoint has not been deleted yet, to allow the
numbers on the old and new pages to be compared.
* Added unit tests for the get_complaint_count endpoint
* Updated the schema to use 'date' format instead of 'datetime'
* Updated the complaint endpoint to convert start_date and end_date to
  be dates instead of strings (see the sketch below)
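The conversion is essentially this (the helper name is illustrative):

```python
from datetime import datetime


def parse_date(value):
    # the schema's 'date' format is YYYY-MM-DD; convert the query-string
    # value to a real date so the DAO compares dates, not strings
    return datetime.strptime(value, '%Y-%m-%d').date()
```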