if the billing data was incorrect and needs to be rebuilt, we should
remove old rows. Previously we were only upserting new rows, but old
no-longer-relevant rows were staying in the database. This commit
makes the flask command remove *all* rows for that service and day
before re-inserting rows derived from the original notification data.
This commit doesn't change the existing nightly task, nor does it
change the upsert that happens upon viewing the usage page. In normal
usage, there should never be a case where the number of billable units
for a rate decreases. It should only ever increase, and thus never
need to be deleted.
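Roughly, the flow looks like this - a minimal sketch, where `FactBilling`, its `bst_date` column and the `fetch_billing_rows_for_day` helper are stand-in names, not the real ones:

```python
from datetime import date


def rebuild_billing_for_day(session, FactBilling, fetch_billing_rows_for_day,
                            service_id, day: date):
    # remove *all* rows for this service and day, so stale
    # no-longer-relevant rows can't survive the rebuild
    session.query(FactBilling).filter(
        FactBilling.service_id == service_id,
        FactBilling.bst_date == day,
    ).delete(synchronize_session=False)

    # re-derive the rows from the original notification data and insert them
    session.add_all(fetch_billing_rows_for_day(service_id, day))
    session.commit()
```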
since there'll be a bunch of threads running functional test tasks at
the same time, there's no point in them all starting from the same
second and stepping back to the same one-second-back file each time -
it also increases the risk of race conditions.
This change takes the same thirty-second range, but shuffles it. Since
the tests are no longer deterministic, they now use a new Matcher
object (w/ credit to alexey) to match any filename from within that
thirty-second range.
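A sketch of both halves, assuming an illustrative `NOTIFY.%Y%m%d%H%M%S.ACK.TXT` filename format (the real format isn't shown here):

```python
import random
from datetime import datetime, timedelta


def candidate_times(now: datetime, window: int = 30):
    # the same thirty-second range as before, just visited in a random
    # order so concurrent workers don't all race for the same filename
    offsets = list(range(window))
    random.shuffle(offsets)
    return [now - timedelta(seconds=o) for o in offsets]


class FilenameWithinWindow:
    """Matcher: compares equal to any filename stamped inside the window."""

    def __init__(self, now: datetime, window: int = 30):
        self.valid = {
            (now - timedelta(seconds=o)).strftime("NOTIFY.%Y%m%d%H%M%S.ACK.TXT")
            for o in range(window)
        }

    def __eq__(self, other):
        return other in self.valid
```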
when a test letter is created on dev or preview, we upload a file to
the dvla ftp response bucket, to test that our integration with s3
works. s3 triggers an sns notification, which we pick up, and then we
download the file and mark the letters it mentions as delivered.
However, if two tests run at the same time, they'll create the same
file on s3. One upload will just overwrite the other, so the first
letter never moves into delivered - this was causing functional tests
to fail intermittently.
This commit makes the test letter task check if the file exists - if it
does, it moves back one second and tries again. It tries this thirty
times before giving up.
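One way that retry loop could look with boto3 - a sketch with a made-up bucket name and filename format, relying on `head_object` raising a 404 when the key is absent:

```python
from datetime import datetime, timedelta

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")


def find_unused_filename(bucket: str, now: datetime, attempts: int = 30) -> str:
    for _ in range(attempts):
        key = now.strftime("NOTIFY.%Y%m%d%H%M%S.ACK.TXT")
        try:
            s3.head_object(Bucket=bucket, Key=key)
        except ClientError as e:
            if e.response["Error"]["Code"] == "404":
                return key  # no other test run has claimed this second
            raise
        # the file already exists: move back one second and try again
        now -= timedelta(seconds=1)
    raise RuntimeError("no free filename found in thirty attempts")
```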
The `serialize` method of the Notification model now includes the
`created_by_name`. If a notification was sent as a one-off message,
this value will be the name of the person who sent it. If a
notification was sent through the API or by a CSV upload, we don't
record the id of the sender, so `created_by_name` will be `None`.
This change affects the data that gets returned from these endpoints:
* /v2/notifications/<notification_id>
* /v2/notifications
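Roughly how the new field is populated - a stripped-down sketch, not the real `serialize` method:

```python
def serialize(notification):
    return {
        "id": str(notification.id),
        # one-off sends record the sending user; API and CSV sends don't,
        # so created_by is None and the name serializes as None too
        "created_by_name": (
            notification.created_by.name if notification.created_by else None
        ),
    }
```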
Added an option to filter out one-off messages to the DAO function
`get_notifications_for_service`. Previously, one-off notifications
were not returned - this has changed so that the default is for
one-off notifications to be returned. Also simplified the `include_jobs`
filter for this function.
The DAO function gets used in 3 places: the V1 and V2 API endpoints,
which will now start to return one-off messages, and the admin app,
which needs to pass `include_one_off=False` to
`get_all_notifications_for_service` wherever we don't want one-off
notifications to show, such as on the API message log page.
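A sketch of how the two filters might read, assuming SQLAlchemy syntax and that one-off messages are the ones with a `created_by` user (the column names here are assumptions):

```python
from app.models import Notification  # the Notify model, assumed here


def get_notifications_for_service(service_id, include_jobs=True,
                                  include_one_off=True):
    query = Notification.query.filter(Notification.service_id == service_id)
    if not include_jobs:
        # job notifications are the ones created from a CSV upload
        query = query.filter(Notification.job_id == None)  # noqa: E711
    if not include_one_off:
        # one-off messages record the user who sent them
        query = query.filter(Notification.created_by_id == None)  # noqa: E711
    return query
```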
and return data for one more day.
we're not really limiting to 7 days - we're returning 7 entire days,
plus whatever time has elapsed since midnight today. I felt it would be
best to rename the variable to `whole_days` to imply that it's not
"limit this data set to seven days", it's "give me at least seven days".
the endpoint is backwards compatible, so we can rename the variable on
the front end later.
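In code terms, the distinction is roughly this (illustrative, not the real implementation):

```python
from datetime import datetime, time, timedelta


def window_start(whole_days: int = 7) -> datetime:
    # "give me at least seven days": the window opens at midnight seven
    # whole days ago, and everything since midnight today comes for free
    midnight_today = datetime.combine(datetime.utcnow().date(), time.min)
    return midnight_today - timedelta(days=whole_days)
```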
We no longer need the `/platform-stats` route in the service blueprint,
because admin is using the new `/platform-stats` route in the platform stats
blueprint instead.
Added the letter_rate table to the list of tables which do not get
deleted after each test run, and changed the tests to use the real
letter rates.
Also removed the letter rate DAO, since it was only being used in
tests and is no longer needed.
really, it'll be somewhere between 7 and 8 days depending on what time
of day you request it at. But if today is monday, a seven-day window
that includes today starts at last tuesday - we should return data for
last monday as well so that users see a full week's worth of data
also update/clarify the tests to make sure this is being honored for
all the different widgets on the dashboard
Both service api tasks work fine if the object is unexpectedly deleted
halfway through - they both check to see if the api details are still
in the DB before trying to send the request.
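The pattern both tasks follow, sketched with hypothetical names:

```python
def send_service_callback(session, ServiceCallbackApi, service_id, payload,
                          post):
    # re-fetch the api details; if they were deleted halfway through,
    # there's nothing to send and that's fine
    api = session.query(ServiceCallbackApi).filter_by(
        service_id=service_id,
    ).one_or_none()
    if api is None:
        return
    post(api.url, payload)
```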
it now uses the ft_notification_status table - nice and quick. if you
ask for the current tax year it'll top up the data with numbers from
the notification table.
the data comes back in exactly the same shape as from the old
endpoint - no changes to the admin app are necessary
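The shape of the logic, sketched with hypothetical helpers:

```python
def notification_counts_for_year(session, year, current_tax_year,
                                 fact_counts, live_counts):
    # fast path: pre-aggregated rows from ft_notification_status
    counts = fact_counts(session, year)
    if year == current_tax_year:
        # the fact table is filled nightly, so today's numbers only exist
        # in the notification table - top the totals up from there
        for status, n in live_counts(session).items():
            counts[status] = counts.get(status, 0) + n
    return counts
```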
Adding the service id to the query improved its performance greatly: from 6200ms to 0.04ms.
This should stop the 500s when a template is deleted.
Test notifications are only stored in the notification table, not the
notification_history table. We were deciding which table to query for
the platform admin stats based on the start date passed to the DAO
function. This meant that if the start date was more than 7 days ago
but the end date was within the last 7 days, test notifications were
not being returned. We now use the end date to choose which table to
query, so test notifications are always returned when they should be.
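A sketch of the new selection rule, assuming a 7-day retention window in the notification table (names illustrative):

```python
from datetime import datetime, timedelta


def table_for_platform_stats(end_date, Notification, NotificationHistory):
    # keying off end_date means any window that reaches into the last 7
    # days queries the notification table, which is the only place test
    # notifications are stored
    cutoff = datetime.utcnow() - timedelta(days=7)
    return Notification if end_date >= cutoff else NotificationHistory
```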