Since there'll be a bunch of threads running functional test tasks at
the same time, there's no point always trying to start from the same
second and then stepping back to the same one-second-back file each
time. It also increases the risk of race conditions.
This change takes the same thirty-second range but shuffles it. Since
the filenames are no longer deterministic, the tests now use a new Matcher
object (with credit to alexey) to match any filename from within that
thirty-second range.
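A sketch of what such a matcher might look like (the class name, filename
format and usage are illustrative, not the actual test helper):

```python
from datetime import datetime, timedelta


class AnyFilenameWithin:
    """Equality-based matcher: compares equal to any filename whose timestamp
    falls within the shuffled thirty-second window. Sketch only - the real
    helper and the filename format may differ."""

    def __init__(self, start, window_seconds=30):
        # hypothetical filename format, for illustration
        self.candidates = {
            'NOTIFY-{:%Y%m%d%H%M%S}-RSP.TXT'.format(start - timedelta(seconds=offset))
            for offset in range(window_seconds)
        }

    def __eq__(self, other):
        return other in self.candidates


# e.g. mock_s3_upload.assert_called_once_with(AnyFilenameWithin(datetime.utcnow()), ANY)
```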
When a test letter is created on dev or preview, we upload a file to
the DVLA FTP response bucket, to test that our integration with S3
works. S3 triggers an SNS notification, which we pick up, and then we
download the file and mark the letters it mentions as delivered.
However, if two tests run at the same time, they'll create the same
file on S3. One will just overwrite the other, and the first letter will
never move into delivered - this was causing functional tests to fail
intermittently.
This commit makes the test letter task check if the file exists - if it
does, it moves back one second and tries again. It tries this thirty
times before giving up.
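Roughly, the check-and-step-back loop behaves like this; `file_exists_on_s3`
and `upload_fake_dvla_response` below are hypothetical stand-ins for whatever
helpers the task actually uses:

```python
from datetime import datetime, timedelta


def upload_test_dvla_response(bucket_name, max_attempts=30):
    """Find a filename that doesn't already exist in the bucket, stepping back
    one second per attempt so concurrent test runs don't collide."""
    when = datetime.utcnow()
    for _ in range(max_attempts):
        filename = 'NOTIFY-{:%Y%m%d%H%M%S}-RSP.TXT'.format(when)  # illustrative format
        if not file_exists_on_s3(bucket_name, filename):  # hypothetical helper
            return upload_fake_dvla_response(bucket_name, filename)  # hypothetical helper
        # someone else got there first - step back one second and retry
        when -= timedelta(seconds=1)
    raise RuntimeError('no free response filename after {} attempts'.format(max_attempts))
```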
The `serialize` method of the Notification model now includes the
`created_by_name`. If a notification was sent as a one-off message, this
value will now be the name of the person who sent it. If a notification
was sent through the API or by a CSV upload, we don't record the id of
the sender, so `created_by_name` will be `None`.
This change affects the data that gets returned from these endpoints:
* /v2/notifications/<notification_id>
* /v2/notifications
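For illustration, the relevant part of `serialize` might look something like
this (the surrounding fields and the `created_by` relationship name are
assumptions about the model, not the actual code):

```python
def serialize(self):
    return {
        'id': str(self.id),
        # ... existing fields elided ...
        # one-off messages record who sent them; API and CSV-upload
        # notifications don't, so created_by is None and we return None
        'created_by_name': self.created_by.name if self.created_by else None,
    }
```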
Added an option to the DAO function `get_notifications_for_service` to
filter one-off messages. Previously, one-off notifications were never
returned; the default is now for them to be returned. Also simplified
the `include_jobs` filter for this function.
The DAO function gets used in three places: the V1 and V2 API endpoints,
which will now start to return one-off messages, and the admin app, which
needs to pass `include_one_off=False` to `get_all_notifications_for_service`
on pages where we don't want one-off notifications to show, such as the
API message log page.
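A rough sketch of what the filter might look like, assuming a SQLAlchemy-style
query (the column names, import path and return shape are assumptions):

```python
from sqlalchemy import desc

from app.models import Notification  # import path is illustrative


def get_notifications_for_service(service_id, include_jobs=False, include_one_off=True):
    # one-off notifications now come back by default; callers such as the
    # admin API message log pass include_one_off=False to opt out
    filters = [Notification.service_id == service_id]

    if not include_jobs:
        filters.append(Notification.job_id.is_(None))

    if not include_one_off:
        # one-off messages have no job but do record who created them
        filters.append(Notification.created_by_id.is_(None))

    return Notification.query.filter(*filters).order_by(desc(Notification.created_at)).all()
```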
We've updated the current service callbacks to add callback types
in #1964, but since the table is versioned we also need to add a
type to the history records.
Even though they're not used anywhere at the moment, this might make
it easier to restore from a history callback record in the future.
and return data for one more day.
We're not really limiting to 7 days - we're returning 7 entire days,
plus whatever time has elapsed since midnight today. I felt it would be
best to rename the variable to `whole_days` to imply that it's not
"limit this data set to seven days", it's "give me at least seven days".
The endpoint is backwards compatible, so we can rename the variable on the front end later.
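Concretely, the date arithmetic works out something like this (function and
parameter names are illustrative):

```python
from datetime import datetime, timedelta


def start_of_range(whole_days=7, now=None):
    """Midnight `whole_days` complete days ago - the range therefore covers at
    least `whole_days` full days plus whatever has elapsed since midnight today."""
    now = now or datetime.utcnow()
    midnight_today = now.replace(hour=0, minute=0, second=0, microsecond=0)
    return midnight_today - timedelta(days=whole_days)
```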
We no longer need the `/platform-stats` route in the service blueprint,
because admin is using the new `/platform-stats` route in the platform stats
blueprint instead.
The list of top-level dependencies is moved to requirements-app.txt,
which is used by `make freeze-requirements` to generate the full
list of requirements in requirements.txt.
This is based on alphagov/digitalmarketplace-api#615, so the rationale
from that PR applies here.
We had a problem with unpinned packages on new deployments leading
to failed tests (e.g. alphagov/notifications-admin#2144) which is
why we're implementing this now.
After re-evaluating pipenv, this still seems like the least
disruptive approach:
* pyup.io has experimental support for Pipfile, but doesn't respect
version ranges or update hashes in the lock file.
* The CloudFoundry buildpack recognizes and supports Pipfiles out of the
box, but the support is relatively new. For example, until recently
CF would install dev packages during deployment. It's also based on
generating a requirements file from the Pipfile, which doesn't
properly support pinning VCS dependencies (e.g. it doesn't set the
#egg= version, meaning pip will not upgrade the package if it's
already installed).
* pipenv has a strict dependency resolution algorithm, which doesn't
appear to be well documented and can cause some unexpected failures.
For example, pipenv doesn't seem to be able to install the `awscli-cwlogs`
package at all, believing it to have a version conflict for `botocore`
(which it doesn't list as a direct dependency), while neither `pip` nor
`pip-tools` highlight any issues with it.
* While trying out `pipenv install` on our list of dependencies, it would
regularly fail to install utils with a "Will try again." message.
The installation succeeds after a retry, but this doesn't inspire
confidence.
* The switch to Pipfile and pipenv-managed virtualenvs requires a series
of changes to `make` targets and scripts - replacing `pip install` with
`pipenv`, removing references to requirements files and prefixing
commands with `pipenv run`. While it's likely to simplify the overall
process of managing dependencies, it would require time to properly
implement across our applications and environments (Jenkins, PaaS,
docker containers, and dev machines).
Added the letter_rate table to the list of tables that don't get
deleted after each test run, and changed the tests to use the real letter
rates.
Also removed the letter rate DAO, since it was only being used in
tests and so was no longer needed.
We now support letters up to 5 sheets long, so we need to store the
rates for 4 and 5 sheet letters (both crown and non-crown) in the
`letter_rates` table.
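For illustration, the new rows cover each new sheet count for both crown and
non-crown; the rate values below are placeholders, not the real prices:

```python
from decimal import Decimal

# placeholder figures only - the real rates come from the current letter price list
NEW_LETTER_RATES = [
    # (sheet_count, crown, rate)
    (4, True, Decimal('0.00')),
    (4, False, Decimal('0.00')),
    (5, True, Decimal('0.00')),
    (5, False, Decimal('0.00')),
]
```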
Really, it'll be somewhere between 7 and 8 days depending on what time of day
you request it. But if today is Monday, then seven days ago is last
Tuesday - and we should return data for last Monday as well, so that
users see a full week's worth of data.
Also update/clarify the tests to make sure this is being honored for
all the different widgets on the dashboard.
We have a few old jobs which don’t have a `processing_started` date.
This means that they always sort to the top of the jobs list in admin,
no matter how old they are. We think this is due to an old bug where
jobs would not be updated if a deploy was in progress.
This commit backfills the `processing_started` data for these jobs,
which will be roughly accurate. Complete accuracy is not the goal;
having these jobs not sort to the top of the list is.
This will affect 5 jobs across 3 services on production:
```sql
select service_id, job_status, created_at, updated_at, processing_started, processing_finished,
       notification_count, notifications_sent, notifications_delivered, notifications_failed
from jobs
where processing_started is null
and job_status = 'in progress';
```
```
service_id | job_status | created_at | updated_at | processing_started | processing_finished | notification_count | notifications_sent | notifications_delivered | notifications_failed
--------------------------------------+-------------+----------------------------+----------------------------+--------------------+---------------------+--------------------+--------------------+-------------------------+----------------------
d47e5a1b-a04b-4398-8935-c8a266ce1d44 | in progress | 2017-09-29 13:49:41.512356 | 2017-10-01 02:01:05.281162 | | | 10615 | 0 | 0 | 0
128b91b6-2996-4107-bb65-51b7c24a728d | in progress | 2017-09-29 09:25:39.802623 | 2017-09-29 16:01:02.154291 | | | 10240 | 0 | 0 | 0
128b91b6-2996-4107-bb65-51b7c24a728d | in progress | 2017-09-29 09:31:52.455919 | 2017-09-29 16:01:01.990054 | | | 9930 | 0 | 0 | 0
128b91b6-2996-4107-bb65-51b7c24a728d | in progress | 2017-08-22 08:15:39.125999 | 2017-08-22 16:01:07.758805 | | | 6967 | 0 | 0 | 0
95316ff0-e555-462d-a6e7-95d26fbfd091 | in progress | 2016-05-27 14:44:18.114564 | 2016-06-13 00:18:14.542795 | | | 2742 | 2238 | 525 | 1713
(5 rows)
```
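For reference, a minimal sketch of the kind of backfill this implies, assuming
`created_at` is an acceptable rough value for `processing_started` (the real
migration may choose a different approximation):

```python
# illustrative Alembic-style data migration
from alembic import op


def upgrade():
    op.execute(
        """
        UPDATE jobs
        SET processing_started = created_at
        WHERE processing_started IS NULL
        AND job_status = 'in progress'
        """
    )


def downgrade():
    # the previous values were NULL, so there is nothing meaningful to restore
    pass
```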