The list of top-level dependencies is moved to `requirements-app.txt`,
which is used by `make freeze-requirements` to generate the full
list of requirements in `requirements.txt`.
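The real freeze step lives in our `make` targets and scripts; as a
minimal sketch of the idea only (assuming a throwaway virtualenv, and
written in Python purely for illustration):

```python
# Hypothetical sketch of the freeze step: install the top-level pins from
# requirements-app.txt into a throwaway virtualenv, then write the fully
# resolved, transitively pinned list back to requirements.txt.
import os
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    venv = os.path.join(tmp, "venv")
    subprocess.check_call(["python3", "-m", "venv", venv])
    pip = os.path.join(venv, "bin", "pip")
    subprocess.check_call([pip, "install", "-r", "requirements-app.txt"])
    frozen = subprocess.check_output([pip, "freeze"], universal_newlines=True)
    with open("requirements.txt", "w") as f:
        f.write("# Generated by `make freeze-requirements`. Do not edit.\n")
        f.write(frozen)
```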
The two-file approach is based on alphagov/digitalmarketplace-api#615,
so the rationale from that PR applies here.
We had a problem with unpinned packages on new deployments leading
to failed tests (e.g. alphagov/notifications-admin#2144), which is
why we're implementing this now.
After re-evaluating pipenv, this still seems like the least
disruptive approach:
* pyup.io has experimental support for Pipfile, but doesn't respect
version ranges or update hashes in the lock file
* The Cloud Foundry buildpack recognizes and supports Pipfiles out of
the box, but the support is relatively new. For example, until recently
CF would install dev packages during deployment. It's also based on
generating a requirements file from the Pipfile, which doesn't
properly support pinning VCS dependencies (e.g. it doesn't set the
#egg= version, meaning pip will not upgrade the package if it's
already installed).
* pipenv has a strict dependency resolution algorithm, which doesn't
appear to be well documented and can cause unexpected failures.
For example, pipenv doesn't seem to be able to install the `awscli-cwlogs`
package at all, believing it to have a version conflict for `botocore`
(which it doesn't list as a direct dependency), while neither `pip` nor
`pip-tools` highlight any issues with it.
* While trying out `pipenv install` on our list of dependencies, it
would regularly fail to install utils with a "Will try again." message.
The installation succeeds after a retry, but this doesn't inspire
confidence.
* The switch to Pipfile and pipenv-managed virtualenvs requires a series
of changes to `make` targets and scripts: replacing `pip install` with
`pipenv`, removing references to requirements files, and prefixing
commands with `pipenv run`. While it's likely to simplify the overall
process of managing dependencies, it would take time to implement
properly across our applications and environments (Jenkins, PaaS,
docker containers, and dev machines).
If a PR is going to fail because tests aren't passing, then you:
- should know about it as quickly as possible
- shouldn't waste precious Jenkins CPU running subsequent tests
This commit adds the `-x` flag to pytest, which stops the test run as
soon as one failing test is discovered.
Previously they were using the sample_service fixture under the hood,
but with full permissions added. This works fine, **unless** there's
already a service with the name "sample service" in the database. This
can happen for two reasons:
* A previous test didn't tear down correctly
* This test already invoked the sample_service fixture somehow
If this happens, we just return the existing service without modifying
its values, even values that we might change in tests, such as
research mode or letters permissions (sketched below).
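A self-contained illustration of the pitfall; the model and helper here
are hypothetical stand-ins, not the repo's real fixtures:

```python
from sqlalchemy import Boolean, Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Service(Base):
    __tablename__ = "services"
    id = Column(Integer, primary_key=True)
    name = Column(String, unique=True)
    research_mode = Column(Boolean, default=False)

def get_or_create_service(session, name="sample service", **values):
    service = session.query(Service).filter_by(name=name).first()
    if service is not None:
        # Early return: values such as research_mode are silently ignored,
        # so the test gets whatever an earlier caller configured.
        return service
    service = Service(name=name, **values)
    session.add(service)
    session.commit()
    return service

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    get_or_create_service(session, research_mode=True)
    svc = get_or_create_service(session, research_mode=False)
    assert svc.research_mode  # surprise: the first call's value stuck
```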
In the future, we'll have to be vigilant! and aware! and careful! not
to use sample_service if we're doing tests involving letters, since
they create a service with a different name now.
Previously we didn't do this because the tests all used the same DB
(test_notifications_api); however, @minglis shared a snippet that simply
creates one test DB per thread (sketched below).
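A minimal sketch of the idea, assuming pytest-xdist's
PYTEST_XDIST_WORKER environment variable (names are illustrative, not
the exact snippet):

```python
# Derive a per-worker database name from the pytest-xdist worker id
# (gw0, gw1, ...) so parallel test runs don't all share
# test_notifications_api.
import os

def test_db_uri(base="postgresql://localhost/test_notifications_api"):
    worker = os.environ.get("PYTEST_XDIST_WORKER")
    return f"{base}_{worker}" if worker else base
```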
PEP8 was renamed to pycodestyle; this issue explains why:
https://github.com/PyCQA/pycodestyle/issues/466
This commit changes our tests to use pycodestyle instead of pep8.
It also means:
- making a couple of whitespace changes to appease the linter
- disabling warnings for bare `except`s (i.e. `except:` instead of
`except ValueError:`); this seems like a sensible thing to catch, but
I'm not going to make meaningful code changes in this commit (see the
sketch after this list)
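For illustration, the pattern the disabled check flags (E722 in
pycodestyle; function names here are made up):

```python
def parse_or_none(value):
    try:
        return int(value)
    except:  # bare except: flagged by the linter; catches everything,
        return None  # including SystemExit and KeyboardInterrupt

def parse_or_none_safer(value):
    try:
        return int(value)
    except ValueError:  # catches only the expected failure
        return None
```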
Call `generate-version-file` before running tests, since they'll fall over if the version file isn't present.
Use `/Users/leohemsted/.virtualenvs/api` rather than looking for `./venv/`; if there's some other virtualenv already active, don't try to find and activate a local one.