previously we didn't do this because the tests all used the same DB
(test_notifications_api); however, @minglis shared a snippet that simply
creates one test db per thread.
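A minimal sketch of that approach, assuming pytest-xdist's `worker_id` fixture (the fixture and database names here are illustrative, not the exact snippet that was shared):
```
# illustrative only: give each xdist worker/thread its own test database name,
# so parallel runs don't all share test_notifications_api
import pytest


@pytest.fixture(scope='session')
def database_uri(worker_id):
    # worker_id comes from pytest-xdist: 'gw0', 'gw1', ... or 'master' when not parallelised
    suffix = '' if worker_id == 'master' else '_{}'.format(worker_id)
    return 'postgresql://localhost/test_notifications_api{}'.format(suffix)
```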
in tests, we were replacing os.environ with a basic dict so that
we didn't overwrite the contents of the real environment during tests.
However, os.environ doesn't accept non-str values, so this commit
changes the fixture so that it asserts all values set are strings.
We needed to change how we store the IP whitelist in the environment
because of this.
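The fixture now looks roughly like this (a sketch; the class and fixture names are assumptions):
```
# sketch: replace os.environ with a dict subclass that rejects non-str values,
# so tests fail loudly instead of diverging from real os.environ behaviour
import os

import pytest


class EnvironDict(dict):
    def __setitem__(self, key, value):
        assert isinstance(value, str), 'environment values must be strings'
        super().__setitem__(key, value)


@pytest.fixture
def os_environ(monkeypatch):
    env = EnvironDict(os.environ)  # a copy, so the real environment is never touched
    monkeypatch.setattr(os, 'environ', env)
    return env
```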
We now have a new column in the database, but it isn't being
populated. The first step is to make sure we update this column,
while still keeping the old enum based column up to date as well.
A couple of changes have had to happen to support this - one irritating
thing is that if we're ever querying columns individually, including
`Notification.status`, then we'll need to give that column a label,
since under the hood it translates to `Notification._status_enum`.
Accessing status through the ORM (e.g. `my_noti.status = 'sending'` or
similar) will work fine.
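For example, a column-level query needs something like this (a sketch assuming the usual `db` and `Notification` imports; the wrapping function is hypothetical):
```
from app import db
from app.models import Notification


def get_statuses_for_service(service_id):
    # label the column explicitly: selecting Notification.status directly would
    # otherwise come back keyed as '_status_enum'
    return db.session.query(
        Notification.id,
        Notification.status.label('status'),
    ).filter(
        Notification.service_id == service_id,
    ).all()
```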
Once we have the new columns in the notifications table, the query will need to include the rate multiplier and whether the number is international.
The monthly billing query will be built next.
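A rough sketch of the shape that query could take once the columns exist (only rate_multiplier and international come from the description above; everything else, including the function name and the grouping, is an assumption):
```
# rough sketch of a per-month billing-style query using the new columns
from sqlalchemy import func

from app import db
from app.models import Notification


def monthly_billing_rows(service_id):
    month = func.date_trunc('month', Notification.created_at).label('month')
    return db.session.query(
        month,
        Notification.rate_multiplier,
        Notification.international,
        func.count(Notification.id).label('notification_count'),
    ).filter(
        Notification.service_id == service_id,
    ).group_by(
        month,
        Notification.rate_multiplier,
        Notification.international,
    ).all()
```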
If the template is marked as priority, the notification will be sent using the `notify` queue.
The `notify` queue is a low-volume queue: messages here will not be queued behind a large job and should be delivered within a more consistent time frame.
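The routing decision is roughly this (a sketch; the default queue name and the task invocation are assumptions):
```
def queue_for_template(template):
    # priority templates skip the bulk queue and go onto the low-volume 'notify' queue
    return 'notify' if template.process_type == 'priority' else 'bulk-sms'


# e.g. when sending (task name assumed):
# deliver_sms.apply_async((str(notification.id),), queue=queue_for_template(template))
```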
- Added templates.process_type and templates_history.process_type columns.
- Added a template_process_type table to handle the enum for templates.process_type; initial values are `normal` and `priority`.
https://www.pivotaltracker.com/story/show/135429147
the provider details tests were previously very stateful - they
would update a value, and then because provider_details is a "static"
table that is not wiped by the notify_db_session fixture, the tests
were forced to make a second update that reverted that change. if a
test failed for whatever reason, the provider_details table ended up
permanently modified, playing havoc with tests further down the line.
this commit adds the fixture `restore_provider_details` to conftest.
this fixture stores a copy of the contents of the ProviderDetails and
ProviderDetailsHistory tables outside of the session, runs the test,
and then clears the tables and puts the old values back.
this means the tests no longer have to perform an update and then
immediately reverse it. they've also been cleaned up generally,
including fixture optimisations and such
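the fixture looks roughly like this (a simplified sketch; import paths and the exact copying mechanism are assumptions):
```
# sketch: snapshot provider details before the test, restore them afterwards
import pytest
from sqlalchemy.orm import make_transient

from app.models import ProviderDetails, ProviderDetailsHistory


@pytest.fixture
def restore_provider_details(notify_db, notify_db_session):
    existing = ProviderDetails.query.all()
    existing_history = ProviderDetailsHistory.query.all()
    # make_transient detaches the copies from the session so they survive
    # whatever the test does to the tables
    for row in existing + existing_history:
        make_transient(row)

    yield

    # clear the tables and put the old values back
    ProviderDetails.query.delete()
    ProviderDetailsHistory.query.delete()
    notify_db.session.add_all(existing + existing_history)
    notify_db.session.commit()
```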
replaces the previous notify_api fixture with the age-old:
```
with notify_api.test_request_context():
    with notify_api.test_client() as client:
```
just runs those two context managers in a yield fixture (new pytest 3.0 feature)
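i.e. something along these lines (a sketch; the fixture name and scope are assumptions):
```
import pytest


@pytest.fixture(scope='function')
def client(notify_api):
    with notify_api.test_request_context():
        with notify_api.test_client() as client:
            yield client
```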
requirements should be kept up to date to ensure we get bug fixes and new features as they come - particularly py.test, which we were running an 18-month-old version of, missing out on some useful xfail and fixture enhancements, among other things
- ensured statuses are not deleted on test runs
- statuses are returned in the API call
Merge branch 'add-new-column-to-jobs-for-delayed-sending' into scheduled-delivery-of-jobs
Conflicts:
app/models.py
a service now has branding and organisation_id columns, and two new
tables have been added to reflect these:
* branding is a static types table referring to how a service wants
its emails to be branded:
* 'govuk' for GOV UK branding (default)
* 'org' for organisational branding only
* 'both' for co-branded output with both
* organisation is a table defining an organisation's branding. this
contains three columns, all of which are nullable:
* colour - a hex code for a coloured bar on the logo's left
* logo - relative path for that org's logo image
* name - the name to display on the right of the logo
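a sketch of how the new models could look (table names, column types and lengths are assumptions; the real models may differ):
```
import uuid

from sqlalchemy.dialects.postgresql import UUID

from app import db


class BrandingTypes(db.Model):
    __tablename__ = 'branding_type'
    name = db.Column(db.String(255), primary_key=True)  # 'govuk' (default), 'org' or 'both'


class Organisation(db.Model):
    __tablename__ = 'organisation'
    id = db.Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    colour = db.Column(db.String(7), nullable=True)    # hex code for the bar to the left of the logo
    logo = db.Column(db.String(255), nullable=True)    # relative path to the org's logo image
    name = db.Column(db.String(255), nullable=True)    # displayed to the right of the logo
```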
if passed in, returns the service object with an additional statistics
dictionary, which will be used in the admin app to populate dashboard
components. A new schema has been created for this to avoid clashing
with, or causing confusion around, the existing schema, which is
already used for PUT/POST as well; the new schema can be easily
tailored to reduce ambiguity and lazy-loading
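A sketch of the kind of schema this describes (the field names are assumptions; the real schema sits alongside the existing service schema):
```
# sketch: a read-only serialisation of a service plus a statistics dict,
# kept separate from the schema used for PUT/POST
from marshmallow import Schema, fields


class ServiceWithStatisticsSchema(Schema):
    id = fields.UUID()
    name = fields.String()
    active = fields.Boolean()
    statistics = fields.Dict()  # dashboard counts, e.g. notifications by status
```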
* single-column static data table that currently contains two types: 'normal' and 'team'
* key_type foreign-keyed from api_keys
- must be not null
- existing rows set to 'normal'
* key_type foreign-keyed from notifications
- nullable
- existing rows set to null
* api_key foreign-keyed from notifications
- nullable
- existing rows set to null
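A sketch of the resulting columns, stripped down to what this change touches (table and column names are assumed from the list above):
```
import uuid

from sqlalchemy.dialects.postgresql import UUID

from app import db


class KeyTypes(db.Model):
    __tablename__ = 'key_types'
    name = db.Column(db.String(255), primary_key=True)  # 'normal' or 'team'


class ApiKey(db.Model):
    __tablename__ = 'api_keys'
    id = db.Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    # not null; the migration backfills existing rows with 'normal'
    key_type = db.Column(db.String(255), db.ForeignKey('key_types.name'), nullable=False)


class Notification(db.Model):
    __tablename__ = 'notifications'
    id = db.Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    # both nullable; existing rows are left as null
    key_type = db.Column(db.String(255), db.ForeignKey('key_types.name'), nullable=True)
    api_key_id = db.Column(UUID(as_uuid=True), db.ForeignKey('api_keys.id'), nullable=True)
```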
We were dropping all tables at the end of the test run; however, the alembic_version table is not part of the metadata, so it was being persisted. Alembic then doesn't upgrade the database on the next test run, since the version appears up to date, so in the notify_db_session fixture we were recreating from MetaData (the sqlalchemy models). This involves quite a costly comparison of the postgres system tables against the tables in models.py, which was adding half a second to each test that uses the notify_db_session fixture (virtually all of them).
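The fix this implies is to get rid of alembic_version along with everything else at the end of the session, so the next run migrates from scratch and the per-test fixture no longer needs the MetaData comparison. Something along these lines (a sketch in SQLAlchemy 1.x style; the fixture names and structure are assumptions):
```
import pytest


@pytest.fixture(scope='session')
def notify_db(notify_api):
    from app import db
    from flask_migrate import upgrade

    with notify_api.app_context():
        upgrade()  # run the alembic migrations once per session

    yield db

    with notify_api.app_context():
        db.session.remove()
        db.drop_all()  # drops the tables known to MetaData
        db.engine.execute('DROP TABLE IF EXISTS alembic_version')  # not in MetaData
```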
In order to set a message as temporary-failure, we first check whether it is in the pending status.
Otherwise, a failure delivery receipt sets the notification to permanent-failure.
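Roughly (a sketch; the function name is an assumption, the status values are the existing notification statuses):
```
from app import db


def handle_failure_receipt(notification):
    # only a notification that is still pending becomes a temporary failure;
    # anything else goes straight to permanent failure
    if notification.status == 'pending':
        notification.status = 'temporary-failure'
    else:
        notification.status = 'permanent-failure'
    db.session.add(notification)
    db.session.commit()
```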
Refactored service/rest.py so that all methods return a properly formatted error message that errors.py can turn into the response.
Refactored errors.py to properly format the error message.
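Something along these lines (a sketch; the exception class and handler names are assumptions):
```
# one consistent shape for error responses, handled centrally in errors.py
from flask import jsonify


class InvalidRequest(Exception):
    def __init__(self, message, status_code=400):
        super().__init__(message)
        self.message = message
        self.status_code = status_code


def register_errors(blueprint):
    @blueprint.errorhandler(InvalidRequest)
    def invalid_request(error):
        response = jsonify(result='error', message=error.message)
        response.status_code = error.status_code
        return response
```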
Ideally all the primary keys in the db would be UUID in order to guarantee unique ids across distributed dbs.
This updates the services.id to a UUID. All the tables with a foreign key to the services.id are also updated.
The endpoints no longer state a data type for the <service_id> path param.
All the tests are updated to reflect this update.
The thing to pay attention to is the 0011_uuid_service_id.py migration script.
This commit must go with a commit on the notifications_admin app to keep things working.
There will be a small outage until both deploys have happened.
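For reference, the column type being moved to looks like this (a sketch of the model definitions, not the migration script itself; the Job model is just one example of a table with a foreign key to services.id):
```
import uuid

from sqlalchemy.dialects.postgresql import UUID

from app import db


class Service(db.Model):
    __tablename__ = 'services'
    id = db.Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)


class Job(db.Model):
    __tablename__ = 'jobs'
    id = db.Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    service_id = db.Column(UUID(as_uuid=True), db.ForeignKey('services.id'), nullable=False)
```
Routes correspondingly drop the converter, e.g. `/service/<service_id>` rather than `/service/<int:service_id>`.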