remove pip-accel - it hasn't been updated in two years, and pins us to a
version of pip that is several breaking changes old.
make sure commands work if you're already in a venv - mostly by
checking for the presence of $VIRTUAL_ENV, and ensuring we use the correct
pip to install packages. Also clean up the commands a bit.
you need to `pip install celery[sqs]` to get the additional
dependencies that celery needs to use SQS queues - there are two:
boto3 and pycurl.
pycurl is a thin set of Python bindings around libcurl, so it needs to be
built from source so that it can link against your curl/ssl libraries. On PaaS
and in Docker this works fine (we needed to add `libcurl4-openssl-dev` to the
Docker container), but on macOS the build can't find openssl. We need to pass
a couple of flags in:
* set the environment variable PYCURL_SSL_LIBRARY=openssl
* pass in the global options `build_ext` and `-I{openssl_headers_path}`.
As shown here:
https://github.com/pycurl/pycurl/issues/530#issuecomment-395403253
The env var is no biggie, but using any install-option or global-option
flags disables wheels for the whole pip install run. (See
https://github.com/pypa/pip/issues/2677 and
https://github.com/pypa/pip/issues/4118 for more context on the
install-option flags.) A whole bunch of our dependencies don't
install nicely from source (but do from wheel), so this commit installs
pycurl separately as an initial step, with the requisite flags, and
then installs the rest of the requirements as before.
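The resulting install is two pip runs, sketched here in Python around pip for
concreteness (the openssl header path is an assumption for a Homebrew openssl):

```
import os
import subprocess
import sys

# Assumption: Homebrew's openssl headers; adjust the path for your machine.
openssl_headers_path = "/usr/local/opt/openssl/include"

# Step 1: build pycurl from source, telling setup.py where openssl lives.
# Using --global-option disables wheels, but only for this pip run.
subprocess.check_call(
    [
        sys.executable, "-m", "pip", "install", "pycurl",
        "--global-option=build_ext",
        "--global-option=-I" + openssl_headers_path,
    ],
    env=dict(os.environ, PYCURL_SSL_LIBRARY="openssl"),
)

# Step 2: install everything else in a separate run, so wheels work again.
subprocess.check_call(
    [sys.executable, "-m", "pip", "install", "-r", "requirements.txt"]
)
```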
I've updated the makefile and bootstrap.sh files to reflect this, but
if you run `pip install -r requirements.txt` from scratch you will run
into issues.
previously, we were confusing things by appending to CELERY_QUEUES in
both the dev and test configs - those appends run at import time, so the
list contained all the queues twice, regardless of which config you're
actually using.
Fortunately, the -Q option that we supply to the workers overrides
this config setting, so other environments weren't affected. Given that,
we can tidy up this code by declaring the queues once in the base config.
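A minimal sketch of the problem, with made-up queue names (the real classes
live in the config module):

```
class Config:
    CELERY_QUEUES = ['database-tasks', 'send-sms-tasks']   # made-up names


class DevelopmentConfig(Config):
    pass


class TestConfig(Config):
    pass


# These module-level appends run whenever the config module is imported,
# whichever config class is actually in use. CELERY_QUEUES is one list shared
# through the base class, so it ends up holding 'research-mode' twice.
DevelopmentConfig.CELERY_QUEUES.append('research-mode')
TestConfig.CELERY_QUEUES.append('research-mode')
```

Declaring the queue once in Config.CELERY_QUEUES and deleting the appends gives
every environment the same list, and the -Q flag on each worker still decides
which queues it actually consumes.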
We were already returning the month, notification_type, billing_units
and rate from the /monthly-usage billing endpoint. This adds the
postage too, so that we can display postage details on the usage page
in the admin app.
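For illustration, a row from the endpoint now looks something like this (the
values are made up; the keys are the existing four plus the new postage field):

```
{
    "month": "April",
    "notification_type": "letter",
    "billing_units": 120,
    "rate": 0.30,
    "postage": "second"
}
```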
* Changed the update_fact_billing DAO function to update the table with the
real postage instead of hard-coding 'second'.
* Added a test for the create nightly billing task to check that rows
with different postage are inserted correctly.
Removed the occasionally failing test to check how ft_billing upserts
postage data. This test will be re-added once the postage column has been
added to the primary key.
* Updated the 'fetch_billing_data_for_day' DAO function to take postage into
account
* Updated the 'update_fact_billing' DAO function to insert postage for
new rows. When updating rows which are identical apart from the postage, the
original row will be kept. (This behaviour will change once postage is
added to the primary key - at this point, upserting will add a new row.)
* Also changed some fixtures and test setup functions to take postage
into account
Updated the 'migrate-data-to-ft-billing' command to populate the new
postage column of ft_billing. This will be populated with the
postage of the notification for letters, or 'none' for email or sms. We
need to ensure there are no null values in postage so that the postage
column can become part of the primary key later.
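The rule for the new column is simple; sketched here as a helper rather than
the actual SQL in the command:

```
def ft_billing_postage(notification_type, notification_postage):
    # Letters keep the postage recorded on the notification; email and sms rows
    # get the placeholder 'none', so the column never contains NULL and can
    # later become part of the primary key.
    if notification_type == 'letter':
        return notification_postage
    return 'none'
```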
Also updated the query to get the right letter rate now that we are
updating rates in the letter_rates table.
The FactBilling model and the ft_billing database table have diverged
slightly - this makes some minor changes to the model columns so that
the model matches the table (which appears to be the correct version).
The ft_billing table is currently like this:
```
       Column       |            Type             | Modifiers | Storage  | Stats target | Description
--------------------+-----------------------------+-----------+----------+--------------+-------------
 bst_date           | date                        | not null  | plain    |              |
 template_id        | uuid                        | not null  | plain    |              |
 service_id         | uuid                        | not null  | plain    |              |
 notification_type  | text                        | not null  | extended |              |
 provider           | text                        | not null  | extended |              |
 rate_multiplier    | integer                     | not null  | plain    |              |
 international      | boolean                     | not null  | plain    |              |
 rate               | numeric                     | not null  | main     |              |
 billable_units     | integer                     |           | plain    |              |
 notifications_sent | integer                     |           | plain    |              |
 updated_at         | timestamp without time zone |           | plain    |              |
 created_at         | timestamp without time zone | not null  | plain    |              |
Indexes:
    "ft_billing_pkey" PRIMARY KEY, btree (bst_date, template_id, service_id, rate_multiplier, provider, notification_type, international, rate)
    "ix_ft_billing_bst_date" btree (bst_date)
    "ix_ft_billing_service_id" btree (service_id)
    "ix_ft_billing_template_id" btree (template_id)
```
awscli requires a newer version of botocore; moto requires an older
version of boto3, which in turn requires an older version of botocore.
We had to pin boto3 to an older version because of the moto issues.
This commit therefore pins awscli to the version currently deployed on prod,
so that it plays nicely with that older boto3/botocore.
We want to bring the start dates for first class letter rates forward by
a month so that we don't see billing errors when sending first class letters now.
(The feature will still go live at the planned time - this is to let us test things
beforehand.)
we had an issue where the notification postage constraint command ran
into a deadlock after trying to acquire two exclusive access locks on
large, frequently read and modified tables.
To avoid this happening, we've had to split the upgrade script into
three - one script to apply the not-valid constraint to the notifications
table, one for notification_history, and a third to validate the two
constraints.
Note: the first two scripts acquire exclusive access locks, but the
third only needs a weaker lock while it checks the existing rows one by one.
since this involves changing the existing alembic upgrades, if you've
already upgraded your db you'll need to run the following three commands
to revert your database to a previous good state.
```
alter table notifications drop constraint chk_notifications_postage_null;
alter table notification_history drop constraint chk_notification_history_postage_null;
update alembic_version set version_num = '0229_new_letter_rates';
```
There are two fun quirks of Postgres/SQL that we need to work around:
* any comparison like `x = y` (or `x in (...)`) where x or y is NULL
returns NULL, rather than false.
* check constraints treat NULL as a pass, just like true.
So the check `postage in ('first', 'second')` returns `null` rather
than `false` when postage is itself null, and that surprisingly passes
the check constraint. To get around this, we have to add an explicit not
null check as well.
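You can see the quirk for yourself - a quick sketch, assuming any scratch
local Postgres you can connect to:

```
from sqlalchemy import create_engine, text

# assumption: a local scratch database to connect to
engine = create_engine("postgresql://localhost/postgres")

with engine.connect() as conn:
    # IN has the same NULL semantics as =, so this prints None, not False...
    print(conn.execute(text("select null in ('first', 'second')")).scalar())
    # ...and a CHECK constraint treats that NULL result the same as true,
    # which is why the constraint also needs an explicit not-null clause.
```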
A NOT VALID constraint is only checked against new rows, not existing ones.
We can then run VALIDATE CONSTRAINT against the new constraint to check
the old rows (which we know are good, having run the command from
74961781). Adding a normal constraint acquires an ACCESS EXCLUSIVE
lock, but VALIDATE CONSTRAINT only needs a SHARE UPDATE EXCLUSIVE lock.
See 9d4b8961 and 0a50993f for more information on marking constraints
as not valid.
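Putting the pieces together, the three upgrade scripts end up shaped roughly
like this. This is a sketch, not the real migration files: each one actually
lives in its own alembic revision with its own upgrade()/downgrade(), and the
constraint text is illustrative (the exact check in the real migration may
differ), but the constraint names match the revert commands above.

```
from alembic import op

# Illustrative check: letters must have a valid postage, everything else none.
# The explicit IS NOT NULL is what stops a NULL postage slipping through.
ILLUSTRATIVE_CHECK = """
    CASE WHEN notification_type = 'letter'
         THEN postage IS NOT NULL AND postage IN ('first', 'second')
         ELSE postage IS NULL
    END
"""


def upgrade_notifications():
    # Script 1: add the constraint as NOT VALID, so only new rows are checked.
    # This still takes an ACCESS EXCLUSIVE lock, but only briefly, because no
    # table scan happens.
    op.execute(
        "ALTER TABLE notifications "
        "ADD CONSTRAINT chk_notifications_postage_null "
        "CHECK ({}) NOT VALID".format(ILLUSTRATIVE_CHECK)
    )


def upgrade_notification_history():
    # Script 2: the same again for notification_history. Locking one table per
    # script is what avoids the deadlock the combined version ran into.
    op.execute(
        "ALTER TABLE notification_history "
        "ADD CONSTRAINT chk_notification_history_postage_null "
        "CHECK ({}) NOT VALID".format(ILLUSTRATIVE_CHECK)
    )


def upgrade_validate():
    # Script 3: validate both constraints. VALIDATE CONSTRAINT scans the old
    # rows, but only needs a SHARE UPDATE EXCLUSIVE lock while it does so.
    op.execute("ALTER TABLE notifications VALIDATE CONSTRAINT chk_notifications_postage_null")
    op.execute("ALTER TABLE notification_history VALIDATE CONSTRAINT chk_notification_history_postage_null")
```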