In previous tests we checked that we can deal with files that end in `pdf`
in various forms of upper and lower case. I've moved the testing of this
behaviour into its own test, so that it's explicit, rather than just implied,
that we care about the casing of filenames.
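For illustration, a minimal sketch of what such an explicit casing test might look like; the `is_pdf` helper and the filenames are hypothetical, not our real test code:

```python
import pytest

def is_pdf(filename):
    # Case-insensitive check on the file extension.
    return filename.lower().endswith(".pdf")

@pytest.mark.parametrize("filename", ["NOTIFY.PDF", "notify.pdf", "Notify.Pdf"])
def test_handles_any_pdf_casing(filename):
    assert is_pdf(filename)
```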
Note however that s3 is case sensitive and we upload all our files in
upper case, so technically we'd never expect to see a file ending in
lowercase `pdf`. I've updated some of our test data to reflect this, but
didn't bother doing it everywhere. I've left the test in anyway, although
it could be argued that it is redundant, as we don't ever expect to see
that case anyway.
Previously, when running the `collate_letter_pdfs_for_day` task, we
would only send letters that were created between 5:30pm yesterday and
5:30pm today.
Now we send letters that were created before 5:30pm today and that are
still waiting to be sent. This will help us automatically attempt to
send letters that may have fallen through the gaps and not been sent the
previous day when they should have been.
Previously we solved the problem of letters that had fallen through the
gap by running the task with a date parameter, for example
`collate_letter_pdfs_for_day('2020-02-18')`. We no longer need this date
parameter, as we will always look back across previous days for letters
that still need sending.
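As a rough sketch of the new selection criteria; the model, column, and status names here are assumptions based on the description above, not the real query:

```python
from datetime import datetime, time

def letters_awaiting_send(session, Letter, today):
    # Select anything created before today's 5:30pm deadline that has not
    # yet been sent, however many days old it is; no date parameter is
    # needed because earlier days are always included.
    deadline = datetime.combine(today, time(17, 30))
    return session.query(Letter).filter(
        Letter.created_at < deadline,
        Letter.status == "pending",
    )
```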
Note, we have to change from using the paginated `list_objects_v2` call
to instead getting each individual notification from s3. We reduce load
by using `HEAD` rather than `GET`, but this will still greatly increase
the number of API calls. We acknowledge there will be a small cost to
this (say 50p for 5,000 letters) and think this is tolerable. Boto3 also
handles retries itself, so if there is a networking blip when making one
of the many `HEAD` requests, it should be retried automatically for us.
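A before/after sketch of the swap, with hypothetical bucket, prefix, and key names; this is the shape of the change, not our exact code:

```python
import boto3

s3 = boto3.client("s3")

def letter_keys_via_listing(bucket, prefix):
    # Old approach: a single paginated LIST walks every key under the prefix.
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get("Contents", []):
            yield obj["Key"]

def letter_metadata(bucket, key):
    # New approach: one HEAD request per notification. HEAD returns only
    # the object's metadata, so it is cheaper than a GET, and botocore
    # retries transient networking errors for us.
    return s3.head_object(Bucket=bucket, Key=key)
```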
This will allow us to accept two different secrets, and therefore allow
us to rotate the secret that the admin client is sending to the API.
Due to how the notifications-python-client throws exceptions, we run
into exactly the same issue of not being able to distinguish whether a
`TokenDecodeError` is thrown because the token was encrypted with a
different secret key, or because there was some other error when
decoding. I've copied the TODO from `requires_auth`, as this is exactly
the same issue.
I've also added a test case for functionality that was previously
missing coverage: an out-of-date admin token (old IAT).
And support this change across our code. Note, this is a halfway step
where it is now a list rather than a string, but it still only supports
a single secret, ie one item in the list.
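A hedged sketch of where this is heading; the helper name is ours, while `decode_jwt_token` and `TokenDecodeError` come from the notifications-python-client:

```python
from notifications_python_client.authentication import decode_jwt_token
from notifications_python_client.errors import TokenDecodeError

def decode_with_any_secret(token, secrets):
    # Try each configured secret in turn, so that during rotation both the
    # old and the new admin client secret are accepted. Because a wrong
    # secret and any other decode failure both raise TokenDecodeError, we
    # can only re-raise once every secret has failed (the TODO above).
    last_error = None
    for secret in secrets:  # assumes a non-empty list of secrets
        try:
            return decode_jwt_token(token, secret)
        except TokenDecodeError as e:
            last_error = e
    raise last_error
```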
we don't need it here - as exceptions are re-raised, they will be logged
additionally by error handlers further up. All this exception logger
tells us is that service names are already in use, which isn't something
we're really interested in.
the queries all return lots of columns, but each query has columns it
doesn't care about. e.g. emails don't have billable units or an
international flag, letters don't have an international flag, and sms
don't have a page count. additionally, the query was grouping on things
that never change, like service id and notification type.
by making all of these literals (as in `select 1 as foo`) we see times
that are over 50% quicker for the gov.uk email service.
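as an illustrative sketch (not the real query), replacing constant grouped columns with literals looks like this in SQLAlchemy; the `Notification` model and column names are assumptions:

```python
from sqlalchemy import func, literal

def email_counts_for_service(session, Notification, service_id):
    # service_id and notification_type never vary within this query, so
    # select them as literals instead of grouping on real columns; emails
    # have no international flag, so that becomes a literal NULL too.
    return session.query(
        literal(str(service_id)).label("service_id"),
        literal("email").label("notification_type"),
        literal(None).label("international"),
        Notification.status,
        func.count(Notification.id).label("count"),
    ).filter(
        Notification.service_id == service_id,
        Notification.notification_type == "email",
    ).group_by(Notification.status)
```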
Note: One of the tests changed because previously it involved emails and
sms with statuses that they could never have (e.g. returned-letter).
This fixes the test in the previous commit, and means we will catch
other unexpected JWT errors, which are now raised as `TokenError`s, and
raise an `AuthError` based on this.
This will stop us serving 5xx to users when we don't catch an exception.
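A hedged sketch of the shape of this; `TokenError` is the client's base JWT exception, and the `AuthError` class below is a stand-in for our real auth exception, not its actual definition:

```python
from notifications_python_client.authentication import decode_jwt_token
from notifications_python_client.errors import TokenError

class AuthError(Exception):
    # Stand-in for the API's real AuthError, which an error handler
    # turns into a 403 response.
    def __init__(self, message, code=403):
        super().__init__(message)
        self.code = code

def validate_admin_token(token, secret):
    # TokenError is the base class of the client's JWT exceptions, so this
    # also covers unexpected decode failures, turning them into a 403
    # rather than an uncaught exception and a 5xx.
    try:
        return decode_jwt_token(token, secret)
    except TokenError as e:
        raise AuthError("Invalid token: {}".format(e))
```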
Also runs `make freeze-requirements`
Update cffi from 1.13.1 to 1.13.2
Update jsonschema from 3.1.1 to 3.2.0
Update marshmallow-sqlalchemy from 0.19.0 to 0.21.0
Update marshmallow from 2.20.2 to 3.4.0
Update sqlalchemy from 1.3.10 to 1.3.13
Update notifications-python-client from 5.4.1 to 5.5.1
intention is for this to be null, 1, or many, based on how many
documents were linked to within the message. it's a nullable column, so
that creating it doesn't require a lengthy ACCESS EXCLUSIVE lock on the
table.
We aren't aware of any reason we need to use our fork of boto anymore.
We therefore swap to `celery[sqs]`, which brings in the original version
of boto, and we can remove our use of the fork.
This has been tested by running the celery app and seeing it connect to
sqs and grab messages off a queue.
We use boto3 for our interaction with s3, so if an exception is thrown
it will be thrown from the botocore library (which boto3 is built on top
of). I have copied the pattern from `app/aws/s3.py::file_exists` as an
example of this exception catching.
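the pattern looks roughly like this; a sketch modelled on the `file_exists` example above, with placeholder bucket and key names:

```python
import boto3
from botocore.exceptions import ClientError

def file_exists(bucket_name, key):
    try:
        # Object.load() issues a HEAD request for the object's metadata.
        boto3.resource("s3").Object(bucket_name, key).load()
        return True
    except ClientError as e:
        # boto3 raises botocore's ClientError; a 404 means the object is
        # missing, anything else is unexpected and should propagate.
        if e.response["Error"]["Code"] == "404":
            return False
        raise
```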