This is not required by DVLA, and since [1] we no longer care about
the end of letter filenames when collating them, so removing it is
safe. Note that the name of the ZIP files of collated letters
is based on a hash of the filenames, which needed updating in tests.
Before merging this we need to do a test run in Staging, so DVLA can
check that a mixture of the old / new filenames won't cause issues.
[1]: https://github.com/alphagov/notifications-api/pull/3172
Previously we did some unnecessary work:
- Collate task. This had one S3 request to get a summary of the object,
which was then used in another request to get the full object. We only
need the size of the object, which is included in the summary [1].
- Archive task. This had one S3 request to get a summary of the object,
which was then used to make another request to delete it. We still need
both requests, but we can remove the S3.Object in the middle.
[1]: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#objectsummary
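For illustration, a sketch of the slimmed-down calls, assuming a boto3
bucket resource (the bucket and prefix names here are made up):

```python
import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("letters-pdf")  # hypothetical bucket name

# Collate: the ObjectSummary returned by listing already carries the
# size, so there's no second request via summary.Object().content_length.
for summary in bucket.objects.filter(Prefix="2019-08-01/"):
    print(summary.key, summary.size)

# Archive: still two requests (list + delete), but the delete can be
# issued straight from the summary, with no intermediate S3.Object.
for summary in bucket.objects.filter(Prefix="2019-08-01/"):
    summary.delete()
```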
Previously, the function would just return a presumed filename. Now
that it actually checks S3, it will raise an exception if the file
doesn't exist. By default that's a StopIteration at the end of the
bucket iterator, which isn't ideal, as it would be silently suppressed
if the function were called from within a generator further up the
stack.
There are a couple of places where we expect the file may not exist, so
we define a custom exception to rescue specifically here. I did consider
subclassing boto's ClientError, but this wasn't straightforward as the
constructor expects to know the operation that failed, which for me is a
signal that it's not an appropriate (re-)use of the class.
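A minimal sketch of the idea, with a hypothetical exception name and
helper:

```python
class LetterNotFoundError(Exception):
    """Raised when no letter PDF exists in S3 for the expected prefix."""


def find_letter_pdf(bucket, prefix):
    try:
        return next(iter(bucket.objects.filter(Prefix=prefix)))
    except StopIteration:
        # Translate the iterator's StopIteration into something callers
        # can catch deliberately where absence of the file is expected.
        raise LetterNotFoundError(
            "no letter PDF found with prefix {}".format(prefix)
        )
```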
Previously we generated the filename we expected a letter PDF to be
stored at in S3, and used that to retrieve it. However, the generated
filename can change over the course of a notification's lifetime, e.g.
if the service changes from crown ('.C.') to non-crown ('.N.').
The prefix of the filename is stable: it's based on properties of the
notification (its reference and creation time) that don't change. This
commit
changes the way we interact with letter PDFs in S3:
- Uploading uses the original method to generate the full file name.
The method is renamed to 'generate_' to distinguish it from the new one.
- Downloading uses a new 'find_' method to get the filename using just
its prefix, which makes it agnostic to changes in the filename suffix
(see the sketch after this list).
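A rough sketch of the split, assuming a boto3 bucket resource; the
helper names and filename format are illustrative only:

```python
import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("letters-pdf")  # hypothetical bucket name


def _filename_prefix(notification):
    # The stable part: built only from the reference and creation time,
    # neither of which changes over the notification's lifetime.
    return "{}/NOTIFY.{}".format(
        notification.created_at.strftime("%Y-%m-%d"),
        notification.reference,
    )


def generate_letter_pdf_filename(notification, crown):
    # Upload path: the suffix encodes mutable properties such as crown
    # status, so it is only safe to build at upload time.
    return "{}.{}.pdf".format(
        _filename_prefix(notification), "C" if crown else "N"
    )


def find_letter_pdf_filename(notification):
    # Download path: list by the stable prefix, so the lookup still
    # works whatever suffix the file was uploaded with.
    matches = bucket.objects.filter(Prefix=_filename_prefix(notification))
    return next(iter(matches)).key
```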
Making this change helps to decouple our code from the requirements DVLA
have on the filenames. While it means more traffic to S3, we rely on S3
in any case to download the files. From experience, we know S3 is highly
reliable and performant, so we don't anticipate any issues.
In the tests we favour using moto to mock S3, so that the behaviour is
realistic. There are a couple of places where we just mock the method,
since what it returns isn't important for the test.
Note that, since the new method requires a notification object, we need
to change a query in one place, whose columns were only selected to
appease the original filename-generating method.
We don't name letters based on the day we send them, but rather the day
we create them. If we process a letter for a second time for whatever
reason, even if it's a couple of days later, it'll still go in a folder
based on the created_at timestamp. There's still a slight wrinkle: if
the timestamp is after 5:30pm, the folder will be for the day after.
But that's still determined purely by the creation time, so I think
created_at still makes the most sense.
Remove the term `sending_date` to try and make this relationship more
apparent.
`_now`? Why would we ever use a different _now? Instead say created_at,
because that's what it will always be set to, even if we're replaying
old letters. We always set the folder name from when the letter was
created, or we might not know where to look to find it.
`dont_use_sending_date` doesn't really tell us what happens if we don't
use it. The answer is that we return an empty string and ignore the
folder entirely, so let's call it that.
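A minimal sketch of the naming rule, assuming the 5:30pm deadline
above (the function and parameter names are invented):

```python
from datetime import time, timedelta

LETTER_PROCESSING_DEADLINE = time(17, 30)


def letter_folder_name(created_at, ignore_folder=False):
    if ignore_folder:
        # The renamed flag: skipping the folder means an empty string.
        return ""
    folder_date = created_at.date()
    if created_at.time() >= LETTER_PROCESSING_DEADLINE:
        # Created after the deadline, so printed (and foldered) the
        # next day; still derived purely from created_at.
        folder_date += timedelta(days=1)
    return folder_date.isoformat() + "/"
```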
Also, remove the use of freezegun in the tests, to prove that we don't
use the current time in any calculations. And add an assert to a mock
in the get_pdf_for_templated_letter test: we were mocking but not
asserting before, so the tests didn't fail when the function signature
changed.
We were determining the filename for precompiled letters before we had
checked whether they were international. This meant that a letter
could have a filename indicating it was 2nd class, but once we had
sanitised the letter and checked the address we would set the
notification to international.
This stopped these letters from being picked up to be sent to the DVLA,
since the filename and postage of the letter did not match.
We now regenerate the filename after the letter has been sanitised (and when
we know the postage) and use the updated filename when moving the letter
into the live PDF letters bucket.
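A toy illustration of the corrected ordering (the dict fields and
filename format are invented, not Notify's real ones):

```python
def filename_for(notification):
    # The postage marker in the name must match the notification.
    return "NOTIFY.{}.{}.pdf".format(
        notification["reference"], notification["postage"]
    )


def finalise_letter(notification, address_is_international):
    # Postage is only known once the address has been sanitised, so fix
    # it up first...
    if address_is_international:
        notification["postage"] = "europe"
    # ...and only then build the filename used when moving the letter
    # into the live PDF letters bucket.
    return filename_for(notification)


letter = {"reference": "ABC123", "postage": "second"}
print(finalise_letter(letter, address_is_international=True))
# NOTIFY.ABC123.europe.pdf
```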
This has been replaced by a new task, `sanitise-letter`, so this
deletes all the code in the old task and ensures that when antivirus is
not enabled locally we call the new task instead.
Added a task, `sanitise-letter`, that will be called from antivirus when
a letter has passed virus scan. This calls a new task in
template-preview which will sanitise the PDF.
A second new task, `process-sanitised-letter`, will be called from the
template preview task and deals with updating the notification and
moving it to the relevant bucket.
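Roughly how the two tasks hang together, sketched with Celery; the
queue, task, and helper names are illustrative:

```python
from celery import Celery

celery_app = Celery("letters")


def update_notification_status(filename, status):
    # Stand-in for the real DAO update.
    print("{} -> {}".format(filename, status))


def move_letter(filename, to_bucket):
    # Stand-in for the real S3 move.
    print("moving {} to {}".format(filename, to_bucket))


@celery_app.task(name="sanitise-letter")
def sanitise_letter(filename):
    # Called by antivirus once the letter passes its virus scan: hand
    # the PDF to template-preview for sanitisation.
    celery_app.send_task(
        "sanitise-and-upload-letter",  # hypothetical template-preview task
        kwargs={"filename": filename},
        queue="template-preview-tasks",
    )


@celery_app.task(name="process-sanitised-letter")
def process_sanitised_letter(filename, validation_passed):
    # Called back from template-preview: update the notification and
    # move the letter into the relevant bucket.
    if validation_passed:
        update_notification_status(filename, "created")
        move_letter(filename, to_bucket="live-letters-pdf")
    else:
        update_notification_status(filename, "validation-failed")
        move_letter(filename, to_bucket="invalid-letters-pdf")
```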
Also use this metadata to decide whether preview pages need an overlay.
So far we have always added the overlay when validation failed; now we
will only show it when validation failed because content was outside
the printable area.
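For instance, the decision could look like this (the metadata key and
message values are assumptions):

```python
def letter_preview_needs_overlay(metadata):
    # Overlay only when validation failed specifically because content
    # strayed outside the printable area, not for other failures.
    return metadata.get("message") == "content-outside-printable-area"


print(letter_preview_needs_overlay({"message": "content-outside-printable-area"}))  # True
print(letter_preview_needs_overlay({"message": "letter-not-a4"}))  # False: no overlay
```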
Now we consistently use the created_at date, so we can always get the right file location and name.
The previous updates to this code were trying to solve the problem of a PDF being created at 17:29 but not ready to upload until 17:31, after the antivirus and validation checks.
But in those cases we would have trouble finding the file.
The `get_bucket_name_and_prefix_for_notification` function was looking
at the `sent_at` or `updated_at` time of a notification to see which
bucket it was in. Precompiled letters sent through the admin app don't
have either of these times, only a `created_at` time, so this lets the
function check `created_at` too.
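A minimal sketch of the fallback, with an invented notification type
and bucket/prefix scheme:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class Notification:
    created_at: datetime
    updated_at: Optional[datetime] = None
    sent_at: Optional[datetime] = None


def get_bucket_name_and_prefix_for_notification(notification):
    # Precompiled letters sent through the admin app have neither
    # sent_at nor updated_at, so fall back to created_at.
    upload_time = (
        notification.sent_at
        or notification.updated_at
        or notification.created_at
    )
    return "letters-pdf", upload_time.strftime("%Y-%m-%d")


print(get_bucket_name_and_prefix_for_notification(
    Notification(created_at=datetime(2019, 8, 1))
))
```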
This function checks various permissions, downloads the PDF from the
transient bucket, creates the notification, then moves the letter to
the 'normal' bucket.
This has been moved to the letters utils file since it will be used in
more than one place. The notification parameter has been removed so that
the function can be used when we don't have a notification id.
Previously, we didn't create templated letters, and just marked them as
delivered straight away. However, we may need to return PDFs for these
letters, so we should create them the same as live letters, and update
the functions so that they know where to look for them.
Using the created_at date for the folder is not always going to work, because the PDF's created_at could be just before the cut-off while the virus scan and validation have yet to happen. By the time the letter is in the created state, it goes into the next day's bucket.
This can also happen if a letter is stuck in `pending-virus-scan` and we need to restart the task, leaving the letter in a different folder.
Bumped notifications-utils to 3.7.0. Version 3.7.0 includes the
`convert_utc_to_bst` and `convert_bst_to_utc` functions and the
`LETTER_PROCESSING_DEADLINE` constant, so these have been removed from
this repo, and anywhere using them has been updated to import them
from `notifications-utils`.
Also bumped pytest by a patch version to bring in a bug fix.
The function that moves a letter from virus scanning to validation
failed was called with the wrong variables, and had some internal logic
that was slightly wrong.
Also, don't use `update_notification_by_id` for notifications if they
are not in `created`, `sending`, `pending`, or `sent`: it silently
doesn't update them. I didn't want to do a deeper investigation into
the reasons behind this terrifying state machine as part of this
commit, so I just changed the functions to call
`dao_update_notification` manually.
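A toy illustration of the pitfall, with invented internals (the real
code is a status-filtered database update):

```python
UPDATABLE_STATUSES = {"created", "sending", "pending", "sent"}


def update_notification_by_id(store, notification_id, **values):
    # The update is filtered on status, so notifications outside the
    # allowed set are silently left untouched.
    notification = store[notification_id]
    if notification["status"] in UPDATABLE_STATUSES:
        notification.update(values)


def dao_update_notification(notification, **values):
    # No status filter: the update always applies.
    notification.update(values)


store = {1: {"status": "pending-virus-scan"}}
update_notification_by_id(store, 1, status="validation-failed")
print(store[1]["status"])  # still pending-virus-scan: silent no-op
dao_update_notification(store[1], status="validation-failed")
print(store[1]["status"])  # validation-failed
```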
- pass the new, sanitised PDF for sending
- move invalid PDFs to a newly created bucket
- set the status for notifications that failed PDF validation to a new status, validation-failed
- adjust existing tests
Rather than looping through, just use `next`. We only expect to return
one file, so this won't actually affect the code flow. If, for whatever
reason, the file isn't on S3, then where previously it would error when
trying to return an uninstantiated variable, it will now raise a
StopIteration.
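The change, sketched (the helper name is invented):

```python
def find_letter_key(bucket, prefix):
    # Before: loop to capture the single match; with no matches, the
    # final `return match.key` raised UnboundLocalError.
    #
    #     for obj in bucket.objects.filter(Prefix=prefix):
    #         match = obj
    #     return match.key
    #
    # After: take the first match directly; with no matches this raises
    # StopIteration instead.
    return next(iter(bucket.objects.filter(Prefix=prefix))).key
```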
- Separated the logic of precompiled and templated letters.
- Removed the check for research mode: research mode is not relevant to API calls; the test key is used for testing.
Refactored upload_pdf_letter to accept a precompiled boolean, to save a query for the template.
Refactored move_scanned_pdf_to_test_or_live_pdf_bucket and
move_failed_pdf to consolidate some code, so it is easier to maintain
in future and so that _move_s3_object can be used for any new methods.
If the anti-virus app fails due to S3 errors, or ClamAV does not scan
the file at all (even after retries), an error needs to be raised and
the notification set to technical-failure.
Files should be moved to a 'folder': a separate one each for ERROR and
FAILURE.
* Added new letter task to process the error
* Added a new method to letter utils.py to move a file into an error or
failure folder based on the input
* Added tests to test the task and the utils.py method
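A sketch of the helper and the folder split, assuming boto3 (the
bucket and folder names are illustrative):

```python
import boto3

s3 = boto3.resource("s3")


def _move_s3_object(source_bucket, source_key, target_bucket, target_key):
    # Copy then delete: reusable for any future move_* helpers.
    s3.Object(target_bucket, target_key).copy_from(
        CopySource={"Bucket": source_bucket, "Key": source_key}
    )
    s3.Object(source_bucket, source_key).delete()


def move_failed_pdf(filename, scan_outcome):
    # scan_outcome is "ERROR" (the scan never ran, even after retries)
    # or "FAILURE" (the scan found a problem); each gets its own folder.
    _move_s3_object(
        "letters-scan", filename,
        "letters-scan", "{}/{}".format(scan_outcome, filename),
    )
```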
- precompiled PDFs sent with a test key are uploaded to the scan bucket
- set status to VIRUS-SCAN-FAILED, rather than PERMANENT-FAILURE, for PDFs failing the virus scan
- make a call to the AV app for precompiled letters sent via a test key, and set the notification status to PENDING-VIRUS-SCAN