If we partially retry a day, we would create new zip files containing
different letters (if some were processed successfully). We need these
files to have different filenames from earlier zip files so that we can
avoid overwriting log data in zips_sent.
Hashing the filename means that we'll only overwrite if it was the same
file containing the same content.
DVLA don't care about the naming conventions of zip files, other than
that they must start with `NOTIFY.` and end with `.ZIP`. So let's format
the date in a more readable way, and separate it from the batch number.
Previously, ftp would name the files itself by giving them a timestamp
when uploading. We ran into issues with tasks being picked up multiple
times and, as a result, uploading duplicate files. By naming the file
before creating the task, we avoid this issue.
Files are now named `NOTIFY.YYYYMMDD######.ZIP`, where the number is a
counter that increments with each task we've issued in that run of
collate-letter-pdfs-for-day.
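As a rough illustration of the naming scheme (the helper name and the zero-padding are assumptions, not the exact code):

```python
from datetime import datetime


def letters_zip_filename(run_date, file_counter):
    # Hypothetical helper: builds NOTIFY.YYYYMMDD######.ZIP, assuming the
    # per-run counter is zero-padded to six digits.
    return "NOTIFY.{}{:06d}.ZIP".format(run_date.strftime("%Y%m%d"), file_counter)


# e.g. letters_zip_filename(datetime(2018, 2, 1), 3) == "NOTIFY.20180201000003.ZIP"
```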
The data posted to the `add_user_to_service` endpoint is currently sent as a
list of permissions:
`[{'permission': MANAGE_SETTINGS}, {'permission': MANAGE_TEMPLATES}]`.
This endpoint is also going to be used for folder permissions, so the
data now needs to be nested:
`{'permissions': [{'permission': MANAGE_SETTINGS}, {'permission': MANAGE_TEMPLATES}]}`
This changes the `add_user_to_service` endpoint to accept data in either
format. Once admin is sending data in the new format, the code can be
simplified.
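A minimal sketch of how the endpoint can accept both shapes while admin migrates (the helper name is hypothetical):

```python
def _get_permissions_from_request(request_json):
    # Hypothetical helper: accept either the old format (a bare list of
    # permission dicts) or the new nested format with a 'permissions' key.
    if isinstance(request_json, list):
        return request_json
    return request_json.get('permissions', [])


# Both calls return the same list of permission dicts:
# _get_permissions_from_request([{'permission': 'manage_settings'}])
# _get_permissions_from_request({'permissions': [{'permission': 'manage_settings'}]})
```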
If the new folder has a parent folder, it inherits user permissions
from its parent. Otherwise, if the new folder is at root level, all users
will have permission to view it.
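In outline, the permission logic is something like the sketch below (the model and attribute names are assumptions):

```python
def users_with_permission_for_new_folder(service, parent_folder):
    # Hypothetical sketch: a folder created inside another folder inherits the
    # parent's user permissions; a root-level folder is visible to every user
    # on the service.
    if parent_folder is not None:
        return list(parent_folder.users_with_permission)
    return list(service.users)
```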
When triggered by an admin request, `dao_remove_user_from_service`
raised an IntegrityError since the user_to_service delete query was
issued before the folder permissions one, violating the foreign key
constraint on the folder permissions table.
For some reason this isn't caught by the tests in test_services_dao
that check that folder permissions are removed properly.
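One way to avoid the error is to delete the folder permissions before the user_to_service row, roughly as below (the DAO helper names are assumptions):

```python
from app import db  # assumed Flask-SQLAlchemy instance
from app.dao import permissions_dao, template_folder_dao  # assumed modules


def dao_remove_user_from_service(service, user):
    # Sketch of the intended ordering: delete the user's template folder
    # permissions first, so the foreign key from the folder permissions table
    # to user_to_service is never violated, then remove the user_to_service
    # row itself.
    template_folder_dao.delete_template_folder_permissions_for_user(service, user)
    permissions_dao.remove_user_service_permissions(user, service)
    service.users.remove(user)
    db.session.commit()
```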
If we had organisations for GDS and Cabinet Office, then we’d always
want someone whose email address ends in `@cabinet-office.gov.uk` to
match to `cabinet-office.gov.uk` before matching to
`digital.cabinet-office.gov.uk`.
Sorting the list of domains shortest first addresses this.
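As a tiny sketch of that ordering (the matching code around it is not shown, and the helper name is hypothetical):

```python
def domains_in_matching_order(domains):
    # Try the shortest domains first, so 'cabinet-office.gov.uk' is considered
    # before 'digital.cabinet-office.gov.uk' when both are in the list.
    return sorted(domains, key=len)


# domains_in_matching_order(['digital.cabinet-office.gov.uk', 'cabinet-office.gov.uk'])
# -> ['cabinet-office.gov.uk', 'digital.cabinet-office.gov.uk']
```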
Currently we have
- a thing in the database called an ‘organisation’ which we don’t use
- the idea of an organisation, which we derive from the user’s email
address and use to set the default branding for their service and to
determine whether they’ve signed the MOU
We should make these two things into one thing, by storing everything
we know about an organisation against that organisation in the database.
This will be much less laborious than storing it in a YAML file that
needs a deploy every time it’s updated.
An organisation can now have:
- domains which we can use to automatically associate services with it
(eg anyone whose email address ends in `dwp.gsi.gov.uk` gets services
they create associated with the DWP organisation)
- default letter branding for any new services
- default email branding for any new services
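A rough sketch of what this could look like in the models (all the table, column and relationship names here are assumptions):

```python
from sqlalchemy.dialects.postgresql import UUID

from app import db  # assumed Flask-SQLAlchemy instance


class Domain(db.Model):
    # Hypothetical sketch: each row links an email domain (eg 'dwp.gsi.gov.uk')
    # to the organisation that services created by users at that domain join.
    __tablename__ = 'domain'
    domain = db.Column(db.String(255), primary_key=True)
    organisation_id = db.Column(UUID(as_uuid=True), db.ForeignKey('organisation.id'), nullable=False)


class Organisation(db.Model):
    __tablename__ = 'organisation'
    id = db.Column(UUID(as_uuid=True), primary_key=True)
    name = db.Column(db.String(255), unique=True, nullable=False)
    # Defaults applied to any new service associated with the organisation.
    letter_branding_id = db.Column(UUID(as_uuid=True), db.ForeignKey('letter_branding.id'), nullable=True)
    email_branding_id = db.Column(UUID(as_uuid=True), db.ForeignKey('email_branding.id'), nullable=True)
    domains = db.relationship('Domain', backref='organisation')
```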
The timestamps available in the SES receipt don't always correspond
to the time the notification was sent. We've seen callbacks with
a current timestamp in both the 'mail' and 'bounce' objects that referenced
a notification sent a week ago, which means we can't rely on them to skip
archived notifications.
One possible approach would be to look up the notification reference in
the notification_history table, but this goes against our plans to stop
relying on it in the future.
This changes the SES receipts logic to retry missing notifications once
(if the callback timestamp is within the last 5 minutes the task will
retry after a 5 minute delay) to capture callbacks arriving before the
notification reference has been persisted to the DB. Otherwise, we log
the missing notification as a warning instead of an error.
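In outline the task behaves something like this (the task, DAO and queue names are assumptions, not the exact code):

```python
from datetime import datetime, timedelta

from flask import current_app

from app import notify_celery  # assumed Celery instance
from app.dao.notifications_dao import dao_get_notification_by_reference  # assumed helper


@notify_celery.task(bind=True, name="process-ses-result", max_retries=1, default_retry_delay=300)
def process_ses_result(self, ses_message):
    reference = ses_message["mail"]["messageId"]
    notification = dao_get_notification_by_reference(reference)

    if notification is None:
        message_time = datetime.strptime(ses_message["mail"]["timestamp"], "%Y-%m-%dT%H:%M:%S.%fZ")
        if datetime.utcnow() - message_time < timedelta(minutes=5) and self.request.retries == 0:
            # The callback may have arrived before the notification reference
            # was persisted, so retry once after the 5 minute default delay.
            raise self.retry(queue="retry-tasks")
        current_app.logger.warning("notification not found for SES reference {}".format(reference))
        return

    # ...update the notification status from the receipt as before...
```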
It should be nullable so we can tell whether someone has answered the
question already or not.
No real users have entered data into this column yet, so it’s fine to
wipe it.
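For illustration only, the migration could look something like the sketch below; the table name, column name and column type are placeholders rather than the real ones:

```python
from alembic import op


def upgrade():
    # Placeholder names: make the column nullable first, then wipe the values
    # entered so far, so NULL cleanly means "not answered yet".
    op.alter_column('services', 'example_answer', nullable=True)
    op.execute('UPDATE services SET example_answer = null')


def downgrade():
    # Assumes a boolean column: backfill a value before reinstating NOT NULL.
    op.execute('UPDATE services SET example_answer = false')
    op.alter_column('services', 'example_answer', nullable=False)
```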
The code is inspired by the delete notification code, but with some
clean-up, since we don't deal with different types etc. and only need to
run the query for services with inbound numbers.
Also, update tests.app.db.create_inbound_sms to create inbound numbers
and assign them to services to ensure the test db is always accurate
and reflects real world usage.
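A sketch of the shape of the delete query (the model names and retention period are assumptions):

```python
from datetime import datetime, timedelta

from app import db  # assumed Flask-SQLAlchemy instance
from app.models import InboundNumber, InboundSms  # assumed models


def delete_inbound_sms_older_than_retention(retention_days=7):
    # Sketch only: inbound SMS can only exist for services that have an
    # inbound number, so the delete runs once per such service rather than
    # over every service in the database.
    deleted = 0
    cutoff = datetime.utcnow() - timedelta(days=retention_days)
    for inbound_number in InboundNumber.query.filter(InboundNumber.service_id.isnot(None)):
        deleted += InboundSms.query.filter(
            InboundSms.service_id == inbound_number.service_id,
            InboundSms.created_at < cutoff,
        ).delete(synchronize_session=False)
    db.session.commit()
    return deleted
```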