Adds a platform admin button to the service settings to turn the
'upload_document' service permission on or off. The permission allows uploading
documents to document-download-api through the post notification API
endpoint.
If a user clicks ‘back’ once they’ve sent a job we don’t want them to
land on the ‘check’ page again. This would suggest that they can send
the same job again (they can’t because that `job_id` is in the database
already). That said, it’s confusing to see that page; the natural thing
is to jump back another step, to where they uploaded the file.
We’re going to stop storing job metadata in the session. So we can’t
rely on it for checking whether a file is valid. That safeguard now
happens in the API instead, which checks against the metadata stored
in S3.
For both SMS senders and email reply-to addresses this commit adds:
- a delete link
- a confirmation loop
It doesn’t let users delete:
- default SMS senders or reply-to addresses (a service always has to
  have at least one)
- inbound numbers
It assumes that the API will allow updating of an attribute named
`active` on the respective database rows. It could work in a different
way. We can’t do complete deletion though because these will still be
keyed to notifications.
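As a minimal sketch of the soft-delete approach described above (the `active` flag comes from the commit message; the function name, data shape and guard conditions are illustrative assumptions, not the real API):

```python
# Sketch of soft deletion: rather than deleting the row, flip a
# hypothetical `active` flag so the sender/address can still be
# joined to historic notifications.
def archive_sender(senders: dict, sender_id: str) -> None:
    sender = senders[sender_id]
    if sender.get('is_default'):
        # a service always has to have a default sender
        raise ValueError('Cannot delete a default sender')
    if sender.get('inbound_number'):
        raise ValueError('Cannot delete an inbound number')
    sender['active'] = False
```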
We reckon users will like to see gov reply-to email addresses because
it will improve their confidence in the email.
However, some services, for a few complex reasons, don't want a gov
reply-to address. Rather than add their specific domains to the
whitelist for signups etc, just allow reply-tos from any domain.
We vet reply-tos before services go live anyway.
S3 has a limit of 2 KB for user-defined metadata:
> the user-defined metadata is limited to 2 KB in size. The size of
> user-defined metadata is measured by taking the sum of the number of
> bytes in the UTF-8 encoding of each key and value.
– https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html#object-metadata
This means we have a limit of 1870 bytes for the filename:
```python
>>> import sys
>>> encoded = 'notification_count50000template_id665d26e7-ceac-4cc5-82ed-63d773d21561validTrueoriginal_file_name'.encode('utf-8')
>>> sys.getsizeof(encoded)
130
>>> 2000 - 130
1870
```
(`sys.getsizeof` includes CPython’s fixed per-object overhead for a `bytes`
object, so this slightly understates the real budget – the error is in the
safe direction.)
Or, in other words, ~918 characters:
```python
>>> sys.getsizeof(('ü' * 918).encode('utf-8'))
1869
```
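A hedged sketch of how we might enforce that budget: trim an over-long filename to the remaining byte allowance without splitting a multi-byte UTF-8 character (the helper name and default budget are illustrative, the budget following the calculation above):

```python
def truncate_filename(name: str, budget: int = 1870) -> str:
    # Cut the UTF-8 encoding at the byte budget; decoding with
    # errors='ignore' drops any multi-byte character split by the cut.
    return name.encode('utf-8')[:budget].decode('utf-8', errors='ignore')
```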
We’re fetching the template just to get back its `id` – but the `id` is
the one thing we already know, since we need it to fetch the template
in the first place.
The call to get template is still happening inside `_check_messages`, so
we’ll still catch someone trying to look at this page for a template
that doesn’t exist.
By doing this we no longer have to store it in the session. This is the
last thing that’s currently in the session, so removing it means we can
drop session storage for file uploads entirely.
Storing things in the session is proving buggy – we still have one user
(that we know about) where the session data isn’t getting written, so
they’re blocked from uploading a file.
Since all the info we’re storing in the session is about the file, it
makes sense to store it with the file.
This commit only writes the metadata; once we’re sure this is working
we can do the subsequent work to read it back and remove the reliance
on the session.
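As a sketch of what writing that metadata might look like (key names follow the example earlier in this log; the helper name is hypothetical): S3 user-defined metadata values must be strings, and boto3 accepts them via the `Metadata` argument to `put_object`.

```python
def build_upload_metadata(notification_count, template_id, valid, original_file_name):
    # Everything we previously kept in the session, stored with the file
    # instead – e.g. s3.put_object(..., Metadata=build_upload_metadata(...)).
    # S3 metadata values must all be strings.
    return {
        'notification_count': str(notification_count),
        'template_id': str(template_id),
        'valid': str(valid),
        'original_file_name': original_file_name,
    }
```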
`p1` == "should the Notify team be alerted to this (via PagerDuty)"
`urgent` == "should the user be told we'll look at it"
* If it's in office hours, it's always urgent. It's never a P1 because
we'll notice it anyway
* If it's outside of office hours, it's urgent and P1 if it's severe,
otherwise it's neither