A platform admin form accepts a list of references (one per line)
received from DVLA and sends them to the API to update notification
statuses.
References we get from DVLA start with a `NOTIFY00\d` prefix, which
isn't part of the reference we store in the database, so we strip it
before sending the data to the API.
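Something like this, with a hypothetical helper name (not the real
code):
```python
import re

def strip_dvla_prefix(reference):
    # drop the NOTIFY00<digit> prefix DVLA adds, which isn't part of
    # the reference we store
    return re.sub(r'^NOTIFY00\d', '', reference)

strip_dvla_prefix('NOTIFY001ABCDEFGHIJK')  # 'ABCDEFGHIJK'
```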
The new `returned-letter` status should be treated as `delivered` for
now, until we decide on a way to display returned letters to users.
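Roughly, as a sketch (the alias table is made up, not the real status
handling):
```python
# treat the new status as 'delivered' until we have a way of showing
# returned letters to users
STATUS_ALIASES = {
    'returned-letter': 'delivered',
}

def displayed_status(status):
    return STATUS_ALIASES.get(status, status)

displayed_status('returned-letter')  # 'delivered'
```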
This is useful if you have lots of people sending messages and want to
report on who’s doing what.
Needs the API updating to return `created_by_name` in its response.
Because we alias domains (eg `foo.gsi.gov.uk` to `foo.gov.uk`, or where
a local council has multiple domains), it could be hard to look up a
brand (which has one domain field).
Therefore we need a way of getting the canonical domain from a user’s
email address, which we can later use to look up their branding.
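A minimal sketch of what that lookup could look like, with made-up
alias data:
```python
# hypothetical alias table (not the real data): maps each aliased
# domain back to the canonical one we store against a brand
DOMAIN_ALIASES = {
    'foo.gsi.gov.uk': 'foo.gov.uk',
    'foo-district.gov.uk': 'foo.gov.uk',
}

def canonical_domain(email_address):
    domain = email_address.split('@')[-1].lower()
    return DOMAIN_ALIASES.get(domain, domain)

canonical_domain('tom.smith@foo.gsi.gov.uk')  # 'foo.gov.uk'
```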
We often check that a service has an appropriate text message sender as
a condition of them going live. We don’t mention this anywhere.
The services for which GOVUK is definitely not an appropriate sender are
those in local government. As we have more of these teams starting to
use Notify, we should streamline the process by making this check
automated.
This commit adds that check (sketched below), for teams who:
- have text message templates
- have self-declared as NHS or local government
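A rough sketch of the check, with assumed field names (not the real
service model):
```python
def needs_sms_sender_check(service):
    # flag services which send text messages and have self-declared as
    # NHS or local government, where GOVUK isn't an appropriate sender
    return (
        any(t['template_type'] == 'sms' for t in service['templates'])
        and service['organisation_type'] in {'nhs', 'local'}
    )
```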
If you skip past the templates page (because you don’t have the edit
permission) but then click back, you end up in a loop which redirects
you
to the page you’re already on.
This commit makes sure that you’re sent back a step further, so you
don’t get stuck in that loop.
Things we’ve noticed from looking at real data that we could handle in
a smarter way (sketched after this list):
- removing numbers (there might be a tom.smith2@dept.gov.uk if tom.smith
is already taken)
- removing middle initials (again, these tend to be used for
disambiguation and aren’t included when we ask people for their names)
- ignoring email addresses which only have someone’s initial, not their
first name (because we can’t make a decent guess in this case)
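A minimal sketch of these rules, using a hypothetical helper name:
```python
import re

def guess_name(email_address):
    parts = []
    for part in email_address.split('@')[0].split('.'):
        part = re.sub(r'\d+', '', part)  # drop disambiguating numbers
        if len(part) > 1:                # skip initials, middle or first
            parts.append(part.title())
    if len(parts) < 2:
        return ''                        # just an initial: no decent guess
    return ' '.join(parts)

guess_name('tom.smith2@dept.gov.uk')  # 'Tom Smith'
guess_name('t.smith@dept.gov.uk')     # ''
```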
Most people’s names, especially in government, are in the format
firstname.lastname@department.gov.uk. This means that you can pretty
reliably guess that their name is ‘Firstname Lastname’.
When users are invited to Notify we know their email address already.
So this commit pre-populates the registration form based on this guess.
This is a nice little detail, but it should also stop the browser
pre-filling the name field with someone’s email address (which I think
happens because the browser assumes a registration form will have an
email field).
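A rough sketch of the pre-population, reusing the hypothetical
`guess_name` helper sketched above:
```python
def prefill_name(email_address, current_value=''):
    # don't overwrite anything already in the field
    return current_value or guess_name(email_address)

prefill_name('tom.smith2@dept.gov.uk')  # 'Tom Smith'
```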
This works locally for a long-running request and a large number of messages. However, I suspect that nginx may be timing out the request. I'd like to try this on staging.
Currently we have a bunch of users who aren’t signed in asking us for
the agreement.
This is bad because:
- it’s slower (for them) than just being able to download it
- it creates work for us
We can’t just offer the agreement to anyone, but we can offer it to
anyone who’s signed in, because we now let people self-select which
version to download when we can’t tell which one to give them.
S3 has a limit of 2 KB for metadata:
> the user-defined metadata is limited to 2 KB in size. The size of
> user-defined metadata is measured by taking the sum of the number of
> bytes in the UTF-8 encoding of each key and value.
– https://docs.aws.amazon.com/AmazonS3/latest/dev/UsingMetadata.html#object-metadata
This means we have a limit of 1870 bytes for the filename:
```python
encoded = 'notification_count50000template_id665d26e7-ceac-4cc5-82ed-63d773d21561validTrueoriginal_file_name'.encode('utf-8')
sys.getsizeof(encoded)  # NB: includes Python's object overhead, so errs on the safe side
>>> 130
2000-130
>>> 1870
```
Or, in other words, ~918 characters:
```python
sys.getsizeof(('ü'*918).encode('utf-8'))
>>> 1869
```
We prefer people downloading the agreement if they can. If we don’t know
which agreement they should be using (ie we don’t know their crown
status) then we fall back to having them contact us.
Rather than making users contact us to get the agreement, we should just
let them download it, when we know which version to send them.
This commit adds two endpoints:
- one to serve a page which links to the agreement
- one to serve the agreement itself
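Roughly, as a sketch (route paths, template name and the bucket helper
are assumptions, not the real implementation):
```python
from flask import Flask, render_template, send_file

app = Flask(__name__)

@app.route('/agreement')
def agreement():
    # the page which links to the agreement
    return render_template('views/agreement.html')

@app.route('/agreement.pdf')
def download_agreement():
    # the agreement itself; get_agreement_from_bucket is hypothetical
    return send_file(get_agreement_from_bucket(), mimetype='application/pdf')
```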
These pages are not linked to anywhere because the underlying files
don’t exist yet. So I haven’t bothered putting real content on the page
yet either. I imagine the deploy sequence will be:
1. Upload the files to the buckets in each environment
2. Deploy this code through each environment, checking the links work
3. Make another PR to start linking to the endpoints added by this
commit
I don’t think it’s a massive risk (we’re certainly mitigating against
any XSS), but having a page on a GOV.UK domain where you can prefill
text on the page from a query string probably isn’t great.
So this commit restricts prefilling the support form to a set of
named questions.
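A minimal sketch of the restriction; the question names here are
invented examples:
```python
from flask import request

NAMED_QUESTIONS = {
    'Can I have a copy of the agreement?',
    'I want to go live',
}

def prefilled_message():
    # only prefill the form with messages we've explicitly named
    message = request.args.get('body', '')
    return message if message in NAMED_QUESTIONS else ''
```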
Precompiled letters can now have two additional states:
* pending-virus-check
* virus-scan-failed
Both new states should show in the notifications dashboard, and
virus-scan-failed should appear as an error state, with a descriptive
message. You should not be able to preview a letter in one of the two
new states, so the preview link has been removed for precompiled letters
in these states.
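Something like this, with assumed field names:
```python
STATES_WITHOUT_PREVIEW = {'pending-virus-check', 'virus-scan-failed'}

def show_preview_link(notification):
    # precompiled letters awaiting or failing a virus scan can't be
    # previewed
    return not (
        notification['is_precompiled_letter']
        and notification['status'] in STATES_WITHOUT_PREVIEW
    )
```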
It was only used by the choose service page, and then only in kludgy
ways (eg creating a list containing one item called "add service"), so
let's rip it out and make this page bespoke, especially now that it's
changed so much.
This endpoint should probably only be used for the choose-service page.
This also creates an `OrganisationBrowsableItem` to aid rendering of
organisations in the front-end.
This makes it easier to write a good message in the request-to-go-live
submission. And encapsulating it in the `GovernmentDomain` class keeps
the view nice and clean.
If a cell in the original file contains a comma, it comes back as two
cells in the downloaded file.
The CSV writer has logic to deal with this. It seems to work a lot
better than just concatenating the columns with commas ourselves.
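For example:
```python
import csv
import io

# the csv module quotes any cell which itself contains a comma
output = io.StringIO()
csv.writer(output).writerow(['07700900123', 'Hello, world'])
output.getvalue()  # '07700900123,"Hello, world"\r\n'
```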
Rather than specifically allowing platform admins to do everything, we
should only block them from things we consciously don't want them to
do.
This is "Don't let platform admins send letters from services they're
not in". Everything else, platform admins can do.
This is step one, adding a restrict_admin_usage flag, and setting that
for those restricted endpoints around creating API keys, uploading CSVs
and sending one-off messages.
Also, this commit separates the two use cases for permissions:
* user.has_permission for access control
* user.has_permission_for_service for user info - this is used, for
example, for showing checkboxes on the manage-users page
With this, we can remove the admin_override flag from the permission
decorator.
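A minimal sketch of the split, with assumed attribute names (not the
real `User` model):
```python
class User:
    def __init__(self, platform_admin, permissions):
        self.platform_admin = platform_admin
        self.permissions = permissions  # service_id -> set of permission names

    def has_permissions(self, service_id, *permissions, restrict_admin_usage=False):
        # access control: platform admins can do anything except the
        # endpoints we consciously restrict
        if self.platform_admin and not restrict_admin_usage:
            return True
        return any(p in self.permissions.get(service_id, set()) for p in permissions)

    def has_permission_for_service(self, service_id, permission):
        # user info only, eg whether to tick a checkbox on manage-users
        return permission in self.permissions.get(service_id, set())
```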
platform_admin is a separate concept to permissions, so by removing the
checks for it from the current_user.has_permissions function, we can
simplify things greatly. We already record on the user whether they're
a platform admin anyway.
When downloading a report of which messages from a job have been
delivered and which have failed we currently only include the Notify
data. This makes it hard to reconcile or do analysis on these reports,
because often the thing that people want to reconcile on is in the data
they’ve uploaded (eg a reference number).
Here’s an example of a user talking about this problem:
> It would also be helpful if the format of the delivery and failure
> reports could include the fields from the recipient's file. While I
> can, of course, cross-reference one report with the other it would be
> easier if I did not have to. We send emails to individuals within
> organisations and it is not always easy to establish the organisation
> from a recipient's email address. This is particularly important when
> emails fail to be delivered as we need to contact the organisation to
> establish a new contact.
– ticket 677
We’ve also seen it when doing research with a local council.
This commit takes the original file, the data from the API, and munges
them together.
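A minimal sketch of the merge, with made-up field names:
```python
def merge_report_rows(uploaded_rows, notifications):
    # pair each notification from the API with the uploaded row it came
    # from, keyed on row number
    by_row = {n['row_number']: n for n in notifications}
    for index, row in enumerate(uploaded_rows):
        status = by_row.get(index, {}).get('status', '')
        yield {**row, 'status': status}
```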
Done using isort[1], with the following command:
```
isort -rc ./app ./tests
```
Adds linting to the `run_tests.sh` script to stop badly-sorted imports
getting re-introduced.
Chosen style is ‘Vertical Hanging Indent’ with trailing commas, because
I think it gives the cleanest diffs, eg:
```
from third_party import (
    lib1,
    lib2,
    lib3,
    lib4,
)
```
1. https://pypi.python.org/pypi/isort
The thing that matters for which agreement an organisation has to sign
is whether or not that organisation is crown or non-crown.
There is only a partial overlap between crown/non-crown and
local/central. We can’t infer one from the other. So this commit makes it
explicit by marking all local government organisations as non-crown,
which is something we can know for sure.
We don’t, for example, know the inverse, that all parts of all central
government organisations are crown bodies (but we can mark some of them
as being so later on).
The list of email domains is a different list from the list of all
government domains. And because the list of all government domains is
really long now, it could be unnecessarily slow to search through when
(a lot of the time) all we care about is whether the email address ends
with `.gov.uk`.
This commit:
- makes the logic around looking up a domain a bit more sophisticated
by matching on the longest domain name first
- exposes the details about an organisation to consumers of the
`GovernmentDomain` class
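A minimal sketch of the longest-match-first lookup, with a made-up
domain list:
```python
KNOWN_DOMAINS = {'gov.uk', 'cabinet-office.gov.uk', 'nhs.uk'}

def match_domain(email_address):
    domain = email_address.split('@')[-1].lower()
    matches = [
        known for known in KNOWN_DOMAINS
        if domain == known or domain.endswith('.' + known)
    ]
    return max(matches, key=len, default=None)

match_domain('tom.smith@digital.cabinet-office.gov.uk')
# 'cabinet-office.gov.uk', not just 'gov.uk'
```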
In some cases we can tell based on someone’s email domain whether they
work for a central or local government organisation, and whether they
will need to sign the MOU or agreement in order to go live. So this
commit creates a structure to store this information.
Makes it fiddlier to add new domains, and is only needed to generate the
regular expression. Much cleaner to just insert them as part of
generating the regular expression.