The ‘too many pages’ error was being returned when the file couldn’t be
read. This commit corrects the error message, and adds a test to make
sure this case is covered.
This way we have a URL we can give people that always points to the
latest version of the spec.
And it makes our code more Flask-idiomatic to use `url_for` to generate
a URL, rather than passing around a constant.
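As a minimal sketch, with a hypothetical `spec` view:
```
from flask import Flask, url_for

app = Flask(__name__)

@app.route('/spec/latest')
def spec():
    return 'the latest version of the spec'

# Build the URL from the endpoint name rather than passing a constant around
with app.test_request_context():
    assert url_for('spec') == '/spec/latest'
```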
It’s hard to read the tests when they have HTML bundled up with content.
So this commit:
- introduces BeautifulSoup to parse the HTML
- asserts separately on the text and any links found in the HTML
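A rough sketch of what an assertion now looks like (the route, fixture
and expected values here are made up):
```
from bs4 import BeautifulSoup

def test_page_content(client):
    response = client.get('/upload-letter')
    page = BeautifulSoup(response.data.decode('utf-8'), 'html.parser')

    # Assert on the text, free of any markup...
    assert 'Upload a letter' in page.text

    # ...and separately on any links found in the HTML
    assert page.find('a')['href'] == '/templates'
```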
We show letter validation errors in two places:
1. In response to a user uploading a PDF
Here we use the error banner pattern because the problem is a direct
consequence of a user’s action, and is blocking them from continuing.
2. Once a PDF provided through the API has been validated
We use a less prominent pattern of red text with no border because
the message is reporting on something that’s already happened, and
which wasn’t a direct consequence of the user clicking something.
Because the context and patterns used are different we need slightly
different content in each of these situations. Previously we tried to
reuse the same content to make the code cleaner and less repetitive. But
ultimately a clear interface trumps clear code.
This will make the diffs introducing substantive changes easier to read.
Consistent indenting and always having trailing commas on lists and
dictionaries makes for smaller diffs.
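For example, with a trailing comma already in place, appending to a
list only ever adds one line to the diff:
```
ALLOWED_TYPES = [
    'email',
    'sms',
    'letter',  # a new item added below this line won't touch this one
]
```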
Before deploying a change to template-preview to return a validation
error for letters that are missing the address block, we need to add
the new error message to admin.
Some content changes have been made to other messages.
The format of the message has changed.
Converting Python data to CSV makes every field a string. This means
that in the report we return to the user every field will be a string,
even if it’s come from an `int` type in Python. This is because the CSV
‘standard’ doesn’t support any kind of typing.
Excel does support types for fields, so we can make our reports more
useful by preserving these types. This is particularly relevant in the
report we generate for performance platform, which needs the `count`
column to be a number type.
This commit adds extra code paths to the `Spreadsheet` class which mean
that it can be instantiated from either CSV data or a list of Python
data. Previously we were converting the Python data to CSV as an
intermediate step, before instantiating the class.
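A sketch of the shape of the class after this change – the method names
and the use of `openpyxl` here are illustrative, not the real
implementation:
```
import csv
import io

from openpyxl import Workbook


class Spreadsheet:

    def __init__(self, rows):
        self.rows = rows  # a list of lists, with Python types preserved

    @classmethod
    def from_rows(cls, rows):
        # New code path: an int stays an int, so Excel gets a number type
        return cls(rows)

    @classmethod
    def from_csv(cls, csv_data):
        # Old code path: everything parsed from CSV is a string
        return cls(list(csv.reader(io.StringIO(csv_data))))

    def as_excel(self):
        workbook = Workbook()
        for row in self.rows:
            workbook.active.append(row)
        return workbook
```
So `Spreadsheet.from_rows([['count'], [42]])` keeps `42` as a number
all the way into the spreadsheet.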
Separated `s3_client.py` into three files: one each for logos, CSV
files and the MOU.
This helps to keep things clearer now that we need to add lots more logo
functions for letters.
This commit makes content changes to the notification pages,
particularly the letter pages, to make things clearer now that we will
soon allow letters to be cancelled.
The main changes are:
* The confirmation banner for letters sent from a CSV file now states when
printing will start.
* We state the CSV file that notifications were sent from on the
notifications page.
* The notification page for letters shows when printing starts (today,
tomorrow, or the date that the letter was printed).
This is useful if you have lots of people sending messages and want to
report on who’s doing what.
Needs the API updating to return `created_by_name` in its response.
Because we alias domains (eg `foo.gsi.gov.uk` to `foo.gov.uk`, or where
a local council has multiple domains) it could be hard to look up a
brand (which has one domain field).
Therefore we need a way of getting the canonical domain from a user’s
email address, which we can later use to look up their branding.
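Something like this, where the alias data is made up for illustration:
```
DOMAIN_ALIASES = {
    'foo.gsi.gov.uk': 'foo.gov.uk',
    'districtcouncil.example.gov.uk': 'example.gov.uk',
}

def canonical_domain(email_address):
    # Take the part after the @, then collapse any known alias
    domain = email_address.split('@')[-1].lower()
    return DOMAIN_ALIASES.get(domain, domain)
```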
We were seeing isort produce different outputs locally and in Docker -
this was due to it having different opinions about whether the tests
module (ie all our unit tests) is a first party (local) or third party
(pip installed) import. It's a first party import, so by defining this
in the setup.cfg isort settings, we can force it to be consistent
between environments.
Note: I don't know why it was different in the first place though
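The setting, in the `[isort]` section of `setup.cfg`, looks like this:
```
[isort]
known_first_party = tests
```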
Previously, we were looking at the day of the week - so messages sent
six days ago would show up as "tomorrow". We now look at the actual
date, so that won't happen again.
We were also subtracting an hour to make 00:00 this evening show up as
"midnight today", despite it technically being tomorrow. However, this
means that 00:59 tomorrow morning would show up as "00:59 today", a
full day out. So reduce that to just a minute, so it doesn't affect
other times of day.
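A simplified sketch of the adjusted logic (ignoring timezones, which
the real code has to deal with):
```
from datetime import timedelta

def day_label(moment, now):
    # Nudge back one minute so 00:00 counts as 'midnight today',
    # while 00:59 is still correctly treated as tomorrow
    adjusted = moment - timedelta(minutes=1)
    if adjusted.date() == now.date():
        return 'today'
    if adjusted.date() == (now + timedelta(days=1)).date():
        return 'tomorrow'
    # Comparing actual dates means a date six days away can never
    # come back as 'tomorrow'
    return moment.strftime('%d %B')
```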
This makes it easier to write a good message in the ‘request to go
live’ submission. And encapsulating it in the `GovernmentDomain` class
keeps the view nice and clean.
If a cell in the original file contains a comma, it comes back as two
cells in the downloaded file.
The CSV writer has logic to deal with this. It seems to work a lot
better than just concatenating the columns with commas ourselves.
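The standard library does the quoting for us:
```
import csv
import io

output = io.StringIO()
csv.writer(output).writerow(['123 Example Street, Townsville', 'delivered'])

# The comma stays inside a single, quoted cell rather than splitting it in two
assert output.getvalue().strip() == '"123 Example Street, Townsville",delivered'
```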
When downloading a report of which messages from a job have been
delivered and which have failed we currently only include the Notify
data. This makes it hard to reconcile or do analysis on these reports,
because often the thing that people want to reconcile on is in the data
they’ve uploaded (eg a reference number).
Here’s an example of a user talking about this problem:
> It would also be helpful if the format of the delivery and failure
> reports could include the fields from the recipient's file. While I
> can, of course, cross-reference one report with the other it would be
> easier if I did not have to. We send emails to individuals within
> organisations and it is not always easy to establish the organisation
> from a recipient's email address. This is particularly important when
> emails fail to be delivered as we need to contact the organisation to
> establish a new contact.
– ticket 677
We’ve also seen it when doing research with a local council.
This commit takes the original file, the data from the API, and munges
them together.
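In outline, and assuming the uploaded rows and the API’s notifications
come back in the same order (the field names here are illustrative):
```
def build_report_rows(original_rows, notifications):
    for row, notification in zip(original_rows, notifications):
        # The user's own columns (eg their reference number) come first,
        # followed by the Notify data for that recipient
        yield {**row, 'status': notification['status']}
```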
Done using isort[1], with the following command:
```
isort -rc ./app ./tests
```
Adds linting to the `run_tests.sh` script to stop badly-sorted imports
getting re-introduced.
Chosen style is ‘Vertical Hanging Indent’ with trailing commas, because
I think it gives the cleanest diffs, eg:
```
from third_party import (
    lib1,
    lib2,
    lib3,
    lib4,
)
```
1. https://pypi.python.org/pypi/isort
There are over 1,000 domains in our file. This is too much for
`parametrize` to handle when running the tests on multiple cores. We
end up with this error:
```
Different tests were collected between gw1 and gw2.
```
The thing that matters for which agreement an organisation has to sign
is whether or not that organisation is crown or non-crown.
There is only a partial overlap between crown/non-crown and
local/central. We can’t infer one from the other. So this commit makes it
explicit by marking all local government organisations as non-crown,
which is something we can know for sure.
We don’t, for example, know the inverse, that all parts of all central
government organisations are crown bodies (but we can mark some of them
as being so later on).
This adds information about which orgs have signed an MOU to the domain
list. The meaning of the attribute is:
- `true`: MOU signed for the whole organisation
- `false`: no MOU for any part of the organisation
- `null` (or missing): can’t be sure if it’s true or false
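So an entry in the domain list might look something like this (the
field name is an assumption, not necessarily the real one):
```
{
    "domain": "example-council.gov.uk",
    "agreement_signed": null
}
```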
This commit:
- makes the logic around looking up a domain a bit more sophisticated
by matching on the longest domain name first
- exposes the details about an organisation to consumers of the
`GovernmentDomain` class
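Matching on the longest domain first means a more specific entry beats
a more general one that it happens to end with, roughly:
```
def best_match(hostname, known_domains):
    # Longest first, so 'digital.cabinet-office.gov.uk' matches its own
    # entry rather than a broader 'cabinet-office.gov.uk' entry
    for domain in sorted(known_domains, key=len, reverse=True):
        if hostname == domain or hostname.endswith('.' + domain):
            return domain
```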
It will also be useful to know (especially for the API):
- when a letter was printed
- if it’s been printed or not
This commit:
- adds code to calculate these two pieces of information
- refactors the function to return a `namedtuple` – a tuple of two items
was manageable, but with four items it was getting hard to know what
each one meant – this lets us label each piece of information that is
being returned
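Something along these lines, where the field names are made up for
illustration:
```
from collections import namedtuple
from datetime import datetime

LetterTimings = namedtuple('LetterTimings', [
    'printed',
    'printed_at',
    'earliest_delivery',
    'latest_delivery',
])

timings = LetterTimings(
    printed=True,
    printed_at=datetime(2018, 8, 1, 17, 30),
    earliest_delivery=datetime(2018, 8, 3),
    latest_delivery=datetime(2018, 8, 6),
)

# Each value is readable by name, rather than guessing what timings[1] means
assert timings.printed_at.hour == 17
```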
Letter delivery depends on:
- how long it takes to print
- how long it takes to post
Both of these processes are affected by weekends, because people don’t
work on weekends.
It also depends on whether you submit your letter before or after 5pm,
because that’s the cut off time for getting a letter printed on a given
day – ie after 5pm on Monday is effectively the same as Tuesday and so
on.
But I reckon all our users need to know is roughly how long it will
take until the letter turns up on the recipient’s doorstep. So this
commit adds a function to calculate this. Doesn’t surface it on the
front end _yet_.
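A first pass at that calculation might look like this – the number of
days for printing and posting are assumptions, not the shipped values:
```
from datetime import timedelta

def estimated_delivery_date(created_at, days_to_print=1, days_to_post=2):
    # After the 5pm cut-off the letter goes in the next day's print run
    if created_at.hour >= 17:
        created_at += timedelta(days=1)

    delivery = created_at
    remaining = days_to_print + days_to_post
    while remaining:
        delivery += timedelta(days=1)
        # Printing and posting only happen on weekdays
        if delivery.weekday() < 5:  # Monday is 0, Friday is 4
            remaining -= 1

    return delivery.date()
```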