Show validation failure messages on letter notifications in red text,
not in the banner like we do on Uploads and Validation checker pages.
This is because it is a different step in the journey: the user has
already sent the notification, and the styling needs to be in line with
other places where the user is checking a notification they have already
sent.
We store our audit history in two ways:
1. A list of versions of a service
2. A list of events to do with API keys
In the future there could be auditing data which we want to display that
is stored in other formats (for example the event table).
This commit adds some objects which wrap around the different types of
auditing data, and expose a consistent interface to them. This
architecture will let us:
- write clean code in the presentation layer to display these events on
a page
- add more types of events in the future by subclassing the `Event` data
type, without having to rewrite anything in the presentation layer
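A minimal sketch of what these wrapper objects might look like (the class,
field and key names here are illustrative, not the real ones):

```python
class Event:
    """Base type giving every kind of auditing data the same interface."""

    def __init__(self, time, user_id, description):
        self.time = time
        self.user_id = user_id
        self.description = description


class ServiceVersionEvent(Event):
    """Wraps one entry from the list of versions of a service."""

    @classmethod
    def from_version(cls, version):
        return cls(
            time=version['updated_at'],
            user_id=version['created_by_id'],
            description='Updated the service',
        )


class APIKeyEvent(Event):
    """Wraps one entry from the list of API key events."""

    @classmethod
    def from_api_key_event(cls, event):
        return cls(
            time=event['created_at'],
            user_id=event['created_by_id'],
            description='{} an API key'.format(event['event_type']),
        )
```

The presentation layer only needs to know about the `Event` interface, so a
new kind of auditing data just means another subclass.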
Rather than forcing us to write the decorators in a specific order, let’s
just have one decorator call the other. This should mean fewer lines of
code and fewer annoying test failures. It also means that the same way
of raising a `401` (through the `current_app` method) is used
everywhere.
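Roughly the shape this describes, as a sketch: the decorator name and the
`has_permissions` check are made up, and I’ve used `abort()` here just to
keep the sketch self-contained rather than whatever `current_app` call the
real code makes.

```python
from functools import wraps

from flask import abort
from flask_login import current_user, login_required


def user_has_permissions(*permissions):
    def wrap(view):
        # Apply @login_required ourselves, so views no longer have to stack
        # the two decorators in a particular order.
        @wraps(view)
        @login_required
        def wrapped_view(*args, **kwargs):
            # `has_permissions` is illustrative of whatever check the real
            # decorator does
            if not current_user.has_permissions(*permissions):
                abort(403)
            return view(*args, **kwargs)
        return wrapped_view
    return wrap
```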
At the moment we have to update a YAML file and deploy the change to get
a new domain whitelisted.
We already have a thing for adding new domains – the organisation stuff.
This commit extends the validation to look in the `domains` table on the
API if it can’t find anything in the YAML whitelist.
This has the advantage of:
- not having to deploy code to whitelist a new domain
- forcing us to create new organisations as they come along, so that
users’ services automatically get allocated to the organisation once
their domain is whitelisted
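Very roughly, the validation might now look something like this sketch (the
helper names, file path and API endpoint are all made up for illustration):

```python
import yaml


def yaml_whitelisted_domains(path='app/domains.yml'):
    # Existing behaviour: the static whitelist we currently have to deploy
    # changes to.
    with open(path) as f:
        return set(yaml.safe_load(f))


def api_domains(api_client):
    # New behaviour: domains from the `domains` table, attached to
    # organisations on the API.
    return {row['domain'] for row in api_client.get('/domains')}


def is_gov_user(email_address, api_client):
    domain = email_address.split('@')[-1].lower()
    return (
        domain in yaml_whitelisted_domains()
        or domain in api_domains(api_client)
    )
```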
Converting Python data to CSV makes every field a string. This means
that in the report we return to the user every field will be a string,
even if it’s come from an `int` type in Python. This is because the CSV
‘standard’ doesn’t support any kind of typing.
Excel does support types for fields, so we can make our reports more
useful by preserving these types. This is particularly relevant in the
report we generate for performance platform, which needs the `count`
column to be a number type.
This commit adds extra code paths to the `Spreadsheet` class which mean
that it can be instantiated from either CSV data or a list of Python
data. Previously we were converting the Python data to CSV as an
intermediate step, before instantiating the class.
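A sketch of the shape of those code paths (not the real class, and the
method names are illustrative):

```python
import csv
import io


class Spreadsheet:

    def __init__(self, rows):
        self.rows = rows

    @classmethod
    def from_csv(cls, csv_data):
        # Every cell that comes out of a CSV parser is a string, because
        # the format has no notion of types.
        return cls(list(csv.reader(io.StringIO(csv_data))))

    @classmethod
    def from_rows(cls, rows):
        # Rows of native Python values keep their types, so an int stays
        # an int all the way through to the Excel output.
        return cls([list(row) for row in rows])
```

So the performance platform report can do something like
`Spreadsheet.from_rows([('date', 'count'), ('2019-01-01', 123)])` and the
`count` column stays an `int` for the Excel writer to use.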
Since this code isn’t trying to inherit from the code that also looked up
domain names in `domains.yml`, it can go back to being a lot simpler.
This code is thoroughly tested already here:
a249382e69/tests/app/main/test_validators.py (L74-L155)
Upgraded pyexcel-io from 0.5.14 to 0.5.16. This change causes Werkzeug
to be upgraded from 0.14.1 to 0.15.1, which requires some changes:
* ProxyFix now needs to be imported from a different location
* The status code of RequestRedirect has changed from 301 to 308. Since
status code 308 is not currently supported on Internet Explorer with
Windows 7 and 8.1, this subclasses RequestRedirect to keep the status
code at 301.
changelog: https://werkzeug.palletsprojects.com/en/0.15.x/changes/#version-0-15-0
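For reference, the two changes look roughly like this (the subclass name is
ours and purely illustrative):

```python
# ProxyFix moved out of werkzeug.contrib in Werkzeug 0.15:
from werkzeug.middleware.proxy_fix import ProxyFix

from werkzeug.routing import RequestRedirect


class NotifyRequestRedirect(RequestRedirect):
    # Werkzeug 0.15 changed this from 301 to 308, which IE on Windows 7
    # and 8.1 doesn't support, so pin it back to 301.
    code = 301
```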
Separated s3_client.py into 3 files: one for logos, one for CSV files and
one for the MOU.
This helps to keep things clearer now that we need to add lots more logo
functions for letters.
To avoid the problem of having confusing defaults, the postage is now
set explicitly on every template.
Putting the postage ‘inside’ the letter template makes the interaction
for changing it consistent with how other parts of the template are
added.
Plus everyone loves skeuomorphism.
In the long term, we don't want to show cancelled letters. But for now,
this changes cancelled letters to display in the same way as letters
with a status of permanent-failure, since we are currently giving
letters that we want to cancel the status of permanent-failure.
This commit adds content changes to the notification pages, particularly
the letter pages, which will make things clearer now that we will soon be
allowing letters to be cancelled.
The main changes are:
* The confirmation banner for letters sent from a CSV file now states when
printing will start.
* The notifications page now states which CSV file the notifications were
sent from
* The notification page for letters shows when printing starts (today,
tomorrow, or the date that the letter was printed)
Bumped the notifications-utils version. The `gmt_timezones` function in
this repo and the `utc_string_to_aware_gmt_datetime` function in
notifications-utils are the same, so the code has been updated to always
use the version in utils.
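If you’re looking for it, the utils version should be importable like this
(check the utils repo for the exact module path):

```python
from notifications_utils.timezones import utc_string_to_aware_gmt_datetime

utc_string_to_aware_gmt_datetime('2019-04-01 12:00:00')
```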
This duplicates how the task list pattern is coded in the GOV.UK
Prototype kit[1]. It adds ARIA attributes and the use of a
semantically-meaningful element (`<strong>`) to give more information to
screen reader users.
1. https://govuk-prototype-kit.herokuapp.com/docs/templates/task-list
A platform admin form accepts a list of references (one per line)
received from DVLA and sends them to the API to update notification
statuses.
References we get from DVLA start with a `NOTIFY00\d` prefix, which isn't
part of the reference we store in the database, so we remove the prefix
before sending the data to the API.
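As a sketch of that stripping (the function names are illustrative):

```python
import re


def strip_dvla_prefix(reference):
    # The NOTIFY00\d prefix isn't part of what we store in the database
    return re.sub(r'^NOTIFY00\d', '', reference.strip())


def parse_references(form_data):
    # The platform admin form takes one reference per line
    return [
        strip_dvla_prefix(line)
        for line in form_data.splitlines()
        if line.strip()
    ]
```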
The new `returned-letter` status should be treated as `delivered` for now,
until we decide how to display returned letters to users.
This is useful if you have lots of people sending messages and want to
report on who’s doing what.
Needs the API updating to return `created_by_name` in its response.
Because we alias domains (eg `foo.gsi.gov.uk` to `foo.gov.uk`, or where
a local council has multiple domains) it could be hard to look up a
brand (which has one domain field).
Therefore we need a way of getting the canonical domain from a user’s
email address, which we can later use to look up their branding.
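Something like this sketch, where the alias data and helper name are
invented for illustration:

```python
DOMAIN_ALIASES = {
    # aliased domain -> canonical domain
    'foo.gsi.gov.uk': 'foo.gov.uk',
    'westshire-dc.gov.uk': 'westshire.gov.uk',
}


def canonical_domain_for(email_address):
    domain = email_address.split('@')[-1].lower()
    return DOMAIN_ALIASES.get(domain, domain)
```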
We often check that a service has an appropriate text message sender as
a condition of them going live. We don’t mention this anywhere.
The services for which GOVUK is definitely not an appropriate sender are
those in local government. As we have more of these teams starting to
use Notify, we should streamline the process by making this check
automated.
This commit adds that check, for teams who:
- have text message templates
- have self-declared as NHS or local government
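The check could look roughly like this sketch (the keys and the shape of
the service data are illustrative, not the real ones):

```python
def should_flag_sms_sender(service):
    # `service` stands in for whatever dict-like object the admin app
    # already has when running go-live checks; keys are illustrative.
    return (
        service['has_sms_templates']
        and service['organisation_type'] in ('nhs', 'local')
        and service['sms_sender'] == 'GOVUK'
    )
```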
If you skip past the templates page (because you don’t have the edit
permission) but then click back you end up in a loop which redirects you
to the page you’re already on.
This commit makes sure that you’re sent back a step further, so you
don’t get stuck in that loop.
Things we’ve noticed from looking at real data that we could handle in a
smarter way:
- removing numbers (there might be a tom.smith2@dept.gov.uk if tom.smith
is already taken)
- removing middle initials (again, these tend to be used for
disambiguation and aren’t included when we ask people for their names)
- ignoring email addresses which only have someone’s initial, not their
first name (because we can’t make a decent guess in this case)
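A sketch of how those heuristics could look if we did handle them (entirely
illustrative, not implemented yet):

```python
import re


def normalise_local_part(local_part):
    # tom.smith2 -> tom.smith (trailing numbers just disambiguate duplicates)
    local_part = re.sub(r'\d+', '', local_part)

    parts = local_part.split('.')

    # t.smith -> give up, we can't guess a first name from an initial
    if len(parts[0]) <= 1:
        return None

    # tom.q.smith -> tom.smith (drop middle initials)
    if len(parts) > 2:
        parts = (
            [parts[0]]
            + [p for p in parts[1:-1] if len(p) > 1]
            + [parts[-1]]
        )

    return '.'.join(parts)
```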
Most people’s email addresses, especially in government, are in the format
firstname.lastname@department.gov.uk. This means that you can pretty
reliably guess that their name is ‘Firstname Lastname’.
When users are invited to Notify we know their email address already.
So this commit pre-populates the registration form based on this guess.
This is a nice little detail, but it should also stop the browser
pre-filling the name field with someone’s email address (which I think
happens because the browser assumes a registration form will have an
email field).
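The guess itself is simple enough to sketch (the helper name is
illustrative and the real code may handle more punctuation and edge cases):

```python
def guess_name_from_email_address(email_address):
    local_part = email_address.split('@')[0]
    # tom.smith@dept.gov.uk -> 'Tom Smith'
    return ' '.join(
        part.capitalize() for part in local_part.split('.')
    )


# Used to pre-populate the name field on the registration form, eg:
guess_name_from_email_address('tom.smith@dept.gov.uk')  # 'Tom Smith'
```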