it just shows an h1, so isn't helpful for people. We can re-use the 500
error page, which includes the instruction "Try again later" and
instructions on what to do next (check the status page, email Notify
support).
this required refactoring to ensure we can show the 500 error page while
still returning the required status code
if we expect a 400 (for example, the API returns 400 if the service
name is already in use) then we should handle it in the view function,
so we can display a relevant error message to the user. But if the API
returns a 400 that we didn't expect (for example, because we sent
variables of the wrong type to an endpoint with a schema), then that is
a bug that we should fix, like any other raised exception. By treating
it as a 500 we encourage users to report it to us, and we will also
get an alert email
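As a sketch of the pattern (route, client, exception and message names
here are illustrative, not necessarily the real ones):
```
from flask import Blueprint, flash, redirect, request, url_for

# assumed import paths: the real client and exception live elsewhere
from app import service_api_client
from app.notify_client import HTTPError

main = Blueprint('main', __name__)


@main.route('/services/<service_id>/name', methods=['POST'])
def rename_service(service_id):
    try:
        service_api_client.update_service(service_id, name=request.form['name'])
    except HTTPError as e:
        if e.status_code == 400 and 'already in use' in str(e.message):
            # expected 400: show the user a relevant error message
            flash('That service name is already in use')
            return redirect(url_for('.rename_service', service_id=service_id))
        # unexpected 400 (or anything else): re-raise, so it's treated
        # as a 500 like any other bug, and we get an alert email
        raise
    return redirect(url_for('.service_settings', service_id=service_id))
```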
Currently you have no way of getting to the returned letter page. This
commit adds a link to it from the dashboard, following the pattern of
the new received text messages banner.
This is a friendlier and better way of showing dates anywhere except in
a report which might be archived and referred back to later.
Bit trickier to implement here because a date requires ‘on’ beforehand,
but we don’t say ‘on today’ in English.
This helps people understand how delivery dates fit into the week,
because if sending a letter spans the weekend it will take a bit longer.
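Roughly the logic needed, as a sketch (the real helper lives in utils
and will differ):
```
from datetime import date, timedelta


def preposition_and_day(when, today=None):
    # a date needs 'on' before it ('on Thursday 21 March') but relative
    # words don't; we don't say 'on today' in English
    today = today or date.today()
    if when == today:
        return 'today'
    if when == today + timedelta(days=1):
        return 'tomorrow'
    # spell out the day of the week so people can see when a delivery
    # window spans the weekend
    return 'on ' + when.strftime('%A %d %B').replace(' 0', ' ')
```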
We can remove `|string` because `utc_string_to_aware_gmt_datetime` now
accepts datetimes.
service contact blocks contain new lines. Jinja2 leaves newlines alone
(they stay as literal newlines, which browsers collapse), so we need
to turn them into `<br>` tags to show the formatting that the user has
added. We were previously just doing `{{ block | nl2br | safe }}`:
nl2br turns the new lines into `<br>` tags, and then `safe` tells
Jinja that it doesn't need to escape the HTML.
This causes issues if the user adds `<script>alert(1)</script>` to
their contact block (or some other evil XSS payload): it gets let
through because of the `safe` flag.
To solve this, use `Markup.escape()` to sanitise any HTML, and then
convert the new lines to `<br>`.
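So the filter ends up looking something like this (a sketch; the
filter name is illustrative):
```
from markupsafe import Markup, escape


def format_contact_block(value):
    # escape() neutralises anything like <script>alert(1)</script> and
    # returns a Markup string; joining with a Markup '<br>' keeps the
    # result marked safe, so Jinja won't escape the <br> tags
    return Markup('<br>').join(escape(value).split('\n'))
```
The template then just does `{{ block | format_contact_block }}`, with
no `| safe` needed.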
bump utils
another xss
Then it’s one less cookie we have to get users to opt in to. We don’t
derive any value from YouTube setting cookies.
`youtube-nocookie.com` is a domain provided by Google for this purpose.
Includes:
- in gulpfile.js:
  - add details macro to list of those copied from GOVUK Frontend
  - remove existing details polyfill
- convert all <details> tags but one to use the GOVUK Frontend details
  component
- add jinja boolean filter to help set the 'open' attribute of
  <details> tags
Notes on the `<details>` not included in this:
The `<details>` used for notifications items on
the API integration page is not possible to
reproduce with the GOV.UK Frontend macro so I'm
splitting the resulting work out into its own
commit.
we have a hunch that some session-related issues we've seen over the
last few weeks might be caused by a race condition: cookies set by
subresources (image previews of letters in the send flow) arrive just
as the img request is cancelled, because the user has clicked a button
to navigate to a new page, but still manage to set the cookie. We're
not entirely sure what's going on, but not setting cookies on image
fetches seems sensible regardless: images are always loaded as
subresources (ie via a `src` attribute on an HTML element), so they
should never need to change the cookies. We've done this by creating a
new blueprint that doesn't set session.permanent, and doesn't call
`save_service_or_org_after_request` either.
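In sketch form (blueprint and handler names are illustrative):
```
from flask import Blueprint, session

main = Blueprint('main', __name__)

# blueprint for image subresources: it has no handlers that touch the
# session, so its responses never need to set a cookie
no_cookie = Blueprint('no_cookie', __name__)


@main.before_request
def make_session_permanent():
    # only ordinary page requests mark the session permanent (and so
    # refresh the session cookie); views registered on `no_cookie` skip
    # this, and skip save_service_or_org_after_request too
    session.permanent = True
```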
cookies are sent back to the browser if:
`session.modified or (session.permanent and SESSION_REFRESH_EACH_REQUEST)`
(where the latter is a Flask config setting).
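That's paraphrasing Flask's own check:
```
# paraphrased from flask/sessions.py
class SessionInterface:
    def should_set_cookie(self, app, session):
        return session.modified or (
            session.permanent and app.config['SESSION_REFRESH_EACH_REQUEST']
        )
```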
Turning off SESSION_REFRESH_EACH_REQUEST (which is True by default)
means that we will only send the session cookie back if the session
has been modified. In practice, the session is modified on literally
every request, in the after_request handler
`save_service_or_org_after_request`. This is accidentally convenient,
as it guarantees that we'll still send back the cookie normally even
though SESSION_REFRESH_EACH_REQUEST is disabled. Sending back the
cookie updates the expiry time (20 hours), so we need to keep doing
this to preserve the existing session timeout behaviour.
`node_modules` isn't available in a live
environment so the `.njk` templates we're now
using from GOVUK Frontend need to go in the repo's
code.
Adds them in a new `app/templates/vendor` folder
to match how we do that in
`app/assets/stylesheets`.
Sometimes we manually check that a URL parameter is in a required set.
Sometimes we don’t bother.
This commit adds a URL converter to do this so that:
- we don’t have to re-write the same code every time
- it’s easier to apply this check to other endpoints
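A sketch of the converter (the set of allowed values shown is
illustrative):
```
from flask import Flask
from werkzeug.routing import BaseConverter, ValidationError

app = Flask(__name__)


class TemplateTypeConverter(BaseConverter):

    ALLOWED_TYPES = {'email', 'sms', 'letter'}

    def to_python(self, value):
        # raising ValidationError makes the rule not match, so a bad
        # value 404s before it ever reaches the view function
        if value not in self.ALLOWED_TYPES:
            raise ValidationError()
        return value


app.url_map.converters['template_type'] = TemplateTypeConverter

# then a route can say, for example:
# '/services/<uuid:service_id>/notifications/<template_type:message_type>.csv'
```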
This means endpoints that previously allowed a `template_type` or
`message_type` of `None` now 404. So I’ve had to add new routes with
URLs that don’t include those parameters.
So this…:
```
/services/128b91b6-2996-4107-bb65-51b7c24a728d/notifications/sms.csv
/services/128b91b6-2996-4107-bb65-51b7c24a728d/notifications/None.csv
```
…becomes:
```
/services/128b91b6-2996-4107-bb65-51b7c24a728d/notifications/sms.csv
/services/128b91b6-2996-4107-bb65-51b7c24a728d/notifications.csv
```
This matches what we do for the HTML-responding equivalent (see
265931d217/app/main/views/jobs.py (L215-L216))
Because it means you often have to cast to string in your application
code just to get your tests passing.
The method being monkey patched is originally defined here: b81aa0f18c/src/werkzeug/routing.py (L1272)
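The patch presumably amounts to something like this (a sketch):
```
from werkzeug.routing import UUIDConverter


def to_python(self, value):
    # keep the value as the string from the URL, rather than werkzeug's
    # default of converting it to a uuid.UUID instance
    return value


UUIDConverter.to_python = to_python
```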
Scanning the page is difficult at the moment because it’s hard to tell
how far apart in time events are, and thereby determine which events
might be related.
Grouping the events by day quickly lets users narrow their focus to
a meaningful subset of the events.
We store our audit history in two ways:
1. A list of versions of a service
2. A list of events to do with API keys
In the future there could be auditing data which we want to display that
is stored in other formats (for example the event table).
This commit adds some objects which wrap around the different types of
auditing data, and expose a consistent interface to them. This
architecture will let us:
- write clean code in the presentation layer to display these events on
a page
- add more types of events in the future by subclassing the `Event` data
type, without having to rewrite anything in the presentation layer
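The shape of those objects is roughly this (names and fields are
illustrative):
```
class Event:
    # a consistent interface over auditing data, however it is stored
    def __init__(self, raw):
        self._raw = raw

    @property
    def time(self):
        raise NotImplementedError

    @property
    def description(self):
        raise NotImplementedError


class APIKeyRevokedEvent(Event):
    # assumed field names, for illustration only
    @property
    def time(self):
        return self._raw['updated_at']

    @property
    def description(self):
        return 'Revoked the {} API key'.format(self._raw['name'])
```
The presentation layer can then loop over a list of events without
caring where each one came from.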
This makes things consistent: an option which contains more options
always has a hint saying how many options it contains.
Also adds a formatter to get us ready for 1,000 services 🎉
The data flow of other bits of our application looks like this:
```
API (returns JSON)
⬇
API client (returns a built in type, usually `dict`)
⬇
Model (returns an instance, eg of type `Service`)
⬇
View (returns HTML)
```
The user API client was architected weirdly, in that it returned a model
directly, like this:
```
API (returns JSON)
⬇
API client (returns a model, of type `User`, `InvitedUser`, etc)
⬇
View (returns HTML)
```
This mixing of different layers of the application is bad because it
makes it hard to write model code that doesn’t have circular
dependencies. As our application gets more complicated we will be
relying more on models to manage this complexity, so we should make it
easy, not hard, to write them.
It also means that most of our mocking was of the User model, not just
the underlying JSON. So it would have been easy to introduce subtle bugs
to the user model, because it wasn’t being comprehensively tested. A lot
of the changed lines of code in this commit mean changing the tests to
mock only the JSON, which means that the model layer gets implicitly
tested.
For those reasons this commit changes the user API client to return
JSON, not an instance of `User` or other models.
Upgraded pyexcel-io from 0.5.14 to 0.5.16. This change causes Werkzeug
to be upgraded from 0.14.1 to 0.15.1, which requires some changes:
* ProxyFix now needs to be imported from a different location
* The status code of RequestRedirect has changed from 301 to 308. Since
  status code 308 is not currently supported by Internet Explorer on
  Windows 7 and 8.1, this subclasses RequestRedirect to keep the status
  code of 301.
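The subclass itself is tiny (a sketch; wiring it in so werkzeug raises
it is not shown):
```
from werkzeug.routing import RequestRedirect


class HTTP301RequestRedirect(RequestRedirect):
    # werkzeug 0.15 changed RequestRedirect.code from 301 to 308, which
    # IE on Windows 7 and 8.1 doesn't support, so pin it back to 301
    code = 301
```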
changelog: https://werkzeug.palletsprojects.com/en/0.15.x/changes/#version-0-15-0
when clients are defined in app/__init__.py, it increases the chance of
cyclical imports. By moving module level client singletons out to a
separate extensions file, we stop cyclical imports, but keep the same
code flow - the clients are still initialised in `create_app` in
`__init__.py`.
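The pattern, in sketch form (the client import path is an assumption):
```
# app/extensions.py: module-level client singletons, created
# unconfigured at import time
from notifications_utils.clients.redis.redis_client import RedisClient  # assumed path

redis_client = RedisClient()


# app/__init__.py still initialises the clients in create_app, so the
# code flow is unchanged; other modules import from app.extensions
def create_app(application):
    from app.extensions import redis_client

    redis_client.init_app(application)
```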
The redis client in particular is handled differently: previously
redis was set up on the `NotifyAdminAPIClient` base class, but now
there's one singleton in `app.extensions`. This was done so that we
can access redis from outside of the existing clients.
We were getting all letter logos from a method in the email branding
client. Since we will be adding more client methods to deal with
letters, it makes things clearer to separate the email and letter
branding clients.
Changed the table for displaying all notifications to show letters which
have the status of 'validation-failed' as 'Validation failed' instead of
'Cancelled'.
The individual notification page for a letter which has failed
validation has not been changed since this already has a description
(letter has content outside the printable area).
In the long term, we don't want to show cancelled letters. But for
now, this changes cancelled letters to display in the same way as
letters with a status of permanent-failure, since we are currently
giving letters that we want to cancel the status of permanent failure.
The CDN URLs aren’t included in the content security policy. So
browsers will refuse to load them.
This commit:
- adds each of the CDN URLs to the content security policy
- only prepends URLs in CSS files with `/static/` if we’re running
  locally (because the CDN URLs are like `static.example.com` not
  `example.com/static`)
Anything served from the `www.notifications.service.gov.uk` domain is:
- not gzipped
The PaaS proxy used to GZip and set headers for anything served from a
path starting with `/static/`:
76dd511a8a/ansible/roles/paas-proxy/templates/admin.conf.j2 (L53-L64)
Anything served from `static.notifications.service.gov.uk` is:
- GZipped
- and as a bonus, cached by Cloudfront where possible (meaning the
requests won’t ever hit our app)
This commit moves serving of static assets from `/static/` to
`static.notifications.service.gov.uk`, to get the benefits listed
above.
***
We could do even better by setting long cache expiry headers on the static subdomain (currently they’re only set to cache for 60 seconds). But that’s out of scope for this commit.