This means we can use tools like "npm audit" to look for security
vulnerabilities we definitely need to fix, as they could pose a
direct risk to users. I've checked each of them with @tombye and
also against an external set of principles [^1].
Note: I've skimmed through the package-lock.json to check that the
only changes are adding "dev": true and updating a few integrity hashes.
[^1]: https://betterprogramming.pub/is-this-a-dependency-or-a-devdependency-678e04a55a5c
We added domdiff to replace the DiffDOM library
here:
87f54d1e88
DiffDOM had updated its code to be written to the
ECMAScript 6 (ES6) standard and so needed extra
work to run in the older browsers in our
support matrix. This was recorded as an issue
here:
https://www.pivotaltracker.com/n/projects/1443052/stories/165380360
Domdiff didn't work (see below for more details)
so this replaces it with the morphdom library.
Morphdom supports the same browsers as us and is
relied on by a range of large open source
projects:
https://github.com/patrick-steele-idem/morphdom#what-projects-are-using-morphdom
It was tricky to find alternatives to DiffDOM, so
if we have to source a replacement again in future,
other options could be:
- https://github.com/choojs/nanomorph
- https://diffhtml.org/index.html (using its
outerHTML method)
Why domdiff didn't work
It turns out that domdiff was replacing the page HTML
with the HTML from the AJAX response every time,
not just when they differed. This isn't a bug:
domdiff is bare-bones enough that it compares old
DOM nodes to new DOM nodes with ===. With our code,
this always evaluates to false because our new
nodes are built from HTML strings in the AJAX
response, so they are never the same node as the
old one.
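To make that concrete, here's a minimal illustration (the markup and data-key value are made up):

```javascript
var responseHTML = '<div data-key="counts">2 messages</div>';  // pretend AJAX response

var currentNode = document.querySelector('[data-key=counts]'); // node already in the page
var newNode = $(responseHTML).get(0);                          // node built from the response string

console.log(newNode === currentNode);  // always false: two distinct DOM node objects
```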
The intention behind the version of node in the
engines property was for it to be the minimum
required, but it was always missing the `>=`
prefix.
This adds that prefix and also adds a setting for
npm, to prevent use of insecure versions. See this
article for details:
https://github.blog/2021-09-08-github-security-update-vulnerabilities-tar-npmcli-arborist/
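For illustration, the engines field ends up shaped like this (the version numbers are placeholders, and expressing the npm minimum through engines is an assumption about how the npm setting is implemented):

```json
{
  "engines": {
    "node": ">=12.0.0",
    "npm": ">=7.21.0"
  }
}
```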
A while ago diffDOM moved its code to use ES6
modules and started using various language
features specific to ES6 (these two things
happened independently).
The result of this is that the version of diffDOM
suitable for our build pipeline, structured as an
immediately invoked function expression (IIFE),
now requires polyfills of some ES6 features to
work in the older browsers we support, like IE11.
It's also worth noting that, in the move to ES6,
the maintainers of diffDOM adopted a policy
whereby users who need to support older browsers
now have to add polyfill code for any ES6 features
the maintainers choose to use.
This commit proposes a move to the domdiff
library instead because:
- it runs on all JavaScript runtimes with no
polyfills
- it is 2KB instead of diffDOM's 25KB
Domdiff takes a different approach to diffDOM, in
that it compares existing nodes and new nodes and
replaces the existing ones with the new ones if
there are differences. By contrast, diffDOM will
make in-place changes to nodes if there are enough
similarities. In other words, in most situations,
diffDOM won't change the node in $component
whereas domdiff will.
Because of this, I've had to change the
updateContent.js code to cache the data-key
attribute's value, so we don't lose access to it
when we overwrite the $component variable with a
different jQuery selection.
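A sketch of the shape of that change (names are simplified and the domdiff call is condensed; the real code is in updateContent.js):

```javascript
var $component = $('[data-key=counts]');     // the node we keep up to date
var key = $component.data('key');            // cached before any DOM replacement

function replaceWith (newContentHTML) {
  domdiff(
    $component.get(0).parentNode,            // parent of the node being updated
    [$component.get(0)],                     // old child node
    [$(newContentHTML).get(0)]               // new child node, built from the AJAX response
  );

  // Re-select by the cached key: the old $component may now point at a detached node
  $component = $('[data-key="' + key + '"]');
}
```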
This adds Yubico's FIDO2 library and two APIs for working with the
"navigator.credentials.create()" function in JavaScript. The GET
API uses the library to generate options for the "create()" function,
and the POST API decodes and verifies the resulting credential. While
the options and response are dict-like, CBOR is necessary to encode
some of the byte-level values, which can't be represented in JSON.
Much of the code here is based on the Yubico library example [1][2].
Implementation notes:
- There are definitely better ways to alert the user about failure, but
window.alert() will do for the time being. Using location.reload() is
also a bit jarring if the page scrolls, but not a major issue.
- Ideally we would use window.fetch() to do AJAX calls, but we don't
have a polyfill for this, and we use $.ajax() elsewhere [3]. We need
to do a few weird tricks [6] to stop jQuery trashing the data (see
the sketch after these notes).
- The FIDO2 server doesn't serve web requests; it's just a "server" in
the sense of WebAuthn terminology. It lives in its own module, since it
needs to be initialised with the app / config.
- $.ajax returns a promise-like object. Although we've used ".fail()"
elsewhere [3], I couldn't find a stub object that supports it, so I've
gone for ".catch()", and used a Promise stub object in tests.
- WebAuthn only works over HTTPS, but there's an exception for "localhost"
[4]. However, the library is a bit too strict [5], so we have to disable
origin verification to avoid needing HTTPS for dev work.
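For orientation, the registration flow looks roughly like this. The endpoint path and the CBOR helper are assumptions, and the jQuery options are the "weird tricks" from [6], so treat this as a sketch rather than the actual code:

```javascript
$.ajax({
  url: '/webauthn/register',                    // hypothetical GET endpoint
  xhrFields: { responseType: 'arraybuffer' }    // hand back raw bytes for CBOR decoding
}).then(function (data) {
  // Decode the creation options and pass them to the browser
  return navigator.credentials.create(CBOR.decode(data));
}).then(function (credential) {
  // Re-encode the byte-level parts of the credential and POST them back
  return $.ajax({
    url: '/webauthn/register',                  // hypothetical POST endpoint
    method: 'POST',
    data: CBOR.encode({
      attestationObject: new Uint8Array(credential.response.attestationObject),
      clientDataJSON: new Uint8Array(credential.response.clientDataJSON)
    }),
    processData: false,                         // stop jQuery serialising the binary body
    contentType: 'application/cbor'
  });
}).then(function () {
  location.reload();
}).catch(function () {
  window.alert('Error during registration');
});
```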
[1]: c42d9628a4/examples/server/server.py
[2]: c42d9628a4/examples/server/static/register.html
[3]: 91453d3639/app/assets/javascripts/updateContent.js (L33)
[4]: https://stackoverflow.com/questions/55971593/navigator-credentials-is-null-on-local-server
[5]: c42d9628a4/fido2/rpid.py (L69)
[6]: https://stackoverflow.com/questions/12394622/does-jquery-ajax-or-load-allow-for-responsetype-arraybuffer
Update all methods that were previously calling @cache.delete('service-{service-id}-template-None') to call _delete_template_cache_for_service instead.
Remove the call to get service templates; it's not needed since the cache for all template versions is being deleted.
In very old browsers it used to be that you could only make 2 concurrent
requests from the same origin.
So base64 encoding of images into CSS was an optimisation that became
popular because it reduced the number of separate requests.
However base64 encoding images has a few disadvantages:
- it increases the size of the image by about 30%
- it increases the size of the CSS file, which is a
[render blocking resource](https://web.dev/render-blocking-resources/)
so makes the page appear to load more slowly for the sake of some
images which, on most pages, never get used
- GZipping things that are already compressed (for example PNG data) is
very CPU-intensive, and might be why CloudFront sometimes gives up
Removing the inlining of images reduces the size of the CSS we’re
sending to the browser considerably:
–| Before | After | Saving
---|---|---|---
Uncompressed | 198kb | 164kb | 17%
Compressed | 38kb | 23kb | 39%
CloudFront, our CDN, sometimes decides not to gzip
assets. Because of this, we're going to gzip them
ourselves prior to upload instead.
This will involve:
1. adding gzipping to the make task that uploads
them
2. turning compression off in CloudFront
There is already a pull request up for number 1:
https://github.com/alphagov/notifications-admin/pull/3733
Because deploying all this will, at some point,
create a state where CloudFront is set to compress
assets that are already compressed, we need to
test that it doesn't re-compress them.
This adds a frontend build task that generates a
test asset which is:
- a copy of app/static/stylesheets/main.css
- renamed to include an MD5 hash of its contents
- already gzipped
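Roughly what the task does, sketched with Node core modules (the output file name is illustrative; the real task sits in the gulp-based frontend build):

```javascript
const crypto = require('crypto');
const fs = require('fs');
const zlib = require('zlib');

const css = fs.readFileSync('app/static/stylesheets/main.css');

// Fingerprint the contents so the test asset gets a unique name per build
const hash = crypto.createHash('md5').update(css).digest('hex');

// Gzip it ourselves, since CloudFront compression is being turned off
fs.writeFileSync(`app/static/stylesheets/test-${hash}.css`, zlib.gzipSync(css));
```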
Once deployed, the test will be to:
1. download the asset from the live environment
2. unzip it
3. diff it against app/static/stylesheets/main.css
Leaflet seems to be the go-to library for rendering maps these days. It
will be useful for the broadcast work.
This commit adds the Leaflet JavaScript and CSS to our asset pipeline.
The JavaScript is already minified so all we need to do is copy it. The
CSS is uncompressed so we put it through the same pipe as our other
stylesheets.
I’m keeping these as separate files because they’re quite heavy (or the
JS is at least – 38kb minified) so I want them to only be loaded on the
pages where they’re used. Most users of Notify will never need to see a
map.
By default our AJAX calls were made every 2 seconds. Then they were 5 seconds
because someone reckoned 2 seconds was putting too much load on the
system. Then we made them 10 seconds while we were having an incident.
Then we made them 20 seconds for the heaviest pages, but back to 5
seconds or 2 seconds for the rest of the pages.
This is not a good situation because:
- it slows all services down equally, no matter how much traffic they
have, or which features they have switched on
- it slows everything down by the same amount, no matter how much load
the platform is under
- the values are set based on our worst performance, until we manually
remember to switch them back
- we spend time during incidents deploying changes to slow down the
dashboard refresh time because it’s a nothing-to-lose change that
might relieve some symptoms, when we could be spending time digging
into the underlying cause
This pull request makes the JavaScript smarter about how long it waits
before it makes another AJAX call. It bases the delay on how long the
server takes to respond (as a proxy for how much load the server is
under).
It’s based on the square root of the response time, so is more sensitive
to slow downs early on, and less sensitive to slow downs later on. This
helps us give a more pronounced difference in delay between an AJAX call
that is fast (for example the page for a single notification) and one
that is slow (for example a dashboard for a service with lots of
traffic).
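A minimal sketch of the idea; the constants are inferred from the examples below and the 1 second floor is an assumption, so treat this as approximate rather than the exact implementation:

```javascript
// Wait longer after slow responses, with diminishing sensitivity as they get slower
function calculateBackoff (responseTimeInMilliseconds) {
  return Math.max(
    1000,                                                 // assumed floor: never poll faster than once a second
    (250 * Math.sqrt(responseTimeInMilliseconds)) - 1000
  );
}

calculateBackoff(130);   // ≈ 1,850ms, matching the first row of the table below
calculateBackoff(6000);  // ≈ 18,365ms
```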
*Some examples of what this would mean for various pages*
Page | Response time | Wait until next AJAX call
---|---|---
Check a reply-to address | 130ms | 1,850ms
Brand new service dashboard | 229ms | 2,783ms
HM Passport Office dashboard | 634ms | 5,294ms
NHS Coronavirus Service dashboard | 779ms | 5,977ms
_Example of the kind of slowness we’ve seen during an incident_ | 6,000ms | 18,364ms
GOV.UK email dashboard | `HTTP 504` | 😬
Fix for issue that caused this revert:
https://github.com/alphagov/notifications-admin/pull/3196
Note:
gulp-css-url-adjuster operates on an Abstract
Syntax Tree (AST) derived from `main.css`. The
CSS it outputs loses the compression that
gulp-sass applies.
This moves compression out of Sass, to a step
after the URLs are adjusted.
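If it helps to picture the reordering, the pipeline could look something like this (the compression plugin and the prepended path are assumptions; the point is only that compression runs after the URL adjustment rather than inside gulp-sass):

```javascript
const gulp = require('gulp');
const sass = require('gulp-sass');
const urlAdjuster = require('gulp-css-url-adjuster');
const cleanCSS = require('gulp-clean-css');     // assumed compression step

gulp.task('sass', () =>
  gulp.src('app/assets/stylesheets/main.scss')
    .pipe(sass())                               // no outputStyle: 'compressed' here any more
    .pipe(urlAdjuster({ prepend: '/static/' })) // adjust URLs against the uncompressed CSS
    .pipe(cleanCSS())                           // compress afterwards, as a separate step
    .pipe(gulp.dest('app/static/stylesheets'))
);
```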
Means our rollup bundling doesn't leave any
artefact files lying around that we'd then have to
deal with.
Also includes:
- removal of some JSHint config marking the
artefacts as scripts to ignore
- use of streamqueue package to allow the same
ordering of scripts as before
Includes Sass that targeted GOV.UK Template HTML
and also moves some link styles to `globals.scss`.
Also removes bits of frontend build that copied
over GOV.UK Template files.