On the ‘find user’ page it says ‘sms_auth’ instead of ‘Text message
code’.
This commit fixes that, and adds a handy formatter so it’s easier to do
the right thing in the future.
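As an illustration, the formatter could look something like this (the function name and the `email_auth` mapping are assumptions, not the committed code):

```python
# Illustrative sketch of the kind of formatter this commit adds
def format_auth_type(auth_type):
    return {
        "sms_auth": "Text message code",
        "email_auth": "Email link",
    }.get(auth_type, auth_type)
```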
Having this as a function which does string parsing and manipulation
surprised me a bit when I was trying to figure out why something wasn’t
working.
It’s more in line with the way we do other config like this (for example
`ASSET_PATH`) to make it a simple config variable, rather than trying to
be clever and guess things based on other config variables.
It’s also less code, and is explicit enough that it doesn’t need tests.
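An illustrative sketch of the difference (the names and values here are made up):

```python
ASSET_PATH = "https://static.example.com/"

# Before: guess the value by parsing another config variable
LOGO_DOMAIN = ASSET_PATH.split("//")[1].rstrip("/").replace("static", "static-logos")

# After: a simple, explicit config variable
LOGO_DOMAIN = "static-logos.example.com"
```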
Brings in https://github.com/alphagov/notifications-utils/pull/889/files
At the moment, we are not doing any transformation of features before
applying geometric algorithms to them. This is, in effect, assuming that
the earth is flat.
This new version of utils implements the transformation of our polygons
to a Cartesian plane. In other words, it converts them from being
defined in spherical degrees to metres.
For the admin app this means we need to convert places where the code
expects things to be measured in degrees to work in metres instead.
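As an illustration of the conversion (this assumes pyproj and shapely; the utils PR may implement it differently):

```python
from pyproj import Transformer
from shapely.geometry import Polygon
from shapely.ops import transform

# Project from WGS84 degrees to British National Grid metres
to_metres = Transformer.from_crs("EPSG:4326", "EPSG:27700", always_xy=True).transform

polygon_in_degrees = Polygon([(-0.1, 51.5), (-0.1, 51.6), (0.1, 51.6), (0.1, 51.5)])
polygon_in_metres = transform(to_metres, polygon_in_degrees)

print(polygon_in_metres.area)  # area is now in square metres, not square degrees
```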
We give estimates of the area for those who can’t see the map. These
estimates were needlessly precise, gave a false sense of accuracy and
were causing intermittent test failures between different environments.
This commit rounds them in the same way that we round the count of
phones.
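A sketch of that kind of rounding, assuming ‘the same way’ means rounding to significant figures (the function name is illustrative):

```python
from math import floor, log10

def round_to_significant_figures(value, figures=1):
    if value == 0:
        return 0
    magnitude = floor(log10(abs(value)))
    return round(value, -(magnitude - figures + 1))

round_to_significant_figures(23456)  # 20000 – plenty precise for an estimate
```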
This adds Yubico's FIDO2 library and two APIs for working with the
"navigator.credentials.create()" function in JavaScript. The GET
API uses the library to generate options for the "create()" function,
and the POST API decodes and verifies the resulting credential. While
the options and response are dict-like, CBOR is necessary to encode
some of the byte-level values, which can't be represented in JSON.
Much of the code here is based on the Yubico library example [1][2].
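A condensed sketch of the two APIs, closely following the Yubico example [1] – the route names and the 0.x-era library calls are taken from that example rather than from this exact commit:

```python
from fido2 import cbor
from fido2.client import ClientData
from fido2.ctap2 import AttestationObject
from fido2.server import Fido2Server
from fido2.webauthn import PublicKeyCredentialRpEntity
from flask import Flask, request, session

app = Flask(__name__)
app.secret_key = "dev-only"
server = Fido2Server(PublicKeyCredentialRpEntity("localhost", "Notify"))

@app.route("/webauthn/register")
def registration_options():
    # GET: options for navigator.credentials.create(), CBOR-encoded
    # because some values are raw bytes that JSON can't represent
    options, state = server.register_begin(
        {"id": b"user-id", "name": "a.user", "displayName": "A User"}
    )
    session["fido2_state"] = state
    return cbor.encode(options)

@app.route("/webauthn/register", methods=["POST"])
def registration_complete():
    # POST: decode and verify the credential created by the browser
    data = cbor.decode(request.get_data())
    auth_data = server.register_complete(
        session.pop("fido2_state"),
        ClientData(data["clientDataJSON"]),
        AttestationObject(data["attestationObject"]),
    )
    # auth_data.credential_data is what we'd store against the user
    return cbor.encode({"status": "OK"})
```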
Implementation notes:
- There are definitely better ways to alert the user about failure, but
window.alert() will do for the time being. Using location.reload() is
also a bit jarring if the page scrolls, but not a major issue.
- Ideally we would use window.fetch() to do AJAX calls, but we don't
have a polyfill for this, and we use $.ajax() elsewhere [3]. We need
to do a few weird tricks [6] to stop jQuery trashing the data.
- The FIDO2 server doesn't serve web requests; it's just a "server" in
the sense of WebAuthn terminology. It lives in its own module, since it
needs to be initialised with the app / config.
- $.ajax returns a promise-like object. Although we've used ".fail()"
elsewhere [3], I couldn't find a stub object that supports it, so I've
gone for ".catch()", and used a Promise stub object in tests.
- WebAuthn only works over HTTPS, but there's an exception for "localhost"
[4]. However, the library is a bit too strict [5], so we have to disable
origin verification to avoid needing HTTPS for dev work.
[1]: c42d9628a4/examples/server/server.py
[2]: c42d9628a4/examples/server/static/register.html
[3]: 91453d3639/app/assets/javascripts/updateContent.js (L33)
[4]: https://stackoverflow.com/questions/55971593/navigator-credentials-is-null-on-local-server
[5]: c42d9628a4/fido2/rpid.py (L69)
[6]: https://stackoverflow.com/questions/12394622/does-jquery-ajax-or-load-allow-for-responsetype-arraybuffer
This matches the existing performance platform page, and I think is a
bit easier to read for high-level numbers where you don’t need to see
that they’re changing second-by-second.
Note: there’s no option at the moment to set the service broadcast account
type back to None, or to remove the broadcast permission. This has been
done for speed of development, given that the chance of us needing this is
very low. We can add it later if we need to.
As formatters, we can use them in Jinja or Python code.
It also means we don’t need to import them every time we want to use
them – they’re always available in the template context.
For now this doesn’t remove the macros, it just aliases them to the
formatters. This gives us confidence that the formatters are working the
same way the old macros did, and reduces the diff size of each commit.
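A minimal sketch of that registration, with an illustrative formatter:

```python
from flask import Flask

app = Flask(__name__)

def format_thousands(number):
    # e.g. 1234567 → '1,234,567'
    return f"{number:,}"

# Register once on the app, so every template can use it without imports…
app.add_template_filter(format_thousands)  # {{ count|format_thousands }}
app.add_template_global(format_thousands)  # {{ format_thousands(count) }}

# …and it's still a plain function, callable from Python code
assert format_thousands(1234567) == "1,234,567"
```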
We have lots of functions for converting various types of data into
strings to be displayed to the user somewhere.
This commit collects all these functions into their own module, rather
than having them cluttering up `app/__init__.py` or buried amongst
various other things that have ended up in `app/utils.py`.
We added this code in
https://github.com/alphagov/notifications-admin/pull/3371/files to
account for Flask Login renaming its cookies. We wanted our apps to be
compatible with the old and new names, so people didn’t get logged out
when we rolled out the change.
Now that all the cookies with the old names will have expired (some
weeks have passed since March) we can remove this loop.
When looking at Google’s PageSpeed Insights tool as part of the
compression work I noticed a suggestion that we preload our font files.
The tool suggests this should save about 300ms on first page load time.
***
Our font files are referenced from our CSS. This means that the browser
has to download and parse the CSS before it knows where to find the font
files, so the requests happen in sequence.
We can make the requests happen in parallel by using a `<link>` tag with
`rel=preload`. This tells the browser to start downloading the fonts
before it’s even started downloading the CSS (the CSS will be the next
thing to start downloading, since it’s the next `<link>` element in the
head of the HTML).
Downloading fonts before things like images is important because once
the font is downloaded it causes the layout to repaint, and shift
everything around. So the page doesn’t feel stable until after the fonts
have loaded.
Google call this [cumulative layout shift](https://web.dev/cls/), which
is a score for how much the page moves around. A lower score means a
better experience (and, less importantly for us, means the page might
rank higher in search results).
We’re only preloading the WOFF2 fonts because only modern browsers
support preload, and these browsers also all support WOFF2.
We set an empty `crossorigin` attribute (which means anonymous-mode)
because the preload request needs to match the origin’s CORS mode. See
https://developer.mozilla.org/en-US/docs/Web/HTML/Preloading_content#CORS-enabled_fetches
for more details.
We set `as=font` because this helps the browser use the correct content
security policy, and prioritise which requests to make first.
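In the template this is a tag along the lines of `<link rel="preload" href="…" as="font" crossorigin>`. The same hint can also be sent as an HTTP `Link` header, sketched here with a placeholder font URL:

```python
from flask import Flask

app = Flask(__name__)

@app.after_request
def preload_fonts(response):
    # Same hint as the <link> tag, expressed as a response header
    if (response.content_type or "").startswith("text/html"):
        response.headers.add(
            "Link",
            "<https://static.example.com/fonts/example.woff2>; "
            "rel=preload; as=font; crossorigin",
        )
    return response
```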
When a browser loads a Notify page it does the following:
- DNS and TLS handshake for notifications.service.gov.uk
- download some HTML
- sees that the HTML needs to load some CSS
- DNS and TLS handshake for static.notifications.service.gov.uk
- downloads the CSS
We can speed things up in modern browsers by parallelising this process
a bit. These browsers support some HTTP headers[1] that allow them to
connect to other origins sooner.
After this change the steps are:
- DNS and TLS handshake for notifications.service.gov.uk
- receive response headers and simultaneously:
- download some HTML
- DNS and TLS handshake for static.notifications.service.gov.uk
- sees that the HTML needs to load some CSS
- downloads the CSS
1. https://developer.mozilla.org/en-US/docs/Web/Performance/dns-prefetch
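A sketch of what sending those headers from Flask could look like (the hook is illustrative, and the exact headers the commit adds may differ):

```python
from flask import Flask

app = Flask(__name__)
STATIC_ORIGIN = "https://static.notifications.service.gov.uk"

@app.after_request
def add_resource_hints(response):
    # preconnect does the DNS lookup and TCP/TLS handshake up front;
    # dns-prefetch is the older hint for browsers that only support that
    response.headers.add("Link", f"<{STATIC_ORIGIN}>; rel=preconnect")
    response.headers.add("Link", f"<{STATIC_ORIGIN}>; rel=dns-prefetch")
    return response
```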
Broadcast services only have broadcast templates. But we show the
template type under the name of the template. This is redundant. It
would be better to preview the content of the template instead.
This then makes the templates page consistent with the dashboard.
Depends on:
- [ ] https://github.com/alphagov/notifications-api/pull/2996
Includes the following pages:
- letter contact (in service settings)
- sms-senders page (in service settings)
- email reply-to (in service settings)
- API key page
Note: the call on the letter contact page uses
the first line of the contact address as the
unique identifier for each button.
This commit removes the code that puts areas into the session and instead
creates and then updates a draft broadcast in the database.
This is so we can avoid session-related bugs, and potentially having a
large session when we start adding personalisation etc.
Once a broadcast is ready to go it is set to `broadcasting` straight
away with no approval. We’ll revisit this as we learn more about how
users might want to manage who can create and approve broadcasts.
The tests are a bit light in terms of checking what’s on the page, but
clicking through the pages is probably good enough for now.
So you can check you’ve chosen the right areas, and to give you a clear
idea of where the boundaries of an area are.
The JavaScript and CSS for the map are only loaded on this page because
they add quite a few KB, and we don’t want to be sending assets to the
majority of our users who will never see them.
This solves two problems:
- it makes our response times more accurate, because we start measuring the response time earlier (otherwise we aren’t recording the time spent by `csrf` and `login_manager`’s `before_request` functions)
- it’s a temporary fix for a bug in the GDS Python metrics library, as explained below.
When a request comes in it goes through various `before_request`
functions: first the one introduced by the csrf client, then the one
introduced by the metrics client. If the `csrf.before_request` function
throws an exception (which happens when a CSRF token is invalid) then
the `metrics.before_request` function never runs and the main body of
the request is not processed, but all `teardown_request` functions still
run. When the `metrics.teardown_request` function runs it looks for
`g._gds_metrics_start_time`, but this attribute is not available on the
Flask global object, because the `metrics.before_request` function that
creates it never ran. This throws an `AttributeError` and results in a
500 for the user. The short-term solution (initialising metrics before
csrf) means that `_gds_metrics_start_time` will be set before csrf is at
risk of throwing an exception.
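A minimal sketch of those mechanics (the function bodies are illustrative):

```python
import time

from flask import Flask, g

app = Flask(__name__)

# Registered first, so it runs first – this is the fix: initialise
# metrics before csrf so the start time is always set
@app.before_request
def metrics_before_request():
    g._gds_metrics_start_time = time.monotonic()

@app.before_request
def csrf_before_request():
    # If this raises (e.g. invalid CSRF token), any before_request
    # functions registered after it are skipped…
    pass

@app.teardown_request
def metrics_teardown_request(exception):
    # …but teardown_request functions always run. Without the fix,
    # g._gds_metrics_start_time wouldn't exist here, so this line would
    # raise AttributeError and the user would see a 500
    elapsed = time.monotonic() - g._gds_metrics_start_time
```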
A separate PR will be put into the GDS metrics Python library to remove
the risk of an `AttributeError`, logging a warning instead of throwing
an uncaught exception.
Some teams have started uploading quite a lot of letters (in the
hundreds per week). They’re also uploading CSVs of emails. This means
the uploads page ends up quite jumbled.
This is because:
- there are just a lot of items to scan through
- conceptually it’s a bit odd to have batches of things displayed
alongside individual things on the same page
So instead we’re going to start grouping together uploaded letters. This
will be by the date on which we ‘start’ printing them, or in other
words the time at which they can no longer be cancelled.
This feels like a natural grouping, and it matches what we know about
people’s mental models of ‘batches’ and ‘runs’ when talking about
printing.
This grouping will be done in the API, so all this commit needs to do is:
- be ready to display this new type of pseudo-job
- link to the page that displays all the uploaded letters for a given
print day
Because we won’t be showing uploaded letters individually on the uploads
page any more we need a way of listing them. This should be by printing
day, to match how we’re grouping them on the uploads page.
This code reuses the notifications.html template, but flips the
precedence of the filename and recipient because I reckon when you’re
looking at uploads you’re thinking filename-first.
It was saying ‘16 hours ago’ instead of today. This is because, in
strftime:
- `%M` means minute, not month
- `%D` means short MM/DD/YY date, not day of the month
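For example (an illustrative date):

```python
from datetime import datetime

d = datetime(2020, 6, 8, 23, 59)
d.strftime("%D %M")  # '06/08/20 59' – US-style date, and the minute
d.strftime("%d %m")  # '08 06'       – day of the month, and the month
```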
The test wasn’t catching this because the freeze time and mocked value
from the API were set to the same minute.
We increasingly have teams wanting to do business-continuity type
messaging. They might be without access to their normal systems, which
is where they would otherwise go to get the list of email addresses or
phone numbers.
So we want to give them a place in Notify where they can store their
spreadsheets and use them at a later date.
For the initial pass we’re going to scope this to only allowing
spreadsheets with one column, ie just phone numbers/email addresses.
This is because:
- it minimises the amount of personal info we’re storing
- it reduces the chance of getting a placeholder error when you go to
send the message, which is probably a high-stress situation where you
might not be able to re-generate the file
The code for this is mostly copied from the existing upload CSV journey.
It’s quite duplicative, but that’s what I needed to do to get this out
quickly. There are opportunities for refactoring later.
Similarly, I would have liked to split this up into better commit
messages, but it really was a case of just bashing code out until it
worked 😳
This commit does not:
- implement the ‘view a contact list page’ (it just has a placeholder
because the API isn’t ready at the moment)
- link to this page (because it’s not ready to use yet)
flask_login moves from `user_id` to `_user_id`. Unfortunately, this
isn’t backwards compatible: if an old cookie only has the old
`user_id`, then flask_login won’t find `_user_id` and will mark the user
as unauthenticated.
This code manually migrates the three flask_login cookie variables
before the flask_login code runs, so that it doesn’t freak out.
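A sketch of the shape of this migration – only the `user_id` rename is named above, so the other two keys are left as a placeholder comment:

```python
from flask import Flask, session

app = Flask(__name__)
app.secret_key = "dev-only"

OLD_TO_NEW = (
    ("user_id", "_user_id"),
    # …plus the other two renamed variables, handled the same way
)

@app.before_request
def migrate_old_session_keys():
    # Needs to run before flask_login reads the session, so that it
    # finds `_user_id` where it expects it
    for old_key, new_key in OLD_TO_NEW:
        if old_key in session and new_key not in session:
            session[new_key] = session.pop(old_key)
```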
It just shows an h1, so isn’t helpful for people. We can re-use the 500
error page, which includes the instruction “Try again later” and
guidance on what to do next (check the status page, email Notify
support).
This required refactoring to ensure we can show the 500 error page while
still returning the required status code.
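A sketch of the resulting shape (the error code and template path here are illustrative):

```python
from flask import Flask, render_template

app = Flask(__name__)

def render_error_page(status_code):
    # Re-use the 500 page's helpful content, but keep the real status code
    return render_template("error/500.html"), status_code

@app.errorhandler(404)
def not_found(error):
    return render_error_page(404)
```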
If we expect a 400 (for example, the API returns 400 if the service name
is already in use) then we should handle it in the view function, so we
can display a relevant error message to the user. But if the API returns
a 400 that we didn’t expect (for example, because we sent variables of
the wrong type through to an endpoint with a schema), then that is a bug
we should fix, like any other raised exception. By treating it as a 500
we encourage users to report it to us, and we will also get an alert
email.
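A sketch of the pattern with a stubbed API client (all names are illustrative):

```python
class HTTPError(Exception):
    """Stand-in for the error our API client raises."""
    def __init__(self, status_code, message):
        self.status_code, self.message = status_code, message

def update_service_name(name):
    # Stubbed API call that hits the duplicate-name case
    raise HTTPError(400, "Duplicate service name")

def rename_service(new_name):
    try:
        update_service_name(new_name)
    except HTTPError as e:
        if e.status_code == 400 and "Duplicate service name" in e.message:
            # Expected: surface a relevant message next to the form field
            return "This service name is already in use"
        # Unexpected: re-raise so it becomes a 500, we get an alert
        # email, and users are encouraged to report it
        raise
    return "Saved"
```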