This changeset removes webauthn from the Notify.gov admin app. We are not using webauthn at all in our implementation and will be looking at an entirely different authentication system in the near future.
Signed-off-by: Carlo Costino <carlo.costino@gsa.gov>
Daily volumes report: total volumes across the platform, aggregated by whole business day (bst_date).
Volumes by service report: total volumes per service, aggregated over the date range given.
NB: start and end dates are inclusive
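As a rough illustration of the date handling only (not the real report
code; the row shape here is invented), both reports treat the range as
inclusive on both ends:

```python
from collections import Counter
from datetime import date


def daily_volumes(rows, start_date: date, end_date: date) -> Counter:
    """Total volumes across the platform, grouped by bst_date.

    Each row is assumed to look like
    {"bst_date": date, "notification_count": int}.
    """
    totals = Counter()
    for row in rows:
        # Start and end dates are inclusive.
        if start_date <= row["bst_date"] <= end_date:
            totals[row["bst_date"]] += row["notification_count"]
    return totals
```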
This stops adding the current user to the data sent to the API when
removing a user from an organisation. The API only needs to know the
organisation_id and the id of the user we want to remove from
the organisation, so we don't need to pass through the id of the
current user too.
The other change is to clear the user cache for the user who has
been removed from the org. We don't need to clear any of the organisation
caches, since these values don't contain lists of users for the orgs.
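A sketch of the resulting client method (class name, paths and cache key
here are illustrative, not the real implementation):

```python
class OrganisationsAPIClient:
    def __init__(self, api_client, redis_client):
        self.api_client = api_client
        self.redis_client = redis_client

    def remove_user_from_organisation(self, org_id, user_id):
        # The API only needs the organisation and the user being removed;
        # the current user's id is no longer included in the request.
        self.api_client.delete(f"/organisations/{org_id}/users/{user_id}")

        # Only the removed user's cache entry needs invalidating; the
        # organisation caches don't hold lists of users.
        self.redis_client.delete(f"user-{user_id}")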
It’s weird to have a method just for updating one attribute. I think
the reason for doing this was to only invalidate the
`organisation-{}-name` cache when absolutely necessary, but:
- we don’t need a separate method to check whether it’s the name being
updated
- it was easy to get around this by calling
`OrganisationsClient.update_organisation` directly, leaving a stale
value in the cache
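So the consolidated method can just invalidate the cache every time. A
rough sketch (the method body is simplified; only the cache key format is
the one mentioned above):

```python
class OrganisationsClient:
    def __init__(self, api_client, redis_client):
        self.api_client = api_client
        self.redis_client = redis_client

    def update_organisation(self, org_id, **kwargs):
        self.api_client.post(f"/organisations/{org_id}", data=kwargs)

        # Invalidate unconditionally rather than only when the name changes:
        # cheaper to reason about, and it can't leave a stale name behind.
        self.redis_client.delete(f"organisation-{org_id}-name")
```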
We have a `client_request` fixture which does a bunch of useful stuff
like:
- checking the status code of the response
- returning a `BeautifulSoup` object
Lots of our tests still use an older fixture called `logged_in_client`.
This is not as good because:
- it returns a raw `Response` object
- doesn’t do the additional checks
- means our tests contain a lot of repetitive boilerplate like `page = BeautifulSoup(response.data.decode('utf-8'), 'html.parser')`
This commit converts all the tests using `logged_in_client` to
use `client_request` instead.
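A hypothetical before/after, just to show the shape of the conversion
(the page, view name and assertions are made up):

```python
from bs4 import BeautifulSoup


# Before: raw response, manual status check, manual parsing.
def test_page_heading_old(logged_in_client):
    response = logged_in_client.get("/some-page")
    assert response.status_code == 200
    page = BeautifulSoup(response.data.decode("utf-8"), "html.parser")
    assert page.h1.text.strip() == "Some page"


# After: client_request checks the status code and returns the soup for us.
def test_page_heading_new(client_request):
    page = client_request.get("main.some_page")
    assert page.h1.text.strip() == "Some page"
```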
This updates the `get_notifications_for_service` method of the
`NotificationApiClient` to make a POST request to the API when the `to`
field is provided. This is done so that we avoid logging personal
details such as email addresses.
If the `to` field is not present, the method will make a GET request and
there will be no change to how it works.
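Roughly, with a simplified signature (the real method takes more filter
and pagination arguments, and the paths here are illustrative):

```python
class NotificationApiClient:
    def __init__(self, api_client):
        self.api_client = api_client

    def get_notifications_for_service(self, service_id, to=None, **params):
        if to is not None:
            # POST keeps the email address / phone number out of request logs.
            return self.api_client.post(
                f"/service/{service_id}/notifications",
                data={"to": to, **params},
            )
        # Without a `to` filter, the existing GET behaviour is unchanged.
        return self.api_client.get(
            f"/service/{service_id}/notifications",
            params=params,
        )
```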
This stat is shown on a few of our pages, such as our homepage,
the performance page and also a platform admin page,
and is currently cached for the
default TTL of 1 week. I think there is no reason we can't cache
this for only an hour instead, giving slightly more up to date stats
that update more regularly.
This mimics the approach and also the TTL choice of 1 hour that has
been added for the performance page (although there is no
particular bug here to fix, it is just nice to have slightly more
up to date data).
Note, the API call only takes about 0.3 seconds at the moment
so it is not particularly intensive on the DB to run this more
regularly.
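As a sketch of the caching pattern with the shorter TTL (the cache key
and endpoint are invented, and a redis-py style client is assumed):

```python
ONE_HOUR_IN_SECONDS = 60 * 60


def get_notifications_sent_count(api_client, redis_client):
    cache_key = "total-notifications-sent"

    cached = redis_client.get(cache_key)
    if cached is not None:
        return int(cached)

    value = api_client.get("/platform-stats/notifications-sent")["total"]
    # Previously this fell back to the default 1 week TTL; an explicit
    # 1 hour expiry keeps the stat reasonably fresh without hammering the DB.
    redis_client.set(cache_key, value, ex=ONE_HOUR_IN_SECONDS)
    return value
```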
I came across a bug where the performance page might be visited
on, say, the 22nd and we expect it to show data for yesterday and
the past 7 days (so the 21st and back 7 days), but instead it
only shows it for the 20th and then back 6 days.
The cause is that we have nightly tasks that run to:
- calculate the number of notifications sent per service (the
  ft_notification_status table)
- calculate the number of notifications sent within 10 seconds
  (the ft_processing_time table)
Both of these tables get filled between the hours of midnight and
2:30am. The first seems to be filled by about 12:30 every night
whereas the processing stats seem to be filled by about 2am every
night.
The admin app currently caches the results of the call to the API
(https://github.com/alphagov/notifications-api/blob/master/app/performance_dashboard/rest.py#L24)
to get this data with the key
performance-stats-{start_date}-to-{end_date}.
Now the issue here is that if the performance page is visited at
00:01 on the 23rd, it will call the API for data on the 22nd and
backwards 7 days. The data it gets at 00:01 will not yet have the
data for the 22nd, it will only go as far as the 21st, because
the overnight jobs to put data in the tables have not run yet. The
admin app then caches this data, meaning that for the rest of the
day the page will be missing data for the 22nd, even though it
becomes available after approx 2am. We have seen it a few times
in production where the performance page has indeed been visited
before say 2 am, leaving the page with missing data.
In terms of solutions, there are several potential options.
1. Remove the caching that we do in redis. The downside of this is
that we will hit the API more (currently once a day for the first
visitor to the performance page). We have had 255 requests
to the performance page in the last week and when there is no value
in redis, the call to the API takes approximately 1-2 seconds.
Removing the caching would go against the reason it was originally
added, which was to act as a basic protection against malicious
DoS attempts.
2. Make the admin app only set the cache value if it is after
2:30am. This one feels a bit gross and snowflakey.
3. The API flushes the redis cache after it has finished its nightly
tasks. We don't have much precedent for the API interacting
with redis (it is mostly the admin app that does), but this is still
a valid option.
4. Cache in redis the data for 2 days ago to 7 days ago and then
get the data for yesterday freshly every time. This would mean
some things are still cached but the smallest bit that we can't rely
on can be gathered fresh.
5. Remove the caching we do in redis but then try and optimise the
API endpoint to return results even faster and with less
interaction with the DB. My hypothesis is that the slowest part
is where the API has to calculate how many notifications Notify
has ever sent
(https://github.com/alphagov/notifications-api/blob/master/app/performance_dashboard/rest.py#L36)
and so we could potentially store these totals in the DB and update
them nightly, rather than having to query all
the data in the table to figure them out every time.
6. Reduce the expiry time of the cache key so although the bug will
still exist, it will only exist for a much shorter time.
In the end, we've gone for option 6. The user impact of
this bug sticking around for an hour is minimal, in particular
because that would be in the early hours of the morning. The solution
is also quick to implement and keeps the protection against a DoS attack.
The value of 1 hour for expiry was a fairly arbitrary choice; it could
have been as little as 1 minute or as much as 6 hours.
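A sketch of what option 6 amounts to, using simplified names (only the
cache key format is taken from the real code; the endpoint and client
interfaces are illustrative):

```python
import json
from datetime import date, timedelta

ONE_HOUR_IN_SECONDS = 60 * 60


def get_performance_stats(api_client, redis_client):
    end_date = date.today() - timedelta(days=1)   # "yesterday"
    start_date = end_date - timedelta(days=6)     # back 7 days, inclusive
    cache_key = f"performance-stats-{start_date}-to-{end_date}"

    cached = redis_client.get(cache_key)
    if cached is not None:
        return json.loads(cached)

    stats = api_client.get(
        "/performance-dashboard",
        params={"start_date": str(start_date), "end_date": str(end_date)},
    )
    # If this runs at 00:01, yesterday's rows aren't in ft_notification_status
    # or ft_processing_time yet - but now the stale result only lives for an
    # hour instead of the rest of the day.
    redis_client.set(cache_key, json.dumps(stats), ex=ONE_HOUR_IN_SECONDS)
    return stats
```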
When the broadcast service settings form is submitted, it now removes all
permissions for users in notifications-api. This means it should be
clearing the user cache.
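A rough sketch of the idea, with invented endpoint and key names (not the
real code):

```python
def remove_broadcast_permissions(api_client, redis_client, service_id, user_ids):
    for user_id in user_ids:
        # Permissions change in notifications-api...
        api_client.post(
            f"/service/{service_id}/user/{user_id}/permissions",
            data={"permissions": []},
        )
        # ...so the admin app's cached copy of each user must go too.
        redis_client.delete(f"user-{user_id}")
```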
The js `fetch` function will follow redirects blindly and return you the
final 200 response. When there's an error, we don't want to go anywhere,
and we want to use the flask `flash` functionality to pop up an error
message (the likely reason for seeing this is using a yubikey that isn't
associated with your user). Using `flash` and then
`window.location.reload()` handles this fine.
However, when the user does log in successfully we need to properly log
them in - this includes:
* checking their account isn't over the max login count
* resetting failed login count to 0 if not
* setting a new session id in the database (so other browser windows are
logged out)
* checking if they need to revalidate their email access (every 90 days)
* clearing old user out of the cache
This code all happens in the ajax function rather than being in a
separate redirect, so that you can't just navigate to the login flow. I
wasn't able to unit test that function due to how it uses the session and
other flask globals, so I moved the auth into its own function so it's
easy to stub out all that CBOR nonsense.
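A sketch of that split, with invented names throughout; the point is just
that the webauthn verification can be stubbed out when testing the rest of
the login steps:

```python
def log_in_webauthn_user(user, webauthn_response, session, verify_credential, flash):
    """Returns True if the user is now fully logged in.

    All collaborators are passed in so the function is easy to unit test
    with the CBOR/webauthn verification stubbed out.
    """
    if not verify_credential(user, webauthn_response):
        # Don't redirect on failure: flash a message and let the JS call
        # window.location.reload() so the flashed message is shown.
        flash("Security key not recognised")
        return False

    if user.failed_login_count >= user.max_failed_login_count:
        flash("Your account is locked")
        return False

    user.reset_failed_login_count()

    # A new session id in the database logs out any other browser windows.
    session["current_session_id"] = user.create_new_session_id()

    # Email access is revalidated every 90 days; the real code would send
    # the user into that flow here instead of straight to the logged-in state.
    if user.email_access_needs_revalidating():
        return False

    # Finally, drop the stale cached copy of the user.
    user.clear_cached_copy()
    return True
```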
TODO: We still need to pass any `next` URLs through the chain from login
page all the way through the javascript AJAX calls and redirects to the
log_in_user function
This naming was introduced in 2016 without explanation [1]. I find it
confusing because:
- It's reminiscent of "_app", which is a Python convention indicating
the variable is internal, so maybe avoid using it.
- It suggests there's some other "app" fixture I should be using (there
isn't, though).
The Python style guide describes using an underscore suffix to avoid
clashes with inbuilt names [1], which is sort of applicable if we need
to import the "app" module [2]. However, we can also avoid clashes by
choosing a different name, without the strange underscore.
[1]: 3b1d521c10
[2]: 78824f54fd/tests/app/main/views/test_forgot_password.py (L5)
This links up the `get_webauthn_credentials_for_user` and
`create_webauthn_credential_for_user` methods of the user api client to
notifications-api.
To send data to the API we need strings to be unicode, so we call
`decode('utf-8')` on the base64-encoded bytes.
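A sketch of the encoding concern only (not the real serialisation code):

```python
import base64


def serialise_credential(name: str, credential_data: bytes) -> dict:
    return {
        "name": name,
        # b64encode returns bytes; decode('utf-8') gives the unicode str
        # the API expects in its JSON payload.
        "credential_data": base64.b64encode(credential_data).decode("utf-8"),
    }
```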
Co-authored-by: Leo Hemsted <leo.hemsted@digital.cabinet-office.gov.uk>
This passes existing credentials in the server response, to allow
the browser to prevent re-registering the same key for the same
user. Registering the same key multiple times doesn't seem to be
an issue technically; the user has likely got their keys mixed up.
- Chrome says "you don't need to register it again".
- Safari exits with an InvalidStateError.
- Firefox exits with a DOMException.
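Roughly, assuming the 0.9-era python-fido2 API from the Yubico example
(the user attributes here are made up): passing the user's stored
credentials into `register_begin()` is what puts them in the options as
`excludeCredentials`.

```python
from fido2.server import Fido2Server
from fido2.webauthn import PublicKeyCredentialRpEntity

server = Fido2Server(PublicKeyCredentialRpEntity(id="notify.example", name="Notify"))


def registration_options_for(user, existing_credentials):
    # existing_credentials: the AttestedCredentialData for keys this user
    # has already registered; the browser uses them to refuse a duplicate.
    options, state = server.register_begin(
        {"id": user.id_bytes, "name": user.email_address, "displayName": user.name},
        credentials=existing_credentials,
    )
    return options, state
```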
This adds Yubico's FIDO2 library and two APIs for working with the
"navigator.credentials.create()" function in JavaScript. The GET
API uses the library to generate options for the "create()" function,
and the POST API decodes and verifies the resulting credential. While
the options and response are dict-like, CBOR is necessary to encode
some of the byte-level values, which can't be represented in JSON.
Much of the code here is based on the Yubico library example [1][2].
Implementation notes:
- There are definitely better ways to alert the user about failure, but
window.alert() will do for the time being. Using location.reload() is
also a bit jarring if the page scrolls, but not a major issue.
- Ideally we would use window.fetch() to do AJAX calls, but we don't
have a polyfill for this, and we use $.ajax() elsewhere [3]. We need
to do a few weird tricks [6] to stop jQuery trashing the data.
- The FIDO2 server doesn't serve web requests; it's just a "server" in
the sense of WebAuthn terminology. It lives in its own module, since it
needs to be initialised with the app / config.
- $.ajax returns a promise-like object. Although we've used ".fail()"
elsewhere [3], I couldn't find a stub object that supports it, so I've
gone for ".catch()", and used a Promise stub object in tests.
- WebAuthn only works over HTTPS, but there's an exception for "localhost"
[4]. However, the library is a bit too strict [5], so we have to disable
origin verification to avoid needing HTTPS for dev work.
[1]: c42d9628a4/examples/server/server.py
[2]: c42d9628a4/examples/server/static/register.html
[3]: 91453d3639/app/assets/javascripts/updateContent.js (L33)
[4]: https://stackoverflow.com/questions/55971593/navigator-credentials-is-null-on-local-server
[5]: c42d9628a4/fido2/rpid.py (L69)
[6]: https://stackoverflow.com/questions/12394622/does-jquery-ajax-or-load-allow-for-responsetype-arraybuffer
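A condensed sketch of the two APIs, following the Yubico example server
[1][2] and the 0.9-era library API rather than the exact Notify code; the
routes, user data and error handling are all simplified placeholders:

```python
from fido2 import cbor
from fido2.client import ClientData
from fido2.ctap2 import AttestationObject
from fido2.server import Fido2Server
from fido2.webauthn import PublicKeyCredentialRpEntity
from flask import Flask, request, session

app = Flask(__name__)
app.secret_key = "dev-only"

# Only a "server" in the WebAuthn sense - it never serves web requests.
# verify_origin is relaxed here so plain http://localhost works in dev [5].
fido2_server = Fido2Server(
    PublicKeyCredentialRpEntity(id="localhost", name="Notify"),
    verify_origin=lambda origin: True,
)


@app.route("/webauthn/register", methods=["GET"])
def registration_options():
    registration_data, state = fido2_server.register_begin(
        {"id": b"user-id", "name": "user@example.com", "displayName": "A user"},
        credentials=[],
    )
    session["webauthn_state"] = state
    # CBOR rather than JSON: the challenge and user id are raw bytes.
    return cbor.encode(registration_data), 200, {"Content-Type": "application/cbor"}


@app.route("/webauthn/register", methods=["POST"])
def complete_registration():
    data = cbor.decode(request.get_data())
    auth_data = fido2_server.register_complete(
        session.pop("webauthn_state"),
        ClientData(data["clientDataJSON"]),
        AttestationObject(data["attestationObject"]),
    )
    # auth_data.credential_data is what would be stored against the user.
    return cbor.encode({"status": "OK"}), 200, {"Content-Type": "application/cbor"}
```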
This adds a new platform admin settings row, leading to a page which
shows any existing keys and allows a new one to be registered. Until
the APIs for this are implemented, the user API client just returns
some stubbed data for manual testing.
This also includes a basic JavaScript module to do the main work of
registering a new authenticator, to be implemented in the next commits.
Some more minor notes:
- Setting the headings in the mapping_table is necessary to get the
horizontal rule along the top (to match the design).
- Setting caption to False in the mapping_table is necessary to stop
an extra margin appearing at the top.
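A sketch of the stopgap, with invented field values (the real stub lives
in the user API client until the endpoints exist):

```python
class UserApiClient:
    def get_webauthn_credentials_for_user(self, user_id):
        # Stubbed data for manual testing; replaced once the real
        # notifications-api endpoints exist.
        return [
            {"name": "Work key", "created_at": "2021-01-01T00:00:00.000000Z"},
            {"name": "Spare key", "created_at": "2021-02-01T00:00:00.000000Z"},
        ]
```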