This changeset removes webauthn from the Notify.gov admin app. We are not using webauthn at all in our implementation and will be looking at an entirely different authentication system in the near future.
Signed-off-by: Carlo Costino <carlo.costino@gsa.gov>
In response to: [^1][^2][^3].
Originally there were three tests:
- One was testing the "get_user_template_folders" method directly.
- The other two were testing "get_template_folders", which calls
the "_user_" method if a user is passed - this is what the tests
were primarily checking, by passing or not passing a User object.
The two tests of "get_template_folders" also implicitly checked
that it filtered folders based on a "parent_folder_id". I wanted
to emulate the tests but it makes more sense to simplify them, as
the methods are now only used by TemplateList internally.
To simplify, we keep just two tests to check that the overall
set of folders is filtered based on the presence of a User. We do
lose the implicit check of filtering by "parent_folder_id", but:
- We are still checking this implicitly in the sense that the loop
to generate the overall set of folders terminates. If the method
didn't do any filtering, the loop would be infinite.
- We need to acknowledge that these tests were incomplete to begin
with. There is also lots of coverage in higher level tests [^4],
although it's harder to reason about exactly what is covered.
[^1]: https://github.com/alphagov/notifications-admin/pull/4258#discussion_r890144076
[^2]: https://github.com/alphagov/notifications-admin/pull/4258#discussion_r890151220
[^3]: https://github.com/alphagov/notifications-admin/pull/4258#discussion_r890147506
[^4]: 1787f9f42f/tests/app/main/views/test_template_folders.py
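As a rough sketch of the behaviour those original tests were pinning
down (the attribute names below are made up, not the real
implementation):

```python
# Illustrative only: get_template_folders dispatches to the "_user_" variant
# when a user is passed, and both variants filter on parent_folder_id.
def get_template_folders(self, parent_folder_id=None, user=None):
    if user:
        return self.get_user_template_folders(parent_folder_id, user)
    return [
        folder for folder in self.all_template_folders
        if folder["parent_id"] == parent_folder_id
    ]
```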
This is part of the overall migration of the "_template_folders"
methods to the TemplateList class. Moving the existing tests now
will make the actual migration easier to follow.
To emulate the second and third tests, we need to grab a specific
folder from the TemplateList and then look at its folders - these
are set based on "get_template_folders" as before.
This will make it easier to understand the diff when we move these
tests to operate via TemplateList. Despite the verbosity, the only
attribute we were actually checking here was the name, as a way of
identifying which folders had been returned.
Looking at the three tests:
1. The first is checking we can correctly filter all folders that a
user can access, which involves appending the names of any parent
folders the user doesn't have direct access to.
2. The second is checking the same thing but also that we filter the
set of folders to just the children of a parent.
3. The third is just checking the filtering of child folders, without
any user filtering or name aggregation applied.
I've adapted tests (2) and (3) to make it clearer what is tested,
focussing the tests on a specific folder and its contents.
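To give a feel for the shape of the adapted tests, here's a sketch -
the fixture names, folder names and the TemplateList call are all
illustrative rather than the real ones:

```python
def test_user_sees_only_their_child_folders(service_one, active_user, mock_get_template_folders):
    template_list = TemplateList(service_one, user=active_user)

    # Focus on one specific folder rather than the whole flattened list
    folder = next(item for item in template_list if item.name == "Parent folder")

    # The name is the only attribute we care about - it identifies which
    # child folders were returned for this user
    assert [child.name for child in folder.folders] == [
        "Visible child folder",
        "Hidden folder / Visible grandchild folder",
    ]
```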
This is a very low impact bug since a user can always create such
templates after their service is live and not be subject to checks
we do before that point. Still, we may as well fix it.
The main benefit of this change is actually to contribute towards
moving methods like "get_templates" out of the Service class so
we can simplify and cache their results more effectively.
Note: I wanted to simplify the Service class further as the two
"has_" properties are only used once in the app code. Unfortunately
they are tightly coupled in one of the tests as well [^1].
[^1]: bef0382cca/tests/app/main/views/service_settings/test_service_settings.py (L1961-L1962)
This represents a bug where a user can request to go live without
setting a reply-to email address or SMS sender despite the service
having one or more email or SMS templates, respectively.
We will make these tests pass in the next commit.
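For reference, this is roughly the kind of condition the fix will need
to enforce (the property names are assumptions, not the actual ones):

```python
class Service:  # fragment for illustration only
    @property
    def go_live_checklist_completed(self):
        # A service with email or SMS templates shouldn't be able to request
        # to go live until the matching sender settings have been set up
        if self.has_email_templates and not self.email_reply_to_addresses:
            return False
        if self.has_sms_templates and not self.sms_senders:
            return False
        return True
```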
For a job to be finished there are two requirements:
* the status to be "finished"
* the percentage complete to be 100%
The job status is set to finished when the process_job task finishes
(even though not all process_row tasks may have finished). The
percentage_complete is calculated by comparing the number of
notifications in the database with the number of rows in the
spreadsheet.
This was inadvertently changed from an "and" to an "or" clause two years
ago. This meant that people could download a report when the status was
finished but not all notifications were present in the database. Let's
change it back.
7d52ac97f1 (diff-44b012cad205379c481bed244ddb2294bae5ee85dcd01f4aee932a2bd85b67b2L86-R100)
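The restored condition, roughly (attribute names are illustrative):

```python
class Job:  # fragment for illustration only
    @property
    def finished_processing(self):
        # Both must hold: process_job has set the status *and* every row in
        # the spreadsheet has a corresponding notification in the database
        return self.status == "finished" and self.percentage_complete == 100
```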
Some tests use the `client` fixture but don’t call any of its methods.
They do this because the test depends on something in the request
context.
This commit replaces all those instances with `client_request`, which
also sets the request context.
These tests are the last ones that still use the `client` fixture. By
replacing it with `client_request` we will be able to say that no tests
should be using the `client` fixture directly.
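As an illustration of the pattern (the endpoint and expected URL are
hypothetical):

```python
from flask import url_for

# Before: the test only takes `client` so that a request context exists,
# because url_for() needs one to build URLs. `client` itself is never used.
def test_index_url(client):
    assert url_for("main.index") == "/"

# After: `client_request` pushes the same request context, so nothing is
# left depending on the `client` fixture directly.
def test_index_url(client_request):
    assert url_for("main.index") == "/"
```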
When we combine simplified polygons from different areas we try to use
the polygons that are defined in metres as a speed optimisation.
This doesn’t work when those polygons are in several different
projections because the distance in metres is relative to the edge of
the projection. Just naively choosing one projection means that some
of the polygons will shift position on the earth by hundreds of kilometres
once they are converted back to degrees.
To avoid this we need to:
- check for situations where we have polygons in multiple different
projections
- transform these polygons back into degrees and let `Polygons` pick a
common projection for all of them before doing any
merging/smoothing/etc. (sketched below)
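The gist of the fix, sketched with shapely and pyproj rather than the
real `Polygons` helpers (the CRS codes and the input structure are
assumptions):

```python
from pyproj import Transformer
from shapely.ops import transform, unary_union

def combine(polygons_with_crs):
    """polygons_with_crs: list of (crs_code, shapely polygon in that CRS)."""
    crs_codes = {crs for crs, _ in polygons_with_crs}

    if len(crs_codes) == 1:
        # Everything already shares one projection: merge in metres directly
        return unary_union([polygon for _, polygon in polygons_with_crs])

    # Mixed projections: convert everything back to degrees (WGS 84) first,
    # so that a single common projection can be picked before any
    # merging/smoothing happens
    in_degrees = [
        transform(
            Transformer.from_crs(crs, "EPSG:4326", always_xy=True).transform,
            polygon,
        )
        for crs, polygon in polygons_with_crs
    ]
    return unary_union(in_degrees)
```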
Brings in https://github.com/alphagov/notifications-utils/pull/889/files
At the moment, we are not doing any transformation of features before
applying geometric algorithms to them. This is, in effect, assuming that
the earth is flat.
This new version of utils implements the transformation of our polygons
to a Cartesian plane. In other words, it converts them from being
defined in spherical degrees to metres.
For the admin app this means we need to update the places where the
code expects measurements in degrees so that they work in metres instead.
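To illustrate why doing geometry in raw degrees distorts distances (a
standalone pyproj check, not code from the app):

```python
from pyproj import Geod

geod = Geod(ellps="WGS84")

# One degree of longitude is ~111km at the equator but only ~68km at UK
# latitudes, so treating degrees as a flat grid distorts distances;
# projecting to metres first avoids that.
_, _, metres_at_equator = geod.inv(0, 0, 1, 0)
_, _, metres_in_uk = geod.inv(0, 52, 1, 52)

print(round(metres_at_equator / 1000))  # ~111
print(round(metres_in_uk / 1000))       # ~68
```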
The API returns UTC timestamps, so we should keep them as UTC until the
very last moment and only convert them into BST when we know we want to
show them to a user (i.e. in contact-list.html and other places like
that).
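A minimal sketch of the principle (the function name and format string
are illustrative):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def format_datetime_for_user(utc_datetime: datetime) -> str:
    # The value stays in UTC everywhere else; conversion to UK local time
    # (BST in summer) only happens here, at the point of display
    local = utc_datetime.replace(tzinfo=timezone.utc).astimezone(ZoneInfo("Europe/London"))
    return local.strftime("%d %B %Y at %H:%M")

print(format_datetime_for_user(datetime(2021, 6, 1, 13, 0)))  # "01 June 2021 at 14:00"
```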
The original code to raise the exception was flawed: if a broadcast
only had a single area that was invalid, we would assume it was a
custom broadcast [1]. Since the recent changes [2] fixed the flaw,
we're now getting exceptions for broadcasts of this kind.
It's not practical to go and manually fix the invalid broadcasts,
and the likelihood is there will be more in future as the set of
areas we support changes. This takes a pragmatic approach of simply
logging the issue and pretending such broadcasts are custom.
[1]: 926ada2f21
[2]: https://github.com/alphagov/notifications-admin/pull/4014/files#diff-2dd8f77d6df281e7674b20263cdf27a3d58b839dc5930c0087ac8b9749b313e4R92
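Roughly what the pragmatic handling looks like (the function and its
arguments are illustrative, not the actual model code):

```python
from flask import current_app

def resolve_areas(broadcast_id, area_ids, area_library):
    """Look up each area ID; if any are unknown, log and fall back to
    treating the broadcast as a custom one instead of raising."""
    known = [area_library.get(area_id) for area_id in area_ids]

    if not all(known):
        current_app.logger.warning(
            "Broadcast %s has unrecognised area IDs: %s", broadcast_id, area_ids
        )
        return []  # downstream code treats "no resolved areas" as a custom broadcast

    return known
```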
Previously we relied on the API defaulting this field to an empty
array [1], but that conflicts with using it to decide whether a
broadcast is custom or created in this app.
[1]: 3779146cc5/app/models.py (L2342)
Custom broadcasts created directly via the API app won't have area
IDs since [1], where we started to distinguish between "names" (all
broadcasts have these) and IDs (for broadcasts created in this app).
We forgot to propagate the distinction into the code here.
This code fixes the bug for all broadcasts created after [1]. Any
custom broadcasts created before [1] will have their "ids" field set
instead of "names" so we won't show anything for them. This seems
reasonable as we don't support custom broadcasts yet.
[1]: 023a06d5fb
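Illustratively (the field names follow the description above, but the
exact data structure and sample values are assumptions):

```python
# Sample data shaped to match the description above
broadcast_json = {"areas": {"names": ["Example area 1"], "ids": []}}

names = broadcast_json["areas"].get("names", [])  # every broadcast has these
ids = broadcast_json["areas"].get("ids", [])      # only broadcasts created in this app

is_custom_broadcast = not ids  # e.g. created directly via the API
```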
This is useful information to store for the event, as it would
otherwise be lost if someone subsequently changed those permissions.
Rather than updating lots of mock assertions, I've replaced them
with a single test / assert at a lower level, which is consistent
with auditing being a non-critical function.
I've used the term "admin_roles" in the event data to try and show
that these are not the permissions we store in the DB. This is the
name we use for the abstracted form of permissions in the Admin app.
While we could store the DB permissions, that would be a bit more
effort and arguably it's clearer to keep the event data consistent
with the options the user actually saw / chose.
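For clarity, the extra event data is roughly this shape (keys and role
names are illustrative):

```python
event_data = {
    # The abstracted Admin-app roles the user saw and chose...
    "admin_roles": ["manage_templates", "manage_service"],
    # ...rather than the finer-grained DB permissions those roles expand to
}
```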
This naming was introduced in 2016 without explanation [1]. I find it
confusing because:
- It's reminiscent of "_app", where the leading underscore is the Python
convention for something internal, implying we shouldn't be using it.
- It suggests there's some other "app" fixture I should be using (there
isn't, though).
The Python style guide describes using an underscore suffix to avoid
clashes with inbuilt names [1], which is sort of applicable if we need
to import the "app" module [2]. However, we can also avoid clashes by
choosing a different name, without the strange underscore.
[1]: 3b1d521c10
[2]: 78824f54fd/tests/app/main/views/test_forgot_password.py (L5)
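For reference, the convention PEP 8 actually describes is a trailing
underscore to dodge an existing name, e.g.:

```python
class_ = "heading-large"  # `class` is a keyword, so append an underscore
list_ = ["a", "b"]        # likewise avoids shadowing the builtin `list`
```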
We need to re-initialise the webauthn_server module with the original
app config, since this state is global across all tests. Since the
behaviour of the original fixture wasn't specific to verifying the
origin, I've renamed the fixture as part of making it global.
In order to keep the fixture simple, I've rewritten the tests for
the webauthn_server module so they don't touch the app fixture.
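The fixture ends up looking something like this (the import path, the
init function and the app fixture's name are all assumptions):

```python
import pytest

@pytest.fixture(autouse=True)
def webauthn_dev_server(notify_admin):
    from app import webauthn_server

    # webauthn_server holds module-level state built from app config, so
    # reset it around every test rather than only in origin-related tests
    webauthn_server.init_app(notify_admin)
    yield
    webauthn_server.init_app(notify_admin)
```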
Previously we would raise a 500 error in a variety of cases:
- If a second key was being registered simultaneously (e.g. in a
separate tab), which means the registration state could be missing
after the first registration completes. That smells like an attack.
- If the server-side verification fails, e.g. origin verification,
challenge verification, etc. The library seems to use 'ValueError'
for all such errors [1] (after auditing its 'raise' statements, and
excluding AttestationError [2], since we're not doing that).
- If a key is used that attempts to sign with an unsupported
algorithm. This would normally raise a NotImplemented error as part
of verifying attestation [3], but we don't do that, so we need to
verify the algorithm is supported by the library manually.
This adds error handling to return a 400 response and error message
in these cases, since the error is not unexpected (i.e. not a 500).
A 400 seems more appropriate than a 403, since in many cases it's
not clear if the request data is valid.
I've used CBOR for the transport encoding, to match the successful
request / response encoding. Note that the ordering of then/catch
matters in JS - we don't want to catch our own throws!
[1]: 142587b3e6/fido2/server.py (L255)
[2]: c42d9628a4/fido2/attestation/base.py (L39)
[3]: c42d9628a4/fido2/cose.py (L92)
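The server-side handling, sketched - the fido2 call is passed in as a
placeholder, and only the missing-state / ValueError handling and the
CBOR-encoded 400s reflect the description above:

```python
from fido2 import cbor

def finish_registration(state, request_data, complete_registration):
    if state is None:
        # e.g. a second key registered in another tab already consumed the state
        return cbor.encode("No registration in progress"), 400

    try:
        credential = complete_registration(state, request_data)
    except ValueError as exc:
        # the library signals origin/challenge verification failures as ValueError
        return cbor.encode(f"Registration failed: {exc}"), 400

    # ...persist `credential` against the user here...
    return cbor.encode("Registration complete"), 200
```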
This links up the `get_webauthn_credentials_for_user` and
`create_webauthn_credential_for_user` methods of the user api client to
notifications-api.
To send data to the API we need strings to be unicode, so we call
decode('utf-8') on the base64-encoded bytes.
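The client method is roughly this shape (the URL path and payload keys
are made up):

```python
import base64

# Fragment for illustration; `self.post` stands in for the API client's
# generic POST helper
def create_webauthn_credential_for_user(self, user_id, credential):
    return self.post(
        f"/user/{user_id}/webauthn",
        data={
            "name": credential.name,
            # bytes -> base64 bytes -> unicode str, so the payload is JSON-serialisable
            "credential_data": base64.b64encode(credential.data).decode("utf-8"),
        },
    )
```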
Co-authored-by: Leo Hemsted <leo.hemsted@digital.cabinet-office.gov.uk>
This adds Yubico's FIDO2 library and two APIs for working with the
"navigator.credentials.create()" function in JavaScript. The GET
API uses the library to generate options for the "create()" function,
and the POST API decodes and verifies the resulting credential. While
the options and response are dict-like, CBOR is necessary to encode
some of the byte-level values, which can't be represented in JSON.
Much of the code here is based on the Yubico library example [1][2].
Implementation notes:
- There are definitely better ways to alert the user about failure, but
window.alert() will do for the time being. Using location.reload() is
also a bit jarring if the page scrolls, but not a major issue.
- Ideally we would use window.fetch() to do AJAX calls, but we don't
have a polyfill for this, and we use $.ajax() elsewhere [3]. We need
to do a few weird tricks [6] to stop jQuery trashing the data.
- The FIDO2 server doesn't serve web requests; it's just a "server" in
the sense of WebAuthn terminology. It lives in its own module, since it
needs to be initialised with the app / config.
- $.ajax returns a promise-like object. Although we've used ".fail()"
elsewhere [3], I couldn't find a stub object that supports it, so I've
gone for ".catch()", and used a Promise stub object in tests.
- WebAuthn only works over HTTPS, but there's an exception for "localhost"
[4]. However, the library is a bit too strict [5], so we have to disable
origin verification to avoid needing HTTPS for dev work.
[1]: c42d9628a4/examples/server/server.py
[2]: c42d9628a4/examples/server/static/register.html
[3]: 91453d3639/app/assets/javascripts/updateContent.js (L33)
[4]: https://stackoverflow.com/questions/55971593/navigator-credentials-is-null-on-local-server
[5]: c42d9628a4/fido2/rpid.py (L69)
[6]: https://stackoverflow.com/questions/12394622/does-jquery-ajax-or-load-allow-for-responsetype-arraybuffer
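As a concrete illustration of the CBOR point, a standalone snippet using
the fido2 package's cbor helpers:

```python
import json

from fido2 import cbor

# The options/response contain raw bytes (challenge, user handle, attestation
# object), which JSON can't carry but CBOR round-trips intact
options = {"challenge": b"\x00\x01\x02\x03", "user": {"id": b"\xff\xfe"}}

encoded = cbor.encode(options)
assert cbor.decode(encoded) == options

try:
    json.dumps(options)
except TypeError:
    print("bytes are not JSON serialisable")
```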