Previously some of the tests for code in the "formatters" module
were in tests for the "utils" module. This moves them to where
they should be. While two of these methods are probably more utils
than formatters, I'd like to postpone a refactor of that module
for now, and focus on slimming down the tests for utils/__init__.py.
Previously this was duplicated between the "two_factor" and the
"webauthn" views, and required more test setup. This DRYs up the
check and tests it once, using mocks to simplify the view tests.
As part of DRYing up the check into a util module, I've also moved
the "is_less_than_days_ago" function it uses.
This better reflects how the code is reused in other views and is
not specific to two-factor actions. We have a pattern of testing
utility functionality through each view (as opposed to testing the
util on its own, plus testing that each view calls it), so I'm
leaving the tests as-is.
Was just in one of those meetings where it felt like writing this would
take less time than I’d already spent talking about its relative
priority…
---
In the admin app you can already set the broadcast channel as 'test', 'severe' or 'government'.
Aim:
- Add the 'operator' channel to the list of channels you can pick for the admin app broadcast services (see the sketch after these notes)
Note:
- The API already supports this - https://github.com/alphagov/notifications-api/pull/3262
- The CBC proxy does not yet support the operator channel, so that will need a separate card. That card has not been written yet because the interface between us and the MNOs has not been agreed.
- We will need the ability to select the operator channel for just a single MNO, as we do for the other channels.
- If we add this, we shouldn't actually start using it until the MNO in question gives us the go-ahead.
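As a rough sketch (the field and labels here are assumptions, not
the real form code), the new channel just becomes another option in
the form's choices:

    # hypothetical WTForms-style choices for the broadcast channel field
    CHANNEL_CHOICES = [
        ('test', 'Test'),
        ('operator', 'Operator'),  # new
        ('severe', 'Severe'),
        ('government', 'Government'),
    ]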
---
https://www.pivotaltracker.com/story/show/178485177
Previously this was hidden away in an anonymous __init__.py file.
I did think about splitting the models into individual files, like
we do with the top-level models for the app. Since the models are
only imported in one place - i.e. are all used together - it didn't
seem worth the hassle, so I've kept them in one file.
We only need to assert on the URL for the subsequent POST back to
the server, at which point we can call the test "done()". This is
a technique we use in the following tests as well, so we don't need
to comment on it here.
This is easier than re-assigning the mock functions manually, as
we're reusing Jest's in-built behaviour. Because all the mocks
are restored, we need to move the ones we had in the beforeAll
block into the beforeEach block.
Note: "require('./support/teardown.js')" also resets all Jest
mocks, but "require" only runs once, so we can't use it in a
beforeEach block [1]. We could add a "jest.resetModules()" call to
fix that, but that seems worse on the whole. I think there's a broader
discussion here about whether we could / should have a global
reset of Jest mocks after each test - I quickly tried this and
it causes some existing tests to fail :-|.
[1]: https://stackoverflow.com/questions/48989643/how-to-reset-module-imported-between-tests
When the print provider is unable to print a letter, we need to mark
the letter as failed.
If we mark the letter as a technical-failure, we say that we will fix
the issue, which is wrong because we cannot fix the issue.
If we mark the letter as validation-failed, the letter ends up in the
wrong bucket, so it is not viewable/downloadable by the client.
This PR updates the message for a letter marked as permanent-failure to
better reflect what has actually happened.
Otherwise we get the following error:
________________________________________ ERROR collecting tests/app/utils/test_user.py ________________________________________
import file mismatch:
imported module 'test_user' has this __file__ attribute:
/Users/benthorner/Documents/Projects/admin/tests/app/models/test_user.py
which is not the same as the test file we want to collect:
/Users/benthorner/Documents/Projects/admin/tests/app/utils/test_user.py
HINT: remove __pycache__ / .pyc files and/or use a unique basename for your test file modules
Previously these were lumped together with integration-level tests
for specific endpoints, which test that the decorator is applied to
the endpoint in question.
This saves a bit of repetition, and lets us attach other methods to the
collection, rather than having multiple methods on the user object
prefixed with the same name, or random functions floating about.
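A hypothetical sketch of the pattern (the names are illustrative,
not the real model code):

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Credential:
        name: str
        created_at: datetime

    class Credentials(list):
        # extra behaviour hangs off the collection...
        @property
        def newest(self):
            return max(self, key=lambda credential: credential.created_at)

    class User:
        def __init__(self, credentials):
            # ...instead of e.g. user.credential_newest and friends all
            # sitting on the user object with the same prefix
            self.credentials = Credentials(credentials)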
This provides more room for expansion, so we don't get another
massive file to scroll through. We do also have some top-level
files, such as "formatters.py", which we could consider moving
under utils/ in future.
This takes a similar approach to the previous commit. Since the
"training channel" doesn't really exist, we need some extra code
to pre-select it if a service is already in training mode. As in
the previous commit, I've removed a few non-critical test cases
where we really don't need to test exhaustively.
Note that we also need some specific code to avoid pre-selecting an
option for non-broadcast services, which only used to work by fluke:
we would try to populate the field with (False, None, 'all'), which
isn't a valid combination, so nothing was selected.
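A hypothetical sketch of the pre-selection logic (the attribute and
field names are assumptions):

    # the 'training' channel doesn't exist in the API, so derive it from
    # the service's mode; non-broadcast services get no selection at all
    def pre_select_channel(form, service):
        if not service.has_permission('broadcast'):
            return  # leave nothing selected
        if service.trial_mode:
            form.channel.data = 'training'
        else:
            form.channel.data = service.broadcast_channel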
Previously this field had to mimic the final hyphenated string of
the broadcast account type, even though it was only used to select
one of its components. The new, shorter choices make it easier to
simplify the test for the POST request.
I've also deleted a number of test cases for pre-selected radios.
This functionality isn't critical, so we don't need to exhaustively
test every single possible combination of values.
This allows us to start decoupling the form fields from the final,
hyphenated string, which we'll do in the next commits.
Note that I've also removed the conditional that changes the data
of the network field as part of validating it. We shouldn't change
data in validations, and having the new property directly above
makes it clear there's no need for this code.
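A hypothetical sketch of the idea (field names are assumptions): the
hyphenated string becomes a derived property, so validation never
needs to rewrite a field's data:

    class BroadcastAccountTypeForm:
        def __init__(self, service_mode, broadcast_channel, provider):
            self.service_mode = service_mode
            self.broadcast_channel = broadcast_channel
            self.provider = provider

        @property
        def broadcast_account_type(self):
            # composed on demand from the individual fields
            return f'{self.service_mode}-{self.broadcast_channel}-{self.provider}'

e.g. BroadcastAccountTypeForm('live', 'severe', 'all') has an
account type of 'live-severe-all'.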
Previously we had to cope with two forms of the hyphenated string
we use to represent a pending change in broadcast account type.
Using "all" to mean "all providers" matches the behaviour in the
API [1], and means we can remove some complexity.
"training-test-all" isn't ideal, since the provider is irrelevant
for a training mode service. However, this isn't much worse than
the previous "training-test", noting that the channel also has no
relevance. We'll iterate on this in later commits.
[1]: 8e1a144f87/migrations/versions/0352_broadcast_provider_types.py (L14)
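A worked example of what the single canonical form buys us (the
provider values are illustrative): every account type now unpacks
into exactly three parts, so consumers no longer have to special-case
the training string:

    mode, channel, provider = 'training-test-all'.split('-')  # previously 'training-test'
    mode, channel, provider = 'live-severe-all'.split('-')    # 'all' means all providers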
These are inconsistent with all the other tests in the same file, and
one of them was incorrect ('_post' was testing a GET). I don't think
we get any value from them, given the inconsistency.