Previously the wrapper function became a command before it started
mirroring the original (via functools.wraps), which meant any option
decorators applied before ours were "lost".*
We didn't notice the problem in the original PR [1] because the new
command under test has its option decorators *after* the command
decorator, in contrast with all other (now broken) commands.
The original wrapper applied the functools decorator first [2],
so this change just reinstates that ordering.
*This is a hand-wavey explanation as I haven't looked into how
functools.wraps interacts with option decorators.
[1]: 922fd2f333
[2]: 922fd2f333 (diff-c4e75c8613e916687a97191a7a79110dfb47e96ef7df96f7ba25dd94ba64943dL101)
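To illustrate the ordering (the decorator below is a hypothetical
sketch, not the repo's actual code): functools.wraps copies the
original function's __dict__, including the __click_params__ list
that @click.option attaches, so it has to run before click.command
builds the command.

    import functools

    import click

    def notify_command(name=None):
        def decorator(func):
            # Decorators apply bottom-up: functools.wraps runs first,
            # copying func.__dict__ (including __click_params__ from any
            # @click.option below) onto the wrapper; only then does
            # click.command build the command. Reversing the two drops
            # the options.
            @click.command(name=name)
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                return func(*args, **kwargs)

            return wrapper

        return decorator

    @notify_command(name="greet")
    @click.option("--who", default="world")
    def greet(who):
        click.echo(f"hello {who}")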
It's confusing that changing `MAX_VERIFY_CODE_COUNT` also limits the
number of failed login attempts that a user of text message 2FA can
make.
This makes the parameters independent, and adds a test to make sure any
future changes which affect the limit of failed login attempts are
covered.
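A sketch of the decoupling (the constant names here are illustrative,
not necessarily the repo's): the two limits become separate config
values, and a test pins the login limit so any future change to it
has to be deliberate.

    # Hypothetical config values illustrating the decoupling.
    MAX_VERIFY_CODE_COUNT = 5      # codes a user can request
    MAX_FAILED_LOGIN_COUNT = 10    # failed 2FA attempts before lock-out

    def test_failed_login_limit_is_pinned():
        # Fails whenever the limit changes, forcing a conscious review.
        assert MAX_FAILED_LOGIN_COUNT == 10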
I was doing some analysis and saw that, in the last 24 hours, the
most codes anyone requested in a 15 minute window was 3.
So I think we can safely reduce this to 5 to get a bit more security,
with enough headroom to avoid any negative impact on users.
People with dyslexia and dyscalculia find it difficult to transcribe
codes which have consecutive, repeated digits [1].
This commit enhances the algorithm for generating codes so that it
never repeats the previous digit in a code.
This reduces the key space for our codes from 100,000 possibilities to
65,610 possibilities.
[1]: https://twitter.com/annaecook/status/1442567679710150662
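A minimal sketch of such a generator (assuming 5-digit codes; the
function name is illustrative): each digit after the first is drawn
from the 9 digits that differ from its predecessor, giving
10 * 9^4 = 65,610 possible codes.

    import secrets

    def create_secret_code(length=5):
        digits = "0123456789"
        code = secrets.choice(digits)
        while len(code) < length:
            # Choose from the 9 digits that differ from the previous one.
            code += secrets.choice(digits.replace(code[-1], ""))
        return code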
This updates the tickets that are created when the
`check_if_letters_still_pending_virus_check` scheduled task detects
letters in the `pending-virus-check` state.
This is a bit too niche for the README, which should be focussed on
the bare minimum someone needs to know to get started with a repo.
Moving this content to its own doc is consistent with other apps [1]
and gives it more room to grow.
[1]: https://github.com/alphagov/notifications-admin/tree/master/docs
In response to: https://github.com/alphagov/notifications-api/pull/3305#pullrequestreview-726672421
Previously this was added among the public /v2 endpoints, but it's
only meant for internal use. While only the govuk-alerts app would
be able to access it, the location and /v2 URL suggested otherwise.
This restructures the endpoint so it resembles other internal ones.
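Roughly, "resembles other internal ones" means a dedicated blueprint
off an internal path rather than /v2 (the names and URL below are
hypothetical, for illustration only):

    from flask import Blueprint, jsonify

    govuk_alerts_blueprint = Blueprint(
        "govuk_alerts", __name__, url_prefix="/internal"
    )

    @govuk_alerts_blueprint.route("/govuk-alerts", methods=["GET"])
    def get_broadcast_messages():
        # Served to the govuk-alerts app only, not part of the public API.
        return jsonify(alerts=[])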
Make sure timestamps returned from the API are always consistent.
The only place in models where we still serialize a BST timestamp is
the Notification.serialize_for_csv method, which is at least a bit
different as it's user-facing (it also returns a formatted,
human-readable notification_status, for example).
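As a self-contained illustration of the convention (using zoneinfo
rather than the repo's own helpers): the API always serializes UTC,
and only the user-facing CSV path converts to London time.

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    created_at = datetime(2021, 7, 1, 12, 0, tzinfo=timezone.utc)

    # API responses: always UTC.
    api_value = created_at.isoformat()

    # CSV download only: local (London) time for the person reading it.
    csv_value = created_at.astimezone(ZoneInfo("Europe/London")).strftime(
        "%Y-%m-%d %H:%M:%S"
    )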
These were only there to ensure a DB session existed for the test
and are now included implicitly as one of the dependencies of the
"sample_user" fixture.
This switches a number of fixtures to use "sample_user", which is
equivalent to calling the previous "create_user" function with its
old default email of "notify@...".
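The fixture shape looks roughly like this (hypothetical stand-ins for
the real conftest fixtures): because "sample_user" depends on the DB
session fixture, any test requesting it gets a session implicitly.

    import pytest

    @pytest.fixture
    def notify_db_session():
        session = object()  # set up a real DB session here
        yield session       # and tear it down afterwards

    @pytest.fixture
    def sample_user(notify_db_session):
        # Requesting notify_db_session here means every user of
        # sample_user gets a DB session without asking for one.
        return {"email": "notify@..."}  # stands in for create_user(...)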
We have a lot of commands, and it's important to test the ones that
are meant for future use so that they work when they're needed.
Testing Flask commands is usually easy, as described in their docs
[1], but I had to change the way we decorate the command functions so
they can work with test DB objects - I couldn't find any example of
someone else encountering the same problem.
[1]: https://flask.palletsprojects.com/en/2.0.x/testing/#testing-cli-commands
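Following the docs pattern, a test looks roughly like this (the
command name and fixture are hypothetical):

    def test_my_command(notify_api):
        runner = notify_api.test_cli_runner()
        result = runner.invoke(args=["my-command", "--user-id", "1234"])
        assert result.exit_code == 0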
Specifically, we no longer test for a P1 Zendesk ticket when sending
an alert, and drop the misleading "p1" from the test name for
cancelling an alert.
We're no longer creating a P1 from the code, but we _do_ create a
Zendesk ticket when sending out an alert.
When cancelling, what we want to test is that we don't create a second
ticket when the alert is cancelled.
This is happening on the AWS side now as part of
alphagov/notifications-broadcasts-infra#267 - but we still want to keep
the Zendesk ticket as it contains useful context _and_ provides
visibility to the team.
Previously I had to handcraft some SQL to give myself access to a
broadcast service I created locally. I've done this enough times
that I think it's worth automating.
This is so we can distinguish custom broadcasts in the Admin app
[1]. I've also extended the POST test for custom broadcasts to
check we're correctly reading data for "names", as this wasn't
being tested previously.
[1]: 411fda81c0
Unlike broadcasts created in the Admin app, these are only expected
to have "names" and "simple_polygons" in their areas column [1].
The migration command in the Admin app [2] isn't suitable for these
broadcasts as it would try to aggregate their areas, etc.
I've made the command conditional on "areas" being present (in the
areas column), so it doesn't pick up any new custom broadcasts.
[1]: 023a06d5fb
[2]: https://github.com/alphagov/notifications-admin/pull/4011
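A rough sketch of that guard (the function and attribute names are
illustrative, not the repo's code):

    def migrate_legacy_broadcasts(broadcast_messages, migrate):
        for message in broadcast_messages:
            areas = message.areas or {}
            # Custom broadcasts only carry "names" and "simple_polygons",
            # so skip any row without legacy "areas" data.
            if "areas" not in areas:
                continue
            migrate(message)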