When a precompiled letter is sent to us, we set the `to` field to
'Provided as PDF' in
1c1023a877/app/v2/notifications/post_notifications.py (L100-L104)
This also sets `normalised_to` to `providedaspdf`.
However, when template preview sanitises the letter, pulls out the
address and gives it to the API, we were only setting `to` to the new
address and had forgotten to also update `normalised_to` with the
normalised version. This meant that for all these letters we
accidentally left `normalised_to` as `providedaspdf`. The impact of
this was that we could not then search for these letters in the admin
user interface, as searches there rely on the `normalised_to` field
containing the recipient address.
This commit fixes that bug by also setting the `normalised_to` field.
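A minimal sketch of the shape of the fix, with `normalise_recipient`
standing in for whatever normalisation the real code does:

```python
def normalise_recipient(address):
    # Stand-in for the real normalisation: strip whitespace and lowercase,
    # which is roughly how 'Provided as PDF' becomes 'providedaspdf'.
    return "".join(address.split()).lower()


def set_letter_recipient(notification, extracted_address):
    # The fix: whenever we overwrite `to` with the address extracted from
    # the PDF, update `normalised_to` in the same place.
    notification.to = extracted_address
    notification.normalised_to = normalise_recipient(extracted_address)
```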
We don’t store everything that comes in the CAP XML when someone creates
a broadcast via the API.
One thing we do store is `<identifier>` (in a column called `reference`),
which is an identifier for the broadcast that is unique to the external
system.
We show this in the front end instead of the template name, because
broadcasts created from the API don’t use templates.
However, this ID isn’t very friendly – the Environment Agency just
supply a UUID.
The Environment Agency also populate the `<event>` field with some human
readable text, for example:
> 013 Issue Severe Flood Warning EA
(013 is an area code which will be meaningful to the Flood Warning
Service team)
We should show this in the UI instead of the reference. The first step
towards this is storing it in the database and returning it in the REST
endpoints.
Later we can have the admin app prefer `cap_event` over `reference`,
where `cap_event` is present.
We can’t backfill this data because we don’t keep a copy of the original
XML.
It seems `<event>` is a mandatory property of `<info>`, so we don’t
need to worry about the field being missing (`<info>` is optional in
CAP, but we require it because it contains things like the areas we
need in order to send out the broadcast).
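For illustration, a minimal sketch of pulling `<event>` out of the
XML, assuming the CAP 1.2 namespace:

```python
import xml.etree.ElementTree as ET

CAP_NS = {"cap": "urn:oasis:names:tc:emergency:cap:1.2"}


def get_cap_event(cap_xml):
    # <event> lives inside <info>; take it from the first <info> block
    # so it can be stored in the new cap_event column.
    root = ET.fromstring(cap_xml)
    return root.findtext("cap:info/cap:event", namespaces=CAP_NS)
```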
***
https://www.pivotaltracker.com/story/show/176927060
Previously these metrics weren't very useful because they could be
skewed by long timings for failed notifications, which can take up
to 72 hours to reach a final state. I'm intentionally not trying to
have a dual running period (with the old and new names) because:
- We don't use the current stats for anything (checking Grafana).
- The current stats get turned into a "bucket" metric in Prometheus
[1][2], which isn't very useful because it can only tell us the mean
time to deliver, but we're actually interested in percentiles.
Switching to new names is an opportunity to fix the raw data and
the way it's aggregated, using the same kind of "summary" metric that
we now use for stats about our Celery tasks [3].
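As a sketch of the idea, using the generic Python statsd client (the
metric name and the delivered-only condition are assumptions):

```python
from statsd import StatsClient

statsd = StatsClient(host="localhost", port=8125, prefix="notifications")


def record_delivery_timing(notification):
    # Only record timings for notifications that actually delivered, so
    # failures that hang around for days no longer skew the numbers.
    if notification.status != "delivered":
        return
    elapsed_ms = (notification.updated_at - notification.created_at).total_seconds() * 1000
    statsd.timing("sms.delivery.timing", elapsed_ms)
```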
[1]: c330a8ac8a/paas/statsd/statsd-mapping.yml (L82)
[2]: https://prometheus.io/docs/practices/histograms/#quantiles
[3]: https://github.com/alphagov/notifications-aws/pull/890
This is the newest version.
Pyup is complaining about vulnerabilities in version 1.0.1, specifically:
> Werkzeug version 2.0.2 improves the security of the debugger cookies.
> "SameSite" attribute is set to "Strict" instead of "None", and the
> secure flag is added when on HTTPS.
Previously we were using whatever version of Werkzeug Flask specified;
this pins it to get rid of the vulnerability without having to upgrade
everything at once.
We’ve done this for the admin app already:
https://github.com/alphagov/notifications-admin/pull/4042/files
I suspect the memory usage issues we saw with version 2.0.0 have been
fixed in 2.0.2, per this line in the changelog:
> Fix memory usage for locals when using Python 3.6 or pre 0.4.17 greenlet versions.
> https://github.com/pallets/werkzeug/pull/2212
— https://werkzeug.palletsprojects.com/en/2.0.x/changes/
The `auto-expire-broadcast-messages` task checks for expired broadcasts
at five-minute intervals. This change calls the `publish-govuk-alerts`
task in govuk-alerts if there are expired broadcasts, so that the site
is updated.
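Roughly, as a sketch (Celery's send_task is real; the broker and queue
names here are assumptions):

```python
from celery import Celery

celery_app = Celery(broker="redis://localhost:6379")  # stand-in broker config


def auto_expire_broadcast_messages(expired_broadcasts):
    # After expiring broadcasts, ask govuk-alerts to republish so the
    # public site stops showing the expired alerts.
    if expired_broadcasts:
        celery_app.send_task("publish-govuk-alerts", queue="govuk-alerts")
```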
Co-authored-by: Katie Smith <katie.smith@digital.cabinet-office.gov.uk>
When we send or cancel a broadcast message, we now trigger a task
in the govuk-alerts repo that polls our API for alerts and
publishes a fresh list of them.
Co-authored-by: Pea Tyczynska <pea.tyczynska@digital.cabinet-office.gov.uk>
Previously the wrapper function became a command before it started
mirroring the original (via functools.wraps), which meant any
previously applied option decorators were "lost".*
We didn't notice the problem in the original PR [1] because the new
command under test has its option decorators *after* the command
decorator, in contrast with all other (now broken) commands.
The original wrapper applied the functools decorator first [2],
so this change just reinstates that ordering.
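A sketch of the ordering that works (the decorator here is
hypothetical): functools.wraps copies the original function's
`__dict__`, which is where click stashes options as `__click_params__`,
so it has to run before `click.command` reads them.

```python
import functools

import click


def notify_command(name=None):
    # Hypothetical decorator: mirror the original function *first*, so the
    # __click_params__ added by @click.option are copied onto the wrapper,
    # then turn the wrapper into a command.
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)

        return click.command(name=name)(wrapper)

    return decorator
```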
*This is a hand-wavey explanation as I haven't looked into how
functools.wraps interacts with option decorators.
[1]: 922fd2f333#
[2]: 922fd2f333 (diff-c4e75c8613e916687a97191a7a79110dfb47e96ef7df96f7ba25dd94ba64943dL101)
Include a link to a runbook entry.
Also, the list of acknowledgement files can be very long, so make that
the last thing, and use new lines to space out the message.
It’s confusing that changing `MAX_VERIFY_CODE_COUNT` also limits the
number of failed login attempts that a user of text message 2FA can
make.
This makes the parameters independent, and adds a test to make sure any
future changes which affect the limit of failed login attempts are
covered.
I was doing some analysis and saw that in the last 24 hours the most
codes anyone requested in a 15-minute window was 3.
So I think we can safely reduce this to 5 to get a bit more security,
with enough headroom to avoid any negative impact on the user.
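Sketched as two separate settings (the names and the failed-attempt
value here are illustrative):

```python
MAX_VERIFY_CODE_COUNT = 5  # codes a user can request in the window
MAX_FAILED_VERIFY_COUNT = 10  # wrong-code attempts before lockout (assumed value)
```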
People with dyslexia and dyscalculia find it difficult to transcribe
codes which have consecutive, repeated digits [1].
This commit enhances the algorithm for generating codes so that it
never repeats the previous digit in a code.
This reduces the key space for our codes from 100,000 possibilities to
65,610 possibilities.
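A minimal sketch of the rule (the real implementation may differ):
there are 10 choices for the first digit and 9 for each of the
remaining four, hence 10 × 9⁴ = 65,610.

```python
import secrets


def generate_code(length=5):
    # Draw each digit at random, re-drawing whenever it would repeat the
    # digit immediately before it.
    digits = [secrets.choice("0123456789")]
    while len(digits) < length:
        candidate = secrets.choice("0123456789")
        if candidate != digits[-1]:
            digits.append(candidate)
    return "".join(digits)
```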
1. https://twitter.com/annaecook/status/1442567679710150662
This updates the tickets that are created when the
`check_if_letters_still_pending_virus_check` scheduled task detects
letters in the `pending-virus-check` state.
This is a bit too niche for the README, which should be focussed on
the bare minimum someone needs to know to get started with a repo.
Moving this content to its own doc is consistent with other apps [1]
and gives it more room to grow.
[1]: https://github.com/alphagov/notifications-admin/tree/master/docs
In response to: https://github.com/alphagov/notifications-api/pull/3305#pullrequestreview-726672421
Previously this was added among the public /v2 endpoints, but it's
only meant for internal use. While only the govuk-alerts app would
be able to access it, the location and /v2 URL suggested otherwise.
This restructures the endpoint so it resembles other internal ones.
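As a sketch of the shape (the blueprint and route names are
assumptions, not the actual code):

```python
from flask import Blueprint, jsonify

# Registered alongside the other internal blueprints rather than under
# the public /v2 prefix.
govuk_alerts_blueprint = Blueprint("govuk_alerts", __name__)


@govuk_alerts_blueprint.route("/govuk-alerts", methods=["GET"])
def get_broadcast_messages():
    # Internal-only: returns current broadcasts for govuk-alerts to render.
    return jsonify(broadcast_messages=[])
```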
Make sure timestamps returned from the API are always consistent.
The only place in models where we're still serialising a BST timestamp
is the Notification.serialize_for_csv method, which is at least a bit
different because it is user-facing (it also returns a formatted,
human-readable notification_status, for example).
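For example, a sketch of the kind of consistent serialisation (the
helper name is assumed):

```python
from datetime import timezone


def serialize_datetime(value):
    # Always emit UTC in ISO 8601 with a trailing "Z", instead of
    # converting some timestamps to BST and not others.
    if value is None:
        return None
    return value.replace(tzinfo=timezone.utc).isoformat().replace("+00:00", "Z")
```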
These were only there to ensure a DB session existed for the test
and are now included implicitly as one of the dependencies of the
"sample_user" fixture.
This switches a number of fixtures to use "sample_user", which is
equivalent to calling the previous "create_user" function back when it
defaulted the email to "notify@...".
We have a lot of commands and it's important we test the ones that
are meant to be used in the future to ensure they work when they're
needed. Testing Flask commands is usually easy, as described in their
docs [1], but I had to make some changes to the way we decorate the
command functions so they can work with test DB objects - I couldn't
find any example of someone else encountering the same problem.
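The pattern from the Flask docs, roughly (the fixture and command
names here are placeholders):

```python
def test_my_command(notify_api):
    # notify_api stands in for whatever fixture provides the Flask app.
    runner = notify_api.test_cli_runner()
    result = runner.invoke(args=["my-command", "--some-option", "value"])
    assert result.exit_code == 0
```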
[1]: https://flask.palletsprojects.com/en/2.0.x/testing/#testing-cli-commands
Specifically, no longer test for a P1 Zendesk ticket when sending an
alert, and drop the misleading "p1" from the test name when cancelling
an alert.
We're no longer creating a P1 from the code, but we _do_ create a
Zendesk ticket when sending out an alert.
When cancelling, what we want to test is that we don't create a second
ticket when the alert is cancelled.
This is happening on the AWS side now as part of
alphagov/notifications-broadcasts-infra#267 - but we still want to keep
the Zendesk ticket as it contains useful context _and_ provides
visibility to the team.