This is a bit too niche for the README, which should be focussed on
the bare minimum someone needs to know to get started with a repo.
Moving this content to its own doc is consistent with other apps [1]
and gives it more room to grow.
[1]: https://github.com/alphagov/notifications-admin/tree/master/docs
In response to: https://github.com/alphagov/notifications-api/pull/3305#pullrequestreview-726672421
Previously this was added among the public /v2 endpoints, but it's
only meant for internal use. While only the govuk-alerts app would
be able to access it, the location and /v2 URL suggested otherwise.
This restructures the endpoint so it resembles other internal ones.
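Roughly, the move looks like this (blueprint name and URL prefixes
are assumptions, not taken from the code):

    from flask import Blueprint

    # Before: the endpoint sat under the public /v2 prefix.
    # After: a standalone blueprint, registered like the other
    # internal endpoints.
    govuk_alerts_blueprint = Blueprint(
        "govuk_alerts", __name__, url_prefix="/govuk-alerts"
    )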
Make sure timestamps returned from the API are always consistent.
The only place in models where we still serialize a BST timestamp
is the Notification.serialize_for_csv method, which is at least a
somewhat different case as it's user-facing (it also returns a
formatted, human-readable notification_status, for example).
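As a rough sketch of the distinction, using stdlib zoneinfo (our
code uses its own timezone helpers, so this is illustrative only):

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    created_at = datetime(2021, 7, 1, 12, 0, tzinfo=timezone.utc)

    # API responses: always UTC
    api_value = created_at.strftime("%Y-%m-%dT%H:%M:%S.%fZ")

    # serialize_for_csv: user-facing, so UK local time (BST in July)
    csv_value = created_at.astimezone(
        ZoneInfo("Europe/London")
    ).strftime("%Y-%m-%d %H:%M:%S")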
These were only there to ensure a DB session existed for the test
and are now included implicitly as one of the dependencies of the
"sample_user" fixture.
This switches a number of fixtures to use "sample_user", which is
equivalent to calling the previous "create_user" function with its
old default email of "notify@...".
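A minimal sketch of the shape this takes (fixture and helper names
are indicative rather than exact):

    import pytest

    @pytest.fixture
    def sample_user(notify_db_session):
        # Depending on notify_db_session guarantees a DB session
        # exists, so tests that use sample_user no longer need to
        # request one themselves.
        return create_user()  # old helper, defaulted email to "notify@..."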
We have a lot of commands, and it's important to test the ones that
are meant to be used in the future so we know they'll work when
they're needed. Testing Flask commands is usually straightforward,
as described in their docs [1], but I had to change the way we
decorate the command functions so they can work with test DB
objects - I couldn't find any example of anyone else encountering
the same problem.
[1]: https://flask.palletsprojects.com/en/2.0.x/testing/#testing-cli-commands
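A minimal sketch following the pattern in the Flask docs (the
command name and arguments here are hypothetical):

    def test_my_command(notify_api, sample_user):
        runner = notify_api.test_cli_runner()
        result = runner.invoke(
            args=["my-command", "--user-id", str(sample_user.id)]
        )
        assert result.exit_code == 0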
Specifically: no longer test for a P1 Zendesk ticket when sending
an alert, and drop the misleading "p1" from the test name for
cancelling an alert.
We're no longer creating a P1 from the code, but we _do_ create a
Zendesk ticket when sending out an alert.
When cancelling, what we want to test is that we don't create a
second ticket.
This is happening on the AWS side now as part of
alphagov/notifications-broadcasts-infra#267 - but we still want to keep
the Zendesk ticket as it contains useful context _and_ provides
visibility to the team.
Previously I had to handcraft some SQL to give myself access to a
broadcast service I created locally. I've done this enough times
that I think it's worth automating.
This is so we can distinguish custom broadcasts in the Admin app
[1]. I've also extended the POST test for custom broadcasts to
check we're correctly reading data for "names", as this wasn't
being tested previously.
[1]: 411fda81c0
Unlike broadcasts created in the Admin app, these are only expected
to have "names" and "simple_polygons" in their areas column [1].
The migration command in the Admin app [2] isn't suitable for these
broadcasts as it would try to aggregate their areas, etc.
I've made the command conditional on "areas" being present (in the
areas column) so it doesn't pick up any new custom broadcasts.
[1]: 023a06d5fb
[2]: https://github.com/alphagov/notifications-admin/pull/4011
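A minimal sketch of that guard (the migration itself is elided):

    for broadcast in BroadcastMessage.query.all():
        if "areas" not in (broadcast.areas or {}):
            # Custom broadcast: only "names" and "simple_polygons",
            # so there's nothing to migrate - skip it.
            continue
        # ...migrate the old-format broadcast as before...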
As of 041d8b48a2
it’s not valid to call `random.choices` without giving at least one of
the options a positive weighting.
This makes sense, because giving a zero weighting is effectively
saying ‘there’s only one choice, but don’t choose it’.
In our codebase this is applicable where there’s only one international
provider, which we want to use even when it’s been de-prioritised for
domestic SMS.
This doesn’t cause a problem now, but will if we upgrade to Python
versions greater than 3.9.0.
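A minimal illustration (the provider names are just examples):

    import random

    # Fine: at least one weight is positive.
    random.choices(["mmg", "firetext"], weights=[0, 100])

    # Raises ValueError on newer Pythons: the total of the weights
    # must be greater than zero.
    random.choices(["sole-international-provider"], weights=[0])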
Broadcasts created via the API [1] and the Admin app [2] should
both now have this field set. It's also more informative to show
this, and broadcasts created via the API don't have IDs anyway.
There's a small risk that an old broadcast that gets approved won't
have this data, but it's for information only and we intend to
backfill all old broadcasts in the near future.
[1]: 023a06d5fb
[2]: 7dbe3afa19
In one case ("areas=['manchester']") the format was even invalid,
but in general the original value of the column is pretty much
irrelevant for tests that involve updating it (it's highly unlikely
the column would default to the same value as the test data).
For the public API we actually receive a "name" instead of an ID,
which we also want to start sending from the Admin app.
Unlike IDs, which aren't really used anywhere, the names are needed
to display the alerts on gov.uk/alerts.
This is necessary until:
- The Admin app is using the new "areas(_2)" format to store and
retrieve data.
- We've migrated all existing broadcast messages to use the new
format.
Note that "areas" / "ids" isn't actually used for anything except
printing out the PagerDuty message - it's not sent to the proxy [1].
[1]: 6edc6c70aa/app/celery/broadcast_message_tasks.py (L190-L193)
Currently we have:
- An "areas" column in the DB that stores a JSON blob.
- An "areas" field inside the "areas" JSON that stores area IDs.
- Each field has to be manually copied into the JSON column.
We want to move to:
- An "areas" column in the DB (unchanged).
- An "ids" field inside the "areas" JSON (to replace "areas").
- The Admin app sending other data inside an "areas" JSON field.
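Illustrative shapes for the two states above (IDs, names and
polygons made up):

    current = {"areas": ["ctry19-E92000001"]}

    desired = {
        "ids": ["ctry19-E92000001"],
        "names": ["England"],
        "simple_polygons": [[[54.3, -2.5], [54.3, -2.4], [54.2, -2.4]]],
    }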
The API design for areas is confusing and difficult to extend.
Here we duplicate the current API functionality using an "areas_2"
field. Once the Admin app is using this field, we'll be able to
rename it to just "areas", which is where we want to get to.
In the next commits we'll build on this to support the migration
from "areas"."areas" to "areas"."ids".
This is a temporary feature to make it easy to migrate the format
of the "areas" column and backfill extra data for it.
It's not possible to use this feature to update the status of an
old broadcast message, so the risk from this override is minimal.