Direct string comparison in multiple places is prone to typos. It
also means that a consumer of the class needs to know that a user's
pending or active status is held in the `state` property, which is
an implementation detail.
To keep the conditionals in the Jinja template more readable, this
commit moves the logic into a method on the model, where it can
be split over multiple statements and lines.
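As a minimal sketch of the pattern (the real property names in the
codebase may differ), the string comparison moves behind a property:

```python
class User:
    def __init__(self, state):
        self.state = state

    @property
    def is_active(self):
        # Consumers never need to know that the underlying field is
        # `state`, or which exact string it holds.
        return self.state == "active"

    @property
    def is_pending(self):
        return self.state == "pending"
```

The Jinja template can then test `user.is_pending` instead of repeating
the string comparison.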
Implementing `__lt__` makes objects sortable.
Rather than reimplementing `__lt__` each time we want to make an object
sortable by name, we can inherit from the extant `SortByNameMixin`.
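As an illustration of the idea only (the mixin's real implementation and
the `Service` class here are stand-ins), something like this is enough
to make `sorted()` work on any model with a `name` attribute:

```python
class SortByNameMixin:
    def __lt__(self, other):
        # Case-insensitive ordering by name; sorted() only needs __lt__.
        return self.name.lower() < other.name.lower()


class Service(SortByNameMixin):
    def __init__(self, name):
        self.name = name


services = sorted([Service("Zebra"), Service("apple")])
```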
For a job to be finished there are two requirements:
* the status must be "finished"
* the percentage complete must be 100%
The job status is set to finished when the process_job task finishes
(even though not all of the process_row tasks may have finished). The
percentage_complete is calculated by comparing the number of
notifications in the database with the number of rows in the
spreadsheet.
This was inadvertently changed from an "and" to an "or" clause two years
ago. This meant that people could download a report when the status was
finished but not all notifications were present in the database. Let's
change it back.
7d52ac97f1 (diff-44b012cad205379c481bed244ddb2294bae5ee85dcd01f4aee932a2bd85b67b2L86-R100)
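As a sketch of the corrected condition (the attribute names here are
assumptions, not the exact code):

```python
@property
def finished(self):
    # Both requirements must hold ("and", not "or"): the process_job
    # task has marked the job finished *and* every row in the
    # spreadsheet has a matching notification in the database.
    return self.status == "finished" and self.percentage_complete == 100
```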
When we combine simplified polygons from different areas we try to use
the polygons that are defined in metres as a speed optimisation.
This doesn’t work when those polygons are in several different
projections because the distance in metres is relative to the edge of
the projection. Just naively choosing one projection means that some
of the polygons will shift position on the earth by hundreds of kilometres
once they are converted back to degrees.
To avoid this we need to:
- check for situations where we have polygons in multiple different
projections
- transform these polygons back into degrees and let `Polygons` pick a
common projection for all of them before doing any
merging/smoothing/etc. (see the sketch below)
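A rough sketch of that check, using hypothetical helpers `crs_of` and
`to_degrees` rather than the real utils API:

```python
def combine_polygon_groups(polygon_groups):
    # crs_of and to_degrees are hypothetical helpers for this sketch.
    # If every group was simplified in the same metre-based projection
    # we can merge them as-is; otherwise fall back to degrees and let
    # `Polygons` pick one common projection before merging/smoothing.
    if len({crs_of(group) for group in polygon_groups}) > 1:
        polygon_groups = [to_degrees(group) for group in polygon_groups]

    return [polygon for group in polygon_groups for polygon in group]
```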
These are no longer used anywhere. Note that pyexcel-xlsx is still
a necessary dependency: it acts as an implicit plugin to pyexcel
that, surprisingly, allows pyexcel to...read Excel files [1].
[1]: https://github.com/pyexcel/pyexcel-xlsx#as-a-pyexcel-plugin
Looking up the coordinate reference system for a given polygon’s bounds
is surprisingly expensive.
We can make things run faster by precomputing it at the time of
generating the simplified polygons, then storing it in the SQLite
database alongside the polygon data.
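A minimal sketch of the idea, assuming a hypothetical `utm_crs` column
and table layout (the real schema and column names will differ):

```python
import sqlite3


def store_simplified_polygons(conn: sqlite3.Connection, area_id, polygons_json, utm_crs):
    # Persist the precomputed coordinate reference system alongside the
    # polygon data, so that at runtime it is a cheap column read rather
    # than an expensive lookup from the polygon's bounds.
    conn.execute(
        "INSERT INTO polygons (area_id, simple_polygons, utm_crs) VALUES (?, ?, ?)",
        (area_id, polygons_json, utm_crs),
    )
    conn.commit()
```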
Transforming full resolution polygons to linear coordinates is somewhat
expensive. We can make things run faster by:
- looking at `simple_polygons`, which have fewer points and are therefore
faster to transform.
- reusing polygons that have already been transformed where possible
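One way to get the reuse is a simple cache keyed by area; a hypothetical
sketch (`transform_to_metres` and the attribute names are assumptions):

```python
_transform_cache = {}


def transformed_simple_polygons(area):
    # Transform each area's simple polygons at most once, then reuse
    # the result on subsequent calls.
    if area.id not in _transform_cache:
        _transform_cache[area.id] = transform_to_metres(area.simple_polygons)
    return _transform_cache[area.id]
```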
Brings in https://github.com/alphagov/notifications-utils/pull/889/files
At the moment, we are not doing any transformation of features before
applying geometric algorithms to them. This is, in effect, assuming that
the earth is flat.
This new version of utils implements the transformation of our polygons
to a Cartesian plane. In other words, it converts them from being
defined in spherical degrees to metres.
For the admin app this means we need to update the places where the
code expects measurements in degrees so that they work in metres
instead.
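The utils change is linked above; purely as an illustration of what the
degrees-to-metres conversion means, here is how a single point would be
projected with pyproj (the projection utils actually chooses may
differ):

```python
from pyproj import Transformer

# WGS84 degrees -> UTM zone 30N (a metre-based projection covering
# most of the UK); chosen here only for illustration.
to_metres = Transformer.from_crs("EPSG:4326", "EPSG:32630", always_xy=True)

lon, lat = -0.1276, 51.5072  # central London, in degrees
x_metres, y_metres = to_metres.transform(lon, lat)
```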
This makes it clearer that this model collection isn’t the organisations
for a user or a service or some other entity, like most model
collections are.
It will also let us make a separate Organisations model, without the
name conflicting.
The model layer shouldn’t need to be concerned with sorting. For
services this means we can make a `SerialisedModelCollection` rather
than writing a manual loop.
`reference` isn’t very human-friendly – the Environment Agency just
supply a UUID in this field.
The Environment Agency also populate the `<event>` field with some
human-readable text, for example:
> 013 Issue Severe Flood Warning EA
(013 is an ‘area code’ which will be meaningful to the Flood Warning Service team)
This commit changes the frontend to display the value of the `cap_event`
field, if it’s present, which is where the API stores the value of the
`<event>` field from the original CAP XML.
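A hedged sketch of the fallback on the frontend model (the property
name is illustrative):

```python
@property
def display_title(self):
    # Prefer the human-readable <event> text supplied by the
    # Environment Agency; fall back to the UUID reference otherwise.
    return self.cap_event or self.reference
```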
***
Depends on:
- [x] https://github.com/alphagov/notifications-api/pull/3344/files
This was an awkward property to have in a model class because:
- it’s presentation, not data about the object
- it munged together data from a different object, so it wasn’t well
encapsulated
This commit moves the logic to the template, where it can reference the
`Organisation` and `User` models directly.
We can make the `as_agreement_statement_for_go_live_request` method less
complex by offloading some of the content it returns to the presentation
layer.
The new way of creating support tickets can be seen in
[notifications-utils](https://github.com/alphagov/notifications-utils/pull/899).
This changes tickets created when making a request to go live to use
the new way, while other tickets stay the same for now.
The go live tags have been removed. Some of these had become
unnecessary since you can't make the request to go live unless they are
true (e.g. `notify_go_live_email_reply_to`). Others will always get
added by a Zendesk macro when the ticket is replied to, so we don't need
to add them here.
The API returns UTC timestamps, so we should keep them as UTC
timestamps until the very last moment, and only convert them into BST
when we know we want to return them to a user (i.e. in
contact-list.html and other places like that).
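As a sketch of converting only at display time (the filter name and
formatting are assumptions):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo


def format_datetime_local(utc_value: datetime) -> str:
    # Keep values in UTC everywhere else; convert to UK local time
    # (GMT or BST as appropriate) only when rendering for the user.
    return (
        utc_value.replace(tzinfo=timezone.utc)
        .astimezone(ZoneInfo("Europe/London"))
        .strftime("%d %B %Y at %H:%M")
    )
```

A Jinja filter along these lines can then be applied in
contact-list.html and similar templates.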
The original code to raise the exception was flawed: if a broadcast
only had a single area that was invalid, we would assume it was a
custom broadcast [1]. Since the recent changes [2] fixed the flaw
we're now getting exceptions for broadcasts of this kind.
It's not practical to go and manually fix the invalid broadcasts,
and the likelihood is there will be more in future as the set of
areas we support changes. This takes a pragmatic approach of simply
logging the issue and pretending such broadcasts are custom.
[1]: 926ada2f21
[2]: https://github.com/alphagov/notifications-admin/pull/4014/files#diff-2dd8f77d6df281e7674b20263cdf27a3d58b839dc5930c0087ac8b9749b313e4R92
Previously we relied on the API defaulting this field to an empty
array [1], but that conflicts with using it to decide whether a
broadcast is custom or created in this app.
[1]: 3779146cc5/app/models.py (L2342)
Custom broadcasts created directly via the API app won't have area
IDs since [1], where we started to distinguish between "names" (all
broadcasts have these) and IDs (for broadcasts created in this app).
We forgot to propagate the distinction into the code here.
This code fixes the bug for all broadcasts created after [1]. Any
custom broadcasts created before [1] will have their "ids" field set
instead of "names" so we won't show anything for them. This seems
reasonable as we don't support custom broadcasts yet.
[1]: 023a06d5fb
The aggregate names don't need to be sorted, but it helps to do
this for the tests, since the implementation of sets may not be
stable between machines, which could lead to sporadic test failures.
I did also toy with sorting granular area names so they have a
similar ordering, but this would put them out of order with the IDs
and really the important thing is that the ordering is stable.
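Concretely, the stabilising step is just a `sorted()` over the set (the
attribute name here is illustrative):

```python
aggregate_names = sorted({area.aggregate_name for area in areas})
```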
Depends on: https://github.com/alphagov/notifications-api/pull/3314
Previously:
- We introduced a new "areas_2" API field that can
populate the "areas" column in the same way.
- We updated the Admin app to use the new field, so that
the old "areas" and "simple_polygons" API fields are unused.
- We repurposed the unused "areas" API field to use
the new "areas_2" format.
This PR:
- We can switch the Admin app back to the "areas" field,
but in the new format.
Future PRs:
- Remove support for the unused "areas_2" field (migration complete)
This will be run with a CSV of all broadcast messages. Since very
few users are creating or updating broadcasts, it's highly unlikely
we'll encounter a race condition during the update.
These will be used as a fallback for display in gov.uk/alerts. It
also helps to have them in the DB to aid in quickly identifying
where an alert was sent, which is hard from the IDs.
We will look at backfilling names for past alerts in future work.
Previously we just held the new area_ids in memory. Setting them
on the object means we can reuse its functionality to get polygons
and also avoids confusion if in future we try to continue using
the object after calling "add_areas" or "remove_areas".
Previously we were passing a flag to the API which handled this. Now
we are doing it at the time of clicking the link, not at the time of
storing the new password. We don’t need to update the timestamp twice,
so this commit removes the code which tells the API to do it.
When someone uses a fresh password reset link they have proved that they
have access to their inbox.
At the moment, when revalidating a user’s email address we wait until
after they’ve put in the 2FA code before updating the timestamp which
records when they last validated their email address[1].
We can’t think of a good reason that we need the extra assurance of a
valid 2FA code to assert that the user has access to their email –
they’ve done that just by clicking the link. When the user clicks the
link we already update their failed login count before they enter
their 2FA code, so it makes sense to handle `email_access_validated_at`
then too.
As a bonus, the functional tests never go as far as getting a 2FA code
after a password reset[2], so the functional test user never gets its
timestamp updated. This causes the functional tests to start failing after
90 days. By moving the update to this point we ensure that the
functional tests will keep passing indefinitely.
1. This code in the API (91542ad33e/app/dao/users_dao.py (L131))
which is called by this code in the admin app (9ba37249a4/app/utils/login.py (L26))
2. 5837eb01dc/tests/functional/preview_and_dev/test_email_auth.py (L43-L46)
Theoretically the maximum expiry time of a broadcast should be 24 hours.
If it goes over 24 hours there can be problems.
However we want to make it more conservative to mitigate two potential
issues:
1. The CBC has a repetition period (e.g. 60 seconds) and a count (e.g.
1,440). If these were slightly inaccurate or generous it could take
us over 24 hours. For this reason we should give ourselves half an
hour of buffer.
2. It’s possible that the CBC could interpret a UTC time as BST or vice
versa. Until we’re sure that it’s using UTC everywhere, we need to
remove another whole hour as buffer.
In total this means we remove 1 hour 30 minutes from 24 hours, giving an
expiry time of 22 hours 30 minutes.
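Spelling out the arithmetic with the example figures above:

```python
from datetime import timedelta

repetition_period = timedelta(seconds=60)
count = 1_440

nominal_length = repetition_period * count    # exactly 24 hours
buffer = timedelta(hours=1, minutes=30)       # CBC slack + UTC/BST doubt

max_expiry = nominal_length - buffer
assert max_expiry == timedelta(hours=22, minutes=30)
```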
While testing alerts on these channels the MNOs sometimes need to
restart their CBCs to make sure everything is failing over properly.
If the CBC does not come back up, for whatever reason, then we are left
in a state where the alert can’t be cancelled.
To minimise the impact to the public in this scenario we should keep the
expiry time at 4 hours for alerts sent on test channels. We recently
increased it back up to 24 hours for all channels, so this in effect is
reverting that change for channels that won’t be used in a real
emergency.