The aggregate names don't need to be sorted, but doing so helps the
tests: set iteration order may not be stable between machines, which
can lead to sporadic test failures.
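As a minimal sketch of the idea (the names here are illustrative, not
the real dataset): sorting removes any dependence on set iteration
order, so test expectations stay deterministic.

```python
# Illustrative sketch: set iteration order is an implementation detail,
# so we sort before comparing in tests to get a stable ordering.
def sorted_aggregate_names(names):
    # sorted() accepts any iterable (including a set) and returns a list
    # in a deterministic lexicographic order
    return sorted(names)


aggregate_names = {"Kent", "Bromley", "Essex"}
print(sorted_aggregate_names(aggregate_names))  # ['Bromley', 'Essex', 'Kent']
```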
I also toyed with sorting the granular area names so they have a
similar ordering, but that would put them out of order with their IDs,
and really the important thing is that the ordering is stable.
I did consider whether to store this explicitly in the SQLite DB,
but this is less effort for now and we can always switch to that
more robust approach in future if we need to.
This applies some heuristics to try and keep the overall list of
areas short when many are selected in the same wider area.
Currently we only have relationship information between upper- and
lower-tier local authorities, so we can't / won't aggregate up to
Greater London (its own special thing) or whole countries.
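One possible shape for the heuristic, sketched below. All names and the
parent/child mapping are hypothetical stand-ins, not the real data: if
every lower-tier area under an upper-tier authority is selected, we show
just the upper-tier name.

```python
# Hypothetical lower-tier -> upper-tier mapping (illustrative only)
LOWER_TO_UPPER = {
    "Dover": "Kent",
    "Thanet": "Kent",
    "Folkestone & Hythe": "Kent",
}


def aggregate_area_names(selected, lower_to_upper=LOWER_TO_UPPER):
    # Group the selected lower-tier areas by their upper-tier parent
    by_parent = {}
    for name in selected:
        by_parent.setdefault(lower_to_upper.get(name), []).append(name)

    result = []
    for parent, children in by_parent.items():
        all_children = {c for c, p in lower_to_upper.items() if p == parent}
        if parent and set(children) == all_children:
            # Everything under this parent is selected, so collapse the
            # list down to the single upper-tier name
            result.append(parent)
        else:
            result.extend(children)
    # Sort for a stable ordering, as discussed above
    return sorted(result)
```

With this sketch, selecting all three districts yields just `["Kent"]`,
while a partial selection stays granular.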
Depends on: https://github.com/alphagov/notifications-api/pull/3314
Previously:
- We introduced a new "areas_2" API field that can
populate the "areas" column in the same way.
- We updated the Admin app to use the new field, so that
the old "areas" and "simple_polygons" API fields are unused.
- We repurposed the unused "areas" API field to use
the new "areas_2" format.
This PR:
- We can switch the Admin app back to the "areas" field,
but in the new format.
Future PRs:
- Remove support for the unused "areas_2" field (migration complete)
This will be run with a CSV of all broadcast messages. Since very
few users are creating or updating broadcasts, it's highly unlikely
we'll encounter a race condition during the update.
These will be used as a fallback for display in gov.uk/alerts. It
also helps to have them in the DB so we can quickly identify
where an alert was sent, which is hard to do from the IDs alone.
We will look at backfilling names for past alerts in future work.
Previously we just held the new area_ids in memory. Setting them
on the object means we can reuse its functionality to get polygons
and also avoids confusion if in future we try to continue using
the object after calling "add_areas" or "remove_areas".
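A minimal sketch of the design choice, with all class and attribute
names illustrative rather than taken from the real code: mutating the
object keeps every later call consistent with the updated areas, instead
of a separate in-memory list drifting out of sync.

```python
# Illustrative sketch: storing area IDs on the object itself, so any
# behaviour that reads them (e.g. fetching polygons) stays consistent
class BroadcastMessage:
    def __init__(self, area_ids):
        self.area_ids = list(area_ids)

    def add_areas(self, *new_ids):
        # dict.fromkeys de-duplicates while preserving insertion order
        self.area_ids = list(dict.fromkeys(self.area_ids + list(new_ids)))

    def remove_areas(self, *ids):
        self.area_ids = [i for i in self.area_ids if i not in ids]
```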
Resolves: https://github.com/alphagov/notifications-admin/pull/3980#discussion_r692919874
Previously it was unclear what kinds of areas this method returned,
and whether there would be any duplicates (due to the hierarchy of
areas we work with). This clarifies that.
In addition, the areas returned may not overlap with the custom one
[1], so we should reword to avoid falsely implying that. We could do
the overlap check as part of the method as an alternative, but that
would create extra work when calculating the ratio of intersection.
We could always add "overlapping areas" as a complementary method to
this one in future.
[1]: https://github.com/alphagov/notifications-admin/pull/3980#discussion_r692919874
Previously we were passing a flag to the API which handled this. Now
we are doing it at the time of clicking the link, not at the time of
storing the new password. We don’t need to update the timestamp twice,
so this commit removes the code which tells the API to do it.
Accepting an invite means that you’ve just clicked a link in your email
inbox. This shows that you have access to your email.
We can make a record of this, thereby extending the time before we ask
you to revalidate your email address.
When someone uses a fresh password reset link they have proved that they
have access to their inbox.
At the moment, when revalidating a user’s email address we wait until
after they’ve put in the 2FA code before updating the timestamp which
records when they last validated their email address[1].
We can’t think of a good reason why we need the extra assurance of a
valid 2FA code to assert that the user has access to their email –
they’ve proved that just by clicking the link. When the user clicks the
link we already update their failed login count before they enter the
2FA code, so we think it makes sense to handle
`email_access_validated_at` there too.
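The flow change can be sketched roughly as below. These names are
hypothetical stand-ins, not the real Notify code: the point is that both
updates happen at link-click time, before any 2FA step.

```python
from datetime import datetime, timezone


# Illustrative user model (not the real one)
class User:
    def __init__(self):
        self.failed_login_count = 3
        self.email_access_validated_at = None


def handle_password_reset_link_click(user):
    # Clicking the emailed link proves the user can read their inbox,
    # so record that here, alongside the existing failed-login reset,
    # rather than waiting for a valid 2FA code
    user.failed_login_count = 0
    user.email_access_validated_at = datetime.now(timezone.utc)
    return user
```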
As a bonus, the functional tests never go as far as getting a 2FA code
after a password reset[2], so the functional test user never gets its
timestamp updated. This causes the functional tests to start failing
after 90 days. By moving the update to this point we ensure that the
functional tests will keep passing indefinitely.
1. This code in the API (91542ad33e/app/dao/users_dao.py (L131))
which is called by this code in the admin app (9ba37249a4/app/utils/login.py (L26))
2. 5837eb01dc/tests/functional/preview_and_dev/test_email_auth.py (L43-L46)
We had an audit in February of this year but did
not update the accessibility statement to reflect
the issues identified as fixed or to include new
issues it produced.
Some of the dates for fixed issues have also not been
updated for a long time.
This adds those changes, with placeholders for
dates assigned to each issue.
This content is now ready for review. The dates
will be assigned when that is complete.
Theoretically the maximum expiry time of a broadcast should be 24 hours.
If it goes over 24 hours there can be problems.
However we want to make it more conservative to mitigate two potential
issues:
1. The CBC has a repetition period (eg 60 seconds) and a count (eg
1,440). If these were slightly inaccurate or generous it could take
us over 24 hours. For this reason we should give ourselves half an
hour of buffer.
2. It’s possible that the CBC could interpret a UTC time as BST or vice
versa. Until we’re sure that it’s using UTC everywhere, we need to
remove another whole hour as buffer.
In total this means we remove 1 hour 30 minutes from 24 hours, giving an
expiry time of 22 hours 30 minutes.
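The arithmetic above can be checked directly; the constant names below
are illustrative, not taken from the codebase.

```python
from datetime import timedelta

# The theoretical ceiling and the two buffers described above
MAX_EXPIRY = timedelta(hours=24)
REPETITION_BUFFER = timedelta(minutes=30)  # CBC count/period overshoot
TIMEZONE_BUFFER = timedelta(hours=1)       # possible UTC vs BST mix-up

max_broadcast_expiry = MAX_EXPIRY - REPETITION_BUFFER - TIMEZONE_BUFFER
print(max_broadcast_expiry)  # 22:30:00

# The repetition example from point 1: 1,440 repeats at 60 seconds each
# is exactly 24 hours, so any overshoot in either number breaks the limit
assert timedelta(seconds=60) * 1440 == timedelta(hours=24)
```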