Previously we were passing a flag to the API which handled this. Now
we do it when the user clicks the link, not when the new password is
stored. We don’t need to update the timestamp twice, so this commit
removes the code which tells the API to do it.
Accepting an invite means that you’ve just clicked a link in your email
inbox. This shows that you have access to your email.
We can make a record of this, thereby extending the time before we ask
you to revalidate your email address.
When someone uses a fresh password reset link they have proved that they
have access to their inbox.
At the moment, when revalidating a user’s email address we wait until
after they’ve put in the 2FA code before updating the timestamp which
records when they last validated their email address[1].
We can’t think of a good reason that we need the extra assurance of a
valid 2FA code to assert that the user has access to their email –
they’ve done that just by clicking the link. When the user clicks the
link we already update their failed login count before they enter their
2FA code, so it makes sense to update `email_access_validated_at` at the
same point.
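A rough sketch of the change, with hypothetical names for the user
model and its update method (the real code is linked in the footnotes):

```python
from datetime import datetime, timezone

def handle_email_link_click(user):
    # Sketch: both fields are now updated at the point the user clicks
    # the link, before they enter a 2FA code, instead of passing a flag
    # to the API when the new password is stored
    user.update(
        failed_login_count=0,  # assumed reset to zero on a successful click
        email_access_validated_at=datetime.now(timezone.utc).isoformat(),
    )
```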
As a bonus, the functional tests never go as far as getting a 2FA code
after a password reset[2], so the functional test user never gets its
timestamp updated. This causes the functional tests to start failing after
90 days. By moving the update to this point we ensure that the
functional tests will keep passing indefinitely.
1. This code in the API (91542ad33e/app/dao/users_dao.py (L131))
which is called by this code in the admin app (9ba37249a4/app/utils/login.py (L26))
2. 5837eb01dc/tests/functional/preview_and_dev/test_email_auth.py (L43-L46)
We had an audit in February of this year but did
not update the accessibility statement to reflect
the issues identified as fixed or to include the
new issues the audit found.
Some of the dates for fixed issues have also not
been updated for a long time.
This adds those changes, with placeholders for
dates assigned to each issue.
This content is now ready for review. The dates
will be assigned when that is complete.
Theoretically the maximum expiry time of a broadcast should be 24 hours.
If it goes over 24 hours there can be problems.
However we want to make it more conservative to mitigate two potential
issues:
1. The CBC has a repetition period (eg 60 seconds) and a count (eg
1,440). If these were slightly inaccurate or generous it could take
us over 24 hours. For this reason we should give ourselves half an
hour of buffer.
2. It’s possible that the CBC could interpret a UTC time as BST or vice
versa. Until we’re sure that it’s using UTC everywhere, we need to
remove another whole hour as buffer.
In total this means we remove 1 hour 30 minutes from 24 hours, giving an
expiry time of 22 hours 30 minutes.
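The arithmetic, sketched in Python (the constant names here are
illustrative, not the real ones):

```python
from datetime import timedelta

MAX_BROADCAST_DURATION = timedelta(hours=24)
CBC_REPETITION_BUFFER = timedelta(minutes=30)  # inaccurate/generous repetition period × count
UTC_BST_BUFFER = timedelta(hours=1)            # possible UTC/BST mix-up at the CBC

MAX_EXPIRY = MAX_BROADCAST_DURATION - CBC_REPETITION_BUFFER - UTC_BST_BUFFER
assert MAX_EXPIRY == timedelta(hours=22, minutes=30)
```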
While testing alerts on these channels the MNOs sometimes need to
restart their CBCs to make sure everything is failing over properly.
If the CBC does not come back up, for whatever reason, then we are left
in a state where the alert can’t be cancelled.
To minimise the impact on the public in this scenario we should keep the
expiry time at 4 hours for alerts sent on test channels. We recently
increased it back up to 24 hours for all channels, so this in effect is
reverting that change for channels that won’t be used in a real
emergency.
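A sketch of the intended behaviour (the channel names here are
assumptions, not necessarily the real identifiers):

```python
from datetime import timedelta

def max_alert_expiry(channel_name):
    # Channels that won't be used in a real emergency keep the shorter
    # 4 hour expiry, in case a restarted CBC doesn't come back up and
    # the alert can't be cancelled
    if channel_name in {"test", "operator"}:
        return timedelta(hours=4)
    return timedelta(hours=22, minutes=30)
```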
The page which shows the count of phones does some logic based on how
close the ‘will get’ and ‘likely to get’ numbers are. This means it
accesses the `BroadcastMessage.count_of_phones` and
`BroadcastMessage.count_of_phones_likely` properties multiple times.
These properties are computed fresh every time, and are quite expensive
to compute. By caching them in memory we can cut the page load time
approximately in half.
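One way to do this kind of in-memory caching, sketched with
`functools.cached_property` (the expensive computation itself is elided
behind hypothetical helpers):

```python
from functools import cached_property

class BroadcastMessage:
    @cached_property
    def count_of_phones(self):
        # Computed once per instance, then served from memory for the
        # rest of the page load
        return self._compute_count_of_phones()  # hypothetical helper

    @cached_property
    def count_of_phones_likely(self):
        return self._compute_count_of_phones_likely()  # hypothetical helper
```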
To count phones in a custom polygon we need to work out the percentage
of overlap with each known area. This means we need to get each known
area from the database to compare it.
At the moment we do this by running:
- one SQLite query to get the details of all matching areas
- a loop, which performs one SQLite query *per area* to get the polygons
This commit reduces the number of SQLite queries to one, which uses a
`JOIN` to get both the details of the areas and their polygons.
This gives a speed increase of about 25% for a big area like
Lincolnshire.
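An illustrative before/after, with made-up table and column names:

```python
import sqlite3

conn = sqlite3.connect("broadcast-areas.sqlite3")  # hypothetical filename

# Before: one query for the area details, then one query per area
areas = conn.execute("SELECT id, name FROM broadcast_areas").fetchall()
polygons = {
    area_id: conn.execute(
        "SELECT polygon_geojson FROM polygons WHERE broadcast_area_id = ?",
        (area_id,),
    ).fetchone()
    for area_id, _name in areas
}

# After: a single query which JOINs the details to the polygons
rows = conn.execute(
    """
    SELECT areas.id, areas.name, polygons.polygon_geojson
    FROM broadcast_areas AS areas
    JOIN polygons ON polygons.broadcast_area_id = areas.id
    """
).fetchall()
```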
By using the simplified polygons instead of the full resolution ones
we:
- query less data from SQLite
- pass less data around
- give Shapely a less complicated shape to do its calculations on
This makes it faster to calculate how much of each electoral ward a
custom area overlaps.
For the two areas in our tests:
Place represented by custom area | Before | After
---------------------------------|--------|--------
Bristol | 0.07s | 0.02s
Skye | 0.02s | 0.01s
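The overlap calculation, sketched with Shapely (the names are
illustrative):

```python
from shapely.geometry import Polygon

def fraction_of_ward_covered(custom_area: Polygon, ward: Polygon) -> float:
    # Passing the simplified ward polygon here means less data out of
    # SQLite, less data passed around, and a cheaper intersection
    return custom_area.intersection(ward).area / ward.area
```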
They all currently say 'Change', which makes them
confusing when they are viewed out of context
(ie when all the links on the page are listed out
by a screen reader).
This gives them a suffix relating to the thing
they will change, like the links on the service
settings page.
The layout for the platform admin base template needed to be changed so
that the back link appears in the same place as before.
Previously, the left hand nav was inside `<main>`, but it did not need
to be and was inconsistent with other pages, so it has been taken out.
The `page_header` macro includes an optional back link. Since the
`page_header` is always used inside `<main>`, where the back link should
not be, this stops setting the back link in the page header and instead
sets it in the new `backLink` block.
This moves the back link to be above the `<main>` tag by making use of
the new `backLink` block. This doesn't change the pages which are using
a back link as part of the `page_header` macro yet.
This will be used to put the back link in, since it is before the
`<main>` tag. We could have used the `beforeContent` block directly, but
that sometimes already has content in it, which means it's not clear
when you also need to use `super()` inside the block and when you don't.
Relates to: https://github.com/alphagov/notifications-govuk-alerts/pull/152
I ran the `create-broadcast-areas-db.py` script to regenerate the
SQLite DB. Existing alerts with the old naming still appear correctly,
and since we don't (yet) store this text in the DB, there's nothing
more to update.