This upgrades itsdangerous by a major version.
When testing most routes we:
* use the `client_request` fixture
* under the hood this logs in the user with `TestClient.login`
* logging in the user signs their session with a secret and the current time
For some tests we also:
* wrap the test method with a `freeze_time()` decorator to simulate a past date and time
When Pytest calls the wrapped test method:
* any application code which tries to get the current time will get the frozen time
* any application code that gets the current user has to decode the session
* the code which decodes the session will see that the session was created in the future, in other words it has a negative age
* as of itsdangerous 2.0.0, signatures with a negative age raise an exception
To avoid all the tests which freeze time failing, this adds itsdangerous
to the list of packages that freezegun ignores.
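A minimal sketch of the change, assuming freezegun’s `configure` API and a top-level `conftest.py`:

```python
# conftest.py
import freezegun

# Stop freezegun from patching time lookups inside itsdangerous, so
# session signatures keep their real timestamps and never appear to
# come from the future when a test freezes time in the past.
freezegun.configure(extend_ignore_list=["itsdangerous"])
```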
We can't yet upgrade to a version of itsdangerous that is >= 2.1.0
because there are compatibility issues with Flask 1.x.
Some tests use the `client` fixture but don’t call any of its methods.
This is because the test depends on something in the request
context.
This commit replaces all those instances with `client_request`, which
also sets the request context.
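As a hedged sketch of the pattern (the test and fixture bodies here are illustrative, not real tests):

```python
# Before: `client` is requested only for its side effects - the test
# body never calls any of its methods
def test_page_renders(client, mocker):
    ...

# After: `client_request` logs the user in *and* sets the request
# context, so nothing is requested just for its side effects
def test_page_renders(client_request, mocker):
    ...
```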
These tests are the last ones that still use the `client` fixture. By
replacing it with `client_request` we will be able to say that no tests
should be using the `client` fixture directly.
Some users have reported a problem with the received text message
report:
> I have tested the reply service but in the excel report the mobile
> number is showing as 4.47900E+23. How can I change the format so that
> it is show the mobile number that has replied?
This is happening because Excel is interpreting a phone number in the
format `447900900123` as a number in
[scientific notation](https://en.wikipedia.org/wiki/Scientific_notation),
in other words 4.479 × 10<sup>23</sup>.
`447900900123` is the format that our provider is giving us the number
in – there’s no guarantee it will always be in this format.
We can prevent this behaviour by putting spaces in the numbers. Excel
and Google Sheets won’t try to convert a string with spaces into a
number.
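A minimal sketch of the idea (the helper name and digit grouping are hypothetical):

```python
def format_phone_number_for_spreadsheet(number):
    # Hypothetical helper: space out the digits so Excel and Google
    # Sheets treat the value as text rather than converting it to
    # scientific notation.
    #   "447900900123" -> "44 7900 900123"
    return f"{number[:2]} {number[2:6]} {number[6:]}"
```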
I think we used to do this for the sent text messages report but
probably stopped because we decided it was better to keep the phone
number in the same format as it had been supplied to us for
reconciliation purposes.
`reference` isn’t very human-friendly – the Environment Agency just
supply a UUID in this field.
The Environment Agency also populate the `<event>` field with some
human-readable text, for example:
> 013 Issue Severe Flood Warning EA
(013 is an ‘area code’ which will be meaningful to the Flood Warning Service team)
This commit changes the frontend to display the value of the `cap_event`
field, if it’s present, which is where the API stores the value of the
`<event>` field from the original CAP XML.
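As a sketch of the display logic (the property name and model slice are assumptions):

```python
class BroadcastMessage:
    def __init__(self, reference, cap_event=None):
        self.reference = reference
        self.cap_event = cap_event

    @property
    def display_reference(self):
        # Prefer the human-readable CAP <event> text if it's present;
        # fall back to the UUID-style reference otherwise
        return self.cap_event or self.reference
```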
***
Depends on:
- [x] https://github.com/alphagov/notifications-api/pull/3344/files
Custom broadcasts created directly via the API app won't have area
IDs since [1], where we started to distinguish between "names" (all
broadcasts have these) and IDs (for broadcasts created in this app).
We forgot to propagate the distinction into the code here.
This code fixes the bug for all broadcasts created after [1]. Any
custom broadcasts created before [1] will have their "ids" field set
instead of "names" so we won't show anything for them. This seems
reasonable as we don't support custom broadcasts yet.
[1]: 023a06d5fb
Depends on: https://github.com/alphagov/notifications-api/pull/3314
Previously:
- We introduced a new "areas_2" API field that can
populate the "areas" column in the same way.
- We updated the Admin app to use the new field, so that
the old "areas" and "simple_polygons" API fields are unused.
- We repurposed the unused "areas" API field to use
the new "areas_2" format.
This PR:
- We switch the Admin app back to the "areas" field,
but in the new format.
Future PRs:
- Remove support for the unused "areas_2" field (migration complete)
This naming was introduced in 2016 without explanation [1]. I find it
confusing because:
- It's reminiscent of "_app", the Python convention indicating a
variable is internal, which suggests we should avoid using it.
- It suggests there's some other "app" fixture I should be using (there
isn't, though).
The Python style guide describes using an underscore suffix to avoid
clashes with inbuilt names [1], which is sort of applicable if we need
to import the "app" module [2]. However, we can also avoid clashes by
choosing a different name, without the strange underscore.
[1]: 3b1d521c10
[2]: 78824f54fd/tests/app/main/views/test_forgot_password.py (L5)
Note: there's currently no option to set the service broadcast account
type back to None, i.e. to remove the broadcast permission. This has
been done for speed of development, given the chance of us needing this
is very low. We can add it later if we need to.
At the moment the admin app expects all broadcasts to have a template,
and expects the content of the alert to come from the template.
This commit makes it so those pages can still get a `Template` instance,
but populated with content straight from the `content` field in the
database.
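A rough sketch of the approach, with a stand-in for the real `Template` class (all names here are assumptions):

```python
from dataclasses import dataclass

@dataclass
class BroadcastTemplateStandIn:
    # Stand-in for the real Template class, just to show the shape
    content: str
    template_type: str = "broadcast"

def template_for(broadcast_message):
    # Pages can keep asking for a Template instance; for broadcasts
    # without one, populate it straight from the `content` field
    return BroadcastTemplateStandIn(content=broadcast_message["content"])
```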
It’s a bit unintuitive that starting a job from a contact list makes a
copy of the file, which has no relationship to the list it was copied
from. This is more of an implementation detail than something
that comes from people’s mental models of what is going on. Or at least
that’s what I hypothesise.
I think it’s clearer to show jobs that come from contact lists within
the lists that they were created from. Naming the jobs by template
gives a clearer picture of what messages have been sent to the group
over time.
For emails and text messages we sort by the time the user (or API) sent
them.
This makes sense for broadcasts too, since most users will receive the
alert within seconds of it being broadcast.
For alerts that haven’t started yet we can sort by `updated_at`, which
is when the user preparing the broadcast submitted it for approval.
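A sketch of the combined sort key, with hypothetical field names:

```python
def broadcast_sort_key(broadcast):
    # Live and finished alerts sort by when they started broadcasting;
    # alerts awaiting approval have no start time yet, so fall back to
    # `updated_at` - when they were submitted for approval
    return broadcast.starts_at or broadcast.updated_at
```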
We’ve observed people using ‘national’ and ‘local’ during user research.
It has less tongue-twisting ambiguity than county vs country.
But we think that maybe just getting rid of ‘counties’ is enough to
disambiguate them. So this commit just takes the ‘local’ concept.
This commit also gives the libraries and areas new IDs, which means if
we want to rename them in the future it won’t be a breaking change.
Broadcasting is not a precise technology, because:
- cell towers are directional
- their range varies depending on whether they are 2, 3, 4, or 5G
(the higher the bandwidth the shorter the range)
- in urban areas the towers are more densely packed, so a phone is
likely to have a greater choice of tower to connect to, and will
favour a closer one (which has a stronger signal)
- topography and even weather can affect the range of a tower
So it’s good for us to visually indicate that the broadcast is not as
precise as the boundaries of the area, because it gives the person
sending the message an indication of how the technology works.
At the same time we have a restriction on the number of polygons we
think an area can have, so we’ve done some work to make versions of
polygons which are simplified and buffered (see
https://github.com/alphagov/notifications-utils/pull/769 for context).
Serendipitously, the simplified and buffered polygons are larger and
smoother than the detailed polygons we’ve got from the GeoJSON files. So
they naturally give the impression of covering an area which is wider
and less precise.
So this commit takes those simple polygons and uses them to render the
blue fill. This makes the blue fill extend outside the black stroke,
which is still using the detailed polygons direct from the GeoJSON.
This commit adds a page to view a single broadcast. This is important
for two reasons:
- users need an audit of what happened when, and who else was involved
in approving or cancelling a broadcast
- we need a place to put actions (approving, cancelling) on a broadcast
so that you can confirm details of the message and the areas before
performing the action
Shows current and previous broadcasts. Does not add a page for viewing
an individual broadcast.
Broadcasts are split into live and previous.
Draft broadcasts are excluded from the dashboard.
A lot of pages in the admin app are now generated entirely from Redis,
without touching the API.
The one remaining API call that a lot of pages make, when the user is
platform admin or a member of an organisation, is to get the name of
the current service’s organisation.
This commit adds some code to start caching that as well, which should
speed up page load times for when we’re clicking around the admin app
(it’s typically 100ms just to get the organisation, and more than that
when the API is under load).
This means changing the service model to get the organisation from the
API by ID, not by service ID. Otherwise it would be very hard to clear
the cache if the name of the organisation ever changed.
We can’t cache the whole organisation because it has a
`count_of_live_services` field which can change at any time, without an
update being made.
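A sketch of what the cached call might look like, assuming the same `cache` decorators and API client base class we use for services and users:

```python
from app.notify_client import NotifyAdminAPIClient, cache  # assumed helpers

class OrganisationsAPIClient(NotifyAdminAPIClient):

    @cache.set("organisation-{organisation_id}")
    def get_organisation(self, organisation_id):
        # Keyed by organisation ID, not service ID, so we can clear
        # the entry if the organisation is ever renamed
        return self.get(url="/organisations/{}".format(organisation_id))
```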
flask_login sets a bunch of variables in the session object. We only use
one of them, `user_id`. We set that to the user id from the database,
and refer to it all over the place.
However, in flask_login 0.5.0 they prefix this with an underscore
(storing it as `_user_id`) to prevent people accidentally overwriting
it. So when a user logs in we need to make sure that we set `user_id`
manually so we can still use it.
flask_login sets a bunch of variables on the `flask.session` object.
However, this session object isn't the one that gets passed in to the
request context by flask - that one can only be modified outside of
requests from within the session_transaction context manager (see [1]).
So, flask_login populates the normal session and then we need to copy
all of those values across.
We didn't need to do this previously because we already set the
`user_id` value on line 20 of tests/__init__.py, but now that
flask_login is looking for `_user_id` instead we need to do this
properly.
[1] https://flask.palletsprojects.com/en/1.1.x/testing/#accessing-and-modifying-sessions
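A sketch of the copying step, assuming flask_login has already written its values into `flask.session` inside the test request context (the helper name is hypothetical):

```python
import flask

def copy_session_for_tests(client, user_id):
    # The session the request context sees can only be modified from
    # within session_transaction (see [1]), so copy flask_login's
    # values - including `_user_id` - across, then set the un-prefixed
    # key our application code still reads
    with client.session_transaction() as test_session:
        for key, value in flask.session.items():
            test_session[key] = value
        test_session["user_id"] = user_id
```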
Notifications won’t exist for a job if:
- it’s just started
- it started a long time ago (older than the retention period)
We have a bug where:
1. Job starts processing, puts notifications on queue
2. Job finishes processing, sets status to `finished`
3. First notification gets picked up off the queue and put in the
database
In between 2. and 3. it’s possible for a job to be finished, but also to
have no notifications. We’re saying this is because the notifications
have been deleted, whereas really it’s because they haven’t been created
yet.
This commit fixes that bug by introducing the concept of recency for
jobs.
‘Recent’ is defined as 1 day, which is:
- a lot longer than it takes to create any notifications
- a bit shorter than anyone’s retention time
N.B. `processing_started` is defined here:
879ba1d5f0/app/models.py (L1194)
It can be `None` for scheduled jobs that haven’t started yet.
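A sketch of the recency check (the property name is hypothetical; `processing_started` is the field linked above):

```python
from datetime import datetime, timedelta

class Job:
    processing_started = None  # hypothetical slice of the real model

    @property
    def processed_recently(self):
        if self.processing_started is None:
            # scheduled jobs that haven't started processing yet
            return True
        # 'Recent' = 1 day: much longer than creating notifications
        # takes, a bit shorter than anyone's retention time
        return datetime.utcnow() - self.processing_started < timedelta(days=1)
```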
The property doesn’t represent the whole client, but just one method on
it. So this commit renames the property to better describe what it is
designed to store.
This follows the pattern of what we’ve done with services, users and
events.
It gives us a better interface to the data we get back from the API than
dealing with the raw JSON directly.
Now is a good time to do this because we’re going to be making a bunch
of changes to the jobs pages, and those changes will be easier to code
and understand with a sensible model behind them.
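As a sketch, assuming the `JSONModel` base class our other models use (the property list is illustrative):

```python
from app.models import JSONModel  # assumed base class, as used elsewhere

class Job(JSONModel):
    ALLOWED_PROPERTIES = {"id", "original_file_name", "processing_started"}

# so instead of poking at raw JSON:
#     job_json["processing_started"]
# callers can write:
#     Job(job_json).processing_started
```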
Since we can't call the `api_user_active` fixture as a function with
Pytest 5, the `user_json` function can be used instead. This updates
the function to:
- Stop returning `max_failed_login_count`, since this is not a field
that gets returned from the API
- Return fields as strings, not UUIDs, to match the format that the API
returns the data in
- Provide a way to use this function to return a user with no
permissions
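A hedged sketch of the resulting function (the exact fields are illustrative):

```python
import uuid

def user_json(name="Test User", permissions=None, **kwargs):
    user = {
        "id": str(uuid.uuid4()),  # strings, not UUID objects
        "name": name,
        # no permissions by default; pass some in to override
        "permissions": permissions if permissions is not None else {},
        # note: no `max_failed_login_count` - the API doesn't return it
    }
    user.update(kwargs)
    return user
```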
Flake8 Bugbear checks for some extra things that aren’t code style
errors, but are likely to introduce bugs or unexpected behaviour. A
good example is having mutable default function arguments, which get
shared between every call to the function, so mutating the value in one
place can unexpectedly change it in another.
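For example, the classic bug that Bugbear’s B006 check catches:

```python
# The default list is created once and shared between every call
def add_recipient(recipient, recipients=[]):
    recipients.append(recipient)
    return recipients

add_recipient("a")  # ["a"]
add_recipient("b")  # ["a", "b"] - the same list, mutated again
```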
This commit enables all the extra warnings provided by Flake8 Bugbear,
except for the line length one (because we already lint for that
separately).
It disables:
- _B003: Assigning to os.environ_ because I don’t really understand this
- _B306: BaseException.message is removed in Python 3_ because I think
our exceptions have a custom structure that means the `.message`
attribute is still present
Query string ordering is non-deterministic. This can cause tests to fail
in an unhelpful way when we’re comparing two URLs. The values in the
query string can match, but the string won’t because the order is
different. This commit adds some code to split up the URL and check that
each part of it matches, rather than checking the thing as a whole.
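A sketch of the comparison, using the standard library (the helper name is hypothetical):

```python
from urllib.parse import parse_qs, urlparse

def assert_urls_equivalent(actual, expected):
    actual, expected = urlparse(actual), urlparse(expected)
    # everything except the query string must match exactly
    assert actual._replace(query="") == expected._replace(query="")
    # parse_qs maps each key to a list of values, so two query strings
    # with the same pairs in a different order compare equal
    assert parse_qs(actual.query) == parse_qs(expected.query)
```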
At the moment, the process for accepting the data sharing and financial
agreement is:
1. download a PDF
2. print it out
3. get someone to sign it
4. scan it
5. email it back to us
6. we rename the file and save it in Google Drive
7. we then update the organisation to say the MOU is signed
8. sometimes we also:
   * print it out and get it counter-signed
   * scan it again
   * email it back to the service
Let's not do that any more.
When the first service for an organisation that doesn't have the
agreement in place is in the process of going live, they should be able
to accept the agreement online as part of the go live flow. This commit
adds the pages that let someone do that.
Where the checklist shows the agreement as **[not completed]**, they can
follow a link to download it (as happens now).
From here, they should then also be able to provide some info to accept
it. The info that we need is:
**Version** – because we version the agreements occasionally, we need to
know which version they are accepting. It may not be the latest one if
they downloaded it a while ago and it took time to be signed off.
**Who is accepting the agreement** – this will often be someone in the
finance team, and not necessarily a team member, so we should let the
person either accept as themselves, or on behalf of someone else. If
it's on behalf of someone else we need the name and email address of
that person so we have that on record. Obviously if it's them accepting
it themselves, we have that already (so we just store their user ID and
not their name or email address).
We then replay the collected info back in a sort of legally binding
way, pulling in the organisation name too. The wording we’re using is
inspired by what GOV.UK Pay have. Then there’s a big green button they
can click to accept the agreement, which stores their user ID and a
timestamp.