We’ve observed people using ‘national’ and ‘local’ during user research.
These terms have less tongue-twisting ambiguity than ‘county’ vs
‘country’. But we think that maybe just renaming ‘counties’ is enough to
disambiguate the two. So this commit takes just the ‘local’ concept.
This commit also gives the libraries and areas new IDs, which means if
we want to rename them in the future it won’t be a breaking change.
Broadcasting is not a precise technology, because:
- cell towers are directional
- their range varies depending on whether they are 2G, 3G, 4G or 5G
  (the higher the frequency the shorter the range)
- in urban areas the towers are more densely packed, so a phone is
likely to have a greater choice of tower to connect to, and will
favour a closer one (which has a stronger signal)
- topography and even weather can affect the range of a tower
So it’s good for us to visually indicate that the broadcast is not as
precise as the boundaries of the area, because it gives the person
sending the message an indication of how the technology works.
At the same time we have a restriction on the number of polygons we
think an area can have, so we’ve done some work to make versions of
polygons which are simplified and buffered (see
https://github.com/alphagov/notifications-utils/pull/769 for context).
Serendipitously, the simplified and buffered polygons are larger and
smoother than the detailed polygons we’ve got from the GeoJSON files. So
they naturally give the impression of covering an area which is wider
and less precise.
So this commit takes those simple polygons and uses them to render the
blue fill. This makes the blue fill extend outside the black stroke,
which is still using the detailed polygons direct from the GeoJSON.
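As a rough sketch of the technique (assuming Shapely, with made-up
tolerances – the real values live in notifications-utils):

```python
from shapely.geometry import shape

# One feature from a GeoJSON file (coordinates illustrative)
feature = {
    'type': 'Polygon',
    'coordinates': [[(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]],
}

detailed = shape(feature)  # precise boundary, used for the black stroke

# Smoother and slightly larger, used for the blue fill
simple = detailed.simplify(0.001).buffer(0.005)
```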
This commit adds a page to view a single broadcast. This is important
for two reasons:
- users need an audit of what happened when, and who else was involved
in approving or cancelling a broadcast
- we need a place to put actions (approving, cancelling) on a broadcast
so that you can confirm details of the message and the areas before
performing the action
Shows current and previous broadcasts. Does not add a page for viewing
an individual broadcast.
Broadcasts are split into live and previous.
Draft broadcasts are excluded from the dashboard.
A lot of pages in the admin app are now generated entirely from Redis,
without touching the API.
The one remaining API call that a lot of pages make, when the user is
platform admin or a member of an organisation, is to get the name of
the current service’s organisation.
This commit adds some code to start caching that as well, which should
speed up page load times for when we’re clicking around the admin app
(it’s typically 100ms just to get the organisation, and more than that
when the API is under load).
This means changing the service model to get the organisation from the
API by ID, not by service ID. Otherwise it would be very hard to clear
the cache if the name of the organisation ever changed.
We can’t cache the whole organisation because it has a
`count_of_live_services` field which can change at any time, without an
update being made.
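A sketch of the shape of this (the client, key name and expiry here are
assumptions, not the real code):

```python
import redis

redis_client = redis.Redis()

def get_organisation_name(api_client, organisation_id):
    # Key by the organisation’s own ID, so a rename means clearing one
    # key rather than one key per service
    key = f'organisation-{organisation_id}-name'
    cached = redis_client.get(key)
    if cached:
        return cached.decode('utf-8')
    name = api_client.get(f'/organisations/{organisation_id}')['name']
    redis_client.set(key, name, ex=7 * 24 * 60 * 60)  # expire after a week
    return name
```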
flask_login sets a bunch of variables in the session object. We only use
one of them, `user_id`. We set that to the user id from the database,
and refer to it all over the place.
However, in flask_login 0.5.0 they prefix this with an underscore to
prevent people accidentally overwriting it etc. So when a user logs in
we need to make sure that we set user_id manually so we can still use
it.
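Something like this (sketch only):

```python
from flask import session
from flask_login import login_user

def sign_in(user):
    login_user(user)
    # flask_login 0.5.0 stores the id as session['_user_id']; keep our
    # own unprefixed key so the rest of the app can carry on reading it
    session['user_id'] = user.id
```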
flask_login sets a bunch of variables on the `flask.session` object.
However, this session object isn't the one that gets passed in to the
request context by flask - that one can only be modified outside of
requests from within the session_transaction context manager (see [1]).
So, flask_login populates the normal session and then we need to copy
all of those values across.
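In the test helper that copying looks roughly like this:

```python
from flask import session

def copy_session_to_test_client(client):
    # flask_login has just populated flask.session (inside a request
    # context); copy its values into the session the test client will
    # pass to the next request
    with client.session_transaction() as test_session:
        for key, value in session.items():
            test_session[key] = value
```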
We didn't need to do this previously because we already set the
`user_id` value on line 20 of tests/__init__.py, but now that
flask_login is looking for `_user_id` instead we need to do this
properly.
[1] https://flask.palletsprojects.com/en/1.1.x/testing/#accessing-and-modifying-sessions
Notifications won’t exist for a job if:
- it’s just started
- it started a long time ago (older than the retention period)
We have a bug where:
1. Job starts processing, puts notifications on queue
2. Job finishes processing, sets status to `finished`
3. First notification gets picked up off the queue and put in the
database
In between 2. and 3. it’s possible for a job to be finished, but also to
have no notifications. We’re saying this is because the notifications
have been deleted, whereas really it’s because they haven’t been created
yet.
This commit fixes that bug by introducing the concept of recency for
jobs.
‘Recent’ is defined as 1 day, which is:
- a lot longer than it takes to create any notifications
- a bit shorter than anyone’s retention time
N.B. `processing_started` is defined here:
879ba1d5f0/app/models.py (L1194)
It can be `None` for scheduled jobs that haven’t started yet.
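The check ends up looking something like this (a sketch, not the exact
code):

```python
from datetime import datetime, timedelta

def notifications_have_been_deleted(job):
    # A finished job with no notifications has only had them deleted if
    # it started more than a day ago; any more recently and they just
    # haven’t been created yet
    if job.processing_started is None:
        return False  # scheduled job that hasn’t started yet
    return datetime.utcnow() - job.processing_started > timedelta(days=1)
```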
The property doesn’t represent the whole client, but just one method on
it. So this commit renames the property to better describe what it is
designed to store.
This follows the pattern of what we’ve done with services, users and
events.
It gives us a better interface to the data we get back from the API than
dealing with the raw JSON directly.
Now is a good time to do this because we’re going to be making a bunch
of changes to the jobs pages, and those changes will be easier to code
and understand with a sensible model behind them.
Since we can't call the `api_user_active` fixture as a function with
Pytest 5, the `user_json` function can be used instead. This updates
the function to:
- Stop returning `max_failed_login_count`, since this is not a field
  that gets returned from the API
- Return fields as strings, not UUIDs, to match the format that the API
  returns the data in
- Provide a way to use this function to return a user with no
permissions
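A rough sketch of the resulting helper (field list trimmed, names
illustrative):

```python
import uuid

def user_json(name='Test User', permissions=None):
    # No max_failed_login_count, ids serialised as strings (as the API
    # returns them), and no permissions unless you ask for some
    return {
        'id': str(uuid.uuid4()),
        'name': name,
        'permissions': permissions or {},
    }
```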
Flake8 Bugbear checks for some extra things that aren’t code style
errors, but are likely to introduce bugs or unexpected behaviour. A
good example is having mutable default function arguments, which are
shared between every call to the function, so mutating a value in one
place can unexpectedly cause it to change in another.
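The classic example:

```python
def add_to_list(item, items=[]):  # flagged by Bugbear as B006
    items.append(item)
    return items

add_to_list('a')  # ['a']
add_to_list('b')  # ['a', 'b'] – the same list is shared between calls
```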
This commit enables all the extra warnings provided by Flake8 Bugbear,
except for the line length one (because we already lint for that
separately).
It disables:
- _B003: Assigning to os.environ_ because I don’t really understand this
- _B306: BaseException.message is removed in Python 3_ because I think
our exceptions have a custom structure that means the `.message`
attribute is still present
Query string ordering is non-deterministic. This can cause tests to fail
in an unhelpful way when we’re comparing two URLs. The values in the
query string can match, but the string won’t because the order is
different. This commit adds some code to split up the URL and check that
each part of it matches, rather than checking the thing as a whole.
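The comparison now looks something like this:

```python
from urllib.parse import parse_qs, urlparse

def assert_urls_match(actual_url, expected_url):
    actual, expected = urlparse(actual_url), urlparse(expected_url)
    assert actual.path == expected.path
    # parse_qs returns a dict, so parameter ordering no longer matters
    assert parse_qs(actual.query) == parse_qs(expected.query)
```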
At the moment, the process for accepting the data sharing and financial
agreement is:
1. download a PDF
2. print it out
3. get someone to sign it
4. scan it
5. email it back to us
6. we rename the file and save it in Google Drive
7. we then update the organisation to say the MOU is signed
8. sometimes we also:
   - print it out and get it counter-signed
   - scan it again
   - email it back to the service
Let's not do that any more.
When the first service of an organisation that doesn’t yet have the
agreement in place is in the process of going live, the team should be
able to accept the agreement online as part of the go live flow. This
commit adds the pages that let someone do that.
Where the checklist shows the agreement as **[not completed]**, they
can follow a link to download it (as happens now).
From here, they should then also be able to provide some info to accept
it. The info that we need is:
**Version** – because we version the agreements occasionally, we need to
know which version they are accepting. It may not be the latest one if
they downloaded it a while ago and it took time to be signed off
**Who is accepting the agreement** – this will often be someone in the
finance team, and not necessarily a team member, so we should let the
person either accept as themselves, or on behalf of someone else. If
it's on behalf of someone else we need the name and email address of
that person so we have that on record. Obvs if it's them accepting it
themselves, we have that already (so we just store their user ID and
not their name or email address).
We then replay the collected info back in a sort of legally binding
way, pulling in the organisation name too. The wording we’re using is
inspired by what GOV.UK Pay have. Then there’s a big green button they
can click to accept the agreement, which stores their user ID and a
timestamp.
The API always returns exactly:
```
id
name
email_address
auth_type
current_session_id
failed_login_count
logged_in_at
mobile_number
organisations
password_changed_at
permissions
platform_admin
services
state
```
It does this through `models.py::User.serialize` – there is an old
Marshmallow `user_schema` in `schemas.py` but this isn’t used for
dumping return data, only for parsing the JSON in the create user REST
endpoint.
This means we can rely on these keys always being in the dictionary.
The data flow of other bits of our application looks like this:
```
API (returns JSON)
⬇
API client (returns a built in type, usually `dict`)
⬇
Model (returns an instance, eg of type `Service`)
⬇
View (returns HTML)
```
The user API client was architected weirdly, in that it returned a model
directly, like this:
```
API (returns JSON)
⬇
API client (returns a model, of type `User`, `InvitedUser`, etc)
⬇
View (returns HTML)
```
This mixing of different layers of the application is bad because it
makes it hard to write model code that doesn’t have circular
dependencies. As our application gets more complicated we will be
relying more on models to manage this complexity, so we should make it
easy, not hard, to write them.
It also means that most of our mocking was of the User model, not just
the underlying JSON. So it would have been easy to introduce subtle bugs
to the user model, because it wasn’t being comprehensively tested. A lot
of the changed lines of code in this commit mean changing the tests to
mock only the JSON, which means that the model layer gets implicitly
tested.
For those reasons this commit changes the user API client to return
JSON, not an instance of `User` or other models.
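In the tests the change looks roughly like this (`user_json` being a
plain dict):

```python
# Before: mocking returned the model, so User’s own code never ran
mocker.patch('app.user_api_client.get_user', return_value=User(user_json))

# After: mock only the JSON, so every test exercises the model layer
mocker.patch('app.user_api_client.get_user', return_value=user_json)
```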
This adds an option on the organisation settings page to add
'request_to_go_live_notes'. When a service belonging to this
organisation requests to go live, any go live notes for the
organisation will be added to the Zendesk ticket in the 'Agreement
signed' section.
When looking at a notification you can either be coming from the page
of all notifications, or from a job. Currently the back link always
takes you to the page of all notifications.
This commit makes it a bit more sophisticated so if you’ve come from
looking at a job, you go back to the job.
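A sketch of the idea (parameter and endpoint names assumed):

```python
from flask import request, url_for

def back_link(service_id):
    # If the notification page was reached from a job, link back to that
    # job; otherwise fall back to the page of all notifications
    from_job = request.args.get('from_job')
    if from_job:
        return url_for('.view_job', service_id=service_id, job_id=from_job)
    return url_for('.view_notifications', service_id=service_id)
```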
Every time someone adds a new service we ask them what kind of
organisation they work for.
We can look this up based on the user’s email address now. So we should
only ask the question if:
- we don’t know about the organisation
- or we haven’t set what type of organisation it is (this shouldn’t be
  possible on production because we’ve populated the column for all
  existing organisations, and it’s impossible to add a new one without
  setting it)
Adds a front end for:
https://github.com/alphagov/notifications-api/pull/2417
> Sometimes we have to make a few services for what really is one
> service, for example GOV.UK Pay and GOV.UK Pay Direct Debit. We also
> have our own test services which aren’t included in the count of live
> services. We currently count these as one service by not including
> them in the beta partners spreadsheet.
We were already initialising InvitedUser with folder_permissions
(defaulting to None), but this removes the default and adds
folder_permissions to the serialize method. Folder permissions should
now always be returned from the API, either as an empty list or a list
of UUIDs.
We get a bunch of requests to go live where people have told us they're
going to send email but there is no email reply-to address present.
These come from 2 scenarios:
1. when there are email templates, and no reply-to address – but they
ignore the checklist
2. when there are no email templates (yet) but they provide anticipated
volumes for email
At the moment we only auto-check for a reply-to address when they have
email templates. And because the question about anticipated volumes
follows the checklist, you’ll get a checklist that passes (reply-to
address not required as no templates are present) but a stated future
intent that differs (a reply-to address IS required because you have
anticipated email volumes).
So let’s bring the request for anticipated volumes into the checklist;
that way we can dynamically add the requirement for a reply-to address
if they say they will send email but don’t have templates yet.
We should begin storing it in the database against the service to stop
people having to re-enter it each time they try to complete the go live
screens.
This also means moving the ‘consent to research’ question along with
the questions about volume, because
- we want people to answer both before going live
- we don’t want to clutter up the summary page by asking questions there
too
This commit is the first step to disentangling the models from the API
clients. With the models in the same folder as the API clients, it’s
hard to import an API client within a model without getting a circular
import.
After this commit the user API client still has this problem, but at
least the service API client doesn’t.
Making people use a property ensures they’re spelling the name of the
property correctly, and lets us easily swap between properties that call
through to the underlying JSON and properties which are implemented as
methods.
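For example (a sketch):

```python
class User:
    def __init__(self, _dict):
        self._dict = _dict

    @property
    def email_address(self):
        # Calls through to the underlying JSON today, but could become a
        # computed method later without changing any callers
        return self._dict['email_address']

user = User({'email_address': 'test@example.gov.uk'})
user.email_address  # 'test@example.gov.uk'
user.email_adress   # AttributeError – a typo fails loudly, unlike
                    # user._dict.get('email_adress') returning None
```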
Added a new row to the settings table, 'Post class', which shows the
default letter class of a service and is only visible to Platform Admin.
Also added a new page to enable Platform Admin users to change the
default letter class for a service - this only has two options at the
moment, 1st class only and 2nd class only.