This changeset converts the display of dates and times to UTC only, matching the recent changes in the backend. It unwinds a bit of work that was done previously and gives us a clean slate for how we want to approach displaying dates and times going forward. It also adds a bit of explanatory text to help users.
Signed-off-by: Carlo Costino <carlo.costino@gsa.gov>
Co-authored-by: stvnrlly <steven.reilly@gsa.gov>
It looks like, by default, Flask no longer makes full URLs, for example
`https://example.com/path`. Instead it does `/path`. This will still
work fine, and if anything is better because it reduces the number of
bytes of HTML we are sending.
This won’t cause requests to go over `http` instead of `https` just because the protocol is missing, because we set the appropriate HSTS header here:
0c57da7781/ansible/roles/paas-proxy/templates/admin.conf.j2 (L11)
This commit changes all our tests to reflect that URLs no longer have
the protocol and domain in them. `_external=True` is Flask’s way of
saying whether a URL should be generated with the domain and protocol
(`True`) or without it (`False`).
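For illustration, a quick sketch of the difference (the endpoint name is made up):

```python
from flask import Flask, url_for

app = Flask(__name__)

@app.route('/path')
def index():
    return 'ok'

with app.test_request_context(base_url='https://example.com'):
    url_for('index')                  # '/path' (the new default)
    url_for('index', _external=True)  # 'https://example.com/path'
```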
Again, I can’t find the changelog or diff where this was introduced,
but if you’d like to go spelunking then here’s a starting point:
50374e3cfe/src/flask/helpers.py (L192)
This repeats the pattern we already have for previewing a letter,
where we assume the error is because the notification has already
been sent and redirect the user to see it.
I've improved the original pattern a bit:
- I've DRYed-up the low-level boto code and moved the error handler
there so it can be reused.
- I've introduced a custom exception, which the calling code can
choose to log (see the sketch after this list).
- I've introduced the moto library, which we use elsewhere, to make
it easier to test S3 code.
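A minimal sketch of the pattern (`LetterNotFoundError` and the function name are illustrative, not the real names):

```python
import botocore.exceptions
from boto3 import resource


class LetterNotFoundError(Exception):
    # Hypothetical custom exception: raised when the object is
    # missing from the bucket, so calling code can decide whether
    # (and at what level) to log it.
    pass


def get_transient_letter_pdf(bucket_name, key):
    try:
        return resource('s3').Object(bucket_name, key).get()['Body'].read()
    except botocore.exceptions.ClientError as error:
        # Only translate "missing object" errors; anything else is
        # unexpected and should propagate as-is
        if error.response['Error']['Code'] == 'NoSuchKey':
            raise LetterNotFoundError(key) from error
        raise
```

With moto, a test can then create an empty bucket and assert that reading a missing key raises `LetterNotFoundError`.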
I've used an error level log when sending a notification - now that
we have a more descriptive log, we can verify the assumption is true
and then make an informed decision to downgrade the log.
In future we may want to merge this handler with the similar code
in utils [1], but we'll need to be careful as the utils handler is
superficial - it doesn't check the reason for the error.
[1]: bce0f4e596/notifications_utils/s3.py (L52)
We added a new argument to `client_request.get` and
`client_request.post` to specify that it should return a raw `Response`
object rather than an instance of `BeautifulSoup`.
This is useful because sometimes we need to look at stuff like the
response headers.
However it turns out we already have a separate method for this, so
rather than invent something new I think it’s better to stick with the
thing we already have.
We have a `client_request` fixture which does a bunch of useful stuff
like:
- checking the status code of the response
- returning a `BeautifulSoup` object
Lots of our tests still use an older fixture called `logged_in_client`.
This is not as good because:
- it returns a raw `Response` object
- it doesn’t do the additional checks
- it means our tests contain a lot of repetitive boilerplate like `page = BeautifulSoup(response.data.decode('utf-8'), 'html.parser')`
This commit converts all the tests using `logged_in_client` to use `client_request` instead.
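To illustrate the conversion (the test and endpoint names are made up):

```python
from bs4 import BeautifulSoup

# Before: raw response, manual status check, repeated parsing
def test_page_has_heading(logged_in_client):
    response = logged_in_client.get('/some-page')
    assert response.status_code == 200
    page = BeautifulSoup(response.data.decode('utf-8'), 'html.parser')
    assert page.h1.text == 'Some page'

# After: client_request checks the status code and returns the
# parsed page in one call
def test_page_has_heading(client_request):
    page = client_request.get('main.some_page')
    assert page.h1.text == 'Some page'
```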
This has several advantages:
- It gives us more room to explain the error and actions. This will
be useful for upcoming work we want to do, which will add yet more
validations for CSV uploads.
- We already use a flash to show certain kinds of errors on these
pages (just above). This is more consistent.
- It's potentially more accessible. Previously the error and the
button text used to be read out as a single sentence. Now the page
reloads and reads the flash error alone.
In theory we should show an error in both places, but this can be
confusing on pages where there's only a single form control, and
especially if the error is long.
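As a rough sketch of the pattern (messages and endpoint names are invented):

```python
from flask import flash, redirect, url_for

def check_messages(service_id, upload_id):
    errors = validate_upload(service_id, upload_id)  # hypothetical step
    if errors:
        # Show the error as a flash at the top of the page, like the
        # other errors on these pages, rather than inline by the button
        flash(errors[0], 'error')
        return redirect(url_for('.send_messages', service_id=service_id))
    # ...carry on with the happy path
```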
The API returns UTC timestamps, so we should keep them as UTC timestamps
until the very last moment, and only convert them into BST when we know
we want to return them to a user (i.e. in contact-list.html and other
places like that).
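In other words, something like this at the rendering edge (a sketch using pytz; the real helper may differ):

```python
from datetime import datetime
import pytz

def utc_string_to_london(timestamp):
    # Parse the API's UTC timestamp and convert it to Europe/London
    # (GMT/BST) only at the point of display; the format string here
    # is an assumption
    naive = datetime.strptime(timestamp, '%Y-%m-%dT%H:%M:%S.%fZ')
    return pytz.utc.localize(naive).astimezone(pytz.timezone('Europe/London'))
```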
This continues the work from Template Preview [1], so that we have
a complete store of original PDFs to use for testing changes to it.
Previously we did store some originals, but these were only invalid
PDFs that had failed sanitisation; for valid PDFs, the "transient"
bucket only contains the sanitised versions, which the API deletes
/ moves when the notification is sent [2].
Since the notification is only created at a later stage [3], there's
no easy way to get the final name of the PDF we send to DVLA. Instead,
we use the "upload_id", which eventually becomes the notification ID
[4]. This should be enough to trace the file for specific debugging.
Note that we only want to store original PDFs if they're valid (and
virus free!), since there's no point testing changes with bad data.
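Conceptually the extra step is just a copy keyed by the upload ID (the bucket name and helper are assumptions for illustration):

```python
from boto3 import resource

def backup_original_pdf(pdf_data, upload_id):
    # Only called once the PDF has passed validation and the virus
    # scan. Keyed by upload_id, which eventually becomes the
    # notification ID, so the file can be traced for debugging.
    resource('s3').Object(
        'letters-originals-backup',  # assumed bucket name
        f'{upload_id}.pdf',
    ).put(Body=pdf_data, ServerSideEncryption='AES256')
```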
[1]: https://github.com/alphagov/notifications-template-preview/pull/545
[2]: c44ec57c17/app/service/send_notification.py (L212)
[3]: 7930a53a58/app/main/views/uploads.py (L362)
[4]: 7930a53a58/app/main/views/uploads.py (L373)
We have lots of functions for converting various types of data into
strings to be displayed to the user somewhere.
This commit collects all these functions into their own module, rather
than having them cluttering up `app/__init__.py` or buried amongst
various other things that have ended up in `app/utils.py`.
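For example, the kind of function that now lives there (an illustrative one, not necessarily the real contents):

```python
# app/formatters.py
def format_thousands(number):
    # 6000000 -> '6,000,000'
    return f'{number:,}'
```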
On the uploads page we only show jobs which are within a service’s data
retention.
This commit does the same when we’re listing the jobs for a contact
list. This matches the UI, which says a contact list has been ‘used
`<count_of_jobs>` in the last `<data_retention>` days’.
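Roughly, the filtering looks like this (field names are assumptions):

```python
from datetime import datetime, timedelta

def jobs_within_retention(jobs, days_of_retention):
    # Only keep jobs created within the service's data retention window
    cutoff = datetime.utcnow() - timedelta(days=days_of_retention)
    return [
        job for job in jobs
        if datetime.fromisoformat(job['created_at']) >= cutoff
    ]
```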
Old contact lists can be:
- never used
- used, but so long ago we no longer have data about the jobs due to
retention settings
We show different messages in each of these cases. This commit
parametrizes the tests to ensure that both cases are covered.
It also makes the job a bit older so that both cases are logically
possible with the test data.
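A sketch of the shape of the parametrization (the messages and fixtures are made up):

```python
import pytest

@pytest.mark.parametrize('job_created_at, expected_hint', [
    # Never used at all
    (None, 'Not used yet'),
    # Used, but longer ago than the data retention window
    ('2020-01-01T00:00:00.000000Z', 'Not used in the last 7 days'),
])
def test_contact_list_hint(client_request, job_created_at, expected_hint):
    # ...set up the mocked API response using job_created_at...
    page = client_request.get('main.contact_list', contact_list_id='abc123')
    assert expected_hint in page.text
```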
Because we’re grouping jobs under their parent contact lists it’s
good to have some information ‘scent’ to help people find their jobs,
i.e. by clicking into a contact list. It also lets you see which lists
have been used more than others, maybe because the update hasn’t been
sent to that group of people yet.
The hint text under uploads always says when they were used. For contact
lists this is a bit more complicated, since they can:
- never have been used
- been used multiple times
This commit makes use of the new fields being returned by the API to
determine when these messages are relevant. They also let us
differentiate between a contact list that’s never been used, and one
that has been used, but not recently enough to show any jobs against it.
It’s a bit unintuitive that starting a job from a contact list makes a
copy of the file, which has no relationship to the list it was copied
from. This is more of an implementation detail, rather than something
that comes from people’s mental models of what is going on. Or at least
that’s what I hypothesise.
I think it’s clearer to show jobs that come from contact lists within
the lists that they were created from. By naming the jobs by template
this gives a clearer view of what messages have been sent to the group
over time.
The code was looking for `original_file_name` in the metadata for a
contact list, or the query string if it wasn't in the metadata. Now that
the change to use the metadata for the file name has been deployed for a
while, we can stop looking in the query string for the
`original_file_name`.
We were passing `original_file_name` from the `.upload_contact_list`
view function to the `.check_contact_list` view function as a query
param. We now store it in the metadata instead. `.check_contact_list`
still checks for `original_file_name` in the query string if it's not in
the metadata - this is necessary until the code has been deployed for a
few days and we can be sure that there are no contact lists that are
mid-way through the upload stage.
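The transitional lookup is essentially this (a sketch; `get_csv_metadata` is an assumed helper name):

```python
from flask import request

def get_original_file_name(service_id, upload_id):
    metadata = get_csv_metadata(service_id, upload_id)  # assumed helper
    # Prefer the metadata; fall back to the query string only until
    # all uploads that predate this change have finished
    return (
        metadata.get('original_file_name')
        or request.args.get('original_file_name')
    )
```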
The upload preview page has a file_id - this corresponds to the file in
the transient PDF uploads bucket. However, if the user already hit send
(and then navigated back) the file's no longer in that bucket; it's been
moved to the regular letters-pdf bucket, so the S3 get request fails. To
avoid this, simply redirect to the notifications page if the file isn't
in the transient bucket. This is better for the user as it'll stop them
trying to submit it twice, and will provide more clarity on the status
of the notification too.
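Reusing the hypothetical `get_transient_letter_pdf` and `LetterNotFoundError` from the sketch further up, the view then looks something like this (endpoint and bucket names assumed):

```python
from flask import redirect, url_for

def view_letter_upload(service_id, file_id):
    try:
        pdf = get_transient_letter_pdf('transient-uploads', str(file_id))
    except LetterNotFoundError:
        # The file has already been moved out of the transient bucket,
        # i.e. the letter was sent, so show the notification instead
        # of a broken preview
        return redirect(url_for(
            '.view_notification',
            service_id=service_id,
            notification_id=file_id,
        ))
    # ...render the preview from pdf as before
```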