Previously we made a call to S3 to list objects for a letter, even
though we already had the precise key of the single object to hand.
This removes the one usage of "get_s3_bucket_objects" and uses the
filename directly in the call to remove the object.
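A minimal sketch of the resulting shape, with hypothetical bucket and
function names rather than the real call site:

```python
import boto3


def remove_letter_pdf(bucket_name, filename):
    # We already hold the exact key, so delete the object directly
    # instead of listing the bucket just to rediscover that key.
    boto3.resource("s3").Object(bucket_name, filename).delete()
```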
Previously, the function would just return a presumed filename. Now that
it actually checks S3, it will raise an exception if the file doesn't
exist. By default that's a StopIteration from the end of the bucket
iterator, which isn't ideal: StopIteration gets suppressed if the
function is called from within a generator loop further up the stack.
There are a couple of places where we expect the file may not exist, so
we define a custom exception to rescue specifically here. I did consider
subclassing boto's ClientError, but this wasn't straightforward as the
constructor expects to know the operation that failed, which for me is a
signal that it's not an appropriate (re-)use of the class.
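As a rough sketch, with an illustrative exception name rather than the
real one:

```python
class LetterPDFNotFound(Exception):
    """Raised when no letter PDF exists in S3 for a notification."""


def first_matching_object(bucket, prefix):
    # Convert the empty-iterator case into our own exception, instead
    # of letting next() raise StopIteration, which a generator loop
    # further up the stack would silently swallow.
    try:
        return next(iter(bucket.objects.filter(Prefix=prefix)))
    except StopIteration:
        raise LetterPDFNotFound(prefix) from None
```

Call sites where a missing file is expected can then rescue
LetterPDFNotFound specifically.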
Previously we generated the filename we expected a letter PDF to be
stored at in S3, and used that to retrieve it. However, the generated
filename can change over the course of a notification's lifetime e.g.
if the service changes from crown ('.C.') to non-crown ('.N.').
The prefix of the filename is stable: it's based on properties of the
notification (its reference and creation date) that don't change. This commit
changes the way we interact with letter PDFs in S3:
- Uploading uses the original method to generate the full file name.
The method is renamed to 'generate_' to distinguish it from the new one.
- Downloading uses a new 'find_' method to get the filename using just
its prefix, which makes it agnostic to changes in the filename suffix.
Making this change helps to decouple our code from the requirements DVLA
have on the filenames. While it means more traffic to S3, we rely on S3
in any case to download the files. From experience, we know S3 is highly
reliable and performant, so we don't anticipate any issues.
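A sketch of the resulting split, reusing the exception sketched above;
the filename format and method names here are illustrative, not the
real DVLA format:

```python
import boto3


def generate_letter_pdf_filename(notification):
    # Upload path: build the full filename, including the mutable
    # crown marker ('.C.' or '.N.'). Format invented for illustration.
    crown = "C" if notification.service.crown else "N"
    return f"{notification.created_at:%Y-%m-%d}/{notification.reference}.{crown}.pdf"


def find_letter_pdf_filename(bucket_name, notification):
    # Download path: match on the stable prefix (reference plus
    # creation date), so a suffix that changed since upload still
    # resolves to the right object.
    prefix = f"{notification.created_at:%Y-%m-%d}/{notification.reference}"
    bucket = boto3.resource("s3").Bucket(bucket_name)
    for obj in bucket.objects.filter(Prefix=prefix):
        return obj.key
    raise LetterPDFNotFound(prefix)
```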
In the tests we favour using moto to mock S3, so that the behaviour is
realistic. There are a couple of places where we just mock the method,
since what it returns isn't important for the test.
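For example, a moto-backed test for the finder might look roughly like
this (bucket name and key invented; newer moto versions expose
`mock_aws` instead of `mock_s3`):

```python
from datetime import datetime
from types import SimpleNamespace

import boto3
from moto import mock_s3


@mock_s3
def test_find_letter_pdf_filename_matches_on_prefix():
    s3 = boto3.resource("s3", region_name="us-east-1")
    s3.create_bucket(Bucket="test-letters")
    s3.Object("test-letters", "2021-01-01/REF1.C.pdf").put(Body=b"pdf bytes")

    notification = SimpleNamespace(
        reference="REF1", created_at=datetime(2021, 1, 1)
    )
    # The crown marker in the suffix is irrelevant: lookup is by prefix.
    assert find_letter_pdf_filename("test-letters", notification) == (
        "2021-01-01/REF1.C.pdf"
    )
```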
Note that, since the new method requires a notification object, we need
to change a query in one place, which previously selected only the
columns needed by the original filename-generating method.
We will no longer send the performance platform any stats, so we don't
need:
- the code to work out the nightly stats
- the performance platform client
- any configuration for the client
- any nightly tasks that kick off the sending of the stats
We will also require a change in Cronitor: as this task will no longer
run, we need to delete its check.
We currently do this as part of send-daily-performance-platform-stats,
but this commit moves it into its own separate task. This is for two
reasons:
- we will shortly get rid of the send-daily-performance-platform-stats
  task, as we will no longer need to send anything to the performance
  platform
- even if we did decide to keep the send-daily-performance-platform-stats
  task and remove the bits specific to the performance platform, it's
  probably nicer to write the new task from scratch to make sure it's
  all clear and easy to understand
Having `/invite/service/<token>` and `/invite/service/<id>` as two
separate routes (the first to validate an invite token, the second to
retrieve invite metadata) technically works. Routes are matched from
first to last until a match is found. The metadata endpoint only accepts
UUIDs, so requests with a UUID will be picked up by the correct
endpoint, while requests that don't look like a UUID will carry on
searching and will find the token validation endpoint instead.
So while this works correctly for our normal expected input, it only
does so _because the UUID endpoint is first in the file_. This isn't
great, and it makes the URLs harder to reason about when looking at
them.
To solve this, create the new `invite/service/check/<token>` endpoint.
For backwards compatibility, register it in parallel with the existing
route - once the admin app uses the new route we can remove the old one
and make better guarantees about which endpoint is being hit.
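Roughly, the routing then looks like this (blueprint and function names
are assumptions, not the real ones):

```python
from flask import Blueprint

service_invite_blueprint = Blueprint("service_invite", __name__)


# Metadata endpoint: the uuid converter means only genuine UUIDs match.
@service_invite_blueprint.route("/invite/service/<uuid:invited_user_id>", methods=["GET"])
def get_invited_user(invited_user_id):
    ...


# New, unambiguous token route, registered alongside the old one; the
# old route can be dropped once the admin app has switched over.
@service_invite_blueprint.route("/invite/service/check/<token>", methods=["GET"])
@service_invite_blueprint.route("/invite/service/<token>", methods=["GET"])
def validate_service_invitation_token(token):
    ...
```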
This functions the same as `validate_invitation_token`, but takes just
the ID instead of the signed token. This is so that later endpoints
within the invite flow can also fetch the invited user.
NB: the routes are not changing as part of this, only file paths and
blueprint names.
invite -> service_invite
this blueprint handles fetching invites for a service, creating invites,
etc.
accept_invite -> global_invite
this blueprint handles accepting invites for now, but will also involve
retrieving service/org user invite data without knowing the associated
service/org id. I'm not in love with this name and am open to
suggestions, but I wanted to contrast it with service_invite and
organisation/invite_rest.py.
All other tasks in app/celery/*_tasks.py have timers on them. Some of
these timers will be useful for checking performance before and after,
as a way to reassure ourselves about the impact of [1].
[1]: https://github.com/alphagov/notifications-api/pull/3172
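The real tasks use the project's statsd timing decorator; as a
self-contained illustration of what such a timer records:

```python
import functools
import time


def timed(task_func):
    # Measure each invocation's wall-clock duration. The real
    # decorator reports this to statsd rather than printing it.
    @functools.wraps(task_func)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return task_func(*args, **kwargs)
        finally:
            elapsed = time.monotonic() - start
            print(f"{task_func.__name__} took {elapsed:.3f}s")

    return wrapper
```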
We are using the `set_broadcast_service_type` route to make changes to
service objects. However, we had forgotten to add the `version_class`
decorator to it. Adding it means that a service's move from training to
live mode will also be recorded in the services_history table for free.
Whilst not essential, this easy change makes things more consistent with
how we update other services.
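For illustration, the decorator sits on the DAO function that saves the
service; the names below are assumed from the pattern used elsewhere in
the codebase, not copied from the diff:

```python
# Assumed imports, following the codebase's existing pattern.
from app import db
from app.dao.dao_utils import transactional, version_class
from app.models import Service


@transactional
@version_class(Service)
def dao_set_broadcast_service_type(service, broadcast_channel):
    # version_class writes a row to services_history whenever the
    # decorated function modifies a Service.
    service.broadcast_channel = broadcast_channel
    db.session.add(service)
```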
The existing endpoint is a GET, and so leaves email addresses in log
files.
We've got an existing POST find_users_by_partial_email, but not one that
matches on a whole email address.
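A sketch of the new endpoint's shape, assuming Flask, with names
modelled on find_users_by_partial_email rather than taken from the real
code:

```python
from flask import Blueprint, jsonify, request

user_blueprint = Blueprint("user", __name__)


@user_blueprint.route("/email", methods=["POST"])
def fetch_user_by_email():
    # The address travels in the POST body, so it never lands in
    # access logs the way a GET query string or path segment would.
    email_address = request.get_json()["email"]
    user = get_user_by_email(email_address)  # assumed DAO helper
    return jsonify(data=user.serialize()), 200
```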
Spotted that we aren't testing all our permission types here, so added
this one in.
It includes the TODO about allowing the API to give a service the
broadcast permission. We don't want this to happen; we want services to
use the `set_as_broadcast_service` route instead. We will probably get
away with it for the moment, but it would be tighter validation worth
adding to reduce the risk of letting a service get into a dodgy state.
We were accidentally removing the ability for a service to do email
auth if it was a broadcast service with email auth. This fixes that.
Note: it might be up for debate later whether we let broadcast services
use email auth (I think we should), so this might change in time, but we
will fix this bug regardless.
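The fix amounts to something like the following sketch (constants and
models assumed for illustration): when setting a broadcast service's
permissions, keep email_auth if the service already had it.

```python
# Assumed names: BROADCAST_TYPE, EMAIL_AUTH_TYPE, ServicePermission.
permissions = [ServicePermission(service_id=service.id, permission=BROADCAST_TYPE)]

if service.has_permission(EMAIL_AUTH_TYPE):
    # Previously this was dropped, which stripped email auth from any
    # broadcast service that had it; keep it instead.
    permissions.append(
        ServicePermission(service_id=service.id, permission=EMAIL_AUTH_TYPE)
    )

service.permissions = permissions
```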
Note: it's worth glancing at `SERVICE_PERMISSION_TYPES`, which lists the
permissions a service might have, to make sure I haven't missed any
others. The one that looks potentially dodgy is the
`EDIT_FOLDER_PERMISSIONS` permission, but I can't see it being used
anywhere in the API or the admin app, so it's likely now defunct (or a
user-level permission) and we don't need to worry about it.
Removes the configuration override for Live, so the base configuration is
used, enabling cell broadcasting for all MNOs.
Signed-off-by: Richard Baker <richard.baker@digital.cabinet-office.gov.uk>