This deletes a big ol' chunk of code related to letters. It's not everything (there are still a few things that might be tied to sms/email), but it's the heart of the letters functionality. SMS and email functionality should be untouched by this.
Areas affected:
- Things obviously about letters
- PDF tasks, used for precompiling letters
- Virus scanning, used for those PDFs
- FTP, used to send letters to the printer
- Postage stuff
https://marshmallow.readthedocs.io/en/stable/upgrading.html#schemas-are-always-strict
`.load` doesn't return a `(data, errors)` tuple any more - only data is
returned. A `ValidationError` is raised if validation fails. The code
now relies on the `marshmallow_validation_error` error handler to handle
errors instead of having to raise an `InvalidRequest`. This has no
effect on the response that is returned (a test has been modified to
check).
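For illustration, a minimal sketch of the behavioural change (the schema here is a made-up example, not one from the codebase):

```python
from marshmallow import Schema, fields, ValidationError

class ExampleSchema(Schema):
    name = fields.String(required=True)

# marshmallow 2.x: data, errors = ExampleSchema().load(payload)
# marshmallow 3.x: only data is returned, and bad input raises instead
try:
    data = ExampleSchema().load({"name": 123})
except ValidationError as err:
    print(err.messages)  # {'name': ['Not a valid string.']}
```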
Also added a new `password` field to the `UserSchema` so that we don't
have to specially check for password errors in the `.create_user` endpoint
- we can let marshmallow handle them.
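A rough sketch of what that field could look like (the exact validators here are assumptions, not the real constraints):

```python
from marshmallow import Schema, fields, validate

class UserSchema(Schema):
    # load_only keeps the password out of serialised responses
    password = fields.String(
        required=True,
        load_only=True,
        validate=validate.Length(min=8),  # illustrative rule only
    )
```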
At the moment we display the count of scheduled jobs on the dashboard
by sending all the scheduled jobs to the admin app and letting it work
out the stats.
This is inefficient and, because the get jobs response has a page size
of 50, becomes incorrect if a service schedules more than 50 jobs.
This commit adds a separate endpoint which gives the admin app the stats
it needs directly and correctly.
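A minimal sketch of the shape of such an endpoint, assuming a SQLAlchemy `Job` model (the route and import paths are illustrative):

```python
from flask import Blueprint, jsonify

from app.models import Job  # illustrative import path

scheduled_job_stats_blueprint = Blueprint("scheduled_job_stats", __name__)

@scheduled_job_stats_blueprint.route(
    "/service/<uuid:service_id>/job/scheduled-job-stats", methods=["GET"]
)
def get_scheduled_job_stats(service_id):
    # count in the database rather than shipping 50-job pages to the admin app
    count = Job.query.filter(
        Job.service_id == service_id,
        Job.job_status == "scheduled",
    ).count()
    return jsonify(count=count)
```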
Rather than showing all jobs that have been ‘copied’ from a contact list
I think it makes more sense to group them under the contact list. This
way it’s easier to see what messages have been sent to a given group of
contacts over time.
Part of this work means the API needs to return only jobs that have been
created from a given contact list, when asked.
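The filtering could look something like this (a sketch; the `contact_list_id` column and param name are assumptions):

```python
from flask import request

# an optional query param restricts results to jobs made from one contact list
contact_list_id = request.args.get("contact_list_id")
query = Job.query.filter(Job.service_id == service_id)
if contact_list_id:
    query = query.filter(Job.contact_list_id == contact_list_id)
jobs = query.all()
```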
We want to add another argument here, and doing so would make the line
too long with all the arguments on one line.
Also uses the * operator to enforce keyword-only arguments.
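For example (the function name below is illustrative):

```python
def dao_get_jobs_by_service_id(
    service_id,
    *,  # everything after this must be passed by keyword
    statuses=None,
    page=1,
    page_size=50,
):
    ...

# dao_get_jobs_by_service_id("svc-id", ["finished"], 2)  -> TypeError
dao_get_jobs_by_service_id("svc-id", statuses=["finished"], page=2)
```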
All our endpoints should check that the params are valid - this is an easy way to do that and is standard for our endpoints.
I reverted the query to just filter by job id.
When we cancel a job, we need to check if all notifications are
already in the database. So far, we were querying for all
notification objects in the database and counting them in
admin app, which runs into pagination problems for large jobs,
and could time out for very large jobs.
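Counting in the database instead looks roughly like this (the import paths and names are illustrative):

```python
from sqlalchemy import func

from app import db                  # illustrative import paths
from app.models import Notification

def dao_get_notification_count_for_job(job_id):
    # one COUNT(*) in the database instead of paging rows to the admin app
    return db.session.query(func.count(Notification.id)).filter(
        Notification.job_id == job_id
    ).scalar()
```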
We previously always read from NotificationHistory to get the
notification status stats for a job. Now, if the job is more than three
days old, we read from the ft_notification_status table; otherwise we
read from the notifications table (to keep live updates).
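A sketch of that branching (the three-day cutoff is from this commit; the helper names are illustrative):

```python
from datetime import datetime, timedelta

def get_notification_status_stats_for_job(job):
    if job.created_at < datetime.utcnow() - timedelta(days=3):
        # settled jobs: the nightly fact table is cheaper to query
        return fetch_status_stats_from_ft_notification_status(job.id)
    # recent jobs: read the live notifications table for up-to-date counts
    return fetch_status_stats_from_notifications(job.id)
```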
The `save_email` and `save_sms` jobs were updated previously to take an
optional `sender_id` and to use this if it was available. This commit
now gets the `sender_id` from the S3 metadata if it exists and passes it
through to the tasks which save the job notifications. This means SMS
and emails sent through jobs can use a specified `sender_id` instead of
the default.
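Reading the metadata could look like this (a boto3 sketch; the bucket and key layout are assumptions):

```python
import boto3

def get_job_metadata(bucket_name, object_key):
    # user-defined metadata comes back lower-cased under the 'Metadata' key
    s3 = boto3.client("s3")
    return s3.head_object(Bucket=bucket_name, Key=object_key)["Metadata"]

metadata = get_job_metadata("uploads", "service-123/job-456.csv")
sender_id = metadata.get("sender_id")  # None -> tasks fall back to the default
```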
Since the admin app won’t be checking the metadata when it starts a job,
it’s now possible that someone could make a post request which attempts
to start a job for an invalid file. This commit adds a check to make
sure that can’t happen.
This is more of an extra safety thing, rather than something that the
admin app or a user will see.
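Roughly (assuming the upload step records a `valid` flag in the S3 metadata; the flag name is an assumption):

```python
# reusing the illustrative get_job_metadata helper sketched above
metadata = get_job_metadata(bucket_name, object_key)
if metadata.get("valid") != "True":
    raise InvalidRequest("File is not a valid job", status_code=400)
```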
All of our uploads now have the metadata about the job set on them in
S3. So this commit moves to using that metadata, if it’s there, instead
of the data in the body of the post request.
The aim of this is to stop the admin app having to post this data, which
means that it won’t have to keep this data in the session while doing
the file upload flow.
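In sketch form, the endpoint can prefer the metadata and fall back to the body (names are illustrative):

```python
from flask import request

data = request.get_json() or {}
metadata = get_job_metadata(bucket_name, object_key)  # illustrative helper
if metadata:
    data.update(metadata)  # metadata wins over whatever the admin app posted
```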
The structure has been flattened, so I need to create a new endpoint, start using that endpoint, then change the name back.
Added template_id and version to the get-job-stats-by-id response.
This adds functionality (via an extra request param) to the existing
get-all-notifications method, allowing us to specify whether we want
the API to return in csv or non-csv format.
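Something like this, as a sketch (the param name and serialiser methods are assumptions):

```python
from flask import jsonify, request

def get_all_notifications_response(notifications):
    if request.args.get("format_for_csv"):
        data = [n.serialize_for_csv() for n in notifications]
    else:
        data = [n.serialize() for n in notifications]
    return jsonify(notifications=data)
```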
There are three authentication methods:
- requires_no_auth - public endpoints that do not require an Authorisation header
- requires_auth - public endpoints that need an API key in the Authorisation header
- requires_admin_auth - private endpoints that require an Authorisation header containing the API key defined for the admin client
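Illustrative routes showing where each decorator applies (the blueprint and endpoints are examples, not the real ones):

```python
@main.route("/_status")
@requires_no_auth
def show_status():            # public: no Authorisation header needed
    ...

@main.route("/v2/notifications")
@requires_auth
def get_notifications():      # public: service API key in Authorisation header
    ...

@main.route("/service/<uuid:service_id>")
@requires_admin_auth
def get_service(service_id):  # private: admin client API key required
    ...
```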
You can now pass in a query string `?statuses=x,y,z` to filter jobs based on
`Job.job_status`. Not passing in a status, or passing in an empty string, is
equivalent to selecting every filter type at once.
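Parsing ends up being a one-liner (a sketch against the `Job` model):

```python
from flask import request

# empty string or missing param -> no filtering, i.e. every status
statuses = [s for s in request.args.get("statuses", "").split(",") if s]
query = Job.query.filter(Job.service_id == service_id)
if statuses:
    query = query.filter(Job.job_status.in_(statuses))
```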
Accepts a page parameter to control which page of data is returned, and
returns additional pagination fields in the response dict:
* page_size: will always be 50, as defined by Config.PAGE_SIZE
* total: the total number of unpaginated records
* links: dict optionally containing prev, next, and last links to
other relevant pagination pages
also cleaned up some test imports
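The response ends up shaped roughly like this (values are examples):

```python
{
    "data": ["..."],  # up to Config.PAGE_SIZE (50) jobs
    "page_size": 50,
    "total": 123,     # total records across all pages
    "links": {
        # prev/next/last only appear when they make sense for the page
        "prev": "/service/<service_id>/job?page=1",
        "next": "/service/<service_id>/job?page=3",
        "last": "/service/<service_id>/job?page=3",
    },
}
```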
If you schedule a job you might change your mind or circumstances might
change. So you need to be able to cancel it. This commit adds a `POST`
endpoint for individual jobs which sets their status to `cancelled`.
This also means adding a new status of `cancelled`, so there’s a
migration…
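A minimal sketch of the endpoint (the dao helpers and schema names are illustrative):

```python
from flask import Blueprint, jsonify

job_blueprint = Blueprint(
    "job", __name__, url_prefix="/service/<uuid:service_id>/job"
)

@job_blueprint.route("/<uuid:job_id>/cancel", methods=["POST"])
def cancel_job(service_id, job_id):
    job = dao_get_job_by_service_id_and_job_id(service_id, job_id)
    job.job_status = "cancelled"  # the new status added by the migration
    dao_update_job(job)
    return jsonify(data=job_schema.dump(job)), 200
```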
- If the job JSON contains a scheduling date then the new `job_status` column is set to `scheduled`
- The date is persisted on the job row in the database
- Also the job will NOT be placed onto the queue of jobs. This is deferred to a later celery beat task.
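The create-job branch then looks something like this (field and task names are assumptions):

```python
if data.get("scheduled_for"):
    job.job_status = "scheduled"               # the new column
    job.scheduled_for = data["scheduled_for"]  # persisted on the job row
    dao_create_job(job)  # NOT queued - a celery beat task picks it up later
else:
    dao_create_job(job)
    process_job.apply_async([str(job.id)], queue="job-tasks")
```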
"with personalisation" should only be used by the public notification api
"with template" should be used when we want template name, etc details.
also added an xfail test for correctly constructing notification
personalisation
rename the notification_status_schema to make it apparent that it
involves the template, and then don't use it on the job page - the
job page doesn't do anything with the data. won't somebody think of
the cpu cycles! (also means it ignores problems with template
versions)