All our endpoints should check that the params are valid - this is an easy way to do that and is standard across our endpoints.
I reverted the query to just filter by job id.
When we cancel a job, we need to check whether all notifications are
already in the database. Previously we were querying for all
notification objects in the database and counting them in the admin
app, which runs into pagination problems for large jobs and could time
out for very large jobs.
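A minimal sketch of counting in the database instead, assuming the
usual SQLAlchemy setup; the model and column names are taken from the
surrounding commits, but the exact query shape is an assumption:

    from sqlalchemy import func

    def count_notifications_for_job(session, Notification, job_id):
        # Let the database do the counting rather than fetching every
        # notification object and counting client-side.
        return session.query(func.count(Notification.id)).filter(
            Notification.job_id == job_id
        ).scalar()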
We previously always read from NotificationHistory to get the
notification status stats for a job. Now, if the job is more than three
days old we read from the ft_notification_status table; otherwise we
read from the notifications table (to keep live updates).
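A sketch of the branch, assuming the job has a created_at column; the
two read helpers are hypothetical names, not the real functions:

    from datetime import datetime, timedelta

    def get_job_notification_stats(job):
        # Jobs older than three days are stable, so read the cheaper
        # ft_notification_status fact table; newer jobs stay live.
        if job.created_at < datetime.utcnow() - timedelta(days=3):
            return stats_from_ft_notification_status(job.id)  # hypothetical helper
        return stats_from_notifications(job.id)  # hypothetical helper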
The `save_email` and `save_sms` jobs were updated previously to take an
optional `sender_id` and to use this if it was available. This commit
now gets the `sender_id` from the S3 metadata if it exists and passes it
through to the tasks which save the job notifications. This means SMS
and emails sent through jobs can use a specified `sender_id` instead of
the default.
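The boto3 call below is real, but the bucket/key layout and how the
value is threaded into the save tasks are assumptions; a sketch:

    import boto3

    def get_sender_id_from_s3(bucket, key):
        # S3 returns user metadata with lowercased keys; missing -> None.
        obj = boto3.client("s3").head_object(Bucket=bucket, Key=key)
        return obj["Metadata"].get("sender_id")

The save_email/save_sms tasks would then receive this value as their
optional sender_id argument, falling back to the default when it is None.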
Since the admin app won’t be checking the metadata when it starts a
job, it’s now possible for someone to make a POST request that attempts
to start a job for an invalid file. This commit adds a check to make
sure that can’t happen.
This is more of an extra safety thing, rather than something that the
admin app or a user will see.
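A sketch of the check; the helper name and the metadata shape are both
assumptions:

    from flask import abort

    def check_job_can_be_started(service_id, upload_id):
        # Reject the request outright if the upload has no metadata
        # in S3 (hypothetical helper and metadata shape).
        metadata = get_job_metadata_from_s3(service_id, upload_id)
        if not metadata:
            abort(400, "Job metadata is missing: cannot start job")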
All of our uploads now have the metadata about the job set on them in
S3. So this commit moves to using that metadata, if it’s there, instead
of the data in the body of the post request.
The aim of this is to stop the admin app having to post this data, which
means that it won’t have to keep this data in the session while doing
the file upload flow.
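Something like the following, reusing the same hypothetical metadata
helper; the fallback to the POST body covers uploads that don’t have
the metadata set yet:

    def get_job_data(service_id, upload_id, request_json):
        # Prefer the metadata set on the upload in S3; fall back to
        # the body of the POST request if it isn't there.
        metadata = get_job_metadata_from_s3(service_id, upload_id)
        return metadata or request_json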
The structure has been flattened, so I need to create a new endpoint, start using that endpoint, then change the name back.
Added template_id and version to the get-job-stats-by-id endpoint.
This adds functionality (via an extra request param) to the existing
get-all-notifications method, allowing us to specify whether we want
the API to return CSV or non-CSV format.
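A sketch of the branch; the param name "format" and both helpers are
assumptions:

    from flask import request, jsonify

    def get_all_notifications():
        notifications = fetch_notifications()  # hypothetical helper
        if request.args.get("format") == "csv":
            # hypothetical helper that renders the rows as CSV
            return notifications_as_csv(notifications)
        return jsonify(notifications=notifications)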
There are three authentication methods:
- requires_no_auth - public endpoints that do not require an Authorisation header
- requires_auth - public endpoints that need an API key in the Authorisation header
- requires_admin_auth - private endpoints that require an Authorisation header containing the API key for the client defined as the admin user
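A minimal sketch of the middle decorator to show the intent; only the
decorator names come from this list, the internals here are assumptions:

    from functools import wraps
    from flask import request, abort

    def requires_auth(f):
        @wraps(f)
        def wrapper(*args, **kwargs):
            if "Authorization" not in request.headers:
                abort(401)
            # ...validate the API key from the header here...
            return f(*args, **kwargs)
        return wrapper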
Can now pass in a query string `?statuses=x,y,z` to filter jobs based on
`Job.job_status`. Not passing a status, or passing an empty string, is
equivalent to selecting every filter type at once.
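A sketch of the filter, assuming the Job model is in scope; the
empty-string case falls out naturally because splitting it yields no
statuses:

    from flask import request

    def filter_jobs_by_status(query):
        statuses = [s for s in request.args.get("statuses", "").split(",") if s]
        if statuses:
            query = query.filter(Job.job_status.in_(statuses))
        return query  # no statuses means no filter, i.e. every status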
Accepts a page parameter to control which page of data is returned, and
returns additional pagination fields in the response dict:
* page_size: will always be 50, as defined by Config.PAGE_SIZE
* total: the total number of unpaginated records
* links: dict optionally containing prev, next, and last links to
  other relevant pagination pages
also cleaned up some test imports
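A sketch of the response shape described above; pagination here is a
Flask-SQLAlchemy Pagination object, and pagination_links and
item.serialize are hypothetical helpers:

    def paginated_response(pagination, endpoint, **kwargs):
        return {
            "data": [item.serialize() for item in pagination.items],
            "page_size": 50,  # Config.PAGE_SIZE
            "total": pagination.total,
            "links": pagination_links(pagination, endpoint, **kwargs),
        }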
If you schedule a job you might change your mind or circumstances might
change. So you need to be able to cancel it. This commit adds a `POST`
endpoint for individual jobs which sets their status to `cancelled`.
This also means adding a new status of `cancelled`, so there’s a
migration…
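A sketch of the endpoint; the blueprint, dao helpers, and schema names
are assumptions based on this commit:

    from flask import jsonify

    @job_blueprint.route("/<job_id>/cancel", methods=["POST"])
    def cancel_job(job_id):
        job = dao_get_job_by_id(job_id)  # hypothetical dao helper
        job.job_status = "cancelled"     # the new status from the migration
        dao_update_job(job)              # hypothetical dao helper
        return jsonify(data=job_schema.dump(job)), 200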
- If the job JSON contains a scheduling date then the new "job_status" column is set to "scheduled"
- The date is persisted on the Job row in the database
- Also, the job WILL NOT be placed onto the queue of jobs. This is deferred to a later celery beat task.
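A sketch of the create path under those rules; the "scheduled_for" key,
the dao helper, and the process_job task name are assumptions:

    def create_job(job_json):
        job = Job(**job_json)
        if job_json.get("scheduled_for"):
            job.job_status = "scheduled"  # date persisted on the Job row
            dao_create_job(job)           # NOT queued; celery beat picks it up later
        else:
            dao_create_job(job)
            process_job.apply_async([str(job.id)], queue="job-tasks")
        return job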
"with personalisation" should only be used by the public notification api
"with template" should be used when we want template name, etc details.
also added an xfail test for correctly constructing notification
personalisation
Rename the notification_status_schema to make it apparent that it
involves the template, and then don't use it on the job page - the
job page doesn't do anything with the data. Won't somebody think of
the CPU cycles! (This also means it ignores problems with template
versions.)
Moved from notifications/rest -> service/rest and job/rest respectively;
endpoint routes are not affected.
Removed the requires_admin decorator - that should be set by nginx config
as opposed to Python code.
Updated schema and param errors to raise an invalid data exception.
That will cause those responses to be handled by errors.py, which will
log the errors.
Set most of the schemas to strict mode so that marshmallow will raise an
exception rather than us checking for errors in the tuple returned from
load. Added a handler to errors.py for marshmallow validation errors.
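A sketch of both pieces, assuming marshmallow 2.x where strict mode is
opt-in (in marshmallow 3, load() always raises); the schema fields and
handler body are assumptions:

    from flask import jsonify
    from marshmallow import Schema, fields, ValidationError

    class JobSchema(Schema):
        id = fields.UUID(required=True)

        class Meta:
            strict = True  # load() raises instead of returning (data, errors)

    # in errors.py, registered via
    # app.register_error_handler(ValidationError, handle_validation_error)
    def handle_validation_error(error):
        # errors.py logs the error before returning the 400 response
        return jsonify(result="error", message=error.messages), 400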