So that we keep a record of who first uploaded a list, it's better to
archive a list than to delete it completely.
The list in the database doesn't contain any recipient info, so this
isn't a change to what data we're retaining.
This means updating the endpoints that get contact lists to exclude ones
that are archived.
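A minimal sketch of what that query change might look like (the `ContactList` model name and `archived` flag are assumptions, not the real schema):

```python
from app.models import ContactList  # assumed model name


def dao_get_contact_lists(service_id):
    # Archived lists stay in the database (preserving the record of who
    # uploaded them) but are excluded from everything the endpoint returns.
    return (
        ContactList.query
        .filter_by(service_id=service_id, archived=False)
        .order_by(ContactList.created_at.desc())
        .all()
    )
```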
This is so we can display letter jobs in a different way on the admin
app (because it doesn't make sense for them to have failed/delivered
counts the way email and text message jobs do).
As elsewhere, we use `fields.Method` to avoid serializing the whole
template object.
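For illustration, the pattern looks roughly like this (the field names are illustrative, not the real schema):

```python
from marshmallow import Schema, fields


class JobSchema(Schema):
    id = fields.UUID()
    # fields.Method serializes a small reference to the template instead
    # of nesting the whole template object in the response.
    template = fields.Method("get_template_ref")

    def get_template_ref(self, job):
        return {"id": str(job.template_id), "version": job.template_version}
```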
Currently, if you visit the job page and the job is older than the data
retention period, the totals on the page are all wrong, because this
query gets the counts from the notification table. With this change the
data should always be correct, and we no longer need to look at data
retention at all. If the job is new and no notifications have been
created yet (i.e. the job hasn't started), the page should display
correctly because the outcomes are empty, as expected; once the
notifications for the job are created, the numbers will start going up.
All our endpoints should check that the params are valid - this is an
easy way to do that, and it's standard for our endpoints.
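A minimal sketch of that kind of check, assuming a jsonschema-style validator (the real helper and schema live elsewhere):

```python
from jsonschema import validate  # assumption: plain jsonschema here

get_jobs_request_schema = {
    "type": "object",
    "properties": {"statuses": {"type": "string"}},
    "additionalProperties": False,
}


def validate_params(params: dict) -> dict:
    # Raises jsonschema.ValidationError on unknown or badly-typed params,
    # which the endpoint's error handler can turn into a 400 response.
    validate(params, get_jobs_request_schema)
    return params
```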
I reverted the query to just filter by job id.
When we cancel a job, we need to check if all notifications are
already in the database. So far, we were querying for all
notification objects in the database and counting them in the
admin app, which runs into pagination problems for large jobs
and could time out for very large jobs.
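A minimal sketch of counting in the database instead (model and dao names are illustrative):

```python
from sqlalchemy import func

from app import db  # assumption: flask-sqlalchemy layout
from app.models import Notification


def dao_get_notification_count_for_job(job_id):
    # One COUNT(*) in the database, instead of paging every notification
    # row back to the admin app and counting there.
    return (
        db.session.query(func.count(Notification.id))
        .filter(Notification.job_id == job_id)
        .scalar()
    )
```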
Move the test for `can_letter_job_be_cancelled` closer to the code
Move the test for `dao_cancel_letter_job` closer to the code
Mock out calls in cancel_letter_job to test just that method
We previously always read from NotificationHistory to get the
notification status stats for a job. Now, if the job is more than three
days old we read from the ft_notification_status table; otherwise we
read from the notifications table (to keep live updates).
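A minimal sketch of the read-path switch (the FactNotificationStatus column names and the exact cut-off handling are assumptions):

```python
from datetime import datetime, timedelta

from sqlalchemy import func

from app import db  # assumption: flask-sqlalchemy layout
from app.models import FactNotificationStatus, Notification


def get_notification_status_counts_for_job(job):
    if job.created_at < datetime.utcnow() - timedelta(days=3):
        # Older jobs: read the pre-aggregated ft_notification_status rows.
        query = (
            db.session.query(
                FactNotificationStatus.notification_status,
                func.sum(FactNotificationStatus.notification_count),
            )
            .filter(FactNotificationStatus.job_id == job.id)
            .group_by(FactNotificationStatus.notification_status)
        )
    else:
        # Recent jobs: read live rows so the counts keep updating.
        query = (
            db.session.query(Notification.status, func.count(Notification.id))
            .filter(Notification.job_id == job.id)
            .group_by(Notification.status)
        )
    return query.all()
```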
The `save_email` and `save_sms` tasks were updated previously to take an
optional `sender_id` and to use this if it was available. This commit
now gets the `sender_id` from the S3 metadata if it exists and passes it
through to the tasks which save the job notifications. This means SMS
and emails sent through jobs can use a specified `sender_id` instead of
the default.
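A minimal sketch of reading that metadata (bucket/key handling is illustrative):

```python
import boto3


def get_sender_id_from_s3(bucket_name, object_key):
    # S3 user metadata comes back with lowercase keys; sender_id is
    # optional, so callers fall back to the service default when absent.
    s3 = boto3.client("s3")
    response = s3.head_object(Bucket=bucket_name, Key=object_key)
    return response["Metadata"].get("sender_id")
```

The value, when present, is what gets passed through as the optional `sender_id` kwarg to `save_email` and `save_sms`.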
The concern about performance degrading has been thought through. We do
not believe there will be an adverse effect, since the high-volume users
do not send one-off messages.
Really, it'll be somewhere between 7 and 8 days, depending on what time
of day you request it. But if today is Monday, then seven days ago is
last Tuesday - and we should return data for last Monday as well, so
that users see a full week's worth of data.
Also update/clarify the tests to make sure this is honoured for
all the different widgets on the dashboard.
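A minimal sketch of the cut-off computation described above:

```python
from datetime import datetime, time, timedelta


def midnight_seven_days_ago(now=None):
    now = now or datetime.utcnow()
    # Truncate to midnight so a request made on Monday includes all of
    # last Monday, not just the part after the current time of day.
    return datetime.combine((now - timedelta(days=7)).date(), time.min)
```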
Since the admin app won't be checking the metadata when it starts a job
now, it's possible that someone could make a POST request which attempts
to start a job for an invalid file. This commit adds a check to make
sure that can't happen.
This is more of an extra safety thing, rather than something that the
admin app or a user will see.
All of our uploads now have the metadata about the job set on them in
S3. So this commit moves to using that metadata, if it’s there, instead
of the data in the body of the POST request.
The aim of this is to stop the admin app having to post this data, which
means that it won't have to keep this data in the session while doing
the file upload flow.
Previously these were using the sample_service fixture under the hood,
but with full permissions added - this works fine, **unless** there's
already a service with the name "sample service" in the database. This
can happen for two reasons:
* A previous test didn't tear down correctly
* This test already invoked the sample_service fixture somehow
If this happens, we just return the existing service, without modifying
its values - values that we might change in tests, such as
research mode or letters permissions.
In the future, we'll have to be vigilant! and aware! and careful! to
not use sample_service if we're doing tests involving letters, since
they create a service with a different name now
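A minimal sketch of the get-or-create behaviour described above (the imports and create_service signature are assumptions):

```python
import pytest

from app.models import Service  # assumed import path
from tests.app.db import create_service  # assumed helper location


@pytest.fixture
def sample_service(notify_db_session):
    existing = Service.query.filter_by(name="sample service").first()
    if existing:
        # Returned untouched: research mode, letters permissions etc. are
        # whatever a previous test left behind - the trap described above.
        return existing
    return create_service(service_name="sample service")
```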
Replace labels by adding a key kwarg in the model for status.
We still need this as SQLAlchemy attempts to look for `notification_status`
on the model (Notification/NotificationHistory). To achieve true ORM mapping
(map status -> notification_status) we need the key kwarg.
More here:
http://docs.sqlalchemy.org/en/latest/core/metadata.html#sqlalchemy.schema.Column#key
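A minimal sketch of the mapping (plain SQLAlchemy rather than the real model definitions):

```python
from sqlalchemy import Column, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Notification(Base):
    __tablename__ = "notifications"

    id = Column(String, primary_key=True)
    # Stored as notification_status in the table, but key="status" keeps
    # the attribute (and Table.c lookup) available as `status`.
    status = Column("notification_status", String, key="status")
```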
The structure has been flattened, so I need to create a new endpoint, start using that endpoint, then change the name back.
Added template_id and version to the get-job-stats-by-id endpoint.
Since the response has changed, I have created new endpoints so that the
deployments for Admin are more manageable.
Removed print statements from some tests.
This adds functionality (via an extra request param) to the existing
get-all-notifications method, allowing us to specify when we want the
API to return in CSV/non-CSV format.
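A minimal sketch of the branching (the `format` parameter name and serialization details are assumptions):

```python
import csv
import io

from flask import Response, jsonify, request


def render_notifications(notifications):
    # `format=csv` is an assumed name for the extra request param.
    if request.args.get("format") == "csv":
        buffer = io.StringIO()
        writer = csv.DictWriter(buffer, fieldnames=notifications[0].keys())
        writer.writeheader()
        writer.writerows(notifications)
        return Response(buffer.getvalue(), mimetype="text/csv")
    return jsonify(notifications=notifications)
```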
There are three authentication methods:
- requires_no_auth - public endpoints that do not require an Authorisation header
- requires_auth - public endpoints that need an API key in the Authorisation header
- requires_admin_auth - private endpoints that require an Authorisation header containing the API key for the client defined as the admin user
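For illustration only, the general shape of one of these decorators (the real implementations validate a signed JWT against the stored API keys, not just the header):

```python
from functools import wraps

from flask import abort, request


def requires_admin_auth(view):
    @wraps(view)
    def wrapper(*args, **kwargs):
        auth_header = request.headers.get("Authorization", "")
        if not auth_header.startswith("Bearer "):
            abort(401)
        # ...verify the token against the admin client's API key here...
        return view(*args, **kwargs)
    return wrapper
```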
If you want to send a job on Monday morning, you should be able to
schedule it on Friday. You shouldn’t need to work on the weekend.
96 hours is a full 4 days, so you can schedule a job at any time on
Friday for any time on Monday.
We’ve checked with the information assurance people, and they’re OK with
us holding the data for this extra amount of time.
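A minimal sketch of the 96-hour window check (the real validation lives in the request schema):

```python
from datetime import datetime, timedelta


def validate_scheduled_for(scheduled_for, now=None):
    now = now or datetime.utcnow()
    # 96 hours = 4 full days: any time on Friday covers any time on Monday.
    if scheduled_for > now + timedelta(hours=96):
        raise ValueError("scheduled_for must be within the next 96 hours")
```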
can now pass in a query string `?statuses=x,y,z` to filter jobs based on
`Job.job_status`. Not passing in a status or passing in an empty string is
equivalent to selecting every filter type at once.
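A minimal sketch of how the filter might be applied (only `Job.job_status` is named above; the rest is illustrative):

```python
from flask import request

from app.models import Job  # the model referenced above


def filter_jobs_by_status(query):
    # An empty or missing `statuses` param applies no filter, which is
    # the same as selecting every status at once.
    statuses = [s for s in request.args.get("statuses", "").split(",") if s]
    if statuses:
        query = query.filter(Job.job_status.in_(statuses))
    return query
```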