Previously they were using the sample_service fixture under the hood, but
with full permissions added. This works fine, **unless** there's
already a service with the name "sample service" in the database. This
can happen for two reasons:
* A previous test didn't tear down correctly
* This test already invoked the sample_service fixture somehow
If this happens, we just return the existing service, without modifying
its values - values that we might change in tests, such as
research mode or letters permissions.
In the future, we'll have to be vigilant! and aware! and careful! not to
use sample_service in tests involving letters, since they now create a
service with a different name.
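A minimal sketch of that get-or-create behaviour, assuming a Flask-SQLAlchemy `Service` model and a db session fixture (names and import paths are illustrative, not the project's actual code):

```python
import pytest

from app import db                 # assumed import paths
from app.models import Service


@pytest.fixture
def sample_service(notify_db_session):
    # reuse an existing service with this name rather than creating a
    # duplicate - note that its existing values (research mode, letters
    # permissions, ...) are left untouched
    service = Service.query.filter_by(name='sample service').first()
    if service is None:
        service = Service(name='sample service')
        db.session.add(service)
        db.session.commit()
    return service
```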
Replace labels by adding a key kwarg in the model for status.
We still need this as sqlalchemy attempts to look for `notification_status`
on the model (Notification/NotificationHistory). To achieve true ORM mapping
(map status -> notification_status) we need the key kwarg.
More here:
http://docs.sqlalchemy.org/en/latest/core/metadata.html#sqlalchemy.schema.Column#key
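A minimal sketch of the key kwarg in use, assuming a Flask-SQLAlchemy `db` object (model and column details are illustrative):

```python
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()


class Notification(db.Model):
    __tablename__ = 'notifications'

    id = db.Column(db.Integer, primary_key=True)
    status = db.Column(
        'notification_status',  # actual column name in the database
        db.Text,
        key='status',           # metadata key, so the ORM maps Notification.status
        nullable=True,
    )
```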
The structure has been flattened, so I need to create a new endpoint, start using that endpoint, then change the name back.
Added template_id and version to the get job stats by id endpoint.
Since the response has changed, I have created new endpoints so that the deployments for Admin are more manageable.
Removed print statements from some tests.
* This adds functionality (via an extra request parameter) to the existing
  get all notifications method, allowing us to specify whether we want the
  API to return CSV or non-CSV format.
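A minimal, self-contained sketch of the idea; the parameter name `format` and the serialisation details are assumptions, not the project's actual code:

```python
import csv
import io

from flask import Flask, jsonify, request

app = Flask(__name__)


def fetch_notifications():
    # placeholder data so the sketch runs on its own
    return [{'id': '1', 'status': 'delivered'}]


@app.route('/notifications', methods=['GET'])
def get_all_notifications():
    notifications = fetch_notifications()
    if request.args.get('format') == 'csv':
        buf = io.StringIO()
        writer = csv.DictWriter(buf, fieldnames=['id', 'status'])
        writer.writeheader()
        writer.writerows(notifications)
        return buf.getvalue(), 200, {'Content-Type': 'text/csv'}
    return jsonify(notifications=notifications)
```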
There are three authentication methods:
- requires_no_auth - public endpoints that do not require an Authorisation header
- requires_auth - public endpoints that need an API key in the Authorisation header
- requires_admin_auth - private endpoints that require an Authorisation header containing the API key for the client defined as the admin user
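A rough sketch of how the three decorators might differ; the header parsing shown here is an assumption, and real validation would verify the API key/JWT rather than just checking that the header is present:

```python
from functools import wraps

from flask import abort, request


def _get_bearer_token():
    header = request.headers.get('Authorization', '')
    if not header.startswith('Bearer '):
        abort(401)
    return header[len('Bearer '):]


def requires_no_auth(f):
    # public endpoint: no Authorisation header needed
    return f


def requires_auth(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        _get_bearer_token()  # real code validates this as a service API key
        return f(*args, **kwargs)
    return wrapper


def requires_admin_auth(f):
    @wraps(f)
    def wrapper(*args, **kwargs):
        _get_bearer_token()  # real code validates this against the admin client's key
        return f(*args, **kwargs)
    return wrapper
```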
If you want to send a job on Monday morning, you should be able to
schedule it on Friday. You shouldn’t need to work on the weekend.
96 hours is a full 4 days, so you can schedule a job at any time on
Friday for any time on Monday.
We’ve checked with the information assurance people, and they’re OK with
us holding the data for this extra amount of time.
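A small sketch of the resulting 96-hour limit; the field name and error handling are assumptions:

```python
from datetime import datetime, timedelta
from typing import Optional

MAX_SCHEDULE_AHEAD = timedelta(hours=96)  # a full 4 days


def validate_scheduled_for(scheduled_for: datetime, now: Optional[datetime] = None) -> None:
    now = now or datetime.utcnow()
    if scheduled_for > now + MAX_SCHEDULE_AHEAD:
        raise ValueError('jobs can only be scheduled up to 96 hours in advance')
```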
can now pass in a query string `?statuses=x,y,z` to filter jobs based on
`Job.job_status`. Not passing in a status, or passing in an empty string, is
equivalent to selecting every status at once.
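A hedged sketch of the filter; `Job.job_status` comes from the description above, but `Job` is assumed to be the SQLAlchemy model and the surrounding query construction is an assumption:

```python
from flask import request


def get_jobs_for_service(service_id):
    statuses = request.args.get('statuses', '')
    status_list = [s for s in statuses.split(',') if s]

    query = Job.query.filter_by(service_id=service_id)
    if status_list:
        # no statuses (or an empty string) means no filter, i.e. all statuses
        query = query.filter(Job.job_status.in_(status_list))
    return query.all()
```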
accepts a page parameter to control which page of data is returned
returns additional pagination fields in the response dict:
* page_size: will always be 50, as defined by Config.PAGE_SIZE
* total: the total number of unpaginated records
* links: dict optionally containing prev, next, and last links to
other relevant pagination pages
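A rough sketch of how these fields might be assembled from a Flask-SQLAlchemy pagination object; the helper name and URL building are assumptions, not the project's actual code:

```python
from flask import current_app, url_for


def pagination_fields(pagination, endpoint, **kwargs):
    links = {}
    if pagination.has_prev:
        links['prev'] = url_for(endpoint, page=pagination.prev_num, **kwargs)
    if pagination.has_next:
        links['next'] = url_for(endpoint, page=pagination.next_num, **kwargs)
    if pagination.pages > 1:
        links['last'] = url_for(endpoint, page=pagination.pages, **kwargs)
    return {
        'page_size': current_app.config['PAGE_SIZE'],  # always 50
        'total': pagination.total,
        'links': links,
    }
```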
also cleaned up some test imports
If you schedule a job you might change your mind or circumstances might
change. So you need to be able to cancel it. This commit adds a `POST`
endpoint for individual jobs which sets their status to `cancelled`.
This also means adding a new status of `cancelled`, so there’s a
migration…
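A minimal sketch of such an endpoint; the route shape, blueprint, and model helpers (`Job`, `db`, `serialize`) are assumptions:

```python
from flask import Blueprint, jsonify

job_blueprint = Blueprint('job', __name__)


@job_blueprint.route('/service/<service_id>/job/<job_id>/cancel', methods=['POST'])
def cancel_job(service_id, job_id):
    job = Job.query.filter_by(service_id=service_id, id=job_id).first_or_404()
    job.job_status = 'cancelled'   # new status added by the migration
    db.session.commit()
    return jsonify(data=job.serialize()), 200
```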
- If the job JSON contains a scheduling date then the new `job_status` column is set to "scheduled"
- the date is persisted on the job row in the database
- the job WILL NOT be placed onto the queue of jobs; this is deferred to a later celery beat task
- ensured statuses are not deleted on test runs
- the scheduled date is returned in the API call
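A hedged sketch of that branch; the field name `scheduled_for`, the model/session objects, and the task name are assumptions:

```python
def create_job(service_id, job_json):
    job = Job(service_id=service_id, **job_json)

    if job_json.get('scheduled_for'):
        # persist the date and mark the job as scheduled, but do NOT put it
        # on the queue now - a later celery beat task will pick it up
        job.job_status = 'scheduled'
        db.session.add(job)
        db.session.commit()
    else:
        db.session.add(job)
        db.session.commit()
        process_job.apply_async([str(job.id)], queue='process-job')  # assumed task name

    return job
```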
Merge branch 'add-new-column-to-jobs-for-delayed-sending' into scheduled-delivery-of-jobs
Conflicts:
app/models.py
moved from notifications/rest -> service/rest and job/rest respectively
endpoint routes not affected
removed requires_admin decorator - that should be set by nginx config
rather than in Python code
This version of the client removed the request method, path and body from the encode and decode methods.
The biggest change here is to the unit tests.
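For illustration, a rough sketch of the simpler token this implies, using PyJWT: only issuer and issued-at claims are encoded, with no request method, path or body. The claim names and function signatures are assumptions, not the client's actual code.

```python
import time

import jwt


def create_token(client_id: str, secret: str) -> str:
    claims = {'iss': client_id, 'iat': int(time.time())}
    return jwt.encode(claims, secret, algorithm='HS256')


def decode_token(token: str, secret: str) -> dict:
    return jwt.decode(token, secret, algorithms=['HS256'])
```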