Flake8 Bugbear checks for some extra things that aren’t code style
errors, but are likely to introduce bugs or unexpected behaviour. A
good example is mutable default function arguments, which are shared
between every call to the function, so mutating the value in one place
can unexpectedly change it somewhere else.
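A minimal sketch of the failure mode (illustrative names):

```python
def add_recipient(recipient, recipients=[]):  # one list object shared by every call
    recipients.append(recipient)
    return recipients


print(add_recipient('a@example.com'))  # ['a@example.com']
print(add_recipient('b@example.com'))  # ['a@example.com', 'b@example.com']


def add_recipient_fixed(recipient, recipients=None):
    if recipients is None:
        recipients = []  # a fresh list on every call
    recipients.append(recipient)
    return recipients
```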
This commit enables all the extra warnings provided by Flake8 Bugbear,
except for:
- the line length one (because we already lint for that separately)
- B903 Data class should either be immutable or use `__slots__` because
this seems to false-positive on some of our custom exceptions
- B902 Invalid first argument 'cls' used for instance method, because
some SQLAlchemy decorators (e.g. `declared_attr`) make things that
aren’t formally class methods take a class, not an instance, as their
first argument (see the sketch below)
It disables:
- _B306: BaseException.message is removed in Python 3_ because I think
our exceptions have a custom structure that means the `.message`
attribute is still present
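A minimal sketch of the SQLAlchemy pattern that trips B902 (the mixin
and column are illustrative, not from our codebase):

```python
from sqlalchemy import Column, String
from sqlalchemy.ext.declarative import declared_attr


class AuditMixin:
    @declared_attr
    def created_by(cls):
        # not formally a classmethod, but SQLAlchemy passes the mapped
        # class (not an instance) as the first argument, which trips B902
        return Column(String)
```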
Matches the work done in other repos:
- https://github.com/alphagov/notifications-admin/pull/3172/files
At the moment we display the count of scheduled jobs on the dashboard
by sending all the scheduled jobs to the admin app and letting it work
out the stats.
This is inefficient and, because the get jobs response has a page size
of 50, becomes incorrect if a service schedules more than 50 jobs.
This commit adds a separate endpoint which gives the admin app the stats
it needs directly and correctly.
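A sketch of the kind of aggregate query the new endpoint can run
instead (the DAO name and model fields are assumptions):

```python
from sqlalchemy import func

from app import db  # assumed import paths
from app.models import Job


def dao_get_scheduled_job_stats(service_id):
    # count (and find the soonest of) the scheduled jobs in one query,
    # instead of serializing up to 50 jobs per page to the admin app
    return db.session.query(
        func.count(Job.id),
        func.min(Job.scheduled_for),
    ).filter(
        Job.service_id == service_id,
        Job.job_status == 'scheduled',
    ).one()
```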
If you’ve sent a bunch of jobs from the same contact list then a handy
way to differentiate between them is the date sent, but also the
template name (in effect, the message you sent).
This commit extends the job response to include template name, using the
same pattern as for template type.
Rather than showing all jobs that have been ‘copied’ from a contact list
I think it makes more sense to group them under the contact list. This
way it’s easier to see what messages have been sent to a given group of
contacts over time.
Part of this work means the API needs to return only jobs that have been
created from a given contact list, when asked.
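In practice that’s a filter along these lines (the foreign key name is
an assumption):

```python
from app.models import Job  # assumed import path

jobs = Job.query.filter(Job.contact_list_id == contact_list_id).all()
```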
So that we keep a record of who first uploaded a list, it’s better to
archive a list than to delete it completely.
The list in the database doesn’t contain any recipient info so this
isn’t a change to what data we’re retaining.
This means updating the endpoints that get contact lists to exclude ones
that are archived.
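A minimal sketch of the soft delete and the matching exclusion (the
`archived` flag and the names are assumptions):

```python
from app import db  # assumed import paths
from app.models import ContactList


def dao_archive_contact_list(contact_list):
    # soft delete: the row never held recipient data, so keeping it is safe
    contact_list.archived = True
    db.session.add(contact_list)
    db.session.commit()


def dao_get_contact_lists(service_id):
    return ContactList.query.filter(
        ContactList.service_id == service_id,
        ContactList.archived == False,  # noqa: E712 (SQLAlchemy expression)
    ).all()
```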
This is so we can display letter jobs in a different way on the admin
app (because it doesn’t make sense for them to have failed/delivered
counts like email and text message jobs do).
As elsewhere we use `fields.Method` to avoid serializing the whole
template object.
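A minimal sketch of that `fields.Method` pattern (schema and attribute
names are illustrative):

```python
from marshmallow import Schema, fields


class JobSchema(Schema):
    template_name = fields.Method('get_template_name')
    template_type = fields.Method('get_template_type')

    def get_template_name(self, job):
        # serialize just the name rather than nesting the whole template
        return job.template.name

    def get_template_type(self, job):
        return job.template.template_type
```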
Currently, if you visit the job page and the job is older than the data retention period, the totals on the page are all wrong, because this query gets the counts from the notification table. With this change the data should always be correct, and it also eliminates the need to look at data retention. If the job is new and nothing has been created yet (i.e. the job hasn't started), the page should still display correctly because the outcomes are empty (as expected); once the notifications for the job are created the numbers will start going up.
All our endpoints should check that their params are valid - this is an easy way to do that and is standard for our endpoints.
I reverted the query to just filter by job id.
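For reference, a sketch of the simplified query (model and function
names are assumptions):

```python
from sqlalchemy import func

from app import db  # assumed import paths
from app.models import Notification


def dao_get_notification_outcomes_for_job(job_id):
    # counts per status, filtered only by job id
    return db.session.query(
        func.count(Notification.status),
        Notification.status,
    ).filter(
        Notification.job_id == job_id,
    ).group_by(Notification.status).all()
```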
When we cancel a job, we need to check whether all notifications are
already in the database. So far, we were querying for all
notification objects in the database and counting them in the
admin app, which runs into pagination problems for large jobs,
and could time out for very large jobs.
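i.e. move the counting into the database, something like (the names are
assumptions):

```python
from app.models import Notification  # assumed import path


def dao_get_notification_count_for_job(service_id, job_id):
    # a single COUNT in SQL instead of paging rows to the admin app
    return Notification.query.filter(
        Notification.service_id == service_id,
        Notification.job_id == job_id,
    ).count()
```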
Test `can_letter_job_be_cancelled` closer to the code
Test `dao_cancel_letter_job` closer to the code
Mock out calls in `cancel_letter_job` to test just that method
We previously always read from NotificationHistory to get the
notification status stats for a job. Now, if the job is more than three
days old we read from the ft_notification_status table; otherwise we
read from the notifications table (to keep live updates).
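A sketch of that dispatch (the helper functions are hypothetical):

```python
from datetime import datetime, timedelta


def get_job_notification_status_counts(job):
    if job.created_at < datetime.utcnow() - timedelta(days=3):
        # old jobs: stable rollups from the ft_notification_status table
        return counts_from_ft_notification_status(job.id)  # hypothetical helper
    # recent jobs: live counts from the notifications table
    return counts_from_notifications(job.id)  # hypothetical helper
```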
The `save_email` and `save_sms` tasks were updated previously to take an
optional `sender_id` and to use it if it was available. This commit
now gets the `sender_id` from the S3 metadata if it exists and passes it
through to the tasks which save the job notifications. This means SMS
and emails sent through jobs can use a specified `sender_id` instead of
the default.
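A sketch of reading it from S3 (the bucket, key and metadata key name
are assumptions):

```python
import boto3


def get_sender_id_from_s3(bucket_name, key):
    # user-defined S3 metadata comes back under 'Metadata' in head_object
    s3 = boto3.client('s3')
    metadata = s3.head_object(Bucket=bucket_name, Key=key)['Metadata']
    return metadata.get('sender_id')  # None if not set; the tasks treat it as optional
```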
The concern about performance degrading has been thought through. We do not believe there will be an adverse effect, since the high volume users do not send off messages.
Really, it'll be somewhere between 7 and 8 depending on what time of day
you request it at. But if today is Monday, then seven days ago is last
Tuesday - and we should return data for last Monday as well, so that
users see a full week's worth of data.
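A sketch of the midnight-boundary calculation this implies (the helper
name is an assumption):

```python
from datetime import datetime, timedelta


def midnight_n_days_ago(n):
    # go back n days, then truncate to midnight so the whole day is included
    return (datetime.utcnow() - timedelta(days=n)).replace(
        hour=0, minute=0, second=0, microsecond=0
    )
```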
Also update/clarify the tests to make sure this is being honored for
all the different widgets on the dashboard.
Since the admin app won’t be checking the metadata when it starts a job
now, it’s possible that someone could make a POST request which attempts
to start a job for an invalid file. This commit adds a check to make
sure that can’t happen.
This is more of an extra safety thing, rather than something that the
admin app or a user will see.
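A minimal sketch of the guard (the metadata key and error class are
assumptions):

```python
from app.errors import InvalidRequest  # assumed location of the error class


def check_job_file_is_valid(metadata):
    # refuse to start a job for a file that never passed the upload checks
    if metadata.get('valid') != 'True':
        raise InvalidRequest("File is not valid, can't create job", status_code=400)
```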
All of our uploads now have the metadata about the job set on them in
S3. So this commit moves to using that metadata, if it’s there, instead
of the data in the body of the post request.
The aim of this is to stop the admin app having to post this data, which
means that it won’t have to keep this data in the session while doing
the file upload flow.
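i.e. something along these lines in the endpoint (the helper is
hypothetical):

```python
from flask import request


def get_job_data(job_id):
    # prefer the metadata already stored on the S3 upload;
    # fall back to the posted body if it isn't there
    metadata = get_job_metadata_from_s3(job_id)  # hypothetical helper
    return metadata or request.get_json()
```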
Previously they were using the sample_service fixture under the hood,
but with full permissions added - this works fine, **unless** there's
already a service with the name "sample service" in the database. This
can happen for two reasons:
* A previous test didn't tear down correctly
* This test already invoked the sample_service fixture somehow
If this happens, we just return the existing service, without modifying
its values - values that we might change in tests, such as
research mode or letters permissions.
In the future, we'll have to be vigilant! and aware! and careful! to
not use sample_service if we're doing tests involving letters, since
they create a service with a different name now.
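A simplified sketch of the fixture behaviour that causes this (import
paths and the helper are assumptions):

```python
import pytest

from app.models import Service  # assumed import paths
from tests.app.db import create_service


@pytest.fixture
def sample_service(notify_db_session):
    service = Service.query.filter_by(name='sample service').first()
    if service:
        # an existing service is returned untouched, keeping whatever
        # permissions/research mode values an earlier test left behind
        return service
    return create_service(service_name='sample service')
```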
Replace labels by adding a key kwarg in the model for status.
We still need this as SQLAlchemy attempts to look for `notification_status`
on the model (Notification/NotificationHistory). To achieve true ORM mapping
(map status -> notification_status) we need the key kwarg.
More here:
http://docs.sqlalchemy.org/en/latest/core/metadata.html#sqlalchemy.schema.Column#key
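A minimal sketch of the mapping (types simplified):

```python
from sqlalchemy import Column, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()


class Notification(Base):
    __tablename__ = 'notifications'
    id = Column(String, primary_key=True)
    # the database column stays "notification_status", while the ORM
    # attribute and the Core-level key are both "status"
    status = Column('notification_status', String, key='status')
```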
The structure has been flattened, so I need to create a new endpoint, start using that endpoint, then change the name back.
Added template_id and version to the get job stats by id response.
Since the response has changed, I have created new endpoints so that the
deployments for Admin are more manageable.
Removed print statements from some tests.
* This adds functionality (via an extra request param) to the existing
get all notifications method, allowing us to specify when we want the
API to return in CSV/non-CSV format
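A sketch of the request-param dispatch (the param and method names are
assumptions):

```python
from flask import Blueprint, jsonify, request

notifications_blueprint = Blueprint('notifications', __name__)


@notifications_blueprint.route('/notifications', methods=['GET'])
def get_all_notifications():
    notifications = fetch_notifications_for_service()  # hypothetical lookup
    if request.args.get('format_for_csv') == 'true':  # param name is an assumption
        return jsonify(notifications=[n.serialize_for_csv() for n in notifications])
    return jsonify(notifications=[n.serialize() for n in notifications])
```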