Commit Graph

1287 Commits

Author SHA1 Message Date
Rebecca Law
da8a7a8db1 Avoid key errors by setting the year_start to 2020 or 2021
Remove db.create_service_with_organisation method
Update comment in command
2021-03-30 09:08:04 +01:00
Rebecca Law
7da5abc17b The free SMS allowances are changing for the financial year starting
April 1 2021.

In this PR there is a command to set annual_billing for all active
services with the new defaults.

The new method `set_default_free_allowance_for_service` will also be
called in a PR to follow that will set a service's free allowance to the
default if the organisation for the service is changed.
2021-03-29 13:32:00 +01:00
Rebecca Law
057c4e4568 Quick fix to ensure that billing doesn't fail if the crown is not set
for the service.

The letter rates for crown and non-crown are the same. It would be nice
to remove the need for crown, but for now this is a quick fix.
2021-03-25 08:42:46 +00:00
Pea Tyczynska
4c3d70fd55 Update usage endpoint with billing details for orgs and services 2021-03-19 16:49:48 +00:00
Ben Thorner
c76e789f1e Reduce extra S3 ops when working with letter PDFs
Previously we did some unnecessary work:

- Collate task. This had one S3 request to get a summary of the object,
which was then used in another request to get the full object. We only
need the size of the object, which is included in the summary [1].

- Archive task. This had one S3 request to get a summary of the object,
which was then used to make another request to delete it. We still need
both requests, but we can remove the S3.Object in the middle.

[1]: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/s3.html#objectsummary
2021-03-16 12:53:13 +00:00
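A minimal sketch of the difference, using boto3's resource API (the
bucket name and prefix below are hypothetical):

```python
import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("letters-pdf")  # hypothetical bucket name

# Collate: the ObjectSummary instances returned by a listing already
# include each object's size, so no second GET request is needed.
for summary in bucket.objects.filter(Prefix="2021-03-16/"):
    print(summary.key, summary.size)

# Archive: deleting via the summary skips constructing an S3.Object in
# the middle; the delete request itself is still needed.
for summary in bucket.objects.filter(Prefix="2021-03-16/"):
    summary.delete()
```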
Ben Thorner
ff7eebc90a Simplify deleting old letters
Previously we made a call to S3 to list objects for a letter, even
though we already had the precise key of the single object to hand.
This removes the one usage of "get_s3_bucket_objects" and uses the
filename directly in the call to remove the object.
2021-03-15 17:18:20 +00:00
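A sketch of the simplification, with a hypothetical bucket and key:

```python
import boto3

s3 = boto3.resource("s3")

# Before: list objects with the filename as a prefix, then delete each
# match. After: we already know the exact key, so address it directly.
s3.Object("letters-pdf", "2021-03-15/NOTIFY.REF123.D.2.C.20210315120000.PDF").delete()
```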
Leo Hemsted
6784ae62a6 Raise Exception if letter PDF not in S3
Previously, the function would just return a presumed filename. Now that
it actually checks S3, if the file doesn't exist it'll raise an
exception. By default that's a StopIteration at the end of the bucket
iterator, which isn't ideal, as this will get suppressed if the function
is called within a generator loop further up.

There are a couple of places where we expect the file may not exist, so
we define a custom exception to rescue specifically here. I did consider
subclassing boto's ClientError, but this wasn't straightforward as the
constructor expects to know the operation that failed, which for me is a
signal that it's not an appropriate (re-)use of the class.
2021-03-15 17:18:11 +00:00
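A minimal sketch of the pattern described above (the exception and
function names are hypothetical):

```python
import boto3


class LetterPDFNotFound(Exception):
    """Raised when no object exists in S3 for a letter."""


def get_letter_pdf_object(bucket_name, prefix):
    bucket = boto3.resource("s3").Bucket(bucket_name)
    try:
        # next() on an empty listing raises StopIteration, which a
        # surrounding generator loop would otherwise swallow silently
        return next(iter(bucket.objects.filter(Prefix=prefix)))
    except StopIteration:
        raise LetterPDFNotFound(
            f"No letter PDF in {bucket_name} with prefix {prefix}"
        )
```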
Ben Thorner
b43a367d5f Relax lookup of letter PDFs in S3 buckets
Previously we generated the filename we expected a letter PDF to be
stored at in S3, and used that to retrieve it. However, the generated
filename can change over the course of a notification's lifetime e.g.
if the service changes from crown ('.C.') to non-crown ('.N.').

The prefix of the filename is stable: it's based on properties of the
notification - reference and creation date - that don't change. This commit
changes the way we interact with letter PDFs in S3:

- Uploading uses the original method to generate the full file name.
The method is renamed to 'generate_' to distinguish it from the new one.

- Downloading uses a new 'find_' method to get the filename using just
its prefix, which makes it agnostic to changes in the filename suffix.

Making this change helps to decouple our code from the requirements DVLA
have on the filenames. While it means more traffic to S3, we rely on S3
in any case to download the files. From experience, we know S3 is highly
reliable and performant, so don't anticipate any issues.

In the tests we favour using moto to mock S3, so that the behaviour is
realistic. There are a couple of places where we just mock the method,
since what it returns isn't important for the test.

Note that, since the new method requires a notification object, we need
to change a query in one place, the columns of which were only selected
to appease the original method to generate a filename.
2021-03-15 13:55:44 +00:00
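A sketch of the new 'find_' lookup, assuming a hypothetical prefix
format built from the notification's stable properties:

```python
import boto3


def find_letter_pdf_filename(notification):
    # only the stable part of the filename - reference plus creation
    # date - is used, so a suffix change such as crown '.C.' becoming
    # non-crown '.N.' no longer breaks the lookup
    prefix = "{date}/NOTIFY.{ref}".format(
        date=notification.created_at.strftime("%Y-%m-%d"),
        ref=notification.reference,
    )
    bucket = boto3.resource("s3").Bucket("letters-pdf")  # hypothetical
    return next(iter(bucket.objects.filter(Prefix=prefix))).key
```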
David McDonald
41d95378ea Remove everything for the performance platform
We will no longer send them any stats, so we don't need:
- the code to work out the nightly stats
- the performance platform client
- any configuration for the client
- any nightly tasks that kick off the sending of the stats

We will require a change in Cronitor: as this task will no longer run,
we need to delete the Cronitor check.
2021-03-15 12:04:53 +00:00
Leo Hemsted
ebd4eda8bd remove duplicate dao invite fns and improve naming 2021-03-12 13:56:05 +00:00
Ben Thorner
a91fde2fda Run auto-correct on app/ and tests/ 2021-03-12 11:45:45 +00:00
David McDonald
a2cc0df5a7 Merge pull request #3167 from alphagov/broadcast_services_history
Add service versioning to broadcast account type change
2021-03-11 17:07:24 +00:00
Rebecca Law
19f7a6ce38 Refactor method for deciding the failure type 2021-03-10 14:39:55 +00:00
Rebecca Law
a7a504a599 Merge pull request #3173 from alphagov/performance-platform-endpoints
Add an endpoint to return all the data required for the performance platform page
2021-03-10 13:27:08 +00:00
Rebecca Law
11d10d5293 Rename to performance_dashboard
Fix totals to return totals for all time rather than for the date range.
Add more test data.
2021-03-10 13:16:25 +00:00
David McDonald
8cf32d6f22 Add service versioning to broadcast account type change
We are using the `set_broadcast_service_type` route to make changes to
service objects. However, we had forgotten to add the `version_class`
decorator to it. Adding it means that a service's change from training
to live mode will also be recorded in the services_history table for
free. Whilst not essential, this easy change makes things more
consistent with how we update other services.
2021-03-08 14:09:24 +00:00
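A sketch of the fix, assuming the decorator is applied the way other dao
functions apply it (import paths and the signature are assumptions):

```python
from app import db
from app.dao.dao_utils import transactional, version_class
from app.models import Service


@transactional
@version_class(Service)
def dao_set_broadcast_service_type(service, broadcast_channel):
    # with version_class(Service) applied, this update is also written
    # to the services_history table for free
    service.broadcast_channel = broadcast_channel
    db.session.add(service)
```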
David McDonald
6b535fe946 Merge pull request #3166 from alphagov/email-auth-broadcast-bug
Email auth broadcast bug
2021-03-05 09:59:59 +00:00
Rebecca Law
b06850e611 Add an endpoint to return all the data required for the performance
platform page.
2021-03-05 09:59:03 +00:00
David McDonald
0ce539704e Fix bug with removing email auth for broadcast service
We were accidentally removing the ability for a service to do email auth
if it was a broadcast service with email auth. This fixes it.

Note, it might be up for debate later whether we let broadcast services
use email auth (I think we should) so this might change in time, but we
will fix this bug regardless.

Note, it's worth glancing at `SERVICE_PERMISSION_TYPES`, which contains
a list of permissions a service might have, to make sure I haven't
missed any others. The one that looks potentially dodgy is the
`EDIT_FOLDER_PERMISSIONS` permission, but I can't see it being used
anywhere in the API or the admin app, so I think it is likely now
defunct and a user-level permission, meaning we don't need to worry
about it.
2021-03-03 18:34:24 +00:00
Rebecca Law
0849070eca Add created_at and updated_at columns to ft_processing_time 2021-02-26 07:49:49 +00:00
Rebecca Law
21edf7bfdd Persist the processing time statistics to the database.
The performance platform is going away soon. The only statistic we do
not have in our database in a form we can query efficiently is the
processing time: any queries on notification_history are too inefficient
to use on a web page.

For each whole day we store the total number of normal/team emails and
text messages, plus the number of those that went from created to
sending within 10 seconds. From these two counts we can easily calculate
the percentage of messages that were marked as sending within 10
seconds.
2021-02-26 07:49:49 +00:00
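A sketch of the derived percentage, assuming hypothetical column names
on ft_processing_time:

```python
def percentage_under_10_seconds(row):
    # ft_processing_time stores two counts per whole day: all normal/team
    # emails and text messages, and those that went from created to
    # sending within 10 seconds
    return 100 * row.messages_within_10_secs / row.messages_total
```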
Pea Tyczynska
e0c73ac342 Send daily email with letter and sheet volumes to DVLA 2021-02-23 15:13:19 +00:00
Pea Tyczynska
c8ffebcce8 Query to get letter and sheet volumes
So we can send daily email with these volumes to DVLA.
2021-02-23 15:13:18 +00:00
David McDonald
abb3b3307c Fix flake8 2021-02-16 10:31:12 +00:00
David McDonald
6fcda6debb Make set_as_broadcast_service use a single DB commit
We don't want things in a half state if there is an error during the
method. Therefore, we move it all into a single function that is wrapped
in a transaction.

Note, we copy the approach of
https://github.com/alphagov/notifications-api/blob/master/app/dao/services_dao.py#L293
by having a single new dao function that does all the DB work.
2021-02-16 10:31:11 +00:00
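A minimal sketch of the single-commit pattern, assuming a
Flask-SQLAlchemy session like the one the dao modules use (the names
below are hypothetical):

```python
from functools import wraps

from app import db  # Flask-SQLAlchemy session


def transactional(func):
    # all DB work happens inside one function and is committed once, so
    # a failure part-way through rolls everything back
    @wraps(func)
    def wrapper(*args, **kwargs):
        try:
            result = func(*args, **kwargs)
            db.session.commit()
            return result
        except Exception:
            db.session.rollback()
            raise
    return wrapper


@transactional
def dao_set_as_broadcast_service(service, channel, provider):
    # update the service and its provider restriction together rather
    # than in separate commits
    service.broadcast_channel = channel
    service.allowed_broadcast_provider = provider
    db.session.add(service)
```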
David McDonald
4f7afa3fbe Set provider restriction 2021-02-16 10:31:08 +00:00
David McDonald
3b5d86c854 Add endpoint to set broadcast service channel 2021-02-16 10:31:01 +00:00
Leo Hemsted
4f89be6944 Revert "Merge pull request #3125 from alphagov/revert-retry"
This reverts commit 6b9a50beff, reversing
changes made to 33f93dfea2.
2021-02-09 17:01:04 +00:00
Leo Hemsted
bee0059e53 Revert "Merge pull request #3101 from alphagov/retry-broadcasts"
This reverts commit 1bd99c779d, reversing
changes made to d390eb2cac.
2021-02-08 11:02:34 +00:00
Leo Hemsted
bbae209200 check provider message status etc when sending rather than when retrying
Previously, we decided at retry time whether to retry or not, which
meant future events wouldn't have context of what the task was doing.
We'd run into issues with not knowing what references to include when
updating/cancelling in future events.

Instead of deciding whether to retry or not, always retry. Then, when
any event sends, regardless of whether it is a first attempt or a retry,
check the status of previous events for that broadcast message. There
are a few things that will mean we don't send:

* If the finishes_at time has already elapsed (ie: we have been trying
  to resend this message and haven't had any luck and now the data is
  obsolete)
* A previous event has no provider message (this means that we never
  picked the previous event off the queue for some reason)
* A previous event has a provider message that has anything other than
  an ack response. This includes sending (the old message is currently
  being sent), and technical-failure/returned-error (the old message is
  currently in the retry loop, having experienced issues).
2021-02-03 18:11:52 +00:00
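A sketch of those checks, with hypothetical names and statuses:

```python
from datetime import datetime, timezone


def safe_to_send(broadcast_event, provider):
    # mirror the list above: expired data, never-sent events, and
    # anything other than an ack all block sending
    if broadcast_event.transmitted_finishes_at < datetime.now(timezone.utc):
        return False  # finishes_at has elapsed; the data is now obsolete

    for event in broadcast_event.get_earlier_events():
        message = event.get_provider_message(provider)
        if message is None:
            return False  # earlier event never picked off the queue
        if message.status != "returned-ack":
            return False  # still sending, or erroring in the retry loop
    return True
```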
Leo Hemsted
96a0935d1c update broadcast provider message status on success/error
so we can distinguish erroring messages that are currently retrying
from those that sent successfully.
2021-02-03 18:03:16 +00:00
Rebecca Law
e05e9bb5e0 Change the sort order for the organisation usage page.
Ensure the archived services are at the bottom of the list. The organisation trial mode page already sorts the archived services to the bottom.
2021-01-12 09:44:35 +00:00
Chris Hill-Scott
3b0b96834d Do extra code style checks with flake8-bugbear
Flake8 Bugbear checks for some extra things that aren’t code style
errors, but are likely to introduce bugs or unexpected behaviour. A
good example is having mutable default function arguments, which are
shared between every call to the function, so mutating a value in one
place can unexpectedly cause it to change in another.

This commit enables all the extra warnings provided by Flake8 Bugbear,
except for:
- the line length one (because we already lint for that separately)
- B903 Data class should either be immutable or use `__slots__` because
  this seems to false-positive on some of our custom exceptions
- B902 Invalid first argument 'cls' used for instance method because
  some SQLAlchemy decorators (eg `declared_attr`) make things that
  aren’t formally class methods take a class not an instance as their
  first argument

It disables:
- _B306: BaseException.message is removed in Python 3_ because I think
  our exceptions have a custom structure that means the `.message`
  attribute is still present

Matches the work done in other repos:
- https://github.com/alphagov/notifications-admin/pull/3172/files
2020-12-22 16:26:45 +00:00
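For example, the mutable-default pitfall (bugbear's B006) looks like
this:

```python
def add_recipient(recipient, recipients=[]):  # flagged: mutable default
    recipients.append(recipient)
    return recipients


add_recipient("a")  # ['a']
add_recipient("b")  # ['a', 'b'] - the default list is shared between calls


def add_recipient_fixed(recipient, recipients=None):
    if recipients is None:  # the conventional fix
        recipients = []
    recipients.append(recipient)
    return recipients
```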
David McDonald
e35ea57ba2 Do not delete letters if not in final state
A few weeks ago, we deleted some pdf letters that had reached their
retention period. However, these letters were in the 'created' state so
it's very arguable that we should not have deleted them because we were
expecting to resend them and were unable to. Part of the reason for this
is that we set the letters' status back to `created` but did not nullify
the `sent_at` timestamp, meaning the check on
ebb43082d5/app/dao/notifications_dao.py (L346)
did not catch it. Regardless of that check, which controls whether the
files were removed from S3, they were also archived into the
`notification_history` table, as happens by default.

This commit changes our code so that letters that are not in
their final state do not go through our retention process. This could
mean they violate their retention policy but that is likely the lesser
of two evils (the other being we delete them and are unable to resend
them).

Note, `sending` letters have been included in those not to be removed
because there is a risk that we give the letter to DVLA and put it in
`sending`, but then they come back to us later telling us they've had
problems and require us to resend.
2020-12-16 10:50:11 +00:00
David McDonald
47e146f010 Move variable out of loop that didn't need to be 2020-12-16 10:40:30 +00:00
David McDonald
1bf9b29905 Document behaviour of s3 letter deleting
The behaviour was a bit opaque, so I have added tests around it to make
it clear what it is doing and why. No functionality has changed.
2020-12-16 10:39:31 +00:00
Pea Tyczynska
def7a16765 Establish relation between provider message and message number
this is so we can access broadcast_provider_message_number from the
BroadcastProviderMessage object
2020-12-09 11:41:22 +00:00
Pea Tyczynska
2952b70930 Only create sequential numbers for Vodafone messages 2020-12-07 13:13:13 +00:00
Pea Tyczynska
e95dc9450e Include message number in send_broadcast_provider_message 2020-12-07 13:13:12 +00:00
Pea Tyczynska
e6824dc3ff Broadcast provider message created with a sequential number
This is for the IBAG format (similar to CAP format, but proprietary)
used in the XMLs that we exchange with broadcast providers (specifically
Vodafone).
2020-12-07 13:13:10 +00:00
Leo Hemsted
72f8a15d4f respect service broadcast provider restrictions when sending 2020-12-03 13:39:09 +00:00
Chris Hill-Scott
682cbc5130 Don’t return jobs sent from contact lists
Now that we’re grouping jobs sent from contact lists within their
parent, they shouldn’t also be listed on the jobs page at the top level.

The jobs page uses the uploads API, not the jobs API, so this commit
makes sure that filtering is happening in the proper place.
2020-12-01 15:26:36 +00:00
Chris Hill-Scott
10e1fe6902 Revert "Don’t return jobs sent from contact lists"
This reverts commit 061c0a0050.
2020-12-01 15:18:32 +00:00
Chris Hill-Scott
061c0a0050 Don’t return jobs sent from contact lists
Now that we’re grouping jobs sent from contact lists within their
parent, they shouldn’t also be listed on the jobs page at the top level.
2020-12-01 11:56:34 +00:00
Leo Hemsted
f12c949ae9 create broadcast_provider_message and use id from that instead
(instead of using the id from broadcast_event)

we need every XML blob we send to have a different ID. If we're sending
different XML blobs for each provider, then each one should have a
different identifier. So, instead of taking the identifier from the
broadcast_event, take it from the broadcast_provider_message instead.

Note: We're still going to the broadcast_event for most fields, to
ensure they stay consistent between different providers. The last thing
we want is for different phone networks to get different content.
2020-11-19 15:50:37 +00:00
Leo Hemsted
3bc3ed88b3 use yield_per instead of limit
Limit means we only return 50k letters; if there are more than that for
a service, we'll skip them and they won't be picked up until the next
day.

If you remove the limit, SQLAlchemy prefetches query results so it can
build up ORM results, for example collapsing joined rows into single
objects with children. SQLAlchemy streams the data into a buffer, and
normally will still prefetch the entire result set so it can ensure
integrity of the session (so that if you modify one result that is
duplicated further down in the results, both rows are updated in the
session, for example). We don't care about that, but we do care about
preventing the result set taking up too much memory. We can use
`yield_per` to yield from SQLAlchemy to the iterator (in this case the
`for letter in letters_awaiting_sending` loop in letters_pdf_tasks.py) -
this means every time we hit 10,000 rows, we go back to the database to
get the next 10k. This way, we only ever need 10k rows in memory at a
time.

This has some caveats, mostly around how we handle the data the query
returns. They're a bit hard to parse but I'm pretty sure the notable
limitations are:

* It's dangerous to modify ORM objects returned by yield_per queries
* It's dangerous to join in a yield_per query if you think there will be
  more than one row per item (for example, if you join from notification
  to service, there'll be multiple result rows containing the same
  service, and if these are split over different yield chunks, then we
  may experience undefined behaviour).

These two limitations are focused around there being no guarantee of
having one unique row per item.

For more reading:
https://docs.sqlalchemy.org/en/13/orm/query.html?highlight=yield_per#sqlalchemy.orm.query.Query.yield_per
https://www.mail-archive.com/sqlalchemy@googlegroups.com/msg12443.html
2020-10-26 13:01:34 +00:00
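A minimal sketch of the pattern (model and import paths assumed):

```python
from app import db
from app.models import Notification

letters_awaiting_sending = (
    db.session.query(Notification.id, Notification.reference)
    .filter(Notification.status == "created")
    .yield_per(10000)
)

for letter in letters_awaiting_sending:
    # rows arrive from the database 10,000 at a time, so only one chunk
    # of the result set is held in memory at once
    print(letter.id, letter.reference)
```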
Leo Hemsted
ed182c2a22 return just the columns we need for collating letters
Previously we were returning the entire ORM object. Returning columns
has a couple of benefits:

* Means we can join on to services there and then, avoiding second
  queries to get the crown status of the service later in the collate
  flow.
* Massively reduces the amount of data we return - particularly free
  text fields like personalisation that could be potentially quite big.
  5 columns rather than 26 columns.
* Minor thing, but it will skip some CPU cycles, as SQLAlchemy will no
  longer construct an ORM object and try to keep track of changes. We
  know this function doesn't change any of the values to persist them
  back, so this is an unnecessary step from SQLAlchemy.

Disadvantages are:

* The dao_get_letters_to_be_printed return interface is now much more
  tightly coupled to the get_key_and_size_of_letters_to_be_sent_to_print
  function that calls it.
2020-10-23 20:01:18 +01:00
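A sketch of querying just the columns, joining to Service in the same
query (column and model names assumed):

```python
from app import db
from app.models import Notification, Service

rows = (
    db.session.query(
        Notification.id,
        Notification.created_at,
        Notification.reference,
        Notification.billable_units,
        Service.crown,  # joined here and now; no second query later
    )
    .join(Service, Notification.service_id == Service.id)
    .filter(Notification.status == "created")
    .all()
)
# each row is a lightweight named tuple rather than a tracked ORM object
```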
Leo Hemsted
f8fbaba77d limit collate task to 50,000 messages
we've seen issues where tasks mysteriously hang and do not process
large volumes of letters - in this case >150k letters in created state.

To try to get at least some letters out of the door, limit the query to
only return 50k letters per postage type. We may need to run the task
multiple times, or letters may get delayed until the next day, when
they'd be picked up (provided there's enough capacity then). The task
should only be re-run AFTER the FTP tasks have all finished and updated
the letters to sending, or we run the risk of sending the same letters
twice.

For context, the largest letter day we've ever sent is ~65k, in March of
this year.
2020-10-23 15:28:34 +01:00
Chris Hill-Scott
88cd92b946 Revert "Remove the upload letters permission" 2020-10-23 15:14:37 +01:00
Chris Hill-Scott
e7c1f7c60e Merge pull request #3001 from alphagov/remove-upload-letters-permission
Remove the upload letters permission
2020-10-23 14:27:48 +01:00