This Celery task was not decorated with the cronitor decorator, so it
never checked in with cronitor.
Adding the decorator will ensure this task is monitored.
The requisite cronitor key is in the credentials repository already.
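A minimal sketch of the fix, assuming the decorator lives at app.cronitor as for our other tasks; the task name here is hypothetical:

```python
from app import notify_celery
from app.cronitor import cronitor


@notify_celery.task(name="example-nightly-task")
@cronitor("example-nightly-task")  # checks in with cronitor when the task runs
def example_nightly_task():
    ...  # existing task body unchanged
```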
Signed-off-by: Toby Lorne <toby.lornewelch-richards@digital.cabinet-office.gov.uk>
Added a command to get more detailed information about the letters
listed in the zip files we have sent, which are stored on S3. The
command takes one or more file paths as arguments and creates a CSV
file with a row for each letter.
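A rough sketch of the shape of such a command, using plain click; the command name and the get_letter_references helper are hypothetical:

```python
import csv

import click


def get_letter_references(zip_file_path):
    """Hypothetical helper: list the letter references inside a zip file on S3."""
    raise NotImplementedError


@click.command("letter-details")
@click.argument("file_paths", nargs=-1)
def letter_details(file_paths):
    """Write a CSV with one row per letter found in the given zip files."""
    with open("letter-details.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["zip_file", "letter_reference"])
        for path in file_paths:
            for reference in get_letter_references(path):
                writer.writerow([path, reference])
```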
We need this in the admin app while we still have pages that:
- talk about the data sharing and financial agreement
- but aren’t within a service (so can’t look at the service’s
organisation)
This is a GET, but it deliberately won't work if you pass it an email
address, so that we don't put personally identifying information in our
logs.
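A sketch of the idea, with a hypothetical route and placeholder response body:

```python
from flask import Blueprint, abort, jsonify

agreement_blueprint = Blueprint("agreement", __name__)


@agreement_blueprint.route("/agreement/<domain>", methods=["GET"])
def get_agreement_info_by_domain(domain):
    # reject anything that looks like a full email address, so that
    # personally identifying information never ends up in our logs
    if "@" in domain:
        abort(400)
    return jsonify(domain=domain)  # placeholder response body
```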
When we send a zip file of letters to DVLA we expect them to send back an acknowledgement of those files.
Previously they named the files like NOTIFY.20180202091254.ACK.TXT and the contents would contain the name of the zip file we sent with a date of when they got it.
They have updated this format to mirror the format of the zip file, because there was an instance where they sent 2 files with the same name and the second overwrote the first.
Since the acknowledgement file name now matches our zip file name, there is no need to fetch the file from S3; we can just compare file names.
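A minimal sketch of the new check; the function name is hypothetical and the file names are illustrative:

```python
def zip_was_acknowledged(zip_filename, ack_filenames):
    # e.g. a zip named "NOTIFY.20180202091254.ZIP" is now acknowledged
    # by "NOTIFY.20180202091254.ACK.TXT", so the names can be matched
    # without downloading the acknowledgement contents from S3
    expected_ack = zip_filename.replace(".ZIP", ".ACK.TXT")
    return expected_ack in ack_filenames
```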
The create_nightly_notification_status task runs at 00:30 UK time.
In summer, however, datetime.today() returns the wrong date, because
the server (which runs on UTC) runs the task at 23:30 and populates
the wrong row in the table.
Fix this to use timezone-aware functions; see the sketch after the list below.
* call variables unambiguous things like `start_time` or `bst_date` to
  reduce the risk of passing in the wrong thing
* simplify the count_dict object - remove the nested dict and
  start_date fields, which were superfluous
* use static datetime objects in tests rather than calculating them
each time
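A sketch of the tz-aware approach, assuming convert_utc_to_bst is importable from notifications_utils.timezones; the helper name is hypothetical:

```python
from datetime import datetime, timedelta

from notifications_utils.timezones import convert_utc_to_bst


def bst_date_for_run(utc_now=None):
    # convert the server's UTC time to UK time first, so a 23:30 UTC run
    # in summer counts as 00:30 BST the next day and we populate the row
    # for the day that just ended in UK time
    utc_now = utc_now or datetime.utcnow()
    return convert_utc_to_bst(utc_now).date() - timedelta(days=1)
```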
The pp client converts UTC to BST using the convert_utc_to_bst notify util.
This requires a datetime, not a date, so pass it a datetime, and add an
assertion to an existing test.
I didn't want to use the midnight conversion util in the test.
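A sketch of the kind of assertion added, using static datetimes (the real test differs):

```python
from datetime import date, datetime

from notifications_utils.timezones import convert_utc_to_bst


def test_converts_a_datetime_not_a_date():
    # 23:30 UTC on 1 June is 00:30 BST on 2 June
    assert convert_utc_to_bst(datetime(2018, 6, 1, 23, 30)).date() == date(2018, 6, 2)
```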
Signed-off-by: Toby Lorne <toby.lornewelch-richards@digital.cabinet-office.gov.uk>
Changed the query that gets the performance platform stats to use ft_notification_status. The date used in the query needed to be a date, not a datetime, so that the equality comparison worked.
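A fragment showing the shape of the fix, assuming the FactNotificationStatus model that backs ft_notification_status:

```python
from datetime import datetime

process_day = datetime(2018, 6, 1, 23, 30)

# the bst_date column stores a date, so compare against a date,
# not the datetime itself, for the equality to match
stats = FactNotificationStatus.query.filter(
    FactNotificationStatus.bst_date == process_day.date()
).all()
```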
Query args from GET requests are put into our logs, and we should avoid
personal data (eg phone numbers) in them. Remove this old GET now that
it's not used by the admin app anymore.
The previous query included all notifications regardless of notification_status. I don't think that's right: it shouldn't include things like technical-failure or validation-failed. Thoughts?
I also need to remove the query that's no longer being used.
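A sketch of the kind of restriction being proposed, assuming the Notification model; which statuses to exclude is exactly the open question above:

```python
EXCLUDED_STATUSES = {"technical-failure", "validation-failed"}

notifications = Notification.query.filter(
    Notification.status.notin_(EXCLUDED_STATUSES)
)
```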
We've seen the SQLAlchemy "could not acquire connection" error in
production during heavy traffic. Since we have more gunicorn eventlet
workers than DB connections available, some workers need to wait for a
DB connection to become available before they can proceed with the
request. There's a timeout on how long a worker will wait, and if that
timeout is exceeded the above exception is raised.
Currently we're using at most 1000 of the 5000 max DB connections, with
40% peak CPU usage on the DB instance and an average of 60% CPU on
API instances during heavy load. The number of DB connections is
proportionally similar in preview and staging.
This slightly increases the number of max DB connections per API
instance. This should improve our utilization of API instances by
increasing the number of workers that can communicate with the DB
concurrently while staying well within the max DB connections limit.
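A sketch of the kind of config change described; the numbers here are illustrative, not the real values:

```python
# each API instance holds at most this many pooled DB connections;
# raising it lets more eventlet workers talk to the DB concurrently
SQLALCHEMY_POOL_SIZE = 20

# seconds a worker waits for a connection before SQLAlchemy raises
# the "could not acquire connection" error
SQLALCHEMY_POOL_TIMEOUT = 30
```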
Also, it should default to the last 7 days, not the last 6 days. And
change count_inbound_sms to take the number of days as an argument, so
that it's more explicit at the endpoint that we only return 7 days
regardless of your service's data retention.
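A sketch of the signature change, with hypothetical dao and model names:

```python
from datetime import datetime, timedelta


def count_inbound_sms_for_service(service_id, limit_days=7):
    # the endpoint passes limit_days explicitly, so the 7-day window is
    # visible at the call site rather than buried in the dao
    start = datetime.utcnow() - timedelta(days=limit_days)
    return InboundSms.query.filter(
        InboundSms.service_id == service_id,
        InboundSms.created_at >= start,
    ).count()
```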
The previous migration didn’t work because the `created_by_id` column
in services references the user who created the _version_ of the
service, not who created the service originally.
This commit runs another migration to wipe that data and replace it
using an operation that looks at the first version of each service in
the history table, which references the user who actually created the
service.
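A sketch of the corrective migration, assuming table and column names that match the description above:

```python
from alembic import op


def upgrade():
    # version 1 in the history table records who actually created the service
    op.execute("""
        UPDATE services
        SET created_by_id = services_history.created_by_id
        FROM services_history
        WHERE services.id = services_history.id
        AND services_history.version = 1
    """)
```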