- Reverted the Gunicorn worker count to 5 (this should be investigated
further on a well-baselined system for comparison)
- Enabled Redis
- Increased the rate limit to 400 req/sec, as early testing yesterday
showed 450+ req/sec being achieved
- Disabled Redis again, as there is currently a connection limit of 256,
which could slow down requests if all the connections are in use
- Added statsd timers to the methods in the POST path to help spot any
bottlenecks
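A rough sketch of the kind of per-method statsd timing that was added. The names here are illustrative, and it assumes the plain `statsd` PyPI client rather than whatever helper the app actually uses:

```python
import functools
import time

from statsd import StatsClient  # the `statsd` PyPI package

statsd_client = StatsClient(host="localhost", port=8125, prefix="notify-api")


def record_timing(metric_name):
    """Report a function's wall-clock time to statsd as a timer metric."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                return func(*args, **kwargs)
            finally:
                # statsd timers are expressed in milliseconds
                statsd_client.timing(metric_name, (time.monotonic() - start) * 1000)
        return wrapper
    return decorator


@record_timing("post.persist_notification")
def persist_notification(payload):
    ...  # existing business logic being measured
```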
Checks the authentication header value on inbound SMS requests from
Firetext against a list of allowed API keys set in the application
config.
At the moment we're only logging the attempts, without aborting the
requests. Once this is rolled out to production and we've checked the
logs, we'll switch on the aborts and add tests for the 401 and 403
responses.
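A minimal sketch of the check, assuming a Flask request context; the header scheme and the `FIRETEXT_API_KEYS` config name are illustrative:

```python
from flask import abort, current_app, request


def check_firetext_api_key():
    """Log-only for now; swap the log for abort() once production logs look clean."""
    auth_header = request.headers.get("Authorization", "")
    # e.g. "Basic <key>" -- the exact scheme depends on what Firetext sends
    supplied_key = auth_header.split(" ", 1)[-1]
    if supplied_key not in current_app.config.get("FIRETEXT_API_KEYS", []):
        current_app.logger.warning("Inbound SMS request with unrecognised API key")
        # abort(403)  # to be switched on after we've checked the logs
```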
The template name should be returned in the response and the user will
pick a year, so this adds those two features to the
notifications/templates_usage/monthly endpoint, along with tests for the
new behaviour.
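Roughly how the endpoint might look with both additions; the route shape, dao call and field names are hypothetical:

```python
from flask import Blueprint, jsonify, request

template_usage_blueprint = Blueprint("template_usage", __name__)


@template_usage_blueprint.route(
    "/service/<uuid:service_id>/notifications/templates_usage/monthly"
)
def get_monthly_template_usage(service_id):
    # the user picks a year via a query-string parameter
    year = request.args.get("year", type=int)
    rows = dao_get_template_usage_by_month(service_id, year)  # hypothetical dao
    return jsonify(stats=[
        {
            "template_id": str(row.template_id),
            "name": row.template_name,  # template name now in the response
            "month": row.month,
            "count": row.count,
        }
        for row in rows
    ])
```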
Added a new endpoint that combines the usage data in the stats table
with the data in the notifications table, instead of reading everything
from the notification_history table. This should speed up the query
times and improve page performance.
- Made the stats create-and-update function transactional, as it
actually wasn't committing the data to the table
- Added the read from the stats table
- Added a method to combine the two results (see the sketch after this
list)
- Added the endpoint
- Modified the services_dao to return an int instead of a datetime to
make it easier to use, and removed the BST conversion on the year, as
it is not relevant for a year
- Improved the tests so there is less logic in them, ordering the
results to reduce the reliance on the template id
- Renamed variable in stats_template_usage_by_month_dao.py to make it
consistent with the method
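A sketch of the combining method mentioned above, assuming both sources yield (template_id, count) pairs:

```python
from collections import Counter


def combine_template_usage(stats_rows, delta_rows):
    """Merge per-template counts from the daily stats table with the
    delta calculated from the notifications table since midnight."""
    totals = Counter()
    for template_id, count in stats_rows:
        totals[template_id] += count
    for template_id, count in delta_rows:
        totals[template_id] += count
    # sorted so tests get a deterministic order, not one tied to template ids
    return sorted(totals.items())
```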
Currently some pages are timing out due to the time it takes to perform
database queries. This is an attempt to improve the performance by
performing the query against the notification_history table once a day,
using the notifications table for the delta between midnight and when
the page is requested, and combining the results.
- Added Celery task for doing the work
- Added a dao to handle the insert and update of the stats table
- Updated tests to test the new functionality
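A rough sketch of the daily task, with hypothetical dao names:

```python
from datetime import date, timedelta

from celery import shared_task


@shared_task(name="create-nightly-notification-stats")
def create_nightly_notification_stats():
    """Roll yesterday's notification_history rows into the stats table so
    page queries only need the small delta from the notifications table."""
    day = date.today() - timedelta(days=1)
    rows = dao_fetch_usage_for_day(day)  # hypothetical: query notification_history
    dao_upsert_daily_stats(day, rows)    # hypothetical: transactional insert/update
```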
- Added a new task to process incomplete jobs
- Added tests for the new method
- Updated the check-for-incomplete-jobs method to start the new task
This will effectively resume tasks which were, for some reason,
interrupted whilst being processed. In some cases only part of the CSV
was processed; this will find the right place in the CSV and continue
processing from that point.
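A sketch of the resume logic, under the assumption that counting a job's existing notifications tells us how far the CSV got (all helper names hypothetical):

```python
import csv
import io


def process_incomplete_job(job_id):
    """Resume a job interrupted mid-run by skipping CSV rows that were
    already turned into notifications."""
    already_processed = dao_count_notifications_for_job(job_id)  # hypothetical
    csv_contents = get_job_csv_from_s3(job_id)                   # hypothetical
    for i, row in enumerate(csv.DictReader(io.StringIO(csv_contents))):
        if i < already_processed:
            continue  # handled before the interruption
        send_notification_for_row(job_id, row)                   # hypothetical
```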
1. No longer create jobs when creating letters from the API 🎉
2. Bulk update notifications based on their notification references
after we send them to DVLA - either as success or as error
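The bulk update might look something like this (the model and session imports are assumed, not the app's actual ones):

```python
from app import db                    # assumed application session
from app.models import Notification  # assumed model with a `reference` column


def update_notifications_by_reference(references, status):
    """Set the status of every notification DVLA acknowledged in one statement."""
    updated_count = Notification.query.filter(
        Notification.reference.in_(references)
    ).update(
        {Notification.status: status},
        synchronize_session=False,  # bulk update; no in-memory objects to sync
    )
    db.session.commit()
    return updated_count
```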
so that it doesn't appear generic when it's actually specific to
sending the daily notification totals. To do this, the business logic
has been split out into a separate performance_platform directory,
making the performance_platform_client incredibly thin - all it handles
is adding ids to payloads and sending stats.
Also, some changes to the config (not all done yet), since there is one
token per endpoint, not one for the whole platform as we'd previously
coded.
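A sketch of how thin the client becomes, with the per-endpoint tokens kept in config; the `_id` derivation and field names here are illustrative:

```python
import base64
import hashlib

import requests


class PerformancePlatformClient:
    """Adds an _id to each payload and POSTs it with the token for that
    dataset -- one token per endpoint, not one for the whole platform."""

    def __init__(self, base_url, tokens_by_dataset):
        self.base_url = base_url
        self.tokens = tokens_by_dataset  # e.g. {"notifications": "token-a"}

    @staticmethod
    def add_id(payload):
        # deterministic id derived from the payload's natural-key fields
        key = "{_timestamp}{service}{channel}".format(**payload).encode()
        payload["_id"] = base64.b64encode(hashlib.md5(key).digest()).decode()
        return payload

    def send_stats(self, dataset, payload):
        requests.post(
            "{}/{}".format(self.base_url, dataset),
            json=self.add_id(payload),
            headers={"Authorization": "Bearer {}".format(self.tokens[dataset])},
        )
```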
- Created TaskNames for DVLA_FILES rather than have DVLA_FILES in QueueNames
- Removed PROCESS_FTP from all_queues(), as it was causing problems in
picking up letter job tasks
- Created a test to ensure that we don't arbitrarily add queue names to
all_queues()
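The guard test is simple: all_queues() is an explicit list, and the test pins it down (the queue names here are placeholders):

```python
class QueueNames:
    PERIODIC = "periodic-tasks"
    DATABASE = "database-tasks"
    SEND = "send-tasks"

    @classmethod
    def all_queues(cls):
        # explicit list, not introspection, so adding a constant doesn't
        # silently add a queue the workers are expected to consume
        return [cls.PERIODIC, cls.DATABASE, cls.SEND]


def test_all_queues_is_exactly_the_expected_list():
    assert set(QueueNames.all_queues()) == {
        "periodic-tasks",
        "database-tasks",
        "send-tasks",
    }
```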
Previously, in run_app_paas.sh, we captured stdout from the app and
piped it into the log file. However, this caused a number of problems,
mainly:
* exceptions with stack traces often weren't formatted properly,
and kibana could not parse them
* celery logs were duplicated - we'd collect both the json logs and
the human-readable stdout logs.
Instead, with the updated utils library, we can log json straight to
the appropriate directory.
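In outline, the direct json logging amounts to attaching a JSON formatter to a file handler (the path is illustrative; the real setup lives in the utils library):

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Emit each record as a single JSON line that kibana can parse,
    keeping any stack trace inside one field."""

    def format(self, record):
        payload = {
            "time": self.formatTime(record),
            "level": record.levelname,
            "message": record.getMessage(),
        }
        if record.exc_info:
            payload["exception"] = self.formatException(record.exc_info)
        return json.dumps(payload)


handler = logging.FileHandler("/home/vcap/logs/app.log")  # illustrative path
handler.setFormatter(JsonFormatter())
logging.getLogger().addHandler(handler)
```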