* Added is_precompiled_letter method to letter/utils.py (see the sketch after this list)
* Added tests for letter/utils.py
* Added tests for the rest endpoint
* Moved the Precompiled name to a central location
* Added hidden field to the test method to create a template
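A minimal sketch of what the is_precompiled_letter check could look like, assuming the precompiled template is identified as a hidden letter template with a centrally defined name; the constant value and the attributes compared are assumptions for illustration, not the actual implementation:

```python
# Assumed "central location" for the precompiled template name.
PRECOMPILED_TEMPLATE_NAME = "Pre-compiled PDF"

def is_precompiled_letter(template):
    # Assumed check: a hidden letter template whose name matches the constant.
    return (
        template.template_type == "letter"
        and template.hidden
        and template.name == PRECOMPILED_TEMPLATE_NAME
    )
```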
This will continue to update the notification history for letter notifications.
We currently have an issue where the responses to letters from the provider are taking a long time.
This is due to the manual nature of their process.
Updating the status of the letter will still work if the notification has been purged.
Also turned back on the purge letter notification scheduled task.
There's no reason to have things that never change in environment.sh.
You'll want to update your environment.sh, then restart your shells
(`exec bash`, `exec zsh`, etc.).
This also changes the database URI to be set statically in the config, but
overridable from the command line if you need to - for example, Jenkins
will override it with the dockerised Postgres URI.
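A minimal sketch of that pattern, assuming the URI is read from an environment variable with a static default in config.py (the variable name and default value are assumptions):

```python
import os

# Static default in the config, overridable from the command line / CI, e.g.
#   SQLALCHEMY_DATABASE_URI=postgresql://... flask run
SQLALCHEMY_DATABASE_URI = os.environ.get(
    "SQLALCHEMY_DATABASE_URI",
    "postgresql://localhost/notification_api",  # assumed default
)
```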
This is to address some errors we saw yesterday such as:
`sqlalchemy.exc.TimeoutError: QueuePool limit of size 5 overflow 10
reached, connection timed out, timeout 30`
Related flask-sqlalchemy docs:
http://flask-sqlalchemy.pocoo.org/2.3/config/#configuration-keys
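For reference, a hedged sketch of the Flask-SQLAlchemy pool settings involved; the numbers mirror the defaults quoted in the error above and are not necessarily the values committed here:

```python
SQLALCHEMY_POOL_SIZE = 5        # base number of pooled connections ("size 5")
SQLALCHEMY_MAX_OVERFLOW = 10    # extra connections allowed on top ("overflow 10")
SQLALCHEMY_POOL_TIMEOUT = 30    # secs to wait for a connection before TimeoutError
```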
PR #1550 added the rate_limit column to the Service table.
This PR removes the rate limits from the config and uses rate_limit from
the Service model instead. Rate limits are still separated into 'team',
'normal' and 'test' keys, but the values are currently the same for a given
service (see the sketch below).
Pivotal story https://www.pivotaltracker.com/story/show/153992529
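A minimal sketch of the lookup, with a hypothetical helper name:

```python
def rate_limit_for(service, key_type):
    # 'team', 'normal' and 'test' key types currently share the same
    # per-service limit, read from the Service model rather than config.
    return service.rate_limit
```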
During database upgrades and database failovers there have been errors
because the database connection stays open: when a query is run, it
fails and the connection is re-established. To avoid these errors,
shorter timeouts have been used to keep the connections from going
stale.
- SQLALCHEMY_POOL_TIMEOUT: time out idle connections after 30 secs
- Updated SQLALCHEMY_POOL_RECYCLE to recycle the connection every 5 mins (see the sketch below)
See the guide on optimistic disconnect handling, which uses pool
recycling as a way to manage this:
http://docs.sqlalchemy.org/en/latest/core/pooling.html#disconnect-handling-optimistic
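A hedged sketch of those two settings with the values from the bullets above; the actual committed config may differ:

```python
SQLALCHEMY_POOL_TIMEOUT = 30    # 30 secs
SQLALCHEMY_POOL_RECYCLE = 300   # recycle connections every 5 mins
```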
Grouping the letters into a maximum number of files is necessary because
the SQS task needs to be under a certain size. We also compress the task
when sending.
Add the collate-letter-pdfs task (name pending). This retrieves a list of
letter PDF files (just the metadata, not the actual data) from S3,
loops through them, and calls the ftp task zip-and-send-letter-pdfs. It
groups the files by adding them to lists while counting the total
file size; if it gets over a certain size (currently set to 500MB)
it breaks at that chunk, sends that list of files off to the ftp app,
and then starts building up a new list (sketched below).
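A minimal sketch of that grouping logic; the function name, the metadata shape and the threshold handling are assumptions based on the description above:

```python
MAX_GROUP_SIZE = 500 * 1024 * 1024  # 500MB, to be tuned once we see how ftp performs

def group_letter_pdfs(pdf_metadata, max_size=MAX_GROUP_SIZE):
    """Yield lists of S3 keys whose combined file size stays under max_size."""
    current_group, current_size = [], 0
    for pdf in pdf_metadata:  # each item assumed to carry a key and a size in bytes
        if current_group and current_size + pdf["Size"] > max_size:
            yield current_group  # hand this chunk to zip-and-send-letter-pdfs
            current_group, current_size = [], 0
        current_group.append(pdf["Key"])
        current_size += pdf["Size"]
    if current_group:
        yield current_group
```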
DVLA have a hard 2GB limit on how big the zip files we send can be -
however, we're going to be limited by the amount of memory on the ftp
app well before we get around to handling 2GB of PDF data - so the
limit is 500MB for now. We'll adjust it after we see how ftp performs.
SQLAlchemy config changes were made to decrease the downtime of the
application. The last test had only 1 minute of downtime during the
40-minute upgrade period. We tested without the config changes to
double-check the change had the desired effect. Adding them back in so we
can test the changes under load and performance test outside of an upgrade.
SQLAlchemy config changes were made to decrease the downtime of the
application. The last test had only 1 minute of downtime during the
40-minute upgrade period. Reverting the changes so that the same process
can be followed to ensure the changes had the desired effect.
Checks authentication header value on inbound SMS requests from
MMG against a list of allowed API keys set in the application
config.
At the moment, we're only logging the attempts without aborting the
requests. Once this is rolled out to production and we've checked
the logs we'll switch on the aborts and add the tests for 401 and 403
responses.
This work has already been done for Firetext in a previous PR:
https://github.com/alphagov/notifications-api/pull/1409
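A hedged sketch of the header check described above (the same pattern for both providers), assuming a bearer-style Authorization header and a list of allowed keys in the app config; the config key and function names are illustrative:

```python
from flask import current_app, request

def log_inbound_sms_auth(config_key):
    """Compare the Authorization header against the allowed keys in config.

    For now we only log failed attempts; aborting with 401/403 will be
    switched on once the production logs have been checked.
    """
    token = request.headers.get("Authorization", "").replace("Bearer ", "", 1)
    allowed_keys = current_app.config.get(config_key, [])
    if token not in allowed_keys:
        current_app.logger.warning("Inbound SMS request with an unrecognised auth token")
```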
- Reverted the Gunicorn worker number to 5 (this should be investigated
further on a well-baselined system for comparison)
- Enabled Redis
- Increased the rate limit to 400 req/sec, as 450+ was being achieved
during early testing yesterday
- Disabled Redis, as there is a current connection limit of 256 which
could slow down requests if they are all used
- Added statsd to methods in the POST to help spot any bottlenecks
Checks authentication header value on inbound SMS requests from
Firetext against a list of allowed API keys set in the application
config.
At the moment, we're only logging the attempts without aborting the
requests. Once this is rolled out to production and we've checked
the logs we'll switch on the aborts and add the tests for 401 and 403
responses.
The template name should be returned in the response and the user will
pick a year, so this adds those two features to the
notifications/templates_usage/monthly endpoint, along with some tests for
the functionality.
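A minimal sketch of what the endpoint could look like after those changes; the blueprint, the dao helper and the row fields are assumptions for illustration, not the real implementation:

```python
from flask import Blueprint, jsonify, request

template_usage_blueprint = Blueprint("template_usage", __name__)

@template_usage_blueprint.route("/service/<uuid:service_id>/notifications/templates_usage/monthly")
def get_monthly_template_usage(service_id):
    year = int(request.args.get("year"))  # the user picks which year to report on
    rows = fetch_monthly_template_usage(service_id, year)  # assumed dao helper
    return jsonify(stats=[
        {
            "template_id": str(row.template_id),
            "name": row.name,  # the template name is now included in the response
            "month": row.month,
            "count": row.count,
        }
        for row in rows
    ])
```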
Added a new endpoint which combines the usage data from the stats table
and the notifications tables, instead of using all the data from
the notification_history table. This should speed up the query times
and improve the page performance.
- Updated to make the stats create and update function transactional as
it actually wasn't committing the data to the table
- Added the get from the stats table
- Added a method to combine the two results (see the sketch below)
- Added the endpoint
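A minimal sketch of combining the two result sets; the row attributes assumed here (notification_type, status, count) are illustrative:

```python
from collections import Counter

def combine_statistics(stats_table_rows, notifications_rows):
    """Merge counts from the stats table with counts from the live notifications tables."""
    totals = Counter()
    for row in list(stats_table_rows) + list(notifications_rows):
        totals[(row.notification_type, row.status)] += row.count
    return dict(totals)
```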