Commit Graph

176 Commits

Author SHA1 Message Date
Katie Smith
a87be9b74a Use new value of SMS_CHAR_COUNT_LIMIT from utils
Admin, API and utils were all defining a value for SMS_CHAR_COUNT_LIMIT.
This value has been updated in notifications-utils to allow text
messages to be 4 fragments long and notifications-api now gets the value of
SMS_CHAR_COUNT_LIMIT from notifications-utils instead of defining it in
config.

Also updated some tests to check for the higher limit.
2018-08-16 16:34:34 +01:00
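
A minimal sketch of what the change above amounts to in code. The import path and the validation helper below are assumptions for illustration, not the real notifications-utils API:

```python
# Sketch: use the shared constant instead of redefining it in config.
# The import path is an assumption about where notifications-utils exposes it.
from notifications_utils import SMS_CHAR_COUNT_LIMIT


def check_sms_content_char_count(content):
    # Reject messages longer than the shared limit (now sized for 4 fragments).
    if len(content) > SMS_CHAR_COUNT_LIMIT:
        raise ValueError(
            'Content has {} characters, limit is {}'.format(
                len(content), SMS_CHAR_COUNT_LIMIT
            )
        )
```
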
Pea Tyczynska
c0f309a2a6 Delete scheduled task to populate monthly_billing 2018-07-30 11:06:04 +01:00
Rebecca Law
317ab149f4 The periodic task to populate ft_notification_status was calling the wrong task, this fixes that. 2018-07-04 14:12:47 +01:00
Rebecca Law
1363244a2b Added the scheduling for the task 2018-06-20 16:48:03 +01:00
Leo Hemsted
897ab93148 zendesk instead of deskpro 2018-04-27 16:36:39 +01:00
Alexey Bezhan
204aaf172d Add a document download client
Allows uploading documents to the Document Download API.
The client is configured with an API host and auth token. There's
no need for a flag to disable the client in the test environments
at the moment since the upload is only triggered by a specific
payload which would only be sent with an explicit goal of using
document download.
2018-04-09 16:30:16 +01:00
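
A hedged sketch of what such a client might look like. The class name, config keys, endpoint path and response shape are assumptions, not the actual Document Download API:

```python
import requests


class DocumentDownloadClient:
    """Illustrative client for uploading documents to a document-download service."""

    def init_app(self, app):
        # Configured with an API host and auth token, per the commit message.
        self.api_host = app.config['DOCUMENT_DOWNLOAD_API_HOST']
        self.auth_token = app.config['DOCUMENT_DOWNLOAD_API_KEY']

    def upload_document(self, service_id, file_contents):
        # Endpoint path and response fields are illustrative assumptions.
        response = requests.post(
            '{}/services/{}/documents'.format(self.api_host, service_id),
            headers={'Authorization': 'Bearer {}'.format(self.auth_token)},
            files={'document': file_contents},
        )
        response.raise_for_status()
        return response.json()['document']['url']
```
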
Leo Hemsted
8e73961f65 add new redis template usage per day key
We've run into issues with redis expiring keys while we try and write
to them - short lived redis TTLs aren't really sustainable for keys
where we mutate the state. Template usage is a hash contained in redis
where we increment a count keyed by template_id each time a message is
sent for that template. But if the key expires, hincrby (redis command
for incrementing a value in a hash) will re-create an empty hash.

This is no good, as we need the hash to be populated with the last
seven days worth of data, which we then increment further. We can't
tell whether the hincrby created the key, so a different approach
entirely was needed:

* New redis key: <service_id>-template-usage-<YYYY-MM-DD>. Note: This
  YYYY-MM-DD is BST time so it lines up nicely with the ft_billing table
* Incremented from process_notification - if the key doesn't exist yet,
  it'll be created then.
* Expiry set to 8 days every time it's incremented.

Then, at read time, we'll just read the last eight days of keys from
Redis, and sum them up. This works because we're only ever incrementing
from that one place - never setting wholesale, never recreating the
data from scratch. So we know that if the data is in redis, then it is
good and accurate data.

One thing we *don't* know and *cannot* reason about is what no key in
redis means. It could be either of:

* This is the first message that the service has sent today.
* The key was deleted from redis for some reason.

Since we set the TTL so long, we'll never be writing to a key that
previously expired. But if there is a redis (or operator) error and the
key is deleted, then we'll have bad data - after any data loss we'll
have to rebuild the data.
2018-04-03 16:12:54 +01:00
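
A minimal sketch of the per-day key scheme described above, assuming a thin redis-py style client (function and wrapper names are illustrative):

```python
from datetime import datetime, timedelta

TEMPLATE_USAGE_EXPIRY_SECONDS = 8 * 24 * 60 * 60  # keep 8 days of daily counters


def daily_usage_key(service_id, day):
    # e.g. "<service_id>-template-usage-2018-04-03"
    return '{}-template-usage-{}'.format(service_id, day.strftime('%Y-%m-%d'))


def increment_template_usage(redis_client, service_id, template_id):
    # Called from process_notification: hincrby creates the hash if it doesn't
    # exist yet, and we refresh the expiry on every write.
    # (The real scheme keys on the BST date to line up with ft_billing;
    # UTC is used here for brevity.)
    key = daily_usage_key(service_id, datetime.utcnow())
    redis_client.hincrby(key, str(template_id), 1)
    redis_client.expire(key, TEMPLATE_USAGE_EXPIRY_SECONDS)


def get_template_usage(redis_client, service_id, days=8):
    # At read time, sum the counts across the last `days` daily hashes.
    totals = {}
    today = datetime.utcnow()
    for offset in range(days):
        key = daily_usage_key(service_id, today - timedelta(days=offset))
        for template_id, count in redis_client.hgetall(key).items():
            totals[template_id] = totals.get(template_id, 0) + int(count)
    return totals
```
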
Rebecca Law
0701b2546d Remove test crontab minute 2018-03-26 10:30:08 +01:00
Rebecca Law
9549ada200 Run task every 15 minutes.
Move variable to task from config.
2018-03-26 10:26:24 +01:00
Rebecca Law
612843d509 Run every 15 minutes, not at 15 minutes past the hour 2018-03-26 09:43:53 +01:00
Rebecca Law
40e535e112 Add the scheduled task to run every 15 minutes. 2018-03-23 16:00:13 +00:00
Rebecca Law
f596d17bf2 If an sms or email has not been sent after 4 hours and 15 minutes then put it on the delivery queue. 2018-03-23 15:38:35 +00:00
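
A sketch of the celery beat distinction behind the "every 15 minutes" commits above; the task and queue names here are placeholders, not the real schedule:

```python
from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    # Placeholder name for the task that re-queues messages stuck unsent.
    'replay-created-notifications': {
        'task': 'replay-created-notifications',
        # crontab(minute=15) runs once an hour, at 15 minutes past the hour;
        # crontab(minute='*/15') runs every 15 minutes, which is what we want.
        'schedule': crontab(minute='*/15'),
        'options': {'queue': 'periodic-tasks'},  # placeholder queue name
    },
}
```
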
kentsanggds
5dc0248043 Merge pull request #1783 from alphagov/ken-process-antivirus
Send task to antivirus app and process antivirus callbacks
2018-03-21 16:39:55 +00:00
venusbb
378feda603 put import reporting_tasks in config 2018-03-21 10:39:00 +00:00
Ken Tsang
4ace33cc04 Add queue and task names to config 2018-03-20 10:12:59 +00:00
Ken Tsang
8733d84e75 Upload precompiled letter pdfs to letters-scan bucket 2018-03-20 10:11:36 +00:00
Ken Tsang
055a5ee7eb Add letter-scan bucket name to config 2018-03-20 10:11:36 +00:00
venusbb
7e2947790f Merged master and bumped migration version 2018-03-16 10:57:23 +00:00
venusbb
bb95a2784f Create scheduled job, fixed tests 2018-03-16 09:22:34 +00:00
Ken Tsang
2ba5202e08 Add test_letters bucket name to config 2018-03-14 17:39:17 +00:00
kentsanggds
b0b0062b35 Merge pull request #1732 from alphagov/ken-hidden-in-json-response
Return `is_precompiled_letter` field as part of json for notification by id
2018-03-08 15:06:10 +00:00
Ken Tsang
7011b90bd4 Refactor is_precompiled_letter to model 2018-03-07 23:03:03 +00:00
Katie Smith
7f2e9f507e Delete functions which call the job statistics tasks
The JobStatistics table is going to be deleted. There are currently
3 tasks which use the JobStatistics model via the Statistics DAO, so we
need to make sure that these tasks aren't being used before they are
deleted in a separate PR.

This commit deletes:
* The `create_initial_notification_statistic_tasks` function which gets
used to call the `record_initial_job_statistics` task.
* The `create_outcome_notification_statistic_tasks` function which gets
used to call the `record_outcome_job_statistics` task.
* And the scheduling of the `timeout-job-statistics` scheduled task.
2018-03-07 09:23:29 +00:00
Richard Chapman
271e157d1a Merge pull request #1737 from alphagov/add_precompiled_letters
Updated API to handle pre-compiled pdfs
2018-03-06 08:39:49 +00:00
Richard Chapman
a4feaba309 Added tests for the precompiled flow and refactored a little
* Added is_precompiled_letter method to letter/utils.py
* Added tests for letter/utils.py
* Added tests for the rest endpoint
* Moved the Precompiled name to a central location
* Added hidden field to the test method to create a template
2018-03-05 14:11:37 +00:00
Rebecca Law
c474b2312b Process responses for letters even after the notification has been deleted.
This will continue to update the notification history for letter notifications.
We currently have an issue where the responses to letters from the provider are taking a long time.
This is due to the manual nature of their process.
Updating the status of the letter will still work if the notification has been purged.

Also turned back on the purge letter notification scheduled task.
2018-03-02 11:29:22 +00:00
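
A rough sketch of the idea in the commit above, assuming the app's usual Notification/NotificationHistory pair of models; the function name and fields are illustrative, not the actual dao:

```python
from app import db  # assumed: the app's SQLAlchemy instance
from app.models import Notification, NotificationHistory  # assumed models


def update_letter_notification_status(reference, status):
    notification = Notification.query.filter_by(reference=reference).first()
    if notification:
        # The live notification row still exists - update it as normal.
        notification.status = status
    else:
        # The notification has been purged, so fall back to the history row,
        # which outlives it. This keeps late provider responses useful.
        history = NotificationHistory.query.filter_by(reference=reference).one()
        history.status = status
    db.session.commit()
```
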
Rebecca Law
bffc4863db Remove all methods no longer used now that we only send pdf files to DVLA. 2018-03-02 11:05:05 +00:00
Rebecca Law
be7989bbc9 Suspend the delete-letter-notifications scheduled job.
We have an issue with the provider. If we need to resend these letters, we need the notification.
2018-03-01 11:47:00 +00:00
Leo Hemsted
5b71d2f36e add org invite template to db 2018-02-23 10:45:18 +00:00
Leo Hemsted
c52ca3e7bb Merge pull request #1681 from alphagov/fix-test-db
make sure tests always run in test db
2018-02-22 16:54:57 +00:00
Athanasios Voutsadakis
c61ed043b3 Ensure pool size is an integer 2018-02-22 10:27:02 +00:00
Leo Hemsted
ee1be970fc make test config inherit from dev config
gets some secret keys and things set up for free
2018-02-21 18:42:24 +00:00
Leo Hemsted
073c48a0a7 move all static env vars from env.sh to config file in dev
There's no reason to have things that never change in environment.sh:
whenever it does change, you have to update your environment.sh, then
restart your shells (`exec bash` or `exec zsh` etc).

This also changes the database to be set statically in the config, but
overridable from the command line if you need to - for example, jenkins
will override it with the dockerised postgres uri.
2018-02-21 18:12:03 +00:00
Athanasios Voutsadakis
dc19e644a6 Increase DB connection pool size to 10
This is to address some errors we saw yesterday such as:

`sqlalchemy.exc.TimeoutError: QueuePool limit of size 5 overflow 10
reached, connection timed out, timeout 30`

Related flask-sqlalchemy docs:
http://flask-sqlalchemy.pocoo.org/2.3/config/#configuration-keys
2018-02-21 15:47:58 +00:00
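
A sketch of the flask-sqlalchemy setting this changes; the environment variable name and the int cast (see the "Ensure pool size is an integer" commit above) are assumptions:

```python
import os


class Config(object):
    # Default pool size is 5; bump to 10 to avoid
    # "QueuePool limit of size 5 overflow 10 reached" errors under load.
    # Cast to int in case the value arrives as a string from the environment.
    SQLALCHEMY_POOL_SIZE = int(os.environ.get('SQLALCHEMY_POOL_SIZE', 10))
```
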
Rebecca Law
e736c90d00 Switch to using the pdf letter flow.
When sending letters always use the pdf letter flow regardless of service permissions.
2018-02-13 18:38:32 +00:00
Richard Chapman
49d69a84d9 Changed the time of the task to run at 00:05, as the query gets data for
the day before 00:00; running the query at 00:05 minimises how out of date
the report is.
2018-01-26 09:56:53 +00:00
Alexey Bezhan
5298f28f80 Add utils DeskproClient and configuration variables
The Deskpro client is used to create tickets from celery alerting tasks
(e.g. alerts for missing ack or response files from DVLA).
2018-01-17 15:04:17 +00:00
venusbb
24b785e7e0 Added process for dvla acknowledgement file
Daily scheduled task to check the ack file against the list of zip files sent;
if we haven't received an ack for a zip file, raise a 500 exception
2018-01-12 15:44:00 +00:00
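
A minimal sketch of the daily check described above; the helpers that list the zip files sent and parse DVLA's ack file are assumed to exist elsewhere:

```python
def check_dvla_acknowledgements(sent_zip_filenames, acknowledged_filenames):
    # Compare the zip files we sent with the filenames listed in the ack file.
    missing = set(sent_zip_filenames) - set(acknowledged_filenames)
    if missing:
        # Raise so the scheduled task fails loudly (a 500) and can alert.
        raise Exception(
            'DVLA acknowledgement missing for zip files: {}'.format(sorted(missing))
        )
```
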
Rebecca Law
9c4e43bfac Some pseudo code and notes on how to implement a check for the letter acknowledgement file. 2018-01-11 16:37:39 +00:00
Katie Smith
b07db16cd1 Get rate limit from service.rate_limit column (not config)
PR #1550 added the rate_limit column to the Service table.

This PR removes the rate limits from the config and uses rate_limit from
the Service model instead. Rate limits are still separated into 'team',
'normal' and 'test', but these values are the same for a service.

Pivotal story https://www.pivotaltracker.com/story/show/153992529
2018-01-11 10:28:11 +00:00
Alexey Bezhan
d82801fa5d Remove unused PERFORMANCE_PLATFORM_TOKEN config variable 2018-01-09 10:45:03 +00:00
Richard Chapman
b90ee832a7 Moved the SQL Alchemy config from staging to all environments
During database upgrades and database failovers there have been errors
because the database connection stays open; when a query is run, the
query fails and the connection is re-established. To avoid these errors,
shorter timeouts have been used to keep the connections from getting
stale.

- Updated SQLALCHEMY_POOL_TIMEOUT to time out idle connections after 30 secs
- Updated SQLALCHEMY_POOL_RECYCLE to recycle the connection every 5 mins

See guide on optimistic disconnect handling - using the pool recycle
as a way to manage this:
http://docs.sqlalchemy.org/en/latest/core/pooling.html#disconnect-handling-optimistic
2018-01-05 05:53:40 +00:00
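
A sketch of the two settings described above, now applied to all environments (values taken from the commit message):

```python
class Config(object):
    # Give up waiting for a connection from the pool after 30 seconds
    # (described in the commit as timing out idle connections after 30 secs).
    SQLALCHEMY_POOL_TIMEOUT = 30
    # Recycle connections every 5 minutes so a stale connection is never
    # reused across a database upgrade or failover.
    SQLALCHEMY_POOL_RECYCLE = 300
```
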
Katie Smith
644b110a8d Group letters into a max number of files for sending to DVLA
Grouping the letters into a maximum number of files is necessary because
the SQS task needs to be under a certain size. We also compress the task
when sending.
2018-01-03 11:31:22 +00:00
Leo Hemsted
309b4d7d33 add collate-letter-pdfs task
add collate-letter-pdfs task (name pending). This retrieves a list of
letter pdf files (just the metadata, not the actual data) from s3, and
loops through them, calling the ftp task zip-and-send-letter-pdfs. It
groups them up by adding them to lists while counting the total
filesize, if it gets over a certain filesize (currently set to 500mb)
it breaks at that chunk, sends off that list of files to the ftp app,
and then starts building up a new list.

DVLA have a hard 2GB limit on how big the zip files we send can be -
however, we're going to be limited by the amount of memory on the ftp
app well before we get around to handling 2GB of pdf data - so the
limit is 500MB for now. We'll adjust it after we see how ftp performs.
2018-01-02 10:39:21 +00:00
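
A rough sketch of the grouping logic described above; the s3 listing helper and the ftp task invocation in the comment at the end are stand-ins for the real ones:

```python
MAX_GROUP_SIZE_BYTES = 500 * 1024 * 1024  # 500MB, well under DVLA's 2GB zip limit


def group_letter_pdfs_by_size(pdf_metadata, max_size=MAX_GROUP_SIZE_BYTES):
    """Yield lists of pdf filenames whose combined size stays under max_size.

    pdf_metadata is an iterable of (filename, size_in_bytes) pairs taken from
    the bucket listing - only metadata, no pdf contents, is needed here.
    """
    current_group, current_size = [], 0
    for filename, size in pdf_metadata:
        if current_group and current_size + size > max_size:
            # Adding this file would push the group over the limit:
            # send off what we have and start building a new list.
            yield current_group
            current_group, current_size = [], 0
        current_group.append(filename)
        current_size += size
    if current_group:
        yield current_group


# Each group is then handed to the ftp app, along the lines of:
# for filenames in group_letter_pdfs_by_size(list_letter_pdfs_from_s3()):
#     zip_and_send_letter_pdfs.apply_async([filenames], queue='ftp-tasks')
```
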
Richard Chapman
20d5a946f6 Add back in SQLALCHEMY config changes on staging
SQL Alchemy config changes were made to decrease the downtime of the
application. The last test had only 1 min of downtime during the upgrade
period (i.e. 40 mins). Tested without the config changes to double
check the change had the desired effect. Adding back in so we can test
the changes under load and performance test outside of an upgrade.
2017-12-22 08:21:53 +00:00
Ken Tsang
3ca97f67c9 Change live-letters-pdf to production-letters-pdf 2017-12-21 14:57:37 +00:00
Richard Chapman
66ae4ea9f2 Revert the SQLALCHEMY config changes on staging
SQL Alchemy config changes were made to decrease the downtime of the
application. The last test had only 1 min of downtime during the upgrade
period (i.e. 40 mins). Reverting the changes so that the same process
can be followed to ensure the changes had the desired effect.
2017-12-21 11:31:38 +00:00
Richard Chapman
2bc4c8ac39 Added SQLALCHEMY settings to staging for db connections
- Updated SQLALCHEMY_POOL_TIMEOUT to time out idle connections after 30 secs
- Updated SQLALCHEMY_POOL_RECYCLE to recycle the connection every 5 mins
2017-12-20 14:22:23 +00:00
Ken Tsang
8103540261 Renamed run-letter-pdfs to trigger-letter-pdfs-for-day
- also set optional date_to_process argument for dao_get_count_of_letters_to_process_for_date to None, so it's set in the code instead
2017-12-19 13:23:55 +00:00
Ken Tsang
441651bbd1 Add get_count_of_letters_to_process to notifications_dao
- will get the letter notifications from the day before that are >= the letter processing deadline (17:30)
- the letters_as_pdf permission is required on the service
2017-12-19 13:23:55 +00:00
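
A hedged sketch of the kind of dao query the last two commits describe; the model fields, the exact cut-off window, the default date and the omitted permission filter are all assumptions:

```python
from datetime import datetime, time, timedelta

from app.models import Notification  # assumed model

LETTER_PROCESSING_DEADLINE = time(17, 30)


def dao_get_count_of_letters_to_process_for_date(date_to_process=None):
    # Per the rename commit, the default now lives here rather than in the
    # caller; defaulting to yesterday is an assumption for this sketch.
    if date_to_process is None:
        date_to_process = datetime.utcnow().date() - timedelta(days=1)

    # Letters created from 17:30 on the given day up to 17:30 the next day.
    # The real query also restricts to services with the letters_as_pdf
    # permission, which is omitted here.
    window_start = datetime.combine(date_to_process, LETTER_PROCESSING_DEADLINE)
    window_end = window_start + timedelta(days=1)

    return Notification.query.filter(
        Notification.notification_type == 'letter',
        Notification.created_at >= window_start,
        Notification.created_at < window_end,
    ).count()
```
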