Commit Graph

373 Commits

Author SHA1 Message Date
Martyn Inglis
4c0c30bb2e Ensure we count the tasks as well as the API calls.
After we have written the notification to the database and placed it on a delivery queue, we count it in the cache against the service.

This is the equivalent of doing it at the end of the API call.
2016-11-11 17:36:38 +00:00
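A minimal sketch of the counting step described in this commit, assuming a Redis-style cache; the key format, counter expiry and client here are made up for illustration and are not the app's actual scheme.

```python
# Hypothetical sketch: bump a per-service daily counter once the notification
# has been written to the database and queued for delivery. Key format and
# client are assumptions, not the real implementation.
from datetime import date

import redis

cache = redis.StrictRedis()


def count_notification_for_service(service_id):
    key = "notifications-{}-count-{}".format(service_id, date.today().isoformat())
    cache.incr(key)                 # atomic, so safe across workers
    cache.expire(key, 86400 * 2)    # let old daily counters fall away
```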
Rebecca Law
4379308189 Fix test failures. 2016-11-11 17:28:15 +00:00
Rebecca Law
8f0a96c76c Merge branch 'master' into refactor-for-code-reuse 2016-11-11 16:51:18 +00:00
Rebecca Law
4f5254134b Fix logging in send_sms and send_email 2016-11-11 16:09:09 +00:00
Rebecca Law
4fe8d700ae Fix the send_sms and send_email code to send the notification id that has been persisted. 2016-11-11 16:00:31 +00:00
Rebecca Law
c45d1f63a7 Use the same method to persist a notification.
Refactored the send_sms task to use this method.
2016-11-11 14:56:33 +00:00
Rebecca Law
eafbbd9809 Refactor send_sms and send_email to use common code to persist the notification. 2016-11-11 10:41:39 +00:00
Chris Hill-Scott
8474344c9a Ignore case and whitespace in personalisation keys
From a support ticket:

> it's possible to add a personalisation token with trailing whitespace
> (eg. "key " rather than "key"). Can this be trimmed in the UI to guard
> against this? (one of our devs copied and pasted it from a document
> and inadvertently included the space)

> Nothing major but caused a few hours of investigations!

Rather than trim the placeholder in the template, we should treat
placeholders in API calls the same way we do with CSV files, ie we
ignore case and spacing in the name of the placeholder. So
`(( First Name))` is equivalent to `((first_name))`, and both would be
populated with a dictionary like `{'firstName': 'Chris'}`.

Depends on:
- [x] https://github.com/alphagov/notifications-utils/pull/77
2016-11-10 11:40:00 +00:00
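The matching rule itself lives in notifications-utils (the PR referenced above); the sketch below, with hypothetical function names, just illustrates it: case, whitespace and underscores are ignored when looking a placeholder up in the personalisation dictionary.

```python
import re


def normalise(key):
    # ' First Name', 'first_name' and 'firstName' all normalise to 'firstname'
    return re.sub(r"[\s_]+", "", key).lower()


def get_personalisation_value(placeholder, personalisation):
    wanted = normalise(placeholder)
    return next(
        (value for key, value in personalisation.items() if normalise(key) == wanted),
        None,
    )


assert get_personalisation_value(" First Name", {"firstName": "Chris"}) == "Chris"
assert get_personalisation_value("first_name", {"firstName": "Chris"}) == "Chris"
```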
Martyn Inglis
15ba0a3eb1 Added job id to logger 2016-11-07 11:59:46 +00:00
Leo Hemsted
a1cc092d3b fix logger.exception syntax
when a log call is given multiple parameters, the python logging utils
assume the first param is a format string and the rest are arguments to
interpolate into it - we were passing the exception object to `logger.exception`,
however the whole purpose of .exception is to attach the active exception
itself, so we didn't need to
2016-10-17 17:44:17 +01:00
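A small illustration of the fix, with a hypothetical task body: `logger.exception` already records the active exception and traceback, so the exception object should not be passed as a positional argument (logging would treat it as a %-format value for the message).

```python
import logging

logger = logging.getLogger(__name__)


def send_to_provider(notification_id):
    raise RuntimeError("provider unavailable")   # stand-in for a real failure


def deliver(notification_id):
    try:
        send_to_provider(notification_id)
    except Exception:
        # Before: passing the exception as an extra argument, which logging
        # treats as a %-format value for the message, not as the exception:
        #     logger.exception("Failed to deliver notification", e)
        # After: .exception() picks up the active exception and traceback
        # from the except block on its own.
        logger.exception("Failed to deliver notification %s", notification_id)
        raise
```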
Leo Hemsted
a2c3d265de remove unused former send_sms_to_provider and send_sms_to_email functions
they were superseded by deliver_sms and deliver_email in the same file 3 weeks ago
2016-10-13 15:53:01 +01:00
Leo Hemsted
a095aa41f3 don't retry task if InvalidEmailError
just record it as a technical error - retrying won't fix a bad email
2016-10-13 15:27:47 +01:00
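Roughly what the task-level handling looks like under this change, as a sketch only: the status string comes from the surrounding commits, while the Celery wiring, exception class and helper functions below are stand-ins, not the app's real code.

```python
from celery import Celery

celery_app = Celery("tasks")             # stand-in for the app's Celery instance


class InvalidEmailError(Exception):      # stand-in for the email client's error
    pass


def send_email_to_provider(notification_id):
    ...                                  # stub: hands the notification to the provider


def update_notification_status(notification_id, status):
    ...                                  # stub: persists the new status


@celery_app.task(bind=True, max_retries=5, default_retry_delay=300)
def deliver_email(self, notification_id):
    try:
        send_email_to_provider(notification_id)
    except InvalidEmailError:
        # Retrying won't fix a bad address, so record a technical failure
        # instead of putting the task back on the retry queue.
        update_notification_status(notification_id, "technical-failure")
    except Exception as e:
        self.retry(exc=e, queue="retry")
```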
Leo Hemsted
bdb4da4976 tests n stuff 2016-10-07 13:08:41 +01:00
Leo Hemsted
d22d055e21 only process jobs if they're pending
help prevent issues where scheduled jobs are processed twice. note this is NOT
a watertight solution - it holds no locks, and there is no guarantee that the
status won't have changed between checking that it is 'pending' and updating
it to 'in progress'
2016-10-07 12:54:11 +01:00
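Sketched below with stubbed dao helpers (the names are hypothetical): the guard only processes a job that is still pending, and, as the commit says, nothing locks the row between the check and the update.

```python
def dao_get_job(job_id):
    ...  # stub: would load the job row from the database


def dao_update_job_status(job_id, status):
    ...  # stub: would write the new status


def process_job(job_id):
    job = dao_get_job(job_id)
    if job.job_status != "pending":
        return                                    # already picked up elsewhere
    # No lock is held here, so a second worker could still race past the
    # check above before the status below is written - hence "not watertight".
    dao_update_job_status(job_id, "in progress")
    # ... then queue the notifications for each row of the job ...
```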
Leo Hemsted
16dd16c026 move updating into the dao fn
this helps manage the transaction by keeping it inside one function in the dao,
so after the function completes you know that the transaction has been released
and concurrent processing can resume
2016-10-07 12:35:08 +01:00
Martyn Inglis
897ad6a957 prevent race conditions in run_scheduled_jobs queuing jobs multiple times
we were running into issues where multiple beats queue up the
run_scheduled_jobs task at the same time, and concurrency issues with selecting
scheduled jobs cause both tasks to trigger processing of the same job.

Use with_for_update, which calls through to the postgres SELECT ... FOR UPDATE.
This blocks other SELECT ... FOR UPDATE queries (ie other threads running the
same code) until the rows are set to pending and the transaction completes, so
the second thread will not find any rows
2016-10-07 12:35:02 +01:00
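A minimal sketch of the locking pattern in Flask-SQLAlchemy style; the `Job` model, column names and import paths are assumptions based on the surrounding commit messages.

```python
from datetime import datetime

from app import db                 # assumed module layout
from app.models import Job         # assumed module layout


def dao_set_scheduled_jobs_to_pending():
    jobs = (
        Job.query
        .filter(
            Job.job_status == "scheduled",
            Job.scheduled_for < datetime.utcnow(),
        )
        .order_by(Job.scheduled_for)
        .with_for_update()          # SELECT ... FOR UPDATE: blocks a concurrent run
        .all()
    )
    for job in jobs:
        job.job_status = "pending"
    db.session.commit()             # releases the row locks
    return jobs
```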
Rebecca Law
6065ed57cf Fix for the job status
- It seems that when we changed the name of the job.status column we didn't update the code to use job.job_status.
- Therefore none of the jobs since then have had their job status updated.
- Now that this is fixed we can show the job status when there is an error like "sending exceeds limits"
  - This could happen if a job is scheduled to run at the top of the hour, so at the time the job was created the limit was not exceeded, but at the time the job was processed the limit was exceeded.
2016-10-05 14:56:32 +01:00
Rebecca Law
9e7a7b1857 Merge branch 'master' into refactor-send_notifications
Conflicts:
	tests/app/notifications/rest/test_send_notification.py
2016-10-03 11:40:31 +01:00
Rebecca Law
e3d418506a Create a query to sum notifications for the message limit check; this is a bit more efficient.
Refactored the send_notifications method so that it is more readable.
Refactored the test_send_notificaitons tests to use parametrized tests and avoid duplication.
2016-10-03 10:57:10 +01:00
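A sketch of doing the limit check as a single aggregate query instead of loading rows; the model, column names and import paths are assumptions in keeping with the rest of the codebase.

```python
from datetime import date

from sqlalchemy import func

from app import db                       # assumed module layout
from app.models import Notification      # assumed module layout


def dao_get_notification_count_for_service_today(service_id):
    count = (
        db.session.query(func.count(Notification.id))
        .filter(
            Notification.service_id == service_id,
            func.date(Notification.created_at) == date.today(),
        )
        .scalar()
    )
    return count or 0
```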
Martyn Inglis
ad5222442a Fixed build
- passing a single param to a celery task must be done as a list not a tuple.
2016-09-30 13:34:44 +01:00
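The gotcha behind this fix, sketched with a stand-in task: `apply_async` takes its positional arguments as a sequence, and `("some-id")` is just a parenthesised string rather than a one-element tuple, so a list is the least error-prone spelling.

```python
from celery import Celery

celery_app = Celery("tasks", broker="memory://")   # stand-in for the app's instance


@celery_app.task
def deliver_sms(notification_id):
    ...


# Broken: ("some-id") is just the string "some-id", not a one-element tuple,
# so it is not passed as a single positional argument.
# deliver_sms.apply_async(("some-id"), queue="send-sms")

# Works, but easy to get wrong - only the trailing comma makes it a tuple.
# deliver_sms.apply_async(("some-id",), queue="send-sms")

# Preferred: a single-element list is unambiguous.
deliver_sms.apply_async(["some-id"], queue="send-sms")
```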
Martyn Inglis
e3ed8fad1a Merge branch 'master' into research-mode-csv-file-queues
Conflicts:
	tests/app/celery/test_tasks.py
	tests/app/notifications/rest/test_send_notification.py
2016-09-30 11:07:32 +01:00
Imdad Ahad
db608a05d2 Refactor sending eligibility function and update across files 2016-09-28 17:00:17 +01:00
Martyn Inglis
3618a3967c Creating notification tasks should write send tasks to the research-mode queue if research mode service. 2016-09-28 15:29:10 +01:00
Martyn Inglis
c13ead77e4 Ensures that both the CSV processing and the API both use the new deliver_email and deliver_sms tasks not the old ones with the redundant parameter. 2016-09-28 15:05:50 +01:00
Martyn Inglis
9233ffa3d4 When processing a CSV file don't put the DB tasks into the live queue when service is in research mode. 2016-09-28 13:53:05 +01:00
Martyn Inglis
376f8355cb Updated clients to have more robust error handling
- fire text and omg are now much more similar. Ready to be combined.
- Error handling added for valid JSON responses
2016-09-22 17:18:05 +01:00
Martyn Inglis
59ab5da5d3 Handle errors where the notification isn't found when executing the tasks
- Throws a NoResultFound sqlalchemy exception
- Which causes a retry. This means we give it a few goes (5 retries, max 5 hours) for the notification to appear.
- Should never happen, unless unusual task overlaps lead to tasks executing in an overlapping order.
2016-09-22 09:52:23 +01:00
Martyn Inglis
2f36e0dbcf Refactor to make the new send_to_provider methods take a notification not a notification ID.
- Driven by the fact we won't know the type in the API call
- hence we need to load the notification earlier, so we pass it (not the id) through to the send task to avoid loading it twice.
2016-09-22 09:16:58 +01:00
Martyn Inglis
991cc884b4 Refactoring the send to provider code out of the tasks folder
- building a rest endpoint to call that code to complement the existing task-based approach.
2016-09-21 13:27:32 +01:00
Martyn Inglis
bd06b18684 Refactor of celery tasks.
- Aim to move the code that contacts providers into its own module.
- Celery tasks now call this module to send to the provider.
- No exceptions caught in the new module. Celery tasks now use any exception to trigger a retry.
- Tests moved around - new test directory for the new class; all tests from the celery test module moved, except the retry logic.
2016-09-20 17:24:28 +01:00
Rebecca Law
fc66576d0f Remove try catch, it was only logging the exception. 2016-09-14 10:38:34 +01:00
Rebecca Law
cb51c69503 Update the timeout_notifications scheduled tasks.
We found that notifications left in created or pending were never picked up and timed out.
- New bulk update method to set all notifications with:
  - a status of created|sending|pending to temporary-failure
  - and which are older than today minus SENDING_NOTIFICATIONS_TIMEOUT_PERIOD (in seconds)
- the scheduled task to timeout notifications uses the new bulk update query.
- the task will be more efficient
2016-09-13 16:42:53 +01:00
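A sketch of the bulk update, with assumed model, column names and imports: one UPDATE statement covers every stuck notification rather than loading and saving each row.

```python
from datetime import datetime, timedelta

from app import db                       # assumed module layout
from app.models import Notification      # assumed module layout


def dao_timeout_notifications(timeout_period_in_seconds):
    cutoff = datetime.utcnow() - timedelta(seconds=timeout_period_in_seconds)
    updated = (
        db.session.query(Notification)
        .filter(
            Notification.status.in_(["created", "sending", "pending"]),
            Notification.created_at < cutoff,
        )
        .update({"status": "temporary-failure"}, synchronize_session=False)
    )
    db.session.commit()
    return updated                       # number of rows timed out
```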
Martyn Inglis
e0d46d821d Merge branch 'master' into delete-csv-file-after-7-days
Conflicts:
	app/notifications/rest.py
2016-09-12 16:33:01 +01:00
Martyn Inglis
852e207478 Adds new scheduled task to delete CSV files.
Deletes files for jobs older than 7 days, matching delete process for the actual notifications

Runs 1 minute past the hour at midnight, 1am and 2am.
2016-09-07 15:36:59 +01:00
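The corresponding beat entry would look roughly like this; the dictionary key and task name are illustrative, but the crontab matches the schedule described above (one minute past midnight, 1am and 2am).

```python
from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    "remove-csv-files": {
        "task": "remove-csv-files",                  # illustrative task name
        "schedule": crontab(minute=1, hour="0,1,2"),
    },
}
```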
Martyn Inglis
0ecd6f2737 Removed the 'delete job' tasks from the tasks.py file
- this was called after processing a job.
- now to be called on a schedule.
2016-09-07 15:35:12 +01:00
Martyn Inglis
cf107d649f Merge branch 'master' into write-to-postgres-not-sqs
Conflicts:
	app/celery/tasks.py
2016-09-07 13:51:21 +01:00
Martyn Inglis
255ffbd7f6 Tasks now use the new builder method to construct the DB object. 2016-09-07 13:45:37 +01:00
Imdad Ahad
94708bdee8 Fix indentation issue 2016-09-07 11:03:16 +01:00
Imdad Ahad
840b99b277 Fix issue where test key cannot send messages to external numbers 2016-09-07 09:57:20 +01:00
Martyn Inglis
e640e048e5 Pushed the notifications from the CSV onto the DB queue 2016-08-30 17:27:01 +01:00
Martyn Inglis
f3c614634f Merge branch 'master' into seperate-queues-for-sending-and-for-db 2016-08-30 14:25:14 +01:00
minglis
4a078e3c61 Merge pull request #635 from alphagov/scheduled-delivery-of-jobs
Scheduled delivery of jobs
2016-08-30 12:48:48 +01:00
Martyn Inglis
486697d07c New queues for the sms/email tasks
Previously there were 4 queues for sending messages

This was based on the fact that each notification involves 2 actions - persisting it in the database and sending it to the provider.

Two queues supported the CSV upload - for the first of these tasks
- bulk-email
- build-sms

And there were two more queues for the tasks that make the 3rd party client calls.
- sms
- email

API Calls just used the latter two queues for both tasks

Added four new queues
- db-email
- db-sms
- send-sms
- send-email

So an API call puts a notification into the db-[type] queue first, which then puts the notification into the send-[type] queue

Build queues stay as before.

This will allow us to target these tasks with separate workers and manage them differently.
2016-08-30 10:42:24 +01:00
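A sketch of how the two-step flow reads for an API call under the new layout: the queue names come from the commit message, while the task names and Celery wiring below are illustrative.

```python
from celery import Celery

celery_app = Celery("tasks", broker="memory://")   # stand-in for the app's instance


@celery_app.task(name="persist-email")
def persist_email(notification):
    ...  # write the notification to the database, then hand it on:
    # deliver_email.apply_async([notification["id"]], queue="send-email")


@celery_app.task(name="deliver-email")
def deliver_email(notification_id):
    ...  # call the email provider


# An API call starts the chain on the db-[type] queue; the persist task then
# queues the delivery step on the send-[type] queue.
persist_email.apply_async([{"id": "some-id"}], queue="db-email")
```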
Leo Hemsted
4f0c5fdd9e Merge pull request #623 from alphagov/test-fixes
Test fixes
2016-08-30 10:38:53 +01:00
minglis
287d32c037 Merge pull request #638 from alphagov/remove-contested-writes
Remove contested writes
2016-08-30 08:56:45 +01:00
Chris Hill-Scott
f7391da350 Don’t prefix text messages if sender name is set
Implements:
- [x] https://github.com/alphagov/notifications-utils/pull/66
2016-08-26 14:45:01 +01:00
Martyn Inglis
6e7532a561 Removed unused imports 2016-08-25 15:41:37 +01:00
Martyn Inglis
893164ae40 Removed contested updates to the notifications stats table
- As before, this is now driven from the notifications history table

- Removed from updates and creates
- Signature changes to remove unused params hit many files
- Also a potential issue around rate limiting - we used to get the number sent per day from the stats table, which was a single row lookup; now we have to count it. This applies to EVERY API CALL. Probably not a good thing and should be addressed urgently.
2016-08-25 11:55:38 +01:00
Martyn Inglis
708f566c24 Removed updates to the provider stats table
- again, these now come from the notifications history table
- We updated this when we sent a notification, so it has been removed from the celery tasks
- tests removed also
2016-08-25 10:33:12 +01:00
Martyn Inglis
d848093fb0 Fixed formatting and typo on beat config. 2016-08-24 17:08:20 +01:00