Commit Graph

1264 Commits

Rebecca Law
5482c03bca [WIP] 2019-12-27 10:27:59 +00:00
David McDonald
f948555ca8 Do nothing on db conflict
For notification and notification_history we do an upsert. Here, as the
inbound_sms table is never updated, only inserted into once (signified by
the lack of an updated_at field), an upsert would be unnecessary.

Therefore, if for some reason the delete statement failed as part of
moving data into the inbound_sms_history table, we can simply ignore
any db conflicts raised by a rerun of
`delete_inbound_sms_older_than_retention`.
2019-12-24 09:39:06 +00:00
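A minimal sketch of the idea, using SQLAlchemy's PostgreSQL dialect; the table and column definitions here are illustrative stand-ins, not the real Notify schema:

```python
from sqlalchemy import Column, DateTime, MetaData, Table
from sqlalchemy.dialects.postgresql import UUID, insert

metadata = MetaData()

# illustrative stand-in for the real table - columns are assumptions
inbound_sms_history = Table(
    'inbound_sms_history', metadata,
    Column('id', UUID(as_uuid=True), primary_key=True),
    Column('created_at', DateTime),
    Column('service_id', UUID(as_uuid=True)),
)

def copy_to_history(connection, rows):
    # a rerun after a failed delete silently skips rows already copied,
    # instead of raising a duplicate-key error
    stmt = insert(inbound_sms_history).on_conflict_do_nothing(
        index_elements=['id']
    )
    connection.execute(stmt, rows)  # rows: list of dicts keyed by column name
```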
Pea Tyczynska
f8ff2d121f Changes following review:
- Check that the right keys are in the new history rows
- Improve model and get rid of old revision version
- Add updated migration file
- Test data when inserting into inbound sms history
2019-12-20 16:17:27 +00:00
Pea Tyczynska
448cd1e94e Integrate inbound history insert into delete inbound sms function 2019-12-20 16:16:29 +00:00
Pea Tyczynska
a6b4675ae7 Populate inbound sms history when deleting inbound sms 2019-12-20 16:16:29 +00:00
Chris Hill-Scott
d777cd8149 Merge pull request #2682 from alphagov/search-by-reference
Allow searching notifications by reference as well as recipient
2019-12-17 10:04:37 +00:00
Chris Hill-Scott
c573209d7e Stop guessing notification type
Before the search term was either:
- an email address (or partial email address)
- a phone number (or partial phone number)

Now it can also be:
- a reference (or partial reference)

We can take a pretty good guess, by looking at the search term, whether
the thing the user is searching by is an email address or a phone number. This
helps us:
- only show relevant notifications
- normalise the search term to give the best chance of matching what we
  store in the `normalised_to` field

However we can’t look at a search term and guess whether it’s a
reference, because a reference could take any format. Therefore if the
user hasn’t told us what kind of thing their search term is, we should
stop trying to guess.
2019-12-16 13:43:38 +00:00
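A sketch of the kind of guessing this commit stops relying on once the caller states the search type; the heuristics and helper names here are illustrative, not the real Notify code:

```python
def guess_search_type(search_term):
    # crude heuristic: '@' suggests an email, digits suggest a phone number;
    # a client reference could look like either, which is why guessing has
    # to stop once references are searchable too
    if '@' in search_term:
        return 'email'
    if any(c.isdigit() for c in search_term):
        return 'sms'
    return None  # could be anything, including a reference

def normalise_search_term(search_term, guessed_type):
    if guessed_type == 'email':
        return search_term.lower()
    if guessed_type == 'sms':
        # match the punctuation-free format of the `normalised_to` column
        return ''.join(c for c in search_term if c.isdigit() or c == '+')
    return search_term
```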
Chris Hill-Scott
8cb6907828 Allow searching by reference as well as recipient
We have a team who want to find emails that might have been sent to an
incorrect address. Therefore they can’t search by the correct address,
because it won’t match.

What they do have is the reference number of the user’s application,
which is also stored in the `client_reference` field on the
notification.

So when a user is searching we should also look at the client reference,
as well as the recipient, allowing the user to enter either in the
search box.
2019-12-16 11:02:07 +00:00
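A minimal sketch of such a two-column search; the model here is an illustrative stand-in for the real one, assuming `normalised_to` and `client_reference` columns:

```python
from sqlalchemy import Column, String, or_
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Notification(Base):  # illustrative stand-in for the real model
    __tablename__ = 'notifications'
    id = Column(String, primary_key=True)
    normalised_to = Column(String)
    client_reference = Column(String)

def filter_by_search_term(query, search_term):
    # ilike gives the partial-match behaviour described above
    pattern = f'%{search_term}%'
    return query.filter(or_(
        Notification.normalised_to.ilike(pattern),
        Notification.client_reference.ilike(pattern),
    ))
```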
Rebecca Law
555e660a13 Merge pull request #2676 from alphagov/add-returned-letters-table
Add returned letters table
2019-12-13 14:13:28 +00:00
Leo Hemsted
b355fc4523 refactor shared functionality from provider priority logic 2019-12-13 10:03:23 +00:00
Leo Hemsted
31d1abd6d1 add task to move sms providers back towards shared load
we generally aim to share the load between the two providers equally
(more or less). When one provider has struggled, we deprioritise them;
this commit adds a function that gradually restores balance. It checks
every five minutes: if it's been more than an hour since the providers
were last changed, it adjusts them towards a 50/50 split. Except it's
not quite 50/50 due to #reasons (we want to slightly favour MMG); it's
actually 60/40. That's defined in a new dict in config.py.
2019-12-13 10:02:39 +00:00
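A sketch of that rebalancing rule; the 60/40 target and the one-hour guard come from the commit message, while the ten-point step, the dict name and the data shapes are assumptions:

```python
from datetime import datetime, timedelta

# the favoured split from config.py - the dict name is a guess
SMS_PROVIDER_RESOURCE_RATIO = {'mmg': 60, 'firetext': 40}

def adjust_provider_priorities(providers, now=None):
    """providers: list of dicts with identifier, priority, updated_at (datetime)."""
    now = now or datetime.utcnow()
    if any(now - p['updated_at'] < timedelta(hours=1) for p in providers):
        return  # changed recently (e.g. after failures) - leave alone
    for p in providers:
        target = SMS_PROVIDER_RESOURCE_RATIO[p['identifier']]
        # step towards the target without overshooting it
        if p['priority'] < target:
            p['priority'] = min(p['priority'] + 10, target)
        elif p['priority'] > target:
            p['priority'] = max(p['priority'] - 10, target)
```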
Rebecca Law
d330025447 Changed reported_at to a date and included audit columns. 2019-12-12 14:11:54 +00:00
Rebecca Law
c8364b4dc4 Add endpoint to return the summary data for returned letters.
Returning the count of letters that are returned for each report date.
2019-12-10 16:21:55 +00:00
Rebecca Law
40a0c62926 New endpoint to return a summary of returned letters for the service. 2019-12-09 17:27:18 +00:00
Rebecca Law
15762d5c22 Method to insert or update the returned-letters 2019-12-09 16:19:46 +00:00
Rebecca Law
e80a002c58 New table returned-letters
The table will contain notification ids for services that have returned letters. This will make it easy to query the data in Notification_history since we can join on the primary key.
2019-12-09 16:19:22 +00:00
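A minimal sketch of such a table in SQLAlchemy declarative style; the exact column set is an assumption (reported_at as a date and the audit columns follow the d330025447 commit above):

```python
from sqlalchemy import Column, Date, DateTime
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class ReturnedLetter(Base):
    __tablename__ = 'returned_letters'
    # primary key is the notification id, so joining to
    # notification_history stays a cheap primary-key join
    notification_id = Column(UUID(as_uuid=True), primary_key=True)
    service_id = Column(UUID(as_uuid=True), nullable=False)
    reported_at = Column(Date, nullable=False)
    # audit columns
    created_at = Column(DateTime, nullable=False)
    updated_at = Column(DateTime)
```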
Pea M. Tyczynska
2019070536 Merge pull request #2667 from alphagov/warn-team-about-high-failure-rates
Warn team about high failure rates
2019-12-09 11:28:25 +00:00
Pea Tyczynska
1b7b26bf24 Query directly for services with high failure rate 2019-12-06 16:57:56 +00:00
Pea Tyczynska
cfbb080f57 Simplify failure rate by building separate query 2019-12-06 16:57:44 +00:00
Pea Tyczynska
53efd87e28 Check for services sending sms messages to tv numbers 2019-12-06 16:57:34 +00:00
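A sketch of what such a check might look like; the prefix reflects Ofcom's 07700 900000-900999 range reserved for TV/drama use, and every table name, column name and threshold here is an illustrative guess:

```python
from sqlalchemy import text

TV_NUMBER_PREFIX = '447700900'  # Ofcom range reserved for TV/drama use

tv_number_query = text("""
    SELECT service_id, count(*) AS tv_count
    FROM notifications
    WHERE notification_type = 'sms'
      AND created_at BETWEEN :start AND :end
      AND normalised_to LIKE :prefix
    GROUP BY service_id
    HAVING count(*) > :threshold
""")

# usage: session.execute(tv_number_query, {
#     'start': start, 'end': end,
#     'prefix': TV_NUMBER_PREFIX + '%', 'threshold': 100,
# })
```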
David McDonald
396108313a Merge pull request #2670 from alphagov/uploads-endpoint
Uploads endpoint
2019-12-06 14:40:15 +00:00
Rebecca Law
921b90cdec Add type=int to request.args.get: if the arg is an int it's returned, else None. This means we ignore the arg if it's the wrong data type and we don't need to handle the error. 2019-12-06 13:10:38 +00:00
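A minimal example of the pattern, assuming a plain Flask view:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route('/example')  # illustrative endpoint
def example():
    # type=int parses the arg; a missing or non-integer value comes back
    # as the default (None here), with no ValueError to handle
    page = request.args.get('page', type=int)
    return jsonify(page=page)
```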
David McDonald
203e19bef3 Add uploads blueprint; the endpoint returns a combination of uploaded letters and jobs. The endpoint returns data about the uploaded letter or job, including notification statistics for the upload. The data is ordered by scheduled_for and created_at.
It is likely this endpoint will need additional data for the UI to display; for the first iteration this will enable the /uploads page to show both letters and jobs. Only letters uploaded via the UI are included in the resultset.

Add file name to resultset.
2019-12-06 09:54:51 +00:00
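A sketch of the combining-and-ordering behaviour described, using plain dicts as stand-ins for the real query results:

```python
def combine_uploads(uploaded_letters, jobs):
    """Each input is a list of dicts; upload_type tags where a row came from."""
    rows = [dict(r, upload_type='letter') for r in uploaded_letters]
    rows += [dict(r, upload_type='job') for r in jobs]
    # order by scheduled_for where present, falling back to created_at
    rows.sort(key=lambda r: r.get('scheduled_for') or r['created_at'],
              reverse=True)
    return rows
```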
Leo Hemsted
ca04ff4725 remove grouping from the fact status table
as we now always filter on notification type we don't need to group as well
2019-12-05 15:08:03 +00:00
Leo Hemsted
0448bca542 make create_nightly_notification_status_for_day take notification_type
the nightly task won't be affected; it'll just trigger three times as
many sub-tasks.

this doesn't need to be a two-part deploy because we only trigger this
overnight, so as long as the deploy completes in daytime we don't need
to worry about celery task signatures
2019-12-05 14:43:33 +00:00
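A sketch of that fan-out, assuming Celery tasks along these lines (`days_to_aggregate` is a hypothetical helper):

```python
from celery import shared_task

NOTIFICATION_TYPES = ('sms', 'email', 'letter')

@shared_task
def create_nightly_notification_status():
    # one sub-task per (day, type) instead of per day - three times as
    # many tasks, but each only touches one notification type
    for process_day in days_to_aggregate():  # assumed helper returning dates
        for notification_type in NOTIFICATION_TYPES:
            create_nightly_notification_status_for_day.apply_async(
                kwargs={
                    'process_day': process_day.isoformat(),
                    'notification_type': notification_type,
                }
            )

@shared_task
def create_nightly_notification_status_for_day(process_day, notification_type):
    ...  # aggregate one day and one type into the fact status table
```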
Leo Hemsted
8d160303a1 add transactional wrapper
and add case to get_notification_table_to_use test
2019-12-04 15:26:26 +00:00
Leo Hemsted
d457db4164 make has_delete_task_run non-optional
just to ensure people think about the value of it when using the function
2019-12-03 14:19:14 +00:00
Leo Hemsted
d83827579e make ft billing nightly task only look at one table
follows same logic as the create_nightly_notification_status task, see previous commit
for logic
2019-12-03 14:19:13 +00:00
Leo Hemsted
34ac7cb6c0 only commit once, rather than for every insert
if we fail to insert a valid row, we'll properly roll back the delete of the old data as well
2019-11-29 15:27:56 +00:00
Leo Hemsted
913cf5e12d work out which table to get notification status data from
previously we checked the notifications table, and if the results were
zero, checked the notification history table to see if there's data
in there. When we know that data isn't in notifications, we're still
checking. These queries take half a second per service, and we're
doing at least ten for each of the five thousand services we have in
notify. Most of these services have no data in either table for any
given day, and we can reduce the number of queries we do by only
checking one table.

Check the data retention for a service, and then if the date is older
than the retention, get from the history table.

NOTE: This requires that the delete tasks haven't run yet for the day!
If your retention is three days, this will look in the Notification
table for data from three days ago - expecting that shortly after the
task finishes, we'll delete that data.
2019-11-29 15:27:56 +00:00
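A sketch of that table-choosing rule; `get_data_retention_days` is a hypothetical helper, and the boundary handling mirrors the NOTE above:

```python
from datetime import date, timedelta

def get_notification_table_to_use(service, notification_type, process_day,
                                  has_delete_task_run):
    # assumed helper: per-service, per-type retention in days
    retention_days = get_data_retention_days(service, notification_type)
    boundary = date.today() - timedelta(days=retention_days)
    if process_day < boundary:
        return 'notification_history'
    if process_day == boundary and has_delete_task_run:
        # the day at the edge of retention moves once the delete has run
        return 'notification_history'
    # data inside the retention window is still in notifications
    return 'notifications'
```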
Leo Hemsted
78edc3a8b3 fix log line for provider switch 2019-11-28 17:20:12 +00:00
Leo Hemsted
f7fbd6de5b make 500s change priorities quicker
it's not acceptable for a constantly failing provider to take 50 minutes
to drain (5x reducing priority by 10). But similarly, we need _some_
delay, or a handful of concurrent failures will completely turn off a
provider, rendering the whole exercise kinda pointless. Setting the
delay before it tries to reduce priority again to one minute is nice
because it means that if one request times out and returns 502, then any
other requests that are in flight at that time will time out before the
one minute is up and not switch, but any requests made after the switch
that take sixty seconds to time out will affect it.
2019-11-28 13:29:39 +00:00
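A minimal sketch of that one-minute guard (names assumed):

```python
from datetime import datetime, timedelta

def provider_changed_recently(provider_updated_at, now=None):
    # a burst of concurrent 500s should count as one switch, not several
    now = now or datetime.utcnow()
    return (provider_updated_at is not None
            and now - provider_updated_at < timedelta(minutes=1))
```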
Leo Hemsted
cfe82f8f4a make 500 error provider switches also check for recent changes
moving the logic and the test from switch provider on slow delivery to
dao reduce sms provider priority
2019-11-28 13:29:39 +00:00
Leo Hemsted
992e211a8d update history and created_by when adjusting provider priorities
making sure that we don't close the transaction early: it needs to stay
open because the with_for_update clause on the select is what locks the
table.

also make sure the tests clean up after themselves as they're adding
history rows etc
2019-11-28 13:29:02 +00:00
Leo Hemsted
e29546cb65 flake8 2019-11-28 13:29:02 +00:00
Leo Hemsted
28da190a1c remove get_current_provider
the function no longer makes sense now that we send through both at
the same time. mostly just used in old tests that we'll end up rewriting
shortly anyway
2019-11-28 13:29:02 +00:00
Leo Hemsted
8fa7cde593 remove unused provider dao functions 2019-11-28 13:29:01 +00:00
Leo Hemsted
fa7e0a1e84 add dao_reduce_sms_provider_priority function
retrieve the sms providers from the DB, and decrease the chosen
provider's priority by 10, while increasing the other by 10.

add a check to ensure we never decrease below 0 or increase above 100
- this is per provider, we don't check that the two add up to 100 or
  anything. If the values are outside of this range (eg: set via the UI)
then they'll probably fix themselves at some point - we've added tests
to document these cases.

Use with_for_update to ensure that the method can only run once at a
time - other invocations of the function will be held on that line until
the currently running one ends and commits the transaction. This doesn't
affect anyone doing things from the UI.
2019-11-28 13:29:01 +00:00
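A sketch of such a dao function, assuming a SQLAlchemy session and a `ProviderDetails` model:

```python
def dao_reduce_sms_provider_priority(session, identifier, created_by):
    # with_for_update() locks the selected rows: a second invocation
    # blocks on this query until this one commits
    providers = (
        session.query(ProviderDetails)  # assumed model
        .filter(ProviderDetails.notification_type == 'sms')
        .with_for_update()
        .all()
    )
    chosen = next(p for p in providers if p.identifier == identifier)
    other = next(p for p in providers if p.identifier != identifier)

    # clamp each provider independently to [0, 100]; we don't require
    # the two priorities to sum to 100
    chosen.priority = max(chosen.priority - 10, 0)
    other.priority = min(other.priority + 10, 100)
    for p in (chosen, other):
        p.created_by = created_by  # recorded on the history rows too

    session.commit()  # committing releases the row locks
```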
Leo Hemsted
6f38cbbcf1 randomly choose from providers based on priority
todo: make sure if they don't add up to 100 we do something sensible,
especially if they're both 0.
2019-11-28 13:29:01 +00:00
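A minimal sketch of priority-weighted selection, including the sensible fallback the todo asks for; `random.choices` treats the weights as relative, so they don't have to sum to 100:

```python
import random

def choose_provider(providers):
    """providers: list of dicts with 'identifier' and 'priority' keys."""
    weights = [p['priority'] for p in providers]
    if sum(weights) == 0:
        weights = [1] * len(providers)  # both zero: fall back to even odds
    return random.choices(providers, weights=weights, k=1)[0]
```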
Rebecca Law
4fd6f33af2 Merge pull request #2658 from alphagov/fix-letters-in-created-status
Alert if a letter doesn't make it past created status
2019-11-27 13:38:51 +00:00
Rebecca Law
ac4f0e8027 After a comment from @idavidmcdonald, I asked myself why we are not creating the task to upload the pdf and update the notification.
The assumption was that S3 would throw an exception if the object was uploaded twice. That's not the case: the default behaviour is that if a file already exists it will be overwritten. So it is completely safe to run the task from the alert.

It can also mean that we don't need to wait 4 hours 15 minutes. Shall I decrease the amount of time before restarting the task?
2019-11-19 16:04:21 +00:00
Rebecca Law
db0d45966f Fix query to populate ft_billing table.
The group by for the query was wrong, which would result in 2 rows with different totals but the same unique key, so the second row would update the first row. This meant we had incorrect numbers for the billing data.
Because some of the data had null for the sent_by column, the select would turn the null into 'dvla', but that same function was not used in the group by. So any time we had missing sent_by data we would end up with 2 rows where one would overwrite the other.
2019-11-15 10:23:48 +00:00
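A sketch of the shape of the fix; the column list is illustrative, but the point is that the COALESCE must appear in the GROUP BY as well as the SELECT:

```python
from sqlalchemy import text

ft_billing_rollup = text("""
    SELECT bst_date,
           template_id,
           COALESCE(sent_by, 'dvla') AS sent_by,
           sum(billable_units) AS billable_units
    FROM notifications
    GROUP BY bst_date,
             template_id,
             COALESCE(sent_by, 'dvla')  -- previously just sent_by: the bug
""")
```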
Rebecca Law
c42420c329 Add an alert when a letter is created but doesn't have a file in S3 for sending. We can tell this is the case because there is no updated_at and billable units are still 0.
At this point we are just creating a zendesk ticket - perhaps we can just call the create_letter_pdf task.
2019-11-13 16:39:59 +00:00
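A sketch of the detection query implied here; table and column names are assumptions:

```python
from sqlalchemy import text

# letters that never had a PDF uploaded have no updated_at
# and still show zero billable_units
stuck_letters = text("""
    SELECT id
    FROM notifications
    WHERE notification_type = 'letter'
      AND notification_status = 'created'
      AND updated_at IS NULL
      AND billable_units = 0
      AND created_at < :created_before
""")
```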
Rebecca Law
8d042c36f4 Added a filter to restrict jobs older than a day from coming back.
This is only necessary because there is currently a job that is old, but had 1 row created a couple of days later. So now there is 1 notification for the job where the rest have been purged.
2019-11-08 10:37:37 +00:00
Rebecca Law
559faf3034 Fix the query.
Missing the where clause to join the two tables.... OOPS
2019-11-07 10:57:31 +00:00
Rebecca Law
db5a50c5a7 Adding a scheduled task to process missing rows from a job
Sometimes a job finishes but has missed a row in the middle. It is a mystery why this is happening; it could be that the task to save the notifications has been dropped.
So until we solve the mystery, let's find the missing rows and process them.

A new scheduled task has been added to find any "finished" jobs that do not have enough notifications created. If there are missing notifications the job processes those rows for the job.
Adding the new task to beat schedule will be done in the next commit.

A unique key constraint has been added to Notifications to ensure that the row is not added twice. Any index or constraint can affect performance, but this unique constraint should not affect it enough for us to notice.
2019-11-06 10:49:46 +00:00
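A sketch of the two pieces described, with assumed names: the unique constraint that makes reprocessing idempotent, and a check for row numbers a finished job never created:

```python
from sqlalchemy import Column, Integer, UniqueConstraint
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Notification(Base):  # illustrative fragment of the real model
    __tablename__ = 'notifications'
    id = Column(UUID(as_uuid=True), primary_key=True)
    job_id = Column(UUID(as_uuid=True))
    job_row_number = Column(Integer)
    __table_args__ = (
        # reprocessing a row fails cleanly instead of inserting a duplicate
        UniqueConstraint('job_id', 'job_row_number'),
    )

def find_missing_row_numbers(session, job):
    existing = {
        row_number
        for (row_number,) in session.query(Notification.job_row_number)
                                    .filter(Notification.job_id == job.id)
    }
    return [n for n in range(job.notification_count) if n not in existing]
```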
Leo Hemsted
975af113e4 Merge pull request #2639 from alphagov/remove-loadtesting-db-migration
remove loadtesting from the database
2019-11-06 10:49:46 +00:00
Katie Smith
fb80a9c92e Update insert_update_notification_history to take a query limit
The nightly job to delete email notifications was failing because it was
timing out (`psycopg2.errors.QueryCanceled: canceling statement due to statement timeout`).

This adds a query limit to the query which inserts or updates
notification history so that it only updates a maximum of 10000 rows at
a time.
2019-10-14 16:51:46 +01:00
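A sketch of that batching, assuming a `Notification` model with these columns and a hypothetical upsert helper; the 10000 figure is from the commit message:

```python
def insert_update_notification_history(session, notification_type, cutoff,
                                       query_limit=10000):
    offset = 0
    while True:
        ids = [
            id_ for (id_,) in (
                session.query(Notification.id)
                .filter(Notification.notification_type == notification_type,
                        Notification.created_at < cutoff)
                .order_by(Notification.created_at)
                .limit(query_limit)
                .offset(offset)
                .all()
            )
        ]
        if not ids:
            break
        upsert_ids_into_history(session, ids)  # assumed upsert helper
        session.commit()  # each statement stays well under the timeout
        offset += query_limit
```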
Rebecca Law
2c41d6130c Merge pull request #2617 from alphagov/notification-count-for-job
Return count of notifications in the database for a job
2019-10-03 16:54:30 +01:00
Rebecca Law
7fc7d99dac Update the new endpoint to return a 404 if the job or service id is not found.
All our endpoints should check that the params are valid - this is an easy way to do that and is standard for our endpoints.
I reverted the query to just filter by job id.
2019-10-03 14:58:49 +01:00