For notification and notification_history we do an upsert. Here, because the
inbound_sms table is never updated, only ever inserted into once (signified by
the lack of an updated_at field), an upsert is unnecessary.
Therefore, if for some reason the delete statement fails as part of
moving data into the inbound_sms_history table, we can simply
ignore any db conflicts raised by a rerun of
`delete_inbound_sms_older_than_retention`.
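A rough sketch of what an idempotent rerun could look like, assuming a SQLAlchemy/Postgres setup and an assumed `InboundSmsHistory` model mapped to inbound_sms_history (not the actual task code):

```python
from sqlalchemy.dialects.postgresql import insert

def copy_inbound_sms_to_history(session, rows):
    # rows: dicts selected from inbound_sms that are past the retention period.
    # InboundSmsHistory is an assumed declarative model for inbound_sms_history.
    if not rows:
        return
    stmt = insert(InboundSmsHistory.__table__).values(rows)
    # if a previous run copied some rows but the delete then failed, re-inserting
    # them would raise a primary key conflict - skip those rows instead of
    # failing the whole task
    stmt = stmt.on_conflict_do_nothing(index_elements=["id"])
    session.execute(stmt)
```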
- Check that the new history rows have the right keys
- Improve the model and get rid of the old revision version
- Add the updated migration file
- Test the data when inserting into inbound sms history
Before, the search term was either:
- an email address (or partial email address)
- a phone number (or partial phone number)
Now it can also be:
- a reference (or partial reference)
We can take a pretty good guess, by looking at the search term, whether
the user is searching by email address or phone number. This
helps us:
- only show relevant notifications
- normalise the search term to give the best chance of matching what we
store in the `normalised_to` field
However we can’t look at a search term and guess whether it’s a
reference, because a reference could take any format. Therefore if the
user hasn’t told us what kind of thing their search term is, we should
stop trying to guess.
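For illustration, a minimal sketch of that guess (not the app's actual heuristics - the regex here is an assumption):

```python
import re
from typing import Optional

def guess_search_term_type(search_term: str) -> Optional[str]:
    # anything with an @ is probably an email address
    if "@" in search_term:
        return "email"
    # digits, optionally with spaces, dashes, brackets or a leading +
    if re.fullmatch(r"\+?[\d\s\-()]+", search_term):
        return "phone"
    # could be a reference, which can take any format - stop guessing
    return None
```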
We have a team who want to find emails that might have been sent to an
incorrect address. Therefore they can’t search by the correct address,
because it won’t match.
What they do have is the reference number of the user’s application,
which is also stored in the `client_reference` field on the
notification.
So when a user is searching we should also look at the client reference,
as well as the recipient, allowing the user to enter either in the
search box.
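Roughly what that search could look like, assuming a SQLAlchemy Notification model with `normalised_to` and `client_reference` columns (a sketch, not the real query):

```python
from sqlalchemy import or_

def filter_by_search_term(query, search_term):
    like_term = f"%{search_term}%"
    # when we don't know what kind of thing the term is, match it against
    # both the recipient and the client reference
    return query.filter(
        or_(
            Notification.normalised_to.like(like_term),
            Notification.client_reference.ilike(like_term),
        )
    )
```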
we generally aim to share the load between the two providers equally
(more or less). When one provider has struggled, we deprioritise them;
this commit adds a function that gradually restores the balance. It checks
every five minutes, and if it's been more than an hour since the providers
were last changed, it adjusts them towards a 50/50 split. Except
it's not quite 50/50 due to #reasons (we want to slightly favour MMG):
it's actually 60/40. That's defined in a new dict in config.py.
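a sketch of the restore logic under those assumptions - the names (and the dict) here are illustrative, not the real config keys:

```python
from datetime import datetime, timedelta

TARGET_PRIORITIES = {"mmg": 60, "firetext": 40}  # illustrative 60/40 split
ADJUSTMENT_STEP = 10
QUIET_PERIOD = timedelta(hours=1)

def restore_provider_balance(providers, now=None):
    # providers: objects assumed to have identifier, priority and updated_at.
    # Called every five minutes from the beat schedule; only acts if nothing
    # has changed the providers within the last hour.
    now = now or datetime.utcnow()
    if any(now - p.updated_at < QUIET_PERIOD for p in providers):
        return
    for provider in providers:
        target = TARGET_PRIORITIES[provider.identifier]
        if provider.priority < target:
            provider.priority = min(provider.priority + ADJUSTMENT_STEP, target)
        elif provider.priority > target:
            provider.priority = max(provider.priority - ADJUSTMENT_STEP, target)
```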
The table will contain notification ids for services that have returned letters. This will make it easy to query the data in Notification_history since we can join on the primary key.
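For example, a join along these lines becomes straightforward (ReturnedLetter and its notification_id column are assumed names, not the actual model):

```python
def get_returned_letter_notifications(session, service_id):
    # join on notification_history's primary key via the id stored in the
    # returned letters table
    return (
        session.query(NotificationHistory)
        .join(ReturnedLetter, ReturnedLetter.notification_id == NotificationHistory.id)
        .filter(NotificationHistory.service_id == service_id)
        .all()
    )
```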
It is likely this endpoint will need additional data for the UI to display; for the first iteration this will enable the /uploads page to show both letters and jobs. Only letters uploaded via the UI are included in the resultset.
Add file name to resultset.
the nightly task won't be affected; it'll just trigger three times as many
sub-tasks.
this doesn't need to be a two-part deploy because we only trigger this
overnight, so as long as the deploy completes in daytime we don't need
to worry about celery task signatures
previously we checked the notifications table and, if the results were
zero, checked the notification history table to see if there was data
in there. So even when we know the data isn't in notifications, we're still
querying it. These queries take half a second per service, and we're
doing at least ten of them for each of the five thousand services we have in
notify. Most of these services have no data in either table on any
given day, so we can reduce the number of queries we do by only
checking one table.
Check the data retention for a service, and then, if the date is older
than the retention period, get the data from the history table.
NOTE: This requires that the delete tasks haven't run yet for the day!
If your retention is three days, this will look in the Notification
table for data from three days ago - expecting that shortly after the
task finishes, we'll delete that data.
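A sketch of that selection logic, assuming the usual Notification / NotificationHistory models (illustrative, not the actual helper):

```python
from datetime import date, timedelta

def table_to_query(process_day, retention_days, today=None):
    # if the day we're processing is older than the service's retention,
    # the rows have already been moved to notification_history; otherwise
    # they're still in notifications (assuming today's delete task hasn't
    # run yet)
    today = today or date.today()
    if process_day < today - timedelta(days=retention_days):
        return NotificationHistory
    return Notification
```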
it's not acceptable for a constantly failing provider to take 50 minutes
to drain (5x reducing priority by 10). But similarly, we need _some_
delay, or a handful of concurrent failures will completely turn off a
provider, rendering the whole exercise kinda pointless. Setting the
delay before it tries to reduce the priority again to one minute is nice
because it means that if one request times out and returns a 502, then any
other requests that are in flight at that time will time out before the
one minute is up and won't trigger another switch, but any requests made
after the switch that take sixty seconds to time out will still affect it.
making sure that we don't close the transaction early - we need
to keep the transaction open because the select has the with_for_update
clause on it to lock the table.
also make sure the tests clean up after themselves, as they're adding
history rows etc
the function no longer makes sense now that we send through both at
the same time. it's mostly just used in old tests that we'll end up
rewriting shortly anyway
retrieve the sms providers from the DB, and decrease the chosen
provider's priority by 10 while increasing the other's by 10.
add a check to ensure we never decrease below 0 or increase above 100
- this is per provider; we don't check that the two add up to 100 or
anything. If the values are outside of this range (eg: set via the UI)
then they'll probably* fix themselves at some point - we've added tests
to document these cases.
Use with_for_update to ensure that the method can only run once at a
time - other invocations of the function will be held on that line until
the currently running one ends and commits the transaction. This doesn't
affect anyone doing things from the UI.
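Roughly the pattern, assuming a ProviderDetails model with identifier and priority columns (a sketch, not the real function):

```python
def adjust_provider_priorities(session, deprioritised_identifier, step=10):
    # with_for_update() makes the select take row locks, so any concurrent
    # invocation blocks here until this transaction commits
    providers = (
        session.query(ProviderDetails)
        .filter(ProviderDetails.notification_type == "sms")
        .with_for_update()
        .all()
    )
    for provider in providers:
        if provider.identifier == deprioritised_identifier:
            provider.priority = max(provider.priority - step, 0)
        else:
            provider.priority = min(provider.priority + step, 100)
    session.commit()
```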
The assumption was that S3 would throw an exception if the object was uploaded twice. That's not the case: the default behaviour is that if a file already exists it will be overwritten. So it is completely safe to run the task from the alert.
It also means that we don't need to wait 4 hours 15 minutes. Shall I decrease the amount of time before restarting the task?
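For illustration, with boto3 (bucket and key names are made up) - the second put simply replaces the object rather than raising:

```python
import boto3

s3 = boto3.client("s3")

s3.put_object(Bucket="example-letters-bucket", Key="2019-01-01/file.pdf", Body=b"first")
# no exception - the existing object is overwritten, which is why rerunning
# the upload task is safe
s3.put_object(Bucket="example-letters-bucket", Key="2019-01-01/file.pdf", Body=b"second")
```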
The group by for the query was wrong, which would result in 2 rows with different totals but the same unique key, so the second row would update the first row - meaning we had incorrect numbers for the billing data.
Because some of the data had null for the sent_by column, the select would turn the null into dvla, but that same function was not used in the group by. So any time we had missing sent_by data we would end up with 2 rows where one would overwrite the other.
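In SQLAlchemy terms the fix looks something like this (table and column names are illustrative) - the same coalesce expression has to appear in both the select and the group by:

```python
from sqlalchemy import func

def billing_rows(session):
    sent_by = func.coalesce(NotificationHistory.sent_by, "dvla").label("sent_by")
    return (
        session.query(
            NotificationHistory.service_id,
            sent_by,
            func.count().label("billable_count"),
        )
        # grouping on the coalesced expression means null and 'dvla' collapse
        # into a single row instead of two rows that overwrite each other
        .group_by(NotificationHistory.service_id, sent_by)
        .all()
    )
```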
This is only necessary because there is currently a job that is old, but had 1 row created a couple of days later. So now there is 1 notification for the job where the rest have been purged.
Sometimes a job finishes but has missed a row in the middle. It is a mystery why this is happening; it could be that the task to save the notifications has been dropped.
So until we solve the mystery, let's find the missing rows and process them.
A new scheduled task has been added to find any "finished" jobs that do not have enough notifications created. If there are missing notifications, the task processes those rows for the job.
Adding the new task to beat schedule will be done in the next commit.
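A sketch of roughly what the task could look for, assuming Job.notification_count records how many rows the job should have created (names are assumptions, not the real query):

```python
from sqlalchemy import func

def find_jobs_with_missing_rows(session):
    created_counts = (
        session.query(
            Notification.job_id,
            func.count(Notification.id).label("created"),
        )
        .group_by(Notification.job_id)
        .subquery()
    )
    # finished jobs where fewer notifications exist than the job expected
    return (
        session.query(Job)
        .join(created_counts, created_counts.c.job_id == Job.id)
        .filter(
            Job.job_status == "finished",
            created_counts.c.created < Job.notification_count,
        )
        .all()
    )
```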
A unique key constraint has been added to Notifications to ensure that the row is not added twice. Any index or constraint can affect performance, but this unique constraint should not affect it enough for us to notice.
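The migration for a constraint like that might look something like this - the column pair (job_id, job_row_number) is an assumption for illustration:

```python
from alembic import op

def upgrade():
    # one notification per (job, row) pair - prevents the same CSV row being
    # saved twice
    op.create_unique_constraint(
        "uq_notifications_job_row_number",
        "notifications",
        ["job_id", "job_row_number"],
    )

def downgrade():
    op.drop_constraint("uq_notifications_job_row_number", "notifications")
```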
The nightly job to delete email notifications was failing because it was
timing out (`psycopg2.errors.QueryCanceled: canceling statement due to statement timeout`).
This adds a limit to the query which inserts or updates
notification history so that it only updates a maximum of 10,000 rows at
a time.
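The batching pattern looks roughly like this (the helper and column names are assumptions, not the real task):

```python
BATCH_SIZE = 10_000

def move_notifications_to_history_in_batches(session, service_id, older_than):
    # move rows in limited batches, committing between them, so no single
    # statement runs long enough to hit the statement timeout
    while True:
        id_rows = (
            session.query(Notification.id)
            .filter(
                Notification.service_id == service_id,
                Notification.created_at < older_than,
            )
            .limit(BATCH_SIZE)
            .all()
        )
        ids = [row.id for row in id_rows]
        if not ids:
            break
        insert_update_notification_history(session, ids)  # assumed helper name
        session.query(Notification).filter(Notification.id.in_(ids)).delete(
            synchronize_session=False
        )
        session.commit()
```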
All our endpoints should perform a check that the params are valid - this is an easy way to do that and is standard for our endpoints.
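For example, a jsonschema-based check along these lines (the schema here is illustrative, not one of our real request schemas):

```python
from flask import abort
from jsonschema import ValidationError, validate

request_schema = {
    "type": "object",
    "properties": {"job_id": {"type": "string"}},
    "required": ["job_id"],
    "additionalProperties": False,
}

def check_request_params(params: dict) -> None:
    # reject anything that doesn't match the expected params with a 400
    try:
        validate(instance=params, schema=request_schema)
    except ValidationError as e:
        abort(400, description=e.message)
```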
I reverted the query to just filter by job id.