it's not acceptable for a constantly failing provider to take 50 minutes
to drain (five reductions of priority by 10). But similarly, we need _some_
delay, or a handful of concurrent failures will completely turn off a
provider, rendering the whole exercise fairly pointless. Setting the
delay before we try to reduce priority again to one minute works well:
if one request times out and returns a 502, any other requests that are
in flight at that time will also time out before the one minute is up
and won't trigger another switch, but any requests made after the switch
that take sixty seconds to time out will still affect it.
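A minimal sketch of that cooldown check, using illustrative names (ProviderDetails, priority_updated_at, PRIORITY_COOLDOWN) rather than the real model or fields:
```python
from datetime import datetime, timedelta

PRIORITY_COOLDOWN = timedelta(minutes=1)  # assumed value, per the reasoning above

def should_reduce_priority(provider):
    # Only reduce again if the last reduction was more than a minute ago, so a
    # burst of concurrent failures counts as a single reduction.
    if provider.priority_updated_at is None:
        return True
    return datetime.utcnow() - provider.priority_updated_at > PRIORITY_COOLDOWN
```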
Make sure that we don't close the transaction early: we need to keep the
transaction open because the select has the with_for_update clause on it
to take the lock.
Also make sure the tests clean up after themselves, as they're adding
history rows etc.
The function no longer makes sense now that we send both through at the
same time. It's mostly just used in old tests that we'll end up rewriting
shortly anyway.
Retrieve the SMS providers from the DB, and decrease the chosen
provider's priority by 10 while increasing the other's by 10.
Add a check to ensure we never decrease below 0 or increase above 100.
This is per provider - we don't check that the two add up to 100 or
anything. If the values are outside of this range (e.g. set via the UI)
then they'll probably* fix themselves at some point - we've added tests
to document these cases.
Use with_for_update to ensure that only one invocation of the method can
run at a time - other invocations of the function will be held on that
line until the currently running one ends and commits the transaction.
This doesn't affect anyone doing things from the UI.
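A rough sketch of that flow, assuming illustrative Flask-SQLAlchemy names (ProviderDetails, db.session) rather than the real function:
```python
def adjust_provider_priorities(failing_provider_identifier):
    # with_for_update() takes row locks, so a second invocation of this
    # function blocks here until the first one commits.
    providers = ProviderDetails.query.filter_by(
        notification_type='sms'
    ).with_for_update().all()

    for provider in providers:
        if provider.identifier == failing_provider_identifier:
            provider.priority -= 10
        else:
            provider.priority += 10
        # Clamp per provider; we don't require the two priorities to sum to 100.
        provider.priority = max(0, min(100, provider.priority))

    # Committing releases the locks and lets any waiting invocation proceed.
    db.session.commit()
```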
The assumption was that S3 would throw an exception if the object was uploaded twice. That's not the case: the default behaviour is that if a file already exists it will be overwritten, so it is completely safe to run the task from the alert.
It also means that we don't need to wait 4 hours 15 minutes. Shall I decrease the amount of time before restarting the task?
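A quick illustration of that behaviour with boto3 (the bucket and key are placeholders):
```python
import boto3

s3 = boto3.client("s3")

# The second call does not raise - it simply replaces the first object, which
# is why re-running the task from the alert is safe.
s3.put_object(Bucket="letters-pdf", Key="2019-04-01/NOTIFY.REF.pdf", Body=b"%PDF- ...")
s3.put_object(Bucket="letters-pdf", Key="2019-04-01/NOTIFY.REF.pdf", Body=b"%PDF- ...")
```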
The GROUP BY for the query was wrong, which would result in two rows with different totals but the same unique key, so the second row would update the first row. That meant we had incorrect numbers for the billing data.
Some of the data had null for the sent_by column, and the SELECT would turn the null into 'dvla', but that same function was not used in the GROUP BY. So any time we had missing sent_by data we would end up with two rows, where one would overwrite the other.
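In SQLAlchemy terms the fix looks something like the sketch below: use the same coalesce expression in both the select and the group_by. The model and column names are illustrative, not the exact billing query:
```python
from sqlalchemy import func

sent_by = func.coalesce(Notification.sent_by, 'dvla')

query = db.session.query(
    Notification.service_id,
    sent_by.label('sent_by'),
    func.sum(Notification.billable_units).label('billable_units'),
).group_by(
    Notification.service_id,
    # Group by the coalesced expression, not the raw column, so null and
    # 'dvla' can't produce two rows with the same key.
    sent_by,
)
```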
This is only necessary because there is currently a job that is old, but had one row created a couple of days later. So now there is one notification for the job where the rest have been purged.
Sometimes a job finishes but has missed a row in the middle. It is a mystery why this is happening; it could be that the task to save the notifications has been dropped.
So until we solve that mystery, let's find the missing rows and process them.
A new scheduled task has been added to find any "finished" jobs that do not have enough notifications created. If there are missing notifications, the task processes those rows for the job.
Adding the new task to the beat schedule will be done in the next commit.
A unique key constraint has been added to Notifications to ensure that the row is not added twice. Any index or constraint can affect performance, but this unique constraint should not affect it enough for us to notice.
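The constraint would look roughly like the migration below; the constraint name and columns (job_id, job_row_number) are assumptions for illustration:
```python
from alembic import op

def upgrade():
    # Reject a second notification for the same row of the same job.
    op.create_unique_constraint(
        'uq_notifications_job_id_job_row_number',
        'notifications',
        ['job_id', 'job_row_number'],
    )

def downgrade():
    op.drop_constraint(
        'uq_notifications_job_id_job_row_number', 'notifications', type_='unique'
    )
```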
The nightly job to delete email notifications was failing because it was
timing out (`psycopg2.errors.QueryCanceled: canceling statement due to statement timeout`).
This adds a limit to the query which inserts or updates notification
history so that it only updates a maximum of 10,000 rows at a time.
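A simplified sketch of that batching, looping until nothing is left; the model, helper and retention period here are illustrative, not the real task:
```python
from datetime import datetime, timedelta

def delete_old_email_notifications():
    cutoff = datetime.utcnow() - timedelta(days=7)  # assumed retention period
    while True:
        # Cap each pass at 10,000 rows so the statement stays under the timeout.
        ids = [
            row.id
            for row in db.session.query(Notification.id)
            .filter(
                Notification.notification_type == 'email',
                Notification.created_at < cutoff,
            )
            .limit(10000)
        ]
        if not ids:
            break
        insert_update_notification_history(ids)  # hypothetical helper
        db.session.query(Notification).filter(Notification.id.in_(ids)).delete(
            synchronize_session=False
        )
        db.session.commit()
```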
All our endpoints should perform a check that the params are valid - this is an easy way to check that and is standard for our endpoints.
I reverted the query to just filter by job id.
When a service asks for branding I often go and:
- set the branding on the organisation as well
- set the branding on all the organisation’s services
The latter can be quite time consuming, but it does save effort since
existing services from the same organisation won’t have to request
branding. It also improves the consistency of the communications that
users are receiving.
This commit automates that process by applying the branding update to
any services belonging to the organisation, if those services don’t
already have their own custom branding set up.
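Something along these lines, with illustrative model and attribute names (Organisation.services, email_branding) standing in for the real ones:
```python
def apply_branding_to_organisation_services(organisation, branding):
    organisation.email_branding = branding
    for service in organisation.services:
        # Leave services alone if they've already set their own custom branding.
        if service.email_branding is None:
            service.email_branding = branding
    db.session.commit()
```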
Now we consistently use the created_at date, so we can always get the right file location and name.
The previous updates to this code were trying to solve the problem of a PDF being created at 17:29 but not being ready to upload until 17:31, after the antivirus and validation checks.
In those cases we would have trouble finding the file.
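A hypothetical illustration of deriving the location from created_at rather than the upload time (the key format is made up for the example):
```python
def letter_s3_key(notification):
    # Use created_at so a letter created at 17:29 but uploaded at 17:31 still
    # lands in the folder we later look in.
    return '{}/NOTIFY.{}.pdf'.format(
        notification.created_at.date().isoformat(),
        notification.reference,
    )
```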
When we cancel a job, we need to check if all notifications are
already in the database. Until now, we were querying for all
notification objects in the database and counting them in the
admin app, which runs into pagination problems for large jobs
and could time out for very large jobs.
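The replacement is essentially a count done in the database; a minimal sketch with assumed DAO and model names:
```python
from sqlalchemy import func

def dao_notification_count_for_job(job_id):
    # Let the database do the counting instead of paginating objects in admin.
    return db.session.query(func.count(Notification.id)).filter(
        Notification.job_id == job_id
    ).scalar()
```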
We want a way of getting the hidden precompiled template from admin,
so this adds an endpoint which gets the template or creates it if it doesn't
exist. The function to get or create the hidden template already existed
but has been moved to the template DAO now that it is used in more
places.
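The DAO function is roughly a get-or-create; a sketch under assumed names (Template, 'Pre-compiled PDF'), not the exact implementation:
```python
def dao_get_or_create_precompiled_template(service):
    template = Template.query.filter_by(
        service_id=service.id, hidden=True, template_type='letter'
    ).first()
    if template is None:
        template = Template(
            service_id=service.id,
            name='Pre-compiled PDF',  # assumed name for the hidden template
            template_type='letter',
            hidden=True,
        )
        db.session.add(template)
        db.session.commit()
    return template
```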
bst_date is a date field. Comparing dates with datetimes in postgres
gets confusing and dangerous. See this example, where a date evaluates
as older than midnight that same day.
```
notification_api=# select '2019-04-01' >= '2019-04-01 00:00';
?column?
----------
f
(1 row)
```
By only using dates everywhere, we reduce the chance of these bugs happening.
Explicitly select from service, and join organisation, the
free_allowance_remainder subquery and the ft_billing table. Being
explicit reduces confusion about what tables we're joining and how we're
constraining those joins.
Also remove references to AnnualBilling, since we've already got the
free sms allowance from the free_allowance_remainder subquery.
If there are no rows for a service in ft_billing, we should still
return their allowance (with 0 fragments used).
To do this, we need to build the query starting from AnnualBilling and
joining onto FactBilling, rather than the other way round. We also need
to account for the possibility of the sums being null by coalescing them
to 0.
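The shape of that query, sketched with guessed column names (free_sms_fragment_limit, billable_units) and without the financial-year filtering:
```python
from sqlalchemy import func

query = (
    db.session.query(
        AnnualBilling.service_id,
        AnnualBilling.free_sms_fragment_limit,
        # Coalesce so services with no ft_billing rows report 0 fragments used.
        func.coalesce(func.sum(FactBilling.billable_units), 0).label('fragments_used'),
    )
    # Start from AnnualBilling and outer-join FactBilling, so the allowance is
    # returned even when there are no billing rows for the service.
    .outerjoin(FactBilling, FactBilling.service_id == AnnualBilling.service_id)
    .group_by(AnnualBilling.service_id, AnnualBilling.free_sms_fragment_limit)
)
```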
Previously we were doing it based on their email address. This will also
apply it if they self-select as a GP surgery, even if they don’t have an
NHS email address.
Now, looking at the updated_at date, we get the alert if the notification was created at 17:29 and updated to the created status at 17:30, so the letter is in the next day's bucket.
I'm not sure if I want to make this change: there isn't an index on updated_at, so the query might be slow.
This is the second commit in the series to add organisation_id to Service.
- Data migration to update services.organisation_id from data in organisation_to_service
(The rollback will lose any updates to organisation unless the script is updated to set organisation_to_service from service.organisation_id.)
- Update Service.organisation relationship to a ForeignKey relationship to Organisation.
- Update Organisation.services to a backref relationship to Service.
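The data migration step might look roughly like this (table and column names taken from the description above; the real migration may differ):
```python
from alembic import op

def upgrade():
    # Copy the organisation for each service from the old mapping table.
    op.execute(
        """
        UPDATE services
        SET organisation_id = organisation_to_service.organisation_id
        FROM organisation_to_service
        WHERE organisation_to_service.service_id = services.id
        """
    )
```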