Years ago we started to implement a way to schedule a notification. We hit a problem, never came up with a good solution, and the feature never made it back to the top of the priority list.
This PR removes the code for `scheduled_for`. There will be another PR to drop the `scheduled_notifications` table and remove the `schedule_notifications` service permission.
Unfortunately, I don't think we can remove the `scheduled_for` attribute from the `notification.serialized` method, because our clients might fail if something is missing. For now I have left it in but defaulted the value to `None`.
By serialising these straight away we can:
- avoid going back to the database later, potentially closing the connection sooner
- potentially cache the serialised data, meaning we don't touch the database at all
This commit changes the code in the POST notification endpoint to handle a serialised template (i.e. a `dict`) rather than a database object. This is the first step towards being able to cache the template and not hit the database on every request.
There should be no functional changes here; it's just refactoring. There are some changes to the tests where function signatures have changed.
Importing the template schema has to be done at function level, otherwise Marshmallow gets weird.
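As a sketch, the deferred import looks something like this (the module path and the Marshmallow 2.x `.data` access are assumptions, not the exact code):

```python
def serialise_template(template):
    # Imported inside the function rather than at module level;
    # importing the schema at import time is what upsets Marshmallow.
    from app.schemas import template_schema  # module path is an assumption

    return template_schema.dump(template).data  # Marshmallow 2.x API
```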
This commit also copies the `JSONModel` class from the admin app, which
turns serialised data (a dict made from JSON) into an object on which
certain predefined properties are allowed.
This means we can still do the caching of serialised data, without
having to change too much of the code in the app, or make it ugly by
sprinkling dict lookups everywhere.
We're not copying all of `JSONModel` from the admin app, just the bits we need. We don't need to compare or hash these objects; they're just used for lookups. And redefining `__getattribute__` scares Leo.
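For reference, the cut-down version looks roughly like this (a sketch; the subclass and its property names are illustrative):

```python
class JSONModel:
    ALLOWED_PROPERTIES = set()

    def __init__(self, _dict):
        self._dict = _dict or {}
        # Only the predefined properties become attributes; anything
        # else in the dict stays hidden behind self._dict.
        for property in self.ALLOWED_PROPERTIES:
            setattr(self, property, self._dict[property])


class SerialisedTemplate(JSONModel):
    ALLOWED_PROPERTIES = {'id', 'name', 'template_type', 'content'}


template = SerialisedTemplate(
    {'id': '1234', 'name': 'Reminder', 'template_type': 'sms', 'content': 'Hi'}
)
template.name  # 'Reminder', with no dict lookups at the call site
```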
In the previous PR I removed the `update_notification` method to reduce the need for another update query. However, that meant the notification was marked as delivered without an updated_at timestamp.
It is weird to set `updated_at` when we create the notification. So is this a better fix, or do I put the update back now?
I recommend we push this fix now.
After the commit we issue two calls to the database, one to get the service and one to get the notification. This is because, after the commit, the ORM wants to ensure that the data model objects are up to date.
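This is SQLAlchemy's default `expire_on_commit=True` behaviour. A minimal sketch of what happens (the engine, model and id are assumed to exist):

```python
from sqlalchemy.orm import sessionmaker

# expire_on_commit=True is the default: commit() expires the loaded
# state of every object in the session.
Session = sessionmaker(bind=engine, expire_on_commit=True)
session = Session()

notification = session.query(Notification).get(notification_id)
session.commit()

# Each of these accesses now triggers a fresh SELECT: one to reload
# the notification's columns, one to lazy-load the service
# relationship. Those are the two calls we see after the commit.
notification.status
notification.service
```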
So far this is just a proof of concept: the letter flow still needs to be updated, we should be able to get rid of research mode, and it needs some tidying up.
At the moment we’re not consistent:
- Precompiled (API and one-off):
  - `to` has the whole address
  - `normalised_to` has nothing
- Templated (API, CSV and one-off):
  - `to` has the first line of the address
  - `normalised_to` has nothing
This commit makes us consistently store the whole address in the `to` field. We think that people might want to search by postcode, not just the first line of the address.
This commit also starts to populate the `normalised_to` field with the address lowercased and with all spaces removed, to make it easier to search on.
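A sketch of that normalisation (the helper name is illustrative, not the actual code):

```python
def normalise_address(address):
    # Lowercase and remove all whitespace (spaces and newlines), so
    # "SW1A 1AA" and "sw1a1aa" end up as the same searchable value.
    return ''.join(address.lower().split())


normalise_address('10 Downing Street\nLondon\nSW1A 2AA')
# -> '10downingstreetlondonsw1a2aa'
```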
Bumped notifications-utils to 3.7.0. Version 3.7.0 includes the `convert_utc_to_bst` and `convert_bst_to_utc` functions and the `LETTER_PROCESSING_DEADLINE` constant, so these have been removed from this repo, and anywhere using them has been updated to get them from `notifications-utils`.
Also bumped pytest by a patch version to bring in a bug fix.
This commit modifies the code paths the admin app uses to send one-off emails and text messages so that they also accept letters.
This mostly worked already; the two changes were:
- making sure that one-off letters are processed by the correct task,
from the correct queue
- one-off letters sent from a service in research mode don’t get put on
a queue and go straight to `delivered` (because we don’t want to send
them for real)
We've run into issues with redis expiring keys while we try to write to them; short-lived redis TTLs aren't really sustainable for keys where we mutate the state. Template usage is a hash held in redis where we increment a count keyed by `template_id` each time a message is sent for that template. But if the key expires, `hincrby` (the redis command for incrementing a value in a hash) will re-create an empty hash. This is no good, as we need the hash to be populated with the last seven days' worth of data, which we then increment further. We can't tell whether the `hincrby` created the key, so a different approach entirely was needed:
* New redis key: `<service_id>-template-usage-<YYYY-MM-DD>`. Note: this YYYY-MM-DD is BST time, so it lines up nicely with the ft_billing table
* Incremented from `process_notification`; if the key doesn't exist yet, it'll be created then.
* Expiry set to 8 days every time it's incremented.
Then, at read time, we'll just read the last eight days of keys from
Redis, and sum them up. This works because we're only ever incrementing
from that one place - never setting wholesale, never recreating the
data from scratch. So we know that if the data is in redis, then it is
good and accurate data.
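A minimal sketch of the write and read paths (the `redis_client` setup is assumed, and the BST date handling is elided in favour of UTC for brevity):

```python
from datetime import datetime, timedelta

import redis

redis_client = redis.Redis()  # connection details assumed


def day_key(service_id, day):
    return f'{service_id}-template-usage-{day.strftime("%Y-%m-%d")}'


def increment_template_usage(service_id, template_id):
    # hincrby creates the hash if it doesn't exist yet, which is safe
    # here because we only ever increment, never set wholesale.
    key = day_key(service_id, datetime.utcnow())  # the real key uses BST
    redis_client.hincrby(key, str(template_id), 1)
    # Re-set the expiry on every write; 8 days comfortably covers the
    # 7-day window we read back.
    redis_client.expire(key, int(timedelta(days=8).total_seconds()))


def get_template_usage(service_id):
    # Read the last 8 days of keys and sum the per-template counts.
    totals = {}
    for days_ago in range(8):
        day = datetime.utcnow() - timedelta(days=days_ago)
        for template_id, count in redis_client.hgetall(day_key(service_id, day)).items():
            totals[template_id] = totals.get(template_id, 0) + int(count)
    return totals
```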
One thing we *don't* know and *cannot* reason about is what no key in
redis means. It could be either of:
* This is the first message that the service has sent today.
* The key was deleted from redis for some reason.
Since we set the TTL so long, we'll never be writing to a key that previously expired. But if there is a redis (or operator) error and the key is deleted, then we'll have bad data; after any data loss we'll have to rebuild it.
This PR is a proposal to reduce the average number of log messages we see for a single notification from about 7 to 2.
Messaging would change from something like this:
February 2nd 2018, 15:39:05.885 Full delivery response from Firetext for notification: 8eda51d5-cd82-4569-bfc9-d5570cdf2126
{'status': ['0'], 'reference': ['8eda51d5-cd82-4569-bfc9-d5570cdf2126'], 'time': ['2018-02-02 15:39:01'], 'code': ['000']}
February 2nd 2018, 15:39:05.885 Firetext callback return status of 0 for reference: 8eda51d5-cd82-4569-bfc9-d5570cdf2126
February 2nd 2018, 15:38:57.727 SMS 8eda51d5-cd82-4569-bfc9-d5570cdf2126 sent to provider firetext at 2018-02-02 15:38:56.716814
February 2nd 2018, 15:38:56.727 Starting sending SMS 8eda51d5-cd82-4569-bfc9-d5570cdf2126 to provider at 2018-02-02 15:38:56.408181
February 2nd 2018, 15:38:56.727 Firetext request for 8eda51d5-cd82-4569-bfc9-d5570cdf2126 finished in 0.30376038211397827
February 2nd 2018, 15:38:49.449 sms 8eda51d5-cd82-4569-bfc9-d5570cdf2126 created at 2018-02-02 15:38:48.439113
February 2nd 2018, 15:38:49.449 sms 8eda51d5-cd82-4569-bfc9-d5570cdf2126 sent to the priority-tasks queue for delivery
To something like this:
February 2nd 2018, 15:39:05.885 Firetext callback return status of 0 for reference: 8eda51d5-cd82-4569-bfc9-d5570cdf2126
February 2nd 2018, 15:38:49.449 sms 8eda51d5-cd82-4569-bfc9-d5570cdf2126 created at 2018-02-02 15:38:48.439113
These changes are needed for monitoring the performance of the v2 endpoints. They were put in as a temporary measure whilst sustained performance testing was taking place.
- Disabled Redis, as there is a current connection limit of 256, which could slow down requests if all the connections are in use
- Added statsd timing to methods in the POST flow to help spot any bottlenecks
In other places the text we use for this error message is "Missing personalisation: name, date, thing". See:
- 72b108b694/app/template/rest.py (L125)
- 717c0510a3/app/notifications/rest.py (L206)
- 05a179c6ef/app/v2/template/post_template.py (L38)
For some reason this part of the codebase says "Template missing personalisation: …". This is inconsistent, and also confusing because it’s the API call that’s missing the personalisation, not the template itself.
This commit changes the error message to be consistent with the majority of the codebase, which uses the less confusing wording.
We no longer create a job. At the end of the POST there is no action to take, as we don't have any tasks to queue immediately: if it's a real notification it'll get picked up by the evening scheduled task. If it's a test notification, we create it with an initial status of sending so that we can be sure it'll never get picked up, and then we trigger the update-letter-notifications-to-sent-to-dvla task to set the sent-at/by.
There are three steps to this (sketched in code after the list):
1. Create a job
* Starts in status 'ready to send'
* Created by None, since it's from the API
* original file name 'letter submitted via api'
2. Create a single notification for that job
* job_row_number 0
* client reference if provided
* address line 1 as recipient
3. Trigger the build_dvla_file task
We know that all the notifications have been created for this job (since we just created them ourselves synchronously), so this will just create the dvla-format file for the job and upload it to S3.
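A rough sketch of those three steps; the function and queue names (`create_job`, `persist_notification`, `build_dvla_file`, `process-job`) are illustrative, based on the description above rather than the exact code:

```python
# 1. Create a job for the letter to hang off
job = create_job(
    service_id=service.id,
    template_id=template.id,
    job_status='ready to send',
    created_by=None,                               # it came from the API
    original_file_name='letter submitted via api',
    notification_count=1,
)

# 2. Create the single notification for that job
notification = persist_notification(
    job_id=job.id,
    job_row_number=0,
    client_reference=data.get('reference'),        # only if provided
    recipient=data['personalisation']['address_line_1'],
)

# 3. Trigger the task. All of this job's notifications already exist
#    (we just created the only one synchronously), so it can build the
#    dvla-format file and upload it to S3 straight away.
build_dvla_file.apply_async([str(job.id)], queue='process-job')
```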
When functions get as big as that, it's confusing to try and work out which arguments are which. By including a `*` as the first argument, we require that anyone calling the function has to use keyword arguments to reference the parameters.
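For example (a generic sketch; the function and parameter names are illustrative, not the actual signature):

```python
def persist_notification(*, template_id, template_version, recipient,
                         service, personalisation=None):
    # The bare * makes every parameter after it keyword-only.
    print(template_id, template_version, recipient, service, personalisation)


# This now raises "TypeError: persist_notification() takes 0 positional
# arguments", instead of silently mixing the values up:
#     persist_notification(1, 2, '07700 900123', 'service one')

# Call sites have to spell out which value is which:
persist_notification(
    template_id=1,
    template_version=2,
    recipient='07700 900123',
    service='service one',
)
```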