Getting the data for a day out of the database can be reasonably slow (a few hundred milliseconds), and if someone's viewing a service with no activity we don't want to run that query seven times every two seconds. So if there is no data in redis, then once we've fetched the data from the database we should put it in redis so we can grab it from there next time.
This'll happen in two cases:
* redis data is deleted
* the service sent no messages that day
Additionally, make sure that we convert cleanly from redis' return values (ascii strings) to unicode keys and integer counts.
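A minimal sketch of that conversion, assuming the raw hash comes back from something like redis-py's `hgetall` (the helper name here is hypothetical):
```
def decode_template_usage(raw_hash):
    # redis returns hash fields and values as ascii byte strings;
    # convert to unicode template ids and integer counts
    return {
        template_id.decode('utf-8'): int(count)
        for template_id, count in raw_hash.items()
    }
```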
New redis keys are partitioned per service per day. The new process is as follows (sketched after this list):
* require a count of days to filter by. Currently the admin app always passes 7.
* for each day, check and see if there's anything in redis. There won't
be if either a) redis is/was down or b) the service didn't send any
notifications that day
- if there isn't, go to the database and get a count out.
* combine all these stats together
* get the names/template types etc out of the DB at the end.
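A rough sketch of that flow - the helper names (`redis_get_usage`, `dao_fetch_usage_for_day`, `redis_set_usage`, `attach_template_metadata`) are hypothetical stand-ins for the real functions:
```
from collections import Counter
from datetime import date, timedelta

def get_template_statistics(service_id, limit_days):
    combined = Counter()
    for offset in range(limit_days):
        day = date.today() - timedelta(days=offset)
        stats = redis_get_usage(service_id, day)
        if stats is None:
            # nothing cached: redis is/was down, or no messages that day
            stats = dao_fetch_usage_for_day(service_id, day)
            # cache it so we can grab it from redis next time
            redis_set_usage(service_id, day, stats)
        combined.update(stats)
    # look the template names/types up once, at the end
    return attach_template_metadata(combined)
```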
It's used in a few places - it should definitely know what timezones are and return datetimes rather than dates, since dates are hard to work with when you're trying to figure out how tz-aware they are.
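For illustration, a hypothetical boundary helper that always hands back an explicitly tz-aware datetime (the function name is made up):
```
from datetime import datetime, time, timezone

def day_start(day):
    # return an explicit, tz-aware datetime instead of a bare date,
    # so callers never have to guess whether the value is tz-aware
    return datetime.combine(day, time.min, tzinfo=timezone.utc)
```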
We should be very careful about when we get data from NotificationHistory - this should probably only happen from scheduled tasks. `dao_get_template_usage` is only called from the template statistics rest endpoint, so it shouldn't ever hit template history.
Also, moved tests out to a new file to break up the 2k-line test file a bit.
* If Monday or Tuesday, check for letters still sending after 4 days.
* If Saturday or Sunday, do nothing.
* If Wednesday, Thursday or Friday, check for letters still sending after 2 days (see the sketch below).
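A sketch of that weekday rule (the function name is hypothetical; Monday is 0 in Python's `weekday()`):
```
from datetime import timedelta

def still_sending_lookback(today):
    weekday = today.weekday()          # Monday=0 ... Sunday=6
    if weekday in (5, 6):              # Saturday, Sunday: do nothing
        return None
    if weekday in (0, 1):              # Monday, Tuesday: the window
        return timedelta(days=4)       # spans the weekend
    return timedelta(days=2)           # Wednesday-Friday
```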
Added a test for Tuesday, and corrected the other tests after the correction to the query.
Catches the requests exception for document-download-api calls, logs
a warning and returns a matching response code and message.
Connection errors to document download result in a 503 response to the
user.
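A sketch of that error handling, assuming the client raises the standard `requests` exceptions (the surrounding names are illustrative):
```
import requests
from flask import current_app, jsonify

def upload_or_error(upload, *args):
    # `upload` stands in for the real document-download client call
    try:
        return upload(*args)
    except requests.HTTPError as e:
        # pass the downstream status code and message back to the caller
        current_app.logger.warning('Document download request failed: %s', e)
        return jsonify(message=str(e)), e.response.status_code
    except requests.RequestException as e:
        # connection errors and the like become a 503 for the user
        current_app.logger.warning('Cannot reach document download: %s', e)
        return jsonify(message='Document download is unavailable'), 503
```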
Adds support for a new personalisation value type: file upload.
File uploads are represented as a dictionary with a "file" key whose
value is the base64-encoded file data:
```
personalisation={
    'field1': {'file': '<base64-encoded file contents>'}
}
```
The post notification endpoint checks the request personalisation data for file uploads. If any are found and the service has permission to upload documents, the files are sent to the document download API and the personalisation values are replaced with the URLs returned in the document download response.
A fake document URL is returned for simulated notifications; no documents are stored in Document Download.
Multiple files can be uploaded for one notification by providing
a file upload in more than one personalisation field.
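A sketch of the scan-and-replace step - `document_download_client`, `check_service_permission` and the fake URL are hypothetical stand-ins:
```
def replace_file_uploads(service, personalisation, simulated=False):
    for key, value in personalisation.items():
        if isinstance(value, dict) and 'file' in value:
            check_service_permission(service, 'upload_document')
            if simulated:
                # don't store anything for simulated notifications
                personalisation[key] = 'https://documents.example/fake'
            else:
                personalisation[key] = document_download_client.upload_document(
                    service.id, value['file']
                )
    return personalisation
```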
Allows uploading documents to the Document Download API.
The client is configured with an API host and auth token. There's
no need for a flag to disable the client in the test environments
at the moment since the upload is only triggered by a specific
payload which would only be sent with an explicit goal of using
document download.
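A minimal sketch of such a client - the config keys, endpoint path and response shape are assumptions, not the real API:
```
import base64
import requests

class DocumentDownloadClient:
    def init_app(self, app):
        self.api_host = app.config['DOCUMENT_DOWNLOAD_API_HOST']
        self.auth_token = app.config['DOCUMENT_DOWNLOAD_API_KEY']

    def upload_document(self, service_id, encoded_file):
        response = requests.post(
            '{}/services/{}/documents'.format(self.api_host, service_id),
            headers={'Authorization': 'Bearer {}'.format(self.auth_token)},
            files={'document': base64.b64decode(encoded_file)},
        )
        response.raise_for_status()
        return response.json()['document']['url']
```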
- Separated the logic for precompiled and templated letters.
- Removed the check for research mode; research mode is not relevant to API calls - the test key is used for testing.
- Refactored upload_pdf_letter to accept a precompile boolean, saving a query for the template.
Assertions should only be used in tests - they can be disabled at
runtime by passing the -O flag to python (though I don't believe we use
that flag under normal circumstances).
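For illustration (the status check and exception are hypothetical):
```
# with `python -O` this check is stripped out entirely
assert notification.status in ALLOWED_STATUSES

# in application code, raise explicitly so the check always runs
if notification.status not in ALLOWED_STATUSES:
    raise InvalidStatusError(notification.status)
```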
Also clean up test asserts - mock_redis is the mocked redis object itself, so its `called` property will always be False, because we never call `redis_store()` directly. Instead, we should use the `mock_calls` property to see all calls to all of its children.
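For example (the cached key and value are made up):
```
from unittest.mock import call

# always passes vacuously: redis_store itself is never called
assert not mock_redis.called

# mock_calls records calls made on the mock's children too
assert call.set('cache-key', '42') in mock_redis.mock_calls
```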
Previously, "Result not found" was returned when the id was not a valid UUID, which does not make sense.
Now the message says "notification_id is not a valid UUID", which should be clearer for the client service.
The command takes a service id and a day, grabs the historical data for
that day (potentially out of notification_history), and puts it in
redis with an eight-day expiry, the same as if it had been written to normally.
Also, prefix the template usage key with "service" to make clear that it's
a service id, and not an individual template id.
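A sketch of such a backfill command using click; the dao helper and `set_hash_and_expire` are hypothetical stand-ins for the real calls:
```
import click

@click.command('rebuild-template-usage')
@click.option('--service-id', required=True)
@click.option('--day', required=True, type=click.DateTime(formats=['%Y-%m-%d']))
def rebuild_template_usage(service_id, day):
    counts = dao_fetch_usage_for_day(service_id, day.date())
    key = 'service-{}-template-usage-{}'.format(service_id, day.strftime('%Y-%m-%d'))
    # eight-day expiry, same as keys written on the normal path
    redis_store.set_hash_and_expire(key, counts, expire_in_seconds=8 * 24 * 60 * 60)
```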
We've run into issues with redis expiring keys while we try to write
to them - short-lived redis TTLs aren't really sustainable for keys
whose state we mutate. Template usage is a hash in redis where we
increment a count keyed by template_id each time a message is sent for
that template. But if the key expires, HINCRBY (the redis command for
incrementing a value in a hash) will re-create it as an empty hash.
This is no good, as we need the hash to be populated with the last
seven days' worth of data, which we then increment further. We can't
tell whether the HINCRBY created the key, so a different approach
entirely was needed:
* New redis key: `<service_id>-template-usage-<YYYY-MM-DD>`. Note: this
YYYY-MM-DD is in BST so it lines up nicely with the ft_billing table.
* Incremented from process_notification - if the key doesn't exist yet,
it'll be created then (see the sketch after this list).
* Expiry set to 8 days every time it's incremented.
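A sketch of the write path with redis-py (the date is shown in UTC here for brevity; per the note above, the real key uses the BST date):
```
from datetime import datetime

def increment_template_usage(redis_client, service_id, template_id):
    key = '{}-template-usage-{}'.format(
        service_id, datetime.utcnow().strftime('%Y-%m-%d')
    )
    # HINCRBY creates the hash if it doesn't exist yet
    redis_client.hincrby(key, template_id, 1)
    # push the expiry out to 8 days on every write
    redis_client.expire(key, 8 * 24 * 60 * 60)
```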
Then, at read time, we'll just read the last eight days of keys from
Redis, and sum them up. This works because we're only ever incrementing
from that one place - never setting wholesale, never recreating the
data from scratch. So we know that if the data is in redis, then it is
good and accurate data.
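And the read path - sum the last eight days of keys, converting redis' byte strings as we go (the function name is hypothetical):
```
from collections import Counter
from datetime import date, timedelta

def read_template_usage(redis_client, service_id):
    total = Counter()
    for offset in range(8):
        day = date.today() - timedelta(days=offset)
        key = '{}-template-usage-{}'.format(service_id, day.strftime('%Y-%m-%d'))
        # hgetall returns an empty dict for missing keys
        for template_id, count in redis_client.hgetall(key).items():
            total[template_id.decode('utf-8')] += int(count)
    return dict(total)
```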
One thing we *don't* know and *cannot* reason about is what no key in
redis means. It could be either of:
* This is the first message that the service has sent today.
* The key was deleted from redis for some reason.
Since we set the TTL so long, we'll never be writing to a key that
previously expired. But if there is a redis (or operator) error and the
key is deleted, then we'll have bad data - after any data loss we'll
have to rebuild it.
Added the notification id to the logging message so that the
notification can be traced through the logging system, making it
easier to debug. Also changed to raise an exception so that alerts are
generated - this way we should get an email to say that there has been
an error.