Had to go through the code and change a few places where we filter on
template types. I specifically didn't worry about jobs or notifications.
Also, add broadcast_data - a JSON column that might contain arbitrary
broadcast data that we'll figure out as we go. We don't know what it'll
look like, but it should be returned by the API.
We’ve added some new properties to the templates in utils that we can
use instead of doing weird things like
`WithSubjectTemplate.__str__(another_instance)`
Previously we checked the notifications table, and if the results were
zero, checked the notification history table to see if there was data
in there. Even when we know the data isn't in notifications, we're
still checking it. These queries take half a second per service, and
we're doing at least ten for each of the five thousand services we have
in Notify. Most of these services have no data in either table for any
given day, so we can reduce the number of queries we do by only
checking one table.
Check the data retention for a service, and if the requested date is
older than the retention period, get the data from the history table.
NOTE: This requires that the delete tasks haven't run yet for the day!
If your retention is three days, this will look in the Notification
table for data from three days ago, expecting that shortly after the
task finishes we'll delete that data.
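A minimal sketch of the table-selection logic, assuming SQLAlchemy models named `Notification` and `NotificationHistory` and a per-service `data_retention_days` value (the real model and attribute names may differ):

```python
from datetime import date, timedelta

from app import db
from app.models import Notification, NotificationHistory


def _table_for_day(service, day):
    # Data older than the service's retention only lives in the history
    # table, so pick one table instead of querying both.
    # NOTE: assumes the nightly delete task hasn't run yet for `day`.
    retention_days = service.data_retention_days  # assumed per-service attribute
    oldest_in_notifications = date.today() - timedelta(days=retention_days)
    return Notification if day >= oldest_in_notifications else NotificationHistory


def count_notifications_for_day(service, day):
    model = _table_for_day(service, day)
    return db.session.query(model).filter(
        model.service_id == service.id,
        model.created_at >= day,
        model.created_at < day + timedelta(days=1),
    ).count()
```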
The NHS is a special case because it’s not one organisation, but it does
have one consistent brand. So anyone working for an NHS organisation
should have their default branding set when they create a service, even
if we know nothing about their specific organisation.
If you're trying to show what a Notify email will look like in your
caseworking system, all the API gives you at the moment is raw markdown
(with the placeholders replaced).
This isn’t that useful if your caseworkers have no idea what markdown
is. If we also give teams the HTML then they can embed this in their
systems, and the people using those systems will be able to see how
headings, bulleted lists, etc. look.
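As a rough illustration, the HTML could be produced with notifications-utils' `HTMLEmailTemplate` (assuming that's the renderer the API would use - the exact constructor arguments may differ):

```python
from notifications_utils.template import HTMLEmailTemplate

template = {
    "template_type": "email",
    "subject": "Your appointment",
    "content": "Dear ((name))\n\n# What to bring\n\n* photo ID\n* your reference number",
}

# What the API returns today: raw markdown with the placeholders replaced.
markdown_body = template["content"].replace("((name))", "Sam")

# What a caseworking system could embed instead: rendered HTML, with
# headings, bulleted lists, etc. already converted.
html_body = str(HTMLEmailTemplate(template, values={"name": "Sam"}))
```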
Bumped notifications-utils to 3.7.0. Version 3.7.0 includes the
`convert_utc_to_bst` and `convert_bst_to_utc` functions and the
`LETTER_PROCESSING_DEADLINE` constant, so these have been removed from
this repo and anywhere using these has now been updated to get these
from `notifications-utils`.
Also bumped pytest by a patch version to bring in a bug fix.
New redis keys are partitioned per service per day. The new process is
as follows (see the sketch after this list):
* require a count of days to filter by. Currently admin always gives 7.
* for each day, check and see if there's anything in redis. There won't
be if either a) redis is/was down or b) the service didn't send any
notifications that day
- if there isn't, go to the database and get a count out.
* combine all these stats together
* get the names/template types etc out of the DB at the end.
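A rough sketch of that read path - the key format and helper names here are assumptions, not the real implementation:

```python
from collections import Counter
from datetime import date, timedelta


def daily_template_usage(service_id, limit_days, redis_store, fetch_day_from_db):
    """Combine per-day usage counts, preferring redis and falling back to the DB.

    `redis_store` and `fetch_day_from_db` stand in for the real redis client
    and database query; the key naming is an assumption.
    """
    totals = Counter()
    for offset in range(limit_days):  # admin currently always asks for 7 days
        day = date.today() - timedelta(days=offset)
        key = "service-{}-template-usage-{}".format(service_id, day.isoformat())

        counts = redis_store.get_all_from_hash(key)  # empty if redis was down,
        if not counts:                               # or nothing was sent that day
            counts = fetch_day_from_db(service_id, day)

        totals.update({template_id: int(n) for template_id, n in counts.items()})

    # template names/types are joined in from the DB once, at the end
    return totals
```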
It's used in a few places - it should definitely know what timezones
are and return datetimes rather than dates, since dates are hard to
work with when figuring out how timezone-aware they are.
The command takes a service id and a day, grabs the historical data for
that day (potentially out of notification_history), and pops it in
redis (for eight days, the same expiry as keys written by the normal
sending path).
Also, prefix the template usage key with "service" to make clear that
it's a service id, and not an individual template id.
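A minimal sketch of what the rebuild command might do - `fetch_template_usage_for_day` is a hypothetical query, and the key format and REDIS_URL environment variable are assumptions:

```python
import os
from datetime import timedelta

from redis import Redis


def fetch_template_usage_for_day(service_id, day):
    # Hypothetical DB query: per-template counts from
    # notifications / notification_history for the given day.
    raise NotImplementedError


def rebuild_template_usage(service_id, day):
    """Repopulate a service's per-day template usage hash after redis data loss."""
    counts = fetch_template_usage_for_day(service_id, day)
    if not counts:
        return

    # connection/env var name is illustrative
    redis = Redis.from_url(os.environ.get("REDIS_URL", "redis://localhost:6379"))
    key = "service-{}-template-usage-{}".format(service_id, day)

    redis.hset(key, mapping={str(template_id): n for template_id, n in counts.items()})
    # eight-day expiry, the same as keys written by the normal sending path
    redis.expire(key, int(timedelta(days=8).total_seconds()))
```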
We've run into issues with redis expiring keys while we try to write
to them - short-lived redis TTLs aren't really sustainable for keys
where we mutate the state. Template usage is a hash stored in redis
where we increment a count keyed by template_id each time a message is
sent for that template. But if the key expires, hincrby (the redis
command for incrementing a value in a hash) will re-create an empty
hash. This is no good, as we need the hash to be populated with the
last seven days' worth of data, which we then increment further. We
can't tell whether the hincrby created the key, so a different approach
entirely was needed:
* New redis key: <service_id>-template-usage-<YYYY-MM-DD>. Note: This
YYYY-MM-DD is BST time so it lines up nicely with the ft_billing table
* Incremented from process_notification - if the key doesn't exist yet,
it'll be created then.
* Expiry set to eight days every time it's incremented.
Then, at read time, we'll just read the last eight days of keys from
Redis, and sum them up. This works because we're only ever incrementing
from that one place - never setting wholesale, never recreating the
data from scratch. So we know that if the data is in redis, then it is
good and accurate data.
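A minimal sketch of the write path under that scheme, using redis-py directly (the real code goes through the shared redis client, and the key prefix/date handling may differ):

```python
from datetime import datetime, timedelta

from redis import Redis

EXPIRY_SECONDS = int(timedelta(days=8).total_seconds())


def increment_template_usage(redis: Redis, service_id: str, template_id: str) -> None:
    # One hash per service per day; HINCRBY creates the hash/field if it's
    # missing, which is now safe because each day's data lives in its own key.
    day = datetime.utcnow().strftime("%Y-%m-%d")  # the real code uses BST dates
    key = "service-{}-template-usage-{}".format(service_id, day)

    redis.hincrby(key, template_id, 1)
    # Re-arm the eight-day expiry on every write so the key never expires
    # part-way through a day's counting.
    redis.expire(key, EXPIRY_SECONDS)
```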
One thing we *don't* know and *cannot* reason about is what no key in
redis means. It could be either of:
* This is the first message that the service has sent today.
* The key was deleted from redis for some reason.
Since we set the TTL so long, we'll never be writing to a key that
previously expired. But if there is a redis (or operator) error and the
key is deleted, then we'll have bad data - after any data loss we'll
have to rebuild the data.
Similar to https://github.com/alphagov/notifications-api/pull/1515
This lets the admin app pass in a domain to use for email auth links,
so that when it's running on a different URL, users who try to sign in
will get an email auth link for the domain they sign in on, not the
default admin domain for the environment in which the API is running.
Removed the tests for trial mode services for the scheduled tasks and the process job.
Having the validation in the POST notification and create job endpoints is enough.
Updated the test_service_whitelist test because the order of the array is not guaranteed.
Accepts a page parameter to control which page of data is returned.
Returns additional pagination fields in the response dict:
* page_size: will always be 50, defined by Config.PAGE_SIZE
* total: the total number of unpaginated records
* links: dict optionally containing prev, next, and last links to
other relevant pagination pages
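A hedged sketch of the resulting response shape (the field values and link format here are illustrative, not taken from the real endpoint):

```python
# Example: page 2 of 120 records, with Config.PAGE_SIZE = 50.
{
    "notifications": ["..."],  # the 50 records for this page
    "page_size": 50,           # always Config.PAGE_SIZE
    "total": 120,              # count of records before pagination
    "links": {
        "prev": "/service/<service_id>/notifications?page=1",
        "next": "/service/<service_id>/notifications?page=3",
        "last": "/service/<service_id>/notifications?page=3",
    },
}
```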
Also cleaned up some test imports.
Moved from notifications/rest to service/rest and job/rest respectively;
endpoint routes are not affected.
Removed the requires_admin decorator - that should be handled by nginx
config as opposed to Python code.