to get the data for a day can be reasonably slow (a few hundred
milliseconds), and if someone's viewing a service with no activity we
don't want to do that query seven times every two seconds. So if there
is no data in redis, when we get the data out of the database, we
should put it in redis so we can just grab it from there next time.
This'll happen in two cases:
* redis data is deleted
* the service sent no messages that day
Additionally, make sure that we convert nicely from redis' return
values (ascii strings) to unicode keys and integer counts.
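As a rough illustration (the helper name is ours, not necessarily what the code calls it), that conversion is just decoding and casting the mapping redis hands back:

```python
def convert_redis_stats(raw_stats):
    """Sketch only: redis returns ascii/bytes for both keys and values,
    e.g. {b'some-template-id': b'3'}; normalise to unicode keys and int counts."""
    return {
        key.decode('utf-8') if isinstance(key, bytes) else key: int(value)
        for key, value in raw_stats.items()
    }
```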
New redis keys are partitioned per service per day. The new process is
as follows (sketched in code after this list):
* require a count of days to filter by. Currently admin always gives 7.
* for each day, check and see if there's anything in redis. There won't
be if either a) redis is/was down or b) the service didn't send any
notifications that day
- if there isn't, go to the database and get a count out.
* combine all these stats together
* get the names/template types etc out of the DB at the end.
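A minimal sketch of that flow, assuming hypothetical helpers (`dao_get_template_usage_for_day`, `attach_template_details`), an illustrative cache key format and an illustrative redis client; the real code will differ in the details:

```python
import redis
from datetime import date, timedelta

redis_client = redis.StrictRedis()  # illustrative; the app has its own client


def get_template_usage(service_id, limit_days=7):
    """Sketch only: combine per-service, per-day stats, preferring redis to the DB."""
    combined = {}
    for offset in range(limit_days):
        day = date.today() - timedelta(days=offset)
        cache_key = '{}-template-usage-{}'.format(service_id, day.isoformat())

        raw = redis_client.hgetall(cache_key)
        if raw:
            # redis gives ascii/bytes back; normalise to unicode keys and int counts
            day_stats = {k.decode('utf-8'): int(v) for k, v in raw.items()}
        else:
            # Nothing cached: redis is/was down, or the service sent nothing that day.
            day_stats = dao_get_template_usage_for_day(service_id, day)  # hypothetical DAO call
            for template_id, count in day_stats.items():
                redis_client.hset(cache_key, template_id, count)
            # (a real implementation would also need to mark empty days,
            #  otherwise they still fall through to the database on every poll)

        for template_id, count in day_stats.items():
            combined[template_id] = combined.get(template_id, 0) + count

    # Only touch the templates table once, at the end, for names/template types.
    return attach_template_details(combined)  # hypothetical helper
```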
Adds tests for:
* checking the template version foreign key constraint
* checking that template changes don't affect existing notifications
* notification statistics aren't affected by different template versions
* notification stats always return the current template name (see the sketch after this list)
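A rough illustration of that last point; the fixtures and DAO names here are hypothetical and not the real test suite:

```python
def test_template_stats_show_current_template_name(sample_service, sample_template):
    # Sketch only: a notification is recorded against template version 1...
    create_sample_notification(template=sample_template)

    # ...then the template is renamed, creating version 2...
    sample_template.name = 'Two week reminder'
    dao_update_template(sample_template)

    # ...and the stats should report the new name, while the existing
    # notification keeps pointing at the version it was sent with.
    stats = dao_fetch_template_stats_for_service(sample_service.id, limit_days=7)
    assert [row.template_name for row in stats] == ['Two week reminder']
```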
This is because 7 days should be all the data in that table, so don't restrict the query.
Note that not all queries do this in the same way; a Pivotal bug has been raised to align them. This is a bug fix, as right now the numbers are out in prod.
The cache expires every 10 minutes, but will still help with the every-two-seconds query, especially when a job is running.
There is still some clean-up and QA to do for this.
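For context, the expiry is just a TTL on the cached value, something like the following (key name and client are illustrative, not the app's actual code):

```python
import json
import redis

redis_client = redis.StrictRedis()  # illustrative


def cache_dashboard_stats(service_id, stats):
    # Sketch only: keep the expensive result around for 10 minutes so the
    # dashboard's every-two-seconds poll mostly hits redis, not the database.
    redis_client.set(
        '{}-dashboard-stats'.format(service_id),
        json.dumps(stats),
        ex=600,  # seconds
    )
```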
- Problem: on notification creation we pass the template ID, not the template object, onto the new notification.
- We then build the history object from this notification object by copying all the fields. This is OK at this point.
- We then set the relationship on the history object based on the template, which we haven't passed in (we only passed the ID). This means that SQLAlchemy nulls the relationship, removing the template_id.
- Later we update the history row when we send the message, which fixes the data. But if we ever have a send error, that update never happens and the template is never set on the history table.
Fix:
Set only the template ID when creating the history object.
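To make the mechanism concrete, here is a cut-down, self-contained repro of the behaviour described above (model names are only loosely based on the description; this is not the app's actual code):

```python
from sqlalchemy import Column, ForeignKey, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()


# Minimal, hypothetical models just to demonstrate the flush behaviour.
class Template(Base):
    __tablename__ = 'templates'
    id = Column(String, primary_key=True)


class NotificationHistory(Base):
    __tablename__ = 'notification_history'
    id = Column(String, primary_key=True)
    template_id = Column(String, ForeignKey('templates.id'))
    template = relationship(Template)


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Template(id='t1'))

    # Buggy path: template_id is copied across, but then the relationship is
    # assigned from a template that was never passed in (i.e. None), so at
    # flush time SQLAlchemy syncs the relationship and nulls the foreign key.
    broken = NotificationHistory(id='n1', template_id='t1')
    broken.template = None
    session.add(broken)
    session.flush()
    print(broken.template_id)  # None - the history row has lost its template

    # Fixed path: only ever set the foreign key; never touch the relationship.
    fixed = NotificationHistory(id='n2', template_id='t1')
    session.add(fixed)
    session.flush()
    print(fixed.template_id)  # 't1' - kept even if the send later errors
```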
This version of the client removed the request method, path and body from the encode and decode methods.
The biggest changes here are to the unit tests.
We were using two different queries to filter template stats to the past
7 days, plus today.
Since we’re storing both as short dates, we can now use the same query
for both.
0 | 1 | 2 | 3 | 4 | 5 | 6 | 7
-------|---------|-----------|----------|--------|----------|--------|-------
Monday | Tuesday | Wednesday | Thursday | Friday | Saturday | Sunday | Monday
So if we are on Monday, the stats should include today, plus everything
back to last Monday.
Previously the template stats query was only going back to the Tuesday.
This should mean the numbers on the dashboard always line up.
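A minimal sketch of the shared window calculation (function name and dates are illustrative):

```python
from datetime import date, timedelta


def stats_window(limit_days=7, today=None):
    """Sketch only: the window is today plus the previous `limit_days` days."""
    today = today or date.today()
    return today - timedelta(days=limit_days), today


# If today is Monday 2016-06-13, the window starts on Monday 2016-06-06
# (last Monday), not the Tuesday the old template stats query started from.
start, end = stats_window(7, date(2016, 6, 13))
assert (start.strftime('%A'), end.strftime('%A')) == ('Monday', 'Monday')
```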