Commit Graph

7 Commits

Author SHA1 Message Date
jimmoffet
69abec0bb3 change dashboard test to reflect demo changes to uploads view 2022-09-09 17:02:48 -07:00
jimmoffet
dad051a662 2767 passing 2022-08-05 00:25:03 -07:00
James Moffet
f1d9aef9cd comment cache decorator 2022-07-19 18:31:01 -07:00
David McDonald
2767328e79 Reduce impact of stale cache for performance page
I came across a bug where the performance page might be visited
on, say, the 22nd. We expect it to show data for yesterday and
the past 7 days (so the 21st and back 7 days), but instead it
only shows data for the 20th and then back 6 days.

The cause is that we have nightly tasks that run to:

- calculate the number of notifications sent per service (the
  ft_notification_status table)
- calculate the number of notifications sent within 10 seconds
  (the ft_processing_time table)

Both of these tables get filled between the hours of midnight and
2:30am. The first seems to be filled by about 12:30am every night,
whereas the processing stats seem to be filled by about 2am every
night.

The admin app currently caches the results of the call to the API
(https://github.com/alphagov/notifications-api/blob/master/app/performance_dashboard/rest.py#L24)
to get this data with the key
performance-stats-{start_date}-to-{end_date}.
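
Roughly, that caching looks like the following sketch (illustrative
only - the client and function names here aren't the actual admin
code, and the expiry shown is a placeholder):

    import json
    from datetime import date

    import redis  # redis-py; the real admin app goes through its own client wrapper

    redis_client = redis.Redis()

    CACHE_TTL_SECONDS = 24 * 60 * 60  # placeholder - the pre-existing expiry isn't stated here


    def get_performance_stats(api_client, start_date: date, end_date: date) -> dict:
        # Key pattern described above: performance-stats-{start_date}-to-{end_date}
        key = f"performance-stats-{start_date}-to-{end_date}"

        cached = redis_client.get(key)
        if cached is not None:
            return json.loads(cached)

        # Cache miss: call the API (approximately 1-2 seconds when uncached)
        stats = api_client.get_performance_dashboard(start_date, end_date)

        # Whatever the API returned is cached as-is. If this runs at 00:01,
        # before the nightly tables are filled, the incomplete data sticks
        # around until the key expires - which is the bug described below.
        redis_client.set(key, json.dumps(stats), ex=CACHE_TTL_SECONDS)
        return stats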

Now the issue here is that if the performance page is visited at
00:01 on the 23rd, it will call the API for data on the 22nd and
backwards 7 days. The data it gets at 00:01 will not yet include
the 22nd; it will only go as far as the 21st, because
the overnight jobs to put data in the tables have not run yet. The
admin app then caches this data, meaning that for the rest of the
day the page will be missing data for the 22nd, even though it
becomes available after approximately 2am. We have seen this a few
times in production, where the performance page has indeed been
visited before, say, 2am, leaving the page with missing data.

In terms of solutions, there are several potential options.

1. Remove the caching that we do in redis. The downside of this is
that we will hit the API more (currently it is hit only once a day,
by the first visitor to the performance page). We have had 255
requests to the performance page in the last week, and when there
is no value in redis the call to the API takes approximately 1-2
seconds. Removing the caching would go against the reason it was
originally added, which was to act as a basic protection against
malicious DOS attempts.

2. Make the admin app only set the cache value if it is after
2:30am. This one feels a bit gross and snowflakey.

3. The API flushes the redis cache after it has finished its nightly
tasks. We don't have much precedent for the API interacting with
redis (it is mostly the admin app that does), but this is still a
valid option.

4. Cache in redis the data for 2 days ago to 7 days ago, and fetch
the data for yesterday fresh on every request (see the sketch after
this list). Most of the data would still be cached, and only the
part we can't rely on would be fetched fresh.

5. Remove the caching we do in redis but then try and optimise the
API endpoint to return results even faster and with less
interaction with the DB. My hypothesis is that the slowest part
is where the API has to calculate how many notifications Notify
has ever sent
(https://github.com/alphagov/notifications-api/blob/master/app/performance_dashboard/rest.py#L36)
and so we could potentially store these totals in the DB and update
them nightly, rather than having to query all the data in the table
to figure them out every time.

6. Reduce the expiry time of the cache key so although the bug will
still exist, it will only exist for a much shorter time.
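
For illustration, option 4 could be sketched like this (reusing the
illustrative names from the sketch further up; merge_stats is a
hypothetical helper):

    from datetime import date, timedelta


    def get_performance_stats_split(api_client, today: date) -> dict:
        yesterday = today - timedelta(days=1)
        window_start = today - timedelta(days=7)
        window_end = today - timedelta(days=2)

        # The 2-to-7-days-ago window is stable by the time anyone asks for
        # it, so it is safe to cache (via get_performance_stats above).
        cached_window = get_performance_stats(api_client, window_start, window_end)

        # Yesterday's figures may not exist until ~2:30am, so always fetch
        # them fresh rather than caching them.
        fresh_yesterday = api_client.get_performance_dashboard(yesterday, yesterday)

        # merge_stats is hypothetical - combine the two responses however
        # the real payload shape requires.
        return merge_stats(cached_window, fresh_yesterday)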

In the end, we've gone for option 6. The user impact of
this bug sticking around for an hour is minimal, in particular
because that would be in the early hours of the morning. The solution
is also quick to implement and keeps the protection against a DOS attack.

The value of 1 hour for the expiry was a fairly arbitrary choice; it
could have been as little as 1 minute or as much as 6 hours.
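
In code terms the change is small - roughly this, using the same
illustrative names as the sketch further up:

    ONE_HOUR = 60 * 60  # new expiry for the performance-stats-* keys

    def cache_performance_stats(key: str, stats: dict) -> None:
        # Same key pattern as before; only the expiry changes. A value cached
        # at 00:01 with incomplete data now expires within the hour, so the
        # page fixes itself shortly after the nightly tables are filled.
        redis_client.set(key, json.dumps(stats), ex=ONE_HOUR)
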
2021-12-07 12:44:36 +00:00
Chris Hill-Scott
545f933e37 Cache the JSON in Redis
The endpoints are probably fast enough, but it seems risky to have a
public URL which causes a new query every time the page is loaded.
2021-03-12 14:43:58 +00:00
Chris Hill-Scott
25a6788d66 Use arguments rather than passing around a dict
This makes it harder to write code which will pass tests but fail in
real life.
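
A made-up example of the difference (not the actual Notify code):

    # Dict version: a test can hand in any plausible-looking dict, and a
    # missing or misspelt key only fails when (and if) it is read.
    def format_stats_from_dict(stats):
        return f"{stats['delivered']} delivered, {stats['failed']} failed"


    # Explicit keyword arguments: the interface is visible in the signature,
    # and calling it with the wrong shape fails immediately, in tests and in
    # real life alike.
    def format_stats(*, delivered, failed):
        return f"{delivered} delivered, {failed} failed"
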
2021-03-12 14:43:50 +00:00
Rebecca Law
3ca2840652 Rename to performance-dashboard 2021-03-12 11:17:51 +00:00