Now that we're reading from two tables (ft_notification_status and
notifications) for stats, we'll get a couple of rows for each
notification type. If a service doesn't have any rows in one of those
tables, the query will return a row with nulls for the notification
types and counts. Some services will have history but no stats from
today, others will have data from today but no history.
This commit acknowledges that any row might have nulls, not just the
first row.
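A minimal sketch of that handling, with illustrative row and field names rather than the real dao code:

```python
def build_stats(rows):
    """Aggregate stats rows from both tables, tolerating nulls in any row."""
    stats = {}
    for row in rows:
        # A service with no data in one of the two source tables yields a row
        # of nulls, and it isn't necessarily the first row, so check every row
        # instead of special-casing row zero.
        if row.notification_type is None:
            continue
        stats[row.notification_type] = stats.get(row.notification_type, 0) + (row.count or 0)
    return stats
```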
A query for notifications was filtering on FtNotificationStatus - we
aren't joining to that table in the query, so SQLAlchemy added a cross
join between ft_notification_status (3.7k rows) and Notifications (3.9m
rows), resulting in a 1.3 trillion row materialised table. This query
had been running for 17 hours and was still pending.
Also, remove ordering from queries other than the outer one, since we're
grouping anyway.
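For illustration only - the model and column names below are assumptions (and `db`, `service_id` and the models are assumed to be in scope), not the real query - this is roughly the shape of the problem and the fix:

```python
from sqlalchemy import func

# Before: selecting from Notification but filtering on FtNotificationStatus,
# which isn't joined, so SQLAlchemy adds ft_notification_status to the FROM
# clause as a cross join.
broken = (
    db.session.query(Notification.notification_type, func.count(Notification.id))
    .filter(FtNotificationStatus.service_id == service_id)
    .group_by(Notification.notification_type)
    .order_by(Notification.notification_type)  # ordering on an inner query
)

# After: filter on the table we're actually selecting from, and keep ORDER BY
# only on the outermost query since the inner ones are grouped anyway.
fixed = (
    db.session.query(Notification.notification_type, func.count(Notification.id))
    .filter(Notification.service_id == service_id)
    .group_by(Notification.notification_type)
)
```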
Flask-SQLAlchemy's paginate function issues a separate query to get
the total count of rows for a given filter. This query (with the
filters used by the API integration Message log page) is slow for
services with a large number of notifications.
Since the Message log page doesn't actually allow users to paginate
through the response (it only shows the last 50 messages), we can
use limit instead of paginate, which requires passing another
flag from admin to the dao method.
A `count` flag was added to `paginate` in March 2018, however
there has been no release of flask-sqlalchemy since then, so we need
to pull the dev version of the package from GitHub.
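As a hedged sketch of the dao change (the method name, flag name and query shape below are assumptions, and the `Notification` model is assumed to be importable):

```python
def get_notifications_for_service(service_id, page=1, page_size=50, count_pages=True):
    query = Notification.query.filter_by(service_id=service_id)

    if count_pages:
        # paginate() issues an extra COUNT(*) for the same filter, which is
        # slow for services with millions of notifications.
        return query.paginate(page=page, per_page=page_size)

    # The Message log page only ever shows the most recent messages, so a
    # plain LIMIT avoids the count query entirely.
    return query.order_by(Notification.created_at.desc()).limit(page_size).all()
```

Admin would then pass `count_pages=False` (or whatever the real flag is called) when serving the Message log page.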
Previously, we logged a warning containing the notification reference
and the new status. However, it wasn't a great message - this new one
includes the notification id, the old status, the time difference and
more.
This separates out the logs for callbacks for notifications we don't know
about (error level) from those for duplicate callbacks (info level).
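Roughly the shape of both logging changes together - the helper names and model attributes below are assumptions, not the actual handler:

```python
import logging
from datetime import datetime

logger = logging.getLogger(__name__)


def process_delivery_callback(reference, new_status):
    notification = dao_get_notification_by_reference(reference)  # hypothetical helper

    if notification is None:
        # A callback for a notification we have no record of is worth
        # investigating, so it gets logged at error level.
        logger.error("callback received for unknown reference %s", reference)
        return

    if notification.status == new_status:
        # A repeated callback with the same status is expected noise, so it
        # only gets info level.
        logger.info("duplicate callback for notification %s", notification.id)
        return

    # The warning now includes the notification id, the old and new statuses
    # and the time difference, rather than just the reference and new status.
    delay = (datetime.utcnow() - notification.sent_at).total_seconds()
    logger.warning(
        "notification %s status changed from %s to %s %.0f seconds after sending",
        notification.id, notification.status, new_status, delay,
    )
```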
Previously, the script would try to SIGTERM each celery process every
second for the 9 second timeout, and then SIGKILL every second after,
with no upper bound.
This commit changes this to:
* SIGTERM each process once.
* Wait nine seconds (checking each second whether the pid files are
  still present).
* SIGKILL any remaining processes once.
* Exit.
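The real script is most likely shell, but as a sketch of the new sequence (the pid file location and helper below are assumptions):

```python
import os
import signal
import time

PID_DIR = "/tmp/celery"  # assumed location of the celery pid files
TIMEOUT_SECONDS = 9


def read_pids():
    """Return the pids from whatever pid files are still present."""
    pids = []
    for name in os.listdir(PID_DIR):
        if name.endswith(".pid"):
            with open(os.path.join(PID_DIR, name)) as f:
                pids.append(int(f.read().strip()))
    return pids


# 1. SIGTERM each process once, rather than once per second.
for pid in read_pids():
    os.kill(pid, signal.SIGTERM)

# 2. Wait up to nine seconds, checking each second whether the pid files
#    are still present.
for _ in range(TIMEOUT_SECONDS):
    if not read_pids():
        break
    time.sleep(1)

# 3. SIGKILL any remaining processes once, then exit.
for pid in read_pids():
    os.kill(pid, signal.SIGKILL)
```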
The query follows the same pattern as the other queries, getting the statistics from the fact_notification_status table for dates older than today and unioning that with today's data.
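A rough sketch of that shape, assuming the Flask-SQLAlchemy `db` object and illustrative model and column names (the real dao function will differ):

```python
from datetime import date

from sqlalchemy import func


def fetch_notification_stats(service_id):
    today = date.today()

    # Historic counts come from the fact table, for dates before today.
    historic = db.session.query(
        FtNotificationStatus.notification_type,
        func.sum(FtNotificationStatus.notification_count).label("count"),
    ).filter(
        FtNotificationStatus.service_id == service_id,
        FtNotificationStatus.bst_date < today,
    ).group_by(FtNotificationStatus.notification_type)

    # Today's counts come straight from the notifications table.
    todays = db.session.query(
        Notification.notification_type,
        func.count(Notification.id).label("count"),
    ).filter(
        Notification.service_id == service_id,
        Notification.created_at >= today,
    ).group_by(Notification.notification_type)

    return historic.union_all(todays).all()
```

Because each notification type can appear once from each branch of the union, the caller still has to combine the pairs of rows and tolerate the nulls described earlier.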
Tests required.