A recent issue with a long-running query (#2288) highlighted the
fact that even though the original HTTP connection might be closed
(for example after the gorouter's 15-minute timeout, which returns
a 504 response to the client), the request worker is not stopped.
This means that the worker is spending time and potentially DB
resources generating a response that will never be delivered.
Gunicorn's timeout setting only applies to sync workers and there
doesn't seem to be an option to interrupt individual requests in
gevent/eventlet deployments.
Since the most likely (and potentially most dangerous) scenario for
this is a long-running DB query, we can set a statement timeout on
our DB connections. This will raise a sqlalchemy.exc.OperationalError
(wrapping psycopg2.extensions.QueryCanceledError), interrupting the
request after the given timeout has been reached.
Although this is a Postgres setting made by the client per
connection, it is enforced server-side: the database itself aborts
the statement when it reaches the set timeout.
Since this will also apply to our celery tasks (including potentially
long-running nightly tasks), we set a timeout of 20 minutes to begin
with.
This can potentially be split in the future to set a different value
for each app, so that we could limit API requests even more.
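A minimal sketch of how the timeout can be wired up, assuming plain
SQLAlchemy with the psycopg2 driver (the URL is a placeholder, and the
actual app configuration may differ):

    from sqlalchemy import create_engine

    # statement_timeout is given in milliseconds; 20 minutes as above.
    engine = create_engine(
        "postgresql://localhost/notification_api",  # placeholder URL
        connect_args={"options": "-c statement_timeout={}".format(20 * 60 * 1000)},
    )

Any statement exceeding the timeout then raises the
sqlalchemy.exc.OperationalError described above.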
Now that we're reading from two tables (ft_notification_status and
notifications) for stats, we get a couple of rows for each
notification type. If a service doesn't have any rows in one of those
tables, the query returns a row with nulls for the notification
types and counts: some services have history but no stats from
today, others have data from today but no history.
This commit acknowledges that any row might have nulls, not just the
first row.
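A minimal sketch of the defensive handling, assuming each row exposes
notification_type, status and count attributes (names are
illustrative, not the actual code):

    def aggregate_statistics(rows):
        totals = {}
        for row in rows:
            # a service with no data in one of the tables produces a row
            # whose notification_type and count are both None
            if row.notification_type is None:
                continue
            key = (row.notification_type, row.status)
            totals[key] = totals.get(key, 0) + (row.count or 0)
        return totals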
A query for notifications was filtering on FtNotificationStatus. We
weren't joining to that table in the query, so SQLAlchemy added a cross
join between ft_notification_status (3.7k rows) and notifications (3.9m
rows), resulting in a 1.3 trillion row materialised table. This query
had been running for 17 hours and was still pending.
Also, remove ORDER BY clauses from queries other than the outer one,
since we're grouping anyway.
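A minimal sketch of the failure mode, with simplified models:
filtering on a table that never appears in a join makes SQLAlchemy add
it to the FROM clause, producing a cross join.

    # BAD: FtNotificationStatus is never joined, so SQLAlchemy silently
    # adds ft_notification_status to the FROM clause as a cross join.
    notifications = Notification.query.filter(
        FtNotificationStatus.service_id == service_id
    ).all()

    # GOOD: filter on the table we are actually selecting from.
    notifications = Notification.query.filter(
        Notification.service_id == service_id
    ).all()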
Flask-SQLAlchemy's paginate function issues a separate query to get
the total count of rows for a given filter. This query (with the
filters used by the API integration Message log page) is slow for
services with a large number of notifications.
Since the Message log page doesn't actually allow users to paginate
through the response (it only shows the last 50 messages), we can
use limit instead of paginate, which requires passing another
flag from admin to the dao method.
A `count` flag was added to `paginate` in March 2018; however,
there has been no release of Flask-SQLAlchemy since then, so we need
to pull the dev version of the package from GitHub.
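A minimal sketch of the dao change, with a hypothetical count_pages
flag (the real signature and query may differ):

    def get_notifications_for_service(service_id, page=1, page_size=50, count_pages=True):
        query = Notification.query.filter_by(service_id=service_id).order_by(
            Notification.created_at.desc()
        )
        # count=False (dev flask-sqlalchemy) skips the separate
        # SELECT COUNT(*) that paginate would otherwise issue
        return query.paginate(page=page, per_page=page_size, count=count_pages)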
Previously, we logged a warning containing the notification reference
and the new status. However, it wasn't a great message; this new one
includes the notification id, the old status, the time difference and
more.
This separates out logs for callbacks for notifications we don't know
about (error level) and duplicates (info level).
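A minimal sketch of the split, with illustrative names for the
notification, reference and status variables:

    from flask import current_app

    if notification is None:
        current_app.logger.error(
            "callback for unknown notification, reference %s", reference
        )
    elif notification.status == new_status:
        current_app.logger.info(
            "duplicate callback for notification %s, status already %s",
            notification.id,
            new_status,
        )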
The query follows the same pattern as the other queries: get the statistics from the fact_notification_status table for dates older than today and union that with today's data from the notifications table.
Tests required.
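A minimal sketch of the pattern, with simplified columns; the real
query selects and groups on more fields, and names like bst_date are
illustrative:

    from sqlalchemy import func

    today = func.date(func.now())

    historic = db.session.query(
        FtNotificationStatus.notification_type.label("type"),
        FtNotificationStatus.notification_count.label("count"),
    ).filter(FtNotificationStatus.bst_date < today)

    live = db.session.query(
        Notification.notification_type.label("type"),
        func.count().label("count"),
    ).filter(
        Notification.created_at >= today
    ).group_by(Notification.notification_type)

    rows = historic.union_all(live).all()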
We want to send two new headers, ServiceId and NotificationId, to the
template preview /precompiled/sanitise endpoint. This allows us to log
errors from this endpoint in template preview with all the information
needed, instead of passing the information back to notifications-api
and logging it there.
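A minimal sketch of the call from this side, assuming the requests
library (the host variable and auth header are placeholders):

    import requests

    response = requests.post(
        "{}/precompiled/sanitise".format(template_preview_host),
        data=pdf_file,
        headers={
            "Authorization": "Bearer {}".format(auth_token),  # assumption
            "ServiceId": str(service_id),
            "NotificationId": str(notification_id),
        },
    )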
We previously always read from NotificationHistory to get the
notification status stats for a job. Now, if the job is more than three
days old, we read from the ft_notification_status table; otherwise we
read from the notifications table (to keep live updates).
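A minimal sketch of the branch, with hypothetical helper names:

    from datetime import datetime, timedelta

    def get_notification_outcomes_for_job(service_id, job):
        if job.created_at < datetime.utcnow() - timedelta(days=3):
            # old jobs: stats are in the nightly fact table
            return query_ft_notification_status(service_id, job.id)
        # recent jobs: read notifications directly so counts stay live
        return query_live_notifications(service_id, job.id)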
We are adding an index to notifications to optimize the get_notifications_for_service query. We need to build the index concurrently, which cannot be run inside a transaction block, so the index will need to be created on the database directly:
CREATE INDEX CONCURRENTLY ix_notifications_service_created_at ON notifications (service_id, created_at);
DROP INDEX CONCURRENTLY ix_notifications_service_created_at;
We don't use FUNCTIONAL_TEST_PROVIDER_SERVICE_ID or
FUNCTIONAL_TEST_PROVIDER_SMS_TEMPLATE_ID anymore, so we can safely
delete them from config and tests.
The sent_by_email_address field was added because sometimes two
people at one institution have the same name, so the email
address, which is unique, is more useful.