We found that notifications left in the created or pending state were never purged from the notifications table.
- New bulk update method to set all notifications with:
- a status of created|sending|pending to temporary-failure
- and that are older than today minus SENDING_NOTIFICATIONS_TIMEOUT_PERIOD (in seconds)
- the scheduled task that times out notifications now uses the new bulk update query (sketched below)
- the task will be more efficient
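Roughly the shape of the bulk update (a sketch only: the `Notification` model, `db` session and function name are assumptions, not the real code):

```python
from datetime import datetime, timedelta

# Sketch, assuming the app's Notification model and db session;
# the timeout period comes from config, in seconds.
def dao_timeout_notifications(timeout_period_in_seconds):
    cutoff = datetime.utcnow() - timedelta(seconds=timeout_period_in_seconds)
    updated = Notification.query.filter(
        Notification.status.in_(['created', 'sending', 'pending']),
        Notification.created_at < cutoff
    ).update(
        {'status': 'temporary-failure'}, synchronize_session=False
    )
    db.session.commit()
    return updated  # row count: everything timed out in a single UPDATE
```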
> filter_by and filter are just aliases for each other so they can be
> combined - filter is probably the better one (and then use
> == instead of keyword args)
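For context, an illustrative comparison of the two query styles (model and column names assumed):

```python
# filter_by matches keyword arguments against columns of the query's entity
Notification.query.filter_by(service_id=service_id, status='created')

# filter takes SQL expressions, so conditions compose freely with ==, in_(), <, etc.
Notification.query.filter(
    Notification.service_id == service_id,
    Notification.status.in_(['created', 'sending', 'pending'])
)
```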
If you schedule a job you might change your mind or circumstances might
change. So you need to be able to cancel it. This commit adds a `POST`
endpoint for individual jobs which sets their status to `cancelled`.
This also means adding a new status of `cancelled`, so there’s a
migration…
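A rough sketch of the shape such an endpoint can take; the route, the response body and the `dao_get_job_by_id` / `dao_update_job` helpers are assumptions, not the actual implementation:

```python
from flask import Blueprint, jsonify

job_blueprint = Blueprint('job', __name__)

@job_blueprint.route('/service/<uuid:service_id>/job/<uuid:job_id>/cancel', methods=['POST'])
def cancel_job(service_id, job_id):
    # dao_get_job_by_id / dao_update_job are assumed dao helpers
    job = dao_get_job_by_id(job_id)
    job.job_status = 'cancelled'
    dao_update_job(job)
    return jsonify(data={'id': str(job.id), 'status': job.job_status}), 200
```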
- As before, this is now driven from the notifications history table
- Removed from updates and create
- Signature changes to remove unused params hit many files
- Also a potential issue around rate limiting - we used to get the number sent per day from the stats table, which was a single-row lookup; now we have to count it, and that count runs on EVERY API call (illustrated below). Probably not a good thing and should be addressed urgently.
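To make the concern concrete, the per-day count now needs something like this on every request instead of a single-row read (model and column names assumed):

```python
from datetime import date

from sqlalchemy import func

def count_notifications_sent_today(service_id):
    # sketch assuming the NotificationHistory model and db session
    return db.session.query(
        func.count(NotificationHistory.id)
    ).filter(
        NotificationHistory.service_id == service_id,
        func.date(NotificationHistory.created_at) == date.today()
    ).scalar()
```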
- again, these now come from the notifications history table
- We used to update this when we sent a notification, so that update is removed from the celery tasks
- tests removed also
- on creating a notification we used to update the template stats to record the usage.
- this is now based on notification history
- this update and the associated tests are now removed
Previously we kept a running total of job progress/success/failure on the job table. This caused contention, so we now generate this data from notification history (sketched below).
Removed these updates.
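A sketch of deriving those counts from history instead (model, column and function names are assumptions):

```python
from sqlalchemy import func

def dao_get_notification_counts_for_job(job_id):
    # returns e.g. {'delivered': 90, 'temporary-failure': 10}
    rows = db.session.query(
        NotificationHistory.status,
        func.count(NotificationHistory.id)
    ).filter(
        NotificationHistory.job_id == job_id
    ).group_by(
        NotificationHistory.status
    ).all()
    return dict(rows)
```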
- groups by template id and day.
Returns count per day, template name, template id, template type, and day.
Ordered by day (desc) and template name (asc)
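Roughly the shape of the query (a sketch assuming NotificationHistory and Template models):

```python
from sqlalchemy import func

def usage_by_template_and_day(service_id):
    day = func.date(NotificationHistory.created_at).label('day')
    return db.session.query(
        func.count(NotificationHistory.id).label('count'),
        Template.name.label('template_name'),
        Template.id.label('template_id'),
        Template.template_type.label('template_type'),
        day
    ).join(
        Template, NotificationHistory.template_id == Template.id
    ).filter(
        NotificationHistory.service_id == service_id
    ).group_by(
        Template.id, Template.name, Template.template_type, day
    ).order_by(
        day.desc(), Template.name.asc()
    ).all()
```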
Only in the public notification endpoint so far, for fear of breaking
things - in an ideal world I'd remove the template relationship
from models entirely and replace it with actual_template
history-meta's dynamic magic is insufficient for templates, where we
need to be able to refer to the specific history table to take
advantage of sqlalchemy's relationship management (2/3rds of an ORM).
So replace it with a custom-made version table.
Had to change the version decorator slightly for this
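A minimal sketch of the idea, with illustrative table and column names: a hand-rolled history table keyed on (id, version) that other models can target with an ordinary SQLAlchemy relationship:

```python
from sqlalchemy import Column, Integer, String, and_
from sqlalchemy.orm import relationship, foreign
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class TemplateHistory(Base):
    __tablename__ = 'templates_history'
    id = Column(Integer, primary_key=True)
    version = Column(Integer, primary_key=True)
    name = Column(String)
    content = Column(String)

class Notification(Base):
    __tablename__ = 'notifications'
    id = Column(Integer, primary_key=True)
    template_id = Column(Integer)
    template_version = Column(Integer)

    # points at the exact template version the notification was sent with
    actual_template = relationship(
        TemplateHistory,
        primaryjoin=and_(
            foreign(template_id) == TemplateHistory.id,
            foreign(template_version) == TemplateHistory.version
        ),
        uselist=False,
        viewonly=True
    )
```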
Removed all existing statsd logging and replaced with:
- statsd decorator. Infers the stat name from the decorated function, delegates the statsd call to the statsd client, and calls incr and timing for each decorated method. This is applied to all tasks and all dao methods that touch the notifications/notification_history tables (a sketch is below)
- statsd client changed to prefix all stats with "notification.api."
- Relies on https://github.com/alphagov/notifications-utils/pull/61 for request logging. Once integrated we pass the statsd client to the logger, allowing us to statsd all API calls. This passes the start time and the method to be called (NOT the url) onto the global flask object. We then construct statsd counters and timers in the following way:
notifications.api.POST.notifications.send_notification.200
This should allow us to aggregate to the level of
- API or ADMIN
- POST or GET etc
- modules
- methods
- status codes
Finally, we count the callbacks received from 3rd parties, keyed by the status they map to.
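A minimal sketch of the decorator idea, assuming a module-level `statsd_client`; the real decorator in the codebase may differ:

```python
import functools
import time

def statsd(namespace):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # stat name is inferred from the decorated function
            stat = '{}.{}'.format(namespace, func.__name__)
            start = time.monotonic()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ms = (time.monotonic() - start) * 1000
                # statsd_client is assumed to be the app's wrapped statsd client
                statsd_client.incr(stat)                # count the call
                statsd_client.timing(stat, elapsed_ms)  # and time it
        return wrapper
    return decorator

# usage: applied to tasks and dao methods touching notifications/notification_history
@statsd(namespace='dao')
def dao_create_notification(notification):
    ...
```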
This replaces content_char_count by performing the additional
steps to calculate billable units at insert time, rather than
read time. This means we can take into account whether the
service was in research mode or using a test api key when the
notification was sent :tada:
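A sketch of the insert-time calculation; the fragment-size rule and field names here stand in for notifications_utils' real logic and are assumptions:

```python
import math

def billable_units_for(notification, service, api_key):
    # research-mode services and test api keys should not be billed
    if service.research_mode or api_key.key_type == 'test':
        return 0
    if notification.notification_type != 'sms':
        return 0
    content_length = len(notification.content)
    # single SMS up to 160 chars, then 153-char fragments (assumed rule)
    if content_length <= 160:
        return 1
    return math.ceil(content_length / 153)
```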
Use NotificationHistory instead. Unfortunately this means the SQL
gets a bit gnarly, as we have to repeat notifications_utils'
`get_sms_fragment_count` functionality inside a SELECT 😱
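The kind of expression that ends up inside the SELECT (a sketch; the column names and the fragment rule are assumptions):

```python
from sqlalchemy import Integer, case, cast, func

# re-implement the fragment count as a SQL expression so it can be summed in one query
fragment_count = case(
    [(NotificationHistory.content_char_count <= 160, 1)],
    else_=cast(func.ceil(NotificationHistory.content_char_count / 153.0), Integer)
)

sms_fragments_sent = db.session.query(
    func.sum(fragment_count)
).filter(
    NotificationHistory.service_id == service_id,
    NotificationHistory.notification_type == 'sms'
).scalar()
```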