> filter_by and filter do the same job, so the two calls can be
> combined into one - filter is probably the better choice (and then use
> == instead of keyword args)
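For illustration, a minimal sketch of the two styles (the `Notification` model and its fields are hypothetical):

```python
# filter_by takes keyword arguments matched against the model's columns:
notifications = Notification.query.filter_by(service_id=service_id, status='created')

# filter takes SQL expressions, so the same conditions can be combined
# in one call with ==, and aren't limited to equality checks:
notifications = Notification.query.filter(
    Notification.service_id == service_id,
    Notification.status == 'created',
)
```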
If you schedule a job you might change your mind or circumstances might
change. So you need to be able to cancel it. This commit adds a `POST`
endpoint for individual jobs which sets their status to `cancelled`.
This also means adding a new status of `cancelled`, so there’s a
migration…
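A minimal sketch of what such an endpoint might look like (the route shape, dao helpers, and schema are assumptions, not the actual implementation):

```python
from flask import Blueprint, jsonify

job_blueprint = Blueprint('job', __name__)

# Hypothetical route and helpers; the real URL and dao layer may differ.
@job_blueprint.route('/service/<uuid:service_id>/job/<uuid:job_id>/cancel', methods=['POST'])
def cancel_job(service_id, job_id):
    job = dao_get_job_by_service_id_and_job_id(service_id, job_id)
    job.job_status = 'cancelled'
    dao_update_job(job)
    return jsonify(data=job_schema.dump(job).data), 200
```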
- As before this is now driven from the notifications history table
- Removed from updates and create
- Signature changes to remove unused params hit many files
- Also a potential issue around rate limiting - we used to get the number sent per day from the stats table, which was a single-row lookup; now we have to count rows in notification history, and that count runs on EVERY API call (see the sketch after this list). Probably not a good thing and should be addressed urgently.
- again these now come from the notifications history table
- We update this when we send a notification, so removed from celery tasks
- tests removed also
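A rough sketch of the per-day count that now backs the rate-limit check, assuming a `NotificationHistory` model with `service_id` and `created_at` columns and the app's flask-sqlalchemy `db` (names inferred from context):

```python
from datetime import datetime
from sqlalchemy import func

def count_sent_today(service_id):
    # Replaces the old single-row stats lookup: counts every notification
    # the service has created since midnight. This runs on every API
    # call, hence the performance concern above.
    midnight = datetime.utcnow().replace(hour=0, minute=0, second=0, microsecond=0)
    return db.session.query(func.count(NotificationHistory.id)).filter(
        NotificationHistory.service_id == service_id,
        NotificationHistory.created_at >= midnight,
    ).scalar()
```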
- on create notification we updated the template stats to record the usage
- this is now based on notification history
- this update and its associated tests are now removed
Previously we kept a running total of job progress/success/failure on the job table. This caused contention; we now generate this data from notification history (see the sketch below).
Removed these updates.
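For illustration, the per-job totals can be derived with a group-by over the history table (a sketch, not the actual query):

```python
from sqlalchemy import func

def job_progress(job_id):
    # One row per status, e.g. [('delivered', 90), ('failed', 10)],
    # replacing the counters we used to maintain on the job row itself.
    return db.session.query(
        NotificationHistory.status,
        func.count(NotificationHistory.id),
    ).filter(
        NotificationHistory.job_id == job_id,
    ).group_by(NotificationHistory.status).all()
```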
- groups by template id and day.
Returns the count per day, template name, template id, template type, and day.
Ordered by day (desc) and template name (asc).
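A sketch of that query, assuming the history table carries `template_id` and `created_at` (model and column names inferred from context):

```python
from sqlalchemy import func

day = func.date(NotificationHistory.created_at).label('day')

usage = db.session.query(
    func.count(NotificationHistory.id).label('count'),
    Template.name,
    Template.id,
    Template.template_type,
    day,
).join(
    Template, Template.id == NotificationHistory.template_id,
).group_by(
    Template.name, Template.id, Template.template_type, day,
).order_by(
    day.desc(), Template.name.asc(),
).all()
```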
only in the public notification endpoint so far, for fear of breaking
things - in an ideal world I'd remove the template relationship
from models entirely and replace it with actual_template
history-meta's dynamic magic is insufficient for templates, where we
need to be able to refer to the specific history table to take
advantage of sqlalchemy's relationship management (two thirds of an ORM).
So replace it with a custom-made version table.
Had to change the version decorator slightly for this.
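A sketch of the idea: declare the history table as a real mapped class instead of generating it dynamically, so other models can hang relationships off it (all names and columns are illustrative):

```python
from sqlalchemy.dialects.postgresql import UUID

class TemplateHistory(db.Model):
    __tablename__ = 'templates_history'

    id = db.Column(UUID(as_uuid=True), primary_key=True)
    version = db.Column(db.Integer, primary_key=True)
    name = db.Column(db.String, nullable=False)
    # ... the rest of the Template columns ...

class Notification(db.Model):
    __tablename__ = 'notifications'

    id = db.Column(UUID(as_uuid=True), primary_key=True)
    template_id = db.Column(UUID(as_uuid=True))
    template_version = db.Column(db.Integer)
    # A relationship pointing at the exact template version the
    # notification was sent with - the "actual_template" mentioned above.
    actual_template = db.relationship(
        'TemplateHistory',
        primaryjoin='and_(Notification.template_id == foreign(TemplateHistory.id), '
                    'Notification.template_version == foreign(TemplateHistory.version))',
        viewonly=True,
        uselist=False,
    )
```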
Removed all existing statsd logging and replaced with:
- statsd decorator. Infers the stat name from the decorated function and delegates the statsd call to the statsd client, calling incr and timing for each decorated method. This is applied to all tasks and all dao methods that touch the notifications/notification_history tables (see the sketch below)
- statsd client changed to prefix all stats with "notifications.api."
- Relies on https://github.com/alphagov/notifications-utils/pull/61 for request logging. Once integrated we pass the statsd client to the logger, allowing us to statsd all API calls. This passes the start time and the method to be called (NOT the url) onto the global flask object. We then construct statsd counters and timers in the following way:
`notifications.api.POST.notifications.send_notification.200`
This should allow us to aggregate to the level of:
- API or ADMIN
- POST or GET etc
- modules
- methods
- status codes
Finally, we count the callbacks received from third parties, mapped to notification status.
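A minimal sketch of what such a decorator might look like, assuming a module-level `statsd_client` (illustrative, not the real implementation):

```python
import functools
import time

def statsd(namespace):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # Infer the stat name from the decorated function,
            # e.g. "dao.dao_create_notification".
            stat = '{}.{}'.format(namespace, func.__name__)
            start = time.monotonic()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed = time.monotonic() - start
                # One counter and one timer per call; most statsd clients
                # expect timings in milliseconds.
                statsd_client.incr(stat)
                statsd_client.timing(stat, elapsed * 1000)
        return wrapper
    return decorator
```

Applied as, e.g., `@statsd(namespace='dao')` on the dao methods.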
this replaces content_char_count by performing the additional
steps to calculate billable units at insert time, rather than
read time. This means we can take into account whether the
service was in research mode or using a test api key when the
notification was sent :tada:
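A sketch of the insert-time calculation under those rules (the zero-billing conditions are from the text above; the fragment arithmetic is the standard GSM multipart scheme and may differ from notifications_utils' exact logic):

```python
import math

def billable_units(content_char_count, research_mode, key_type):
    # Research-mode services and test API keys are never billed.
    if research_mode or key_type == 'test':
        return 0
    # One fragment up to 160 characters, then 153 characters per
    # fragment once the message goes multipart.
    if content_char_count <= 160:
        return 1
    return math.ceil(content_char_count / 153)
```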
use NotificationHistory instead. Unfortunately this means the SQL
gets a bit gnarly, as we have to repeat notifications_utils'
`get_sms_fragment_count` functionality inside a SELECT 😱
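Roughly what repeating that logic inside the SELECT looks like in SQLAlchemy terms (a sketch; column names and the real thresholds in `get_sms_fragment_count` may differ):

```python
from sqlalchemy import case, func

# One fragment up to 160 characters, then 153 per fragment - the same
# rule get_sms_fragment_count applies in Python, re-stated as SQL.
fragment_count = func.sum(case(
    [
        (NotificationHistory.notification_type != 'sms', 0),
        (NotificationHistory.content_char_count <= 160, 1),
    ],
    else_=func.ceil(NotificationHistory.content_char_count / 153.0),
))
```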
if passed in, returns the service object with an additional statistics
dictionary, which will be used in the admin app to populate dashboard
components. A new schema has been created for this to avoid clashing
with, or causing confusion around, the existing schema, which is already
used for PUT/POST as well; the new schema can be easily tailored to
reduce ambiguity and lazy-loading
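Illustratively, the endpoint might branch on a query parameter like this (the parameter, helper, and schema names are assumptions):

```python
from flask import jsonify, request

@service_blueprint.route('/<uuid:service_id>', methods=['GET'])
def get_service_by_id(service_id):
    service = dao_fetch_service_by_id(service_id)
    if request.args.get('detailed') == 'True':
        # The new schema adds a statistics dict for the dashboard,
        # without disturbing the schema used for PUT/POST.
        data = detailed_service_schema.dump(service).data
        data['statistics'] = get_service_statistics(service_id)
        return jsonify(data=data)
    return jsonify(data=service_schema.dump(service).data)
```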
Add additional relationships to exclude in the ServiceSchema metaclass.
Dumping a model with Marshmallow touches its relationships, and
SQLAlchemy loads each of them lazily, so any relationships we know we
won't need can be excluded to avoid a DB call. Lots of tables are linked
to services, so dumping a service loads a lot of tables. So don't load
the statistics tables, since they're clearly not needed.
We *do* however want to return the users for the service - they're used
in a few places. If we're returning all services, then we don't want to
make separate queries for these users, so we modify the services_dao
queries to load users the first time round. This should speed up all
GET requests to the services endpoints, most notably pages that fetch
many services (platform_admin, choose service, login).
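A sketch of both halves - Marshmallow's Meta.exclude and SQLAlchemy's joinedload (relationship names assumed):

```python
from sqlalchemy.orm import joinedload

class ServiceSchema(BaseSchema):
    class Meta:
        model = Service
        # Relationships we never dump: excluding them stops Marshmallow
        # touching them, which stops SQLAlchemy lazy-loading them.
        exclude = ('jobs', 'api_keys', 'templates', 'template_statistics')

def dao_fetch_all_services():
    # Eager-load users in the same query rather than issuing one extra
    # query per service when the schema dumps them.
    return Service.query.options(joinedload('users')).order_by(Service.name).all()
```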
Please ensure that any changes to the notifications table happen through either dao_create_notification or dao_update_notification.
Changed the notification status update triggered by the provider callbacks to ensure that it sets updated_by and can update the history table.
Also re-added character_count so we can reconstruct billing data if needed.
History updates are triggered via calls in dao_create_notification and dao_update_notification - if you don't use those functions (eg update in bulk) you'll have to update the history table yourself!
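A simplified sketch of the pattern those functions follow (`update_from_original` is a hypothetical helper, not the real code):

```python
from datetime import datetime

def dao_update_notification(notification):
    notification.updated_at = datetime.utcnow()
    db.session.add(notification)
    # Mirror every change into the history row; anything that bypasses
    # this function (e.g. a bulk UPDATE) must do this itself.
    history = NotificationHistory.query.get(notification.id)
    history.update_from_original(notification)  # hypothetical helper
    db.session.add(history)
    db.session.commit()
```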
This happened because more than one beat process was creating the timeout task, resulting in multiple workers running the same queries at the same time.
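One common guard against this (a sketch using redis-py, not necessarily what was done here) is to have the task take a short-lived lock so duplicate schedules become no-ops:

```python
import redis

redis_client = redis.Redis()

@notify_celery.task(name='timeout-notifications')  # notify_celery is an assumed app object
def timeout_notifications():
    # If two beat processes enqueue the task, only one worker acquires
    # the lock; the other returns without running the queries.
    lock = redis_client.lock('timeout-notifications', timeout=60)
    if not lock.acquire(blocking=False):
        return
    try:
        ...  # the actual timeout queries
    finally:
        lock.release()
```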