history-meta's dynamic magic is insufficient for templates, where we
need to be able to refer to the specific history table to take
advantage of sqlalchemy's relationship management (2/3rds of an ORM).
So replace it with a custom-made version table.
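For example, a minimal sketch of an explicit history model, assuming Flask-SQLAlchemy (the model and column names are illustrative, not the real schema):

```python
from sqlalchemy.dialects.postgresql import UUID

from app import db  # assumed Flask-SQLAlchemy instance


class TemplateHistory(db.Model):
    # an explicit, named history model rather than one generated
    # dynamically by history-meta, so other models can declare
    # SQLAlchemy relationships against it directly
    __tablename__ = 'templates_history'

    id = db.Column(UUID(as_uuid=True), primary_key=True)
    version = db.Column(db.Integer, primary_key=True)
    content = db.Column(db.Text, nullable=False)
```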
Had to change the version decorator slightly for this:
* can now handle nullable foreign keys - would previously raise an
  AttributeError
* can now handle default values - we would previously not insert
  default values that hadn't been generated yet, which results in
  IntegrityErrors if the field is not nullable. We were deliberately
  removing 'default' attributes from columns; now we only remove them
  if nullable is set to true.
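A sketch of that default-handling fix, assuming the decorator builds the history table by copying each model column (the helper name is illustrative):

```python
def _history_column(column):
    # copy a model column for use on the history table
    new_column = column.copy()
    if column.nullable:
        # only strip defaults from nullable columns; stripping them from
        # non-nullable columns meant not-yet-generated defaults were
        # inserted as NULL, raising IntegrityErrors
        new_column.default = None
        new_column.server_default = None
    return new_column
```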
Removed all existing statsd logging and replaced with:
- statsd decorator. Infers the stat name from the decorated function,
  and delegates the statsd call to the statsd client, calling incr and
  timing for each decorated method. This is applied to all tasks and
  all dao methods that touch the notifications/notification_history
  tables (a sketch follows below).
- statsd client changed to prefix all stats with "notifications.api."
- Relies on https://github.com/alphagov/notifications-utils/pull/61
  for request logging. Once integrated we pass the statsd client to
  the logger, allowing us to statsd all API calls. This passes the
  start time and the method to be called (NOT the url) onto the global
  flask object. We then construct statsd counters and timers in the
  following way:
notifications.api.POST.notifications.send_notification.200
This should allow us to aggregate to the level of
- API or ADMIN
- POST or GET etc
- modules
- methods
- status codes
Finally, we count the callbacks received from third parties, mapped to
notification status.
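A minimal sketch of the decorator, assuming the statsd client hangs off the Flask app (`current_app.statsd_client`, the `namespace` argument, and millisecond timings are assumptions):

```python
import functools
import time

from flask import current_app


def statsd(namespace):
    def time_function(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            # infer the stat name from the decorated function
            stat = '{}.{}'.format(namespace, func.__name__)
            start = time.time()
            try:
                return func(*args, **kwargs)
            finally:
                elapsed_ms = int((time.time() - start) * 1000)
                # delegate to the statsd client: one counter, one timer
                current_app.statsd_client.incr(stat)
                current_app.statsd_client.timing(stat, elapsed_ms)
        return wrapper
    return time_function
```

Applied as e.g. `@statsd(namespace='dao')`, this would emit `dao.dao_create_notification`, which the client then prefixes to `notifications.api.dao.dao_create_notification`.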
Use NotificationHistory instead. Unfortunately this means the SQL
gets a bit gnarly, as we have to repeat notifications_utils'
`get_sms_fragment_count` functionality inside a SELECT 😱
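A sketch of what that looks like with SQLAlchemy, assuming a character-count column on NotificationHistory and the standard 160/153-character SMS fragment boundaries (both are assumptions about what `get_sms_fragment_count` does):

```python
from sqlalchemy import case, func

char_count = NotificationHistory.content_char_count  # assumed column
fragment_count = case(
    [(char_count <= 160, 1)],
    else_=func.ceil(char_count / 153.0),
)
```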
Please ensure that any changes to the notifications table happen
through either dao_create_notification or dao_update_notification.
Changed the notification status update triggered by the provider
callbacks to ensure that it sets updated_by and can update the history
table.
Also re-added the character_count so we can reconstruct billing data
if needed.
History table updates are triggered via calls in
dao_create_notification and dao_update_notification - if you don't use
those functions (e.g. updating in bulk) you'll have to update the
history table yourself!
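A sketch of the update path (`update_from_original` is a hypothetical helper for copying fields across):

```python
from datetime import datetime

from app import db  # assumed Flask-SQLAlchemy instance


def dao_update_notification(notification):
    notification.updated_at = datetime.utcnow()
    db.session.add(notification)
    # keep the history table in step; bulk updates that skip this
    # function must do the equivalent themselves
    history = NotificationHistory.query.get(notification.id)
    history.update_from_original(notification)  # hypothetical helper
    db.session.commit()
```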
Set to 'normal' for all existing notifications; all job notifications
are also created as 'normal'. So if, e.g., your reporting service hits
Notify, it gets notifications created from both API calls and
front-end CSV jobs.
Moved api_key secret manipulation (generating and getting) into
authentication/utils, and added a property on the model, to facilitate
easier matching of authenticated requests and the api keys they used.
It seems like an oversight not to include the notification type in the
notification.
When updating statistics, a query to the template table is currently
required to get the type; this change means that query no longer has
to happen.
If the notification ends up in the retry queue and the delivery app is
restarted, the notification will get sent twice. This is because when
the app restarts, another copy of the message becomes available in the
retry queue, duplicating the one that is still in flight.
This should complete https://www.pivotaltracker.com/story/show/121844427
* single-column static data table that currently contains two types: 'normal' and 'team'
* key_type foreign-keyed from api_keys
- must be not null
- existing rows set to 'normal'
* key_type foreign-keyed from notifications
- nullable
- existing rows set to null
* api_key foreign-keyed from notifications
- nullable
- existing rows set to null
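A sketch of the corresponding Alembic migration (constraint names, column names, and types are assumptions):

```python
import sqlalchemy as sa
from alembic import op
from sqlalchemy.dialects import postgresql


def upgrade():
    op.create_table(
        'key_types',
        sa.Column('name', sa.String(), primary_key=True),
    )
    op.execute("INSERT INTO key_types VALUES ('normal'), ('team')")

    # not null on api_keys, so backfill before tightening the constraint
    op.add_column('api_keys', sa.Column('key_type', sa.String(), nullable=True))
    op.create_foreign_key(
        'api_keys_key_type_fkey', 'api_keys', 'key_types',
        ['key_type'], ['name'])
    op.execute("UPDATE api_keys SET key_type = 'normal'")
    op.alter_column('api_keys', 'key_type', nullable=False)

    # nullable on notifications; existing rows stay null
    op.add_column('notifications', sa.Column('key_type', sa.String(), nullable=True))
    op.create_foreign_key(
        'notifications_key_type_fkey', 'notifications', 'key_types',
        ['key_type'], ['name'])
    op.add_column('notifications', sa.Column('api_key_id', postgresql.UUID(), nullable=True))
    op.create_foreign_key(
        'notifications_api_key_id_fkey', 'notifications', 'api_keys',
        ['api_key_id'], ['id'])
```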
The only update we should be doing to an api key is to expire/revoke
it.
Removed the update_dict from the save method.
Added an expire_api_key method that only updates the api key with an
expiry date.
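A sketch of the narrowed DAO method (names are assumptions):

```python
from datetime import datetime

from app import db  # assumed Flask-SQLAlchemy instance
from app.models import ApiKey  # assumed model


def expire_api_key(service_id, api_key_id):
    # the only permitted update: set the expiry date, nothing else
    api_key = ApiKey.query.filter_by(
        id=api_key_id, service_id=service_id).one()
    api_key.expiry_date = datetime.utcnow()
    db.session.add(api_key)
    db.session.commit()
```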
or param errors to raise an invalid data exception. That will cause
those responses to be handled by errors.py, which will log the errors.
Set most of the schemas to strict mode so that marshmallow will raise
an exception rather than checking for errors in the return tuple from
load.
Added a handler to errors.py for marshmallow validation errors.
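A sketch of the handler, assuming errors are registered per blueprint (the function name and response shape are assumptions):

```python
from flask import current_app, jsonify
from marshmallow import ValidationError


def register_errors(blueprint):
    @blueprint.errorhandler(ValidationError)
    def marshmallow_validation_error(error):
        # strict-mode schemas raise instead of returning (data, errors),
        # so field-level messages arrive here on error.messages
        current_app.logger.exception(error)
        return jsonify(result='error', message=error.messages), 400
```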
It’s going to be useful to see all the notifications for a job that are
failed/delivered/etc.
The API seems to support this behaviour already, but it doesn’t seem to
be tested.
This commit adds some tests for the DAO and REST layers.
The DAO was deleting all permissions for that user (regardless of
service id) because the last filter on the permissions dao get_query
method won.
I've added a replace flag to the set_user_service_permission method
so that it can handle adding new users + permissions and editing
existing users' permissions.
Also bypass the get_query method until it can be refactored to work
correctly. For now, execute the filter query directly on the model.
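A sketch of the fixed method (the signature is an assumption based on the text above):

```python
from app import db  # assumed Flask-SQLAlchemy instance
from app.models import Permission  # assumed model


def set_user_service_permission(user, service, permissions, replace=False):
    if replace:
        # filter on user AND service directly on the model, so we no
        # longer delete the user's permissions for every other service
        Permission.query.filter_by(user=user, service=service).delete()
    for permission in permissions:
        permission.user = user
        permission.service = service
        db.session.add(permission)
    db.session.commit()
```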
When you add a new template, it’s probably the one that you want to do
subsequent stuff with. But it’s also helpful to see the template in
context (with its siblings) to understand that there are multiple
templates. So we don’t want to do what we do in
https://github.com/alphagov/notifications-admin/pull/648
for adding a new template.
But we _can_ make your brand-new template appear first by always
ordering by when the template was created.
This also removes the confusion caused by having `updated_at` affecting
order, and causing the templates to move around all the time.
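A sketch of the query (model and column names are assumptions):

```python
from sqlalchemy import desc

# assumed model/column names
templates = Template.query.filter_by(
    service_id=service_id,
).order_by(
    desc(Template.created_at),  # newest first, and stable over edits
).all()
```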
In order to set a message as temporary-failure, we first check that it
is in pending status. Otherwise, a failure delivery receipt is set to
permanent-failure.
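A sketch of the rule (the function name is an assumption; the status names come from the text above):

```python
def map_failure_status(notification):
    # only messages still pending get the softer temporary-failure;
    # anything else failing at the provider is permanent-failure
    if notification.status == 'pending':
        return 'temporary-failure'
    return 'permanent-failure'
```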