In this PR the id for the notification is passed in and used to create the notification, which can cause an integrity error.
Normally when we get a SQLAlchemy error here we send the message to the retry queue, but if the notification already exists
we just ignore it.
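A minimal sketch of that handling, assuming SQLAlchemy's IntegrityError for the duplicate-id case; the dao helper and task wiring are illustrative, not the exact code:

```python
from flask import current_app
from sqlalchemy.exc import IntegrityError, SQLAlchemyError


def save_notification(task, notification):
    try:
        dao_create_notification(notification)  # hypothetical dao helper
    except IntegrityError:
        # The notification already exists (e.g. a retried task re-sent
        # the same id), so the duplicate is safe to ignore.
        current_app.logger.info("Notification %s already exists", notification.id)
    except SQLAlchemyError as e:
        # Any other database error goes to the retry queue as before.
        task.retry(exc=e)
```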
We're formally using the ISO 8601 UTC datetime format, and so the
correct way to output the data is by appending the timezone
("Z" in the case of UTC*).
Unfortunately, Python's `datetime` formatting just ignores the
timezone part of the string on output, which means we have to
append the string "Z" to the end of all datetime strings we output.
This should be fine, as we will only ever output UTC timestamps anyway.
* https://en.wikipedia.org/wiki/ISO_8601#UTC
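A minimal sketch of the approach (the helper name is illustrative):

```python
from datetime import datetime


def format_datetime(dt):
    # Our timestamps are naive UTC, so isoformat() emits no offset;
    # appending "Z" makes the output explicit ISO 8601 UTC.
    return dt.isoformat() + "Z"


format_datetime(datetime(2016, 6, 1, 12, 30))  # -> '2016-06-01T12:30:00Z'
```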
1) It's `incr`, not `inc`, on the Redis client, so renamed the calls everywhere.
2) Redis returns bytes/a string rather than an int even if the value stored is an int, so cast the result to an int before use. Note: you can set up GET to do this transparently, but I've not done that, as we *may* use GETs for non-int values, and the response callback sets up the cast for the connection, not the individual call.
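For example, with redis-py (the key name is illustrative):

```python
import redis

redis_client = redis.StrictRedis()

redis_client.incr("service-1234-count")        # incr, not inc
raw = redis_client.get("service-1234-count")   # returns bytes, e.g. b'1'
count = int(raw)                               # cast before use

# The transparent alternative would be
#     redis_client.set_response_callback('GET', int)
# but that casts every GET on the connection, not just this call.
```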
After we have written the notification to the database and placed it on a deliver queue, we count it in the cache against the service.
This is the equivalent of doing it at the end of the API call.
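A sketch of where the count happens in the task flow; all names here are illustrative:

```python
def process_sms_notification(notification, service_id):
    dao_create_notification(notification)             # write to the database
    deliver_sms.apply_async((str(notification.id),))  # place on a deliver queue
    # Count against the service, as if at the end of the API call.
    redis_client.incr("service-{}-count".format(service_id))
```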
Mocks create any property you access, so calling functions on them is
inherently risky: a typo quietly does nothing. Instead assert
`.called is False`, which will fail noisily if you typo.
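For example:

```python
from unittest.mock import Mock

send_sms = Mock()

# Risky: a typo'd assertion method is just another auto-created
# attribute, so calling it does nothing and the test quietly passes.
send_sms.asert_not_called()  # note the typo: no error raised

# Safe: a typo like `.caled` would auto-create a truthy Mock, so the
# `is False` assertion fails loudly.
assert send_sms.called is False
```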
- It seems that when we changed the name of the job.status column we didn't update the code to use job.job_status.
- Therefore none of the jobs since then have had their job status updated.
- Now that this is fixed we can show the job status when there is an error like "sending exceeds limits" (see the sketch below).
- This could happen if a job is scheduled to run at the top of the hour: at the time of job creation the limit was not exceeded, but at the time of processing the job it was.
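An illustrative sketch of the fix; the status string and dao helper are assumptions based on the commit text, not the exact code:

```python
def check_job_limits(job, service, sent_today):
    if sent_today + job.notification_count > service.message_limit:
        # Previously this wrote to the removed `status` attribute,
        # so the update silently did nothing.
        job.job_status = 'sending exceeds limits'
        dao_update_job(job)  # hypothetical dao helper
        return False
    return True
```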
Previously there were 4 queues for sending messages.
This was based on the fact that each notification has 2 actions: persist in the database, and send to the provider.
Two queues supported the CSV upload, covering the first of these tasks:
- bulk-email
- build-sms
And there were two more queues for the tasks that make the 3rd party client calls:
- sms
- email
API calls just used the latter two queues for both tasks.
Added four new queues:
- db-email
- db-sms
- send-sms
- send-email
So an API call puts a notification onto the db-[type] queue first; that task then puts the notification onto the send-[type] queue.
Build queues stay as before.
This will allow us to target these tasks with separate workers, so we can manage them differently.
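A minimal sketch of the new flow with Celery; the task names and helpers are illustrative:

```python
from celery import Celery

celery = Celery("notifications")


@celery.task
def save_email(encrypted_notification):
    # Task 1: persist the notification in the database.
    notification = persist_notification(encrypted_notification)  # hypothetical helper
    # Then hand off to the send queue for the provider call.
    deliver_email.apply_async((str(notification.id),), queue="send-email")


@celery.task
def deliver_email(notification_id):
    # Task 2: make the 3rd party client call.
    send_to_provider(notification_id)  # hypothetical helper


# An API call enqueues the persist task on the db-[type] queue first:
#   save_email.apply_async((encrypted,), queue="db-email")
```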
Removed all existing statsd logging and replaced it with:
- a statsd decorator. It infers the stat name from the decorated function, and delegates the statsd call to the statsd client, calling incr and timing for each decorated method. This is applied to all tasks and all dao methods that touch the notifications/notification_history tables (see the sketch after this list).
- statsd client changed to prefix all stats with "notification.api."
- Relies on https://github.com/alphagov/notifications-utils/pull/61 for request logging. Once integrated, we pass the statsd client to the logger, allowing us to record stats for all API calls. This passes the start time and the method to be called (NOT the url) onto the global Flask object. We then construct statsd counters and timers in the following way:
notifications.api.POST.notifications.send_notification.200
This should allow us to aggregate at the level of:
- API or ADMIN
- POST or GET etc
- modules
- methods
- status codes
Finally, we count the callbacks received from 3rd parties, mapped to status.
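A minimal sketch of the decorator described above, assuming a statsd_client exposing incr and timing (milliseconds):

```python
import functools
import time


def statsd(namespace):
    def decorator(func):
        # Infer the stat name from the decorated function.
        stat = "{}.{}".format(namespace, func.__name__)

        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                return func(*args, **kwargs)
            finally:
                # Delegate to the statsd client, which prefixes
                # everything with "notification.api.".
                statsd_client.incr(stat)
                statsd_client.timing(stat, (time.monotonic() - start) * 1000)

        return wrapper

    return decorator


@statsd(namespace="dao")
def dao_create_notification(notification):
    ...  # touches the notifications table
```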
Don't send reply_to_addresses around from process_job and send_email;
take it from the service in send_email_to_provider. Also clean up
the kwarg in aws_ses.send_email to more accurately reflect what we
might pass in.
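A sketch of the change; the attribute and kwarg names are illustrative:

```python
def send_email_to_provider(notification):
    service = notification.service
    aws_ses_client.send_email(
        notification.from_address,
        notification.to,
        notification.subject,
        notification.body,
        # reply_to is taken from the service here, instead of being
        # threaded through process_job and send_email.
        reply_to_address=service.reply_to_email_address,
    )
```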
It seems like an oversight not to include the notification type in the notification.
Previously, updating statistics required a query to the template table to get the type; with this change that query no longer has to happen.
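An illustrative before/after for the statistics update (model and attribute names assumed):

```python
# Before: an extra query to the template table to discover the type.
template = Template.query.get(notification.template_id)
notification_type = template.template_type

# After: the type is stored on the notification itself.
notification_type = notification.notification_type
```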
Notifications, when retrieved by notification id or by service id (i.e.
all notifications for a service), now include a new element at the top
level of the notification JSON called body, which is the template
content merged with the personalisation. This is consistent with the
API endpoint to create a notification, which returns what was sent as
'body' in the JSON response.
Merging of template with personalisation is done in the
NotificationStatusSchema.
Personalisation data is encrypted before storing in the db.
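A simplified sketch of the merge step, using Notify's ((placeholder)) syntax; the function is a stand-in for the real template logic in the NotificationStatusSchema:

```python
import re


def merge_template(content, personalisation):
    # Replace ((placeholder)) markers with personalisation values.
    return re.sub(
        r"\(\(([^)]+)\)\)",
        lambda m: str(personalisation.get(m.group(1), "")),
        content,
    )


merge_template("Hello ((name))", {"name": "Sam"})  # -> 'Hello Sam'
```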