Please ensure that any changes to the notifications table happen through either dao_create_notification or dao_update_notification.
Changed the notification status update triggered by the provider callbacks so that it sets updated_by and can update the history table.
Also re-added the character_count so we can reconstruct billing data if needed.
These history updates are triggered via calls to dao_create_notification and dao_update_notification - if you don't use those functions (e.g. an update in bulk) you'll have to update the history table yourself!
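Roughly the pattern is the sketch below - the models and columns are simplified stand-ins, not the real schema, but it shows the DAO function keeping the history table in step:

```python
from datetime import datetime

from sqlalchemy import Column, DateTime, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Notification(Base):
    __tablename__ = "notifications"
    id = Column(String, primary_key=True)
    status = Column(String)
    updated_at = Column(DateTime)
    updated_by = Column(String)


class NotificationHistory(Base):
    __tablename__ = "notification_history"
    id = Column(String, primary_key=True)
    status = Column(String)
    updated_at = Column(DateTime)
    updated_by = Column(String)


def dao_update_notification(session, notification, updated_by=None):
    """Update a notification and mirror the change into the history table."""
    notification.updated_at = datetime.utcnow()
    notification.updated_by = updated_by
    history = session.get(NotificationHistory, notification.id)
    if history is None:
        history = NotificationHistory(id=notification.id)
    history.status = notification.status
    history.updated_at = notification.updated_at
    history.updated_by = notification.updated_by
    session.add_all([notification, history])
    session.commit()
```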
Set to 'normal' for all existing notifications, and all job notifications are also created as 'normal' - so if, for example, your reporting service hits Notify, it gets notifications created from both API calls and front-end CSV jobs.
It seems like an oversight not to include the notification type in the notification.
When updating statistics, a query to the template table is currently required to get the type; with this change that query no longer has to happen.
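Concretely (a hedged sketch - the function name is made up), the stats code can go from a template lookup to a plain attribute read:

```python
def notification_type_for_stats(notification):
    # Before this change the stats update needed an extra query, roughly:
    #   session.query(Template.template_type).filter(
    #       Template.id == notification.template_id).scalar()
    # Now the type is carried on the notification row itself:
    return notification.notification_type
```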
If the notification ends up in the retry queue and the delivery app is restarted, the notification will get sent twice.
This is because when the app is restarted another message appears in the retry queue as a message available, which is a
duplicate of the one already in the queue as a message in flight.
This should complete https://www.pivotaltracker.com/story/show/121844427
It’s going to be useful to see all the notifications for a job that are
failed/delivered/etc.
The API seems to support this behaviour already, but it doesn’t seem to
be tested.
This commit adds some tests for the DAO and REST layers.
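For example, the REST-layer tests look something like this sketch (the route, fixtures and response shape are assumptions, not the actual test suite):

```python
def test_get_notifications_for_job_filters_by_status(client, sample_job):
    # Ask for only the failed notifications for a job and check nothing
    # with another status comes back.
    response = client.get(
        "/service/{}/job/{}/notifications?status=failed".format(
            sample_job.service_id, sample_job.id
        )
    )
    assert response.status_code == 200
    statuses = {n["status"] for n in response.get_json()["notifications"]}
    assert statuses <= {"failed"}
```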
In order to set a message to temporary-failure, we first check whether it is in the pending status.
Otherwise a failure delivery receipt sets it to permanent-failure.
If the notification has status == sending then update the status, otherwise do not update it.
In other words, do not change the status more than once.
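Read as a single rule, the receipt handling is roughly this sketch (the helper and the non-failure branch are my reading of the above, not the actual code):

```python
def status_from_delivery_receipt(current_status, receipt_status):
    """Apply a provider delivery receipt without moving a status backwards."""
    if receipt_status == "failure":
        # Only a still-pending message becomes temporary-failure; anything
        # else that fails is treated as a permanent failure.
        return "temporary-failure" if current_status == "pending" else "permanent-failure"
    if current_status == "sending":
        # Non-failure receipts only apply once, while the message is sending.
        return receipt_status
    return current_status
```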
Update the notifications/sms|email endpoints to send the template version to the queue.
Update the process_job celery task to send the template version to the queue.
When the send_sms|send_email task runs, it will get the template by id and version (sketched below).
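A hedged sketch of what the queued message gains - field and helper names here are illustrative, not the real serialisation:

```python
def build_sms_task_payload(notification, template):
    # The template version now travels with the queued message, so the
    # send_sms|send_email task can load that exact version later instead of
    # whatever version happens to be current when it runs.
    return {
        "to": notification["to"],
        "template": str(template.id),
        "template_version": template.version,  # newly included
        "personalisation": notification.get("personalisation", {}),
    }
```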
Created a data migration script to add the template_version column to jobs and notifications.
The existing jobs and notifications are given the template_version of the current template.
There is a chance this is the wrong template version, but that is deemed okay since the application is not live.
Created a unit test for the dao_get_template_versions method.
Renamed /template/<id>/version to /template/<id>/versions, which returns all versions for that template id and service id.
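The renamed endpoint ends up looking something like this sketch (route shape and serialisation are assumptions; dao_get_template_versions is the method mentioned above):

```python
from flask import Blueprint, jsonify

template_blueprint = Blueprint("template", __name__)


@template_blueprint.route(
    "/service/<service_id>/template/<template_id>/versions", methods=["GET"]
)
def get_template_versions(service_id, template_id):
    # Assumed to return every stored version of the template for this service.
    versions = dao_get_template_versions(service_id, template_id)
    return jsonify(data=[v.serialize() for v in versions]), 200
```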
We were using two different queries to filter template stats to the past
7 days, plus today.
Since we’re storing both as short dates, we can now use the same query
for both.
0      | 1       | 2         | 3        | 4      | 5        | 6      | 7
-------|---------|-----------|----------|--------|----------|--------|-------
Monday | Tuesday | Wednesday | Thursday | Friday | Saturday | Sunday | Monday
So if we are on Monday, the stats should include today, plus everything
back to last Monday.
Previously the template stats query was only going back to the Tuesday.
This should mean the numbers on the dashboard always line up.
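The shared cutoff is just one calculation, sketched here (the helper name is
made up; the stored day is a short 'YYYY-MM-DD' string):

```python
from datetime import date, timedelta


def stats_cutoff_day(today=None, limit_days=7):
    # Include today plus the previous 7 whole days: run on a Monday, this
    # reaches back to last Monday (columns 0-7 in the table above).
    today = today or date.today()
    return (today - timedelta(days=limit_days)).isoformat()


# Both the weekly and today's figures can then share one filter: day >= cutoff.
```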
This means we will retain notifications for a full week, rather than
deleting records that are 7 x 24 hours older than the time the
deletion task runs.
Also the task only needs to run once a day now, so I have changed
the celery config for the deletion tasks.
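Something along these lines in the celery config (the task name and the time
of day are illustrative):

```python
from celery.schedules import crontab

CELERYBEAT_SCHEDULE = {
    # Once a day is enough now that retention is a full week rather than a
    # rolling 7 x 24 hour window.
    "delete-notifications-older-than-seven-days": {
        "task": "delete-notifications-older-than-seven-days",
        "schedule": crontab(hour=3, minute=0),
    },
}
```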
https://www.pivotaltracker.com/story/show/117920839
On the dashboard we want to show counts of notifications sent in the
last 7 days, plus today. So the API endpoint needs to accept an argument
to limit how many days' worth of statistics it will return.
This is a bit fiddly at the DAO level because the date is just stored as
a string.
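At the REST layer it comes down to an optional query argument, roughly as
below (route and DAO names are assumptions); the string comparison at the DAO
level works because 'YYYY-MM-DD' strings sort the same way as dates:

```python
from flask import Blueprint, jsonify, request

template_statistics_blueprint = Blueprint("template_statistics", __name__)


@template_statistics_blueprint.route(
    "/service/<service_id>/template-statistics", methods=["GET"]
)
def get_template_statistics(service_id):
    # Optional ?limit_days=7 restricts how far back the stats go; the DAO
    # turns it into a short-date string and compares against the stored day.
    limit_days = request.args.get("limit_days", type=int)
    stats = dao_get_template_statistics_for_service(service_id, limit_days=limit_days)
    return jsonify(data=[s.serialize() for s in stats]), 200
```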