when we made the change to async persist notifications, we forgot to
pass through api_key_id and key_type. in send_sms/email, for legacy
reasons, they default to None/KEY_TYPE_NORMAL, so regardless of what
your api key was set up as, we would send real messages!
TODO: Once the PaaS transition is complete and the task changes are
reverted, remove the api_key_id and key_type params from the send_*
tasks entirely, as those are only called from the csv job flow, and
don't need them
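A sketch of how the defaults bit us - the constant values and task signature here are illustrative, not the real code:

```python
KEY_TYPE_NORMAL = "normal"  # assumed constant values
KEY_TYPE_TEST = "test"

def send_sms(service_id, notification_id, encrypted_notification,
             api_key_id=None, key_type=KEY_TYPE_NORMAL):
    # legacy defaults: omitting api_key_id/key_type makes the
    # notification look like it came from a normal (live) key
    if key_type == KEY_TYPE_TEST:
        return  # test keys must never reach a real provider
    print(f"sending {notification_id} to the provider for real")

# the async persist task called the send task without forwarding the
# key info, so even test-key traffic fell through to a real send:
send_sms("service-1", "notif-1", "<encrypted payload>")
```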
This is being done for the PaaS migration to allow us to keep traffic coming in whilst we migrate the database.
This uses the same tasks as the CSV-uploaded notifications, with simple changes to not persist the notification and to call into a different task.
This will transform each notification in a job to a row in a file.
The file is then uploaded to S3.
The files will later be aggregated by the notifications-ftp app and sent to DVLA.
The method that uploads the file to S3 should be pulled into the notifications-utils package;
it is the same method used in notifications-admin.
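For reference, a hedged sketch of what the shared helper might look like - the function name and encryption kwarg are assumptions, the real method lives in notifications-admin:

```python
import boto3

def upload_job_to_s3(bucket_name, file_location, file_data):
    # same shape as the admin app's upload: put the object into the
    # bucket with server-side encryption enabled
    obj = boto3.resource("s3").Object(bucket_name, file_location)
    obj.put(Body=file_data, ServerSideEncryption="AES256")
```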
Update PermissionsDao.get_permissions_by_user_id to only return permissions for active services.
This makes the admin app return a 403 if someone (other than a platform admin) tries to look at an inactive service.
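A minimal sketch of the shape of the query, assuming simplified Permission/Service models (the real ones live in the api codebase):

```python
from sqlalchemy import Boolean, Column, ForeignKey, Integer
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Service(Base):
    __tablename__ = "services"
    id = Column(Integer, primary_key=True)
    active = Column(Boolean, default=True)

class Permission(Base):
    __tablename__ = "permissions"
    id = Column(Integer, primary_key=True)
    user_id = Column(Integer)
    service_id = Column(Integer, ForeignKey("services.id"))

def get_permissions_by_user_id(session, user_id):
    # joining to Service filters out permissions on inactive services,
    # so the admin app sees no permissions and returns a 403
    return (
        session.query(Permission)
        .join(Service, Permission.service_id == Service.id)
        .filter(Permission.user_id == user_id, Service.active.is_(True))
        .all()
    )
```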
Removed the active flag from sample_service, since dao_create_service overrides this attribute.
the `to` field stores either the phone number or the email address
of the recipient - it's a bit more complicated for letters, since
there are address lines 1 through 6, and a postcode. In utils, they're
stored alongside the personalisation, and we have to ensure that when
we persist to the database we keep as much parity with utils as
possible, to make our work easier. Aside from sending, the `to` field
is also used to
show recipients on the front end report pages - we've decided that the
best thing to store here is address_line_1 - which is probably going to
be either a person's name, company name, or PO box number
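Roughly, the mapping looks like this - the key names follow the utils convention, but treat the exact keys as assumptions:

```python
# letter personalisation as it arrives from utils, address included
personalisation = {
    "address_line_1": "Ms Example",        # person, company or PO box
    "address_line_2": "123 Example Street",
    "postcode": "SW1A 1AA",
}

# for sms/email, `to` is the phone number or email address; for
# letters we store address_line_1 so report pages have something to show
to = personalisation["address_line_1"]
```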
Also, a lot of tests and test cleanup - I added create_template and
create_notification functions in db.py, so if you're creating new
fixtures you can use these functions, and you won't need to pass
notify_db and notify_db_session around, huzzah!
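Usage is roughly like this - the signatures are from memory, so treat this as a sketch:

```python
from tests.app.db import create_template, create_notification

def test_persists_notification(sample_service, notify_db_session):
    template = create_template(service=sample_service)
    notification = create_notification(template=template)
    assert notification.template_id == template.id
```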
also removed create param from sample_notification since it's not used
anywhere
it's almost entirely duplicated, so share it across.
also clean up retrying - `task.retry(...)` raises
celery.exceptions.Retry itself, so you do not need to `raise` its
return value. additionally, cleaned up the tests around that, since
raising Exception and asserting that Exception is raised is dangerous,
as it could mask actual programming errors
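A small sketch of the pattern - the exception and task body are made up:

```python
from celery import Celery

app = Celery("tasks", broker="memory://")

class TransientProviderError(Exception):
    """stand-in for a provider timeout"""

@app.task(bind=True, max_retries=3)
def deliver(self, notification_id):
    try:
        raise TransientProviderError(notification_id)
    except TransientProviderError as exc:
        # self.retry() raises celery.exceptions.Retry itself, so
        # `raise self.retry(...)` is redundant - calling it is enough
        self.retry(exc=exc)
```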
note that all of these tests have to be checked to ensure that they
still call through to notify_db_session (notify_db is not required) to
tear down the database after the test runs, since it's no longer
necessary to pass it in to the function just to invoke the sample_user
function
it now switches on the utils.template.Template type, since the base
Template type no longer has a subject attribute.
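Something along these lines - the subclass name here is hypothetical; the real classes live in notifications_utils.template:

```python
class Template:
    def __init__(self, content):
        self.content = content

class TemplateWithSubject(Template):  # hypothetical name
    def __init__(self, content, subject):
        super().__init__(content)
        self.subject = subject

def get_subject(template):
    # switch on the template type rather than assuming every
    # template still has a subject attribute
    if isinstance(template, TemplateWithSubject):
        return template.subject
    return None
```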
updated test case to use `sample_email_template_with_placeholders`
instead of `sample_email_template`
- it seems the phone number/email address from the CSV is now passed in as personalisation.
- assume the renderer does the correct thing here. Will need to check with @quis
In this PR the id for the notification is passed in and used to create the notification, which can cause an integrity error.
Normally when we get a SQLAlchemy error here we send the message to the retry queue, but if the notification already exists
we just ignore it.
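A runnable sketch of the error handling, with a stand-in model and retry hook:

```python
from sqlalchemy import Column, String, create_engine
from sqlalchemy.exc import IntegrityError, SQLAlchemyError
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Notification(Base):
    __tablename__ = "notifications"
    id = Column(String, primary_key=True)

def persist_notification(session, notification_id, retry):
    try:
        session.add(Notification(id=notification_id))
        session.commit()
    except IntegrityError:
        # the id was supplied by the caller and the row already
        # exists, so the first attempt won - safe to ignore
        session.rollback()
    except SQLAlchemyError as exc:
        # any other database error still goes to the retry queue
        session.rollback()
        retry(exc)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
with Session(engine) as session:
    persist_notification(session, "abc123", retry=print)
    persist_notification(session, "abc123", retry=print)  # duplicate: ignored
```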
We're formally using the ISO 8601 UTC datetime format, and so the
correct way to output the data is with the timezone designator appended
("Z" in the case of UTC*).
Unfortunately, Python's `datetime` formatting just omits the timezone
designator for naive datetimes, which means we have to append the
string "Z" to the end of all datetime strings we output.
Should be fine, as we will only ever output UTC timestamps anyway.
* https://en.wikipedia.org/wiki/ISO_8601#UTC
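For example:

```python
from datetime import datetime

created_at = datetime.utcnow()        # naive: no tzinfo attached
print(created_at.isoformat())         # e.g. 2016-06-01T12:00:00 (no designator)
print(created_at.isoformat() + "Z")   # e.g. 2016-06-01T12:00:00Z - what we output
```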
1) It's incr, not inc, on the redis client, so renamed the calls everywhere.
2) Redis returns bytes/strings rather than an int, even if the value stored is an int, so cast the result to an int before use. Note: you can set up the GET to do this cast transparently, but I've not done that, as we *may* use GETs for non-int values and the callback sets up the cast for the connection, not for the individual call.
After we have written to the database and placed the notification on a deliver queue, we count it in the cache against the service.
This is the equivalent of doing it at the end of the API call.
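A sketch of the counting, assuming a key-per-service scheme (the key format here is made up):

```python
import redis

redis_client = redis.StrictRedis()

def count_notification(service_id):
    # incr, not inc - creates the key at 0 and increments atomically
    redis_client.incr(f"notifications-{service_id}-count")

def notifications_sent(service_id):
    raw = redis_client.get(f"notifications-{service_id}-count")
    # redis hands back bytes, so cast before comparing against limits
    return int(raw) if raw is not None else 0
```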
mocks create any property you access, so calling functions on them is
inherently risky due to typos quietly doing nothing. instead assert
`.called is False`, which will fail noisily if you typo
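An illustration of why, using the standard library's unittest.mock:

```python
from unittest import mock

task = mock.Mock()

# mocks auto-create attributes, so a typo gives you a fresh Mock
# instead of an error - checks built on it quietly pass:
typo = task.caled        # no AttributeError, just another Mock
assert typo              # truthy, so a sloppy assertion succeeds

# comparing `.called` against False fails loudly on a typo, because a
# Mock is never `False`:
assert task.called is False      # correct spelling: passes
# assert task.caled is False     # typo: AssertionError, as we'd want
```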
- It seems that when we renamed the job.status column, we didn't update the code to use job.job_status.
- Therefore none of the jobs since then have had their job status updated.
- Now that this is fixed we can show the job status when there is an error like "sending exceeds limits".
- This could happen if a job is scheduled to run at the top of the hour: at the time of job creation the limit was not exceeded, but at the time of processing the job it was.
Previously there were 4 queues for sending messages.
This was based on the fact that each notification has 2 actions - persist in the database and send to the provider.
Two queues supported the CSV upload, for the first of these tasks:
- bulk-email
- bulk-sms
And there were two more queues for the tasks that make the 3rd party client calls:
- sms
- email
API Calls just used the latter two queues for both tasks
Added four new queues
- db-email
- db-sms
- send-sms
- send-email
So an API call puts a notification into the db-[type] queue first, which then puts the notification into the send-[type] queue
Bulk queues stay as before.
This will allow us to target these tasks with separate workers and manage them differently.
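A sketch of the two-hop flow for API traffic - the task names and helpers are illustrative:

```python
from celery import Celery

app = Celery("notifications", broker="memory://")

@app.task(name="persist-sms")
def persist_sms(notification):
    print("persisting", notification)   # stand-in for the dao call
    # first action done: hand over to the send queue
    deliver_sms.apply_async((notification,), queue="send-sms")

@app.task(name="deliver-sms")
def deliver_sms(notification):
    print("sending", notification)      # stand-in for the provider client call

# an API call enqueues the persist task onto db-sms:
# persist_sms.apply_async(("notif-1",), queue="db-sms")
```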
Removed all existing statsd logging and replaced with:
- statsd decorator. Infers the stat name from the decorated function, delegates the statsd call to the statsd client, and calls incr and timing for each decorated method. This is applied to all tasks and all dao methods that touch the notifications/notification_history tables (see the sketch at the end of this note).
- statsd client changed to prefix all stats with "notification.api."
- Relies on https://github.com/alphagov/notifications-utils/pull/61 for request logging. Once integrated, we pass the statsd client to the logger, allowing us to emit statsd metrics for all API calls. The logger puts the start time and the method to be called (NOT the url) onto the global flask object. We then construct statsd counters and timers in the following way:
notifications.api.POST.notifications.send_notification.200
This should allow us to aggregate at the level of:
- API or ADMIN
- POST or GET etc
- modules
- methods
- status codes
Finally, we count the callbacks received from 3rd parties against their mapped statuses.
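A minimal sketch of the decorator - the client here is a stand-in, and the stat-name scheme is an assumption:

```python
import functools
import time

class StatsdClient:
    # stand-in: the real client prefixes everything with "notification.api."
    def incr(self, stat):
        print(f"incr notification.api.{stat}")

    def timing(self, stat, elapsed):
        print(f"timing notification.api.{stat} {elapsed:.3f}s")

statsd_client = StatsdClient()

def statsd(func):
    # infer the stat name from the decorated function, then record a
    # counter and a timer for every call
    stat = f"{func.__module__}.{func.__name__}"

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return func(*args, **kwargs)
        finally:
            statsd_client.incr(stat)
            statsd_client.timing(stat, time.monotonic() - start)
    return wrapper

@statsd
def dao_create_notification(notification):
    pass

dao_create_notification("notif-1")
```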