Updated the endpoint for `.set_permissions` to update a user's folder
permissions as well as permissions for a service. User folder
permissions are optional for now, since Admin is not currently passing
this data through.
The data posted to the `set_permissions` endpoint is currently sent as a
list of permissions:
`[{'permission': MANAGE_SETTINGS}, {'permission': MANAGE_TEMPLATES}]`.
This endpoint is also going to be used for folder permissions, so the
data now needs to be nested:
`{'permissions': [{'permission': MANAGE_SETTINGS}, {'permission': MANAGE_TEMPLATES}]}`
This changes the `set_permissions` endpoint to accept data in either
format. Once admin is sending data in the new format, the code can be
simplified.
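A minimal sketch of the dual-format handling (the helper name and the
`folder_permissions` key are illustrative, not the actual code):

```python
def _get_permissions_and_folders(data):
    # New format: {'permissions': [...], 'folder_permissions': [...]}
    if isinstance(data, dict):
        return data.get('permissions', []), data.get('folder_permissions', [])
    # Old format: a bare list of permissions, with no folder data.
    return data, []
```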
We've seen errors caused by what we suspect is a race condition: SES
callback processing tries to look up the notification before the
sender worker has saved the notification reference from the SES POST
response to the database.
This adds a retry to the SES callback task if the notification was not
found and the message is less than 10 minutes old, and removes the
error log message for notifications older than 3 days (since they
might no longer exist in the notifications table and would have been
marked as failed by then either way).
In order to be able to call retry and silence the error log based on
the notification's age, this change inlines the `process_ses_response`
and `update_notification_by_reference` functions into the Celery task.
It also removes a lot of defensive error-handling that doesn't appear
to have been triggered in the last few months (for things like missing
keys in SES callback data).
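A rough sketch of the retry logic, assuming the task and DAO names
below (they are illustrative, and `parse_ses_timestamp` is a
hypothetical helper):

```python
from datetime import datetime, timedelta

from flask import current_app

from app import notify_celery  # assumed import path
from app.dao.notifications_dao import dao_get_notification_by_reference  # assumed


def parse_ses_timestamp(timestamp):
    # SES timestamps look like "2019-01-01T12:00:00.000Z" (assumption).
    return datetime.strptime(timestamp, "%Y-%m-%dT%H:%M:%S.%fZ")


@notify_celery.task(bind=True, name="process-ses-result", max_retries=5, default_retry_delay=300)
def process_ses_results(self, response):
    reference = response["mail"]["messageId"]
    notification = dao_get_notification_by_reference(reference)

    if notification is None:
        message_age = datetime.utcnow() - parse_ses_timestamp(response["mail"]["timestamp"])
        if message_age < timedelta(minutes=10):
            # The sender worker may not have saved the reference yet: try again.
            self.retry()
        elif message_age < timedelta(days=3):
            current_app.logger.error("SES callback for unknown notification %s", reference)
        # Older than 3 days: the row may have been purged, so stay quiet.
        return

    # ... update the notification status inline (previously done by
    # process_ses_response / update_notification_by_reference) ...
```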
Currently we switch providers if:
* `status = delivered` and `updated_at - sent_at > threshold`
* `status = sending` and `now - sent_at > threshold`
Firetext can leave notifications in the `pending` state, which is
equivalent to `sending` in terms of how we should handle it, so this
commit changes the second case to allow `pending` as well as `sending`.
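Expressed as code, the condition now looks roughly like this (names
are assumptions):

```python
from datetime import datetime


def should_switch_providers(notification, threshold):
    if notification.status == "delivered":
        return notification.updated_at - notification.sent_at > threshold
    # "pending" is now treated the same as "sending".
    if notification.status in ("sending", "pending"):
        return datetime.utcnow() - notification.sent_at > threshold
    return False
```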
It makes most sense to collect this at the same time as the estimated
volumes. This means we need to store it somewhere; we can’t put it
straight into the ticket.
This will make it easier to do analysis on the data. Almost all users
are submitting data in a numerical format now anyway, because we ask the
question in a sensible way.
When a service goes live we ask people for their estimated sending
volumes. At the moment we only put this in the ticket and store it in
a spreadsheet.
This means that a service can
- say they want to go live
- say they are sending 100,000 emails per year
- not have created any email templates
- still see ‘create templates’ as ‘completed’ in the go live checklist
If we store this data against the service we can collect it earlier, and
then use it to determine automatically what kind of templates the user
needs to create before their go live checklist can be considered
complete.
The template preview app now accepts a null value for the `filename`
parameter. Previously, if a service didn't have a letter branding
option set, we defaulted to their `dvla_organisation` (probably HM
Government). Now we pass through None, so that we generate letters
without any logo or branding.
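The API-side choice now looks roughly like this (attribute names are
illustrative):

```python
def letter_logo_filename(service):
    # Previously this fell back to service.dvla_organisation (usually
    # HM Government); now a service with no branding set gets no logo.
    return service.letter_branding.filename if service.letter_branding else None
```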
When creating a service, the API accepts a `service_domain` field that
it uses to populate the letter branding: if the service domain is
known to match an existing letter branding option, that branding is
used automatically. However, the admin doesn't know about this field
yet, so it doesn't pass anything through, and the API erroneously
searches the DB for letter branding with a domain of None (which all
the rows currently have).
This meant that when services were created, their letter branding was
set to the most recent row in the DB (that matched None).
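A sketch of the guarded lookup, assuming a `LetterBranding` SQLAlchemy
model (names are illustrative):

```python
def get_letter_branding_for_domain(domain):
    if domain is None:
        # Every letter_branding row currently has domain=None, so filtering
        # on None would just hand back an arbitrary (most recent) row.
        return None
    return LetterBranding.query.filter_by(domain=domain).first()
```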
Because we test for the other properties in the schema.
Also sets `additionalProperties` to `False` so we’re forced to update
the schemas if we make similar changes in the future. This means
removing `created_by` from the test data because it’s not returned by
the real response.
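An illustrative schema fragment (the property names are made up)
showing the stricter shape:

```python
post_response_schema = {
    "type": "object",
    "properties": {
        "id": {"type": "string"},
        "content": {"type": "string"},
    },
    "required": ["id", "content"],
    # Unknown keys (e.g. created_by) now fail validation, so the schema
    # must be updated whenever the real response changes shape.
    "additionalProperties": False,
}
```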
If you’re trying to show what a Notify email will look like in your
caseworking system, all the API gives you at the moment is raw
markdown (with the placeholders replaced).
This isn’t that useful if your caseworkers have no idea what markdown
is. If we also give teams the HTML then they can embed this in their
systems, and the people using those systems will be able to see how
headings, bulleted lists, etc. look.
Step 1 of 2 of turning on folders for all services.
We think it’s a feature which will be useful for the majority of
services, and we think we’ve done enough research to know that it’s
mature enough to release to all services.
Previously, the check was too loose: matching on `"name" in str(exc)`
returns false positives.
By changing from three if statements to a loop we can cut down on
unnecessary code (and ensure that the returned objects are consistent),
and by using the full check constraint name we can be sure that we're
only capturing exactly the right errors. Additionally, don't return
the original data in the error message - it's obvious what the name is
because it'll be populated in the form you just filled in.
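A sketch of the loop, with made-up constraint names and messages:

```python
from sqlalchemy.exc import IntegrityError

CHECK_CONSTRAINT_MESSAGES = {
    "ck_users_mobile_number_if_sms_auth": "Mobile number must be set",
    "ck_service_email_from_unique": "Email from is already in use",
    "ck_service_name_unique": "Service name is already in use",
}


def handle_integrity_error(exc: IntegrityError):
    for constraint, message in CHECK_CONSTRAINT_MESSAGES.items():
        # Matching the full constraint name avoids the false positives
        # that checking `"name" in str(exc)` gave us.
        if constraint in str(exc):
            # Deliberately not echoing the submitted value back.
            return {"result": "error", "message": message}, 400
    raise exc
```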
However, until we can create a letter without a logo, we will still default to `hm-government`, because the `dvla_organisation` is set on the service.
This does simplify the code.
Also removed the inserts to `letter_branding` in the data migration file, because we can deploy this before the rest of the work is finished; we will need to add them later.
Make a decorator that pings Cronitor before and after each task run.
Designed for use with nightly tasks, so we have visibility if they
fail. We have a bunch of Cronitor monitors set up: 5-character keys
that go into a URL we then make a GET request to, with a
self-explanatory URL path (run/fail/complete).
The Cronitor URLs are defined in the credentials repo as a dictionary
of Celery task names to URL slugs. If the name passed in to the
decorator isn't in that dict, the pings won't run.
To use it, all you need to do is call `@cronitor(my_task_name)`
instead of `@notify_celery.task`, and make sure that the task name and
the matching slug are included in the credentials repo (or locally,
JSON-dumped and stored in the `CRONITOR_KEYS` environment variable).
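A sketch of the decorator (reading the keys from Flask config and the
ping URL shape are assumptions):

```python
import functools

import requests
from flask import current_app

from app import notify_celery  # assumed import path


def cronitor(task_name):
    def decorator(func):
        def ping(suffix):
            slug = current_app.config["CRONITOR_KEYS"].get(task_name)
            if slug:  # name not in the dict: skip pinging entirely
                requests.get(f"https://cronitor.link/{slug}/{suffix}")

        @notify_celery.task(name=task_name)
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            ping("run")
            try:
                result = func(*args, **kwargs)
            except Exception:
                ping("fail")
                raise
            ping("complete")
            return result

        return wrapper

    return decorator
```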