If the international_billing_rates.yml has `dlr: null`, that means we
don't know what delivery receipts they provide - they might not provide
any. So if we do get an update, we don't know for sure that the message
was actually delivered - let's not update it.
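A minimal sketch of that guard, assuming the yml is a mapping keyed by dialling prefix (the snippet structure, field values and helper name are illustrative, not the real file contents):

```python
import yaml

# Illustrative snippet of the yml structure; the real file lives in the repo.
INTERNATIONAL_BILLING_RATES = yaml.safe_load("""
44:
  dlr: carrier
7:
  dlr: null
""")


def should_trust_delivery_receipt(prefix):
    """If `dlr` is null we don't know what receipts the networks provide,
    so an incoming update doesn't prove delivery - leave the status alone."""
    rate_info = INTERNATIONAL_BILLING_RATES.get(prefix) or {}
    return rate_info.get("dlr") is not None


assert not should_trust_delivery_receipt(7)
```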
- as the response is fake, the notification's billable_units is left at 0, so the fake DVLA response should also report 0 - otherwise there will be confusing logs reporting a mismatch between page count and billable units for notifications that are only research ones
- the task is not really necessary, as the status is already set to 'sending' before the task is called; if it is not 'sending', that's because we're in research mode or using a test key
- calls the create-fake-response-file task to allow functional tests to run and trigger the update of status to 'delivered'; this happens only on preview and development, so that functional test response files don't pollute the staging and live buckets
- calls the fake-response-file task only when in research mode and only on the preview and development environments, so as not to impact response files on staging and live (see the sketch below)
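A rough sketch of that environment gate (environment names and helper are assumptions, not the exact task wiring):

```python
ALLOWED_ENVIRONMENTS = {"development", "preview"}  # assumed config values


def should_create_fake_response_file(environment, research_mode):
    """Only write the fake DVLA response file in research mode on
    development/preview, so staging and live buckets stay untouched."""
    return research_mode and environment in ALLOWED_ENVIRONMENTS


assert should_create_fake_response_file("preview", research_mode=True)
assert not should_create_fake_response_file("staging", research_mode=True)
```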
The personalisation for letters can take different formats depending on
how the letter was generated - for example, it can contain either
address_line_1 or addressline1. This change ensures that it is always
serialized in the same way.
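Something along these lines, where the helper name is illustrative rather than the actual serializer:

```python
import re


def normalise_key(key):
    """Collapse variants like 'address_line_1' and 'addressline1' into one
    canonical form before the personalisation is serialized."""
    return re.sub(r"[\s_-]", "", key).lower()


personalisation = {"address_line_1": "123 Example Street", "addressline2": "Town"}
print({normalise_key(k): v for k, v in personalisation.items()})
# {'addressline1': '123 Example Street', 'addressline2': 'Town'}
```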
Letters is a mature enough feature now – and one that we’ve been talking
about offering for long enough – that we shouldn’t make people dig
around in the settings.
I think we’d want to wait a bit longer/indefinitely before deciding to
turn it on for existing services across the platform.
- Changed the notification status for letters that DVLA marks
as 'failed' from NOTIFICATION_TECHNICAL_FAILURE to
NOTIFICATION_TEMPORARY_FAILURE.
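In terms of the status mapping, the change looks roughly like this (the DVLA status strings are placeholders and the constant string values are assumptions):

```python
NOTIFICATION_TEMPORARY_FAILURE = "temporary-failure"
NOTIFICATION_TECHNICAL_FAILURE = "technical-failure"
NOTIFICATION_DELIVERED = "delivered"

DVLA_RESPONSE_STATUS_MAP = {
    "Sent": NOTIFICATION_DELIVERED,
    # previously this mapped to NOTIFICATION_TECHNICAL_FAILURE
    "Failed": NOTIFICATION_TEMPORARY_FAILURE,
}
```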
The whitelist was built to help developers and designers making
prototypes do realistic usability testing of them, without having to
go through the whole go-live process.
These users are sending messages using the API. The whitelist wasn’t
made available to users uploading spreadsheets. The users sending one-off
messages are similar to those uploading spreadsheets, not those
using the API. Therefore they shouldn’t be able to use the whitelist to
expand the range of recipients they can send to.
Passing the argument through three methods doesn’t feel that great, but I
can’t think of a better way without major refactoring…
- Also convert the file info to upper() for comparison, rather than lower(),
because the original file names are in upper case. The unit tests contain examples of the returned lists.
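For example (hypothetical helper and file names, just to show the direction of the comparison):

```python
def file_info_matches(file_info, expected_names):
    """Upper-case both sides before comparing, since the original file
    names are upper case."""
    return sorted(n.upper() for n in file_info) == sorted(
        n.upper() for n in expected_names
    )


assert file_info_matches(["RESPONSE.TXT"], ["response.txt"])
```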
Since preview and staging environments don't have a full DVLA
integration they're likely to contain letter notifications in
a 'sending' state. To avoid spamming Deskpro we skip the check
unless we're in a production or test environment.
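A small sketch of that guard, with the environment names taken from the description above:

```python
def should_check_for_stuck_letters(environment):
    """Preview and staging have no full DVLA integration, so letters can sit
    in 'sending' indefinitely - only run the check where it is meaningful."""
    return environment in {"production", "test"}


assert not should_check_for_stuck_letters("preview")
assert should_check_for_stuck_letters("production")
```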
We should receive a response file from DVLA by 4pm the next working
day (next Monday for letters created on Friday, Saturday or Sunday).
The response file triggers a task to update the letters' status from
'sending' to either 'failed' or 'delivered', at which point there
should be no letter notifications in the 'sending' state for that day.
To catch any errors in the process (eg a missing response file from
DVLA) we add a scheduled task that checks letter notifications for the
previous day (or Friday when run on a Monday) and raises a Deskpro
ticket if it finds any in a 'sending' state. We're checking letter
notifications based on the `sent_at` date, which is set when the
letter PDF is sent to DVLA (so for letters created after 5:30pm it
will be the next day).
The task runs at 4:30pm, which should give the response file processing
task enough time to finish if the file was uploaded at 4pm.
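A sketch of the date logic and the filter the task applies; in the real task this would be a query against the Notification model plus a Deskpro API call, both of which are elided here:

```python
from datetime import date, timedelta


def expected_sent_date(today):
    """The sent_at date whose letters should all have left 'sending' by now:
    the previous day, or Friday when the check runs on a Monday."""
    offset = 3 if today.weekday() == 0 else 1
    return today - timedelta(days=offset)


def letters_still_sending(notifications, today):
    """Letters sent to DVLA on the expected date but still 'sending';
    any results should raise a Deskpro ticket."""
    target = expected_sent_date(today)
    return [
        n for n in notifications
        if n["notification_type"] == "letter"
        and n["status"] == "sending"
        and n["sent_at"].date() == target
    ]


# Run on a Monday, so Friday's letters are the ones checked.
assert expected_sent_date(date(2017, 10, 9)) == date(2017, 10, 6)
```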
This means that if we end up with some notifications sending and others not,
due to problems with the FTP connectivity for example, we don't re-send
those that worked.
As a reminder, letter PDF notifications start as 'created' and stay that
way until we have sent the zip file to DVLA, at which point they are
updated to 'sending'.
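So the collect-and-send step only picks up notifications still in 'created', along these lines (a sketch, not the actual query):

```python
def letters_to_send(notifications):
    """Anything already 'sending' has made it into a zip sent to DVLA
    and must not be sent again."""
    return [n for n in notifications if n["status"] == "created"]
```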
Previously, if the SMS recipient was None there would be a 500 error
with no message displayed to the user. We now check if the recipient is
None and raise a BadRequestError if this is the case.
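Roughly (the error class here is a stand-in for the API's BadRequestError and the message text is illustrative):

```python
class BadRequestError(Exception):
    """Stand-in for the API's 400-level error."""

    def __init__(self, message, status_code=400):
        super().__init__(message)
        self.status_code = status_code


def validate_sms_recipient(recipient):
    # Previously a None recipient fell through and surfaced as a bare 500.
    if recipient is None:
        raise BadRequestError("Recipient can't be empty")
    return recipient
```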
PR #1550 added the rate_limit column to the Service table.
This PR removes the rate limits from the config and uses rate_limit from
the Service model instead. Rate limits are still separated into 'team',
'normal' and 'test', but for a given service all three values are the same.
Pivotal story https://www.pivotaltracker.com/story/show/153992529
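The shape of the change is roughly this (stand-in Service object; function name and the default value are just examples):

```python
from collections import namedtuple

# Stand-in for the SQLAlchemy Service model - only the column we care about.
Service = namedtuple("Service", ["rate_limit"])


def rate_limit_for(service, key_type):
    """The limit now comes from service.rate_limit rather than config;
    'team', 'normal' and 'test' keys all get the same value for a service."""
    return service.rate_limit


assert rate_limit_for(Service(rate_limit=3000), "team") == 3000
```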
The history was not being updated properly; we think this is because the declarative attribute is not being set properly by the property.
When reply_to is None it will update the service_letter_contact_id but not the service_letter_contact; we think that when history_meta builds the history class and checks whether a value has been updated, the result depends on which attribute it checks first.
To fix this issue there is a new dao method that updates the reply_to on the Template and inserts a new Template history row.
It seems that selecting the service_letter_contact in the validation method was causing SQLAlchemy to persist the object, so when the dao was called to save the object nothing was different and we didn't persist the history object.
It may be time to take another look at how we version. :(
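A simplified sketch of what that new dao method does (plain dataclasses instead of the SQLAlchemy models, and the method name is an assumption):

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Template:
    id: str
    version: int
    service_letter_contact_id: Optional[str] = None


@dataclass(frozen=True)
class TemplateHistory:
    template_id: str
    version: int
    service_letter_contact_id: Optional[str]


def dao_update_template_reply_to(template, history_rows, new_reply_to_id):
    """Set the reply-to, bump the version and explicitly insert a history
    row, rather than relying on the versioning hooks noticing the change."""
    template.service_letter_contact_id = new_reply_to_id
    template.version += 1
    history_rows.append(
        TemplateHistory(template.id, template.version, new_reply_to_id)
    )


history: List[TemplateHistory] = []
template = Template(id="t1", version=1)
dao_update_template_reply_to(template, history, "contact-1")
assert history[-1].version == 2
```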
By replacing user-provided services with manifest environment variables
we avoid the need to set the application environment variables from the
service data.
Most of the variable names already match the service JSON keys, but we
need to rename the ones that don't (eg MMG and Firetext `api_key`); this
is done in a separate credentials PR.
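For context, the before/after looks something like this (the VCAP_SERVICES shape is the standard Cloud Foundry one, simplified; the variable name is just an example):

```python
import json
import os


def credentials_from_vcap_services(service_name):
    """Old approach: parse VCAP_SERVICES and dig out the credentials for a
    named user-provided service."""
    vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))
    for service in vcap.get("user-provided", []):
        if service.get("name") == service_name:
            return service.get("credentials", {})
    return {}


# New approach: the manifest sets plain environment variables, so config
# reads them directly (renaming, eg for the MMG/Firetext api_key, happens
# in the separate credentials PR).
MMG_API_KEY = os.environ.get("MMG_API_KEY")
```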