The date in the notifications table should always be the most recent date for the template.
Removed the template_type param from the query as well.
Simplified the tests.
The existing endpoint returned a whole notification for the last time the template was used, but that only takes into account data from the last week. This new method lets us say specifically when, if ever, the template was last used, by looking at the ft_notification_status table as well.
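A minimal sketch of the idea, assuming SQLAlchemy models named Notification and FactNotificationStatus with the columns shown; the function name and every model/column name here are assumptions based on the commit text, not the actual implementation.

```python
from sqlalchemy import func

from app import db
from app.models import FactNotificationStatus, Notification


def get_last_used_date_for_template(template_id):
    # recent usage (roughly the last week) lives in the notifications table
    last_recent_use = db.session.query(
        func.max(Notification.created_at)
    ).filter(Notification.template_id == template_id).scalar()

    if last_recent_use:
        return last_recent_use

    # older usage is only visible in the aggregated ft_notification_status table
    return db.session.query(
        func.max(FactNotificationStatus.bst_date)
    ).filter(FactNotificationStatus.template_id == template_id).scalar()
```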
This is to bring it in line with other serialized dates in the User
model, like logged_in_at and password_changed_at.
Also get rid of the check for whether password_changed_at has a value:
it is a non-nullable column, so it always has one.
Also set a default value for email_access_validated_at, to bring
it in line with the other non-nullable columns.
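A small sketch of what the serialization change might look like, assuming the app formats dates with a shared DATETIME_FORMAT constant; the format string and function name are assumptions, and only the column names come from the commit text.

```python
DATETIME_FORMAT = "%Y-%m-%dT%H:%M:%S.%fZ"  # assumed format, for illustration


def serialize_user_dates(user):
    return {
        "logged_in_at": user.logged_in_at.strftime(DATETIME_FORMAT) if user.logged_in_at else None,
        # non-nullable column, so no need to check whether it has a value
        "password_changed_at": user.password_changed_at.strftime(DATETIME_FORMAT),
        # serialized the same way as the other dates; the column also gets a
        # default so, like the other non-nullable columns, it is never empty
        "email_access_validated_at": user.email_access_validated_at.strftime(DATETIME_FORMAT),
    }
```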
when verifying the code is correct. This way, if a user has sms_auth
and we send them a verification code to validate their email access,
and they click the link in the email, their access will be validated
correctly.
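A hedged sketch of that check; only email_access_validated_at comes from the commit text, everything else (the function name, the "email" code_type value, the attributes on the code object) is illustrative.

```python
from datetime import datetime


def use_code_and_validate_email_access(user, verify_code):
    verify_code.code_used = True

    if verify_code.code_type == "email":
        # even an sms_auth user proves access to their email by clicking the
        # link we sent, so record the validation here, not just for email_auth
        user.email_access_validated_at = datetime.utcnow()
```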
and update it when users have to use their email to interact with
the Notify service.
Initial population:
If the user has email_auth, set email_access_validated_at to logged_in_at.
If the user has sms_auth, set it to created_at.
Then:
Update the email_access_validated_at date when:
- a user with email_auth logs in
- a new user is created
- a user resets their password while logged out, meaning we send them an
email with a link they have to click to reset their password.
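The initial population step above would be a one-off backfill; here is a hedged sketch of it as an Alembic migration. The auth_type values and the NULL guard on logged_in_at are assumptions, while the column choices follow the rules above.

```python
from alembic import op


def upgrade():
    # email_auth users last proved email access when they last logged in
    op.execute(
        "UPDATE users SET email_access_validated_at = logged_in_at "
        "WHERE auth_type = 'email_auth' AND logged_in_at IS NOT NULL"
    )
    # sms_auth users only proved it when their account was created
    op.execute(
        "UPDATE users SET email_access_validated_at = created_at "
        "WHERE auth_type = 'sms_auth'"
    )


def downgrade():
    pass
```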
If the letter passes sanitisation, the recipient address will be
returned from template preview, so we want to save it as the `to`
field of the notification.
Template preview will now send an encrypted dict containing all the args
to the `process_sanitised_letter` task, so this updates the task to
handle data in the new format.
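A rough sketch of the updated task, assuming the app's usual celery and encryption objects and a DAO helper for loading the notification; the key names inside the decrypted dict, the task name string, and the imports are assumptions based on the commit text.

```python
from app import encryption, notify_celery
from app.dao.notifications_dao import get_notification_by_id


@notify_celery.task(bind=True, name="process-sanitised-letter")
def process_sanitised_letter(self, sanitise_data):
    # template preview now sends a single encrypted dict of all the args
    letter_details = encryption.decrypt(sanitise_data)

    notification = get_notification_by_id(letter_details["notification_id"])

    if letter_details.get("address"):
        # the letter passed sanitisation, so save the recipient address as
        # the notification's `to` field
        notification.to = letter_details["address"]
```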
Now that the encryption module has been moved from this app to utils, we
can remove it from here (along with its tests) and import it from utils
instead. This also renames the `encryption.py` file to `hashing.py`,
since it no longer contains the encryption class.
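Illustrative only: the hashing helpers stay in this app under the renamed module, while the Encryption class is imported from utils. Both import paths below are assumptions rather than confirmed locations.

```python
from notifications_utils.encryption import Encryption  # exact utils path is an assumption

from app.hashing import check_hash, hashpw  # renamed from app/encryption.py
```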
If doc download returns a 403, that's a screw-up on our side. It's not
helpful to a Notify user for that to be passed on. The only thing they
should care about is a 400, because that means they uploaded a file type
we don't allow.
Everything else should return a 500 internal server error.
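A minimal sketch of that mapping, assuming a requests-style response object and Flask's abort; the 201 success code, the error key, and the function name are assumptions.

```python
from flask import abort


def check_document_download_response(response):
    if response.status_code == 400:
        # the only failure a user can act on: they uploaded a disallowed file type
        abort(400, response.json().get("error"))
    if response.status_code != 201:
        # 403s (and anything else unexpected) are our problem, not the user's
        abort(500)
    return response.json()
```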
We were just ignoring the errors and our users were not fixing things.
Given that 500 texts cost approximately £8, it's not the end of the world.
In the long run we may decide to stop letting people try to send
messages to TV numbers, but this is a quick fix to stop emails coming in
that we then ignore.
These alerts are sent to our postal provider, and they usually arrive as the provider is getting ready to go home for the day or the weekend, which means they get missed or overlooked.
The provider has agreed to receive the alert an hour earlier; perhaps that will improve the response time.
When a precompiled letter is sent via the admin app, we now pass in the address, which can be saved in the Notifications.to field.
Once a precompiled letter sent by the API has passed validation, we can set the address in the Notifications.to field.
The Celery tasks to validate precompiled letters sent by the API will be done in another PR.
test_config manipulates os.environ. os.environ is an `environ` object,
which acts like a dict but isn't one, in some subtle unknowable ways. The
`reload_config` fixture would create a dict copy of the env, and then
just call `os.environ = old_env` afterwards.
Boto3 would then complain that it couldn't load credentials (despite us
using the mock_s3 fixture and also not having creds in the environment
in the first place). Not entirely sure why this happens, but it does.
For some reason, it being a `dict` instead of an `environ` object causes
the mocking of boto3 to fail.
The solution is not to overwrite os.environ entirely; instead, use the
standard dictionary setitem syntax to restore the values to their
previous ones, using `clear` to empty the environment first.
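A sketch of the fixture after the fix, assuming pytest; the fixture name comes from the commit text and the body is illustrative.

```python
import os

import pytest


@pytest.fixture
def reload_config():
    old_env = os.environ.copy()  # a plain dict snapshot of the environ object

    yield

    # Don't do `os.environ = old_env`: that swaps the real `environ` object
    # for a plain dict, which is what made boto3's credential loading fail.
    # Instead, mutate the existing object back to its previous state.
    os.environ.clear()
    for key, value in old_env.items():
        os.environ[key] = value
```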
Currently, if you visit the job page and the job is older than the data retention period, the totals on the page are all wrong, because this query gets the counts from the notification table. With this change the data should always be correct, and we no longer need to look at data retention at all. If the job is new and nothing has been created yet (i.e. the job hasn't started), the page still shows correctly because the outcomes are empty, as expected; once the notifications for the job are created, the numbers will start going up.
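One plausible shape of the reworked query, on the assumption that the counts now come from the ft_notification_status table (which is not subject to data retention); the model, column, and function names are all assumptions based on the commit text.

```python
from sqlalchemy import func

from app import db
from app.models import FactNotificationStatus


def get_notification_outcomes_for_job(service_id, job_id):
    return db.session.query(
        func.sum(FactNotificationStatus.notification_count).label("count"),
        FactNotificationStatus.notification_status,
    ).filter(
        FactNotificationStatus.service_id == service_id,
        FactNotificationStatus.job_id == job_id,
    ).group_by(
        FactNotificationStatus.notification_status,
    ).all()
```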