this means that on non-prod envs, it reflects that environment.
it needs to be a lambda, because the column object is created at import
time, when current_app.config won't have been loaded yet - this means
that when you create a Service object, the lambda executes and grabs the
correct default value
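For example, a column along these lines (a sketch only - the column name, config key and import path are illustrative, not the actual code):

```python
from flask import current_app
from app import db  # assumed shared Flask-SQLAlchemy instance


class Service(db.Model):
    __tablename__ = 'services'

    id = db.Column(db.Integer, primary_key=True)
    # The default is a callable, not a value: SQLAlchemy calls it when a row
    # is inserted, by which point the app context (and its config) exists.
    # A bare current_app.config[...] here would run at import time and fail.
    message_limit = db.Column(
        db.Integer,
        nullable=False,
        default=lambda: current_app.config['DEFAULT_SERVICE_LIMIT'],
    )
```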
We now have a new column in the database, but it isn't being
populated. The first step is to make sure we update this column,
while still keeping the old enum-based column up to date as well.
A couple of changes were needed to support this - one irritating
thing is that if we're ever querying columns individually, including
`Notification.status`, then we'll need to give that column a label,
since under the hood it translates to `Notification._status_enum`.
Accessing status through the ORM (e.g. my_noti.status = 'sending' or
similar) will work fine.
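For example, a column-level query needs the label so the result comes back as `status` rather than `_status_enum` (a sketch - the exact query and import paths are illustrative):

```python
from sqlalchemy import func
from app import db
from app.models import Notification

# Without .label('status') the selected column would be named '_status_enum',
# because Notification.status is a wrapper over the underlying enum column.
rows = (
    db.session.query(
        Notification.status.label('status'),
        func.count(Notification.id).label('count'),
    )
    .group_by(Notification.status)
    .all()
)
```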
"sent" is fine as an internal marker but not very obvious to the end
user that it specifically refers to international messages. We now
say "Sent internationally" in the CSV
Since the response has changed, I have created new endpoints so that the deployments for Admin are more manageable.
Removed print statements from some tests.
We only need one rate per channel; this change reflects that. The provider_rates table has been left in place for now, as it is still not being used.
A new dao has been added to select the right rate for the given notification_type and date of the notification.
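Roughly, the dao looks something like this (a sketch - the model and column names here are assumptions, not the actual schema):

```python
from app import db
from app.models import Rate  # assumed model with notification_type / valid_from / rate columns


def dao_get_rate_for_notification(notification_type, sent_at):
    # Pick the most recent rate that was already in force when the
    # notification was sent, for the matching channel.
    return (
        Rate.query
        .filter(Rate.notification_type == notification_type)
        .filter(Rate.valid_from <= sent_at)
        .order_by(Rate.valid_from.desc())
        .first()
    )
```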
* Introduces a separate method on Notification to serialise the notification,
  ready for CSV output
* Fixes an issue where job_row_number = 0 was not being accounted for correctly
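The job_row_number = 0 problem is the usual falsy-value pitfall; the fix is along these lines (the function name and the +1 offset are illustrative, not the actual code):

```python
def csv_row_number(notification):
    # Buggy version: `if notification.job_row_number:` is False for row 0,
    # so the first row of a job looked like it had no row number at all.
    # Fixed version: compare against None explicitly so 0 is a valid value.
    if notification.job_row_number is not None:
        return notification.job_row_number + 1
    return None
```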
ready to send == we have created a DVLA file in S3; the job id is in the file name
sent to dvla == we have created an aggregate DVLA file to send for the day and have sent the file via SFTP
when we change the last logged-in time, set the current session id to
a random UUID
this way, we can compare it to the cookie a user has, and if they
differ then we can log them out
also update user.logged_in_at at 2FA rather than at password check, since
that feels more accurate
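A minimal sketch of the idea, assuming a current_session_id column on User (the function name and column names are illustrative):

```python
import uuid
from datetime import datetime

from app import db  # assumed shared Flask-SQLAlchemy instance


def dao_update_user_logged_in(user):
    # Rotate the session id every time we record a successful 2FA login;
    # any cookie still carrying the old session id no longer matches and
    # that session can be logged out.
    user.logged_in_at = datetime.utcnow()
    user.current_session_id = uuid.uuid4()
    db.session.add(user)
    db.session.commit()
```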
The pep8 tool was renamed to pycodestyle; this issue explains why:
https://github.com/PyCQA/pycodestyle/issues/466
This commit changes our tests to use pycodestyle instead of pep8.
It also means:
- making a couple of whitespace changes to appease the linter
- disabling warnings for bare `except`s (i.e. `except:` instead of `except
  ValueError:`); this seems like a sensible thing to catch, but I'm not
  going to make meaningful code changes in this commit
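For reference, a sketch of how such a check might be set up with the bare-except warning skipped (E722 is pycodestyle's code for a bare except; the paths and line length here are assumptions, not the project's actual settings):

```python
import pycodestyle


def test_code_style():
    # E722 ("do not use bare except") is ignored for now rather than
    # changing code in this commit. Paths and line length are illustrative.
    style = pycodestyle.StyleGuide(ignore=['E722'], max_line_length=120)
    result = style.check_files(['app', 'tests'])
    assert result.total_errors == 0
```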
If the template is marked as priority, the notification will be sent using the `notify` queue.
The `notify` queue is a low-volume queue; messages here will not be queued behind a large job and should be delivered within a more consistent time frame.
- Added templates.process_type and templates_history.process_type columns.
- Added a template_process_type table to handle the enum for templates.process_type; initial values are normal and priority
https://www.pivotaltracker.com/story/show/135429147
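A minimal sketch of the queue selection this describes (the default queue name, helper function and the task call are assumptions, not the actual code):

```python
PRIORITY_QUEUE = 'notify'
DEFAULT_QUEUE = 'bulk-tasks'  # assumed name for the normal, high-volume queue


def queue_for_template(template):
    # Priority templates skip the high-volume queue so they are never stuck
    # behind a large job.
    return PRIORITY_QUEUE if template.process_type == 'priority' else DEFAULT_QUEUE


# e.g. when queuing the send:
# send_task.apply_async([notification_id], queue=queue_for_template(template))
```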
set all existing rows to have a version of 1 (also copy across values
to populate the new provider_details_history table in the upgrade
script)
in dao_update_provider_details, bump provider_details.version by 1
and then duplicate the row into the history table as a new row
(done manually, as opposed to using the decorator from template_history,
since this is only edited in this one place and the decorator is icky)
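Roughly the shape the dao takes (a sketch, assuming a ProviderDetailsHistory model with a from_original helper; names are assumptions):

```python
from app import db
from app.models import ProviderDetailsHistory  # assumed history model


def dao_update_provider_details(provider_details):
    # Bump the version on the live row, then snapshot it into the history
    # table as a new row carrying the same version number.
    provider_details.version += 1
    history = ProviderDetailsHistory.from_original(provider_details)
    db.session.add(provider_details)
    db.session.add(history)
    db.session.commit()
```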
* to be used for auditing changes to provider details
* not hooked in yet
* also made provider_details.active non-nullable. set all existing null
  provider_details.active values to FALSE. none are set to false on live so
  this should hopefully be fairly uncontroversial
* refactored NotificationHistory.from_notification and the update-from-notification
  logic so they can be shared with provider_details_history
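Roughly the shape that shared helper might take (a sketch; the class and method names are assumptions, not necessarily the actual refactor):

```python
class HistoryModel:
    # Shared base for *_history models: copy every column value that exists
    # on both the original model and the history model.
    @classmethod
    def from_original(cls, original):
        history = cls()
        history.update_from_original(original)
        return history

    def update_from_original(self, original):
        for column in original.__table__.columns:
            if hasattr(self, column.name):
                setattr(self, column.name, getattr(original, column.name))
```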