This is to bring it in line with other serialized dates in the User
model, like logged_in_at and password_changed_at.
Also get rid of the check that password_changed_at has a value: it is
a non-nullable column, so it will always have a value.
Also set a default value for email_access_validated_at, to bring it in
line with other non-nullable columns, and update it when users have to
use their email to interact with the Notify service.
Initial population:
If the user has email_auth, set email_access_validated_at to
logged_in_at.
If the user has sms_auth, set it to created_at.
Then:
Update the email_access_validated_at date when:
- a user with email_auth logs in
- a new user is created
- a user resets their password when logged out, meaning we send them an
  email with a link they have to click to reset their password.
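A minimal sketch of the initial population as a raw SQL data migration,
assuming Alembic and the column names above; null handling (eg users
who have never logged in) is glossed over:

```python
from alembic import op


def upgrade():
    # Users with email_auth last proved access to their email when
    # they logged in.
    op.execute(
        "UPDATE users SET email_access_validated_at = logged_in_at "
        "WHERE auth_type = 'email_auth'"
    )
    # Users with sms_auth have only proved email access at sign-up.
    op.execute(
        "UPDATE users SET email_access_validated_at = created_at "
        "WHERE auth_type = 'sms_auth'"
    )
```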
Now that the encryption module has been moved from this app to utils, we
can remove it from here (along with its tests) and import it from utils
instead. This also renames the `encryption.py` file to `hashing.py`,
since it no longer contains the encryption class.
- Check that the right keys are in new history rows
- Improve the model and get rid of the old revision version
- Add the updated migration file
- Test the data when inserting into inbound sms history
SMS and emails may be marked as `NOTIFICATION_PENDING`. These will be
billed, as they will have been sent to the provider and will eventually
move to a final state such as `NOTIFICATION_DELIVERED` or
`NOTIFICATION_PERMANENT_FAILURE`.
This change fixes a discrepancy on the billing page where the number
of messages being billed was less than the number of messages reported
as sent on a service's dashboard when some of those messages were in a
pending state.
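A minimal sketch of the fix, not the actual Notify code: pending
messages join the set of statuses the billing query counts. The status
strings are assumptions based on the constants named above:

```python
NOTIFICATION_PENDING = "pending"
NOTIFICATION_DELIVERED = "delivered"
NOTIFICATION_PERMANENT_FAILURE = "permanent-failure"

# Pending messages have already been handed to the provider, so they
# are billable even though they haven't reached a final state yet.
BILLABLE_STATUSES = {
    NOTIFICATION_PENDING,
    NOTIFICATION_DELIVERED,
    NOTIFICATION_PERMANENT_FAILURE,
}


def is_billable(status: str) -> bool:
    return status in BILLABLE_STATUSES
```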
In reality, I don't think this bug would have had any lasting effect
on billing, as messages would not stay in the pending state for long
and billing calculations would happen after that point.
The table will contain notification ids for services that have returned letters. This will make it easy to query the data in Notification_history since we can join on the primary key.
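A hypothetical sketch of the table, assuming SQLAlchemy declarative
models as used elsewhere in this app; the real model may differ:

```python
from datetime import datetime
from uuid import uuid4

from sqlalchemy import Column, DateTime
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class ReturnedLetter(Base):
    __tablename__ = "returned_letters"

    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid4)
    # The notification id, so queries can join straight onto the
    # primary key of notification_history.
    notification_id = Column(UUID(as_uuid=True), nullable=False, unique=True)
    reported_at = Column(DateTime, nullable=False, default=datetime.utcnow)
```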
Previously we checked the notifications table and, if the results were
zero, checked the notification history table to see if there was data
in there. Even when we know the data can't be in notifications, we
were still checking it first. These queries take half a second per
service, and we do at least ten of them for each of the five thousand
services we have in Notify. Most of these services have no data in
either table for any given day, and we can reduce the number of
queries we do by only checking one table.
Check the data retention for a service, and then, if the date is older
than the retention, get the data from the history table.
NOTE: This requires that the delete tasks haven't run yet for the day!
If your retention is three days, this will look in the Notification
table for data from three days ago - expecting that shortly after the
task finishes, we'll delete that data.
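A sketch of that selection logic under the stated assumption; the model
names and the retention helper are stand-ins, not the real code:

```python
from datetime import date, timedelta


class Notification: ...          # stand-ins for the real models
class NotificationHistory: ...


def get_retention_days(service) -> int:
    # Hypothetical helper: a service's data retention in days.
    return getattr(service, "data_retention_days", 7)


def model_for_day(service, day: date):
    oldest_still_in_notifications = date.today() - timedelta(
        days=get_retention_days(service)
    )
    if day < oldest_still_in_notifications:
        return NotificationHistory
    # NOTE: assumes the daily delete task hasn't run yet, so data from
    # exactly `retention_days` ago is still in the Notification table.
    return Notification
```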
We have a table for the datetime dimension that we no longer use and
that I believe can be dropped.
The first step is to remove the model and release the change. The next
step will be to add a database migration to drop the actual table. I
believe we need to do it in this order, and it can't be done as a
single PR, presumably so that no running code still references the
model when the table is dropped.
This is only applicable when getting a single notification by id. It's
also ignored if the notification isn't a letter.
Otherwise, it overwrites the 'body' field in the response.
If the notification's pdf is available, it returns that, base64
encoded. If the pdf is not available, it returns an empty string.
The pdf won't be available if the notification's status is:
* pending-virus-scan
* virus-scan-failed
* validation-failed
* technical-failure
The pdf will be retrieved from the correct s3 bucket based on its
type.
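A minimal sketch of that behaviour; the bucket-selection helper and the
key format are assumptions, not the real implementation:

```python
import base64

STATUSES_WITHOUT_PDF = {
    "pending-virus-scan",
    "virus-scan-failed",
    "validation-failed",
    "technical-failure",
}


def letter_body(notification, s3_client, bucket_for):
    if notification.status in STATUSES_WITHOUT_PDF:
        # No (trustworthy) pdf exists yet, so return an empty string.
        return ""
    obj = s3_client.get_object(
        Bucket=bucket_for(notification),  # bucket depends on letter type
        Key=f"{notification.id}.pdf",     # assumed key format
    )
    return base64.b64encode(obj["Body"].read()).decode("ascii")
```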
This will let us do some filtering of this list in the admin. It’s
better to do it there because it means the admin can use the same cached
response from Redis each time.
Previously we were doing it based on their email address. This will also
apply it if they self-select as a GP surgery, even if they don’t have an
NHS email address.
Although their allowances are the same as what we call `nhs_local` it
makes more sense to store them separately because:
- we already present them as two separate choices to the user
- we may want to handle them differently in the future, eg in terms of
what branding choices are available to them
This table is no longer used or referenced in the code.
We can remove this mapping table now that the organisation to service relationship is handled by the foreign key in Services.
This is the second commit in the series to add organisation_id to Service.
- Data migration to update services.organisation_id from data in organisation_to_service
(The rollback will lose any updates to organisation unless the script
is updated to set organisation_to_service from service.organisation_id)
- Update Service.organisation relationship to a ForeignKey relationship
to Organisation.
- Update Organisation.services to a backref relationship to Service.
- Add organisation_id to the Service data model.
- Update the methods that create a service and associate a service with
an organisation to set the organisation_id on the Service.
- Create the missing test: if the service user's email matches a domain
for an organisation, then associate the service with the organisation
and inherit crown and organisation_type from the organisation.
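A sketch of the final shape of the relationship, assuming SQLAlchemy
declarative models like the rest of the app:

```python
from sqlalchemy import Column, ForeignKey
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()


class Organisation(Base):
    __tablename__ = "organisation"
    id = Column(UUID(as_uuid=True), primary_key=True)


class Service(Base):
    __tablename__ = "services"
    id = Column(UUID(as_uuid=True), primary_key=True)
    # Replaces the organisation_to_service mapping table.
    organisation_id = Column(
        UUID(as_uuid=True), ForeignKey("organisation.id"), nullable=True
    )
    # Organisation.services becomes a backref on this relationship.
    organisation = relationship("Organisation", backref="services")
```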
We build up one personalisation dict, and then pass it in to all the
different templates, so be careful editing things. Also of note, we
check whether agreement_signed_on_behalf_of is set, and send a
different template with slightly different wording to the person who
clicked the confirm button.
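A sketch of that flow; the template ids, field names and the
send_email helper are placeholders, not the real ones:

```python
TEMPLATE_SIGNED = "template-id-signed"                      # placeholder
TEMPLATE_SIGNED_ON_BEHALF = "template-id-signed-on-behalf"  # placeholder


def send_agreement_confirmations(agreement, send_email):
    # One personalisation dict is shared by every template, so any
    # edit here affects all of them.
    personalisation = {
        "organisation_name": agreement.organisation_name,
        "signed_on_behalf_of": agreement.signed_on_behalf_of or "",
    }
    if agreement.signed_on_behalf_of:
        # Slightly different wording for the person who clicked confirm.
        send_email(
            TEMPLATE_SIGNED_ON_BEHALF,
            agreement.signed_by_email,
            personalisation,
        )
    else:
        send_email(TEMPLATE_SIGNED, agreement.signed_by_email, personalisation)
```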
At the moment this response returns a list of service IDs for hundreds
of organisations.
The admin app doesn’t use this information, but having to wait for it to
be serialized and sent across the network slows it down all the same.
This is changing because we’re going to introduce accepting contracts
and MoUs online.
Previously
---
We had one column for who signed the agreement, which is foreign keyed
to the user table. This is still relevant, because there will always be
a user who is clicking the button.
Now
---
We add two new fields for the name and email address of the person on
whose behalf the agreement is being accepted. This person:
- is different from the one signing the agreement
- won’t necessarily have a Notify account
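A sketch of the two new columns with assumed names; the existing
signed-by foreign key stays as-is:

```python
from sqlalchemy import Column, ForeignKey, String
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Agreement(Base):  # placeholder name for the real model
    __tablename__ = "agreements"
    id = Column(UUID(as_uuid=True), primary_key=True)
    # Existing column: the user who clicked the button.
    signed_by_id = Column(UUID(as_uuid=True), ForeignKey("users.id"))
    # New: who the agreement was accepted on behalf of. They may not
    # have a Notify account, so plain text rather than a users FK.
    on_behalf_of_name = Column(String(255), nullable=True)
    on_behalf_of_email_address = Column(String(255), nullable=True)
```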
For a user to be able to be archived, each service that they are a
member of must have at least one other user who is active and who has
the 'manage-settings' permission.
To archive a user we remove them from all their services and
organisations, remove all permissions that they have and change some of
their details:
- email_address will start with '_archived_<date>'
- the current_session_id is changed (to sign them out of their current
session)
- mobile_number is removed (so we also need to switch their auth type to
email_auth)
- password is changed to a random password
- state is changed to 'inactive'
If any of the steps fail, we roll back all changes.
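A sketch of the archive flow with its all-or-nothing semantics; helper
names and the session handling are assumptions:

```python
from datetime import date
from uuid import uuid4


def archive_user(user, db_session):
    try:
        for service in list(user.services):
            service.users.remove(user)
        for organisation in list(user.organisations):
            organisation.users.remove(user)
        user.permissions = []

        user.email_address = f"_archived_{date.today()}_{user.email_address}"
        user.current_session_id = uuid4()  # signs them out
        user.mobile_number = None
        user.auth_type = "email_auth"      # no mobile number left
        user.password = uuid4().hex        # random, unguessable
        user.state = "inactive"

        db_session.commit()
    except Exception:
        # If any step fails, roll back every change.
        db_session.rollback()
        raise
```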
go_live_user_id: the user that requested the service to go live.
go_live_at: the DateTime the service went live.
There will be a data migration from the beta partners spreadsheet to
backfill the data.
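A sketch of the new columns on Service, assuming SQLAlchemy as
elsewhere; both are nullable pending the spreadsheet backfill:

```python
from sqlalchemy import Column, DateTime, ForeignKey
from sqlalchemy.dialects.postgresql import UUID
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Service(Base):
    __tablename__ = "services"
    id = Column(UUID(as_uuid=True), primary_key=True)
    # The user that requested the service to go live.
    go_live_user_id = Column(
        UUID(as_uuid=True), ForeignKey("users.id"), nullable=True
    )
    # When the service went live.
    go_live_at = Column(DateTime, nullable=True)
```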
This relationship is via the `Organisation` now; we no longer use this
column to fudge a relationship by matching the user’s email address
against something in these columns.
Needed to update old migration scripts so that the email_branding name
is not null when creating the test dbs.
This should not affect the migrations elsewhere.