Added a new JSONB column, folder_permissions, to the invited_users table
to store a list of folders that an invited user can see. This is
nullable for now, but will be back-populated and made non-nullable
later.
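
A minimal alembic sketch of that migration (revision boilerplate omitted; not the real migration file):

```python
import sqlalchemy as sa
from alembic import op
from sqlalchemy.dialects.postgresql import JSONB


def upgrade():
    # nullable for now; a later migration will back-populate the data
    # and then make the column non-nullable
    op.add_column(
        'invited_users',
        sa.Column('folder_permissions', JSONB, nullable=True),
    )


def downgrade():
    op.drop_column('invited_users', 'folder_permissions')
```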
Currently we have
- a thing in the database called an ‘organisation’ which we don’t use
- the idea of an organisation which we derive from the user’s email
address and is used to set the default branding for their service and
determine whether they’ve signed the MOU
We should make these two things into one thing, by storing everything
we know about an organisation against that organisation in the database.
This will be much less laborious than storing it in a YAML file that
needs a deploy every time it’s updated.
An organisation can now have:
- domains which we can use to automatically associate services with it
(eg anyone whose email address ends in `dwp.gsi.gov.uk` gets services
they create associated to the DWP organisation)
- default letter branding for any new services
- default email branding for any new services
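
A sketch of what the consolidated model could look like. The column names and types are illustrative guesses from the description above, not the real schema; `letter_branding` and `email_branding` are assumed table names:

```python
import uuid

from sqlalchemy.dialects.postgresql import JSONB, UUID

from app import db  # assumed application db handle


class Organisation(db.Model):
    __tablename__ = 'organisation'

    id = db.Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    name = db.Column(db.String(255), unique=True, nullable=False)

    # email domains used to auto-associate new services with this
    # organisation, eg 'dwp.gsi.gov.uk'
    domains = db.Column(JSONB, nullable=True)

    # defaults applied to any new service created under this organisation
    letter_branding_id = db.Column(
        UUID(as_uuid=True), db.ForeignKey('letter_branding.id'), nullable=True
    )
    email_branding_id = db.Column(
        UUID(as_uuid=True), db.ForeignKey('email_branding.id'), nullable=True
    )

    # whether they've signed the MOU
    agreement_signed = db.Column(db.Boolean, nullable=True)
```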
It should be nullable so that we can tell whether or not someone has
already answered the question.
No real users have entered data into this column yet, so it’s fine to
wipe it.
Changed the user_to_service mapping table into a model called
ServiceUser. When looking at users who have permission for a folder
we are only interested in users for a particular service, not all users,
so we can use the ServiceUser model to access folder permissions.
Added a user_folder_permissions table which contains the service_id,
user_id and template_folder_id. There are links between
user_folder_permissions and TemplateFolder, and between
user_folder_permissions and ServiceUser.
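
A sketch of how the pieces fit together (column and constraint details are assumptions based on the description above):

```python
from sqlalchemy.dialects.postgresql import UUID

from app import db  # assumed application db handle

# links a user's membership of a service to the folders they can see
user_folder_permissions = db.Table(
    'user_folder_permissions',
    db.Model.metadata,
    db.Column('user_id', UUID(as_uuid=True), primary_key=True),
    db.Column('service_id', UUID(as_uuid=True), primary_key=True),
    db.Column(
        'template_folder_id',
        UUID(as_uuid=True),
        db.ForeignKey('template_folder.id'),
        primary_key=True,
    ),
    db.ForeignKeyConstraint(
        ['user_id', 'service_id'],
        ['user_to_service.user_id', 'user_to_service.service_id'],
    ),
)


class ServiceUser(db.Model):
    # the existing user_to_service mapping table, promoted to a model
    __tablename__ = 'user_to_service'

    user_id = db.Column(UUID(as_uuid=True), db.ForeignKey('users.id'), primary_key=True)
    service_id = db.Column(UUID(as_uuid=True), db.ForeignKey('services.id'), primary_key=True)

    # only the folders this user can see within this particular service
    folders = db.relationship('TemplateFolder', secondary=user_folder_permissions)
```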
It makes most sense to collect this at the same time as the estimated
volumes, which means we need to store it somewhere; we can't put it
straight into the ticket.
This will make it easier to do analysis on the data. Almost all users
are submitting data in a numerical format now anyway, because we ask the
question in a sensible way.
When a service goes live we ask people for their estimated sending
volumes. At the moment we only put this in the ticket, and store it in
a spreadsheet.
This means that a service can
- say they want to go live
- say they are sending 100,000 emails per year
- not have created any email templates
- still see ‘create templates’ as ‘completed’ in the go live checklist
If we store this data against the service we can collect it earlier, and
then use it to determine automatically what kind of templates the user
needs to create before their go live checklist can be considered
complete.
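
As a purely hypothetical sketch of the kind of check this unlocks (the `volume_*` columns and the helper name are invented for illustration):

```python
def missing_template_types(service):
    """Template types a service still needs to create before its go live
    checklist can be considered complete, based on the volumes it
    declared. volume_email/volume_sms/volume_letter are hypothetical
    columns holding the estimated annual sending volumes."""
    existing = {template.template_type for template in service.templates}
    declared = {
        'email': service.volume_email,
        'sms': service.volume_sms,
        'letter': service.volume_letter,
    }
    return [
        template_type
        for template_type, volume in declared.items()
        if volume and template_type not in existing
    ]
```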
However, until we can create a letter without a logo, we will still default to hm-government, because the dvla_organisation is set on the service.
This does simplify the code.
Also removed the inserts to letter_branding from the data migration file, so that we can deploy this before the rest of the work is finished. We will still need to do those inserts later.
The sent_by_email_address field was added because sometimes two
people at one institution have the same name, in which case the email
address, which is unique, is more useful.
Added cancelled letters to the number of failed letters in the statistics
that get used for the dashboard. At some point, we want to stop
including cancelled letters in the stats, but for now this keeps things
consistent with our current letter failure state, permanent-failure.
Bumped notifications-utils to 3.7.0. Version 3.7.0 includes the
`convert_utc_to_bst` and `convert_bst_to_utc` functions and the
`LETTER_PROCESSING_DEADLINE` constant, so these have been removed from
this repo and anywhere using these has now been updated to get these
from `notifications-utils`.
Also bumped pytest by a patch version to bring in a bug fix.
new endpoints:
/services/<service_id>/move-to-folder
/services/<service_id>/move-to-folder/<target_template_folder_id>
* takes in a dict containing lists of `templates` and `folders` uuids.
* sets parent of templates and folders to the folder specified in the
URL. Or None, if there was no id specified.
* if any template or folder has a different service id, then the whole
update fails
* if any folder is an ancestor of the target folder, then the whole
update fails (as that would cause a cyclical folder structure).
* the whole function is wrapped in a single `transactional` decorator,
so in case of error nothing will be saved.
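
A sketch of the core update under those rules. The dao_* helpers, `is_ancestor_of` and `InvalidRequest` are assumed names for illustration, not necessarily the real ones:

```python
# all of these import paths are assumptions
from app.dao.dao_utils import transactional
from app.dao.template_folder_dao import dao_get_template_folder_by_id_and_service_id
from app.dao.templates_dao import dao_get_template_by_id_and_service_id
from app.errors import InvalidRequest


@transactional  # single transaction: if anything raises, nothing is saved
def dao_move_to_folder(service_id, target_template_folder_id, template_ids, folder_ids):
    # target is None when no id was given in the URL, ie move to the root
    target_folder = None
    if target_template_folder_id is not None:
        target_folder = dao_get_template_folder_by_id_and_service_id(
            target_template_folder_id, service_id
        )

    for folder_id in folder_ids:
        # lookups are scoped by service_id, so a folder belonging to a
        # different service is not found and the whole update fails
        folder = dao_get_template_folder_by_id_and_service_id(folder_id, service_id)
        # moving a folder underneath one of its own descendants would
        # create a cyclical folder structure
        if target_folder is not None and folder.is_ancestor_of(target_folder):
            raise InvalidRequest('cannot move a folder into its own subtree', status_code=400)
        folder.parent = target_folder

    for template_id in template_ids:
        template = dao_get_template_by_id_and_service_id(template_id, service_id)
        template.folder = target_folder
```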
If parent_folder_id is given, check that the folder exists and belongs to the same service. If it does, add the folder to the template model object; the relationship will be persisted when the template is saved. If the folder does not exist, or belongs to a different service, return a ResultNotFound error.
When creating the Template from_json, the folder is passed in. Since some validation needs to happen (that the folder exists and is for the same service), the folder is passed through to the Template.from_json method.
When the template is persisted so is the relationship to folders.
TODO: If the folder is invalid a specific message should be returned.
Updated jsonschema to Draft 7, which allows conditional validation on subject: if `template_type` is 'email' or 'letter' then `subject` is required.
Draft 7 is backward compatible with Draft 4.
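
For example, Draft 7's `if`/`then` keywords let the conditional live directly in the schema (a cut-down sketch, not the full template schema):

```python
from jsonschema import Draft7Validator

schema = {
    '$schema': 'http://json-schema.org/draft-07/schema#',
    'type': 'object',
    'properties': {
        'template_type': {'enum': ['sms', 'email', 'letter']},
        'subject': {'type': 'string'},
    },
    'required': ['template_type'],
    # subject is only required for email and letter templates
    'if': {'properties': {'template_type': {'enum': ['email', 'letter']}}},
    'then': {'required': ['subject']},
}

validator = Draft7Validator(schema)
assert validator.is_valid({'template_type': 'sms'})        # no subject needed
assert not validator.is_valid({'template_type': 'email'})  # subject missing
assert validator.is_valid({'template_type': 'email', 'subject': 'hi'})
```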
When creating TemplateRedacted, I've built the dict depending on whether created_by or created_by_id exists.
* create template folder
* rename template folder
* get list of template folders for service (not nested/presented in any
particular way)
* delete template folder
Also removed `lazy=dynamic` from the `template_folder.templates`
relationship. lazy=dynamic returns a query object (which you can then
filter further). We just want to return the entire fetched list, at
least for now.
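
In sketch form, the change is just dropping the lazy argument (assuming the relationship goes through the template_folder_map table described below):

```python
# before: lazy='dynamic' returns an AppenderQuery you can keep filtering
templates = db.relationship('Template', secondary=template_folder_map, lazy='dynamic')

# after: template_folder.templates is the plain fetched list of Templates
templates = db.relationship('Template', secondary=template_folder_map)
```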
The `@version_class` decorator looks at every dirty (modified) model in
the session to work out which new history models to create. However, if
a query runs while there are dirty items in the session, sqlalchemy
might autoflush them to the database, clearing the dirty state before
the decorator sees it.
We ran into problems with the archive service function, which is
versioned for api keys, templates and services. When constructing the
TemplateHistory objects, `history_meta.py::create_history` would call
getattr on `Template.folders`, which would make a database call to join
across to the TemplateFolder objects - this would then flush the dirty
Service object from the session before the ServiceHistory object was
created.
To get around this, we eager load the Template.folder object, joining
on to it automatically when the Template is fetched. That way, it
doesn't make a SELECT mid-way through the version decorator, and the
history is preserved.
Note: This relationship is only on Template, not TemplateHistory - so
we're not doing this join every single time we send a message.
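
The fix in sketch form (`lazy='joined'` is one way to express the eager load; template_folder_map is the mapping table described in the next commit):

```python
# on the Template model (not TemplateHistory):
folder = db.relationship(
    'TemplateFolder',
    secondary=template_folder_map,
    uselist=False,
    lazy='joined',  # eager LEFT OUTER JOIN whenever a Template is
                    # fetched, so no SELECT (and no autoflush) fires
                    # mid-way through the version decorator
)
```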
a TemplateFolder has a service, a name, and a parent. Parent is a
nullable foreign key pointing to another TemplateFolder instance. We
don't do any checks here for cyclical or otherwise invalid folder
structures so keep your data clean, folks!
Unsurprisingly, a Template can be part of a TemplateFolder - there's a
mapping class (template_folder_map to avoid giving it a dumb name) -
this mapping table shouldn't be interacted with directly - rather, you
should use the `Template.folder` or `TemplateFolder.templates`
relationship.
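
In sketch form (column details assumed from the description above):

```python
import uuid

from sqlalchemy.dialects.postgresql import UUID

from app import db  # assumed application db handle

# don't touch this table directly; use Template.folder or
# TemplateFolder.templates instead
template_folder_map = db.Table(
    'template_folder_map',
    db.Model.metadata,
    # one row per template: a template is in at most one folder
    db.Column('template_id', UUID(as_uuid=True), db.ForeignKey('templates.id'), primary_key=True),
    db.Column('template_folder_id', UUID(as_uuid=True), db.ForeignKey('template_folder.id'), nullable=False),
)


class TemplateFolder(db.Model):
    __tablename__ = 'template_folder'

    id = db.Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    service_id = db.Column(UUID(as_uuid=True), db.ForeignKey('services.id'), nullable=False)
    name = db.Column(db.String, nullable=False)
    # nullable self-referential parent; no cycle checks at this level
    parent_id = db.Column(UUID(as_uuid=True), db.ForeignKey('template_folder.id'), nullable=True)

    parent = db.relationship('TemplateFolder', remote_side=[id])
    templates = db.relationship('Template', secondary=template_folder_map)
```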
- pass new, sanitised pdf for sending
- move invalid pdfs to a newly created bucket
- set status for notifications that failed pdf validation to a new status, validation-failed
- adjust existing tests
Added a filename column to the dvla_organisation table and populated it
with the filenames that are currently hard-coded in template-preview.
The filenames for letter logos are going to be stored in the database,
instead of in template-preview.
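
A sketch of that migration (the id and filename values are illustrative only):

```python
import sqlalchemy as sa
from alembic import op


def upgrade():
    op.add_column('dvla_organisation', sa.Column('filename', sa.String(255), nullable=True))
    # back-populate with the filenames currently hard-coded in
    # template-preview, eg (illustrative row only):
    op.execute("UPDATE dvla_organisation SET filename = 'hm-government' WHERE id = '001'")


def downgrade():
    op.drop_column('dvla_organisation', 'filename')
```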
The FactBilling model and the ft_billing database table have diverged
slightly - this makes some minor changes to the model columns so that
the model matches the table (which appears to be the correct version).
The ft_billing table is currently like this:
       Column       |            Type             | Modifiers | Storage  | Stats target | Description
--------------------+-----------------------------+-----------+----------+--------------+-------------
 bst_date           | date                        | not null  | plain    |              |
 template_id        | uuid                        | not null  | plain    |              |
 service_id         | uuid                        | not null  | plain    |              |
 notification_type  | text                        | not null  | extended |              |
 provider           | text                        | not null  | extended |              |
 rate_multiplier    | integer                     | not null  | plain    |              |
 international      | boolean                     | not null  | plain    |              |
 rate               | numeric                     | not null  | main     |              |
 billable_units     | integer                     |           | plain    |              |
 notifications_sent | integer                     |           | plain    |              |
 updated_at         | timestamp without time zone |           | plain    |              |
 created_at         | timestamp without time zone | not null  | plain    |              |
Indexes:
"ft_billing_pkey" PRIMARY KEY, btree (bst_date, template_id, service_id, rate_multiplier, provider, notification_type, international, rate)
"ix_ft_billing_bst_date" btree (bst_date)
"ix_ft_billing_service_id" btree (service_id)
"ix_ft_billing_template_id" btree (template_id)
There are two fun quirks of postgres/sql that we need to work around:
* any `x = y` where x or y is NULL returns NULL, rather than false.
* check constraints accept NULL or true values as good.
so, the check `postage in ('first', 'second')` returns `null` rather
than `false` when postage is null itself. This surprisingly passes the
check constraint. To get around this, we have to add an explicit not
null check as well.
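
A sketch of the fixed constraint in an alembic migration (the table and constraint names are assumptions):

```python
from alembic import op


def upgrade():
    # the explicit IS NOT NULL matters: a NULL postage makes the IN
    # comparison evaluate to NULL, which the check constraint accepts
    op.execute(
        """
        ALTER TABLE notifications
        ADD CONSTRAINT chk_notifications_postage
        CHECK (postage IS NOT NULL AND postage IN ('first', 'second'))
        NOT VALID
        """
    )
```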
A constraint marked NOT VALID is only checked against new rows, not
existing rows. We can call VALIDATE CONSTRAINT against this new
constraint to check the old rows (which we know are good, having run
the command from 74961781). Adding a normal constraint acquires an
ACCESS EXCLUSIVE lock, but VALIDATE CONSTRAINT only needs a SHARE
UPDATE EXCLUSIVE lock.
see 9d4b8961 and 0a50993f for more information on marking constraints
as not valid.
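
A sketch of the follow-up migration (same assumed names as the constraint above):

```python
from alembic import op


def upgrade():
    # VALIDATE CONSTRAINT takes only a SHARE UPDATE EXCLUSIVE lock, so
    # reads and writes carry on while postgres checks the existing rows
    op.execute('ALTER TABLE notifications VALIDATE CONSTRAINT chk_notifications_postage')
```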