When a service is created it should inherit its organisation’s branding,
if that organisation has branding.
This wasn’t working because we were referring to the ID of the branding
when making the association, not the branding itself.
This sets the folder permissions for a user when adding them to a
service. If a user is being added to a service after accepting an
invite, we need to account for the possibility that the folders we are
trying to add them to have been deleted before they accepted the invite.
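A minimal sketch of that guard, using hypothetical names rather than the real dao/model code: the folder IDs stored against the invite are filtered down to folders that still exist on the service before any permissions are written.

```python
from collections import namedtuple

# Hypothetical stand-in for a template folder row.
Folder = namedtuple("Folder", ["id", "name"])

def valid_folder_ids(service_folders, requested_folder_ids):
    # Only keep folder IDs that still exist on the service: a folder may
    # have been deleted between the invite being sent and accepted.
    existing_ids = {folder.id for folder in service_folders}
    return [f_id for f_id in requested_folder_ids if f_id in existing_ids]

folders = [Folder("f1", "Reminders"), Folder("f2", "Alerts")]
print(valid_folder_ids(folders, ["f1", "f3"]))  # ['f1'] – 'f3' has been deleted
```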
Updated the `add_user_to_service` endpoint to only handle data in the
'new' format (`{"permissions": [...]}` instead of `[permission_1, permission_2]`)
since Admin has been updated to send data the new way.
This change means that we no longer need the Marshmallow Permission
schema, so it can be deleted.
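A rough illustration of the simplification, with hypothetical names: once only the new format is accepted, the permissions can be read straight out of the `permissions` key without a Marshmallow schema.

```python
def permissions_from_request(data):
    # Only the new format is accepted, so the permission dicts always
    # live under the 'permissions' key.
    return [item["permission"] for item in data["permissions"]]

print(permissions_from_request(
    {"permissions": [{"permission": "manage_settings"}, {"permission": "manage_templates"}]}
))  # ['manage_settings', 'manage_templates']
```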
The data posted to the `add_user_to_service` endpoint is currently sent as a
list of permissions:
`[{'permission': MANAGE_SETTINGS}, {'permission': MANAGE_TEMPLATES}]`.
This endpoint is going to also be used for folder permissions, so the
data now needs to be nested:
`{'permissions': [{'permission': MANAGE_SETTINGS}, {'permission': MANAGE_TEMPLATES}]}`
This changes the `add_user_to_service` endpoint to accept data in either
format. Once admin is sending data in the new format, the code can be
simplified.
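A minimal sketch of the interim behaviour, assuming the endpoint is looking at the decoded JSON body: a bare list is treated as the old format, anything else is expected to carry a `permissions` key.

```python
def extract_permissions(data):
    # Old format: a bare list of permission dicts.
    # New format: the same list nested under a 'permissions' key.
    if isinstance(data, list):
        return data
    return data["permissions"]

old = [{"permission": "manage_settings"}, {"permission": "manage_templates"}]
new = {"permissions": old}
assert extract_permissions(old) == extract_permissions(new)
```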
If we had organisations for GDS and Cabinet Office, then we’d always
want someone whose email address ends in `@cabinet-office.gov.uk` to
match to `cabinet-office.gov.uk` before matching to
`digital.cabinet-office.gov.uk`.
Sorting the list of domains by length, shortest first, addresses this.
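A minimal sketch of that ordering, assuming (for illustration) that a match means the email address ends with `@domain` or `.domain`:

```python
def match_domain(email_address, known_domains):
    # Check the shortest domains first, so cabinet-office.gov.uk is tried
    # before digital.cabinet-office.gov.uk.
    for domain in sorted(known_domains, key=len):
        if email_address.endswith("@" + domain) or email_address.endswith("." + domain):
            return domain
    return None

domains = ["digital.cabinet-office.gov.uk", "cabinet-office.gov.uk"]
print(match_domain("someone@cabinet-office.gov.uk", domains))  # cabinet-office.gov.uk
```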
It should be nullable so we can tell whether someone has answered the
question already or not.
No real users have entered data into this column yet, so it’s fine to
wipe it.
It makes most sense to collect this at the same time as the estimated
volumes, which means we need to store it somewhere; we can’t put it
straight into the ticket.
This will make it easier to do analysis on the data. Almost all users
are submitting data in a numerical format now anyway, because we ask the
question in a sensible way.
When a service goes live we ask people for their estimated sending
volumes. At the moment we only put this in the ticket, and store it in
a spreadsheet.
This means that a service can
- say they want to go live
- say they are sending 100,000 emails per year
- not have created any email templates
- still see ‘create templates’ as ‘completed’ in the go live checklist
If we store this data against the service we can collect it earlier, and
then use it to determine automatically what kind of templates the user
needs to create before their go live checklist can be considered
complete.
The template preview app now accepts a null value for the `filename`
parameter. If a service doesn't have a letter branding option set,
previously we defaulted to their dvla_organisation (probably HM
Government). Now, we pass through `None`, so that we generate letters
without any logo or branding.
When creating a service, the API accepts a `service_domain` field that
it uses to populate the letter branding: if the service domain is known
to match an existing letter branding option, that branding is used
automatically. However, admin doesn’t know about this field yet, so it
doesn’t pass anything through, and the API erroneously searches the DB
for letter branding with a domain of None, which all the letter branding
rows currently have.
This meant that when services were created, their letter branding was
set to the most recent row in the DB (that matched None).
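A toy illustration of the fix (a plain dict standing in for the DB query): only look up letter branding by domain when a domain was actually supplied, so a missing `service_domain` can no longer match rows whose domain is also None.

```python
def find_letter_branding(service_domain, branding_by_domain):
    # Guard against admin not sending a service_domain: a None domain must
    # not match branding rows whose domain column is also None.
    if service_domain is None:
        return None
    return branding_by_domain.get(service_domain)

branding_by_domain = {None: "most-recently-added-row", "example.gov.uk": "example-branding"}
print(find_letter_branding(None, branding_by_domain))              # None
print(find_letter_branding("example.gov.uk", branding_by_domain))  # example-branding
```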
Step 1 of 2 of turning on folders for all services.
We think it’s a feature which will be useful for the majority of
services, and we think we’ve done enough research to know that it’s
mature enough to release to all services.
However, until we can create a letter without a logo, we will still default to hm-government, because the dvla_organisation is set on the service.
This does simplify the code.
Also removed the inserts to `letter_branding` in the data migration file, because we can deploy this before the rest of the work is finished; we will need to add those inserts later.
Now that we’re reading from two tables (`ft_notification_status` and
`notifications`) for stats, we’ll get a couple of rows for each
notification type. If a service doesn't have any rows in one of those
tables, the query will return a row with nulls for the notification
types and counts. Some services will have history but no stats from
today, others will have data from today but no history.
This commit acknowledges that any row might have nulls, not just the
first row.
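A simplified sketch of that handling, with the rows reduced to (notification_type, count) tuples: every row is treated as potentially null, not just the first one.

```python
def combine_stats_rows(rows):
    # Any row can have a null notification type and count (a service may
    # have history but nothing from today, or vice versa), so skip the
    # nulls and default missing counts to zero.
    totals = {"email": 0, "sms": 0, "letter": 0}
    for notification_type, count in rows:
        if notification_type is not None:
            totals[notification_type] += count or 0
    return totals

rows = [(None, None), ("email", 3), ("sms", None)]
print(combine_stats_rows(rows))  # {'email': 3, 'sms': 0, 'letter': 0}
```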
Flask-SQLAlchemy’s paginate function issues a separate query to get
the total count of rows for a given filter. This query (with the
filters used by the API integration Message log page) is slow for
services with a large number of notifications.
Since the Message log page doesn’t actually allow users to paginate
through the response (it only shows the last 50 messages) we can
use limit instead of paginate, which requires passing another
flag from admin to the dao method.
A `count` flag was added to `paginate` in March 2018, but there has
been no release of Flask-SQLAlchemy since then, so we need to pull
the dev version of the package from GitHub.
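A rough sketch of the dao change, assuming `query` is a Flask-SQLAlchemy query and `count_pages` is the hypothetical flag passed down from admin:

```python
def fetch_notifications(query, page=1, page_size=50, count_pages=True):
    # paginate() issues an extra COUNT(*) query to work out the total
    # number of rows, which is slow for services with a large number of
    # notifications. When the caller only needs the most recent page
    # (the Message log), a plain LIMIT avoids that count query entirely.
    if count_pages:
        return query.paginate(page=page, per_page=page_size).items
    return query.limit(page_size).all()
```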
Added cancelled letters to the number of failed letters in the statistics
that get used for the dashboard. At some point, we want to stop
including cancelled letters in the stats, but for now this keeps things
consistent with our current letter failure state, permanent-failure.
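A toy example of the aggregation, assuming a dict of status counts: cancelled is folded into the failed total alongside permanent-failure for now.

```python
LETTER_FAILURE_STATUSES = {"permanent-failure", "cancelled"}

def failed_letter_count(status_counts):
    # Cancelled letters are counted as failed for now, to stay consistent
    # with permanent-failure being the current letter failure state.
    return sum(
        count for status, count in status_counts.items()
        if status in LETTER_FAILURE_STATUSES
    )

print(failed_letter_count({"delivered": 10, "cancelled": 2, "permanent-failure": 1}))  # 3
```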
Letters should always have a reference, because that’s what DVLA use to
tell us when they’ve sent a letter.
If a letter has a reference of `None` then DVLA say they’ve sent a
letter with a reference of `'None'`. This means we can never reconcile
the letter, which means it stays in `created`, which means it never
gets billed.
We don’t think this has affected any real letters yet, just ones that
we’ve sent as tests.
This commit modifies the code paths the admin app uses to send one-off
emails and text messages to also accept letters.
This mostly worked already; the two changes, sketched below, were:
- making sure that one-off letters are processed by the correct task,
from the correct queue
- one-off letters sent from a service in research mode don’t get put on
a queue and go straight to `delivered` (because we don’t want to send
them for real)
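A very rough sketch of that routing, with placeholder task/queue names (not the real ones): one-off letters go to a letter-specific queue, and research-mode letters skip the queue and are marked delivered.

```python
def route_one_off(notification_type, research_mode):
    # Placeholder names: the real task and queue names differ.
    if notification_type == "letter":
        if research_mode:
            # Don't send research-mode letters for real: no queue, and the
            # notification goes straight to 'delivered'.
            return None, "delivered"
        return "letter-task-queue", "created"
    return "send-{}-queue".format(notification_type), "created"

print(route_one_off("letter", research_mode=True))   # (None, 'delivered')
print(route_one_off("letter", research_mode=False))  # ('letter-task-queue', 'created')
print(route_one_off("sms", research_mode=False))     # ('send-sms-queue', 'created')
```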
Added the filename of a service's letter logo to the service schema. We want
this in the schema so that it is possible to call
`current_service.letter_logo_filename` from notifications-admin and to pass this value
through to template-preview.
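A minimal Marshmallow-style sketch (not the real service schema) showing the kind of field being exposed:

```python
from marshmallow import Schema, fields

class ServiceSchema(Schema):
    # Trimmed-down hypothetical schema: expose the letter logo filename so
    # admin can read current_service.letter_logo_filename and pass it on
    # to template-preview.
    name = fields.String()
    letter_logo_filename = fields.String(allow_none=True)

print(ServiceSchema().dump({"name": "My service", "letter_logo_filename": "hm-government"}))
```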
To start with, this will be an attribute on the service; at the time the notification is created, we will look at `Service.letter_class` to decide what class to use for the letter.
This PR adds `Service.letter_class` as a nullable column.
Updated the create_service and update_service methods to default the value to 'second'.
Subsequent PRs will add the check constraint to ensure we only get 'first' or 'second' in the letter_class column, and make that column non-nullable.
This can't be done all at once because it will cause an error if someone inserts or updates a service during the deploy.
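A sketch of the approach with a cut-down hypothetical model: the column starts out nullable, the create/update paths default it, and the constraint comes later.

```python
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class Service(Base):
    # Cut-down hypothetical model: the column is nullable at first so the
    # deploy can't fail while old code is still inserting/updating services.
    __tablename__ = "services"
    id = sa.Column(sa.Integer, primary_key=True)
    letter_class = sa.Column(sa.String(255), nullable=True)

def letter_class_or_default(value):
    # create_service / update_service default a missing value to 'second';
    # a later migration adds the check constraint and makes the column
    # non-nullable.
    return value if value is not None else "second"
```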
Admin, API and utils were all defining a value for SMS_CHAR_COUNT_LIMIT.
This value has been updated in notifications-utils to allow text
messages to be 4 fragments long and notifications-api now gets the value of
SMS_CHAR_COUNT_LIMIT from notifications-utils instead of defining it in
config.
Also updated some tests to check for the higher limit.
The admin app needs to get the service data retention for the specified
notification type, so to avoid iterating through the list of all
existing service data retention settings we restore the endpoint
that returns the individual data retention period.
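A toy Flask sketch of the restored endpoint shape (route and names are illustrative, with an in-memory dict standing in for the table):

```python
from flask import Flask, jsonify

app = Flask(__name__)

# In-memory stand-in for the service data retention table.
RETENTION_DAYS = {("service-1", "email"): 7, ("service-1", "sms"): 3}

@app.route("/service/<service_id>/data-retention/notification-type/<notification_type>")
def get_data_retention_for_notification_type(service_id, notification_type):
    # Return the single setting directly, so admin doesn't have to fetch
    # and filter the full list of retention settings for the service.
    days = RETENTION_DAYS.get((service_id, notification_type))
    return jsonify(days_of_retention=days)
```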