Having this as a function which does string parsing and manipulation
surprised me a bit when I was trying to figure out why something wasn’t
working.
Making it a simple config variable is more in line with the way we do
other config like this (for example `ASSET_PATH`), rather than trying to
be clever and guessing values based on other config variables.
It’s also less code, and is explicit enough that it doesn’t need tests.
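For illustration, the difference is roughly the following (the names and
values here are hypothetical, not the ones in the commit):

```python
# Before: a "clever" helper that derives the value by parsing another
# config variable, which is surprising when you're debugging.
def get_cdn_domain(admin_base_url):
    return "static-logos." + admin_base_url.split("://", 1)[-1]


# After: a plain config variable, set explicitly like ASSET_PATH is.
class Config:
    LOGO_CDN_DOMAIN = "static-logos.example.com"
```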
Lets us keep Cabinet Office financials safe in the credentials repo.
The dict in the creds repo will either be an empty dict or a full dict,
so the env var on PaaS will always contain some parseable JSON. But
locally it might not, so if it's not set at all then default to the
string `null` so the JSON parsing doesn't throw a wobbly.
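A minimal sketch of the pattern (the env var name here is illustrative):

```python
import json
import os

# On PaaS the env var always holds parseable JSON (an empty or full dict).
# Locally it may be unset, so default to the string "null", which
# json.loads parses to None rather than blowing up.
cabinet_office_financials = json.loads(
    os.environ.get("CABINET_OFFICE_FINANCIALS", "null")
)
```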
This continues the work from Template Preview [1], so that we have
a complete store of original PDFs to use for testing changes to it.
Previously we did store some originals, but these were only invalid
PDFs that had failed sanitisation; for valid PDFs, the "transient"
bucket only contains the sanitised versions, which the API deletes
/ moves when the notification is sent [2].
Since the notification is only created at a later stage [3], there's
no easy way to get the final name of the PDF we send to DVLA. Instead,
we use the "upload_id", which eventually becomes the notification ID
[4]. This should be enough to trace the file for specific debugging.
Note that we only want to store original PDFs if they're valid (and
virus free!), since there's no point testing changes with bad data.
[1]: https://github.com/alphagov/notifications-template-preview/pull/545
[2]: c44ec57c17/app/service/send_notification.py (L212)
[3]: 7930a53a58/app/main/views/uploads.py (L362)
[4]: 7930a53a58/app/main/views/uploads.py (L373)
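A sketch of the storage step described above (boto3, with a placeholder
bucket name):

```python
import boto3


def backup_original_pdf(upload_id, pdf_bytes, is_valid):
    # Only keep originals that passed validation and the virus scan;
    # there's no point testing changes against bad data.
    if not is_valid:
        return
    # upload_id eventually becomes the notification ID, so keying the
    # object on it is enough to trace the file later.
    boto3.client("s3").put_object(
        Bucket="letters-precompiled-originals-backup",  # placeholder
        Key=f"{upload_id}.pdf",
        Body=pdf_bytes,
    )
```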
The API has a method to handle setting the default SMS free allowance. This will save a call to the API and remove some code duplication between the two apps.
Needs to be merged after https://github.com/alphagov/notifications-api/pull/3197
We don’t vary this between different environments so it doesn’t need to
be in the config.
I was trying to look up what this value was and found it a bit confusing
that it was spread across multiple places.
🚨 Do not merge until after 1 April 2020 🚨
Once this date has passed we no longer need to give any services the
previous allowances, so we can remove them from the codebase to avoid
confusion.
It’s possible we change the allowance structure again, but it might
change in a way that this config-based logic doesn’t account for (what
if we did a per-organisation allowance for example). Having both years’
allowances in the config was a quick fix, not a foundation to build on.
We’re going to have different allowances next financial year. This means
that when someone adds a service, we’ll need to check which year it is,
so we can give them the right allowance.
This commit changes the config structure so that the current allowances
are explicitly assigned to the 2020/21 financial year.
It freezes the tests to the 2020/21 financial year, so they won’t start
failing automatically when next financial year comes around.
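Roughly the shape of the change (organisation types and figures here are
placeholders, not the real allowances):

```python
FREE_SMS_ALLOWANCES = {
    "2020/21": {
        "central": 123_456,  # placeholder figures
        "local": 12_345,
        "nhs": 123_456,
    },
    # Next financial year gets its own key when the new allowances land.
}


def get_free_sms_allowance(organisation_type, financial_year="2020/21"):
    return FREE_SMS_ALLOWANCES[financial_year][organisation_type]
```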
It clashes with the new `$govuk-focus-colour` now. This commit changes
it to halfway between `govuk-colour("dark-grey")` (`#505a5f`) and
`govuk-colour("mid-grey")` (`#b1b4b6`) from the Design System. Dark was
too dark and mid was too light.
It also adds a line of JS to let us easily switch the header to blue by
clicking on it, which is useful for taking screenshots etc.
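For reference, the midpoint can be computed per RGB channel; a quick
sketch (the exact hex in the stylesheet may round differently):

```python
def midpoint(hex_a, hex_b):
    # Average each channel of two "#rrggbb" colours.
    a = [int(hex_a[i:i + 2], 16) for i in (1, 3, 5)]
    b = [int(hex_b[i:i + 2], 16) for i in (1, 3, 5)]
    return "#" + "".join(f"{round((x + y) / 2):02x}" for x, y in zip(a, b))


print(midpoint("#505a5f", "#b1b4b6"))  # roughly "#80878a"
```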
We want an easy way to keep track of all broadcast services across
government. As such, when broadcasting is enabled for a service, we've
decided we're going to add the service to a special broadcasting
organisation.
This organisation is defined in the config file. It's hard-coded for
production; if you want to test locally, set BROADCAST_ORGANISATION_ID
in your local environment.
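Sketch of the config shape (the default UUID here is a placeholder, not
the real production value):

```python
import os


class Config:
    BROADCAST_ORGANISATION_ID = os.environ.get(
        "BROADCAST_ORGANISATION_ID",
        "00000000-0000-0000-0000-000000000000",  # placeholder
    )
```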
Everything else is production. The bucket is currently called
production. The fact that the CSV bucket is called `live-` is a legacy
thing that’s hard to change.
We don’t want to muddy them up with the normal CSV uploads.
I’ve tried to reuse the existing S3 code where possible because it’s
well tested.
Buckets have already been created.
Celery/SQS underperforms in low-traffic environments. Tasks will sit on
celery queues for several seconds before getting picked up if they're
the only thing on the queue. This is observable in our test environments
like preview and staging, but we've got enough load on production that
this isn't an issue.
When we validate reply to email addresses, we expect a delivery receipt
to have been processed within 45 seconds of the button being pressed. On
preview, we often observe times over that, possibly due to the several
queues involved in sending an email and processing its receipt. So, to
ensure that functional tests can pass (when we don't really care how
fast things are, just that the flow doesn't break), bump this timeout up
to 120 seconds on preview. The functional tests were waiting for 120
seconds for the reply to address to be validated anyway.
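In config terms, something like this (the setting names are
illustrative):

```python
class Config:
    REPLY_TO_EMAIL_RECEIPT_TIMEOUT_SECONDS = 45


class Preview(Config):
    # Celery/SQS is slow to pick tasks up on quiet environments, so give
    # the reply-to validation more headroom here.
    REPLY_TO_EMAIL_RECEIPT_TIMEOUT_SECONDS = 120
```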
We have a hunch that some session-related issues we've seen over the
last few weeks might be caused by weird race conditions, where cookies
set by subresources (image previews of letters on the send flow) arrive
just as the img request is cancelled (because the user has clicked a
button to navigate to a new page), but still manage to set the cookie.
We're not entirely sure what's going on, but not setting cookies on
image fetches seems sensible. Images are always loaded as a subresource
(ie via a `src` attribute on an html element), so they should never need
to change the cookies. We've done this by creating a new blueprint that
doesn't set session.permanent, and doesn't call
`save_service_or_org_after_request` either.
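A minimal sketch of the shape, assuming the hooks are registered per
blueprint (names other than `save_service_or_org_after_request` are
illustrative):

```python
from flask import Blueprint, session

main = Blueprint("main", __name__)
no_cookie = Blueprint("no_cookie", __name__)  # e.g. letter image previews


@main.before_request
def make_session_permanent():
    # Ordinary pages keep the existing session behaviour.
    session.permanent = True


@main.after_request
def save_service_or_org_after_request(response):
    # ...existing session bookkeeping for normal pages...
    return response

# The no_cookie blueprint deliberately registers neither hook, so image
# requests don't touch the session and shouldn't trigger a Set-Cookie.
```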
Cookies are sent back to the browser if:
`session.modified or (session.permanent and 'REFRESH_EVERY_REQUEST')`
(where the latter is a config setting).
Turning off REFRESH_EVERY_REQUEST (which is True by default) means that
we will only send the session cookie back if the session has been
modified. In practice, the session is modified on literally every
request, in the after_request handler
`save_service_or_org_after_request`. This is accidentally convenient, as
it guarantees that we'll still send back the cookie normally even though
REFRESH_EVERY_REQUEST is disabled. Sending back the cookie updates the
expiry time (20 hours), so we need to keep doing this to preserve the
existing session timeout behaviour.
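For reference, a sketch of the relevant pieces; Flask's stock name for
this setting is SESSION_REFRESH_EACH_REQUEST, assuming that's what
REFRESH_EVERY_REQUEST maps to here:

```python
from flask import Flask, session

app = Flask(__name__)
app.secret_key = "dev"

# With this off, Flask only re-sends the session cookie when the session
# has been modified during the request.
app.config["SESSION_REFRESH_EACH_REQUEST"] = False


@app.after_request
def save_service_or_org_after_request(response):
    # Any write marks session.modified, so the cookie (and its 20 hour
    # expiry) still goes back on effectively every request.
    session["last_request"] = "touched"
    return response
```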
This sanitises uploaded letters and stores the result in S3: the
sanitised version if the PDF passes validation, or the original PDF if
validation fails. A metadata value of 'status' is set to either 'valid'
or 'invalid'.
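A rough sketch of the upload, assuming boto3 and placeholder bucket/key
names:

```python
import boto3


def upload_letter(filename, pdf_bytes, is_valid):
    # Store the sanitised PDF if validation passed, or the original if it
    # failed; either way record the outcome in the object metadata.
    boto3.client("s3").put_object(
        Bucket="letters-scan",  # placeholder bucket name
        Key=filename,
        Body=pdf_bytes,
        Metadata={"status": "valid" if is_valid else "invalid"},
    )
```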
Also set the Redis URL locally to be localhost. Redis is disabled by
default, so this won't do anything unless you set REDIS_ENABLED=1 as an
environment variable.
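Roughly, in config terms (names here mirror the description rather than
the exact code):

```python
import os


class Config:
    # Off by default, so the local URL is harmless unless you opt in with
    # REDIS_ENABLED=1 in your environment.
    REDIS_ENABLED = os.environ.get("REDIS_ENABLED") == "1"
    REDIS_URL = os.environ.get("REDIS_URL", "redis://localhost:6379")
```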
The CDN URLs aren’t included in the content security policy, so
browsers will refuse to load them.
This commit:
- adds each of the CDN URLs to the content security policy
- only prepends URLs in CSS files with `/static/` if we’re running
  locally (because the CDN URLs are like `static.example.com`, not
  `example.com/static`)
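As an illustration of the CSP change (the directive list and domain
parameters are simplified placeholders, not the real policy):

```python
def content_security_policy(asset_domain, logo_cdn_domain):
    # Each CDN domain has to be listed explicitly, otherwise browsers
    # will refuse to load assets served from it.
    return "; ".join([
        "default-src 'self'",
        f"script-src 'self' {asset_domain}",
        f"style-src 'self' {asset_domain}",
        f"img-src 'self' {asset_domain} {logo_cdn_domain}",
    ])
```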
Anything served from the `www.notifications.service.gov.uk` domain is:
- not gzipped
The PaaS proxy used to gzip and set headers for anything served from a
path starting with `/static/`:
76dd511a8a/ansible/roles/paas-proxy/templates/admin.conf.j2 (L53-L64)
Anything served from `static.notifications.service.gov.uk` is:
- gzipped
- and, as a bonus, cached by CloudFront where possible (meaning those
  requests won’t ever hit our app)
This commit moves from serving static assets at `/static/` to serving
them from `static.notifications.service.gov.uk`, to get the benefits
listed above.
***
We could do even better by setting long cache expiry headers on the static subdomain (currently they’re only set to cache for 60 seconds). But that’s out of scope for this commit.