* CORS headers removed because browsers should not interact with the API directly
* Updated error handling to return expected content-type for JSON error messages
This deletes a big ol' chunk of code related to letters. It's not everything (there are still a few things that might be tied to SMS/email) but it's the heart of the letters functionality. SMS and email should be untouched by this.
Areas affected:
- Things obviously about letters
- PDF tasks, used for precompiling letters
- Virus scanning, used for those PDFs
- FTP, used to send letters to the printer
- Postage stuff
This works in conjunction with the new SMS provider stub [^1].
Local testing:
- Run the migrations to add Reach as an inactive provider.
- Activate the Reach provider locally and deactivate the others:

      update provider_details set priority = 100, active = false where notification_type = 'sms';
      update provider_details set active = true where identifier = 'reach';
- Tweak your local environment to point at the SMS stub:

      export REACH_URL="http://host.docker.internal:6300/reach"
- Start / restart Celery to pick up the config change.
- Send an SMS via the Admin app and see the stub log it.
- Reset your environment so you can send normal SMS:

      update provider_details set active = true where notification_type = 'sms';
      update provider_details set active = false where identifier = 'reach';
[^1]: https://github.com/alphagov/notifications-sms-provider-stub/pull/10
Note that the new base class doesn't include a bespoke feature we
had here: 'log_on_worker_shutdown'. We've agreed it's reasonable
to remove it for now as it was introduced many years ago and its
use case is unclear - we can always add it back if needed.
In response to: https://github.com/alphagov/notifications-api/pull/3305#pullrequestreview-726672421
Previously this was added among the public /v2 endpoints, but it's
only meant for internal use. While only the govuk-alerts app would
be able to access it, the location and /v2 URL suggested otherwise.
This restructures the endpoint so it resembles other internal ones.
We can define the API properly in future work. I've used a separate
blueprint from "broadcasts" since this API is purely internal, and
it's helpful to make it clear it's specific to govuk-alerts.
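As a rough sketch of the shape (the blueprint name, prefix and handler here are illustrative, not the real implementation):

    from flask import Blueprint, jsonify

    # internal-only blueprint, deliberately separate from the public /v2
    # "broadcasts" one, so the route clearly isn't part of the public API
    govuk_alerts_blueprint = Blueprint("govuk_alerts", __name__, url_prefix="/govuk-alerts")

    @govuk_alerts_blueprint.route("", methods=["GET"])
    def get_broadcasts_for_govuk_alerts():
        # only the govuk-alerts app is expected to call this
        return jsonify(alerts=[])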
See https://github.com/alphagov/notifications-utils/pull/878 for
details.
Changes we had to make for our app and tests to work correctly
after the dependency updates:
1. Update the emergency alerts polygons test, because we changed
how precise we are about the locations of the points on the map.
2. Use Flask's g object to set additional request attributes.
Until now we had been storing them in _request_ctx_stack, which is
an internal part of Flask's request context.
Because of a major update to the Werkzeug dependency, which Flask
relies on, the way we were using it stopped working, so we needed
a new way to set those values.
Setting those values on the g object also seems to be the approach
favoured in the Flask documentation:
https://flask.palletsprojects.com/en/1.1.x/reqcontext/#how-the-context-works
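A minimal sketch of the new approach (the `service` attribute is illustrative):

    from flask import Flask, g

    app = Flask(__name__)

    @app.before_request
    def store_service():
        # previously: _request_ctx_stack.top.service = ...
        # g is bound to the current request context, so this stays request-local
        g.service = "example-service"

    @app.route("/")
    def index():
        return g.service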
We haven't bumped the test version for a while.
Also bumped the versions of Flask and itsdangerous.
In order to fix Flask warnings I needed to change how the blueprints were registered.
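As a hedged illustration (the commit doesn't say which warning fired): newer Flask versions complain when the same blueprint is registered more than once under one name, and passing a unique name= on registration is one way to resolve that:

    from flask import Blueprint, Flask

    status = Blueprint("status", __name__)

    app = Flask(__name__)
    app.register_blueprint(status)  # registered under its default name
    # a second registration needs its own name in Flask 2.x
    app.register_blueprint(status, url_prefix="/internal", name="status_internal")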
added some simple validation to the delete endpoint for sanity, but
generally my assumption is that more validation will happen on the admin
side.
notably I'm not checking whether the credentials are duplicated, nor is
there a uniqueness constraint in the database - I'm not sure if the
credential blob will always reliably be equivalent, and I believe the
browser should hopefully take care of dupes.
Previously we used a '@statsd' decorator to time and count Celery
tasks [1]. Using a decorator isn't ideal since we need to remember
to add it to every task we define. In addition, a decorator has no
access to data like the task name and queue.
In order to avoid breaking existing stats, this duplicates them as
new StatsD metrics until we have sufficient data to update dashboards
using the old ones. Using the CeleryTask superclass to send metrics
avoids a future maintenance overhead, and means we can include more
useful data in the StatsD metric. Note that the new metrics will sit
in StatsD until we add a mapping for them [2].
StatsD automatically produces a 'count' stat for timing metrics, so
we don't need to increment a separate counter for successful tasks.
[1]: dea5828d0e/app/celery/tasks.py (L65)
[2]: https://github.com/alphagov/notifications-aws/blob/master/paas/statsd/statsd-mapping.yml
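A minimal sketch of the base-class approach described above, assuming an illustrative StatsD client (the real metric names and wiring differ):

    import time

    from celery import Celery, Task
    from statsd import StatsClient

    statsd_client = StatsClient()  # illustrative; the real app wires up its own

    class CeleryTask(Task):
        def __call__(self, *args, **kwargs):
            start = time.monotonic()
            result = super().__call__(*args, **kwargs)
            # only reached on success; StatsD derives a 'count' from the
            # timing, so no separate success counter is needed
            queue = (self.request.delivery_info or {}).get("routing_key", "unknown")
            statsd_client.timing(f"celery.{queue}.{self.name}", (time.monotonic() - start) * 1000)
            return result

    celery_app = Celery("notifications", task_cls=CeleryTask)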
We will no longer send them any stats, and therefore don't need:
- the code to work out the nightly stats
- the performance platform client
- any configuration for the client
- any nightly tasks that kick off the sending of the stats
We will also need a change in Cronitor: this task will no longer run,
so the Cronitor check needs to be deleted.
nb: the routes are not changing as part of this, only file paths and
blueprint names.
invite -> service_invite
this blueprint handles fetching invites for a service, creating invites,
etc.
accept_invite -> global_invite
this blueprint handles accepting invites for now, but will also involve
retrieving service/org user invite data without knowing the service/org
id associated. I'm not in love with this name and am open to
suggestions, but I wanted to contrast it with service_invite and
organisation/invite_rest.py.
previously we would retry if the task was queued up for retry but the
status was "received-ack" or "received-err". We don't expect a task to
be retried after reaching one of these statuses, but duplicate tasks
could make that happen. Let's plan for the worst by saying "only
process a retry if the task is currently in sending".
this way, if a duplicate task is on retry and the first task goes
through successfully, the duplicate task will give up.
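A minimal sketch of that guard (type and field names are illustrative):

    from dataclasses import dataclass

    @dataclass
    class BroadcastMessage:
        status: str

    def process_retry(message: BroadcastMessage):
        # only process a retry if the task is currently in "sending"; a
        # duplicate task may already have moved it to received-ack or
        # received-err, in which case this retry gives up
        if message.status != "sending":
            return
        ...  # re-send to the provider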
We know there is at least one system which wants to integrate with
Notify to send out emergency alerts, rather than creating them manually.
This commit adds an endpoint to the public API to let them do that.
To start with we’ll just let the system create them in a single call,
meaning they still have to be approved manually. This reduces the risk
of an attacker being able to broadcast an alert via the API, should the
other system be compromised.
We’ve worked with the owners of the other system to define which fields
we should care about initially.
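Roughly the shape of the endpoint (path and field names here are illustrative; the real fields are the ones agreed with the other system's owners):

    from flask import Blueprint, jsonify, request

    v2_broadcast_blueprint = Blueprint("v2_broadcast", __name__, url_prefix="/v2/broadcast")

    @v2_broadcast_blueprint.route("", methods=["POST"])
    def create_broadcast():
        data = request.get_json()
        # created via the API, but still needs manual approval before sending
        broadcast = {
            "content": data["content"],
            "reference": data.get("reference"),
            "status": "pending-approval",
        }
        return jsonify(broadcast), 201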
this is a pretty big and convoluted refactor unfortunately.
Previously:
There was one global `cbc_proxy_client` object in the app. This class
had the information about how to invoke the bt-ee lambda, and handled
all calls to the lambda, including calls to the canary (which is a
separate lambda).
The future:
There's one global `cbc_proxy_client`. This knows about the different
provider functions and lambdas, and you ask it for a proxy for your
chosen provider. Call `cbc_proxy_client.get_proxy('ee')` and it'll
return a proxy that knows what EE's lambda function is, how to
transform any content in a way that is exclusive to EE, and, in
future, how to parse any response from EE.
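A minimal sketch of that shape (the class names and lambda name are illustrative):

    class CbcProxyEE:
        # knows EE's lambda function and any EE-specific content transforms
        lambda_name = "bt-ee-proxy"  # illustrative

        def send(self, payload):
            ...  # transform payload for EE, then invoke the lambda

    class CbcProxyClient:
        def __init__(self):
            self._proxies = {"ee": CbcProxyEE()}

        def get_proxy(self, provider):
            return self._proxies[provider]

    cbc_proxy_client = CbcProxyClient()
    ee_proxy = cbc_proxy_client.get_proxy("ee")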
The present:
I also cleaned up some duplicate tests.
I'm really not sure about the names of some of these variables - in
particular `cbc_proxy_client` isn't a client - it's more of a
Java-style factory, where you call a function on it to get the client
of your choice.
I think it's causing havoc with my attempts to mock stuff in the
`app.clients` directory because it's also accessible at that path. The
name's super vague and doesn't explain what it is anyway.
We are going to invoke a lambda to send a message to the CBC.
We need a CBC Proxy Client to do this.
The client will be able to send/update/cancel broadcasts in the CBC.
Unless we have configured the app with AWS credentials for the
CBCProxyClient, we just want to use a client that does nothing: the
noop client.
The AWS access keys for the CBC Proxy are separate from the keys for
other Notify AWS things because the CBC Proxy lives in another AWS
account.
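A minimal sketch of the noop fallback (config key and class names are illustrative):

    class CbcProxyNoop:
        def send(self, payload):
            pass  # deliberately does nothing

    class CbcProxyReal:
        def __init__(self, access_key_id, secret_access_key):
            # separate keys: the CBC Proxy lives in another AWS account
            self.access_key_id = access_key_id
            self.secret_access_key = secret_access_key

        def send(self, payload):
            ...  # invoke the lambda with these credentials

    def make_cbc_proxy_client(config):
        key = config.get("CBC_PROXY_AWS_ACCESS_KEY_ID")
        if not key:
            return CbcProxyNoop()
        return CbcProxyReal(key, config["CBC_PROXY_AWS_SECRET_ACCESS_KEY"])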
Signed-off-by: Toby Lorne <toby.lornewelch-richards@digital.cabinet-office.gov.uk>
Co-authored-by: Pea <pea.tyczynska@digital.cabinet-office.gov.uk>
Co-authored-by: Katie <katie.smith@digital.cabinet-office.gov.uk>
We don't want an exception while recording metrics to affect a user
action. A KeyError was thrown today which meant a user saw a 500; the
action being performed was downloading a document from the
document-download-frontend app. By catching the error we prevent the
user from seeing a 500 when recording the connection metric fails.
Also catch the exception in the checkout event.
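A minimal sketch of the defensive wrapper (names are illustrative):

    import logging

    logger = logging.getLogger(__name__)

    def safely(record_metric):
        # a failure while recording a metric (e.g. the KeyError above) must
        # never surface as a 500 to the user downloading a document
        try:
            record_metric()
        except Exception:
            logger.exception("recording connection metric failed")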