Because the IDs of our callback and inbound SMS APIs were stored in
lists instead of directly on the serialised model, they weren’t getting
cast to strings before we tried to JSONify them. And JSON doesn’t know
what to do with a UUID object.
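A minimal illustration of the failure (the field name is hypothetical):

```python
import json
from uuid import uuid4

# UUIDs nested inside a list aren't cast by the serialised model itself,
# so the JSON encoder sees raw UUID objects
serialised = {"inbound_api": [uuid4()]}

# json.dumps(serialised)  # TypeError: Object of type UUID is not JSON serializable

# casting each ID to a string first fixes it
json.dumps({"inbound_api": [str(api_id) for api_id in serialised["inbound_api"]]})
```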
For some reason this was only affecting the endpoint for fetching
inbound SMS.
The API needs these to check whether a service can send a notification.
This commit also updates all the tests in `test_validators.py` to take
a serialised service, not a database object.
We changed `auth.py` to import from `app.serialised_models` here:
https://github.com/alphagov/notifications-api/pull/2887/files#diff-77cbb1e03185c7319f0311371c438b0cR11
`serialised_models.py` imports from `templates_dao.py`
`templates_dao.py` imports from `users_dao.py`
`users_dao.py` imports from `errors.py`
`errors.py` imports from `auth.py` … and the circle is complete 💥
For some reason this caused the Celery workers to crash on startup, but
not the app. Which I guess is why the integration tests didn’t catch
this?
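Whether a cycle like this blows up depends on which module the process
happens to import first, so different entry points (the app versus the
Celery workers) can behave differently. A minimal illustration, with
hypothetical modules:

```python
# a.py — hypothetical modules, just to show the shape of the failure
from b import helper        # importing a kicks off the import of b...

def thing():
    return helper()


# b.py
from a import thing         # ...which asks a for `thing` while a is still
                            # half-initialised, so this raises ImportError

def helper():
    return thing()
```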
Years ago we started to implement a way to schedule a notification. We hit a problem, but we never came up with a good solution, and the feature never made it back to the top of the priority list.
This PR removes the code for `scheduled_for`. There will be another PR to drop the `scheduled_notifications` table and remove the `schedule_notifications` service permission.
Unfortunately, I don't think we can remove the `scheduled_for` attribute from the `notification.serialized` method, because our clients might fail if something is missing. For now I have left it in but defaulted the value to `None`.
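A minimal sketch of what that looks like (surrounding fields elided):

```python
def serialized(self):
    return {
        # ...other fields...
        "scheduled_for": None,  # deprecated: kept so existing clients don't break
    }
```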
Same as we’re doing for templates.
This means avoiding a database call, even for services that don’t hit
our API so often.
They’ll still need to go to the database for the API keys, because we’re
not comfortable putting the API key secrets in Redis.
But once a service has got its keys from the database we commit the
transaction, so the connection can be freed up until we need it again to
insert the notification.
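A sketch of the shape of it, with hypothetical helper names:

```python
from app import db  # assuming the usual Flask-SQLAlchemy session

def get_service_and_keys(service_id):
    service = SerialisedService.from_id(service_id)       # cache/Redis, no database call
    api_keys = dao_get_api_keys_for_service(service_id)   # secrets come from the database
    # committing here ends the read transaction, so the connection goes back
    # to the pool until we need it again to insert the notification
    db.session.commit()
    return service, api_keys
```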
Same as we’ve done for templates.
For high volume services this should mean avoiding calls to external
services, either the database or Redis.
The TTL is set to 2 seconds, so that’s the maximum time it will take for
a revoked API key or a renamed service to propagate.
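A minimal sketch of the idea using `cachetools` (one way to do it, not
necessarily what the app uses; the fetch function is hypothetical):

```python
from cachetools import TTLCache, cached

# entries live for at most 2 seconds, so a revoked key or a renamed
# service is stale for at most that long
@cached(cache=TTLCache(maxsize=1024, ttl=2))
def get_serialised_service(service_id):
    return fetch_serialised_service(service_id)  # hypothetical: database or Redis
```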
Some of the tests created services with the same service ID. This
caused intermittent failures because the cache relies on unique service
IDs (like we have in the real world) to key itself.
We think that holding database transactions open while we go and do
something else is causing us to have poor performance.
Because we’re now serialising everything as soon as we pull it out of
the database, we can guarantee that we don’t need to go back to the
database again.
So let’s see if explicitly closing the transaction helps with
performance.
By serialising these straight away we can:
- not go back to the database later, potentially closing the connection
sooner
- potentially cache the serialised data, meaning we don’t touch the
database at all
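Putting those together, the read path looks something like this (all the
names here are hypothetical):

```python
import json

def get_serialised_template(template_id):
    cached = redis_client.get(f"template-{template_id}")
    if cached:
        return json.loads(cached)                    # no database connection needed at all
    template = dao_get_template_by_id(template_id)   # hypothetical dao call
    serialised = template.serialize()                # serialise while the session is still open
    db.session.commit()                              # then hand the connection straight back
    redis_client.set(f"template-{template_id}", json.dumps(serialised))
    return serialised
```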
For some reason our V1 get template response wraps the whole template in
a dictionary with one key, `'data'`:
0d99033889/app/template/rest.py (L166)
That means when the admin app caches the response it also caches it in
this format.
The API needs to do the same, otherwise it will be caching data with a
schema that the admin app isn’t expecting, and vice-versa.
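So the sketch is to cache the wrapped shape, not the bare template
(hypothetical names):

```python
# cache exactly the shape the V1 endpoint returns, so the admin app and
# the API are reading and writing the same structure
response = {"data": template_schema.dump(template)}
redis_client.set(cache_key, json.dumps(response))
```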
We need to serialise the template to JSON to store it in Redis. Python’s
built-in JSON serialiser doesn’t know what to do with a UUID object, so
we need to manually cast it to a string instead.
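For example, in the serialise step (hypothetical fields):

```python
def serialize(self):
    return {
        "id": str(self.id),  # cast by hand, because json.dumps can't handle UUIDs
        "created_by": str(self.created_by_id),
        "name": self.name,
    }
```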
It’s a UUID column, but by default Marshmallow wants to select the id
from the users table, not from the templates table, because the two
are foreign-keyed.
Adding the property explicitly like this forces it to select from the
`created_by_id` column, but still serialises it to the `created_by`
field to avoid any breaking change.
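Something like this (a sketch of the schema, not the exact code):

```python
from marshmallow import Schema, fields

class TemplateSchema(Schema):
    # read from the template's own `created_by_id` column, but keep
    # serialising it under the existing `created_by` key
    created_by = fields.UUID(attribute="created_by_id")
```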
Content and subject are user-submitted so are effectively unbounded in
size. And we’re serialising them for every template when sending the
list of templates to the admin app.
For the service with the most templates this results in a 1.3MB blob of
JSON going over the wire, and being cached in Redis.
And then the admin app completely ignores these fields, because it
doesn’t show template content until you’ve clicked into a single
template.
This commit adds a new query parameter, `detailed`, that the admin app
can set to `False`. When it does, only the fields needed to render the
`/templates` page are returned.
This is done with a new parameter so as not to break the V1 API.
Although I looked in Kibana and it doesn’t seem like anyone external is
using this endpoint, we’ve come this far without breaking the API so…
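A hypothetical sketch of the endpoint change (the blueprint, dao call
and schema names are all assumptions):

```python
from flask import jsonify, request

@template_blueprint.route("", methods=["GET"])
def get_templates_for_service(service_id):
    templates = dao_get_all_templates_for_service(service_id)
    if request.args.get("detailed", "True") == "True":
        data = template_schema.dump(templates, many=True)  # full V1 response
    else:
        # just the fields the admin app needs for the /templates page
        data = [
            {"id": str(t.id), "name": t.name, "template_type": t.template_type}
            for t in templates
        ]
    return jsonify(data=data)
```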
This commit changes the code in the post notification endpoint to handle
a serialised template (i.e. a `dict`) rather than a database object.
This is the first step towards being able to cache the template and not
hit the database on every request.
There should be no functional changes here, it’s just refactoring.
There are some changes to the tests where the signature of functions
has changed.
Importing of the template schema has to be done at a function level,
otherwise Marshmallow gets weird.
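That is, something like this (hypothetical module path):

```python
def get_template_data(template_id):
    # deferred import: pulling the schema in at module level trips the
    # Marshmallow/circular-import weirdness described above
    from app.schemas import template_schema
    ...
```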
This commit also copies the `JSONModel` class from the admin app, which
turns serialised data (a dict made from JSON) into an object on which
certain predefined properties are allowed.
This means we can still do the caching of serialised data, without
having to change too much of the code in the app, or make it ugly by
sprinkling dict lookups everywhere.
We’re not copying all of JSONModel from the admin app, just the bits we
need. We don’t need to compare or hash these objects, they’re just used
for lookups. And redefining `__getattribute__` scares Leo.
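A cut-down sketch of the shape of it (the property list here is
hypothetical):

```python
class JSONModel:
    ALLOWED_PROPERTIES = set()

    def __init__(self, _dict):
        # take a dict made from JSON and expose only the predefined
        # properties as attributes (a missing key blows up early)
        self._dict = _dict or {}
        for prop in self.ALLOWED_PROPERTIES:
            setattr(self, prop, self._dict[prop])


class SerialisedTemplate(JSONModel):
    ALLOWED_PROPERTIES = {"id", "content", "subject", "template_type", "version"}
```

So `SerialisedTemplate(template_dict).content` works wherever the code
used to say `template.content` on a database object.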