This wasn't working - trying to access it failed with
`TypeError: Object of type 'Row' is not JSON serializable` because we
were trying to serialize a SQLAlchemy Row.
I haven't looked too far into what has changed to stop this from
working, but have just changed the endpoint to return a nested list instead.
Just looks a bit tidier and less repetitive.
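For illustration, the shape of the change is roughly this (a minimal sketch with a made-up route and query, not the real endpoint):

```python
from flask import Flask, jsonify
from sqlalchemy import create_engine, text

app = Flask(__name__)
engine = create_engine("sqlite://")


@app.route("/example-stats")
def example_stats():
    with engine.connect() as connection:
        rows = connection.execute(
            text("SELECT 'sms' AS notification_type, 5 AS total")
        ).all()

    # Row objects aren't JSON serializable, so unpack each one into a
    # plain list before handing the result to jsonify.
    return jsonify(data=[list(row) for row in rows])
```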
I’ve only done this for the serialised service because:
- we’re only checking this in places where we’re already using the
serialised service
- if we want to check this elsewhere there’s a good chance that new code
should be using the serialised service, since it’ll itself be doing
some kind of performance optimisation
This was added by mistake - the Concourse pipeline never did this
previously, and it errors if we try (the necessary environment variables
aren't present, so we couldn't even if we wanted to).
An old pen test report mentioned that you could send a request twice to
set a broadcast message as broadcasting, which would trigger us to send
two alerts.
It looks like this is now fixed, and this test coverage backs that up.
Note: it's unlikely this would have been an issue anyway, as the CBC
would probably have rejected the second message once it noticed it was
a duplicate.
Note: this test coverage is not supposed to be exhaustive of all the
potential transitions, but it covers the vast majority of the ones we
care about.
See `BroadcastStatusType.ALLOWED_STATUS_TRANSITIONS` for allowed
transitions.
It looks like we were allowing broadcasts to transition from draft to
broadcasting in one go. This is no longer valid: a broadcast should go
from draft to pending-approval and then to broadcasting.
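As a rough illustration of the rule being tested - the real mapping lives in `BroadcastStatusType.ALLOWED_STATUS_TRANSITIONS`, and the statuses and structure below are simplified:

```python
# Illustrative only; the real mapping may be shaped differently.
ALLOWED_STATUS_TRANSITIONS = {
    "draft": {"pending-approval"},
    "pending-approval": {"draft", "rejected", "broadcasting"},
    "broadcasting": {"cancelled", "completed"},
}


def can_transition(current_status, new_status):
    return new_status in ALLOWED_STATUS_TRANSITIONS.get(current_status, set())


assert can_transition("draft", "pending-approval")
assert can_transition("pending-approval", "broadcasting")
assert not can_transition("draft", "broadcasting")  # must go via pending-approval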
It looks like this was a leftover bit of support in our code for when we
were building stuff out and is no longer needed.
It's possible for a letter to pass our validation but for our print provider to be unable to print it. The letter will be marked as permanent failure in this case. This typically happens with precompiled letters.
Related to: https://github.com/alphagov/notifications-aws/pull/905
Previously this would fail because the Docker image we use for CI
builds doesn't have an 'environment.sh' file; it uses preset env
vars instead. This makes the command to upgrade the DB optional -
if the env file is missing, the error should be self-evident.
We had a situation where the delivery-worker app instance was terminated before the job was marked as `in-progress`, presumably because the query to check the daily limits was taking too long to complete.
If the job had been marked as in progress, the `check_job_status` task would have restarted it.
Updating the status to in-progress sooner will help.
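A minimal sketch of the reordering, using made-up names rather than the real task and dao helpers:

```python
from dataclasses import dataclass


@dataclass
class Job:
    job_status: str = "pending"


def daily_limit_exceeded(job):
    # Stand-in for the (slow) query that checks the service's daily limits.
    return False


def process_job(job):
    # Mark the job as in progress *before* running the slow daily-limit
    # check, so that check_job_status can restart it if the worker instance
    # is terminated part-way through.
    job.job_status = "in progress"

    if daily_limit_exceeded(job):
        job.job_status = "sending limits exceeded"
        return

    # ... fetch the CSV and queue each row for sending ...


process_job(Job())
```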
The query had a GROUP BY on notification_type and notification_status, which not only slows the query down but is wrong: the code only looked at the first result, yet the query returns one row per combination of notification type and status, so the number it used was incorrect.
Are we concerned that all status types are included? For example, letters can be cancelled or have validation failures, which shouldn't be included in the daily limit check.
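For illustration, the simpler shape of count is something like this (the model and column names are stand-ins, and whether to exclude particular statuses, per the question above, is left open):

```python
from sqlalchemy import Column, DateTime, Integer, String, func, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Notification(Base):
    # Illustrative model only - the real table has many more columns.
    __tablename__ = "notifications"
    id = Column(Integer, primary_key=True)
    service_id = Column(String)
    notification_type = Column(String)
    status = Column(String)
    created_at = Column(DateTime)


def count_notifications_since(session, service_id, midnight):
    # One scalar count per service, with no GROUP BY on
    # notification_type/status - grouping both slowed the query down and
    # meant only the first of several rows was being read.
    stmt = (
        select(func.count(Notification.id))
        .where(Notification.service_id == service_id)
        .where(Notification.created_at >= midnight)
    )
    return session.execute(stmt).scalar()
```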
- Update the Notification and NotificationHistory models to reflect the database.
- Update datatypes, and remove and add indexes.
Why?
Running the `flask db migrate` command produces many deltas because we did some work to update the notification and notification_history tables, but the SQLAlchemy models were not updated to reflect those changes. This PR cleans up all those deltas.
There are still some remaining differences, but we can look at those in another PR.
This adds total_letters to the data that is returned by the
`/platform-stats/data-for-billing-report` endpoint so that we can add
total letters as a column in the CSV file that can be downloaded.
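For illustration, each row of the billing report data would now carry something like this (every field name here except total_letters is a placeholder, not the real response shape):

```python
# Illustrative shape only.
example_billing_report_row = {
    "organisation_name": "Example organisation",
    "service_name": "Example service",
    "sms_cost": 12.50,
    "total_letters": 340,  # new field, surfaced as a column in the CSV
}
```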
It doesn't really do any verification - the WebAuthn code in the
browser and the admin app handle that. Instead, this completes the
login flow by marking the user as logged in in the database. Added a
docstring that explains this process a bit more, and also added a new
route: `/<id>/complete/webauthn`. We'll move the admin code over to use
this new URL in time.
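A rough sketch of what the new route does - the blueprint name, URL prefix and helper below are made up:

```python
from flask import Blueprint, Flask, jsonify

webauthn_blueprint = Blueprint("webauthn_login", __name__, url_prefix="/user")


def mark_user_logged_in(user_id):
    # Stand-in for the real dao call that records the successful login
    # (e.g. updating logged_in_at / the current session) in the database.
    return {"id": user_id, "logged_in": True}


@webauthn_blueprint.route("/<user_id>/complete/webauthn", methods=["POST"])
def complete_webauthn_login(user_id):
    # No WebAuthn verification happens here - the browser and the admin app
    # have already done that. This step just completes the login flow by
    # marking the user as logged in.
    return jsonify(data=mark_user_logged_in(user_id)), 200


app = Flask(__name__)
app.register_blueprint(webauthn_blueprint)
```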
The trouble is that the aggregate query behind the big blue numbers on the dashboard and the /notifications/{notification_type} page is taking too long to return.
I have some ideas on how to improve the query, but I should take some time to do some more research and testing. In the meantime, let's just ignore today's total numbers for the high volume services. There are only two services this will affect.
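The idea is something like this (the set of service IDs and the helpers are placeholders, not the real implementation):

```python
# Sketch only.
HIGH_VOLUME_SERVICE_IDS = {"service-a", "service-b"}


def count_todays_notifications_by_type(service_id):
    # Stand-in for the slow aggregate query behind the dashboard numbers.
    return {"email": 123, "sms": 456, "letter": 7}


def todays_dashboard_totals(service_id):
    if service_id in HIGH_VOLUME_SERVICE_IDS:
        # Skip today's aggregate entirely for the high volume services
        # until the underlying query has been improved.
        return {"email": 0, "sms": 0, "letter": 0}
    return count_todays_notifications_by_type(service_id)
```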
We haven't bumped the test version for a while.
Also bumped the versions of Flask and itsdangerous.
In order to fix Flask warnings I needed to change how the blueprints were registered.
It's always going to be in the future anyway.
After some analysis the query does perform better without it.
I'll make a note to update the other queries where we get today's
notification data to remove the end date filter in a separate PR.
Many of the team members don't look at emails from Zendesk, so this adds a `current_app.logger.error` message for things we care about, to give developers a better chance of seeing them.
I have purposely not added an error log for `check_for_services_with_high_failure_rates_or_sending_to_tv_numbers` because it's not something we need to look at immediately.
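The pattern is roughly this - the check and message below are made up, and the Zendesk call is just a placeholder for the existing ticket-creation code:

```python
from flask import current_app


def create_zendesk_ticket(message):
    # Placeholder for the existing ticket-creation code.
    ...


def alert_on_slow_letter_delivery(late_count):
    if late_count == 0:
        return
    message = f"{late_count} letters have not been delivered on time"
    # Many of us don't read the Zendesk emails, so also log an error to give
    # developers a better chance of noticing.
    current_app.logger.error(message)
    create_zendesk_ticket(message)
```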