This wasn't working - trying to access it gave
`TypeError: Object of type 'Row' is not JSON serializable`, because we were
trying to serialize a SQLAlchemy Row.
I haven't looked too far into what has changed to stop this from
working, but have just changed the endpoint to return a nested list instead.
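For reference, the workaround looks roughly like this - a sketch only, where the DAO call and endpoint shape are illustrative rather than the actual code:

```python
# Sketch of the workaround: SQLAlchemy Row objects can't be passed to jsonify
# directly, so unpack each one into a plain list before returning the response.
from flask import jsonify


def serialise_rows(rows):
    # each Row becomes a plain list, giving a JSON-friendly nested list
    return [list(row) for row in rows]


# inside the endpoint (illustrative names):
# return jsonify(data=serialise_rows(dao_get_report_rows()))
```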
An old pen test report mentioned that you could send the request to set a
broadcast message as broadcasting twice, which would trigger us to send two
alerts.
It looks like this is now fixed, and this test coverage backs that up.
Note: it's unlikely it would have been an issue anyway, as the CBC
would likely have rejected the second message once it noticed it was a
duplicate.
Note: this test coverage is not meant to be exhaustive of all the
potential transitions, but it covers the vast majority of the ones we care
about.
See `BroadcastStatusType.ALLOWED_STATUS_TRANSITIONS` for allowed
transitions.
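For context, the check the transition tests exercise looks roughly like this - a sketch only, and the statuses in the example mapping are illustrative rather than the full set defined on `BroadcastStatusType`:

```python
# Illustrative mapping: each current status lists the statuses it may move to.
# The real mapping is BroadcastStatusType.ALLOWED_STATUS_TRANSITIONS.
ALLOWED_STATUS_TRANSITIONS = {
    "draft": {"pending-approval"},
    "pending-approval": {"draft", "rejected", "broadcasting"},
    "broadcasting": {"completed", "cancelled"},
}


def validate_status_transition(current_status, new_status):
    if new_status not in ALLOWED_STATUS_TRANSITIONS.get(current_status, set()):
        raise ValueError(
            f"Cannot move broadcast message from {current_status} to {new_status}"
        )
```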
It looks like we were allowing broadcasts to transition from draft to
broadcasting in one go. This is no longer valid: a broadcast should go from
draft to pending approval and then to broadcasting.
It looks like this was a leftover bit of support in our code for when we
were building stuff out and is no longer needed.
It's possible for a letter to pass our validation but for our print provider to be unable to print it. In that case the letter will be marked as a permanent failure. This typically happens with precompiled letters.
The query had a group by on notification_type and notification_status, which not only slows the query down but is wrong: the code only looked at the first row of the result, but the query returns as many rows as there are combinations of notification type and status, so the number it reports is not the correct total.
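A simplified sketch of the difference (`db`, `Notification` and `midnight` here stand in for the real session, model and date helper):

```python
from sqlalchemy import func


def count_notifications_since(db, Notification, midnight):
    # With the old GROUP BY on notification_type and notification_status the
    # query returned one row per (type, status) pair and only the first row
    # was read. A single ungrouped aggregate gives the one total we want.
    return (
        db.session.query(func.count(Notification.id))
        .filter(Notification.created_at >= midnight)
        .scalar()
    )
```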
Are we concerned that all status types are included? For example, letters can be cancelled or have validation failures, which shouldn't be included in the daily limit check.
This adds total_letters to the data that is returned by the
`/platform-stats/data-for-billing-report` endpoint so that we can add
total letters as a column in the CSV file that can be downloaded.
It doesn't really do any verification - that's handled by the webauthn code in
the browser and the admin app. Instead, this completes the login flow by
marking the user as logged in in the database. Added a docstring that explains
this process a bit more, and also added a new route:
`/<id>/complete/webauthn`. We'll move the admin code over to use this new URL
in time.
The trouble is that the aggregate query returning the big blue numbers on the dashboard and the /notifications/{notification_type} page is taking too long.
I have some ideas on how to improve the query, but it will take some time to do more research and testing. In the meantime, let's just ignore "today's" total numbers for the high volume services. There are only two services that this will affect.
We haven't bumped the test version for a while.
Also bumped the version of Flask and itsdangerous.
In order to fix Flask warnings I needed to change how the blueprints were registered.
Many of the team members do not look at emails from Zendesk, so this adds a current_app.logger.error message for the things we care about, to give developers a better chance of seeing them.
I have purposely not added an error log for `check_for_services_with_high_failure_rates_or_sending_to_tv_numbers` because it's not something we need to look at immediately.
With SMS and email auth, the API handles verifying logins in the
`/<user_id>/verify/code` endpoint, where it checks the code is valid etc.
The admin app has already done this for webauthn logins, but we still
need an API endpoint so that we can set up the user's db entry to have
a new logged in timestamp, a new session id (this is important for
logging out other browser sessions), etc.
Also, we need to be able to make sure that the user's max login count
isn't exceeded. If it's exceeded, we shouldn't let them log in even with
a valid webauthn check.
This endpoint is a POST where the admin passes in a JSON dict with the key
"successful" set to True or False. True sets up the db stuff as mentioned;
False just increments the failed login count.
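The shape of the endpoint is roughly as below - a sketch under assumptions, where the route, the `MAX_FAILED_LOGIN_COUNT` constant and the DAO helpers (`get_user_by_id`, `complete_login`, `increment_failed_login_count`) are illustrative names rather than the actual implementation:

```python
from flask import Blueprint, jsonify, request

user_blueprint = Blueprint("user", __name__)

MAX_FAILED_LOGIN_COUNT = 10  # illustrative value


@user_blueprint.route("/<uuid:user_id>/verify/webauthn-login", methods=["POST"])
def complete_webauthn_login(user_id):
    user = get_user_by_id(user_id)  # assumed DAO helper

    if user.failed_login_count >= MAX_FAILED_LOGIN_COUNT:
        # even a valid webauthn check shouldn't log in a locked-out user
        return jsonify(message="Failed login count exceeded"), 403

    if request.get_json()["successful"]:
        # new logged-in timestamp plus a new session id, which is what
        # logs out the user's other browser sessions
        complete_login(user)
        return "", 204

    increment_failed_login_count(user)
    return "", 204
```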
Simplify the logic by changing the DAO function to require both a user id and
a webauthn credential id. Note that this changes the response from a 400 to a
404 if the credential belongs to a different user than the supplied id.
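A sketch of the reshaped DAO lookup (model and function names are illustrative):

```python
def dao_get_webauthn_credential_by_user_and_id(user_id, webauthn_credential_id):
    # Filtering on both ids means a credential that belongs to a different
    # user raises NoResultFound, which the API layer turns into a 404
    # rather than the previous 400.
    return WebauthnCredential.query.filter_by(
        user_id=user_id, id=webauthn_credential_id
    ).one()
```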
Give a minimum length to the text fields in POSTs to create/update a
credential, to avoid surprising edge cases involving empty string names etc.
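As an illustration of the kind of change (not the exact schema in the repo), a minLength on the relevant JSON schema fields is enough to reject empty strings:

```python
# Illustrative JSON schema fragment; the field names are assumptions.
post_create_webauthn_credential_schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "minLength": 1},
        "credential_data": {"type": "string", "minLength": 1},
    },
    "required": ["name", "credential_data"],
}
```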
Also fix tests:
First add an `__init__.py` file so the tests are found correctly, then update
the tests after we stopped serialising the webauthn
registration_response.
Added some simple validation to the delete endpoint for sanity, but
generally my assumption is that more validation will happen on the admin
side.
Notably I'm not checking whether the credentials are duplicated, nor is
there a uniqueness constraint in the database - I'm not sure the
credential blob will always be reliably equivalent, and I believe the
browser should hopefully take care of duplicates.
This will allow admin to pass through a value of "government" for the
broadcast_channel. We don't have any logic around the value of service.broadcast_channel,
so no updates are needed to the tasks etc.
We found that non-templated broadcast messages weren’t having their
content normalised before saving into an event.
This means that stuff like `\r\n` and curly quotes were being passed
through to the CBC proxy.
This commit firstly changes templated events to use
`str(BroadcastMessageTemplate)` to normalise the content, because it’s
non-obvious that calling
`BroadcastMessageTemplate.content_with_placeholders_filled_in` also
normalises content.
Then it changes the non-templated route to also call
`str(BroadcastMessageTemplate)`, where previously it was passing the
content straight through.
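For a sense of what normalisation protects against, the effect is roughly the following (illustrative only - the real logic lives in `BroadcastMessageTemplate` and the templating utils):

```python
def normalise_broadcast_content(content):
    # collapse Windows line endings and replace curly quotes with straight
    # ones so the CBC proxy never sees them
    return (
        content.replace("\r\n", "\n")
        .replace("\u2018", "'")
        .replace("\u2019", "'")
        .replace("\u201c", '"')
        .replace("\u201d", '"')
    )
```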
We want to replace the value `None` for
service.allowed_broadcast_provider with the value of "all". As a first
step, we need to allow both values. Once notifications-admin has been
changed to pass through "all" and all the data in the database has been
updated, we can update the code to stop supporting both values.
This adds a type table for broadcast providers, which is the pattern we
follow with our models (e.g. we have a `broadcast_channel_types` table).
As well as the four providers, the migration populates it with `all`
which is the value that will replace `null` in a later change.
It should be safe to add the foreign key constraint to the
`service_broadcast_settings` table in the same migration, since the column is
still nullable and we don't have data in that column that is not in the
types table.
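A condensed sketch of the shape of that migration - the constraint name, the `provider` column name and the four provider values are assumptions here, not copied from the real revision file:

```python
from alembic import op
import sqlalchemy as sa


def upgrade():
    op.create_table(
        "broadcast_provider_types",
        sa.Column("name", sa.String(), primary_key=True),
    )
    # the four providers plus 'all', which will later replace null
    op.execute(
        "INSERT INTO broadcast_provider_types (name) "
        "VALUES ('all'), ('ee'), ('o2'), ('three'), ('vodafone')"
    )
    # safe in the same migration: the column is still nullable and every
    # existing value is present in the new types table
    op.create_foreign_key(
        "fk_service_broadcast_settings_provider",
        "service_broadcast_settings",
        "broadcast_provider_types",
        ["provider"],
        ["name"],
    )
```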
DVLA would like to be able to identify letters sent by the Insolvency
Service, so we are changing the zipfile name. They need all zipfile
names to have the same structure, so we can't just add a marker to files
sent by that service - we have to change all filenames.
The new format is like this:
`{NOTIFY}.{DATE}.{SEQUENCE_ID}.{UNIQUE_ID}.{SERVICE_ID}.{ORG_NAME}.{EXTENSION}`
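Illustratively, building the name is just a matter of joining the components with dots - how each value (date format, sequence id, organisation name) is actually derived is decided elsewhere in the letters pipeline:

```python
def dvla_zip_filename(date, sequence_id, unique_id, service_id, org_name, extension="ZIP"):
    # sketch only: "NOTIFY" is assumed to be a fixed prefix, and the other
    # values are produced by the real letter tasks
    return f"NOTIFY.{date}.{sequence_id}.{unique_id}.{service_id}.{org_name}.{extension}"
```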
When deploying to PaaS, the Postgres database environment variables are set using the VCAP_SERVICES provided by the PaaS. When we start up the app and set the properties, we need to replace the `postgres` scheme in the connection string with `postgresql` for the app to start up properly.
This wasn't caught locally or with the unit tests because we were already setting this property with `postgresql`.
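The fix is essentially a one-line rewrite of the connection string scheme before it reaches SQLAlchemy (the variable names here are illustrative):

```python
import os

uri = os.environ.get("SQLALCHEMY_DATABASE_URI", "")
if uri.startswith("postgres://"):
    # newer SQLAlchemy versions no longer recognise the "postgres" dialect
    # alias that the PaaS puts into VCAP_SERVICES, so rewrite the scheme
    uri = uri.replace("postgres://", "postgresql://", 1)
```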
- sqlalchemy.sql.expression.case must include an else statement (see the sketch after this list).
- clearly define the list of columns for the inbound_sms_history insert; getting the list from InboundSmsHistory.__table__.c was causing data type errors.
- remove relationships when not needed; the foreign key relationship is established in the creation of the column. This gets rid of the warnings referenced here: http://sqlalche.me/e/14/qzyx.
- update queries now that the user relationship in the ServiceUser db model has been removed.
- move the check that a template is archived to the view instead of the DAO method. The check was clearing the session before the version history could be done.
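As promised above, a small illustration of the first bullet - the column and values are stand-ins, but the shape shows the 1.4-style call with whens passed positionally and an explicit `else_`:

```python
from sqlalchemy import case

# illustrative column and values only
status_label = case(
    (Notification.status == "delivered", "delivered"),
    (Notification.status == "permanent-failure", "failed"),
    else_="sending",
).label("status_label")
```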
Deleting notifications in the night tasks still needs to be
investigated. The raw SQL is causing an error.
This applies the same change we made in other apps [1][2]. Adding
the override here is special, though, because it means the others
will now get triggered, since this app is the start of the chain
of tasks for a request. We will also retain existing request_id
tracing for tasks within this app, since "apply_async" calls the
"send_task" method internally, which is the one we're overriding.
[1]: 6f3c118a1e
[2]: 2e08b7aa95
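The pattern being applied is roughly the following sketch (not the exact code in those commits - the class name and header key are illustrative): subclass Celery and override `send_task` so every outgoing task carries the current request id in its headers.

```python
from celery import Celery
from flask import g, has_request_context


class NotifyCelery(Celery):
    def send_task(self, name, args=None, kwargs=None, **options):
        # attach the request id (if there is one) so downstream apps can keep
        # tracing the same request; apply_async ends up here too, since it
        # calls send_task internally
        headers = options.pop("headers", None) or {}
        if has_request_context() and hasattr(g, "request_id"):
            headers.setdefault("request_id", g.request_id)
        return super().send_task(name, args, kwargs, headers=headers, **options)
```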