Many of the team members do not look at emails from Zendesk, so this adds a `current_app.logger.error` message for things we care about, to give developers a better chance of seeing them.
I have purposely not added an error log for `check_for_services_with_high_failure_rates_or_sending_to_tv_numbers` because it's not something we need to look at immediately.
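A minimal sketch of the pattern, with a hypothetical check and message (the real task names and wording differ):

```python
from flask import current_app


def check_for_stuck_letters():  # hypothetical nightly check
    stuck_letters = find_stuck_letters()  # hypothetical helper
    if stuck_letters:
        # logged as well as ticketed, so developers who don't read
        # Zendesk emails still have a chance of seeing the problem
        current_app.logger.error("%s letters are stuck", len(stuck_letters))
```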
With sms and email auth, the API handles verifying logins in the
`/<user_id>/verify/code` endpoint, where it checks that the code is valid etc.
The admin app has already done this for webauthn logins, but we still
need an API endpoint so that we can set up the user's db entry to have
a new logged-in timestamp, a new session id (this is important for
logging out other browser sessions), etc.
We also need to be able to make sure that the user's max login count
isn't exceeded. If it is, we shouldn't let them log in even with
a valid webauthn check.
This endpoint is a POST where the admin passes in a JSON dict with the key
"successful" set to True or False. True sets up the db changes mentioned above;
False just increments the failed login count.
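A sketch of the shape of the endpoint; the route, dao helpers, and config key are illustrative, not the actual implementation:

```python
from flask import Blueprint, current_app, jsonify, request

user_blueprint = Blueprint("user", __name__)


@user_blueprint.route("/<uuid:user_id>/verify/webauthn-login", methods=["POST"])
def complete_webauthn_login_attempt(user_id):
    user = get_user_by_id(user_id)  # hypothetical dao helper

    if not request.get_json()["successful"]:
        # the admin's webauthn check failed - just record the failure
        dao_increment_failed_login_count(user)  # hypothetical
        return "", 204

    if user.failed_login_count >= current_app.config["MAX_FAILED_LOGIN_COUNT"]:
        # even a valid webauthn check can't log in a locked-out user
        return jsonify(result="error", message="Too many failed logins"), 403

    # new logged-in timestamp and session id (logs out other browser sessions)
    dao_update_user_for_successful_login(user)  # hypothetical
    return "", 204
```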
Simplify logic by changing the dao function to require a user id and a
webauthn cred id. Note that this changes the response from a 400 to a
404 if the cred belongs to a different user than the supplied id.
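Roughly the new dao shape (function and model names are assumed; this relies on the app's error handlers turning `NoResultFound` into a 404):

```python
from app.models import WebauthnCredential  # assumed model name


def dao_get_webauthn_credential_by_user_and_id(user_id, webauthn_credential_id):
    # filtering on both ids means a cred owned by a different user is
    # simply not found (404) rather than failing validation (400)
    return WebauthnCredential.query.filter_by(
        user_id=user_id,
        id=webauthn_credential_id,
    ).one()
```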
Give a minimum length to the text fields in POSTs to create/update a
credential, to avoid unexpected edge cases involving empty string names etc.
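If these endpoints validate with JSON Schema like the rest of the API, the change is roughly this (schema name illustrative; the field names are the ones described below):

```python
post_create_webauthn_credential_schema = {
    "type": "object",
    "properties": {
        # minLength stops empty-string names and blobs slipping through
        "name": {"type": "string", "minLength": 1},
        "credential_data": {"type": "string", "minLength": 1},
        "registration_response": {"type": "string", "minLength": 1},
    },
    "required": ["name", "credential_data", "registration_response"],
}
```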
Also fix tests: first add an `__init__.py` file so the tests are found
correctly, then update the tests after we stopped serialising webauthn
registration_response.
Added some simple validation to the delete endpoint for sanity, but
generally my assumption is that more validation will happen on the admin
side.
Notably I'm not checking whether the credentials are duplicated, nor is
there a uniqueness constraint in the database. I'm not sure the
credential blob will always be reliably equivalent, and I believe the
browser should hopefully take care of dupes.
This lets us be in line with what the admin handles, keep it simple on
the API side, and do as little manipulation of binary data as possible.
### Minor changes
* `id` is a UUID we can use for referencing within Notify. No relation to
the key itself.
* `name` is a user-viewable name that can be set/edited
* fix `updated_at` to have `onupdate`, not `default` (see the sketch below)
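The `updated_at` fix, roughly (model name assumed):

```python
from datetime import datetime

from app import db


class WebauthnCredential(db.Model):
    # ...
    created_at = db.Column(db.DateTime, nullable=False, default=datetime.utcnow)
    # onupdate, not default: the timestamp should change on every UPDATE,
    # not just be set once when the row is inserted
    updated_at = db.Column(db.DateTime, nullable=True, onupdate=datetime.utcnow)
```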
### Simplify the webauthn data
`credential_data` is the data we store about an authenticator that we'll
use to identify the key when logging in. It includes the credential_id, the
public_key, and the aaguid (which identifies the authenticator
make/model).

`registration_response` is the data containing audit information; in the
future we can use this to ensure that the authenticators used are of
high quality.

Both of these fields are CBOR (a kind of binary JSON), encoded in
base64 so that they can be embedded within our regular JSON API
endpoints. We don't anticipate the API ever needing to interact with
this data directly.
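To illustrate the encoding only (using the `cbor2` library; the real payloads come from the browser):

```python
import base64

import cbor2

# a CBOR payload, base64-encoded so it can travel inside JSON
credential_data = base64.b64encode(
    cbor2.dumps({"credential_id": b"...", "public_key": b"...", "aaguid": b"..."})
).decode("ascii")

# the api stores the string as-is; decoding shown here only for illustration
decoded = cbor2.loads(base64.b64decode(credential_data))
```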
This will allow admin to pass through a value of "government" for the
broadcast_channel. We don't have any logic around the value of service.broadcast_channel,
so no updates are needed to the tasks etc.
We found that non-templated broadcast messages weren't having their
content normalised before saving into an event.
This means that things like `\r\n` and curly quotes were being passed
through to the CBC proxy.
This commit firstly changes templated events to use
`str(BroadcastMessageTemplate)` to normalise the content, because it’s
non-obvious that calling
`BroadcastMessageTemplate.content_with_placeholders_filled_in` also
normalises content.
Then it changes the non-templated route to also call
`str(BroadcastMessageTemplate)`, where previously it was passing the
content straight through.
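The kind of normalisation `str(BroadcastMessageTemplate)` applies looks roughly like this (an illustration, not the actual utils implementation):

```python
def normalise_broadcast_content(content):
    # collapse Windows newlines and curly quotes into plain equivalents
    # before the content reaches the CBC proxy
    return (
        content.replace("\r\n", "\n")
        .replace("\u2018", "'").replace("\u2019", "'")  # curly single quotes
        .replace("\u201c", '"').replace("\u201d", '"')  # curly double quotes
    )
```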
We want to replace the value `None` for
service.allowed_broadcast_provider with the value of "all". As a first
step, we need to allow both values. Once notifications-admin has been
changed to pass through "all" and all the data in the database has been
updated, we can update the code to stop supporting both values.
This adds a type table for broadcast providers, which is the pattern we
follow with our models (e.g. we have a `broadcast_channel_types` table).
As well as the four providers, the migration populates it with `all`
which is the value that will replace `null` in a later change.
It should be safe to add the foreign key constraint to the
`service_broadcast_settings` table in the same migration, since the column is
still nullable and we don't have any data in that column that is not in the
types table.
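A sketch of the migration (the provider names, column name, and constraint name are assumptions):

```python
import sqlalchemy as sa
from alembic import op


def upgrade():
    broadcast_provider_types = op.create_table(
        "broadcast_provider_types",
        sa.Column("name", sa.String(), primary_key=True),
    )
    # the four providers, plus "all" which will replace null later
    op.bulk_insert(
        broadcast_provider_types,
        [{"name": name} for name in ("all", "ee", "o2", "three", "vodafone")],
    )
    # safe while the column is nullable and all existing values are in the table
    op.create_foreign_key(
        "fk_service_broadcast_settings_provider",
        "service_broadcast_settings",
        "broadcast_provider_types",
        ["provider"],
        ["name"],
    )
```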
DVLA would like to be able to identify letters sent by the Insolvency
Service, so we are changing the zipfile name. They need all zipfile
names to have the same structure, so we can't just add a marker to files
sent by that service - we have to change all filenames.
The new format is like this:
`{NOTIFY}.{DATE}.{SEQUENCE_ID}.{UNIQUE_ID}.{SERVICE_ID}.{ORG_NAME}.{EXTENSION}`
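For example (all field values and widths are illustrative; the real date layout and id formats may differ):

```python
filename = ".".join(
    ["NOTIFY", "20210608", "001", "ABCDEF12", "66", "INSOLVENCY", "ZIP"]
)
# NOTIFY.20210608.001.ABCDEF12.66.INSOLVENCY.ZIP
```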
Second attempt [1]. This increases the threshold so:
- It's a more substantial amount of money lost (£16).
- It's 10% of the minimum free allowance for a service.
- It's greater than the threshold we have for TV numbers (500).
Having a higher threshold for this alert will help prevent wasted
effort investigating more negligible failures, and reduce the
ambiguity of whether we should take action: we should.
[1]: https://github.com/alphagov/notifications-api/pull/3221
When deploying to PaaS, the database postgres environment variables are set using VCAP_SERVICES, provided by PaaS. When we start up the app and set the properties, we need to replace the `postgres` string with `postgresql` for the app to start up properly.
This wasn't caught locally or with the unit tests because we were already setting this property with `postgresql`.
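The fix is essentially a one-line scheme rewrite (env var name assumed; newer SQLAlchemy no longer accepts the `postgres://` alias that VCAP_SERVICES provides):

```python
import os

uri = os.environ.get("SQLALCHEMY_DATABASE_URI", "")
# VCAP_SERVICES hands us "postgres://..."; rewrite the scheme so the
# app starts up with drivers that only accept "postgresql://"
SQLALCHEMY_DATABASE_URI = uri.replace("postgres://", "postgresql://", 1)
```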
`service_ids_to_purge` is a list of `Row` objects rather than a list of `UUID`s.
NOTE: `db.session.query(Service).filter(Service.id.notin_(services_with_data_retention)).all()` would have also worked. It seems that selecting only attributes from the `db.Model` has caused the change.
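To illustrate the difference:

```python
from app import db  # assumed Flask-SQLAlchemy setup
from app.models import Service  # model from this codebase

# selecting a single column returns Row objects, not bare UUIDs...
rows = db.session.query(Service.id).all()  # e.g. [(UUID('...'),), ...]

# ...so pull the id out of each row before treating them as UUIDs
service_ids_to_purge = [row.id for row in rows]

# querying the full model returns Service instances directly instead
services = db.session.query(Service).all()
```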
- `sqlalchemy.sql.expression.case` must include an `else_` clause (see the sketch after this list).
- Clearly define the list of columns for the `inbound_sms_history` insert; getting the list from `InboundSmsHistory.__table__.c` was causing data type errors.
- Remove relationships when not needed; the foreign key relationship is established in the creation of the column. This will get rid of the warnings referenced here: http://sqlalche.me/e/14/qzyx.
- Update queries now that the user relationship in the ServiceUser db model has been removed.
- Move the check that a template is archived to the view instead of the dao method. The check was clearing the session before the version history could be done.
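A sketch of the `case` change (the columns and values are illustrative):

```python
from sqlalchemy import case

from app.models import Notification  # illustrative model choice

# make the fallback explicit rather than relying on an implicit NULL
status_label = case(
    (Notification.status == "created", "pending"),
    else_=Notification.status,
)
```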
Deleting notifications in the nightly tasks still needs to be
investigated. The raw SQL is causing an error.