Commit Graph

4713 Commits

Author SHA1 Message Date
Pea Tyczynska
96f34bbd45 Merge pull request #3286 from alphagov/bump-utils-fix-placeholder-bug
Bump utils to bring in fix for optional placeholder bug
2021-07-09 11:24:42 +01:00
Pea Tyczynska
9e8682ac29 Bump utils to bring in fix for optional placeholder bug
See https://github.com/alphagov/notifications-utils/pull/878 for
details.

Changes we had to make for our app and tests to work correctly
after the dependency updates:

1. Update emergency alerts polygons test because we changed
how exact we are with locations of the points on the map.

2. Use Flask's g object to set additional request attributes

So far we have been storing them in _request_ctx_stack, which is
an internal part of Flask's request context.

Because of a major update to the Werkzeug dependency, which Flask
relies on, the way we were using it stopped working, so we needed
a new way to set those values.
The way we set those values now, using the g object, is also the
approach favoured in the Flask documentation:
https://flask.palletsprojects.com/en/1.1.x/reqcontext/#how-the-context-works
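A minimal sketch of the pattern (the attribute names are hypothetical, not the app's real ones): values are stored on flask.g inside the request context instead of on the private _request_ctx_stack.

```python
from flask import Flask, g, jsonify

app = Flask(__name__)


@app.before_request
def set_request_attributes():
    # previously these were stashed on _request_ctx_stack.top, which broke
    # after the Werkzeug/Flask upgrade
    g.service_id = "11111111-1111-1111-1111-111111111111"  # hypothetical value
    g.authenticated_service = None                          # hypothetical attribute


@app.route("/_whoami")
def whoami():
    # g is scoped to the current app/request context, so concurrent requests
    # don't see each other's values
    return jsonify(service_id=g.service_id)
```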
2021-07-08 12:18:09 +01:00
Katie Smith
fc0b9736eb Remove user permissions if service becomes a broadcast service
The "normal" service permissions and broadcast service permissions are
going to be different with no overlap. This means that if you were
viewing the team members page, there might be permissions in the
database that are not visible on the frontend if a service has changed
type. For example, someone could have the 'manage_api_keys' permission,
which would not show up on the team members page of a broadcast service.
To avoid people having permissions which aren't visible in admin, we now
remove all permissions from users when their service is converted to a
broadcast service.

Permissions for invited users are also removed.

It's not possible to convert a broadcast service to a normal service, so
we don't need to cater for this scenario.
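A rough sketch of the idea using SQLAlchemy Core, with a cut-down, hypothetical permissions table; invited-user permissions would be cleared in the same way.

```python
from sqlalchemy import Column, MetaData, String, Table, delete

metadata = MetaData()

# hypothetical table shape; the real permissions table has more columns
permissions = Table(
    "permissions",
    metadata,
    Column("service_id", String),
    Column("user_id", String),
    Column("permission", String),
)


def remove_all_permissions_for_service(connection, service_id):
    # a converted broadcast service keeps no "normal" permissions, so nothing
    # invisible is left behind on the team members page
    connection.execute(delete(permissions).where(permissions.c.service_id == service_id))
```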
2021-07-07 16:13:35 +01:00
Katie Smith
e5fdd8ee1f Add new broadcast related permissions
We want to have new permissions which will be used specifically for
broadcasts:
- `create_broadcasts`
- `approve_broadcasts`
- `reject_broadcasts`
- `cancel_broadcasts`

Cancel and reject will always go together, but having separate database
permissions makes things easier to change in the future.

The permission column of the permissions table is an enum. We can add values
in the alembic upgrade script, but removing individual values from an
enum is not supported by Postgres. To remove values (e.g. on downgrade), we
have to recreate the enum with only the old values.
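A hedged sketch of what such a migration could look like; the enum type name permission_types and the list of old values are illustrative, not taken from the real migration.

```python
from alembic import op

NEW_VALUES = ("create_broadcasts", "approve_broadcasts", "reject_broadcasts", "cancel_broadcasts")


def upgrade():
    # ALTER TYPE ... ADD VALUE is supported; on older Postgres versions it has
    # to run outside a transaction block
    for value in NEW_VALUES:
        op.execute(f"ALTER TYPE permission_types ADD VALUE '{value}'")


def downgrade():
    # Postgres cannot drop a single enum value, so recreate the type with only
    # the old values after deleting rows that use the new ones
    op.execute(
        "DELETE FROM permissions WHERE permission IN "
        "('create_broadcasts', 'approve_broadcasts', 'reject_broadcasts', 'cancel_broadcasts')"
    )
    op.execute("ALTER TYPE permission_types RENAME TO permission_types_old")
    op.execute(
        "CREATE TYPE permission_types AS ENUM "
        "('manage_users', 'manage_templates', 'manage_settings', "
        "'manage_api_keys', 'send_messages', 'view_activity')"  # illustrative old values
    )
    op.execute(
        "ALTER TABLE permissions ALTER COLUMN permission "
        "TYPE permission_types USING permission::text::permission_types"
    )
    op.execute("DROP TYPE permission_types_old")
```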
2021-07-07 14:54:13 +01:00
Ben Thorner
273d14fbe4 Merge pull request #3280 from alphagov/webauthn-user-flag
Add flag to say if user is eligible for WebAuthn
2021-07-01 11:10:06 +01:00
Ben Thorner
6cf24899dd Let existing WebAuthn users continue using it
It's not a big deal if a user is no longer eligible to register a
security key, so we may as well let them continue using it. This
avoids putting them in a limbo state if we don't immediately change
their auth type when they're no longer eligible to use the feature.
2021-06-30 15:41:43 +01:00
Rebecca Law
c501c92bf3 Convert value from redis to an int.
This has been properly tested locally with redis enabled.
2021-06-29 14:00:25 +01:00
Rebecca Law
ed788cb0bd Fix bug in check_service_over_daily_limit
I forgot to return service_stats if the cache exists. Also fixed the tests to check the service_stats value.
2021-06-29 12:59:20 +01:00
Rebecca Law
310e1cb4e2 Merge pull request #3276 from alphagov/daily-limit-redis-cache
Correct the daily limits cache.
2021-06-29 12:06:35 +01:00
Ben Thorner
2fa6327efb Add flag to say if user is eligible for WebAuthn
Currently we have some data-driven roles to say who can use this
feature. Adding a flag in the API means we can avoid API calls in
the Admin app to determine the same.

Allowing members of the GOV.UK Notify service to use the feature
is a workaround, so we can avoid making someone a Platform Admin
before they've protected their account with it.
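A sketch of how such a flag might be exposed; the serializer shape and attribute names are assumptions, not the real API response.

```python
def serialize_user(user, notify_service_id):
    # hypothetical serializer: eligibility is computed once, API-side, instead
    # of the Admin app making extra calls to work it out
    return {
        "id": str(user.id),
        "name": user.name,
        "can_use_webauthn": (
            user.platform_admin
            # workaround: members of the GOV.UK Notify service get the flag so
            # they can protect their account before becoming platform admins
            or any(str(service.id) == str(notify_service_id) for service in user.services)
        ),
    }
```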
2021-06-28 13:35:24 +01:00
Rebecca Law
18dd9050a4 - make sure when processing a job that we check the total_sent + job.notification_count against the service.message_limit. 2021-06-28 13:07:48 +01:00
Rebecca Law
fd7486d751 - Merge daily limit functions into one, refactor call for daily limit check from process_job
- refactor tests to standardise test names
- refactor some tests to be more clear
- remove unnecessary tests
- include missing test
2021-06-24 11:05:22 +01:00
Katie Smith
0f42b4dbec Fix the endpoint for the monthly status report
This wasn't working - the error given when trying to access it was
`TypeError: Object of type 'Row' is not JSON serializable` when we tried
to serialize a SQLAlchemy Row.

I haven't looked too far into what has changed to stop this from
working, but have just changed the endpoint to return a nested list instead.
2021-06-23 16:03:58 +01:00
Rebecca Law
57fb9da414 - change the condition so that we don't reset the cache if it's zero
- set the cache if it doesn't exist so there is an expiry of 24 hours.
2021-06-23 15:09:09 +01:00
Chris Hill-Scott
1c1023a877 Merge pull request #3257 from alphagov/refactor-high-volume-check
Refactor high volume into serialised service model
2021-06-23 09:01:27 +01:00
Rebecca Law
35b20ba363 Correct the daily limits cache.
Last year we had an issue with the daily limit cache and the query that was populating it. As a result we have not been checking the daily limit properly. This PR should correct all that.

The daily limit cache is now being incremented in app.notifications.process_notifications.persist_notification; this method is, and should always be, the only method used to create a notification.
We increment the daily limit cache if redis is enabled (and it is always enabled in production) and the key type for the notification is team or normal.

We check if the daily limit is exceeded in many places:
- app.celery.tasks.process_job
- app.v2.notifications.post_notifications.post_notification
- app.v2.notifications.post_notifications.post_precompiled_letter_notification
- app.service.send_notification.send_one_off_notification
- app.service.send_notification.send_pdf_letter_notification

If the daily limit cache is not found, we set it to 0 with an expiry of 24 hours. The daily limit cache key is service_id-yyyy-mm-dd-count, so each day a new cache is created.

The best thing about this PR is that the app.service_dao.fetch_todays_total_message_count query has been removed. This query was not performant and had been wrong for ages.
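A simplified sketch of the caching scheme described above, using redis-py; the helper names are hypothetical, and the real app only does this when redis is enabled.

```python
from datetime import date

from redis import Redis

redis_client = Redis()
ONE_DAY_IN_SECONDS = 24 * 60 * 60


def daily_limit_cache_key(service_id):
    # one key per service per day, e.g. "<service_id>-2021-06-22-count"
    return f"{service_id}-{date.today().isoformat()}-count"


def increment_todays_count(service_id, key_type):
    # called from persist_notification, the single place notifications are created
    if key_type in ("normal", "team"):
        redis_client.incr(daily_limit_cache_key(service_id))


def check_daily_limit(service_id, message_limit):
    key = daily_limit_cache_key(service_id)
    value = redis_client.get(key)
    if value is None:
        # no cache for today yet: create it at 0 with a 24 hour expiry
        redis_client.set(key, 0, ex=ONE_DAY_IN_SECONDS)
        value = 0
    if int(value) >= message_limit:
        raise RuntimeError("daily message limit reached")  # stand-in for the real exception
```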
2021-06-22 16:15:36 +01:00
Rebecca Law
c44ec57c17 Merge pull request #3266 from alphagov/update-notifications-model-with-indexes
Tidy up models
2021-06-21 12:43:08 +01:00
Rebecca Law
ff79c65cab Move the indexes to be inline with the table. 2021-06-21 12:06:38 +01:00
David McDonald
69212827eb Merge pull request #3270 from alphagov/broadcast-status-transition-tests
Broadcast status transition tests
2021-06-16 16:04:15 +01:00
Chris Hill-Scott
57249b43c8 Refactor high volume into serialised service model
Just looks a bit tidier and less repetitive.

I’ve only done this for the serialised service because:
- we’re only checking this in places where we’re already using the
  serialised service
- if we want to check this elsewhere there’s a good chance that new code
  should be using the serialised service, since it’ll itself be doing
  some kind of performance optimisation
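Roughly what the refactor might look like, as a stand-in class rather than the app's real SerialisedService, with placeholder service ids.

```python
HIGH_VOLUME_SERVICE_IDS = {
    "aaaaaaaa-0000-0000-0000-000000000000",  # placeholder ids, not real services
    "bbbbbbbb-0000-0000-0000-000000000000",
}


class SerialisedService:
    # stand-in for the app's serialised service model
    def __init__(self, id):
        self.id = id

    @property
    def high_volume(self):
        # callers already hold a serialised service, so they can just check
        # service.high_volume instead of repeating the lookup elsewhere
        return str(self.id) in HIGH_VOLUME_SERVICE_IDS
```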
2021-06-16 10:46:18 +01:00
Rebecca Law
467794c212 Merge pull request #3269 from alphagov/permanent-failure-for-letters
Add permanent-failure for letters.
2021-06-16 10:42:08 +01:00
Rebecca Law
d4a42471cb Merge pull request #3267 from alphagov/fix-daily-totals-query
Improve the query to get today's totals for a service.
2021-06-16 07:34:01 +01:00
David McDonald
54fe8ee68d Remove old todo for support of draft to broadcasting transition
It looks like we were allowing broadcasts to transition from draft to
broadcasting in one go. This isn't valid now. It should go draft,
pending approval and then broadcasting.

It looks like this was a leftover bit of support in our code for when we
were building stuff out and is no longer needed.
2021-06-15 17:18:54 +01:00
Rebecca Law
2c36898684 Add permanent-failure for letters.
It's possible for a letter to pass our validation but for our print provider to be unable to print it. The letter will be marked as permanent-failure in this case. This typically happens with precompiled letters.
2021-06-15 15:12:46 +01:00
Rebecca Law
8af10eb1f0 Update the job_status to in-progress sooner.
We had a situation where the delivery-worker app instance was terminated before the job was marked as `in-progress`, presumably because the query to check the daily limits was taking too long to complete.
If the job had been marked as in-progress, the `check_job_status` task would have restarted it.
Updating the status to in-progress sooner will help.
2021-06-15 07:58:17 +01:00
Rebecca Law
08bb5c657f Fix the query to get today's totals for a service.
The query had a GROUP BY on notification_type and notification_status, which not only slows the query down but is wrong: the calling code only looked at the first row, yet the query returns as many rows as there are distinct notification type/status combinations, so the result did not contain the correct number.

Should we be concerned that all status types are included? For example, letters can be cancelled or have validation failures, which shouldn't be included in the daily limit check.
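For illustration, the corrected query shape might look like this in SQLAlchemy - a single count with no GROUP BY. The model here is cut down and not the real dao code.

```python
from sqlalchemy import Column, DateTime, String, func, select
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Notification(Base):
    __tablename__ = "notifications"
    id = Column(String, primary_key=True)
    service_id = Column(String)
    key_type = Column(String)
    created_at = Column(DateTime)


def todays_total_message_count_query(service_id, start_of_day, end_of_day):
    # a single aggregate row, rather than one row per type/status combination
    return select(func.count(Notification.id)).where(
        Notification.service_id == service_id,
        Notification.key_type != "test",
        Notification.created_at >= start_of_day,
        Notification.created_at < end_of_day,
    )
```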
2021-06-14 15:29:21 +01:00
Rebecca Law
0688a16cb2 Tidy up models
- Update the Notification and NotificationHistory model to reflect the database.
- Updates to datatypes, removal of indexes and addition of indexes.

Why?
After running the `flask db migrate` command there are many deltas, because we did some work to update the notification and notification_history tables but the SQLAlchemy models were not updated to reflect those changes. This PR cleans up all those deltas.
However, there are still some remaining differences, which we can look at in another PR.
2021-06-14 14:43:34 +01:00
Katie Smith
0148b3dba6 Add new total_letters field to the billing report data
This adds total_letters to the data that is returned by the
`/platform-stats/data-for-billing-report` endpoint so that we can add
total letters as a column in the CSV file that can be downloaded.
2021-06-11 11:31:22 +01:00
David McDonald
be035664c4 Add operator channel to broadcast settings route
Looks identical to the government channel in terms of the interface
2021-06-09 13:49:06 +01:00
Leo Hemsted
8e1a144f87 Merge pull request #3229 from alphagov/data-error
make sure all non-uuid service ids 403 in api keys
2021-06-07 14:09:22 +01:00
Leo Hemsted
c53ed4107b Merge pull request #3260 from alphagov/verify-to-complete
rename verify webauthn endpoint to complete
2021-06-04 13:25:10 +01:00
Leo Hemsted
542b151875 rename verify webauthn endpoint to complete
it doesn't really do any verification - the webauthn code in the
browser and the admin app does that. Instead, this completes the
login flow, by marking the user as logged in in the database. Added a
docstring that explains this process a bit more, and also added a new
route: /<id>/complete/webauthn. We'll move the admin code over to use
this new url in time
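A sketch of serving both URLs from one view while the Admin app migrates; the blueprint name, prefix and the old path are assumptions - only /<id>/complete/webauthn comes from the message above.

```python
from flask import Blueprint, jsonify

webauthn_login_blueprint = Blueprint("webauthn_login", __name__, url_prefix="/user")


# keep the old URL working while the Admin app moves over to the new one
@webauthn_login_blueprint.route("/<uuid:user_id>/verify/webauthn", methods=["POST"])
@webauthn_login_blueprint.route("/<uuid:user_id>/complete/webauthn", methods=["POST"])
def complete_webauthn_login(user_id):
    """Complete a WebAuthn login.

    Verification has already happened in the browser and the Admin app;
    this endpoint just records the user as logged in.
    """
    return jsonify(result="completed"), 200
```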
2021-06-03 17:12:19 +01:00
Rebecca Law
684a882cf3 Revert "Do not include today's totals" 2021-06-02 16:06:33 +01:00
Rebecca Law
c668bed9d3 Merge pull request #3256 from alphagov/no-totals-for-high-volume-services
Do not include today's totals
2021-06-02 15:08:45 +01:00
Rebecca Law
a341536de0 - Add comment to test and new if statement
- Update assert in test
2021-06-02 14:13:31 +01:00
Rebecca Law
0e0c911517 Merge pull request #3252 from alphagov/upgrade-warning-to-error
Add an error log for alert tasks.
2021-06-02 13:48:01 +01:00
Rebecca Law
b170b5ed80 This change is a temporary fix to allow users of high volume services to use the admin app.
The trouble is that the aggregate query that returns the big blue numbers on the dashboard and the /notifications/{notification_type} page is taking too long to return.
I have some ideas on how to improve the query, but it will take some time to do more research and testing. In the meantime, let's just ignore "today's" total numbers for the high volume services. There are only two services that this will affect.
2021-06-02 10:31:38 +01:00
David McDonald
04e23ca6a9 Revert "Bump utils version for new invalid address character" 2021-06-01 10:53:28 +01:00
Rebecca Law
50de85988e Fix dependency issues
We haven't bumped the test dependency versions for a while.
Also bumped the versions of Flask and itsdangerous.
In order to fix Flask warnings I needed to change how the blueprints were registered.
2021-05-27 13:02:24 +01:00
Rebecca Law
ed5e3b3d9c Removed the end date from the filter.
It's always going to be in the future anyway.
After some analysis the query does perform better without it.
I'll make a note to update other queries where we get today's
notification data to remove the end date filter in a separate PR.
2021-05-26 13:47:53 +01:00
Rebecca Law
1bf5ce08b2 Add an error log for alert tasks.
Many of the team members do not look at emails from Zendesk, so this adds a current_app.logger.error message for things we care about, to give developers a better chance of seeing them.
I have purposely not added an error log for `check_for_services_with_high_failure_rates_or_sending_to_tv_numbers` because it's not something we need to look at immediately.
2021-05-26 11:06:21 +01:00
Rebecca Law
bd1498f49f Bump utils version which contains ~ as an invalid first character for a
postal address.
2021-05-25 08:29:25 +01:00
Rebecca Law
782514a0f1 Update the dao_fetch_todays_stats_for_service query.
We have an index on notifications(service_id, created_at). By updating the query to filter on a created_at range (between two timestamps) rather than on date(created_at), the query can use that index, changing the query plan from a sequential scan to an index scan - see the query plans below.
This query is still rather slow but is improved by this update.

https://www.pivotaltracker.com/story/show/178263480

explain analyze
SELECT notification_type, notification_status, count(id)
FROM notifications
WHERE service_id = 'e791dbd4-09ea-413a-b773-ead8728ddb09'
AND date(created_at) = '2021-05-23'
AND key_type != 'test'
GROUP BY notification_type, notification_status;
                                                                                     QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Finalize GroupAggregate  (cost=6326816.31..6326926.48 rows=24 width=22) (actual time=91666.805..91712.976 rows=10 loops=1)
   Group Key: notification_type, notification_status
   ->  Gather Merge  (cost=6326816.31..6326925.88 rows=48 width=22) (actual time=91666.712..91712.962 rows=30 loops=1)
         Workers Planned: 2
         Workers Launched: 2
         ->  Partial GroupAggregate  (cost=6325816.28..6325920.31 rows=24 width=22) (actual time=91662.907..91707.027 rows=10 loops=3)
               Group Key: notification_type, notification_status
               ->  Sort  (cost=6325816.28..6325842.23 rows=10379 width=30) (actual time=91635.890..91676.225 rows=270884 loops=3)
                     Sort Key: notification_type, notification_status
                     Sort Method: external merge  Disk: 10584kB
                     Worker 0:  Sort Method: external merge  Disk: 10648kB
                     Worker 1:  Sort Method: external merge  Disk: 10696kB
                     ->  Parallel Seq Scan on notifications  (cost=0.00..6325123.93 rows=10379 width=30) (actual time=0.036..91513.985 rows=270884 loops=3)
                           Filter: (((key_type)::text <> 'test'::text) AND (service_id = 'e791dbd4-09ea-413a-b773-ead8728ddb09'::uuid) AND (date(created_at) = '2021-05-23'::date))
                           Rows Removed by Filter: 16191366
 Planning Time: 0.760 ms
 Execution Time: 91714.500 ms
(17 rows)

explain analyze
SELECT notification_type, notification_status, count(id)
FROM notifications
WHERE service_id = 'e791dbd4-09ea-413a-b773-ead8728ddb09'
AND created_at  >= '2021-05-22 23:00'
and created_at < '2021-05-23 23:00'
AND key_type != 'test'
GROUP BY notification_type, notification_status;
                                                                                                                       QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Finalize GroupAggregate  (cost=2114273.37..2114279.57 rows=24 width=22) (actual time=21032.076..21035.725 rows=10 loops=1)
   Group Key: notification_type, notification_status
   ->  Gather Merge  (cost=2114273.37..2114278.97 rows=48 width=22) (actual time=21032.056..21035.703 rows=30 loops=1)
         Workers Planned: 2
         Workers Launched: 2
         ->  Sort  (cost=2113273.35..2113273.41 rows=24 width=22) (actual time=21029.261..21029.265 rows=10 loops=3)
               Sort Key: notification_type, notification_status
               Sort Method: quicksort  Memory: 25kB
               Worker 0:  Sort Method: quicksort  Memory: 25kB
               Worker 1:  Sort Method: quicksort  Memory: 25kB
               ->  Partial HashAggregate  (cost=2113272.56..2113272.80 rows=24 width=22) (actual time=21029.228..21029.230 rows=10 loops=3)
                     Group Key: notification_type, notification_status
                     ->  Parallel Bitmap Heap Scan on notifications  (cost=114455.71..2111695.14 rows=210322 width=30) (actual time=4983.790..20960.581 rows=271217 loops=3)
                           Recheck Cond: ((service_id = 'e791dbd4-09ea-413a-b773-ead8728ddb09'::uuid) AND (created_at >= '2021-05-22 23:00:00'::timestamp without time zone) AND (created_at < '2021-05-23 23:00:00'::timestamp without time zone))
                           Rows Removed by Index Recheck: 1456269
                           Filter: ((key_type)::text <> 'test'::text)
                           Heap Blocks: exact=12330 lossy=123418
                           ->  Bitmap Index Scan on ix_notifications_service_created_at  (cost=0.00..114329.51 rows=543116 width=0) (actual time=4973.139..4973.140 rows=813671 loops=1)
                                 Index Cond: ((service_id = 'e791dbd4-09ea-413a-b773-ead8728ddb09'::uuid) AND (created_at >= '2021-05-22 23:00:00'::timestamp without time zone) AND (created_at < '2021-05-23 23:00:00'::timestamp without time zone))
 Planning Time: 0.191 ms
 Execution Time: 21035.770 ms
(21 rows)
2021-05-25 08:00:24 +01:00
Leo Hemsted
c1b08e4cbc make sure all non-uuid service ids 403 in api keys
previously 'invalid-strings' would be handled, but integers would just
return 500.
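The shape of the fix, sketched with a hypothetical helper: coerce the value to a string first so bad strings and integers alike fall through to a 403.

```python
import uuid

from flask import abort


def validate_service_id(service_id):
    # reject anything that isn't a valid UUID - strings and integers alike -
    # with a 403 instead of letting it blow up later as a 500
    try:
        return uuid.UUID(str(service_id))
    except ValueError:
        abort(403)
```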
2021-05-19 08:57:31 +01:00
Leo Hemsted
00b0227007 add endpoint for verifying webauthn login
with sms and email auth the api handles verifying logins in the
`/<user_id>/verify/code` endpoint, when it checks the code is valid etc.
The admin app has already done this for webauthn logins, but we still
need an API endpoint so that we can set up the user's db entry to have
a new logged in timestamp, a new session id (this is important for
logging out other browser sessions), etc.

Also, we need to be able to make sure that the user's max login count
isn't exceeded. If it's exceeded, we shouldn't let them log in even with
a valid webauthn check.

This endpoint is a POST where the admin passes in a json dict with key
"succesful" being True or False. True sets up the db stuff as mentioned.
False just increments the failed login count.
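A hedged sketch of the endpoint described above; the dao helpers, URL and max-login constant are hypothetical.

```python
import uuid
from datetime import datetime

from flask import Blueprint, jsonify, request

webauthn_login_blueprint = Blueprint("webauthn_login_verify", __name__, url_prefix="/user")

MAX_FAILED_LOGIN_COUNT = 10  # illustrative limit


@webauthn_login_blueprint.route("/<uuid:user_id>/verify/webauthn-login", methods=["POST"])
def verify_webauthn_login(user_id):
    user = get_user_by_id(user_id)  # hypothetical dao helper

    if not request.get_json()["successful"]:
        # the admin app rejected the credential: just record the failure
        user.failed_login_count += 1
        save_user(user)  # hypothetical dao helper
        return "", 204

    if user.failed_login_count >= MAX_FAILED_LOGIN_COUNT:
        # too many failed attempts - don't log them in even with a valid key
        return jsonify(message="Max login count exceeded"), 403

    user.logged_in_at = datetime.utcnow()
    user.current_session_id = uuid.uuid4()  # invalidates other browser sessions
    user.failed_login_count = 0
    save_user(user)  # hypothetical dao helper
    return "", 204
```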
2021-05-17 20:37:46 +01:00
Pea Tyczynska
251107029a Add webauthn to tests that include other auth types 2021-05-13 12:44:36 +01:00
Pea Tyczynska
098c6f031b Add webauthn as an auth type.
Both in our models and as a migration to add it to auth_types
table.

Make sure that if we downgrade, we first clean up the data.
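A hypothetical migration sketch along these lines, assuming an auth_types table with a name column and a users.auth_type column; the exact names, the new value and the fallback value are illustrative.

```python
from alembic import op


def upgrade():
    op.execute("INSERT INTO auth_types (name) VALUES ('webauthn_auth')")


def downgrade():
    # clean up any data pointing at the new auth type before removing it,
    # otherwise the delete would fail or strand users on a missing auth type
    op.execute("UPDATE users SET auth_type = 'sms_auth' WHERE auth_type = 'webauthn_auth'")
    op.execute("DELETE FROM auth_types WHERE name = 'webauthn_auth'")
```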
2021-05-13 12:44:36 +01:00
Leo Hemsted
c190886bfe tweak webauthn rest errors
simplify logic by changing the dao function to require a user id and a
webauthn cred id. Note that this changes the response from a 400 to a
404 if the cred is for a different user than the supplied id.

give a minimum length to the text fields in POSTs to create/update a
credential, to avoid surprising edge cases involving empty string
names etc.
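For example, the credential schemas could enforce this with minLength in JSON Schema (the field names are illustrative, not the real schema).

```python
post_create_webauthn_credential_schema = {
    "$schema": "http://json-schema.org/draft-07/schema#",
    "type": "object",
    "properties": {
        # minLength rules out empty-string names and payloads
        "name": {"type": "string", "minLength": 1},
        "credential_data": {"type": "string", "minLength": 1},
        "registration_response": {"type": "string", "minLength": 1},
    },
    "required": ["name", "credential_data", "registration_response"],
    "additionalProperties": False,
}
```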
2021-05-12 17:48:38 +01:00
Pea Tyczynska
d6fead7c04 On update, check that webauthn credential belongs to user 2021-05-12 17:48:38 +01:00
Pea Tyczynska
e6291187ba Remove registration_response from webauthn serialize - not needed in admin app
Also fix tests:

First add init file so the tests are found correctly, then update
the tests after we stopped serialising webauthn
registration_response.
2021-05-12 17:48:37 +01:00