this helps manage the transaction by keeping it inside one function in the dao,
so after the function completes you know that the transaction has been released
and concurrent processing can resume
we were running into issues where multiple beats queue up the
run_scheduled_jobs task at the same time, and concurrency issues with selecting
scheduled jobs cause both tasks to trigger processing of the same job.
Use with_for_update, which calls through to the Postgres SELECT ... FOR UPDATE.
This blocks other SELECT ... FOR UPDATE queries (ie other threads running the
same code) until the rows are set to pending and the transaction completes, so
the second thread will not find any rows.
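A minimal sketch of the pattern, assuming a Job model with job_status and scheduled_for columns (the function name and exact filters are illustrative, not the real DAO code):

```python
from datetime import datetime

from app import db
from app.models import Job


def dao_set_scheduled_jobs_to_pending():
    # SELECT ... FOR UPDATE: the selected rows stay locked until this
    # transaction commits, so a second worker running the same query
    # blocks here rather than picking up the same jobs.
    jobs = Job.query.filter(
        Job.job_status == 'scheduled',
        Job.scheduled_for < datetime.utcnow(),
    ).with_for_update().all()

    for job in jobs:
        job.job_status = 'pending'

    # Committing releases the lock; the second worker's SELECT then finds
    # no rows still in 'scheduled', so each job is only processed once.
    db.session.commit()
    return jobs
```

Keeping the SELECT, the status change and the commit inside one dao function is what makes this safe: once the function returns, the lock has definitely been released.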
Notifications with a `billable_units` count of `0` won't have any effect
on the result, but including them in the query will slow down the
grouping and summing of the results because it'll have to loop over more
rows.
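For illustration, the filter is just an extra clause on the query (the helper itself is hypothetical; model and column names follow the ones used elsewhere in these commits):

```python
from app.models import Notification


def billable_notifications_for_service(service_id):
    # rows with billable_units == 0 add nothing to the sums, so exclude
    # them before grouping rather than looping over them in Python
    return Notification.query.filter(
        Notification.service_id == service_id,
        Notification.billable_units != 0,
    ).all()
```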
April 1st is in British Summer Time, ie 1hr ahead of UTC. The database
stores everything in UTC, so for accurate comparisons we need to make
sure that `get_financial_year()` returns a timezone-aware UTC
timestamp that is 1hr ahead of midnight.
This also means that when we group notifications by month, the months
need to be in BST. So the line between one year and another is actually
01:00 on April 1st, _not_ 00:00 on April 1st.
There’s no way we’ve found to do this in SQLAlchemy or raw Postgres,
especially because we don’t store the timestamps with a timezone in the
database.
So the grouping and summing of the notifications has to be done in
Python.
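A rough sketch of the Python-side grouping, assuming each row gives us a naive UTC created_at and a billable_units count (the helper name and input shape are made up):

```python
from collections import defaultdict

import pytz

LONDON = pytz.timezone('Europe/London')


def group_billable_units_by_month(rows):
    totals = defaultdict(int)
    for created_at_utc, billable_units in rows:
        # the column is a naive UTC timestamp, so localise it to UTC and
        # convert to London time before deciding which month (and
        # therefore which financial year) it belongs to
        local = pytz.utc.localize(created_at_utc).astimezone(LONDON)
        totals[(local.year, local.month)] += billable_units
    return totals
```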
In order to invoice people we need to know how many text message
fragments they’ve sent per month.
This should be per (government) financial year, ie April 1st to April
1st, because we'll only ever show a page for one year (the 250,000
allowance is topped up at the start of every financial year).
This commit only does the DAO bit, not the REST bit.
Refactored the send_notifications method so that it is more readable.
Refactored test_send_notifications so that it uses a parametrized test to avoid duplication.
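Roughly, the parametrized shape looks like this (the parameter values are placeholders, not the real test cases):

```python
import pytest


@pytest.mark.parametrize('notification_type', ['sms', 'email'])
def test_send_notifications(notification_type):
    # one test body, run once per notification type
    ...
```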
can now pass in a query string `?statuses=x,y,z` to filter jobs based on
`Job.job_status`. Not passing in a status or passing in an empty string is
equivalent to selecting every status at once.
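A minimal sketch of the filtering, assuming a Flask view (the function name is illustrative, not the real endpoint):

```python
from flask import request

from app.models import Job


def get_jobs_for_service(service_id):
    raw = request.args.get('statuses', '')
    statuses = [s.strip() for s in raw.split(',') if s.strip()]

    query = Job.query.filter(Job.service_id == service_id)
    # no statuses param (or an empty string) means no extra filter, which
    # is the same as selecting every status at once
    if statuses:
        query = query.filter(Job.job_status.in_(statuses))
    return query.all()
```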
* changed POST to PUT - we are modifying an already present resource
* improved error handling on PUT (see the sketch below)
  - return 400 if the request is bad
  - roll back the delete of the previous whitelist on error
* return 204 (No Content) if the PUT succeeds
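A rough sketch of the approach, assuming a ServiceWhitelist model with service_id and recipient columns (the real model, blueprint and validation differ):

```python
from flask import Blueprint, jsonify, request

from app import db
from app.models import ServiceWhitelist

service_blueprint = Blueprint('service', __name__, url_prefix='/service')


@service_blueprint.route('/<uuid:service_id>/whitelist', methods=['PUT'])
def update_whitelist(service_id):
    data = request.get_json() or {}
    emails = data.get('email_addresses', [])
    phone_numbers = data.get('phone_numbers', [])

    # return 400 if the payload is bad
    if not isinstance(emails, list) or not isinstance(phone_numbers, list):
        return jsonify(result='error', message='Invalid whitelist'), 400

    try:
        # replace all existing entries with the provided ones
        ServiceWhitelist.query.filter_by(service_id=service_id).delete()
        db.session.add_all([
            ServiceWhitelist(service_id=service_id, recipient=r)
            for r in emails + phone_numbers
        ])
        db.session.commit()
    except Exception:
        # roll back so the old whitelist isn't lost if saving the new
        # entries fails
        db.session.rollback()
        raise

    return '', 204
```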
Developers need visibility of what their integration is doing within
the app. This includes notifications sent with a test key.
This commit adds an optional, defaults-to-false parameter to include
notifications sent from a test API key when getting notifications.
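Sketch of the flag (defaults to False so existing callers keep the current behaviour); the dao name is illustrative, and KEY_TYPE_TEST is assumed to be the test-key constant in models:

```python
from app.models import KEY_TYPE_TEST, Notification


def get_notifications_for_service(service_id, include_from_test_key=False):
    query = Notification.query.filter(Notification.service_id == service_id)
    if not include_from_test_key:
        # hide anything sent with a test API key unless explicitly asked for
        query = query.filter(Notification.key_type != KEY_TYPE_TEST)
    return query.all()
```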
Starting arguments on their own line and putting the closing parenthesis
on its own line, because any subsequent changes to the arguments diff
cleanly (ie without touching any other lines).
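For example, using the call from the sketch above:

```python
notifications = get_notifications_for_service(
    service_id,
    include_from_test_key=include_from_test_key,
)
```

Adding or removing an argument then only touches its own line; the surrounding lines (including the closing parenthesis) stay out of the diff.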
GET /<service_id>/whitelist
returns all whitelisted contacts for a service, separated into two lists
POST /<service_id>/whitelist
removes all existing whitelisted contacts, and replaces them with the
provided new entries
(todo: dao work + tests)
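A minimal sketch of the GET, assuming whitelist rows carry a recipient and a recipient_type of either 'email' or 'mobile' (illustrative, ahead of the dao work):

```python
from flask import jsonify

from app.models import ServiceWhitelist


def get_whitelist(service_id):
    rows = ServiceWhitelist.query.filter_by(service_id=service_id).all()
    return jsonify(
        email_addresses=[r.recipient for r in rows if r.recipient_type == 'email'],
        phone_numbers=[r.recipient for r in rows if r.recipient_type == 'mobile'],
    ), 200
```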
- get all notifications by service
- template usage
- most recently used templates
Ensures that the dashboard shows no test key data. Supplements https://github.com/alphagov/notifications-api/pull/677, which excludes CSV data. This branches from that PR so is dependent on it.
We found that notifications stuck in the created or pending status were not being purged from the notifications table.
- New bulk update method (sketched below) to set all notifications to temporary-failure where:
  - the status is created, sending or pending
  - and the notification is older than now minus SENDING_NOTIFICATIONS_TIMEOUT_PERIOD (in seconds)
- the scheduled task that times out notifications uses the new bulk update query.
- the task will be more efficient
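A hedged sketch of the bulk update (the dao name is illustrative; the timeout period is passed in as seconds):

```python
from datetime import datetime, timedelta

from app import db
from app.models import Notification


def dao_timeout_notifications(timeout_period_in_seconds):
    cutoff = datetime.utcnow() - timedelta(seconds=timeout_period_in_seconds)
    # one UPDATE statement instead of loading and saving each notification
    updated_count = Notification.query.filter(
        Notification.status.in_(['created', 'sending', 'pending']),
        Notification.created_at < cutoff,
    ).update(
        {'status': 'temporary-failure', 'updated_at': datetime.utcnow()},
        synchronize_session=False,
    )
    db.session.commit()
    return updated_count
```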
> filter_by and filter are just aliases for each other so can be
> combined together - filter is probably the better one (and then use
> == instead of keyword args)
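For example, these two are equivalent, but the filter form with == is easier to combine with other expressions in a single call:

```python
from app.models import Notification

# keyword-argument style
Notification.query.filter_by(service_id=service_id, status='delivered')

# expression style - the same query, using == instead of keyword args
Notification.query.filter(
    Notification.service_id == service_id,
    Notification.status == 'delivered',
)
```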
If you schedule a job you might change your mind or circumstances might
change. So you need to be able to cancel it. This commit adds a `POST`
endpoint for individual jobs which sets their status to `cancelled`.
This also means adding a new status of `cancelled`, so there’s a
migration…
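A rough sketch of the endpoint, assuming jobs can only be cancelled while still scheduled (the route, lookup and response shape are illustrative, not the real code):

```python
from flask import Blueprint, jsonify

from app import db
from app.models import Job

job_blueprint = Blueprint('job', __name__)


@job_blueprint.route(
    '/service/<uuid:service_id>/job/<uuid:job_id>/cancel',
    methods=['POST'],
)
def cancel_job(service_id, job_id):
    job = Job.query.filter_by(service_id=service_id, id=job_id).first()
    if job is None:
        return jsonify(result='error', message='Job not found'), 404

    # only jobs that haven't started processing can be cancelled
    if job.job_status != 'scheduled':
        return jsonify(result='error', message='Job cannot be cancelled'), 400

    job.job_status = 'cancelled'
    db.session.add(job)
    db.session.commit()

    return jsonify(data={'id': str(job.id), 'job_status': job.job_status}), 200
```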
- As before this is now driven from the notifications history table
- Removed from updates and create
- Signature changes to remove unused params hit many files
- Also a potential issue around rate limiting - we used to get the number sent per day from the stats table, which was a single-row lookup; now we have to count this (sketched below). This applies to EVERY API CALL. Probably not a good thing and should be addressed urgently.
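For illustration, the per-day count now looks something like this (NotificationHistory as the history model; the helper name is made up), which is why doing it on every API call is a worry:

```python
from datetime import datetime

from app.models import NotificationHistory


def count_notifications_sent_today(service_id):
    midnight = datetime.utcnow().replace(hour=0, minute=0, second=0, microsecond=0)
    # a COUNT over the history table, instead of the old single-row lookup
    return NotificationHistory.query.filter(
        NotificationHistory.service_id == service_id,
        NotificationHistory.created_at >= midnight,
    ).count()
```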
- again these now come from the notifications history table
- We used to update this when sending a notification, so that update has been removed from the celery tasks
- tests removed also
- on creating a notification we updated the template stats to record the usage.
- this is now based on notification history
- this update and associated tests are now removed.