Added some tests to test_post_notifications.
Added an errorhandler for AuthErrors.
This endpoint is not being used anywhere yet; however, some of the common code is also used by the v1 post endpoint. The only thing that may be affected is the error response, which should hopefully be the same.
- It would be nice to refactor the send_sms and send_email tasks to use these common functions as well; that way I can get rid of the new Notifications.from_v2_api_request method.
- Still not happy with the format of the errors. Would like to find a happy place, where the message is descriptive enough that we do not need external documentation to explain the error. Perhaps we still only need documentation to explain the trial mode concept.
- Use these validation methods in post_sms_notification and in version 1 of post_notification.
- Create v2 error handlers.
- InvalidRequest has a to_dict method for the private and v1 error responses, and a to_dict_v2 method to create the v2 version of the error responses (see the sketch after this list).
- Each validation method has extensive unit tests, so the unit tests for the endpoint do not need to check every error case, only that the error handler formats the message correctly.
- The format of the error messages is still a work in progress.
- This version of the API could be deployed without causing problems for the application.
- The new endpoint is still a work in progress and is not being used yet.
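A rough illustration of the two serialisation methods mentioned above — the field names here are assumptions for the sketch, not the exact response schema:

```python
class InvalidRequest(Exception):
    """Raised by the validation methods; serialisable for both API versions."""

    def __init__(self, message, status_code=400):
        super().__init__(message)
        self.message = message
        self.status_code = status_code

    def to_dict(self):
        # private / v1 error response shape
        return {"result": "error", "message": self.message}

    def to_dict_v2(self):
        # v2 error response shape: status code plus a list of errors
        return {
            "status_code": self.status_code,
            "errors": [{"error": self.__class__.__name__, "message": self.message}],
        }
```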
Start building up the validators required for post notification.
app/v2/errors.py is a rough sketch: given an error code, it can look up the message and the documentation link for that error.
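The lookup could end up looking something like this — the codes, messages and links below are placeholders, not the real ones:

```python
# app/v2/errors.py (sketch): map an error code to a message and a doc link
ERRORS = {
    "trial_mode": {
        "message": "Can only send to members of your team while in trial mode",
        "link": "https://docs.example.gov.uk/errors#trial-mode",  # placeholder URL
    },
}


def error_details(code):
    # given an error code, return the message and documentation link for it
    return ERRORS.get(code, {"message": "Unknown error", "link": None})
```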
when given multiple parameters, the python logging functions assume the first
param is a format string and the rest are arguments to interpolate into it -
we were passing the exception object to `logger.exception`, however the whole
point of `.exception` is that it attaches the current exception itself, so we
didn't need to pass it
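For illustration, the difference between the two call patterns (`send_sms` here is just a stand-in so the example runs):

```python
import logging

logger = logging.getLogger(__name__)


def send_sms():
    # stand-in for the real task body, just so the example raises
    raise ValueError("provider rejected the message")


try:
    send_sms()
except Exception as e:
    # old pattern: logger.exception("SMS delivery failed", e)
    # "e" becomes a %-format argument that the message string doesn't expect,
    # and the traceback would have been attached anyway
    logger.exception("SMS delivery failed")  # new pattern: message only
```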
ensure that if unexpected Exceptions are thrown, we handle them correctly
(log and then return JSON)
also remove some branches that will never trip, and combine a couple of
identical functions
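A minimal sketch of the catch-all handler, assuming a plain Flask app (the real app registers its handlers via the app factory/blueprints):

```python
from flask import Flask, jsonify

app = Flask(__name__)


@app.errorhandler(Exception)
def handle_unexpected_error(error):
    # log the full traceback, then return JSON instead of Flask's HTML 500 page
    app.logger.exception("Unhandled exception")
    return jsonify(result="error", message="Internal server error"), 500
```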
we shouldn't try and use statsd to log an error if the statsd calls themselves fail, for example
[we also shouldn't retry sending the message but that's a problem for another time]
If you want to send a job on Monday morning, you should be able to
schedule it on Friday. You shouldn’t need to work on the weekend.
96 hours is a full 4 days, so you can schedule a job at any time on
Friday for any time on Monday.
We’ve checked with the information assurance people, and they’re OK with
us holding the data for this extra amount of time.
this is so that the filtering, which we do on the admin side, is applied
before pagination - so that the pages returned are all valid displayable
jobs. unfortunately this means that another config value has to be copied
to the server side but it's not the end of the world
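Roughly what that looks like, assuming Flask-SQLAlchemy pagination and a hypothetical `JOB_STATUSES_TO_DISPLAY` config value (the real config name and filter may differ):

```python
from flask import current_app


def dao_get_jobs_paginated(service_id, page=1, page_size=50):
    # apply the display filter first, so every page contains only jobs the
    # admin app would actually show
    return (
        Job.query
        .filter(
            Job.service_id == service_id,
            Job.job_status.in_(current_app.config['JOB_STATUSES_TO_DISPLAY']),
        )
        .order_by(Job.created_at.desc())
        .paginate(page=page, per_page=page_size)
    )
```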
Currently getting a single notification by ID is restricted to
notifications created with the same key type.
This makes things awkward for the functional tests now we’ve removed the
ability to create live keys in trial mode. So this commit removes the
restriction, so that any key can get any notification, no matter how it
was created.
And you’re never going to guess a UUID, so the chances of this giving
you privileged access to someone’s personal information are nil.
This does not change the get all notifications endpoint, which
absolutely should be restricted by key type.
Say you have a dashboard with some jobs you sent. Normally it looks like this:
job | sent
--- | ---
file.csv | **5pm**
file.csv | 3pm
file.csv | 1pm
file.csv | 11am
However if your 5pm job was scheduled at lunchtime, then it will look
like this:
job | sent
--- | ---
file.csv | 3pm
file.csv | 1pm
file.csv | **5pm**
file.csv | 11am
This is because the jobs are sorted by when they were created, not when
they were sent. It looks wrong.
**For jobs that have already been sent**
This commit changes the sort order to be based on `processed_at`
instead.
**For upcoming jobs**
If a job doesn’t have a `processed_at` time then it’s scheduled, but
hasn’t started yet. Only in this case should we still be sorting by
`created_at`.
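Roughly how the new ordering might be expressed in SQLAlchemy — the `Job` model is the app's existing model, and the DAO function name here is an assumption:

```python
from sqlalchemy import func


def dao_get_jobs_for_service(service_id):
    # coalesce: jobs that have started sending sort by processed_at, while
    # scheduled jobs (processed_at still NULL) fall back to created_at
    return (
        Job.query
        .filter_by(service_id=service_id)
        .order_by(func.coalesce(Job.processed_at, Job.created_at).desc())
        .all()
    )
```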
There is an overlap between team key/trial mode/whitelist. But it’s not
a complete overlap. So it’s hard to understand all the different
permutations of which key lets you send to which people when.
This commit tries to reduce the differences between these concepts. So,
for a user of the API:
**In trial mode**
- You can send to anyone in your team or whitelist, using the team key
- You can simulate sending to anyone, using the simulate key
**When you’re live**
- You can send to anyone in your team or whitelist, using the team key
- You can simulate sending to anyone, using the simulate key
- You can send to anyone with the live key
So doing a `git diff` on that list, the only difference between being in
trial mode and live mode is now:
`+` You can send to anyone with the live key
**(How trial mode used to work)**
- You can send to anyone in your team or whitelist, using the normal key
- You can simulate sending to anyone, using the simulate key
- You can send to _just_ people in your team using the team key
help prevent issues where scheduled jobs are processed twice. note this is NOT
a watertight solution - it holds no locks, and there is no guarantee that the
status won't have updated between asserting that its status is 'pending' and
updating it to be 'in progress'
this helps manage the transaction by keeping it inside one function in the dao,
so after the function completes you know that the transaction has been released
and concurrent processing can resume
we were running into issues where multiple beats queue up the
run_scheduled_jobs task at the same time, and concurrency issues with selecting
scheduled jobs cause both tasks to trigger processing of the same job.
Use with_for_update, which calls through to the postgres SELECT ... FOR UPDATE.
This locks out other SELECT ... FOR UPDATE queries (ie other threads running the
same code) until the rows are set to pending and the transaction completes - so
the second thread will not find any rows
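A sketch of the locking DAO function, assuming the usual Flask-SQLAlchemy `db` session and the existing `Job` model:

```python
from datetime import datetime


def dao_set_scheduled_jobs_to_pending():
    # SELECT ... FOR UPDATE: a second worker running this query at the same
    # time blocks here until this transaction commits, and then finds no rows
    # because they are no longer 'scheduled'
    jobs = (
        Job.query
        .filter(
            Job.job_status == 'scheduled',
            Job.scheduled_for < datetime.utcnow(),
        )
        .order_by(Job.scheduled_for)
        .with_for_update()
        .all()
    )

    for job in jobs:
        job.job_status = 'pending'

    # commit inside the DAO so the row locks are released before returning
    db.session.add_all(jobs)
    db.session.commit()

    return jobs
```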
- It seems that when we changed the name of the job.status column we didn't update the code to use job.job_status.
- Therefore none of the jobs since then have had the job status updated.
- Now that this is fixed we can show the job status when there is an error like "sending exceeds limits".
- This could happen if a job is scheduled to run at the top of the hour, so at the time of job creation the limit was not exceeded, but at the time of processing the job the limit is exceeded.
Notifications with a `billable_units` count of `0` won’t have any effect
on the result, but including them in the query will slow down the
grouping and summing of the results because it’ll have to loop over more
rows.
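In query terms this is just an extra filter, something like the following (the `Notification` model and column names follow the description above, but are assumptions):

```python
def billable_notifications_for_service(service_id):
    # exclude rows that cannot contribute to the totals before we group and sum
    return Notification.query.filter(
        Notification.service_id == service_id,
        Notification.billable_units != 0,
    )
```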
April 1st is in British summer time, ie 1hr ahead of UTC. The database
stores everything in UTC, so for accurate comparisons we need to make
sure that `get_financial_year()` returns a UTC, datetime-aware
timestamp that is 1hr ahead of midnight.
This also means that when we group notifications by month, the months
need to be in BST. So the line between one year and another is actually
01:00 on April 1st, _not_ 00:00 on April 1st.
There’s no way we’ve found to do this in SQLAlchemy or raw Postgres,
especially because we don’t store the timestamps with a timezone in the
database.
So the grouping and summing of the notifications has to be done in
Python.
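One way the year boundary might be computed, assuming pytz is available; `get_financial_year` is the name used above, while `get_april_fools` and the other details are assumptions:

```python
from datetime import datetime

import pytz

LONDON = pytz.timezone("Europe/London")


def get_financial_year(year):
    # start and end of the (government) financial year as UTC timestamps
    return get_april_fools(year), get_april_fools(year + 1)


def get_april_fools(year):
    # take midnight London time on 1 April, convert it to UTC, and drop the
    # tzinfo so it compares cleanly with the timezone-less timestamps stored
    # in the database
    return (
        LONDON.localize(datetime(year, 4, 1, 0, 0, 0))
        .astimezone(pytz.utc)
        .replace(tzinfo=None)
    )
```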
`/services/ef7a665d-11a4-425a-a180-a67ca00b69d7/billable-units?year=2016`
Pretty much just passes through to the DAO layer. Validates that year
is:
- present (there’s no need for unbounded queries on this endpoint)
- an integer
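A sketch of the route — the blueprint name, URL converter and DAO call are assumptions (the DAO side is sketched after the next commit note):

```python
from flask import Blueprint, jsonify, request

service_blueprint = Blueprint("service", __name__)  # hypothetical blueprint name


@service_blueprint.route("/<uuid:service_id>/billable-units", methods=["GET"])
def get_billable_units(service_id):
    if "year" not in request.args:
        return jsonify(result="error", message="No valid year provided"), 400
    try:
        year = int(request.args["year"])
    except ValueError:
        return jsonify(result="error", message="Year must be a number"), 400

    # get_monthly_billable_units is the (assumed) DAO function sketched below
    return jsonify(get_monthly_billable_units(service_id, year)), 200
```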
In order to invoice people we need to know how many text message
fragments they’ve sent per month.
This should be per (government) financial year, ie April 1st to April
1st because we’ll only ever show a page for one year (because the
250,000 allowance is topped up at the start of every financial year).
This commit only does the DAO bit, not the REST bit.
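A rough sketch of what that DAO function might look like, combining the pieces above (financial-year bounds, skipping zero `billable_units`, month grouping in London time). The `Notification` model and the function name are assumptions:

```python
from collections import defaultdict

import pytz

LONDON = pytz.timezone("Europe/London")


def get_monthly_billable_units(service_id, year):
    start, end = get_financial_year(year)  # sketched in the earlier commit note

    rows = Notification.query.filter(
        Notification.service_id == service_id,
        Notification.created_at >= start,
        Notification.created_at < end,
        Notification.billable_units != 0,  # zero rows add nothing to the sums
    ).all()

    # group and sum in Python, keyed by the BST/GMT month the notification was
    # created in - the naive UTC timestamps in the database can't be grouped
    # this way by the database alone
    totals = defaultdict(int)
    for row in rows:
        local_created_at = pytz.utc.localize(row.created_at).astimezone(LONDON)
        totals[local_created_at.strftime("%B")] += row.billable_units

    return dict(totals)
```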