- It would be nice to refactor the send_sms and send_email tasks to use these common functions as well; that way I can get rid of the new Notifications.from_v2_api_request method.
- Still not happy with the format of the errors. Would like to find a happy medium, where the message is descriptive enough that we do not need external documentation to explain the error. Perhaps we would then only need documentation to explain the trial mode concept.
- Use these validation methods in post_sms_notification and version 1 of post_notification.
- Create v2 error handlers.
- InvalidRequest has a to_dict method for private and v1 error responses and a to_dict_v2 method to create the v2 error responses.
- Each validation method has extensive unit tests, so the unit tests for the endpoint do not need to check every error case, just that the error handler formats the message correctly.
- The format of the error messages is still a work in progress.
- This version of the API could be deployed without causing problems for the application.
- The new endpoint is still a work in progress and is not being used yet.
Start building up the validators required for post notification.
app/v2/errors.py is a rough sketch: it will be passed an error code, and it can look up the message and documentation link for that error.
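As a rough illustration, the two serializers on InvalidRequest might look something like this (the field names are placeholders, not the final v2 error format):

```python
class InvalidRequest(Exception):
    def __init__(self, message, status_code):
        super().__init__(message)
        self.message = message
        self.status_code = status_code

    def to_dict(self):
        # private / v1 error response shape
        return {"result": "error", "message": self.message}

    def to_dict_v2(self):
        # v2 error response shape: status code plus a list of errors
        return {
            "status_code": self.status_code,
            "errors": [{"error": self.__class__.__name__, "message": self.message}],
        }
```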
when given multiple parameters, the python logging utils assume the first
param is a format string and the rest are arguments to interpolate into it.
we were passing the exception object to `logger.exception`, but the whole
purpose of .exception is to attach the current exception itself - so we
didn't need to pass it in at all
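for illustration, the before/after looks roughly like this (the message text is invented):

```python
import logging

logger = logging.getLogger(__name__)

try:
    1 / 0
except ZeroDivisionError as e:
    # before: `e` gets treated as a format-string argument; with no %s in
    # the message, logging prints formatting-error noise instead
    logger.exception("failed to process job", e)

    # after: .exception already records the active exception and traceback
    logger.exception("failed to process job")
```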
ensure that if unexpected Exceptions are thrown, we handle them correctly
(log and then return JSON)
also remove some branches that will never trip, and combine a couple of
identical functions
for example, if statsd calls fail we shouldn't then try to use statsd to
log that error
[we also shouldn't retry sending the message, but that's a problem for another time]
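a sketch of the kind of catch-all handler this describes (the handler shape is assumed, not copied from the codebase):

```python
from flask import Flask, current_app, jsonify

app = Flask(__name__)

@app.errorhandler(Exception)
def handle_unexpected_error(error):
    # log the full traceback, then return a stable JSON body to the client
    current_app.logger.exception("unhandled exception")
    return jsonify(result="error", message="Internal server error"), 500
```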
If you want to send a job on Monday morning, you should be able to
schedule it on Friday. You shouldn’t need to work on the weekend.
96 hours is a full 4 days, so you can schedule a job at any time on
Friday for any time on Monday.
We’ve checked with the information assurance people, and they’re OK with
us holding the data for this extra amount of time.
this is so that the filtering, which we do on the admin side, is applied
before pagination - so that the pages returned are all valid displayable
jobs. unfortunately this means that another config value has to be copied
to the server side but it's not the end of the world
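roughly, the query now does something like this (the model, filter, and config names here are illustrative, not the exact ones):

```python
from flask import current_app

def get_jobs_for_service(service_id, page=1, page_size=50):
    # apply the admin-side filter *before* paginating, so every page is
    # full of displayable jobs; assumes a `Job` model and a config value
    # mirrored from the admin app
    return (
        Job.query
        .filter(Job.service_id == service_id)
        .filter(Job.original_file_name != current_app.config["TEST_MESSAGE_FILENAME"])
        .order_by(Job.created_at.desc())
        .paginate(page=page, per_page=page_size)
    )
```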
Currently getting a single notification by ID is restricted to
notifications created with the same key type.
This makes things awkward for the functional tests now we’ve removed the
ability to create live keys in trial mode. So this commit removes the
restriction, so that any key can get any notification, no matter how it
was created.
And you’re never going to guess a UUID, so the chance of this giving
you privileged access to someone’s personal information is negligible.
This does not change the get all notifications endpoint, which
absolutely should be restricted by key type.
- uses 4 rather than 8 entries to test the sort (2 notifications × 2
columns on which we’re sorting)
- makes sure we test for when a scheduled job was created before a job
that’s been processed already
- removes any relative datetimes so the tests are independent of
database speed
Say you have a dashboard with some jobs you sent. Normally it looks like:
job | sent
--- | ---
file.csv | **5pm**
file.csv | 3pm
file.csv | 1pm
file.csv | 11am
However if your 5pm job was scheduled at lunchtime, then it will look
like this:
job | sent
--- | ---
file.csv | 3pm
file.csv | 1pm
file.csv | **5pm**
file.csv | 11am
This is because the jobs are sorted by when they were created, not when
they were sent. It looks wrong.
**For jobs that have already been sent**
This commit changes the sort order to be based on `processed_at`
instead.
**For upcoming jobs**
If a job doesn’t have a `processed_at` time then it’s scheduled, but
hasn’t started yet. Only in this case should we still be sorting by
`created_at`.
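concretely, the ordering might be expressed like this (a sketch, assuming a SQLAlchemy `Job` model with these column names):

```python
from sqlalchemy import func

# jobs that have run sort by processed_at; scheduled jobs have no
# processed_at yet, so they fall back to created_at
jobs = Job.query.order_by(
    func.coalesce(Job.processed_at, Job.created_at).desc()
)
```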
There is an overlap between team key/trial mode/whitelist. But it’s not
a complete overlap. So it’s hard to understand all the different
permutations of which key lets you send to which people when.
This commit tries to reduce the differences between these concepts. So
for a user of the API:
**In trial mode**
- You can send to anyone in your team or whitelist, using the team key
- You can simulate sending to anyone, using the simulate key
**When you’re live**
- You can send to anyone in your team or whitelist, using the team key
- You can simulate sending to anyone, using the simulate key
- You can send to anyone with the live key
So doing a `git diff` on that list, the only difference between being in
trial mode and live mode is now:
`+` You can send to anyone with the live key
**(How trial mode used to work)**
- You can send to anyone in your team or whitelist, using the normal key
- You can simulate sending to anyone, using the simulate key
- You can send to _just_ people in your team using the team key
help prevent issues where scheduled jobs are processed twice. note this is NOT
a watertight solution - it holds no locks, and there is no guarantee that the
status won't have been updated between checking that it is 'pending' and
updating it to 'in progress'
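the pattern in question looks roughly like this (function and status names are illustrative):

```python
def process_scheduled_job(job_id):
    job = Job.query.get(job_id)
    # race window: another worker can update the status between this
    # check and the commit below - hence "not watertight"
    if job.job_status != "pending":
        return
    job.job_status = "in progress"
    db.session.commit()
```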
mocks create any property you access, so calling functions on them is
inherently risky due to typos quietly doing nothing. instead assert
`.called is False`, which will fail noisily if you typo
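for example, with unittest.mock:

```python
from unittest import mock

notify_client = mock.Mock()

# risky: a mistyped attribute quietly becomes a new child mock rather
# than an error, so a typo'd assertion helper can silently pass

# fail-noisily alternative: even if you mistype `.called`, the typo'd
# attribute is a Mock, and `Mock() is False` is False, so this fails
assert notify_client.send_sms.called is False
```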
this helps manage the transaction by keeping it inside one function in the dao,
so after the function completes you know that the transaction has been released
and concurrent processing can resume
we were running into issues where multiple beats queued up the
run_scheduled_jobs task at the same time, and a race when selecting
scheduled jobs caused both tasks to trigger processing of the same job.
Use with_for_update, which calls through to the postgres SELECT ... FOR UPDATE.
This blocks other SELECT ... FOR UPDATE queries (ie other threads running the
same code) until the rows are set to pending and the transaction commits - so
the second thread will not find any rows
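a sketch of what the dao function might look like (names assumed from the description above; assumes a Flask-SQLAlchemy `db` session and a `Job` model):

```python
from datetime import datetime

def dao_set_scheduled_jobs_to_pending():
    jobs = (
        Job.query
        .filter(Job.job_status == "scheduled", Job.scheduled_for < datetime.utcnow())
        .order_by(Job.scheduled_for)
        .with_for_update()  # postgres SELECT ... FOR UPDATE row locks
        .all()
    )
    for job in jobs:
        job.job_status = "pending"
    db.session.commit()  # committing releases the locks; a second thread now finds no rows
    return jobs
```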