If you want to send a job on Monday morning, you should be able to
schedule it on Friday. You shouldn’t need to work on the weekend.
96 hours is a full 4 days, so you can schedule a job at any time on
Friday for any time on Monday.
We’ve checked with the information assurance people, and they’re OK with
us holding the data for this extra amount of time.
- uses 4 rather than 8 entries to test the sort (2 notifications × 2
columns on which we’re sorting)
- makes sure we test the case where a scheduled job was created before a job
that's already been processed
- removes any relative datetimes so the tests are independent of
database speed
Say you have a dashboard with some jobs you've sent. Normally it looks like this:
job | sent
--- | ---
file.csv | **5pm**
file.csv | 3pm
file.csv | 1pm
file.csv | 11am
However, if your 5pm job was scheduled at lunchtime, it will look
like this:
job | sent
--- | ---
file.csv | 3pm
file.csv | 1pm
file.csv | **5pm**
file.csv | 11am
This is because the jobs are sorted by when they were created, not when
they were sent. It looks wrong.
**For jobs that have already been sent**
This commit changes the sort order to be based on `processed_at`
instead.
**For upcoming jobs**
If a job doesn’t have a `processed_at` time then it’s scheduled, but
hasn’t started yet. Only in this case should we still be sorting by
`created_at`.
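
In SQLAlchemy terms, one way to express that fallback is with `coalesce` — this is a sketch assuming a `Job` model with `processed_at` and `created_at` columns, not the actual query:

```python
from sqlalchemy import desc, func

def get_jobs_in_dashboard_order(session):
    # Sent jobs sort by when they finished processing; scheduled jobs
    # (processed_at is NULL) fall back to when they were created.
    return (
        session.query(Job)
        .order_by(desc(func.coalesce(Job.processed_at, Job.created_at)))
        .all()
    )
```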
There is an overlap between team key/trial mode/whitelist. But it’s not
a complete overlap. So it’s hard to understand all the different
permutations of which key lets you send to which people when.
This commit tries to reduce the differences between these concepts. So,
for a user of the API:
**In trial mode**
- You can send to anyone in your team or whitelist, using the team key
- You can simulate sending to anyone, using the simulate key
**When you’re live**
- You can send to anyone in your team or whitelist, using the team key
- You can simulate sending to anyone, using the simulate key
- You can send to anyone with the live key
So doing a `git diff` on that list, the only difference between being in
trial mode and live mode is now:
`+` You can send to anyone with the live key
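
To make that symmetry concrete, here's an illustrative sketch of the recipient check it implies — the key-type names and service fields are assumptions, not the real model:

```python
def can_send_to(service, key_type, recipient):
    if key_type == "test":
        # the simulate key: anyone, in trial mode or live
        return True
    if key_type == "team":
        # the team key: your team or your whitelist, in trial mode or live
        return recipient in service.team_members or recipient in service.whitelist
    if key_type == "normal":
        # the live key: anyone, but only once you're live
        return not service.trial_mode
    return False
```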
**(How trial mode used to work)**
- You can send to anyone in your team or whitelist, using the normal key
- You can simulate sending to anyone, using the simulate key
- You can send to _just_ people in your team using the team key
help prevent issues where scheduled jobs are processed twice. note this is NOT
a watertight solution - it holds no locks, and there is no guarantee that the
status won't have changed between asserting that it is 'pending' and
updating it to 'in progress'
mocks create any property you access, so calling functions on them is
inherently risky due to typos quietly doing nothing. instead assert
`.called is False`, which will fail noisily if you typo
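
For example, with `unittest.mock` (older versions of `mock` let misspelled `assert_*` methods pass silently; newer ones catch those, but the `.called is False` pattern is robust either way):

```python
from unittest.mock import Mock

mock_task = Mock()

# Risky (on older mock versions): a typo'd method is just another
# auto-created child mock, so calling it does nothing and the test passes:
#
#     mock_task.delay.assert_not_calld()  # quietly useless
#
# Safer: a typo'd attribute is a Mock, and a Mock is never `False`, so this
# identity check fails noisily if you misspell `called`.
assert mock_task.delay.called is False
```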
this helps manage the transaction by keeping it inside one function in the dao,
so after the function completes you know that the transaction has been released
and concurrent processing can resume
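
As a sketch of that shape, assuming a Flask-SQLAlchemy `db` handle and a `Job` model (names are illustrative):

```python
from datetime import datetime

def dao_set_scheduled_jobs_to_pending():
    # the whole transaction lives inside this one function
    jobs = (
        db.session.query(Job)
        .filter(
            Job.job_status == "scheduled",
            Job.scheduled_for <= datetime.utcnow(),
        )
        .all()
    )
    for job in jobs:
        job.job_status = "pending"
    db.session.commit()  # transaction released before we return
    return jobs
```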
we were running into issues where multiple beats queue up the
run_scheduled_jobs task at the same time, and concurrency issues with selecting
scheduled jobs cause both tasks to trigger processing of the same job.
Use with_for_update, which calls through to the Postgres SELECT ... FOR UPDATE.
This locks out other SELECT ... FOR UPDATE queries (ie other threads running the
same code) until the rows are set to pending and the transaction completes - so
the second thread will not find any rows
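
Building on the sketch above, the fix is one extra call on the query:

```python
jobs = (
    db.session.query(Job)
    .filter(
        Job.job_status == "scheduled",
        Job.scheduled_for <= datetime.utcnow(),
    )
    .with_for_update()  # Postgres SELECT ... FOR UPDATE: a second worker
    .all()              # blocks here, then finds no 'scheduled' rows
)
```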
- It seems that when we changed the name of the job.status column we didn't update the code to use job.job_status.
- Therefore none of the jobs since then have had their job status updated.
- Now that this is fixed we can show the job status when there is an error like "sending exceeds limits".
- This could happen if a job is scheduled to run at the top of the hour: at the time of the job's creation the limit was not exceeded, but by the time the job is processed it is.
Notifications with a `billable_units` count of `0` won't have any effect
on the result, but including them in the query will slow down the
grouping and summing of the results because it’ll have to loop over more
rows.
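
So the fetch excludes them up front — a sketch, with assumed model and column names:

```python
rows = (
    db.session.query(Notification.created_at, Notification.billable_units)
    .filter(
        Notification.service_id == service_id,
        Notification.billable_units != 0,  # zero-unit rows can't change the sum
    )
    .all()
)
```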
April 1st is in British summer time, ie 1hr ahead of UTC. The database
stores everything in UTC, so for accurate comparisons we need to make
sure that `get_financial_year()` returns a UTC, datetime-aware
timestamp that is 1hr ahead of midnight.
This also means that when we group notifications by month, the months
need to be in BST. So the line between one year and another is actually
01:00 on April 1st, _not_ 00:00 on April 1st.
We haven't found a way to do this in SQLAlchemy or raw Postgres,
especially because we don't store the timestamps with a timezone in the
database.
So the grouping and summing of the notifications has to be done in
Python.
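
A minimal sketch of that Python-side grouping, assuming naive-UTC `created_at` values like the rows fetched above:

```python
from collections import defaultdict

import pytz

LONDON = pytz.timezone("Europe/London")

def billable_units_by_month(rows):
    totals = defaultdict(int)
    for created_at, billable_units in rows:
        # localise the naive UTC timestamp, then convert to London time
        # before deciding which month (and which financial year) it's in
        local = pytz.utc.localize(created_at).astimezone(LONDON)
        totals[(local.year, local.month)] += billable_units
    return totals
```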
`/services/ef7a665d-11a4-425a-a180-a67ca00b69d7/billable-units?year=2016`
Pretty much just passes through to the DAO layer. Validates that year
is:
- present (there’s no need for unbounded queries on this endpoint)
- an integer
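
A hedged sketch of the endpoint shape — the blueprint and DAO names are assumptions:

```python
from flask import jsonify, request

@service_blueprint.route("/<uuid:service_id>/billable-units", methods=["GET"])
def get_billable_units(service_id):
    # reject missing or non-integer years before hitting the DAO
    if "year" not in request.args:
        return jsonify(message="year is required"), 400
    try:
        year = int(request.args["year"])
    except ValueError:
        return jsonify(message="year must be an integer"), 400
    return jsonify(data=dao_get_billable_units(service_id, year)), 200
```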
In order to invoice people we need to know how many text message
fragments they’ve sent per month.
This should be per (government) financial year, ie April 1st to April
1st. We'll only ever show a page for one year at a time, because the
250,000 allowance is topped up at the start of every financial year.
This commit only does the DAO bit, not the REST bit.
Refactored the send_notifications method so that it is more readable.
Refactored test_send_notifications so that it uses a parametrised test to avoid duplication.
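
The parametrised shape looks something like this — the test data and helper names are illustrative, not the real fixtures:

```python
import pytest

@pytest.mark.parametrize("notification_type, expected_status", [
    ("sms", "created"),
    ("email", "created"),
])
def test_send_notifications(notification_type, expected_status):
    # one test body covers every (type, status) pair in the list above
    notification = send_notification(notification_type)  # hypothetical helper
    assert notification.status == expected_status
```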