The API clients should just deal with calling the API and returning the
data from it.
Inferring things from the data is more logically done at the model
layer. But we couldn’t do that before, because we didn’t have a model
layer for jobs.
This follows the pattern of what we’ve done with services, users and
events.
It gives us a way of neatly instantiating a model for each item in the
list we get back from the API and reduces the complexity of the view
layer code.
Now is a good time to do this because we’re going to be making a bunch
of changes to the jobs pages, and those changes will be easier to code
and understand with a sensible model behind them.
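For illustration, the kind of model this introduces might look something like the sketch below (the class and field names are illustrative, not the actual implementation):
```
# Illustrative sketch only – real class/field names in the codebase may differ.


class Job:
    """Wraps a single job dict returned by the API client."""

    def __init__(self, job_dict):
        self._dict = job_dict

    @property
    def id(self):
        return self._dict['id']

    @property
    def notification_count(self):
        return self._dict.get('notification_count', 0)


class Jobs:
    """Wraps the list of job dicts so views can iterate over Job instances."""

    def __init__(self, job_dicts):
        self.items = [Job(job) for job in job_dicts]

    def __iter__(self):
        return iter(self.items)
```
The view code then deals with `Job` objects rather than raw dicts, which is where the reduction in complexity comes from.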
Done using isort[1], with the following command:
```
isort -rc ./app ./tests
```
Adds linting to the `run_tests.sh` script to stop badly-sorted imports
getting re-introduced.
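For example, the check added to `run_tests.sh` could be something like this (the exact invocation in the script is an assumption; `--check-only` makes isort fail instead of rewriting files):
```
isort --check-only -rc ./app ./tests
```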
Chosen style is ‘Vertical Hanging Indent’ with trailing commas, because
I think it gives the cleanest diffs, e.g.:
```
from third_party import (
    lib1,
    lib2,
    lib3,
    lib4,
)
```
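This style can also be pinned in isort's config so the command line and the lint check stay consistent; `multi_line_output = 3` is isort's setting for Vertical Hanging Indent. Whether this project keeps its settings in a `.isort.cfg` (rather than, say, `setup.cfg`) is an assumption:
```
[settings]
multi_line_output = 3
include_trailing_comma = True
```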
1. https://pypi.python.org/pypi/isort
> When we have jobs that have over 3% failure rates we should highlight
> those so that people's attention is drawn to deal with the failure.
>
> They would then go to the job view to see what the details are where
> they could filter by failure, but that's a different story...
>
> This is just about calculating and highlighting those that need their
> attention.
— https://www.pivotaltracker.com/story/show/121206123
This commit:
- calculates the failure rate for each job
- makes jobs with a failure rate of more than 3% go red on the dashboard (sketched below)
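A minimal sketch of the calculation, assuming the API response for each job includes counts of total and failed notifications (the property and field names are illustrative):
```
# Illustrative sketch – field names and threshold handling are assumptions.


class Job:

    FAILURE_THRESHOLD_PERCENT = 3

    def __init__(self, job_dict):
        self._dict = job_dict

    @property
    def failure_rate(self):
        total = self._dict.get('notification_count', 0)
        failed = self._dict.get('notifications_failed', 0)
        if not total:
            return 0
        return failed / total * 100

    @property
    def high_failure_rate(self):
        return self.failure_rate > self.FAILURE_THRESHOLD_PERCENT
```
The dashboard template can then check something like `job.high_failure_rate` to decide whether to apply the red styling.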
We can filter notifications on the activity page by state.
This commit adds counts to those filters.
This is mainly so that we can consistently do the same thing on the job
page later on.
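One way of producing those counts, assuming each notification dict has a `status` field and each filter state maps to a set of statuses (names are illustrative):
```
from collections import Counter


def counts_by_state(notifications, states):
    """Map each filter state to the number of notifications whose status falls under it.

    `states` is a dict like {'sending': {'created', 'sending'}, 'failed': {...}}.
    """
    status_counts = Counter(n['status'] for n in notifications)
    return {
        state: sum(status_counts[status] for status in statuses)
        for state, statuses in states.items()
    }
```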