This updates the `get_notifications_for_service` method of the
`NotificationApiClient` to make a POST request to the API when the `to`
field is provided. This is done so that we avoid logging personal
details such as email addresses.
If the `to` field is not present, the method will make a GET request and
there will be no change to how it works.
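A minimal sketch of the shape of the change, assuming the admin API
client exposes `get`/`post` helpers (the method name is real, but the
endpoint, parameters and helper names here are illustrative):
```
def get_notifications_for_service(self, service_id, to=None, **params):
    url = '/service/{}/notifications'.format(service_id)
    if to is not None:
        # Send the search term in the request body so email addresses and
        # phone numbers never end up in query-string or access logs.
        return self.post(url, data=dict(params, to=to))
    # No personal data involved, so the existing GET behaviour is unchanged.
    return self.get(url, params=params)
```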
Now persisting the address to the "to" field of the Notification, after the notification has been validated.
If the letter is pending validation, then "Checking..." will appear as the identifier for the letter.
If the letter has passed validation, then the first line of the address (now persisted in the "to" field) will be displayed, with the client reference underneath.
If the letter has failed validation, then "Provided as PDF" will be displayed, which is now the initial value of the "to" field.
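Roughly, the identifier rule works as sketched below (the status value
is an assumption, not the real constant):
```
def letter_identifier(notification):
    if notification.status == 'pending-virus-check':  # assumed status name
        return 'Checking...'
    # Otherwise the "to" field already holds the right thing: the first
    # line of the address if validation passed, or 'Provided as PDF'
    # (its initial value) if validation failed.
    return notification.to
```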
Show validation failed messages on letter notifications in red text,
not in the banner like we do on the Uploads and Validation checker pages.
This is because it is a different step in the journey: the user
has already sent the notification, and the styling needs to be in line
with other places where the user is checking a notification they have
already sent.
We hardcode this as second class for the moment but eventually
will let the user pick.
Currently the API does not appear to do any validation (e.g. a JSON
schema) that would reject API calls with the extra key for postage.
Next steps will be to put a PR into the API that will expect a
postage value in the request and save it with the rest of the
notification. Then when that is done we can add the user interface
to the admin app to let the user pick the postage.
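For now the hard-coding is just an extra key sent alongside the rest of
the letter data; a rough sketch, where all the names are assumptions
about the eventual API contract:
```
letter_data = {
    'template_id': template_id,
    'reference': reference,
    'postage': 'second',  # hard-coded for now; user choice comes later
}
```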
Added a send button which only appears on the page if the query string
indicates that the PDF is valid. Before actually sending, we check that
the service has the right permissions and that the metadata for the
letter confirms the letter is valid (because the query string can be
changed).
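A minimal sketch of the server-side check, assuming a hypothetical
permission name and metadata shape:
```
from flask import abort


def check_letter_can_be_sent(service, letter_metadata):
    # Re-check on the server, because the query string that makes the send
    # button appear can be tampered with by the user.
    if not service.has_permission('letters_as_pdf'):  # hypothetical permission
        abort(403)
    if letter_metadata.get('status') != 'valid':  # hypothetical metadata shape
        abort(403)
```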
This report will be used by the engagement team. There is a form to give
a start and end date for the report, and the report is then downloaded
as a CSV file when the form is submitted.
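A sketch of how the CSV download might be returned from the view (the
function and filename format are illustrative, not the real code):
```
import csv
from io import StringIO

from flask import Response


def csv_report_response(rows, start_date, end_date):
    # Build the CSV in memory and return it as a file download.
    buffer = StringIO()
    csv.writer(buffer).writerows(rows)
    return Response(
        buffer.getvalue(),
        mimetype='text/csv',
        headers={
            'Content-Disposition': 'attachment; filename="report-{}-to-{}.csv"'.format(
                start_date, end_date
            ),
        },
    )
```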
This removes some code which is duplicative and obscure (i.e. it’s not
very clear why we do `"a" * 73` even though there is a Very Good Reason
for doing so).
Counting pages for API notifications takes a long time for services
with a lot of sent messages (since it issues a `count(*)` query for
the given filter). Since the API message log doesn't have a "Next page"
link we can skip the count by setting a flag on the API request.
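Something along these lines, where the flag name is an assumption and
`service_id` comes from the view in practice:
```
from app import notification_api_client  # assumed import path

notifications = notification_api_client.get_notifications_for_service(
    service_id,
    count_pages=False,  # skip the expensive count(*) query on the API side
)
```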
Upcoming changes to the API will mean that by default its
`get_notifications_for_service` DAO function will return one-off
notifications. In most cases this is what we want, but the message log
page should not show one-off notifications. By passing the
`include_one_off=False` option to the API we can ensure that this page
will stay the same when the API changes.
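In other words, the message log view opts out explicitly; the call below
is illustrative, with `service_id` supplied by the view:
```
notifications = notification_api_client.get_notifications_for_service(
    service_id,
    include_one_off=False,  # keep one-off notifications out of the message log
)
```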
* Moved the notifications code to get the template preview document
from the API rather than going to template preview directly.
This will remove the logic from admin and place it in the API so it is
easier to expand on later when there are precompiled PDFs.
* Added some error handling if the API returns an API error.
Caught the error and displayed an error PNG so it is obvious something
failed. Previously it displayed a thumbnail of a PNG over the top of the
loading page, and therefore the state wasn't obvious.
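A rough sketch of the error handling, where the preview helper, image
path and exception type are assumptions (only the idea of falling back
to an error PNG is from this change):
```
from io import BytesIO

from flask import send_file
from notifications_python_client.errors import APIError


def view_letter_preview_image(service_id, notification_id):
    try:
        png_bytes = get_preview_png_from_api(service_id, notification_id)  # hypothetical helper
    except APIError:
        # Serve a clearly-labelled error image so the failure is obvious,
        # instead of leaving the loading placeholder on screen.
        return send_file('static/images/letter-preview-error.png', mimetype='image/png')
    return send_file(BytesIO(png_bytes), mimetype='image/png')
```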
Currently requests to the API made from the admin app are going from
PaaS admin app to the nginx router ELB, which then routes them back
to the api app on PaaS.
This makes sense for external requests, but for requests made from
the admin app we could skip nginx and go directly to the api PaaS
host, which should reduce load on the nginx instances and
potentially reduce latency of the api requests.
API apps on PaaS are checking the X-Custom-Forwarder header (which
is set by nginx on proxy_pass requests) to only allow requests going
through the proxy.
This adds the custom header to the API client requests, so that they
can pass that header check without going through nginx.
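A minimal example of the outgoing request, with the header value being
whatever secret the API apps are configured to expect (placeholder here):
```
import requests


def call_api_directly(url, bearer_token, forwarder_secret):
    # Add the header nginx would normally set on proxy_pass, so the API's
    # proxy check passes even though we have skipped nginx.
    return requests.get(
        url,
        headers={
            'Authorization': 'Bearer {}'.format(bearer_token),
            'X-Custom-Forwarder': forwarder_secret,
        },
    )
```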
Done using isort[1], with the following command:
```
isort -rc ./app ./tests
```
Adds linting to the `run_tests.sh` script to stop badly-sorted imports
getting re-introduced.
Chosen style is ‘Vertical Hanging Indent’ with trailing commas, because
I think it gives the cleanest diffs, e.g.:
```
from third_party import (
    lib1,
    lib2,
    lib3,
    lib4,
)
```
1. https://pypi.python.org/pypi/isort
Because we’re setting the API key and service ID after calling the
`__init__` method of the client, it wasn't doing the thing where it
splits the combined key into the two individual UUIDs. So we still need
to set them directly and individually on the client.
The Notify API client changed in version 4 to take two arguments, not
three (service ID was removed in favour of the combined API key).
This gets a bit gnarly because the API key has to be at least a certain
length so it can be substringed internally.
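A sketch of what “setting them directly” might look like in `init_app`
(the config keys are placeholders):
```
def init_app(self, app):
    # Set the two values the client would normally derive from the combined
    # key, because init_app runs after __init__ and the splitting has
    # already happened (on a fake key).
    self.service_id = app.config['ADMIN_CLIENT_USER_NAME']  # placeholder config names
    self.api_key = app.config['ADMIN_CLIENT_SECRET']
```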
completely mimics the job status page, and as such, all the code and
templates have been taken from the job page. This page performs
exactly the same as the job page for now:
* total, sending, delivered, failed blue boxes (though they'll just
read 0/1 for now)
* download report button (same as with job download, except without job
or row number in file)
* removed references to scheduled
* kept references to help (aka tour/tutorial) as that'll eventually
change over from a job to a one-off too
> Once an inbound message has been received, there should be a way to
> see the other messages in the system from the same service to the same
> number. Both in and outbound. Nice inbox/whatsapp stylee view or some
> such. This way the context of the reply is understood.
>
> Initially will only see the outbound template, not the actual message,
> but we’re going to change this for the rest (soon), so that you can
> always see the full message for all outbound.
> Service teams that use the admin interface often need to know the
> outcome of a message... at the moment they have to page through all
> the results in the activity stream. They should be able to find
> notifications by email address or phone number.
– https://www.pivotaltracker.com/n/projects/1443052
This commit adds an additional query string parameter (`to`) to the URL,
which users can use to filter down the list of notifications.
It:
- takes the status into account
- doesn’t update the counts based on the search term (in reality each
service will only send a handful of notifications to one person in any
7 day period)
In other words the funnel that filters down the notifications looks
like:
> all notifications for service → only failed → only to this phone
> number
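A sketch of the view-side plumbing (helper and argument names are
illustrative):
```
from flask import request


def get_filtered_notifications(client, service_id, statuses):
    # The search term narrows whatever the status filter already selected;
    # the counts shown on the page are not recalculated from it.
    search_term = request.args.get('to', '')
    return client.get_notifications_for_service(
        service_id,
        status=statuses,          # e.g. only failed
        to=search_term or None,   # then only to this phone number / email address
    )
```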
We want to show a log of notifications that have been sent from the API.
The admin app uses its own private `/service/…/notifications` endpoint
for listing activity. This commit allows us to pass through two
optional, additional parameters to tell the API to:
- include or exclude notifications created from a job
- include or exclude notifications created with a test API key
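For the API message log the call might look something like the
following, where the parameter names and values are illustrative rather
than the endpoint's actual contract:
```
notifications = notification_api_client.get_notifications_for_service(
    service_id,
    include_jobs=False,           # hide notifications created from a job
    include_from_test_key=True,   # but do show ones sent with a test API key
)
```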
Starting arguments on their own line and putting the closing parenthesis
on its own line because any subsequent changes to the arguments diff
cleanly (i.e. without touching any other lines).
The clients never get passed useful values to their `__init__` methods.
Rather the real values are passed through later using the `init_app`
method.
So it should be an error if the client is relying on the values that
get passed to its `__init__` method. The easiest way to ensure this is
by making the `__init__` method not expect any arguments and passing
fake values to the `super()` call.
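As a rough sketch, assuming a parent class that takes an API key and a
base URL (the class names, fake values and config keys are placeholders):
```
class ExampleAPIClient(BaseAPIClient):

    def __init__(self):
        # Deliberately accept no arguments: anything passed in here would be
        # a bug, because the real values only arrive later via init_app.
        super().__init__(
            'a' * 73,                   # long enough to survive the key splitting
            'https://example.invalid',  # obviously-fake base URL
        )

    def init_app(self, app):
        self.base_url = app.config['API_HOST_NAME']
        self.api_key = app.config['ADMIN_CLIENT_SECRET']
```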