These fixtures were both calling other fixtures as functions and being
called as functions in the tests. Rewriting the tests to make them
Pytest 4 compatible means we are no longer using
`sample_template_without_letter_permission`, so this has been deleted.
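As an illustration (with made-up fixture names, not the real ones), the rewrite looks roughly like this under Pytest 4, which forbids calling fixture functions directly:

```python
import pytest

@pytest.fixture
def sample_service():
    # In the real suite this would create a database row; a plain dict
    # stands in here so the sketch runs on its own.
    return {"name": "sample service"}

# Before (breaks under Pytest 4): the test body called the fixture
# directly, e.g. `service = sample_service()`.

# After: request the fixture by name and let pytest inject it.
def test_service_has_a_name(sample_service):
    assert sample_service["name"] == "sample service"
```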
Admin, API and utils were all defining a value for SMS_CHAR_COUNT_LIMIT.
This value has been updated in notifications-utils to allow text
messages to be 4 fragments long, and notifications-api now gets the value
of SMS_CHAR_COUNT_LIMIT from notifications-utils instead of defining it
in config.
Also updated some tests to check for the higher limit.
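For context, each fragment of a concatenated GSM message carries 153 characters, so the 4-fragment limit works out like this (the import in the comment shows the shape of the change, not the literal diff):

```python
# Four concatenated GSM fragments carry 153 characters each.
SMS_CHAR_COUNT_LIMIT = 4 * 153  # 612

# notifications-api's config then imports the shared value rather than
# redefining it, roughly:
#     from notifications_utils import SMS_CHAR_COUNT_LIMIT
```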
when in research mode or using a test key, don't send letters via the API - instead,
just put them straight into the success state
when using a team key, reject the request outright (403)
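An illustrative sketch of that branching; the names and the exception are stand-ins, not the actual code:

```python
KEY_TYPE_TEST, KEY_TYPE_TEAM = "test", "team"

class TeamApiKeyError(Exception):
    status_code = 403

def handle_letter(research_mode, key_type, letter):
    if key_type == KEY_TYPE_TEAM:
        # team keys are rejected outright
        raise TeamApiKeyError("Cannot send letters with a team api key")
    if research_mode or key_type == KEY_TYPE_TEST:
        # skip the provider entirely and mark the letter as delivered
        letter["status"] = "success"
        return letter
    letter["status"] = "created"  # the real path would enqueue the send here
    return letter
```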
* Alter config so an error will be raised if you forget to mock out a
celery call in one of your tests
* Remove an unneeded exception type that was masking errors
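One common way to get this behaviour, and an assumption about the mechanism used here: point the test broker at an address that can never connect, so any task call that slips past a mock raises instead of silently queueing:

```python
class Config:
    BROKER_URL = "amqp://localhost"

class Test(Config):
    # Any unmocked .apply_async() now tries to reach a broker that can
    # never exist, so the test errors out instead of silently queueing
    # work.
    BROKER_URL = "you-forgot-to-mock-celery"
```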
- uses new utils methods to validate phone numbers
- defaults to international=True on validation. This ensures the validator works on all numbers
- then checks whether the user can send this message to the number internationally, if needed (see the sketch after this list)
- applies to both the V1 and V2 APIs
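A self-contained sketch of the two-step check; the helper name, the length check, and the UK-prefix test are illustrative stand-ins for the real utils methods:

```python
def validate_recipient(number: str, service_allows_international: bool) -> str:
    normalised = number.replace(" ", "").lstrip("+")
    # step 1: validate with the equivalent of international=True, so any
    # well-formed number passes regardless of country
    if not normalised.isdigit() or not 8 <= len(normalised) <= 15:
        raise ValueError("Not a valid phone number")
    # step 2: only then apply the service's international permission
    if not normalised.startswith("44") and not service_allows_international:
        raise ValueError("Cannot send to international mobile numbers")
    return normalised
```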
- Rate limiting wrapped into a new method, check_rate_limiting (sketched below)
- delegates to the previous daily limit and the new throughput limit
- Rate limiting done on key type. Each key has its own limit (number of requests) and interval (time period of requests)
- Configured in the config. Not done on a per-env basis, though it could be in the future.
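A minimal in-memory sketch of how check_rate_limiting could delegate; the limits, helper names, and the in-memory store (standing in for redis) are all illustrative assumptions:

```python
import time
from collections import defaultdict

# per key type: (number of requests, interval in seconds) - illustrative values
RATE_LIMITS = {"live": (3000, 60), "team": (3000, 60), "test": (3000, 60)}
_requests = defaultdict(list)  # in-memory stand-in for redis

def check_rate_limiting(service_id, key_type):
    check_daily_limit(service_id)                 # the previous daily limit
    check_throughput_limit(service_id, key_type)  # the new throughput limit

def check_throughput_limit(service_id, key_type):
    limit, interval = RATE_LIMITS[key_type]
    now = time.monotonic()
    window = [t for t in _requests[(service_id, key_type)] if now - t < interval]
    if len(window) >= limit:
        raise RuntimeError("429: rate limit exceeded")
    window.append(now)
    _requests[(service_id, key_type)] = window

def check_daily_limit(service_id):
    pass  # pre-existing check, unchanged and elided here
```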
Brings in:
- [ ] https://github.com/alphagov/notifications-utils/pull/135
Also adds extra tests for:
- the exact issue that we saw in production when #867 was deployed
- what happens when `None` is passed as a placeholder value, because
this should never get as far as the relevant bit of utils
This is being done for the PaaS migration to allow us to keep traffic coming in whilst we migrate the database.
uses the same tasks as the CSV-uploaded notifications, with simple changes to not persist the notification and to call into a different task.
> If a user makes an API request with additional personalisation fields,
> we should simply discard any fields that the template doesn't have.
>
> This gives a couple of related advantages:
>
> - modifying template parameters no longer requires downtime for
> clients - as they can pass in extra new parameters before a template
> change, or continue passing in old unused parameters after removing
> them from a template
>
> - services can pass in large user objects, for example, and then play
> around with templates adding and removing fields at will
>
> we should make sure we still return an error if a user doesn't pass in
> a required parameter.
– https://www.pivotaltracker.com/story/show/140774195
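A small sketch of that behaviour, assuming a hypothetical helper that receives the template's placeholder names:

```python
def clean_personalisation(personalisation: dict, placeholders: set) -> dict:
    missing = placeholders - personalisation.keys()
    if missing:
        # still an error if a required parameter is absent
        raise ValueError("Missing personalisation: " + ", ".join(sorted(missing)))
    # silently discard any fields the template doesn't have
    return {k: v for k, v in personalisation.items() if k in placeholders}

# a template with placeholders {"name"} accepts a large user object:
clean_personalisation({"name": "Ada", "age": 36}, {"name"})  # -> {"name": "Ada"}
```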
The cache expires every 10 minutes, but it will still help with the query that runs every 2 seconds, especially when a job is running.
There is still some clean-up and QA to do for this.
We are using the notify queue in this iteration because that queue is a low-volume queue with its own dedicated workers. This saves us from building a new queue at this point, and a new queue may not be necessary.
- Added the `simulate` notification logic to version 2. We have 3 email addresses and phone numbers that are used
to simulate a successful post to /notifications. This was missed out of the version 2 endpoint.
- Added a test to template_dao to check for the default value of `normal` for new templates
- in v2 get_notifications, cast the path param to a UUID, aborting with a 404 if it isn't a valid UUID (see the sketch below)
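A sketch of the UUID cast in the v2 endpoint; the function name is illustrative, though `abort` is the real Flask helper:

```python
import uuid
from flask import abort

def get_notification_by_id(notification_id):
    try:
        notification_id = uuid.UUID(notification_id)
    except ValueError:
        # not a well-formed UUID, so it can't match any notification
        abort(404)
    # ... look the notification up as before ...
```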
note that all of these tests have to be checked to ensure that they
still call through to notify_db_session (notify_db is not required) to
tear down the database after the test runs, since it's no longer
necessary to pass it in to the function just to invoke the sample_user
fixture
brings in a fix to InvalidEmail/Phone/AddressExceptions not being
instantiated correctly. `exception.message` is not standard Python (the
attribute was removed in Python 3), so we shouldn't be relying on it to
transmit exception reasons; rather, we should be using `str(exception)`
instead. This involved a handful of small changes to the schema validation
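A tiny sketch of why the change matters: `.message` was a Python 2 convention and raises AttributeError on Python 3 exceptions, while `str(exception)` works everywhere:

```python
class InvalidPhoneError(Exception):
    pass

try:
    raise InvalidPhoneError("Not enough digits")
except InvalidPhoneError as e:
    reason = str(e)        # "Not enough digits" - works on Python 2 and 3
    # reason = e.message   # AttributeError on Python 3
```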
1) It's incr, not inc, on the redis client, so renamed the calls everywhere
2) Redis returns bytes/a string rather than an int even when the stored value is an int, so cast the result to an int before use (sketched below). Note: you can set up the GET to do this cast transparently, but I've not done that, as we *may* use GETs for non-int values, and the callback sets up the cast for the connection, not the individual call.
This means that the cached count is of Notifications in the database, NOT notifications sent to providers. If the provider fails to accept the notification, it still counts.
I think this is correct, as they have done the work to send it so we should count it, though there is an argument that we should count them on sending instead.
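A sketch of both fixes using redis-py's real incr/get calls (the key name is illustrative):

```python
import redis

r = redis.Redis()
key = "service-123-notification-count"  # illustrative key name

r.incr(key)        # fix 1: the method is incr, not inc
raw = r.get(key)   # returns bytes (e.g. b"42"), not an int
count = int(raw)   # fix 2: cast explicitly before use
```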
- Uses the Redis cache to check the current count (sketched below)
- If not present, sets the value based on the database state
- Any Redis errors are swallowed. Cache failures should NOT fail the request.
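A minimal sketch of the read-through cache, assuming illustrative names and a 10-minute TTL to match the expiry mentioned above:

```python
import redis

def get_todays_notification_count(service_id, count_from_db, redis_client):
    key = "{}-count".format(service_id)
    try:
        cached = redis_client.get(key)
        if cached is not None:
            return int(cached)
        count = count_from_db(service_id)
        redis_client.set(key, count, ex=600)  # 10 minute expiry
        return count
    except redis.RedisError:
        # swallow cache failures - they should never fail the request
        return count_from_db(service_id)
```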