Added a validation method that always fails for scheduled notifications.
Commented out the config for the scheduled task.
Scheduled notifications will be turned on once we can invite services to use them.
We are waiting on the service permission story, but need to merge this now to keep the branch from going stale.
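A minimal sketch of the always-fail check, assuming the POST payload carries an optional `scheduled_for` field; the function name and error message are illustrative:

```python
from flask import abort, jsonify, make_response


def check_service_can_schedule_notification(post_data):
    # Always reject scheduled sends until the service permission work lands;
    # "scheduled_for" is the assumed name of the payload field.
    if post_data.get("scheduled_for"):
        abort(make_response(
            jsonify(result="error", message="Scheduled notifications are not available yet"),
            400,
        ))
```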
This means that on non-prod environments, the default reflects that environment.
It needs to be a lambda because the column object is created at import
time, when current_app.config won't have been loaded yet. This means that
when you create a Service object, the lambda executes and grabs the
correct default value.
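A rough sketch of the pattern, with a made-up config key and a plain SQLAlchemy model standing in for the real Service definition:

```python
from flask import current_app
from sqlalchemy import Column, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Service(Base):
    __tablename__ = "services"

    id = Column(String, primary_key=True)
    # The default must be a callable: at import time current_app.config has
    # not been loaded, so a plain config lookup here would fail. The lambda
    # defers the lookup until a Service row is created, inside an app
    # context where the correct per-environment value is available.
    # "DEFAULT_BRANDING" is an illustrative config key.
    branding = Column(
        String,
        nullable=False,
        default=lambda: current_app.config["DEFAULT_BRANDING"],
    )
```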
jsonschema states:
> A format attribute can generally only validate a given set of
> instance types. If the type of the instance to validate is not in
> this set, validation for this format attribute and instance SHOULD
> succeed.
We were not checking the type of the input, and our validators were
behaving unexpectedly (throwing TypeErrors and the like). Despite
declaring that the phone_number field is of type `str`, we still need
the format validator to pass gracefully for non-str values, so that the
inbuilt type check is the bit that catches them. We've seen this with
people passing in integers instead of strings for phone numbers. They
will now receive a helpful 400 error
(e.g. "phone_number 12345 is not of type string") rather than a
500 internal server error.
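A sketch of what that looks like with jsonschema's FormatChecker; the phone number regex is only a stand-in for the real validation:

```python
import re

from jsonschema import Draft4Validator, FormatChecker

format_checker = FormatChecker()


@format_checker.checks("phone_number", raises=ValueError)
def check_phone_number_format(instance):
    # Per the jsonschema spec, a format check SHOULD succeed when the
    # instance is not of a type the format covers. Bailing out early for
    # non-strings lets the "type": "string" keyword produce the friendly
    # 400 ("12345 is not of type 'string'") instead of this checker
    # raising a TypeError on an int.
    if not isinstance(instance, str):
        return True
    # Illustrative stand-in for the real phone number validation.
    if not re.fullmatch(r"\+?[0-9 ()-]{7,15}", instance):
        raise ValueError("Invalid phone number")
    return True


schema = {
    "type": "object",
    "properties": {
        "phone_number": {"type": "string", "format": "phone_number"},
    },
    "required": ["phone_number"],
}

validator = Draft4Validator(schema, format_checker=format_checker)
errors = list(validator.iter_errors({"phone_number": 12345}))
# -> one error: "12345 is not of type 'string'", not a TypeError
```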
- Uses the new utils methods to validate phone numbers.
- Defaults to international=True on validation, which ensures the validator accepts numbers from any country.
- Then checks whether the user is allowed to send this message to the number internationally, if needed (see the sketch below).
- Applies to both the V1 and V2 APIs.
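A minimal sketch of that two-step flow; the validator, the permission name, and the "starts with 44" check are stand-ins, not the real utils API:

```python
import re


class BadRequestError(Exception):
    """400-style error surfaced to the API client."""


def validate_phone_number(number, international=True):
    # Stand-in for the new utils validator: with international=True it
    # accepts any plausible number and returns a normalised form.
    digits = re.sub(r"[^0-9]", "", number)
    if not 7 <= len(digits) <= 15:
        raise BadRequestError("Not a valid phone number")
    return digits


def check_can_send_to_number(service, phone_number):
    # Step 1: validate the format, regardless of country.
    normalised = validate_phone_number(phone_number, international=True)

    # Step 2: only then apply the service-level restriction; treating
    # numbers that don't start with 44 as international and using an
    # "international_sms" permission are illustrative choices.
    if not normalised.startswith("44") and "international_sms" not in service.permissions:
        raise BadRequestError("Cannot send to international mobile numbers")
    return normalised
```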
- Rate limiting is wrapped into a new method, check_rate_limiting (see the sketch below).
- It delegates to the previous daily limit and the new throughput limit.
- Rate limiting is done per key type. Each key type has its own limit (number of requests) and interval (time period for those requests).
- Configured in the config; not done on a per-environment basis, though it could be in the future.
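A rough sketch of the shape of check_rate_limiting, assuming a redis-backed counter; the "API_KEY_LIMITS" config key and the redis helper method are illustrative names:

```python
from flask import current_app


class RateLimitError(Exception):
    def __init__(self, limit, interval, key_type):
        super().__init__(
            "Exceeded rate limit of {} requests per {} seconds for key type {}".format(
                limit, interval, key_type
            )
        )


def check_service_over_daily_message_limit(service):
    """Pre-existing daily limit check (body omitted in this sketch)."""


def check_rate_limiting(service, api_key, redis_store):
    # New wrapper called before a notification is accepted: it delegates to
    # the existing daily limit check and the new throughput check.
    check_service_over_api_rate_limit(service, api_key, redis_store)
    check_service_over_daily_message_limit(service)


def check_service_over_api_rate_limit(service, api_key, redis_store):
    # Limits are keyed on the API key type, each with its own number of
    # requests and interval, read from config (not per environment).
    settings = current_app.config["API_KEY_LIMITS"][api_key.key_type]
    cache_key = "{}-{}".format(service.id, api_key.key_type)
    if redis_store.exceeded_rate_limit(cache_key, settings["limit"], settings["interval"]):
        raise RateLimitError(settings["limit"], settings["interval"], api_key.key_type)
```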
This is being done for the PaaS migration, to allow us to keep traffic coming in whilst we migrate the database.
It uses the same tasks as the CSV-uploaded notifications, with simple changes to not persist the notification and to call into a different task.
We are using the notify queue in this iteration because it is a low-volume queue with its own dedicated workers. This saves us from building a new queue at this point, and a new queue may not be necessary.
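Roughly what the hand-off looks like; the task name, payload shape, and broker URL are illustrative, only the queue name comes from the description above:

```python
from celery import Celery

celery_app = Celery("notifications", broker="redis://localhost:6379/0")


@celery_app.task(name="deliver-sms")
def deliver_sms(notification_payload):
    # Hand the payload to the SMS provider; body omitted in this sketch.
    ...


def send_sms_without_persisting(notification_payload):
    # Same task family as the CSV upload path, but the notification is not
    # written to the database first; the payload goes straight onto the
    # existing low-volume "notify" queue, which has its own dedicated
    # workers, rather than onto a newly built queue.
    deliver_sms.apply_async(args=(notification_payload,), queue="notify")
```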
- Added the `simulate` notification logic to version 2. We have 3 email addresses and phone numbers that are used
to simulate a successful POST to /notifications; this was missed out of the version 2 endpoint.
- Added a test to template_dao to check that new templates default to normal.
- In v2 get_notifications, cast the path param to a UUID; if it is not a valid UUID, abort(404) (see the sketch below).
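A sketch of the UUID cast, with an illustrative view function name:

```python
import uuid

from flask import abort


def get_notification_by_id(notification_id):
    # Cast the path parameter up front; anything that is not a valid UUID
    # becomes a 404 rather than an unhandled ValueError later on.
    try:
        notification_id = uuid.UUID(notification_id)
    except ValueError:
        abort(404)
    # ... look up the notification by its UUID and return it
```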
Brings in a fix for InvalidEmail/Phone/AddressExceptions not being
instantiated correctly. `exception.message` is not a Python standard,
so we shouldn't be relying on it to transmit exception reasons;
rather, we should be using `str(exception)` instead. This involved a
handful of small changes to the schema validation:
* Ensure we don't raise an exception if e.cause does not contain a message
* Ensure we handle the case where e.path may be empty
* Refactor existing tests to conform to the new format
This PR fixes that and adds a test for it.
I am confused as to why I had to change the test_validators test that checks whether the mock is called.
Why did this code pass on preview?
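A small sketch of the change, with an illustrative exception and error formatter:

```python
class InvalidPhoneError(Exception):
    """Stand-in for the real InvalidPhoneError raised by the validators."""


def format_recipient_error(exc):
    # `exception.message` was never a language guarantee and is gone in
    # Python 3; str(exception) always returns whatever was passed to the
    # constructor, so the schema validation now uses that instead.
    return {"error": "ValidationError", "message": str(exc)}


try:
    raise InvalidPhoneError("Not enough digits")
except InvalidPhoneError as e:
    print(format_recipient_error(e))
    # {'error': 'ValidationError', 'message': 'Not enough digits'}
```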
Created a new schema that accepts request parameters for the
get_notifications v2 route, and we now validate with that instead of
the marshmallow validation.
Also changed the way formatted error messages are returned, because
the previous approach was cutting off our failing `enum` messages.
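A sketch of what the query-parameter schema and the error formatting might look like; the field set, enum values, and function name are illustrative:

```python
from jsonschema import Draft4Validator

# Illustrative shape of the new query-parameter schema for
# GET /v2/notifications; the real field set may differ.
get_notifications_request = {
    "$schema": "http://json-schema.org/draft-04/schema#",
    "description": "Query parameters for GET /v2/notifications",
    "type": "object",
    "properties": {
        "reference": {"type": "string"},
        "status": {"type": "array", "items": {"enum": ["created", "sending", "delivered", "failed"]}},
        "template_type": {"type": "array", "items": {"enum": ["sms", "email", "letter"]}},
        "older_than": {"type": "string"},
    },
    "additionalProperties": False,
}


def validate_query_params(params):
    validator = Draft4Validator(get_notifications_request)
    return [
        {
            "error": "ValidationError",
            # Keep jsonschema's whole message rather than splitting it on a
            # separator, which is what was truncating the enum messages
            # (e.g. "'foo' is not one of ['sms', 'email', 'letter']").
            "message": "{} {}".format(" ".join(str(p) for p in e.path), e.message).strip(),
        }
        for e in validator.iter_errors(params)
    ]
```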