for reasons unknown, using a file descriptor no longer works on
concourse (or in that Docker container generally). It might be a change
between cf-cli v6 and v7.
Either way, it doesn't work, so use a temporary file. Clean up the
temporary file afterwards.
If the command fails, the temporary file will still stick around, so
I've added the file to the /tmp/ folder instead. It's full of secret
keys and things, so if you do get a deployment error while running
locally, make sure you clean up the file (`make cf-rollback` and
`make clean` will both do this for you).
This is for the IBAG format (similar to CAP format, but proprietary)
used in the XMLs that we exchange with broadcast providers (specifically
Vodafone).
a bug in cf-cli v6 caused us to get rate limited. One solution to this
is to bump the version of cf-cli we're using to version 7, which has a
few syntax changes as the old v3 commands become mainline.
To upgrade locally, grab the latest version from brew.
```sh
brew install cloudfoundry/tap/cf-cli@7
```
some services only send to one provider. This is a platform admin
setting to allow us to test integrations and providers manually without
affecting other broadcasts from different services.
one-to-one: a service can either send to all providers as normal, or
send to only one provider.
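A minimal sketch of what that restriction might look like in the sending code, assuming a hypothetical nullable `allowed_broadcast_provider` setting on the service (the provider keys and function name are illustrative, not the real ones):

```python
# Illustrative only: not the real model or column names.
ALL_PROVIDERS = ["ee", "o2", "three", "vodafone"]  # example provider keys


def providers_to_send_to(service):
    # If a platform admin has restricted this service to a single provider,
    # only send to that provider; otherwise send to all providers as normal.
    if service.allowed_broadcast_provider:
        return [service.allowed_broadcast_provider]
    return ALL_PROVIDERS
```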
Now that we're grouping jobs sent from contact lists under their parent
contact list, they shouldn't also be listed at the top level of the
jobs page. The jobs page uses the uploads API, not the jobs API, so
this commit makes sure the filtering happens in the right place.
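To make the intent concrete, here's a toy sketch of the filter (a plain-Python stand-in for the real uploads query; `contact_list_id` is the assumed field name):

```python
# Toy stand-in for the uploads query: jobs created from a contact list are
# grouped under their parent contact list, so they should not appear again
# at the top level of the jobs page.
def filter_top_level_uploads(jobs):
    return [job for job in jobs if job.get("contact_list_id") is None]


uploads = [
    {"id": "job-1", "contact_list_id": None},
    {"id": "job-2", "contact_list_id": "list-a"},  # shown under its contact list
]
assert filter_top_level_uploads(uploads) == [{"id": "job-1", "contact_list_id": None}]
```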
`service.id` is a UUID, so it will never match anything in
`current_app.config.get('HIGH_VOLUME_SERVICE')`, because that is a list
of strings.
This is why we never fall into the first if statement and never get any
metrics for high volume services on our dashboards at the moment.
Note: I had taken the existing line from the `post_notification`
endpoint, but that uses a serialised service, which already has the
UUID converted to a string.
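To make the mismatch concrete, here's a minimal, self-contained illustration (the UUID value is made up):

```python
from uuid import UUID

# Config values arrive as a list of strings (made-up UUID for illustration).
HIGH_VOLUME_SERVICE = ["d75a6b04-1b07-4bbb-95ec-c6ef580125f5"]

service_id = UUID("d75a6b04-1b07-4bbb-95ec-c6ef580125f5")

print(service_id in HIGH_VOLUME_SERVICE)       # False: a UUID never equals a str
print(str(service_id) in HIGH_VOLUME_SERVICE)  # True: compare string to string
```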
Changes the high volume and non high volume metrics so that both only
include non-test notifications. When looking at the Grafana metrics, it
was impossible to separate the effect of high volume vs non high volume
sending from the effect of test vs live notifications.
This leaves us with no breakdown of high volume/non high volume sending
times for test notifications, but I don't think we really need that.
We currently measure the sending time for all notifications. This
commit breaks it down into
- test keys and non-test keys
- high volume services and non high volume services
Breaking it down into test and non-test keys is important because we
don't care as much about sending test notifications within 10 seconds,
only notifications sent with non-test keys, so we don't want our graphs
to show poor performance when it's just test keys affecting the numbers.
Breaking it down into high volume and non high volume services will let
us easily tell whether slow sending issues are coming from high volume
services or not.
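Roughly where these two metrics changes end up, sketched with an assumed statsd-style client that has a `timing(name, value)` method (the metric names and arguments are illustrative, not the real ones):

```python
def record_sending_time(statsd_client, service_id, key_type, sending_time_ms, high_volume_service_ids):
    """Illustrative sketch only: emit sending-time metrics broken down by
    key type and by whether the service is configured as high volume."""
    if key_type == "test":
        # We don't mind slow test-key sends, so keep them out of the live
        # graphs; per the change above, test traffic gets no high volume
        # breakdown at all.
        statsd_client.timing("sending-time.test-key", sending_time_ms)
        return

    if str(service_id) in high_volume_service_ids:  # the config holds strings
        statsd_client.timing("sending-time.live-key.high-volume", sending_time_ms)
    else:
        statsd_client.timing("sending-time.live-key.not-high-volume", sending_time_ms)
```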
previously we made some incorrect assumptions about the set-up on
staging and prod - they currently don't have any `cbc_proxy` AWS creds
at all. We shouldn't be attempting canaries or link tests when there's
no AWS infrastructure to connect to.
We also shouldn't bother writing a `broadcast_provider_message` row
into the database at all, since we're not even attempting to send, and
we don't want to confuse messages that failed with messages we never
wanted to send in the first place.
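A sketch of that early return, assuming a hypothetical `CBC_PROXY_ENABLED` config flag and a simplified task shape (the real `cbc_proxy` code and names will differ):

```python
from flask import current_app


def send_broadcast_provider_message(provider, broadcast_message):
    # Illustrative guard: on environments with no cbc_proxy AWS creds we
    # skip the send entirely rather than recording a failed attempt.
    if not current_app.config.get("CBC_PROXY_ENABLED"):
        # No broadcast_provider_message row is written, so a skipped send
        # can't be mistaken for a failed one.
        current_app.logger.info(
            "cbc proxy disabled, skipping send for broadcast %s", broadcast_message.id
        )
        return

    # ...create the broadcast_provider_message row and call the proxy here...
```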