mirror of
https://github.com/GSA/notifications-api.git
synced 2026-03-01 14:29:51 -05:00
Probably shouldn’t deploy this to production 😅 This shows exactly what I removed, bodged and hardcoded to test what the performance implications of caching a bunch of stuff might look like.

Test command:

```bash
python -m timeit -n 1 -s "import notifications_python_client; c = notifications_python_client.notifications.NotificationsAPIClient('🤫', base_url='http://localhost:6011')" "c.send_email_notification(email_address='sender@something.com', template_id='be433bfc-fe31-464b-9f2c-5be11abf2176')"
```

Before:

```
raw times: 12.1 11.9 11.8
100 loops, best of 3: 118 msec per loop
```

After:

```
raw times: 11.2 10.7 10.1
100 loops, best of 3: 101 msec per loop
```

Not a big improvement… so I was curious where those ~100ms were going.

Let’s go back to master and comment out persisting the notification to the database:

```
raw times: 12.3 10.5 10.5
100 loops, best of 3: 105 msec per loop
```

(saves about 13ms)

If we instead comment out sending to the queue:

```
raw times: 3.43 3.24 4.88
100 loops, best of 3: 32.4 msec per loop
```

(saves about 85ms)

This means most of our request time is spent waiting for SQS. If we test our fake caching while sending to the queue is commented out, we get a clearer picture of the potential improvement:

```
raw times: 2.13 1.84 2.18
100 loops, best of 3: 18.4 msec per loop
```

That’s a saving of 14ms from a baseline of 32.4ms, or about 43%. For context, a typical call to fetch a service from Redis from the admin app takes about 0.6ms.

It’s also worth thinking about whether we’re holding a database connection longer than we need to if we still have it open while talking to SQS.
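
The general shape of the caching being tested can be sketched with a tiny in-process TTL cache. This is illustrative only — the real experiment presumably cached service/template lookups (e.g. in Redis), and all names below (`TTLCache`, `load_service_from_db`, `svc-1`) are hypothetical:

```python
import time

class TTLCache:
    """Tiny in-process TTL cache (a sketch, not the actual implementation)."""

    def __init__(self, ttl_seconds=300):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, stored_at)

    def get(self, key, fetch):
        """Return the cached value for key, calling fetch() only on a miss or expiry."""
        entry = self._store.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]
        value = fetch()
        self._store[key] = (value, now)
        return value


# Hypothetical usage: avoid a database round-trip on repeat lookups.
calls = []

def load_service_from_db(service_id):
    calls.append(service_id)  # stands in for an expensive DB query
    return {"id": service_id, "name": "example"}

cache = TTLCache(ttl_seconds=300)
first = cache.get("svc-1", lambda: load_service_from_db("svc-1"))
second = cache.get("svc-1", lambda: load_service_from_db("svc-1"))
assert first == second
assert len(calls) == 1  # the second lookup was served from the cache
```

The trade-off to note is staleness: a TTL (or explicit invalidation when a service or template changes) is what keeps the cached copy from drifting too far from the database.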
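
The comment-things-out approach above could also be done non-destructively by timing each phase of the request handler. A sketch, where the phase names and sleeps are stand-ins (they are not the actual API code):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label, timings):
    """Record elapsed wall-clock time in milliseconds for a block under `label`."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[label] = (time.perf_counter() - start) * 1000

# Hypothetical request handler phases, with sleeps standing in for real work.
timings = {}
with timed("persist_notification", timings):
    time.sleep(0.013)  # stand-in for the DB insert (~13ms in the test above)
with timed("send_to_sqs", timings):
    time.sleep(0.085)  # stand-in for the SQS call (~85ms in the test above)

assert set(timings) == {"persist_notification", "send_to_sqs"}
assert timings["send_to_sqs"] > timings["persist_notification"]
```

Logging a breakdown like this per request would show whether the SQS wait dominates in production the way it does in this local benchmark.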