DynamoDB offers two capacity modes for read and write requests: the original provisioned capacity and the more recent on-demand capacity. Unfortunately, adding on-demand as an option actually makes things more difficult for DynamoDB customers, because they end up spending a lot of time trying to figure out which mode is the right choice and when to switch between them. In this blog, I'll talk about the pros and cons of each, and then describe how one mode (provisioned) could have evolved to become everything that DynamoDB customers actually want.
Perspectives for evaluation
We need to consider the capacity modes from both a financial perspective and an operational perspective. In terms of cost, DynamoDB users don't want to pay for throughput they aren't using, and they also want cost reductions when they give AWS a signal about the throughput they expect and commit to it over time. In terms of operations, they want to minimize hassles, avoid undesirable throttling of requests, and optionally place limits to control unintended overspend.
Provisioned capacity mode
In provisioned capacity mode, you configure your table (or global secondary index) for a particular read and write throughput. Once in place, DynamoDB will deliver that level of throughput whenever you need it - but you'll pay for that throughput capacity whether you use it or not. The provisioned read and write throughput can also be auto scaled, with a policy that sets a minimum, a maximum, and a target utilization (70% by default). Auto scaling is based on recent consumption metrics in CloudWatch, and can take a few minutes to identify a trend before starting to adjust the provisioned capacity. You need to tune your auto scaling policy so that the minimum and target utilization maintain enough buffer to cover rapid increases in throughput - otherwise you'll see throttling.
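As a sketch of what such a policy looks like in practice, here are Application Auto Scaling request parameters for a target-tracking policy on a hypothetical table named `my-table` (the table name and capacity numbers are placeholders, not recommendations). Nothing here calls AWS; the commented lines show how you would apply the parameters with boto3.

```python
# Request parameters for DynamoDB auto scaling via Application Auto Scaling.
# "my-table" and the capacity values below are hypothetical placeholders.

scalable_target = {
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/my-table",
    "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
    "MinCapacity": 5,    # floor: keeps a baseline of built-out throughput
    "MaxCapacity": 500,  # ceiling: caps spend if load grows unexpectedly
}

scaling_policy = {
    "PolicyName": "my-table-read-scaling",
    "ServiceNamespace": "dynamodb",
    "ResourceId": "table/my-table",
    "ScalableDimension": "dynamodb:table:ReadCapacityUnits",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        # 70.0 is DynamoDB's default target utilization (percent)
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBReadCapacityUtilization"
        },
    },
}

# With boto3 you would apply these as:
#   aas = boto3.client("application-autoscaling")
#   aas.register_scalable_target(**scalable_target)
#   aas.put_scaling_policy(**scaling_policy)
print(scaling_policy["TargetTrackingScalingPolicyConfiguration"]["TargetValue"])
```

Lowering `TargetValue` buys more headroom for sudden spikes at the cost of paying for more idle capacity - that tradeoff is the tuning effort discussed above.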
For long-term use of DynamoDB, you can purchase a capacity reservation. This is a commitment (one year or three years) to purchase a particular level of provisioned throughput - in return, DynamoDB extends you a significant (up to ~70%) discount.
Provisioned capacity gives a lot of control - you can ensure that you have a particular throughput capability before an expected peak event, knowing that DynamoDB has already built out all the partitions on the backend to cover your needs. You can also cap the throughput so that an accidental loop in a development environment cannot result in crazy levels of spend. But maintaining an auto scaling policy for optimum efficiency takes time and effort - and you might still see throttling from time to time. The minimum provisioned capacity is 1 read unit and 1 write unit - so there's no way to completely avoid throughput cost on an inactive table in this mode.
Provisioned capacity is not an ideal answer for customers in its present form.
On-demand capacity mode
When your DynamoDB table is configured for on-demand capacity mode, you don't need to configure auto scaling or provision any particular level of throughput - and you only pay for the read units and write units which are actually consumed. The service monitors your consumption (near real-time) and splits partitions as required - it tries to maintain a 50% capability buffer over and above your past needs. In on-demand mode, every partition is allowed to deliver its full capability in any given second. Each on-demand table starts with 4 partitions (each supporting 3000 read units per second and 1000 write units per second) for a total capability of 12000 read units per second and 4000 write units per second.
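Enabling on-demand mode is a one-line choice at table creation. Here's a minimal sketch of the CreateTable request parameters (the table name and key schema are hypothetical placeholders); the commented line shows the boto3 call you would make with them.

```python
# CreateTable request parameters for an on-demand table.
# "my-table" and the "pk" key attribute are hypothetical placeholders.
create_table_params = {
    "TableName": "my-table",
    "AttributeDefinitions": [{"AttributeName": "pk", "AttributeType": "S"}],
    "KeySchema": [{"AttributeName": "pk", "KeyType": "HASH"}],
    # PAY_PER_REQUEST selects on-demand mode - no ProvisionedThroughput
    # element is needed (or allowed) in this mode.
    "BillingMode": "PAY_PER_REQUEST",
}
# boto3.client("dynamodb").create_table(**create_table_params)
```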
An important point is that splitting of partitions takes time. Each split generation (one parent partition being replaced by two child partitions, doubling the storage and throughput capability of that part of the key space) can take a few minutes, but it can sometimes take much longer. I treat 30 minutes per split as a reasonably safe expectation. There's no way in on-demand mode to tell DynamoDB that you're expecting a peak load that will require 32 partitions to avoid throttling - you have to switch to provisioned mode, configure your expected throughput, wait for the splitting to finish, then convert back to on-demand. Yes, that customer experience is a bit crummy. The only alternative is to actually drive the load, and potentially encounter significant throttling while those 4 partitions split not once, but three times to reach a total of 32.
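The doubling arithmetic above is easy to capture in a couple of helper functions - this is just the post's rule of thumb (start at 4 partitions, ~30 minutes per split generation), not anything DynamoDB publishes:

```python
import math

def split_generations(target_partitions: int, start_partitions: int = 4) -> int:
    """Number of doubling generations to grow from start to target partitions."""
    if target_partitions <= start_partitions:
        return 0
    return math.ceil(math.log2(target_partitions / start_partitions))

def worst_case_wait_minutes(target_partitions: int, minutes_per_split: int = 30) -> int:
    """Rough wait using the post's ~30 minutes per split generation."""
    return split_generations(target_partitions) * minutes_per_split

print(split_generations(32))        # 4 -> 8 -> 16 -> 32: prints 3
print(worst_case_wait_minutes(32))  # prints 90 (minutes)
```

So reaching 32 partitions from a cold on-demand table could mean the better part of two hours of throttled peak traffic, which is exactly why the provisioned-mode pre-warming dance exists.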
Another concern with on-demand mode is cost. First, the only constraint on the throughput is the per-table limit on read units and write units consumed per second (aka the "quota", which is configurable only for all tables in a particular account/region). This is 40k reads and 40k writes per second by default - imagine the surprising bill you'd receive if you accidentally created a loop when experimenting with an on-demand table and drove consumption of 40k write units per second for a month!
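To put a number on that surprise, here's the back-of-envelope arithmetic, using the historical us-east-1 on-demand write price of $1.25 per million write request units (an assumption - check current pricing for your region):

```python
# Runaway loop consuming the default 40k write units/sec quota for 30 days.
writes_per_second = 40_000
seconds_per_month = 30 * 24 * 3600          # 2,592,000 seconds
price_per_write_unit = 1.25 / 1_000_000     # historical us-east-1 on-demand rate

monthly_write_units = writes_per_second * seconds_per_month
monthly_cost = monthly_write_units * price_per_write_unit
print(f"${monthly_cost:,.0f}")  # prints $129,600
```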
On a unit-for-unit basis, on-demand throughput costs ~7x provisioned throughput (and that's before allowing for reserved capacity, which has no on-demand equivalent). This assumes 100% utilization of the provisioned capacity, which is difficult if not impossible to achieve. But a utilization of only 14.5% is cost-equivalent to on-demand! While there are some workload patterns that are sporadic and unpredictable enough for on-demand to work out as the more cost-efficient choice, these are not as common as people might think - and if there is any predictability at all, it can be quite reasonable to adjust the auto scaling policy on a schedule to match the requirements. I would argue that many load spikes are only a concern because the solution lacks best-practice implementation details like caching of reads and queue-based load leveling for writes. Leaving aside the reserved capacity option, if you can use provisioned capacity with a target utilization of 20% and not see any throttling, you will be saving money over on-demand.
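The ~7x ratio and the breakeven utilization both fall out of the per-unit prices. Using historical us-east-1 write prices as assumptions (check current pricing), the arithmetic looks like this:

```python
# Historical us-east-1 write prices (assumptions - verify current pricing):
provisioned_wcu_per_hour = 0.00065    # $ per WCU-hour, used or not
on_demand_per_write_unit = 1.25e-6    # $ per write request unit

# One fully utilized WCU performs 3600 writes in an hour.
on_demand_cost_per_hour = 3600 * on_demand_per_write_unit   # $0.0045

ratio = on_demand_cost_per_hour / provisioned_wcu_per_hour
breakeven_utilization = provisioned_wcu_per_hour / on_demand_cost_per_hour

print(round(ratio, 1))                        # ~6.9, i.e. the "~7x"
print(round(breakeven_utilization * 100, 1))  # ~14.4%, the breakeven point
```

Below that breakeven utilization, on-demand is cheaper; above it, provisioned wins - which is why a comfortably achievable 20% target already saves money.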
With that 20% target utilization in provisioned mode, you essentially have DynamoDB maintain built-out backing partitions that allow you to increase load 5x at any time without waiting for splitting to accommodate it. On-demand is equivalent to a target utilization of 50% in this regard - accommodating only a 2x increase before a delay for splitting is required.
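That headroom relationship is just the reciprocal of the target utilization:

```python
def instant_headroom_multiple(target_utilization: float) -> float:
    """How far load can grow immediately, without waiting for partition
    splits, when partitions are sized for this target utilization."""
    return 1.0 / target_utilization

print(instant_headroom_multiple(0.20))  # prints 5.0 - 20% provisioned target
print(instant_headroom_multiple(0.50))  # prints 2.0 - on-demand's 50% buffer
```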
On-demand is also not a complete answer for DynamoDB customers.
The one capacity mode to rule them all
Okay, so what would the ideal capacity mode look like? And how might the original provisioned mode have smoothly evolved to take that form?
At a high level, we'd take all the best operational parts of each mode and merge them - then the auto scaled provisioned read and write unit values would be used for billing and as an indicator for determining partitioning requirements - they would not be applied as a rate limiter (for throttling). Let me break it down a little more as the evolutionary path that the original provisioned mode (with auto scaling) could have taken...
Provisioned read and write capacity values are allowed to be set as zero, and zero indicates a behavior which is just like today's on-demand.
The provisioned read and write capacities are billed as per the existing provisioned pricing (i.e., you pay for the throughput capacity whether you use it all or not), and they continue to be eligible for capacity reservations.
Auto scaling minimum is used to guide partition requirements - the table or index is always kept at a partitioning level that accommodates that minimum (or more).
Auto scaling maximum is used to set a throughput limit beyond which requests will be throttled. The maximum is optional - if not set, the account/region configured per-table limit still applies.
Throughput beyond the provisioned value is allowed if the partitions are capable (up to the auto scaling maximum) and is billed at the on-demand rate.
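To make the merged mode concrete, here's a hypothetical sketch of how one hour might be metered under these rules - this is an illustration of the proposal, not any actual AWS API, and the rates reused here are the historical us-east-1 write prices from earlier:

```python
def merged_mode_hourly_cost(consumed_units: int,
                            provisioned_units: int,
                            provisioned_rate: float = 0.00065,  # $/unit-hour
                            on_demand_rate: float = 1.25e-6) -> float:
    """Hypothetical billing for the merged mode: pay for provisioned
    capacity whether used or not, and bill any consumption beyond what
    it covers at the on-demand per-unit rate (up to the scaling maximum,
    which is not modeled here)."""
    base = provisioned_units * provisioned_rate
    covered_units = provisioned_units * 3600   # one unit = 3600 ops/hour
    overage_units = max(0, consumed_units - covered_units)
    return base + overage_units * on_demand_rate

# 100 provisioned WCU covers 360,000 writes/hour; consuming 500,000 bills
# the extra 140,000 at the on-demand rate instead of throttling them.
print(merged_mode_hourly_cost(500_000, 100))  # 0.065 base + 0.175 overage
print(merged_mode_hourly_cost(200_000, 100))  # under capacity: base only
```

The appeal is that the provisioned value becomes a cost floor and a partitioning hint rather than a hard ceiling - bursts cost on-demand prices instead of causing throttles.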
Doesn't this sound dreamy? I'd love to hear your thoughts on this! We live in the real world, so I plan to share some guidance on choosing between provisioned and on-demand in future blogs, and some tips and tricks for auto scaling success, too.