Over-Provisioning — AWS Developer (DVA-C02)
You provisioned more capacity or redundancy than the scenario required. The exam rewards right-sizing.
Provisioned Concurrency Pays Off Only at Sustained Load
Candidates reading a cold-start complaint pick provisioned concurrency by reflex. For a reporting function invoked twice a day, that reflex charges for warm instances that sit idle between runs. Provisioned concurrency pays for itself only when invocation frequency keeps warm instances in continuous use. At low invocation rates the break-even point never arrives, and on-demand cold starts cost less in aggregate than the idle warm-instance charge. SnapStart for Java runtimes reduces cold-start latency without the continuous idle cost, which changes the calculus significantly for Java functions with intermittent traffic patterns.
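The break-even intuition can be sketched with a quick cost model. This is a minimal sketch, assuming an illustrative per-GB-second provisioned-concurrency rate and a 512 MB function; the numbers are not quoted AWS prices and vary by region.

```python
# Sketch of the idle cost of provisioned concurrency. The rate below is an
# assumed figure for illustration, not a quoted AWS price.
PC_RATE_PER_GB_SECOND = 0.0000041667  # assumed provisioned-concurrency rate
MEMORY_GB = 0.5                       # a 512 MB function
SECONDS_PER_MONTH = 30 * 24 * 3600

def idle_warm_cost(instances: int = 1) -> float:
    """Monthly charge for warm instances, billed whether or not they run."""
    return instances * MEMORY_GB * PC_RATE_PER_GB_SECOND * SECONDS_PER_MONTH

# A reporting function invoked twice a day pays this for every idle second:
print(f"${idle_warm_cost():.2f}/month")  # ~$5.40 for one 512 MB instance
```

On-demand invocations at two runs per day add no comparable idle charge, which is why the break-even point never arrives at low invocation rates.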
The Scenario
A development team needs a database for a new microservice with unknown traffic patterns, starting at approximately 100 reads and 20 writes per second. You choose Multi-AZ RDS PostgreSQL with provisioned IOPS for consistent performance. The correct answer is DynamoDB with on-demand capacity mode. The workload is key-value access (not relational joins), the traffic pattern is unknown (on-demand auto-scales without capacity planning), and the scenario said "new microservice" — meaning requirements will change. Multi-AZ adds cost for availability the scenario never specified. Provisioned IOPS locks you into capacity you may not need.
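A minimal sketch of the matching table definition follows; the table and key names are hypothetical. On-demand mode is selected by `BillingMode` alone, so there are no capacity units to plan or resize as the microservice's traffic changes.

```python
# Hypothetical key-value table for the new microservice. On-demand mode means
# there are no ReadCapacityUnits/WriteCapacityUnits to size up front.
table_spec = {
    "TableName": "orders",  # hypothetical name
    "AttributeDefinitions": [
        {"AttributeName": "order_id", "AttributeType": "S"}
    ],
    "KeySchema": [{"AttributeName": "order_id", "KeyType": "HASH"}],
    "BillingMode": "PAY_PER_REQUEST",  # on-demand capacity mode
}
# With boto3: boto3.client("dynamodb").create_table(**table_spec)
```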
How to Spot It
- New workloads with unknown traffic patterns favor on-demand or auto-scaling over provisioned capacity. DynamoDB on-demand charges per request (roughly $0.25 per million reads). At 100 reads/second, about 259 million reads per month, that is roughly $65/month. A db.r6g.large Multi-AZ RDS instance with provisioned IOPS starts at $400+/month.
- Multi-AZ is only correct when the scenario requires high availability with automatic failover. Development environments, new microservices, and workloads without SLA requirements do not need Multi-AZ. The exam tests whether you add redundancy that was not requested.
- Aurora Serverless v2 scales from 0.5 to 128 ACUs, but the minimum 0.5 ACU still costs ~$43/month even at zero traffic. For intermittent workloads, DynamoDB on-demand at $0 idle cost or Aurora Serverless v1 with pause-after-idle may be cheaper.
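The arithmetic behind the scenario's comparison can be worked out directly. This sketch assumes classic on-demand rates ($0.25 per million reads, $1.25 per million writes) and that each read or write fits within one request unit; current prices vary by region and change over time.

```python
# Cost comparison for the scenario's workload (100 reads/s, 20 writes/s),
# using assumed on-demand request rates, not current quoted prices.
SECONDS_PER_MONTH = 30 * 24 * 3600
reads = 100 * SECONDS_PER_MONTH      # 259,200,000 reads/month
writes = 20 * SECONDS_PER_MONTH      # 51,840,000 writes/month

READ_RATE = 0.25 / 1_000_000         # assumed $ per read request unit
WRITE_RATE = 1.25 / 1_000_000        # assumed $ per write request unit

ddb_monthly = reads * READ_RATE + writes * WRITE_RATE
rds_floor = 400.0                    # assumed Multi-AZ + provisioned IOPS floor

print(f"DynamoDB on-demand: ${ddb_monthly:.2f}/month vs RDS: ${rds_floor:.0f}+/month")
```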
Decision Rules
Whether a short-burst, episodic test workload (90 min active per invocation, 15-20 invocations per day) justifies a Reserved Instance commitment, or whether on-demand CloudFormation stack provisioning or serverless compute eliminates idle spend more cost-effectively.
Whether the CodeDeploy deployment group has a CloudWatch alarm explicitly associated so that a metric breach during the canary window triggers an automated traffic revert, versus relying on a parallel full-capacity environment and manual alarm review that inflates cost and removes the automated revert path.
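The automated-revert path hinges on the alarm being attached to the deployment group itself. A hedged sketch of the boto3 `update_deployment_group` parameters that wire this up; the application, group, and alarm names are hypothetical.

```python
# Sketch: attach a CloudWatch alarm to a CodeDeploy deployment group so a
# metric breach during the canary window triggers an automatic rollback.
# Application, group, and alarm names are hypothetical.
alarm_rollback_config = {
    "applicationName": "orders-api",
    "currentDeploymentGroupName": "canary-group",
    "alarmConfiguration": {
        "enabled": True,
        "alarms": [{"name": "Canary5xxRate"}],  # hypothetical alarm name
    },
    "autoRollbackConfiguration": {
        "enabled": True,
        "events": ["DEPLOYMENT_STOP_ON_ALARM"],  # revert on alarm breach
    },
}
# With boto3: boto3.client("codedeploy").update_deployment_group(**alarm_rollback_config)
```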
Whether Rolling with Additional Batch (adds one transient extra batch before retiring old-version instances, preserving the availability floor without duplicating the full fleet) or Immutable (launches a complete parallel Auto Scaling group for every deployment, guaranteeing clean rollback but doubling EC2 fleet size for the entire deployment window) satisfies both the availability SLA and the fixed-budget constraint simultaneously.
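The fleet-size difference between the two policies reduces to simple arithmetic, sketched below with an assumed batch size of one.

```python
def peak_instances(fleet: int, policy: str, batch: int = 1) -> int:
    """Peak EC2 count during an Elastic Beanstalk deployment.

    Rolling with additional batch briefly adds one extra batch before
    retiring old instances; Immutable runs a full parallel Auto Scaling
    group for the entire deployment window.
    """
    if policy == "rolling_with_additional_batch":
        return fleet + batch
    if policy == "immutable":
        return fleet * 2
    raise ValueError(f"unknown policy: {policy}")

print(peak_instances(10, "rolling_with_additional_batch"))  # 11
print(peak_instances(10, "immutable"))                      # 20
```

For a 10-instance fleet under a fixed budget, one transient extra instance versus ten duplicated ones is usually the deciding difference.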
Whether to pair strongly consistent reads with on-demand capacity mode in DynamoDB (satisfies both zero-staleness and cost-efficiency for unpredictable traffic) versus using provisioned capacity with strongly consistent reads (satisfies correctness but over-provisions idle throughput) or using eventually consistent reads (fails correctness) or using an Aurora reader endpoint (introduces replication lag and always-on compute cost).
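The zero-staleness half of that pairing is a single request flag. A minimal sketch with hypothetical table and key names; `ConsistentRead=True` returns the latest committed value, at twice the read-request cost of an eventually consistent read.

```python
# Sketch: a zero-staleness read against an on-demand table. Table and key
# names are hypothetical.
get_request = {
    "TableName": "orders",                   # hypothetical table
    "Key": {"order_id": {"S": "o-123"}},     # hypothetical key
    "ConsistentRead": True,                  # strongly consistent read
}
# With boto3: boto3.client("dynamodb").get_item(**get_request)
```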
Whether the application's read cadence (fixed polling interval of tens of seconds) tolerates DynamoDB's eventual consistency convergence window (milliseconds to low single-digit seconds), making eventually consistent reads sufficient and strongly consistent reads an over-provisioned default.
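The cost half of that trade-off: eventually consistent reads consume half a read request unit per read of up to 4 KB, so the "safe default" of strong consistency doubles read spend. A sketch under assumed on-demand pricing:

```python
def monthly_read_cost(reads_per_sec: float, consistent: bool,
                      rate_per_million_rru: float = 0.25) -> float:
    """Assumed on-demand read pricing (not a quoted AWS rate); an eventually
    consistent read of <=4 KB consumes half a read request unit."""
    rru_per_read = 1.0 if consistent else 0.5
    reads = reads_per_sec * 30 * 24 * 3600
    return reads * rru_per_read * rate_per_million_rru / 1_000_000

strong = monthly_read_cost(100, consistent=True)     # ~$64.80/month
eventual = monthly_read_cost(100, consistent=False)  # ~$32.40/month
```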
Whether to emit custom metrics via CloudWatch Embedded Metric Format (structured JSON written to stdout) or via a synchronous PutMetricData SDK call inside the Lambda handler — only EMF satisfies the no-additional-API-call-on-critical-path constraint.
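An EMF record is just structured JSON on stdout; CloudWatch Logs extracts the metric asynchronously, so the handler never blocks on a metrics API. A minimal sketch, with a hypothetical namespace and dimension value:

```python
import json
import time

def emit_latency_metric(value_ms: float) -> dict:
    """Write an Embedded Metric Format record to stdout. CloudWatch Logs
    extracts the metric out-of-band, so no PutMetricData call sits on the
    critical path. Namespace and dimension value are hypothetical."""
    record = {
        "_aws": {
            "Timestamp": int(time.time() * 1000),
            "CloudWatchMetrics": [{
                "Namespace": "OrdersService",  # hypothetical namespace
                "Dimensions": [["FunctionName"]],
                "Metrics": [{"Name": "Latency", "Unit": "Milliseconds"}],
            }],
        },
        "FunctionName": "order-processor",     # hypothetical dimension value
        "Latency": value_ms,
    }
    print(json.dumps(record))                  # one log line, no API call
    return record
```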
Whether the Lambda execution role's permission policy restricts both the action set (s3:GetObject only) and the resource scope (specific bucket ARN plus prefix path) versus granting an over-provisioned policy such as s3:* on '*' that satisfies functionality but violates least-privilege.
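The narrowed policy looks like the sketch below; the bucket name and prefix are hypothetical. Both the action list and the resource ARN are restricted, unlike the `s3:*` on `'*'` anti-pattern.

```python
# Least-privilege sketch: both the action set and the resource scope are
# narrowed. Bucket name and prefix are hypothetical.
least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],                            # actions narrowed
        "Resource": "arn:aws:s3:::reports-bucket/incoming/*",  # scoped prefix
    }],
}
# Over-provisioned anti-pattern: {"Action": "s3:*", "Resource": "*"}
```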
Whether job durations fit within Lambda's 15-minute execution limit and per-invocation billing eliminates idle cost, making Lambda strictly cheaper and operationally simpler than always-on or over-reserved Fargate tasks for a short-duration, bursty, event-driven workload.
Whether Lambda's per-invocation pricing model eliminates economically irrational always-on container capacity given that individual job durations are well below the Lambda execution ceiling and weekend utilization approaches zero.
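The economics of the two rules above can be sketched with assumed rates; the per-GB-second and per-task-hour figures below are illustrative, not quoted AWS prices, and the job volume is hypothetical.

```python
# Assumed illustrative rates, not quoted AWS prices.
LAMBDA_GB_SECOND = 0.0000166667  # assumed Lambda compute rate per GB-second
FARGATE_HOURLY = 0.05            # assumed per-task hourly rate
HOURS_PER_MONTH = 730

def lambda_monthly(invocations: int, duration_s: float, memory_gb: float) -> float:
    """Per-invocation billing: idle time between jobs costs nothing."""
    return invocations * duration_s * memory_gb * LAMBDA_GB_SECOND

def fargate_monthly(tasks: int = 1) -> float:
    """Always-on task billing: weekends at near-zero utilization still bill."""
    return tasks * FARGATE_HOURLY * HOURS_PER_MONTH

# 10,000 short bursty jobs/month, 3 s each, 512 MB:
print(round(lambda_monthly(10_000, 3, 0.5), 2))  # 0.25
print(round(fargate_monthly(), 2))               # 36.5
```

Even with generous error bars on the assumed rates, per-invocation billing wins by two orders of magnitude for this traffic shape.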