Over-Provisioning — AWS Solutions Architect (SAA-C03)
You provisioned more capacity or redundancy than the scenario required. The exam rewards right-sizing.
Reserved capacity feels safe until cost enters the picture
The scenario describes a variable or bursty workload. The candidate reaches for Reserved Instances or large On-Demand capacity because they eliminate performance risk. The exam is testing whether you recognize that Spot Instances, Lambda, or Auto Scaling with right-sized instances satisfies the performance constraint at materially lower cost. "Cost-optimized" is not a secondary concern: when the scenario names it, it becomes the filter that eliminates the safe-feeling option.
The Scenario
A development team needs a database for a new microservice with unknown traffic patterns, starting at approximately 100 reads and 20 writes per second. You choose Multi-AZ RDS PostgreSQL with provisioned IOPS for consistent performance. The correct answer is DynamoDB with on-demand capacity mode. The workload is key-value access (not relational joins), the traffic pattern is unknown (on-demand auto-scales without capacity planning), and the scenario said "new microservice" — meaning requirements will change. Multi-AZ adds cost for availability the scenario never specified. Provisioned IOPS locks you into capacity you may not need.
How to Spot It
- New workloads with unknown traffic patterns favor on-demand or auto-scaling over provisioned capacity. DynamoDB on-demand charges per request — roughly $0.25 per million read request units and $1.25 per million write request units at classic us-east-1 rates. At 100 reads and 20 writes per second, that works out to roughly $130/month. A db.r6g.large Multi-AZ RDS instance with provisioned IOPS starts at $400+/month.
- Multi-AZ is only correct when the scenario requires high availability with automatic failover. Development environments, new microservices, and workloads without SLA requirements do not need Multi-AZ. The exam tests whether you add redundancy that was not requested.
- Aurora Serverless v2 scales from 0.5 to 128 ACUs — but the minimum 0.5 ACU still costs ~$43/month even at zero traffic. For intermittent workloads, DynamoDB on-demand at $0 idle cost or Aurora Serverless v1 with auto-pause may be cheaper.
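The DynamoDB on-demand arithmetic above can be sketched as a back-of-envelope calculation. The per-million rates are assumptions (classic us-east-1 on-demand pricing for strongly consistent reads on items ≤4 KB; AWS cut on-demand rates in late 2024, so check current pricing before relying on these numbers):

```python
# Back-of-envelope monthly cost for DynamoDB on-demand capacity mode.
# Rates below are assumed classic us-east-1 list prices, not current ones.
SECONDS_PER_MONTH = 60 * 60 * 24 * 30    # ~30-day month

READ_RATE_PER_MILLION = 0.25     # USD per million read request units
WRITE_RATE_PER_MILLION = 1.25    # USD per million write request units

def monthly_cost(reads_per_sec: float, writes_per_sec: float) -> float:
    """Monthly USD cost, assuming 1 request = 1 request unit."""
    reads = reads_per_sec * SECONDS_PER_MONTH
    writes = writes_per_sec * SECONDS_PER_MONTH
    return (reads / 1e6) * READ_RATE_PER_MILLION + \
           (writes / 1e6) * WRITE_RATE_PER_MILLION

# The scenario's workload: 100 reads/s, 20 writes/s
print(f"${monthly_cost(100, 20):.2f}/month")
```

Note that reads and writes contribute almost equally here despite the 5:1 request ratio, because write request units cost 5x read request units — a detail worth checking before assuming a read-heavy workload is cheap.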
Decision Rules
- Whether target tracking (autonomous threshold management against a single metric target) or step scaling (a manually configured CloudWatch alarm per step band) satisfies the combined constraint of a metric-normalized elastic workload with minimal operational overhead; fixed over-provisioned capacity is disqualified by the variability and cost constraints.
- Whether the workload's explicit interruptibility and non-continuous run profile (≈17% of daily hours) make EC2 Spot Instances the dominant cost choice over commitment-based options such as Reserved Instances or Savings Plans, which impose per-hour baseline charges on idle hours.
- Choose Aurora Serverless v2 Multi-AZ over provisioned Aurora Multi-AZ when the workload exhibits large idle-to-peak swings and the dominant constraint is minimizing monthly database cost without relaxing an availability SLA.
- Whether to enforce an organization-level preventive SCP via AWS Organizations that blocks public-access actions on S3 across all member accounts, or to deploy Amazon Macie per account to detect and alert on PII exposure — a detective approach that requires per-account provisioning and can be disabled by member-account admins.
- Whether the workload's maximum per-job execution duration violates Lambda's 15-minute hard limit, and if so, which EC2 capacity model avoids idle-capacity waste given the highly variable arrival rate.
- Select the instance family that matches the primary resource bottleneck (compute-optimized C family, not balanced M family) AND the placement group type that minimizes inter-node network latency (cluster, not spread), treating these as two orthogonal axes that must both be satisfied.
- Whether the stated IOPS and throughput values fall within gp3's independently provisionable ceilings (≤16,000 IOPS, ≤1,000 MiB/s), making gp3 the cost-optimal choice over io2 despite the database workload context.
- When a fixed, over-provisioned fleet shows chronically low average CPU and traffic is variable rather than schedule-predictable, the correct two-lever action is Compute Optimizer for instance-type rightsizing plus a target tracking scaling policy that maintains a CPU target dynamically; Scheduled Scaling is disqualified because it requires a fixed, repeatable traffic schedule that the scenario cannot confirm.
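The Spot-versus-commitment rule above comes down to one comparison: commitment-based pricing bills every hour of the month, while Spot bills only the hours the job actually runs. A minimal sketch, where the On-Demand rate and the Spot/RI discounts are illustrative assumptions rather than AWS list prices:

```python
# Why Spot dominates for an interruptible job running ~17% of daily hours.
# All rates and discounts below are illustrative assumptions.
HOURS_PER_MONTH = 730

on_demand_rate = 0.10      # USD/hr for a hypothetical instance type
spot_discount = 0.70       # Spot is historically ~60-90% off On-Demand
ri_discount = 0.40         # typical 1-yr no-upfront RI discount

run_fraction = 4 / 24      # job runs ~4 h/day ≈ 17% of daily hours

# Spot: pay the discounted rate, but only for hours actually used.
spot_cost = on_demand_rate * (1 - spot_discount) * HOURS_PER_MONTH * run_fraction

# Reserved Instance: smaller discount, but billed for every hour,
# including the ~83% of hours the job sits idle.
ri_cost = on_demand_rate * (1 - ri_discount) * HOURS_PER_MONTH

print(f"Spot: ${spot_cost:.2f}/mo   RI: ${ri_cost:.2f}/mo")
```

Even with a much smaller Spot discount the conclusion holds, because the RI's per-hour baseline charge on idle hours swamps the rate difference at a 17% duty cycle.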