Over-Provisioning — Azure Solutions Architect (AZ-305)
You provisioned more capacity or redundancy than the scenario required. The exam rewards right-sizing.
Reserved Instances Against a Variable Workload Pattern
A workload that runs for a short window and then sits idle will still pay for the unused hours under Reserved Instances. For burst workloads, batch jobs, or event-driven processing with long idle windows, reserved capacity locks in baseline spend during periods of zero utilization. The scenario signals its own answer through traffic shape: sustained, predictable load favors reservation; variable or intermittent load favors autoscale paired with Spot VMs. The point is not a magic threshold. It is that commitment pricing stops making sense once the idle time becomes a meaningful part of the workload shape.
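The trade-off above reduces to a break-even utilization calculation. A minimal sketch, assuming illustrative hourly rates (not current Azure pricing) and a ~40% reservation discount:

```python
# Break-even utilization: commitment pricing vs pay-as-you-go.
# Rates are illustrative assumptions, not current Azure prices.
PAYG_RATE = 0.20         # $/hour on demand (assumed)
RESERVED_RATE = 0.12     # $/hour effective under a 1-year reservation (assumed)
HOURS_PER_MONTH = 730

def monthly_cost(utilization: float) -> tuple[float, float]:
    """Return (pay_as_you_go, reserved) monthly cost at a given utilization (0..1)."""
    payg = PAYG_RATE * HOURS_PER_MONTH * utilization
    reserved = RESERVED_RATE * HOURS_PER_MONTH  # billed every hour, idle or not
    return payg, reserved

# The reservation only wins above the break-even utilization:
break_even = RESERVED_RATE / PAYG_RATE  # 0.60 -> VM must run ~60% of the time
```

Under these assumed rates, a batch job at 30% utilization pays roughly half as much on demand as it would under the reservation, which is exactly the "idle time dominates" signal the exam plants in the scenario.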
The Scenario
A team needs storage for application logs. Logs are written continuously but read only during incident investigations, perhaps once per quarter. Choosing Premium Blob Storage for fast write performance is the over-provisioned trap. The correct answer is Standard Hot for recent logs (the first 30 days) with a lifecycle management policy that moves data to the Cool tier after 30 days and to Archive after 90 days. Premium storage costs $0.15/GB/month; Standard Hot costs $0.018/GB/month; Cool costs $0.01/GB/month; Archive costs $0.002/GB/month. For 1 TB of logs, Premium runs $150/month, while the tiered approach averages under $20/month.
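The tiered figure can be checked with a quick steady-state model. This sketch uses the per-GB rates quoted in the scenario and assumes, for illustration, that logs accrue at a constant rate over one year of retention:

```python
# Steady-state monthly cost for ~1 TB of logs under the lifecycle policy above:
# Hot for the first 30 days, Cool for days 30-90, Archive afterward.
# Rates are the $/GB/month figures from the scenario text.
RATES = {"premium": 0.15, "hot": 0.018, "cool": 0.01, "archive": 0.002}

TOTAL_GB = 1000
RETENTION_DAYS = 365                 # assumed retention window
gb_per_day = TOTAL_GB / RETENTION_DAYS

hot_gb = 30 * gb_per_day             # newest 30 days of logs
cool_gb = 60 * gb_per_day            # days 30-90
archive_gb = TOTAL_GB - hot_gb - cool_gb

premium_cost = TOTAL_GB * RATES["premium"]        # ~$150/month, everything on Premium
tiered_cost = (hot_gb * RATES["hot"]
               + cool_gb * RATES["cool"]
               + archive_gb * RATES["archive"])   # a few dollars/month
```

Most of the data ends up in Archive, so the blended rate collapses toward $0.002/GB, which is why the tiered answer lands well under $20/month.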
How to Spot It
- Azure Blob Storage tiers exist for different access patterns. Premium is for low-latency, high-transaction workloads (databases on disk). Hot is for frequently accessed data. Cool is for data retained at least 30 days. Archive is for data retained at least 180 days, with hours of rehydration time. The exam tests whether you match the tier to the access frequency.
- Azure Cosmos DB provisioned throughput at the 400 RU/s minimum costs roughly $23/month per container. If the scenario describes "occasional reads" or a "low-traffic API," serverless Cosmos DB charges per RU consumed with no minimum, which can be pennies per month for light workloads.
- Auto-scale and elastic tiers (Azure SQL Serverless, Cosmos DB autoscale, App Service autoscaling) are the exam-preferred answer for unpredictable workloads. Fixed provisioned capacity is correct only when the scenario provides specific, stable throughput numbers.
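The Cosmos DB gap in the list above is easy to quantify. A sketch using the ~$23/month provisioned minimum from the text; the serverless rate and per-query RU cost below are illustrative assumptions, not quoted Azure pricing:

```python
# Provisioned minimum vs serverless for a light Cosmos DB workload.
# PROVISIONED_MIN_MONTHLY comes from the text; the serverless rate is an
# assumed ~$0.25 per million RUs, for illustration only.
PROVISIONED_MIN_MONTHLY = 23.0
SERVERLESS_RATE_PER_M_RU = 0.25      # assumed

queries_per_day = 10                 # "occasional reads" scenario
ru_per_query = 5                     # assumed cost of a simple point read
monthly_ru = queries_per_day * ru_per_query * 30   # 1,500 RU/month

serverless_monthly = monthly_ru / 1_000_000 * SERVERLESS_RATE_PER_M_RU
# Well under a cent per month, vs $23 for a provisioned container sitting idle.
```

The exact rates matter less than the shape of the result: at a handful of queries per day, provisioned throughput is thousands of times more expensive than paying per request.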
Decision Rules
- Whether to use Azure Cosmos DB with provisioned RU/s or a cost-tier-matched alternative (Azure Blob Storage Cool tier, Cosmos DB serverless) when access is infrequent and does not demand sustained low-latency throughput.
- Whether to configure Azure Cosmos DB with fixed provisioned RU/s or serverless mode when access is infrequent and batch-only and the scenario states an explicit per-GB cost ceiling alongside the latency SLA.
- Whether to choose Cosmos DB serverless over manually provisioned RU/s when reads are infrequent and burst-shaped, because provisioned RU/s charges continuously regardless of actual consumption and delivers no additional latency benefit over serverless at sub-ten-queries-per-day access patterns.
- Whether to select a transactional command-message broker with native session ordering and dead-letter support (Service Bus) or a high-scale streaming ingestion platform (Event Hubs) when the workload is a command pattern with variable load and a hard cost-efficiency constraint against idle throughput spend.
- Whether to select the messaging service whose native delivery semantics (session-scoped FIFO, dead-letter queues, at-least-once delivery) satisfy a command-message pattern at 5,000 msg/sec without provisioning a dedicated streaming cluster sized for orders-of-magnitude higher event volumes.
- Whether to satisfy a throughput-latency constraint by layering Azure Cache for Redis for read absorption and Azure API Management for throttle-based backpressure, or by over-provisioning dedicated compute nodes that incur a full database round-trip on every request regardless of cache eligibility.
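The messaging rules above can be condensed into a toy decision helper. The function, its categories, and its thresholds are illustrative study aids, not an official decision matrix:

```python
# Toy decision helper for the Service Bus vs Event Hubs rules above.
# Categories and thresholds are illustrative, not an official matrix.
def pick_messaging_service(pattern: str, msgs_per_sec: int,
                           needs_ordering: bool, needs_dead_letter: bool) -> str:
    """Choose a messaging service from scenario signals.

    pattern: "command" (discrete work items) or "stream" (telemetry/events).
    """
    if pattern == "command" and (needs_ordering or needs_dead_letter):
        # Sessions provide FIFO ordering and dead-lettering is built in;
        # 5,000 msg/sec is comfortably within Service Bus territory.
        return "Service Bus"
    if pattern == "stream":
        # High-scale event ingestion favors a partitioned streaming log.
        return "Event Hubs"
    return "Service Bus"
```

The point the helper encodes: a command pattern with ordering or dead-letter requirements answers itself, regardless of throughput, and reaching for a streaming cluster there is the over-provisioned option.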