Azure · AZ-400

Over-Provisioning — Azure DevOps Engineer (AZ-400)

You provisioned more capacity or redundancy than the scenario required. The exam rewards right-sizing.

Reserved Capacity Guarantees Performance and Wastes Money Here

Workload profile: batch jobs that run four hours per night, idle the remaining twenty. Competing options: D-series Reserved VM instances versus Azure Batch with auto-scaling Spot nodes. The deciding constraint is utilization pattern—Reserved Instances are cost-optimal for consistently running workloads, not intermittent ones. The exam is not asking whether the larger option handles the load. It is asking whether the spend is justified given the described usage shape.
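The utilization math behind that deciding constraint can be sketched quickly (the 40% reservation discount below is an illustrative assumption, not a real Azure price):

```python
# A Reserved Instance bills for every hour, whether or not the batch runs.
hours_used_per_day = 4
utilization = hours_used_per_day / 24          # ≈ 0.167 (about 17%)

# Assume a hypothetical 40% reservation discount: paying 60% of the
# on-demand rate for 100% of hours still loses to paying the full
# on-demand rate for only 17% of hours.
reserved_relative_cost = 0.60 * 1.0            # always on, discounted
on_demand_relative_cost = 1.0 * utilization    # pay only while running

print(f"utilization: {utilization:.0%}")
print(f"reserved:  {reserved_relative_cost:.2f}x on-demand baseline")
print(f"on-demand: {on_demand_relative_cost:.2f}x on-demand baseline")
```

Spot nodes under Azure Batch undercut even the on-demand figure, which is why the intermittent usage shape, not raw capacity, decides the question.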

4% of exam questions affected (8 of 200)


The Scenario

A team needs storage for application logs. Logs are written continuously but only accessed during incident investigations — maybe once per quarter. You choose Premium Blob Storage for fast write performance. The correct answer is Standard Hot for recent logs (first 30 days) with a lifecycle management policy that moves data to Cool tier after 30 days and Archive after 90 days. Premium storage costs $0.15/GB/month; Standard Hot costs $0.018/GB/month; Cool costs $0.01/GB/month; Archive costs $0.002/GB/month. For 1TB of logs, Premium costs $150/month vs. a tiered approach averaging under $20/month.
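A back-of-the-envelope check of the scenario's numbers, using the per-GB prices quoted above (real Azure prices vary by region and redundancy option):

```python
# Cost check for 1 TB of logs using the per-GB/month prices from the scenario.
GB = 1024  # 1 TB

premium = GB * 0.15    # Premium Blob Storage, everything on premium
all_hot = GB * 0.018   # worst case for the tiered design: nothing moved yet

print(f"Premium:            ${premium:.2f}/month")   # ≈ $153.60
print(f"All-Hot upper bound: ${all_hot:.2f}/month")  # ≈ $18.43

# Steady state with the lifecycle policy (Hot < 30 days, Cool 30-90 days,
# Archive > 90 days), assuming logs accumulate evenly over a year
# (~85 GB/month of new data):
monthly = GB / 12
tiered = monthly * 0.018 + 2 * monthly * 0.01 + 9 * monthly * 0.002
print(f"Steady-state tiered: ${tiered:.2f}/month")   # a few dollars
```

Even the tiered design's worst case, with every byte still sitting in Hot, lands under the $20/month figure, so the comparison does not hinge on how quickly data ages into Cool and Archive.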

How to Spot It

  • Azure Blob Storage tiers exist for different access patterns. Premium is for low-latency, high-transaction workloads (databases on disk). Hot is for frequently accessed data. Cool is for 30+ day retention. Archive is for 180+ day retention with hours of rehydration time. The exam tests whether you match the tier to the access frequency.
  • Azure Cosmos DB provisioned throughput at 400 RU/s (minimum) costs ~$23/month per container. If the scenario describes "occasional reads" or "low-traffic API," serverless Cosmos DB charges per RU consumed with no minimum, which can be pennies per month for light workloads.
  • Auto-scale and elastic tiers (Azure SQL Serverless, Cosmos DB autoscale, App Service auto-scaling) are the exam-preferred answer for unpredictable workloads. Fixed provisioned capacity is correct only when the scenario provides specific, stable throughput numbers.
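The Cosmos DB comparison in the second bullet can be sanity-checked the same way. The unit prices below are assumptions in line with published list prices (roughly $0.008 per 100 RU/s per hour provisioned, roughly $0.25 per million serverless RUs); check current regional pricing before relying on them:

```python
# Provisioned minimum vs. serverless Cosmos DB for a light workload.
HOURS_PER_MONTH = 730

# 400 RU/s is the provisioned floor per container.
provisioned_min = (400 / 100) * 0.008 * HOURS_PER_MONTH

# "Occasional reads": say 100k point reads/month at ~5 RU each.
serverless = (100_000 * 5 / 1_000_000) * 0.25

print(f"Provisioned minimum:      ${provisioned_min:.2f}/month")  # ≈ $23.36
print(f"Serverless, light traffic: ${serverless:.3f}/month")      # pennies
```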

Decision Rules

Whether to expand telemetry collection by raising Container Insights granularity and disabling Application Insights adaptive sampling to capture every request (over-provisioning: it busts the ingestion budget), or to write targeted KQL queries joining the existing Perf, KubePodInventory, requests, and dependencies tables to correlate P99 latency with infrastructure metrics within the current ingestion commitment.
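A sketch of what such a targeted query might look like. The Perf object and counter names follow the standard Container Insights schema, but the join key and bin size are illustrative assumptions that depend on how the telemetry is instrumented:

```kusto
// Hypothetical sketch: correlate request P99 latency with node CPU
// using only tables already in the workspace — no extra ingestion.
let cpu = Perf
    | where ObjectName == "K8SNode" and CounterName == "cpuUsageNanoCores"
    | summarize avg_cpu = avg(CounterValue) by bin(TimeGenerated, 5m);
requests
| summarize p99_ms = percentile(duration, 99) by bin(timestamp, 5m)
| join kind=inner cpu on $left.timestamp == $right.TimeGenerated
| project timestamp, p99_ms, avg_cpu
```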

Application Insights · Azure Log Analytics · Container Insights

Whether to expand Azure Monitor diagnostic settings ingestion across additional telemetry categories to maximize data availability, or to author a targeted KQL query joining existing AppRequests and Perf tables already present in the shared Log Analytics workspace to achieve the required correlation within the current spend envelope.

Application Insights · Azure Log Analytics · Azure Monitor

Domain Coverage

Implement an Instrumentation Strategy

Difficulty Breakdown

Hard: 4 · Expert: 4
