Multi-Service Tradeoff — Azure Fundamentals (AZ-900)
Matching Azure Compute to the Dominant Workload Constraint
Architecture requirement: deploy containerized or event-driven workloads, choosing among options with very different orchestration overhead and scaling behavior. ACI runs isolated containers without orchestration. AKS manages full cluster topology for multi-container workloads. Functions handle stateless, event-driven execution with no idle cost. Queue Storage decouples producers and consumers but computes nothing itself. The deciding constraint is operational-complexity tolerance combined with scaling pattern; the exam gives exactly enough detail to eliminate three of the four options.
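To make the pairing of Functions with Queue Storage concrete, here is a minimal sketch of a queue-triggered function, assuming the Python v2 programming model; the queue name "tasks" and the connection setting are illustrative placeholders, not values from any exam scenario.

```python
# Minimal sketch (assumed names): an Azure Function that drains a Storage queue.
import logging

import azure.functions as func

app = func.FunctionApp()

@app.queue_trigger(arg_name="msg", queue_name="tasks",
                   connection="AzureWebJobsStorage")
def process_task(msg: func.QueueMessage) -> None:
    # Stateless, event-driven execution: the platform scales instances with
    # queue depth and charges nothing while the queue sits empty (Consumption plan).
    payload = msg.get_body().decode("utf-8")
    logging.info("processing %s", payload)
```

The queue does the decoupling and the function does the computing, which is exactly the division of labor described above.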
What This Pattern Tests
Azure offers three messaging services with distinct models. Service Bus handles enterprise messaging with sessions, dead-lettering, and duplicate detection at $0.05 per million operations. Event Grid handles reactive event routing with push delivery at $0.60 per million events. Queue Storage handles simple, high-volume queueing (ordering is best-effort, not guaranteed) at $0.004 per 10,000 transactions. The exam gives you a messaging requirement and tests whether you match it: "order processing with dead-letter handling" = Service Bus, "react to blob uploads" = Event Grid, "simple task queue for background workers" = Queue Storage. Cosmos DB vs. SQL Database vs. Table Storage follows the same principle: globally distributed multi-model vs. relational with joins vs. simple key-value.
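As a hedged illustration of why "dead-letter handling" points at Service Bus, the sketch below reads a queue's dead-letter sub-queue with the azure-servicebus SDK; the connection string and the queue name "orders" are placeholders.

```python
# Minimal sketch (assumed names): inspect and settle dead-lettered order messages.
from azure.servicebus import ServiceBusClient, ServiceBusSubQueue

conn_str = "<service-bus-connection-string>"  # placeholder

with ServiceBusClient.from_connection_string(conn_str) as client:
    # Dead-lettering is a built-in Service Bus capability: poisoned or expired
    # messages land in a dedicated sub-queue instead of being silently lost.
    receiver = client.get_queue_receiver(
        queue_name="orders",
        sub_queue=ServiceBusSubQueue.DEAD_LETTER,
    )
    with receiver:
        for msg in receiver.receive_messages(max_message_count=10, max_wait_time=5):
            print(msg.dead_letter_reason, str(msg))
            receiver.complete_message(msg)
```

Queue Storage has no comparable dead-letter sub-queue, which is one reason an order-processing requirement phrased around dead-letter handling maps to Service Bus.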
Decision Axis
Message complexity and delivery model determine the service, and over-specifying is as wrong as under-specifying: Service Bus for a simple background work queue over-delivers on features the scenario never asked for, while Queue Storage for order processing that needs dead-lettering under-delivers on a required capability.
Associated Traps
Decision Rules
At which shared-responsibility boundary does OS and runtime management transfer from the customer to Microsoft — IaaS (Azure VM, customer owns OS) versus PaaS (Azure App Service, Microsoft owns OS and runtime)?
Whether serverless container execution (Azure Container Instances) or auto-scaling IaaS (Azure Virtual Machine Scale Sets) better satisfies the explicit constraint of eliminating VM configuration and OS update management from the customer's operational scope.
Which Azure service type (IaaS vs PaaS) places OS and runtime management responsibility on Microsoft rather than the customer, satisfying the explicit no-OS-management constraint?
Whether 'managed Kubernetes' (AKS) transfers node-level and orchestration lifecycle duties to Microsoft or whether a serverless model (Azure Functions) is required to satisfy a constraint that eliminates all customer-side infrastructure management.
Whether the workload's 'no cluster management' constraint is fully satisfied by ACI (zero cluster surface owned by customer) or only partially satisfied by AKS (control plane managed by Microsoft, but node pools and upgrade schedules remain customer-owned).
Determine which service type moves OS and runtime lifecycle management to Microsoft versus keeping it with the customer, then match that shared-responsibility boundary to the scenario's explicit constraint to eliminate any option where the customer still owns the infrastructure management layer.
Whether consumption-based serverless pricing is the cost-optimal model for a steady-state, always-on workload, or whether a managed PaaS with predictable dedicated-tier pricing satisfies both the operational-overhead-minimization and cost-predictability constraints simultaneously.
Does 'managed Kubernetes' (AKS) transfer worker-node OS management to Microsoft, or does the customer retain that layer — making Azure Container Instances the only option that satisfies a zero-infrastructure-management constraint?
Does the presence of automatic scaling in an IaaS service (Azure Virtual Machine Scale Sets) eliminate the customer's OS and VM-level management responsibilities, or must the team select a PaaS service (Azure App Service) to truly transfer those duties to Microsoft?
Choose availability zones (intra-region, datacenter-level fault isolation) over region pairs (inter-region disaster recovery) when the dominant constraints are single-datacenter failure tolerance, data residency within one region, and avoidance of cross-region replication costs.
Choose availability zones (intra-region, datacenter-level fault isolation) over region pairs (inter-region disaster recovery) when the dominant constraints are single-datacenter failure tolerance, data residency within one region, and cost minimization.
Whether to rely on an assumed Microsoft platform default for fault distribution or explicitly declare availability zone placement in the ARM deployment to achieve intra-region zone-level resilience without incurring cross-region data-movement costs.
Whether restricting resource deployment to a specific Azure region is a Microsoft platform default triggered by region selection, or a customer responsibility requiring an explicit Azure Policy assignment.
Choose availability zones (intra-region fault-isolation boundary, data remains in-region, customer-configured) over region pairs (inter-region DR boundary, cross-region replication, also customer-configured) when the stated failure scope is a single datacenter and the compliance constraint prohibits cross-region data movement.
Availability zones provide intra-region fault isolation sufficient for a single-datacenter-failure RTO without moving data across geography boundaries. Region pairs extend protection to region-wide outages, but verifying the paired region's residency falls to the customer rather than Microsoft, which makes availability zones the correct and complete answer for the stated constraints.
Whether subscription-level ARM hierarchy boundaries (an independent RBAC scope plus a distinct billing context) or tag-based labeling (cost-grouping metadata only) satisfies the dual requirements of access isolation and billing separation.
Choose availability zones (intra-region, no cross-region data-transfer charges) over region pairs (inter-region geo-replication, which adds replication storage and egress costs) when the RTO target is satisfied by datacenter-level fault isolation and data must remain cost-efficiently within a single region.
Availability zone configuration and billing commitment tier are fully independent controls: zone placement must be explicitly declared at VM provisioning time via Azure Resource Manager regardless of whether the VM uses on-demand or reserved pricing, and selecting a Reserved Instance discount confers no fault-isolation or zone-distribution behavior.
Whether single-datacenter fault tolerance under a region-locked Azure Policy assignment is best achieved with availability zones (intra-region isolation, policy-compliant, no secondary-region cost) or region-pair replication (inter-region, expands residency scope beyond the policy-allowed region and incurs secondary compute and egress charges).
Whether Azure Policy's location-restriction scope inherently confers zone-redundancy within that region, or whether datacenter fault isolation requires an independent, explicit zone-placement declaration in the ARM deployment configuration.
Determine whether Azure tags exert any fault-isolation or placement behavior (they do not) and whether zone-redundant deployment must be explicitly declared in ARM templates (it must, and running instances across zones adds the VM compute charges that the tag-based claim incorrectly dismisses); see the sketch after this list.
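As referenced in the last rule above, zone placement is an explicit declaration in the deployment, not a side effect of tags, policy scope, or pricing tier. The sketch below shows, with placeholder names and region, the shape of the relevant VM resource fragment, expressed as a Python dict for illustration rather than a full ARM template.

```python
import json

# Hypothetical fragment of a zonal VM resource in ARM-template shape.
# Name, region, and tag values are placeholders.
vm_resource_fragment = {
    "type": "Microsoft.Compute/virtualMachines",
    "name": "web-vm-01",
    "location": "eastus2",
    "zones": ["1"],                    # explicit, customer-declared zone placement
    "tags": {"costCenter": "retail"},  # metadata only: no placement or resilience effect
    # hardwareProfile, storageProfile, osProfile, networkProfile omitted for brevity
}

print(json.dumps(vm_resource_fragment, indent=2))
```

Distributing instances across zones means running additional VMs, so the resilience arrives with additional compute charges, the cost point several of the rules above call out.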
Domain Coverage
Difficulty Breakdown