Azure AZ-305 Trap Reference

Commonly Confused Services on AZ-305

AZ-305 is primarily a design exam — most questions require reasoning about governance, identity boundaries, network topology, and organizational constraints, not service identification alone. This page covers the service groups that need quick disambiguation as part of that broader analysis.

Each section below gives you the deciding signal, a quick check, and why the wrong answer keeps looking right.

Azure Service Bus vs. Azure Event Grid vs. Azure Event Hubs vs. Azure Queue Storage
#1

Enterprise messaging vs. event routing vs. event streaming vs. simple queue

All four transfer messages or events between services, so candidates apply Service Bus to all messaging questions.

Deciding signal

Azure Queue Storage is the simplest option — a basic queue for decoupled message delivery with best-effort (not guaranteed) ordering, up to 64 KB per message, and a default 7-day message time-to-live. Service Bus is enterprise messaging with ordering guarantees (via sessions), dead-letter queues, duplicate detection, and message sizes up to 256 KB (premium tier: 100 MB). It suits complex transactional message workflows. Event Grid routes discrete events (resource changes, custom events) to multiple subscribers simultaneously based on event type — it is push-based pub/sub for reactive event-driven architectures. Event Hubs is a high-throughput event streaming platform — it ingests millions of events per second with configurable retention and consumer groups, suited to telemetry, logging, and analytics pipelines.

Quick check

Is this enterprise transactional messaging with ordering and dead-letter (Service Bus), routing discrete events to multiple subscribers (Event Grid), high-throughput event stream ingestion (Event Hubs), or simple decoupled queuing (Queue Storage)?
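The quick check above can be encoded as a small decision helper. The function name and flags below are hypothetical (not any Azure API); the branch order reflects the deciding signals in this section, from most specific requirement to the simple fallback:

```python
def pick_messaging_service(
    needs_ordering_or_dead_letter: bool,
    fans_out_discrete_events: bool,
    high_throughput_stream: bool,
) -> str:
    """Encode the #1 deciding signals as a branch chain.

    Checks run from most specific to least: enterprise messaging
    features win over event routing, routing wins over streaming,
    and Queue Storage is the simple default when nothing else applies.
    """
    if needs_ordering_or_dead_letter:
        return "Service Bus"       # transactional enterprise messaging
    if fans_out_discrete_events:
        return "Event Grid"        # push-based pub/sub event routing
    if high_throughput_stream:
        return "Event Hubs"        # telemetry/analytics stream ingestion
    return "Queue Storage"         # basic decoupled queuing
```

Reading exam stems this way, as a priority-ordered checklist rather than a keyword match, is what keeps Service Bus from becoming the reflex answer.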

Why it looks right

Service Bus is the default messaging answer. Event Grid is correct when discrete events need to fan out to multiple subscribers — not for message retention or ordering guarantees.

Azure Front Door vs. Azure CDN vs. Azure Traffic Manager vs. Azure Application Gateway
#2

Global HTTP proxy with WAF vs. edge content caching vs. DNS routing vs. regional L7 LB

All four improve performance or availability for distributed users, so candidates reach for Front Door as the default "global" answer.

Deciding signal

Application Gateway is regional Layer 7 — it load balances HTTP/HTTPS traffic within a region, terminates SSL, and includes a WAF. It does not operate globally. Azure CDN caches static assets (images, CSS, JavaScript) at edge POPs to reduce origin load for cacheable content. Traffic Manager routes DNS queries to different regional endpoints based on a routing method — it does not proxy or inspect HTTP traffic. Front Door is a global HTTP/HTTPS proxy with routing intelligence, WAF, URL-based routing, health-based failover, and CDN capabilities combined. When the scenario involves a regional web app with WAF and URL routing, choose Application Gateway; when it involves global users needing HTTP routing and failover across regions, choose Front Door.

Quick check

Is this regional Layer 7 routing with WAF (Application Gateway), caching static assets at edge (CDN), DNS-based global routing without HTTP proxy (Traffic Manager), or global HTTP proxy with WAF and failover (Front Door)?

Why it looks right

Front Door and Traffic Manager both operate globally, so candidates conflate them. Traffic Manager only controls DNS resolution — it cannot proxy traffic, terminate TLS, or apply WAF rules.

Azure Kubernetes Service vs. Azure Container Apps vs. Azure Service Fabric
#3

Managed Kubernetes vs. serverless containers with Kubernetes under the hood vs. microservices platform

All three run distributed applications with service orchestration, so candidates apply AKS to all containerized workload questions.

Deciding signal

AKS is managed Kubernetes — you interact with the full Kubernetes API, manage node pools, deploy operators, and configure the cluster. Required when Kubernetes API compatibility, custom operators, or specific cluster configuration is needed. Container Apps is a serverless container platform built on Kubernetes and KEDA — you deploy containers without managing the cluster, get automatic scaling including scale-to-zero, and pay only for what runs. Required when the team wants containers without Kubernetes operational overhead. Service Fabric is a Microsoft-native distributed-systems platform for stateful microservices and actor models — it predates Kubernetes and is used for scenarios requiring stateful service instances with built-in reliability. AKS is the modern default; Container Apps for serverless containers; Service Fabric for stateful actor-model microservices.
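The scale-to-zero behavior can be sketched as a toy replica calculation, a simplified version of the queue-length scaling idea KEDA applies. This is not the actual KEDA algorithm, and the thresholds below are invented for illustration:

```python
import math

def desired_replicas(queue_length: int,
                     msgs_per_replica: int = 10,
                     max_replicas: int = 30) -> int:
    """Toy KEDA-style queue scaling: roughly one replica per
    `msgs_per_replica` pending messages, capped at `max_replicas`,
    and zero replicas when the queue is empty. Scale-to-zero is
    what makes Container Apps billing stop while the app is idle."""
    if queue_length <= 0:
        return 0  # idle: no replicas, no compute cost
    return min(max_replicas, math.ceil(queue_length / msgs_per_replica))
```

On AKS you would configure this behavior yourself (installing KEDA, tuning scalers); on Container Apps the platform runs the equivalent logic for you, which is the operational-overhead distinction the exam tests.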

Quick check

Does the team need full Kubernetes API control (AKS), containers with serverless autoscaling without cluster management (Container Apps), or stateful microservice actors with built-in reliability (Service Fabric)?

Why it looks right

AKS is the prominent container orchestration answer. Container Apps is correct when the scenario explicitly avoids Kubernetes operational complexity — a growing AZ-305 test point.

Azure SQL Database vs. Azure Cosmos DB vs. Azure Synapse Analytics
#4

OLTP relational vs. globally distributed NoSQL vs. analytics warehouse

All three store large datasets, so candidates answer "which database" questions without distinguishing the workload type.

Deciding signal

Azure SQL Database is a fully managed OLTP relational database — strong consistency, ACID transactions, foreign keys, and complex joins. Best for structured transactional workloads. Cosmos DB is globally distributed with multi-region writes, single-digit-millisecond latency at any scale, and flexible schemas via multiple APIs (NoSQL, MongoDB, Cassandra). Best for globally distributed, high-velocity, flexible-schema applications. Azure Synapse Analytics is an integrated analytics platform combining a dedicated SQL pool (data warehouse), serverless SQL, and Apache Spark — it is optimized for OLAP analytical queries over large datasets. The signal is workload type: OLTP transactions (SQL Database), low-latency global NoSQL (Cosmos DB), analytics and BI over large data (Synapse).
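The workload split is visible in query shape alone. A minimal sqlite3 illustration (a stand-in table, not Azure code) contrasts the OLTP pattern, small keyed transactional writes, with the OLAP pattern, a scan-and-aggregate over the whole table:

```python
import sqlite3

# In-memory stand-in database; the point is the query shape, not the engine.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, region TEXT, amount REAL)"
)

# OLTP shape (Azure SQL territory): small transactional writes,
# committed atomically (the `with` block commits or rolls back).
with conn:
    conn.execute("INSERT INTO orders (region, amount) VALUES (?, ?)", ("eu", 40.0))
    conn.execute("INSERT INTO orders (region, amount) VALUES (?, ?)", ("us", 60.0))

# OLAP shape (Synapse territory): scan and aggregate the full dataset.
totals = dict(conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region"
).fetchall())
```

An engine tuned for the first shape (row stores, indexes, locks) is a poor fit for the second (column scans over terabytes), which is why Synapse has a different query pattern and cost model.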

Quick check

Is this a transactional workload with relational data (Azure SQL), a globally distributed low-latency flexible-schema workload (Cosmos DB), or analytical queries over large datasets for BI (Synapse Analytics)?

Why it looks right

Cosmos DB is the modern NoSQL answer, so candidates apply it to analytics scenarios. Synapse is specifically for analytics warehousing — a different query pattern and cost model from Cosmos DB.

VNet Peering vs. Azure Virtual WAN vs. Azure ExpressRoute
#5

Direct VNet-to-VNet routing vs. hub-and-spoke managed WAN vs. dedicated private circuit

All three connect VNets or connect on-premises networks to Azure, so candidates blur direct peering with WAN management.

Deciding signal

VNet Peering provides direct, low-latency routing between two VNets — non-transitive, no gateway required. Best for a small number of VNets in direct relationships. Virtual WAN is a Microsoft-managed networking hub: it connects VNets, branch offices, and ExpressRoute/VPN circuits into a global hub-and-spoke topology with built-in routing and security integration. It is the right answer when the scenario involves dozens of VNets and branch locations needing centralized connectivity management. ExpressRoute connects on-premises to Azure via a private dedicated circuit. When the scenario describes scaling connectivity management across many VNets and sites, Virtual WAN. For direct Azure-to-Azure VNet connectivity, Peering.
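The non-transitive rule is easy to model. A toy reachability check (hypothetical helper, not an Azure SDK call) shows why two spokes peered to the same hub still cannot reach each other over peering alone:

```python
def peered_reachable(peerings: set[frozenset[str]], a: str, b: str) -> bool:
    """VNet peering is non-transitive: traffic flows only over a
    direct peering link, never through an intermediate VNet, so
    reachability is just membership in the set of direct links."""
    return frozenset((a, b)) in peerings

# Classic hub-and-spoke: each spoke peers with the hub only.
links = {frozenset(("hub", "spoke1")), frozenset(("hub", "spoke2"))}
```

Getting spoke-to-spoke traffic requires either a mesh of direct peerings, routing through an appliance in the hub, or Virtual WAN, whose managed hub provides transitive routing. That is the scale signal that flips the answer.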

Quick check

Is this connecting two Azure VNets directly (VNet Peering), managing hub-and-spoke connectivity across many VNets and branch sites (Virtual WAN), or connecting on-premises to Azure via a private dedicated circuit (ExpressRoute)?

Why it looks right

VNet Peering is the default Azure-to-Azure connectivity answer. Virtual WAN is the correct answer when the scenario involves centralized management of many connections — a scale signal that changes the answer.

Azure Functions vs. Azure Logic Apps vs. Azure Durable Functions
#6

Stateless event-triggered compute vs. low-code workflow vs. stateful function orchestration

All three automate workflows and respond to triggers, so candidates apply Azure Functions to all automation questions.

Deciding signal

Azure Functions is stateless event-driven compute — triggered by HTTP, timers, queues, and other events, it executes code and exits. HTTP-triggered executions must return a response within roughly 230 seconds (a platform load-balancer limit); longer work should run asynchronously. Logic Apps is a low-code/no-code workflow automation platform with hundreds of connectors to SaaS services (Salesforce, SharePoint, ServiceNow) — the right answer when the automation involves integrating external services without writing code. Durable Functions extends Azure Functions with stateful orchestration — it supports long-running workflows, fan-out/fan-in, and human-approval patterns using durable orchestrator and activity functions. When the scenario requires long-running stateful multi-step workflows with waits or approvals in code, Durable Functions.
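The fan-out/fan-in shape that Durable Functions codifies can be sketched in plain Python. This is not the Durable Functions SDK (in real orchestrator code the parallel step is expressed with the orchestration context's task-aggregation call); it only shows the pattern:

```python
from concurrent.futures import ThreadPoolExecutor

def process(item: int) -> int:
    # Stand-in for an activity function; hypothetical work (double the input).
    return item * 2

def fan_out_fan_in(items: list[int]) -> int:
    """Fan-out/fan-in: launch activities in parallel, then aggregate
    once all have completed. Durable Functions adds durable state so
    the equivalent orchestration survives restarts and long waits,
    which plain stateless Functions cannot do."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(process, items))  # fan-out: parallel work
    return sum(results)                           # fan-in: aggregate
```

The exam signal is the aggregation step: any scenario where results of parallel or long-running steps must be combined in code implies orchestration state, hence Durable Functions.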

Quick check

Is this stateless event-triggered code execution (Functions), low-code workflow with SaaS connectors (Logic Apps), or stateful long-running orchestration with waiting and approval steps in code (Durable Functions)?

Why it looks right

Azure Functions is the default compute answer. Durable Functions is correct when the workflow has state, fan-out patterns, or human interaction steps — scenarios where stateless Functions cannot maintain context across invocations.

Microsoft Entra External ID (B2C) vs. Microsoft Entra B2B vs. Entra External Identities
#7

Customer identity platform vs. partner federation vs. umbrella term

All involve external users authenticating into an Azure-connected application, so candidates blur customer access with partner access.

Deciding signal

Entra B2B (Business-to-Business) enables partner organizations to access your applications using their own organizational identity — a supplier uses their company credentials to access your partner portal. It is for workforce-to-workforce federation. Entra External ID for customers (formerly B2C, Business-to-Consumer) is a CIAM platform for consumer-facing applications — it manages customer sign-up, sign-in, and profile with social identity provider support (Google, Facebook, Apple). Entra External Identities is the umbrella term covering both B2B and CIAM. The signal is who the external users are: partner employees using organizational credentials (B2B) or consumers using social or email accounts (External ID for customers/B2C).

Quick check

Are the external users partner employees using their organization credentials (B2B), or customers and consumers signing in with email or social accounts (External ID for customers / B2C)?

Why it looks right

B2C is the more elaborate platform, so candidates apply it to B2B partner scenarios. B2B is correct when the external users are from another organization with existing Entra ID credentials.

Azure Migrate vs. Azure Site Recovery vs. Azure Database Migration Service
#8

Discovery and migration assessment vs. DR replication and migration vs. database schema migration

All three appear in migration scenarios, so candidates apply Site Recovery to all lift-and-shift questions.

Deciding signal

Azure Migrate is a discovery and migration hub: it discovers on-premises VMs, servers, and applications; assesses readiness and sizing; and orchestrates migration of VMs to Azure using its server-migration tooling (built on Site Recovery technology). Site Recovery is specifically a DR and business-continuity service — it continuously replicates VMs for failover; when used for migration, it replicates and migrates servers to Azure. Azure Database Migration Service (DMS) migrates database schemas and data from on-premises database engines (SQL Server, MySQL, PostgreSQL, MongoDB, Oracle) to Azure database services. When the scenario involves assessing and discovering on-premises servers before migration, choose Azure Migrate; for database schema and data migration, DMS; for ongoing DR replication and regional failover, Site Recovery.

Quick check

Is this discovering and planning a server migration with assessment (Azure Migrate), migrating database schemas and data to Azure databases (DMS), or continuous VM replication for DR and migration (Site Recovery)?

Why it looks right

Site Recovery is the prominent migration-and-DR service. Azure Migrate is the correct answer when the scenario starts with discovery and assessment — before any replication or migration begins.

Azure Key Vault vs. Azure Managed HSM vs. Azure Managed Identity
#9

Secrets and keys service vs. dedicated hardware key store vs. passwordless resource identity

All three protect secrets or keys, so candidates apply Key Vault universally.

Deciding signal

Azure Key Vault stores secrets, encryption keys, and certificates in a multi-tenant managed service with FIPS 140-2 Level 2 validation. Most workloads use Key Vault. Managed HSM provides dedicated, single-tenant, FIPS 140-2 Level 3 validated HSM hardware — required when regulations mandate customer-exclusive key custody in dedicated hardware. Managed Identity is not a key store — it is a system-assigned or user-assigned Microsoft Entra identity for Azure resources that automatically obtains tokens to authenticate to services such as Key Vault, eliminating the need to store credentials. The pattern: Managed Identity authenticates to Key Vault; Key Vault stores secrets; Managed HSM is used when dedicated hardware is required for the keys.

Quick check

Is this storing secrets and keys in a managed service (Key Vault), requiring dedicated hardware with customer-exclusive key custody (Managed HSM), or giving an Azure resource a passwordless identity to access other services (Managed Identity)?

Why it looks right

Key Vault is the default secrets/keys answer. Managed HSM is correct when the scenario specifies regulatory requirements for dedicated hardware — a level of detail candidates sometimes skim past.

Azure API Management vs. Azure Application Gateway vs. Azure Front Door
#10

Full API platform vs. regional L7 proxy vs. global HTTP routing

All three sit in front of backend services and manage HTTP traffic, so candidates conflate API management with load balancing.

Deciding signal

Application Gateway is a regional Layer 7 load balancer and WAF — it routes to backend pools within a region based on URL rules. Front Door is a global HTTP proxy with intelligent routing, WAF, and CDN capabilities across regions. API Management is a full API platform: it provides a developer portal, API versioning, throttling/rate limiting, subscription keys, OAuth integration, transformation policies, and backend abstraction. It is the right answer when the scenario involves managing a public API surface with developer onboarding, quotas, and transformation policies — not just routing or load balancing.

Quick check

Is this routing HTTP traffic within a region with WAF (Application Gateway), globally routing and accelerating HTTP with WAF (Front Door), or managing a full API lifecycle with developer portal, policies, and subscriptions (API Management)?

Why it looks right

Front Door and Application Gateway are visible "API proxy" options. API Management is the correct answer when the scenario involves developer experience, API versioning, subscription management, or transformation policies — none of which Front Door or App Gateway provide.
