Near-Right Architecture — Azure DevOps Engineer (AZ-400)
Two options were architecturally valid — you picked the one that violates a constraint buried in the scenario. Read constraints before evaluating answers.
The Architecture Works, But Not Here
The scenario specifies a constraint—cost ceiling, team size, compliance scope—that sounds secondary but is actually the deciding factor. Candidates pick the architecturally richer answer because it would work in a broader context. The exam is testing whether you can apply the dominant constraint, not whether you recognize valid Azure patterns. Both options function; only one fits.
The Scenario
The question asks you to design a globally distributed web application with real-time bidirectional communication via WebSockets. Two options: Azure Front Door with backend pools, or Traffic Manager with regional Application Gateways. Both achieve global distribution. But Front Door operates at Layer 7 with HTTP/HTTPS — it supports WebSocket connections. Traffic Manager is DNS-based and does not proxy traffic at all, so it cannot maintain WebSocket connections across failovers. The trap is that Traffic Manager sounds like the "global load balancer" answer, but it only does DNS resolution, not connection proxying.
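A quick way to feel the difference: a WebSocket client holds one TCP connection open for the whole session. Behind Front Door, that connection terminates at the Layer 7 proxy, which can steer it to a healthy backend; behind Traffic Manager, the client connects directly to whichever regional endpoint DNS handed out, and a failover only changes future DNS answers. A minimal sketch using the Python `websockets` package (the hostname is hypothetical):

```python
import asyncio
import websockets  # pip install websockets

async def hold_session(url: str) -> None:
    # One persistent, bidirectional connection for the whole session.
    # Through Front Door this lands on the proxy; through Traffic Manager
    # the client connects straight to the regional endpoint DNS returned,
    # so a later DNS failover cannot rescue this already-open connection.
    async with websockets.connect(url) as ws:
        await ws.send("ping")
        reply = await ws.recv()
        print(f"got: {reply}")

# Hypothetical endpoint; swap in your own Front Door or regional hostname.
asyncio.run(hold_session("wss://contoso-app.example.net/live"))
```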
How to Spot It
- Azure Front Door vs. Traffic Manager is a Layer 7 vs. DNS-level distinction. If the scenario needs connection proxying, SSL offloading, or WebSocket support, Traffic Manager is eliminated. If it only needs DNS-based routing with health probes, Front Door may be over-engineering.
- Pay attention to "real-time," "bidirectional," or "persistent connections." These require a proxy-based load balancer (Front Door, Application Gateway), not DNS-only routing (Traffic Manager).
- When both architectures distribute traffic globally, the tiebreaker is always in the connection semantics: HTTP request-response vs. persistent connections vs. raw TCP. The sketch after this list makes that tiebreaker mechanical.
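A small sketch, my own encoding of the rules above rather than any official decision table, that maps connection semantics to the service choice:

```python
def pick_global_router(needs_proxy_features: bool, persistent_connections: bool) -> str:
    """Encode the tiebreaker: connection semantics decide the service.

    needs_proxy_features: SSL offloading, WAF, path-based routing, etc.
    persistent_connections: WebSockets or other long-lived sessions.
    """
    if persistent_connections or needs_proxy_features:
        # Requires a Layer 7 proxy sitting in the data path.
        return "Azure Front Door (or regional Application Gateway)"
    # DNS-based routing with health probes suffices; anything more is over-engineering.
    return "Azure Traffic Manager"

print(pick_global_router(needs_proxy_features=False, persistent_connections=True))
# Azure Front Door (or regional Application Gateway)
```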
Decision Rules
Whether to choose trunk-based development with short-lived feature branches and Azure Repos branch policies (required reviewers plus build validation) over a GitFlow multi-branch model, because the team's multiple-daily deployment cadence and small size make long-lived release and hotfix branches an anti-pattern under the release-isolation and trunk-stability constraints.
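Branch policies are plain REST resources, so the trunk-based setup is scriptable. A sketch against the Azure DevOps Policy Configurations API (organization, project, repository ID, and PAT are placeholders; the policy-type GUID shown is the well-known "minimum number of reviewers" type, but verify it against your organization's `policy/types` list):

```python
import requests

ORG, PROJECT = "my-org", "my-project"             # placeholders
REPO_ID = "00000000-0000-0000-0000-000000000000"  # placeholder repository ID
PAT = "..."                                       # personal access token

url = f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/policy/configurations?api-version=7.1"
body = {
    "isEnabled": True,
    "isBlocking": True,
    # "Minimum number of reviewers" policy type (verify via GET .../policy/types).
    "type": {"id": "fa4e907d-c16b-4a4c-9dfa-4906e5d171dd"},
    "settings": {
        "minimumApproverCount": 2,
        "creatorVoteCounts": False,
        "scope": [
            {"repositoryId": REPO_ID, "refName": "refs/heads/main", "matchKind": "Exact"}
        ],
    },
}
resp = requests.post(url, json=body, auth=("", PAT))  # PAT goes in the password slot
resp.raise_for_status()
print("created policy:", resp.json()["id"])
```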
Whether manual approval gates or automated Azure Monitor metric-query gates satisfy the dual constraint of objective health-signal promotion criteria and a rollback SLA that cannot depend on human availability or environment reprovisioning time.
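What an "objective health-signal" gate evaluates is just a query plus a threshold, with no human in the loop. A sketch using the `azure-monitor-query` SDK against the workspace-based Application Insights `AppRequests` table (workspace ID and threshold are placeholders):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

# Failure rate over the last 30 minutes; the gate passes only below a threshold.
KQL = """
AppRequests
| summarize failed = countif(Success == false), total = count()
| extend failure_rate = todouble(failed) / total
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(WORKSPACE_ID, KQL, timespan=timedelta(minutes=30))
rate = result.tables[0].rows[0][2]  # third column: failure_rate
print("gate:", "pass" if rate < 0.01 else "fail")
```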
Whether to store secrets as Azure Pipelines secret variables (convenient, encrypted, masked in logs) or in Azure Key Vault accessed via Managed Identity (centralized rotation, per-secret RBAC, full audit trail) — determined by which option satisfies the rotation-without-pipeline-edit and audit-trail constraints simultaneously.
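The Key Vault side of that trade-off is only a few lines; a sketch using `azure-identity` and `azure-keyvault-secrets` (vault URL and secret name are placeholders):

```python
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

# The pipeline agent's managed identity authenticates without any stored secret;
# rotation happens in Key Vault with no pipeline edit, and every read is audited.
credential = ManagedIdentityCredential()
client = SecretClient(vault_url="https://contoso-kv.vault.azure.net", credential=credential)

secret = client.get_secret("deploy-token")  # placeholder secret name
print("current version:", secret.properties.version)  # never log the value itself
```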
Whether to split security and quality validation into two independently scoped Azure Pipelines environments with distinct approver groups and service connections, versus collapsing both approvals into a single shared environment with multiple required reviewers — which satisfies the surface 'two approvals' requirement but merges trust boundaries and eliminates independent auditability.
Whether the proposed integration maintains an unbroken traceability chain covering all four segments (work item → commit, commit → build, build → artifact, artifact → deployment), or satisfies only the work-item-to-commit segment while leaving the build-to-deployment link unrecorded in Azure Boards.
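One way to read that rule: model the chain as four explicit links and fail if any is missing. A toy sketch (the link names are mine, not an Azure Boards schema):

```python
REQUIRED_LINKS = (
    "work_item->commit",
    "commit->build",
    "build->artifact",
    "artifact->deployment",
)

def traceability_gaps(recorded_links: set[str]) -> list[str]:
    """Return every segment of the chain that the integration leaves unrecorded."""
    return [link for link in REQUIRED_LINKS if link not in recorded_links]

# A near-right integration typically records only the first segment:
print(traceability_gaps({"work_item->commit"}))
# ['commit->build', 'build->artifact', 'artifact->deployment']
```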
Whether the dominant clone-time bottleneck is large individual binary blobs (enable Git LFS, which offloads blob content to a separate store and leaves pointer files in the standard clone path) or working-directory tree depth, file count, and history size (use Scalar with sparse-checkout and partial-clone optimizations); the feature combination configured on the Azure Repos repository must match the optimization target to the stated root cause.
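Diagnosing which bottleneck you actually have is scriptable. A sketch that ranks the largest blobs in history using plain `git` plumbing (run inside the repo; a handful of huge blobs points at Git LFS, while a vast object count of small blobs points at Scalar/partial clone):

```python
import subprocess

def largest_blobs(repo: str = ".", top: int = 10) -> list[tuple[int, str]]:
    """Rank reachable blobs by size to see whether LFS or Scalar fits the problem."""
    # Every object reachable from any ref, one "<sha> <path>" per line.
    listing = subprocess.run(
        ["git", "-C", repo, "rev-list", "--objects", "--all"],
        capture_output=True, text=True, check=True,
    ).stdout.splitlines()
    shas = "\n".join(line.split()[0] for line in listing if line.strip())
    # Ask git for each object's type and size in a single batch.
    info = subprocess.run(
        ["git", "-C", repo, "cat-file",
         "--batch-check=%(objecttype) %(objectsize) %(objectname)"],
        input=shas, capture_output=True, text=True, check=True,
    ).stdout
    blobs = []
    for line in info.splitlines():
        otype, size, sha = line.split()
        if otype == "blob":
            blobs.append((int(size), sha))
    return sorted(blobs, reverse=True)[:top]

for size, sha in largest_blobs():
    print(f"{size:>12}  {sha}")
```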
Whether to enable upstream sources on the feed containing internal packages (near-right: unified resolution but opens dependency confusion surface) versus enforcing feed-scope-separation with a dedicated internal-only feed and a separate upstream-proxy feed, with Dependabot monitoring the proxy feed.
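The dependency-confusion surface is concrete: any internal package name that also resolves on the public registry can be shadowed once upstream sources are enabled. A sketch that checks internal names against PyPI's public JSON API (the package names are hypothetical):

```python
import requests

INTERNAL_PACKAGES = ["contoso-billing-core", "contoso-auth-client"]  # hypothetical

def shadowed_on_pypi(name: str) -> bool:
    """True if a public package already claims this internal name."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200  # 200 means a public package exists under this name

for pkg in INTERNAL_PACKAGES:
    status = "SHADOWED on PyPI" if shadowed_on_pypi(pkg) else "not on PyPI"
    print(f"{pkg}: {status}")
```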
Whether GitHub Packages or Azure Artifacts is the correct registry when the scenario explicitly requires upstream source proxying — GitHub Packages cannot proxy public registries as upstream sources, making Azure Artifacts the only viable choice despite sitting outside the GitHub-native toolchain.
Whether to extend the existing Application Insights instrumentation (application-layer telemetry: traces, exceptions, dependencies) or enable Container Insights (cluster/node/pod-layer telemetry: restart counts, pod phase transitions, eviction events) to satisfy a sub-5-minute restart-loop detection constraint.
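The restart-loop signal lives in the Container Insights `KubePodInventory` table, which Application Insights never populates; that asymmetry is the whole answer. A sketch of the detection query via `azure-monitor-query` (workspace ID and restart threshold are placeholders; verify the column names against your workspace schema):

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

# Pods whose container restart count climbed within the 5-minute window.
KQL = """
KubePodInventory
| summarize restarts = max(ContainerRestartCount) by Name, Namespace
| where restarts > 3
"""

client = LogsQueryClient(DefaultAzureCredential())
result = client.query_workspace(WORKSPACE_ID, KQL, timespan=timedelta(minutes=5))
for name, namespace, restarts in result.tables[0].rows:
    print(f"restart loop suspect: {namespace}/{name} ({restarts} restarts)")
```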
Whether to scope the security auditor's query surface to a dedicated Log Analytics workspace (enforcing data classification via workspace-level RBAC) or to grant project-level access in Azure DevOps and restrict visibility through an Analytics view filter (near-right but fails separation of duties because project membership exposes Boards and Sprint navigation).
Whether to authenticate the pipeline workload via a system-assigned Managed Identity on the agent VM (secretless, scope-bounded, no credential rotation) or via a Service Principal whose client secret is retrieved from Azure Key Vault at runtime (vault-backed but still a long-lived secret that violates the no-long-lived-credentials constraint).
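The "secretless" half of that comparison in practice: the agent VM's identity exchanges instance metadata for a short-lived token, so there is nothing to rotate or leak. A sketch with `azure-identity` (this only runs on a VM or agent that actually has a managed identity assigned):

```python
from azure.identity import ManagedIdentityCredential

# No client secret anywhere: the credential talks to the VM's instance
# metadata endpoint and receives a short-lived bearer token in return.
credential = ManagedIdentityCredential()
token = credential.get_token("https://management.azure.com/.default")

# The token expires on its own; there is no long-lived credential to rotate,
# unlike a Service Principal secret pulled from Key Vault at runtime.
print("token expires at:", token.expires_on)
```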