Deployment And Delivery Design — Azure DevOps Engineer (AZ-400)
Rollback Speed Changes Which Deployment Strategy Wins
Requirement: deploy to production with a guaranteed rollback window of under five minutes if the error rate exceeds a threshold. Competing strategies: a blue-green swap via Azure Front Door traffic routing, or a rolling update across App Service instances. The deciding constraint is rollback atomicity: blue-green reverts with a single traffic weight change, while a rolling update requires re-deploying the previous artifact across all instances. The scenario's rollback SLA, not deployment speed, determines the correct pattern.
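The atomicity difference shows up directly in pipeline code. A minimal Azure Pipelines sketch of the blue-green revert on App Service, assuming illustrative names (contoso-api, contoso-rg, and a service connection called azure-prod-sc):

```yaml
# Rollback as a single atomic operation: one slot swap returns all
# traffic to the previous build, with no artifact re-deployment.
steps:
  - task: AzureAppServiceManage@0
    displayName: Roll back by swapping slots
    inputs:
      azureSubscription: azure-prod-sc   # illustrative service connection
      action: 'Swap Slots'
      webAppName: contoso-api            # illustrative app name
      resourceGroupName: contoso-rg
      sourceSlot: staging                # previous build lives here after the first swap
```

A rolling rollback, by contrast, is a full re-deployment of the prior artifact to every instance, which is what breaks the five-minute SLA.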
What This Pattern Tests
Azure deployment questions test pipeline architecture with Azure-native tools. Azure Pipelines uses multi-stage YAML pipelines with environments that enforce approval gates and record deployment history. For AZ-400, deployment strategies include slot swaps on App Service (zero-downtime blue/green), canary releases with Traffic Manager weighted routing, and ring-based deployments that expand to progressively larger user groups. Azure Artifacts manages NuGet, npm, and Maven packages with upstream sources. On the ML side, Azure ML pipelines handle model training, evaluation, and registration, while managed online endpoints support blue/green deployment by splitting traffic between model versions. GitHub Actions with Azure-native actions provides an alternative to Azure Pipelines with tighter GitHub integration. The trap is using a single release pipeline without environments and approvals, or deploying ML models through a generic CI/CD pipeline instead of Azure ML pipelines.
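The environment-plus-approvals structure described above can be sketched as a minimal multi-stage YAML pipeline. Stage and environment names here are illustrative, and the approval check itself is configured on the environment in the Azure DevOps UI, not in the YAML:

```yaml
trigger:
  branches:
    include: [main]

stages:
  - stage: Build
    jobs:
      - job: Build
        steps:
          - script: echo "build and publish the artifact"

  - stage: DeployProd
    dependsOn: Build
    jobs:
      - deployment: Deploy        # deployment jobs record history against the environment
        environment: production   # approvals and checks attach here
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy to production"
```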
Decision Axis
Workload type determines pipeline tooling: application code uses Azure Pipelines with slot swaps, ML models use Azure ML pipelines, infrastructure uses Bicep with what-if previews.
Associated Traps
Decision Rules
Whether to satisfy the Teams notification requirement through a custom integration path (GitHub PR events wired to Teams via incoming webhooks, Azure DevOps service hooks, Power Automate flows, or Logic Apps), which meets notification latency but leaves audit completeness and work item traceability unmet, or through the first-party Azure Boards GitHub App plus the Azure Boards app for Microsoft Teams with subscription filters, which satisfies notification latency, zero-gap traceability, and the zero-maintenance and single-source-of-truth constraints as a managed unit.
Choose trunk-based development with short-lived feature branches, enforced via Azure Repos branch policies on main (minimum reviewer count plus a build validation pipeline), over a GitFlow multi-branch model. At a small team's continuous-delivery cadence of multiple daily deployments, long-lived release and hotfix branches accumulate merge debt, slow detection and rollback of a bad commit, and degrade hotfix recovery speed; small, frequently merged changesets minimize the cost of any single integration failure, while GitFlow's apparent release isolation does not offset that cost.
Select the Azure Pipelines deployment topology—among backup-restore, pilot light, warm standby, and active-active patterns—that satisfies BOTH the stated RPO ≤ 15 min and RTO ≤ 60 min targets; warm standby (pipeline deploys to secondary on every run, secondary runs at minimum scale) satisfies both, while a post-deploy backup stage satisfies RPO but fails RTO because restoration duration exceeds 60 minutes.
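A warm-standby topology of this kind can be sketched as two sequential deployment stages, so every run leaves the secondary region current. Region and environment names are illustrative:

```yaml
stages:
  - stage: DeployPrimary
    jobs:
      - deployment: Primary
        environment: prod-eastus   # full scale
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy artifact to primary region"

  - stage: DeploySecondary
    dependsOn: DeployPrimary
    jobs:
      - deployment: Secondary
        environment: prod-westus   # minimum scale; failover is a traffic switch plus scale-out
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy the same artifact to the warm standby"
```

Because the secondary already runs the current artifact, failover needs no restore step, which is what lets it meet the 60-minute RTO.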
Whether to satisfy zero-downtime and schema dependency ordering by adding a parallel blue-green environment with its own schema orchestration, or by enforcing migration-first ordering inside a sequential Azure Pipelines stage with an Application Insights deployment gate and slot swap — choosing the approach that closes both constraints without introducing new schema-state coordination complexity the team is not equipped to manage.
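Migration-first ordering inside a single sequential stage can be sketched as a hard job dependency, again with illustrative names; the Application Insights gate is an environment check rather than YAML:

```yaml
jobs:
  - job: Migrate
    steps:
      - script: echo "apply backward-compatible schema migration"

  - deployment: Swap
    dependsOn: Migrate        # no swap unless the migration succeeded
    environment: production   # Application Insights check gates this environment
    strategy:
      runOnce:
        deploy:
          steps:
            - task: AzureAppServiceManage@0
              inputs:
                azureSubscription: azure-prod-sc   # illustrative service connection
                action: 'Swap Slots'
                webAppName: contoso-api
                resourceGroupName: contoso-rg
                sourceSlot: staging
```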
Whether blue-green deployment alone satisfies both the zero-downtime RTO requirement and the zero-data-loss RPO requirement on rollback, or whether an expand-contract database migration pattern gated by feature flags is also required to satisfy the RPO constraint independently of traffic switching.
Whether the selected guest OS configuration mechanism independently satisfies both the RPO (maximum undetected drift window) and the RTO (maximum time to return to desired state after detection), given that a DSC pull interval can match the RPO number while the pull-server remediation execution model structurally cannot meet the separate RTO target.
Whether to implement cross-pipeline reuse via YAML step or job template includes (template: references) versus structural template inheritance enforced by Azure DevOps required-template pipeline policy (extends: plus policy), where only the latter satisfies both automatic propagation and structural stage enforcement simultaneously.
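The include-versus-inheritance distinction can be sketched with two files, both illustrative. A template: include pastes steps wherever the consumer asks for them; extends: forces the consumer into the template's stage skeleton, and a Required template check on a protected resource rejects pipelines that do not extend it:

```yaml
# governed-pipeline.yml in a central repo: the enforced skeleton.
parameters:
  - name: buildSteps
    type: stepList
    default: []

stages:
  - stage: Build
    jobs:
      - job: Build
        steps:
          - ${{ each step in parameters.buildSteps }}:
              - ${{ step }}
  - stage: SecurityScan   # structurally guaranteed for every consumer
    jobs:
      - job: Scan
        steps:
          - script: echo "mandatory scan"
```

```yaml
# azure-pipelines.yml in a product repo: must extend, not include.
resources:
  repositories:
    - repository: templates
      type: git
      name: PlatformTeam/pipeline-templates   # illustrative project/repo

extends:
  template: governed-pipeline.yml@templates
  parameters:
    buildSteps:
      - script: echo "app-specific build"
```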
Whether manual approval gates or automated Azure Monitor metric-query gates satisfy the dual constraint of objective health-signal promotion criteria and a rollback SLA that cannot depend on human availability or environment reprovisioning time.
Whether to store secrets as Azure Pipelines secret variables (convenient, encrypted, masked in logs) or in Azure Key Vault accessed via Managed Identity (centralized rotation, per-secret RBAC, full audit trail) — determined by which option satisfies the rotation-without-pipeline-edit and audit-trail constraints simultaneously.
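The Key Vault side of this rule can be sketched as a runtime fetch, assuming an illustrative vault (contoso-kv), secret (DbConnectionString), and a service connection backed by a managed identity:

```yaml
steps:
  - task: AzureKeyVault@2
    inputs:
      azureSubscription: azure-prod-sc     # service connection using a managed identity
      KeyVaultName: contoso-kv
      SecretsFilter: 'DbConnectionString'  # fetch only what this run needs
      RunAsPreJob: false
  - script: echo "connection string is available as a masked variable"
    env:
      DB_CONN: $(DbConnectionString)       # mapped at runtime; never stored in the pipeline
```

Rotating the secret in Key Vault changes nothing in the pipeline definition, and each vault access leaves a per-secret audit trail.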
Choose workload identity federation via OIDC federated credentials on a managed identity or app registration over storing service principal client secrets as GitHub Actions secrets — because client secrets require per-repo secret updates on every rotation, violating the zero-pipeline-change-on-rotation constraint, while OIDC tokens are issued at runtime and never stored, satisfying both least-privilege and rotation constraints at scale.
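A GitHub Actions sketch of the OIDC path, with illustrative secret names that hold only non-credential identifiers (client, tenant, and subscription IDs):

```yaml
permissions:
  id-token: write   # let the job request an OIDC token at runtime
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: azure/login@v2
        with:
          client-id: ${{ secrets.AZURE_CLIENT_ID }}   # an ID, not a credential
          tenant-id: ${{ secrets.AZURE_TENANT_ID }}
          subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }}
          # no client-secret input: the federated credential on the
          # identity exchanges the runtime OIDC token for an Azure token
      - run: az account show
```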
Domain Coverage
Difficulty Breakdown