Azure · AI-300

Deployment and Delivery Design — Azure AI Engineer (AI-300)

6% of exam questions (12 of 200)

Rollback Speed and Blast Radius Drive the Deployment Choice

Bicep and ARM Templates are infrastructure-as-code tools, not deployment strategies. Azure DevOps pipelines orchestrate the release. The architectural decision the exam actually tests is how traffic shifts during a model or service update—canary release, blue-green swap, or rolling replacement—and which combination of tools implements that pattern with the rollback speed the scenario specifies. When blast radius must be minimized, staged traffic shifting via deployment slots or weighted routing takes precedence over the toolchain choice.
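The staged traffic-shifting idea above can be sketched as deterministic bucket routing. This is a hypothetical helper, not an Azure API: each request ID is hashed into one of 100 buckets, and the canary weight decides which deployment serves it, which is how weighted routing keeps blast radius bounded to a fixed percentage of traffic.

```python
import hashlib

def route(request_id: str, canary_weight: int) -> str:
    """Deterministically route a request to 'stable' or 'canary'.

    canary_weight is the percentage of traffic (0-100) sent to the
    canary deployment. Hashing the request id into one of 100 buckets
    keeps routing sticky: the same request id always lands on the same
    deployment for a given weight, which makes canary metrics stable.
    """
    if not 0 <= canary_weight <= 100:
        raise ValueError("canary_weight must be between 0 and 100")
    bucket = int(hashlib.sha256(request_id.encode()).hexdigest(), 16) % 100
    return "canary" if bucket < canary_weight else "stable"
```

Raising the weight from 5 to 25 to 100 widens exposure gradually; setting it back to 0 is the rollback, with no redeployment involved.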

What This Pattern Tests

Azure deployment questions test pipeline architecture with Azure-native tools. Azure Pipelines uses multi-stage YAML pipelines with environments that enforce approval gates and deployment history. For AZ-400, deployment strategies include slot swaps on App Service (zero-downtime blue/green), canary releases with Traffic Manager weighted routing, and ring-based deployments to progressively larger user groups. Azure Artifacts manages NuGet, npm, and Maven packages with upstream sources. For AI-300, Azure ML pipelines handle model training, evaluation, and registration, while managed online endpoints support blue/green deployment by splitting traffic between model versions. GitHub Actions with Azure-native actions provide an alternative to Azure Pipelines with tighter GitHub integration. The trap is using a single release pipeline without environments and approvals, or deploying ML models through a generic CI/CD pipeline instead of Azure ML pipelines.
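A minimal sketch of the blue/green pattern on a managed online endpoint, using the `azure-ai-ml` v2 SDK. The endpoint, model, and workspace names are illustrative placeholders, and `shift_traffic` is a hypothetical helper that captures the percentage-allocation rule; the guarded SDK calls show where such an allocation would be applied.

```python
def shift_traffic(traffic: dict, deployment: str, pct: int) -> dict:
    """Return a new allocation giving `deployment` pct percent and
    scaling the remainder across the other named deployments, so the
    weights always sum to exactly 100."""
    if not 0 <= pct <= 100:
        raise ValueError("pct must be between 0 and 100")
    others = {k: v for k, v in traffic.items() if k != deployment}
    remaining = 100 - pct
    total = sum(others.values())
    new = {k: round(v * remaining / total) if total else 0
           for k, v in others.items()}
    # Absorb rounding drift into the first remaining deployment.
    if new:
        new[next(iter(new))] += remaining - sum(new.values())
    new[deployment] = pct
    return new

if __name__ == "__main__":
    # Illustrative only: these calls require real Azure credentials
    # and an existing workspace, endpoint, and registered model.
    from azure.ai.ml import MLClient
    from azure.ai.ml.entities import ManagedOnlineDeployment
    from azure.identity import DefaultAzureCredential

    ml_client = MLClient(DefaultAzureCredential(),
                         "<subscription-id>", "<resource-group>", "<workspace>")
    green = ManagedOnlineDeployment(
        name="green", endpoint_name="my-endpoint",
        model="azureml:my-model:2",
        instance_type="Standard_DS3_v2", instance_count=1,
    )
    ml_client.online_deployments.begin_create_or_update(green).result()

    endpoint = ml_client.online_endpoints.get("my-endpoint")
    endpoint.traffic = shift_traffic(endpoint.traffic, "green", 10)
    ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```

Both model versions stay live behind one scoring URI; only the traffic dictionary changes, which is what distinguishes this from deploying each version to its own endpoint.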

Decision Axis

Workload type determines pipeline tooling: application code uses Azure Pipelines with slot swaps, ML models use Azure ML pipelines, infrastructure uses Bicep with what-if previews.


Decision Rules

Whether to add the new model version as a second named deployment under the existing Azure ML managed online endpoint and shift traffic via percentage-based allocation rules, versus provisioning a separate managed online endpoint for the new version and routing traffic through an external load balancer or DNS swap.

Azure Machine Learning Endpoints · Azure Machine Learning Workspace

Whether to route progressive traffic using multiple named deployments behind a single Azure Machine Learning managed online endpoint—enabling instant percentage reallocation as rollback—versus provisioning separate endpoint resources per version and switching traffic through an external routing layer that cannot meet the 60-second rollback SLA without additional orchestration.

Azure Machine Learning Endpoints · Azure Monitor

Whether to register the new model version as a named deployment behind the existing Azure Machine Learning managed online endpoint and control exposure via traffic-weight percentages, versus deploying to a separate endpoint resource that isolates the new version but routes rollback through DNS or application-layer URL changes that cannot satisfy a sub-minute SLA.

Azure Machine Learning Endpoints · Azure Monitor
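The decision rules above keep returning to the same point: with named deployments behind one endpoint, rollback is a single traffic reassignment rather than a DNS or load-balancer change. A minimal sketch of that reassignment, using a hypothetical `rollback` helper:

```python
def rollback(traffic: dict, stable: str) -> dict:
    """Instant rollback: send 100% of traffic back to the stable
    deployment and zero out every other named deployment.

    Behind one managed online endpoint this is a single control-plane
    update, so it can meet a sub-minute SLA; separate endpoints per
    version would instead need a DNS or routing-layer change, which
    propagates far more slowly.
    """
    if stable not in traffic:
        raise KeyError(f"unknown deployment: {stable}")
    return {name: (100 if name == stable else 0) for name in traffic}
```

The zeroed deployment stays provisioned, so a later retry is another weight change rather than a fresh deployment.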

Domain Coverage

Implement Machine Learning Model Lifecycle and Operations

Difficulty Breakdown

Medium: 8 · Hard: 4