Azure · AI-300

Multi-Service Tradeoff — Azure AI Engineer (AI-300)

22% of exam questions (44 of 200)

Every Azure Service Optimizes for a Different Constraint

Azure Functions, Container Instances, and AKS can all run containerized ML inference code. The exam distinguishes them by operational posture and scaling behavior: Functions for event-driven, short-duration invocation with no infrastructure management; Container Instances for isolated, short-lived batch jobs; AKS when the team needs persistent, orchestrated workloads with custom scaling policies. Queue Storage appears as a trigger or buffer, not a compute choice. Identify the dominant constraint first—duration, orchestration need, or team capability—then match the service.
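The heuristic above can be encoded as a small decision function. This is a minimal study-aid sketch of the exam's matching logic, not an official Azure sizing rule; the three constraint flags are assumptions chosen to mirror the paragraph.

```python
def pick_compute_service(event_driven: bool, short_lived: bool,
                         needs_orchestration: bool) -> str:
    """Illustrative sketch: map the dominant constraint to a compute service,
    following the heuristic in the text (not an official Azure decision tool)."""
    if needs_orchestration:
        # Persistent, orchestrated workloads with custom scaling policies.
        return "Azure Kubernetes Service"
    if event_driven and short_lived:
        # Event-driven, short-duration invocation, no infrastructure management.
        return "Azure Functions"
    if short_lived:
        # Isolated, short-lived batch jobs.
        return "Azure Container Instances"
    # Long-running work without serverless fit defaults to orchestration.
    return "Azure Kubernetes Service"
```

For example, `pick_compute_service(event_driven=True, short_lived=True, needs_orchestration=False)` returns `"Azure Functions"`, matching the paragraph's first case.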

What This Pattern Tests

Azure offers three messaging services with distinct models. Service Bus handles enterprise messaging with sessions, dead-lettering, and exactly-once delivery at $0.05 per million operations. Event Grid handles reactive event routing with push delivery at $0.60 per million events. Queue Storage handles simple FIFO queueing at $0.004 per 10,000 transactions. The exam gives you a messaging requirement and tests whether you match it: "order processing with dead-letter handling" = Service Bus, "react to blob uploads" = Event Grid, "simple task queue for background workers" = Queue Storage. Cosmos DB vs. SQL Database vs. Table Storage follows the same principle: global multi-model vs. relational with joins vs. simple key-value.
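The requirement-to-service mapping quoted above can be sketched as a keyword match; the keyword lists are illustrative assumptions, not an exhaustive rule. Note that normalizing the quoted rates puts Queue Storage at $0.004 per 10,000 transactions = $0.40 per million, versus $0.05 per million for Service Bus operations and $0.60 per million for Event Grid events.

```python
def pick_messaging_service(requirement: str) -> str:
    """Illustrative keyword-based mapping of exam-style messaging requirements
    to services (keywords are assumptions for study purposes)."""
    req = requirement.lower()
    if any(k in req for k in ("dead-letter", "session", "exactly-once")):
        return "Service Bus"    # enterprise messaging features
    if any(k in req for k in ("react", "blob upload", "event routing")):
        return "Event Grid"     # push-based reactive event delivery
    return "Queue Storage"      # simple FIFO queue for background workers

# Normalizing Queue Storage's quoted rate to cost per million messages:
queue_storage_per_million = 0.004 / 10_000 * 1_000_000  # = $0.40
```

`pick_messaging_service("order processing with dead-letter handling")` returns `"Service Bus"`, mirroring the exam pattern in the paragraph.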

Decision Axis

Message complexity and delivery model determine the service. Over-specifying is as wrong as under-specifying.

Associated Traps

Decision Rules

Whether to register the shared environment and model artifact locally in each team's Azure Machine Learning Workspace (per-workspace duplication) or promote those assets to Azure Machine Learning Registries to share a single canonical version across all three workspaces.

Azure Machine Learning Registries · Azure Machine Learning Workspace

Whether to centralize curated environment definitions in Azure Machine Learning Registries for cross-workspace promotion, or distribute Conda YAML specs via Azure Blob Storage and recreate environments locally in each workspace.

Azure Machine Learning Environments · Azure Machine Learning Registries

Whether to promote shared curated assets to an Azure Machine Learning Registry for cross-workspace reuse or to maintain independent per-workspace copies of those environments and model artifacts registered locally in each workspace's asset store.

Azure Machine Learning Registries · Azure Machine Learning Workspace

Whether workspace-local model registration is sufficient or whether a cross-workspace Azure Machine Learning Registry with lifecycle-stage gating is required to satisfy the cross-team lineage and promotion governance constraint.

Azure Machine Learning Registries · Azure Machine Learning Workspace

Whether to centralize cross-workspace model versioning and lifecycle-stage promotion in Azure Machine Learning Registries, or to replicate workspace-local MLflow registrations and stitch promotions together with custom scripts or pipeline steps.

Azure Machine Learning · Azure Machine Learning Registries

Whether Responsible AI evaluation scores should be captured as native MLflow-logged run properties on the Azure ML model registration record—making them platform-enforced and intrinsically linked—or stored in a separate external metadata store keyed to the model name and version by naming convention.

Azure Machine Learning · MLflow

Whether to use Azure Machine Learning Registries lifecycle-stage gating (platform-native, zero custom code, audit-logged) or Azure Machine Learning Pipelines (flexible but bespoke orchestration) to satisfy a multi-stage promotion requirement that explicitly prohibits custom scripts outside the platform boundary.

Azure Machine Learning Registries · Azure Machine Learning Pipelines
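Lifecycle-stage gating with an audit trail can be modeled in a few lines. This is a minimal pure-Python sketch of the concept; the stage names and the `promote` helper are illustrative assumptions, not the registry's actual API.

```python
# Allowed promotion path (stage names are assumptions for illustration).
ALLOWED_NEXT = {"Development": "Staging", "Staging": "Production"}

def promote(model: dict, audit_log: list) -> dict:
    """Gate a stage promotion: only the allowed next stage is permitted,
    and every transition is appended to an audit log."""
    current = model["stage"]
    nxt = ALLOWED_NEXT.get(current)
    if nxt is None:
        raise ValueError(f"no promotion allowed from stage {current!r}")
    audit_log.append((model["name"], model["version"], current, nxt))
    return {**model, "stage": nxt}
```

A model in `"Development"` can only move to `"Staging"`, then `"Production"`; attempting to promote past the terminal stage raises. The platform-native version of this gating is what makes the Registries option "zero custom code" in the rule above.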

Whether the chosen evaluation configuration satisfies evaluation-coverage-completeness by covering all explicitly required dimensions—groundedness and safety—or addresses only one dimension such as fluency, which looks correct but fails the dominant constraint.

Azure OpenAI Service · Microsoft Foundry
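Evaluation-coverage-completeness reduces to a set check: the required dimensions must all be covered by the configuration. A minimal sketch (the dimension names follow the rule above; the function itself is illustrative):

```python
def coverage_gap(required: set, configured: set) -> set:
    """Return the evaluation dimensions a configuration fails to cover.
    An empty result means the configuration satisfies
    evaluation-coverage-completeness for the stated requirement."""
    return required - configured

required = {"groundedness", "safety"}
# A fluency-only configuration "looks correct but fails the dominant constraint":
assert coverage_gap(required, {"fluency"}) == {"groundedness", "safety"}
# Covering both required dimensions leaves no gap (extra dimensions are fine):
assert coverage_gap(required, {"groundedness", "safety", "fluency"}) == set()
```

The same check applies to the three-dimension rule below (groundedness, relevance, content safety): any non-empty gap disqualifies the option regardless of how plausible it looks.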

Whether deployment-gated evaluation coverage for groundedness and safety is better satisfied by on-demand built-in evaluators in Microsoft Foundry or by a provisioned Azure Machine Learning compute cluster running a custom evaluation pipeline, when the workload is intermittent and cost-efficiency is the dominant constraint.

Microsoft Foundry · Azure Machine Learning

Whether Microsoft Foundry's built-in multi-dimensional evaluators already satisfy all three required quality dimensions (groundedness, relevance, content safety) without additional infrastructure, making Azure Machine Learning Pipelines with custom evaluation code an over-provisioned alternative that violates the stated constraint.

Microsoft Foundry · Azure Machine Learning Pipelines

Whether to satisfy both the real-time safety interception requirement and the ongoing groundedness validation requirement by combining Azure OpenAI Service built-in content filtering with Microsoft Foundry's managed evaluation suite, rather than building a custom evaluation pipeline in Azure Machine Learning with Azure Monitor alerting—which offers metric flexibility but imposes continuous maintenance of pipeline orchestration logic, metric schema definitions, and alert-threshold tuning.

Azure OpenAI Service · Microsoft Foundry

Domain Coverage

Design and Implement an MLOps Infrastructure · Implement Machine Learning Model Lifecycle and Operations · Implement Generative AI Quality Assurance and Observability

Difficulty Breakdown

Medium: 24 · Hard: 12 · Easy: 8