AWS · AIF-C01

Deployment And Delivery Design — AWS AI Practitioner (AIF-C01)

7% of exam questions (9 of 125)

Rollback Speed Is the Constraint That Picks the Strategy

Deploying a new ML model version under CodeDeploy or ECS means choosing among canary, blue/green, and rolling. Blue/green retains the full previous environment and shifts traffic atomically—rollback takes seconds and requires no redeployment. Canary limits blast radius but forces you to maintain two live inference endpoints simultaneously. Rolling replaces capacity incrementally and offers no instant rollback path. When the scenario states "immediately revert if model accuracy degrades in production," blue/green is the only answer that satisfies the rollback constraint.
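The heuristic above can be condensed into a small decision helper. This is an illustrative sketch, not an AWS API: the function name and boolean constraints are hypothetical, chosen to mirror how exam scenarios phrase the requirements.

```python
def pick_deployment_strategy(instant_rollback_required: bool,
                             can_run_two_fleets: bool,
                             tolerates_partial_exposure: bool) -> str:
    """Map scenario constraints to a deployment strategy (exam heuristic).

    Blue/green is the only strategy with second-scale rollback, because
    the previous environment stays running until traffic is flipped.
    """
    if instant_rollback_required:
        # Blue/green keeps the old fleet warm; rollback is a traffic flip.
        return "blue/green"
    if tolerates_partial_exposure and can_run_two_fleets:
        # Canary limits blast radius but runs two live endpoints at once.
        return "canary"
    # Rolling replaces capacity in place; no instant rollback path.
    return "rolling"

print(pick_deployment_strategy(instant_rollback_required=True,
                               can_run_two_fleets=True,
                               tolerates_partial_exposure=True))
# -> blue/green: the rollback constraint dominates every other axis
```

Note the ordering of the checks: "immediately revert" overrides everything else, which is exactly the trap the exam sets when canary looks attractive.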

What This Pattern Tests

The exam tests deployment pipeline design with service-appropriate strategies. CodePipeline orchestrates source-build-test-deploy stages with manual approval gates. CodeDeploy supports blue/green on ECS (shift traffic between task sets), canary on Lambda (shift 10%, then the remainder after a bake interval), and rolling on EC2. For DOP-C02, CloudFormation StackSets deploy infrastructure across multiple accounts and regions simultaneously, while change sets preview modifications before execution. For AIF-C01 and MLS-C01, SageMaker Pipelines orchestrate ML workflows — data processing, training, evaluation, and model registration — with Model Registry tracking model versions and approval status before deployment to endpoints. The trap is using CodeDeploy for ML model deployment (SageMaker Pipelines handles the ML lifecycle) or applying CloudFormation updates directly, without change sets, in production.
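CodeDeploy's predefined Lambda canary configurations encode the shift schedule in their names (e.g. `Canary10Percent5Minutes`: shift 10%, wait 5 minutes, then shift the rest). As a sketch, the name can be expanded into its traffic steps; the parsing function here is hypothetical, but the naming pattern matches CodeDeploy's real predefined configs.

```python
import re

def canary_schedule(config_name: str) -> list[tuple[int, int]]:
    """Expand a CodeDeploy Lambda canary config name into traffic steps.

    Assumes names of the form 'CanaryXPercentYMinutes', as used by
    CodeDeploy predefined configs such as Canary10Percent5Minutes.
    Returns (cumulative_percent, minutes_waited_before_step) pairs.
    """
    m = re.fullmatch(r"Canary(\d+)Percent(\d+)Minutes", config_name)
    if not m:
        raise ValueError(f"not a canary config name: {config_name}")
    pct, minutes = int(m.group(1)), int(m.group(2))
    # Shift pct% immediately, then the remainder after the bake interval.
    return [(pct, 0), (100, minutes)]

print(canary_schedule("Canary10Percent5Minutes"))  # [(10, 0), (100, 5)]
```

The two-step shape is the point: canary exposes a slice of traffic first, whereas blue/green shifts 100% atomically and rolling has no named schedule at all.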

Decision Axis

Deployment risk tolerance and workload type determine the pipeline: application code uses CodePipeline, infrastructure uses CloudFormation StackSets, ML models use SageMaker Pipelines.
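The decision axis is a straight lookup from workload type to service. A minimal sketch, assuming the three workload labels used in the sentence above (the names are illustrative, not AWS identifiers):

```python
def pipeline_for(workload: str) -> str:
    """Route a workload type to the pipeline service the exam expects."""
    routes = {
        "application_code": "CodePipeline",           # source-build-test-deploy stages
        "infrastructure": "CloudFormation StackSets", # multi-account, multi-region IaC
        "ml_model": "SageMaker Pipelines",            # process-train-evaluate-register
    }
    try:
        return routes[workload]
    except KeyError:
        raise ValueError(f"unknown workload type: {workload}") from None

print(pipeline_for("ml_model"))  # SageMaker Pipelines
```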

Associated Traps


Decision Rules

When the dominant constraint is augmenting FM output with proprietary data while avoiding weight modification and ML operational burden, Amazon Bedrock with RAG satisfies the deployment requirement; Amazon SageMaker does not because it is scoped to model training and custom deployment pipelines.

Amazon Bedrock vs. Amazon SageMaker AI

Whether to access pre-trained foundation models through a fully managed inference API (Amazon Bedrock) or deploy and operate model endpoints through an ML platform (Amazon SageMaker AI), when the decisive constraint is elimination of model-infrastructure management.

Amazon Bedrock vs. Amazon SageMaker AI

Which deployment layer provides a provider-agnostic, unified FM access API so that model substitution is a runtime configuration change rather than an application refactor or infrastructure redeployment?

Amazon Bedrock vs. Amazon SageMaker AI
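The three Bedrock-vs-SageMaker rules above share one axis, and can be condensed into a single chooser. This is a hypothetical helper for study purposes, not an AWS API; the two boolean constraints are assumptions distilled from the rules.

```python
def fm_access_layer(modifies_model_weights: bool,
                    manages_own_endpoints: bool) -> str:
    """Condense the Bedrock vs. SageMaker AI decision rules.

    Bedrock: fully managed, provider-agnostic FM API; RAG augments
    prompts with proprietary data without touching weights, and model
    substitution is a runtime configuration change.
    SageMaker AI: the answer when the workload trains/fine-tunes models
    or deploys and operates its own inference endpoints.
    """
    if modifies_model_weights or manages_own_endpoints:
        return "Amazon SageMaker AI"
    return "Amazon Bedrock"

# RAG over proprietary data, no weight changes, no endpoint management:
print(fm_access_layer(False, False))  # Amazon Bedrock
```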

Domain Coverage

Fundamentals of Generative AI

Difficulty Breakdown

Easy: 6 · Medium: 3