AWS · AIF-C01

Shared Responsibility Confusion — AWS AI Practitioner (AIF-C01)

You put responsibility on the wrong side of the shared responsibility model. Managed services shift the boundary.

AWS Doesn't Own Your Model's Output Behavior

Managed AI services abstract away infrastructure, so candidates assume AWS also governs prompt behavior, output filtering, and fine-tuning data quality. It doesn't. The customer owns prompt engineering decisions, guardrail configuration, and the content fed into fine-tuning jobs. AWS secures the underlying model infrastructure and compute layer. What goes in and what comes out is the customer's responsibility—including bias, toxicity filtering, and data handling at the application level.

20% of exam questions affected (25 of 125)

The Scenario

The question asks who is responsible for patching the operating system on an RDS MySQL instance. You answer "the customer" because you manage OS patches on EC2. But RDS is a managed service: AWS handles OS patching, minor engine patches, and the underlying infrastructure. You are responsible for major engine version upgrades, database-level security (users, grants, network access via security groups), and data encryption configuration. The shared responsibility boundary shifts at every service tier: on EC2 you patch everything, on RDS you manage only data-layer security, and on Lambda you manage only your code and IAM.

How to Spot It

  • Map the service to its management level before answering. EC2 = you manage OS and up. RDS = AWS manages OS, you manage engine config and data access. Lambda = AWS manages everything below your function code. Fargate = AWS manages OS and runtime, you manage container image and task config.
  • Encryption at rest responsibility varies. S3 default encryption is automatic with SSE-S3 (AWS manages keys). KMS customer-managed keys shift key policy and rotation responsibility to you. CloudHSM shifts the entire key lifecycle to you. The exam tests which encryption model puts which responsibilities on which side.
  • When the question mentions a managed service (RDS, Aurora, ElastiCache, OpenSearch), your responsibility shrinks to data access control, network configuration (security groups, VPC placement), and application-level security. If your answer includes "patch the operating system" for a managed service, you are wrong.
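The mapping in the bullets above can be sketched as a lookup table. This is a study aid under my own labels, not an official AWS responsibility matrix; the layer names are simplifications of the tiers described above.

```python
# Quick-reference study aid: which layers AWS manages vs. the customer,
# per service tier (simplified; not an official AWS matrix).
RESPONSIBILITY = {
    "EC2":     {"aws": ["physical hosts", "hypervisor"],
                "customer": ["OS patching", "runtime", "application",
                             "data", "network config"]},
    "RDS":     {"aws": ["physical hosts", "OS patching", "minor engine patches"],
                "customer": ["major engine upgrades", "DB users/grants",
                             "security groups", "encryption config", "data"]},
    "Lambda":  {"aws": ["physical hosts", "OS", "runtime"],
                "customer": ["function code", "IAM permissions", "data"]},
    "Fargate": {"aws": ["physical hosts", "OS", "container runtime"],
                "customer": ["container image", "task config", "IAM", "data"]},
}

def who_patches_os(service: str) -> str:
    """Return which side owns OS patching for a given service tier."""
    duties = RESPONSIBILITY[service]
    return "customer" if "OS patching" in duties["customer"] else "AWS"
```

So `who_patches_os("EC2")` returns `"customer"` while `who_patches_os("RDS")` returns `"AWS"`, which is exactly the trap the scenario above describes.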

Decision Rules

Whether the stated business domain (fraud detection) maps to a purpose-built AWS AI service that eliminates customer ML model ownership, or whether a general-purpose ML platform is required — determined by whether the customer must retain control over model logic or can delegate domain AI responsibility to AWS.

Amazon Fraud Detector vs. Amazon SageMaker AI

When the requirement is NLP inference with zero customer-owned model development, the pre-built NLP API (Amazon Comprehend) satisfies the constraint because AWS owns the inference model; the ML platform (Amazon SageMaker AI) fails because the customer must still supply or train the model regardless of managed infrastructure.

Amazon Comprehend vs. Amazon SageMaker AI
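The responsibility difference is visible in the API contract. A minimal sketch: with Comprehend the customer's entire "model" responsibility is one request, while the actual call requires AWS credentials, so only the request shape is shown here. The example text is made up.

```python
# With the pre-built NLP API, the customer supplies text only; AWS owns
# training, hosting, and the inference model behind the endpoint.
comprehend_request = {
    "Text": "The claims process was fast and easy.",  # illustrative input
    "LanguageCode": "en",
}
# A real call would be:
#   boto3.client("comprehend").detect_sentiment(**comprehend_request)
# Contrast with SageMaker, where the customer must still supply or train
# the model, choose the algorithm/container, and manage the endpoint
# lifecycle even though AWS manages the underlying infrastructure.
```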

When a described business problem falls squarely within the domain of a purpose-built AWS AI service, that service minimizes customer ML responsibilities; choosing SageMaker incorrectly shifts model design, training, and lifecycle ownership onto the customer.

Amazon Fraud Detector vs. Amazon SageMaker AI

Whether content-enforcement controls on a managed FM service are an AWS-managed default or a customer configuration responsibility requiring the team to explicitly define Guardrails and harden the system prompt.

Amazon Bedrock vs. Amazon Comprehend
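The rule above hinges on Guardrails being customer-defined, not a Bedrock default. A sketch of the configuration the customer must author; the parameter names follow boto3's `bedrock` `create_guardrail` operation as I understand it, so treat the exact shape as an assumption and verify against the SDK docs. The guardrail name and messages are made up.

```python
# Customer-side Bedrock Guardrails configuration. Nothing here exists
# until the team explicitly defines it -- content filtering is not an
# AWS-managed default on Bedrock.
guardrail_request = {
    "name": "support-bot-guardrail",  # hypothetical name
    "description": "Blocks harmful content in a customer-facing assistant",
    "contentPolicyConfig": {
        "filtersConfig": [
            {"type": "HATE",     "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    "blockedInputMessaging": "Sorry, I can't help with that request.",
    "blockedOutputsMessaging": "Sorry, I can't provide that response.",
}
# A real call would be:
#   boto3.client("bedrock").create_guardrail(**guardrail_request)
```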

Whether consistent JSON output format is the customer's responsibility to enforce via system-prompt instructions in the Amazon Bedrock API request, or a platform-managed behavior that Amazon Bedrock provides automatically.

Amazon Bedrock vs. Amazon SageMaker AI
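A minimal sketch of the customer-side enforcement: a system prompt pinned into every Converse-style request, plus a defensive parse on the way out, since Bedrock guarantees no output format. The model ID is a placeholder assumption.

```python
import json

# The customer, not Bedrock, enforces JSON shape via the system prompt.
SYSTEM_JSON_RULE = (
    "Respond with a single JSON object only, no prose, "
    'matching {"sentiment": "...", "confidence": 0.0}.'
)

def build_converse_request(user_text: str) -> dict:
    """Assemble a Converse-style request with the JSON rule attached."""
    return {
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # placeholder
        "system": [{"text": SYSTEM_JSON_RULE}],
        "messages": [{"role": "user", "content": [{"text": user_text}]}],
    }

# A real invocation (not run here) would be:
#   boto3.client("bedrock-runtime").converse(**build_converse_request("I love this"))

def parse_model_json(raw: str) -> dict:
    """Customer-side validation: the platform does not guarantee valid JSON."""
    return json.loads(raw)
```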

Whether to apply a prompt engineering technique (chain-of-thought prompting) directly in the Bedrock invocation to control reasoning structure at inference time, versus introducing a managed retrieval or fine-tuning service that addresses a different problem dimension, adds infrastructure overhead, and does not resolve the stated reasoning-visibility symptom.

Amazon Bedrock vs. Amazon Kendra
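Chain-of-thought is a prompt-side fix applied at inference time, not a new service. A minimal sketch, with a made-up insurance task as the example input:

```python
# Wrap the task with an instruction to surface intermediate reasoning
# before the final answer -- applied directly in the Bedrock invocation.
def with_chain_of_thought(task: str) -> str:
    return (
        f"{task}\n\n"
        "Think step by step. Show your reasoning as numbered steps, "
        "then give the final answer on a line starting with 'Answer:'."
    )

prompt = with_chain_of_thought(
    "A policy covers 80% of a $1,200 claim. What is paid out?"
)
# This string goes into the existing invocation request; no Kendra index,
# no fine-tuning job, no additional infrastructure is introduced.
```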

Whether to apply customer-side few-shot or system-prompt persona prompting within Amazon Bedrock to enforce style consistency, versus migrating to a managed assistant service under the mistaken belief that AWS absorbs output-style configuration responsibility.

Amazon Bedrock vs. Amazon Q Business
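Style consistency stays on the customer side of the boundary. A sketch of the prompt work involved, a persona system prompt plus one few-shot exchange in Converse-style message format; the persona text, example turns, and model ID are all illustrative assumptions.

```python
# Customer-authored persona and few-shot examples enforce output style
# within Bedrock; migrating to Amazon Q Business would not absorb this.
PERSONA = "You are a concise, formal insurance assistant. Two sentences max."

FEW_SHOT = [
    {"role": "user",
     "content": [{"text": "Can I add a driver to my policy?"}]},
    {"role": "assistant",
     "content": [{"text": "Yes. Submit the driver's license number "
                          "through the policy portal."}]},
]

def build_styled_request(user_text: str) -> dict:
    """Prepend persona and few-shot turns to the live user message."""
    return {
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # placeholder
        "system": [{"text": PERSONA}],
        "messages": FEW_SHOT + [
            {"role": "user", "content": [{"text": user_text}]}
        ],
    }
```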

Whether to reduce hallucinations through a prompt engineering technique applied within the model invocation (e.g., instructing the model to express uncertainty or answer only from context provided in the prompt) versus invoking a service-level data ingestion solution such as RAG via Amazon Kendra.

Amazon Bedrock vs. Amazon Kendra
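The prompt-only mitigation can be sketched directly: instruct the model to answer solely from supplied context and to say so when it cannot. The context and question below are made-up examples; no Kendra index or RAG pipeline is introduced.

```python
# Hallucination mitigation through prompt engineering alone -- the
# customer owns this instruction; no retrieval service is added.
def grounded_prompt(context: str, question: str) -> str:
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, reply exactly: \"I don't know based on the "
        "provided context.\"\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

p = grounded_prompt(
    "Premiums are due on the 1st of each month.",  # illustrative context
    "When are premiums due?",
)
```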

Whether JSON output format consistency should be enforced through customer-authored prompt instructions within Amazon Bedrock, or delegated to a separate AWS-managed extraction service whose structured outputs are treated as an AWS responsibility.

Amazon Bedrock vs. Amazon Comprehend

Domain Coverage

Fundamentals of AI and ML · Applications of Foundation Models

Difficulty Breakdown

Easy: 9 · Medium: 16
