AWS AIF-C01 Trap Reference

Commonly Confused Services on AIF-C01

On AIF-C01, many wrong answers come from applying the wrong AI service to a use case, because several services address similar-sounding problems. The distinctions usually come down to input modality, who the user is, and whether any model training is involved.

This is one layer of preparation for the exam — not the whole picture. Each section below gives you the deciding signal, a quick check, and why the wrong answer keeps looking right.

#1: Amazon Bedrock vs. Amazon SageMaker

Foundation model API access vs. custom ML build-and-train platform

Both are described as AI/ML services, so candidates treat them as alternative paths to the same outcome.

Deciding signal

Amazon Bedrock provides API access to pre-trained foundation models (FMs) from AWS and third-party providers: Anthropic Claude, Meta Llama, Amazon Titan, and others. You call an API; AWS manages the model infrastructure. It is the right answer when the scenario involves building applications on top of existing large language models without training or managing models.

SageMaker is a platform for building, training, tuning, and deploying custom machine learning models. It provides compute infrastructure, managed notebooks, training jobs, hyperparameter tuning, and model hosting. It is the right answer when the scenario involves training a model on proprietary data, managing the ML training pipeline, or deploying a custom model.

The decisive signal is whether a pre-trained foundation model is sufficient or whether custom model training is required.
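The difference shows up in the request shapes. A minimal sketch, assuming placeholder values throughout (the model ID, bucket paths, image URI, and role ARN are not real); in practice these dicts would feed boto3's bedrock-runtime invoke_model and sagemaker create_training_job calls:

```python
import json

# Bedrock: one API call against a managed, pre-trained foundation model.
# This dict is shaped like an invoke_model request for a Claude model
# (model ID and prompt are placeholders).
def bedrock_request(prompt: str) -> dict:
    return {
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # placeholder
        "body": json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 256,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

# SageMaker: you supply training data, a training image, and compute;
# the service runs the training job you define.
def sagemaker_training_request(job_name: str) -> dict:
    return {
        "TrainingJobName": job_name,
        "AlgorithmSpecification": {
            "TrainingImage": "<your-training-image-uri>",  # placeholder
            "TrainingInputMode": "File",
        },
        "InputDataConfig": [{
            "ChannelName": "train",
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://your-bucket/train/",  # placeholder
            }},
        }],
        "OutputDataConfig": {"S3OutputPath": "s3://your-bucket/output/"},
        "ResourceConfig": {"InstanceType": "ml.m5.xlarge",
                           "InstanceCount": 1, "VolumeSizeInGB": 50},
        "StoppingCondition": {"MaxRuntimeInSeconds": 3600},
        "RoleArn": "<your-execution-role-arn>",  # placeholder
    }
```

Bedrock is a single call against infrastructure you never see; the SageMaker request exists only because you are bringing your own data, training code, and compute.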

Quick check

Does the scenario involve using an existing foundation model via API (Bedrock), or training and deploying a custom ML model on your own data (SageMaker)?

Why it looks right

SageMaker is the more established ML service, so candidates reach for it in generative AI scenarios where Bedrock, which requires no model-training infrastructure, is the specific answer.

#2: Amazon Rekognition vs. Amazon Comprehend vs. Amazon Textract vs. Amazon Transcribe

Image/video analysis vs. text NLP vs. document extraction vs. speech-to-text

All four are AWS AI service APIs, so candidates pick the one they remember without mapping it to the input modality.

Deciding signal

Rekognition analyzes images and videos: detecting objects, faces, text in images, celebrities, unsafe content, and custom labels. The input is always an image or video file. Comprehend is a natural language processing service that analyzes text for sentiment, entities, key phrases, language, and topics. The input is text.

Textract extracts structured data from documents (forms, tables, and handwriting), going beyond simple OCR to understand the document layout. The input is a document image (PDF, TIFF, JPEG). Transcribe converts speech in audio or video files to text. The input is audio.

The input modality is the primary discriminator: image/video (Rekognition), audio (Transcribe), document layout (Textract), text content (Comprehend).
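The modality-to-service mapping can be written as a dispatch table. This table is an illustration, not an official mapping: the operation listed for each service is one representative call from its API, and the third column names the parameter that carries the input.

```python
# Illustrative dispatch table: the input modality, not the task wording,
# picks the service. Each tuple is (service, representative operation,
# parameter that carries the input).
SERVICE_BY_MODALITY = {
    "image":    ("rekognition", "detect_labels",           "Image"),
    "audio":    ("transcribe",  "start_transcription_job", "Media"),
    "document": ("textract",    "analyze_document",        "Document"),
    "text":     ("comprehend",  "detect_sentiment",        "Text"),
}

def pick_service(modality: str) -> str:
    """Return the AWS AI service matching the input modality."""
    service, _operation, _param = SERVICE_BY_MODALITY[modality]
    return service
```

On the exam, working backwards from the input file type in the scenario to one of these four rows resolves most questions in this cluster.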

Quick check

Is the input an image or video (Rekognition), audio or speech (Transcribe), a scanned document or form (Textract), or plain text requiring NLP analysis (Comprehend)?

Why it looks right

Comprehend and Textract are frequently confused because both "process text." Textract extracts structured content from document images; Comprehend analyzes existing text strings for meaning and entities.

#3: Amazon Q Business vs. Amazon Q Developer

Enterprise knowledge assistant vs. developer coding assistant

Both are "Amazon Q" products, so candidates treat them as the same service with different names.

Deciding signal

Amazon Q Business is a generative AI assistant for employees. It connects to enterprise data sources (SharePoint, S3, Salesforce, Confluence) and answers questions about business content, policies, and documentation; it is a retrieval-augmented generation (RAG) application targeted at business end-users.

Amazon Q Developer is an AI coding assistant integrated into IDEs and the AWS Console. It generates code, explains AWS services, reviews security vulnerabilities, and assists with infrastructure as code; the user is a developer, and the use case is software development productivity.

When the scenario involves employees asking questions about company policies or internal documents, Q Business. When it involves developers getting code suggestions or AWS service explanations, Q Developer.

Quick check

Are the users employees asking questions about business content (Q Business), or developers needing coding assistance and AWS guidance (Q Developer)?

Why it looks right

Both share the "Amazon Q" brand and both are described as AI assistants. The user base is the distinguishing signal: business knowledge for employees versus code generation for developers.

#4: Amazon Bedrock Knowledge Bases vs. Amazon Bedrock Agents

RAG document retrieval vs. multi-step task execution with tools

Both are Bedrock features for adding context to foundation models, so candidates treat them as interchangeable.

Deciding signal

Bedrock Knowledge Bases implement retrieval-augmented generation (RAG): documents are chunked, embedded, and stored in a vector store. When a user asks a question, relevant chunks are retrieved and provided to the model as context. The model answers based on the retrieved content; it does not take actions.

Bedrock Agents go further: they use foundation models to reason about a task, call tools (Lambda functions, APIs, knowledge bases), interpret results, and take sequential actions to complete a goal.

When the scenario involves answering questions based on a document corpus, Knowledge Bases. When it involves an AI system that autonomously executes multi-step tasks (looking up data, calling an API, updating a record), Agents.
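A toy sketch of the RAG flow that Knowledge Bases manage for you, with naive word-overlap scoring standing in for the real vector-embedding retrieval (the chunks, question, and scoring here are purely illustrative):

```python
# Toy RAG retrieval: rank document chunks by how many words they share
# with the question, then build a prompt from the top matches. The real
# service uses embeddings and a managed vector store, not word overlap.
def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    q_words = set(question.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q_words & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

def rag_prompt(question: str, chunks: list[str]) -> str:
    # The model answers from retrieved context only; it takes no actions.
    context = "\n".join(retrieve(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

An Agent goes beyond this flow: after retrieving context it can decide to call a Lambda function or API, inspect the result, and act again. Knowledge Bases stop at retrieve-and-answer.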

Quick check

Is the goal to answer questions by retrieving relevant documents (Knowledge Bases), or to autonomously complete multi-step tasks by calling tools and APIs (Agents)?

Why it looks right

Both are described as features that add context or capability to foundation models. Agents are the correct answer when the scenario describes taking actions or completing tasks, not just answering questions.

#5: Amazon Kendra vs. Amazon Lex vs. Amazon Q Business

Intelligent enterprise search vs. conversational UI builder vs. generative AI assistant

All three help users find information or get answers, so candidates pick based on whichever "answer questions" framing they recall.

Deciding signal

Kendra is an enterprise search service that uses machine learning to return relevant answers from indexed documents, FAQs, and content repositories. It is traditional search with ML-enhanced relevance: keyword and semantic search over your document corpus.

Lex builds conversational chatbots and voice interfaces. It handles intent recognition, slot filling, and multi-turn dialogue flows, and is the right answer when the scenario involves building a dialog interface, not just search.

Amazon Q Business is a generative AI assistant that uses foundation models to synthesize answers from connected data sources; it produces natural language responses rather than a ranked list of documents.

When the scenario involves building a chatbot with defined intents and utterances, Lex. When it involves a search-style interface over enterprise documents, Kendra. When it involves a generative AI assistant that synthesizes answers from company data, Q Business.
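Lex's defining structure, intents with slots, can be shown with a toy matcher. Real Lex uses trained NLU models rather than regexes, and the intent names and patterns below are made up for illustration:

```python
import re

# Toy sketch of the intent/slot structure Lex is built around: match an
# utterance to a defined intent and fill its named slots. Real Lex does
# this with trained NLU models, not regular expressions.
INTENTS = {
    "BookHotel":  re.compile(r"book .*hotel in (?P<city>\w+)", re.I),
    "CheckOrder": re.compile(r"where is order (?P<order_id>\d+)", re.I),
}

def recognize(utterance: str):
    for name, pattern in INTENTS.items():
        m = pattern.search(utterance)
        if m:
            return name, m.groupdict()  # intent plus filled slots
    return None, {}  # no defined intent matches
```

The no-match case is the tell: an open-ended question about company policies has no defined intent, so it belongs to Kendra (ranked results) or Q Business (synthesized answer), not Lex.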

Quick check

Is this a structured chatbot with defined intents (Lex), ML-enhanced document search returning ranked results (Kendra), or a generative AI assistant synthesizing answers from enterprise content (Q Business)?

Why it looks right

Kendra and Q Business both "answer questions from documents" and candidates conflate them. Q Business uses foundation models to synthesize a single answer; Kendra returns ranked search results from indexed documents.

#6: SageMaker Canvas vs. SageMaker Autopilot vs. SageMaker JumpStart

No-code ML for business users vs. AutoML with explainability vs. pre-trained model hub

All three reduce the ML expertise required, so candidates treat them as equivalent "low-code ML" options.

Deciding signal

SageMaker Canvas is a no-code ML interface for business analysts — it imports data, trains a model, and generates predictions through a point-and-click UI without any coding. SageMaker Autopilot runs AutoML: it automatically trains and tunes models on your data and produces an explainability report. SageMaker JumpStart provides a hub of pre-trained models that can be deployed in one click without training. The signal is user type and whether model training is happening: no-code for non-technical business users (Canvas), automated training on your own data (Autopilot), or deploying an already-trained model (JumpStart).
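A sketch of the request that starts an Autopilot job, shaped after SageMaker's create_auto_ml_job API; the bucket paths, target column, job name, and role ARN are placeholders:

```python
# Autopilot is the "train automatically on your own data" path: you
# point it at a dataset and name the column to predict, and it trains
# and tunes candidate models. Bucket, role ARN, and column names below
# are placeholders.
def autopilot_request(job_name: str, target_column: str) -> dict:
    return {
        "AutoMLJobName": job_name,
        "InputDataConfig": [{
            "DataSource": {"S3DataSource": {
                "S3DataType": "S3Prefix",
                "S3Uri": "s3://your-bucket/training.csv",  # placeholder
            }},
            "TargetAttributeName": target_column,  # column to predict
        }],
        "OutputDataConfig": {"S3OutputPath": "s3://your-bucket/automl-output/"},
        "ProblemType": "BinaryClassification",
        "AutoMLJobObjective": {"MetricName": "F1"},
        "RoleArn": "<your-execution-role-arn>",  # placeholder
    }
```

Neither sibling involves a request like this: Canvas drives the same kind of training through a point-and-click UI, and JumpStart deploys a model that is already trained.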

Quick check

Is this a no-code ML tool for non-technical business users (Canvas), automated model training with tuning and explainability (Autopilot), or deploying an existing pre-trained model from a hub (JumpStart)?

Why it looks right

JumpStart is the common wrong answer because it sounds like it handles all "pre-built ML" scenarios. Autopilot is correct when the scenario involves training a model on your own data automatically; JumpStart is for deploying an already-trained model.

#7: Amazon Polly vs. Amazon Transcribe

Text-to-speech vs. speech-to-text

Both are speech services, so candidates confuse the direction of conversion: which service turns what into what.

Deciding signal

Amazon Polly converts text to lifelike speech — given a string of text, it produces an audio file. It is used for voice-enabled applications, narration, accessibility features, and IVR prompts. Amazon Transcribe converts speech in audio or video files to text — it performs automatic speech recognition (ASR). It is used for transcription of call recordings, video captions, and voice command processing. The direction is the entire distinction: text-to-speech (Polly) or speech-to-text (Transcribe).
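The direction of conversion is visible in the two request shapes. The voice ID, job name, and S3 URI are placeholders; the dicts mirror the parameters of Polly's synthesize_speech and Transcribe's start_transcription_job calls:

```python
def polly_request(text: str) -> dict:
    # Text in -> speech audio out.
    return {"Text": text, "OutputFormat": "mp3", "VoiceId": "Joanna"}

def transcribe_request(audio_uri: str) -> dict:
    # Audio in -> text transcript out.
    return {
        "TranscriptionJobName": "call-recording-001",  # placeholder
        "Media": {"MediaFileUri": audio_uri},
        "MediaFormat": "mp3",
        "LanguageCode": "en-US",
    }
```

Polly's input parameter is Text; Transcribe's is a Media file URI. Reading the input parameter settles the direction even in a complex pipeline question.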

Quick check

Does the scenario start with text and need to produce speech audio (Polly), or start with audio/video and need a text transcript (Transcribe)?

Why it looks right

Both are described as "speech services" in the same breath. The direction of conversion is easy to lose track of when a question describes a complex pipeline involving audio.

Train these confusions, not just read them

10 AIF-C01 questions. Pattern-tagged with trap analysis. Free, no signup required.

Start AIF-C01 Mini-Trainer →