Amazon Cognito User Pools vs. Amazon Cognito Identity Pools
#1: Both are Cognito components, so candidates treat them as interchangeable or pick the one they remember first.
Deciding signal
User Pools handle user authentication: sign-up, sign-in, MFA, password policies, and JWT token issuance. They answer "who is this user?" Identity Pools (Federated Identities) exchange tokens — from User Pools, social providers, SAML IdPs, or guest access — for temporary AWS credentials via STS. They answer "what AWS resources can this user access?" When the scenario describes login flows, user directories, or OAuth/OIDC token issuance, User Pools. When the scenario describes giving authenticated or unauthenticated users direct access to AWS services like S3 or DynamoDB, Identity Pools.
Quick check
Is the requirement to authenticate users and issue tokens (User Pools), or to grant users temporary AWS credentials to call AWS services directly (Identity Pools)?
Why it looks right
The names sound similar and both appear in "authentication" questions. The distinction — authentication versus credential vending — is easy to miss when the question describes both steps in a single flow.
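The two-step Identity Pools flow can be sketched in a few lines. This is illustrative, not an official recipe: the region and both pool IDs are hypothetical placeholders, and boto3 is imported inside the function so the sketch reads without it installed.

```python
def get_temp_credentials(id_token: str) -> dict:
    """Exchange a User Pool ID token for temporary AWS credentials (sketch)."""
    import boto3  # imported lazily; assumes boto3 is available at call time

    region = "us-east-1"                                  # hypothetical
    user_pool_id = "us-east-1_ExamplePool"                # hypothetical User Pool
    identity_pool_id = "us-east-1:example-identity-pool"  # hypothetical Identity Pool

    ci = boto3.client("cognito-identity", region_name=region)
    # The Logins map keys the token by its issuer, the User Pool here.
    logins = {f"cognito-idp.{region}.amazonaws.com/{user_pool_id}": id_token}

    # Step 1: resolve an identity; Step 2: trade the token for STS credentials.
    identity = ci.get_id(IdentityPoolId=identity_pool_id, Logins=logins)
    creds = ci.get_credentials_for_identity(
        IdentityId=identity["IdentityId"], Logins=logins
    )
    # Temporary credentials: AccessKeyId, SecretKey, SessionToken, Expiration
    return creds["Credentials"]
```

Note that the User Pool authenticates and issues the token; only the `get_credentials_for_identity` call, an Identity Pools API, vends AWS credentials.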
AWS CodeBuild vs. AWS CodeDeploy vs. AWS CodePipeline
#2: All three have "Code" in the name and appear together in CI/CD questions, so candidates conflate their roles.
Deciding signal
CodeBuild compiles source code, runs tests, and produces build artifacts — it is the CI build stage. CodeDeploy takes a deployable artifact and deploys it to EC2 instances, Lambda functions, or ECS services using configurable deployment strategies (blue/green, rolling, canary). CodePipeline is the orchestrator: it connects source, build, and deploy stages into an automated pipeline. A question asking "which service deploys code to EC2 with a rolling strategy" points to CodeDeploy. A question asking "which service automates the end-to-end release process from source to production" points to CodePipeline.
Quick check
Is the requirement to compile and test (CodeBuild), to deploy a built artifact to compute resources (CodeDeploy), or to orchestrate the entire CI/CD workflow (CodePipeline)?
Why it looks right
CodePipeline is the most visible CI/CD service, so candidates apply it to deployment-specific questions. CodeDeploy is the service that actually executes the deployment logic and controls the deployment strategy.
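The division of labor shows up in the API call that runs a deployment. A sketch of the CodeDeploy `CreateDeployment` request, the stage a pipeline would invoke for you; application, group, and bucket names are hypothetical, while the deployment config name is a real built-in rolling strategy:

```python
# CodeDeploy executes the deployment; CodePipeline merely orchestrates stages.
deployment_request = {
    "applicationName": "my-app",           # hypothetical
    "deploymentGroupName": "prod-fleet",   # hypothetical EC2 deployment group
    # Built-in rolling strategy: update one instance at a time.
    "deploymentConfigName": "CodeDeployDefault.OneAtATime",
    "revision": {
        "revisionType": "S3",
        "s3Location": {
            "bucket": "my-artifacts",      # hypothetical; CodeBuild's output
            "key": "app.zip",
            "bundleType": "zip",
        },
    },
}
# boto3.client("codedeploy").create_deployment(**deployment_request)
```

The artifact in S3 is what CodeBuild produced; the pipeline's job is only to hand it from the build stage to this deploy stage.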
API Gateway REST API vs. API Gateway HTTP API
#3: Both are API Gateway types, so candidates treat the REST API as the default without checking for cost or feature requirements.
Deciding signal
REST APIs in API Gateway support the full feature set: request/response transformation, API keys, usage plans, per-stage caching, WAF integration, and edge-optimized endpoints with CloudFront. HTTP APIs are a simpler, lower-latency, and lower-cost option — up to 70% cheaper — that supports JWT authorizers, Lambda proxy, and private integrations, but not the full feature set. On DVA-C02, the exam tests whether candidates can identify that HTTP APIs are the correct answer when the scenario describes a simple proxy to Lambda or a backend HTTP endpoint without a requirement for features unique to REST APIs.
Quick check
Does the scenario require REST API-only features like per-method caching, WAF, or request transformation (REST API), or is a lightweight proxy with JWT auth sufficient (HTTP API)?
Why it looks right
REST API is the familiar default. Candidates overlook HTTP APIs because the name implies a capability gap that does not exist for most Lambda proxy use cases.
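How lightweight an HTTP API is comes through in its "quick create" call: one request wires an API straight to a Lambda proxy integration. A sketch of the `apigatewayv2 CreateApi` parameters, with a hypothetical name and function ARN:

```python
# HTTP APIs live in the API Gateway v2 API; ProtocolType distinguishes them
# from WebSocket APIs. REST APIs use the separate v1 (apigateway) API.
http_api_request = {
    "Name": "orders-api",    # hypothetical
    "ProtocolType": "HTTP",
    # Quick create: Target makes this a default Lambda proxy route.
    "Target": "arn:aws:lambda:us-east-1:123456789012:function:orders",  # hypothetical ARN
}
# boto3.client("apigatewayv2").create_api(**http_api_request)
```

If the scenario needs usage plans, per-method caching, or WAF, this option is off the table and the REST API is the answer.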
DynamoDB Global Secondary Index (GSI) vs. DynamoDB Local Secondary Index (LSI)
#4: Both are secondary indexes on DynamoDB tables, so candidates pick one without checking which key attribute the query uses.
Deciding signal
An LSI must be created at table creation time and shares the same partition key as the base table — it only adds an alternate sort key. LSIs are limited to 10 GB per partition key value. A GSI can have a completely different partition key and sort key from the base table and can be created or deleted at any time. GSIs provide eventual consistency by default; LSIs can provide strongly consistent reads. When the scenario requires querying on a different partition key entirely, only a GSI works. When the scenario uses the same partition key but needs to sort or filter on a different attribute, an LSI is a valid option — but candidates often confuse which constraint applies.
Quick check
Does the query use the same partition key as the table (LSI is possible), or does it require a different partition key entirely (GSI required)?
Why it looks right
GSI is the more flexible and commonly used index, so candidates apply it everywhere. LSIs are sometimes the correct answer when the design already has the right partition key and the exam is testing knowledge of the 10 GB partition limit or strong consistency availability.
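The structural difference is visible in a `CreateTable` request: the LSI's partition key must repeat the table's, while the GSI declares its own. A sketch with hypothetical table, attribute, and index names:

```python
table_request = {
    "TableName": "Orders",  # hypothetical
    "KeySchema": [
        {"AttributeName": "UserId", "KeyType": "HASH"},
        {"AttributeName": "OrderId", "KeyType": "RANGE"},
    ],
    "AttributeDefinitions": [
        {"AttributeName": "UserId", "AttributeType": "S"},
        {"AttributeName": "OrderId", "AttributeType": "S"},
        {"AttributeName": "OrderDate", "AttributeType": "S"},
        {"AttributeName": "Status", "AttributeType": "S"},
    ],
    # LSI: created only here, at table creation; same partition key (UserId),
    # alternate sort key; 10 GB cap per partition key value.
    "LocalSecondaryIndexes": [{
        "IndexName": "ByDate",
        "KeySchema": [
            {"AttributeName": "UserId", "KeyType": "HASH"},
            {"AttributeName": "OrderDate", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    # GSI: entirely different partition key; could also be added later
    # with UpdateTable. Reads are eventually consistent.
    "GlobalSecondaryIndexes": [{
        "IndexName": "ByStatus",
        "KeySchema": [{"AttributeName": "Status", "KeyType": "HASH"}],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
    }],
    "BillingMode": "PAY_PER_REQUEST",
}
# boto3.client("dynamodb").create_table(**table_request)
```

Querying by `Status` alone is only possible through the GSI; the LSI still requires a `UserId`.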
Amazon ElastiCache vs. Amazon DynamoDB Accelerator (DAX)
#5: Both add an in-memory caching layer, so candidates treat them as interchangeable when DynamoDB is in scope.
Deciding signal
ElastiCache (Redis or Memcached) is a general-purpose in-memory cache that your application must explicitly check before querying the database. It works with any backend: RDS, DynamoDB, external APIs. DAX is a write-through cache designed specifically for DynamoDB that is API-compatible — your application points at DAX instead of DynamoDB and DAX handles cache misses transparently without code changes to your read path. When the scenario involves DynamoDB and the requirement is read latency reduction with minimal code change, DAX. When the scenario involves caching across multiple data sources or requires pub/sub and session features, ElastiCache.
Quick check
Is the backend exclusively DynamoDB and the goal is transparent caching with no read-path code changes (DAX), or does the workload span multiple data sources or need general caching features (ElastiCache)?
Why it looks right
ElastiCache is the more prominent caching service. Candidates apply it to DynamoDB scenarios without recognizing that DAX provides a DynamoDB-native, code-transparent alternative.
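The "no read-path code change" claim can be sketched as follows: only the client construction differs, the read call itself is identical. This assumes the `amazondax` Python package (installed as `amazon-dax-client`) and its documented constructor; the cluster endpoint and table name are hypothetical.

```python
def read_item(use_dax: bool):
    """Same DynamoDB read either way; only the client differs (sketch)."""
    if use_dax:
        # Assumed import path and constructor from the amazon-dax-client package.
        from amazondax import AmazonDaxClient
        client = AmazonDaxClient(
            endpoint_url="daxs://my-cluster.example.dax-clusters.us-east-1.amazonaws.com"
        )
    else:
        import boto3
        client = boto3.client("dynamodb")
    # Identical API call: this line is the point. DAX serves cache hits
    # and transparently fetches from DynamoDB on a miss (write-through).
    return client.get_item(TableName="Orders", Key={"OrderId": {"S": "o-1"}})
```

With ElastiCache, by contrast, the application itself would have to check the cache, fall back to the database, and populate the cache on a miss.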
AWS X-Ray vs. Amazon CloudWatch
#6: Both are observability services, so candidates use "CloudWatch" as the default answer for any monitoring question.
Deciding signal
CloudWatch collects metrics (CPU, invocation count, error rate), ingests logs from services and applications, triggers alarms, and feeds dashboards. It is the right answer when the scenario involves thresholds, alerts, or log aggregation. X-Ray traces requests as they pass through distributed systems — it shows which service called which, how long each segment took, and where errors occurred across Lambda, API Gateway, EC2, and other services. X-Ray is specifically the right answer when the scenario describes debugging latency across multiple services, identifying which microservice is causing errors, or visualizing the request flow through a distributed application.
Quick check
Is the requirement to alert on a metric or aggregate logs (CloudWatch), or to trace a request through multiple services and identify where it slows down or fails (X-Ray)?
Why it looks right
CloudWatch is the familiar monitoring service and "monitoring" appears in both contexts. X-Ray is for distributed tracing of request paths, which is a different problem than metric-based monitoring.
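The two mindsets look different in code. A CloudWatch alarm is a threshold on a metric; X-Ray is instrumentation of the request path itself. A sketch with a hypothetical function name (the X-Ray part is shown as comments since it needs the `aws_xray_sdk` package):

```python
# CloudWatch: alarm when a metric crosses a threshold.
alarm_request = {
    "AlarmName": "lambda-errors-high",   # hypothetical
    "Namespace": "AWS/Lambda",
    "MetricName": "Errors",
    "Dimensions": [{"Name": "FunctionName", "Value": "orders-fn"}],  # hypothetical
    "Statistic": "Sum",
    "Period": 60,
    "EvaluationPeriods": 1,
    "Threshold": 5,
    "ComparisonOperator": "GreaterThanThreshold",
}
# boto3.client("cloudwatch").put_metric_alarm(**alarm_request)

# X-Ray: trace each request end to end across services instead.
#   from aws_xray_sdk.core import patch_all
#   patch_all()  # auto-instruments boto3, requests, etc. into trace segments
```

If the question is "which service is slow for this request," no alarm threshold answers it; only the trace does.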
DynamoDB Streams vs. Amazon Kinesis Data Streams
#7: Both can trigger Lambda and both are described as streams, so candidates conflate them in event-driven architectures.
Deciding signal
DynamoDB Streams captures item-level changes (INSERT, MODIFY, REMOVE) on a DynamoDB table and makes them available as an ordered stream. Retention is 24 hours. It is tightly coupled to DynamoDB and is the right answer when the scenario involves reacting to DynamoDB table changes for replication, audit, or downstream processing. Kinesis Data Streams is a general-purpose streaming platform for any high-volume real-time data: application events, IoT telemetry, clickstreams. It supports configurable retention (1–365 days), multiple consumers, and replay. When the trigger is specifically a DynamoDB table write, use DynamoDB Streams. When the data originates from applications or external sources, use Kinesis Data Streams.
Quick check
Is the event source a DynamoDB table write (DynamoDB Streams), or is data being produced by applications or external systems (Kinesis Data Streams)?
Why it looks right
Kinesis is the more prominent streaming service and candidates apply it to scenarios where DynamoDB is already the data store — missing that DynamoDB Streams is the native, purpose-built option for table change events.
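The difference is concrete in the Lambda event shapes. DynamoDB Streams delivers item-level change records with the event type and keys inline; Kinesis delivers opaque base64-encoded payloads your code must decode. The field names below are the real event-format names; the handler logic is a minimal sketch.

```python
import base64
import json

def handle_dynamodb_stream(event):
    # eventName is INSERT, MODIFY, or REMOVE for each table write.
    return [(r["eventName"], r["dynamodb"]["Keys"]) for r in event["Records"]]

def handle_kinesis_stream(event):
    # Producer-defined payloads arrive base64-encoded in kinesis.data.
    return [json.loads(base64.b64decode(r["kinesis"]["data"]))
            for r in event["Records"]]
```

The first handler only makes sense when the source is a table; the second accepts whatever the producers sent, which is why Kinesis fits application and external data.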
AWS Secrets Manager vs. AWS Systems Manager Parameter Store
#8: Both store sensitive values, so candidates choose based on which they encountered more recently.
Deciding signal
Parameter Store stores configuration values and secrets in a hierarchical key-value store. Standard parameters are free; Advanced parameters add policies and larger sizes at a cost. It does not natively rotate values. Secrets Manager is designed for credentials that need automatic rotation: RDS passwords, API keys, OAuth tokens. It natively integrates with RDS, Redshift, and DocumentDB for rotation, and supports custom rotation Lambda functions for other services. When the scenario describes storing a database password and rotating it automatically on a schedule, Secrets Manager. When the scenario describes storing application configuration, feature flags, or non-rotating secrets at low cost, Parameter Store.
Quick check
Does the secret require automatic rotation on a schedule (Secrets Manager), or is this configuration or a non-rotating secret that benefits from centralized storage (Parameter Store)?
Why it looks right
Both services store secrets and both can store database credentials. Candidates pick Secrets Manager for all credential scenarios — but Parameter Store is the correct answer when cost and simplicity matter and automatic rotation is not required.
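Retrieval looks nearly identical in both services; rotation is the call only Secrets Manager has. A sketch of the request parameters, with hypothetical secret names, parameter paths, and Lambda ARN:

```python
# Retrieval: similar on both sides.
secret_request = {"SecretId": "prod/db/password"}  # secretsmanager.get_secret_value
param_request = {
    "Name": "/app/feature-flags",   # ssm.get_parameter; hierarchical path
    "WithDecryption": True,         # for SecureString parameters
}

# Rotation: Secrets Manager only. Parameter Store has no equivalent.
rotation_request = {
    "SecretId": "prod/db/password",
    "RotationLambdaARN": "arn:aws:lambda:us-east-1:123456789012:function:rotate-db",  # hypothetical
    "RotationRules": {"AutomaticallyAfterDays": 30},
}
# boto3.client("secretsmanager").rotate_secret(**rotation_request)
```

When the scenario never mentions rotation, the free Standard tier of Parameter Store is usually the cost-conscious answer.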
SQS Long Polling vs. SQS Short Polling
#9: Both are SQS polling modes and the difference is subtle, so candidates answer SQS questions without specifying which polling mode is appropriate.
Deciding signal
Short polling queries a subset of SQS servers and returns immediately — even if no messages are available. This can result in empty responses and higher API call costs. Long polling waits up to 20 seconds for messages to arrive before returning an empty response, reducing empty responses and API costs. On DVA-C02, long polling is the answer when the scenario describes reducing the number of empty SQS API calls, reducing cost, or avoiding CPU-spinning consumer loops. The ReceiveMessageWaitTimeSeconds attribute on the queue controls this.
Quick check
Is the scenario asking how to reduce empty API calls and polling cost (long polling), or describing a consumer that needs an immediate response even if the queue is empty (short polling)?
Why it looks right
Short polling is the default and candidates do not think to specify long polling unless the question explicitly mentions "empty responses" or "polling costs." The exam specifically tests awareness of long polling as the cost-reduction mechanism.
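Long polling can be enabled in two places: as a queue-level default or per receive call. The attribute and parameter names below are the real ones; the queue URL is hypothetical.

```python
# Queue-level default: every ReceiveMessage waits up to 20 s (the maximum).
queue_attributes = {"ReceiveMessageWaitTimeSeconds": "20"}
# boto3.client("sqs").set_queue_attributes(QueueUrl=..., Attributes=queue_attributes)

# Per-call override; WaitTimeSeconds=0 would fall back to short polling.
receive_request = {
    "QueueUrl": "https://sqs.us-east-1.amazonaws.com/123456789012/orders",  # hypothetical
    "WaitTimeSeconds": 20,
    "MaxNumberOfMessages": 10,  # batch reads also cut per-message API cost
}
# boto3.client("sqs").receive_message(**receive_request)
```

A consumer loop using this request blocks inside SQS instead of spinning, which is exactly the cost and empty-response reduction the exam probes.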
Lambda Layers vs. Lambda Extensions
#10: Both are Lambda add-on mechanisms, so candidates treat them as alternatives for the same use case.
Deciding signal
Lambda Layers are ZIP archives that are extracted into /opt in the Lambda execution environment before the function code runs. They are for sharing libraries, dependencies, or custom runtimes across multiple functions — reducing package size and enabling dependency reuse. Lambda Extensions are processes that run alongside the Lambda function in the same execution environment, integrated into the Lambda lifecycle (init, invoke, shutdown). They are for integrating monitoring agents, secrets managers, or configuration providers that need to interact with the Lambda runtime lifecycle. When the scenario involves sharing code or libraries between functions, Layers. When the scenario involves running a sidecar process that hooks into Lambda initialization or shutdown events, Extensions.
Quick check
Is the goal to share reusable code or dependencies across functions (Layers), or to run a companion process that integrates with the Lambda execution lifecycle (Extensions)?
Why it looks right
Layers is the better-known mechanism and candidates apply it to scenarios describing observability agents or secrets providers — which are Extension use cases because they need lifecycle event access, not just code sharing.
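The packaging difference can be summarized in one place. A layer is just files extracted under /opt; an extension is an executable that Lambda launches from /opt/extensions as a companion process. The directory conventions below are the documented ones, annotated as a sketch:

```python
# What ends up under /opt when a layer is attached:
layer_layout = {
    "python/": "libraries importable by the function (Python runtimes)",
    "bin/": "executables added to the function's PATH",
}

# Extensions ship *as* layers, but their payload lands here:
extension_layout = {
    "extensions/": "executables Lambda starts before init and runs through "
                   "the invoke and shutdown lifecycle phases",
}
```

So a monitoring agent or secrets provider is delivered via a layer, but it is the extensions/ entry point, not the shared code, that makes it lifecycle-aware.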
10 DVA-C02 questions. Pattern-tagged with trap analysis. Free, no signup required.
Start DVA-C02 Mini-Trainer →