AWS SAA-C03 Trap Reference

Commonly Confused Services on SAA-C03

Service confusion on the Solutions Architect Associate exam is rarely about not knowing what a service does in isolation. It is about misreading the architectural constraint in the question that makes one service correct and the other wrong.

Each section below gives you the deciding signal, a quick check to run when you encounter the confusion, and why the wrong answer keeps looking right.

#1: ALB vs. NLB vs. GWLB

Content routing vs. TCP throughput vs. inline appliance insertion

All three are AWS load balancers, so candidates reach for ALB whenever they see load balancer in a question.

Deciding signal

ALB operates at Layer 7 and routes based on HTTP rules: path, host header, query string, or source IP. It is the right answer when the scenario involves routing web traffic to different target groups by URL pattern or when content-based decisions are required. NLB operates at Layer 4 and handles high-volume TCP and UDP traffic with ultra-low latency and static IP support — right when the protocol is not HTTP or when extreme throughput is the constraint. GWLB is not a general-purpose load balancer: it routes traffic transparently through third-party virtual appliances such as firewalls and intrusion detection systems. When a scenario describes inserting security appliances inline before traffic reaches the application, GWLB is the answer.

Quick check

Is this routing web traffic by URL rules (ALB), handling high-throughput TCP/UDP (NLB), or sending traffic through a security appliance before it reaches the app (GWLB)?

Why it looks right

ALB is the most familiar load balancer and correctly answers most web-traffic questions. Candidates apply it to GWLB scenarios because "security" and "load balancer" both appear in the description.
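The Layer-7 decision ALB makes can be sketched as a toy matcher. Everything here (the rule patterns, the target-group names) is invented for illustration; real listener rules are evaluated in priority order and support more condition types than path patterns.

```python
from fnmatch import fnmatch

# Hypothetical listener rules, checked in priority order, mimicking ALB
# path-pattern conditions. An NLB never sees this layer: it forwards
# TCP/UDP flows without inspecting the URL at all.
RULES = [
    ("/api/*",    "api-target-group"),
    ("/images/*", "static-target-group"),
]
DEFAULT = "web-target-group"

def route(path: str) -> str:
    """Return the target group for a request path (Layer-7 routing)."""
    for pattern, target_group in RULES:
        if fnmatch(path, pattern):
            return target_group
    return DEFAULT  # default action when no rule matches

print(route("/api/v1/orders"))  # api-target-group
print(route("/checkout"))       # web-target-group
```

If a question's routing decision cannot be expressed as a rule like the ones above, ALB is probably not the answer.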

#2: Amazon EFS vs. Amazon EBS vs. Amazon S3

Shared file system vs. block volume vs. object store

All three store data persistently, so candidates choose based on familiarity rather than access pattern.

Deciding signal

EBS is a block device that attaches to a single EC2 instance (or multiple with Multi-Attach on io1/io2). It behaves like a local disk. EFS is a managed NFS file system that multiple EC2 instances or containers can mount simultaneously — the right answer when the scenario involves shared storage across multiple compute resources. S3 is object storage accessed over HTTP, suited for large unstructured files, backups, and static assets — not mountable as a file system in standard architectures. The key question is who needs access: one instance (EBS), multiple instances concurrently (EFS), or applications via API (S3).

Quick check

Does one instance need block storage (EBS), do multiple instances need shared file access (EFS), or does the application read/write objects over HTTP (S3)?

Why it looks right

S3 is the most recognizable storage service and is a common wrong answer for "shared storage" questions because it is accessible from anywhere — but S3 is not a POSIX file system and cannot be mounted by applications that expect file system semantics.
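The semantic gap can be shown with a toy contrast (all names invented, no AWS APIs involved): block/file storage lets you seek to an offset and overwrite bytes in place, while an object store maps keys to whole values that are replaced per PUT.

```python
import io

# File-system / block style: in-place partial write at a byte offset.
disk = io.BytesIO(b"hello world")
disk.seek(6)
disk.write(b"POSIX")        # overwrite 5 bytes in place
print(disk.getvalue())      # b'hello POSIX'

# Object-store style: keys map to whole values. To "edit" part of an
# object you re-upload the entire object; there is no seek-and-write.
# This is why apps expecting POSIX semantics cannot treat S3 as a disk.
bucket = {}
bucket["report.txt"] = b"hello world"   # PUT
bucket["report.txt"] = b"hello POSIX"   # full re-PUT, not a partial edit
print(bucket["report.txt"])             # b'hello POSIX'
```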

#3: Amazon SQS vs. Amazon SNS vs. Amazon EventBridge

Durable queue vs. push fanout vs. rule-based event routing

All three decouple services, so candidates pick based on the word "messaging" alone.

Deciding signal

SQS holds messages until a consumer retrieves them. It suits worker-style patterns where processing durability matters and one consumer handles each message. SNS pushes a message simultaneously to all its subscribers — Lambda, SQS queues, HTTP endpoints, email — making it the right answer for fanout to multiple systems. EventBridge routes events based on rules that match event content: source, detail-type, or specific fields. It is the right answer when the scenario involves multiple downstream consumers with different filtering criteria, or when events from AWS services or SaaS need routing without custom code. The deciding factor is routing intelligence: SQS has none, SNS fans out equally to all subscribers, EventBridge filters by content.

Quick check

Should one consumer process each message durably (SQS), should all subscribers receive it simultaneously (SNS), or should different consumers receive it based on the event content (EventBridge)?

Why it looks right

SNS is the common wrong answer on EventBridge questions because both push to multiple targets. The distinction is filtering: SNS delivers to all subscribers; EventBridge applies per-rule conditions.
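A minimal sketch of that filtering difference, with an invented event shape and invented rule patterns (real EventBridge patterns support more operators, such as prefix and numeric matching):

```python
# SNS-style fanout vs. EventBridge-style content filtering, as a toy.

def matches(pattern: dict, event: dict) -> bool:
    """A rule matches when every patterned field's value is allowed."""
    return all(event.get(k) in allowed for k, allowed in pattern.items())

event = {"source": "orders.service", "detail-type": "OrderPlaced",
         "region": "eu-west-1"}

# SNS model: every subscriber receives every message, unconditionally.
sns_subscribers = ["email-team", "audit-queue", "billing-lambda"]
sns_delivery = list(sns_subscribers)

# EventBridge model: each rule filters on event content before delivery.
rules = {
    "billing-lambda": {"detail-type": ["OrderPlaced"]},
    "eu-compliance":  {"region": ["eu-west-1", "eu-central-1"]},
    "fraud-checker":  {"detail-type": ["PaymentFailed"]},
}
eb_delivery = [target for target, pat in rules.items()
               if matches(pat, event)]
print(sorted(eb_delivery))  # ['billing-lambda', 'eu-compliance']
```

Note how `fraud-checker` never sees the event: that per-rule selectivity is exactly what SNS alone cannot do (without subscription filter policies, which the exam rarely leans on).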

#4: Amazon Kinesis Data Streams vs. Amazon Kinesis Data Firehose

Custom real-time consumers vs. managed delivery to a destination

Both handle streaming data, so candidates treat them as interchangeable ingest options.

Deciding signal

Kinesis Data Streams is a real-time data stream where your application code reads records, controls retention (1–365 days), and can replay data. It requires you to build or manage consumers. Kinesis Data Firehose is a fully managed delivery service: it receives data and delivers it automatically to S3, Redshift, OpenSearch, or Splunk with optional transformation — no consumer code required. When the scenario involves custom processing logic, real-time analytics, or the need to replay data, Data Streams. When the scenario describes loading streaming data into a destination with minimal operational overhead, Firehose.

Quick check

Does the workload need custom consumer code and replay capability (Data Streams), or automatic delivery to a specific destination without consumer management (Firehose)?

Why it looks right

Both are described as streaming services, and Firehose sounds like it provides more capability. Candidates underestimate what "managed delivery" means — Firehose has no consumer API because it handles delivery itself.
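The replay distinction can be made concrete with a toy model (class names and behavior are illustrative only): a Streams-like log retains records and lets a consumer re-read from any sequence number, while a Firehose-like pipe only delivers to a destination and keeps nothing itself.

```python
class StreamLike:
    """Retains records; consumers choose where to read from (replay)."""
    def __init__(self):
        self.records = []               # kept for the retention period
    def put(self, rec):
        self.records.append(rec)
        return len(self.records) - 1    # sequence number
    def read_from(self, seq):
        return self.records[seq:]       # re-readable at any time

class FirehoseLike:
    """Delivers each record to its destination; no consumer API."""
    def __init__(self, destination: list):
        self.destination = destination
    def put(self, rec):
        self.destination.append(rec)    # delivered, then gone from the pipe

stream = StreamLike()
for r in ("a", "b", "c"):
    stream.put(r)
print(stream.read_from(1))  # ['b', 'c']  <- consumer controls its position

s3_bucket = []
hose = FirehoseLike(s3_bucket)
hose.put("a")
print(s3_bucket)            # ['a']  <- data lands in the destination only
```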

#5: Amazon Aurora vs. Amazon RDS

AWS-optimized cloud-native relational vs. managed standard engines

Both are managed relational databases and Aurora supports MySQL and PostgreSQL compatibility, so candidates treat them as equivalent.

Deciding signal

RDS supports multiple engines: MySQL, PostgreSQL, MariaDB, Oracle, and SQL Server (the console lists Aurora alongside them as a sixth option). When the scenario specifies Oracle or SQL Server licensing requirements, RDS is required. Aurora is AWS's cloud-native MySQL- and PostgreSQL-compatible engine built on a proprietary distributed storage layer; AWS quotes up to 5× the throughput of standard MySQL and 3× standard PostgreSQL, with storage that grows automatically up to 128 TiB. Aurora also offers Aurora Serverless for variable or unpredictable workloads with automatic capacity scaling. The exam typically signals Aurora when the scenario involves high availability, read scalability across up to 15 read replicas, or workloads described as "high-performance MySQL/PostgreSQL-compatible." If a specific non-Aurora engine is named or licensing is mentioned, use RDS.

Quick check

Does the scenario require a specific engine like Oracle or SQL Server (RDS), or does it describe high performance, scale, and MySQL/PostgreSQL compatibility (Aurora)?

Why it looks right

Aurora is often the better answer on SAA-C03 performance questions, but candidates overlook it when the question mentions MySQL — not realizing Aurora MySQL-Compatible is a valid answer when performance is the constraint.

#6: ElastiCache for Redis vs. ElastiCache for Memcached

Feature-rich persistent cache vs. simple horizontal cache

Both are in-memory caches under the ElastiCache brand, so candidates pick without checking whether the workload needs persistence.

Deciding signal

Redis supports persistence, Multi-AZ replication, pub/sub messaging, sorted sets, geospatial indexes, and Lua scripting. It is the right answer for session storage requiring failover, leaderboards, real-time analytics, or any use case that requires data to survive a node restart. Memcached is simpler: multi-threaded, horizontally scalable with no persistence, no replication, and no complex data structures. When the question mentions session state, pub/sub, or sorted data, Redis. When the question describes simple key-value caching with a large number of nodes and no persistence requirement, Memcached.

Quick check

Does the workload require persistence, replication, or complex data structures (Redis), or simple multi-threaded key-value caching with no durability requirement (Memcached)?

Why it looks right

Redis is the more capable engine and is frequently the right answer, so candidates apply it universally. Memcached is correct in the narrow case where multi-threading and horizontal simplicity are explicitly described as requirements.
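A toy leaderboard shows why a server-side data structure like Redis's sorted set matters (names here are illustrative, not a Redis client API): Memcached stores opaque values only, so ranking would mean fetching everything and sorting client-side on every read.

```python
# Sorted-set-style leaderboard, sketched with a plain dict.
scores = {}                 # member -> score

def zadd(member: str, score: int) -> None:
    """Upsert a member's score (analogous to Redis ZADD)."""
    scores[member] = score

def ztop(n: int) -> list:
    """Top-n members by score (analogous to a ZREVRANGE query)."""
    return sorted(scores, key=scores.get, reverse=True)[:n]

zadd("alice", 300)
zadd("bob", 150)
zadd("carol", 420)
print(ztop(2))  # ['carol', 'alice']
```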

#7: VPC Gateway Endpoint vs. VPC Interface Endpoint (PrivateLink)

Free route-table entry for S3/DynamoDB vs. ENI for all other services

Both provide private VPC access to AWS services, so candidates pick one without checking which service they are connecting to.

Deciding signal

Gateway Endpoints are free, use route table entries, and work only for Amazon S3 and Amazon DynamoDB. No ENI is created; traffic is routed at the network layer. Interface Endpoints use AWS PrivateLink, create Elastic Network Interfaces in your VPC with private IPs, and support most other AWS services (SQS, SNS, Secrets Manager, SSM, and many more). Interface Endpoints have an hourly cost. When the scenario asks for private access to S3 or DynamoDB from within a VPC, Gateway Endpoint. For any other AWS service, Interface Endpoint. One refinement: S3 also supports an Interface Endpoint, which becomes the answer when on-premises hosts need private S3 access over Direct Connect or VPN, because Gateway Endpoints work only through route tables inside the VPC. If the scenario mentions "private DNS" or "ENI with private IP," that signals an Interface Endpoint.

Quick check

Is the target service S3 or DynamoDB (Gateway Endpoint), or is it any other AWS service requiring a private IP endpoint in the VPC (Interface Endpoint)?

Why it looks right

Gateway Endpoints are frequently forgotten because Interface Endpoints (PrivateLink) are more prominent in AWS documentation. Candidates default to Interface Endpoints for S3 even though Gateway Endpoints exist specifically for that use case at no cost.
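The quick check reduces to a two-element set lookup; this toy helper just encodes the exam-level simplification (the in-VPC case, ignoring the on-premises S3 exception noted above):

```python
# Exam-level rule of thumb: only these two services get Gateway Endpoints;
# everything else needs an Interface Endpoint (PrivateLink).
GATEWAY_SERVICES = {"s3", "dynamodb"}

def endpoint_type(service: str) -> str:
    """Pick the VPC endpoint type for in-VPC private access to a service."""
    return "gateway" if service.lower() in GATEWAY_SERVICES else "interface"

print(endpoint_type("DynamoDB"))        # gateway
print(endpoint_type("secretsmanager"))  # interface
```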

#8: Amazon CloudFront vs. AWS Global Accelerator

Edge caching vs. backbone routing for non-cacheable workloads

Both reduce global latency, so candidates reach for CloudFront whenever a question mentions users in multiple regions.

Deciding signal

CloudFront caches content at edge locations. Its latency benefit depends on cache hits and applies to cacheable assets: images, static files, API responses that can be cached. Content that is unique per request cannot be cached, so every such request travels to the origin and gains little from CloudFront beyond TLS termination and connection reuse at the edge. Global Accelerator routes all traffic over the AWS backbone network using anycast IP addresses, regardless of cacheability. It reduces latency for dynamic, stateful, non-cacheable workloads — gaming, VoIP, IoT, or APIs where every request is unique. The deciding question is cacheability.

Quick check

Is the content cacheable (CloudFront can help), or is every request unique and dynamic (Global Accelerator)?

Why it looks right

CloudFront is associated with "global performance" broadly, and candidates apply it to dynamic workloads where caching provides no benefit. Global Accelerator is less familiar and gets overlooked.
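A toy simulation makes the cacheability argument concrete. The latency numbers are invented for illustration; the point is the shape of the result, not the values.

```python
# An edge cache cuts latency only on repeat requests. A workload where
# every request key is unique never hits the cache, which is exactly the
# Global Accelerator scenario.
ORIGIN_MS, EDGE_MS = 300, 30   # invented round-trip costs

def total_latency(requests):
    cache, total = set(), 0
    for key in requests:
        total += EDGE_MS if key in cache else ORIGIN_MS
        cache.add(key)
    return total

static = ["logo.png"] * 100                      # cacheable: 1 miss, 99 hits
dynamic = [f"session-{i}" for i in range(100)]   # unique: 100 misses
print(total_latency(static))   # 3270
print(total_latency(dynamic))  # 30000
```

For the `static` workload the cache absorbs almost everything; for the `dynamic` one, caching contributes nothing, and only a faster network path (the backbone) can help.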

#9: RDS Read Replicas vs. RDS Multi-AZ

Read scaling vs. high availability failover

Both add a second database instance, so candidates treat them as equivalent solutions for "database resilience."

Deciding signal

Multi-AZ creates a synchronous standby replica in a different Availability Zone for automatic failover. It is strictly for availability — the standby cannot serve read traffic in standard RDS. Read Replicas are asynchronous copies that can serve SELECT queries, reducing read load on the primary. They are a read-scaling mechanism, not a failover mechanism — promoting a read replica to primary is a manual operation. When the scenario describes "reduce read load" or "offload reporting queries," Read Replicas. When the scenario describes "automatic failover" or "survive an AZ failure," Multi-AZ.

Quick check

Is the requirement to reduce read load (Read Replicas) or to survive an AZ failure with automatic failover (Multi-AZ)?

Why it looks right

Both options involve a second database instance, and candidates conflate "redundancy" with "high availability." Multi-AZ is not a performance feature; Read Replicas are not a failover feature.

#10: IAM Roles vs. S3 Bucket Policies / Resource-based Policies

Identity-attached permissions vs. resource-attached permissions

Both control access to AWS resources, so candidates default to IAM roles without considering which approach fits cross-account patterns.

Deciding signal

IAM roles attach to identities — EC2 instances, Lambda functions, users — and define what those identities can do. To access a resource in another account using a role, the calling principal must assume the role first. Resource-based policies (S3 bucket policies, SQS queue policies, Lambda resource policies) attach to the resource itself and define who can access it. They can grant cross-account access directly without requiring the caller to assume a role, which removes one step. On SAA-C03, resource-based policies are specifically the answer when the scenario describes granting another account access to a resource with "minimal operational overhead" or "without role assumption."

Quick check

Does the scenario involve granting access to an identity you control (IAM role), or granting a specific external account or service direct access to a resource (resource-based policy)?

Why it looks right

IAM roles are the default mental model for access control. Resource-based policies are a less-prominent tool that directly enables cross-account access without the assume-role step — the exam specifically tests whether candidates know this distinction.
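As a sketch, a resource-based policy granting an external account read access looks like the following. The account ID and bucket name are invented; note that no role in the bucket's account is created or assumed.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountRead",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-shared-bucket",
        "arn:aws:s3:::example-shared-bucket/*"
      ]
    }
  ]
}
```

The external account's identities still need their own IAM permissions for `s3:GetObject`/`s3:ListBucket`; cross-account access requires both sides to allow it.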

Train these confusions, don't just read them

10 SAA-C03 questions. Pattern-tagged with trap analysis. Free, no signup required.

Start SAA-C03 Mini-Trainer →