AWS WAF vs AWS Shield vs AWS Firewall Manager vs AWS Network Firewall
#1 All four are AWS network security services and appear together in "protect our infrastructure" questions.
Deciding signal
WAF operates at Layer 7 and blocks web requests matching rules you define: SQL injection, XSS, IP allow/block lists, rate limiting. Shield Standard is automatic DDoS protection included at no cost; Shield Advanced adds a response team, financial protection, and advanced mitigation for L3/L4/L7 attacks. Firewall Manager is a governance layer — it centrally manages WAF rules, Shield Advanced subscriptions, and Network Firewall policies across multiple accounts and VPCs in an organization. Network Firewall is a stateful, managed firewall deployed within VPCs that inspects and filters traffic at Layer 3–7, including non-HTTP protocols, with IDS/IPS capabilities. The deciding factor is what layer and what scope: web traffic rules (WAF), DDoS absorption (Shield), cross-account policy (Firewall Manager), or VPC-level deep packet inspection (Network Firewall).
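The Layer 7 rules the paragraph describes are JSON documents attached to a web ACL. A minimal sketch of a rate-based rule in the shape the wafv2 CreateWebACL API accepts; the rule name and limit here are illustrative, not taken from any real workload.

```python
# Sketch of a WAF rate-based rule statement (the JSON shape wafv2's
# CreateWebACL accepts). Blocks any source IP exceeding `limit` requests
# in a rolling 5-minute window. Name and values are illustrative.
def rate_limit_rule(name: str, limit: int, priority: int) -> dict:
    return {
        "Name": name,
        "Priority": priority,
        "Statement": {
            "RateBasedStatement": {
                "Limit": limit,              # requests per 5 minutes per IP
                "AggregateKeyType": "IP",    # aggregate counts by source IP
            }
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name,
        },
    }

rule = rate_limit_rule("rate-limit-login", limit=1000, priority=1)
```

The same web ACL would carry the SQL injection and XSS managed rule groups as sibling entries in its Rules list.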
Quick check
Is this filtering HTTP/HTTPS web requests (WAF), absorbing volumetric DDoS (Shield), managing rules across accounts (Firewall Manager), or inspecting all VPC traffic including non-web protocols (Network Firewall)?
Why it looks right
WAF and Shield are the most familiar of the four, so candidates apply them to scenarios that actually require Network Firewall. The key signal for Network Firewall is VPC-level deployment and non-HTTP protocol inspection.
AWS Transit Gateway vs VPC Peering vs AWS PrivateLink
#2 All three connect VPCs privately, so candidates pick based on which they associate with "private connectivity."
Deciding signal
VPC Peering creates a direct connection between exactly two VPCs — traffic routes directly, not through intermediate hops, and peering is non-transitive (A-B and B-C does not enable A-C). It suits small numbers of VPCs. Transit Gateway is a hub: VPCs and on-premises networks attach to it, enabling transitive routing at scale across many VPCs, accounts, and Regions. It is the right answer when more than a handful of VPCs need full-mesh or hub-and-spoke connectivity. PrivateLink is different in kind — it exposes a specific service endpoint (ALB, NLB, or an internal service) to consumers in other VPCs or accounts without requiring full network connectivity. It is the right answer when one VPC needs to expose a service, not open full network access.
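The non-transitivity point is the one candidates trip on, so here is the A-B, B-C example from the paragraph as a tiny model: peering is a point-to-point edge, and reachability exists only where a direct connection exists. VPC IDs are placeholders.

```python
# Peering connections are point-to-point edges, never routed hops.
# A-B and B-C peerings do NOT make A-C reachable.
peerings = {("vpc-a", "vpc-b"), ("vpc-b", "vpc-c")}

def reachable_via_peering(src: str, dst: str) -> bool:
    # Traffic never transits an intermediate VPC over peering,
    # so only a direct edge counts.
    return (src, dst) in peerings or (dst, src) in peerings

assert reachable_via_peering("vpc-a", "vpc-b")
assert not reachable_via_peering("vpc-a", "vpc-c")  # non-transitive
```

A Transit Gateway replaces the edge set with a hub: every attached VPC can reach every other attachment the route tables permit, which is why it wins once the VPC count grows.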
Quick check
Is this a direct connection between two VPCs (Peering), hub-and-spoke routing for many VPCs (Transit Gateway), or exposing a specific service to another VPC without full network access (PrivateLink)?
Why it looks right
Transit Gateway is the default SAP-C02-level answer for multi-VPC connectivity, so candidates apply it to PrivateLink scenarios where only a specific service endpoint needs to be shared, not entire VPC routing.
AWS Direct Connect vs AWS Site-to-Site VPN vs Direct Connect Gateway
#3 All three connect on-premises networks to AWS, so candidates pick Direct Connect as the "better" option without checking regional scope.
Deciding signal
Site-to-Site VPN creates an encrypted IPsec tunnel over the public internet. It is fast to provision, costs less, and is the right answer when the scenario emphasizes speed of setup or when Direct Connect is not yet in place. Direct Connect is a dedicated private network connection from your data center to AWS — it provides consistent bandwidth and latency but takes weeks to provision. A single Direct Connect connection attaches to a single AWS Region. Direct Connect Gateway extends a Direct Connect connection to VPCs in multiple Regions and multiple accounts. When the scenario describes a single on-premises data center connecting to multiple AWS Regions over Direct Connect, Direct Connect Gateway is required.
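The paragraph reduces to a small decision table, sketched below as a function. The two flags are the signals a question stem gives you; the function name and return strings are illustrative, not an AWS API.

```python
# The on-premises connectivity decision, condensed into a function.
# Flags mirror the signals an exam scenario states explicitly.
def onprem_connectivity(dedicated_bandwidth: bool, multi_region: bool) -> str:
    if not dedicated_bandwidth:
        # Encrypted IPsec over the public internet; fast to provision.
        return "Site-to-Site VPN"
    if multi_region:
        # One physical circuit extended to VPCs in many Regions/accounts.
        return "Direct Connect + Direct Connect Gateway"
    # Dedicated private circuit attached to a single Region.
    return "Direct Connect"

assert onprem_connectivity(False, False) == "Site-to-Site VPN"
assert onprem_connectivity(True, True) == "Direct Connect + Direct Connect Gateway"
```

In practice the two also combine: a VPN tunnel is the common stopgap (or backup path) while the weeks-long Direct Connect provisioning completes.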
Quick check
Is this an encrypted tunnel over the internet (VPN), a dedicated private connection to one Region (Direct Connect), or a Direct Connect connection shared across multiple Regions or accounts (Direct Connect Gateway)?
Why it looks right
Direct Connect Gateway is overlooked because candidates think Direct Connect alone handles all private connectivity scenarios. The multi-Region or multi-account requirement is the signal that Direct Connect Gateway is needed.
AWS Organizations vs AWS Control Tower vs AWS Service Catalog
#4 All three manage resources and access across multiple accounts, so candidates blur them in multi-account governance questions.
Deciding signal
Organizations is the foundation: it groups accounts into OUs and enables SCPs for account-level permission guardrails. It does not provision or configure accounts. Control Tower is built on top of Organizations and automates the setup of a multi-account landing zone — it provisions guardrails, baselines accounts with logging and compliance controls, and manages the Account Factory. Service Catalog is about product governance within accounts: it lets administrators define approved infrastructure products (backed by CloudFormation) so that end users can self-provision them without direct IAM access. When the scenario involves establishing a new multi-account environment with baseline security, Control Tower. When it involves managing account-level permission boundaries across an existing organization, Organizations SCPs. When it involves self-service provisioning of pre-approved resources, Service Catalog.
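A concrete example of the SCP guardrail mentioned above: the widely documented "deny actions outside approved Regions" policy, expressed here as a Python dict in the IAM policy JSON shape. The Region list and Sid are illustrative.

```python
# Sketch of a Service Control Policy: an account-level guardrail denying
# API calls outside approved Regions. Global services are exempted via
# NotAction so IAM/STS/Organizations keep working. Values are illustrative.
region_guardrail = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "NotAction": ["iam:*", "organizations:*", "sts:*"],
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
                }
            },
        }
    ],
}
```

Note what the SCP does not do: it never grants access, it only caps what identity policies in the member accounts can allow. Control Tower deploys guardrails of exactly this kind as part of its baseline.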
Quick check
Is this about account-level permission guardrails (Organizations SCPs), automating a multi-account landing zone with baseline controls (Control Tower), or self-service provisioning of approved products (Service Catalog)?
Why it looks right
Organizations and Control Tower are often confused because Control Tower uses Organizations internally. The distinction is automation: Organizations requires you to configure everything; Control Tower automates the baseline.
Amazon Cognito vs AWS IAM Identity Center (SSO)
#5 Both provide single sign-on capabilities, so candidates pick based on the phrase "SSO" without checking the user base.
Deciding signal
Cognito manages authentication for customer-facing applications: web and mobile app users sign up, sign in, and receive tokens (JWT). It handles millions of end-users with social federation (Google, Facebook), SAML, and OpenID Connect support. IAM Identity Center is for workforce identities — employees, contractors, partners — who need access to multiple AWS accounts and business applications through a single AWS access portal. It integrates with Active Directory, Okta, and other enterprise IdPs. The exam distinguishes them by who the users are: external customers accessing your application (Cognito) versus internal employees accessing AWS accounts (IAM Identity Center).
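The JWTs Cognito issues carry an issuer claim that identifies the user pool. A sketch of reading (not verifying) a token payload; the token below is fabricated in the code itself purely for illustration, and real code must verify the signature against the pool's JWKS before trusting any claim.

```python
import base64
import json

# Read a JWT payload WITHOUT verification, to inspect claims such as the
# Cognito issuer. Never skip signature verification in production.
def jwt_payload(token: str) -> dict:
    seg = token.split(".")[1]
    seg += "=" * (-len(seg) % 4)  # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(seg))

# Fabricated token: header "{}" + a payload shaped like Cognito's claims.
claims = {
    "iss": "https://cognito-idp.us-east-1.amazonaws.com/us-east-1_EXAMPLE",
    "token_use": "id",
}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
fake_token = "e30." + payload + ".sig"

assert jwt_payload(fake_token)["token_use"] == "id"
```

The `cognito-idp` issuer is the tell: tokens like this belong to application end-users. Workforce sign-in through IAM Identity Center instead lands employees in the AWS access portal with short-lived role credentials.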
Quick check
Are the users external customers logging into your application (Cognito), or internal employees who need SSO access to AWS accounts and business tools (IAM Identity Center)?
Why it looks right
Both are described as identity services with SSO capability. Candidates apply Cognito to all identity scenarios — it is the right answer only when the users are application end-users, not AWS account users.
CloudFront Functions vs Lambda@Edge
#6 Both run code at CloudFront edge locations, so candidates treat them as alternatives without checking execution constraints.
Deciding signal
CloudFront Functions run at the viewer request and viewer response stages only, have a sub-millisecond execution budget, and are limited to JavaScript with no network calls, file system access, or external integrations. They are appropriate for simple manipulations: URL rewrites, header additions or removals, simple A/B testing logic. Lambda@Edge runs at all four CloudFront events (viewer request/response, origin request/response), has full Lambda capabilities including network calls and larger memory/timeout limits, and supports Node.js and Python. When the scenario involves calling an external API, reading from DynamoDB, or executing more than trivial request manipulation, Lambda@Edge. When the scenario describes simple header manipulation or URL rewriting with minimal latency overhead, CloudFront Functions.
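To make "simple manipulation" concrete: the classic CloudFront Functions use case is rewriting directory-style URLs to index.html for a static site. CloudFront Functions themselves must be written in JavaScript; the Python sketch below only mirrors the logic to show how little a viewer-request handler within the sub-millisecond budget can do: pure string work, no network or file-system access.

```python
# Logic sketch of a viewer-request URL rewrite (the real CloudFront
# Function would be JavaScript). Pure computation only: no I/O.
def viewer_request(request: dict) -> dict:
    uri = request["uri"]
    if uri.endswith("/"):
        # Directory URL: serve its index document.
        request["uri"] = uri + "index.html"
    elif "." not in uri.rsplit("/", 1)[-1]:
        # Extensionless path: treat it as a directory.
        request["uri"] = uri + "/index.html"
    return request

assert viewer_request({"uri": "/docs/"})["uri"] == "/docs/index.html"
assert viewer_request({"uri": "/about"})["uri"] == "/about/index.html"
```

The moment this handler would need to look something up in DynamoDB or call an API, it crosses into Lambda@Edge territory.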
Quick check
Is this simple request/response manipulation with no external calls needed (CloudFront Functions), or does the edge logic require network access, larger compute, or origin-stage execution (Lambda@Edge)?
Why it looks right
Lambda@Edge is the more powerful option and candidates default to it. CloudFront Functions are the correct answer when the logic is simple and the exam emphasizes minimal latency or lowest cost.
S3 Replication vs AWS DataSync vs AWS Transfer Family
#7 All three move data between storage locations, so candidates pick S3 Replication as the default without checking the source.
Deciding signal
S3 Replication (CRR for cross-region, SRR for same-region) automatically copies new objects between S3 buckets — it is S3-to-S3 only and works for continuous object-level replication. DataSync is an online data transfer service for moving large volumes of data between on-premises storage (NFS, SMB, HDFS), AWS storage (S3, EFS, FSx), and other locations — it is optimized for large-scale migrations with bandwidth scheduling and checksum verification. Transfer Family provides managed SFTP, FTPS, and FTP endpoints that write to S3 or EFS, enabling partners or customers to use existing file transfer tools. When the scenario involves external parties uploading via SFTP, Transfer Family. When it involves migrating or syncing large on-premises datasets, DataSync. When it involves keeping S3 buckets in sync, S3 Replication.
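For the S3-to-S3 case, a sketch of the ReplicationConfiguration document the PutBucketReplication API expects, as a Python dict. The role and bucket ARNs are placeholders; the IAM role is what grants S3 permission to read the source and write the destination.

```python
# Sketch of an S3 ReplicationConfiguration (the shape passed to
# PutBucketReplication). ARNs are placeholders. Note that replication
# applies to NEW objects only; existing objects need Batch Replication.
replication_config = {
    "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
    "Rules": [
        {
            "ID": "replicate-all-new-objects",
            "Status": "Enabled",
            "Priority": 1,
            "Filter": {},  # empty filter = the whole bucket
            "DeleteMarkerReplication": {"Status": "Disabled"},
            "Destination": {"Bucket": "arn:aws:s3:::destination-bucket"},
        }
    ],
}
```

If the destination bucket ARN names a bucket in another Region, this same document is CRR; in the same Region, SRR.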
Quick check
Is this S3-to-S3 continuous replication (S3 Replication), large-scale migration from on-premises or other storage (DataSync), or enabling SFTP/FTP uploads from external clients (Transfer Family)?
Why it looks right
S3 Replication is the default answer for S3 data movement. DataSync and Transfer Family are correct when the source is not S3 or when the access protocol is SFTP/FTP.
AWS Lake Formation vs AWS Glue vs Amazon Athena
#8 All three appear in data lake architectures, so candidates conflate their roles when a question mentions analytics.
Deciding signal
Glue is an ETL service with a managed Spark environment and a Data Catalog that stores metadata about your data lake. It discovers schemas, transforms data, and catalogs datasets. Athena is a serverless SQL query engine that reads data directly from S3 using schema-on-read — it uses the Glue Data Catalog as its metadata store. Lake Formation builds on Glue and adds a governance layer: it manages fine-grained access control over data lake resources (databases, tables, columns) using a centralized permissions model that goes beyond IAM. When the scenario involves transforming raw data into structured datasets, Glue. When it involves querying S3 data with SQL, Athena. When it involves controlling which users can access which tables or columns in the data lake, Lake Formation.
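The column-level control that separates Lake Formation from plain IAM looks like this in practice: a grant scoped to specific columns of a catalog table, shaped like the parameters to the GrantPermissions API. Database, table, column names, and the role ARN are all illustrative.

```python
# Sketch of a Lake Formation column-level grant (the parameter shape of
# GrantPermissions). An analyst role gets SELECT on three columns of
# sales_db.orders; any other columns stay invisible to it. Names are
# illustrative.
column_grant = {
    "Principal": {
        "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/analyst"
    },
    "Resource": {
        "TableWithColumns": {
            "DatabaseName": "sales_db",
            "Name": "orders",
            "ColumnNames": ["order_id", "order_date", "total"],
        }
    },
    "Permissions": ["SELECT"],  # finer-grained than any s3:GetObject grant
}
```

When this analyst then queries through Athena, Athena resolves the table via the Glue Data Catalog and Lake Formation trims the result to the granted columns, which is exactly the three-service division of labor the paragraph describes.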
Quick check
Is this about transforming and cataloging data (Glue), querying S3 with SQL (Athena), or enforcing fine-grained access control across the data lake (Lake Formation)?
Why it looks right
Glue and Athena are both associated with "analytics" and candidates conflate them. Lake Formation is frequently missed because candidates do not distinguish data lake governance from data access.
Amazon RDS Proxy vs Amazon Aurora Serverless vs Amazon ElastiCache
#9 All three appear in scalability questions involving databases, so candidates pick based on which they associate with "scale."
Deciding signal
RDS Proxy sits between Lambda functions (or other clients) and RDS/Aurora instances, pooling database connections to prevent connection exhaustion — a specific problem when Lambda functions scale to thousands of concurrent instances each opening a new connection. Aurora Serverless automatically scales database capacity up and down based on workload, including scaling to zero during idle periods. ElastiCache adds an in-memory cache in front of the database to serve repeated reads without hitting the database at all. When the scenario involves Lambda and database connection limits, RDS Proxy. When it involves a variable or unpredictable database workload with no traffic at off-hours, Aurora Serverless. When it involves reducing repeated read load on the database, ElastiCache.
Quick check
Is the problem connection exhaustion from many Lambda instances (RDS Proxy), variable compute capacity for the database itself (Aurora Serverless), or reducing database read load with a cache (ElastiCache)?
Why it looks right
ElastiCache is the familiar "scale the database" answer. RDS Proxy is correct specifically when Lambda connection management is the stated problem — a pattern the exam tests repeatedly.
10 SAP-C02 questions. Pattern-tagged with trap analysis. Free, no signup required.