Near-Right Architecture — AWS Advanced Networking (ANS-C01)
Two options were architecturally valid — you picked the one that violates a constraint buried in the scenario. Read constraints before evaluating answers.
Connectivity established, constraint still violated
The scenario establishes a hybrid architecture with latency and bandwidth requirements. Direct Connect with a private VIF satisfies both — so candidates select it. The exam, however, is asking which configuration satisfies the requirement given existing infrastructure constraints: a VPN already in place and a 30-day lead time for a new Direct Connect circuit. Near-right answers connect the dots technically but ignore operational and timeline constraints that shift the correct choice.
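The timeline constraint can be made mechanical. Below is a minimal sketch with a hypothetical `connectivity_plan` helper; the 30-day lead time comes from the scenario, and the decision labels are invented for illustration:

```python
# Toy decision helper: a new Direct Connect circuit cannot satisfy a
# go-live deadline shorter than its provisioning lead time, so the
# existing VPN must carry traffic in the interim. Labels are illustrative.
def connectivity_plan(deadline_days: int, dx_lead_time_days: int = 30) -> str:
    if deadline_days >= dx_lead_time_days:
        return "direct-connect"            # circuit is ready before go-live
    return "vpn-now-migrate-to-dx-later"   # bridge with the existing VPN

# A 14-day deadline rules out waiting for the circuit:
print(connectivity_plan(14))   # vpn-now-migrate-to-dx-later
print(connectivity_plan(45))   # direct-connect
```

The point is not the code but the habit: extract the deadline and the lead time as explicit values before comparing answer options.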
The Scenario
A company needs a real-time analytics dashboard querying petabytes of log data. The question offers Athena with S3 and Redshift Serverless. Both query structured data at scale. But the scenario says "sub-second response times for repeated queries" — Athena scans S3 on every query (seconds to minutes), while Redshift caches results and returns sub-second on repeats. The constraint is latency on repeated queries, not raw query capability. You picked Athena because it is serverless and cheaper per query, but the access pattern eliminates it.
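The access-pattern distinction can be sketched as a toy result-cache model. The latency numbers are illustrative placeholders, not benchmarks:

```python
# Toy latency model contrasting a scan-per-query engine (Athena-like,
# rescans S3 every time) with one that serves repeated queries from a
# result cache (Redshift-like). Values are assumed, not measured.
SCAN_LATENCY_S = 20.0   # full S3 scan per query (assumed)
CACHE_HIT_S = 0.2       # sub-second cached result (assumed)

def query_latency(engine: str, sql: str, cache: set) -> float:
    """Return modeled latency; 'redshift' serves repeats from its cache."""
    if engine == "redshift":
        if sql in cache:
            return CACHE_HIT_S
        cache.add(sql)          # first run populates the result cache
    return SCAN_LATENCY_S       # every Athena run, and first Redshift run

cache = set()
query_latency("redshift", "SELECT region, count(*) ...", cache)  # slow first run
query_latency("redshift", "SELECT region, count(*) ...", cache)  # sub-second repeat
```

Run the same query twice against each model and the constraint "sub-second response times for repeated queries" eliminates the scan-per-query engine on its own.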
How to Spot It
- When both answers use real AWS services that address the primary use case, re-read for the performance constraint. "Sub-second," "real-time," "single-digit millisecond" each eliminate different services. Athena is not sub-second. DynamoDB is not for complex joins. Aurora is not for petabyte-scale analytics.
- Look for protocol-level constraints. If the scenario says TCP traffic with client IP preservation, that eliminates CloudFront (HTTP/HTTPS only) and points to Global Accelerator + NLB. If it says HTTP with caching, that eliminates Global Accelerator.
- If you find yourself thinking "both could work," the exam is testing constraint reading. Check for: latency target, protocol, data volume, ordering requirement, or compliance region restriction.
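The elimination pass described above can be expressed as a lookup table. The constraint phrases and mappings simply restate the bullets; this is a sketch of the technique, not an exhaustive rule set:

```python
# Map scenario constraint phrases to the services they rule out,
# per the bullets above. Table contents are illustrative only.
ELIMINATED_BY = {
    "sub-second": {"Athena"},
    "complex joins": {"DynamoDB"},
    "petabyte-scale analytics": {"Aurora"},
    "tcp with client ip preservation": {"CloudFront"},
    "http with caching": {"Global Accelerator"},
}

def surviving(candidates: set, constraints: list) -> set:
    """Strike out any candidate eliminated by a stated constraint."""
    out = set(candidates)
    for c in constraints:
        out -= ELIMINATED_BY.get(c, set())
    return out

surviving({"Athena", "Redshift Serverless"}, ["sub-second"])
# only "Redshift Serverless" survives the latency constraint
```

When two candidates survive the primary use case, the constraint list, not the feature list, does the eliminating.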
Decision Rules
Whether Route 53 Resolver rules and private hosted zone associations are created once in a centralized networking account and shared to spoke VPCs via AWS RAM, or whether per-account Resolver endpoints and hosted zone associations are deployed independently in each spoke account.
Choose between CloudFormation StackSets (declarative, drift-detectable, native multi-account orchestration) and Lambda-based imperative SDK automation (event-driven but requiring custom state management) for repeatable multi-account network provisioning.
Whether to establish Transit Gateway hub-and-spoke network routing or to use PrivateLink service-endpoint exposure when consumer VPCs carry overlapping CIDRs and only single-service access—not arbitrary VPC-to-VPC traffic—is required.
Whether redundancy achieved by two circuits at the same DX location satisfies a single-facility-failure availability requirement, or whether a geographically independent backup path via Site-to-Site VPN terminating on Transit Gateway is required.
Whether transitive routing with centralized inspection at scale is better satisfied by a Transit Gateway shared via AWS RAM (hub-and-spoke with a single inspection attachment) or a full-mesh of VPC peering connections (which cannot route transitively through a third VPC and require n*(n-1)/2 individual constructs).
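The construct-count asymmetry in the rule above is simple arithmetic:

```python
# Full-mesh VPC peering needs one connection per VPC pair: n*(n-1)/2.
# A Transit Gateway hub needs one attachment per VPC: n.
def peering_links(n_vpcs: int) -> int:
    return n_vpcs * (n_vpcs - 1) // 2

peering_links(8)    # 28 peering connections for 8 VPCs
peering_links(30)   # 435 peering connections, versus 30 TGW attachments
```

At 30 VPCs the mesh needs 435 individually managed connections and route-table entries, while the hub needs 30 attachments and a single inspection attachment can see all transit traffic.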
Whether a Link Aggregation Group at a single Direct Connect location satisfies a 99.99% SLA that explicitly requires surviving a facility-level failure, versus two Direct Connect connections terminating at geographically separate locations attached to a Transit Gateway with a VPN backup path.
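Back-of-envelope availability math shows why facility diversity matters here. The component availabilities below are assumed illustrative values, not AWS SLA figures:

```python
# A LAG at one location shares the facility as a single point of failure;
# extra circuits only remove circuit-level faults, so the facility caps
# the achievable availability. Independent locations multiply out.
FACILITY_A = 0.999   # availability of one DX facility (assumed)
CIRCUIT_A = 0.999    # availability of one circuit within a facility (assumed)

def same_facility(n_circuits: int = 2) -> float:
    return FACILITY_A * (1 - (1 - CIRCUIT_A) ** n_circuits)

def dual_facility() -> float:
    site = FACILITY_A * CIRCUIT_A        # one circuit at each location
    return 1 - (1 - site) ** 2           # path fails only if both sites fail

same_facility()   # ~0.9990: capped by the shared facility, under 99.99%
dual_facility()   # ~0.999996: clears a 99.99% target
```

No number of circuits in one building clears a requirement that explicitly includes surviving a facility-level failure.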
Determine whether VPC peering or a Transit Gateway shared via AWS RAM satisfies the transitive-routing and zero-reconfiguration-on-account-addition constraints for any-to-any connectivity across 30+ VPCs in 8 accounts.
Whether to centralize hybrid routing through Transit Gateway attached to a Direct Connect Gateway versus provisioning individual Site-to-Site VPN connections per VPC, given that route-propagation-limits and transitive routing requirements become binding constraints at large VPC counts.
When packet-loss symptoms require packet-header or payload-level analysis — specifically MTU path discovery and fragmentation diagnosis — VPC Traffic Mirroring must be selected over VPC Flow Logs, which capture only five-tuple connection metadata and accept/reject outcomes and cannot expose frame size, DSCP markings, or IP fragmentation flags.
Whether to terminate individual Direct Connect private virtual interfaces per VPC or consolidate hybrid routing through a Transit Gateway paired with a Direct Connect Gateway, which stays within route-propagation limits and absorbs VPC growth behind a single on-premises BGP session.
Whether to deploy a Transit Gateway attachment mesh (per-attachment-hour plus per-GB processing fee) or a VPC full-mesh peering topology (no per-GB processing surcharge) when the VPC count is low enough that peering route-table complexity remains manageable.
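The break-even reasoning behind that rule can be sketched as a toy cost model. All rates below are assumed placeholders, not current AWS list prices:

```python
# Rough monthly cost comparison: TGW bills per attachment-hour plus a
# per-GB processing fee; peering has no per-GB processing surcharge
# (inter-AZ/inter-Region data transfer may still apply). Rates assumed.
TGW_ATTACH_HOUR = 0.05   # $/attachment-hour (assumed)
TGW_PER_GB = 0.02        # $/GB processed (assumed)
HOURS_PER_MONTH = 730

def tgw_cost(n_vpcs: int, gb_per_month: float) -> float:
    return n_vpcs * TGW_ATTACH_HOUR * HOURS_PER_MONTH + gb_per_month * TGW_PER_GB

def peering_cost(gb_per_month: float, transfer_per_gb: float = 0.01) -> float:
    return gb_per_month * transfer_per_gb

tgw_cost(4, 1000)       # attachment hours dominate at small scale
peering_cost(1000)      # no processing surcharge
```

At four VPCs the peering mesh is only six connections, so the operational-complexity argument for TGW is weak and the per-GB surcharge decides it; at dozens of VPCs the mesh's quadratic growth flips the answer.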
Whether to attach each VPC individually to Direct Connect private virtual interfaces (which multiplies BGP prefix advertisements and breaches per-VIF route limits at this scale) or consolidate all VPCs through a Transit Gateway paired with a Direct Connect gateway (which aggregates CIDR advertisements into a single BGP session and enables transitive routing within service limits).
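The aggregation benefit can be demonstrated with Python's stdlib `ipaddress` module. The VPC ranges are hypothetical, and the per-VIF prefix quota (commonly cited as 100 routes per BGP session on a private VIF) should be checked against current AWS documentation:

```python
import ipaddress

# 32 contiguous /16 VPC CIDRs (hypothetical allocation) collapse into a
# single covering aggregate, shrinking what one BGP session must advertise.
vpc_cidrs = [ipaddress.ip_network(f"10.{i}.0.0/16") for i in range(32)]
aggregates = list(ipaddress.collapse_addresses(vpc_cidrs))

len(vpc_cidrs)        # 32 prefixes if each VPC advertises individually
len(aggregates)       # 1 aggregate (10.0.0.0/11) via a summarizing hub
```

Per-VPC VIFs advertise every prefix on every session; a TGW plus Direct Connect gateway lets a planned, contiguous addressing scheme summarize to a handful of aggregates well inside the route limit.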
Select VPC Traffic Mirroring over VPC Flow Logs when the diagnostic symptom requires inspection of actual packet headers (e.g., DF-bit set, observed MTU ceiling) to confirm jumbo-frame fragmentation, because Flow Logs capture only connection-level accept/reject metadata and cannot expose packet-size or IP-header fields.
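The header arithmetic an engineer applies to mirrored packets can be sketched as follows. Header sizes are the standard IPv4 and TCP minimums; the scenario values are invented for illustration:

```python
# Whether a segment traverses a path hop without fragmentation, and what
# happens when it cannot: the DF bit forces a drop plus an ICMP
# "fragmentation needed" message instead of on-path fragmentation.
IPV4_HEADER = 20   # minimum IPv4 header, no options
TCP_HEADER = 20    # minimum TCP header, no options

def fits(payload_bytes: int, path_mtu: int) -> bool:
    return IPV4_HEADER + TCP_HEADER + payload_bytes <= path_mtu

def outcome(payload_bytes: int, path_mtu: int, df_bit: bool) -> str:
    if fits(payload_bytes, path_mtu):
        return "delivered"
    return "icmp-frag-needed" if df_bit else "fragmented"

# A jumbo-frame sender (9001-byte MTU) crossing a 1500-byte path:
outcome(8800, 1500, df_bit=True)   # dropped with ICMP "frag needed"
```

Flow Logs would show only the five-tuple and an ACCEPT; the DF bit and observed frame sizes that make this diagnosis possible exist only in the mirrored headers.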
Whether VPC Flow Logs (connection metadata only) or VPC Traffic Mirroring (full packet payload delivery to an inspection target) satisfies a compliance mandate that explicitly requires payload-level content inspection, not just connection-level visibility.
Whether to enable MACsec on the dedicated Direct Connect connection (Layer 2, no tunneling penalty, Connectivity Association Key management) versus overlaying a Site-to-Site VPN tunnel (IPsec Layer 3, satisfies encryption compliance but introduces tunneling latency and certificate or PSK rotation overhead that the scenario explicitly prohibits).
Whether VPC Flow Logs (connection metadata only: source/destination IP, port, protocol, byte count) or VPC Traffic Mirroring (full raw packet payload copy to a monitoring target) satisfies the compliance mandate requiring payload-level inspection and pattern matching within live traffic.
Whether to layer a Site-to-Site VPN IPsec tunnel over the existing Direct Connect connection to deliver mandatory cryptographic encryption, rather than treating Direct Connect's dedicated private circuit as a compliant substitute for encryption.
Whether edge-layer origin-group failover (CloudFront, HTTP-error-code triggered, no DNS TTL dependency) or DNS-layer health-check failover (Route 53, TTL-bounded) satisfies a seconds-level RTO requirement for cacheable HTTP traffic served worldwide.
Choose CloudFront origin group failover over Route 53 DNS health-check failover when the recovery constraint is sub-second, HTTP-error-code-triggered rerouting for cacheable content served from a global edge layer.
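The TTL-bounded recovery window behind these two rules is worked arithmetic. The interval and threshold below are common Route 53 health-check defaults, and the record TTL is an assumed value:

```python
# Worst-case client-visible failover for DNS health-check failover:
# detection time (check interval x failure threshold) plus the record
# TTL that resolvers may still be honoring. Values assumed, not quoted.
def dns_failover_worst_case(interval_s: int = 30,
                            failure_threshold: int = 3,
                            ttl_s: int = 60) -> int:
    return interval_s * failure_threshold + ttl_s

dns_failover_worst_case()   # 30 x 3 + 60 = up to 150 s before all clients move
```

CloudFront origin-group failover has no such window: the distribution retries the secondary origin on the very next request that returns a matching HTTP error, which is why a seconds-level RTO for cacheable HTTP traffic points to the edge layer rather than DNS.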
Which load balancer type supports both a TLS security policy enforcing TLS 1.2 minimum AND native AWS WAF attachment for L7 inspection — and why the NLB-based alternative satisfies one constraint while breaking the other.
Whether AWS WAF and Shield Advanced satisfy an east-west intra-VPC stateful L7 inspection mandate, or whether AWS Network Firewall deployed in a dedicated inspection subnet with route-table steering is required.