AWS · SAA-C03

Performance Architecture — AWS Solutions Architect (SAA-C03)

6% of exam questions (12 of 200)

Latency source determines which acceleration layer applies

Architecture requirement: reduce latency for a globally distributed user base. Competing choices: CloudFront (HTTP/S caching at edge PoPs), Global Accelerator (TCP/UDP routing to optimal AWS endpoint), ElastiCache (in-memory query caching), DAX (microsecond DynamoDB reads). Deciding constraint: is latency from static content delivery, network routing inefficiency, database reads, or repeated computation? The question's latency description identifies which layer is the bottleneck. CloudFront and Global Accelerator are not interchangeable — one caches content, the other optimizes network path.
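The four-way decision above can be sketched as a lookup from latency source to acceleration layer. This is a minimal illustration of the decision rule as stated, not an official AWS taxonomy; the function and key names are assumptions.

```python
# Hypothetical mapping from the question's stated latency source to the
# acceleration layer described above. Labels are illustrative assumptions.
ACCELERATION_LAYER = {
    "static_content_delivery": "CloudFront",   # HTTP/S caching at edge PoPs
    "network_routing": "Global Accelerator",   # TCP/UDP routing to optimal endpoint
    "database_reads": "ElastiCache",           # in-memory query caching
    "dynamodb_reads": "DAX",                   # microsecond DynamoDB reads
}

def pick_acceleration_layer(latency_source: str) -> str:
    """Return the service whose layer matches the stated latency source."""
    try:
        return ACCELERATION_LAYER[latency_source]
    except KeyError:
        raise ValueError(f"unknown latency source: {latency_source!r}")
```

Note that the two most commonly confused entries resolve differently: content-delivery latency maps to CloudFront, while routing-path latency maps to Global Accelerator.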

What This Pattern Tests

The exam presents a performance requirement and tests architectural pattern selection. On MLS-C01, SageMaker endpoint auto-scaling adjusts instance count based on the InvocationsPerInstance metric, while multi-model endpoints serve multiple models from a single endpoint to reduce cost. On AIF-C01, Bedrock provisioned throughput reserves model capacity for predictable latency, while on-demand throughput suits variable workloads. On DEA-C01, Glue job performance depends on DPU allocation: too few DPUs bottleneck Spark shuffles, too many waste money on small datasets. Redshift Serverless scales RPUs automatically, while provisioned clusters require a manual resize. The trap is scaling compute when the bottleneck is data shuffling, or provisioning throughput for a bursty workload that should use on-demand.
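The two traps named above can be expressed as a small checker: it warns when a proposed fix matches a trap pattern, and stays silent otherwise. The function name, action labels, and workload fields are illustrative assumptions, not exam or AWS terminology.

```python
def flag_trap(action, workload):
    """Return a warning string when a proposed fix matches one of the
    traps described above, or None when no trap is detected.
    `action` and `workload` shapes are assumptions for illustration."""
    # Trap 1: adding compute (instances/DPUs) when Spark shuffles are the bottleneck.
    if action == "add_compute" and workload.get("bottleneck") == "data_shuffle":
        return "trap: scaling compute while the bottleneck is data shuffling"
    # Trap 2: reserving provisioned throughput for a bursty workload.
    if action == "provision_throughput" and workload.get("pattern") == "bursty":
        return "trap: provisioned throughput for a bursty workload; use on-demand"
    return None
```

A steady, predictable workload passes the provisioned-throughput check; a bursty one does not.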

Decision Axis

Bottleneck identification before scaling: compute-bound = more instances/DPUs, I/O-bound = better partitioning, latency-bound = caching or provisioned capacity.
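The axis above reduces to a three-entry map from bottleneck class to remedy, applied only after the bottleneck is identified. A minimal sketch, with assumed labels:

```python
# Decision axis as a lookup: identify the bottleneck class first, then
# apply the matching remedy. Keys and values follow the rule stated above.
REMEDY = {
    "compute_bound": "more instances / DPUs",
    "io_bound": "better partitioning",
    "latency_bound": "caching or provisioned capacity",
}

def remedy_for(bottleneck: str) -> str:
    """Return the remedy for a known bottleneck class; otherwise advise profiling."""
    return REMEDY.get(bottleneck, "profile first; bottleneck class unknown")
```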


Decision Rules

Whether to choose Amazon FSx for Lustre or Amazon EFS for a compute-intensive HPC cluster, where the explicit sustained-throughput magnitude and latency SLA, not the shared-access requirement alone, determine the correct service.

Services: Amazon FSx for Lustre · Amazon Elastic File System (Amazon EFS) · Amazon EC2

Whether to use ALB (Layer 7 HTTP/S with native path-based routing) or NLB (Layer 4 high-throughput TCP) for an HTTPS microservices workload where URL path-based target group routing is a hard requirement.

Services: Elastic Load Balancing · Amazon EC2 Auto Scaling

Whether NLB's native static IP per AZ satisfies both the TCP protocol and partner firewall whitelisting constraints without layering additional services or provisioning virtual appliance infrastructure, making NLB the lowest-cost correct fit versus GWLB (appliance-chaining overhead with no inspection requirement) or ALB (Layer 7 HTTP/S only, no native static IP).

Services: Elastic Load Balancing · Amazon Virtual Private Cloud (Amazon VPC)
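The ALB/NLB rules in the last two decision points can be sketched as one chooser: path-based routing over HTTP/S forces ALB, while a TCP workload or a static-IP whitelisting constraint forces NLB. Parameter names are assumptions for illustration, and this covers only the two scenarios above, not the full Elastic Load Balancing feature matrix.

```python
def pick_load_balancer(needs_path_routing, needs_static_ip, protocol):
    """Sketch of the two ELB decision rules above (assumed parameter names).
    - URL path-based target group routing over HTTP/S -> ALB (Layer 7).
    - TCP/UDP protocol, or static IP per AZ for firewall whitelisting
      with no Layer 7 inspection need -> NLB (Layer 4)."""
    if protocol in ("HTTP", "HTTPS") and needs_path_routing:
        return "ALB"
    if protocol in ("TCP", "UDP") or needs_static_ip:
        return "NLB"
    return "ALB"  # default for plain HTTP/S workloads with no special constraint
```

For the scenarios above: an HTTPS microservices workload with path-based routing resolves to ALB, and a TCP workload behind a partner firewall whitelist resolves to NLB, with no GWLB appliance chain needed.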

Domain Coverage

Design High-Performing Architectures

Difficulty Breakdown

Easy: 4 · Medium: 4 · Hard: 4