Multi-Service Tradeoff — AWS Cloud Practitioner (CLF-C02)
Serverless, containers, or queues — the constraint decides
Lambda, ECS, EKS, and SQS each solve compute or decoupling problems at different operational layers. The deciding constraint is usually overhead tolerance, scaling model, or whether work is event-driven versus persistent. EKS signals container orchestration at scale with existing Kubernetes investment; Lambda signals stateless, event-triggered execution with zero infrastructure management; SQS signals durable decoupling between producers and consumers.
What This Pattern Tests
The exam gives you a decoupling requirement and tests whether you pick the right messaging service. SQS is point-to-point with at-least-once delivery (Standard) or exactly-once processing (FIFO, up to 3,000 msg/s with batching). SNS is pub/sub fan-out to multiple subscribers. EventBridge is content-based routing with a schema registry and dozens of AWS services as event sources. The trap is choosing SQS for fan-out (use SNS) or SNS for ordered processing (use SQS FIFO). DynamoDB vs. Aurora vs. ElastiCache follows the same pattern: key-value at any scale vs. relational joins vs. microsecond reads from memory.
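The routing logic above can be sketched as a minimal chooser. The service names come from the text; the function itself and its pattern labels are illustrative, not an AWS API:

```python
# Illustrative chooser for the messaging decision described above:
# point-to-point -> SQS, fan-out -> SNS, content routing -> EventBridge.

def choose_messaging_service(pattern: str, ordered: bool = False) -> str:
    """Return the AWS messaging service matching a communication pattern."""
    if pattern == "point-to-point":
        # FIFO queues add ordering and exactly-once processing.
        return "SQS FIFO" if ordered else "SQS Standard"
    if pattern == "fan-out":
        return "SNS"
    if pattern == "content-routing":
        return "EventBridge"
    raise ValueError(f"unknown pattern: {pattern}")

print(choose_messaging_service("fan-out"))                        # SNS
print(choose_messaging_service("point-to-point", ordered=True))   # SQS FIFO
```

Encoding the rule as a function makes the trap explicit: fan-out never maps to SQS, and ordering never maps to SNS.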
Decision Axis
Communication pattern (point-to-point vs. fan-out vs. content routing) and data access pattern (key-value vs. relational vs. cache) determine the service.
Decision Rules
When the dominant constraint is zero upfront infrastructure cost combined with low-latency global content delivery, prefer the managed edge-distribution service (CloudFront) over a DNS-routing service (Route 53), because content caching at edge locations—not DNS resolution—satisfies both the capex-to-opex and latency requirements simultaneously.
Decide whether the scenario demands an active elasticity mechanism that right-sizes capacity in real time (Auto Scaling), or a cost-visibility tool that reports on spending (Cost Explorer, Budgets) or surfaces static recommendations (Trusted Advisor). The correct answer must operationalize the design principle, not merely expose or report on data about it.
Identify 'trading capital expense for variable expense' as the AWS Cloud benefit triggered by an upfront-cost elimination requirement, distinguishing it from a service-level capability and from responsibilities that remain with the customer regardless of automation.
Select the Cost Optimization pillar's elasticity principle (AWS Auto Scaling) as the governing answer, rather than conflating a purchasing-commitment option (Reserved Instances) or a static right-sizing tool (AWS Compute Optimizer) with the pillar's actual design intent for variable-demand workloads.
Identify capex-to-opex shift as the correct cloud benefit and reject distractors that either name a service feature (elasticity via Auto Scaling), misplace cost responsibility by implying AWS absorbs the customer's usage charges, or redirect to an unrelated benefit (global reach).
Moving from EC2 to RDS transfers OS and database engine patching responsibility to AWS but never transfers customer accountability for encryption key management, IAM access policy authorship, or data classification.
Determine which operational responsibility (OS and DB engine patching) transfers to AWS when moving from EC2 to RDS, and confirm that encryption key custody via KMS customer-managed keys remains a customer responsibility even on a fully managed service.
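The shared-responsibility shift in the two rules above can be made concrete with a small map. The duty names are paraphrased from the text; the dictionaries are illustrative, not an official AWS matrix:

```python
# Illustrative responsibility map for the EC2 -> RDS move described above.
# On self-managed EC2, every listed duty belongs to the customer.
EC2_SELF_MANAGED = {
    "os_patching": "customer",
    "db_engine_patching": "customer",
    "kms_key_management": "customer",
    "iam_access_policies": "customer",
    "data_classification": "customer",
}

# RDS transfers only the patching duties; data-layer controls stay put.
RDS_MANAGED = dict(EC2_SELF_MANAGED,
                   os_patching="aws",
                   db_engine_patching="aws")

transferred = [d for d in EC2_SELF_MANAGED
               if EC2_SELF_MANAGED[d] != RDS_MANAGED[d]]
retained = [d for d in EC2_SELF_MANAGED if RDS_MANAGED[d] == "customer"]

print(transferred)  # ['os_patching', 'db_engine_patching']
print(retained)     # key management, IAM policies, data classification
```

The diff between the two dictionaries is exactly what the exam probes: patching moves, key custody and access policy authorship never do.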
Which specific operational responsibilities transfer to AWS when moving a database from EC2 to RDS, and which data-layer controls remain permanently with the customer?
Whether migrating from EC2 to Lambda transfers IAM execution role configuration and function permission boundaries to AWS, or leaves them as a permanent customer responsibility.
Moving to a fully managed storage service offloads infrastructure duties to AWS but never transfers data-layer access control — the customer must configure bucket policies and IAM permissions regardless of S3's managed status.
Moving to Amazon RDS transfers infrastructure and patching duties to AWS but never transfers ownership of KMS key policies, key rotation configuration, or key access grants—those remain exclusively customer responsibilities regardless of how managed the compute or database layer is.
Does adopting Lambda's pay-per-invocation pricing model transfer customer-managed KMS key rotation and policy management to AWS, or does the customer retain that accountability regardless of compute abstraction level?
When the workload is dynamic and non-cacheable, choose AWS Global Accelerator over Amazon CloudFront, because CloudFront's latency advantage comes primarily from cache hits at edge locations and offers little benefit for unique per-request responses.
When a workload's single-invocation runtime exceeds Lambda's 15-minute maximum execution limit AND no-server-management is required, AWS Fargate is the correct serverless compute choice over Lambda or EC2.
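The compute rule above reduces to two checks. The 15-minute cap is from the text; the function and its argument names are an illustrative sketch:

```python
# Sketch of the serverless compute decision above: Lambda caps a single
# invocation at 15 minutes; longer jobs that still forbid server
# management go to Fargate.

LAMBDA_MAX_MINUTES = 15

def choose_serverless_compute(runtime_minutes: float,
                              no_server_management: bool) -> str:
    """Pick a compute service from runtime length and ops constraints."""
    if not no_server_management:
        return "EC2"  # self-managed servers are acceptable
    if runtime_minutes <= LAMBDA_MAX_MINUTES:
        return "Lambda"
    return "Fargate"  # serverless containers, no 15-minute cap

print(choose_serverless_compute(45, no_server_management=True))  # Fargate
```

Note the order of the checks: the no-server-management constraint is evaluated first, because it eliminates EC2 before runtime length is even considered.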
When Kubernetes compatibility is absent from the requirements, select ECS over EKS to minimize orchestration overhead; reject Fargate as the orchestration answer because it is a compute engine that runs beneath ECS or EKS, not a standalone scheduler.
Whether a team that wants to upload application code and delegate all infrastructure management to AWS should choose a managed PaaS (Elastic Beanstalk) versus an event-driven serverless compute service (Lambda) or an IaaS option (EC2).
When the workload has variable per-item attributes, a key-value access pattern, and sub-millisecond latency at scale, choose a managed NoSQL store (DynamoDB) rather than a managed relational database (RDS) whose fixed-schema and join model is mismatched to the data model.
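The "variable per-item attributes" phrase above is easiest to see in data. The item shapes below are hypothetical, but they show why a fixed relational schema is mismatched:

```python
# Two items in the same DynamoDB-style table need not share any schema
# beyond the key. Attribute names here are hypothetical examples.
items = [
    {"pk": "user#1", "name": "Ana", "loyalty_tier": "gold"},
    {"pk": "user#2", "name": "Ben", "cart": ["sku-1", "sku-2"]},
]

# A relational table would force one fixed column set on both rows
# (NULL-padding the attributes each row lacks); a key-value item
# carries only what it needs.
shared_attributes = set(items[0]) & set(items[1])
print(sorted(shared_attributes))  # ['name', 'pk']
```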
When a scenario requires hybrid connectivity with encryption-in-transit as an inherent service property and cost minimization as the dominant constraint, Site-to-Site VPN wins over Direct Connect because IPsec encryption is built in and there are no dedicated-port charges; Direct Connect fails because it neither encrypts by default nor minimizes recurring cost.
Does the scenario require customer-configured edge caching to offload origin load (CloudFront) or AWS-managed backbone routing that accelerates network paths without caching (Global Accelerator)?
Which party — AWS or the customer — is responsible for configuring the S3 Lifecycle policy that transitions objects to S3 Glacier after 60 days of inactivity?
Select AWS Backup with a defined backup plan and vault lifecycle rule when the retention scope spans EBS and EFS, because S3 lifecycle policies only govern object transitions inside S3 buckets and cannot fulfill cross-service backup retention obligations.
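The lifecycle question above hinges on the customer configuring the rule themselves. A sketch of the JSON shape S3 accepts for the 60-day Glacier transition follows; the rule ID is made up, and the empty prefix filter (apply to all objects) is an assumption of the example:

```python
import json

# Customer-authored S3 lifecycle configuration: transition objects to
# Glacier after 60 days. AWS runs the transition, but writing this rule
# is never AWS's job.
lifecycle = {
    "Rules": [
        {
            "ID": "archive-inactive-objects",   # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},           # apply to every object
            "Transitions": [
                {"Days": 60, "StorageClass": "GLACIER"}
            ],
        }
    ]
}

print(json.dumps(lifecycle, indent=2))
```

This same document also illustrates the scope limit in the AWS Backup rule above: it governs object transitions inside one bucket and cannot express retention for EBS or EFS.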
Choose Amazon Comprehend over Amazon SageMaker AI when the task is a standard NLP operation and the team must not own model development or training responsibilities.
Whether Amazon Athena's serverless model eliminates the customer's obligation to configure IAM policies, S3 bucket permissions, and encryption settings, or whether those controls remain permanently customer-owned regardless of the managed-service boundary.
Choose between a durable pull-based queue (SQS) that retains each message until a consumer confirms processing and deletes it, versus a push-based notification service (SNS) that delivers to current subscribers without retaining messages, given the dominant constraint of resilient, worker-paced async decoupling.
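The durability difference above can be modeled with two toy classes. This is purely illustrative, not the SQS or SNS API:

```python
# Toy model of the decision above: a queue retains a message until the
# consumer confirms (deletes) it; a topic pushes to whoever is
# subscribed right now and keeps nothing.

class Queue:
    """SQS-like: pull-based, retains until explicitly deleted."""
    def __init__(self):
        self.messages = []

    def send(self, msg):
        self.messages.append(msg)

    def receive(self):
        return self.messages[0] if self.messages else None

    def delete(self, msg):            # consumer confirms processing
        self.messages.remove(msg)

class Topic:
    """SNS-like: push-based, no retention after delivery."""
    def __init__(self):
        self.subscribers = []

    def publish(self, msg):
        for deliver in self.subscribers:
            deliver(msg)              # pushed immediately, never stored

q = Queue()
q.send("order-1")
assert q.receive() == "order-1"       # still there until the worker deletes it
q.delete("order-1")
```

If the worker crashes before calling `delete`, the queued message survives for a retry; the topic has nothing to redeliver, which is why worker-paced decoupling points to SQS.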
Which service supports push-based fanout to heterogeneous subscriber types (email, Lambda, HTTPS) in a single publish operation versus an email-only delivery channel?
When the requirement is to isolate per-request latency across distributed service call chains — not aggregate metrics — select AWS X-Ray because it captures trace segments at each service hop; CloudWatch cannot reconstruct request-level call graphs across service boundaries.
When access scope is browser-only with no need for a persistent desktop, WorkSpaces Secure Browser is the correct fit; choosing WorkSpaces imposes full-desktop monthly per-user pricing for a use case that requires only an isolated browser session.