AWS · MLS-C01

Cost Optimization — AWS Machine Learning (MLS-C01)

10% of exam questions (20 of 200)

Utilization pattern determines commitment tier, not instance size.

Savings Plans, Reserved Instances, Spot, and Cost Explorer each address a different dimension of the cost problem. The exam will give you a workload profile — steady-state training, bursty inference, exploratory notebooks — and expect you to match commitment level to utilization certainty. Spot absorbs interruptible training. Savings Plans cover predictable inference capacity. Cost Explorer diagnoses; it doesn't reduce spend. Map the tool to the utilization shape before selecting.
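The mapping above can be sketched as a small decision function. This is an illustrative study aid, not an AWS API; the threshold and function name are assumptions chosen to mirror the text's guidance (Spot for interruptible work, Savings Plans for predictable capacity, On-Demand otherwise).

```python
# Hypothetical helper mapping a workload shape to the pricing model the
# exam pattern expects. Names and the 0.7 threshold are illustrative only.

def pricing_model(fault_tolerant: bool, steady_utilization: float) -> str:
    """Return the commitment tier implied by the workload shape.

    fault_tolerant     -- can the job survive an interruption and resume?
    steady_utilization -- fraction of the month the capacity is actually busy.
    """
    if fault_tolerant:
        return "Spot"           # interruptible training absorbs the discount
    if steady_utilization >= 0.7:
        return "Savings Plan"   # predictable inference capacity
    return "On-Demand"          # bursty or exploratory: pay as you go

print(pricing_model(fault_tolerant=True, steady_utilization=0.2))   # Spot
print(pricing_model(fault_tolerant=False, steady_utilization=0.9))  # Savings Plan
```

Note that Cost Explorer appears nowhere in the mapping: it identifies where spend goes, but the workload shape still decides the commitment tier.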

What This Pattern Tests

The exam presents a running workload and asks you to reduce costs. Spot Instances save 60-90% but require fault tolerance. Savings Plans save 20-40% with 1-year or 3-year commitment to a consistent compute spend (flexible across instance families with Compute Savings Plans). Reserved Instances save similarly but lock to specific instance types and regions. Graviton (ARM) instances offer ~20% better price-performance than x86. The trap is recommending Spot for a latency-sensitive web API (interruptions cause errors) or Reserved Instances for a workload that runs 2 hours per day (break-even requires ~40% utilization).
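The break-even logic in the Reserved Instance trap is simple arithmetic worth working once. The rates below are illustrative placeholders, not published AWS pricing: a commitment that bills every hour only beats On-Demand when actual utilization exceeds the ratio of the two hourly rates.

```python
# Back-of-envelope break-even for a commitment (RI/Savings Plan) versus
# On-Demand. Rates are illustrative placeholders, not published AWS prices.

on_demand_hourly = 1.00        # $/hr, billed only while running
committed_hourly = 0.40        # $/hr effective, billed for every hour

# Commitment bills 24/7, so it wins only above this utilization fraction:
break_even_utilization = committed_hourly / on_demand_hourly   # 0.40

hours_per_day = 2
utilization = hours_per_day / 24   # ~0.083, far below break-even

print(utilization < break_even_utilization)   # True: stay On-Demand
```

At 2 hours per day the workload sits near 8% utilization, well under the ~40% break-even, which is why the commitment recommendation is a trap there.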

Decision Axis

Workload characteristics (fault tolerance, utilization pattern, flexibility needs) determine which pricing model applies — not just the discount percentage.

Decision Rules

Whether to replace a custom SageMaker ASR training pipeline with Amazon Transcribe for a commodity speech-to-text workload under a hard 40% cost-per-inference reduction mandate, i.e., applying the build-vs-buy threshold as the governing constraint.

Amazon Transcribe · Amazon SageMaker · AWS Batch

Whether to replace a custom SageMaker-hosted ASR model with Amazon Transcribe for a commodity standard-English speech-to-text workload, given that the cost-per-inference reduction mandate and variable call volume make the always-on custom endpoint unjustifiable.

Amazon Transcribe · Amazon SageMaker

When the speech recognition task covers standard language and does not require a domain-differentiated model, replace the custom SageMaker ASR endpoint with Amazon Transcribe batch transcription to eliminate always-on instance-hour billing, data-labeling cost, and retraining overhead — defaulting to the managed AI service wherever the commodity threshold is met.

Amazon SageMaker · Amazon Transcribe

Whether to use Amazon Comprehend's managed API for commodity sentiment and key-phrase extraction or build and host a custom SageMaker NLP model, given the cost-per-inference mandate, standard English text, and zero dedicated ML operations capacity.

Amazon SageMaker · Amazon Comprehend

Whether to right-size or reserve the provisioned SageMaker endpoint versus replacing the custom TTS model with Amazon Polly's pay-per-character managed service, eliminating idle capacity waste across twenty hours per day and removing ongoing model maintenance costs.

Amazon SageMaker · Amazon Polly
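The managed-service rules above all reduce to the same build-vs-buy arithmetic: an always-on endpoint bills around the clock, while a pay-per-use service bills only for work done. A minimal sketch, with all prices as illustrative placeholders rather than published AWS rates:

```python
# Hypothetical build-vs-buy comparison for the pay-per-character scenario.
# All figures are illustrative placeholders, not real AWS pricing.

endpoint_hourly = 0.50                       # custom endpoint, billed 24/7
monthly_endpoint = endpoint_hourly * 24 * 30  # $360/mo regardless of traffic

price_per_million_chars = 4.00               # managed pay-per-use service
chars_per_month = 20_000_000
monthly_managed = price_per_million_chars * chars_per_month / 1_000_000  # $80

print(monthly_managed < monthly_endpoint)    # True: managed service wins here
```

The endpoint cost is fixed whether it serves four hours or twenty-four, so the idle twenty hours per day are pure waste; the managed service's bill scales to zero with the workload, which is what makes the commodity threshold decisive.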

Domain Coverage

Machine Learning Implementation and Operations

Difficulty Breakdown

Medium: 4 · Hard: 4 · Expert: 12