Security And Governance Boundary — AWS AI Practitioner (AIF-C01)
Detection and Access Control Are Not the Same Scope
When a scenario lists IAM, IAM Identity Center, GuardDuty, and Security Hub together, candidates conflate prevention with detection. IAM Identity Center governs federated access across multiple accounts; IAM enforces permissions within one. GuardDuty detects anomalous behavior; Security Hub aggregates findings from multiple detective services. Match the verb in the scenario to the service function: "centrally manage access" signals Identity Center; "detect anomalous API calls" signals GuardDuty. The service name alone won't get you there.
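The verb-to-service heuristic above can be sketched as a lookup table. This is a study aid, not an official taxonomy: the phrase keys and the helper name are illustrative assumptions.

```python
# Study-aid sketch: match the scenario's key verb phrase to the AWS
# service it signals. Keys are illustrative, not AWS terminology.
VERB_TO_SERVICE = {
    "centrally manage access": "IAM Identity Center",  # federated, multi-account
    "grant permissions in one account": "IAM",         # single-account principals
    "detect anomalous api calls": "GuardDuty",         # threat detection
    "aggregate security findings": "Security Hub",     # cross-service aggregation
}

def signal_service(phrase: str) -> str:
    """Return the service a scenario phrase points to, or a prompt to re-read."""
    return VERB_TO_SERVICE.get(phrase.lower(), "re-read the scenario verb")

print(signal_service("Centrally manage access"))     # IAM Identity Center
print(signal_service("Detect anomalous API calls"))  # GuardDuty
```

The point of the table is the exam habit it encodes: key off the verb, not the service name.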
What This Pattern Tests
The exam describes a security requirement and tests which access control layer applies. IAM policies attach to principals (users, roles). Resource policies attach to resources (S3 bucket policies, KMS key policies). SCPs restrict what an entire AWS account can do. Permission boundaries cap what an IAM entity can be granted. The trap is applying EC2-level security group thinking to Lambda (which uses IAM execution roles), or writing an IAM policy when an SCP is needed for account-wide restriction. S3 Block Public Access, VPC endpoint policies, and Organizations tag policies each add another control plane the exam expects you to distinguish.
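The layering above can be made concrete with a deliberately simplified model of policy evaluation: an identity policy grants to a principal, while an SCP caps the whole account, so an action succeeds only if both layers allow it and an explicit deny wins at either layer. This sketch ignores resource policies, permission boundaries, conditions, and NotAction; it is not the real AWS evaluation engine.

```python
# Simplified sketch of AWS policy evaluation (assumption: identity
# policy AND SCP must both allow; explicit Deny always wins).
def matches(pattern: str, action: str) -> bool:
    """Match an action against a policy Action pattern ('*' wildcard suffix)."""
    if pattern == "*":
        return True
    if pattern.endswith("*"):
        return action.startswith(pattern[:-1])
    return pattern == action

def is_allowed(action: str, identity_policy: dict, scp: dict) -> bool:
    """True only if both the identity policy and the SCP resolve to Allow."""
    def decision(policy: dict) -> str:
        allowed = False
        for stmt in policy["Statement"]:
            acts = stmt["Action"]
            acts = [acts] if isinstance(acts, str) else acts
            if any(matches(a, action) for a in acts):
                if stmt["Effect"] == "Deny":
                    return "Deny"  # explicit deny always wins
                allowed = True
        return "Allow" if allowed else "ImplicitDeny"
    return decision(identity_policy) == "Allow" and decision(scp) == "Allow"

# Identity policy: the principal is granted everything.
identity_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
}

# SCP: the account permits everything except leaving the organization.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
        {"Effect": "Deny", "Action": "organizations:LeaveOrganization",
         "Resource": "*"},
    ],
}

print(is_allowed("s3:GetObject", identity_policy, scp))                     # True
print(is_allowed("organizations:LeaveOrganization", identity_policy, scp))  # False
```

The second call is the exam point: the identity policy allows the action, but the SCP caps the account, so no IAM policy edit can re-enable it.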
Decision Axis
Control scope determines the mechanism: principal-level (IAM), resource-level (resource policies), account-level (SCPs), or network-level (security groups, NACLs).
Decision Rules
When a regulated GenAI deployment requires output-level human validation with a documented decision trail, choose the service that intercepts model outputs and manages human review workflows rather than one that captures operational metrics or aggregates infrastructure-level evidence.
Choose AWS KMS with customer-managed keys when the requirement is encryption key control at the infrastructure layer for a Bedrock-based GenAI application; model-level content safety controls (Bedrock Guardrails) operate at a different layer and cannot satisfy it.
Decide whether the stated auditability mandate is satisfied by a human-in-the-loop review mechanism that produces a durable audit trail (Amazon A2I) versus a model-level content control that filters outputs but generates no human-review record (Amazon Bedrock Guardrails or prompt engineering).
Under a HIPAA mandate for customer-managed key control, select AWS KMS customer-managed keys to enforce the encryption key ownership and audit boundary for patient data at rest in a Bedrock-hosted application.
Decide whether to apply a human-in-the-loop governance control (Amazon A2I) versus an operational monitoring or logging tool (Amazon CloudWatch) to satisfy a regulatory requirement for per-output human auditability of GenAI decisions.
Decide whether customer-managed KMS keys (BYOK) satisfy a data-encryption sovereignty mandate versus services that detect or log compliance signals without controlling the cryptographic boundary of Bedrock invocation payloads.
Determine whether PII prevention belongs at the AI inference output layer (Bedrock Guardrails) or the data-storage discovery layer (Macie), and select the service that enforces the policy at the correct layer.
Select the control layer — application-layer AI content filtering versus infrastructure-layer network isolation — that directly enforces policy on model inputs and outputs rather than on network traffic routing.
Apply an AI application-layer control (Amazon Bedrock Guardrails) rather than an infrastructure-layer control when the threat is prompt injection — a semantic content-manipulation risk that network routing, encryption, and IAM policy documents cannot intercept.
Select the control that operates at the AI application output layer to filter PII from live inference responses, not a storage-layer detection service that acts on data at rest.
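The decision rules above reduce to one question: which layer does the requirement name? A hypothetical helper condenses that mapping; the requirement keys and function name are illustrative assumptions, not AWS terminology.

```python
# Hypothetical decision helper: pick the control by the layer the
# requirement names. Keys are illustrative study-note phrasings.
LAYER_TO_CONTROL = {
    "human review with audit trail": "Amazon A2I",
    "filter pii in live model outputs": "Amazon Bedrock Guardrails",
    "discover pii in data at rest": "Amazon Macie",
    "customer-managed encryption keys": "AWS KMS (customer-managed keys)",
    "operational metrics and logs": "Amazon CloudWatch",
}

def pick_control(requirement: str) -> str:
    """Return the control matching the requirement's layer, or a prompt."""
    return LAYER_TO_CONTROL.get(requirement.lower(), "identify the layer first")

print(pick_control("Filter PII in live model outputs"))  # Amazon Bedrock Guardrails
print(pick_control("Discover PII in data at rest"))      # Amazon Macie
```

Note the asymmetry the exam targets: Guardrails acts on inference traffic, Macie on stored data; neither substitutes for the other, and neither produces the human-review record that A2I does.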