GCP PCA Trap Reference

Commonly Confused Services on the GCP PCA

The GCP Professional Cloud Architect exam is primarily shaped by business requirements, case-study constraints, and organizational fit — most questions require reasoning about trade-offs, not service identification alone. This page covers the service pairs that genuinely require quick disambiguation as part of that broader reasoning.

Each section below gives you the deciding signal, a quick check, and why the wrong answer keeps looking right.

Cloud Pub/Sub vs. Cloud Tasks vs. Cloud Scheduler
#1

Real-time event streaming vs. managed task queue vs. cron scheduling

All three trigger asynchronous processing, so candidates blur messaging with scheduling.

Deciding signal

Cloud Pub/Sub is a global, real-time messaging service for event-driven architectures — publishers push messages and one or more subscribers receive them asynchronously. It is the right answer for decoupling services, streaming event data, and fan-out to multiple consumers. Cloud Tasks manages a queue of work to be executed by HTTP endpoints or Cloud Run services — you explicitly enqueue tasks and control retry, rate limiting, and deduplication. It is the right answer for managing a backlog of discrete units of work with controlled execution. Cloud Scheduler is a fully managed cron job service — it sends HTTP/HTTPS requests or Pub/Sub messages on a defined schedule (every hour, at midnight, etc.). When the scenario involves triggering something on a time schedule, Scheduler. Managing a queue of tasks with rate limiting, Tasks. Real-time event messaging between services, Pub/Sub.

Quick check

Is this real-time messaging between services (Pub/Sub), managing a queue of tasks with controlled rate and retry (Cloud Tasks), or triggering an action on a time-based schedule (Cloud Scheduler)?

Why it looks right

Pub/Sub is the most prominent async messaging service, so candidates default to it in task-queue scenarios. Cloud Tasks is the correct answer when the scenario describes a work queue with explicit enqueue-and-execute semantics, not event broadcasting.
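The deciding signal compresses into a toy chooser — a memorization sketch with invented function and key names, not any Google API:

```python
# Toy mnemonic for the async-trigger trap (illustrative only, not a GCP API).
def choose_async_service(scenario: dict) -> str:
    if scenario.get("time_schedule"):    # "every hour", "at midnight"
        return "Cloud Scheduler"
    if scenario.get("explicit_queue"):   # enqueue work; control rate, retry, dedup
        return "Cloud Tasks"
    return "Cloud Pub/Sub"               # real-time event fan-out between services
```

The order matters: a time trigger is the strongest signal, queue semantics next, and event messaging is the default when neither appears.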

Cloud Dataflow vs. Cloud Dataproc vs. BigQuery
#2

Serverless stream/batch ETL vs. managed Hadoop/Spark cluster vs. analytics warehouse

All three process large datasets and appear in data pipeline questions, and candidates often fail to distinguish the processing model from the query model.

Deciding signal

Cloud Dataflow is a fully managed, serverless data processing service based on Apache Beam — it processes both streaming and batch data without managing infrastructure. It is the right answer for ETL pipelines, streaming analytics, and data transformations where serverless management is preferred. Cloud Dataproc is a managed Spark and Hadoop service — you provision clusters and run Spark jobs, Hive queries, or Pig scripts. It is the right answer when the workload requires existing Spark or Hadoop code, libraries, or specific configuration not supported by Dataflow. BigQuery is a serverless analytics warehouse — it runs SQL queries over large stored datasets and is not an ETL or stream processing tool. When the scenario involves SQL analytics on stored data, BigQuery. ETL transformations on streaming or batch data, Dataflow. Custom Spark or Hadoop workloads, Dataproc.

Quick check

Is this SQL analytics on stored data (BigQuery), serverless ETL on streaming or batch data (Dataflow), or running custom Spark/Hadoop jobs on a managed cluster (Dataproc)?

Why it looks right

Dataflow and Dataproc both run distributed data processing and candidates conflate them. Dataproc is correct when existing Apache Spark or Hadoop code is explicitly mentioned — Dataflow cannot run arbitrary Spark code.
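The same disambiguation can be written as a first-match signal table — invented signal names, purely a study aid:

```python
# First matching signal wins; an explicit Spark/Hadoop mention overrides the
# rest (toy lookup, not an API).
DATA_SIGNALS = [
    ("existing_spark_or_hadoop_code", "Cloud Dataproc"),
    ("etl_on_stream_or_batch",        "Cloud Dataflow"),
    ("sql_on_stored_data",            "BigQuery"),
]

def choose_data_service(signals):
    for signal, service in DATA_SIGNALS:
        if signal in signals:
            return service
    return None  # scenario underspecified; reread for the deciding signal
```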

Cloud Spanner vs. Cloud SQL vs. Firestore vs. Cloud Bigtable
#3

Global relational vs. regional managed SQL vs. document NoSQL vs. wide-column NoSQL

All four are managed database services and candidates apply Cloud SQL as the default relational answer.

Deciding signal

Cloud SQL is a fully managed MySQL, PostgreSQL, or SQL Server database in a single region — standard relational semantics with vertical scaling limits on the primary instance. Best for existing relational workloads with regional scope. Cloud Spanner is a globally distributed, horizontally scalable relational database with strong consistency across regions and ACID transactions — the right answer when the scenario requires relational semantics at global scale that Cloud SQL cannot provide. Firestore is a serverless document database with real-time sync and offline support — best for mobile and web applications with document-based data models. Cloud Bigtable is a NoSQL wide-column store optimized for time-series, IoT, and analytical workloads requiring high write throughput and low latency at petabyte scale.

Quick check

Is this a regional relational database (Cloud SQL), globally distributed relational at scale (Spanner), document-based NoSQL for mobile/web (Firestore), or wide-column NoSQL for time-series or IoT at massive scale (Bigtable)?

Why it looks right

Cloud SQL is the default relational answer. Cloud Spanner is specifically correct when the scenario explicitly describes global distribution, multi-region writes, or scale that exceeds Cloud SQL — not for standard single-region relational workloads.
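One way to memorize the four-way split is as two questions — data model first, then scale or workload shape — sketched below with invented parameter names (a study aid, not an API):

```python
# Toy database chooser: data model first, then scale/workload shape.
def choose_database(relational, global_scale=False, workload=""):
    if relational:
        # Relational semantics: Spanner only when global scale is explicit.
        return "Cloud Spanner" if global_scale else "Cloud SQL"
    if workload in ("mobile", "web"):   # document model, real-time sync, offline
        return "Firestore"
    return "Cloud Bigtable"             # wide-column: time-series, IoT, high writes
```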

Cloud Armor vs. Cloud CDN vs. Cloud Load Balancing
#4

WAF and DDoS protection vs. content caching vs. traffic distribution

All three sit in front of backend services, so candidates blur security with distribution and caching.

Deciding signal

Cloud Load Balancing distributes traffic across backend instances — it is the foundational layer for availability and scaling. Cloud CDN caches content at Google's edge points of presence (POPs) — it reduces latency and origin load for cacheable static content (images, JavaScript, video). Cloud Armor is a WAF and DDoS protection service that integrates with Cloud Load Balancing — it applies security policies (IP allow/deny lists, rate limiting, OWASP rule groups, adaptive DDoS protection) to HTTP/HTTPS load-balanced traffic. Armor does not distribute traffic; CDN does not protect against attacks. The typical request path is: Cloud Load Balancing → Cloud Armor → Cloud CDN → backend. When a scenario involves blocking web attacks or DDoS, Cloud Armor. Caching static content, Cloud CDN. Distributing requests to backends, Cloud Load Balancing.

Quick check

Is this distributing traffic to backends (Cloud Load Balancing), caching static content at edge (Cloud CDN), or applying WAF rules and DDoS protection (Cloud Armor)?

Why it looks right

Cloud Load Balancing is the default "in front of the backend" answer. Cloud Armor is the specific answer when the scenario involves threat protection — it is an add-on security layer, not a traffic distribution service.
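The layering can be sketched as a toy request handler — a deliberate simplification of real load-balancer behavior, with invented names — showing which layer answers which concern:

```python
# Toy model of the edge stack (not real GCP semantics): each layer handles
# one concern before the request reaches a backend.
def handle_request(client_ip, path, denied_ips, cdn_cache):
    if client_ip in denied_ips:           # Cloud Armor: WAF / security policy
        return "403 blocked by Cloud Armor"
    if path in cdn_cache:                 # Cloud CDN: edge cache hit
        return cdn_cache[path]
    return f"origin response for {path}"  # Cloud Load Balancing -> backend
```

Remove the first check and attacks reach the origin; remove the second and every request pays origin latency — which is why neither layer substitutes for the other.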

Cloud Run vs. Google Kubernetes Engine (GKE) vs. App Engine vs. Cloud Functions
#5

Serverless containers vs. managed Kubernetes vs. managed PaaS vs. event functions

All four run application code without full VM management, so candidates apply GKE as the default container answer.

Deciding signal

For PCA scenarios, the key signal is operational ownership. GKE is the answer when the scenario requires Kubernetes API compatibility, custom operators, specific cluster configuration, or stateful workloads needing persistent volumes and node-level control — situations where the team accepts Kubernetes operational overhead because the workload demands it. Cloud Run is the answer when the scenario describes containerized services where the team wants no cluster management — auto-scaling, scale-to-zero, and fully managed infrastructure. Cloud Functions is for short, stateless, event-triggered tasks. App Engine standard is for managed web app runtimes without containers. PCA tests whether you can recognize which ownership model the scenario implies, not just which service runs containers.

Quick check

Does the workload require Kubernetes API compatibility, custom operators, or cluster-level control (GKE)? Or containers with no cluster management overhead (Cloud Run)? Or short event-triggered functions (Cloud Functions)?

Why it looks right

GKE is the default container platform answer. Cloud Run is correct when the scenario avoids cluster management complexity — and for PCA, distinguishing "containers with cluster control" from "containers without it" is the actual test.
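The ownership-model signal reduces to two questions, sketched as a toy chooser (invented parameter names, not real GCP tooling):

```python
# Study aid: pick the compute platform from the ownership model the
# scenario implies (toy logic only).
def choose_compute(containerized, needs_cluster_control=False,
                   short_event_task=False):
    if containerized:
        # Kubernetes API, operators, node control -> accept cluster overhead.
        return "GKE" if needs_cluster_control else "Cloud Run"
    if short_event_task:
        return "Cloud Functions"        # short, stateless, event-triggered
    return "App Engine standard"        # managed web-app runtime, no containers
```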

VPC Peering vs. Shared VPC vs. Cloud Interconnect vs. Cloud VPN
#6

Direct VPC routing vs. shared network governance vs. dedicated circuit vs. encrypted tunnel

All four connect networks, so candidates apply VPC Peering to all multi-network scenarios.

Deciding signal

VPC Peering connects two VPCs privately — traffic routes between them over Google's internal network, and peering is non-transitive (peering A-B and B-C does not enable A-C). Shared VPC designates a host project whose VPC is shared with service projects — the service projects use the host VPC's subnets, enabling centralized network management and billing. Cloud Interconnect provides a dedicated private circuit from your data center to Google — Dedicated Interconnect for direct colocated connectivity, Partner Interconnect through a service provider. Cloud VPN creates an encrypted IPsec tunnel from on-premises to Google over the public internet. When the scenario involves sharing one VPC across multiple GCP projects with centralized management, Shared VPC. On-premises dedicated private connectivity, Cloud Interconnect. On-premises encrypted internet tunnel, Cloud VPN.

Quick check

Is this routing between two GCP VPCs (VPC Peering), sharing one VPC across multiple GCP projects (Shared VPC), dedicated private on-premises connectivity (Cloud Interconnect), or encrypted tunnel from on-premises (Cloud VPN)?

Why it looks right

VPC Peering is the default inter-VPC answer. Shared VPC is the correct answer when the scenario describes multiple GCP projects sharing network resources under centralized control — a distinct model from peering.
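The non-transitivity of VPC Peering is worth internalizing; a toy reachability model makes it concrete:

```python
# Toy model of VPC Peering reachability (illustrative only).
# Peering routes only between directly peered VPCs; there is no transit
# through an intermediate VPC.
peerings = {("A", "B"), ("B", "C")}

def peered(src, dst):
    return (src, dst) in peerings or (dst, src) in peerings
```

Peering A-B and B-C leaves A and C unreachable from each other; a hub-and-spoke design needs an explicit A-C peering, or a different model such as Shared VPC, instead.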

Cloud KMS vs. Cloud HSM vs. Secret Manager
#7

Managed key service vs. hardware-backed keys vs. secret storage and versioning

All three store cryptographic material or sensitive values, so candidates apply Secret Manager to all secrets scenarios.

Deciding signal

Cloud KMS is a managed key management service — it creates, rotates, and applies encryption keys for encrypting data stored in GCS, BigQuery, and other services. Keys are stored in Google-managed infrastructure. Cloud HSM is a hardware security module backend for Cloud KMS — it provides FIPS 140-2 Level 3 validated hardware for key operations when regulations require hardware-backed key custody. It is not a separate service you deploy; it is a key protection level within Cloud KMS. Secret Manager stores and versions application secrets — API keys, passwords, tokens — with access control, audit logging, and automatic replication. When the scenario involves managing encryption keys for GCP services, Cloud KMS. Hardware compliance for key operations, Cloud HSM (as a KMS protection level). Storing and accessing application secrets, Secret Manager.

Quick check

Is this managing encryption keys for GCP services (Cloud KMS), requiring hardware-validated key protection for compliance (Cloud HSM protection level), or storing and retrieving application secrets like API keys and passwords (Secret Manager)?

Why it looks right

Secret Manager is the default "store sensitive values" answer. Cloud KMS is correct when the scenario involves data encryption key management — a different purpose from application secret storage.
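The three-way split maps one need to one answer; a toy lookup (invented names, not an API) captures it:

```python
# Study aid: map the scenario's stated need to the right service.
CRYPTO_CHOICES = {
    "manage encryption keys for GCP data":   "Cloud KMS",
    "hardware-validated key custody":        "Cloud KMS with HSM protection level",
    "store and version application secrets": "Secret Manager",
}

def choose_crypto_service(need):
    return CRYPTO_CHOICES[need]
```

Note that the HSM answer is still Cloud KMS — the hardware requirement selects a protection level within KMS, not a separate service.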

Cloud IAM vs. Organization Policies
#8

Identity permission grants vs. resource configuration constraints

Both control what happens in GCP, so candidates treat them as alternative access control tools.

Deciding signal

Cloud IAM grants permissions to identities — users, service accounts, groups — at the resource level. It controls what authenticated principals can do: "this service account can create BigQuery tables in this project." Organization Policies define constraints on resource configurations regardless of what users are permitted to do — they can restrict which regions resources can be created in, prevent public IP addresses on Compute instances, disable service account key creation, or require uniform bucket-level access on Cloud Storage. A principal with IAM permission to create a Cloud Storage bucket can still be blocked from creating a bucket without uniform access if an Organization Policy enforces it. IAM controls identities; Org Policy controls resource behavior.

Quick check

Is the requirement to control what identities can do on resources (Cloud IAM), or to enforce that resources are created with specific configurations across the organization regardless of who creates them (Organization Policies)?

Why it looks right

Cloud IAM is the default access control answer. Organization Policies are the correct answer when the scenario describes enforcing a configuration standard across all resources in an org or folder — a constraint on resource properties, not on user permissions.
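The "IAM allows, Org Policy still blocks" interaction is the key reasoning step. A toy two-layer evaluation sketches it — the constraint name `constraints/storage.uniformBucketLevelAccess` is real, but the evaluation logic and identifiers are invented for illustration:

```python
# Toy model: a request succeeds only if IAM permits the identity AND no
# organization policy constraint rejects the resource configuration.
iam_bindings = {("sa-etl", "storage.buckets.create")}
enforced_constraints = {"constraints/storage.uniformBucketLevelAccess"}

def can_create_bucket(principal, uniform_access):
    iam_ok = (principal, "storage.buckets.create") in iam_bindings
    policy_ok = uniform_access or (
        "constraints/storage.uniformBucketLevelAccess" not in enforced_constraints)
    return iam_ok and policy_ok
```

IAM answers "may this identity act?"; the constraint answers "may a resource be configured this way?" — both layers must pass independently.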

Train these confusions, not just read them

10 GCP PCA questions. Pattern-tagged with trap analysis. Free, no signup required.

Start PCA Mini-Trainer →